Grant Holliday's Blog

What does a Premier Field Engineer (PFE) do, anyway?


To be perfectly honest, I had never heard of or understood the ‘PFE’ role at Microsoft until I moved away from Redmond and into the subsidiary. That was one of the traps of being in the product team on campus – by not being on a customer site every day, it was very easy to forget about the “real world” and lose your perspective. In my role as a Program Manager on the TFS team, I had the unique position of being an “internal consultant”, so I still got face time with customers; they just happened to be mostly other teams at Microsoft.

When I was considering a move to PFE, I came across this PFE recruiting PDF and video (warning: it’s very geeky/boring and doesn’t represent the reality that PFE in Australia is mostly Proactive work), but the bit where the receptionist makes him wait (2:10) made me chuckle.

  • PFEs are the main delivery mechanism for Premier Support
  • PFEs deliver Proactive (workshops, health checks, risk assessments) and Reactive (on-site support, troubleshooting) engagements
  • PFEs are highly skilled engineers with deep technical expertise in a given technology

Some of the services we deliver are listed in the Premier Support Proactive Services Catalog and the Australia/New Zealand version.

What I like about being a PFE:

  • Customers buy and pay for a Premier Support agreement up-front. They work with the Technical Account Manager (TAM) to come up with a Service Delivery Plan (SDP) and agree on a number of hours to purchase for the next 12 months. Shifting the payment up front and planning a year’s worth of work makes life more predictable and reduces overheads (pre-sales, contract negotiation, invoicing, etc.).
  • Technical depth and troubleshooting experience is mandatory. All you have to do is take a quick read of the Canberra PFE blog to understand that these guys (and gal, now) know their stuff. For example, got an AD problem? Guys like Jimmy or Chad are who you want on your team. Steve’s your Lync guy. I feel that I can have the greatest impact in PFE with my TFS skills and experience.
  • Certification is taken seriously. Even though I wrote the book on TFS, I still had to perform a ‘reverse shadow’ of a TFS Health Check with another TFS PFE (Hi, Micheal Learned!) before I was allowed to deliver one by myself. This is just one of the mechanisms in place to ensure that the quality of PFE deliveries stays consistently high.
  • All those “extra” things I do are not only valued, but encouraged, and I can allocate work time to prepare them. For example: TechEd/TechReady presentations, User Groups, blog posts, product team feedback, etc. (Not book writing, though – that’s covered by the moonlighting policy with agreement from management.)
  • Variety and scope/length of engagements. For a ‘transactional’ PFE like myself, the longest ‘dispatch’ is typically 5 days. Yes, this means that I’m literally working at a new place every week. I thrive on the entropy of different customers; it keeps me engaged (because everything is new) and it keeps me focussed (because it’s a short timeframe).
  • I control my calendar. Not only do I control it, I have a responsibility to keep my availability calendar up to date for 6 months into the future. I can block out time like ‘Local visits only’ (if I need to be in Canberra), and I can block time to study for exams/certifications or to prep/ramp up on a new workshop before I get certified to deliver it.

Hopefully this post helps you understand what a Premier Field Engineer is and how we might be able to help you. If you have any questions, feel free to contact me via my blog or your local Microsoft office.


How to configure HTTP access to the TFS Analysis Services Cube


This is a fairly obscure Team Foundation Server (TFS) and SQL Server Analysis Services (SSAS) configuration, but if you find yourself in this situation, it is another option for making the TFS Analysis Services cube available to your users. I would be surprised if it was ever tested by the TFS product team, but I can confirm that It Works On My Machine and it doesn't require any dubious registry changes.

The main configuration comes from this MSDN article: Configure HTTP Access to SQL Server Analysis Services on IIS 7.0, but this blog post goes into the TFS-specific steps.

This approach provides an alternative means for connecting to Analysis Services when your OLAP solution calls for the following capabilities:

  • Client access is over Internet or extranet connections, with restrictions on which ports can be enabled. Or, client connections are from non-trusted domains in the same network.
  • Client application runs in a network environment that allows HTTP but not TCP/IP connections.
  • Authentication methods other than Windows integrated security are required. IIS supports Anonymous connections and Basic authentication. Configuring Analysis Services for HTTP access lets you use these alternative authentication methods with Analysis Services.

Note: The MSDN article and this blog post don't cover configuring an HTTPS (SSL) connection. However, I don't see any reason why it wouldn't work - I just haven't tried it.

Configuring the SSAS ISAPI Plugin

  • Copy the contents of C:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\bin\isapi

Note: The directory may be different, based upon the version, name and configuration of your SSAS instance.


  • To: C:\Inetpub\OLAP


  • Open Internet Information Services (IIS) Manager
  • Navigate to \Sites\Team Foundation Server
  • Right-Click 'Team Foundation Server' and choose 'Add Application…'


  • On the 'Add Application' dialog, enter the following information:
    • Alias: olap
    • Physical path: c:\inetpub\olap
    • Application Pool: Team Foundation Server Application Pool (default)

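If you prefer to script your IIS changes, the same application can be created with appcmd. This is a minimal sketch, assuming the site and application pool names shown in the dialog above:

:: Create the /olap application under the TFS site and assign it to the TFS application pool
%windir%\system32\inetsrv\appcmd add app /site.name:"Team Foundation Server" /path:/olap /physicalPath:C:\inetpub\olap
%windir%\system32\inetsrv\appcmd set app "Team Foundation Server/olap" /applicationPool:"Team Foundation Server Application Pool"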

Now we need to configure the Handler Mappings and Authentication.

  • First, open 'Handler Mappings'


  • From the Actions pane, select 'Add Script Map…'


  • On the 'Add Script Map' dialog, enter the following information:
    • Request path: msmdpump.dll
    • Executable: c:\inetpub\olap\msmdpump.dll
    • Name: olap


  • On the dialog that appears, select 'Yes'. This will allow this particular extension to execute.


  • Back on the /olap Feature configuration screen, select 'Authentication'


We want to make the following changes to allow remote clients to authenticate as themselves to SSAS (Windows Authentication), rather than using the identity of the TFS application pool (Anonymous Authentication).

  • Anonymous Authentication: Disabled
  • Windows Authentication: Enabled
  • Basic Authentication: Disabled (see note below)

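These authentication changes can also be scripted with appcmd. A sketch, again assuming the default TFS site name:

:: Disable Anonymous and enable Windows Authentication on the /olap application
%windir%\system32\inetsrv\appcmd set config "Team Foundation Server/olap" /section:system.webServer/security/authentication/anonymousAuthentication /enabled:false /commit:apphost
%windir%\system32\inetsrv\appcmd set config "Team Foundation Server/olap" /section:system.webServer/security/authentication/windowsAuthentication /enabled:true /commit:apphost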

Connecting to a remote SSAS server

At this point, you have added an additional way to access a SQL Server Analysis Services server that is hosted on the same server as your TFS application tier (i.e. a single-server installation).

If your SSAS server is on another machine (i.e. a dual-server or more complex environment), then you can also configure the ISAPI extension to connect to that remote server.

Note: This topology adds a double-hop authentication step, where credentials must flow from the client to the web server, and on to the backend Analysis Services server. If you are using Windows credentials and NTLM, you will get an error because NTLM does not allow delegation of client credentials to a second server. The most common solution is to use Basic authentication with Secure Sockets Layer (SSL), but this will require users to provide a user name and password when accessing the MSMDPUMP virtual directory. A more straightforward approach might be to enable Kerberos and configure Analysis Services constrained delegation so that users can access Analysis Services in a transparent manner.

The ISAPI extension uses the Analysis Services OLE DB provider to connect to the SSAS server. You will need to install this provider if it is not installed already. On a TFS Application Tier, it will already be installed as part of the SQL Client Connectivity components that the TFS installation wizard checks for.

Microsoft SQL Server 2008 R2 SP1 Feature Pack download page

Open C:\Inetpub\olap\msmdpump.ini and change the ServerName setting from localhost to the name of your SSAS server.

<ConfigurationSettings>
  <ServerName>localhost</ServerName>
  <SessionTimeout>3600</SessionTimeout>
  <ConnectionPoolSize>100</ConnectionPoolSize>
</ConfigurationSettings>

Verifying the HTTP endpoint from Excel

With this additional endpoint configured, we can now try to connect manually from Excel.

  • Open Excel
  • Select Data > From Other Sources > From Analysis Services (ALT-A-FO-Down-Enter)


On the 'Data Connection Wizard' dialog, specify the path to the ISAPI plugin as the server name. It's not intuitive to put a URL in a place where it's expecting a server name, but this is a supported value for the connection string.

Server name: http://yourtfsserver:8080/olap/msmdpump.dll


If everything is working correctly, the wizard will connect successfully and prompt you to select the database and cube.

Configure TFS to use the ISAPI extension

On a Team Foundation Server Application Tier, open the Team Foundation Server Administration tool and navigate to the Reporting configuration screen. It should look something like this:

[Screenshot: Team Foundation Server Administration Console – Reporting configuration]

  • Select the 'Edit' action link, then select 'OK' on the dialog that is advising you that warehouse/cube processing will be stopped.
  • Select the 'Analysis Services' tab and update the server settings to the path for the ISAPI extension. You will also need to re-enter the password for the data source.

Note: If your TFS server has been configured to allow access over the Internet, it's important to configure this server name to an address that users can reach. This server name is what Excel will use when you choose 'Create Report' in either Excel or Team Explorer. It is also used for the TFS data sources in Reporting Services. For example, use http://tfs.yourcompany.com:8080/olap/msmdpump.dll instead of http://tfs01:8080/olap/msmdpump.dll


  • Once you select OK, the settings will be saved. You will then need to select the 'Start Jobs' link to start the warehouse/cube processing jobs again. Your configuration page will look something like this:

[Screenshot: Reporting configuration with the warehouse/cube jobs started]

Verifying the TFS Cube Configuration in Excel

Now that we have configured TFS to use the ISAPI extension endpoint, we can use the TFS plugin for Excel to create a connection to the SSAS server, without having to tell users how to manually connect.

  • Open a blank workbook in Excel
  • Select Team > New Report from the menu.


  • On the 'Connect to Team Project' dialog, select your server, collection and project. Select 'Connect'
  • On the 'New Work Item Report' dialog, select a work item query that you want to create a report from.


  • Once the report is generated, you will now have a workbook that is configured with a data source connection to the TFS SSAS database.
  • In the PivotTable Field List, you will see all the available fields for reporting on.


  • If you want to view the data source that the TFS Excel add-in generated, you can go to Data > Properties (ALT-A-P) on the menu, then select the 'Definition' tab. If you look closely, you will find the path to the ISAPI extension is listed in the connection string.


If you have an environment that could use this configuration, and you want to see it supported by the product team, you can vote for it and add comments on the TFS UserVoice site.

C25K: Couch-to-5K Running Plan and Windows Phone App



(I try to keep this a purely technical/work blog and it’s rare that I post personal things up here, but this is too good not to share).

The 1st of July is the start of the new financial year (in Australia, at least), and I find that I have better success with changes at this time of year rather than at the new calendar year. After hearing about the Couch-to-5K running plan from a friend, and knowing that all my work travel was going to take its toll, I decided it was time I started exercising again.

I got excited and started early and now I’m halfway through week two. What I like so far about this running plan:

  1. It’s quick. 5 minute warm-up, 20 minutes walking/running, 5 minute cool-down = 30 minutes (+ stretching/changing).
  2. It’s simple. 3 sessions a week (every second day), don’t need to measure distances – just run for the durations at a pace where you still have enough breath to talk.
  3. It’s free. No gym/club membership fees.
  4. No special equipment / location required. All you need is a change of clothes and a pair of runners. At home, I can run around the oval. When I’m travelling I can do it on the treadmill if I don’t want to venture out into traffic.

Windows Phone App

Perhaps the thing that has helped motivate me the most, is the awesome Windows Phone app from Six Bars called Total C25K (free) & Total C25K Premium ($1.49 currently).

The main screen shows you your progress and the workout screen shows you how much time is left on the current interval and in total.


The other thing that makes it a great app is the voice cues. You can be playing your workout playlist and when you reach the next interval, it will briefly pause your music, say ‘walk’, ‘run’, ‘half-way’ and then un-pause your music.

Once you pair this app with a nice set of headphones like the Motorola S10-HD Bluetooth Stereo ones, you really can’t go wrong.


How to: Take a Memory Dump of an ASP.NET Application Pool Quickly


When I was running the internal Team Foundation Servers (TFS) at Microsoft, we sometimes encountered issues that could only be understood by analysing a memory dump. This was especially true on the Pioneer and Dogfood servers that were running pre-Beta builds. If the problem was serious enough (crashing, memory leaks, etc) that it needed a memory dump, it probably meant that it needed it quickly so that we could recycle the application pool and get the server healthy again.

The problem with dumping an ASP.NET application pool is that all the application pools use the w3wp.exe process name. So, before you can take the dump, you need to work out which process corresponds to the application pool that you are targeting, if you can’t tell by looking at the process owner (e.g. service account/app pool identity). The easy (but slow) way of doing that is to open Task Manager and add the ‘Command Line’ column to the display. You will then see each of the application pool names in the command line of the w3wp.exe processes.
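
A quicker way is to ask IIS directly. The appcmd tool lists each worker process along with its application pool name and process ID (the PID shown here is hypothetical):

%windir%\system32\inetsrv\appcmd list wps
WP "4188" (applicationPool:Microsoft Team Foundation Server Application Pool)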

The other problem with app pools that are consuming a large amount of memory is that the process will be suspended for a long time while the memory is dumped to disk. If this takes longer than the configured ASP.NET process ‘ping time’, then IIS will terminate your process (and start a new one) halfway through the dump and you’ll lose your repro.

To solve that problem, there is the ‘-r’ flag available in the Sysinternals Procdump.exe. It leverages a feature of Windows 7/Windows 2008 R2 that “clones” a process to take the dump and unsuspends the original process faster than normal.

-r      Reflect (clone) the process for the dump to minimize the time
        the process is suspended (Windows 7 and higher only).

We can then use the IIS management tools to look up the process ID for a particular app pool and now we have a simple batch file that we can put on the desktop of our TFS server for quick access.

DumpTfsAppPool.cmd

Create the following batch file and put it in the same directory as Procdump.exe. Don’t forget to create/update the path to the dump location.

%windir%\system32\inetsrv\appcmd list wps /apppool.name:"Microsoft Team Foundation Server Application Pool" /text:WP.NAME > "%temp%\tfspid.txt"

:: ProcDump.exe with -r (faster: reflects/clones the process for the dump, to minimize the time the process is suspended; Windows 7 and higher only)
for /F %%a in (%temp%\tfspid.txt) do "%~dp0\procdump.exe" -accepteula -64 -ma -r %%a f:\dumps

pause

Of course this is not TFS specific and you can use it for any ASP.NET Application Pools.
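
For a one-off dump of some other application pool, you can run the same two steps by hand. A sketch, assuming a hypothetical app pool named 'MyAppPool' and that the first command returned PID 4188:

%windir%\system32\inetsrv\appcmd list wps /apppool.name:"MyAppPool" /text:WP.NAME
procdump.exe -accepteula -64 -ma -r 4188 f:\dumps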

Fix: Analysis Services crashes while processing the TFS OLAP cube


I was on-site with a customer last week performing a Team Foundation Server Health Check (TFSHC). While I was there, I noticed that their SQL Server Analysis Services 2008 R2 SP1 instance had been crashing. So, I did what any good PFE would do and grabbed a copy of the crash dumps (SQLDmpr0001.mdmp) and analysed them. You can find these in the \OLAP\Log folder under your instance directory. Normally something like: C:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Log.

Here’s the stack trace for the crash:


msmdsrv!PFSetLastError+0x4ea
msmdsrv!PCMinorObjectCollectionBase::GetAt+0x58
msmdsrv!MDInfo::Init+0x1f72
msmdsrv!MDCube::PopulateInfo+0x48
msmdsrv!MDIInfoObject::GetInfo+0x81
msmdsrv!MDCube::PerformRefreshRelatedScopes+0xd8a
msmdsrv!PCDatabase::RefreshDependingSessions+0x35a
msmdsrv!PCDBPerm::OnCommit2+0x59
msmdsrv!PCTransaction::CommitPhase2+0x105f
msmdsrv!PCTransaction::Commit+0x38b
msmdsrv!PXSession::CommitTrans+0x2d0
msmdsrv!PXSession::CommitUserTrans+0x19b
msmdsrv!PCASTDDLTransaction::Dispatch+0x11e
msmdsrv!PCXAExecute::Dispatch+0xe30
msmdsrv!PXSession::InternalExecuteCommand+0x6d8
msmdsrv!PCSession::ExecuteCommand+0x9d

This stack trace was enough to help me find the bug in the internal Microsoft support systems.

The root cause for this bug is a version mismatch – the old cube object is calling GetInfo based on new info. It was at this point, I realised that I’d seen this same issue before. The SSAS bug was raised by a member of the TFS test team after they found the issue on the internal dogfood servers.

Now, before we move on – let’s get one thing clear: SQL, Analysis Services and TFS should never crash – any time they do, that’s a bug that needs to be addressed.

Although this issue can occur with any version of TFS (2005-2012) and SQL2008 or SQL2008R2, this particular bug is the reason why the TFS Installation Guide for TFS2012 has steps to “Configure Analysis Services to Recover on Failure”.

[Screenshot: SQL Server Analysis Services service properties for configuring restart on failure]
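
For reference, the 'recover on failure' behaviour that the installation guide describes can also be configured from the command line. A sketch, assuming a default SSAS instance (service name MSSQLServerOLAPService) and a restart 60 seconds after a failure:

sc failure MSSQLServerOLAPService reset= 86400 actions= restart/60000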

Now that all those versions of SQL Analysis Services have been patched, there’s no reason to change the SSAS service to automatically restart on failure. In fact, if you do restart automatically on failure AND you don’t have adequate monitoring in place, you will mask subsequent failures.

Conclusion

The conclusion here is very simple. Keep your systems patched and up to date, and you will avoid many of the issues which have already been found and fixed.

From Server to Service: How Microsoft Moved TFS to Windows Azure


Recently I was invited to speak at TechEd 2012 Australia on my experience with building an Internet-scale service on Windows Azure. Rather than focus on the coding aspects which are documented quite comprehensively on MSDN, I focused on the operational aspects that are crucial to running a reliable service.

Visual Studio Team Foundation Server is a multi-tier client/server application that Microsoft ported to Windows Azure. In this session, you will get an idea of what it takes to take an existing application and turn it into an Internet-scale service. Getting the code running is the easy part - you will learn about all the other things you need to think about when shifting from building a server to running a service, including deployment, patching, monitoring, scaling and testing.

You can stream or download the video from channel9 MSDN:

Channel9 MSDN: From Server to Service: How Microsoft Moved Team Foundation Server to Windows Azure

The feedback on the session has been positive and it appears many people are in the process of standing up their own services or are planning to do so soon.

Key Tenets of Building and Running the Service:

  • The goal should be that a highly-reliable, 24x7 service can be maintained by a small 8x5 operations staff.
  • Engineer the problems out. Don’t scale the operations team.
  • Low-cost administration correlates highly with how closely the development, test, and operations teams work together.
  • The product team is held accountable for the success of the service. This drives the right behaviours.


I’ll also be redelivering this session at the Canberra .NET User Group on Monday 15th October 2012, if you want to come along and ask questions.

And if you’re after an entertaining session that’s not about TFS, take a look at Chad and Tristan’s session on Windows Server 2012 DirectAccess – one of the evals even said “Do the speakers do stand up comedy too?”

Premier Support Catalogue now available


As Premier Field Engineers, we are constantly skilling up and getting accredited to deliver our proactive support offerings. All of these offerings are in the latest Microsoft Services Premier Support Catalogue for Australia and New Zealand (ANZ).


Here’s the main offering that I deliver – the TFS Health Check. Customers love it because they get:

  • A baseline of their TFS configuration
  • A report with recommendations to align with best practices
  • Knowledge transfer sessions where we can talk about upgrades, migrations, and pretty much anything else related to TFS
  • … and me on-site for 3-5 days


We’re also almost ready to start delivering our Visual Studio 2012 workshops:

  • Visual Studio 2012 ALM Team Foundation Server Essentials Workshop
  • Visual Studio 2012 ALM Testing Tools Workshop

TFS Administration Tool 2.2 Released


I am pleased to announce that the TFS Administration Tool 2.2 has been released. This release works against Team Foundation Server 2012 and installs on a machine with Team Explorer 2012. You no longer need to install Team Explorer 2010 to use this tool.

Download TFS Administration Tool 2.2 (1.55 MB, MSI)

The TFS Administration Tool allows administrators to manage permissions across TFS, SharePoint and Reporting Services from one convenient interface.

Changes made:

  • Built the tool against the TFS2012 object model (v11.0 assemblies)
  • Updated the installer to look for the TFS2012 object model as a dependency
  • Updated the installer to support version upgrades in the future
  • Removed all references to 2.1 in the UI and installer
  • Updated assembly and installer versions to 2.2

There are no functional changes between the previous release (2.1) and this release – it was mostly just a recompile.

If you find a bug, please open an issue and include either the contents of the "Output" window or the contents of the log file saved in the "Logs" folder so that we can easily reproduce and investigate the problem.


Visual Studio and Team Foundation Server 2012 Update 1 is now available


The Visual Studio Updates are a new mechanism that the team is using to provide ongoing value throughout the year to Visual Studio and Team Foundation Server customers.

Downloading the updates

You’ll notice that there is no standalone or ISO installer available for the Visual Studio update. Fortunately, you can still download all the packages locally and use them without being connected to the Internet. Just download the web installer, then run it with /Layout as the command line parameter.

en_visual_studio_2012_x86_update_1_1203928.exe /Layout

The installer will run, prompt you for where you would like to download the packages to, and then download all the packages without actually installing them.


This will then create a ‘packages’ folder with all the installation media that can be used to do an offline install of Visual Studio 2012 Update 1.
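
To perform the offline installation later, run the same executable from the root of the layout folder (the path here is hypothetical – use whatever download location you chose):

C:\VS2012Update1\en_visual_studio_2012_x86_update_1_1203928.exe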


Brian Harry has blogged a list of improvements in TFS2012 Update 1. The major ones are:

  • Kanban support in TFS Web Access
  • Extend TFS server side path limits from 260 characters to 400 characters
  • Usability improvements for Version Control
  • TFS web access usability improvements
  • SCVMM 2012 SP1 support with Lab Management for Windows 2012 hosts

      TFS2012: Monitoring Management Pack

      The Visual Studio 2012 Team Foundation Server Monitoring Management Pack for monitoring TFS with System Center Operations Manager 2007 SP1 or 2012 is now available. You can download it here.

      The Team Foundation Server 2012 Monitoring Management Pack provides both proactive and reactive monitoring of Microsoft Team Foundation Server 2012. It monitors TFS components such as application tier server instances, team project collections, build servers, and proxy servers.

      Feature Summary
      The monitoring provided by this management pack includes availability and configuration monitoring, performance data collection, and default thresholds. You can integrate the monitoring of Team Foundation Server components into your service-oriented monitoring scenarios.

      • Auto-discovery of TFS components
      • Implements a containment hierarchy, reflecting the logical architecture of the product
      • Implements a proper health model using Monitors
      • Contains tasks, diagnostics and recoveries for certain failures
      • Provides events that indicate service outages
      • Provides alerts that show configuration issues and connected data source changes
      • Verifies that all dependent services are running

      Here’s a screenshot of the Health Explorer view, where you can see some of the monitors for some of the TFS events in the event log.

      [Screenshot: Health Explorer of a TFS server]

      And here’s an example of the Availability Report available through SCOM for the application tier.

      [Screenshot: Availability Report for the application tier]

      This monitoring pack has a dependency on and is meant to be used in conjunction with Microsoft Visual Studio Team Foundation Server 2012. For monitoring TFS2010, see the Visual Studio 2010 Team Foundation Server Monitoring Management Pack.

      Getting started on your new Microsoft Surface RT

      So, you've just unpacked your brand new Microsoft Surface with Windows RT. What now? Where do you start? Here's my list of things to do and hopefully it helps you.

      Step 1 – Choose the right Microsoft account / Windows Live ID

      "Microsoft account" is the new name given to "Windows Live ID". You might know it as your "Hotmail account" or your "Xbox Live login".

      If you have a Windows Phone, then it's a good idea to use the same account that your phone is associated with. Then you will get all the goodness of a common SkyDrive and roaming settings (for apps that support it).

      To find which account you're using on your Windows Phone:

      1. On Start, flick left to the App list, tap Settings, and then tap Email+accounts.
      2. Look for the first email account in the list that is named "Microsoft account" or "Windows Live".

       Tip: If you renamed your primary account to something other than "Microsoft account", look for the first account that is listed with the Windows logo next to it.

      Step 2 – Secure your Surface

       With Windows 8, you no longer have to create a separate logon for your machine. You can just log in using your Microsoft account and password. If you haven't set this up already, follow these steps:

      1. From the Settings charm (swipe from the right of the screen), tap or click Change PC settings (at the very bottom of the screen).
      2. In the left pane, tap or click Users.
      3. Tap or click Switch to a Microsoft account and follow the instructions.


      Create a picture password

       One of the things that gets tiresome pretty quickly on a touch device is having to type in your complex password all the time. This is easily solved by creating a picture password.


      If you want to know more about picture passwords, see the following blog posts on the Building 8 blog: Signing in with a picture password and Optimizing picture password security

      Step 3 – Apply latest Windows Updates

      On the Surface RT, this is a critical step. The December 2012 updates include a firmware update that dramatically improves battery life. Also, since the Surface ships with the "Preview" version of Office 2013, you will need to download the update to the final version.

       In total, it will be about 700MB of updates and one or two reboots – so it's a good idea to be connected to a fast network and to do it when you have some time to spare. Since this is such a large update, it's best to do it proactively rather than wait for Windows to automatically do it for you.

      1. From the Settings charm (swipe from the right of the screen), tap or click Change PC settings (at the very bottom of the screen).
      2. In the left pane, scroll to the bottom and tap or click Windows Update.
      3. Tap or click Check for updates now.
      4. Repeat the process until there are no more updates to install.


        Step 4 – Download a Windows theme

        A theme is a combination of desktop background pictures, window colours, and sounds. Of course you can customise these things yourself, but it's very convenient to just apply a theme that catches your eye.

        Step 5 – Apps from the Windows Store

        Updating all your apps to the latest versions

        If there are updates available for any of your installed apps, then the Store app Live Tile will have a little number on it, indicating the number of updates available.

        • Open the Store app and click the 'X updates available' text in the top right corner.

        Re-installing apps that you have on another Windows 8 PC

        A convenient feature of the Windows Store app lets you see which apps you have previously installed on another Windows 8 PC. From this screen, you can select all the apps and install them on your new Surface.

        1. On the Start screen, tap or click Store to open the Windows Store.
        2. Swipe down from the top edge of the screen, and then tap Your apps. (If you're using a mouse, point to the top of the screen, right-click, and then click Your apps.)
        3. Choose the apps you want to install, and then tap or click Install (from the bottom of the screen).

        Essential apps for your Surface RT 

        OneNote – Although your Surface comes with the desktop version of OneNote pre-installed, there is a Metro (Windows Store) version of OneNote available from the Store. It's pretty much the app for taking notes on your Surface in meetings. (I used it to draft this blog post, before copying and pasting over to Word 2013.)

        Lync – If you have access to Lync, either via your work or an Office 365 subscription, then you'll want to download the Lync app. You can use it to have IM conversations, join video conferences and make phone calls. If you allow it, it can run in the background all the time while you are using your Surface.

        Skype– Now that Skype supports signing in with a Microsoft account, if you have logged on to Windows using a Microsoft account, it will automatically sign you in. This is one of the reasons why it's so important to get your Microsoft accounts in order.

        Xbox SmartGlass– lets you control your Xbox from your Surface and view extra context about movies and games. See the Xbox SmartGlass Walkthrough video for more details.

        Wordament– built by two Microsoft employees in their spare time, Wordament is just like Boggle – but you compete with hundreds of other players on the same board at the same time. Caution: it's addictive, especially once you start unlocking Xbox Live achievements for it (try playing in your non-native language for some fun..)

        Step 6 - Configure your mail account(s)

        Let's start with the bad news. There is no Outlook for Surface RT and the built-in Mail app is functional, but not great. The good news is that it supports multiple accounts (Exchange/Outlook.com/Hotmail/etc) and it integrates with the lock screen to show you unread messages.

        One of the things I like to do to my email accounts is customise the email signatures:

        1. On Start, tap or click Mail.
        2. Swipe in from the right edge of the screen, and then tap Settings.
          (If you're using a mouse, point to the upper-right corner of the screen, move the mouse pointer down, and then click Settings.)
        3. Tap or click Accounts.
        4. Tap or click the account you want to change the signature for.
        5. Decide how you want to change your signature:
        6. Set Use a signature to either On or Off.
        7. If you're using a signature, change the text. At this time, there aren't any options to add images or change font settings (like color or type) in your signature.

        Step 7 – HomeGroup

        A homegroup is a group of PCs on a home network that can share files and printers. Using a homegroup makes sharing easier: you can share pictures, music, videos, documents, and printers with other people in your homegroup. It's well worth the effort to set up the computers in your house on the same HomeGroup – I never quite realised the benefits until recently (see File History in the next step). Once you have connected your Surface to your home WiFi network, you can join your homegroup from PC settings.

        Step 8 – Continuous backups with File History

        Have you thought about a backup strategy for your Surface? Obviously if you're storing all your documents/pictures/notebooks in your SkyDrive, then you will have a copy up there. But what about all those other files on your Surface?

        This is where File History comes in. It's a continuous backup of all the files in your Library folders. It keeps multiple versions as well, in case you need to go back to a previous version.

        The beauty is when you have both HomeGroup and File History configured. Your Surface can continuously and automatically backup your files to a HomeGroup PC when you're connected at home. The HomeGroup PC can also have an external USB drive connected (with BitLocker-To-Go encryption), so then you have a continuous, secure backup of your home PC and Surface.


        Step 9 – Accessorize

        If you're planning on watching a bunch of movies on a long plane trip, or you just feel comfortable having more storage, you should invest in a large, fast microSD card. The Surface RT supports up to 64GB, which makes the SanDisk Ultra 64GB Class 10 microSD a good choice.

        Personally, I like the safety blanket of a real mouse. A touchscreen is great and the touchpad on the keyboard is great as well, but for high-precision or mouse-intensive work, a real mouse is nice to have in your bag.

        Most of the Microsoft mice that have come out in the last few years have had little USB dongles called "Nano transceivers". There's nothing wrong with those and they will all work with the Surface – but it does mean that you have to put the transceiver in your single USB port. A nicer looking option is to get one of the new Microsoft Wedge mice that use Bluetooth – no dongles.

        Step 10 – Start building apps

        If you have any sort of interest in building apps for the Windows Store, you'll want to do two things:

        1. Start looking at all the great training content online
        2. Become a Windows Store developer and reserve your app names

        Although you can't write apps directly on the Surface, you can deploy and debug apps on your Surface from your development PC.

        And of course, if you want to track your work and store your code somewhere, you can sign up for free to http://tfs.visualstudio.com/


        TFS2012: What are all the different Jobs built-in to TFS?

        This is a question that I get occasionally, and it’s covered in more detail in the Professional Team Foundation Server 2012 book that I wrote.

        Team Foundation Server has a Job Agent built in. It’s implemented as a Windows Service that runs on your Application Tier servers. There are some tables and stored procedures in the Tfs_Configuration database and your collection databases that define the jobs, the job queue and the job history. You can read more about the internals of the TFS Background Job Agent in Chris Sidi’s blog post, including how to control it using the TFS API and PowerShell.
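
        If you just want a quick, read-only (and unsupported) look at what's defined on your server, you can also peek at the configuration database with sqlcmd. This is a sketch, assuming the TFS 2012 table name tbl_JobDefinition and a hypothetical SQL instance named TFS-SQL:

        sqlcmd -S TFS-SQL -E -d Tfs_Configuration -Q "SELECT JobId, JobName FROM tbl_JobDefinition ORDER BY JobName"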

        The following table describes all of the jobs across the configuration and collection databases and their default schedule:

        Job Name | Scheduled Interval
        Build Cleanup Job | –
        Build Information Cleanup Job | 2 days
        Build Warehouse Sync | 30 minutes
        Cleanup Discussion Database | daily
        Cleanup TestManagement Database | daily
        Common Structures Warehouse Sync | 30 minutes
        File Container Cleanup | daily
        Job History Cleanup Job | 2 days
        Message Queue Cleanup Job | 2 days
        Optimize Databases | 7 days
        Prune Registry Audit Log | 7 days
        Repopulate Dynamic Suites | hourly
        Security Identity Cleanup Job | daily
        Synchronize Test Cases | 10 minutes
        Team Foundation Server Activity Logging Administration | 2 days
        Team Foundation Server Coverage Analysis | hourly
        Team Foundation Server Event Processing | daily
        Team Foundation Server Framework Data Cleanup | daily
        Team Foundation Server Framework File Service Cleanup | daily
        Team Foundation Server Send Mail Job | daily
        Test Management Warehouse Sync | 30 minutes
        Upgrade - Version Control Code Churn Online | –
        Version Control Administration | 5 days
        Version Control Code Churn | daily
        Version Control Delta Processing | daily
        Version Control Statistics Update | daily
        Version Control Warehouse Sync | 30 minutes
        Work Item Tracking Administration | daily
        Work Item Tracking Integration Synchronization | hourly
        Work Item Tracking Referenced Identities Update | 7 days
        Work Item Tracking Remove Orphan Attachments | daily
        Work Item Tracking Remove unused constants | 14 days
        Work Item Tracking Warehouse Sync | 30 minutes

        The ones that I often get questions about are the following:

        • Optimize Databases – This will reorganize/rebuild any indexes in SQL that exceed the fragmentation threshold
        • TFS Activity Log Administration – This will purge data in the activity log (tbl_Command) that is older than 14 days
        • Version Control Code Churn Online – This is a once-off job that runs after upgrade from TFS2010 to TFS2012. The format for storing the code churn data changed, so rather than converting that data during upgrade, it was done slowly over time post-upgrade by this job.

        Update: Thanks for the comment Tommy - I forgot a very important aspect of some of these jobs. Although the default schedules are listed above, some of them are queued 'on demand' by events that happen in TFS. Two examples of this are:

        • Team Foundation Server Event Processing - This job is responsible for sending out email alert subscriptions and SOAP alert subscriptions. For example, when a work item is changed - this job is queued on demand to process any alerts.
        • The various Warehouse Sync jobs - These jobs are also triggered to run when data changes. By default, they don't run more often than every 5 minutes.

        Another concept which you might come across is 'Host Dormancy'. This is a feature built-in to the 'kernel' of TFS that will pause jobs from running if a collection hasn't been accessed in a period of time. If a collection isn't being accessed, that means that the data isn't changing, so there's no need to run some jobs. This is key functionality that allows the Team Foundation Service to scale to thousands of collections.

        TFS2012: IntelliSense for customizing Work Item Types using XML

        Team Foundation Server allows you to modify the Work Item Type definitions. You can use a graphical interface like the Process Editor included in the Team Foundation Server Power Tool, or you can edit the raw XML.

        For making changes across many work item types, I prefer to edit the raw XML in Visual Studio, since it allows me to use Find & Replace, Copy/Paste, and other useful text-editing functions. 

        One very useful feature of Visual Studio is IntelliSense for editing XML files. To activate IntelliSense for XML files, you need to have the XSD schema files in a special directory on your machine.

        In this blog post, I will show you how to enable IntelliSense for editing Work Item Tracking XML files. This gives you the flexibility of editing the raw XML, with the safety net of IntelliSense and XML validation. It's based upon an old blog post from Ben Day, updated for Team Foundation Server 2012.

        Obtaining the latest schema files

        Download here (11KB, Zip file)

        Or, you can open Microsoft.TeamFoundation.WorkItemTracking.Common.dll from your GAC in Reflector and export the schema files, which are embedded as resources.


        Setting them up so IntelliSense works

        Extract the XSD files to this folder on your local machine:

        C:\Program Files (x86)\Microsoft Visual Studio 11.0\Xml\Schemas

        This is where the Visual Studio IntelliSense engine looks for matching schema files, when you open an XML file.
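
        For example, if you've extracted the zip file to the current directory, copying the schemas there is a one-liner (run it from an elevated command prompt, since it writes under Program Files):

        xcopy /y *.xsd "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Xml\Schemas"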

        Opening Work Item Type definitions in XML editor, instead of Process Editor

        If you have the Team Foundation Server Power Tools installed, the Process Editor plug-in (ProejctTemplateEditor, in the list) is set as the default handler for work item XML files, so you get the designer UI view rather than the raw XML.


        To change this behaviour, you can go to

        • File > Open …
        • Select the Work Item XML file
        • Instead of clicking the ‘Open’ button, click the little arrow next to the ‘Open’ button and choose ‘Open With…’


        You can then choose ‘XML (Text) Editor’ and optionally set it as the default editor for these files in the future.


        Once you’ve followed all these steps, you get the joy of editing the Work Item Type XML file with the power and syntax checking of IntelliSense.


        The entire Work Item Type XML schema is documented on MSDN at Index to XML Element Definitions for Work Item Types.

        TFS: How to Customize Work Item Types

        Team Foundation Server has allowed you to modify your Work Item Type Definitions since the first version of TFS. (Side note: this is not the case with the Team Foundation Service, but the team hopes to enable that at some point in the future. At the moment, limiting the customization allows them to innovate the features in the Service at a faster pace without having to worry too much about everybody's customizations.)

        The fundamentals of modifying Work Item Types are documented on MSDN.

        In this post, I'm going to show you the tools and process that I personally use for customizing work item types.

        Prerequisites / Tools

        • Real (Production) TFS server / project
        • Test (Staging) TFS server / project
        • ExportWITDs.cmd - A batch file (included below) that uses the 'witadmin.exe exportwitd' command
        • ImportWITDs.cmd
        • Visual Studio, XML editor with IntelliSense
        • Checkin.cmd - A batch file that uses tf.exe to prompt for a comment and check-in current changes.
        • Team Foundation Server Power Tools - Process Editor

        Workflow

        When I'm working with a customer and doing a series of process template or work item type customization, this is the workflow that I follow:

        1. Run a script to export all Work Item Type definitions to my local machine
        2. Check-in a copy of the definitions to source control, so that we have a baseline to work from and revert back to
        3. Edit the XML definitions in Visual Studio as XML, with IntelliSense (see below)
        4. Run a script to import the definition to my Test project
        5. Verify the changes in a second copy of Visual Studio
        6. Check-in the changes
        7. Run a script to import the definition to my Production project

        Step 1 - Export all work item types

        The following script exports a list of the work item type names to a temporary file, then uses that list to export each of the work item types to a separate file in the current directory. It needs to be run from a Visual Studio Command Prompt, or you need to add ‘C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\ide’ to your PATH environment variable.

        ExportWITDs.cmd:

        SET collection="http://tfs-server:8080/tfs/DefaultCollection"
        SET project="Project XYZ"

        :: Get the list of work item type names for the project
        witadmin listwitd /collection:%collection% /p:%project% > %temp%\witd.txt

        :: Remove quotes from project name
        SET _string=###%project%###
        SET _string=%_string:"###=%
        SET _string=%_string:###"=%
        SET _string=%_string:###=%

        :: Export each work item type to its own XML file (%3 is an optional filename prefix passed as the third argument)
        for /F "delims=" %%a in (%temp%\witd.txt) do witadmin exportwitd /collection:%collection% /p:%project% /n:"%%a" /f:"%3_%_string%_%%a.xml"

        Step 2 - Check-in a copy

        There’s no script for this step, since it’s a one-time thing. Just use Visual Studio, or ‘tf add . /R’ followed by ‘tf checkin . /R’
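
        Spelled out, with a hypothetical check-in comment, that looks like this:

        tf add . /recursive
        tf checkin . /recursive /comment:"Baseline: work item type definitions as exported"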

        Step 3 - Open with XML Editor

        See my previous blog post on how I enable IntelliSense for editing work item types as XML.

        Step 4 - Import the changes to Test

        Importing the changes is relatively straightforward. When I am rapidly iterating on a Work Item Type design, I like to create a 'ImportWITDs.cmd' batch file that imports everything that I'm currently working on. Then I can just leave a command prompt open and run it whenever I feel like it.

        Now, for the seasoned witadmin pros, you'll know that there's also a '/v' option that allows you to validate the changes before you actually upload them to the server. In my experience, this is a waste of time - two reasons:

        1. If the XML is invalid, then it's going to fail if you try and upload it without validating first.
        2. The validation process doesn't validate everything - it misses some things. (I forget the specific cases, but I think it was something like fields that already exist or something like that).

        So because of these two reasons and coupled with the fact that I'm also uploading to a test server first - I skip the '/v' validation step and try the import directly.
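
        If you do want to validate a definition without uploading it, the same import command accepts the /v switch, which only parses and validates the file:

        witadmin importwitd /collection:%collection% /p:%project% /f:"_DefaultCollection_Task.xml" /v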

        ImportWITDs.cmd:

        SET collection="http://tfs-server:8080/tfs/DefaultCollection"
        SET project="Project XYZ"

        witadmin importwitd /collection:%collection% /p:%project% /f:"_DefaultCollection_Task.xml"
        witadmin importwitd /collection:%collection% /p:%project% /f:"_DefaultCollection_Bug.xml"
        witadmin importwitd /collection:%collection% /p:%project% /f:"_DefaultCollection_Issue.xml"
        witadmin importwitd /collection:%collection% /p:%project% /f:"_DefaultCollection_Shared Steps.xml"
        witadmin importwitd /collection:%collection% /p:%project% /f:"_DefaultCollection_Test Case.xml"

        Step 5 - Verify the changes

        Once I've run the ImportWITDs.cmd script, and it completes without any errors - then it's time to verify the changes. To do this, I normally have a second copy of Visual Studio open.

        Before hitting 'Refresh' in Team Explorer, it's important to close all existing work item tabs. Having an open query or work item can cause the metadata not to be reloaded correctly – and then you start to wonder whether your changes were uploaded successfully or not.

        Once everything is closed, hit the 'Refresh' button at the top of Team Explorer. Then go ahead and open a New Work Item form for the type that you have just modified.

        Step 6 - Check-in the changes

        If you have verified the changes and everything looks great - it's a good idea to check the XML in to source control. This gives you a point that you can roll-back to in the future. It also helps your successor understand what changes have been made to the work item types and why they were made.

        After checking in the changes, we also check-out all the files again. (This is not strictly necessary if you are using Visual Studio 2012 and Local Workspaces, since the files will be read-write on disk and any changes will be detected anyway.)

        Checkin.cmd:

        @echo off
        SET /P comment="Checkin comment?"
        tf checkin . /r /noprompt /comment:"%comment%"
        tf edit . /R

        Step 7 - Import the changes to Production

        Once we've checked-in a copy of our changes, it's time to upload the Work Item Type changes to the Production team project. If you're making the changes on behalf of a customer, then you would have them review the changes on your Test system first.

        Since I normally iterate on a set of changes a few times and upload to Production once at the end, I usually just modify the Server/Collection/Project settings in ImportWITDs.cmd and use that, rather than creating a separate batch file.

        Other Tools

        Although they are not part of the "normal" workflow, there are some other tools that I have used in the past for special situations.

        TFS Team Project Manager

        I can’t recommend this tool from Jelle Druyts highly enough for doing what I call “Bulk Administration” tasks in TFS. It lets you easily take a set of Work Item Types and upload them to all projects in your project collection. It also lets you bulk edit build definitions, build process templates, fields, source control settings and more.


        ExportWITDSorted.exe

        This is a little tool that I wrote for myself. The use case I wrote it for was when you are working with heavily customised work item types that you don't have the original XML for.

        Although the way that you modify work item types is all in XML - that is not how the work item types are defined in the TFS database. When you tell TFS to import your work item type XML file, it shreds the XML, parses out all the fields, layouts, transitions, etc and puts them in separate SQL tables. When you tell TFS to export a work item type as XML, it does the opposite. The ordering of the elements in the XML is basically the ordering of the rows from the database. No sorting.

        If you are trying to do a diff of different work item type XML files, this can be pretty frustrating. Of course you can go and get a diff tool that understands XML semantics, but Visual Studio can't do this for you.

        This tool I wrote uses the same APIs that 'witadmin exportwitd' uses to get an export of the work item type, but then it iterates through the XML elements and sorts them by the 'name' attribute. This makes it a little easier to diff with a 'dumb' text-diff tool like Visual Studio or WinMerge.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;
        using System.Xml;
        using System.Xml.XPath;

        namespace ExportWitdSorted
        {
            class Program
            {
                static void Main(string[] args)
                {
                    if (args == null || args.Length != 4)
                    {
                        Console.WriteLine("Usage:   ExportWitdSorted.exe <collection url> <project> <work item type> <outputfile>");
                        Console.WriteLine("Example: ExportWitdSorted.exe http://tfsserver:8080/Collection MyProject \"My Bug\" mybug.xml");
                        Environment.Exit(1);
                    }

                    // Connect to TFS
                    TfsTeamProjectCollection tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(args[0]));
                    WorkItemStore wis = tpc.GetService<WorkItemStore>();
                    Project project = wis.Projects[args[1]];
                    WorkItemType type = project.WorkItemTypes[args[2]];

                    // Export the work item definition to an XmlDocument
                    XmlDocument originalDoc = type.Export(false);

                    // Create a copy of the definition and remove all the <FIELD> nodes so that we can replace them with a sorted list
                    XmlDocument sortedDoc = new XmlDocument();
                    sortedDoc.LoadXml(originalDoc.OuterXml);
                    sortedDoc.SelectSingleNode("//FIELDS").RemoveAll();

                    // Get the nodes from the original document and sort them
                    XmlNode node = originalDoc.SelectSingleNode("//FIELDS");
                    XPathNavigator navigator = node.CreateNavigator();
                    XPathExpression selectExpression = navigator.Compile("FIELD/@name");
                    selectExpression.AddSort(".", XmlSortOrder.Ascending, XmlCaseOrder.None, "", XmlDataType.Text);
                    XPathNodeIterator nodeIterator = navigator.Select(selectExpression);

                    // Import the sorted nodes into the new document
                    while (nodeIterator.MoveNext())
                    {
                        XmlNode fieldNode = originalDoc.SelectSingleNode("//FIELD[@name='" + nodeIterator.Current.Value + "']");
                        XmlNode importedFieldNode = sortedDoc.ImportNode(fieldNode, true);
                        sortedDoc.SelectSingleNode("//FIELDS").AppendChild(importedFieldNode);
                    }

                    sortedDoc.Save(args[3]);
                }
            }
        }

        TFS, Load Balancers, Idle Timeout settings and TCP Keep-Alives

        Since TFS 2010, it has been possible to have multiple Application Tier servers configured in a load-balanced configuration. If you use something like a F5 BIG-IP LTM device, then the default Idle Timeout settings for the TCP Profile can cause problems. (But don’t despair, read the whole post).

        Here’s the scenario:

        • Between the TFS ASP.NET Application and SQL Server, there is a maximum execution timeout of 3600 seconds (1 hour)
        • In IIS/ASP.NET there is a maximum request timeout of 3600 seconds (it’s no coincidence that it matches)
        • This allows TFS operations to run for up to an hour before they get killed off. In reality, you shouldn’t see any TFS operations run for anywhere near this long – but on big, busy servers like the ones inside Microsoft, this was not uncommon.

        Load balancers, in their default configuration usually have an ‘Idle Timeout’ setting of around 5 minutes. The reason for this is that every request that stays open, is consuming memory in the load balancer device. A longer timeout means that more memory is consumed and it’s a potential Denial-of-Service attack vector. (Side note: What’s stopping somebody using TCP Keep-Alives like I describe below to keep a huge number of connections open and have the same DoS effect?)

        So why is this a problem if your ‘Idle Timeout’ is set to something less than 3600 seconds? This is what can happen:

        • The client makes a request to TFS – for example: “Delete this really large workspace or branch”. That request/connection remains open until the command completes.
        • The TFS Application Tier then goes off and calls a SQL Stored Procedure to delete the content.
        • If that Stored Procedure takes longer than the ‘Idle Timeout’ value, the load balancer will drop the connection between the client and the application tier.
        • The request in IIS/ASP.NET will get abandoned, and the stored procedure will get cancelled.
        • The client will get an error message like ‘The underlying connection was closed: A connection that was expected to be kept alive was closed by the server’. Basically, this means that the connection got the rug pulled out from under it.

        Prior to Visual Studio & Team Foundation Server 2012, I recommended that people talk to their Network Admin guys and get the load balancer configuration updated to a higher ‘TCP Idle Timeout’ setting. This usually involved lots of back-and-forth with the grumpy admins, and eventually you could convince them to begrudgingly change it, just for TFS, to 3600. If you think that you’re hitting this problem – one way to verify is to try the same command directly against one of your application tier servers, rather than via the load balancer. If it succeeds, then you’ve likely found your culprit.

        HTTP Keep-Alives

If you’ve administered web sites/webservers before, you’ve likely heard of HTTP Keep-Alive. Basically, when it’s enabled on the client and the server, the client keeps the TCP connection open after making an HTTP GET request, and reuses the connection for subsequent HTTP GET requests. This avoids the overhead of closing and re-establishing a new TCP connection.


        That doesn’t help our Idle Timeout problem, since we only make a single HTTP request. It’s that single HTTP request that gets killed halfway through – HTTP Keep-Alives won’t help us here.

        Introducing TCP Keep-Alives

There’s a mechanism built into the TCP protocol that allows you to send a sort-of “PING” back and forth between the client and the server without polluting the HTTP request/response.

        If you have a .NET client application, this is the little gem that you can use in your code:

// Enable TCP Keep-Alives: send the first keep-alive after 50 seconds,
// then if no response is received within 1 second, send another.
webRequest.ServicePoint.SetTcpKeepAlive(true, 50 * 1000, 1000);
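
Here’s a fuller, self-contained sketch of the same idea. The URL is a placeholder and the slow endpoint is hypothetical – the point is just where the SetTcpKeepAlive() call sits relative to the request:

using System;
using System.IO;
using System.Net;

class KeepAliveDemo
{
    static void Main()
    {
        // Hypothetical long-running endpoint sitting behind a load balancer.
        HttpWebRequest webRequest =
            (HttpWebRequest)WebRequest.Create("http://your-server/slow-operation");

        // Send the first TCP keep-alive probe after 50 seconds of idle time,
        // then re-probe every second until the server responds. This stops the
        // load balancer's idle timer from expiring while we wait for the reply.
        webRequest.ServicePoint.SetTcpKeepAlive(true, 50 * 1000, 1000);

        using (HttpWebResponse response = (HttpWebResponse)webRequest.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}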

        In this example NetMon network trace:

• I deployed a web service to Windows Azure, where the load balancer had a TCP Idle Timeout set to 5 minutes (this has changed lately in Azure, now that they’ve moved to a software-based load balancer).
• This web service was coded to do a Thread.Sleep() for however long I told it to, then send a response back.

        NetMon capture that shows TCP KeepAlive packets

        First of all, you’ll notice that I did this investigation quite some time ago (~2 years…). Next, you’ll see that there’s some other traffic that happens on my connection between the HTTP:Request at frame 179 and the HTTP:Response at frame 307. Those are the TCP Keep-Alive ‘PING’ and ‘ACK’ packets.

        Finally, you can see that after 320 seconds have passed (i.e. 20 seconds after the load balancer should’ve closed the connection), I get a valid HTTP:Response back. This means that we have successfully avoided the load balancer killing our connection prematurely.

        What’s in it for me?

I originally did this investigation when I was working on the TFS team, as they were getting ready to launch the Team Foundation Service. Although it was quite rare, there were instances where users could hit this TCP Idle Timeout limitation.

The good news is that by working with Philip Kelley, the rock star dev on the Version Control team, we were able to include a change in the TFS 2010 Forward Compatibility update and the TFS 2012 RTM clients to send TCP Keep-Alives every 30 seconds, thus avoiding the issue altogether when talking to the Team Foundation Service, and to on-premises TFS servers deployed behind a load balancer. You can see this for yourself in Microsoft.TeamFoundation.Client.Channels.TfsHttpRequestHelpers.PrepareWebRequest().

        webRequest.ServicePoint.SetTcpKeepAlive(true, 30000, 5000);

        A caveat

If you don’t have a direct connection between your client and your server, and you go via an HTTP proxy server or something like ISA/Forefront Threat Management Gateway, the TCP Keep-Alive packets aren’t propagated through those proxies. You’ll get an error back with something like ‘502: Bad Gateway’, which basically means that the connection between the proxy server and the TFS server was dropped.

        Here’s what the NetMon trace looks like for this example:

        NetMon capture that shows TCP KeepAlive packets, and ultimately the connection getting dropped


        TFS2012: New tools for TFS Administrators


        This is a brand new feature in TFS 2012 that hasn’t really been documented or talked about yet. If you’re a TFS administrator and you browse to this address on your server, you will see a new web-based administration interface for some things inside of TFS:

        http://your-server:8080/tfs/_oi/

        Activity Log

The first page that we see is a view on the TFS Activity Log. Internally, TFS has two tables in the Tfs_Configuration and Tfs_CollectionX databases called tbl_Command and tbl_Parameter. These tables keep a record of every single command that every single user has executed against TFS for the last 14 days. (A sample query against these tables follows the column list below.)

        In this screenshot, you can see that the following columns are displayed:

        • Command Id – A unique ID (per database) given to the command execution.
        • Application – Which component of TFS does it relate to? Version Control, WorkItem Tracking, Framework, Lab Management, etc
        • Command Name – The server name of the command. You can usually work out what the equivalent client/API call is – but these command names are not documented anywhere.
• Status – 0 = Success, -1 = Failure
        • Start Time – When was the request first received by TFS
        • Execution Time – How long did the command run for (Divide by 1,000,000 to get seconds)
        • Identity Name – The user name of the user who executed the command
        • IP Address – IPv4 or IPv6 address
        • Unique Identifier – Used to group/correlate multiple server requests that originate from a single client request.
        • User Agent – the ‘User-Agent’ HTTP Header from the client. Tells you the name of the executable if it’s using the TFS API and what version/SKU.
        • Command Identifier – When using the TFS Command Line tools, this helps you correlate to what command the user was using. ‘tf get’, ‘tf edit’, etc.
        • Execution Count – How many times was this command executed. The logging mechanism has some smarts to reduce the noise in the log. If you download a bazillion files, it doesn’t log a bazillion individual rows in the log. It just sets this value to a bazillion for that entry.
        • Authentication Type – NTLM or Kerberos.

        Screenshot of TFS Activity Log Web Interface
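
If you want the raw data rather than the web view, you can query the same tables directly with ADO.NET. This is a minimal sketch, not a supported interface: the connection string is a placeholder, and the column names (Command, IdentityName, StartTime, ExecutionTime) are my reading of the schema described above, so verify them against your own tbl_Command first.

using System;
using System.Data.SqlClient;

class ActivityLogQuery
{
    static void Main()
    {
        // Placeholder server/database names - substitute your own collection database.
        const string connectionString =
            "Data Source=your-sql-server;Initial Catalog=Tfs_DefaultCollection;Integrated Security=true";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Slowest commands from the last 24 hours.
            // ExecutionTime is in microseconds, hence the division by 1,000,000.
            const string sql = @"
                SELECT TOP 20 Command, IdentityName, StartTime,
                       ExecutionTime / 1000000.0 AS ExecutionSeconds
                FROM tbl_Command
                WHERE StartTime > DATEADD(HOUR, -24, GETUTCDATE())
                ORDER BY ExecutionTime DESC";

            using (SqlCommand command = new SqlCommand(sql, connection))
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0,-50} {1,-30} {2:F1}s",
                        reader["Command"], reader["IdentityName"], reader["ExecutionSeconds"]);
                }
            }
        }
    }
}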

One of the things the TFS Activity Logger does is log the parameters passed in with a request when:

        • The command fails (i.e. a status of != 0)
        • Or, the command takes longer than 30 seconds

You can see these parameters by double-clicking a row in the table.


        At the top of the Activity Log screen, you can also filter the log based upon Host/Collection and Identity Name. This is useful if a particular user complains about slow performance or TFS errors – you can easily look at the server logs to see what the server is seeing.


        You can also click the ‘Export’ link to download a CSV file of the same content.

        If you’d like to know more about how to query or interpret the contents of the TFS Activity Log – grab a copy of my Professional Team Foundation Server 2012 book and look at Chapter 23 – Monitoring Server Health and Performance.

        TFS Job Monitoring

Built into TFS is the TFS Background Job Agent. This job agent is responsible for the scheduling and queuing of maintenance jobs and other jobs within TFS. You can see my blog post on all the different jobs in TFS 2012 for more information.

        If we click the ‘Job Monitoring’ tab, we get some fairly ugly charts that give us some insight into how long the jobs are taking to execute.


        There is another chart further down the ‘Job Summary’ page that shows us the number of times that a job has been run, and what was the status of each of those runs.


We can click on one of the green bars in that chart, or the blue bars in the previous chart, or the ‘Job History’ link in the navigation bar to see a different view of the TFS jobs.

        This view shows us the number of jobs that were executing at a particular time, the average time that they waited in the job queue, and the average run time.


        If you then click the ‘Job Queue’ link in the navigation bar, you can see which jobs are currently queued, their priorities and when they are expected to start.


        New book: Professional Team Foundation Server 2012


        I’m very pleased to announce that our new book Professional Team Foundation Server 2012 is now available!

It’s an update to the 2010 edition that reflects all the great new features and changes introduced in Visual Studio Team Foundation Server 2012. For example, there are whole new chapters on Managing Teams, Agile Planning Tools and Integration with Project Server. There’s also new content on the new Team Explorer interface, the Code Review tools, Local Workspaces and the updated Testing and Lab Management features. Throughout the book, we also talk about how to use the cloud-hosted Team Foundation Service and cover some of the ways the TFS internals have changed to support the service.

        We hope that you enjoy this book as much as the previous one and we look forward to reading your reviews.

        Book cover for: Professional Team Foundation Server 2012

        ISBN: 9781118314098

You can buy the book from the usual retailers, and you can preview some of the book before you buy.

        Table of Contents

The book is broken up into five sections and it’s written in a way that you can either read the whole thing cover-to-cover or jump in to a particular part or chapter that interests you. My personal favourite chapters are the ones in Part V – Administration, since I wrote most of them. :)

        Part I – Getting Started

        1 – Introducing Team Foundation Server 2012

        2 – Planning a Deployment

        3 – Installation and Configuration

        4 – Connecting to Team Foundation Server

        Part II – Version Control

        5 – Overview of Version Control

        6 – Using Team Foundation Version Control

        7 – Ensuring Code Quality

        8 – Migration from Legacy Version Control Systems

        9 – Branching and Merging

        10 – Common Version Control Scenarios

        Part III – Project Management

        11 – Introducing Work-Item Tracking

        12 – Customizing Process Templates

        13 – Managing Teams and Agile Planning Tools

        14 – Reporting and SharePoint Dashboards

        15 – Integration with Project Server

        Part IV – Team Foundation Build

        16 – Overview of Build Automation

        17 – Using Team Foundation Build

        18 – Customizing the Build Process

        Part V – Administration

        19 – Introduction to Team Foundation Server Administration

        20 – Scalability and High Availability

        21 – Disaster Recovery

        22 – Security and Privileges

        23 – Monitoring Server Health and Performance

        24 – Testing and Lab Management

        25 – Upgrading Team Foundation Server

        26 – Working with Geographically Distributed Teams

        27 – Extending Team Foundation Server

        Authors

        With Ed joining Microsoft since the last book, that completes the set – all four authors work for Microsoft:

        • Ed Blankenship is the Microsoft Program Manager for the Lab Management scenarios for Team Foundation Server and the Visual Studio ALM product family. He was voted as Microsoft MVP of the Year for Visual Studio ALM & Team Foundation Server before joining Microsoft.
        • Martin Woodward is currently the Program Manager for the Microsoft Visual Studio Team Foundation Server Cross-Platform Tools Team. Before joining Microsoft, he was voted Team System MVP of the Year, and has spoken about Team Foundation Server at events internationally.
        • Grant Holliday is a Senior Premier Field Engineer for Microsoft in Australia. Prior to this role, he spent three years in Redmond, Washington as a program manager in the Visual Studio Team Foundation Server product group.
        • Brian Keller is a Principal Technical Evangelist for Microsoft specializing in Visual Studio and application lifecycle management. He has presented at conferences all over the world and has managed several early adopter programs for emerging Microsoft technologies.

This time around, we also had long-time TFS/ALM MVP Steve St. Jean contributing to some of the book, as well as being a Technical Editor and checking all our facts.

        Q & A

When people find out that I’ve written a book, there are a few questions that often come up.

        How much money do you make from the book?

A colleague wrote a book many years ago and he set my expectations right from the start. He used to say: “You don’t make a lot of money by writing a book – especially technical books”. Personally, the royalties are a nice surprise when they come, but I’m not headed for early retirement with them. :)

There are really two ways that authors get paid for their contributions:

• Advance – This is a fixed sum, negotiated with the publisher before you sign a contract. It’s usually paid in instalments as you complete different milestones in the process (50% draft, final draft, etc.).
• Royalties – This is how much you get for each sale of the book. There is no single percentage; it varies depending on whether the book was sold in the USA, as an e-book, as a translation, etc.

        For a more complete explanation of how it all works, check out Charles Petzold’s article on Book Royalties and Advances.

Then there’s the non-direct value – you get to say “I wrote the book” on your business card, which is instant credibility and opens up more opportunities.

        You work for Microsoft – what do they think about you writing a book?

        Microsoft has a Moonlighting policy which covers things like writing a book and building apps. Essentially, each of the authors had to seek approval from their manager before they could work on the book. The policy also has rules that say you can’t use any Microsoft resources and the work is not allowed to impact your daytime work duties.

        Since the subject of the book is a Microsoft product and it helps educate people on how to use it, there was never going to be much resistance to the idea.

        What’s the process for writing a book?

        The Wiley author site has more information on the Life of a book, but in short the process is:

        Proposal > Contract > Draft writing > Editors > Tech Editors > Author Review > Proofs > Printing

        How long did it take you?

Writing a book takes a lot of time and it requires a lot of concentration. It took me a little while to settle into a rhythm, but my style eventually ended up as intense-focus weekends every few weeks:

• Monday-Thursday: Start researching content for the chapter. Put all the links in a OneNote notebook. Do the hands-on labs, etc. Basically, immerse myself in the subject of that chapter and come up with a logical flow of sub-headings.
        • Thursday night: Spend a few hours at home and take all the screenshots that I could possibly use.
        • Friday night, Saturday all day: Take my laptop to a local coffee shop without an Internet connection. Then just write, write, write. Fill out all the paragraphs for the sub-headings, put in all the screenshots and get the word count up to where it should be.
        • Sunday: Depending on where the word count was at, Sunday was usually spent reviewing and tidying up the formatting and getting it ready to submit. My goal was to upload the draft by Sunday night, since all our chapters were due on Mondays.

        What is the most annoying part of writing a book?

        Screenshots. We had it easy for the 2010 edition – the product was RTM, so nothing was changing. With the 2012 edition, we were writing the book before the product was released. That meant that every time the UI changed between Beta/RC/RTM/Update 1, we had to go back to check and update our screenshots.

        Summary

To finish off, writing these books has been a very personally rewarding experience. I saw it as a way of capturing 4–5 years’ worth of accumulated knowledge and experience and getting it down on paper so that others can learn from it. And hey, never in my wildest dreams did I imagine that I would see my name in Russian on the front of a book.

        Five years at Microsoft and a new job


        Recently, I completed 5 years of service at Microsoft. The company makes a big deal of anniversaries that fall on the 5-year milestones with increasingly larger "optical crystal" monuments.

        Optical crystal service awards

        As part of my anniversary, I also imported another tradition back to my local branch office from my time in Redmond. The tradition says that on your anniversary, you bring in 1 pound of M&Ms for each year of service and share it in a bowl outside your office. I'll tell you that 2.2kg doesn't go far once people get the after-lunch munchies. :)

        Bowl of M&M

        I'm pleased to mark this anniversary, since it represents the longest time to date that I've been with a single employer. However, that is one of the beauties of a large company like Microsoft - there is the opportunity to change jobs and gain different experiences, but remain with the same company.


        New Job: Senior Service Engineer, Team Foundation Service

        Yes, that's right, I have another new job. As a Service Engineer, I'll be in the engine room of http://tfs.visualstudio.com/ keeping the service humming along, on-boarding new and exciting services (like the Load Testing Service) and evolving the maturity of the services. My main area of focus is on improving the efficiency of the Service Delivery team through automation and engineering improvements.

My history at Microsoft so far has been quite broad, which actually reflects how I approach most things.

        Perhaps the most exciting part of the role, is that I will remain in Australia and work 100% from home with the occasional trip to the mother ships (Redmond/Raleigh). My experience so far has been a little different to Scott's, but I'm planning a follow-up post on what it's like to be a remote worker, in a completely different time zone. Part of running a global service like the TF Service is that there are customers in all time zones around the world using it. The Service Delivery team now has around-the-world coverage with me in Australia and other team members in India, Europe, North Carolina and Seattle. We’re still ironing out the processes as we get ready to launch the commercial service before the end of 2013.

After a break of a few years, I'm absolutely thrilled to be part of the Server and Tools Division (now the Cloud and Enterprise Engineering Group) again. I'm working amongst some of the brightest people I know and I am looking forward to having a huge impact on software and services that are relied upon by developers around the world.

        What does a well maintained Team Foundation Server look like?


After spending some time out in the field looking at customers’ TFS environments, and more recently looking at some of Microsoft’s internal on-premises TFS deployments, I realised that some environments are better configured and maintained than others.

        Some of the general concepts and the very TFS-specific configurations are talked about in Part 5 of my Professional Team Foundation Server 2012 book, but many of the basics were considered out of scope or assumed knowledge. Also, not everybody has read the book, even though it gets 5 stars and is considered “THE Reference for the TFS Administrator and expert!” on Amazon.

        The purpose of this blog post is to give the Service Owners of TFS a check-list of things to hold different roles accountable for in the smooth operation of the server. It’s broken into 5 sections that roughly translate to the different roles in a typical enterprise IT department. In some cases, it might all be the one person. In other cases, it could be a virtual team of 50 spread all throughout the company and the globe.

        1. The initial setup and provisioning of the hardware, operating system and SQL platform
        2. Regular OS system administrator tasks
        3. Regular SQL DBA tasks
        4. TFS-specific configurations
        5. Regular TFS administrator tasks

        The list is in roughly descending priority order, so even if you do the first item in each section, that’s better than not doing any of them. I’ll add as many reference links as I can, but if you need specific instructions for the steps, leave a comment and I’ll queue up a follow-up blog post.

        Keep Current

• Apply all security updates that the MBSA tool identifies. ‘Critical’ security updates should be applied within 48 hours – there’s no excuse for missing them. They are very targeted fixes for very specific and real threats. The risk of not patching soon enough is often greater than the risk of introducing a regression.
        • Be on the latest TFS release. (TFS 2012.4 RC4 at the time this post was written or TFS2013 RTM after November 13 2013. If you’re stuck on TFS2010, see here for the latest service packs and hotfixes.)
• Be on the latest version of SQL that is supported by your TFS version. Check your SQL version here. (TFS 2010 = SQL2008R2 SP3, TFS 2012.4 = SQL2012 SP1, TFS 2013 = SQL2012 SP1.) Be on Enterprise edition for high-scale environments.
        • Be on the latest OS release supported by the combination of SQL + TFS. Most likely Windows Server 2008 R2 SP1 or 2012.
        • Be on the latest supported drivers for your hardware (NIC & SAN/HBA drivers especially).

        Initial OS Configuration and Regular Management Tasks

        • Collect a performance counter baseline for a representative period of time to identify any bottlenecks and serve as a useful diagnostics tool in the future.  A collection over a 24 hour period on a weekday @ 1-5min intervals to a local file should be sufficient. Don’t know which counters to collect? Download the PAL tool and look at the “threshold files” for “System Overview” on all your servers, “SQL Server” on your data tier servers, and "IIS" and ".NET (ASP.NET)" for your application tier servers.
        • Ensure antivirus exclusions are correct for TFS, SQL and SharePoint. (KB2636507)
        • Ensure firewall rules are correct. I had an outage once where the network profile changed from ‘domain’ to ‘public’ due to a switch gateway change, and our firewall policy blocked SQL access for the ‘public’ profile which effectively took SQL offline for TFS.
        • Ensure page file settings are configured for an appropriately sized disk & memory dump settings are configured for Complete memory dump. If you get a bluescreen, having a dump greatly increases your chances of getting a root cause + fix. (KB254649), test the settings using NotMyFault.exe (during a maintenance window, of course)
        • Don’t run SQL or TFS as a local administrator.

        Initial SQL Configuration

        • SQL Pre-Deployment Best Practices (SQLIO/IOmeter to benchmark storage performance)
        • SQL recommended IO configuration. SQLCAT Storage Top 10 best practices
• Check disk partition alignments for a potential 30% IO performance improvement (especially if your disks were ever attached to a server running Windows Server 2003, but sometimes also if you used pre-partitioned disks from an OEM).
        • Ensure that Instant File Initialization is enabled (if the performance vs. security trade-off is appropriate in your environment. The article has more details). This enables SQL to create data files without having to zero-out the contents, which makes it “instant”. This requires the service account that SQL runs as to have the ‘Perform Volume Maintenance Tasks’ (SE_MANAGE_VOLUME) permission.
        • Separate LUNs for data/log/tempdb/system.
• Multiple data files for TempDB and the Team Project Collection (TPC) databases. (See here for guidance on the “right” number of files. If you have fewer than 8 cores, use #files = #cores. If you have more than 8 cores, use 8 files, and if you’re seeing in-memory contention, add 4 more files at a time.)
        • Consider splitting tbl_Content out to a separate filegroup so that it can be managed differently
        • Consider changing ‘max degree of parallelism’ (MAXDOP) to a value other than ‘0’ (a single command can peg all CPUs and starve other commands). The trade-off here is slower execution time vs. higher concurrency of multiple commands from multiple users.
        • Consider these SQL startup traceflags. Remember, the answer to “should I do this on all my servers?” is not “yes”, the answer is “it depends on the situation”.
        • Configure daily SQL ErrorLog rollover and 30 day retention.
• Set an appropriate ‘max server memory’ value for SQL Server. If it’s a server dedicated to SQL (assuming TFS, SSRS and SSAS are on different machines), then a loose formula you can use is to reserve: 1 GB of RAM for the OS, 1 GB for each 4 GB of RAM installed from 4–16 GB, and then 1 GB for every 8 GB of RAM installed above 16 GB. So, for a 32 GB dedicated server, that’s 32-1-4-2=25GB (see the sketch after this list). If you are running SSRS/SSAS/TFS on the same hardware, then you will need to reduce the amount further.
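
As a quick sanity check of that loose formula, here’s a small sketch. Note the interpretation: “1 GB for each 4 GB from 4–16 GB” is read as four 4 GB blocks, which is what makes the worked example come out at 25 GB:

using System;

class MaxServerMemory
{
    // Loose reservation formula from the list above:
    // 1 GB for the OS, 1 GB per 4 GB of RAM up to 16 GB,
    // then 1 GB per 8 GB of RAM above 16 GB.
    static int MaxSqlMemoryGB(int totalRamGB)
    {
        int reserved = 1;                               // OS
        reserved += Math.Min(totalRamGB, 16) / 4;       // 4-16 GB band
        reserved += Math.Max(totalRamGB - 16, 0) / 8;   // above 16 GB
        return totalRamGB - reserved;
    }

    static void Main()
    {
        // Matches the worked example: 32 - 1 - 4 - 2 = 25 GB.
        Console.WriteLine(MaxSqlMemoryGB(32));
    }
}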

        Regular SQL DBA Maintenance

        (These are not TFS specific and apply to most SQL servers)

        • Backup according to the supported backup procedure (marked transactions, transaction logs, SSRS encryption key and use SQL backup compression and WITH CHECKSUM). It’s important to ensure that transaction log backups run frequently – they allow you to do a point-in-time recovery. It also checkpoints and allows the transaction log file to be reused. If you don’t run transaction log backups (and you’re running in FULL recovery mode, which is the default), then your transaction logfiles will continue to grow. If you need to shrink them, follow the advice in this article.
• Run DBCC CHECKDB regularly to detect physical/logical corruption and have the best chance at repairing and then preventing it in the future. Ola Hallengren's SQL Server Integrity Check scripts are an effective way of doing this, if your organisation doesn't have an established process already. Even though the solution is free, if you use it, send Ola an email to say that you appreciate his work. The solution can also be used for backups and index maintenance for non-TFS databases. (TFS rebuilds its own indexes when needed, and it requires marked transactions as per the supported backup procedure.)
        • Ensure PAGE_VERIFY=CHECKSUM is enabled to prevent corruption. If it’s not, you have to rebuild indexes after enabling it to get the checksums set.
• Manage data/log file freespace and growth.
        • Monitor for TempDB freespace (<75% available).
        • Monitor for long-running transactions (>60 minutes, excluding index rebuilds, backup jobs).
        • Monitor table sizes & row counts (there’s a script on my blog here, search the page for sp_spaceused).
        • Monitor SQL ERRORLOG for errors and warnings.

        TFS Configuration Optimizations

• At least two application tiers in a load-balanced configuration. That gives you redundancy, increased capacity for requests/sec, and two job agents for running background jobs. Ensure that your load balancer configuration has a TCP Idle Timeout of 60 minutes, or that all your clients are running a recent version. See here for more details.
        • Ensure that SQL Page Compression is enabled for up to a 3X storage reduction on tables other than tbl_Content (if running on SQL Enterprise or Data Center Edition). To enable, it’s the opposite of KB2712111.
        • Ensure that table partitioning is enabled for version control (if a large number of workspaces and running SQL Enterprise). Not recommended unless you have >1B rows in tbl_LocalVersion. Contact Customer Support for the script, since it’s an undocumented feature for only the very largest TFS instances (i.e. DevDiv).
        • Check that SOAP gzip compression is enabled (should’ve been done by TFS 2010 SP1 install. I have seen up to an 80% reduction in traffic across the wire and vastly improved user experience response times for work item operations).
        • Disable / monitor the IIS Log files so they don’t fill the drive: %windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpLogging /dontLog:"True"  /commit:apphost
• Change the TFS App Pool Idle Timeout from 20 minutes to 0 (no idle timeout), and disable scheduled recycling so that you don’t have an app-pool recycle during business hours (see the appcmd sketch after this list).
• Implement a TFS Proxy Server and make sure people use it (especially build servers). Even if no users are remote, it reduces the requests/sec load on the ATs. Configure it as the default proxy for your AD site using: tf proxy /add
        • Enable work item tracking metadata filtering if appropriate.
        • Enable SMTP settings and validate that they work. The most common issue here is that a SMTP server won’t relay for the service account that TFS is running as.
        • Set TFS’s NotificationJobLogLevel = 2, so that you get the full errors for any event notification jobs that fail.
• Consider moving the application tier file cache to a separate physical and/or logical drive. See here for how to set a different dataDirectory, but don’t touch any of the other settings. The reason you want it on its own drive is: 1) to separate the I/O load, and 2) if you ever have to restore the database to an earlier point in time, you have to clear the cache so that you don’t end up sending the wrong content to users. If you make it a separate drive, you can just do a quick-format, which takes seconds. Otherwise you have to delete all the folders/files individually, which takes much longer.
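
For the app pool idle timeout and recycling change above, something like the following appcmd commands should do it. Treat this as a sketch: the app pool name is an assumption (check the actual name on your application tier in IIS Manager first).

%windir%\system32\inetsrv\appcmd set apppool "Microsoft Team Foundation Server Application Pool" /processModel.idleTimeout:00:00:00
%windir%\system32\inetsrv\appcmd set apppool "Microsoft Team Foundation Server Application Pool" /recycling.periodicRestart.time:00:00:00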

        Regular TFS Administrator Maintenance

        René's blog post Top 10 of things every TFS Administrator should do also covers some other things. 

        Regular TFS Build Administrator Maintenance

        This is a community contribution from Jesse on regular maintenance around Build Agents, Symbols and Drop shares:

        • Monitor disk space usage on the build agents
        • Monitor queue time for the builds, spin up additional agents if available and needed
        • Clean up the \Builds folder on build agents to remove old workspaces
        • Backup the Symbols share regularly
        • Backup the Builds Drop folder regularly
        • Exclude \Builds, \Symbols, \Drop, Team Explorer Cache from Anti-virus real time scanning

         

        Exit Procedures

        Another community contribution from Jesse – this is a set of things to check for when a user rolls-off a project or otherwise stops using the server:

        • Check for locked or checked out files
        • Check for queued builds
• Check for remaining workspaces
        • Check for work items assigned to this account
• Check for builds and source control items that are exclusively owned by the user
• Back up their personal work item queries by exporting them all to WIQL (a sketch using the client object model follows this list)
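
For that last item, here’s a minimal sketch using the TFS client object model. The collection URL and output path are placeholders, and note that the API only shows the ‘My Queries’ folder of the account running the code:

using System;
using System.IO;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class ExportPersonalQueries
{
    static void Main()
    {
        // Placeholder collection URL and output folder - substitute your own.
        TfsTeamProjectCollection tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://your-server:8080/tfs/DefaultCollection"));
        WorkItemStore store = tpc.GetService<WorkItemStore>();

        foreach (Project project in store.Projects)
        {
            foreach (QueryItem item in project.QueryHierarchy)
            {
                // Personal queries live under the "My Queries" folder.
                QueryFolder folder = item as QueryFolder;
                if (folder != null && folder.Name == "My Queries")
                {
                    ExportFolder(folder, Path.Combine(@"C:\QueryBackup", project.Name));
                }
            }
        }
    }

    static void ExportFolder(QueryFolder folder, string path)
    {
        Directory.CreateDirectory(path);
        foreach (QueryItem item in folder)
        {
            QueryDefinition definition = item as QueryDefinition;
            if (definition != null)
            {
                // QueryText holds the WIQL for the saved query.
                File.WriteAllText(Path.Combine(path, definition.Name + ".wiql"),
                                  definition.QueryText);
            }

            QueryFolder subFolder = item as QueryFolder;
            if (subFolder != null)
            {
                ExportFolder(subFolder, Path.Combine(path, subFolder.Name));
            }
        }
    }
}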

        Other Resources

The ALM Rangers are a group of individuals from the TFS Product Group, members of Microsoft Services, Microsoft Most Valuable Professionals (MVPs) and technical specialists from technology communities around the globe, giving you a real-world view from the field, where the technology has been tested and used. If you haven’t seen some of the resources that they produce and maintain, I highly recommend that you check them out.

        Hopefully this blog post has been an effective use of my limited keystrokes and together we can improve the predictability, reliability and availability of Team Foundation Server in your organisation.

        Updates:

        [October 9 2013]: Added notes on local admin, SQL Instant File Initialization, max server memory, transaction log shrinking, SMTP settings, cache directory settings, build administrator tasks and exit procedures.
        [October 19 2013]: Added link to Ola's solution for integrity checks and database backups.
        [November 1 2013]: Added link to René's blog post on Top 10 TFS administrator tasks
        [November 16 2013]: Added reference to IIS & ASP.NET threshold files for PAL. Thanks Chetan.

        TFS Administration Tool 2.3 (2013) Released


As I did after the last major TFS release, I’ve updated the TFS Administration Tool to depend upon the TFS 2013 Object Model. You no longer need Team Explorer 2012 installed to use the tool. It can be installed on machines running either the stand-alone object model (http://aka.ms/TFSOM2013), TFS 2013 or Visual Studio 2013.

This release adds support for SharePoint groups/roles, thanks to a community contribution. There are no other major functional changes between this release and the previous (2.2) release.

        http://tfsadmin.codeplex.com/

Currently, the MSI in the downloaded ZIP file is flagged by Windows SmartScreen as “unsafe”. Based upon the experience of the last release, it should build up enough “reputation” to be considered safe in about a week.

        If you find a bug, the best way to get it fixed is to upload a patch. You can also open an issue and include either the contents of the "Output" window or the contents of the log file saved in the "Logs" folder so that we can easily reproduce and investigate the problem.
