Grant Holliday's Blog

TFS2010: Invoking TFS web services using PowerShell


In this blog post, I’m going to show you how to invoke the Visual Studio 2010 Team Foundation Web Services remotely using Windows PowerShell 2.0.

There are some TFS administrative functions that can only be performed remotely using the TFS ASMX web services. Over time, there will be power tools and functionality in the command-line tools and the TFS Administration Console that allow you to access these. In the meantime though, we have to create our own scripts and utilities for doing this.

PowerShell 2.0 is installed by default in Windows 7 and Windows Server 2008 R2. It includes a new cmdlet called “New-WebServiceProxy”. Using this cmdlet, we can create an in-memory web service proxy for the TFS web services and use it to invoke some useful web methods.

Viewing currently executing requests

1. Open Powershell 2.0 from the Start menu.

image

2. Create the web service proxy by copying this command and replacing your server name in the URL.  The -UseDefaultCredential switch is required so that we are authenticated to the web service.

$tfsadmin = New-WebServiceProxy -UseDefaultCredential -URI http://tfsserver:8080/tfs/TeamFoundation/administration/v3.0/AdministrationService.asmx?WSDL

image

3. Invoke the QueryActiveRequests web method, expand the ActiveRequests property for each item returned and then format some of the properties (User, Method, etc) as a table (ft).

$tfsadmin.QueryActiveRequests($null, "False") | %{ $_.ActiveRequests } | ft StartTime,UserName,MethodName,RemoteComputer

image

This is a very simple example which should be enough to get you started. Some of the other Administration web services you might want to explore are:

Where possible, you should use the Microsoft.TeamFoundation object model since that is the officially supported API. An example for modifying the TFS registry is included in this post.
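
To round out the example above, here's one more hedged variation that reuses the $tfsadmin proxy from step 2 to group the currently executing requests by user, which is handy when you want to see who is generating the most load:

# Reuses the $tfsadmin proxy from step 2; groups active requests by user, busiest first.
$tfsadmin.QueryActiveRequests($null, "False") | %{ $_.ActiveRequests } | Group-Object UserName | Sort-Object Count -Descending | ft Count,Name -AutoSize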


TFS2010: Warehouse and Job Service Administrator Reports


The new TFS Administration Console will show you very basic information about warehouse & cube processing. However, it doesn’t show you anything about the queued & executing jobs.  These are both gaps that I hope we address in the shipping product in the future. In the meantime, for any real warehouse or job status information, you have to hit the web services. Since the web services don’t allow you to invoke them using a web browser from a remote computer, you’re forced to log on locally to the server and run them from there.

The smarter option is to download yourself a copy of WebServiceStudio, which allows you to invoke web methods with complex parameters from your remote workstation. No more logging onto servers!

Here are the WSDL endpoints that you’ll need. Copy them in, change the server name, select ‘Get’, select a method from the tree, select ‘Invoke’ (ignore the exception dialog if you get one the first time), select ‘Invoke’ again and there you have it.

  image

Now, the problem with either reading the XML from the web browser (argh! my eyes!), or using WebServiceStudio is that there’s lots of nested information which makes it difficult to find out what’s going on. Fortunately with a fairly simple report and the XML Data Source that SQL Reporting Services provides, we can make it much, much nicer.

Introducing the first two TFS2010 Administration Reports

As I run the Pioneer dogfood TFS server day-to-day, I notice opportunities for tools to help make TFS administrators’ lives easier. Here are the first two reports, which give you better insight into what is happening on your server and allow you to investigate problems without having to mess around with web services.

Download & Setup

  • Open a browser and browse to your SQL Reporting Services root site. For example: http://tfsserver/Reports/
  • Create a new folder and name it ‘Status’
  • Create two new shared data sources with the following properties:
  • Download the ZIP attachment from the bottom of this post
  • Extract the contents to a temporary folder on your workstation
  • Select ‘Upload File’ from the reporting services manager
  • Browse to the temporary folder where you extracted the reports, select ‘OK’.
  • Repeat for the other report
  • Try it out!
  • Get somebody else to try it out and make sure permissions are set correctly.
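
If you’d rather script the ‘Upload File’ steps above than click through Report Manager, here’s a minimal sketch that pushes the extracted .rdl files using the SQL Reporting Services web service. The ReportService2005.asmx endpoint URL, the temporary folder path and the ‘/Status’ target folder are assumptions based on the steps above, so adjust them for your environment; you may also still need to re-bind the shared data sources in Report Manager afterwards.

# Hedged sketch: upload the extracted report definitions to the 'Status' folder via the SSRS web service.
$rs = New-WebServiceProxy -UseDefaultCredential -Uri "http://tfsserver/ReportServer/ReportService2005.asmx?WSDL"
Get-ChildItem "C:\Temp\Reports\*.rdl" | ForEach-Object {
    $definition = [System.IO.File]::ReadAllBytes($_.FullName)
    $name = [System.IO.Path]::GetFileNameWithoutExtension($_.Name)
    $rs.CreateReport($name, "/Status", $true, $definition, $null) | Out-Null
}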

Warehouse Status Report

The first part of the report shows you the overall status, similar to the ‘Reporting’ tab in the Team Foundation Administration Console. This is a quick and easy way to find out if an Incremental or Full processing run is in progress.  It will also show you any errors (like warehouse schema conflicts) in the ‘Last Run’ column.

Warehouse Status Report - Processing status

The second part of this report is useful after an upgrade or when the warehouse needs to be rebuilt manually.  It shows you each of the data adapter sync jobs for each collection and their current status. During normal operation, these will run very quickly as data changes in the operational stores, so you’ll probably always see them as “Idle”. It will also show you any errors from previous job executions in the ‘Last Run’ column.

Warehouse Status Report - Data Adapter Jobs

Job Status Report

The first part of this report shows you the job definitions for the instance and the interval they’re set to run on. This is useful for checking to see if a job has somehow been disabled or changed.

Job Status Report - Job Definitions

The second part of this report shows you the job history. Let me explain each of the fields:

  • Job ID – Every job in the system has a unique id.
  • Job Agent ID – Every AT/Job Agent has a unique id.  This field is useful if you have multiple application tiers and you want to know which one the job executed on so that you can investigate the event log or performance counters.
  • Result – This is an integer that represents the result of the job. 0 = Succeeded, 1 = PartiallySucceeded, 2 = Failed, 3 = Stopped, 4 = Killed, 5 = Blocked, 6 = ExtensionNotFound, 7 = Inactive, 8 = Disabled
  • Queued Reason – This is an integer that represents the way a job was scheduled. 0 = No reason to execute the job, 1 = Job has a schedule, 2 = Queued manually, 4 = Queued manually while already in progress, 8 = Queued due to previous result (Blocked, Inactive)
  • Start Time – The local time that the job started executing on a job agent
  • End Time – The local time that the job finished executing on a job agent
  • Duration – The number of minutes that the job was executing for
  • Result Message – Jobs are able to log a status message when they finish executing. Most jobs don’t,  since they execute very often and there’s no need to log anything if the job succeeded.

Job Status Report - Job History
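
If you ever need to translate the Result codes in your own scripts rather than in the report, a tiny lookup table built from the values listed above does the trick:

# Maps a job Result integer (as documented above) to a readable name.
$jobResults = @{ 0 = "Succeeded"; 1 = "PartiallySucceeded"; 2 = "Failed"; 3 = "Stopped"; 4 = "Killed"; 5 = "Blocked"; 6 = "ExtensionNotFound"; 7 = "Inactive"; 8 = "Disabled" }
$jobResults[2]   # "Failed"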

There are a few little visual styles that I added to allow you to glance at the report and find problems:

  • Hover over a job result message to see the full text
  • Job results that failed have their error printed in red
  • Jobs that took longer than 30 minutes have their duration in red
  • Jobs that took longer than 10 minutes have their duration in orange

 

Filters

There is a lot of noise when dealing with jobs, so I added some parameters to help you filter out the noise.

image

  • Unscheduled jobs - are jobs that have a definition, but no schedule set. Examples: “Create Team Project Collection”, “Delete Team Project Collection”, “Service Team Project Collection”, “Provision Attach Team Project collection” and “Team Foundation Server On Demand Identity Synchronization”.
  • Successful jobs - are jobs that have a job result of ‘0’

If you find these useful, leave me a comment or drop me an email. I’d love suggestions for them or for any new TFS2010 Administrator reports.

TFS2010: Create a new Team Project Collection from Powershell and C#


Team Foundation Server 2010 has the great new Administration Console; however, one of its shortcomings is that you have to run it on the TFS Application Tier itself. The team wants to have a tool that allows remote server administration, but it required more time than we had for this release. Now, because I hate logging on to servers, I’ve started seeking out ways to do common tasks remotely.

Fortunately, the product was architected in a way that lets you do almost everything in the admin console via web services and the TFS client & server APIs.

To Create a Team Project Collection, the normal way is to logon to the server, open the admin console and click through the UI.

Create TPC UI

Using PowerShell

Fortunately, I found a script from Chris Sidi and all I had to do was make it compatible with the changes that we introduced after Beta2. All you have to do is start Windows PowerShell on your local workstation, replace the highlighted values and run the following script. You can also run this on the server itself, however you will need to start PowerShell using “Run as Administrator”.

# Load client OM assembly.
[Reflection.Assembly]::Load("Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a");

$instanceBaseUrl = "http://tfsserver:8080/tfs/";
$tfsServer = new-object Microsoft.TeamFoundation.Client.TfsConfigurationServer $instanceBaseUrl;

$tpcSvc = $tfsServer.GetService([Microsoft.TeamFoundation.Framework.Client.ITeamProjectCollectionService]);
$job = $tpcSvc.QueueCreateCollection(
    "MyCollection",      # collection name.
    "",                  # description.
    $false,              # don't make this the default collection.
    "~/MyCollection/",   # virtual directory.
    "Started",           # State after creation.
    $null,               # no tokens.
    "Server=SQLSERVER;Integrated Security=SSPI;",       # The SQL instance to create the collection on. Specify SERVER\INSTANCE if not using default instance
    $null,               # null because the collection database doesn't already exist.
    $null)               # null because the collection database doesn't already exist.

$collection = $tpcSvc.WaitForCollectionServicingToComplete($job)

 

Execution of the last line will block until the collection creation is complete. If the collection cannot be created for any reason, you’ll receive an exception:

Exception calling "WaitForCollectionServicingToComplete" with "1" argument(s): "The collection servicing job did not succeed."

If this is the case, then you’ll have to open up the admin console and look through the “Logs” node to see why it failed.
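
If the job does succeed, you can confirm it from the same PowerShell session without opening the admin console. This is a hedged sketch that reuses the $tpcSvc proxy from the script above; I’m assuming the GetCollections() method and the Name/State properties behave the way their names suggest:

# Reuses $tpcSvc from the script above; lists the collections on the instance and their state.
$tpcSvc.GetCollections() | ft Name,State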

 

C# code to do the same

Add references to: Microsoft.TeamFoundation.Client.dll and Microsoft.TeamFoundation.Common.dll

using System;
using System.Collections.Generic;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Client;
using Microsoft.TeamFoundation.Framework.Common;

namespace CreateTPC
{
    class Program
    {
        static void Main(string[] args)
        {
            string serverUrl = "http://myserver:8080/tfs";
            string collectionName = "MyCollection";
            string sqlConnectionString = "Data Source=MYSQLSERVER;Integrated Security=True;";

            TfsConfigurationServer tfs = new TfsConfigurationServer(new Uri(serverUrl));
            ITeamProjectCollectionService tpcService = tfs.GetService<ITeamProjectCollectionService>();

            Dictionary<string, string> servicingTokens = new Dictionary<string, string>();
            servicingTokens.Add("SharePointAction", "None"); // don't configure sharepoint
            servicingTokens.Add("ReportingAction", "None"); // don't configure reporting services

            ServicingJobDetail tpcJob = tpcService.QueueCreateCollection(
                collectionName, // Collection name
                "", // description
                false, // IsDefaultCollection
                string.Format("~/{0}/", collectionName), // virtual directory
                TeamFoundationServiceHostStatus.Started, // initial state
                servicingTokens, // servicing tokens
                sqlConnectionString, // connection string
                null, // default connection string
                null  // database category connection strings
                );

            TeamProjectCollection tpc = tpcService.WaitForCollectionServicingToComplete(tpcJob);
        }
    }
}

 


TFS2010: DevDiv TFS Server Upgraded


Back in April, the week before the VS2010 worldwide launch, we successfully upgraded the server to TFS2010 RTM.  Because this is such a large server and almost 4,000 people in the division depend on it for their day-to-day work, it took a couple of months of planning, testing and dry-runs to get done. Since then, we've also upgraded our proxy servers to Windows 2008 R2 + TFS2010, upgraded our SQL server to Windows 2008 R2 + SQL Server 2008 R2, moved to a new set of hardware, and upgraded and consolidated a couple of other servers onto this server.  A busy year so far!

DevDiv TFS2010 Server Topology

This server has had an interesting history, which makes this upgrade particularly important.  The server originally started as the dogfood server for the TFS team in December 2004, as you can see from the very first checkin:

C:\>tf history /collection:http://vstfdevdiv:8080 $/ /version:C1 /format:detailed /noprompt /stopafter:1
--------------------------------------------------------------------------------------------------------
Changeset: 1
User: CD193
Date: Friday, December 10, 2004 10:04:32 AM

Comment:
  Initial creation of the repository

Items:
  add $/

A brief history

Since that first checkin, it has been constantly patched and upgraded ahead of each release (CTPs, Betas, SPs, etc.).  Then in early 2008 the whole division on-boarded to the server and it became the single source & bug repository for the division. During this on-boarding period there were lots and lots of patches made so the server could scale to the unique demands of the division.  These patches were all rolled into the product and shipped as part of TFS2008 SP1.

Then in mid-2008 we started to have some big growing pains as the number of users and the demands on the server increased. There was a lot of pressure from up the chain and across the division to fix things and make it better. This ultimately led to what we referred to internally as “the schema change” and you can read more about the impacts of it in Matt’s change to slot mode in TFS2010 blog post.

The improvement that this change brought is pretty clear from the following chart which shows Command Time vs. Command Count – up until the patch was deployed, performance for large operations (Gets, Merges, Branches of millions of files) was pretty bad.

image

 

However, getting the schema upgrade deployed was not smooth sailing or a silver bullet for our problems. The chart below shows our availability over the last 2 years.  As you can see, we were not in good shape towards the end of 2008.  The schema upgrade involved adding a new non-NULL column to a 5 billion row table. Our initial attempt performed this in a single transaction and took many hours.  After running for ~48 hours, our SQL cluster failed over to the passive node, which caused the transaction to start rolling back.  This is when we discovered that rollback is single-threaded and lower priority, so we had to wait almost 4 days for the transaction to roll back before we could bring the server back online. That was not a good week, and we learnt many lessons from that upgrade.

image

Once that upgrade was complete and the TFS problems were fixed, it was like a spotlight came on and exposed some problems in our underlying infrastructure (cluster failovers, poor disk performance, network failures, hardware failures).  Over the next six months, we had a dedicated team of people (from both the product group and the operations side) getting to the bottom of all the issues, with a focus on getting the division stable again.

The end result is that by dogfooding our own product, we’ve changed how we approach upgrades and made them more robust, which makes for a much better experience for everybody.

Here’s the latest DevDiv TFS Statistics:

  • Team Projects: 72
  • Files: 981,754,813
  • Uncompressed File Sizes: 20,317,315
  • Checkins: 1,912,072
  • Shelvesets: 244,324
  • Merge History: 2,342,520,807
  • Workspaces: 38,625
  • Local Copies: 4,251,932,059
  • Users with Assigned Work Items: 5,121
  • Total Work Items: 897,787
  • Areas & Iterations: 11,835
  • Work Item Versions: 8,540,808
  • Work Item Attachments: 472,487
  • Work Item Queries: 93,572

TFS2010: Large & Resumable Check-in Support


Problem

The DevDiv mainline contains over 200GB of content and more than 1.8 million files. Any version control operations that had to deal with this amount of content would occasionally run into two problems:

  1. If the number of pending changes is more than ~300,000, then the client might not be able to process all this data in the ListView and hit an OutOfMemoryException
  2. If it took longer than 60 minutes to upload all of the content and commit the check-in in the database, then it would hit the 60 minute timeout and roll back

Workarounds

People could sometimes work around the issue by using “tf checkin /noprompt” which would cut the client memory usage enough to allow it to complete.  We also have an undocumented switch “tf checkin /all” which allows you to check-in all pending changes in a workspace without having to load all the pending changes.  This switch has a limitation that it does not work with “edits” and “adds” and should only be used for branches (there is also a “tf branch /checkin” option which does the branch & check-in in one step). Another workaround was to break the check-in up into multiple check-ins, however this was less than ideal. 
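
To make those workarounds concrete, here’s a hedged sketch of the command lines involved, run from a Visual Studio command prompt. The server paths are placeholders, and remember that /all is undocumented and limited as described above:

# Check in without loading the Pending Changes UI, which keeps client memory usage down.
tf checkin /noprompt
# Branch and check in as a single server-side operation instead of pending millions of adds locally.
tf branch $/Project/Main $/Project/Releases/R1 /checkin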

New in TFS2010

We made an enhancement in Team Foundation Server 2010 to support very large check-ins by changing the client and server to page pending changes to and from the server.  It includes three core changes:

  1. Add paging capability for querying pending changes
  2. Change the list view in the Pending Changes dialog to a virtual list view and utilize the paging capability to populate the list on demand
  3. Add the ability to ‘enqueue’ pending changes for a check-in on the server

The default page size is 150,000, which means that if you try to check in more than this, your pending changes are written to a temporary table in batches of 150k.  This means that every batch has a full 60 minutes to upload before hitting the client-side timeout.  Once the client reaches the last page of pending changes, it sends an extra flag which tells the server to commit the changes.  Since the server has all the data in a temporary table, it’s able to quickly move it into the real content table and avoid the 60 minute timeout.

If you cancel your check-in command before it’s complete, you can run it again and it will reuse the pending changes that are in the temporary table.  This temporary table of pending changes is cleaned up every 24 hours by a job so that the paged changes don’t accumulate in the database if they never get checked in.

Proof

Yesterday I was able to check in an add of 1.8 million items (200GB of content) in a single action without restarting or running out of memory.  It took about 7 hours to upload all the content and then another 25 minutes to perform the final commit in the database.  The end result is a single changeset, which is exactly the result I was after.

TFS2010: How to enable compression for SOAP traffic


When we upgraded our internal servers to TFS2010, some of our remote users noticed that HTTP compression was used for some traffic, but not all. HTTP compression was enabled for file downloads from source control and for web access pages but we weren’t compressing the SOAP responses to clients for Work Item Tracking and other API commands.  This only applies when installing TFS2010 on Windows 2008, Windows 2008 R2 and Windows 7 - it will be enabled by default in a future release.

To make this change for yourself, log on to your application tier, open a command prompt as administrator, and paste the following commands:

%windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/soap%u002bxml; charset=UTF-8',enabled='True']" /commit:apphost
%windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"Microsoft Team Foundation Server Application Pool"
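
To double-check that the new entry made it into the applicationHost configuration, you can list the compression section from the same elevated prompt and look for the application/soap+xml MIME type:

%windir%\system32\inetsrv\appcmd list config -section:system.webServer/httpCompression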

Some investigation revealed that the IIS “Dynamic Compression” feature is getting installed and enabled correctly but we’re missing the MIME type for the SOAP requests.  Here’s an example from the installation log file:

[Info   @01:11:43.140] IIS7 Feature value: CompressionBinaries=1

[Info   @01:11:43.140] IIS7 Feature value: HttpCompressionDynamic=1

[Info   @01:11:43.140] IIS7 Feature value: HttpCompressionStatic=1

[Info   @01:22:21.295] Configuring dynamic compression on IIS7

[Info   @01:22:21.295] Process starting: fileName=C:\Windows\system32\inetsrv\appcmd.exe arguments=set config -section:urlCompression /doDynamicCompression:true

[Info   @01:22:21.999] Process finished: fileName=C:\Windows\system32\inetsrv\appcmd.exe arguments=set config -section:urlCompression /doDynamicCompression:true exitCode=0 in 701 ms

[Info   @01:22:21.999] Applied configuration changes to section "system.webServer/urlCompression" for "MACHINE/WEBROOT/APPHOST" at configuration commit path "MACHINE/WEBROOT/APPHOST"

The easiest way to verify that compression is correctly enabled is to look at a network trace using a tool like Network Monitor (NetMon). Fiddler doesn’t work with the default configuration of Visual Studio, because it is set to “BypassProxyOnLocal”.

Here’s what the NetMon trace looks like without compression. Notice that the HTTPPayloadLine is in cleartext.

image

And here’s what it looks like after compression is enabled – the payload is GZip compressed.

image

TFS2010: How to query Work Items using SQL on the Relational Warehouse


In John Socha-Leialoha's blog post on Upgrading Team Foundation Server 2008 Reports to 2010, Part I, there is a hidden gem:

For the first time, writing reports against the warehouse using SQL is officially supported. As a rule of thumb, you’ll generally want to use the cube for historical reports, or reports that require a lot of slicing and dicing using parameters of aggregate data. The cube is really good at this sort of work. The warehouse, on the other hand, allows you to create reports that pull loosely related data together in ways not possible with the cube.

The views that begin with “v” and end with “Overlay” are used for processing the cube, and as such aren’t really meant for use in your reports.

The relational warehouse is a reasonable store to work against, since the warehouse adapters that sync data from the operational store run every 5 minutes and keep the data fresh. You should NEVER write reports directly against the WorkItem* tables in the collection database, since this is an operational store, it is 100% unsupported, and it can cause performance problems for normal usage. The TFS2008 limitation that fields of the Html type are not pushed into the warehouse still exists; for those fields you’ll have to use the Work Item Tracking object model to query and retrieve them.

Permissions

Before you can use these queries, you’ll need to be a member of the TfsWarehouseDataReader role in the Tfs_Warehouse database. Remember that the warehouse contains data from all projects on a server, so anybody who has access to query the warehouse can see all of that data, regardless of their permissions within TFS. The best way to give people access is to create an Active Directory group that contains all the users that should have access to query the relational warehouse, then add that group to the role with the following script:

USE [Tfs_Warehouse]
GO

CREATE USER [DOMAIN\TfsWarehouseDataReadersGroup] FOR LOGIN [DOMAIN\TfsWarehouseDataReadersGroup] WITH DEFAULT_SCHEMA=[dbo]
GO

EXEC sp_addrolemember N'TfsWarehouseDataReader', N'DOMAIN\TfsWarehouseDataReadersGroup'
GO

You can also use this group to give people access to query the OLAP cube in Analysis Services. See Grant Access to the Databases of the Data Warehouse for Visual Studio ALM for more information.

There are 9 views that you can query and write reports against with some level of assurance that they will work the next time that the server is upgraded:

  • CurrentWorkItemView
  • WorkItemHistoryView
  • BuildChangesetView
  • BuildCoverageView
  • BuildDetailsView
  • BuildProjectView
  • CodeChurnView
  • RunCoverageView
  • TestResultView

Now that TFS2010 has multiple project collections sharing the same relational warehouse and OLAP cube, there are two things to consider when writing queries against the views:

  • Filter on Project GUID - since a project's name is not necessarily unique across multiple collections on the same server.
  • Make sure your joins use unique keys. For example, work item IDs are no longer unique within the warehouse

Example

Here’s an example Work Item Query (WIQL):

SELECT [System.Id], [Microsoft.VSTS.Common.StackRank], [Microsoft.VSTS.Common.Priority], [Microsoft.VSTS.Common.Severity], [System.State], [System.Title]

FROM WorkItems

WHERE [System.TeamProject] = 'DemoAgile'

AND  [System.AssignedTo] = 'Grant Holliday'

AND  [System.WorkItemType] = 'Bug'

AND  [System.State] <> 'Closed'

ORDER BY [System.State], [Microsoft.VSTS.Common.StackRank], [Microsoft.VSTS.Common.Priority], [Microsoft.VSTS.Common.Severity], [System.Id]

And here’s an equivalent query of the same data from the relational warehouse:

SELECT [System_Id], [Microsoft_VSTS_Common_StackRank], [Microsoft_VSTS_Common_Priority], [Microsoft_VSTS_Common_Severity], [System_State], [System_Title]

FROM CurrentWorkItemView

WHERE [ProjectNodeGUID] = 'a6fc4213-94c0-4361-87bf-9520e07eb096'

AND [System_AssignedTo] = 'Grant Holliday'

AND  [System_WorkItemType] = 'Bug'

AND  [System_State] <> 'Closed'

ORDER BY [System_State], [Microsoft_VSTS_Common_StackRank], [Microsoft_VSTS_Common_Priority], [Microsoft_VSTS_Common_Severity], [System_Id]
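
If you want to run this kind of query ad hoc from a script rather than from a report, here’s a minimal PowerShell sketch against the relational warehouse. The SQL server name and project GUID are placeholders, and it assumes your account is in the TfsWarehouseDataReader role as described above:

# Hedged sketch: query CurrentWorkItemView from PowerShell (server name and GUID are placeholders).
$conn = New-Object System.Data.SqlClient.SqlConnection "Server=SQLSERVER;Database=Tfs_Warehouse;Integrated Security=SSPI;"
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT System_Id, System_Title, System_State FROM CurrentWorkItemView WHERE ProjectNodeGUID = @guid AND System_WorkItemType = 'Bug' AND System_State <> 'Closed'"
$cmd.Parameters.AddWithValue("@guid", "a6fc4213-94c0-4361-87bf-9520e07eb096") | Out-Null
$reader = $cmd.ExecuteReader()
while ($reader.Read()) { "{0}: {1} ({2})" -f $reader["System_Id"], $reader["System_Title"], $reader["System_State"] }
$conn.Close()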

Some queries will be much faster using the WIT Object Model, but at least this gets you started with the relational warehouse.

TFS2010 Upgrade: Compatibility of tools


When you upgrade from Team Foundation Server 2008 to 2010, one of the things you need to check is the compatibility of the tools that people rely on and use with the server. Without careful preparation, this can have a significant impact on your users' experience after the server is upgraded.

As an example, when we upgraded the DevDiv TFS server which is home to more than 3,500 users - there were ~200 unique tools that were using the server which all had to be assessed for compatibility. We started by prioritizing them into four buckets:

  • P0 - Can't upgrade the server unless these are proven to work
  • P1 - These should work within 1 week of the server being upgraded
  • P2 - There should be an owner identified and a plan in place for these to be made compatible
  • P3 - All other tools that aren't used very much or have been orphaned

To help the tool owners check compatibility, we had a cloned and upgraded server available that they could work against. They also had the option of installing TFS Basic in about 15 minutes by themselves and testing on their own workstation.

Perhaps the most difficult part of it all was identifying who actually owned the tools! Once we identified the owners, there was a weekly barrage of nag-mail to make sure they were ready. Here's an example of one of the status reports:

P0 Tools – 11 out of the 15 P0 tools have been tested and made compatible with TFS2010 and the other 4 are on track.

P1 & P2 Tools – Good progress has been made with the P1 tools with almost half of them signed off. The other half have no owners or are not used by many people.

clip_image001[4]

Identifying tools

The easiest way to identify tools that might be impacted is to look at the tbl_Command table in the TfsActivityLogging database.

SELECT UserAgent, SUM(ExecutionCount) as ExecutionCount

FROM [TfsActivityLogging].[dbo].[tbl_Command]

GROUP BY UserAgent

ORDER BY SUM(ExecutionCount) DESC

This will show you all the unique user agents that have accessed the server in the last 14 days ordered by the total number of commands. You can ignore w3wp.exe[*], devenv.exe, msbuild.exe, tf.exe, tfpt.exe, WINPROJ.exe, Excel.Exe, BuildNotifications.exe & vstesthost.exe - since they will all get upgraded along with the server & client installs.

Changes

The following is a list of the common changes that will be required for your tools to be compatible with TFS2010, along with a workaround or recommendation for each:

Incompatible client versions

If you are using the VS2008 object model, the server rejects requests from clients that don't have the "Forward Compatibility GDR" installed.

At the very least, you need to install VS2008 SP1 + the Forward Compatibility GDR on any clients that connect to TFS.

See this blog post for more details on supported forward and backwards compatibility.

Recommendation:

You should upgrade and recompile your tools to use the VS2010 object model for full compatibility.

Virtual Directory

TFS2010 introduces a "/tfs/" virtual directory prefix in the default install. It is possible to install TFS at the root "/", or any other prefix.

When you install or upgrade the server, it is possible to install at the root "/". This configuration along with setting a default collection will allow tools with hard-coded server addresses to seamlessly continue to connect to the existing server.

Recommendation

Tools should not make assumptions about the structure of a server URL or hard-code the server URL. They should allow users to specify an arbitrary URL through a configuration setting.

Collection URL

TFS2010 introduces a new concept called "Team Project Collections" (TPC). To facilitate this, there is an extra identifier in the URL.

TFS2010 has a "Default Collection" setting which is stored in the TFS instance registry (/Configuration/DefaultCollection/). This is the collection that clients will get when they don't specify a collection.

When you upgrade a TFS2008 server to TFS2010, the default collection is set to the TPC containing projects from the upgraded server. When you use "tfsconfig import" to upgrade additional TFS2008 databases, the default collection is not changed.

Recommendation

Tools should specify an explicit TPC to avoid unintended consequences if the "Default Collection" of an instance changes.

Object Model changes

In the VS2008 object model:

  • TeamFoundationServer tfs = new TeamFoundationServer("http://server:8080");

In the VS2010 object model:

  • TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(new Uri("http://server:8080/tfs/CollectionName"));

The VS2010 object model has many compatibility changes and bug fixes (e.g. improved memory usage).

Recommendation

You should recompile your tool against the VS2010 object model and add support for choosing & specifying a TPC name.

See Taylor's blog on the TfsConnection, TfsConfigurationServer and TfsTeamProjectCollection classes that replace the old TeamFoundationServer class.
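
If the tool in question is a script rather than a compiled client, the same change applies. Here’s a hedged PowerShell sketch of connecting to a specific collection with the VS2010 object model (the server URL and collection name are placeholders):

# Hedged sketch: connect to a specific team project collection with the VS2010 client OM.
[Reflection.Assembly]::Load("Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a") | Out-Null
$collectionUri = New-Object Uri "http://server:8080/tfs/CollectionName"
$tpc = New-Object Microsoft.TeamFoundation.Client.TfsTeamProjectCollection $collectionUri
$tpc.EnsureAuthenticated()
$tpc.Name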

Web Service changes

Most of the web services that TFS provides are not documented or supported interfaces - however, some tools still use them.

The one that gets used fairly often is the Administration.asmx service with the QueryServerRequests() method that tells you which commands are currently executing on the server.

Recommendation

You should always use the supported object model or data warehouse & cube if it provides the information you need. See my blog post on How to query Work Items using SQL on the Relational Warehouse.

If you cannot use the object model, you will need to use the new URLs and regenerate your proxy classes.


How to: Copy very large files across a slow or unreliable network


To prepare for the DevDiv TFS2010 upgrade we had to copy 8TB of SQL backups about 100 miles across a WAN link so that we could restore it on our test system.  The link speed was reasonably good and the latency fairly low (5ms), but when you’re dealing with files this big then the odds are against you and using sneakernet can be a good option. In our case it wasn’t an option and we had to find the next best solution.  In the end we were able to copy all 8TB over 7 days without having to resume or restart once.

The 8TB backups were spanned across 32 files of 250GB each which makes them a little easier to deal with.  The first problem that you’ll encounter when using a normal Windows file copy, XCopy, RoboCopy or TeraCopy to copy these large files is that your available memory on the source server will start to drop and eventually run out. The next problem you’ll encounter is the connection will break for some reason and you’ll have to restart or resume the transfer.

Fortunately the EPS Windows Server Performance Team have a blog post on the issue and a great recommendation: Ask the Performance Team : Slow Large File Copy Issues

The problem lies in the way in which the copy is performed - specifically Buffered vs. Unbuffered Input/Output (I/O).

Buffered I/O describes the process by which the file system will buffer reads and writes to and from the disk in the file system cache.  Buffered I/O is intended to speed up future reads and writes to the same file but it has an associated overhead cost.  It is effective for speeding up access to files that may change periodically or get accessed frequently.  There are two buffered I/O functions commonly used in Windows Applications such as Explorer, Copy, Robocopy or XCopy:

  • CopyFile() - Copies an existing file to a new file
  • CopyFileEx() - This also copies an existing file to a new file, but it can also call a specified callback function each time a portion of the copy operation is completed, thus notifying the application of its progress via the callback function.  Additionally, CopyFileEx can be canceled during the copy operation.

So looking at the definition of buffered I/O above, we can see where the perceived performance problems lie - in the file system cache overhead.  Unbuffered I/O (or a raw file copy) is preferred when attempting to copy a large file from one location to another when we do not intend to access the source file after the copy is complete.  This will avoid the file system cache overhead and prevent the file system cache from being effectively flushed by the large file data.  Many applications accomplish this by calling CreateFile() to create an empty destination file, then using the ReadFile() and WriteFile() functions to transfer the data.

  • CreateFile() - The CreateFile function creates or opens a file, file stream, directory, physical disk, volume, console buffer, tape drive, communications resource, mailslot, or named pipe. The function returns a handle that can be used to access an object.
  • ReadFile() - The ReadFile function reads data from a file, and starts at the position that the file pointer indicates. You can use this function for both synchronous and asynchronous operations.
  • WriteFile() - The WriteFile function writes data to a file at the position specified by the file pointer. This function is designed for both synchronous and asynchronous operation.

Which Tool? ESEUTIL

Yes, the tool has some limitations – but in my experience it’s well worth the time investment to get running. See How to Run Eseutil /Y (Copy File)

To get the utility, you need access to an Exchange server or to install Exchange in Administrator-only mode. When you install Exchange in Administrator-only mode, the appropriate binaries are copied to your computer and you can then copy these three files off and use them on another computer:

ese.dll
eseutil.exe
exchmem.dll

It does not accept wildcard characters (such as *.* to copy all files), so you have to specify a file name and copy one file at a time, or use a command like: FOR %f IN (d:\backups\*.BAK) DO ESEUTIL /Y "%f"

COPY FILE:
    DESCRIPTION:  Copies a database or log file.
         SYNTAX:  D:\BIN\ESEUTIL /y <source file> [options]
     PARAMETERS:  <source file> - name of file to copy
        OPTIONS:  zero or more of the following switches, separated by a space:
                  /d<file> - destination file (default: copy source file to
                             current directory)
                  /o       - suppress logo
          NOTES:  1) If performed on arbitrary files, this operation may fail
                     at the end of the file if its size is not sector-aligned.

Example

D:\>d:\bin\eseutil /y c:\Backups\Backup1.bak /d \\destination\c$\Backups\Backup1.bak

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.01
Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating COPY FILE mode...
     Source File: c:\Backups\Backup1.bak
Destination File: \\destination\c$\Backups\Backup1.bak

                      Copy Progress (% complete)

          0    10   20   30   40   50   60   70   80   90  100
          |----|----|----|----|----|----|----|----|----|----|
          ...................................................

Operation completed successfully in 7.67 seconds.

Other Tools

If you read the comments of the performance team’s blog post, you’ll see that XCopy has a /J option in Windows 7 and Windows 2008 R2 that does unbuffered I/O. However that’s not an option when you haven’t upgraded to R2 yet.

/J           Copies using unbuffered I/O. Recommended for very large files.

Which Direction?

Through trial and error, we determined that it was much more reliable to run eseutil.exe on the SOURCE server and push the files to the remote share.  This seemed to absorb any network blips and required no manual intervention over the 7 days it took us to copy the files.

Verifying hashes

The third problem you want to avoid is getting the files copied and then finding out that they match in size but the contents are corrupt.  You can check for this by generating hashes on both the source and target systems and comparing them after the copy.

You can download the Microsoft File Checksum Integrity Verifier fciv.exe tool from Microsoft Downloads.

Then run it like this on each system:

fciv.exe C:\Backups -type *.bak -r -wp -xml hashes.xml
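
Once you have a hashes.xml from each side, you can compare them without eyeballing the XML. This is a hedged sketch: it assumes fciv’s default output layout of one FILE_ENTRY element per file with name and MD5 child elements, so sanity-check it against a sample of your own output first.

# Compares the source and destination fciv output and prints any files whose MD5 differs.
[xml]$src = Get-Content C:\Temp\hashes-source.xml
[xml]$dst = Get-Content C:\Temp\hashes-destination.xml
$srcHashes = @{}
$src.FCIV.FILE_ENTRY | ForEach-Object { $srcHashes[$_.name] = $_.MD5 }
$dst.FCIV.FILE_ENTRY | ForEach-Object { if ($srcHashes[$_.name] -ne $_.MD5) { "MISMATCH: " + $_.name } }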

Administrative Report Pack for Team Foundation Server 2010


One of the key components of TFS is the Data Warehouse, which is made up of a relational database and an Analysis Services cube.  In general, people don’t have a problem with the performance or operation of our Data Warehouse.  However, there are two classes of problems that you’re likely to run into as your servers grow larger:

  1. Processing Time – As the number of reportable fields increases, the number of dimensions that Analysis Services has to process also increases.  This increases the time it takes to process the cube and therefore the latency of the data is higher.
  2. Schema Conflicts – In the simple case, when there are two fields in different collections (e.g. Priority) with the same name but a different type (e.g. String vs. Integer) this results in a schema conflict.  That project collection is then blocked from processing warehouse updates and the data in the warehouse & cube becomes stale.

Internally at Microsoft we started noticing some problems after we upgraded TFS2008 servers and consolidated them onto larger TFS2010 servers as new Team Project Collections (TPCs). There is a constraint (or feature, depending how you look at it) in the system that you can only have a single data warehouse per instance of TFS2010.  This feature enables you to do cross-project reporting for gathering data across all the projects in all collections on that server.  The downside of this though, is that when a field’s reporting settings change in one team project in one collection on a server, it can impact the data warehouse experience for everybody else on that server.

The good news is that through our dogfooding efforts on the Pioneer server, we found some issues early enough and made changes to the product before it shipped to avoid some other issues. While we were investigating and fixing these issues, we developed some reports that visualize the information that TFS stores about the health of the data warehouse to track our improvements.  Now we’re sharing those reports with you.

Please download them (below), install them, send us any feedback (comments on this blog are fine) and we’ll work on including them in the next official Power Tools.

These reports are useful to TFS administrators, operations/support teams, project administrators & end-users.  The reports in this pack display the following kinds of information:

  • Recent processing times
  • Current status (whether the cube is processing now and, if not, when it is scheduled to process next)
  • Schema conflicts
  • Most recent time that each adapter successfully ran

Interpreting the reports

In the download, there is a FAQ document which includes screenshots of what different reports mean and common questions. The contents of the FAQ are also available at Monitoring the TFS Data Warehouse – FAQ.

 

Requirements

  • SQL Server Reporting Services 2008 or 2008 R2
  • A shared data source to connect the reports to; the installation instructions describe how to configure it.

Download: AdminReportPack.zip

Download and install this report pack to the Team Foundation Server Reporting Services Instance to monitor warehouse and cube processing status.

For information about how to install this report pack, see Installing Admin Report Pack for TFS.docx that is included in the download.

The pack includes:

  • Admin Report Pack for TFS FAQ.docx
  • Installing Admin Report Pack for TFS.docx
  • Reports\Cube Status.rdl
  • Reports\Blocked Fields.rdl
  • Reports\Reportable Fields.rdl

Here’s an overview of how the reports look and what questions you can answer with them.

Cube Status

Use this report when you want to answer the following questions:

  • How long is cube processing taking?
  • How much time elapses between processing jobs?
  • How often do the processing jobs run?
  • Do errors occur when the cube is processed?

Process Times:

image

Current Processing Status

This table tells you whether the warehouse is currently being processed or when it will be processed next.

image

The Next Run Time is in local time, and Run time is the duration of the job that is currently running.

Warehouse Job Status

This table lists all team project collections and all Data Adapter Jobs and displays how long ago the Warehouse and Cube data were updated. In addition, the table will show the schema conflicts and other Data Adapter errors that caused data to be out of date:

image

Showing Schema Merge Conflicts on the Warehouse Job Status View

If a conflict occurs when two schemas merge, the table will show the conflict with a link to the sub report that displays details about the blocked fields.

image

If you click the link that appears under the schema conflict error, a different report appears and shows the fields that are currently active and blocked for the blocked team project collection.

Blocked Field Changes

The following table shows blocked fields that have conflicts over all team project collections.

image

Blocked fields appear before the fields with which they are conflicting. The conflicting areas appear in bold.

Queued Field Changes

The following table shows field changes that are queued behind the blocked changes in the previous illustration.

image

After you resolve the blocked changes, the queued changes will then be applied to the warehouse.

Reportable Fields

This report shows all reportable fields in the deployment of Team Foundation Server. Administrators of team projects should review this report before they add a reportable field or change the properties of an existing field. The report helps prevent potential schema-merge conflicts. It lists fields across all collections, including any fields that are blocked.

image

Found 2 fields that match ‘Found In’

Cube Processing Status

This report shows a list of recently completed Cube processing jobs.

image

I hope that you find these reports useful. Please comment on this post if you have questions or other feedback.

Monitoring the TFS Data Warehouse - FAQ


This blog post describes how to interpret the Data Warehouse & Cube status reports included in the Administrative Report Pack for TFS2010.

If these tips and the help topic for the error message do not answer your question, see the Microsoft Technical Forums for Visual Studio Team Foundation (http://go.microsoft.com/fwlink/?LinkId=54490). You can search these forums to find information about a variety of troubleshooting topics. In addition, the forums are monitored to provide quick responses to your questions.

Should I expect some processing jobs to fail?

Yes. Some failures are part of typical processing. Three jobs require exclusive access to the same warehouse resources:

  • Optimize Databases (runs at 1:00 AM by default)
  • Full Analysis Data Sync (runs at 2:00 AM by default)
  • Incremental Analysis Data Sync (runs every two hours by default)

None of these jobs can run in parallel with any other job on the list. If one job is already in progress when another job starts, the second job will fail quickly with error TF276000, as shown in the following illustration of the Cube Processing Details view of the Processing Times report:

clip_image001[3]

The following illustration shows a sample of typical processing failures:

clip_image002[3]

The previous illustration shows the results of the following events:

  1. An Incremental job was scheduled to run at 1:00 AM on June 25, 2010, but it failed because the Optimize Databases job had already started.
  2. An Incremental job was scheduled to run at 3:00 AM on the same morning and upgraded itself to Full Analysis Data Sync.
  3. An Incremental job was scheduled to run at 1:00 AM on the next morning, but it failed because the Optimize Databases job had already started.
  4. A Full Analysis Data Sync job started at 2:00 AM on June 26, 2010, and ran for one hour and 48 minutes. That job caused the Incremental job that was scheduled to run at 3:00 AM on the same morning to fail.

Why might most Cube processing jobs fail?

The Cube processing job requires exclusive access to some of the warehouse resources that data synchronization jobs use. The Cube processing job will wait for the release of the resources (normally for an hour) before it gives up. If a data synchronization job does not release the resource in time, the Cube processing job will fail with the following error:

ERROR: TF221033: Job failed to acquire a lock using lock mode Exclusive, resource DataSync: [ServerInstance].[TfsWarehouse] and timeout 3600.

The following illustration shows failures that occur if the Cube processing job cannot access one or more of the warehouse resources that it requires:

clip_image003[3]

To troubleshoot this issue, you must determine which Warehouse Data Sync job is preventing the Cube processing job from accessing the resource or resources that it needs. This report pack does not provide an easy way to determine which Warehouse Data Sync job is causing the problem, but you can determine that information by examining the Warehouse Job Status view. As the following illustration shows, the warehouse data for the problematic job will be much older than the warehouse data for other jobs for the same team project collection:

clip_image005[3]

To troubleshoot issues with individual Warehouse Data Sync jobs, first unblock the overall Warehouse Sync process by disabling the offending job to allow the rest of the process to proceed, and then attempt to solve the issue with the individual job afterwards.

Why might many Incremental jobs be upgraded to Full jobs?

According to the process for synchronizing the warehouse, the cube should be processed incrementally throughout the day, and then a full synchronization should occur every day at 2:00 AM. Full synchronization jobs usually run longer and consume more system resources than Incremental jobs. However, the system will try to correct itself if an Incremental job failed. In that situation, the next Incremental job will be upgraded to a full synchronization. If multiple Incremental jobs are upgraded to Full, as the following illustration shows, you might first determine whether your network connectivity is reliable. You should inspect the error that the failing job returned and then address the issue.

clip_image006[3]

Why might a processing job run for a long time (~24 hours) before it fails?

If your network loses connectivity, the server-side execution of the Analysis processing job might finish but fail to report the job completion to the processing component for Team Foundation Server. Because of the same network failures, the resource lock might be released, but the Job Agent might not update the job’s state. The following illustration shows that a Full processing job started on June 24, 2010, at 2:00 AM and ran for more than 24 hours. Because it released the processing lock, the Incremental job was running in parallel with it.

clip_image007[3]

The following illustration shows the worst case of the same problem. The Incremental job has run for more than nine hours, which means that no other jobs are scheduled and the cube is at least nine hours out of date. To mitigate this issue, you should use the AnalysisServicesProcessingTimeout setting for processing the cube for Team Foundation Server. This MSDN article describes how to Change a Process Control Setting for the Data Warehouse or Analysis Services Cube.

clip_image008[3]

clip_image009[3]

How might I resolve a schema-merge conflict to unblock a team project collection?

When a Team Project Collection gets blocked due to schema merge conflicts, the Warehouse Job Status table will show the conflict with a link to the sub report that displays details about the blocked fields. If you click the link that appears under the schema conflict error, a different report appears and shows the fields that are currently active and blocked for the blocked team project collection. See illustration below. For additional help on resolving the schema merge conflicts see Resolving Schema Conflicts That Are Occurring in the Data Warehouse.

clip_image010[3]

 

clip_image011[3]

SharePoint 2010 Error: HTTP Error 400. The size of the request headers is too long


Recently I started seeing this error on our internal TFS SharePoint sites.  These sites use the Excel Dashboards and if I opened more than a few sites at a time, I would start getting the following popup error:

Unexpected callback response!
Error: 400 Bad Request

Bad Request - Request Too Long
HTTP Error 400. The size of the request headers is too long.

It didn’t seem to matter which browser I used or whether I ran in Private Mode (to get a clean session each time) – the result was always the same error.

The fix:

http://support.microsoft.com/kb/920862/en-us

Save this as a .reg file and run it.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\HTTP\Parameters]
"MaxFieldLength"=dword:0000fffe
"MaxRequestBytes"=dword:0007a120

After applying these settings, you will need to restart your server so that HTTP.sys can pick up the new parameters.
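
If you’d rather not merge a .reg file, a PowerShell equivalent run from an elevated prompt on the same server is below; it sets the same two values, and the reboot is still required afterwards.

# Sets the same HTTP.sys parameters as the .reg file above.
$key = "HKLM:\SYSTEM\CurrentControlSet\Services\HTTP\Parameters"
Set-ItemProperty -Path $key -Name MaxFieldLength -Value 0xfffe -Type DWord
Set-ItemProperty -Path $key -Name MaxRequestBytes -Value 0x7a120 -Type DWord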

Getting Started with TFS Hosting from DiscountASP.NET


With the recent reorganization of SPLA prices for Team Foundation Server, there have been some new offerings in the hosted TFS space. I’ve blogged about other TFS hosting services in the past and there’s a list of the companies that provide TFS hosting services here. Now there’s a new player in the hosted Team Foundation Server market: DiscountASP.NET, who are known for their existing ASP.NET web hosting services.

Both DiscountASP.NET and SaaS Made Easy offer the “Basic” version of Team Foundation Server 2010 as a hosted service. It includes all Version Control and Work Item Tracking functionality along with Web Access. However, SharePoint, Reporting Services and Analysis Services integration is not included. Here are the two services compared:

 

                             DiscountASP.NET     SaaS Made Easy
Project Collections          1                   1
Team Projects                Unlimited           Unlimited
Disk Space                   3GB                 5GB
Discounts for >10 Users      Yes                 Yes
Price / User / Month (USD)   $20                 $15
Setup Fee                    $25                 $0

For people that are happy with the value that TFS provides and want to use it without the hassle of setting up and running the server themselves, both of these are pretty compelling offers. To see how easy it was to get started and what the experience was like, I gave it a try myself.

Step 1: Signing Up

Either browse to http://discountasp.net/ and click through from the front page, or directly to https://www.discountasp.net/tfs/signup/.

image

Then click ‘Next’ and specify a name for your Team Project Collection along with how many users you want to start with. A Team Project Collection is just a container for Team Projects & it doesn’t really matter what you specify, as long as it’s a valid name and not already taken.

image

Then click ‘Next’ and start filling out your billing details along with your credit card. The service will be automatically renewed each month and charged to the card that you specify here. So if you don’t need the service any more, make sure you cancel it before the end of the month.

image

Once you have filled out all the details and sent it off, you’ll get an email within minutes from tfs.support@discountasp.net with your activation confirmation and your control panel username.

image

Step 2: Creating A User

Now you’ll need to follow the steps in the Getting Started KB article to create your first username & password that you can use to connect to the server.

Browse to https://mytfs.discountasp.net/users/manage.aspx and type in your control panel username (tfs_123456) along with the password you specified in the sign up form.

image

If you clicked the link above, you’ll be taken directly to the Manage Users screen. If not, you’ll have to select the link from the navigation bar on the left. Once you’re on this screen enter in a username and password (4-12 characters) and click ‘Add’.

image

Before you can use this account, you’ll need to browse to the Global Groups management screen and add the user to the Project Collection Administrators Group by clicking ‘View/Edit Members’.

image

Then select your user and click ‘Save’. The page doesn’t redirect anywhere, so it may appear like nothing happened. As long as a user is checked and you clicked Save, everything should be fine.

image

Before you leave the control panel, you need to find out your Server Name and Server URL. Click the Account Information link and copy the Server URL. It should be something like https://tfs01.discountasp.net/tfs

image

Step 3: Install Visual Studio 2010

Before you can start using your new collection, you need to create a Team Project in it. The only way to do this is with Visual Studio Team Explorer, which is the client for Team Foundation Server. The good news is that if you already have Visual Studio 2010 Professional (or higher) installed, you already have Team Explorer. If you are using Visual Studio Express, you will need to download and install Team Explorer or a different edition of Visual Studio.

If you don’t have Team Explorer already installed, you have three options for getting it:

  1. Download and Install the Microsoft Visual Studio Team Explorer 2010 ISO. This is a free download but you can’t write any code with this edition of Visual Studio.
  2. Download and Install the 90-day trial of Visual Studio Professional.
  3. Download and Install the 90-day trial of Visual Studio Ultimate.

If you want to use Visual Studio and you don’t already have it installed, you should consider the last two options. They are fully-functional (time limited) versions and they have a great web-installer experience. If you download the first option, you have to mess around with mounting an ISO file or burning it to a CD before you can install it.

Once you have Visual Studio installed, you should consider installing the Lab Management GDR which includes a rollup of fixes for issues in the client (as well as the server) that were found after 2010 shipped.

Step 4: Connect to TFS

Now that you have Visual Studio Team Explorer installed, you have a client that can connect to Team Foundation Server.

Open Visual Studio from the Start Menu and then choose ‘Connect To Team Foundation Server’ from the start page, or click the ‘Connect to Team Project’ icon in the Team Explorer pane.

image

On the Connect to Team Project dialog, click ‘Servers…’

image

On the Add/Remove Team Foundation Server dialog, click ‘Add…’

image

Type or paste the Server URL that we copied earlier from your Account Information screen and click ‘OK’.

image

If you want to skip these steps, or you want to pre-populate the server entry as part of a logon script, you can run these two commands instead (after replacing the URL with your own server’s URL):

reg add HKCU\Software\Microsoft\VisualStudio\10.0\TeamFoundation\Instances\tfs01.discountasp.net /v Uri /d https://tfs01.discountasp.net/tfs

reg add HKCU\Software\Microsoft\VisualStudio\10.0\TeamFoundation\Instances\tfs01.discountasp.net /v Type /t REG_DWORD /d 0
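
If you prefer PowerShell over reg.exe, here’s a minimal equivalent sketch. The key name and URL below are just the example server from above; substitute your own values.

# Minimal sketch: register a TFS server entry for Visual Studio 2010 via the registry provider
$key = "HKCU:\Software\Microsoft\VisualStudio\10.0\TeamFoundation\Instances\tfs01.discountasp.net"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name Uri -Value "https://tfs01.discountasp.net/tfs" -PropertyType String -Force | Out-Null
New-ItemProperty -Path $key -Name Type -Value 0 -PropertyType DWord -Force | Out-Null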

After selecting the server, you will be prompted for your TFS username and password. This is not your control panel password – it’s the user that you created in the control panel and added to the Project Collection Administrators group. Enter your credentials and click ‘OK’.

image

You should now see the server added to your list of servers. Click ‘Close’

image

You should now see your Team Project Collection selected and no Team Projects available. Click ‘Connect’.

image

Step 5: Upload the Visual Studio Scrum Process Template (Optional)

A process template defines the work item types, queries and groups that your team project will start with. All of these things can be changed after the project is created, but it’s good to start with a process template that is close to your needs.

Your new Team Project Collection comes with the two built-in process templates:

  1. MSF for Agile Software Development v5.0 - This template allows you to organize and track the progress and health of a small- to medium-sized Agile project.
  2. MSF for CMMI Process Improvement v5.0 - This template allows you to organize and track the progress and health of projects that require a framework for process improvement and an auditable record of decisions.

After Team Foundation Server 2010 shipped, the product team released a new process template called “Microsoft Visual Studio Scrum 1.0”. This process template includes the following work item types: Sprint, Product Backlog Item, Bug, Task, Impediment & Test Case. It also includes a number of reports like Sprint Burndown and Velocity, however since Reporting Services and the Analysis Services cube isn’t included in the “Basic” version of TFS, these reports aren’t available with this hosting package. If you want these features, you’ll need to move up from the “Basic” version.

If you want to see the other process templates that are available, or download the Process Template Editor to build your own, you can go to the Process Templates and Tools page on MSDN.

If you want to use the Scrum work item types, go to the Visual Studio Scrum 1.0 download page and download the process template.

image

Once you’ve downloaded the template, run the Microsoft_Visual_Studio_Scrum_1.0.msi installer. Take note of the path that the process templates are going to be installed in.

image

Once the installation is complete, go back to Visual Studio. Right-click the server and choose ‘Team Project Collection Settings’ then ‘Process Template Manager’.

image

When the Process Template Manager dialog appears, click Upload.

image

Browse to the location that you installed the process template to and click ‘Select Folder’. For example: C:\Program Files (x86)\Microsoft\Microsoft Visual Studio Scrum 1.0\Process Template

image

The process template will then be uploaded to your Team Project Collection.

image

Step 6: Create a New Team Project

After this, you are then connected to TFS and you can create a new team project. To do this, open the Team Explorer window if it is not already visible. Right-click the server and choose ‘Create New Team Project’.

image

At this point, the New Team Project Wizard will appear and you can give your team project a name and click ‘Next’.

image

On the next screen of the wizard, you can choose the process template that you want to start with. After this, you can click ‘Finish’ since there’s no more questions to answer in the “Basic” version of TFS. The team project will then be created on the server using the template you specified.

image

Once the New Team Project wizard finishes, you will have a team project that you can use for Work Item Tracking and Version Control.

image

Step 7: Store the Username and Password

If you don’t want to be prompted every time, you can save your username and password on your computer. To do this you have two options:

  1. Open Control Panel > User Accounts and Family Safety > Credential Manager > Add a Windows credential
  2. Or, Click Start > Run > Type: RunDll32.exe keymgr.dll,KRShowKeyMgr

image

Then enter your server name, username and password. Now whenever you open Visual Studio, Team Web Access or the command-line tools, you will be signed in automatically with these credentials.
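
If you would rather script this than click through Credential Manager, the built-in cmdkey utility can store the same Windows credential. The server name, username and password below are placeholders:

cmdkey /add:tfs01.discountasp.net /user:MyTfsUser /pass:MyPassword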

Step 8: Connecting to Team Web Access

This is not really a setup step, but you’ll want to know how to connect to Team Web Access so you can manage your work items with just a web browser. Simply browse to the Server URL (e.g. https://tfs01.discountasp.net/tfs/), enter your username and password if prompted and then you’re connected.

image

Then you can create, view, edit and query bugs through the web access UI.

image

Summary

Now that you’ve created a Team Project, you can use Visual Studio Team Explorer or any other TFS tool (like Microsoft Test Manager, Excel, Project or Outlook) to access your source code and work items.

Through my very unscientific tests of clicking around and uploading/downloading source, the server (which is hosted in Los Angeles, CA) seems fast enough to work with. It’s great to see another TFS Hosting partner out there offering a great service at a reasonable price.

TFS2010: Update Activity Logging Cleanup Interval


Every command that a user executes in TFS is logged to the database. This is very useful for investigating performance issues and other things. I’ve blogged before about how to query this table for TFS2008. Those same queries work for TFS2010 as well.

By default, each night a job runs that deletes log entries older than 14 days. If you want to change that interval, you’ll need to update the cleanup job settings in each collection.

Here’s a PowerShell script that will do this for you.

  • If you are not running it on the application tier (AT) itself, you will need Team Explorer 2010 installed.
  • You will need PowerShell installed, and you may need to allow script execution by running: Set-ExecutionPolicy Unrestricted

You will need to update two values in the script below: the configuration server URL and the retention period (the @maxAgeDays parameter in the SQL statement).

$a = [Reflection.Assembly]::Load("Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")

#Update configuration URL as necessary
$configServer = new-object Microsoft.TeamFoundation.Client.TfsConfigurationServer "http://localhost:8080/tfs/"

$collectionService = $configServer.GetService([Microsoft.TeamFoundation.Framework.Client.ITeamProjectCollectionService])

$collectionService.GetCollections() | %{
        $collectionName = $_.Name
        $collectionId = $_.Id

        trap [Microsoft.TeamFoundation.TeamFoundationServiceUnavailableException]
        {
           write-warning ("Skipping project collection "+ $collectionName +" ("+ $collectionId +"): "+ $_.Exception.Message)
           continue
        }
        &{
           write-host
           write-host ("* Operating on project collection "+ $collectionName +" ("+ $collectionId +").")

           $projectCollection = $configServer.GetTeamProjectCollection($collectionId)
 
           # Check that we can access the team project collection.
           $projectCollection.Connect("None")
 
           #Get the job service
           $jobService = $projectCollection.GetService([Microsoft.TeamFoundation.Framework.Client.ITeamFoundationJobService])

           #Get the job for the current collection
           $job = $jobService.QueryJobs() | where { $_.Name -eq "Team Foundation Server Activity Logging Administration" }

           #Output the current setting
           Write-Output "Current setting:"
           $job.Data.SqlStatement.InnerXml

           #Update the current setting
           $job.Data.SqlStatement.InnerXml = "<string>EXEC prc_PruneCommands @maxAgeDays = 30</string>"
           Write-Output "New setting:"
           $job.Data.SqlStatement.InnerXml

           Write-Output "Saving job definition... "
           $jobService.UpdateJob($job)

        }
}
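
To confirm the change later, or to see the current setting before touching anything, you can do a quick read-only check across all collections. This sketch reuses the same objects and job name as the script above:

# Quick read-only check: print the current pruning statement for each collection
$collectionService.GetCollections() | %{
        $pc = $configServer.GetTeamProjectCollection($_.Id)
        $js = $pc.GetService([Microsoft.TeamFoundation.Framework.Client.ITeamFoundationJobService])
        $job = $js.QueryJobs() | where { $_.Name -eq "Team Foundation Server Activity Logging Administration" }
        write-host ($_.Name + ": " + $job.Data.SqlStatement.InnerXml)
}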

TFS2010: Test Attachment Cleaner and why you should be using it


[Update: 1 Nov 2011] There is an update available for TFS 2010 SP1 that prevents many of the large test attachments being stored in the database in the first place. You can download it from KB2608743 or read more about it on Annuthra's blog post: Reduce the size of test data in VS 2010 by avoiding publishing deployment binaries into TFS.

The execution of a Test Run in Team Foundation Server 2010 (whether automated or manual) generates a bunch of diagnostic data, for example, IntelliTrace logs (.iTrace), Video log files, Test Result (.trx) and Code Coverage (.cov) files. This diagnostic data is critical in eliminating the “no repro” bug scenarios between the testers and developers.

However, the downside of these rich diagnostic data captures is that the volume of the diagnostic data, over a period of time, can grow at a rapid pace. The Team Foundation Server administrator has little or no control over what data gets attached as part of Test Runs. There are no policy settings to limit the size of the data capture and there is no retention policy to determine how long to hold this data before initiating a cleanup.

A few months ago, the Test Attachment Cleaner for Visual Studio Ultimate 2010 & Test Professional 2010 power tool was released which addresses these shortcomings in a command-line tool.

Over the last 18 months, the TFS 2010 Dogfood server that we call ‘Pioneer’ has been slowly accumulating test attachments. Recently, the size of the attachments (680GB) eclipsed the size of version control content (563GB), and Test Attachments older than 6 months represented over 400GB of storage.

After downloading and installing the tcmpt.exe tool, there are five sample settings files in: C:\Program Files (x86)\Microsoft\Test Attachment Cleaner\SampleSettings

  1. Scenario#1: Identify list of attachments taking up database space of more than 1 GB per attachment
  2. Scenario#2: View/Delete IntelliTrace log files over a size of 500 MB
  3. Scenario#3: View/Delete all Video log files with Test run creation date older than specific date with no active bugs
  4. Scenario#4: View/Delete all TRX & COV log files for test runs that happened between 30 and 90 days in age & do not perform the Linked Bugs lookup query
  5. Scenario#5: View/Delete all custom/user attachments over size of 5 MB with no active or resolved bugs on test runs between 2 dates

After checking with the test managers and users of the system, we got approval to remove the old content and we ended up creating our own settings file:

<!-- View/Delete all attachments on test runs older than 6 months, that are not linked to active bugs -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="180" />
  </TestRun>
  <Attachment />
  <LinkedBugs>    
     <Exclude state="Active" />
  </LinkedBugs>
</DeletionCriteria>

Then we ran the command-line tool with the following parameters:

TCMPT.exe attachmentCleanup /collection:http://localhost:8080/tfs/CollectionName /teamproject:TeamProjectName /settingsFile:OlderThan6Months.xml /mode:delete

The following charts show the result after the tool ran for over 36 hours:

image  image

Warning: Running queries on the operational stores is not recommended and you should run them on a pre-production/backup server if possible. In this case, this particular query is read-only and returns very quickly, so the impact to the overall system is low. Also, since the schema is intentionally not documented or supported, there are no promises that it won't change for a future release (in fact, this table already has changed as part of making TFS work on Azure).

To see the growth of your Test Case attachments over time, you can use the following SQL query on each of your collection databases:

SELECT 
  DATEADD(month,DATEDIFF(month,0,creationdate),0) as [Month],
  SUM(CompressedLength) / 1024 / 1024 as AttachmentSizeMB
FROM tbl_Attachment WITH (nolock)
GROUP BY DATEADD(month,DATEDIFF(month,0,creationdate),0) 
ORDER BY DATEADD(month,DATEDIFF(month,0,creationdate),0)

The result will be something like this:

Month        Size (MB)
2/1/2010         1,790
3/1/2010         3,663
4/1/2010         5,193
5/1/2010         5,503
6/1/2010         4,701
7/1/2010         8,594
8/1/2010        11,313
9/1/2010        22,333
10/1/2010       18,597
11/1/2010       16,409
12/1/2010       20,720
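
If you would rather run that query from PowerShell than from SQL Server Management Studio, a minimal sketch looks like this. The SQL Server and collection database names are placeholders; point it at your own collection database.

# Minimal sketch: run the attachment-growth query against a collection database
$connStr = "Server=YourSqlServer;Database=Tfs_YourCollection;Integrated Security=SSPI"
$query = @"
SELECT DATEADD(month, DATEDIFF(month, 0, CreationDate), 0) AS [Month],
       SUM(CompressedLength) / 1024 / 1024 AS AttachmentSizeMB
FROM tbl_Attachment WITH (NOLOCK)
GROUP BY DATEADD(month, DATEDIFF(month, 0, CreationDate), 0)
ORDER BY DATEADD(month, DATEDIFF(month, 0, CreationDate), 0)
"@
$connection = New-Object System.Data.SqlClient.SqlConnection $connStr
$adapter = New-Object System.Data.SqlClient.SqlDataAdapter $query, $connection
$table = New-Object System.Data.DataTable
[void]$adapter.Fill($table)
$table | Format-Table Month, AttachmentSizeMB -AutoSize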

To see the size of all the tables in your collection databases, you can use the following SQL query:

-- Table rows and data sizes
CREATE TABLE #t (
    [name] NVARCHAR(128),
    [rows] CHAR(11),
    reserved VARCHAR(18),
    data VARCHAR(18),
    index_size VARCHAR(18),
    unused VARCHAR(18)
)
GO

INSERT #t
EXEC [sys].[sp_MSforeachtable] 'EXEC sp_spaceused ''?'''
GO

SELECT
    name as TableName,
    Rows,
    ROUND(CAST(REPLACE(reserved, ' KB', '') as float) / 1024,2) as ReservedMB,
    ROUND(CAST(REPLACE(data, ' KB', '') as float) / 1024,2) as DataMB,
    ROUND(CAST(REPLACE(index_size, ' KB', '') as float) / 1024,2) as IndexMB,
    ROUND(CAST(REPLACE(unused, ' KB', '') as float) / 1024,2) as UnusedMB
FROM #t
ORDER BY CAST(REPLACE(reserved, ' KB', '') as float) DESC
GO

DROP TABLE #t
GO

The result will be something like the following table. The names can be a little confusing (which is unlikely to change, since the schema isn’t documented/supported). I’ve added a column that gives a brief description of what the tables are used for.

TableName                  Area                     Rows         DataMB   IndexMB
tbl_Content                Version Control          6,954,871    573,861  12
tbl_AttachmentContent      Test Attachments         1,827,941    275,244  47
Attachments                Work Item Attachments    218,738      50,303   33
tbl_LocalVersion           Version Control          381,589,040  25,006   23,354
WorkItemLongTexts          Work Item Tracking       10,732,588   10,733   3,638
WorkItemsWere              Work Item Tracking       5,701,208    9,693    4,534
tbl_Version                Version Control          54,614,825   3,184    6,923
tbl_PropertyValue          Properties (Code Churn)  152,825,451  4,056    3,676
tbl_BuildInformationField  Team Build               41,565,170   5,930    186


IntelliTrace for Azure without Visual Studio


IntelliTrace is a very useful tool for diagnosing applications on the Windows Azure platform. It is especially invaluable for diagnosing problems that occur during the startup of your roles. You might have seen these sorts of problems when you do a deployment and it gets stuck in the "Starting role" state and continuously reboots.

Publishing via Visual Studio

If you are building and deploying your application using Visual Studio, then it's fairly straightforward to enable IntelliTrace. When you right-click your Cloud Service project and choose Publish…, you're presented with a dialog where you can enable IntelliTrace.

clip_image002

Under the covers, Visual Studio will build the service, create the package, include the IntelliTrace runtime and change the startup task to start IntelliTrace instead of the Azure runtime. Once the intermediate package is created, the tools then go ahead and upload the package and deploy it to Azure.

For the official documentation see Debugging a Deployed Hosted Service with IntelliTrace and Visual Studio on MSDN.

Retrieving IntelliTrace Logs via Visual Studio

Once your role is deployed and IntelliTrace is running, it's fairly straightforward to retrieve and view the logs in Visual Studio. In the Server Explorer window, open the Azure deployment, right-click a role instance and choose View IntelliTrace Logs for that instance.

Under the covers again, Visual Studio communicates with your role, restarts the IntelliTrace log file, downloads it to your machine and opens it. It's actually a little more complicated than that, which I'll cover later on.

Enabling IntelliTrace without Visual Studio

There are many cases where you may not want to build and deploy directly from Visual Studio. For example:

  • You have a build server that builds your Cloud Service project into a package.
  • You have an operations team that does deployments to Azure and developers don't have direct access to the Azure portal or API.

The Publish dialog in Visual Studio supports creating a Cloud Service package without deploying it. However, when you select that option, the option to enable IntelliTrace is disabled.

clip_image004

With a small amount of digging in the C:\Program Files (x86)\MSBuild\Microsoft\Cloud Service\1.0\Visual Studio 10.0\Microsoft.CloudService.targets file, it's easy enough to work out how to create an IntelliTrace enabled package without Visual Studio. At the top of the file, these properties are defined with default values:

<!-- IntelliTrace related properties that should be overriden externally to enable IntelliTrace. -->
<PropertyGroup>
  <EnableIntelliTrace Condition="'$(EnableIntelliTrace)' == ''">false</EnableIntelliTrace>
  <IntelliTraceConnectionString Condition="'$(IntelliTraceConnectionString)' == ''">UseDevelopmentStorage=true</IntelliTraceConnectionString>
</PropertyGroup>

To enable the IntelliTrace collector in a role, all you need to do is set the EnableIntelliTrace property in MSBuild. For example, here's how to run the build from a command-line:

msbuild WindowsAzureProject1.ccproj /p:EnableIntelliTrace=true;
IntelliTraceConnectionString="BaseEndpoint=core.windows.net;
Protocol=https;AccountName=storageaccountname;AccountKey=storagekey"
/t:Publish

Once the build completes, you are left with a Cloud Service Package file (*.cspkg) and a Cloud Service Configuration file (*.cscfg). These include the IntelliTrace runtime files, a remote-control agent (which is described in the next section) and a startup task.

clip_image006

These package and configuration files can then be handed off to somebody to deploy via the Windows Azure Portal. The Windows Azure Service Management API can also be used from PowerShell or custom code to deploy. If an operations team is managing your Azure deployments, then this is exactly what you want.

How does Visual Studio retrieve logs? It uses IntelliTraceAgentHost.exe

If you look at the intermediate un-packaged output (e.g. \bin\Debug\WindowsAzureProject1.csx\roles\WorkerRole1\plugins\IntelliTrace), you'll see that along with the IntelliTrace runtime, there is an additional application: IntelliTraceAgentHost.exe.

clip_image008

If you look at the RoleModel.xml file (e.g. \bin\Debug\WindowsAzureProject1.csx\roles\WorkerRole1\RoleModel.xml), you'll see that it's started as a foreground task along with the IntelliTrace startup task that starts the runtime.

<RoleModel xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="WorkerRole1" version="1.3.11122.0038" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <Startup>
    <Task commandLine="IntelliTraceStartupTask.exe" executionContext="elevated" relativePath="plugins\IntelliTrace">
      <Environment>
        <Variable name="IntelliTraceConnectionString" value="%@IntelliTraceConnectionString%" />
      </Environment>
    </Task>
    <Task commandLine="IntelliTraceAgentHost.exe" executionContext="elevated" taskType="foreground" relativePath="plugins\IntelliTrace">
      <Environment />
    </Task>
  </Startup>

This agent is running all the time and is the mechanism by which Visual Studio interacts with the IntelliTrace runtime that is running on a role. It listens on an Azure Queue and responds when somebody in Visual Studio selects View IntelliTrace Logs. The queue name is based upon a hash of the deployment id, role name and instance id of the deployed application.

Once the agent receives a command on the Queue (from somebody choosing View IntelliTrace Logs from Visual Studio), it pushes status messages onto a client response queue. The message looks something like this:

<IntelliTraceAgentRequest Id="7e7d6413d22644b38e3986da24e0c84b" TargetProcessId="0" ResponseQueueName="itraceresp9e221eabf27044d4baccf1a8b7ccf765" />

The stages of the request are:

  1. Pending
  2. CreatingSnapshot
  3. Uploading
  4. Completed

Because the IntelliTrace runtime is running and writing to the log file, the file is in use and cannot be just copied off disk. It turns out that within IntelliTrace.exe there is a hidden option called copy. When run with this option, IntelliTrace will stop logging to the current file and create a new one. This allows the old file to be read without restarting IntelliTrace and the application that is being traced.

Once the snapshot file has been created, the agent then uploads it to blob storage in the intellitrace container.

When the upload is complete, the agent then pushes a message on the queue which informs Visual Studio where to retrieve the file from. The message looks something like this:

<IntelliTraceAgentResponse RequestId="7e7d6413d22644b38e3986da24e0c84b" Status="Completed" PercentComplete="100">
  <Error></Error>
  <Logs>
    <Log BlobName="320af8081d0143e694c5d885ab544ea7" ProcessName="WaIISHost" IsActive="true" />
    <Log BlobName="b60c6aaeb2c445a7ab7b4fb7a99ea877" ProcessName="w3wp" IsActive="true" />
  </Logs>
  <Warnings />
</IntelliTraceAgentResponse>

Retrieving IntelliTrace Log Files without Visual Studio

Although the View IntelliTrace Logs option in the Server Explorer window works great, it requires you to have the API management certificate and storage account keys for your service. In the scenario where you have a separate operations team that deploys and runs your service, it's unlikely that developers will have access to these keys. It's also unlikely that an operations person will feel comfortable opening Visual Studio and using it to retrieve the logs.

Fortunately, we can use the same API that Visual Studio uses and build our own application that triggers a snapshot and downloads the IntelliTrace file from blob storage.

  1. Download the source from the attachment at the end of this post.
  2. Unzip the source and open it in Visual Studio.
  3. For each project, add references to the following files
    • C:\Program Files (x86)\Windows Azure Tools\1.3\Visual Studio 10.0\Microsoft.Cct.IntelliTrace.Client.dll
    • C:\Program Files (x86)\Windows Azure Tools\1.3\Visual Studio 10.0\Microsoft.Cct.IntelliTrace.Common.dll
    • C:\Program Files\Windows Azure SDK\v1.3\ref\Microsoft.WindowsAzure.StorageClient.dll
  4. For each project, modify app.config and configure the StorageAccountConnectionString.

IntelliTraceControl.exe

This console application determines the correct queue name and pushes a message on the queue to initiate an IntelliTrace log snapshot. Once the snapshot is uploaded to blob storage, it will return a GUID that represents the object in the blob container.

Usage: IntelliTraceControl.exe <deployment id> <role name> <instance id>

Example: IntelliTraceControl.exe 300f08dca40d468bbb57488359aa3991 WebRole1 0

IntelliTraceDownload.exe

Using the GUID returned from the previous app, this app will connect to blob storage and download the *.iTrace file to your TEMP directory.

Usage: IntelliTraceDownload.exe <guid>

Example: IntelliTraceDownload.exe 84404febbde847348341c98b96e91a2b

Once you have retrieved the file, you can open it in Visual Studio for diagnosing problems.
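
If you only need to pull a snapshot down and don't want to compile the sample, here's a rough PowerShell sketch of the download step, using the same Azure SDK 1.3 StorageClient assembly that the sample projects reference. The storage connection string is a placeholder, the blob name is the GUID returned by the control app, and it assumes the snapshot sits at the root of the intellitrace container:

# Rough sketch only: download an IntelliTrace snapshot blob from the 'intellitrace' container
[Reflection.Assembly]::LoadFrom("C:\Program Files\Windows Azure SDK\v1.3\ref\Microsoft.WindowsAzure.StorageClient.dll") | Out-Null
$connectionString = "DefaultEndpointsProtocol=https;AccountName=storageaccountname;AccountKey=storagekey"
$account = [Microsoft.WindowsAzure.CloudStorageAccount]::Parse($connectionString)
$blobClient = New-Object Microsoft.WindowsAzure.StorageClient.CloudBlobClient $account.BlobEndpoint, $account.Credentials
$container = $blobClient.GetContainerReference("intellitrace")
$blob = $container.GetBlobReference("84404febbde847348341c98b96e91a2b")
$blob.DownloadToFile("$env:TEMP\84404febbde847348341c98b96e91a2b.iTrace")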

TFS 2010: What Service Packs and Hotfixes Should I Install?


Team Foundation Server 2010 was released in April 2010. Since then, there have been a number of important Service Packs, Cumulative Updates and hotfixes that have been made available based upon internal usage at Microsoft and customer feedback via the support organisation. This blog post is an attempt at bringing together all the updates that are currently available.

Installation Guide

For a new install, you should always start with the latest version of the Installation Guide. The version on the web is newer than the version that's included on the DVD/ISO.

Licensing

If you have questions about the licensing around Team Foundation Server, you should take a look at the following sources:

Application Tier

Although this is a link to the trial edition ISO, it's the same bits that are used for the non-trial edition. The trial lasts for 90 days and you can upgrade from the Trial Edition by entering your product key at any time. If you're having trouble getting your product key, you can request a trial extension by following these instructions.

The following updates must be installed in the correct order:

Database Tier

Team Foundation Server 2010 requires SQL Server 2008 Service Pack 1 or later (TFS installation will block if you don't have it installed). If your IT department requires you to use SQL Server 2008 (rather than 2008 R2) it is recommended to install at least SQL Server 2008 Service Pack 2, as it includes a number of important fixes that TFS benefits from. (See the bottom of this post for more details if you're interested)

Although TFS 2010 shipped with SQL Server 2008, the R2 edition of SQL Server has been subsequently released. After an update to the Microsoft Product Use Rights document, you can now use SQL Server 2008 R2 with TFS 2010:

SQL SERVER TECHNOLOGY

Visual Studio Team Foundation Server 2010 includes the right to use one instance of SQL Server 2008 Standard or SQL Server 2008 R2 Standard in support of Team Foundation Server, as permitted in the Universal License Terms section for products that include SQL Server technology.

Note: There may be Cumulative Updates that have been released after a Service Pack. The general recommendations from the SQL Server team are that you should a) Install the latest released Service Pack for your version of SQL; b) Only install SQL Cumulative Updates when you encounter a specific issue that is addressed by that Cumulative Update.

If for some reason, you can’t run SQL Server 2008 SP3 or at least SQL Server 2008 R2 SP1, AND you are using SQL Enterprise Edition, you should take a look at this KB on possible data corruption issues and recommended minimum patching levels.

Clients (Visual Studio & Microsoft Test Manager)

Visual Studio 2010

(Note: If you have a license key for full version of Visual Studio 2010 Professional, the Visual Studio 2010 Professional Trial - Web Installer includes Team Explorer and it can be more convenient than downloading and mounting the ISO file)

The following updates must be installed in the correct order:

The MSSCCI Provider allows non-Microsoft tools to connect to TFS:

Team Explorer Everywhere (TEE) is an Eclipse IDE/Java implementation of the TFS client:

Visual Studio 2008

The following updates must be installed in the correct order:

Visual Studio 2005

The following updates must be installed in the correct order:

Build Controllers, Build Agents, Test Controllers & Test Agents

A Team Build 2008 server cannot communicate with a Team Foundation Server 2010 server; as such, all existing Team Build servers will need to be upgraded.

The following updates must be installed in the correct order:

If you work in a cross-platform environment, you may also want to install the build extensions that allow you to execute Ant or Maven 2 builds and publish any JUnit test results back to TFS.

SharePoint

For the latest recommended updates to SharePoint Server 2010 and Windows SharePoint Services 3.0, see the Office Update Center. You should at least have these:

Feature Pack for Team Foundation Server and Project Server Integration

Only those machines that have the feature pack installed can participate in data synchronization between the two products. See the Configuration Quick Reference for installation pre-requisites and instructions.

Office Project Server 2007

Office Project Server 2010

Process Templates

Visual Studio 2010 Ultimate and Test Professional 2010

Hopefully you find this list of updates useful. If there is something that I’ve missed or you think should be on here, leave a comment or send me an email and I'll do my best to include it.

Brian Harry also has a blog post as of March 2012 which talks about all these patches and the philosophy.

[Update 4 Jan 2012]: Fixed SQL08 R2 SP1 link, added note about SQL CU's, added file types (e.g. ISO) and file sizes.
[Update 9 Jan 2012]: Added SQL08 R2 SP1 CU4 as recommended, since it addresses a ghost record cleanup issue. Added some other patches and feature packs.
[Update 16 Jan 2012]: Replaced TFS SP1 CU1, VS2010 SP1 TFS Compatibility GDR and Test attachment data hotfix with TFS SP1 CU2 link.
[Update 29 Jan 2012]: Updated link to Test Attachment Cleaner to point to the TFS 2010 Power Tools, since it’s now included there.
[Update 31 Jan 2012]: Fixed link to VS2010 SP1 ISO and ISO Mounting instructions.
[Update 2 Feb 2012]: Added link to corruption issues with SQL Enterprise editions. The previous SQL recommended patch levels include this patch, so they are unchanged. Added links to SharePoint service packs and cumulative updates.
[Update 25 Mar 2012]: Added link to VS2010 SP1 TFS 11 Compatibility GDR. Added links to updated licensing rules (TEE, TFS reporting). Updated section titles to clarify where the updates apply to.
[Update 29 Mar 2012]: Fixed links to Project Server feature packs on MSDN.
[Update 26 July 2012]: Updated from SQL 2008 R2 SP1 to SQL 2008 R2 SP2.

TFS: Empty Process Template


Over the last few years, I've occasionally had a need for an empty or minimal process template. For example:

  • You are setting up a sync with the TFS Integration Tools and you want to do a "context sync" (i.e. let the tool copy the Work Item Types, Areas/Iterations, Global Lists from one server to another)
  • You have a project for source control only and don't want any work item types
  • You are testing changes to a Work Item Type and don't want to be bothered with field conflicts (e.g. if you're retrofitting a project to work with Microsoft Test Manager that was upgraded from TFS 2008 to TFS 2010)

Typically, my process is as follows:

  • Create a new Team Project Collection on the server
  • Don't create any Team Projects in the collection
  • Download the MSF Agile Process Template
  • Strip out everything that is not required (i.e. SharePoint, Reporting, Lab, Build, Work Item Types, Queries, etc)
  • Rename it to the Empty Process Template
  • Upload it to the TPC
  • Create a new Team Project using the Empty Process Template

I'm going to save you (and me) a bunch of time and let you download the Empty Process Template that I’ve already created.

A word of warning: there are some problems that you can run into later when using this process template. Problems that I’ve come across:

  • No Build Process Templates. Since the ‘Build’ parts are taken out of the process template, DefaultTemplate.xaml and UpgradeTemplate.xaml won’t get uploaded.
  • Work Item Categories are empty. This means that you can’t use Microsoft Test Manager with the project until you add them in again (see the witadmin sketch after this list).
  • Lab Management won’t work with the team project.
  • Other problems with permissions (build, work item query folders, Lab Management, etc) – since many permissions are set during team project creation.
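
For the empty Work Item Categories problem, one way to get Microsoft Test Manager working again is to copy the categories from a project that was created with a full process template. This is a hedged sketch using witadmin; the collection URL and project names are placeholders:

witadmin exportcategories /collection:http://tfsserver:8080/tfs/CollectionName /p:SourceProject /f:categories.xml
witadmin importcategories /collection:http://tfsserver:8080/tfs/CollectionName /p:NewEmptyProject /f:categories.xml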

Team Build 2010: Associate Changesets and Work Items with a Dummy Build


Jason Prickett has a blog post called Creating Fake Builds in TFS Build 2010. It includes code for creating a dummy build service host, build controller, build definition and then a build result (IBuildDetail).

One thing that isn’t shown is how to associate Changesets and Work Items with the build result. Fortunately it’s not that hard, and you can use the following methods in the Team Build Object Model:

Once you’ve added the associations, you may need to save the InformationNode using IBuildInformationNode.Save(). However, some of them will save as part of the association. In my sample code below, I choose to save after making all the associations to minimize the number of round-trips to the server.

The following sample code will go and find the specified Build Number then add an association for the specified changeset ID and all the work items that are associated to the changeset.

/// <summary>
/// Finds the specified build number and adds changeset + work item associations
/// so that they appear in the build report.
/// </summary>
private static void AssociateBuild(String collectionUri, String teamProject, String buildNumber, int changesetId)
{
  TfsTeamProjectCollection tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(collectionUri), new UICredentialsProvider());
  IBuildServer buildServer = (IBuildServer)tpc.GetService(typeof(IBuildServer));

  // Create a IBuildDefinitionSpec so that we can specify our build detail exactly and
  // avoid a potentially very expensive call to the server with QueryBuilds()
  IBuildDetailSpec spec = buildServer.CreateBuildDetailSpec(teamProject);
  spec.BuildNumber = buildNumber;
  spec.MaxBuildsPerDefinition = 1; // Redundant, but just for safety - since we're specifying the build number, there should only be one match
  IBuildDetail buildDetail = buildServer.QueryBuilds(spec).Builds[0];

  // Get the changeset that we want to associate
  Changeset changeset = tpc.GetService<VersionControlServer>().GetChangeset(changesetId);
  buildDetail.Information.AddAssociatedChangesets(new Changeset[] { changeset });
  buildDetail.Information.AddAssociatedWorkItems(changeset.WorkItems);
  buildDetail.Information.Save();
  buildDetail.Save();
}
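
If you’d rather do this from a script than compile a console application, the same object-model calls translate fairly directly to PowerShell. This is a rough sketch only, assuming Team Explorer 2010 is installed; the collection URL, team project, build number and changeset ID are placeholders:

# Rough PowerShell sketch of the same association calls (placeholder values below)
[Reflection.Assembly]::Load("Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a") | Out-Null
[Reflection.Assembly]::Load("Microsoft.TeamFoundation.Build.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a") | Out-Null
[Reflection.Assembly]::Load("Microsoft.TeamFoundation.VersionControl.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a") | Out-Null

$tpc = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection((New-Object Uri "http://tfsserver:8080/tfs/CollectionName"))
$buildServer = $tpc.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])

# Find the build by its build number, as the C# sample does
$spec = $buildServer.CreateBuildDetailSpec("TeamProjectName")
$spec.BuildNumber = "FakeBuild_20120101.1"
$buildDetail = $buildServer.QueryBuilds($spec).Builds[0]

# Get the changeset to associate
$vcs = $tpc.GetService([Microsoft.TeamFoundation.VersionControl.Client.VersionControlServer])
$changeset = $vcs.GetChangeset(12345)

# AddAssociatedChangesets/AddAssociatedWorkItems are extension methods in the C# sample,
# so PowerShell 2.0 calls the static InformationNodeConverters class directly
[Microsoft.TeamFoundation.Build.Client.InformationNodeConverters]::AddAssociatedChangesets($buildDetail.Information, @($changeset)) | Out-Null
[Microsoft.TeamFoundation.Build.Client.InformationNodeConverters]::AddAssociatedWorkItems($buildDetail.Information, $changeset.WorkItems) | Out-Null
$buildDetail.Information.Save()
$buildDetail.Save()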

By creating fake builds and associating changesets and work items, you’re closing the traceability loop within TFS. Build results act as a synchronization point and are an important metric in the data warehouse and reporting capabilities of TFS.

Where’s Grant?


You may have noticed that my blog has been a little void of new content lately. The good news is that I'm back with a bunch of new things to talk about.

First of all, for those of you who don't know, my family and I relocated back to my hometown of Canberra, Australia in September last year. I spent just over 3 years on the Team Foundation Server product team, which I enjoyed immensely. Moving to Seattle and taking a job on the product team was the best decision we ever made - there's no better way to experience another country/culture than to immerse yourself in it. Literally every day I would meet a new person, encounter a new problem to solve or do something that would leave an impact on thousands of people.

However, we decided that it was best for our newborn (and for us… free babysitting!) to be closer to our extended family in Australia. Another part of it was that I longed for the Aussie sunshine that Seattle just doesn't get enough of. I was lucky enough to transfer within Microsoft as a Senior Consultant with Microsoft Consulting Services (MCS). The nice thing about Canberra is that even on the shortest, coldest winter day, the sky is blue and the sun is shining.

Fast-forward nine months and I'm now a Senior Premier Field Engineer (PFE) in the Microsoft Services Premier Support organization (you’ll have to forgive me any time I slip up with American spelling, I’m trying to unlearn the habits). Consulting wasn't going to give me the technical depth and variety that I thrive on. There will be a follow-up post on what exactly I do as a PFE, but think of it as deep technical expertise in one or two Microsoft products. We do proactive work, like Health Checks, and reactive work, like on-site support for CritSit support cases.

image

You may have also read on Brian's blog that we've been hard at work on the next book: Professional Team Foundation Server 2012. The others (poor Martin) have also been busy on the Professional Application Lifecycle Management 2012 book. The books are due for release late in the year, but you can pre-order now on Amazon and save some money. The books have been updated to include all the new TFS 2012 features along with some more "war stories" and "best practices" that everybody loved from the 2010 edition.

clip_image001clip_image002

Now that things have started stabilizing (relocation, work, home, etc) along with the new variety of customer-facing work, I can focus on creating some new content. Here are some of the blog posts I've got queued up (in no particular order):

In the meantime, be sure to check out the new look http://tfspreview.com/ to see all the great new features of Visual Studio Team Foundation Server 2012 and the hosted service.
