
TFS: Enable Team Project Portal script


If you don’t have SharePoint configured at the time that you upgrade your Team Foundation Server or when you attach a new TPC, then one of the things that you might be left with is this:

  • The Team Project is working fine
  • The SharePoint site is working fine
  • But they’re not linked or associated with each other as a Project Portal

Project Portal Settings in Visual Studio Team Explorer. The 'Enable team project portal' checkbox is not checked

Unfortunately (until now) the only way to link these two together again is to use Visual Studio to open up the Project Portal Settings for each team project and tick the box. If you have to do this for more than a few team projects on a server, it’s pretty tedious. Five is about the limit of my patience, but I have seen customers do up to 100.

 

EnableProjectPortal.exe

Download EnableProjectPortal.zip

Usage

EnableProjectPortal.exe <tfs server url> <team project id> <sharepoint relative path> <owned web identifier>

Example

EnableProjectPortal.exe "http://localhost:8080/tfs/" "2eb9c8a2-2243-4897-ac88-602bef270dd5" "/sites/TailspinToysCollection/Tailspin Toys" "224C16E0-00DA-4C98-9042-3D21228B2511"

This console application will use the ICatalogService API to do the equivalent of the '[x] Enable team project portal' checkbox. You will need to collect some information before you can use it, but at least you can build up a batch file and do lots of team projects at once.

Get a list of projects and their IDs

There are plenty of ways to get a list of Team Projects and their GUIDs, but this one is fine for a one-off:

1. Open SQL Server Management Studio

2. Connect to Team Project Collection database (e.g. Tfs_DefaultCollection)

3. Run the following query:

SELECT project_name, project_id
FROM tbl_Projects
WHERE [state] = 'WellFormed'
ORDER BY project_name

It should return something like this:

project_name     project_id
Tailspin Toys    2EB9C8A2-2243-4897-AC88-602BEF270DD5

Copy and paste this list into Excel

Get a list of the SharePoint sites and their WebIdentifier IDs

Once again, there's more than one way to get this information other than going directly to the database, but for a one-off like this it will be fine:

1. Open SQL Server Management Studio

2. Connect to the SharePoint content database that holds your team sites (e.g. WSS_Content)

3. Run the following query:

SELECT
    FullUrl as 'RelativePath',
    Id as 'OwnedWebIdentifier'
FROM AllWebs
ORDER BY RelativePath

It should return something like this:

RelativePath                                   OwnedWebIdentifier
                                               B032339F-D997-4B2C-B5D0-3CB6064D2F1A
sites/FabrikamFiberCollection                  919B7437-B8D9-4B56-8AB5-D5B22605278F
sites/FabrikamFiberCollection/FabrikamFiber    7485DD68-2C1D-4089-AD1E-7FA43D92065D
sites/team                                     1E3082E4-6517-401A-8D5F-22DF8ED1B308

Paste this list somewhere else in your Excel workbook

Construct a mapping table

Now we need to map the team projects and their IDs to the SharePoint sites. Use Excel to construct a table with the following format:

project_name     project_id                              RelativePath                                   OwnedWebIdentifier
Tailspin Toys    2EB9C8A2-2243-4897-AC88-602BEF270DD5    sites/TailspinToysCollection/Tailspin Toys     7FC7E412-F49C-488B-A023-8C1D61AE34C7
FabrikamFiber    FD6FA263-B3F9-45E3-96AF-AD67E75C9FF7    sites/FabrikamFiberCollection/FabrikamFiber    7485DD68-2C1D-4089-AD1E-7FA43D92065D

Now we can use this table to construct the arguments for EnableProjectPortal.exe. You can use this formula in a new column to the right in Excel:

=CONCATENATE("EnableProjectPortal.exe ""http://localhost:8080/tfs/"" """,B2,""" """,C2,""" """,D2,"""")
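
Filling the formula down gives you one command line per row; for the Tailspin Toys row in the mapping table above it produces something like:

EnableProjectPortal.exe "http://localhost:8080/tfs/" "2EB9C8A2-2243-4897-AC88-602BEF270DD5" "sites/TailspinToysCollection/Tailspin Toys" "7FC7E412-F49C-488B-A023-8C1D61AE34C7"

Paste the generated column into a batch file and you can enable the portals for every team project in one go.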

Repairing connections

Once all the portal settings have been established, you should open the Team Foundation Server Administration Console and choose “Repair Connections”.

This will make sure all the SharePoint permissions and properties are set correctly.

TFS Administration Console showing the Repair Connections link

Log for repairing the connection between TFS and SharePoint

At this point you are done and you have saved yourself or your customer a lot of tedious clicking.

 

The Code

All the thanks go to Phil (another Aussie expat on the TFS team) for this utility. I was just the beneficiary and now I'm the messenger.

Here's the guts of it, where we use the ICatalogService to set up a dependency between the team project node and the SharePoint site.

var projectPortalNodes = teamProject.NodeReferences[0].QueryChildren(new Guid[] { CatalogResourceTypes.ProjectPortal }, true, CatalogQueryOptions.ExpandDependencies);

CatalogNode projectPortalNode = null;

if (projectPortalNodes.Count > 0)
{
    // It already exists, so let's overwrite/set it with the values we want.
    projectPortalNode = projectPortalNodes[0];
}
else
{
    // It doesn't exist, so let's create it.
    projectPortalNode = teamProject.NodeReferences[0].CreateChild(CatalogResourceTypes.ProjectPortal, "Project Portal");
}

// Set properties
projectPortalNode.Resource.Properties["ResourceSubType"] = "WssSite";
projectPortalNode.Resource.Properties["RelativePath"] = sharePointRelativePath;
projectPortalNode.Resource.Properties["OwnedWebIdentifier"] = sharePointOwnedWebIdentifier;

// BUG: Use the first sharepoint web resource. Doesn't work with multiple.
projectPortalNode.Dependencies.SetSingletonDependency("ReferencedResource", sharepointWebAppResources[0].NodeReferences[0]);

catalogService.SaveNode(projectPortalNode);
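
For context, here's a rough sketch (not the exact source of the tool) of the plumbing around that snippet: connect to the server, grab the ICatalogService, then find the team project and SharePoint web application resources. The QueryResourcesByType calls and the "ProjectId" property name are from memory, so treat them as assumptions and verify them against your version of the TFS object model.

using System;
using System.Linq;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Client;
using Microsoft.TeamFoundation.Framework.Common;

// args: <tfs server url> <team project id> <sharepoint relative path> <owned web identifier>
var configurationServer = TfsConfigurationServerFactory.GetConfigurationServer(new Uri(args[0]));
var catalogService = configurationServer.GetService<ICatalogService>();

// Find the team project resource that matches the project GUID from the command line.
// Assumption: the TeamProject catalog resource exposes a "ProjectId" property.
CatalogResource teamProject = catalogService
    .QueryResourcesByType(new[] { CatalogResourceTypes.TeamProject }, CatalogQueryOptions.None)
    .First(r => string.Equals(r.Properties["ProjectId"], args[1], StringComparison.OrdinalIgnoreCase));

// The SharePoint web applications registered with TFS; the snippet above uses the first one.
var sharepointWebAppResources = catalogService
    .QueryResourcesByType(new[] { CatalogResourceTypes.SharePointWebApplication }, CatalogQueryOptions.None)
    .ToList();

string sharePointRelativePath = args[2];
string sharePointOwnedWebIdentifier = args[3];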


The Physical Internet


Recently I read the book Tubes: A Journey to the Center of the Internet by Andrew Blum. If you've ever wondered how your computer connects to other computers around the world, this book is a must-read. I consider it essential reading for any engineer responsible for delivering online services or networks.

Tubes: A Journey to the Center of the Internet

Once you've finished reading that book, go and read Wired: Mother Earth Mother Board by Neal Stephenson. It's quite lengthy at 42,000 words and was written in 1996, but it makes a great companion to Tubes. Here's a sample:

One day a barge appears off the cove, and there is a lot of fussing around with floats, lots of divers in the water. A backhoe digs a trench in the cobble beach. A long skinny black thing is wrestled ashore. Working almost naked in the tropical heat, the men bolt segmented pipes around it and then bury it. It is never again to be seen by human eyes. Suddenly, all of these men pay their bills and vanish. Not long afterward, the phone service gets a hell of a lot better.

Caution: Submarines

Ever since I was a kid, I've had a fascination with submarine cables. I can trace these memories back to family holidays along the NSW South Coast. As you drive up and down the coast, there are lots of rivers to cross and on the shore beside every bridge was one of these signs:

clip_image002

Photo credits: brynau on Flickr

I remember asking my grandparents what the signs were for and being told something about stopping the submarines coming down the river, but that didn't sit quite right with me. Why would they have a big sign advertising that there was protection there? And why is there a picture of a boat with an anchor on it?

Connecting Australia to the world

As an early user of the Internet in Australia, two things were clear to me: it was slow and it was expensive. At the time, Australia was connected to the world by a handful of 560Mbit/sec cables that ran via New Zealand and Hawaii: PacRimWest, PacRimEast and Tasman2.

Then in the year 2000, things started to change. Internet access started becoming a lot faster and a lot more affordable. This was due to the commissioning of two significant cables:

"Southern Cross Cable route" by J.P.Lon, Mysid Wikipedia Commons
Image credits: "Southern Cross Cable route" by J.P.Lon, Mysid Wikipedia Commons
"SEA-ME-WE-3-Route" by J.P.Lon on Wikipedia
Image credits: "SEA-ME-WE-3-Route" by J.P.Lon on Wikipedia

These two cables were built using two different business models which are talked about in the book. To summarize:

  1. Consortium: Cables were financed by consortiums of (usually) government-owned telephone providers in the countries that the cable would pass through. Each provider would be responsible for part of the cost of the cable in return for having access to it. Prior to 1997, this is how all cables were built. Because the financiers of the cables came from the "old world" club of telephone companies, capacity on the cable was sold in "circuits". The more bandwidth you wanted, the more circuits you had to buy. It also meant that as the fibre optic technology along the cable was upgraded, they could sell more circuits at the same price.
  2. Privately financed: "New world" private investors did the math and realized that they could build submarine cables and sell the rights to the actual fibre pairs in the cable. This then allowed the communications providers to put their own fibre optic equipment on the ends of the cable and send/receive as much data as they were capable of, without per-circuit fees.

As the rush of investment on these new world cables picked up pace, some of the old world consortiums felt so threatened that they ended up buying capacity on the cables themselves!

Submarine Cables around the world

Much of the source material in the book originates from a company called TeleGeography. TeleGeography is a telecommunications market research firm that has been studying the physical Internet since 1998. Along with things like bandwidth and co-location pricing research, they also sell a 36" x 50" wall map of submarine cables for $250. They also have an interactive online version with additional context for each country's Internet access.

TeleGeography 2014 Submarine Cable Map

Being the techie that I am, I ended up getting a framed copy of the map and have it on my wall as a reminder of how far away Australia is from the rest of the world. (Not like I need a reminder! :)

Cable Landings

In the December 2009 edition (17.12) of Wired magazine, there was an article called Netscapes: Tracing the Journey of a Single Bit by Andrew Blum that included this picture:

Grover Beach, California. Photo from Wired article: Netscapes: Tracing the Journey of a Single Bit

Grover Beach, California

After traversing the continent, our packet will arrive in an LA building much like 60 Hudson Street. But if it wants to ford the Pacific, it can jog north to a sleepy town near San Luis Obispo. This sheltered section of coastline is not a busy commercial port, so it’s unlikely that a ship will drag an anchor through a transoceanic cable here. A major landing point for data traffic from Asia and South America, the station at Grover Beach sends and receives about 32 petabits of traffic per day. As our bit streams through the Pacific Crossing-1 cable (underneath the four posts, left), it’s on the same trail as some of the most important information in the world: stock reports from the Nikkei Index, weather updates from Singapore, emails from China — all moving at millions of miles an hour through the very physical, very real Internet.

This is just one of hundreds of cable landing points around the world, and the book describes the process of "landing" a cable on a beach and connecting it to a nearby "Landing Station" like this one. These are usually nondescript buildings near the beach, but they aren't actually required to be on the beach.

Internet Exchanges

The next step in the journey of a bit is "How do all these cables criss-crossing the globe connect to each other?"

It turns out that there are some pretty significant Internet exchange points (IX or IXP) spread around the world for this purpose. An IXP allows networks to directly "cross-connect" (peer) with each other, often at no charge. This literally means patching a cable between the two networks and into the same switch. Keith Mitchell's presentation Interconnections on the Internet: Exchange Points talks about the different interconnection models and what determines the success of an IXP.

Wikipedia has a list of Internet exchange points by size and TeleGeography lists them by country, so you can see which exchanges carry the most traffic.

Unsurprisingly, you will find that many cloud service providers (e.g. Azure, Amazon, Google, Facebook, Akamai) have major datacenters located near these exchange points. This allows them to peer with lots of ISPs for cheap/free traffic and reduces the latency between their services and their customers.

Aside: Net Neutrality, Interconnection and Netflix

I won't go into the details here, but these articles make for interesting reading on the topic of "paid for" interconnects and how they can dramatically affect things like your video streaming experience.

Direct line from Chicago to New York

One of the other books that I came across recently is called Flash Boys by Michael Lewis.  The first chapter (which is summarised in this Forbes article) describes how Dan Spivey of Spread Networks came up with the idea to build a fibre optic line directly between Chicago and New York for sending low-latency trades. Dan helped devise a low-latency arbitrage strategy, wherein the fund would search out tiny discrepancies between futures contracts in Chicago and their underlying equities in New York.

Book: Flash Boys by Michael Lewis

Since fibre optic cables carry light signals at a more-or-less fixed speed, the only way to get the signals to the other end faster is to reduce the distance. What Dan realised was that the existing fibre paths between the two cities were not as direct as they could be, as they tended to follow railroad rights-of-way.

By building a cable that is nearly as straight as the crow flies, Spread Networks was able to shave 100 miles and 3 milliseconds off the latency between the two trading data centers. This made the cable extremely valuable and they ended up selling the exclusive rights to a single broker firm (since if more than one person had access to the cable, that devalued it).

Dan was obsessed with the length of the cable, since every twist and turn adds to the latency. One extreme example: where the cable ducts ran down one side of the road and then crossed over at an intersection to continue on the opposite side, they laid the cable diagonally across the road instead of making two 90 degree turns.

 

I hope you've enjoyed this quick excursion around the physical infrastructure of the Internet. If you find any more interesting articles or books on the topic, I'd love to hear about them.


TFS: How to Customize Work Item Types


Team Foundation Server has allowed you to modify your Work Item Type Definitions since the first version of TFS. (Side note: this is not the case with the Team Foundation Service, but the team hopes to enable that at some point in the future. At the moment, limiting the customization allows them to innovate the features in the Service at a faster pace without having to worry too much about everybody’s customizations.)

The fundamentals for modifying Work Item Types are documented in the following places:

In this post, I’m going to show you the tools and process that I personally use for customizing work item types.

Prerequisites / Tools

  • Real (Production) TFS server / project
  • Test (Staging) TFS server / project
  • ExportWITDs.cmd – A batch file (included below) that uses the ‘witadmin.exe exportwitd’ command
  • ImportWITDs.cmd
  • Visual Studio, XML editor with IntelliSense
  • Checkin.cmd – A batch file that uses tf.exe to prompt for a comment and check-in current changes.
  • Team Foundation Server Power Tools – Process Editor

Workflow

When I’m working with a customer and doing a series of process template or work item type customization, this is the workflow that I follow:

  1. Run a script to export all Work Item Type definitions to my local machine
  2. Check-in a copy of the definitions to source control, so that we have a baseline to work from and revert back to
  3. Edit the XML definitions in Visual Studio as XML, with IntelliSense (see below)
  4. Run a script to import the definition to my Test project
  5. Verify the changes in a second copy of Visual Studio
  6. Check-in the changes
  7. Run a script to import the definition to my Production project

Step 1 – Export all work item types

The following script exports a list of the work item type names to a temporary file, then uses that list to export each of the work item types to a separate file in the current directory. It needs to be run from a Visual Studio Command Prompt, or you need to add ‘C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\ide’ to your PATH environment variable.

ExportWITDs.cmd:

SET collection="http://tfs-server:8080/tfs/DefaultCollection"
SET project="Project XYZ"

witadmin listwitd /collection:%collection% /p:%project% > %temp%\witd.txt

:: Remove quotes from project name
SET _string=###%project%###
SET _string=%_string:"###=%
SET _string=%_string:###"=%
SET _string=%_string:###=%

for /F "delims=" %%a in (%temp%\witd.txt) do witadmin exportwitd /collection:%collection% /p:%project% /n:"%%a" /f:"%3_%_string%_%%a.xml"

Step 2 – Check-in a copy

There’s no script for this step, since it’s a one-time thing. Just use Visual Studio, or ‘tf add . /R’ followed by ‘tf checkin . /R’

Step 3 – Open with XML Editor

See my previous blog post on how I enable IntelliSense for editing work item types as XML.

Step 4 – Import the changes to Test

Importing the changes is relatively straightforward. When I am rapidly iterating on a Work Item Type design, I like to create a ‘ImportWITDs.cmd’ batch file that imports everything that I’m currently working on. Then I can just leave a command prompt open and run it whenever I feel like it.

Now, for the seasoned witadmin pros, you’ll know that there’s also a ‘/v’ option that allows you to validate the changes before you actually upload them to the server. In my experience, this is a waste of time – two reasons:

  1. If the XML is invalid, then it’s going to fail if you try and upload it without validating first.
  2. The validation process doesn’t validate everything – it misses some things. (I forget the specific cases, but I think it was something like fields that already exist or something like that).

So, because of these two reasons, coupled with the fact that I’m also uploading to a test server first, I skip the ‘/v’ validation step and try the import directly.

ImportWITDs.cmd:

SET collection="http://tfs-server:8080/tfs/DefaultCollection"
SET project="Project XYZ"

witadmin importwitd /collection:%collection% /p:%project% /f:"_DefaultCollection_Task.xml"
witadmin importwitd /collection:%collection% /p:%project% /f:"_DefaultCollection_Bug.xml"
witadmin importwitd /collection:%collection% /p:%project% /f:"_DefaultCollection_Issue.xml"
witadmin importwitd /collection:%collection% /p:%project% /f:"_DefaultCollection_Shared Steps.xml"
witadmin importwitd /collection:%collection% /p:%project% /f:"_DefaultCollection_Test Case.xml"

Step 5 – Verify the changes

Once I’ve run the ImportWITDs.cmd script, and it completes without any errors – then it’s time to verify the changes. To do this, I normally have a second copy of Visual Studio open.

Before hitting ‘Refresh’ in Team Explorer, it’s important to close all existing Work Item tabs. Having an open query or work item can cause the metadata not to be reloaded correctly, and then you start to wonder whether your changes were uploaded successfully or not.

Once everything is closed, hit the ‘Refresh’ button at the top of Team Explorer. Then go ahead and open a New Work Item form for the type that you have just modified.

Step 6 – Check-in the changes

If you have verified the changes and everything looks great – it’s a good idea to check the XML in to source control. This gives you a point that you can roll-back to in the future. It also helps your successor understand what changes have been made to the work item types and why they were made.

After checking in the changes, we also check-out all the files again. (This is not strictly necessary if you are using Visual Studio 2012 and Local Workspaces, since the files will be read-write on disk and any changes will be detected anyway.)

Checkin.cmd:

@echo off
SET /P comment="Checkin comment?"
tf checkin . /r /noprompt /comment:"%comment%"
tf edit . /R

Step 7 – Import the changes to Production

Once we’ve checked-in a copy of our changes, it’s time to upload the Work Item Type changes to the Production team project. If you’re making the changes on behalf of a customer, then you would have them review the changes on your Test system first.

Since I normally iterate on a set of changes a few times and upload to Production once at the end, I usually just modify the Server/Collection/Project settings in ImportWITDs.cmd and use that, rather than creating a separate batch file.

Other Tools

Although they are not part of the "normal" workflow, there are some other tools that I have used in the past for special situations.

TFS Team Project Manager

I can’t recommend this tool from Jelle Druyts highly enough for doing what I call “Bulk Administration” tasks in TFS. It lets you easily take a set of Work Item Types and upload them to all projects in your project collection. It also lets you bulk edit build definitions, build process templates, fields, source control settings and more.

image

ExportWITDSorted.exe

This is a little tool that I wrote for myself. The use case that I wrote it for was when you are working with heavily customised work item types that you don’t have the original XML for.

Although the way that you modify work item types is all in XML – that is not how the work item types are defined in the TFS database. When you tell TFS to import your work item type XML file, it shreds the XML, parses out all the fields, layouts, transitions, etc and puts them in separate SQL tables. When you tell TFS to export a work item type as XML, it does the opposite. The ordering of the elements in the XML is basically the ordering of the rows from the database. No sorting.

If you are trying to do a diff of different work item type XML files, this can be pretty frustrating. Of course you can go and get a diff tool that understands XML semantics, but Visual Studio can’t do this for you.

This tool I wrote uses the same APIs that ‘witadmin exportwitd’ uses to get an export of the work item type, but then it iterates through the XML elements and sorts them by the ‘name’ attribute. This makes it a little easier to diff with a ‘dumb’ text-diff tool like Visual Studio or WinMerge.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;
using System.Xml;
using System.Xml.XPath;

namespace ExportWitdSorted
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args == null || args.Length != 4)
            {
                Console.WriteLine("Usage:   ExportWitdSorted.exe <collection url> <project> <work item type> <outputfile>");
                Console.WriteLine("Example: ExportWitdSorted.exe http://tfsserver:8080/Collection MyProject \"My Bug\" mybug.xml");
                Environment.Exit(1);
            }

            // Connect to TFS
            TfsTeamProjectCollection tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(args[0]));
            WorkItemStore wis = tpc.GetService<WorkItemStore>();
            Project project = wis.Projects[args[1]];
            WorkItemType type = project.WorkItemTypes[args[2]];

            // Export the work item definition to an XmlDocument
            XmlDocument originalDoc = type.Export(false);

            // Create a copy of the definition and remove all the <FIELD> nodes so that we can replace them with a sorted list
            XmlDocument sortedDoc = new XmlDocument();
            sortedDoc.LoadXml(originalDoc.OuterXml);
            sortedDoc.SelectSingleNode("//FIELDS").RemoveAll();

            // Get the nodes from the original document and sort them
            XmlNode node = originalDoc.SelectSingleNode("//FIELDS");
            XPathNavigator navigator = node.CreateNavigator();
            XPathExpression selectExpression = navigator.Compile("FIELD/@name");
            selectExpression.AddSort(".", XmlSortOrder.Ascending, XmlCaseOrder.None, "", XmlDataType.Text);
            XPathNodeIterator nodeIterator = navigator.Select(selectExpression);

            // Import the sorted nodes into the new document
            while (nodeIterator.MoveNext())
            {
                XmlNode fieldNode = originalDoc.SelectSingleNode("//FIELD[@name='" + nodeIterator.Current.Value + "']");
                XmlNode importedFieldNode = sortedDoc.ImportNode(fieldNode, true);
                sortedDoc.SelectSingleNode("//FIELDS").AppendChild(importedFieldNode);
            }

            sortedDoc.Save(args[3]);
        }
    }
}

TFS, Load Balancers, Idle Timeout settings and TCP Keep-Alives


Since TFS 2010, it has been possible to have multiple Application Tier servers configured in a load-balanced configuration. If you use something like a F5 BIG-IP LTM device, then the default Idle Timeout settings for the TCP Profile can cause problems. (But don’t despair, read the whole post).

Here’s the scenario:

  • Between the TFS ASP.NET Application and SQL Server, there is a maximum execution timeout of 3600 seconds (1 hour)
  • In IIS/ASP.NET there is a maximum request timeout of 3600 seconds (it’s no coincidence that it matches)
  • This allows TFS operations to run for up to an hour before they get killed off. In reality, you shouldn’t see any TFS operations run for anywhere near this long – but on big, busy servers like the ones inside Microsoft, this was not uncommon.

Load balancers, in their default configuration usually have an ‘Idle Timeout’ setting of around 5 minutes. The reason for this is that every request that stays open, is consuming memory in the load balancer device. A longer timeout means that more memory is consumed and it’s a potential Denial-of-Service attack vector. (Side note: What’s stopping somebody using TCP Keep-Alives like I describe below to keep a huge number of connections open and have the same DoS effect?)

So why is this a problem if your ‘Idle Timeout’ is set to something less than 3600 seconds? This is what can happen:

  • The client makes a request to TFS – for example: “Delete this really large workspace or branch”. That request/connection remains open until the command completes.
  • The TFS Application Tier then goes off and calls a SQL Stored Procedure to delete the content.
  • If that Stored Procedure takes longer than the ‘Idle Timeout’ value, the load balancer will drop the connection between the client and the application tier.
  • The request in IIS/ASP.NET will get abandoned, and the stored procedure will get cancelled.
  • The client will get an error message like ‘The underlying connection was closed: A connection that was expected to be kept alive was closed by the server’. Basically, this means that the connection got the rug pulled out from under it.

Prior to Visual Studio & Team Foundation Server 2012, I recommended that people talk to their Network Admin guys and get the load balancer configuration updated to a higher ‘TCP Idle Timeout’ setting. This usually involved lots of back-and-forth with the grumpy admins, and eventually you could convince them to begrudgingly change it, just for TFS, to 3600. If you think that you’re hitting this problem – one way to verify is to try the same command directly against one of your application tier servers, rather than via the load balancer. If it succeeds, then you’ve likely found your culprit.

HTTP Keep-Alives

If you’ve administered web sites/webservers before, you’ve likely heard of HTTP Keep-Alive. Basically, when they’re enabled on the client and the server, the client keeps the TCP connection open after making a HTTP GET request, and reuses the connection for subsequent HTTP GET requests. This avoids the overhead of closing and re-establishing a new TCP connection.

image

That doesn’t help our Idle Timeout problem, since we only make a single HTTP request. It’s that single HTTP request that gets killed halfway through – HTTP Keep-Alives won’t help us here.

Introducing TCP Keep-Alives

There’s a mechanism built-in to the TCP protocol that allows you to send a sort-of “PING” back and forth between the client and the server, but not pollute the HTTP request/response.

If you have a .NET client application, this is the little gem that you can use in your code:

webRequest.ServicePoint.SetTcpKeepAlive(true, 50 * 1000, 1000); // Enable TCP Keep-Alives. Send the first Keep-Alive after 50 seconds, then if no response is received in 1 second, send another keep-alive.
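
In stand-alone form (the URL below is just a placeholder for a slow endpoint), that looks something like this:

using System;
using System.Net;

var webRequest = (HttpWebRequest)WebRequest.Create("http://example.com/slow-operation"); // placeholder endpoint
webRequest.ServicePoint.SetTcpKeepAlive(true, 50 * 1000, 1000); // probe after 50 seconds idle, retry every 1 second

using (var response = (HttpWebResponse)webRequest.GetResponse())
{
    // Even if the server takes longer than the load balancer's idle timeout to respond,
    // the keep-alive probes stop the connection from being treated as idle and dropped.
    Console.WriteLine(response.StatusCode);
}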

In this example NetMon network trace:

  • I deployed a web service to Windows Azure, where the load balancer had a TCP Idle Timeout of 5 minutes (this has changed lately in Azure now that they've moved to a software-based load balancer).
  • The web service was coded to do a Thread.Sleep() for however long I told it to, then send a response back.

NetMon capture that shows TCP KeepAlive packets

First of all, you’ll notice that I did this investigation quite some time ago (~2 years…). Next, you’ll see that there’s some other traffic that happens on my connection between the HTTP:Request at frame 179 and the HTTP:Response at frame 307. Those are the TCP Keep-Alive ‘PING’ and ‘ACK’ packets.

Finally, you can see that after 320 seconds have passed (i.e. 20 seconds after the load balancer should’ve closed the connection), I get a valid HTTP:Response back. This means that we have successfully avoided the load balancer killing our connection prematurely.

What’s in it for me?

The whole reason I did this investigation was when I was working on the TFS team and they were getting ready to launch the Team Foundation Service. Although it was quite rare, there were instances where users could hit this TCP Idle Timeout limitation.

The good news is that by working with the rock star dev on the Version Control team, Philip Kelley – we were able to include a change in the TFS 2010 Forward Compatibility update and the TFS 2012 RTM clients to send TCP Keep-Alives every 30 seconds, thus avoiding the issues altogether when talking to the Team Foundation Service, and on-premises TFS servers deployed behind a load balancer. You can see this for yourself in Microsoft.TeamFoundation.Client.Channels.TfsHttpRequestHelpers.PrepareWebRequest().

webRequest.ServicePoint.SetTcpKeepAlive(true, 30000, 5000);

A caveat

If you don’t have a direct connection between your client and your server, and you go via a HTTP proxy server or something like ISA/ForeFront Threat Management Gateway – the TCP Keep-Alive packets aren’t propagated through those proxies. You’ll get an error back with something like ‘502: Bad Gateway’, which basically means that the connection between the Proxy server and the TFS server was dropped.

Here’s what the NetMon trace looks like for this example:

NetMon capture that shows TCP KeepAlive packets, and ultimately the connection getting dropped

TFS2012: New tools for TFS Administrators


This is a brand new feature in TFS 2012 that hasn’t really been documented or talked about yet. If you’re a TFS administrator and you browse to this address on your server, you will see a new web-based administration interface for some things inside of TFS:

http://your-server:8080/tfs/_oi/

Activity Log

The first page that we see is a view of the TFS Activity Log. Internally, TFS has two tables in the Tfs_Configuration and Tfs_CollectionX databases called tbl_Command and tbl_Parameter. These tables keep a record of every command that every user has executed against TFS for the last 14 days.

In this screenshot, you can see that the following columns are displayed:

  • Command Id – A unique ID (per database) given to the command execution.
  • Application – Which component of TFS does it relate to? Version Control, WorkItem Tracking, Framework, Lab Management, etc
  • Command Name – The server name of the command. You can usually work out what the equivalent client/API call is – but these command names are not documented anywhere.
  • Status – 0 = Success, –1 = Failure
  • Start Time – When was the request first received by TFS
  • Execution Time – How long did the command run for (Divide by 1,000,000 to get seconds)
  • Identity Name – The user name of the user who executed the command
  • IP Address – IPv4 or IPv6 address
  • Unique Identifier – Used to group/correlate multiple server requests that originate from a single client request.
  • User Agent – the ‘User-Agent’ HTTP Header from the client. Tells you the name of the executable if it’s using the TFS API and what version/SKU.
  • Command Identifier – When using the TFS Command Line tools, this helps you correlate to what command the user was using. ‘tf get’, ‘tf edit’, etc.
  • Execution Count – How many times was this command executed. The logging mechanism has some smarts to reduce the noise in the log. If you download a bazillion files, it doesn’t log a bazillion individual rows in the log. It just sets this value to a bazillion for that entry.
  • Authentication Type – NTLM or Kerberos.

Screenshot of TFS Activity Log Web Interface

One of the things the TFS activity logger does is log the parameters passed in with a request when:

  • The command fails (i.e. a status of != 0)
  • Or, the command takes longer than 30 seconds

You can see these parameters by double-clicking a row in the table:

image

At the top of the Activity Log screen, you can also filter the log based upon Host/Collection and Identity Name. This is useful if a particular user complains about slow performance or TFS errors – you can easily look at the server logs to see what the server is seeing.

image

You can also click the ‘Export’ link to download a CSV file of the same content.

If you’d like to know more about how to query or interpret the contents of the TFS Activity Log – grab a copy of my Professional Team Foundation Server 2012 book and look at Chapter 23 – Monitoring Server Health and Performance.
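
If you prefer to go straight to the underlying tables, a query along these lines against a collection database will show the slowest commands from the last 24 hours. The column names here are from memory, so double-check them against the tbl_Command schema on your server:

SELECT TOP 20
    StartTime,
    Application,
    Command,
    IdentityName,
    IPAddress,
    ExecutionTime / 1000000.0 AS ExecutionSeconds,
    ExecutionCount,
    Status
FROM tbl_Command WITH (NOLOCK)
WHERE StartTime > DATEADD(HOUR, -24, GETUTCDATE())
ORDER BY ExecutionTime DESC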

TFS Job Monitoring

Built-in to TFS is the TFS Background Job Agent. This job agent is responsible for the scheduling and queuing of maintenance jobs and other jobs within TFS. You can see my blog post on all the different jobs in TFS 2012 for more information.

If we click the ‘Job Monitoring’ tab, we get some fairly ugly charts that give us some insight into how long the jobs are taking to execute.

image

There is another chart further down the ‘Job Summary’ page that shows us the number of times that a job has been run, and what was the status of each of those runs.

image

We can click on one of the green bars in that chart, or the blue bars in the previous chart, or the ‘Job History’ link in the navigation bar to see a different view of the TFS jobs.

This view shows us the number of jobs that were executing at a particular time, the average time that they waited in the job queue, and the average run time.

image

If you then click the ‘Job Queue’ link in the navigation bar, you can see which jobs are currently queued, their priorities and when they are expected to start.

image

New book: Professional Team Foundation Server 2012


I’m very pleased to announce that our new book Professional Team Foundation Server 2012 is now available!

It’s an update to the 2010 edition that reflects all the great new features and changes introduced in Visual Studio Team Foundation Server 2012. For example, there are whole new chapters on Managing Teams, Agile Planning Tools and Integration with Project Server. There’s also new content on the new Team Explorer interface, the Code Review tools, Local Workspaces and the updated Testing and Lab Management features. Throughout the book, we also talk about how to use the cloud hosted Team Foundation Service and talk about some of how the TFS internals have changed to support the service.

We hope that you enjoy this book as much as the previous one and we look forward to reading your reviews.

Book cover for: Professional Team Foundation Server 2012

ISBN: 9781118314098

You can buy the book in the following ways:

And you can preview some of the book before you buy:

Preview the book before you buy

Table of Contents

The book is broken up into five sections and it’s written in a way that you can either read the whole thing cover-to-cover or jump in to a particular part or chapter that interests you. My personal favourite chapters are the ones in Part V – Administration, since I wrote most of them. :)

Part I – Getting Started

1 – Introducing Team Foundation Server 2012

2 – Planning a Deployment

3 – Installation and Configuration

4 – Connecting to Team Foundation Server

Part II – Version Control

5 – Overview of Version Control

6 – Using Team Foundation Version Control

7 – Ensuring Code Quality

8 – Migration from Legacy Version Control Systems

9 – Branching and Merging

10 – Common Version Control Scenarios

Part III – Project Management

11 – Introducing Work-Item Tracking

12 – Customizing Process Templates

13 – Managing Teams and Agile Planning Tools

14 – Reporting and SharePoint Dashboards

15 – Integration with Project Server

Part IV – Team Foundation Build

16 – Overview of Build Automation

17 – Using Team Foundation Build

18 – Customizing the Build Process

Part V – Administration

19 – Introduction to Team Foundation Server Administration

20 – Scalability and High Availability

21 – Disaster Recovery

22 – Security and Privileges

23 – Monitoring Server Health and Performance

24 – Testing and Lab Management

25 – Upgrading Team Foundation Server

26 – Working with Geographically Distributed Teams

27 – Extending Team Foundation Server

Authors

With Ed joining Microsoft since the last book, that completes the set – all four authors work for Microsoft:

  • Ed Blankenship is the Microsoft Program Manager for the Lab Management scenarios for Team Foundation Server and the Visual Studio ALM product family. He was voted as Microsoft MVP of the Year for Visual Studio ALM & Team Foundation Server before joining Microsoft.
  • Martin Woodward is currently the Program Manager for the Microsoft Visual Studio Team Foundation Server Cross-Platform Tools Team. Before joining Microsoft, he was voted Team System MVP of the Year, and has spoken about Team Foundation Server at events internationally.
  • Grant Holliday is a Senior Premier Field Engineer for Microsoft in Australia. Prior to this role, he spent three years in Redmond, Washington as a program manager in the Visual Studio Team Foundation Server product group.
  • Brian Keller is a Principal Technical Evangelist for Microsoft specializing in Visual Studio and application lifecycle management. He has presented at conferences all over the world and has managed several early adopter programs for emerging Microsoft technologies.

This time around we also had long-time TFS/ALM MVP Steve St. Jean contributing to some of the book, as well as being a Technical Editor and checking all our facts.

Q & A

When people find out that I’ve written a book, there’s a few questions that often come up.

How much money do you make from the book?

A colleague wrote a book many years ago and he set my expectations right from the start. He used to say: “You don’t make a lot of money by writing a book – especially technical books”. Personally, the royalties are a nice surprise when they come, but I’m not headed for early retirement on them. :)

There’s really two ways that authors get paid for their contributions:

  • Advance – This is a fixed sum, negotiated with the publisher before you sign a contract. Usually it’s paid in instalments as you complete different milestones in the process: 50% draft, final draft, etc.
  • Royalties – This is how much you get for each sale of the book. There is not a single percentage; it varies depending on whether the book was sold in the USA, as an e-book, as a translation, etc.

For a more complete explanation of how it all works, check out Charles Petzold’s article on Book Royalties and Advances.

Then there’s the non-direct value – you get to say “I wrote the book” on your business card, which is instant credibility and opens up more opportunities.

You work for Microsoft – what do they think about you writing a book?

Microsoft has a Moonlighting policy which covers things like writing a book and building apps. Essentially, each of the authors had to seek approval from their manager before they could work on the book. The policy also has rules that say you can’t use any Microsoft resources and the work is not allowed to impact your daytime work duties.

Since the subject of the book is a Microsoft product and it helps educate people on how to use it, there was never going to be much resistance to the idea.

What’s the process for writing a book?

The Wiley author site has more information on the Life of a book, but in short the process is:

Proposal > Contract > Draft writing > Editors > Tech Editors > Author Review > Proofs > Printing

How long did it take you?

Writing a book takes a lot of time and it requires a lot of concentration. It took me a little while to settle into a rhythm, but eventually my style ended up being intense-focus weekends every few weeks:

  • Monday-Thursday: Start researching content for the chapter. Put all the links in a OneNote notebook. Do the hands on labs, etc. Basically immerse myself in the subject of that chapter and come up with a logical flow of sub-headings
  • Thursday night: Spend a few hours at home and take all the screenshots that I could possibly use.
  • Friday night, Saturday all day: Take my laptop to a local coffee shop without an Internet connection. Then just write, write, write. Fill out all the paragraphs for the sub-headings, put in all the screenshots and get the word count up to where it should be.
  • Sunday: Depending on where the word count was at, Sunday was usually spent reviewing and tidying up the formatting and getting it ready to submit. My goal was to upload the draft by Sunday night, since all our chapters were due on Mondays.

What is the most annoying part of writing a book?

Screenshots. We had it easy for the 2010 edition – the product was RTM, so nothing was changing. With the 2012 edition, we were writing the book before the product was released. That meant that every time the UI changed between Beta/RC/RTM/Update 1, we had to go back to check and update our screenshots.

Summary

To finish off, writing these books has been a very personally rewarding experience. I saw it as a way of capturing 4–5 years’ worth of accumulated knowledge and experience and getting it down on paper so that others can learn from it. And hey, never in my wildest dreams did I imagine that I would see my name in Russian on the front of a book.

Five years at Microsoft and a new job


Recently, I completed 5 years of service at Microsoft. The company makes a big deal of anniversaries that fall on the 5-year milestones with increasingly larger "optical crystal" monuments.

Optical crystal service awards

As part of my anniversary, I also imported another tradition back to my local branch office from my time in Redmond. The tradition says that on your anniversary, you bring in 1 pound of M&Ms for each year of service and share it in a bowl outside your office. I’ll tell you that 2.2kg doesn’t go far once people get the after-lunch munchies. :)

Bowl of M&M

I’m pleased to mark this anniversary, since it represents the longest time to date that I’ve been with a single employer. However, that is one of the beauties of a large company like Microsoft – there is the opportunity to change jobs and gain different experiences, but remain with the same company.

clip_image003

New Job: Senior Service Engineer, Team Foundation Service

Yes, that’s right, I have another new job. As a Service Engineer, I’ll be in the engine room of http://tfs.visualstudio.com/ keeping the service humming along, on-boarding new and exciting services (like the Load Testing Service) and evolving the maturity of the services. My main area of focus is on improving the efficiency of the Service Delivery team through automation and engineering improvements.

My history at Microsoft so far has been quite broad, which actually reflects how I approach most things.

Perhaps the most exciting part of the role, is that I will remain in Australia and work 100% from home with the occasional trip to the mother ships (Redmond/Raleigh). My experience so far has been a little different to Scott’s, but I’m planning a follow-up post on what it’s like to be a remote worker, in a completely different time zone. Part of running a global service like the TF Service is that there are customers in all time zones around the world using it. The Service Delivery team now has around-the-world coverage with me in Australia and other team members in India, Europe, North Carolina and Seattle. We’re still ironing out the processes as we get ready to launch the commercial service before the end of 2013.

After a break of a few years, I’m absolutely thrilled to be part of the Server and Tools Division Cloud and Enterprise Engineering Group again. I’m working amongst some of the brightest people I know and I am looking forward to having a huge impact on software and services that are relied upon by developers around the world.


What does a well maintained Team Foundation Server look like?


After spending some time out in the field looking at customers’ TFS environments, and more recently looking at some of Microsoft’s internal on-premises TFS deployments, I realised that some environments are configured and maintained better than others.

Some of the general concepts and the very TFS-specific configurations are talked about in Part 5 of my Professional Team Foundation Server 2012 book, but many of the basics were considered out of scope or assumed knowledge. Also, not everybody has read the book, even though it gets 5 stars and is considered “THE Reference for the TFS Administrator and expert!” on Amazon.

The purpose of this blog post is to give the Service Owners of TFS a check-list of things to hold different roles accountable for in the smooth operation of the server. It’s broken into 5 sections that roughly translate to the different roles in a typical enterprise IT department. In some cases, it might all be the one person. In other cases, it could be a virtual team of 50 spread all throughout the company and the globe.

  1. The initial setup and provisioning of the hardware, operating system and SQL platform
  2. Regular OS system administrator tasks
  3. Regular SQL DBA tasks
  4. TFS-specific configurations
  5. Regular TFS administrator tasks

The list is in roughly descending priority order, so even if you do the first item in each section, that’s better than not doing any of them. I’ll add as many reference links as I can, but if you need specific instructions for the steps, leave a comment and I’ll queue up a follow-up blog post.

Keep Current

  • Apply all security updates that the MBSA tool identifies. ‘Critical’ security updates should be applied within 48 hours – there are no excuses for missing Critical security updates. They are very targeted fixes for very specific and real threats. The risk of not patching soon enough is often greater than the risk of introducing a regression.
  • Be on the latest TFS release. (TFS 2012.4 RC4 at the time this post was written or TFS2013 RTM after November 13 2013. If you’re stuck on TFS2010, see here for the latest service packs and hotfixes.)
  • Be on the latest version of SQL that is supported by the TFS version. Check your SQL version here. (TFS 2010 = SQL2008R2SP3, TFS 2012.4 = SQL2012 SP1, TFS 2013 = SQL2012 SP1). Be on Enterprise edition for high-scale environments.
  • Be on the latest OS release supported by the combination of SQL + TFS. Most likely Windows Server 2008 R2 SP1 or 2012.
  • Be on the latest supported drivers for your hardware (NIC & SAN/HBA drivers especially).

Initial OS Configuration and Regular Management Tasks

  • Collect a performance counter baseline for a representative period of time to identify any bottlenecks and serve as a useful diagnostics tool in the future. A collection over a 24 hour period on a weekday @ 1-5min intervals to a local file should be sufficient. Don’t know which counters to collect? Download the PAL tool and look at the “threshold files” for “System Overview” on all your servers, “SQL Server” on your data tier servers, and “IIS” and “.NET (ASP.NET)” for your application tier servers. (See the logman example after this list.)
  • Ensure antivirus exclusions are correct for TFS, SQL and SharePoint. (KB2636507)
  • Ensure firewall rules are correct. I had an outage once where the network profile changed from ‘domain’ to ‘public’ due to a switch gateway change, and our firewall policy blocked SQL access for the ‘public’ profile which effectively took SQL offline for TFS.
  • Ensure page file settings are configured for an appropriately sized disk & memory dump settings are configured for Complete memory dump. If you get a bluescreen, having a dump greatly increases your chances of getting a root cause + fix. (KB254649), test the settings using NotMyFault.exe (during a maintenance window, of course)
  • Don’t run SQL or TFS as a local administrator.
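
For the performance counter baseline mentioned at the top of this list, a circular binary collector can be created with logman. The counters shown here are only a starting point – use the PAL threshold files for the full set:

logman create counter TFSBaseline -f bincirc -max 512 -si 00:01:00 -o "C:\PerfLogs\TFSBaseline" -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\PhysicalDisk(*)\Avg. Disk sec/Read" "\PhysicalDisk(*)\Avg. Disk sec/Write" "\Network Interface(*)\Bytes Total/sec"
logman start TFSBaseline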

Initial SQL Configuration

  • SQL Pre-Deployment Best Practices (SQLIO/IOmeter to benchmark storage performance)
  • SQL recommended IO configuration. SQLCAT Storage Top 10 best practices
  • Check disk partition alignments for a potential 30% IO performance improvement (especially if your disks were ever attached to a server running Windows Server 2003, but sometimes if you used pre-partitioned disks from OEM)
  • Ensure that Instant File Initialization is enabled (if the performance vs. security trade-off is appropriate in your environment. The article has more details). This enables SQL to create data files without having to zero-out the contents, which makes it “instant”. This requires the service account that SQL runs as to have the ‘Perform Volume Maintenance Tasks’ (SE_MANAGE_VOLUME) permission.
  • Separate LUNs for data/log/tempdb/system.
  • Multiple data files for TempDB and TPC databases. (See here for guidance on the “right” number of files. If you have less than 8 cores, use #files = #cores. If you have more than 8 cores, use 8 files and if you’re seeing in-memory contention, add 4 more files at a time.)
  • Consider splitting tbl_Content out to a separate filegroup so that it can be managed differently
  • Consider changing ‘max degree of parallelism’ (MAXDOP) to a value other than ‘0’ (a single command can peg all CPUs and starve other commands). The trade-off here is slower execution time vs. higher concurrency of multiple commands from multiple users.
  • Consider these SQL startup traceflags. Remember, the answer to “should I do this on all my servers?” is not “yes”, the answer is “it depends on the situation”.
  • Configure daily SQL ErrorLog rollover and 30 day retention.
  • Set an appropriate ‘max server memory’ value for SQL server. If it’s a server dedicated to SQL (assuming TFS, SSRS and SSAS are on different machines), then a loose formula you can use is to reserve: 1 GB of RAM for the OS, 1 GB for each 4 GB of RAM installed from 4–16 GB, and then 1 GB for every 8 GB RAM installed above 16 GB RAM. So, for a 32GB dedicated server, that’s 32-1-4-2=25GB. If you are running SSRS/SSAS/TFS on the same hardware, then you will need to reduce the amount further.
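
Taking the 32GB example in the last bullet, the setting itself is a quick bit of T-SQL (the value is in MB):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 25600;  -- 25GB for a dedicated 32GB server, per the formula above
RECONFIGURE;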

Regular SQL DBA Maintenance

(These are not TFS specific and apply to most SQL servers)

  • Backup according to the supported backup procedure (marked transactions, transaction logs, SSRS encryption key and use SQL backup compression and WITH CHECKSUM). It’s important to ensure that transaction log backups run frequently – they allow you to do a point-in-time recovery. It also checkpoints and allows the transaction log file to be reused. If you don’t run transaction log backups (and you’re running in FULL recovery mode, which is the default), then your transaction log files will continue to grow. If you need to shrink them, follow the advice in this article. (See the T-SQL example after this list.)
  • Run DBCC CHECKDB regularly to detect physical/logical corruption and have the best chance at repairing and then preventing it in the future. Ola Hallengren’s SQL Server Integrity Check scripts are an effective way of doing this, if your organisation doesn’t have an established process already. Even though the solution is free, if you use it, send Ola an email to say that you appreciate his work. The solution can also be used for backups and index maintenance for non-TFS databases. (TFS rebuilds its own indexes when needed and requires marked transactions as per the supported backup procedure.)
  • Ensure PAGE_VERIFY=CHECKSUM is enabled to prevent corruption. If it’s not, you have to rebuild indexes after enabling it to get the checksums set.
  • Manage data/log file freespace and growth.
  • Monitor for TempDB freespace (<75% available).
  • Monitor for long-running transactions (>60 minutes, excluding index rebuilds, backup jobs).
  • Monitor table sizes & row counts (there’s a script on my blog here, search the page for sp_spaceused).
  • Monitor SQL ERRORLOG for errors and warnings.
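
As a minimal illustration of the transaction log guidance above (the database name and path are placeholders, and this supplements rather than replaces the supported marked-transaction backup procedure):

BACKUP LOG [Tfs_DefaultCollection]
TO DISK = N'E:\Backups\Tfs_DefaultCollection_log.trn'
WITH COMPRESSION, CHECKSUM;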

TFS Configuration Optimizations

  • At least two application tiers in a load balanced configuration. That gives you redundancy, increased capacity for requests/sec, and two job agents for running background jobs. Ensure that your load balancer configuration has a TCP Idle Timeout of 60 minutes, or that all your clients are running a recent version. See here for more details.
  • Ensure that SQL Page Compression is enabled for up to a 3X storage reduction on tables other than tbl_Content (if running on SQL Enterprise or Data Center Edition). To enable, it’s the opposite of KB2712111.
  • Ensure that table partitioning is enabled for version control (if a large number of workspaces and running SQL Enterprise). Not recommended unless you have >1B rows in tbl_LocalVersion. Contact Customer Support for the script, since it’s an undocumented feature for only the very largest TFS instances (i.e. DevDiv).
  • Check that SOAP gzip compression is enabled (should’ve been done by TFS 2010 SP1 install. I have seen up to an 80% reduction in traffic across the wire and vastly improved user experience response times for work item operations).
  • Disable / monitor the IIS Log files so they don’t fill the drive: %windir%\system32\inetsrv\appcmd set config -section:system.webServer/httpLogging /dontLog:"True" /commit:apphost
  • Change the TFS App Pool Idle Timeouts from 20 minutes to 0 (no idle timeout), and disable scheduled recycling so that you don’t have an app-pool recycle during business hours. (See the appcmd example after this list.)
  • Implement a TFS Proxy Server and make sure people use it (especially build servers); even if no users are remote, it reduces the requests/sec load on the ATs. Configure it as the default proxy for your AD site using: tf proxy /add
  • Enable work item tracking metadata filtering if appropriate.
  • Enable SMTP settings and validate that they work. The most common issue here is that a SMTP server won’t relay for the service account that TFS is running as.
  • Set TFS’s NotificationJobLogLevel = 2, so that you get the full errors for any event notification jobs that fail.
  • Consider moving the application tier file cache to a separate physical and/or logical drive. See here for how to set a different dataDirectory, but don’t touch any of the other settings. The reason you want it on its own drive is 1) to separate the I/O load and 2) if you ever have to restore the database to an earlier point in time, you have to clear the cache so that you don’t end up sending the wrong content to users. If you make it a separate drive, you can just do a quick-format, which takes seconds. Otherwise you have to delete all the folders/files individually, which takes much longer.
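
For the app pool idle timeout and recycling settings mentioned in this list, an appcmd sketch looks like this (assuming the default pool name of "Microsoft Team Foundation Server Application Pool" – adjust it to match your server):

%windir%\system32\inetsrv\appcmd set apppool "Microsoft Team Foundation Server Application Pool" /processModel.idleTimeout:00:00:00
%windir%\system32\inetsrv\appcmd set apppool "Microsoft Team Foundation Server Application Pool" /recycling.periodicRestart.time:00:00:00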

Regular TFS Administrator Maintenance

René’s blog post Top 10 of things every TFS Administrator should do also covers some other things. 

Regular TFS Build Administrator Maintenance

This is a community contribution from Jesse on regular maintenance around Build Agents, Symbols and Drop shares:

  • Monitor disk space usage on the build agents
  • Monitor queue time for the builds, spin up additional agents if available and needed
  • Clean up the \Builds folder on build agents to remove old workspaces
  • Backup the Symbols share regularly
  • Backup the Builds Drop folder regularly
  • Exclude \Builds, \Symbols, \Drop, Team Explorer Cache from Anti-virus real time scanning

 

Exit Procedures

Another community contribution from Jesse – this is a set of things to check for when a user rolls-off a project or otherwise stops using the server:

  • Check for locked or checked out files
  • Check for queued builds
  • Check for remaining workspaces
  • Check for work items assigned to this account
  • Check for builds and source control items that are exclusively owned by the user
  • Back up their personal work item queries by exporting them all to WIQL

Other Resources

The ALM Rangers are a group of individuals from the TFS Product Group, members of Microsoft Services, Microsoft Most Valuable Professionals (MVPs) and technical specialists from technology communities around the globe, giving you a real-world view from the field, where the technology has been tested and used. If you haven’t seen some of the resources that they produce and maintain, I highly recommend that you check them out.

Hopefully this blog post has been an effective use of my limited keystrokes and together we can improve the predictability, reliability and availability of Team Foundation Server in your organisation.

Updates:

[October 9 2013]: Added notes on local admin, SQL Instant File Initialization, max server memory, transaction log shrinking, SMTP settings, cache directory settings, build administrator tasks and exit procedures.
[October 19 2013]: Added link to Ola’s solution for integrity checks and database backups.
[November 1 2013]: Added link to René’s blog post on Top 10 TFS administrator tasks
[November 16 2013]: Added reference to IIS & ASP.NET threshold files for PAL. Thanks Chetan.

TFS Administration Tool 2.3 (2013) Released


Just as I did after the last major TFS release, I have updated the TFS Administration Tool to depend upon the TFS 2013 Object Model. You no longer need Team Explorer 2012 installed to use the tool; it can be installed on machines running the stand-alone object model (http://aka.ms/TFSOM2013), TFS 2013 or Visual Studio 2013.

This release adds support for SharePoint groups/roles, thanks to a community contribution. There are no other major functional changes between this release and the previous (2.2) release.

http://tfsadmin.codeplex.com/

Currently, the MSI in the downloaded ZIP file is flagged by Windows SmartScreen as “unsafe”. Based upon the experience of the last release, it should build enough “reputation” within about a week to be considered safe.

If you find a bug, the best way to get it fixed is to upload a patch. You can also open an issue and include either the contents of the "Output" window or the contents of the log file saved in the "Logs" folder so that we can easily reproduce and investigate the problem.

TFS: Enable Team Project Portal script


If you don’t have SharePoint configured at the time that you upgrade your Team Foundation Server or when you attach a new TPC, then one of the things that you might be left with is this:

  • The Team Project is working fine
  • The SharePoint site is working fine
  • But they’re not linked or associated with each other as a Project Portal

Project Portal Settings in Visual Studio Team Explorer. The 'Enable team project portal' checkbox is not checked

Unfortunately (until now) the only way to link these two together again is to use Visual Studio to open up the Project Portal Settings for each team project and tick the box. If you have to do this for more than a few team projects on a server, it’s pretty tedious. Five is about the limit of my patience, but I have seen customers do up to 100.

 

EnableProjectPortal.exe

Download EnableProjectPortal.zip

Usage

EnableProjectPortal.exe <tfs server url> <team project id> <sharepoint relative path> <owned web identifier>

Example

EnableProjectPortal.exe "http://localhost:8080/tfs/" "2eb9c8a2-2243-4897-ac88-602bef270dd5" "/sites/TailspinToysCollection/Tailspin Toys" "224C16E0-00DA-4C98-9042-3D21228B2511"

This console application will use the ICatalogService API to do the equivalent of the ‘[x] Enable team project portal’ checkbox. You will need to collect some information before you can use it, but at least you can build up a batch file and do lots of team projects at once.

Get a list of projects and their IDs

There are plenty of different ways to get a list of Team Projects and their GUIDs, but this one is fine for a one-off:

1. Open SQL Server Management Studio

2. Connect to Team Project Collection database (e.g. Tfs_DefaultCollection)

3. Run the following query:

SELECT project_name, project_id
FROM tbl_Projects
WHERE [state] = 'WellFormed'
ORDER BY project_name

It should return something like this:

project_name      project_id
Tailspin Toys     2EB9C8A2-2243-4897-AC88-602BEF270DD5

Copy and paste this list into Excel

Get a list of the SharePoint sites and their WebIdentifier IDs

Once again, there is more than one way to get this information without going directly to the database, but for a one-off like this, it will be fine:

1. Open SQL Server Management Studio

2. Connect to the SharePoint content database that holds your team sites (e.g. WSS_Content)

3. Run the following query:

SELECT
    FullUrl as 'RelativePath',
    Id as 'OwnedWebIdentifier'
FROM AllWebs
ORDER BY RelativePath

It should return something like this:

RelativePath                                   OwnedWebIdentifier
                                               B032339F-D997-4B2C-B5D0-3CB6064D2F1A
sites/FabrikamFiberCollection                  919B7437-B8D9-4B56-8AB5-D5B22605278F
sites/FabrikamFiberCollection/FabrikamFiber    7485DD68-2C1D-4089-AD1E-7FA43D92065D
sites/team                                     1E3082E4-6517-401A-8D5F-22DF8ED1B308

Paste this list somewhere else in your Excel workbook

Construct a mapping table

Now we need to map the team projects and their IDs to the SharePoint sites. Use Excel to construct a table with the following format:

project_name       project_id                              RelativePath                                   OwnedWebIdentifier
Tailspin Toys      2EB9C8A2-2243-4897-AC88-602BEF270DD5    sites/TailspinToysCollection/Tailspin Toys     7FC7E412-F49C-488B-A023-8C1D61AE34C7
FabrikamFiber      FD6FA263-B3F9-45E3-96AF-AD67E75C9FF7    sites/FabrikamFiberCollection/FabrikamFiber    7485DD68-2C1D-4089-AD1E-7FA43D92065D

Now we can use this table to construct the arguments for EnableProjectPortal.exe. You can use this formula in a new column to the right, in Excel:

=CONCATENATE("EnableProjectPortal.exe ""http://localhost:8080/tfs/"" """,B2,""" """,C2,""" """,D2,"""")

Repairing connections

Once all the portal settings have been established, you should open the Team Foundation Server Administration Console and choose “Repair Connections”.

This will make sure all the SharePoint permissions and properties are set correctly.

TFS Administration Console showing the Repair Connections link

Log for repairing the connection between TFS and SharePoint

At this point you are done and you have saved yourself or your customer a lot of tedious clicking.

 

The Code

All the thanks go to Phil (another Aussie expat on the TFS team) for this utility. I was just the beneficiary and now I’m the messenger.

Here’s the guts of it where we set up a dependency property in the ICatalogService between the team project and the SharePoint site.

var projectPortalNodes = teamProject.NodeReferences[0].QueryChildren(
    new Guid[] { CatalogResourceTypes.ProjectPortal }, true, CatalogQueryOptions.ExpandDependencies);

CatalogNode projectPortalNode = null;

if (projectPortalNodes.Count > 0)
{
    // It already exists, so let's overwrite/set it with the values we want.
    projectPortalNode = projectPortalNodes[0];
}
else
{
    // It doesn't exist, so let's create it.
    projectPortalNode = teamProject.NodeReferences[0].CreateChild(CatalogResourceTypes.ProjectPortal, "Project Portal");
}

// Set properties
projectPortalNode.Resource.Properties["ResourceSubType"] = "WssSite";
projectPortalNode.Resource.Properties["RelativePath"] = sharePointRelativePath;
projectPortalNode.Resource.Properties["OwnedWebIdentifier"] = sharePointOwnedWebIdentifier;

// BUG: Uses the first SharePoint web resource. Doesn't work with multiple.
projectPortalNode.Dependencies.SetSingletonDependency("ReferencedResource", sharepointWebAppResources[0].NodeReferences[0]);

catalogService.SaveNode(projectPortalNode);

EnableProjectPortal.zip

The Physical Internet


Recently I read the book called Tubes: A Journey to the Center of the Internet by Andrew Blum. If you’ve ever wondered how your computer connects to other computers around the world, this book is a must read. I consider this essential reading for any engineer responsible for delivering online services or networks.

Tubes: A Journey to the Center of the Internet

Once you’ve finished reading that book, go and read Wired: Mother Earth Mother Board by Neal Stephenson. It’s quite lengthy at 42,000 words and was written in 1996, but it makes a great companion to Tubes. Here’s a sample:

One day a barge appears off the cove, and there is a lot of fussing around with floats, lots of divers in the water. A backhoe digs a trench in the cobble beach. A long skinny black thing is wrestled ashore. Working almost naked in the tropical heat, the men bolt segmented pipes around it and then bury it. It is never again to be seen by human eyes. Suddenly, all of these men pay their bills and vanish. Not long afterward, the phone service gets a hell of a lot better.

Caution: Submarines

Ever since I was a kid, I’ve had a fascination with submarine cables. I can trace these memories back to family holidays along the NSW South Coast. As you drive up and down the coast, there are lots of rivers to cross and on the shore beside every bridge was one of these signs:

clip_image002

Photo credits: brynau on Flickr

I remember asking my grandparents what the signs were for and being told something about stopping the submarines coming down the river, but that didn’t sit quite right with me. Why would they have a big sign advertising that there was protection there? And why is there a picture of a boat with an anchor on it?

Connecting Australia to the world

As an early user of the Internet in Australia, two things were clear: It was slow and it was expensive. At the time, Australia was connected to the world via a twin pair of 560Mbit/sec cables that went via New Zealand & Hawaii: PacRimWest, PacRimEast and Tasman2.

Then in the year 2000, things started to change. Internet access started becoming a lot faster and a lot more affordable. This was due to the commissioning of two significant cables:

"Southern Cross Cable route" by J.P.Lon, Mysid Wikipedia Commons
Image credits: “Southern Cross Cable route” by J.P.Lon, Mysid Wikipedia Commons
"SEA-ME-WE-3-Route" by J.P.Lon on Wikipedia
Image credits: “SEA-ME-WE-3-Route” by J.P.Lon on Wikipedia

These two cables were built using two different business models which are talked about in the book. To summarize:

  1. Consortium: Cables were financed by consortiums of (usually) government-owned telephone providers in the countries that the cable would pass by. Each provider would be responsible for part of the cost of the cable in return for having access to it. Prior to 1997, this is how all cables were built. Because the financiers of the cables were from the “old world” club of telephone companies, capacity on the cable was sold in “circuits”. The more bandwidth you wanted, the more circuits you had to buy. It also meant that as the fibre optic technology along the cable was upgraded, they could sell more circuits at the same price.
  2. Privately financed: “New world” private investors did the math and realized that they could build submarine cables and sell the rights to the actual fibre pairs in the cable. This then allowed the communications providers to put their own fibre optic equipment on the ends of the cable and send/receive as much data as they were capable of, without per-circuit fees.

As the rush of investment on these new world cables picked up pace, some of the old world consortiums felt so threatened that they ended up buying capacity on the cables themselves!

Submarine Cables around the world

Much of the source material in the book originates from a company called TeleGeography. TeleGeography are a telecommunications market research firm that has been studying the physical Internet since 1998. Along with things like bandwidth and co-location pricing research, they also sell a 36″ x 50″ wall map of submarine cables for $250. They also have an interactive online version with additional context for each country’s Internet access.

TeleGeography 2014 Submarine Cable Map

Being the techie that I am, I ended up getting a framed copy of the map and have it on my wall as a reminder of how far away Australia is from the rest of the world. (Not like I need a reminder! :)

Cable Landings

In the December 2009 edition (17.12) of Wired magazine, there was an article called Netscapes: Tracing the Journey of a Single Bit by Andrew Blum that included this picture:

Grover Beach, California. Photo from Wired article: Netscapes: Tracing the Journey of a Single Bit

Grover Beach, California

After traversing the continent, our packet will arrive in an LA building much like 60 Hudson Street. But if it wants to ford the Pacific, it can jog north to a sleepy town near San Luis Obispo. This sheltered section of coastline is not a busy commercial port, so it’s unlikely that a ship will drag an anchor through a transoceanic cable here. A major landing point for data traffic from Asia and South America, the station at Grover Beach sends and receives about 32 petabits of traffic per day. As our bit streams through the Pacific Crossing-1 cable (underneath the four posts, left), it’s on the same trail as some of the most important information in the world: stock reports from the Nikkei Index, weather updates from Singapore, emails from China — all moving at millions of miles an hour through the very physical, very real Internet.

This is just one of hundreds of cable landing points around the world, and the book describes the process of “landing” a cable on a beach and connecting it to a nearby “Landing Station” like this one. These are usually nondescript buildings near the beach, but they are not actually required to be on the beach.

Internet Exchanges

The next step in the journey of a bit is “How do all these cables criss-crossing the globe connect to each other?”

It turns out that there are some pretty significant Internet exchange points (IX or IXP) spread around the world for this purpose. An IXP allows networks to directly “cross-connect” (peer) with each other, often at no charge. This literally means patching a cable between the two networks and into the same switch. Keith Mitchell’s presentation Interconnections on the Internet: Exchange Points talks about the different interconnection models and what determines the success of an IXP.

Wikipedia has a list of Internet exchange points by size and TeleGeography lists them by country. The largest ones by traffic volume are:

Unsurprisingly, you will find many cloud service providers (i.e. Azure, Amazon, Google, Facebook, Akamai, etc) have major datacenters located near these exchange points. This allows them to peer with lots of ISPs for cheap/free traffic and reduces the latency between their services and their customers.

Aside: Net Neutrality, Interconnection and Netflix

I won’t go into the details here, but these articles make for interesting reading on the topic of “paid for” interconnects and how they can dramatically affect things like your video streaming experience.

Direct line from Chicago to New York

One of the other books that I came across recently is called Flash Boys by Michael Lewis.  The first chapter (which is summarised in this Forbes article) describes how Dan Spivey of Spread Networks came up with the idea to build a fibre optic line directly between Chicago and New York for sending low-latency trades. Dan helped devise a low-latency arbitrage strategy, wherein the fund would search out tiny discrepancies between futures contracts in Chicago and their underlying equities in New York.

Book: Flash Boys by Michael Lewis

Since the light signals in a fibre optic cable travel at a fixed speed (roughly two-thirds of the speed of light in a vacuum), the only way to get the signals to the other end faster is to reduce the distance.

By building a cable that runs nearly as straight as the crow flies, Spread Networks was able to shave about 100 miles off the route and 3 milliseconds off the latency between the two trading data centers. This made the cable extremely valuable, and they ended up selling the exclusive rights to a single broker firm (since if more than one person had access to the cable, that devalued it).

Dan was obsessed with the length of the cable, since every twist and turn adds to the latency. One extreme example: where the cable ducts ran down one side of the road and needed to cross to the opposite side at an intersection, instead of making two 90-degree turns they laid the cable diagonally across the road.

 

I hope you’ve enjoyed this quick excursion around the physical infrastructure of the Internet. If you find any more interesting articles or books on the topic, I’d love to hear about them.

This blog has moved!

TFS2012: IntelliSense for customizing Work Item Types using XML


Team Foundation Server allows you to modify the Work Item Type definitions. You can use a graphical interface like the Process Editor included in the Team Foundation Server Power Tools, or you can edit the raw XML.

For making changes across many work item types, I prefer to edit the raw XML in Visual Studio, since it allows me to use Find & Replace, Copy/Paste, and other useful text-editing functions. 

One very useful feature of Visual Studio is IntelliSense for editing XML files. To activate IntelliSense for XML files, you need to have the XSD schema files in a special directory on your machine.

In this blog post, I will show you how you can enable IntelliSense for editing Work Item Tracking XML files. This gives you the flexibility of editing the raw XML, with the safety net of IntelliSense and XML validation. It’s based upon an old blog post from Ben Day and updated for Team Foundation Server 2012.

Obtaining the latest schema files

Download here (11KB, Zip file)

Or, you can open Microsoft.TeamFoundation.WorkItemTracking.Common.dll from your GAC in Reflector and export out the schema files which are embedded as resources:

image

Setting them up so IntelliSense works

Extract the XSD files to this folder on your local machine:

C:\Program Files (x86)\Microsoft Visual Studio 11.0\Xml\Schemas

This is where the Visual Studio IntelliSense engine looks for matching schema files, when you open an XML file.

Opening Work Item Type definitions in XML editor, instead of Process Editor

If you have the Team Foundation Server Power Tools installed, the Process Editor plug-in (ProjectTemplateEditor, in the list) is set as the default handler for work item XML files. So you get this UI view, rather than the raw XML:

image

To change this behaviour, you can go to:

  • File > Open …
  • Select the Work Item XML file
  • Instead of clicking the ‘Open’ button, click the little arrow next to the ‘Open’ button and choose ‘Open With…’

clip_image001

You can then choose ‘XML (Text) Editor’ and optionally set it as the default editor for these files in the future.

clip_image002

Once you’ve followed all these steps, you get the joy of editing the Work Item Type XML file with the power and syntax checking of IntelliSense.

image

The entire Work Item Type XML schema is documented on MSDN at Index to XML Element Definitions for Work Item Types.
