How important is typing ability?

As a programmer, I spend a lot of time in front of a keyboard and naturally I’ve gotten pretty good at typing. Out of curiosity I took a typing test to see how fast I am (surprisingly I haven’t taken one of those in a long time):

Your speed was: 89wpm.

You made 1 mistake

Pretty good, especially when I had to type the word PASSEGGIERI (which I got right) and Turkish (which I surprisingly got wrong). This got me wondering how I compared to the average programmer typing speed. After minutes upon minutes of Googling, I found few results and the results I did find showed that 1.) programmer typing speed is all over the map and 2.) I’m apparently in the upper echelon of programmer typists.

#1 is interesting, #2 is irrelevant.

I find it quite interesting that programmers, who, like me, spend a lot of time in front of a keyboard, are not more consistently in the 70+wpm category. It doesn’t take much time to learn the keyboard, and I would imagine practicing a few hours a day would increase speed very quickly. But it doesn’t matter. You’ll notice that I titled this post

How important is typing ability?

and not

How important is typing speed?

Typing speed is irrelevant when it comes to programming. In this world of Intellisense and copy-paste it doesn’t matter how fast your fingers can move. Programming requires a great deal of thought and speed is irrelevant if you don’t know what you’re typing. I could actually argue that typing speed increases bugs in code.

What is important is typing ability. A programmer should know where all of the keys are on the keyboard and should be able to touch type. A programmer should know where = is and what finger to hit { with. Without thinking. All of a programmer’s thought should go into what is being typed, not how it’s getting entered.

If you hunt-n-peck, you can’t program. There, I said it.

Dependency Injection in ASP.NET vNext

One of the big headlines of ASP.NET vNext is that dependency injection is built into the framework. It’s a pretty basic implementation of DI, but I’m sure it will suffice for most applications. I’ll show how to set up and use the built in DI container below, but first…

What is Dependency Injection?

The best definition of DI I have found is from James Shore:

Dependency injection means giving an object its instance variables. Really. That’s it.

As James implies, it’s very simple, and essentially means a class gets the services it uses pushed to it rather than creating them itself. So for instance if you have an EF context you use to get data from a database, you would push the EF object to your class via dependency injection rather than instantiating the EF context in the class.
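
As a contrived C# sketch (the class names here are made up, not from the vNext samples), the difference looks like this:

// Imagine this stands in for an Entity Framework context (made-up type)
public class MyDbContext { }

// Without DI: the class news up its own context, so it is welded to that concrete type
public class ReportService
{
    private readonly MyDbContext _context = new MyDbContext();
}

// With DI: the context is pushed in from the outside, typically via the constructor
public class ReportServiceWithDi
{
    private readonly MyDbContext _context;

    public ReportServiceWithDi(MyDbContext context)
    {
        _context = context;
    }
}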

How do I use the vNext DI container?

This is definitely the easiest DI container to set up and configure out of all of the ones I have used. You should be able to get it up and running within 10 minutes on your own or in 9 minutes and 38 seconds if you follow this guide.

vNext Home

For this example we’ll use the ASP.NET vNext Home samples, specifically the HelloMvc sample. For instructions on getting that up and running view my Getting Started with ASP.NET vNext post.

Dependency Injection in ASP.NET MVC

The dependency injection code can be found at https://github.com/aspnet/DependencyInjection. It ships as part of the MVC framework, however, which means it’s already pulled into our Home repository, which in turn means you don’t have to do anything!

Class

First things first – we need a thing to inject. Let’s create a TestContext class to inject into our controllers that has a single method that returns the current date and time as a string. (And an interface for it, obviously!)

using System;

namespace HelloMvc.Test
{
    public interface ITestContext
    {
        string GetDate();
    }
}

using System;

namespace HelloMvc.Test
{
    public class TestContext : ITestContext
    {
        public string GetDate()
        {
            return DateTime.Now.ToString();
        }
    }
}

Add Scoped

Now… our future controllers are going to ask for an ITestContext, and we need to tell the controller builder what concrete type to give them. This is where AddScoped<TService, TImplementation> comes in. The AddScoped method has generic type parameters for the interface type (what is asked for) and the implementation type (the concrete type passed in). It is an extension method on IServiceCollection, and you call it in the Configure method at app startup.

public void Configure(IBuilder app)
{
    app.UseErrorPage();

    app.UseServices(services =>
    {
        // Register MVC's services, then map ITestContext requests to TestContext
        services.AddMvc();
        services.AddScoped<ITestContext, TestContext>();
    });

    app.UseMvc();
    app.UseWelcomePage();
}

Constructor Injection

Now, let’s update the HomeController to ask for an ITestContext. When a request comes in that the HomeController will handle, the controller builder will see it wants an ITestContext and will give it a concrete TestContext. I also updated the User method to use the new context to set the user’s Name to the current date and time.

using Microsoft.AspNet.Mvc;
using MvcSample.Web.Models;
using HelloMvc.Test;

namespace MvcSample.Web
{
    public class HomeController : Controller
    {
        private ITestContext _context;

        // The controller builder injects a concrete TestContext here
        public HomeController(ITestContext context)
        {
            _context = context;
        }

        public IActionResult Index()
        {
            return View(User());
        }

        public User User()
        {
            User user = new User()
            {
                Name = _context.GetDate(),
                Address = "123 Real St."
            };

            return user;
        }

        public IActionResult Thing()
        {
            return View();
        }
    }
}

Now, if you run the app (run k web from the command line, if you forgot) you should see Hello 6/2/2014 8:04:05 PM! (with your own current date and time, obviously).

And there you have it! A simple explanation of the new dependency injection service in ASP.NET vNext. Next time, we’ll look into using Ninject or StructureMap or some other IoC container with a vNext app.

ASP.NET vNext Links

ASP.NET vNext was announced a few weeks ago and there are now some great resources available explaining what it is, why it’s here and why it’s a really freaking important change.

Github

All of ASP.NET vNext is open source and available on Github. The Home project linked below has sample code that shows you how to get vNext up and running.

David Fowler

David is one of the principal developers of ASP.NET vNext and has had his hand in most pieces of the stack. He has a few write-ups on his personal blog about the scope, direction and details of ASP.NET vNext.

Scott Hanselman

Scott is a developer at Microsoft on the Web Platform Team (that’s his main gig, among other side projects: his blog, podcasts, YouTube channels, speaking, etc etc etc). Scott wrote a very good overview of vNext.

TechEd

TechEd is Microsoft’s annual technology conference. In the videos below Scott Hanselman gives two talks about vNext – the first is a 100 level course with Scott Hunter (The Lesser Scotts, as they call themselves) providing an overview of vNext. The second is a 400 level course with principal developer David Fowler that dives deep into the code.

ASP.NET

The ASP.NET site itself has been updated with information about vNext, and includes updates to the Music Store and BugTracker sample applications.

Graeme Christie

Graeme provides a great overview of how to get vNext up and running on OSX and Linux.

Getting started with ASP.NET vNext

ASP.NET vNext is the next version of the ASP.NET web framework. The focus this time around is to strip out unnecessary bits and make a leaner, meaner, easier-to-use framework.

A few of the things done include:

  • Side-by-side deployment of .NET – you can deploy all code, dependencies and the .NET framework itself in the bin directory of your site
  • A cloud-optimized version of .NET, which is very lean – the full .NET Framework is over 200 MB, while the cloud-optimized version is ~11 MB
  • ASP.NET Web API and ASP.NET MVC are now merged into one framework
  • No dependency on IIS – you can host your sites in IIS, or in your own custom process
  • A new project.json file that holds references and configuration for the project
  • You can compile C# using the Roslyn compilers, which means no dependency on MSBuild or Visual Studio

ASP.NET vNext Home

To start playing with vNext, go to the ASP.NET vNext repo on GitHub and clone it. The Home repository is a starting point that includes samples and documentation.

After you have the project cloned, run:

kvmsetup.cmd

This will download any necessary framework files and dependencies. KVM is the K Version Manager, which is what allows you to download and manage multiple versions of the K runtime. Next, run

kvm install 0.1-alpha-build-0421

This will download and install the 0421 build of the K Runtime and place it in your user profile. (0421 was the latest build as of this post. You can upgrade the version by running kvm upgrade)

We’ll focus on the ASP.NET MVC sample, so navigate to Samples/HelloMvc and run

kpm restore

This will look into project.json and will load all dependencies required for the project.

Finally, run

k web

This will start the K runtime and will attempt to start the web configuration specified in the project.json file. This configuration tells the runtime to start up and listen on port 5001.
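
For reference, the piece of project.json that defines this looks roughly like the snippet below – the exact package and argument names may differ in your clone of the Home repo, so treat it as a sketch:

{
    "commands": {
        "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://localhost:5001"
    }
}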

If you navigate to http://localhost:5001 you should see the welcome page for the site. Hooray! From here, you can add more controllers, views, etc and build out a site. When it comes time to release, you can copy your project directory up to your server, run k web and have your site running without ever installing .NET. The Lesser Scotts even copied their project to a USB key and ran it off of that. Awesome.

No Visual Studio!??

What’s very cool about this is there is no dependency on Visual Studio or MSBuild. You can, if you so desire, develop in notepad, notepad++, vim, or whatever other app you want. The Roslyn compilers allow for this to happen – all you have to do while developing is change a file, save it, and refresh your browser. This makes it a much easier, seamless, awesome programming experience.

With ASP.NET vNext Microsoft is making an effort to make developing .NET applications easier. There is no need to install a massive framework, there is less compile and load time, and all of it is open source. It’s very, very exciting.

Difference between git reset soft, mixed and hard

The reset command. Confusing. Misunderstood. Misused. But it doesn’t need to be that way! It’s really not too confusing once you figure out what’s going on.

Definitions

First, let’s define a few terms.

HEAD

This is an alias for the tip of the current branch, which is the most recent commit you have made to that branch.

Index

The index, also known as the staging area, is the set of files that will become the next commit. Once you commit, the current HEAD becomes the parent of that new commit.

Working Copy

This is the term for the current set of files you’re working on in your file system.

Flow

When you first checkout a branch, HEAD points to the most recent commit in the branch. The files in the HEAD (they aren’t technically files, they’re blobs but for the purposes of this discussion we can think of them as straight files) match that of the files in the index, and the files checked out in your working copy match HEAD and the index as well. All 3 are in an equal state, and Git is happy.

When you perform a modification to a file, Git notices and says “oh, hey, something has changed. Your working copy no longer matches the index and HEAD.” So it marks the file as changed.

Then, when you do a git add, it stages the file in the index, and Git says “oh, okay, now your working copy and index match, but those are both different than HEAD.”

When you then perform a git commit, Git creates a new commit, HEAD moves to point at it, and since the index and working copy both match it, Git is happy once more.
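
A minimal command-line sketch of that cycle (file.txt is just a placeholder name):

> echo "a change" >> file.txt
> git status                        # file.txt is modified: working copy differs from index and HEAD
> git add file.txt
> git status                        # file.txt is staged: working copy and index match, HEAD differs
> git commit -m "Change file.txt"   # the new commit becomes HEAD; all three match again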

Reset

If you just look at the reset command by itself, all it does is reset HEAD (the tip of the current branch) to another commit. For instance, say we have a branch (the name doesn’t matter, so let’s call this one “super-duper-feature”) and it looks like so:

[Diagram: the branch with HEAD pointing at the latest commit]

If we perform:

> git reset HEAD

… nothing happens. This is because we tell git to reset this branch to HEAD, which is where it already is. But if we do:

> git reset HEAD~1

(HEAD~1 is shorthand for “the commit right before HEAD”, or put differently “HEAD’s parent”) our branch now looks like so:

[Diagram: HEAD moved back to its parent commit]

If we start at the latest commit again and do:

> git reset HEAD~2

our branch would look like so:

[Diagram: HEAD moved back two commits, to its grandparent]

Again, all it does on a basic level is move HEAD to another commit.

Parameters

So the reset command itself is pretty simple, but it’s the parameters that cause confusion. The main parameters are soft, hard and mixed. These tell Git what to do with your index and working copy when performing the reset.

Soft

The --soft parameter tells Git to reset HEAD to another commit, but that’s it. If you specify --soft Git will stop there and nothing else will change. What this means is that the index and working copy don’t get touched, so all of the files that changed between the original HEAD and the commit you reset to appear to be staged.

[Diagram: after a soft reset, the working copy and index still contain the changes, shown as staged]

Mixed (default)

The --mixed parameter (which is the default if you don’t specify anything) will reset HEAD to another commit, and will reset the index to match it, but will stop there. The working copy will not be touched. So, all of the changes between the original HEAD and the commit you reset to are still in the working copy and appear as modified, but not staged.

[Diagram: after a mixed reset, the changes remain in the working copy but are no longer staged]

Hard

The --hard parameter will blow out everything – it resets HEAD back to another commit, resets the index to match it, and resets the working copy to match it as well. This is the most dangerous of the three and is where you can cause damage. Data might get lost here*!

[Diagram: after a hard reset, HEAD, the index and the working copy all match the target commit]
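
To make the three flavors concrete, here is a quick sketch; each reset is shown as if run from the original tip of the branch, not one after another:

> git reset --soft HEAD~1
> git status    # the undone commit's changes are still in the index, shown as staged

> git reset --mixed HEAD~1    # same as a plain git reset HEAD~1
> git status    # the changes are only in the working copy, shown as modified but not staged

> git reset --hard HEAD~1
> git status    # clean; the index and working copy were reset along with HEAD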


* You can recover it using git reflog but that’s out of scope here.

Twitter isn’t a blogging service, so let’s kill the tweetstorm

There’s a new way of posting on Twitter that’s gaining in popularity – The Tweetstorm™. Coined by BuzzFeed (as far as I know), a Tweetstorm is a message or rant that spans multiple tweets, with each tweet commonly prefixed by its number and a slash (1/, 2/, and so on). BuzzFeed describes it as such:

Beginning with a simple “1/”, Andreessen began to launch off on blog post-length lectures, 140 characters at a time. Many are 10 or 15 tweets long and shot off in rapid succession.

The multi-tweet is, by all measures, a perfectly normal bit of Twitter behavior; sometimes an important thought or piece of news runs over 140 characters. There are even platforms, like TwitLonger, which allow users to attach a longer message to tweets to work around the 140 character rule. However, what sets Andreessen’s tweetstorm™ apart from the conventional multi-tweet is any indication of anticipated length. Instead, a tweetstormer™ gives no real indication how long it’s going to take and assumes that the reader is more than OK with this.

Let me be honest for one second – I don’t like it. It’s dumb, and it defeats the entire purpose of Twitter. Obviously Marc Andreessen feels differently than I do, because he’s a master of Tweetstorms.

Twitter is good for tidbits of information – a status update, a link, or your new hair color. I often think of it as quality control on an assembly line – many workers stand around a conveyor belt, picking out items that aren’t up to snuff. Not every worker finds every low quality item, but there are enough workers standing at this belt that at the end of the line almost all of the low quality items have been removed.

[Image: workers at an assembly line]

Twitter should be used in the same way. Scroll around, peruse, window shop. When something grabs your attention – read it, review it, click the link, whatever you need to do. You’re not going to see every single thing in your feed nor should you. If something’s important, someone down the line (the rest of the users on Twitter, in this analogy) might see it and read it. At some point if the information is important enough you’ll hear about it through retweets or another communication medium.

Using it for anything more than that doesn’t even make any sense. Each tweet should stand on its own with its own context; they should be short and succinct. Posting 12 consecutive 140-character messages and expecting your user to read and follow along is asking too much. The user can’t browse over it and quickly know what’s there – they have to stop and read a blog post worth of text to figure out what’s being posted. A smart man once said:

Understand the limitations of the communication medium you are using and know when to escalate to another, more appropriate one.

Jeff’s absolutely right. Know when to elevate from Twitter into a more appropriate communication medium – Facebook, an email, or a blog post. Please, just don’t blog on Twitter.

The Universe is not programmable, but we can document what we can

Wired posted an article with the idea that the entire universe is an API and explained how we need to tap into its potential. The basic premise of their argument is that an API “lays out the code’s inputs and outputs” and that everything in the universe also has inputs and outputs, therefore has a documentable API. Although the idea behind the argument is valid, the argument falls short in a number of ways.

The term programmable is misused in this case. The definition of programmable is:

adjective
1. capable of being programmed.

noun
2. an electronic device, as a calculator or telephone, that can be programmed to perform specific tasks.

If something is programmable, it means it can be set up to perform tasks. It can be coerced to do what we tell it to, and only when we tell it to. If there’s anything I learned in my physics courses, it’s that sometimes you can’t predict or control things. Saying we can predict and control all things in the universe is like saying Lindsay Lohan is ever going to get it together. It just isn’t right.

Where the argument falls even shorter is that the API analogy doesn’t fit the argument he’s trying to make anyway. The argument behind the noise is to document our surroundings. Write down what we know and what we’re figuring out. And you know what? We are already doing that. It’s called science. The human race is already trying to figure out the “API of the universe” and we have been for years. Years upon years. Basic mathematics has been around for, like, forever. The Babylonians estimated and documented √2 to 5 decimal places in ~1800 BC. Newton watched an apple and discovered that if (world == Earth) gravity == 9.8 m/s^2. Discovering the facts of the universe is nothing new.

What Keith is really calling for is for all research to be stored in a centralized place that is available to everyone. Sure, there are places like arXiv, but those host very low level technical papers – nothing for the masses. There needs to be a place where the collective knowledge of the human race can be put on display. Wikipedia is close, and perhaps it can fill that role as long as everyone gets on board.

This idea that we should document and share our knowledge of the world around us is the one thing he and I agree on. Knowledge should be available to all, and it shouldn’t be limited to those who can afford it. We should be promoting education, learning, discovery and science. Keith is right in saying that corporations are going to stifle change, and as a civilization we should show them the importance of sharing findings for the good of humanity over monetary gain.

The API analogy is cute, but that’s it. The main point behind the analogy is what’s solid here, and that’s what should be taken from his article.

Get some packages with Microsoft OneGet

Are you a Windows user? Do you see people using apt-get and Homebrew and get filled with rage? Are you not a fan of chocolate? Well, then, you are in luck! Introducing, OneGet.

What is OneGet?

OneGet is Microsoft’s new package manager which allows you to discover and install new software on Windows machines. It is similar to apt-get on Linux, Homebrew on OSX, and even the PowerShell-based Chocolatey package manager. When I say similar to Chocolatey, however, I don’t mean that it replaces Chocolatey. In fact, it embraces it. OneGet is essentially an interface to many different package repositories, each repository hosting any number of different pieces of software. Chocolatey is one of those repositories and in fact is the one and only repository currently available. As more and more repositories become available you can add each of them as a source and query all of them at the same time. Awesome.

How do I get it?

To install OneGet, install the Windows Management Framework V5 Preview. This will, among a few other things, install PowerShell 5 along with the OneGet PowerShell module. Once installed, OneGet will be available the next time you open PowerShell. Please note that this is Windows 8/Windows Server 2012 only and that it’s a CTP and is subject to change!
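
Once that’s done you can sanity-check that PowerShell 5 and the OneGet module actually made it onto the machine (the exact version numbers will vary by build):

> $PSVersionTable.PSVersion
> Get-Module -ListAvailable OneGet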

How do I use it?

There are 7 cmdlets available, allowing you to manage repositories and packages. To view a list of the available cmdlets use the Get-Command command:

> Get-Command -Module OneGet

  CommandType     Name                                               Source
  -----------     ----                                               ------
  Cmdlet          Add-PackageSource                                  OneGet
  Cmdlet          Find-Package                                       OneGet
  Cmdlet          Get-Package                                        OneGet
  Cmdlet          Get-PackageSource                                  OneGet
  Cmdlet          Install-Package                                    OneGet
  Cmdlet          Remove-PackageSource                               OneGet
  Cmdlet          Uninstall-Package                                  OneGet

There currently is no documentation for these, so I’ll detail what they do below.

Get-PackageSource

This cmdlet lists the available repositories you have added to OneGet. As I stated above, Chocolatey is the only one so far.

> Get-PackageSource

Name                          Location                      Provider                                          IsTrusted
----                          --------                      --------                                          ---------
chocolatey                    http://chocolatey.org/api/v2/ Chocolatey                                            False

Add-PackageSource and Remove-PackageSource

These will add, and obviously remove, package repositories. You’ll (hopefully) use this soon when more repositories become available. The Add-PackageSource cmdlet takes Name, Provider and Location parameters at a minimum.

> Add-PackageSource chocolatey -Provider Chocolatey -Location http://chocolatey.org/api/v2/
> Remove-PackageSource chocolatey

Get-Package

You can view a list of all packages currently installed on your system by using the Get-Package command:

> Get-Package

Name                             Version          Status           Source         Summary
----                             -------          ------           ------         -------
7zip                             9.22.01.20130618 Installed        Local File     7-Zip is a file archiver with a hi...
7zip.install                     9.22.01.20130618 Installed        Local File     7-Zip is a file archiver with a hi...

Find-Package

To view a list of packages available from all of your repositories, use the Find-Package command. The first time you run it, it will want to install and set up NuGet:

> Find-Package

  RequiresInformation
  The NuGet Package Manager is required to continue. Can we please go get
  [Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"):

From there, it will give you a list of all available packages:

Name                             Version          Status           Source         Summary
----                             -------          ------           ------         -------
1password                        1.0.9.340        Available        chocolatey     1Password - Have you ever forgotte...
7zip                             9.22.01.20130618 Available        chocolatey     7-Zip is a file archiver with a hi...
7zip.commandline                 9.20.0.20130618  Available        chocolatey     7-Zip is a file archiver with a hi...
7zip.install                     9.22.01.20130618 Available        chocolatey     7-Zip is a file archiver with a hi...
ack                              2.04             Available        chocolatey     ack is a tool like grep, designed ...
acr                              2.6.0            Available        chocolatey
ActivePerl                       5.14.2.2         Available        chocolatey     ActivePerl is the leading commerci...

...

zabbix-agent                     2.2.1            Available        chocolatey     zabbix
zadig                            2.1.1            Available        chocolatey     USB driver installation made easy
zetaresourceeditor               2.2.0.11         Available        chocolatey     zetaresourceeditor
zoomit                           4.50             Available        chocolatey     ZoomIt is a screen zoom and annota...
zotero-standalone                4.0.19           Available        chocolatey     Zotero [zoh-TAIR-oh] is a free, ea...

You can also provide a filter to search for a specific package:

> Find-Package 7zip

Name                             Version          Status           Source         Summary
----                             -------          ------           ------         -------
7zip                             9.22.01.20130618 Available        chocolatey     7-Zip is a file archiver with a hi...

Install-Package and Uninstall-Package

To install a package, use Install-Package. You’ll have to be running PowerShell as Administrator to install packages, and have your execution policy set to RemoteSigned.
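
If your execution policy isn’t already set, one command from that elevated prompt takes care of it:

> Set-ExecutionPolicy RemoteSigned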

> Install-Package 7zip

Installing Package '7zip' from untrusted source
WARNING: This package source is not marked as safe. Are you sure you want to install software from 'chocolatey'
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): Y

Name                             Version          Status           Source         Summary
----                             -------          ------           ------         -------
7zip.install                     9.22.01.20130618 Installed        chocolatey     7-Zip is a file archiver with a hi...
7zip                             9.22.01.20130618 Installed        chocolatey     7-Zip is a file archiver with a hi...

It will first warn you that the package source (Chocolatey) is not marked as safe (but it is, because we know it is); hit yes anyway (unless you’re scared, but you shouldn’t be). By default, packages will be downloaded and installed to C:\Chocolatey\lib when using the Chocolatey repository.

If you hate what you installed, want it gone and killed with fire, use Uninstall-Package:

> Uninstall-Package 7zip

Name                             Version          Status           Source         Summary
----                             -------          ------           ------         -------
7zip.install                     9.22.01.20130618 Not Installed
7zip                             9.22.01.20130618 Not Installed

Why is this cool?

Because it drastically reduces the time it takes to find, download and install software. I have to run at most 2 PowerShell commands and I’ll have whatever software I want installed. The packages are named so appropriately that many times you can guess the name and reduce your command count down to 1! That’s a 50% increase in efficiency! Whoa!

This also means that Microsoft, again, is serious about supporting the developer community. First it was the .NET Foundation and Roslyn*, and now they’re embracing something that Linux and OSX users have had for years. For the first time in a while I’m really excited that I use Windows.

Now if you’ll excuse me, I’m going to uninstall 7zip just so I can OneGet it.


* Unless you count Steve Ballmer’s promise.

Users see the UI, not the code

UI design is hard. Like, it’s way hard. And it’s also a very important piece of the software puzzle. In fact, some might say it’s the most important piece because to users, it is the software:

A good user interface is one of the most important aspects of an enterprise product. While the underlying architecture is extremely important to deliver the functionality, to the end-user, the interface is the product. They don’t know, (and don’t care, usually,) of what goes on behind the scenes, other than that they expect things to work. Every way they interact with the product is through the interface.

When a user opens an app, they see the interface. They don’t see the code behind it, the layers, the interfaces, the helper libraries; they see the UI. That is the software. If you perform massive technical improvements but leave the UI the same, no one will notice. This is why the interface is so critically important, but also why it’s one of the hardest things to do in software. Designing an interface that both looks good and is intuitive to all users takes effort and skill, and is something that Microsoft, Google and even Apple have yet to fully master.

Look, I am in no way a master at UI design. I kind of suck at it. But I can get by if I have to, and one thing that helps me as I’m working is to ask myself this question:

If I was a user how would I expect this to work?

You are a user of many more pieces of software than you will ever write yourself. You, like everyone, will have expectations of how something should function. So tap into those experiences. Put yourself in the shoes of a user and design the feature as you think it should work. Think about the different reasons a user would use this feature and the goals they might want to achieve while using it. Try to come up with something that minimizes the pain of accomplishing those goals. Chances are you’ll come up with something better than these.

Migrating from SVN to Git; How we did it

My team migrated from SVN to Git about 3 months ago. After a few tweaks, a few bugs and a little elbow grease we’ve been stable ever since. And you know what? It was one of the best moves we’ve ever made. Developers are more efficient and we have finally documented and streamlined our release workflow.

I was in charge of handling the migration. That included setting up an internal Git host, migrating the SVN repositories over, documenting the new Git development process and training the developers. One important requirement was that we couldn’t stop active development during the migration – developers always had to have a place to commit code.

Technical Side – What tools were used and how was it done?

Stash

Due to legal reasons, we weren’t able to make use of popular Git hosts such as GitHub or Bitbucket, so we needed to find an internal hosting solution. We looked at many open source hosts, along with GitHub Enterprise, and finally determined that Atlassian Stash was the best option for us. It offered most of the features we desired – internally hosted, pull requests, HTTP/HTTPS/SSH access, and the ability to connect with Active Directory – and was corporate backed and reasonably priced.

Other than the setup being archaic, Stash was up and running within 20 minutes. Configuration was relatively trivial, mostly consisting of permissions and user setup. We hooked Stash up with Active Directory so all employees can log in using their domain accounts. This reduces the number of username/password pairs everyone has to remember which, imo, is a really good thing.

Initial SVN Migration

The initial SVN migration went smoothly with few hiccups. We followed the steps I laid out in my previous post, Migrating from SVN to Git, with one small caveat – after we performed the first fetch, we left SVN as the primary repository, and all code was still committed there. We set the permissions on the Git server to be read-only so that developers could clone the repositories and get familiar with Git, and we could confirm that there were no connection or permission issues with Stash. Every day we performed a fetch from the SVN repository and pushed the changes up to Git to keep things up to date. We left this process in place for about a week; once we confirmed there were no issues, and all devs had some sort of Git client they liked, we switched over. (As a side note, if we had many more repositories, and/or were going to leave this process in place for longer than a week, I would have set up a job to run daily (perhaps hourly?) to perform the fetch. If you’re in this boat, I recommend you do that using Powershell or similar, unless you like performing the same monotonous task every morning, in which case go for it.)
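
The daily sync itself was only a couple of commands. Roughly – assuming the mirror was created with git svn clone as in the earlier post, and the Stash repository was added as a remote named stash – it boils down to something like:

> git svn rebase        # pull the day's SVN commits into the local mirror
> git push stash --all  # push the updated branches up to the read-only Git repository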

When we were ready to shut off SVN, we had all developers commit any pending changes to SVN, then we switched the repository to be readonly. We performed one final fetch/push from SVN, and opened up the Git server to the world. (Okay, opened to our office, but whatever.)

Tools

So we set up Stash on the server, but what clients did we use? We are a Microsoft shop, and as such have a mix of SourceTree, Posh-Git and bash. We didn’t really set any limitations on what client to use, as long as it works for the dev. (If you ask me, though, Posh-Git is the way to go. By a mile.)

Human Side – Git workflow, developer training and hiccups

Git Workflow

This is where the fun starts. Developers inevitably had questions, most of which I could answer but some of which we had to work out together. Most of these questions revolved around workflow – when do I branch, why do I branch, do I need to branch? Our SVN workflow was, well, not exactly much of a workflow. We had a develop branch, and most work went into that, and sometimes we would branch for features, but then we would have merge problems because SVN sucks at that, and then we’d release whenever from wherever, and… yeah. Not much of a workflow.

So, I took this opportunity to standardize our process, which is basically git-flow. We have a develop branch, all features get branched from there and merged back when they’re ready. When we decide to release, we branch into a release branch, perform fixes, and merge the production ready code into master. Hotfixes are branched off of master and merged back into both master and develop. I laid this workflow out in a formal document that was available to everyone – developer or otherwise.
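
Day to day, the workflow in that document boils down to commands along these lines (feature/widget, release/1.2 and hotfix/1.2.1 are made-up branch names):

> git checkout -b feature/widget develop    # start a feature off of develop
> git checkout develop
> git merge feature/widget                  # merge it back when it's ready

> git checkout -b release/1.2 develop       # cut a release branch, apply fixes there
> git checkout master
> git merge release/1.2                     # production-ready code lands in master

> git checkout -b hotfix/1.2.1 master       # hotfixes branch off of master
> git checkout master
> git merge hotfix/1.2.1                    # ...and merge back into master
> git checkout develop
> git merge hotfix/1.2.1                    # ...and into develop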

The fun part about documents, at least that I have found, is that nobody reads them. Ever. I still got a lot of questions about where to branch feature branches from, when to create a release branch, and where to release from. My answer, most of the time, was “read the documentation” (without being rude) to which I got a “what documentation?” response.

Training

So the next logical step was group training. I set aside 30 minutes to get all developers together and explain things – both about Git and the new workflow. We went over the differences and similarities between SVN and Git – what distributed means in practice, pushing, pulling, committing, stashing, adding items to the index, etc. And then we covered the new workflow (with pictures!) and how Stash helps formalize the process with Pull Requests and such.

The training was a huge success even though it was only a 30 minute session. Everyone was able to ask questions and get on the same page. I highly advise giving a formal presentation if you can, with as many visual aids as possible. It’s much easier to understand a live, visual presentation than emails and a Word document.

Issues

We luckily haven’t run into any technical issues. The only slight issue we ran into was getting developers to follow the new workflow; training pretty much mitigated that, and everything was smoothed out in a matter of days.

Wrap Up

If you’re on the fence about making the switch to Git, I highly recommend it. There are many benefits with little to no drawbacks. We’ve only been using it for 3 months and I can already see an increase in productivity and quality of output. Formal Pull Requests have strengthened our peer reviews and having a strict release process has increased our quality. It has been one of the best decisions we’ve made as a team in a long time.