ASP.NET vNext Dependency Injection with Castle Windsor

In a previous post I explained how to set up and use the built in dependency injection container in ASP.NET vNext. Today we’re going to look at setting up and using the popular Castle Windsor DI container.

In order to use third party DI containers, the MVC team set up a conforming container that the various DI containers tap into. Now, some will say conforming containers are an anti-pattern, and I lean toward that side of the fence, since they involve a lot of boilerplate code and limit the usefulness of your DI container. But this is the route they took, and they have their reasons. It sounds like the MVC team is rethinking this strategy, though, and chances are the adapter code will live in the DI container projects themselves rather than in the DI repository. Hey, vNext is super-alpha, so they’re allowed to make tons of changes and do whatever they want.

Regardless, as a personal experiment I wrote a Castle Windsor populator (I’ll use the term populator here for lack of a better one). The ASP.NET team had previously written the Ninject and Autofac populators but had not gotten to a Castle Windsor one. My fork with the code is available here: (I’m currently in preliminary talks with the ASP.NET team about opening a pull request for it).

For this example we’ll use the sample MVC project available in the MVC repository. It already has Autofac set up, and we’ll modify it to use Castle Windsor instead. All of the code for this post is available in my fork of the MVC repository.

How do we use the new Windsor populator?

The Windsor populator I wrote follows the same patterns as the Ninject and Autofac ones that the ASP.NET team wrote. You instantiate a container, populate it with the services, and then resolve an IServiceProvider that gets passed to the framework. When the framework needs an object, it uses the IServiceProvider you gave it. The code essentially boils down to:

var services = new ServiceCollection();
services.AddTransient<ITestService, TestService>();

var container = new WindsorContainer();
WindsorRegistration.Populate(container, services, app.ApplicationServices);
var sp = container.Resolve<IServiceProvider>();


Setting up the Windsor populator in the ASP.NET MVC Sample

To get started, we first need to add a dependency on the Windsor DI package. Modify the project.json file to add the dependency for Microsoft.Framework.DependencyInjection.Windsor:

"net45": {
      "dependencies": {
        "System.ComponentModel.DataAnnotations": "",
        "Microsoft.Framework.DependencyInjection.Autofac": "1.0.0-*",
        "Microsoft.Framework.DependencyInjection.Windsor": "1.0.0-*",
        "Microsoft.Framework.ConfigurationModel.Json": "1.0.0-*",
        "Autofac": "3.3.0"

If you’re using the Visual Studio 2014 CTP, the dependencies will be restored automatically when you save the project.json file. If you’re using the command line, open it up and run:

kpm restore

This will force the K Package Manager to restore all dependencies. By default, the dependencies are placed in C:\Users\yourusername\.kpm\packages. If you navigate to that folder, you should see a Microsoft.Framework.DependencyInjection.Windsor folder, and inside it a folder named 1.0.0-alpha3-something, where something is the latest build number (10148 at the time of writing).

Now, since the Castle Windsor populator I created isn’t currently in the DI repo, we’ll have to build it from source and update the referenced package to use our custom copy of the library. We can do this by grabbing the compiled binaries and pasting them into the packages folder.

First, though, update the project.json file in the MVC sample project to reference the exact version of the Windsor DI library that we currently have (which, for me, is 1.0.0-alpha3-10148) instead of the latest (i.e., *):

"net45": {
      "dependencies": {
        "System.ComponentModel.DataAnnotations": "",
        "Microsoft.Framework.DependencyInjection.Autofac": "1.0.0-*",
        "Microsoft.Framework.DependencyInjection.Windsor": "1.0.0-alpha3-10148",
        "Microsoft.Framework.ConfigurationModel.Json": "1.0.0-*",
        "Autofac": "3.3.0"

This is necessary in case a new version of the library is released while we’re working; without the pin, it would overwrite our version the next time we run kpm restore.*

Next, navigate over to my fork and clone the repository. Then navigate into the Windsor folder and run:

kpm build

This will compile the project and place the binaries in the bin\Debug\net45 directory. Copy those compiled binaries and paste them into the Microsoft.Framework.DependencyInjection.Windsor\1.0.0-alpha3-10148 folder in your packages folder.

Back at the command line, navigate to the MVC sample folder and run:

k web

This will compile the project and start the development server. If you navigate to http://localhost:5001 you should see a test page with a bunch of junk on it (junk is good). At this point it is still using Autofac. We’ll now edit the code to use the Windsor populator. First, let’s expand the if statement that checks the configuration for the DI system into two ifs:

if (configuration.TryGet("DependencyInjection", out diSystem))
{
    if (diSystem.Equals("AutoFac", StringComparison.OrdinalIgnoreCase))
    {
        // ... Autofac setup ...
    }
}

Next, pull out the shared code that instantiates the service collection:

if (configuration.TryGet("DependencyInjection", out diSystem))
{
    var services = new ServiceCollection();

    // Carry the framework's default services over into our collection
    var defaultServices = Microsoft.AspNet.Hosting.HostingServices.GetDefaultServices();
    foreach (var defaultService in defaultServices)
    {
        services.Add(defaultService);
    }

    services.AddTransient<ITestService, TestService>();

    if (diSystem.Equals("AutoFac", StringComparison.OrdinalIgnoreCase))
    {
        // ... Autofac setup ...
    }
}

Finally, add an else if that checks if the DI system is Windsor:

else if (diSystem.Equals("Windsor", StringComparison.OrdinalIgnoreCase))
{
    var container = new WindsorContainer();
    WindsorRegistration.Populate(container, services, app.ApplicationServices);
    var sp = container.Resolve<IServiceProvider>();
}



For extra points, we’ll add an else that throws an exception if a DI system is specified that we don’t know about:

else
{
    throw new ArgumentException("Unknown dependency injection container: " + diSystem);
}

Now if you run k web and navigate to http://localhost:5001 you should see that junk again. Hooray! We’re using Windsor – wait, what? No, wait, this is still Autofac. We still have to update the configuration! Open config.json and swap Windsor in for Autofac:

    "DependencyInjection": "Windsor"

Now run k web and navigate to http://localhost:5001 and you should see junk again and everything should look the same but it’s using Castle Windsor this time! Whoooo!

Next time, I’ll explain how I built the Windsor populator and why I used some of the features I did.

* Ironically, this happened to me as I was writing the Castle Windsor populator, and it took me 20 minutes to figure out why Microsoft.Framework.DependencyInjection.Windsor was not a valid namespace. Thanks, Fowler.

How important is typing ability?

As a programmer, I spend a lot of time in front of a keyboard and naturally I’ve gotten pretty good at typing. Out of curiosity I took a typing test to see how fast I am (surprisingly I haven’t taken one of those in a long time):

Your speed was: 89wpm.

You made 1 mistake

Pretty good, especially when I had to type the word PASSEGGIERI (which I got right) and Turkish (which I surprisingly got wrong). This got me wondering how I compared to the average programmer typing speed. After minutes upon minutes of Googling, I found few results and the results I did find showed that 1.) programmer typing speed is all over the map and 2.) I’m apparently in the upper echelon of programmer typists.

#1 is interesting, #2 is irrelevant.

I find it quite interesting that programmers, who like me spend a lot of time in front of a keyboard, are not more consistently in the 70+wpm category. It doesn’t take much time to learn the keyboard, and I would imagine practicing a few hours a day would increase speed very quickly. But it doesn’t matter. You’ll notice that I titled this post

How important is typing ability?

and not

How important is typing speed?

Typing speed is irrelevant when it comes to programming. In this world of Intellisense and copy-paste it doesn’t matter how fast your fingers can move. Programming requires a great deal of thought and speed is irrelevant if you don’t know what you’re typing. I could actually argue that typing speed increases bugs in code.

What is important is typing ability. A programmer should know where all of the keys are on the keyboard and should be able to touch type. A programmer should know where = is and which finger hits { without thinking. All of a programmer’s thought should go into what is being typed, not how it’s being entered.

If you hunt-n-peck, you can’t program. There, I said it.

Dependency Injection in ASP.NET vNext

One of the big headlines of ASP.NET vNext is that dependency injection is built into the framework. It’s a pretty basic implementation of DI, but I’m sure it will suffice for most applications. I’ll show how to set up and use the built in DI container below, but first…

What is Dependency Injection?

The best definition of DI I have found is from James Shore:

Dependency injection means giving an object its instance variables. Really. That’s it.

As James implies, it’s very simple, and essentially means a class gets the services it uses pushed to it rather than creating them itself. So for instance if you have an EF context you use to get data from a database, you would push the EF object to your class via dependency injection rather than instantiating the EF context in the class.
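To make that concrete, here’s a minimal sketch; the names (IDataContext, ReportService, FakeContext) are made up for illustration. The class receives its context through the constructor instead of newing one up itself:

```csharp
using System;

// Hypothetical names, for illustration only.
public interface IDataContext
{
    string GetCustomerName();
}

// A fake implementation; in a real app this would wrap an EF context.
public class FakeContext : IDataContext
{
    public string GetCustomerName() { return "Alice"; }
}

// The service never creates its own context; it is handed one.
public class ReportService
{
    private readonly IDataContext _context;

    public ReportService(IDataContext context) // the dependency is injected here
    {
        _context = context;
    }

    public string Describe()
    {
        return "Report for " + _context.GetCustomerName();
    }
}
```

Because the context comes in from the outside, a test can hand the class a fake implementation without ever touching a real database.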

How do I use the vNext DI container?

This is definitely the easiest DI container to set up and configure out of all of the ones I have used. You should be able to get it up and running within 10 minutes on your own or in 9 minutes and 38 seconds if you follow this guide.

vNext Home

For this example we’ll use the ASP.NET vNext Home samples, specifically the HelloMvc sample. For instructions on getting that up and running view my Getting Started with ASP.NET vNext post.

Dependency Injection in ASP.NET MVC

The dependency injection code can be found in its own repository. It is included in the MVC framework, however, which means that it’s already included in our Home repository, which in turn means you don’t have to do anything!


First things first – we need a thing to inject. Let’s create a TestContext class to inject into our controllers that has a single method that returns the current date and time as a string. (And an interface for it, obviously!)

using System;

namespace HelloMvc.Test
{
    public interface ITestContext
    {
        string GetDate();
    }
}

using System;

namespace HelloMvc.Test
{
    public class TestContext : ITestContext
    {
        public string GetDate()
        {
            return DateTime.Now.ToString();
        }
    }
}

Add Scoped

Now… our future controllers are going to ask for ITestContexts, and we need to tell the controller builder what concrete type to give them. This is where AddScoped<TService, TImplementation> comes in. The AddScoped method has generic type parameters for the interface type (what is asked for) and the implementation type (the concrete type passed in). It is an extension method on IServiceCollection, and you call it in the Configure method on app startup.

public void Configure(IBuilder app)
{
    app.UseServices(services =>
    {
        services.AddScoped<ITestContext, TestContext>();
    });
}
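As an aside, AddScoped is one of a few lifetime options; the container also has AddTransient (a new instance every time one is asked for) and AddSingleton (one shared instance for the life of the app). The difference between transient and singleton semantics can be illustrated with a toy sketch (my own illustration, not the real container):

```csharp
using System;

public class Service { }

public static class LifetimeDemo
{
    // Transient: a brand new instance every time it is requested.
    public static Service NewTransient() { return new Service(); }

    // Singleton: a single instance shared by every caller.
    private static readonly Service SingletonInstance = new Service();
    public static Service GetSingleton() { return SingletonInstance; }
}
```

Scoped sits between the two: within a single request you get the same instance, but each request gets a fresh one.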


Constructor Injection

Now, let’s update the HomeController to ask for an ITestContext. When a request comes in that the HomeController will handle, the controller builder will see it wants an ITestContext and will give it a concrete TestContext. I also updated the User method to use the new context to set the user’s Name to the current date and time.

using Microsoft.AspNet.Mvc;
using MvcSample.Web.Models;
using HelloMvc.Test;

namespace MvcSample.Web
{
    public class HomeController : Controller
    {
        private ITestContext _context;

        public HomeController(ITestContext context)
        {
            _context = context;
        }

        public IActionResult Index()
        {
            return View(User());
        }

        public User User()
        {
            User user = new User()
            {
                Name = _context.GetDate(),
                Address = "123 Real St."
            };

            return user;
        }

        public IActionResult Thing()
        {
            return View();
        }
    }
}
Now, if you run the app (run k web from the command line, if you forgot) you should see Hello 6/2/2014 8:04:05 PM! (save for your current date and time, obviously).

And there you have it! A simple explanation of the new dependency injection service in ASP.NET vNext. Next time, we’ll look into using Ninject or StructureMap or some other IoC container with a vNext app.

UPDATE: New post is up that goes over setting up Castle Windsor.

ASP.NET vNext Links

ASP.NET vNext was announced a few weeks ago and there are now some great resources available explaining what it is, why it’s here and why it’s a really freaking important change.


All of ASP.NET vNext is open source and available on Github. The Home project linked below has sample code that shows you how to get vNext up and running.

David Fowler

David is one of the principal developers of ASP.NET vNext and has had his hand in most pieces of the stack. He has a few writeups on his personal blog about the scope, direction and details of ASP.NET vNext.

Scott Hanselman

Scott is a developer at Microsoft on the Web Platform Team (that’s his main gig among his other side projects: his blog, podcasts, YouTube channels, speaking, etc.). Scott wrote a very good overview of vNext.


TechEd is Microsoft’s annual technology conference. In the videos below, Scott Hanselman drives two talks about vNext. The first is a 100 level course with Scott Hunter (The Lesser Scotts, as they call themselves) providing an overview of vNext. The second is a 400 level course with principal developer David Fowler that dives deep into the code.


The ASP.NET site itself has been updated with information about vNext, and includes updates to the Music Store and BugTracker sample applications.

Graeme Christie

Graeme provides a great overview of how to get vNext up and running on OSX and Linux.

Getting started with ASP.NET vNext

ASP.NET vNext is the next version of the ASP.NET web framework. The focus this time around is to strip out unnecessary bits and make a leaner, meaner, easier-to-use framework.

A few of the things done include:

  • Side by side deployment of .NET – you can deploy all code, dependencies and the .NET framework in the bin directory of your site
  • A cloud optimized version of .NET, which is very lean. The full .NET Framework is over 200 MB; the cloud optimized version is ~11 MB
  • ASP.NET Web API and ASP.NET MVC are now merged into one framework
  • No dependency on IIS. You can host your sites in IIS, or you can host it in your own, custom process
  • A new project.json file that holds references and configuration for the project
  • You can compile C# using the Roslyn compilers, which means no dependency on MSBuild or Visual Studio

ASP.NET vNext Home

To start playing with vNext, go to the ASP.NET vNext repo on GitHub and clone it. The Home repository is a starting point that includes samples and documentation.

After you have the project cloned, run:


This will download any necessary framework files and dependencies. KVM is the K Version Manager, which is what allows you to download and manage multiple versions of the K runtime. Next, run

kvm install 0.1-alpha-build-0421

This will download and install the 0421 build of the K Runtime and place it in your user profile. (0421 was the latest build as of this post. You can upgrade the version by running kvm upgrade)

We’ll focus on the ASP.NET MVC sample, so navigate to Samples/HelloMvc and run

kpm restore

This will look into project.json and will load all dependencies required for the project.

Finally, run

k web

This will start the K runtime and will attempt to start the web configuration specified in the project.json file. This configuration tells the runtime to start up and listen on port 5001.

If you navigate to http://localhost:5001 you should see the welcome page for the site. Hooray! From here, you can add more controllers, views, etc and build out a site. When it comes time to release, you can copy your project directory up to your server, run K web and have your site running without ever installing .NET. The Lesser Scotts even copied their project to a USB key and ran it off of that. Awesome.

No Visual Studio!??

What’s very cool about this is there is no dependency on Visual Studio or MSBuild. You can, if you so desire, develop in notepad, notepad++, vim, or whatever other app you want. The Roslyn compilers allow for this to happen – all you have to do while developing is change a file, save it, and refresh your browser. This makes it a much easier, seamless, awesome programming experience.

With ASP.NET vNext Microsoft is making an effort to make developing .NET applications easier. There is no need to install a massive framework, less compile and load time, and all of this is open source. It’s very, very exciting.

Difference between git reset soft, mixed and hard

The reset command. Confusing. Misunderstood. Misused. But it doesn’t need to be that way! It’s really not too confusing once you figure out what’s going on.


First, let’s define a few terms.


HEAD

This is an alias for the tip of the current branch, which is the most recent commit you have made to that branch.


Index

The index, also known as the staging area, is the set of files that will become the next commit. Once you commit, that snapshot becomes the new HEAD, with the old HEAD as its parent.

Working Copy

This is the term for the current set of files you’re working on in your file system.


When you first check out a branch, HEAD points to the most recent commit in the branch. The files in HEAD (they aren’t technically files, they’re blobs, but for the purposes of this discussion we can think of them as straight files) match the files in the index, and the files checked out in your working copy match HEAD and the index as well. All 3 are in an equal state, and Git is happy.

When you perform a modification to a file, Git notices and says “oh, hey, something has changed. Your working copy no longer matches the index and HEAD.” So it marks the file as changed.

Then, when you do a git add, it stages the file in the index, and Git says “oh, okay, now your working copy and index match, but those are both different than HEAD.”

When you then perform a git commit, Git creates a new commit that HEAD now points to and the status of the index and working copy match it so Git’s happy once more.
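The cycle above can be sketched as a toy state machine (my own illustration of the bookkeeping, not real git internals; the names are made up):

```csharp
using System;

// Toy model: each area tracks a snapshot of the project, here just a string.
public class TinyRepo
{
    public string Head = "v1";
    public string Index = "v1";
    public string WorkingCopy = "v1";

    public void Edit(string contents) { WorkingCopy = contents; } // you change a file
    public void Add() { Index = WorkingCopy; }                    // git add stages it
    public void Commit() { Head = Index; }                        // git commit makes it the new HEAD
}
```

After Edit, only the working copy differs; after Add, the working copy and index match but HEAD lags behind; after Commit, all three agree again and Git is happy.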


If you just look at the reset command by itself, all it does is reset HEAD (the tip of the current branch) to another commit. For instance, say we have a branch (the name doesn’t matter, so let’s call this one “super-duper-feature”) and it looks like so:


If we perform:

> git reset HEAD

… nothing happens. This is because we tell git to reset this branch to HEAD, which is where it already is. But if we do:

> git reset HEAD~1

(HEAD~1 is shorthand for “the commit right before HEAD”, or put differently, “HEAD’s parent”) our branch now looks like so:


If we start at the latest commit again and do:

> git reset HEAD~2

our branch would look like so:


Again, all it does on a basic level is move HEAD to another commit.


So the reset command itself is pretty simple, but it’s the parameters that cause confusion. The main parameters are soft, hard and mixed. These tell Git what to do with your index and working copy when performing the reset.


Soft

The --soft parameter tells Git to reset HEAD to another commit, but that’s it. If you specify --soft, Git will stop there and nothing else will change. What this means is that the index and working copy don’t get touched, so all of the files that changed between the original HEAD and the commit you reset to appear to be staged.


Mixed (default)

The --mixed parameter (which is the default if you don’t specify anything) will reset HEAD to another commit, and will reset the index to match it, but will stop there. The working copy will not be touched. So, all of the changes between the original HEAD and the commit you reset to are still in the working copy and appear as modified, but not staged.



Hard

The --hard parameter will blow out everything: it resets HEAD back to another commit, resets the index to match it, and resets the working copy to match it as well. This is the most dangerous of the options and is where you can cause damage. Data might get lost here*!
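Putting the three modes together, their effect on HEAD, the index and the working copy can be sketched with a toy model (again my own illustration, not real git; the names are made up):

```csharp
using System;

// Toy model of git reset's three modes. Each area "points at" a commit,
// modeled here as a string.
public class ResetDemo
{
    public string Head;
    public string Index;
    public string WorkingCopy;

    public ResetDemo(string head, string index, string workingCopy)
    {
        Head = head;
        Index = index;
        WorkingCopy = workingCopy;
    }

    public void Reset(string commit, string mode)
    {
        Head = commit;              // every mode moves HEAD
        if (mode == "soft") return; // --soft stops here
        Index = commit;             // --mixed also resets the index
        if (mode == "mixed") return;
        WorkingCopy = commit;       // --hard resets the working copy too
    }
}
```

So starting from everything at commit B and resetting to commit A: soft leaves the index and working copy at B, mixed drags the index back to A, and hard drags everything back to A.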


* You can recover it using git reflog but that’s out of scope here.

Twitter isn’t a blogging service, so let’s kill the tweetstorm

There’s a new way of posting on Twitter that’s gaining in popularity – The Tweetstorm™. Coined by BuzzFeed (as far as I know), a Tweetstorm is a message or rant that spans multiple tweets, with each tweet commonly being prefixed by a number and a slash (1/, 2/, and so on). BuzzFeed describes it as such:

Beginning with a simple “1/”, Andreessen began to launch off on blog post-length lectures, 140 characters at a time. Many are 10 or 15 tweets long and shot off in rapid succession.

The multi-tweet is, by all measures, a perfectly normal bit of Twitter behavior; sometimes an important thought or piece of news runs over 140 characters. There are even platforms, like TwitLonger, which allow users to attach a longer message to tweets to work around the 140 character rule. However, what sets Andreessen’s tweetstorm™ apart from the conventional multi-tweet is the lack of any indication of anticipated length. Instead, a tweetstormer™ gives no real indication how long it’s going to take and assumes that the reader is more than OK with this.

Let me be honest for one second – I don’t like it. It’s dumb, and it defeats the entire purpose of Twitter. Obviously Marc Andreessen feels differently than me, because he’s a master of Tweetstorms:

Twitter is good for tidbits of information – a status update, a link, or your new hair color. I often think of it as quality control on an assembly line – many workers stand around a conveyor belt, picking out items that aren’t up to snuff. Not every worker finds every low quality item, but there are enough workers standing at this belt that at the end of the line almost all of the low quality items have been removed.

Assembly workers

Twitter should be used in the same way. Scroll around, peruse, window shop. When something grabs your attention – read it, review it, click the link, whatever you need to do. You’re not going to see every single thing in your feed, nor should you. If something’s important, someone down the line (the rest of the users on Twitter) might see it and read it. At some point, if the information is important enough, you’ll hear about it through retweets or another communication medium.

Using it for anything more than that doesn’t make any sense. Each tweet should stand on its own with its own context; tweets should be short and succinct. Posting 12 consecutive 140 character messages and expecting your reader to follow along is asking too much. The user can’t browse over it and quickly know what’s there; they have to stop and read a blog post’s worth of text to figure out what’s being posted. A smart man once said:

Understand the limitations of the communication medium you are using and know when to escalate to another, more appropriate one.

Jeff’s absolutely right. Know when to elevate from Twitter into a more appropriate communication medium – Facebook, an email, or a blog post. Please, just don’t blog on Twitter.

The Universe is not programmable, but we can document what we can

Wired posted an article with the idea that the entire universe is an API and explained how we need to tap into its potential. The basic premise of their argument is that an API “lays out the code’s inputs and outputs” and that everything in the universe also has inputs and outputs, therefore has a documentable API. Although the idea behind the argument is valid, the argument falls short in a number of ways.

The term programmable is misused in this case. The definition of programmable is:

capable of being programmed.
an electronic device, as a calculator or telephone, that can be programmed to perform specific tasks.

If something is programmable, it means it can be set up to perform tasks. It can be coerced to do what we tell it to, and only when we tell it to. If there’s anything I learned in my physics courses it’s that sometimes you can’t predict nor control things. Saying we can predict and control all things in the universe is like saying Lindsay Lohan is ever going to get it together. It just isn’t right.

Where the argument falls even shorter is that the API analogy doesn’t fit the argument he’s trying to make anyway. The argument behind the noise is to document our surroundings. Write down what we know and what we’re figuring out. And you know what? We are already doing that. It’s called science. The human race is already trying to figure out the “API of the universe” and we have been for years. Years upon years. Basic mathematics have been around for, like, forever. The Babylonians estimated and documented √2 to five decimal places around 1800 BC. Newton watched an apple fall and discovered that if (world == Earth) gravity == 9.8 m/s^2. Discovering the facts of the universe is nothing new.

What Keith is really calling for is for all research to be stored in a centralized place that is available to everyone. Sure, there are places like arXiv but those are very low level technical papers – nothing for the masses. There needs to be a place the collective knowledge of the human race can be put on display. Wikipedia is close, and perhaps it can be it as long as everyone gets on board.

This idea that we should document and share our knowledge of the world around us is the one thing he and I agree on. Knowledge should be available to all, and it shouldn’t be limited to those who can afford it. We should be promoting education, learning, discovery and science. Keith is right in saying that corporations are going to stifle change, and as a civilization we should show them the importance of sharing findings for good of humanity over monetary gain.

The API analogy is cute, but that’s it. The main point behind the analogy is what’s solid here, and that’s what should be taken from his article.

Get some packages with Microsoft OneGet

Are you a Windows user? Do you see people using apt-get and Homebrew and get filled with rage? Are you not a fan of chocolate? Well, then, you are in luck! Introducing, OneGet.

What is OneGet?

OneGet is Microsoft’s new package manager which allows you to discover and install new software on Windows machines. It is similar to apt-get on Linux, Homebrew on OSX, and even the Powershell based Chocolatey package manager. When I say similar to Chocolatey, however, I don’t mean that it replaces Chocolatey. In fact, it embraces it. OneGet is essentially an interface to many different package repositories, each repository hosting any number of different pieces of software. Chocolatey is one of those repositories and in fact is the one and only repository currently available. As more repositories become available you can add each of them as a source and query all of them at the same time. Awesome.

How do I get it?

To install OneGet, install the Windows Management Framework V5 Preview. This will, among a few other things, install Powershell 5 along with the OneGet Powershell module. Once installed, OneGet will be available the next time you open Powershell. Please note that this is Windows 8/Windows Server 2012 only, and that it’s a CTP and subject to change!

How do I use it?

There are 7 cmdlets available, allowing you to manage repositories and packages. To view a list of the available cmdlets use the Get-Command command:

> Get-Command -Module OneGet

  CommandType     Name                                               Source
  -----------     ----                                               ------
  Cmdlet          Add-PackageSource                                  OneGet
  Cmdlet          Find-Package                                       OneGet
  Cmdlet          Get-Package                                        OneGet
  Cmdlet          Get-PackageSource                                  OneGet
  Cmdlet          Install-Package                                    OneGet
  Cmdlet          Remove-PackageSource                               OneGet
  Cmdlet          Uninstall-Package                                  OneGet

There currently is no documentation for these, so I’ll detail what they do below.


Get-PackageSource

This cmdlet lists the repositories you have added to OneGet. As I stated above, Chocolatey is the only one so far.

> Get-PackageSource

Name                          Location                      Provider                                          IsTrusted
----                          --------                      --------                                          ---------
chocolatey           Chocolatey                                            False

Add-PackageSource and Remove-PackageSource

These will add, and obviously remove, package repositories. You’ll (hopefully) use this soon when more repositories become available. The Add-PackageSource cmdlet takes Name, Provider and Location parameters at a minimum.

> Add-PackageSource chocolatey -Provider Chocolatey -Location
> Remove-PackageSource chocolatey


Get-Package

You can view a list of all packages currently installed on your system by using the Get-Package command:

> Get-Package

Name                             Version          Status           Source         Summary
----                             -------          ------           ------         -------
7zip                    Installed        Local File     7-Zip is a file archiver with a hi...
7zip.install            Installed        Local File     7-Zip is a file archiver with a hi...


Find-Package

To view a list of packages available from all of your repositories, use the Find-Package command. The first time you run it, it will want to install and set up NuGet:

> Find-Package

  The NuGet Package Manager is required to continue. Can we please go get
  [Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"):

From there, it will give you a list of all available packages:

Name                             Version          Status           Source         Summary
----                             -------          ------           ------         -------
1password                      Available        chocolatey     1Password - Have you ever forgotte...
7zip                    Available        chocolatey     7-Zip is a file archiver with a hi...
7zip.commandline         Available        chocolatey     7-Zip is a file archiver with a hi...
7zip.install            Available        chocolatey     7-Zip is a file archiver with a hi...
ack                              2.04             Available        chocolatey     ack is a tool like grep, designed ...
acr                              2.6.0            Available        chocolatey
ActivePerl                      Available        chocolatey     ActivePerl is the leading commerci...


zabbix-agent                     2.2.1            Available        chocolatey     zabbix
zadig                            2.1.1            Available        chocolatey     USB driver installation made easy
zetaresourceeditor              Available        chocolatey     zetaresourceeditor
zoomit                           4.50             Available        chocolatey     ZoomIt is a screen zoom and annota...
zotero-standalone                4.0.19           Available        chocolatey     Zotero [zoh-TAIR-oh] is a free, ea...

You can also provide a filter to search for a specific package:

> Find-Package 7zip

Name                             Version          Status           Source         Summary
----                             -------          ------           ------         -------
7zip                                              Available        chocolatey     7-Zip is a file archiver with a hi...

Install-Package and Uninstall-Package

To install a package, use Install-Package. You’ll have to run PowerShell as Administrator to install packages, and set your execution policy to RemoteSigned.

> Install-Package 7zip

Installing Package '7zip' from untrusted source
WARNING: This package source is not marked as safe. Are you sure you want to install software from 'chocolatey'?
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"): Y

Name                             Version          Status           Source         Summary
----                             -------          ------           ------         -------
7zip.install                                      Installed        chocolatey     7-Zip is a file archiver with a hi...
7zip                                              Installed        chocolatey     7-Zip is a file archiver with a hi...

It will first warn you that the package source (Chocolatey) is not marked as safe. It is safe (we know it is), so hit yes anyway (unless you’re scared, but you shouldn’t be). By default, packages from the Chocolatey repository are downloaded and installed to C:\Chocolatey\lib.
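If you’re scripting installs and don’t want any prompts at all, Install-Package takes a -Force switch (at least in the preview bits I’ve been using), so an unattended install looks roughly like this. The -Scope Process flag keeps the execution policy change limited to the current session:

> Set-ExecutionPolicy RemoteSigned -Scope Process
> Install-Package 7zip -Force

-Force suppresses the untrusted-source confirmation, so only use it with sources you actually trust.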

If you hate what you installed, want it gone and killed with fire, use Uninstall-Package:

> Uninstall-Package 7zip

Name                             Version          Status           Source         Summary
----                             -------          ------           ------         -------
7zip.install                                      Not Installed
7zip                                              Not Installed
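
A quick way to double-check that the uninstall took is to ask Get-Package for your locally installed software again:

> Get-Package 7zip

Once 7zip is gone, that returns nothing (the preview build may instead write an error saying no matching package was found).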

Why is this cool?

Because it drastically reduces the time it takes to find, download and install software. I have to run at most 2 PowerShell commands to install whatever software I want. The packages are named so intuitively that many times you can guess the name and cut your command count down to 1! That’s a 50% reduction in typing! Whoa!

This also means that Microsoft, again, is serious about supporting the developer community. First it was the .NET Foundation and Roslyn*, and now they’re embracing something that Linux and OS X users have had for years. For the first time in a while I’m really excited that I use Windows.

Now if you’ll excuse me, I’m going to uninstall 7zip just so I can OneGet it.

* Unless you count Steve Ballmer’s promise.

Users see the UI, not the code

UI design is hard. Like, it’s way hard. And it’s also a very important piece of the software puzzle. In fact, some might say it’s the most important piece because to users, it is the software:

A good user interface is one of the most important aspects of an enterprise product. While the underlying architecture is extremely important to delivering the functionality, to the end user the interface is the product. They don’t know (and usually don’t care) what goes on behind the scenes, other than that they expect things to work. Every interaction they have with the product is through the interface.

When a user opens an app, they see the interface. They don’t see the code behind it, the layers, the interfaces, the helper libraries; they see the UI. That is the software. If you perform massive technical improvements but leave the UI the same, no one will notice. This is why the interface is so critically important, but also why it’s one of the hardest things to do in software. Designing an interface that both looks good and is intuitive to all users takes effort and skill, and is something that Microsoft, Google and even Apple have yet to fully master.

Look, I am in no way a master at UI design. I kind of suck at it. But I can get by if I have to, and one thing that helps me as I’m working is to ask myself this question:

If I were a user, how would I expect this to work?

You are a user of many more pieces of software than you will ever write yourself. You, like everyone, will have expectations of how something should function. So tap into those experiences. Put yourself in the shoes of a user and design the feature as you think it should work. Think about the different reasons a user would use this feature and the goals they might want to achieve while using it. Try to come up with something that minimizes the pain of accomplishing those goals. Chances are you’ll come up with something better than these.