Creating an ASP.NET 5 Class Library with Vim

One of the great things about ASP.NET 5 (aka vNext) is that there is no longer a reliance on Visual Studio. I definitely think VS is a great IDE, but it has its quirks, and people love options; not having to use VS is a very good option to have. Also, since ASP.NET 5 can be developed and run on OS X and Linux, users of the *nix OSes can use whatever editor they want.

One thing you might be wondering is: If I don’t have VS, how do I create a project? In current versions of VS/ASP.NET, you open VS and hit File->New Project, give it a name and VS generates a bunch of junk for you. If you don’t have VS, how do you generate that bunch of junk?

Beauty in simplicity

One of the design goals of ASP.NET 5 was to simplify the project structure and remove the hell that is the .csproj file. All that is required now for an ASP.NET 5 project is a project.json file. The csproj file (now a .kproj file in VS 2015) is solely a VS thing and is only necessary if you want to use Visual Studio.

You can, obviously, still use VS to generate projects and it will work great and you can be on your way. But what if you don’t want to use VS? (or are on *nix?)

You have 2 options. The first is to use another generator, such as generator-aspnet, which uses the popular Yeoman scaffolding tool.

The other option, which is way more fun, is to create the project yourself.

Creating an ASP.NET 5 project with Vim

We’ll create a simple class library to show how stupidly easy it is, and all we’ll use is PowerShell and Vim.

First, create a directory for your project:

C:\Projects> mkdir ClassLib
C:\Projects> cd ClassLib

Next, create a project.json file:

C:\Projects\ClassLib> vim project.json

..and add the following contents (I promise, this is it!):

{}
Create a file for a class:

C:\Projects\ClassLib> vim MyClass.cs

… with the following contents:

namespace ClassLib
{
    public class MyClass
    {
        public int Number { get; set; }
    }
}

And finally, build it:

C:\Projects\ClassLib> kpm build

This will build the library as a nuget package and place the nupkg and all files in the \bin\Debug path.

That’s it! It’s worth noting that this is compiling against .NET 4.5 full and is using dependencies from the GAC. You can specify the framework/dependencies in the project.json file if you want to use aspnet50 or aspnetcore50. For example, to compile against aspnetcore50, you can specify it like so:

    "frameworks": {
        "aspnetcore50": {
            "dependencies": {
                "System.Runtime": "4.0.20-beta-22231"
            }
        }
    }

You can review the project.json schema here:

Explaining the ASP.NET 5 Configuration Framework

ASP.NET 5 (aka vNext) has a new configuration system which is designed to be lightweight and to work everywhere. This means no more web.config XML hell! Hooray! (However you can use XML files if you want…)

Loading Settings

The Configuration class is defined in the Microsoft.Framework.ConfigurationModel namespace. This is included as a dependency if you reference the Microsoft.AspNet.Mvc package so there’s no need to explicitly reference the nuget package.

Currently, there are 3 different file types that are supported: Json, Xml and Ini. You can also pull in settings from Environment Variables (like those you’d set in Azure). The Ini load method is in the default package; however, the Json and Xml load methods are defined in the Microsoft.Framework.ConfigurationModel.Json and Microsoft.Framework.ConfigurationModel.Xml packages, respectively. This is because the Json and Xml loaders have different dependencies, and the ASP.NET team wants to ensure you only load and reference what you need.

To load a settings file, you instantiate a new Configuration and use one of the Add... methods:

var config = new Configuration();
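
As a sketch of how the loaders chain together (the file names here are assumptions, and this targets the pre-release Microsoft.Framework.ConfigurationModel packages, so treat it as illustrative rather than definitive):

```csharp
using Microsoft.Framework.ConfigurationModel;

// Assumed file names - substitute your own.
var config = new Configuration();
config.AddIniFile("settings.ini");     // Ini support is in the default package
config.AddJsonFile("config.json");     // requires Microsoft.Framework.ConfigurationModel.Json
config.AddEnvironmentVariables();      // e.g. settings defined in Azure
```

Each Add call layers another source onto the same Configuration instance, and you pull values back out with the Get method.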

Getting Settings

To get settings out, call the Get method. For example, if you have a Json config like so:

     {
         "window": {
             "background": {
                 "color": "blue"
             }
         }
     }

You can get the color out with the full “namespace” (for lack of a better term) of the key:

var backgroundColor = config.Get("window:background:color");

It looks like the team is hitting its goal of a lightweight configuration system that’s easy to use. They still have some kinks to work out (like how arrays will be handled, if at all) but the initial impressions are positive.

What is an Assembly Neutral Interface and why do we need it?

If you’ve spent any time browsing the source of ASP.NET 5 (aka vNext), you’ve surely seen the [AssemblyNeutral] attribute floating around. What in the hell is that?

Some Background

Right now, interfaces are tied to an assembly. If you want to implement an interface, you have to reference the assembly that the interface is declared in. For instance, let’s say I’m writing a framework, aptly named ZeeFramework (what it actually does doesn’t matter), and in this framework I have an interface for a logger:

namespace Zee
{
    public interface IZeeLogger
    {
        void DoZeeLogging(); //Ironically sounds like "Doozie Logging"
    }
}

Now, in the application you’re writing, you want to use this amazing ZeeFramework (it’s been called the framework of a generation, so obviously it’s amazing). And for its logging component, you want to use log4net. You have 2 options:

  1. You write an implementation yourself
  2. You contact the creators of log4net and tell them to write an implementation

The first one sucks because each and every person who wants to use ZeeFramework has to write their own implementation. The second one sucks because the creators of log4net have to create and maintain a package with an implementation for this logger. And other logging frameworks, like nlog, have to do the same.

Okay. Let’s say you convince log4net and nlog to write implementations for everyone to use. Great. Now they both have binary dependencies on ZeeFramework just so they can implement IZeeLogger. And what if you, because you’re mean and don’t like ZeeFramework, create a framework named WhyFramework with another logging interface:

namespace Why
{
    public interface IWhyLogger
    {
        void WhyLog();
    }
}

And we run into the same issue – either everyone writing an application has to write their own implementation, or the logging frameworks have to each maintain a package with an implementation. The logging interfaces are essentially the same – why do we need all of these implementations?

Let’s all live in harmony

In a perfect world, we’d have one logging abstraction – ILogger – that every framework would use. My ZeeFramework and your WhyFramework would be dependent on that, and log4net and nlog and whatever other loggers there are would each maintain a single implementation of that single interface.

So how would that work? We could have a package whose sole member would be the ILogger interface, but then we’re back to having seemingly unnecessary binary dependencies. And besides, who would maintain that? Who would own it?

The AssemblyNeutral attribute

That’s where the [AssemblyNeutral] attribute comes in. When an interface is decorated with this attribute, its identity is no longer tied to an assembly. Interfaces are now just contracts, and the code basically says “hell, all I need to be able to do is call Log – can you log or not?”. We’re getting down to Duck Typing – if it walks like a duck and talks like a duck, then it must be a duck, which means it can do duck-like things.

This removes the unnecessary binary dependencies that we have today, and allows for more loose coupling, which as developers we strive for.

To do this, anyone who wants to use the assembly neutral interface must define it in their code (it must be exactly the same!). For instance, if I defined this in ZeeFramework and log4net defined it in their package, we’d be good to go:

public interface ILogger
{
    void Log(string message);
}

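To make that concrete, here’s a sketch of what a log4net-backed implementation could look like. The adapter class is hypothetical (only ILog and LogManager.GetLogger are actual log4net APIs), and it assumes the [AssemblyNeutral] attribute is available in the compilation:

```csharp
// Each assembly re-declares the exact same contract...
[AssemblyNeutral]
public interface ILogger
{
    void Log(string message);
}

// ...and a hypothetical adapter forwards calls to the real logger.
public class Log4NetLogger : ILogger
{
    private static readonly log4net.ILog _log =
        log4net.LogManager.GetLogger(typeof(Log4NetLogger));

    public void Log(string message)
    {
        _log.Info(message);
    }
}
```

Because the interface is assembly neutral, this adapter doesn’t need a binary reference to ZeeFramework at all – it only needs an identical copy of the contract.
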
The goals are:

  • Looser coupling
  • Less code duplication
  • Less dependency hell
  • More community-defined standards

The ASP.NET vNext team wants to create a vibrant open source ecosystem, and part of that is letting the community decide on standards and contracts. Assembly neutral interfaces make those standards easier to adopt, and the more easily a standard can be adopted, the more likely it is to be adopted.

You can read more about Assembly Neutral Interfaces on the ASP.NET GitHub Wiki or on David Fowler’s blog.

I understand why Facebook split Messenger into its own application

A large part of programming (probably the largest part) is refactoring. It’s the process of taking existing code and improving it. And part of that is determining when a class, feature, or even an application gets too big and splitting it up. It could mean applying the Single Responsibility Principle and splitting a class into 2 or more classes. Or moving features around in the UI so that the app is more organized.

Or even breaking a feature out into its own application. Facebook recently did just that when it split the messaging portion of Facebook into its own application, the aptly named Messenger. They have received some flak for this, and from a user perspective it’s definitely not unwarranted. I have to download a new app? A second one? What happens to the old app? What if I’m browsing Facebook and I want to respond to a message? Now I have to switch apps? Are the permissions different in both apps? If I change my password, do I have to log in again in both apps? Do I have to have the main Facebook app, or can I use only Messenger? It adds confusion, and one thing that isn’t good is confusing your users.

Mark Zuckerberg recently explained the reasoning behind the split himself:

… but if we wanted to focus on serving this [use case] well, we had to build a dedicated and focused experience. We build for the whole community. … We realize that we have a lot to earn in terms of trust and proving that this standalone messenger experience will be really good. We have some of our most talented people working on this.

Although people may be confused, what Facebook is trying to do is enhance the messaging user experience. People may not realize it now, but it was the right move, especially from a programming perspective. They broke off Messenger so they could decrease the size of the teams working on both Messenger and the main Facebook application.

Quality vs Quantity

The quality of a product often goes down with the number of hands that touch it. The more programmers there are on a team, the more communication paths there are: with n people on a team there are n(n-1)/2 possible paths, so a 5-person team has 10 while a 10-person team has 45.


Many programmers know that as the size of a team grows, so does the time it takes to complete a feature. And things get forgotten. And Jerry said this but Ronnie said that. And you end up having 6-person code reviews.

And then there are the meetings. Oh god, the meetings. Let’s synchronously get together and talk about doing something instead of actually, you know, doing it.


Smaller teams are more focused. They’re leaner, meaner and can get things done faster. Leaner, meaner, more focused teams means a better overall product – both Messenger and Facebook itself. And a better overall product is, obviously, better for users.

I understand the user blowback from this, but overall this was a smart move. Besides, just like every time Facebook changes their layout, everyone will get used to it in a matter of weeks.

Compression on the web is surprisingly underused

Eric Lawrence posted an article the other day on web compression, covering a bunch of different algorithms, what should get compressed, and how to get the best performance on your site by mixing minification and compression. It’s a great read with lots of good, useful information.

Isn’t compression used on most sites already?

This got me curious about the state of compression on the web. How many sites use some form of compression? My initial assumption was > 90%, but even that sounded low. I looked up some stats on W3Techs, which provides “information about the usage of various types of technologies on the web.” Based on their studies, they found that:

Compression is used by 58.1% of all the websites.

Fifty-eight point one percent.

That’s it? I was astounded when I first read that number, and I’m still pretty surprised now. There are estimated to be over 1 billion websites in existence right now, which means over 400 million of them are sending uncompressed data. That’s a lot of useless bytes being sent over the wire.

It makes no sense to not enable compression. It’s incredibly easy to set up in both Apache and IIS (on IIS you seriously just have to click a few buttons). Enabling it can only affect users for the good: not only have all major browsers supported it for the past ~10 years, they all send the Accept-Encoding: gzip header if they support compression. If a browser doesn’t send the header, the server won’t compress the response. Everyone gets their content either way; some will just get it faster than others.

The only downside is a slight increase in CPU usage, but that is a minimal increase for a massive decrease in response size. To show how much the response size is decreased, we can use the Composer in Fiddler to run two requests to the same page, one with the Accept-Encoding: gzip header and one without:

  Without compression       With compression (gzip)   Savings
  79,203 bytes              21,499 bytes               72.8%

Multiply that over thousands of users a month and that’s a significant bandwidth savings.
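
If you’d rather measure this from code than from Fiddler, here’s a rough sketch using HttpClient. The URL is a placeholder, and it assumes the server actually honors Accept-Encoding; because automatic decompression is off, the second response body is the raw gzipped payload:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

class CompressionCheck
{
    static void Main()
    {
        var url = "http://example.com/"; // placeholder - use the page you want to measure

        using (var client = new HttpClient())
        {
            // Request 1: no Accept-Encoding header, so the server responds uncompressed.
            var plain = client.GetByteArrayAsync(url).Result;

            // Request 2: advertise gzip support; the raw body is now the gzipped payload.
            client.DefaultRequestHeaders.AcceptEncoding.Add(
                new StringWithQualityHeaderValue("gzip"));
            var gzipped = client.GetByteArrayAsync(url).Result;

            Console.WriteLine("Without compression: {0:N0} bytes", plain.Length);
            Console.WriteLine("With gzip:           {0:N0} bytes", gzipped.Length);
            Console.WriteLine("Savings:             {0:P1}",
                1.0 - (double)gzipped.Length / plain.Length);
        }
    }
}
```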

Compress yourself before you wreck yourself

Do yourself a favor and make sure that compression is enabled on your website. It’s the ultimate low-cost, high-reward feature.

Your server’s internet pipes will thank you.

Google’s Material Design Spec is a great idea

Google just released their first major update to their Material Design Spec. Originally released back in June, the spec is a document that outlines the best approach to application design based on their material design philosophy. Its goal is to:

Create a visual language that synthesizes classic principles of good design with the innovation and possibility of technology and science.

It’s a great introduction to application design and is relatively easy to follow. It has color palettes, layout ideas and animation guidelines, among many other things.

What in the crap is “Material Design”?

It is a design philosophy for virtual applications that attempts to replicate the physical world. Items in the physical world have physical properties – they have mass, they rotate, they lay on top of each other, they move. They accelerate and decelerate, with larger objects taking longer than smaller objects to get up to speed. When an object is touched, it provides tactile feedback and moves in predictable ways.

The goal of material design is to replicate these effects to provide a consistent, immersive experience across applications. When the user taps on an item in the virtual world, it should provide feedback in some way – it should ripple, or highlight, or float. Pages shouldn’t just appear – they should slide in, with natural looking acceleration and deceleration. It’s appealing to the eyes and provides a great deal of polish to an application.

The following video shows a few examples of what Google is trying to achieve:

Everything is fluid, and moves, and is colorful and fun and appealing. But it’s not over the top – animations don’t take 30 seconds and cause users to get frustrated because they have to wait to perform their action. The animations are immediate, and provide just enough visual appeal while not getting in the way.

Okay, really, is this important?

I think so. To my knowledge this is the first document of its kind – an easy-to-use, “here’s what looks good and why” guide for application design. It’s Layouts and Colors for Dummies*. It will, hopefully, create consistency not only across environments, but across devices, OSes and applications. If applications function in similar ways, it means the application learning curve turns from this:

This means learning is hard


to this:


Learning is easy!

This will allow developers to spend less time focusing on specific environments and more time creating a single, awesome design that works everywhere. Less time duplicating means more time awesome-ing.


Google has also promised that they’ll take community contributions:

… we set out to create a living document that would grow with feedback from the community.

The fact that Google is taking feedback from the community is, probably, the best part. This isn’t Google trying to tell the world how to do things – they aren’t throwing slop (this spec) to the pigs (the development community) and expecting us to eat it up and do whatever they tell us. This is a document by the community, for the community, and for the greater good of computing.

Just like CommonMark, this is an attempt to grow the field of computing, and I’m definitely on board.

* If this isn’t a real book it should be.

Install Windows 10 from a USB Flash Drive

I’m writing this because, for some reason, I can never remember how to use Diskpart. And who uses DVDs anymore? Download the Windows 10 preview ISO from here:


1. Insert a USB drive at least 4 GB in size

2. Open a command prompt as administrator

Hit Windows Key, type cmd and hit Ctrl+Shift+Enter. This will force it to open as admin.

3. Run diskpart

This will open and run the Diskpart command line utility, which allows you to manage disks, partitions and volumes.

C:\Windows\system32> diskpart

4. Run list disk

This will list all disks on your system. You’ll see something similar to this:

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          238 GB      0 B
  Disk 1    Online          465 GB      0 B
  Disk 2    Online           29 GB      0 B

5. Select your flash drive by running select disk #

Find the item that corresponds with your flash drive and select the disk. In the example above, my flash drive is disk 2 so I’ll run:

DISKPART> select disk 2

Disk 2 is now the selected disk.

6. Run clean

WARNING: This deletes all data on your drive

The clean command marks all data on the drive as deleted and therefore removes all partitions and volumes. Make sure you want to do this! If you are sure, run:

DISKPART> clean

7. Create a partition

DISKPART> create partition primary

8. Select the new partition

Since we know there is only one partition, we can just run this without checking the partition number:

DISKPART> select partition 1

If you’re really curious, run list partition to check.

9. Format the partition

To format it, we’ll use the NTFS file system and run a quick format:

DISKPART> format fs=ntfs quick

10. Set the current partition as Active


DISKPART> active

11. Exit diskpart

Run exit. This will exit diskpart, but leave the command window open.

12. Mount your ISO

Use Virtual CloneDrive or similar.

13. Navigate to the mounted image and install a bootsector

My ISO is mounted as G:\, so I’ll navigate to G:\boot and run:

C:\Windows\system32> G:
G:\> cd boot
G:\boot> bootsect.exe /nt60 E:

Where E:\ in this case is my flash drive’s letter.

14. Copy the entire contents of the ISO to your flash drive

You can either do this from Windows using Ctrl+C/Ctrl+V, or from the command line using xcopy.

G:\> xcopy g:\*.* e:\ /E /H /F 

/E copies all subfolders, /H copies all hidden files and /F displays all source and destination file names as it’s copying.

Once that’s done, go and install Windows!

Tools Amplify Talent

My sister-in-law’s dad picked up golf a few years ago. We’ll call him George, because that’s a pretty generic name. On the morning of my then-girlfriend’s-brother’s-wedding-to-his-then-girlfriend (complicated, I know) we played a round of golf. George had never played before and wasn’t interested in learning, but he didn’t want to miss out on the heavily desired guy time, so he rode in the cart with us while we played. Apparently something clicked, because he, at that exact point in time, decided golf was for him. Fast forward to today, and he plays a few times a week.

Now, as I said, he’s only been playing a few years, so he’s not great. Last I heard, he shoots in the low 90’s/high 80’s. Not bad, but not good either. (For you non-golfers, 72 is the average par score for most courses. The winners of PGA tournaments score in the 60’s). It takes years upon years to master golf* so his score is not a surprise.

What is a surprise, though, is that George thinks buying new clubs will make him a better golfer. I’ve heard through the grapevine that he has already gone through 5 sets of clubs, not to mention a countless number of new drivers, putters and various other clubs. All in a few years of golf. He’s probably had a few new golf bags, too, because hell, the bag color definitely affects your swing.

He thinks that the clubs make him good. But that’s astonishingly backwards.

Tools amplify talent; talent doesn’t appear through the tools.

Buying a new Callaway driver won’t make you magically hit the ball 300 yards if you can’t swing straight to begin with.

This applies to so many other disciplines as well. Take woodworking, for example. If you have a natural eye for desk design, you can get by with low quality tools. If your jigsaw has a low RPM and a bent blade, you can still cut wood and sand it and perfect it and craft a beautiful desk. It might take longer, and might be more difficult, but you still have the ability and the eye for desk design. The most expensive jigsaw you can buy won’t magically flip the switch in your brain that allows your hands to work with wood.


Most people can’t craft a table as beautiful as this.

Tools enhance your ability. They allow you to apply the skills you have gained from years of experience. New, expensive tools are not a substitute for experience.

Quite often, prospective programmers ask: What programming language should I learn? To me, that’s a fruitless question. It doesn’t matter what language you learn; what’s important is learning how to program. You need to learn how to manipulate a computer and how to think in a logical, linear manner.

Once you know how to program, you’ll understand how to choose the right tool for the job. Find a language that augments what you’re trying to do. You wouldn’t choose Objective-C for web programming, just like you wouldn’t choose C# for embedded systems. You don’t use a belt sander for a smooth finishing sand. Choose the right tools, and they’ll help you create something awesome.

Unless you’re hoping to hit a hole in one; in that case you’re gonna have to rely on luck.

* I’m actually convinced that nobody masters golf. It’s an incredibly difficult game.

If you’re using enum.ToString() that often, you’re doing it wrong

Daniel Wertheim measured the performance of enum.ToString and found it to be 400x slower than using a comparable class with consts. That’s a massive difference, and something that, in theory, might make you think twice about using an enum.

But in practice… who cares?

You shouldn’t be using enum.ToString that often anyway

There aren’t many scenarios in which you should be ToStringing them in rapid succession. And if you are, you’re probably doing something wrong. Enums are used to hold state and are commonly compared, and enum comparisons are incredibly fast. Much, much faster than comparing strings.

The only real time you’ll have to have the string representation of an enum is if you’re populating a drop down list or something similar, and for that you ToString each item once and you’re done with it.
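
For that drop-down case, you can even skip per-value ToString calls entirely with Enum.GetNames. A minimal sketch (the OrderStatus enum here is made up for illustration):

```csharp
using System;

public enum OrderStatus { Pending, Shipped, Delivered }

class Program
{
    static void Main()
    {
        // One pass over the names - no repeated ToString calls needed.
        foreach (var name in Enum.GetNames(typeof(OrderStatus)))
        {
            Console.WriteLine(name);
        }
    }
}
```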

Just for fun, I ran a totally unscientific, unoptimized test* to see how fast a single enum.ToString() ran:

static void Main(string[] args)
{
    var sw = Stopwatch.StartNew(); // requires System.Diagnostics
    var s = Test.One.ToString();
    sw.Stop();
    Console.WriteLine(sw.Elapsed);
}

public enum Test { One }

The result was 00:00:00.0000664. This was for a single ToString with no burn in. That’s ridiculously fast, and will be even faster after it’s JIT’d.

So, yes, Daniel is right and ToStringing an enum is slow, but let’s look at the big picture here. For the amount that you should be calling ToString on an enum (very little) it’s fast enough by a large margin. Unless you run into a very rare situation, there are many more performance issues to worry about.

* Like, really, this probably breaks every rule there is.

Recovering changes with git reflog

I ran into a situation recently where I accidentally merged an unfinished feature branch directly into master. I had been working on the feature and got an urgent hotfix request. Without thinking, I branched from the feature branch to perform the hotfix changes, then merged that directly into master once I was finished.


Luckily enough, I noticed the vast number of changes in master and realized what I had done before tagging and releasing.

My first thought was to revert the merge commit, but since it was a fast forward it wasn’t that simple. The feature branch had about a month’s worth of work in it and it would have been a pain to wade through all of those commits.

What is a developer to do?

Reflog to the rescue!

Reflog is basically a list of every single action performed on the repository. Specifically, the man pages say:

Reflog is a mechanism to record when the tip of branches are updated.

So anytime you commit, checkout or merge, an entry is added to the reflog. This is important to remember, because it means that basically nothing is ever lost.

Here’s some sample output from the reflog command:

D:/Projects/reflog [develop]> git reflog
38ca8c4 HEAD@{0}: checkout: moving from feature/foo to develop
512e62c HEAD@{1}: commit: Now with 50% more foos!:
38ca8c4 HEAD@{2}: checkout: moving from develop to feature/foo

Some things:

  • The results here are listed in descending order – newest action is first
  • The first alphanumeric string is the commit hash of the result of the action – if the action is a commit it’s the new commit hash, if the action is a checkout it’s the commit hash of the new branch head, etc
  • The next column is the history of HEAD. So the first line (HEAD@{0}) is where HEAD is now, the second line is where head was before that, the third line is where head was before that, etc
  • The final column is the action along with any additional information – if the action is a commit, it’s the commit message, if the action is a checkout, it includes information about the to and from branches

Using that output, you can easily trace my footsteps (remember, descending order so we’re starting at the bottom):

  1. First, I checked out a feature branch
  2. I then committed 50% more foos
  3. Finally, I checked out the develop branch

So how do I get my data back?

It’s relatively easy – in most cases you can perform a checkout on the commit you want to get back, and branch from there.

Let’s pretend that, while in the develop branch, I somehow deleted my unmerged feature/foo branch. I can run git reflog to see the history of HEAD, and see that the last time I was on the feature/foo branch was on commit 512e62c. I can run git checkout 512e62c, then git branch branch-name:

D:/Projects/reflog [master]> git checkout 512e62c
Note: checking out '512e62c'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at 512e62c... First commit
D:/Projects/reflog [(512e62c...)]> git branch feature/foo
D:/Projects/reflog [(512e62c...)]> git checkout feature/foo
Switched to branch 'feature/foo'
D:/Projects/reflog [feature/foo]>

Notice how it said that I’m in a detached HEAD state. What that means is that HEAD is not pointing to the tip of a branch – we’re down the line of commits somewhere. However, at this point the files are checked out in my working copy and I am able to recover them. I can run git branch to create a branch from this commit, and continue working where I left off like nothing happened.