What is an Assembly Neutral Interface and why do we need it?

If you’ve spent any time browsing the source of ASP.NET 5 (aka vNext), you’ve surely seen the [AssemblyNeutral] attribute floating around. What in the hell is that?

Some Background

Right now, interfaces are tied to an assembly. If you want to implement an interface, you have to reference the assembly the interface is declared in. For instance, let's say I'm writing a framework, aptly named ZeeFramework (what it actually does and how it's implemented don't matter), and in this framework I have an interface for a logger:

namespace Zee
{
    public interface IZeeLogger
    {
        void DoZeeLogging(); //Ironically sounds like "Doozie Logging"
    }
}

Now, in the application you're writing, you want to use this amazing ZeeFramework (it's been called the framework of a generation, so obviously it's amazing). And for the logging component, you want to use log4net. You have 2 options:

  1. You write an implementation yourself
  2. You contact the creators of log4net and tell them to write an implementation

The first one sucks because each and every person who wants to use ZeeFramework has to write their own implementation. The second one sucks because the creators of log4net have to create and maintain a package with an implementation for this logger. And other logging frameworks, like nlog, have to do the same.

Okay. Let’s say you convince log4net and nlog to write implementations for everyone to use. Great. Now they both have binary dependencies on ZeeFramework just so they can implement IZeeLogger. And what if you, because you’re mean and don’t like ZeeFramework, create a framework named WhyFramework with another logging interface:

namespace Why
{
    public interface IWhyLogger
    {
        void WhyLog();
    }
}

And we run into the same issue – either everyone writing an application has to write their own implementation, or the logging frameworks have to each maintain a package with an implementation. The logging interfaces are essentially the same – why do we need all of these implementations?

Let’s all live in harmony

In a perfect world, we’d have one logging abstraction – ILogger – that every framework would use. My ZeeFramework and your WhyFramework would be dependent on that, and log4net and nlog and whatever other loggers there are would each maintain a single implementation of that single interface.

So how would that work? We could have a package whose sole member would be the ILogger interface, but then we're back to having seemingly unnecessary binary dependencies. And besides, who would maintain that? Who would own it?

The AssemblyNeutral attribute

That's where the [AssemblyNeutral] attribute comes in. When an interface is decorated with this attribute, its identity is no longer tied to an assembly. Interfaces are now just contracts, and the code basically says "hell, all I need to be able to do is call Log – can you log or not?". We're getting down to duck typing – if it walks like a duck and talks like a duck, well then it must be a duck, which means it can do duck-like things.

This removes the unnecessary binary dependencies that we have today, and allows for more loose coupling, which as developers we strive for.

To do this, anyone who wants to use the assembly neutral interface must define it in their code (it must be exactly the same!). For instance, if I defined this in ZeeFramework and log4net defined it in their package, we’d be good to go:

[AssemblyNeutral]
public interface ILogger
{
    void Log(string message);
}
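To make that concrete, here's a rough sketch of how the two sides could line up. The adapter class, its name and the exact log4net calls are hypothetical – the point is just that both assemblies declare the identical [AssemblyNeutral] contract and the loader treats them as the same type:

// Declared again, character for character, in a hypothetical log4net adapter package.
[AssemblyNeutral]
public interface ILogger
{
    void Log(string message);
}

// A hypothetical adapter that forwards the neutral contract to log4net.
public class Log4NetLogger : ILogger
{
    private readonly log4net.ILog _log =
        log4net.LogManager.GetLogger(typeof(Log4NetLogger));

    public void Log(string message)
    {
        _log.Info(message);
    }
}

// Meanwhile, ZeeFramework only ever talks to the neutral interface.
public class ZeeWorker
{
    private readonly ILogger _logger;

    public ZeeWorker(ILogger logger)
    {
        _logger = logger;
    }

    public void DoWork()
    {
        _logger.Log("Doing zee work");
    }
}

Neither assembly references the other just to share ILogger; they only agree on the contract.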

Why?

The goal is to:

  • Allow for looser coupling
  • Reduce code duplication
  • Ease dependency hell
  • Encourage community-defined standards

The ASP.NET vNext team wants to create a vibrant open source ecosystem, and part of that is letting the community decide on standards and contracts. Assembly neutral interfaces make that easier to do, and the more easily these standards can be adopted, the more likely they are to be adopted.

You can read more about Assembly Neutral Interfaces on the ASP.NET Github Wiki or on David Fowler's blog.

I understand why Facebook split Messenger into its own application

A large part of programming (probably the largest part) is refactoring. It’s the process of taking existing code and improving it. And part of that is determining when a class, feature, or even an application gets too big and splitting it up. It could mean applying the Single Responsibility Principle and splitting a class into 2 or more classes. Or moving features around in the UI so that the app is more organized.

Or even breaking a feature out into its own application. Facebook recently did just that when it split the messaging portion of Facebook off into its own application, the aptly named Messenger. They have received some flak for this, and from a user perspective it's definitely not unwarranted. I have to download a new app? A second one? What happens to the old app? What if I'm browsing Facebook and I want to respond to a message? Now I have to switch apps? Are the permissions different on both apps? If I change my password do I have to re-login in both apps? Do I have to have the main Facebook app or can I only use Messenger? It adds confusion, and one thing that isn't good is confusing your users.

Mark Zuckerberg recently explained the reasoning behind the split himself:

… but if we wanted to focus on serving this [use case] well, we had to build a dedicated and focused experience. We build for the whole community. … We realize that we have a lot to earn in terms of trust and proving that this standalone messenger experience will be really good. We have some of our most talented people working on this.

Although people may be confused, what Facebook is trying to do is enhance the messaging user experience. People may not realize it now, but it was the right move, and especially from a programming perspective. They broke off Messenger so they could decrease the size of the teams working on both Messenger and the main Facebook application.

Quality vs Quantity

The quality of a product often goes down with the number of hands that touch it. The more programmers there are on a team, the more communication paths there are:

[Image: diagram of communication paths growing with team size]
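The arithmetic behind that picture is the standard one (my numbers, not from the original post): every pair of people is a potential communication path, so

\text{paths}(n) = \binom{n}{2} = \frac{n(n-1)}{2}
\qquad\Rightarrow\qquad
\text{paths}(4) = 6, \quad \text{paths}(10) = 45, \quad \text{paths}(50) = 1225.

Double the team and you roughly quadruple the ways a message can get garbled.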

Many programmers know that as the size of a team grows, so does the time it takes to complete a feature. And things get forgotten. And Jerry said this but Ronnie said that. And you end up having 6-person code reviews.

And then there are the meetings. Oh god, the meetings. Let’s synchronously get together and talk about doing something instead of actually, you know, doing it.


Smaller teams are more focused. They’re leaner, meaner and can get things done faster. Leaner, meaner, more focused teams means a better overall product – both Messenger and Facebook itself. And a better overall product is, obviously, better for users.

I understand the user blowback from this, but overall this was a smart move. Besides, just like every time Facebook changes their layout, everyone will get used to it in a matter of weeks.

Compression on the web is surprisingly underused

Eric Lawrence posted an article the other day on web compression, covering a bunch of different algorithms, what should get compressed, and how to get the best performance on your site by mixing minification and compression. It's a great read with lots of good, useful information.

Isn’t compression used on most sites already?

This got me curious about the state of compression on the web. How many sites use some form of compression? My initial assumption was > 90%, but even that sounded low. I looked up some stats on W3Techs, which provides "information about the usage of various types of technologies on the web." Based on their studies, they found that:

Compression is used by 58.1% of all the websites.

Fifty-eight point one percent.

That's it? I was astounded when I first read that number and I'm still pretty surprised now. There are estimated to be over 1 billion websites in existence right now, which means somewhere north of 400 million of them are sending uncompressed data. That's a lot of useless bytes being sent over the wire.

It makes no sense to not enable compression. It's incredibly easy to set up in both Apache and IIS (on IIS you seriously just have to click a few buttons). Enabling it can only help your users: all major browsers have supported it for roughly the past 10 years, and any browser that supports compression advertises it by sending the Accept-Encoding: gzip request header. If a client doesn't send that header, the server simply won't compress the response. Everyone gets their content either way; some will just get it faster than others.

The only downside is a slight increase in CPU usage, but that is a minimal increase for a massive decrease in response size. To show how much the response size is decreased, we can use the Composer in Fiddler to run two requests to http://davidzych.com, one with the Accept-Encoding: gzip header and one without:

  Without compression:       79,203 bytes
  With compression (gzip):   21,499 bytes
  Savings:                   72.8%

Multiply that over thousands of users a month and that’s a significant bandwidth savings.
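If you don't have Fiddler handy, here's a minimal sketch of the same comparison using HttpClient – one request that advertises gzip support and one that doesn't (the URL is just the one from the example above; treat the output as an illustration, not a benchmark):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class CompressionCheck
{
    static async Task Main()
    {
        var url = "http://davidzych.com";

        using (var client = new HttpClient())
        {
            // Request 1: no Accept-Encoding header, so the server should respond uncompressed.
            var plain = await client.GetByteArrayAsync(url);

            // Request 2: advertise gzip. Automatic decompression is off by default,
            // so the bytes we read back are the compressed payload as sent on the wire.
            var request = new HttpRequestMessage(HttpMethod.Get, url);
            request.Headers.AcceptEncoding.Add(new StringWithQualityHeaderValue("gzip"));
            var response = await client.SendAsync(request);
            var compressed = await response.Content.ReadAsByteArrayAsync();

            Console.WriteLine($"Uncompressed: {plain.Length:N0} bytes");
            Console.WriteLine($"Compressed:   {compressed.Length:N0} bytes");
            Console.WriteLine($"Savings:      {100 - 100.0 * compressed.Length / plain.Length:F1}%");
        }
    }
}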

Compress yourself before you wreck yourself

Check yourself before you wreck yourself
Do yourself a favor and make sure that compression is enabled on your website. It's the ultimate low-cost, high-reward feature.

Your server’s internet pipes will thank you.

Google’s Material Design Spec is a great idea

Google just released their first major update to their Material Design Spec. Originally released back in June, the spec is a document that outlines the best approach to application design based on their material design philosophy. Its goal is to:

Create a visual language that synthesizes classic principles of good design with the innovation and possibility of technology and science.

It's a great introduction to application design and is relatively easy to follow. It has color palettes, layout ideas and animation guidelines, among many other things.

What in the crap is “Material Design”?

It is a design philosophy for virtual applications that attempts to replicate the physical world. Items in the physical world have physical properties – they have mass, they rotate, they lay on top of each other, they move. They accelerate and decelerate, with larger objects taking longer than smaller objects to get up to speed. When an object is touched, it provides tactile feedback and moves in predictable ways.

The goal of material design is to replicate these effects to provide a consistent, immersive experience across applications. When the user taps on an item in the virtual world, it should provide feedback in some way – it should ripple, or highlight, or float. Pages shouldn’t just appear – they should slide in, with natural looking acceleration and deceleration. It’s appealing to the eyes and provides a great deal of polish to an application.

The following video shows a few examples of what Google is trying to achieve:

Everything is fluid, and moves, and is colorful and fun and appealing. But it’s not over the top – animations don’t take 30 seconds and cause users to get frustrated because they have to wait to perform their action. The animations are immediate, and provide just enough visual appeal while not getting in the way.

Okay, really, is this important?

I think so. To my knowledge this is the first document of its kind – an easy-to-use, "here's what looks good and why" guide for application design. It's Layouts and Colors for Dummies*. It will, hopefully, create consistency not only across environments, but across devices, OSes and applications. If applications function in similar ways, the learning curve turns from this:

This means learning is hard

to this:

Learning is easy!

This will allow developers to spend less time focusing on specific environments and more time creating a single, awesome design that works everywhere. Less time duplicating means more time awesome-ing.

Feedback

Google has also promised that they’ll take community contributions:

… we set out to create a living document that would grow with feedback from the community.

The fact that Google is taking feedback from the community is, probably, the best part. This isn’t Google trying to tell the world how to do things – they aren’t throwing slop (this spec) to the pigs (the development community) and expecting us to eat it up and do whatever they tell us. This is a document by the community, for the community, and for the greater good of computing.

Just like CommonMark, this is an attempt to grow the field of computing, and I'm definitely on board.


* If this isn’t a real book it should be.

Install Windows 10 from a USB Flash Drive

I'm writing this because I can, for some reason, never remember how to use Diskpart. And who uses DVDs anymore? Download the Windows 10 preview ISO from here: http://windows.microsoft.com/en-us/windows/preview

Steps

1. Insert a USB drive at least 4 GB in size

2. Open a command prompt as administrator

Hit Windows Key, type cmd and hit Ctrl+Shift+Enter. This will force it to open as admin.

3. Run diskpart

This will open and run the Diskpart command line utility, which allows you to manage disks, partitions and volumes.

C:\Windows\system32> diskpart

4. Run list disk

This will list all disks on your system. You'll see something similar to this:

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          238 GB      0 B
  Disk 1    Online          465 GB      0 B
  Disk 2    Online           29 GB      0 B

5. Select your flash drive by running select disk #

Find the item that corresponds with your flash drive and select the disk. In the example above, my flash drive is disk 2 so I’ll run:

DISKPART> select disk 2

Disk 2 is now the selected disk.

6. Run clean

WARNING: This deletes all data on your drive

The clean command marks all data on the drive as deleted and therefore removes all partitions and volumes. Make sure you want to do this! If you are sure, run:

DISKPART> clean

7. Create a partition

DISKPART> create partition primary

8. Select the new partition

Since we know there is only one partition, we can select it without checking the partition number:

DISKPART> select partition 1

If you're really curious, run list partition to double-check.

9. Format the partition

To format it, we’ll use the NTFS file system and run a quick format:

DISKPART> format fs=ntfs quick

10. Set the current partition as Active

Run:

DISKPART> active

11. Exit diskpart

Run exit. This will exit diskpart, but leave the command window open.
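As an aside, if you do this often, the diskpart steps above can be saved to a text file and replayed with diskpart /s. The disk number below is the one from my example – run list disk and double-check yours first, because clean wipes the selected disk:

rem usb-prep.txt – run with: diskpart /s usb-prep.txt
select disk 2
clean
create partition primary
select partition 1
format fs=ntfs quick
active
exit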

12. Mount your ISO

Use Virtual CloneDrive or similar.

13. Navigate to the mounted image and install a bootsector

My ISO is mounted as G:\, so I’ll navigate to G:\boot and run:

C:\Windows\system32> G:
G:\> cd boot
G:\boot> bootsect.exe /nt60 E:

Where E: in this case is my flash drive's letter.

14. Copy the entire contents of the ISO to your flash drive

You can either do this from Explorer using Ctrl+C / Ctrl+V, or from the command line using xcopy.

G:\> xcopy g:\*.* e:\ /E /H /F 

/E copies all subfolders, /H copies all hidden files and /F displays all source and destination file names as it’s copying.

Once that’s done, go and install Windows!

Tools Amplify Talent

My sister in law’s dad picked up golf a few years ago. We’ll call him George, because that’s a pretty generic name and he was curious about golf. Anyway. On the morning of my then-girlfriend’s-brother’s-wedding-to-his-then-girlfriend (complicated, I know) we played a round of golf. George had never played before, wasn’t interested in learning, but didn’t want to miss out on the heavily desired guy time so he rode in the cart with us while we played. Apparently something clicked, because he, at that exact point in time, decided golf was for him. Fast forward to today, and he plays a few times a week.

Now, as I said, he’s only been playing a few years, so he’s not great. Last I heard, he shoots in the low 90’s/high 80’s. Not bad, but not good either. (For you non-golfers, 72 is the average par score for most courses. The winners of PGA tournaments score in the 60’s). It takes years upon years to master golf* so his score is not a surprise.

What is a surprise, though, is that George thinks buying new clubs will make him a better golfer. I've heard secondhand that he has already gone through 5 sets of clubs, not to mention countless new drivers, putters and various other clubs. All in a few years of golf. He's probably had a few new golf bags, too, because hell, the bag color definitely affects your swing.

He thinks that the clubs make him good. But that’s astonishingly backwards.

Tools amplify talent; talent doesn’t appear through the tools.

Buying a new Callaway driver won’t make you magically hit the ball 300 yards if you can’t swing straight to begin with.

This applies to so many other disciplines as well. Take woodworking, for example. If you have a natural eye for desk design, you can get by with low quality tools. If your jigsaw has a low RPM and a bent blade, you can still cut wood and sand it and perfect it and craft a beautiful desk. It might take longer, and might be more difficult, but you still have the ability and the eye for desk design. The most expensive jigsaw you can buy won’t magically flip the switch in your brain that allows your hands to work with wood.

[Image: an elaborately carved wooden table]

Most people can’t craft a table as beautiful as this.

Tools enhance your ability. They allow you to apply the skills you have gained from years of experience. New, expensive tools are not a substitute for experience.

Quite often, prospective programmers ask, "What programming language should I learn?" To me, that's a fruitless question. It doesn't matter which language you learn first; what's important is learning how to program. You need to learn how to manipulate a computer and how to think in a logical, linear manner.

Once you know how to program, you’ll understand how to choose the right tool for the job. Find a language that augments what you’re trying to do. You wouldn’t choose Objective-C for web programming, just like you wouldn’t choose C# for embedded systems. You don’t use a belt sander for a smooth finishing sand. Choose the right tools, and they’ll help you create something awesome.

Unless you’re hoping to hit a hole in one; in that case you’re gonna have to rely on luck.


* I’m actually convinced that nobody masters golf. It’s an incredibly difficult game.

If you’re using enum.ToString() that often, you’re doing it wrong

Daniel Wertheim measured the performance of enum.ToString and found it to be 400x slower than using a comparable class with const fields. That's a massive difference, and something that, in theory, might make you think twice about using an enum.

But in practice… who cares?

You shouldn’t be using enum.ToString that often anyway

There aren’t many scenarios in which you should be ToStringing them in rapid succession. And if you are, you’re probably doing something wrong. Enums are used to hold state and are commonly compared, and enum comparisons are incredibly fast. Much, much faster than comparing strings.

The only real time you'll need the string representation of an enum is when you're populating a drop-down list or something similar, and for that you ToString each item once and you're done with it.

Just for fun, I ran a totally unscientific, unoptimized test* to see how fast a single enum.ToString() ran:

using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        // Time a single, cold enum.ToString() call – no warm-up, no averaging.
        var sw = new Stopwatch();
        sw.Start();
        var s = Test.One.ToString();
        sw.Stop();
        Console.WriteLine(sw.Elapsed);
        Console.Read();
    }
}

public enum Test
{
    One,
    Two,
    Three
}

The result was 00:00:00.0000664. This was for a single ToString with no burn in. That’s ridiculously fast, and will be even faster after it’s JIT’d.

So, yes, Daniel is right and ToStringing an enum is slow, but let’s look at the big picture here. For the amount that you should be calling ToString on an enum (very little) it’s fast enough by a large margin. Unless you run into a very rare situation, there are many more performance issues to worry about.
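And if you really do find yourself converting the same enum values to strings over and over in a hot path, a minimal sketch of a workaround (the names here are just illustrative) is to pay the ToString cost once per value and cache the results:

using System;
using System.Collections.Generic;
using System.Linq;

public static class TestNames
{
    // Built once; every lookup after that is a dictionary hit instead of a ToString call.
    private static readonly Dictionary<Test, string> Cache =
        Enum.GetValues(typeof(Test))
            .Cast<Test>()
            .ToDictionary(value => value, value => value.ToString());

    public static string Name(this Test value)
    {
        return Cache[value];
    }
}

// Usage: var label = Test.One.Name();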


* Like, really, this probably breaks every rule there is.

Recovering changes with git reflog

I ran into a situation recently where I accidentally merged an unfinished feature branch directly into master. I had been working on the feature and got an urgent hotfix request. Without thinking, I branched from the feature branch to perform the hotfix changes, then merged that directly into master once I was finished.

Whoops.

Luckily enough, I noticed the vast number of changes in master and realized what I had done before tagging and releasing.

My first thought was to revert the merge, but since it was a fast-forward there was no merge commit to revert. The feature branch had about a month's worth of work in it, and it would have been a pain to wade through and undo all of those commits individually.

What is a developer to do?

Reflog to the rescue!

Reflog is basically a running list of every action that moves a branch tip (or HEAD) in your local repository. Specifically, the man pages say:

Reflog is a mechanism to record when the tip of branches are updated.

So any time you commit, checkout or merge, an entry is added to the reflog. This is important to remember, because it means that basically nothing you've committed is ever lost.

Here’s some sample output from the reflog command:

D:/Projects/reflog [develop]> git reflog
38ca8c4 HEAD@{0}: checkout: moving from feature/foo to develop
512e62c HEAD@{1}: commit: Now with 50% more foos!:
38ca8c4 HEAD@{2}: checkout: moving from develop to feature/foo

Some things:

  • The results here are listed in descending order – newest action is first
  • The first alphanumeric string is the commit hash of the result of the action – if the action is a commit it’s the new commit hash, if the action is a checkout it’s the commit hash of the new branch head, etc
  • The next column is the history of HEAD. So the first line (HEAD@{0}) is where HEAD is now, the second line is where head was before that, the third line is where head was before that, etc
  • The final column is the action along with any additional information – if the action is a commit, it’s the commit message, if the action is a checkout, it includes information about the to and from branches

Using that output, you can easily trace my footsteps (remember, descending order so we’re starting at the bottom):

  1. First, I checked out a feature branch
  2. I then committed 50% more foos
  3. Finally, I checked out the develop branch

So how do I get my data back?

It’s relatively easy – in most cases you can perform a checkout on the commit you want to get back, and branch from there.

Let’s pretend that, while in the develop branch, I somehow deleted my unmerged feature/foo branch. I can run git reflog to see the history of HEAD, and see that the last time I was on the feature/foo branch was on commit 512e62c. I can run git checkout 512e62c, then git branch branch-name:

D:/Projects/reflog [master]> git checkout 512e62c
Note: checking out '512e62c'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at 512e62c... First commit
D:/Projects/reflog [(512e62c...)]> git branch feature/foo
D:/Projects/reflog [(512e62c...)]> git checkout feature/foo
Switched to branch 'feature/foo'
D:/Projects/reflog [feature/foo]>

Notice how it said that I’m in a detached HEAD state. What that means is that HEAD is not pointing to the tip of a branch – we’re down the line of commits somewhere. However, at this point the files are checked out in my working copy and I am able to recover them. I can run git branch to create a branch from this commit, and continue working where I left off like nothing happened.
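A couple of shortcuts, for completeness (the hash and the HEAD@{1} position are from my example above – substitute whatever your own reflog shows). You can create the rescue branch directly without the detached-HEAD detour, and one way to undo an accidental fast-forward of master would be to reset it back to the pre-merge position the reflog recorded:

# create the branch straight from the reflog'd commit
git branch feature/foo 512e62c

# move master back to where it was before the merge
# (quoted because some shells treat {} specially)
git checkout master
git reset --hard "HEAD@{1}"

Just be careful with reset --hard – make sure the work you're discarding really is reachable from another branch first.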

CommonMark only wants to help

I’m sure many of you have heard of Markdown, which is a plain text format for writing structured documents. Developed in 2004, it is in wide use throughout the internet and has simple syntax:

Heading
=======

Sub-heading
-----------
  
### Traditional html title
 
Paragraphs are separated
by a blank line.
 
Leave 2 spaces at the end of a line to do a  
line break
 
Text attributes *italic*,
**bold**, `monospace`.
 
A [link](http://example.com).
<<<   No space between ] and (  >>>

Shopping list:
 
* apples
* oranges
* pears
 
Numbered list:
 
1. apples
2. oranges
3. pears
 
The rain---not the reign---in
 Spain.

Markdown was created by John Gruber, and the "spec" (if you can call it that) is just the initial implementation, written in Perl. Many other implementations have sprung up for various languages, and they all use that initial implementation as the spec, even though it is buggy and leaves a lot of behavior ambiguous.

CommonMark

CommonMark is an effort by a group of people to create an unambiguous Markdown spec.

We propose a standard, unambiguous syntax specification for Markdown, along with a suite of comprehensive tests to validate Markdown implementations against this specification. We believe this is necessary, even essential, for the future of Markdown.

With so many differing implementations and such wide usage throughout the internet, it's impossible to know whether the Markdown you write on Reddit will render the same way in a readme.md on Github. The goal of CommonMark is to make sure that it will.
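To make the problem concrete, here's a small snippet of the kind of thing implementations have historically disagreed on – how much indentation nests a list or starts a code block inside a list item (my example, not one from the CommonMark spec):

* item one
* item two
  * indented two spaces – a nested list, or just more of item two?

        indented eight spaces – a code block inside the list item,
        or a top-level code block?

Run that through a handful of renderers and you can get different answers; CommonMark's spec and test suite pin down exactly one.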

I think it's a great cause, and as I said in the Coding Horror discussion, nothing but good things can come out of this. Another member, however, reminded me of why I was commenting on that discussion in the first place:

Well, with the exception of this little spat, of course.

Oh yes, this little spat. The spat between the CommonMark team and John Gruber. Apparently John is not on board with the standardization of Markdown and has ignored requests to be on the team. CommonMark was started 2 years ago and originally requested that John join the project. They heard nothing from John for 2 years, until they announced the project with its original name of Standard Markdown. Apparently John thought the name was infuriating and insisted that it be changed. It is now known as CommonMark.

John appears to be 100% against this project and the standardization of Markdown.

Why?

The intent behind CommonMark is to make Markdown more consistent and make the internet as a whole a better place. This is being done because Markdown is highly regarded throughout the industry. It’s being done because people love Markdown and want to see it live long after many projects die.

Markdown has been neglected ever since shortly after its initial release. The last version, 1.0.1, was released on December 17, 2004. 10 freaking years ago. It's fine to no longer have any interest in maintaining a project, but to not let people continue its development is beyond me.

I would love to hear from John on the reasoning behind his lack of interest in CommonMark. He may very well have good reasons and can set the record straight. But for now, I just don’t get it.

Coffee shops and programming… now with science!

Remember that scene in Family Guy, where there are two guys in a coffee shop and one asks if the other will watch him work? You don’t? Okay, fine:

Guy #2: Hey, getting some writing done there buddy?

Guy #1: Yeah, setting up in public so everybody can watch me type my big screenplay.

Guy #2: Me too. All real writers need to be seen writing otherwise what’s the point, right?

Guy #1: You should totally write that down!

Guy #2: Okay, will you watch me?

Funny? Yes. We've all seen these people in our local Starbucks – sitting with their laptops, diligently working away for the world to see. Go home, you have probably said under your breath. Nobody wants to see you arrogantly type out in public. (I certainly have never said that, but it's because I'm half Canadian and therefore 50% nicer than the average American.)

But is there a reason people work in coffee shops, other than to show off their assiduous lifestyle?

Well, apparently it can help you be more creative.

Researchers at the University of Illinois conducted an experiment to determine how ambient noise can affect creativity:

This paper examines how ambient noise, an important environmental variable, can affect creativity. Results from five experiments demonstrate that a moderate (70 dB) versus low (50 dB) level of ambient noise enhances performance on creative tasks and increases the buying likelihood of innovative products. A high level of noise (85 dB), on the other hand, hurts creativity. Process measures reveal that a moderate (vs. low) level of noise increases processing difficulty, inducing a higher construal level and thus promoting abstract processing, which subsequently leads to higher creativity. A high level of noise, however, reduces the extent of information processing and thus impairs creativity.

The subjects were exposed to differing levels of ambient noise and were tested on their creativity by taking Remote Associates Tests. The researchers found that a moderate level of noise (which they classify as ~70 dB) helps raise cognitive awareness and therefore increases your creativity. Coffee shops, if you haven't guessed yet, sit in that same decibel range and make for a perfect creativity booster.

This level of ambient noise keeps your brain in a state of heightened awareness, where it is always engaged and actively thinking, calculating, and processing data. It's quiet enough that it's not a distraction, and there are enough different noises going on (people talking, footsteps, doors opening and closing, coffee grinding, milk steaming, etc.) that your brain can't focus on any single noise, so it throws the whole mix into the background and tunes it out. Coffee shops are also "safe" – most people are comfortable in them and don't worry about the people around them, which allows their minds to get absorbed in their work.

As programmers, we tiptoe through this weird world between art and science. We need the math and reasoning skills of a scientist but the creative process of an artist. Coffee shops can help spark the creative side if you're having a hard time finding the artist inside.

But what about those times where you’re 3 blocks away from the nearest Starbucks and have hit the creative wall? Enter Coffitivity. The goal is to allow someone to throw some headphones on and simulate the experience of being in a coffee shop. They have a few different loops, ranging from morning coffee shops to university lunch hangouts.

I’ve been listening to it for a few weeks and so far I think it works pretty well. After the first few minutes I forget I have headphones on and quickly get engrossed in whatever tasks I’m working on. (I’m even listening to it as I write this)

My only qualm at the moment is that the loops seem to be pretty short. After a while it starts getting distracting hearing the same woman’s laugh every 10 minutes. Kind of annoying. But if you’re dying to hear a coffee shop in a pinch, you can’t beat it.