CommonMark only wants to help

I’m sure many of you have heard of Markdown, which is a plain text format for writing structured documents. Developed in 2004, it is in wide use throughout the internet and has simple syntax:

Heading
=======

Sub-heading
-----------
  
### Another deeper heading
 
Paragraphs are separated
by a blank line.
 
Leave 2 spaces at the end of a line to do a  
line break
 
Text attributes *italic*,
**bold**, `monospace`.
 
A [link](http://example.com).
(Note: no space between the ] and the (.)

Shopping list:
 
* apples
* oranges
* pears
 
Numbered list:
 
1. apples
2. oranges
3. pears
 
The rain---not the reign---in
 Spain.

Markdown was created by John Gruber, and the “spec” (if you can call it that) is just the initial implementation, written in Perl. Many other implementations have sprung up for various languages, and they all treat this initial implementation as the spec, even though it is buggy and underspecified, and therefore incredibly ambiguous.

CommonMark

CommonMark is an effort by a group of people to create an unambiguous Markdown spec.

We propose a standard, unambiguous syntax specification for Markdown, along with a suite of comprehensive tests to validate Markdown implementations against this specification. We believe this is necessary, even essential, for the future of Markdown.

With so many differing implementations in wide use throughout the internet, it’s impossible to know whether or not the Markdown you write on Reddit will work in a readme.md on GitHub. The goal of CommonMark is to make sure that it will.
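To make that ambiguity concrete, here’s a toy Python sketch — hypothetical simplified rules, not real parser code — of two hard-line-break behaviors that renderers have historically disagreed on: classic Markdown only breaks a line after two trailing spaces, while some comment fields treat every newline as a break:

```python
# A toy sketch, NOT real parser code: two simplified hard-line-break rules
# that Markdown implementations have historically disagreed on.
def render_strict(text):
    # Gruber-style rule: a newline is a hard break only after two trailing spaces.
    rendered = []
    for line in text.split("\n"):
        if line.endswith("  "):
            rendered.append(line.rstrip() + "<br>")
        else:
            rendered.append(line)
    return " ".join(rendered)

def render_lenient(text):
    # Comment-field-style rule: every newline is a hard break.
    return "<br>".join(line.rstrip() for line in text.split("\n"))

source = "roses are red\nviolets are blue"
print(render_strict(source))   # roses are red violets are blue
print(render_lenient(source))  # roses are red<br>violets are blue
```

Same input, two different outputs — exactly the kind of divergence a real spec and test suite would eliminate.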

I think it’s a great cause, and as I said in the Coding Horror discussion, nothing but good things can come out of this. Another member, however, reminded me of why I was commenting on that discussion in the first place:

Well, with the exception of this little spat, of course.

Oh yes, this little spat. The spat between the CommonMark team and John Gruber. Apparently John is not on board with the standardization of Markdown and has ignored requests to be on the team. CommonMark was started 2 years ago and originally requested that John join the project. They heard nothing from John for 2 years, until they announced the project under its original name, Standard Markdown. Apparently John found the name infuriating and insisted that it be changed. It is now known as CommonMark.

John appears to be 100% against this project and the standardization of Markdown.

Why?

The intent behind CommonMark is to make Markdown more consistent and make the internet as a whole a better place. It’s being done because Markdown is highly regarded throughout the industry, because people love Markdown and want to see it live long after many projects die.

Markdown has been neglected since shortly after its initial release. The last version, 1.0.1, was released on December 17, 2004. 10 freaking years ago. It’s fine to no longer have any interest in maintaining a project, but refusing to let others continue its development is beyond me.

I would love to hear from John on the reasoning behind his lack of interest in CommonMark. He may very well have good reasons and can set the record straight. But for now, I just don’t get it.

Coffee shops and programming… now with science!

Remember that scene in Family Guy, where there are two guys in a coffee shop and one asks if the other will watch him work? You don’t? Okay, fine:

Guy #2: Hey, getting some writing done there buddy?

Guy #1: Yeah, setting up in public so everybody can watch me type my big screenplay.

Guy #2: Me too. All real writers need to be seen writing otherwise what’s the point, right?

Guy #1: You should totally write that down!

Guy #2: Okay, will you watch me?

Funny? Yes. We’ve all seen these people in our local Starbucks – sitting on their laptops, diligently working away for the world to see. “Go home,” you have probably said under your breath. “Nobody wants to see you arrogantly type out in public.” (I certainly have never said that, but that’s because I’m half Canadian and therefore 50% nicer than the average American.)

But is there a reason people work in coffee shops, other than to show off their assiduous lifestyle?

Well, apparently it can help you be more creative.

Researchers at the University of Illinois conducted an experiment to determine how ambient noise can affect creativity:

This paper examines how ambient noise, an important environmental variable, can affect creativity. Results from five experiments demonstrate that a moderate (70 dB) versus low (50 dB) level of ambient noise enhances performance on creative tasks and increases the buying likelihood of innovative products. A high level of noise (85 dB), on the other hand, hurts creativity. Process measures reveal that a moderate (vs. low) level of noise increases processing difficulty, inducing a higher construal level and thus promoting abstract processing, which subsequently leads to higher creativity. A high level of noise, however, reduces the extent of information processing and thus impairs creativity.

The subjects were exposed to differing levels of ambient noise and tested on their creativity using Remote Associates Tests. The researchers found that a moderate level of noise (which they classify as ~70 dB) helps raise cognitive awareness and therefore increases your creativity. Coffee shops, if you haven’t guessed yet, sit in that same decibel range and make for a perfect creativity booster.

This level of ambient noise keeps your brain in a state of heightened awareness, where it is always engaged and actively thinking, calculating, and processing data. It’s quiet enough that it’s not a distraction, and there are enough different noises going on (people talking, footsteps, doors opening and closing, coffee grinding, milk steaming, etc.) that your brain can’t focus on any single noise, so it throws them all in the background and tunes them out. Coffee shops are also “safe” – most people are comfortable in them and don’t worry about the people around them, which allows their minds to get absorbed in their work.

As programmers, we tiptoe through a weird world between art and science. We need the math and reasoning skills of a scientist but the creative process of an artist. Coffee shops can help draw out the creative side if you’re having a hard time finding the artist inside.

But what about those times when you’re 3 blocks away from the nearest Starbucks and have hit the creative wall? Enter Coffitivity. The goal is to allow someone to throw some headphones on and simulate the experience of being in a coffee shop. They have a few different loops, ranging from morning coffee shops to university lunch hangouts.

I’ve been listening to it for a few weeks and so far I think it works pretty well. After the first few minutes I forget I have headphones on and quickly get engrossed in whatever tasks I’m working on. (I’m even listening to it as I write this)

My only qualm at the moment is that the loops seem to be pretty short. After a while it starts getting distracting hearing the same woman’s laugh every 10 minutes. Kind of annoying. But if you’re dying to hear a coffee shop in a pinch, you can’t beat it.

The case against EntityDataSource

Why does Microsoft insist on developing the EntityDataSource? I really don’t see the benefit. It’s just adding bloat to Entity Framework, especially since most people are moving away from Web Forms in favor of MVC. It was never even a good idea in the first place. It’s supposed to make data binding easier, but it ends up causing many problems.

It’s difficult to debug

It works great when it works, but when it doesn’t work… Ugh. When it breaks, it’s nearly impossible to determine the problem without blindly Googling around and trying things until they work, which is terrible. I once had to use SQL Server Profiler to monitor queries and determine the SQL it had generated so I could properly debug the issue.

You also can’t see the results being returned without viewing the output on the page. Manually binding allows you to view the IEnumerable returned and manipulate the results further, if necessary.

It forces data access logic in your views

If you want a list of all Users in your database then great, add an EDS and grab all of them. But what if you need to filter them? Forget about it. You have to add a Where property and manually write the SQL yourself, or use the WhereParameters element with a bunch of verbose filters:

<WhereParameters>
    <asp:SessionParameter Name="Id" DbType="Int32" SessionField="Id" />
    <asp:SessionParameter Name="Name" DbType="String" SessionField="Name" />
    <asp:ControlParameter ControlID="txtCompany" DbType="String" 
      DefaultValue="" Name="Company" PropertyName="Company" />
</WhereParameters>

It’s a mess of text and it’s impossible to determine what you’re actually selecting. Compared to:

context.Tests.Where(t => t.Id == (int)Session["Id"] 
                                  && t.Name == (string)Session["Name"] 
                                  && t.Company == txtCompany.Text);

Much cleaner and much more readable.
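The same contrast exists outside .NET. As a hypothetical Python analogy (the data and field names here are made up for illustration), compare assembling a filter as an opaque string with filtering in code:

```python
# Hypothetical analogy: string-built filters vs. filtering in code.
users = [
    {"id": 1, "name": "Dave", "company": "Acme"},
    {"id": 2, "name": "Anna", "company": "Initech"},
]

# String-built filter: opaque, easy to get wrong, hard to debug.
where = "id = 1 AND company = 'Acme'"

# Filtering in code: readable, inspectable in a debugger, checked at runtime.
matches = [u for u in users if u["id"] == 1 and u["company"] == "Acme"]
print(matches)  # [{'id': 1, 'name': 'Dave', 'company': 'Acme'}]
```

The in-code version makes it obvious what is being selected, which is exactly the advantage of the LINQ predicate above.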

This also forces data access to live in your views which is a violation of Separation of Concerns. Data access should live where it belongs – in the aptly named Data Access Layer.

It’s slow

As a quick example, I set up a page with an EntityDataSource selecting all columns from a table with 50,000 rows and putting them into a GridView. To compare, I also manually bound the GridView by selecting from the ObjectContext itself.

Here’s a sample of the code for the EDS:

<asp:EntityDataSource runat="server" ID="eds" ConnectionString="name=Test" 
    AutoGenerateWhereClause="True" DefaultContainerName="Test" 
    EntitySetName="Test" AutoSort="true" />
<asp:GridView runat="server" ID="gvEds" DataSourceID="eds" AllowPaging="True"></asp:GridView>

And the manual binding:

<asp:GridView runat="server" ID="gvNoEds" AllowPaging="true" PageSize="50"></asp:GridView>

var tests = context.Test.ToList();
gvNoEds.DataSource = tests;
gvNoEds.DataBind();

Results:

Entity Data Source: 0.020342 seconds
Manual binding:     0.006623 seconds

Sure, both are fast, but this shows that the EDS is roughly three times slower than the manual binding. In a situation with many concurrent users and a lot more on a page, that could be 0.1 seconds compared to 0.03 seconds, which is a noticeable difference.
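If you want to reproduce a comparison like this, the measurement pattern is simple. Here’s a minimal timing sketch — in Python, with stand-in functions rather than EntityDataSource itself — showing the wall-clock approach:

```python
import time

def bind_directly(rows):
    # Stand-in for manual binding: hand the list straight to the consumer.
    return list(rows)

def bind_through_wrapper(rows):
    # Stand-in for a data-source control: an extra layer of work per row.
    return [dict(row) for row in rows]

# 50,000 fake rows, mirroring the table size in the test above.
rows = [{"id": i, "name": f"row{i}"} for i in range(50_000)]

start = time.perf_counter()
direct_result = bind_directly(rows)
direct = time.perf_counter() - start

start = time.perf_counter()
wrapped_result = bind_through_wrapper(rows)
wrapped = time.perf_counter() - start

print(f"direct: {direct:.6f}s, wrapped: {wrapped:.6f}s")
```

Run each path several times and take the median; single runs of sub-millisecond work are noisy.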

Kill it with fire

I’m really not sure why Microsoft insists on continuously supporting EntityDataSource. I see a slow, outdated control helper that abstracts too much while adding a lot of complexity. Let it die.

Why use strong and em over b and i?

One question I see around the interwebs a lot is why strong and em should be used over b and i. If we look at the HTML 4 spec, it lists b and i under the Font style section, and notes:

The following HTML elements specify font information. Although they are not all deprecated, their use is discouraged in favor of style sheets.

The strong and em tags are listed under the Phrase elements section and note:

Phrase elements add structural information to text fragments.

Now that’s all well and good, but what does it mean?

Among other things:

b and i are visual.

What this means is that in a web browser, when the HTML parser encounters a <b> tag, it knows to bold the font. It’s a physical property, meaning “use thicker ink when displaying this word.” Same with <i> – “skew this so it looks like it’s going really fast” (or something like that). These are tags whose sole purpose is to physically change the display of the text.

Okay. Great.

Well, maybe not. What about when a blind person views a page? The visual properties mean nothing to them. That’s where em and strong come in.

em and strong are semantic

The em tag indicates emphasis and the strong tag indicates stronger emphasis. This could (and usually does) mean italics and bold on a web page. But it also could alert a text-to-speech program to use a different tone of voice when encountering the text. They have meaning behind them, and that meaning means different things to different interpreters.

As noted by the HTML 4 spec, b and i, although not deprecated, should be avoided because not only are they style properties that should be handled in CSS, they don’t have any semantic meaning. Use strong and em in their place.
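To see what “meaning means different things to different interpreters” looks like in practice, here’s a toy Python sketch of a non-visual consumer (a made-up “screen reader” annotator, not real assistive software) that reacts to semantic tags and ignores purely visual ones:

```python
from html.parser import HTMLParser

class ToneAnnotator(HTMLParser):
    """Toy 'screen reader': semantic tags change tone; visual tags do nothing."""
    def __init__(self):
        super().__init__()
        self.output = []

    def handle_starttag(self, tag, attrs):
        if tag == "em":
            self.output.append("(emphatic)")
        elif tag == "strong":
            self.output.append("(strongly emphatic)")
        # <b> and <i> carry no semantic meaning here, so they are ignored.

    def handle_data(self, data):
        self.output.append(data)

annotator = ToneAnnotator()
annotator.feed("You <em>must</em> see this. It is <b>bold</b>.")
print(annotator.output)
```

The `<em>` produces a tone cue the annotator can act on; the `<b>` is invisible to it, just as its visual styling is invisible to a listener.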

Building the Castle Windsor Dependency Injection Populator for ASP.NET vNext

As I blogged about previously, I built a Castle Windsor populator for ASP.NET vNext. For this post, I’ll walk through some of the more interesting pieces of it.

Installer

In Castle Windsor, an installer is a class that implements IWindsorInstaller and is used to register all components. It’s not necessary, but it encapsulates the registrations and allows for more organization if you have a ton of services to register. For the Castle Windsor populator, I set up a KServiceInstaller to register the components.

The KServiceInstaller has a constructor that takes IEnumerable<IServiceDescriptor>, which is the list of services to be registered.

public KServiceInstaller(IEnumerable<IServiceDescriptor> services)

When Install is called on the installer, it loops through each service and registers it with Windsor, using either an ImplementationType or ImplementationInstance.

ImplementationType vs ImplementationInstance

The IServiceDescriptor interface exposes ImplementationType and ImplementationInstance properties. If the type being registered is a singleton with a specific instance that is already instantiated, the service descriptor includes an ImplementationInstance. If the type will be instantiated by the DI container, the service descriptor includes an ImplementationType. A null check tells you which one to use, and they should be registered with the Windsor Instance or ImplementedBy registration methods, respectively.

if (service.ImplementationType != null)
{
    container.Register(Component.For(service.ServiceType)
                                .ImplementedBy(service.ImplementationType)
                                .ConfigureLifeCycle(service.Lifecycle));
}
else
{
    container.Register(Component.For(service.ServiceType)
                                .Instance(service.ImplementationInstance)
                                .ConfigureLifeCycle(service.Lifecycle));
}

Lifestyle

There are 3 lifecycles specified in the vNext DI framework:

public enum LifecycleKind
{
    Singleton,
    Scoped,
    Transient
}

These map to the LifestyleSingleton, LifestyleScoped and LifestyleTransient lifestyles in Windsor. Singleton, obviously, means there is one and only one instance, used everywhere. Transient means a new instance every time one is asked for. Scoped means a new instance per scope, with that same instance used everywhere inside the scope. The scope set up here is per web request, so a new instance is created on each request and used for the life of that request.
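The three lifetimes are easy to illustrate with a toy container — a Python sketch of the concept, not Windsor’s actual mechanics, with a plain dict standing in for the per-request scope:

```python
import itertools

class ToyContainer:
    """Minimal sketch of singleton / scoped / transient lifetimes."""
    def __init__(self):
        self._registrations = {}   # name -> (factory, lifetime)
        self._singletons = {}

    def register(self, name, factory, lifetime):
        self._registrations[name] = (factory, lifetime)

    def resolve(self, name, scope=None):
        factory, lifetime = self._registrations[name]
        if lifetime == "singleton":
            if name not in self._singletons:
                self._singletons[name] = factory()
            return self._singletons[name]
        if lifetime == "scoped":
            # 'scope' stands in for one web request: a per-request cache.
            if name not in scope:
                scope[name] = factory()
            return scope[name]
        return factory()  # transient: fresh instance every time

counter = itertools.count()
container = ToyContainer()
container.register("svc", lambda: next(counter), "scoped")

request1, request2 = {}, {}
a = container.resolve("svc", scope=request1)
b = container.resolve("svc", scope=request1)
c = container.resolve("svc", scope=request2)
print(a == b, a == c)  # True False: same within a request, new across requests
```

Swapping the lifetime string to "singleton" or "transient" changes the identity behavior in the obvious ways.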

A simple extension method makes it easy to configure lifecycles:

internal static ComponentRegistration<object> ConfigureLifeCycle(
    this ComponentRegistration<object> registration, LifecycleKind kind)
{
    switch (kind)
    {
        case LifecycleKind.Scoped:
            registration.LifestyleScoped();
            break;
        case LifecycleKind.Singleton:
            registration.LifestyleSingleton();
            break;
        case LifecycleKind.Transient:
            registration.LifestyleTransient();
            break;
    }

    return registration;
}

And it’s used as such:

container.Register(Component.For(service.ServiceType)
                            .ImplementedBy(service.ImplementationType)
                            .ConfigureLifeCycle(service.Lifecycle));

FallbackLazyComponentLoader

There are many services in the vNext framework that are instantiated and registered before the MVC services and any custom services you use in your app. These aren’t known to Windsor and won’t be registered with it, so we need to fall back to another IServiceProvider when a request comes in for one of them. That’s where the FallbackLazyComponentLoader comes in.

Castle Windsor includes the ability to acquire components as they’re needed, on the spot without registering them first. This is exposed via the ILazyComponentLoader interface, which is what FallbackLazyComponentLoader implements. It exposes a single constructor taking an IServiceProvider:

public FallbackLazyComponentLoader(IServiceProvider provider)

If Windsor encounters a request for a service that is not registered, it will fall back and attempt to resolve it using this fallback service provider:

public IRegistration Load(string name, Type service, IDictionary arguments)
{
    var serviceFromFallback = _fallbackProvider.GetService(service);

    if (serviceFromFallback != null)
    {
        return Component.For(service).Instance(serviceFromFallback);
    }

    return null;
}

WindsorServiceProvider

To use a custom DI framework, you must register a new IServiceProvider. The Windsor populator returns a WindsorServiceProvider which can be registered with the framework:

private class WindsorServiceProvider : IServiceProvider
{
    private IWindsorContainer _container;

    public WindsorServiceProvider(IWindsorContainer container)
    {
        _container = container;
    }

    public object GetService(Type serviceType)
    {
         return _container.Resolve(serviceType);
    }
}

When a request for a service comes in, Windsor will attempt to resolve it. If it isn’t explicitly registered it checks the fallback service provider, and if that fails it returns null.
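That whole resolution flow can be sketched in a few lines — Python pseudocode of the idea, not the Windsor API — as a primary lookup with a lazy, caching fallback:

```python
class FallbackResolver:
    """First try the primary registrations, then a fallback provider."""
    def __init__(self, registrations, fallback_provider):
        self._registrations = registrations  # dict: service name -> instance
        self._fallback = fallback_provider   # callable: name -> instance or None

    def resolve(self, name):
        if name in self._registrations:
            return self._registrations[name]
        service = self._fallback(name)
        if service is not None:
            # Cache it so subsequent requests skip the fallback, mirroring
            # how a lazy loader registers the component once it is loaded.
            self._registrations[name] = service
        return service

framework_services = {"ILogger": "framework logger"}
resolver = FallbackResolver({"IMyService": "my service"}, framework_services.get)
print(resolver.resolve("IMyService"))  # found in primary registrations
print(resolver.resolve("ILogger"))     # resolved via the fallback provider
print(resolver.resolve("IMissing"))    # neither: None
```

The real loader does the same three-step dance: explicit registrations first, the fallback IServiceProvider second, null last.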

Discourse V1 has been released

As promised by the team, Discourse has just hit V1.0. As Jeff Atwood says, the version number is mostly arbitrary, but it signals that the software is ready for rapid adoption in communities everywhere.

Version numbers are arbitrary, yes, but V1 does signify something in public. We believe Discourse is now ready for wide public use.

That’s not to say Discourse can’t be improved – I have a mile-long list of things we still want to do. But products that are in perpetual beta are a cop-out. Eventually you have to ditch the crutch of the word “beta” and be brave enough to say, yes, we’re ready.

They have also unveiled their hosting service, which (I’m assuming) makes it stupidly easy to set up and use Discourse.

I set up Discourse as my comment system and I have been nothing but impressed with it. I would recommend Discourse to anyone needing some sort of discussion community.

Congrats to the Discourse team on the accomplishment!

Progress Indicators are Always a Good Idea

Progress bars. You know them. You love them. They tell you when the computer is going to be done with a task, or when you are going to be done with a task. Sometimes they’re accurate, sometimes not so much. Regardless of accuracy, studies have shown that humans like to know progress:

A friend tries to encourage you by observing “There’s light at the end of the tunnel.” The comment may help you persevere because, with the end in sight, the remainder of the task becomes more pleasant or the prospect of abandoning it more unpleasant. In either case, your friend is trying to influence how you perceive the task to help you complete it. The belief seems to be that long, boring tasks will be experienced as shorter and more interesting, or at least more tolerable, when we can tell we are making progress. This appears to be the rationale of designers who provide feedback to users about their progress.

The light at the end of the tunnel is a powerful force and is one that helps drive most of us to complete our goals. But as the nights get darker and the days get longer, how do we persevere through and not just see the light, but make it out of the tunnel to feel the light too?

For that, we turn to science. Task completion is a feel-good activity, and we become more and more happy as we complete a task. Hugo Liu describes why here:

Because completion is intrinsically rewarding. Neuroscience backs this up. It turns out that when you finish a complex task, your brain releases massive quantities of endorphins. Through the magic of classical conditioning, you come to associate present acts of completion and progress with the pleasure and satisfaction of your past completion-induced endorphin rushes.

The more tasks you complete, the more addicted you become to that feeling.

Along with completions, we get the same reward from seeing plain ol’ progress. Even if the end result is far away or even unattainable, feeling like we’re making progress is all that is needed to keep us happy:

[Image: the Penrose Stairs]

Shown above is a depiction of the Penrose Stairs, also known as the endless staircase or the impossible staircase. We can see from this vantage point that the stairs really lead to nowhere. However, the man on the stairs may feel like he is getting somewhere. After all, his personal experience would be that of moving forward and upward: the very definition of progress.

In the context of the work environment, you are happier when you feel like you are moving forward and upward.

So what does this have to do with computers?

Well, we can’t physically see progress in a computer – if we start a file copy, we can’t open up the side of the case and watch it happen. Developers have to build indicators into the UI so end users know the computer is performing the task. And that’s the important part. Whenever possible, display progress to the user. Whether it’s a file copy, the number of experience points needed to level up, or a set of steps that need to be completed, users will thank you for knowing when they’ll be done. It will help them complete whatever goal they are trying to accomplish with your software.
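Even a console app can offer that light at the end of the tunnel. A minimal text progress bar sketch (width and characters are arbitrary choices here):

```python
def progress_bar(done, total, width=20):
    """Render a fixed-width text bar like [#####...............] 25%."""
    filled = int(width * done / total)
    percent = int(100 * done / total)
    return "[" + "#" * filled + "." * (width - filled) + f"] {percent}%"

for step in (0, 5, 10, 20):
    print(progress_bar(step, 20))
```

In a real app you’d print with a carriage return (`end="\r"`) so the bar updates in place instead of scrolling.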

Don’t be a Tweek Tweak Programmer

My previous post, Please learn to ask questions, asks developers to start asking more questions and to try to understand the code behind the framework tools they use. A sibling of this topic is programming by coincidence. PbC (as it will henceforth be referenced in this post)* is programming by luck, tweaks and accidental success until perceived correctness is achieved. The Pragmatic Programmer has 2 great examples:

Do you ever watch old black-and-white war movies? The weary soldier advances cautiously out of the brush. There’s a clearing ahead: are there any land mines, or is it safe to cross? There aren’t any indications that it’s a minefield—no signs, barbed wire, or craters. The soldier pokes the ground ahead of him with his bayonet and winces, expecting an explosion. There isn’t one. So he proceeds painstakingly through the field for a while, prodding and poking as he goes. Eventually, convinced that the field is safe, he straightens up and marches proudly forward, only to be blown to pieces.

And an example directly related to software development (emphasis mine):

Suppose Fred is given a programming assignment. Fred types in some code, tries it, and it seems to work. Fred types in some more code, tries it, and it still seems to work. After several weeks of coding this way, the program suddenly stops working, and after hours of trying to fix it, he still doesn’t know why. Fred may well spend a significant amount of time chasing this piece of code around without ever being able to fix it. No matter what he does, it just doesn’t ever seem to work right.

Fred doesn’t know why the code is failing because he didn’t know why it worked in the first place. It seemed to work, given the limited “testing” that Fred did, but that was just a coincidence. Buoyed by false confidence, Fred charged ahead into oblivion. Now, most intelligent people may know someone like Fred, but we know better. We don’t rely on coincidences—do we?

We don’t. Or, at least, we shouldn’t. The worst way to program is to tweak values until something appears to work. I’m calling it Tweek Tweak programming**:

var name = "DaveZych";
var firstName = name.Substring(0, 4);
var lastName = name.Substring(4, name.Length);

Nope, didn’t work.

var name = "DaveZych";
var firstName = name.Substring(0, 4);
var lastName = name.Substring(3, name.Length - 4);

Eh, still no.

var name = "DaveZych";
var firstName = name.Substring(0, 4);
var lastName = name.Substring(4, name.Length - 4);

Yeaaaahhhhh, it worked once. Nailed it.

How many times have you seen a developer fumble their way through code like this? Too many? Yeah, me too. Chances are the code happened to work for that one test run but will fail the other 95% of the time, especially when the test data changes. This is a direct path to a very unmaintainable codebase.
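The deliberate alternative is to state the actual requirement — split a CamelCase name at the uppercase boundary — and write code that encodes that rule instead of tweaked offsets. A sketch of the idea (in Python here, though the original snippet is C#):

```python
import re

def split_camel_case(name):
    """Split a CamelCase name into words by rule, not by tweaked offsets."""
    return re.findall(r"[A-Z][a-z]*", name)

first, last = split_camel_case("DaveZych")
print(first, last)  # Dave Zych
```

Because the rule is explicit, it keeps working when the test data changes — "AnnaSmith" splits just as correctly as "DaveZych", with no magic numbers to re-tweak.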

The Pragmatic Programmer has tips on how to not program by coincidence and instead program deliberately:

  • Always be aware of what you are doing
    • Know the requirements. Know the tools. Know the goal. Know when to take a step back and think about what you’re doing
  • Don’t code blindfolded
    • Don’t be a Tweek Tweak programmer. Make sure you understand your requirements and the technology you’re using
  • Proceed from a plan, whether that plan is in your head, on the back of a cocktail napkin, or on a wall-sized printout from a CASE tool
    • Before starting, think about what you’re doing. Write it down if it helps (it almost always does)
  • Rely only on reliable things
    • Make sure that code doesn’t work just because the circumstances were correct
  • Document your assumptions
    • If you assume that all dates coming back from an API are in EST, document that so anyone having to maintain the software knows why it’s expecting that. Just watch out how many comments you use
  • Don’t just test your code, but test your assumptions as well
    • Run a ton of queries against the above mentioned API and confirm the dates comes back as you expect all the time, every time
  • Prioritize your effort
    • Focus on the important parts because those are generally the hardest and will take the most time
  • Don’t be a slave to history
    • Just because something was done a certain way before does not mean you should also do that, nor does it mean it was correct in the first place

Keep these in mind as you make your way through the programming minefield. Don’t be a Tweek Tweak Programmer! And, as always, THINK!


* Apparently that was the last time it was referenced.
** Both because of the act of tweaking values as well as the paranoia the developer feels regarding the stability of the code afterwards

Learn to ask more questions

I recently installed sod in my backyard along with an automatic sprinkler system. I performed all the work myself (with the help of some friends and family when necessary, of course) and that included laying the sprinkler pipe and hooking it up to the copper mainline.

I didn’t know much about sprinklers at the time I started – I knew it was PVC, and you used the glue and stuck ‘em together and water was carried through them to the sprinkler heads. But installing a system requires a bit more knowledge, so I researched all of it until I felt comfortable enough to do it. I learned about valves, timers, how to sweat copper and test for leaks. I installed the system on my own and it’s been running perfectly since. All of this was done because I wasn’t afraid to ask a question and figure out the answer myself.

All too often I see young developers take abstraction at face value. They have no idea what’s going on under the hood of a framework and have no idea why their code works. Abstraction is great until it starts to hinder your maturation as a developer. So many problems I see people run into can be solved by a better understanding of the underlying issue at hand. This applies from technical challenges to business rules to life itself.

You don’t solve this by just gazing into the stars and wondering. You take that wonder a step further by investigating and applying it towards gaining knowledge. I wondered how to install a sprinkler system, and now I have a lawn that gets automatically watered once a week.

Don’t just assume. Don’t be blind. Wonder why things work the way they do. Ask. Research. Learn. As Pragmatic Programmer Tip #8 says:

Invest Regularly in Your Knowledge Portfolio

When you see something you don’t know, question what it is and how it works. I’m calling on all people to ask more questions. Combating nescience is how you advance your skills and suck less every year. Please, don’t just walk on top of abstraction, dig a little and uncover the mystery.

The investigation of a site crash

When I woke up this morning I noticed my site had crashed with the following error:

Error establishing a database connection.

Awesome. What a helpful error message. WordPress is basically saying:

shrug

Thanks WordPress. I ssh’d into the server to see if I could figure out what was wrong myself. Since this was complaining about a database connection, my first step was to check the status of mysql:

$ sudo netstat -tap | grep mysql

Nothing. Ruh Roh! Time to attempt a restart…

$ sudo /etc/init.d/mysql restart

… and the site came back up. Phew! That was easy.

But why did it crash in the first place?

Since MySQL was the cause, it made sense to first check the MySQL logs. I opened those up and found nothing. Based on the recommendation of Google, I then searched the syslogs, specifically for memory:

$ sudo grep memory /var/log/syslog
  Aug 19 10:56:12 localhost kernel: [10664646.817182]  [] out_of_memory+0x414/0x450
  Aug 19 10:56:12 localhost kernel: [10664646.819979] Out of memory: Kill process 4803 (mysqld) score 104 or sacrifice child
  Aug 19 10:56:12 localhost kernel: [10664646.831686]  [] out_of_memory+0x414/0x450
  Aug 19 10:56:12 localhost kernel: [10664646.833365] Out of memory: Kill process 4826 (mysqld) score 104 or sacrifice child

Aha! It looks like MySQL ran out of memory and the kernel killed it. Okay, now on to the next question… why?

Well… it ran out of memory, that’s why. (Duh.) One way to alleviate this is to create a swap file, which it turns out I forgot to do when I originally configured this server. Without a swap file the system had nowhere to page memory out to under pressure, so the kernel’s OOM killer sacrificed MySQL. I created and enabled a swap file:

$ sudo dd if=/dev/zero of=/swapfile bs=1024 count=256k
$ sudo mkswap /swapfile
    Setting up swapspace version 1, size = 262140 KiB
    no label, UUID=XXXXX
$ sudo swapon /swapfile
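The size dd creates is just bs × count, and mkswap reports slightly less because one page goes to the swap header. A quick Python sanity check of that arithmetic (interpreting the common k/m/g suffixes as powers of 1024):

```python
def dd_size_bytes(bs, count):
    """Compute the file size dd will produce from bs/count strings like '1024' and '256k'."""
    units = {"k": 1024, "m": 1024**2, "g": 1024**3}
    def parse(value):
        value = value.lower()
        if value[-1] in units:
            return int(value[:-1]) * units[value[-1]]
        return int(value)
    return parse(bs) * parse(count)

print(dd_size_bytes("1024", "256k") // (1024 * 1024))  # 256 (MiB)
```

Running the check before dd is a cheap way to avoid accidentally writing a swap file several times larger than intended.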

After creating the file everything has been running great (so far).

Next Steps

Had I not been a narcissist and checked my own website, I probably wouldn’t have noticed it was down for hours, perhaps encroaching on days. My next steps are to look into monitoring software – something that alerts me when there’s a problem, or even a potential problem before it happens. One I have found that does just that is Nagios, or its fork, Icinga.