
Monthly Archives: November 2007

ASP.NET MVC CTP will be released next week

According to ScottGu,

VS 2008 and .NET 3.5 include a ton of new features for ASP.NET development. We are planning to deliver even more ASP.NET functionality next year with a “ASP.NET 3.5 Extensions” release. The first public preview of this will be available for download next week on the web.

Next week’s ASP.NET 3.5 Extensions preview release will include:

  • ASP.NET MVC: This model view controller (MVC) framework for ASP.NET provides a structured model that enables a clear separation of concerns within web applications, and makes it easier to unit test your code and support a TDD workflow. It also helps provide more control over the URLs you publish in your applications, and more control over the HTML that is emitted from them.

LINQ-to-SQL: the multi-tier story

If your web application is even slightly bigger than the typical MSDN example, you’re almost certainly using some sort of multi-tier architecture. Unfortunately, as Dinesh Kulkarni (Microsoft LINQ Program Manager) explains on his blog, LINQ-to-SQL version 1 has no out-of-the-box multi-tier story.

So, if you’re going to use it, you’re going to have to get imaginative and write your own story.

What we already do

The typical multi-tier web application handles requests like this:

    UI layer (ASPX pages / code-behind)
        |   (multiple calls per request)
        v
    Service layer
        |
        v
    Data access layer -> Database

It’s a simplified view but covers the basic point, which is that to fulfil a request, the top layer makes multiple calls into the lower layers.

Using LINQ-to-SQL

Simple example: the user wants to rename a “Customer” entity. You want to do something like this:

void RenameButton_Click(object sender, EventArgs e)
{
    Customer myCustomer = ServiceLayer.CustomerEngine.GetCustomer(_customerID);
    myCustomer.Name = NameTextbox.Text;
    ServiceLayer.CustomerEngine.SaveCustomer(myCustomer);
}

Easy – you load the record, update it, then send it back to be saved. We made two calls to the service layer.

And that brings us to… the problem

Where do DataContexts come in? With LINQ to SQL, you’re always querying or saving changes on a DataContext, which is responsible for two things:

  • Managing connections to the database
  • Tracking changes on entities it created, so it knows which ones to write back to the database

Who creates the DataContext, and what is its lifespan?

You certainly can’t create and destroy a new DataContext within each service layer call, even though most trivial examples you’ll see do exactly that. You can’t do that because our Customer object above needs to remain attached to the same DataContext during its entire lifetime. If we created a new DataContext during the SaveCustomer() call, the new context wouldn’t know what changes had been made to the customer, so wouldn’t write anything to the database.
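To make the failure concrete, here’s a minimal sketch – all names are hypothetical, with a toy class standing in for a DataContext that, like the real one, only tracks the entity instances it handed out itself:

```csharp
using System;
using System.Collections.Generic;

class Customer { public int ID; public string Name; }

// Toy stand-in for a DataContext: it can only "save" changes on
// entities it created, because those are the only ones it tracks.
class FakeContext
{
    private readonly List<Customer> _tracked = new List<Customer>();

    public Customer GetCustomer(int id)
    {
        var customer = new Customer { ID = id, Name = "Original name" };
        _tracked.Add(customer);   // the context now tracks this instance
        return customer;
    }

    public bool IsTracking(Customer customer) { return _tracked.Contains(customer); }
}

class Program
{
    static void Main()
    {
        // Service call 1: load the entity via a short-lived context, then mutate it
        var context1 = new FakeContext();
        Customer customer = context1.GetCustomer(42);
        customer.Name = "Renamed";

        // Service call 2: a brand-new context has never seen this instance,
        // so a SubmitChanges() here would have nothing to write back
        var context2 = new FakeContext();
        Console.WriteLine(context2.IsTracking(customer));  // prints "False"
    }
}
```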

Somehow, we have to manage the lifespan of a DataContext across multiple service layer calls. This is the “multi-tier story” we’ve been missing. There are three obvious mechanisms I can think of.

Story one: DataContext creation in UI layer

You can, if you want, create and destroy the DataContext objects in your ASPX-code-behind event handlers, passing the context into each service layer call. This allows the DataContext to span multiple calls, as follows:

void RenameButton_Click(object sender, EventArgs e)
{
    MyDomainDataContext dataContext = new MyDomainDataContext(CONNECTION_STRING);
    Customer myCustomer = ServiceLayer.CustomerEngine.GetCustomer(dataContext, _customerID);
    myCustomer.Name = NameTextbox.Text;
    ServiceLayer.CustomerEngine.SaveCustomer(dataContext, myCustomer);
}
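The service-layer side of this story isn’t shown in the original; here’s a minimal sketch of what it might look like, with a hypothetical in-memory class standing in for the generated MyDomainDataContext:

```csharp
using System;
using System.Collections.Generic;

public class Customer { public int ID; public string Name; }

// Hypothetical stand-in for the generated MyDomainDataContext,
// with a dictionary playing the role of the database table.
public class MyDomainDataContext
{
    public Dictionary<int, Customer> Customers = new Dictionary<int, Customer>();
    public void SubmitChanges() { /* a real context writes tracked changes to the database here */ }
}

// In this story, every service method takes the context as a parameter
public static class CustomerEngine
{
    public static Customer GetCustomer(MyDomainDataContext context, int id)
    {
        return context.Customers[id];
    }

    public static void SaveCustomer(MyDomainDataContext context, Customer customer)
    {
        context.Customers[customer.ID] = customer;
        context.SubmitChanges();
    }
}

public class Program
{
    public static void Main()
    {
        var dataContext = new MyDomainDataContext();
        dataContext.Customers[7] = new Customer { ID = 7, Name = "Old name" };

        Customer myCustomer = CustomerEngine.GetCustomer(dataContext, 7);
        myCustomer.Name = "New name";
        CustomerEngine.SaveCustomer(dataContext, myCustomer);

        Console.WriteLine(CustomerEngine.GetCustomer(dataContext, 7).Name);  // prints "New name"
    }
}
```

Note how the context parameter appears in every signature – the “code smell” listed below.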

Advantages of this approach:

  • Fine-grained control over DataContext lifespan and reuse

Disadvantages of this approach:

  • DataContext is exposed to the UI layer (unless you wrap it in some sort of handle object), which encourages lazy coding that ignores the separation of concerns
  • Code smell throughout your application – any chain of calls through the service layers involves a “context” parameter passed down through each layer
  • Tedious work of creating DataContexts all the time

Story two: DataContext per application

The opposite extreme is having a single, static DataContext shared across your entire application. You may instantiate it either on application start, or lazily, and your service classes may simply access it whenever they need database access.

Do not do this, because:

  • DataContext isn’t thread-safe, as far as I know
  • You lose isolation and cannot control when SubmitChanges() is called – concurrent requests will interfere with one another
  • Memory leaks are pretty likely

Story three: DataContext per unit-of-work (i.e. per request)

The classic solution to the problem is slightly more tricky than the others, but achieves – almost – the best of all worlds. This is the typical solution advocated by many when using other object-relational mappers, such as NHibernate.

If each HTTP request has access to its own private DataContext object which lives for the duration of the request, you can expose it to the whole data access layer (and not to the UI layer), knowing that related calls will use the same DataContext object and thus keep track of object changes properly. Also, you don’t get unwanted interaction between concurrent requests.

But where would you store such an object? The natural place is the HttpContext.Current.Items dictionary: a storage area whose lifespan equals the lifespan of the request, and which is private to that request. You could, therefore, set up a static helper property available to your whole service layer:

internal static class MyDomainDataContextHelper
{
    public static MyDomainDataContext CurrentContext
    {
        get
        {
            if (HttpContext.Current.Items["MyDomainDataContext"] == null)
            {
                HttpContext.Current.Items["MyDomainDataContext"] =
                    new MyDomainDataContext(ConfigurationSettings.AppSettings["connectionString"]);
            }
            return (MyDomainDataContext)HttpContext.Current.Items["MyDomainDataContext"];
        }
    }
}

Now, querying the database is as simple as:

var query = from c in MyDomainDataContextHelper.CurrentContext.Customers where // ...etc

Since the DataContext is available statically, there’s no need to construct new instances or pass them around from the UI layer downwards. The UI layer doesn’t even need to know that such a thing exists. How nice!

Stop! That’s still not good enough

The simple implementation of story three has a serious limitation. The service layers are now coupled to HttpContext.Current, which is not a good idea – firstly because you ought to be striving to make your service layers independent of the UI platform (in this case, the web), and secondly because it’s going to break in some cases. HttpContext.Current won’t be available to your unit test runner, for instance.

Fortunately we can fix this with a trivial implementation of Inversion of Control. Let’s define an abstract notion of a “unit of work datastore” in our service layer.

namespace ServiceLayer
{
    public interface IUnitOfWorkDataStore
    {
        object this[string key] { get; set; }
    }

    public static class UnitOfWorkHelper
    {
        public static IUnitOfWorkDataStore CurrentDataStore;
    }
}

Now, when our application starts, we can map this datastore to HttpContext.Current.Items, by adding code to Global.asax:

private class HttpContextDataStore : IUnitOfWorkDataStore
{
    public object this[string key]
    {
        get { return HttpContext.Current.Items[key]; }
        set { HttpContext.Current.Items[key] = value; }
    }
}

protected void Application_Start(object sender, EventArgs e)
{
    ServiceLayer.UnitOfWorkHelper.CurrentDataStore = new HttpContextDataStore();
}

… and then the references to System.Web.HttpContext.Current.Items in the service layer can be replaced with UnitOfWorkHelper.CurrentDataStore:

internal static class MyDomainDataContextHelper
{
    public static MyDomainDataContext CurrentContext
    {
        get
        {
            if (UnitOfWorkHelper.CurrentDataStore["MyDomainDataContext"] == null)
            {
                UnitOfWorkHelper.CurrentDataStore["MyDomainDataContext"] =
                    new MyDomainDataContext(ConfigurationSettings.AppSettings["connectionString"]);
            }
            return (MyDomainDataContext)UnitOfWorkHelper.CurrentDataStore["MyDomainDataContext"];
        }
    }
}

And we’re done! The service layer no longer has any dependency on or awareness of System.Web, and our unit tests can supply any IUnitOfWorkDataStore of their own creation by assigning it to ServiceLayer.UnitOfWorkHelper.CurrentDataStore.

If you’re using a full-fledged Inversion of Control container, like Castle Windsor or Spring, you will no doubt register your IUnitOfWorkDataStore with it and access it that way. For the rest of us, this simple implementation works nicely.
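For instance, a unit test fixture could install a dictionary-backed store. A minimal sketch (the InMemoryDataStore name is hypothetical, and the interface is repeated here so the sketch is self-contained; note the indexer returns null for missing keys, matching the HttpContext.Items behaviour the context helper relies on):

```csharp
using System;
using System.Collections.Generic;

// The service-layer abstraction from the article.
public interface IUnitOfWorkDataStore
{
    object this[string key] { get; set; }
}

// Hypothetical in-memory store a unit test could supply in place of
// the HttpContext-backed one used by the web application.
public class InMemoryDataStore : IUnitOfWorkDataStore
{
    private readonly Dictionary<string, object> _items = new Dictionary<string, object>();

    public object this[string key]
    {
        get
        {
            // Unlike Dictionary, HttpContext.Items returns null for a
            // missing key rather than throwing - mimic that here
            object value;
            return _items.TryGetValue(key, out value) ? value : null;
        }
        set { _items[key] = value; }
    }
}

public class Program
{
    public static void Main()
    {
        // In test setup: ServiceLayer.UnitOfWorkHelper.CurrentDataStore = new InMemoryDataStore();
        IUnitOfWorkDataStore store = new InMemoryDataStore();
        store["MyDomainDataContext"] = "a DataContext instance would go here";
        Console.WriteLine(store["MyDomainDataContext"]);
        Console.WriteLine(store["missing"] == null);  // prints "True"
    }
}
```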

So that’s all perfect then

Yes, I thought so for a little while. In the next post, I’ll explain some issues you’re likely to run into when using LINQ to SQL in this way. Problems that don’t happen with NHibernate…

Mindset shifts between ASP.NET and MVC/Rails (and the future of web development…)

Let’s imagine your web development team is going to move out of “classic” ASP.NET and into MonoRail/ASP.NET MVC/etc. We assume you have a good reason (“It’s new and exciting” is usually good enough for us devs).

What kind of issues are you going to face? What questions are going to keep coming up?

“… but how do I do postbacks?”

The loss of the postback model is going to hit a lot of web developers hard – there’s a big mindset shift to make. So how did we get here, and where are we going?

In the history of web development so far, there have basically been two major approaches/philosophies to web UI.

1. Event-driven

Just like with native GUI apps, we imagine that when the user clicks a button, we can change the text on some label. When they change a value in a list, the “total” is updated. The fact that this stateful UI has to be transmitted over HTTP is just a technical obstacle to be overcome.

2. RESTful

Following the principles of REST and SOA, we understand that the cleanest web apps are stateless. We think in terms of requests and responses, working with HTTP rather than against it.

While RESTful apps are the fashion today, don’t forget that REST is the older approach. In 1997, the age of Perl, we had no choice. The first event-driven platforms came later, achieving their statefulness through complicated abstraction layers (ASP.NET), fiddly scripting (AJAX), or abandoning HTML entirely (Flash).

Here’s another way of comparing the two mindsets:

Event-driven           RESTful
------------           -------
Stateful               Stateless
Heavyweight            Lightweight
Overcomes HTTP         Embraces HTTP
Widgets                HTML
3rd-party controls     DIY
Design view            Code view
GUI                    Web page
Postbacks              -
Partial page updates*  Full page updates
Web 1.0 (uncool)       Web 2.0 (cool)

* I know classic ASP.NET actually replaces the whole page on a postback, but the mental model is that it’s doing partial page updates. It just technically uses full page updates to achieve that.

Note I’m trying to keep AJAX out of this, because it’s just a way of strapping extra event-driven semantics on top of whatever other platform you’re using. I totally accept that it’s useful today, but in 3-5 years it will be gone and we won’t miss it.

So which should I use?

Another way to think about the progression of these approaches is like this:

                             RESTful                          Event-driven
Ineffective web technology   Perl, plain PHP                  ASP.NET
Effective web technology     RoR, MonoRail, (ASP.NET MVC?)    ?

The most effective web frameworks today are undoubtedly the highly-streamlined, designed-for-HTTP, MVC derivatives. I call ASP.NET “ineffective” because anyone who’s used it for more than a few years knows that the abstraction doesn’t really work, it tends to be too heavyweight, and the “page lifecycle” is the software equivalent of cholera.

What’s the future?

The future is not Ruby on Rails. It’s not ASP.NET MVC either, though that will hopefully improve our industry for the next 2-4 years and you should probably use it.

The future is, inevitably, obviously, undoubtedly, the question mark on the table above. It has to be an event-driven architecture since that matches both the end-user and developer’s instincts about UIs. But unlike ASP.NET, it will be thick-client, not a load of leaky abstractions. It may be so well integrated into the OS that users don’t think of it as being different to their local apps.

In other words, the future is client-side. Well, it’s going to be interesting anyway.