LINQ-to-SQL: the multi-tier story
If your web application is even slightly bigger than the typical MSDN example, you’re almost certainly using some sort of multi-tier architecture. Unfortunately, as Dinesh Kulkarni (MS LINQ Program Manager) explains on his blog, LINQ-to-SQL version 1 has no out-of-the-box multi-tier story.
So, if you’re going to use it, you’re going to have to get imaginative and write your own story.
What we already do
The typical multi-tier web application fulfils each request by having the top (UI) layer make multiple calls down into the lower layers: services, then data access, then the database. It’s a simplified view, but it covers the basic point: to fulfil a request, the top layer makes multiple calls into the lower layers.
Using LINQ-to-SQL
Simple example: the user wants to rename a “Customer” entity. You want to do something like this:
void RenameButton_Click(object sender, EventArgs e)
{
    Customer myCustomer = ServiceLayer.CustomerEngine.GetCustomer(_customerID);
    myCustomer.Name = NameTextbox.Text;
    ServiceLayer.CustomerEngine.SaveCustomer(myCustomer);
}
Easy – you load the record, update it, then send it back to be saved. We made two calls to the service layer.
And that brings us to… the problem
Where do DataContexts come in? With LINQ to SQL, you’re always querying or saving changes on a DataContext, which is responsible for two things:
- Managing connections to the database
- Tracking changes on entities it created, so it knows which ones to write back to the database
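For illustration, here’s a minimal sketch of both responsibilities in action, assuming the generated MyDomainDataContext and Customers table used later in this post (the CustomerID property is my assumption):

using (var db = new MyDomainDataContext(CONNECTION_STRING)) // manages the connection
{
    Customer c = db.Customers.Single(x => x.CustomerID == 1); // hypothetical CustomerID property
    c.Name = "Renamed Ltd"; // the context records this change...
    db.SubmitChanges();     // ...and writes only the modified entities back
}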
Who creates the DataContext, and what is its lifespan?
You certainly can’t create and destroy a new DataContext within each service layer call, even though most trivial examples you’ll see do exactly that. You can’t do that because our Customer object above needs to remain attached to the same DataContext during its entire lifetime. If we created a new DataContext during the SaveCustomer() call, the new context wouldn’t know what changes had been made to the customer, so wouldn’t write anything to the database.
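To make the failure concrete, here’s a sketch of the per-call pattern those trivial examples use – the method bodies are my assumption, matching the signatures called earlier:

public static Customer GetCustomer(int customerID)
{
    using (var db = new MyDomainDataContext(CONNECTION_STRING))
        return db.Customers.Single(c => c.CustomerID == customerID);
    // db is disposed here, so the returned entity is no longer tracked
}

public static void SaveCustomer(Customer customer)
{
    using (var db = new MyDomainDataContext(CONNECTION_STRING))
        db.SubmitChanges(); // this brand-new context tracked no changes: no UPDATE is issued
}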
Somehow, we have to manage the lifespan of a DataContext across multiple service layer calls. This is the “multi-tier story” we’ve been missing. Here are the three most obvious mechanisms I can think of.
Story one: DataContext creation in UI layer
You can, if you want, create and destroy the DataContext objects in your ASPX-code-behind event handlers, passing the context into each service layer call. This allows the DataContext to span multiple calls, as follows:
void RenameButton_Click(object sender, EventArgs e)
{
    MyDomainDataContext dataContext = new MyDomainDataContext(CONNECTION_STRING);
    Customer myCustomer = ServiceLayer.CustomerEngine.GetCustomer(dataContext, _customerID);
    myCustomer.Name = NameTextbox.Text;
    ServiceLayer.CustomerEngine.SaveCustomer(dataContext, myCustomer);
}
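The corresponding service methods then just operate on whatever context they’re handed (again, a sketch – the bodies are my assumption):

public static class CustomerEngine
{
    public static Customer GetCustomer(MyDomainDataContext db, int customerID)
    {
        return db.Customers.Single(c => c.CustomerID == customerID);
    }

    public static void SaveCustomer(MyDomainDataContext db, Customer customer)
    {
        // Same context that loaded the entity, so its change tracker
        // knows exactly what to write back
        db.SubmitChanges();
    }
}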
Advantages of this approach:
- Fine-grained control over DataContext lifespan and reuse
Disadvantages of this approach:
- DataContext is exposed to UI layer (unless you wrap it in some sort of handle object), which encourages lazy coding that blurs the separation of concerns
- Code smell throughout your application – any chain of calls through the service layers involves a “context” parameter passed down through each layer
- Tedious work of creating DataContexts all the time
Story two: DataContext per application
The opposite extreme is having a single, static DataContext shared across your entire application. You may instantiate it either on application start, or lazily, and your service classes may simply access it whenever they need database access.
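In code, this extreme looks something like the following sketch (shown so you can recognise it, not copy it – GlobalData is a hypothetical name):

public static class GlobalData
{
    // One DataContext instance shared by every request for the life of the application
    public static readonly MyDomainDataContext Context =
        new MyDomainDataContext(CONNECTION_STRING);
}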
Do not do this, because:
- DataContext isn’t thread-safe, as far as I know
- You lose isolation and cannot control when SubmitChanges() is called – concurrent requests will interfere with one another
- Memory usage grows without bound, because the context’s change tracker holds a reference to every entity it has ever loaded
Story three: DataContext per unit-of-work (i.e. per request)
The classic solution to the problem is slightly more tricky than the others, but achieves – almost – the best of all worlds. It’s the typical solution advocated when using other object-relational mappers, like NHibernate.
If each HTTP request has access to its own private DataContext object which lives for the duration of the request, you can expose it to the whole data access layer (and not to the UI layer), knowing that related calls will use the same DataContext object and thus keep track of object changes properly. Also, you don’t get unwanted interaction between concurrent requests.
But where would you store such an object? The natural place is the IDictionary exposed as HttpContext.Current.Items: a storage area whose lifespan equals the lifespan of the request, and which is private to that request. You could, therefore, set up a static helper property available to your whole service layer:
internal static class MyDomainDataContextHelper
{
    public static MyDomainDataContext CurrentContext
    {
        get
        {
            if (HttpContext.Current.Items["MyDomainDataContext"] == null)
            {
                HttpContext.Current.Items["MyDomainDataContext"] =
                    new MyDomainDataContext(ConfigurationManager.AppSettings["connectionString"]);
            }
            return (MyDomainDataContext)HttpContext.Current.Items["MyDomainDataContext"];
        }
    }
}
Now, querying the database is as simple as:
var query = from c in MyDomainDataContextHelper.CurrentContext.Customers
            where // ...etc
Since the DataContext is available statically, there’s no need to construct new instances or pass them around from the UI layer downwards. The UI layer doesn’t even need to know that such a thing exists. How nice!
Stop! That’s still not good enough
The simple implementation of story three has a serious limitation. The service layer is now coupled to HttpContext.Current, which is not a good idea – firstly because you ought to be striving to make your service layers independent of the UI platform (in this case, the web), and secondly because it’s going to break in some cases: HttpContext.Current won’t be available to your unit test runner, for instance.
Fortunately we can fix this with a trivial implementation of Inversion of Control. Let’s define an abstract notion of a “unit of work datastore” in our service layer.
namespace ServiceLayer
{
    public interface IUnitOfWorkDataStore
    {
        object this[string key] { get; set; }
    }

    public static class UnitOfWorkHelper
    {
        public static IUnitOfWorkDataStore CurrentDataStore;
    }
}
Now, when our application starts, we can map this datastore to HttpContext.Current.Items, by adding code to Global.asax:
private class HttpContextDataStore : IUnitOfWorkDataStore
{
    public object this[string key]
    {
        get { return HttpContext.Current.Items[key]; }
        set { HttpContext.Current.Items[key] = value; }
    }
}

protected void Application_Start(object sender, EventArgs e)
{
    ServiceLayer.UnitOfWorkHelper.CurrentDataStore = new HttpContextDataStore();
}
… and then the references to System.Web.HttpContext.Current.Items in the service layer can be replaced with UnitOfWorkHelper.CurrentDataStore:
internal static class MyDomainDataContextHelper
{
    public static MyDomainDataContext CurrentContext
    {
        get
        {
            if (UnitOfWorkHelper.CurrentDataStore["MyDomainDataContext"] == null)
            {
                UnitOfWorkHelper.CurrentDataStore["MyDomainDataContext"] =
                    new MyDomainDataContext(ConfigurationManager.AppSettings["connectionString"]);
            }
            return (MyDomainDataContext)UnitOfWorkHelper.CurrentDataStore["MyDomainDataContext"];
        }
    }
}
And we’re done! The service layer no longer has any dependency on or awareness of System.Web, and our unit tests can supply any IUnitOfWorkDataStore of their own creation by assigning it to ServiceLayer.UnitOfWorkHelper.CurrentDataStore. If you’re using a full-fledged Inversion of Control container, like Castle Windsor or Spring, you will no doubt register your IUnitOfWorkDataStore with it and access it that way. For the rest of us, this simple implementation works nicely.
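For example, a test fixture might swap in a trivial in-memory datastore like this sketch (InMemoryDataStore is a hypothetical name; note the indexer returns null for missing keys, matching the HttpContext.Current.Items semantics the helper relies on):

using System.Collections.Generic;

public class InMemoryDataStore : ServiceLayer.IUnitOfWorkDataStore
{
    private readonly Dictionary<string, object> _items = new Dictionary<string, object>();

    public object this[string key]
    {
        get { object value; _items.TryGetValue(key, out value); return value; }
        set { _items[key] = value; }
    }
}

// In the test fixture's setup:
// ServiceLayer.UnitOfWorkHelper.CurrentDataStore = new InMemoryDataStore();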
So that’s all perfect then
Yes, I thought so for a little while. In the next post, I’ll explain some issues you’re likely to run into when using LINQ to SQL in this way. Problems that don’t happen with NHibernate…