Monthly Archives: January 2010

Validating a variable length list, ASP.NET MVC 2-style

In the previous post I showed a fairly straightforward way to create an editor where the user can add and delete the items in a set. Please read that previous post before continuing to read this one.

[Screenshot: the gift list editor from the previous post]

But what about validation? We don’t like your blank item names or your negative prices, sonny!

As you probably know, ASP.NET MVC 2 supports DataAnnotations attributes out of the box, so you can mark up your model as follows.

public class Gift
{
    [Required]
    public string Name { get; set; }
 
    [Range(0.01, double.MaxValue, ErrorMessage = "Please enter a price above zero.")]
    public double Price { get; set; }
}

Server-side validation

Server-side validation is trivial, assuming you want to use the default model binder convention of validating each model object when it’s bound. In the action that receives the form post, just check ModelState.IsValid, and refuse to accept the data if it isn’t.

[HttpPost]
public ActionResult Index(IEnumerable<Gift> gifts)
{
    if (ModelState.IsValid)
        return View("Completed", gifts);
    else
        return View(gifts); // Redisplay the form with errors
}

You’ll also need to specify where the validation error messages should appear. Update the GiftEditorRow.ascx partial by placing a couple of Html.ValidationMessageFor() helpers somewhere inside the row.

<%= Html.ValidationMessageFor(x => x.Name) %>
<%= Html.ValidationMessageFor(x => x.Price) %>

Now, if the incoming data doesn’t satisfy your rules, the form will be re-rendered, displaying appropriate messages.

[Screenshot: the form redisplayed with validation error messages next to the invalid fields]

Client-side validation

It gets a bit more complicated if you also want client-side validation. You could use xVal (I’m not sure whether it would be easier or harder), but I want to get better acquainted with ASP.NET MVC 2’s built-in client-side validation feature, so I’m going to use that.

I don’t even know if there’s an officially recommended way of doing ASP.NET MVC 2 client-side validation when you’re dynamically adding and removing form elements, but I’ll show you a technique I found that will do it. Be warned: this is rather hacky. I’d be interested to hear if anyone can suggest a better way.

First, let’s set up client-side validation in the normal way. Enable client-side validation on your form by calling Html.EnableClientValidation():

<h2>Gift List</h2>
What do you want for your birthday?
 
<% Html.EnableClientValidation(); %>
<% using(Html.BeginForm()) { %>
    (rest as before)
<% } %>

Now, as long as you’ve already referenced MicrosoftAjax.js and MicrosoftMvcValidation.js, you can run the app and you’ll immediately get working client-side validation! But hang on a minute… it only works for the elements that are present when the form is first rendered, *not* the elements added when the user clicks “Add another…”.

We need a way to capture the extra validation rules added each time the user adds a new row, and somehow attach them to the form. One possible approach is to create a new type of custom view result that registers the validators associated with any partial that it renders, and have it emit some JavaScript to attach those validators to the form.

Here’s a possible implementation. Don’t worry if you don’t understand it.

public class AjaxViewResult : ViewResult
{
    public string UpdateValidationForFormId { get; set; }
 
    public AjaxViewResult(string viewName, object model)
    {
        ViewName = viewName;
        ViewData = new ViewDataDictionary { Model = model };
    }
 
    public override void ExecuteResult(ControllerContext context)
    {
        var result = base.FindView(context);
        var viewContext = new ViewContext(context, result.View, ViewData, TempData, context.HttpContext.Response.Output);
 
        BeginCapturingValidation(viewContext);
        base.ExecuteResult(context);
        EndCapturingValidation(viewContext);
 
        result.ViewEngine.ReleaseView(context, result.View);
    }
 
    private void BeginCapturingValidation(ViewContext viewContext)
    {
        if (string.IsNullOrEmpty(UpdateValidationForFormId))
            return;
        viewContext.ClientValidationEnabled = true;
        viewContext.FormContext = new FormContext { FormId = UpdateValidationForFormId };
    }
 
    private void EndCapturingValidation(ViewContext viewContext)
    {
        if (!viewContext.ClientValidationEnabled)
            return;
        viewContext.OutputClientValidation();
        viewContext.Writer.WriteLine("<script type=\"text/javascript\">Sys.Mvc.FormContext._Application_Load()</script>");
    }
}

Now, we can change the BlankEditorRow() action method to render its partial using this custom view result.

public ViewResult BlankEditorRow(string formId)
{
    return new AjaxViewResult("GiftEditorRow", new Gift()) { UpdateValidationForFormId = formId };
}

You’ll notice that BlankEditorRow() now needs to be told which form it should attach the new validators to. You’ll have to update the link to BlankEditorRow to add this information to the query string.

<%= Html.ActionLink("Add another...", "BlankEditorRow", new { ViewContext.FormContext.FormId }, new { id = "addItem" }) %>

Et voila! Each time the user appends a new row, its validators will magically be associated with the form.

The problem with removing fields

If you proceed with this approach, you’ll soon discover a further flaw. Even if the user removes fields from the form, any validators associated with those fields will still be lurking. If a field is displaying a “required” error message, and then the user deletes the containing row, it becomes impossible to submit the form: the field is still required, except now there is nowhere to enter any data for it. Whoops!

Again, I don’t know if there’s a recommended way to deal with this, but one possibility is to edit the MicrosoftMvcValidation.debug.js script a little. Let’s update the logic so that, if a validator is associated with fields that no longer exist in the document, it no longer applies.

Find the function called “Sys_Mvc_FieldContext$validate”, and right at the top, add the following:

// [Added] Permanently disable this validator if its associated elements are no longer in the document
for (var j = 0; j < this.elements.length; j++) {
    if (!Sys.Mvc.FormContext._isElementInHierarchy(document.body, this.elements[j])) {
        this.validations = [];
        break;
    }
}

Now, assuming you’re referencing the file you just edited (MicrosoftMvcValidation.debug.js, rather than MicrosoftMvcValidation.js), you should get the desired change in behaviour. Deleting a row in the editor now kills its associated validation rules properly, so they don’t rise from the grave and block form submissions like invisible validation zombies.

Summary

Once you’ve got the AjaxViewResult class, it doesn’t take that much work to enable client-side validation on dynamically changing forms. Maybe there’s a better and less hacky way though… Anybody got any suggestions?

Editing a variable length list, ASP.NET MVC 2-style

A while back I posted about a way of editing a list of items where the user can add or remove as many items as they want. Tim Scott later provided some helpers to make the code neater. Now, I find myself making use of this technique so often that I thought it would be worthwhile providing an update to show how you can do it even more easily with ASP.NET MVC 2 because of its strongly-typed and templated input helpers.

Download the demo project or read on for details.

Update (Feb 11, 2010): Thanks to Ryan Rivest for pointing out a bug in the original code and providing a neat fix. Code updated.

Getting Started

For this example I’m going to go with the same theme as last time and build a gift list editor. Our model object can simply be the following:

public class Gift
{
    public string Name { get; set; }
    public double Price { get; set; }
}

Displaying the Initial UI

To display the initial data entry screen, add an action method that renders a view and passes some initial collection of Gift instances.

public ActionResult Index()
{
    var initialData = new[] {
        new Gift { Name = "Tall Hat", Price = 39.95 },
        new Gift { Name = "Long Cloak", Price = 120.00 },
    };
    return View(initialData);
}

Next, add a view for this action, and make it strongly-typed with a model class of IEnumerable<Gift>. We’ll create an HTML form, and for each gift in the collection, we’ll render a partial to display an editor for that gift.

<h2>Gift List</h2>
What do you want for your birthday?
 
<% using(Html.BeginForm()) { %>
    <div id="editorRows">
        <% foreach (var item in Model)
            Html.RenderPartial("GiftEditorRow", item);
        %>
    </div>
 
    <input type="submit" value="Finished" />
<% } %>

I know it would be possible to use ASP.NET MVC 2’s Html.EditorFor() or Html.EditorForModel() helpers, but later on we’re going to need more control over the HTML field ID prefixes, so in this example it turns out to be easier just to use Html.RenderPartial().

Next, to display the editor for each gift in the sequence, add a new partial view at /Views/controllerName/GiftEditorRow.ascx, strongly-typed with a model class of Gift, containing:

<div class="editorRow">
    <% using(Html.BeginCollectionItem("gifts")) { %>
        Item: <%= Html.TextBoxFor(x => x.Name) %>
        Value: $<%= Html.TextBoxFor(x => x.Price, new { size = 4 }) %>
    <% } %>
</div>

Here’s where we get to start using ASP.NET MVC 2 features and make things slightly easier than before. Notice that I’m using strongly-typed input helpers (Html.TextBoxFor()) to avoid the need to build element IDs manually. These helpers are smart enough to detect the “template context” in which they are being rendered, and use any field ID prefix associated with that template context.

You might also be wondering what Html.BeginCollectionItem() is. It’s an HTML helper I made that you can use when rendering a sequence of items that should later be model bound to a single collection. You give it some name for your collection, and it opens a new template context for that collection name, plus a random unique field ID prefix. It also automatically renders a hidden field, which in this case is called gifts.index, populating it with that unique ID, so when you later model bind to a list, ASP.NET MVC 2 will know that all the fields in this context should be associated with a single .NET object.
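To make that indexing convention concrete, here’s a sketch in plain JavaScript of how fields sharing a gifts.index value get grouped back into objects. This is purely an illustration of the convention — bindCollection is a hypothetical helper, not how the real model binder is implemented, and the bracketed field names are my rendering of the prefix format.

```javascript
// Illustration only: group posted fields like "gifts[abc123].Name" by their
// index key, mimicking how each gifts.index value is paired with the fields
// that share its prefix when the collection is model bound.
function bindCollection(collectionName, postedFields) {
  // postedFields is an array of { name, value } pairs, in document order
  var indexes = postedFields
    .filter(function (f) { return f.name === collectionName + ".index"; })
    .map(function (f) { return f.value; });

  return indexes.map(function (index) {
    var prefix = collectionName + "[" + index + "].";
    var item = {};
    postedFields.forEach(function (f) {
      if (f.name.indexOf(prefix) === 0) {
        item[f.name.substring(prefix.length)] = f.value;
      }
    });
    return item;
  });
}
```

Each gifts.index value identifies one object, and every field carrying that value’s prefix becomes a property of it – which is why the IDs can be random rather than sequential.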

And now, if you visit the Index() action, you should see the editor as shown below. (I’ve added some CSS styles, obviously.)

[Screenshot: the initial gift list editor showing the two pre-populated rows]

Receiving the Form Post

To receive the data posted by the user, add a new action method as follows.

[HttpPost]
public ActionResult Index(IEnumerable<Gift> gifts)
{
    // To do: do whatever you want with the data
    return View("Completed", gifts); // e.g., display a confirmation view
}

How easy is that? Because Html.BeginCollectionItem() observes ASP.NET MVC 2 model binding conventions, you can receive all the items in the list without having to do anything funky.

You could also achieve the same with less code by using the built-in Html.EditorFor() or Html.EditorForModel() helpers, but because these use a different indexing convention (an ascending sequence, not a set of random unique keys), things would get more difficult when you try to add or remove items dynamically.

Dynamically Adding Items

If the user wants to add another item, they’ll need something to click to say so. Let’s add an “Add another…” link. Add the following to the main view, just before the “Finished” button.

<%= Html.ActionLink("Add another...", "BlankEditorRow", null, new { id = "addItem" }) %>

This is a link to an action called BlankEditorRow which doesn’t exist yet. The idea is that the BlankEditorRow action will return the HTML markup for a new blank row. We can fetch this markup via Ajax and dynamically append it into the page.

To make this Ajax call and append the result into the page, make sure you’ve got jQuery referenced, and create a click handler similar to this:

$("#addItem").click(function() {
    $.ajax({
        url: this.href,
        cache: false,
        success: function(html) { $("#editorRows").append(html); }
    });
    return false;
});

Note that it’s very important to tell jQuery not to let the browser re-use cached responses, otherwise those unique IDs won’t always be so unique… And before we forget, we’ll need to put the BlankEditorRow action in place:

public ViewResult BlankEditorRow()
{
    return View("GiftEditorRow", new Gift());
}

As you can see, it simply renders the same editor partial, passing a blank Gift object to represent the initial state. I’m very pleased that you can re-use the same editor partial in this way – it means there’s no duplication of view markup and we can stay totally DRY.

And that, in fact, is all you have to do – each time the user clicks “Add another…”, the client-side code will inject a new blank row into the editor. Because each row has its own unique ID, when the user later posts the form, all the data will be model bound into a single IEnumerable<Gift>.

Dynamically Removing Items

Removing items is easier, because all you have to do is remove the corresponding DOM elements from the HTML document. If the elements are gone, their contents won’t be posted to the server, so they won’t be present in the IEnumerable<Gift> that your action receives.

Add a “delete” link to the GiftEditorRow.ascx partial:

<a href="#" class="deleteRow">delete</a>

This needs to go inside the DIV with class “editorRow”, so you can handle clicks on it as follows:

$("a.deleteRow").live("click", function() {
    $(this).parents("div.editorRow:first").remove();
    return false;
});

Notice that this code uses jQuery’s “live” function, which tells it to apply the click handler not only to the elements that exist when the page is first loaded, but also to any matching elements that are dynamically added later.
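The idea behind “live” can be sketched without jQuery or even a DOM: bind one handler at a root, and when an event bubbles up, walk from the event’s target upwards looking for a matching element. This is just an illustration of the delegation concept — makeDelegatedHandler is a hypothetical helper, and the plain objects in the usage below stand in for DOM elements.

```javascript
// Sketch of event delegation: instead of binding a handler to every matching
// element, one handler inspects each bubbled event and walks up from its
// target looking for an ancestor (or the target itself) that matches.
function makeDelegatedHandler(matches, handler) {
  return function (event) {
    var node = event.target;
    while (node) {
      if (matches(node)) {
        return handler.call(node, event); // invoke as if bound to that node
      }
      node = node.parentNode;
    }
    // No match: the event came from somewhere we don't care about
  };
}
```

Because the match is evaluated per event, elements added to the page after the handler was registered are handled too – which is exactly why “live” suits dynamically appended rows.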

You’ve now got a working editor with “add” and “delete” functionality.

[Screenshot: the finished editor with “Add another…” and per-row “delete” links]

Summary

I hope you can see that editing variable-length lists can be very easy. Other than the reusable Html.BeginCollectionItem() helper, I’ve just shown you every line of code needed for this particular strategy, and in total it’s just 36 lines (including the model, action methods, views, and JavaScript, but excluding lines that are purely whitespace or braces).

Download the full demo project

But what about validation?

Yes, I haven’t forgotten about that! You can already do your server-side validation any way you want – by default the model binder will respect any rules associated with your model.

In the next post, I’ll show how you can integrate this list editing strategy with ASP.NET MVC 2’s client-side validation feature.

Measuring the Performance of Asynchronous Controllers

Recently I’ve been working with ASP.NET MVC 2.0’s new asynchronous controllers. The idea with these is that you can split the request handling pipeline into two phases:

  • First, your controller begins one or more external I/O calls (e.g., SQL database calls or web service calls). Without waiting for them to complete, it releases the thread back into the ASP.NET worker thread pool so that it can deal with other requests.
  • Later, when all of the external I/O calls have completed, the underlying ASP.NET platform grabs another free worker thread from the pool, reattaches it to your original HTTP context, and lets it complete handling the original request.

This is intended to boost your server’s capacity. Because you don’t have so many worker threads blocked while waiting for I/O, you can handle many more concurrent requests. A while back I blogged about a way of doing this with an early preview release of ASP.NET MVC 1.0, but now in ASP.NET MVC 2.0 it’s a natively supported feature.

In this blog post I’m not going to explain how to create or work with asynchronous controllers. That’s already described elsewhere on the web, you can see the process in action in TekPub’s “Controllers: Part 2” episode, and you’ll find extremely detailed coverage and tutorials in my forthcoming ASP.NET MVC 2.0 book when it’s published. Instead, I’m going to describe one possible way to measure the performance effects of using them, and explain how I learned that under many default circumstances, you won’t get any benefit from using them unless you make some crucial configuration changes.

Measuring Response Times Under Heavy Traffic

To understand how asynchronous controllers respond to differing levels of traffic, and how this compares to a straightforward synchronous controller, I put together a little ASP.NET MVC 2.0 web application with two controllers. To simulate a long-running external I/O process, they both perform a SQL query that takes 2 seconds to complete (using the SQL command WAITFOR DELAY '00:00:02') and then they return the same fixed text to the browser. One of them does it synchronously; the other asynchronously.

Next, I put together a simple C# console application that simulates heavy traffic hitting a given URL. It simply requests the same URL over and over, calculating the rolling average of the last few response times. First it does so on just one thread, but then gradually increases the number of concurrent threads to 150 over a 30-minute period. If you want to try running this tool against your own site, you can download the C# source code.
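The “rolling average of the last few response times” part needs nothing more than a fixed-size sample window. Here’s a minimal sketch of the idea — in JavaScript rather than the C# of the actual tool, and with the window size left as a parameter since the real tool’s choice isn’t stated:

```javascript
// Minimal rolling-average tracker: keep only the last `windowSize` samples
// and report their mean, so the reported figure tracks recent behaviour
// rather than the whole run.
function createRollingAverage(windowSize) {
  var samples = [];
  return {
    add: function (responseTimeMs) {
      samples.push(responseTimeMs);
      if (samples.length > windowSize) {
        samples.shift(); // discard the oldest sample
      }
    },
    average: function () {
      if (samples.length === 0) return 0;
      var sum = samples.reduce(function (a, b) { return a + b; }, 0);
      return sum / samples.length;
    }
  };
}
```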

The results illustrate a number of points about how asynchronous requests perform. Check out this graph of average response times versus number of concurrent requests (lower response times are better):

[Graph: average response time vs. number of concurrent requests, for the synchronous and asynchronous controllers]

To understand this, first I need to tell you that I had set my ASP.NET MVC application’s worker thread pool to an artificially low maximum limit of 50 worker threads. My server actually has a default max threadpool size of 200 – a more sensible limit – but the results are made clearer if I reduce it.

As you can see, the synchronous and asynchronous requests performed exactly the same as long as there were enough worker threads to go around. And why shouldn’t they?

But once the threadpool was exhausted (> 50 clients), the synchronous requests had to form a queue to be serviced. Basic queuing theory tells us that the average time spent waiting in a queue is given by the formula:

average time in queue ≈ (queue length × average service time) ÷ (number of worker threads)

… and this is exactly what we see in the graph. The queuing time grows linearly with the length of the queue. (Apologies for my indulgence in using a formula – sometimes I just can’t suppress my inner mathematician. I’ll get therapy if it becomes a problem.)
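You can reproduce that shape with a tiny steady-state model (an illustration only, using the 2-second query and 50-thread limit from this test): while there are enough workers, every request completes in one service time; beyond that, response time grows in proportion to the number of concurrent clients.

```javascript
// Simple steady-state model: `clients` concurrent requesters, each request
// holding one of `workers` slots for `serviceSeconds`. While clients fit in
// the slots, there is no queuing; beyond that, each extra "round" of clients
// adds one full service time of queue delay, giving linear growth.
function expectedResponseTime(clients, serviceSeconds, workers) {
  if (clients <= workers) {
    return serviceSeconds;
  }
  return (clients / workers) * serviceSeconds;
}
```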

The asynchronous requests didn’t need to start queuing so soon, though. They don’t need to block a worker thread while waiting, so the threadpool limit wasn’t an issue. So why did they start queuing when there were more than 100 clients? It’s because the ADO.NET connection pool is limited to 100 concurrent connections by default.

In summary, what can we learn from all this?

  • Asynchronous requests are useful, but *only* if you need to handle more concurrent requests than you have worker threads. Otherwise, synchronous requests are better just because they let you write simpler code. 
    (There is an exception: ASP.NET dynamically adjusts the worker thread pool between its minimum and maximum size limits, and if you have a sudden spike in traffic, it can take several minutes for it to notice and create new worker threads. During that time your app may have only a handful of worker threads, and many requests may time out. The reason I gradually adjusted the traffic level over a 30-minute period was to give ASP.NET enough time to adapt. Asynchronous requests are better than synchronous requests at handling sudden traffic spikes, though such spikes don’t happen often in reality.)
  • Even if you use asynchronous requests, your capacity is limited by the capacity of any external services you rely upon. Obvious, really, but until you measure it you might not realise how those external services are configured.
  • It’s not shown on the graph, but if you have a queue of requests going into ASP.NET, then the queue delay affects *all* requests – not just the ones involving expensive I/O. This means the entire site feels slow to all users. Under the right circumstances, asynchronous requests can avoid this site-wide slowdown by not forcing the other requests to queue so much.

Gotchas with Configuring for Asynchronous Controllers

The main surprise I encountered while trying to use asynchronous controllers was that, at first, I couldn’t observe any benefit at all. In fact, it took me almost a whole day of experimentation before I discovered all the things that were preventing them from making a difference. If I hadn’t been taking measurements, I’d never have known that my asynchronous controller was entirely pointless under default configurations. For me it’s a reminder that understanding the theory isn’t enough; you have to be able to measure it.

Here are some things you might not have realised:

  • Don’t even bother trying to load test your asynchronous controllers using IIS on Windows XP, Vista, or 7. Under these operating systems, IIS won’t handle more than 10 concurrent requests anyway, so you certainly won’t observe any benefits.
  • If your application runs under .NET 3.5, you will almost certainly need to change its MaxConcurrentRequestsPerCPU setting. By default, you’re limited to 12 concurrent requests per CPU – and this includes asynchronous as well as synchronous ones. Under this default setting, there’s no way you can get anywhere near the default worker threadpool limit of 100 per CPU, so you might as well handle everything synchronously. For me this is the biggest surprise; it just seems like a chronic mistake. I’d be interested to know if anyone can explain this. Fortunately, if your app runs under .NET 4, the MaxConcurrentRequestsPerCPU value is 5000 by default.
  • If your external I/O operations are HTTP requests (e.g., to a web service), then you may need to turn off autoConfig and alter your maxconnection setting (either in web.config or machine.config). By default, autoConfig allows for 12 outbound TCP connections per CPU to any given address. That limit might be a little low.
  • If your external I/O operations are SQL queries, then you may need to change the max ADO.NET connection pool size. By default, the limit is 100 in total, whereas the default ASP.NET worker thread pool limit is 100 per CPU, so you’re likely to hit the connection pool size limit first (unless, I guess, you have fewer than one CPU…).
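For reference, here’s roughly where each of those knobs lives. Treat these fragments as a sketch – the exact element names, file locations, and sensible values depend on your framework version and IIS mode, so verify them against the official configuration documentation before relying on them; the numbers shown are illustrative only.

```xml
<!-- .NET 3.5 under IIS 7 integrated mode: raise the concurrent-request limit
     in aspnet.config (found under %windir%\Microsoft.NET\Framework\...): -->
<system.web>
  <applicationPool maxConcurrentRequestsPerCPU="5000" />
</system.web>

<!-- Outbound HTTP connection limit, in web.config or machine.config
     (requires autoConfig to be turned off in processModel): -->
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="200" />
  </connectionManagement>
</system.net>
```

The ADO.NET connection pool limit, by contrast, is set in the connection string itself, e.g. by appending something like `;Max Pool Size=200`.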

If you don’t have high capacity requirements – e.g., if you’re building a small intranet application – then you can probably forget all about this blog post and asynchronous controllers in general. Most ASP.NET developers have never heard of asynchronous requests and they get along just fine.

But if you are trying to boost your server’s capacity and think asynchronous controllers are the answer, please be sure to run a practical test and make sure you can observe that your configuration meets your needs.
