Measuring the Performance of Asynchronous Controllers

Recently I’ve been working with ASP.NET MVC 2.0’s new asynchronous controllers. The idea with these is that you can split the request handling pipeline into two phases:

  • First, your controller begins one or more external I/O calls (e.g., SQL database calls or web service calls). Without waiting for them to complete, it releases the thread back into the ASP.NET worker thread pool so that it can deal with other requests.
  • Later, when all of the external I/O calls have completed, the underlying ASP.NET platform grabs another free worker thread from the pool, reattaches it to your original HTTP context, and lets it complete handling the original request.
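In MVC 2 this two-phase split is expressed as a Begin/End-style action pair on an `AsyncController`. Here's a minimal sketch of the pattern (the controller name, connection string, and response text are illustrative, not taken from the real test app):

```csharp
using System.Data.SqlClient;
using System.Web.Mvc;

public class TestController : AsyncController
{
    // Phase 1: start the I/O, then let the worker thread go back to the pool.
    public void IndexAsync()
    {
        AsyncManager.OutstandingOperations.Increment();

        // "Asynchronous Processing=true" is required for BeginExecute* on SqlCommand.
        var conn = new SqlConnection(
            "Server=.;Database=Test;Trusted_Connection=yes;Asynchronous Processing=true");
        conn.Open();
        var cmd = new SqlCommand("WAITFOR DELAY '00:00:02'", conn);
        cmd.BeginExecuteNonQuery(ar =>
        {
            cmd.EndExecuteNonQuery(ar);
            conn.Close();
            // When the outstanding-operations count reaches zero,
            // the framework schedules phase 2 on a free worker thread.
            AsyncManager.OutstandingOperations.Decrement();
        }, null);
    }

    // Phase 2: runs on a (possibly different) worker thread once the I/O completes.
    public ActionResult IndexCompleted()
    {
        return Content("Hello");
    }
}
```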

This is intended to boost your server’s capacity. Because you don’t have so many worker threads blocked while waiting for I/O, you can handle many more concurrent requests. A while back I blogged about a way of doing this with an early preview release of ASP.NET MVC 1.0, but now in ASP.NET MVC 2.0 it’s a natively supported feature.

In this blog post I’m not going to explain how to create or work with asynchronous controllers. That’s already described elsewhere on the web: you can see the process in action in TekPub’s “Controllers: Part 2” episode, and you’ll find extremely detailed coverage and tutorials in my forthcoming ASP.NET MVC 2.0 book when it’s published. Instead, I’m going to describe one possible way to measure the performance effects of using them, and explain how I learned that under many default configurations you won’t get any benefit from them unless you make some crucial configuration changes.

Measuring Response Times Under Heavy Traffic

To understand how asynchronous controllers respond to differing levels of traffic, and how this compares to a straightforward synchronous controller, I put together a little ASP.NET MVC 2.0 web application with two controllers. To simulate a long-running external I/O process, they both perform a SQL query that takes 2 seconds to complete (using the SQL command WAITFOR DELAY '00:00:02') and then return the same fixed text to the browser. One of them does it synchronously; the other asynchronously.

Next, I put together a simple C# console application that simulates heavy traffic hitting a given URL. It simply requests the same URL over and over, calculating the rolling average of the last few response times. First it does so on just one thread, but then gradually increases the number of concurrent threads to 150 over a 30-minute period. If you want to try running this tool against your own site, you can download the C# source code.
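If you don't want to download the tool, the core of it looks roughly like this (a simplified sketch, not the actual downloadable source; the URL, window size, and ramp-up timing are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Net;
using System.Threading;

class LoadTester
{
    // Rolling window of the most recent response times, shared by all workers.
    static readonly Queue<double> RecentTimes = new Queue<double>();

    static void Main()
    {
        const string url = "http://localhost/test/sync";  // assumption: your test URL

        // Ramp from 1 to 150 concurrent workers over ~30 minutes.
        for (int workers = 1; workers <= 150; workers++)
        {
            new Thread(() => RequestLoop(url)) { IsBackground = true }.Start();
            Thread.Sleep(12000);  // 150 workers x 12 s = 30 minutes
        }
    }

    static void RequestLoop(string url)
    {
        while (true)
        {
            var timer = Stopwatch.StartNew();
            using (var client = new WebClient())
                client.DownloadString(url);

            lock (RecentTimes)
            {
                RecentTimes.Enqueue(timer.Elapsed.TotalSeconds);
                if (RecentTimes.Count > 50) RecentTimes.Dequeue();
                // Report the rolling average of the last 50 responses here.
            }
        }
    }
}
```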

The results illustrate a number of points about how asynchronous requests perform. Check out this graph of average response times versus number of concurrent requests (lower response times are better):

[Image: graph of average response time against number of concurrent requests, for the synchronous and asynchronous controllers]

To understand this, first I need to tell you that I had set my ASP.NET MVC application’s worker thread pool to an artificially low maximum limit of 50 worker threads. My server actually has a default max threadpool size of 200 – a more sensible limit – but the results are made clearer if I reduce it.
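If you want to reproduce this, the worker thread pool cap lives in machine.config. Note that maxWorkerThreads is specified per CPU, so the value you need depends on your CPU count – this fragment is a sketch, not the exact configuration I used:

```xml
<!-- machine.config: autoConfig must be false before explicit limits take effect -->
<system.web>
  <processModel autoConfig="false" maxWorkerThreads="50" maxIoThreads="50" />
</system.web>
```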

As you can see, the synchronous and asynchronous requests performed exactly the same as long as there were enough worker threads to go around. And why shouldn’t they?

But once the threadpool was exhausted (> 50 clients), the synchronous requests had to form a queue to be serviced. Basic queuing theory tells us that the average time spent waiting in a queue is given by the formula:

[Image: queuing formula – roughly, average wait ≈ (number of queued requests × average service time) ÷ (number of worker threads)]

… and this is exactly what we see in the graph. The queuing time grows linearly with the length of the queue: with 50 worker threads and a 2-second service time, for example, 100 concurrent clients leave roughly 50 requests queued, adding around 50 × 2 / 50 = 2 seconds to the average response time. (Apologies for my indulgence in using a formula – sometimes I just can’t suppress my inner mathematician. I’ll get therapy if it becomes a problem.)

The asynchronous requests didn’t need to start queuing so soon, though. They don’t need to block a worker thread while waiting, so the threadpool limit wasn’t an issue. So why did they start queuing when there were more than 100 clients? It’s because the ADO.NET connection pool is limited to 100 concurrent connections by default.

In summary, what can we learn from all this?

  • Asynchronous requests are useful, but *only* if you need to handle more concurrent requests than you have worker threads. Otherwise, synchronous requests are better just because they let you write simpler code. 
    (There is an exception: ASP.NET dynamically adjusts the worker thread pool between its minimum and maximum size limits, and if you have a sudden spike in traffic, it can take several minutes for it to notice and create new worker threads. During that time your app may have only a handful of worker threads, and many requests may time out. The reason I gradually adjusted the traffic level over a 30-minute period was to give ASP.NET enough time to adapt. Asynchronous requests are better than synchronous requests at handling sudden traffic spikes, though such spikes don’t happen often in reality.)
  • Even if you use asynchronous requests, your capacity is limited by the capacity of any external services you rely upon. Obvious, really, but until you measure it you might not realise how those external services are configured.
  • It’s not shown on the graph, but if you have a queue of requests going into ASP.NET, then the queue delay affects *all* requests – not just the ones involving expensive I/O. This means the entire site feels slow to all users. Under the right circumstances, asynchronous requests can avoid this site-wide slowdown by not forcing the other requests to queue so much.

Gotchas with Configuring for Asynchronous Controllers

The main surprise I encountered while trying to use asynchronous controllers was that, at first, I couldn’t observe any benefit at all. In fact, it took me almost a whole day of experimentation before I discovered all the things that were preventing them from making a difference. If I hadn’t been taking measurements, I’d never have known that my asynchronous controller was entirely pointless under default configurations. For me it’s a reminder that understanding the theory isn’t enough; you have to be able to measure it.

Here are some things you might not have realised:

  • Don’t even bother trying to load test your asynchronous controllers using IIS on Windows XP, Vista, or 7. Under these operating systems, IIS won’t handle more than 10 concurrent requests anyway, so you certainly won’t observe any benefits.
  • If your application runs under .NET 3.5, you will almost certainly need to change its MaxConcurrentRequestsPerCPU setting. By default, you’re limited to 12 concurrent requests per CPU – and this includes asynchronous as well as synchronous ones. Under this default setting, there’s no way you can get anywhere near the default worker thread pool limit of 100 per CPU, so you might as well handle everything synchronously. For me this is the biggest surprise; it just seems like a chronic mistake. I’d be interested to know if anyone can explain this. Fortunately, if your app runs under .NET 4, then the MaxConcurrentRequestsPerCPU value is 5000 by default.
  • If your external I/O operations are HTTP requests (e.g., to a web service), then you may need to turn off autoConfig and alter your maxconnection setting (either in web.config or machine.config). By default, autoConfig allows for 12 outbound TCP connections per CPU to any given address. That limit might be a little low.
  • If your external I/O operations are SQL queries, then you may need to change the max ADO.NET connection pool size. By default, the limit is 100 in total, whereas the default ASP.NET worker thread pool limit is 100 per CPU – so you’re likely to hit the connection pool size limit first (unless, I guess, you have fewer than one CPU…).
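Putting the last two points together, the relevant configuration changes might look roughly like this (a web.config sketch – the attribute names are the standard ones, but all the values are illustrative; on .NET 3.5, MaxConcurrentRequestsPerCPU is a registry DWORD under HKLM\SOFTWARE\Microsoft\ASP.NET\2.0.50727.0 rather than a config attribute):

```xml
<configuration>
  <system.net>
    <connectionManagement>
      <!-- Outbound HTTP connections; autoConfig's default is 12 per CPU -->
      <add address="*" maxconnection="500" />
    </connectionManagement>
  </system.net>
  <connectionStrings>
    <!-- ADO.NET pool; Max Pool Size defaults to 100 in total -->
    <add name="MainDB"
         connectionString="Server=.;Database=Test;Trusted_Connection=yes;Max Pool Size=500;Asynchronous Processing=true" />
  </connectionStrings>
</configuration>
```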

If you don’t have high capacity requirements – e.g., if you’re building a small intranet application – then you can probably forget all about this blog post and asynchronous controllers in general. Most ASP.NET developers have never heard of asynchronous requests and they get along just fine.

But if you are trying to boost your server’s capacity and think asynchronous controllers are the answer, please be sure to run a practical test and make sure you can observe that your configuration meets your needs.


16 Responses to Measuring the Performance of Asynchronous Controllers

  1. Great Article!

    I have one question about asynchronous processing. In ASP.NET we have APM. However, when I looked outside ASP.NET territory, it seemed that not many other platforms or technologies have BeginXXXX/EndXXXX-style asynchrony (e.g., PHP).

    I am wondering why other technologies don’t support asynchrony well. Is it because those Unix-based technologies don’t suffer from threading overhead?

  2. Steve

    Morgan – I can’t really guess why PHP works the way it does – I just don’t know enough about it. Sorry!

  3. “IIS on Windows XP, Vista, or 7 … won’t handle more than 10 concurrent requests anyway” – the artificial IIS 10-connection limit was removed in Vista IIRC

  4. Steve

    Duncan – that’s not quite the point. On Vista and newer, IIS will allow more than 10 simultaneous *connections*, but not more than 10 simultaneous requests. If there are > 10 connections, they are queued – so you won’t observe any benefits from async controllers.

    You can observe this practically using the simple load testing tool I provided with this post, or you can read more about it on Bill Staples’s blog at http://tinyurl.com/33g5cj.

  5. Still… ever since I heard of this mechanism in ASP.NET WebForms (the Async attribute on the page directive), I’ve always wondered what real-life scenarios it would come in handy in…

    I understand it’s useful for costly operations, but what costly public operations would be available on a public site? Private operations (requiring authentication + authorization) can be controlled better, however…

  6. Great information Steve.

    As an aside, I was very excited to read that blurb about your new MVC 2 book. Pro ASP.NET MVC was absolutely my favorite of the available MVC books, and is actually in my overall top-5 favorite reads to date.

    Definitely looking forward to it!

    -Matt

  7. Eric

    Muchos Kudos on the Tekpub series! Even though I’ve read both your MVC book and the Manning MVC in Action book, I’m still enjoying and getting my money’s worth out of the series. Can’t wait for the second edition of your MVC book!

  8. Pingback: The Morning Brew - Chris Alcock » The Morning Brew #525

  9. Steve,

    Given the default IIS thread pool limit, I’m curious what happens as synchronous and asynchronous controllers approach the default connection pool limit. In other words, what does the graph look like if you leave your test environment exactly as stated, but increase the IIS thread pool limit back to its default?

  10. Steve

    Sean, if I reverted to a threadpool limit of 200, then it would be the same except I’d need more than 200 concurrent requests before I started to see any significant benefit from using asynchronous controllers.

    The reason I didn’t do this originally is because it would have made things more complex – I’d have to explain that I also needed to increase the ADO.NET connection pool size to something above 200 otherwise I still couldn’t observe any benefit from asynchronous controllers.

  11. Eric

    Well, I guess I’ll never see how the Tekpub series pans out. After two excellent episodes by Steve Sanderson, I made a possible unflattering comment/question about the first episode on Views (the Matrix analogy for describing the ASP.NET Request pipeline was pretty LAME) and my monthly account was canceled. I didn’t ask for it to be canceled, but whatever… I guess that’s how Rob and Company run the business. I can’t imagine them being in business long, but that’s another topic entirely. I’d recommend the first two episodes of Mastering ASP.NET MVC by Steve Sanderson on Tekpub, but I can’t really recommend the Tekpub service. Rob and Company are pretty sensitive girlie men. :P

  12. Great article on Async controllers in MVC 2. I’m glad you pointed out that there are many instances where you won’t see a performance increase just by switching over to Async; I think there is a common misconception that async = better performance, when in fact that isn’t always the case.

  13. Thomas

    Steve, thanks for a great article. I am running an MVC application (.NET 4.0 on IIS7) and when a user submits a form I have to send that information to two external web services. The problem is that the web services take about 20 seconds to reply (this is beyond my control). I decided to move to async controllers so I could hit both services at the same time, as opposed to hitting one, waiting for the response, and then hitting the other one.

    Now if I get 10 users submitting at the same time I will create 20 outbound requests on 20 separate threads. Given your article I will be out of luck since the limit is set to 12. If I am reading your article correctly, I am forced to set the maxconnection setting to a higher value in my web.config file in order for this to work. Is that correct? Is there anything else I should be thinking about in my particular scenario?

    On another note, thank your for your Tekpub contributions. Your videos are awesome.

  14. Serdar Karahisarlı

    Hi Steve,

    Actually I want to use async controllers to avoid blocking the UI. We have a web service that queries orders from an ERP service, but users also want to do other stuff at the same time.

    Are async controllers appropriate for my case, or should I just use the parallel namespace?

    Thanks.

  15. Adrian

    Hi, do you know any drawbacks of using AsyncController for sync actions? More overhead, maybe? Or if you don’t use the naming convention, is it treated like a normal Controller?

    Thanks in advance

  16. Kyle

    Have they considered making controller actions async by default? The first action method to handle a request could be called transparently in an async fashion. Then you would have to opt out of async. What would be the downside of this?