Managing Concurrency With Asynchronous HTTP Requests

I developed a rather complicated GWT application last summer and spent plenty of time struggling with the concurrency issues involved in applications that use asynchronous web requests: for instance, the HttpWebRequest in Silverlight or the XmlHttpRequest in JavaScript. Up until the Silverlight 2 beta, Silverlight programmers could perform synchronous requests, but the latest version of Silverlight supports only asynchronous requests… We’re scrambling to update our apps.

There’s a “standard model” that works for writing reliable, performant, and secure RIAs; it works for GWT, Flex, and Silverlight, and for plain old AJAX apps too.

The model is that one user UI action should cause one HTTP request to the server, and the server should send back a bundle of data that has everything the client needs to update its state and redraw.
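Here’s a rough sketch of what I mean, in TypeScript; the endpoint name, the payload shape, and the applyState/redraw helpers are all made up for illustration, not anyone’s real API:

```typescript
// Hypothetical "save order" action: one click, one request, one response
// that carries everything the client needs to redraw.
interface SaveOrderResponse {
  order: { id: string; status: string };   // the updated server-side record
  inventory: Record<string, number>;       // related data the UI also displays
  messages: string[];                      // validation or status messages
}

async function onSaveClicked(orderDraft: object): Promise<void> {
  const resp = await fetch("/api/saveOrder", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(orderDraft),
  });
  const bundle: SaveOrderResponse = await resp.json();
  // Apply the whole bundle at once, then redraw: no partially updated UI,
  // and no second request needed to fill in the rest of the screen.
  applyState(bundle);
  redraw();
}

declare function applyState(bundle: SaveOrderResponse): void; // client state store (assumed)
declare function redraw(): void;                              // repaint the UI (assumed)
```

The point of the sketch is that there is exactly one round trip per user action, and the response is self-contained.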

Sound familiar? This is the proven model that conventional web applications use. It works, unlike the generation of failed “client/server” apps we saw in the ’90s, which showed that the general problem of client/server choreography is hard to get right in practice, particularly with a system like GWT that offers very little support for dealing with concurrency.

Put all the business logic on the server; maybe you can do some form validation on the client, but do it again on the server. Since GWT and Silverlight let you code with the same language on client and server, it’s easy to share code.
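For example, a validation routine can live in one shared module that both sides call; this is just a sketch with made-up names, assuming a TypeScript codebase that runs on the client and on a Node-style server:

```typescript
// shared/validation.ts: one module, compiled into the client bundle
// and also imported on the server (file name and rule are illustrative).
export function validateEmail(value: string): string | null {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value)
    ? null
    : "Please enter a valid email address.";
}

// Client: call validateEmail() before sending the request, for instant feedback.
// Server: call validateEmail() again on the incoming data before touching the
// database, because the client can never be trusted.
```

The client-side check is a convenience for the user; the server-side check is the one that actually protects your data.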

Both client and server can send each other “state update” packets: each packet bundles an arbitrary number of commands, and ideally those commands are applied in a single transaction.
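A packet like that might look something like the following; the command names, the packet shape, and the transaction wrapper are all hypothetical:

```typescript
// A hypothetical "state update" packet: several commands travel together
// in one request and the server applies them inside one transaction.
type Command =
  | { kind: "createItem"; name: string; quantity: number }
  | { kind: "updateItem"; id: string; quantity: number }
  | { kind: "deleteItem"; id: string };

interface StateUpdatePacket {
  commands: Command[];
}

// Assumed transactional interface, standing in for whatever database layer you use.
interface Transactional {
  transaction(work: (tx: unknown) => Promise<void>): Promise<void>;
}

declare function applyCommand(tx: unknown, cmd: Command): Promise<void>; // dispatch on cmd.kind

// Server-side sketch: either every command in the packet commits, or none of them do.
async function applyPacket(packet: StateUpdatePacket, db: Transactional): Promise<void> {
  await db.transaction(async (tx) => {
    for (const cmd of packet.commands) {
      await applyCommand(tx, cmd);
    }
  });
}
```

Because the whole packet succeeds or fails as a unit, the client never has to reason about a half-applied update.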

Note that this goes against the general principle that complicated operations should be composed out of simpler operations. That’s an excellent principle for object-oriented systems that live in a single address space, but it’s a dangerous way to build distributed applications: if a client operation requires 20 web requests, a reliable application needs to do something correct when 19 of those requests succeed and 1 of them fails, never mind the cost in latency.

Now, it’s not possible to do this in every case: there really are cases where you need to connect to a legacy server that you can’t redesign, where there are constraints on the client-server protocol, or where you can save bandwidth by having the client make selective requests (if some information is already cached). If you’re going to do that, you need design patterns that provide reliable answers to common problems. I’ll be talking about that in the next few days.
