We’ve long had content delivery networks (CDNs) that help speed things up and reduce latency for your web visitors. This works by ensuring the CDN has servers that are physically closer, and on a better network path (fewer hops), to where your visitor is based. You maintain multiple copies of the same content spread around the world, just to stay close to where your users are.
But beyond that, CDNs haven’t been the most advanced players. Sure, various methods have been available for serving private content, ensuring that only a select few have access. A good example is video streaming: you are authorized to watch a video because you signed in or paid for it, and through some signing method you get access to the resource.
But this is not a web-friendly method. You might need to use special URLs with long and complicated structures. Or you might be allowed to use a cookie, but the URL will probably still need to point to a distinct resource.
What if you wanted a system where people sign in and, depending on who they are and what they have access to, their view of the webpage is fully customized to their session, while maintaining a shared URL structure?
Say you go to www.example.com/todo-list. That link will not show the same TODO list to you as it would to me, because we each have our own list. But we can’t serve this URL from a CDN, because it points to one resource (the same URL), yet we need distinct resources per user or session.
Previously, you could deploy multiple servers with your web app across the world. You’d then use advanced DNS features so that the server closest to the user is the one selected to handle the request. This is, in a similar fashion, also how a CDN ensures that a server close to you responds to your request.
But it’s not a trivial task to deploy servers around the world, maintain them, and ensure they are located close to your users. Even with today’s global cloud providers, there is still effort and manual labour involved. And it’s not globally elastic and auto-scaling: you’ll need to make an informed guess as to where you should put your servers versus where your future users will be. It’s a suboptimal situation.
Instead, what you want is some core infrastructure, ultimately serving all the content, fronted by something like Cloudflare Workers or AWS Lambda@Edge. The key difference is that you can now deploy code to a CDN. This code is deployed and executed automatically across a global CDN network, where it can handle session-specific logic.
In our earlier example, where I go to www.example.com/todo-list and get my list while you get yours, code at the edge would run and inspect, say, the unique user ID encoded and encrypted into the session cookie you received when signing in. Logic in this code could then translate that URL into, say, www.example.com/todo-list?user-id=1234. This would be fully invisible to me as a user; my browser would just show www.example.com/todo-list, because that’s the actual request it made to the CDN endpoint. But the CDN endpoint would pass on the request as www.example.com/todo-list?user-id=1234. And this request can be cached at the endpoint, making it fast.
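The rewrite step can be sketched in plain JavaScript. This is a minimal illustration, not a complete edge deployment: the cookie name `session-user-id` and the `user-id` query parameter are assumptions for the example, and a real system would keep the user ID encrypted and signed inside the cookie rather than in plain text, as the article notes.

```javascript
// Pull a named cookie out of a Cookie header string.
function getCookie(cookieHeader, name) {
  for (const part of (cookieHeader || "").split(";")) {
    const [key, ...rest] = part.trim().split("=");
    if (key === name) return rest.join("=");
  }
  return null;
}

// Translate the public URL into the per-user URL the edge forwards upstream.
// In a Cloudflare Workers-style setup this would run inside the fetch
// handler, building a new Request before calling fetch() to the origin.
function rewriteForUser(publicUrl, cookieHeader) {
  const userId = getCookie(cookieHeader, "session-user-id");
  if (userId === null) return publicUrl; // anonymous: pass through unchanged
  const url = new URL(publicUrl);
  url.searchParams.set("user-id", userId);
  return url.toString();
}
```

So `rewriteForUser("https://www.example.com/todo-list", "session-user-id=1234")` yields `https://www.example.com/todo-list?user-id=1234`, while the browser only ever sees the shared URL.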
The observant reader will spot an issue in the above. If there is to be any benefit, we’d like to cache the www.example.com/todo-list?user-id=1234 response on the edge. But what if I finish a task, or add a new task to my TODO list? We want to avoid calls back to the backend servers where the TODO lists are actually stored, unless we absolutely have to. We should therefore go with immutable URLs. Taking this further, we could put a counter value in the session cookie and increase the count whenever we make a mutating operation, e.g. adding a task to the TODO list. The actual URL called by the CDN server to the backend would then look like www.example.com/todo-list?user-id=1234&c-id=1001.
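The counter scheme can be sketched as follows. The `c-id` parameter comes from the article; the session object shape and function names are hypothetical, and in practice the counter would live in the signed session cookie and be bumped by the backend when it handles a mutating request.

```javascript
// Build the immutable per-user, per-version URL the edge forwards upstream.
// Because the counter changes after every mutation, a stale cached response
// can never be served for the new URL.
function buildUpstreamUrl(publicUrl, session) {
  const url = new URL(publicUrl);
  url.searchParams.set("user-id", session.userId);
  url.searchParams.set("c-id", String(session.counter));
  return url.toString();
}

// Called on any mutating operation, e.g. adding a task to the TODO list:
// the incremented counter is then written back into the session cookie.
function bumpCounter(session) {
  return { ...session, counter: session.counter + 1 };
}
```

After one mutation, a session starting at counter 1000 produces exactly the URL from the article: `https://www.example.com/todo-list?user-id=1234&c-id=1001`.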
That is a distinct URL for the user and for the specific version of the list. The CDN servers will cache the response for this distinct URL, making subsequent requests fast. And we’ve managed to build a system that benefits from near-user caching while still looking, feeling and otherwise behaving like any modern web application. We’ll naturally also need security mechanisms that prohibit anyone from passing in the above URL directly, as we only want the CDN servers to request such a URL, never the user. But this is easily accomplished with, for example, a shared secret known to our backend servers and the CDN servers, but not to the end user.
I think we’ll see more and more of this type of edge architecture, because the benefit of reduced latency is a win both for the web application’s users and for those owning the application. It also brings resiliency benefits against DDoS attacks, as we now have a global infrastructure at our disposal.