The Future of Web Development: JAM Explained

Thursday, December 5, 2019 | By Jack Hillman
Read time: 5-10 minutes

Static HTML documents have been the basis of the web since the early days. But as the needs of content editors grew, so did the popularity of content management systems (CMS).

Unfortunately for the end-user, load times have grown right alongside them. JAM (also referred to here as ‘the JAMstack’) is one of the ways the development community aims to fix this.

A bit of history

In the beginning, when dinosaurs roamed the earth, all web pages were simply structured content documents. As when writing a text document, you would put together your copy, format some headings, maybe bold some text, and call it a day. You would copy this HTML file onto your web server, and others would download and view it.

This was great – except every document stood alone. When you wanted the same piece of content updated in multiple places, you had to update it in every document by hand. This became a growing concern as pages and websites got more and more complex.

Introducing the Personal Home Page

Along came PHP, and we started adopting it to begin our foray into automatically generated content. A new world of possibilities opened up. Shared headers and footers? Easy. Latest posts listings? Scandir to the rescue. Colophon with the correct year? You got it.

The needs of our dynamic websites quickly grew faster than we could maintain our hand-rolled solutions. We all soon realised a well-organised system was required to show us the light.

A CMS-ful World

To our rescue came the purpose-built CMS; with templates, routing and a database for storage to boot. We were suddenly able to put together good websites in just a few moments, with WordPress’s famous ‘five-minute install’ being a real show-stopper.

As developers started adopting these CMSs and their frameworks, others started developing add-ons and plugins, adding functionality and features that would be cost-prohibitive for most to implement on their own.

And everyone lived happily ever after… Well, almost. This was all great news, except there were a number of downsides.

1. Convenience costs time

The CMS would do a lot for you, and your plugins would do even more. But each little thing would add a tiny bit of time to each page render, even if nothing was actually changed on the page.

The dynamic bootstrapping of components, and allowing every plugin to have its say, adds tiny amounts of processing time to each and every request. With a modern website often having thousands of these tiny amendments, we start to add a not-so-insignificant amount of additional rendering time to every page view.

2. Your website was entirely reliant on your CMS to exist

All of your content was stored in a database managed by your CMS, which was accessed via the models set up in your CMS, routed by controllers configured in your CMS, and rendered by the templating engine provided by your CMS.

Every step of the way was dependent on your CMS working exactly as intended.

3. Working outside the CMS wasn’t feasible

Adding new functionality or upgrading your tech stack was only practical if your changes fell within the scope of the CMS.

Everything within the scope of the CMS was structured and well-designed. Anything outside of it typically meant forcing a square peg into a round hole.

4. Scaling to your needs was difficult and expensive

As our websites took longer to render and more people viewed them, our web servers struggled to handle the load. For many, this meant simply scaling up our servers and calling it a day. However, this was expensive and wasteful. And worse, it was really just a band-aid fix: site traffic would continue to grow, and our servers had to grow with it.

Scaling to meet peak-hour requirements typically meant also paying for that capacity in off-peak hours, when traffic is generally minimal. In many segments, the vast majority of web traffic arrives during just a few hours of the day, leaving you billed for peak capacity the rest of the time.

Where are we now?

In an attempt to meet both scale and performance needs, we’ve looked into all sorts of caching, scaling and delivery methods. With consistent performance improvements in our infrastructure and software, and efforts to move process-heavy workloads out of the critical request path, we’ve managed to reduce some of our overhead. But we’re fighting an uphill battle.

This is where JAM comes in.

The JAMstack is a return to roots of sorts: instead of building our website markup dynamically on every request, as modern CMS solutions do, we build it *almost* entirely at build time.

However, where JAM differs is the ecosystem around it. Statically generating sites isn’t a new idea, but with modern tooling and high-performance JS frameworks like React and Vue, we can build static websites that are re-hydrated into dynamic, content-rich experiences on the client side.
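To make this concrete, here is a minimal sketch of that client-side re-hydration step, assuming a hypothetical `/api/latest-posts` JSON endpoint and a placeholder `#latest-posts` element in the pre-built HTML. The static page renders immediately; the script fills in live data afterwards.

```ts
// Minimal sketch of client-side re-hydration. The static HTML ships with
// placeholder markup; this script fills in live data after the page loads.
// The /api/latest-posts endpoint and #latest-posts element are hypothetical.

interface Post {
  title: string;
  url: string;
}

async function hydrateLatestPosts(): Promise<void> {
  const container = document.querySelector("#latest-posts");
  if (!container) return; // this page was pre-rendered without that section

  const response = await fetch("/api/latest-posts");
  const posts: Post[] = await response.json();

  container.innerHTML = posts
    .map((post) => `<li><a href="${post.url}">${post.title}</a></li>`)
    .join("");
}

// Run once the pre-rendered document has been parsed.
document.addEventListener("DOMContentLoaded", () => {
  void hydrateLatestPosts();
});
```

Frameworks like Gatsby (React) and Nuxt (Vue) automate this pattern, pre-rendering components to HTML at build time and re-hydrating them in the browser.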

How does this change things?

1. Building at build time saves building at run time

By pre-generating our HTML markup, and storing it on a quick-to-access storage medium, we’re able to drop our Time to First Byte down to almost nothing.

It doesn’t really matter how long a page takes to generate during the build – it will still be served quickly, because it was pre-compiled before the user ever requested it.
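As a rough illustration, here is a tiny build-time pre-rendering script, assuming a hard-coded `pages` array standing in for content from a headless CMS or a folder of Markdown files. Each page is rendered to an HTML file exactly once; from then on it is served as-is.

```ts
// Minimal sketch of build-time pre-rendering: every page is rendered to a
// static HTML file once, at build time, so no work happens per request.
// The `pages` array is illustrative only – in practice it would come from a
// headless CMS or a folder of Markdown files.
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

interface Page {
  slug: string;
  title: string;
  body: string;
}

const pages: Page[] = [
  { slug: "index", title: "Home", body: "<p>Welcome!</p>" },
  { slug: "about", title: "About", body: "<p>Who we are.</p>" },
];

function renderPage(page: Page): string {
  return `<!doctype html>
<html>
  <head><title>${page.title}</title></head>
  <body>${page.body}</body>
</html>`;
}

const outDir = join(process.cwd(), "public");
mkdirSync(outDir, { recursive: true });

for (const page of pages) {
  // Written once here, then served as-is from disk or a CDN.
  writeFileSync(join(outDir, `${page.slug}.html`), renderPage(page));
}
```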

2. Separation of concerns means fewer dependencies and limitations

With our static website only being one thing – a static website – we’re forced to move any additional functionality to separate sub-systems.

With this move to a new sub-system, we remove its dependency on our main product – the website. The new sub-system can run entirely on its own, calling out to other sub-systems as and when it needs to.

This is particularly valuable when scale is required, either as company requirements change or new functionality is needed, or when heavy traffic hits the site and each sub-system needs to be scaled to meet this load.
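For example, a contact form no longer has to live inside the website’s codebase at all. Below is a minimal sketch of such a stand-alone sub-system, assuming a hypothetical `/contact` endpoint and port; the static pages simply POST to it, and it can be scaled or replaced independently of the site.

```ts
// Minimal sketch of a stand-alone sub-system: a tiny HTTP service handling
// contact-form submissions, entirely separate from the static site.
// The /contact endpoint, the port and the downstream behaviour are
// illustrative assumptions, not part of any particular framework.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/contact") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      // A real system would hand this off to another sub-system here
      // (an email provider, CRM or queue). We just acknowledge receipt.
      console.log("contact form submission:", body);
      res.writeHead(202, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ accepted: true }));
    });
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(3001, () => {
  console.log("contact sub-system listening on :3001");
});
```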

3. Deploying statically to the edge is fast, cheap and reliable

When we deploy our JAM sites, we tend to deploy them more-or-less straight to a Content Delivery Network (CDN). As the name suggests, these servers are specifically designed to push as much content to as many users as possible, for the lowest possible price.

Additionally, as these CDN nodes are numerous and can fail over to nearby nodes as required, they provide a high degree of reliability.


Problems left to solve

These are all great things to have, and it’s an exciting time to be in the industry, but there are still some obstacles to overcome.

1. The delay between publishing and viewing

The biggest issue with the JAMstack is the author experience: seeing a change typically has to wait for a rebuild of the site, which often takes several minutes. This means that from the moment you hit “save draft” in your CMS, you may have to wait several minutes until the change is ready to preview.

Fortunately, this is an area of significant progress at the moment, with recent releases like TinaCMS, Stackbit Live and Gatsby Cloud offering solutions to improve this experience.

2. A one-button install

Right now, usage of JAM is mostly reserved for those already pretty invested in the industry. As a result, it can be difficult for the uninitiated to set up one of these environments.

Netlify and Stackbit are a couple of the big players who aim to provide an easy experience for those who don’t want to spend hours configuring an AWS hosting environment.

Looking into the future

There’s no doubt in my mind that the JAMstack methodology will be the future of web development. At atomix, we’re committed to learning from the industry and always aiming to stay ahead of the curve.

As such, we are actively investing in developing our solutions around the JAMstack, with preliminary pilot projects already completed and full-scale websites coming very soon.