
The Problem: The Tyranny of the Page Reload

As I mentioned in My 25-Year Odyssey to an Instant Web, my 25 years in web development have led me to a fundamental conclusion: most modern architectures are fighting the wrong enemy.

The False Grail: Server-Side Rendering (SSR)

When I first heard about SSR in the Nuxt 3 ecosystem, it sounded like magic (it ended up being almost what I had been doing for a decade and a half). The idea that the server would send a fully rendered HTML file promised a spectacular Time To First Byte (TTFB) and First Contentful Paint (FCP). And it delivers. For a user's first visit from Google, SSR is fantastic.

But the devil is in the details, specifically in the bandwidth. In a traditional SSR model, every page reload (F5), every navigation to a new section, involves downloading a complete HTML document all over again (16KB, 60KB, or more). On a 3G network, that wait is an eternity.

This is where the PWA "App Shell" architecture comes into play, and it's crucial to understand its trade-off:

  • The Upfront Cost: Yes, the very first time a user visits (and each time there's an update), they have to download the complete application "shell" (the JS/CSS/font assets).
  • The Return on Investment: This is where the use case changes everything.
    • If your site is a blog that gets 100 one-time visitors, the upfront cost of the SW might not be worth it. Traditional SSR could be more efficient in that scenario.
  • But if your application is a tool that 10 users visit 300 times a day, the math flips dramatically. Each user pays the download "toll" once. From then on, their remaining 299 daily visits no longer download 60KB of HTML; they are served instantly from the local cache. The bandwidth savings and the improvement in perceived speed are orders of magnitude.
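The back-of-the-envelope math above is easy to check. The 60KB HTML figure comes from the article; the shell and JSON sizes below are illustrative assumptions:

```javascript
// Bandwidth comparison for the "10 users, 300 visits/day" scenario.
// htmlKB is the article's figure; shellKB and jsonKB are assumed sizes.
const users = 10;
const visitsPerUserPerDay = 300;
const htmlKB = 60;    // full HTML document re-downloaded on every SSR visit
const shellKB = 200;  // assumed one-time app-shell download (JS/CSS/fonts)
const jsonKB = 0.5;   // assumed per-visit JSON state payload

const ssrDailyKB = users * visitsPerUserPerDay * htmlKB;
const pwaDailyKB = users * (shellKB + visitsPerUserPerDay * jsonKB);

console.log(ssrDailyKB); // 180000 KB/day with traditional SSR
console.log(pwaDailyKB); // 3500 KB/day with the app shell
```

Even with a generous estimate for the shell, the app-shell model moves roughly fifty times less data per day in this scenario.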

Frameworks like Nuxt or Next.js are excellent for hybrid navigation, but the fundamental problem of the full page reload persists. Our architecture doesn't just optimize client-side navigation; it attacks the very cost of the reload for the returning user.

The True Enemy: The Network

Frameworks like Nuxt, with its useFetch and useAsyncData composables, or Next.js with getServerSideProps/getStaticProps and Server Components, have perfected hybrid navigation. This technique allows in-app transitions to be handled on the client, fetching only the necessary JSON, which makes them very fast.

But this fast client-side navigation isn't magic exclusive to these frameworks. A well-built "pure" SPA does the exact same thing: it intercepts link clicks and requests only the data it needs.
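Stripped of framework sugar, that interception pattern reduces to "fetch JSON, render". Here is a minimal sketch; the endpoint shape and the DOM wiring are hypothetical, and the dependencies are injected so the core logic stands on its own:

```javascript
// Core of a hand-rolled SPA navigation: request only the data the next
// view needs. fetchJson and render are injected so the logic is visible
// (and testable) outside a browser. The /data endpoint is an assumption.
async function navigateTo(path, { fetchJson, render }) {
  const data = await fetchJson(`/data${path}.json`);
  render(path, data);
  return data;
}

// In a browser you would wire it up roughly like this (sketch):
// document.addEventListener('click', (e) => {
//   const link = e.target.closest('a[href^="/"]');
//   if (!link) return;
//   e.preventDefault();
//   history.pushState({}, '', link.pathname);
//   navigateTo(link.pathname, deps);
// });
```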

The real problem, the one these helpers don't solve, is the full page reload (F5) or the first visit to a deep page. In that moment, the network is boss again, forcing us to download the entire HTML document once more.

The Solution: A 3-Phase Architecture

This solution isn't free; it requires a deliberate architecture, just like setting up SSR correctly. The strategy is to attack network latency on three fronts, moving from "Instant Feedback" to "Instant Rendering".

Phase 1: Instant Feedback (SSG + SPA)

The foundation. We use Vite-SSG to generate a static site. The first load is an ultra-fast HTML file. Once loaded, the application "hydrates" and becomes an SPA. Internal navigations are fast because they only request data. For this to work, a clean separation of client and server logic is crucial, using components like <ClientOnly> for parts of the UI that are purely interactive.
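For reference, a vite-ssg entry point looks roughly like this. It is a sketch, not a drop-in file: names are placeholders, and you should check the vite-ssg docs for the exact signature in your version:

```javascript
// main.js — minimal vite-ssg bootstrap (sketch; adapt to your project)
import { ViteSSG } from 'vite-ssg'
import App from './App.vue'
import { routes } from './routes'

export const createApp = ViteSSG(
  App,
  { routes },
  ({ app, isClient }) => {
    // Register plugins here. This setup function also runs at build time
    // on Node, so anything touching window/document must be guarded.
    if (isClient) {
      // e.g. analytics or other browser-only plugins
    }
  },
)
```

The `isClient` guard here and the `<ClientOnly>` component in templates are the two sides of the same separation of client and server logic.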

NOTE

I use Vue and vite-ssg, but this architecture is framework-agnostic. You can implement it with SvelteKit, Nuxt, or any other modern stack that supports SSG.

Phase 2: Fast Feedback (PWA with Asset Precache)

A page refresh is still slow because, even if the HTML is static, the browser has to re-download all the assets (JS, CSS, fonts). This is where the Service Worker comes in. By turning the app into a PWA, the SW creates a precache with all the assets. On a page reload, resources are served instantly from the SW's local cache, dramatically improving the LCP.
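The precache lookup is a plain cache-first strategy. The sketch below factors it out of the Service Worker globals so the decision logic is visible on its own; in a real SW, `cache` would come from `caches.open()` and `networkFetch` would be `fetch`:

```javascript
// Cache-first: serve from the precache when possible, fall back to the
// network and store the result. Abstracted from SW globals for clarity.
async function cacheFirst(request, cache, networkFetch) {
  const cached = await cache.match(request);
  if (cached) return cached;           // instant, no network involved
  const response = await networkFetch(request);
  await cache.put(request, response);  // warm the cache for next time
  return response;
}

// Inside a real Service Worker it would be wired up roughly as:
// self.addEventListener('fetch', (event) => {
//   event.respondWith(
//     caches.open('precache-v1').then((c) => cacheFirst(event.request, c, fetch)),
//   );
// });
```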


The Modern SPA's Achilles' Heel: The Dynamic import()

The Phase 2 Service Worker doesn't just speed up reloads; it solves the most severe and silent problem of modern applications: the fragility of lazy loading.

Every modern framework (Vue, React, Svelte...) relies on code splitting via dynamic import(). Instead of sending a giant JavaScript file, we send a small core and fetch "chunks" (routes, dialogs, etc.) as they are needed.

On a fiber connection, this is a thing of beauty. But in the real world, this architecture is incredibly fragile. If the network drops and the user clicks on something that needs a new JavaScript "chunk," the request fails, and the application breaks.

The PWA makes this fragile operation bulletproof. Since the Service Worker has already downloaded all the "chunks" into its precache during installation, when the application requests one, the SW intercepts it and serves it instantly from the local cache, without relying on the network. This isn't an optimization; it's a guarantee that the application will never break due to a network failure while loading its own modules.


Phase 3: Instant Rendering (Resilient App Shell Architecture)

This is the final blow to latency, but it requires a precise choreography between the Service Worker and the Server.

We can't blindly serve the HTML from the cache on every reload, because the application handles sessions, and the server must always have the opportunity to intervene, for example, to issue a redirect.

The final strategy is an intelligent dialogue:

  1. On a reload, the SW intercepts the request and fires two tasks in parallel using Promise.allSettled:
    • Load the HTML "shell" (without data) from its local cache.
    • Fetch the JSON state from a /state-api endpoint on the network.
  2. The SW waits for both. This is where the resilience lies:
    • If the /state-api request returns an unexpected response (a server error HTML, an authentication redirect, etc.), the SW acts as a smart proxy: it discards the cached shell and serves the server's response, always respecting its authority.
    • If the /state-api request fails (because we're offline), the SW serves the cached shell with an empty state ({}), allowing the application to display its "offline" UI.
    • If both promises succeed, the SW performs the "stitching": it injects the small JSON payload into the HTML shell and serves the complete, hydrated page instantly.

The result is the annihilation of the network payload in the happy path. We go from transferring 16KB of HTML to just a few hundred bytes of JSON. It is this drastic reduction in network traffic, combined with a robust fallback logic, that makes the application feel instantaneous.

Released under the MIT License.