Next.js App Router Migration

Migrated a React SPA to Next.js App Router to improve performance, simplify routing, and create a clearer platform for future features.

  • Next.js 16
  • React
  • TypeScript
  • GraphQL
  • AWS
  • CI/CD

When the SPA Works Against You

React SPAs have a predictable performance trajectory. At small scale, the single-bundle approach is completely defensible: one JavaScript file, one HTML shell, the client does everything. It loads fast because it's small, and it's easy to reason about because there's one rendering model for every part of the app. Adding a new page means adding components. The architecture is generous—it accepts whatever you put into it without complaint.

The problem with an architecture that accepts everything is that it keeps accepting things past the point where the original tradeoffs still hold. By the time a React SPA is generating real performance complaints, it's usually been accumulating JavaScript for several years. The bundle is large not because any single feature was reckless, but because each addition was reasonable given what existed at the time. No one ships a 400KB bundle on purpose; it grows there incrementally, a few KB at a time.

The specific symptoms that triggered this migration were visible in the performance data. Initial load on slower connections—the 3G throttle profile in Chrome DevTools is a brutal but honest proxy for real-world mobile—was producing LCP values that failed Core Web Vitals thresholds. Time-to-interactive was uncomfortable. The bundle contained code for routes the user would never visit in any given session. And because data fetching happened inside components after the client rendered, first meaningful paint was delayed by a waterfall: HTML shell → JavaScript bundle → component mount → API call → data render. Each step was fast in isolation. Together they added up to a user experience that felt slow.

Choosing App Router Over the Alternatives

The remedies available to a SPA with a performance problem are well-charted. You can add code splitting and lazy-load routes. You can put a CDN-backed cache in front of the API to reduce waterfall depth. You can audit the bundle and prune dependencies. These are all valid incremental improvements. We tried most of them and produced modest gains that still left the system short of where it needed to be. Incremental optimization was treating the symptoms.

The other option is to reconsider the rendering architecture. Server-side rendering for a SPA can be retrofitted with frameworks like Remix, Vite + SSR, or Next.js. The question is what you're optimizing for beyond the immediate performance improvement. If the only goal is a better LCP, any of these get you there. If the goal is to improve LCP and change the default behavior for future features—so that new work gets performance for free rather than requiring deliberate per-feature optimization—then Next.js App Router's Server Components model is the interesting choice.

Server Components remove an entire category of client-side fetch waterfall by doing data work on the server before any HTML reaches the browser. They're not an optimization you sprinkle on specific routes; they're the default mode for every new route you write. A feature built on App Router starts with server-side data fetching unless you explicitly opt into client interactivity. That's the inversion that made App Router the right fit: it makes good performance the path of least resistance rather than a decision you re-justify for every new piece of work.

The Server/Client Component Boundary

The most disorienting part of App Router for engineers who have spent years building SPAs is the Server/Client Component split. In a SPA, all components run in the browser. State is state; event handlers are event handlers; everything can import everything. In App Router, Server Components run exclusively on the server. They can't have state, can't use event handlers, and can't import anything that depends on browser APIs. Client Components—marked with `'use client'`—run in the browser and behave like standard React components.
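The split can be illustrated with a minimal pair. Both components here are hypothetical sketches, and the fetch URL is a placeholder:

```tsx
// app/status/page.tsx — a Server Component: async, fetches during render,
// and ships no JavaScript for itself to the browser. It cannot use state
// or event handlers.
export default async function StatusPage() {
  const res = await fetch('https://api.example.com/status'); // placeholder URL
  const status: { label: string } = await res.json();
  return <p>Current status: {status.label}</p>;
}
```

```tsx
// components/RefreshButton.tsx — a Client Component: the 'use client'
// directive at the top of the file opts it into browser execution, which
// is what makes useState and onClick legal here.
'use client';

import { useState } from 'react';

export function RefreshButton() {
  const [clicks, setClicks] = useState(0);
  return <button onClick={() => setClicks(clicks + 1)}>Refreshed {clicks} times</button>;
}
```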

The boundary has strict composition rules. A Server Component can render a Client Component. A Client Component cannot render a Server Component directly, though it can accept one as `children` or `props`. Data flows down through props. This means a top-level data-fetching component that renders a chain of presentational children—a pattern that's idiomatic in SPAs—works differently in App Router: the top-level component can be a Server Component that fetches data and passes it down to Client Components as props, but the Client Components can't initiate their own server-side fetches. Their fetch boundary is the point where they enter the component tree.
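The children-slot composition can be sketched as follows (component names are hypothetical):

```tsx
// Collapsible.tsx — a Client Component that accepts server-rendered
// children. It cannot import a Server Component itself, but it can
// receive one through the `children` prop.
'use client';

import { useState, type ReactNode } from 'react';

export function Collapsible({ children }: { children: ReactNode }) {
  const [open, setOpen] = useState(true);
  return (
    <section>
      <button onClick={() => setOpen(!open)}>{open ? 'Hide' : 'Show'}</button>
      {open && children}
    </section>
  );
}

// In a Server Component page, the server-rendered panel is passed in
// as children, which is the composition the boundary rules allow:
//
//   <Collapsible>
//     <ServerPanel />   {/* fetched and rendered on the server */}
//   </Collapsible>
```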

Getting this boundary right for each migrated route was the most consequential design decision in the migration. Routes with complex interactive forms needed careful thought about which parts were purely presentational versus which needed local state for validation feedback, focus management, or optimistic updates. The guiding principle that held up across the whole migration: push the Client Component boundary as far down the tree as possible. A page with a data table and a delete button doesn't need the entire table to be a Client Component just because the button has a click handler. Wrap the button in a Client Component and leave the table as a Server Component that renders from server-fetched data. The difference in bundle size and hydration cost is real.
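The table-and-button case looks roughly like this (data shapes and endpoint paths are illustrative):

```tsx
// MembersTable.tsx — stays a Server Component: the table renders from
// server-fetched data and ships no client JavaScript. Only the delete
// button crosses the boundary.
import { DeleteButton } from './DeleteButton';

type Member = { id: string; name: string };

export default function MembersTable({ members }: { members: Member[] }) {
  return (
    <table>
      <tbody>
        {members.map((m) => (
          <tr key={m.id}>
            <td>{m.name}</td>
            <td>
              <DeleteButton memberId={m.id} />
            </td>
          </tr>
        ))}
      </tbody>
    </table>
  );
}
```

```tsx
// DeleteButton.tsx — the only Client Component on the page, so the only
// part that pays bundle and hydration cost.
'use client';

export function DeleteButton({ memberId }: { memberId: string }) {
  const onDelete = () => fetch(`/api/members/${memberId}`, { method: 'DELETE' });
  return <button onClick={onDelete}>Delete</button>;
}
```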

The Route-by-Route Migration Strategy

Migrating an entire SPA to App Router in one step sounds bold and turns out to be painful. Every route in a mature SPA has accumulated behavior that's easy to miss during a big rewrite: analytics events, error boundaries, A/B test variants, accessibility patterns, keyboard navigation behaviors. A big-bang migration has to get all of that right across all routes simultaneously, or you ship regressions on the routes you tested less carefully. The risk is concentrated, the blast radius is large, and rollback means reverting the entire effort.

The strategy was route-by-route migration with parallel operation. For the duration of the migration, the Next.js app and the React SPA could both serve any given route—the SPA served it by default, and the Next.js version was available behind a feature flag. High-traffic routes moved first, which concentrated testing and validation effort on the routes that mattered most for Core Web Vitals benchmarking and user-facing performance. The feature flag was flipped by segment: internal users first, then a percentage rollout to logged-in users, then full traffic.
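The segment-then-percentage flag flip can be sketched as a deterministic bucketing function. This is an illustrative model, not the actual flag system; the names and shapes are assumptions:

```typescript
type Segment = 'internal' | 'logged-in' | 'all';

interface RolloutConfig {
  route: string;
  segment: Segment; // who is eligible for the Next.js version
  percent: number;  // 0-100, applied within the eligible segment
}

interface User {
  id: string;
  isInternal: boolean;
  isLoggedIn: boolean;
}

// Deterministic bucketing: the same user always lands in the same bucket,
// so a user who sees the Next.js version keeps seeing it as the rollout
// percentage widens.
function bucket(userId: string): number {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 100;
}

function servesNextVersion(config: RolloutConfig, user: User): boolean {
  if (config.segment === 'internal' && !user.isInternal) return false;
  if (config.segment === 'logged-in' && !user.isLoggedIn) return false;
  return bucket(user.id) < config.percent;
}
```

Rolling back a route is then a config change (set `percent` to 0), which is what makes each increment cheap to reverse.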

This approach meant that at any point in the migration, the production SPA was still fully operational. If a Next.js route had a regression, flipping the flag back restored SPA traffic while the issue was fixed. No route was permanently cut over until it had passed through a full rollout cycle without incident. The migration took longer than a big-bang would have, but the risk profile was categorically different: each increment was a small, reversible change rather than a large, irreversible one.

Data Fetching at the Right Altitude

The SPA's data fetching had the classic accumulation problem. As features were added over years, fetching responsibilities drifted: some data was fetched in a global context provider at app initialization, some in route-level components on mount, some in deeply nested components that needed data the parent hadn't passed down. The result was a mix of patterns with different caching strategies, different loading state handling, and non-obvious dependencies between distant parts of the component tree.

Server Components provided a natural forcing function for consolidation. Because a Server Component fetches data during render—not in a `useEffect` or a context provider—the fetching code lives where the data is used. A route showing a list of workspace members fetches membership data in the page's Server Component, which renders a Server Component for each member, each of which can fetch its own detail data in parallel. Fetches are co-located with the components that need them. No context providers. No prop-drilling from a top-level fetcher that doesn't know which children need what.
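The workspace-members route can be sketched like this (endpoint paths and data shapes are hypothetical; `params` is awaited per the App Router's async params convention):

```tsx
// app/workspace/[id]/page.tsx — co-located data fetching. The page fetches
// the member list; each MemberCard fetches its own detail data, and React
// renders the cards in parallel on the server.
async function MemberCard({ memberId }: { memberId: string }) {
  const res = await fetch(`https://api.example.com/members/${memberId}`);
  const detail: { name: string; role: string } = await res.json();
  return <li>{detail.name} ({detail.role})</li>;
}

export default async function WorkspacePage({
  params,
}: {
  params: Promise<{ id: string }>;
}) {
  const { id } = await params;
  const res = await fetch(`https://api.example.com/workspaces/${id}/members`);
  const members: { id: string }[] = await res.json();

  return (
    <ul>
      {members.map((m) => (
        <MemberCard key={m.id} memberId={m.id} />
      ))}
    </ul>
  );
}
```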

The performance benefit from co-location is concrete. Next.js deduplicates identical fetch calls automatically within a single render pass, so multiple components requesting the same resource don't generate multiple network requests. And because all of this happens on the server before any HTML is sent to the browser, the user sees fully populated content rather than a loading skeleton that fills in after client JavaScript executes. The waterfall—HTML shell → JS bundle → fetch → render—collapses into a single server-side operation that delivers fully populated markup on the first HTTP response.
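The deduplication is what makes co-location cheap. In a sketch like the following (placeholder URL and shape), both components call the same endpoint during one render pass, and Next.js memoizes the request so only one network call is made:

```tsx
// Request memoization: identical fetch calls (same URL, same options) in
// a single server render pass are deduplicated, so there is no need to
// thread the data through props just to avoid a second request.
async function getViewer(): Promise<{ name: string }> {
  const res = await fetch('https://api.example.com/viewer');
  return res.json();
}

export async function Header() {
  const viewer = await getViewer(); // triggers the request
  return <header>Hello, {viewer.name}</header>;
}

export async function Sidebar() {
  const viewer = await getViewer(); // deduplicated: same URL, same pass
  return <aside>Workspace for {viewer.name}</aside>;
}
```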

Keeping Quality Honest During Migration

Route-by-route migration creates a specific quality risk: because each route moves independently, it's easy for one team's careful work to establish a quality bar that a later route quietly misses. Accessibility patterns get dropped when the engineer migrating the fifth route doesn't know what the engineer who did the first route did to preserve keyboard navigation. Analytics events get lost because the new route uses a different event-firing pattern than the SPA's global setup. Error handling regresses because the SPA had an error boundary that didn't have an obvious App Router equivalent.

The migration checklist was the solution. Before any route could be marked as complete, it had to pass a structured review covering six areas:

  • Accessibility: keyboard navigation, ARIA roles, focus management on modal interactions.
  • Analytics parity: every event the SPA fired had a corresponding event in the Next.js version.
  • Error handling: error boundaries, fallback UI, graceful degradation on failed fetches.
  • Core Web Vitals measurement: LCP, CLS, and FID baselines captured for both the SPA version and the Next.js version.
  • SEO requirements: meta tags, canonical URLs, structured data where applicable.
  • Feature parity: a manual walkthrough of every user-visible behavior on the route.

The checklist wasn't bureaucratic overhead—it was the mechanism that made route-by-route migration work. Each increment was small, but each increment was also complete. The migration didn't accumulate a "we'll clean that up later" backlog on the routes it touched, which meant there was no migration debt left over at the end.

What Moved the Numbers

The Core Web Vitals results after migrating the highest-traffic routes were meaningful but not dramatic. LCP on the main landing route improved by roughly 40%, driven primarily by eliminating the client-side data-fetch waterfall: content that previously required a JavaScript bundle load plus an API round-trip now arrived in the first HTTP response. CLS was essentially unchanged—the SPA had already handled layout shift reasonably well—but FID improved noticeably on routes where the client JavaScript bundle had been largest, because those routes now shipped significantly less client-side code.

The client bundle reduction compounded over the full migration. Every route that moved to Server Components shed the JavaScript for any component that became a Server Component—no bundle cost, no hydration cost, no client-side event listener setup. On routes with large data tables or complex list views, the entire table moved to the server, and the bundle for that route shrank substantially. Measured across all migrated routes, total JavaScript delivered to the client decreased meaningfully even though the feature set stayed constant.

The developer productivity metric is harder to quantify but easy to argue. Engineers onboarding to a migrated route encounter a clear mental model: data is fetched at the top of the route, passed down to components, and those components are either purely presentational or explicitly interactive. The SPA equivalent had data fetching in several places, state in others, and context providers that were a prerequisite for understanding any individual component. The new routes are easier to reason about because the architecture makes data flow explicit rather than distributed.

What I'd Do Differently

Start with the migration checklist, not the first route. The checklist for this migration was written after the first two routes had already shipped, which meant retroactively applying it to work that was already in production. Starting with the checklist—even a rough version—means the first route establishes the quality bar rather than potentially missing something that later routes are then expected to match by implication.

Invest in automated analytics parity testing early. The analytics parity requirement was the most labor-intensive item on the checklist because the SPA's instrumentation was inconsistent: some events fired from components, some from global route-change listeners, some from utility functions deep in the call tree. Verifying parity required manually tracing every event for every user action on every migrated route. An automated test that captures events during a scripted user session—using Playwright with network request interception—would have caught regressions faster and with significantly less manual effort.
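The comparison half of such a test is simple enough to sketch. Assuming events captured from both versions of a route (for example via Playwright's `page.route()` intercepting calls to the analytics endpoint, which is outside this sketch) carry a `name` field, a parity diff is a set comparison:

```typescript
// Compare analytics events captured from the SPA and Next.js versions of a
// route during the same scripted session. Shapes are assumptions for the
// sketch, not the actual instrumentation schema.
interface AnalyticsEvent {
  name: string;
}

interface ParityReport {
  missing: string[]; // fired by the SPA but absent from the Next.js version
  extra: string[];   // fired by the Next.js version but absent from the SPA
}

function diffEvents(spa: AnalyticsEvent[], next: AnalyticsEvent[]): ParityReport {
  const spaNames = new Set(spa.map((e) => e.name));
  const nextNames = new Set(next.map((e) => e.name));
  return {
    missing: [...spaNames].filter((n) => !nextNames.has(n)).sort(),
    extra: [...nextNames].filter((n) => !spaNames.has(n)).sort(),
  };
}
```

A non-empty `missing` list fails the route's checklist review; `extra` is usually a rename that needs investigating rather than an outright regression.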

Don't defer the hardest routes. The natural tendency in a route-by-route migration is to start with easy routes and leave complex ones for later. This is the wrong order. The complex routes—those with deeply nested client state, non-standard authentication logic, or unusual data fetching patterns—are where the migration is most likely to reveal gaps in your App Router approach. Discovering those gaps on route two means you can adjust your patterns before they propagate across a dozen other routes. Discovering them on route twelve means retroactive refactoring.

Outcomes

  • Improved Core Web Vitals on key landing routes and reduced client bundle size.
  • Simplified developer onboarding with clearer route ownership and co-located data logic.
  • Lowered deployment risk by shipping in small increments with rollback-ready checkpoints.