
SSR Framework Benchmarks v2: What We Got Wrong, and the Real Numbers


TL;DR

We re-ran our SSR framework benchmarks after discovering that compression was not applied consistently across frameworks. In the original tests, TanStack had no compression enabled, while React Router had gzip compression turned on in its Express server.js. Watt, however, bypasses server.js and uses Fastify, so React Router’s Watt runs carried no compression overhead while its Node and PM2 runs did. That made Watt look faster relative to the other runtimes than it actually was.

Once we removed compression from React Router to make the comparison fair, the updated results gave us a clearer picture:

TanStack and React Router remain the top performers, while Next.js continues to struggle at 1,000 requests per second. The main change is that Watt’s advantage now shows up mostly in tail latencies, with p(99) at 83-89ms versus Node’s 121-298ms, rather than in average response times.


What Went Wrong in the Previous Benchmarks

HTTP compression means shrinking the response body before sending it over the wire, usually with gzip or Brotli. That typically reduces bandwidth and speeds up delivery of HTML, JSON, and JavaScript, which is why some frameworks enable it by default as a sensible production optimization. Others leave it off because compression is often handled more efficiently by a CDN or reverse proxy, and because it puts extra load on the CPU.

  • Next.js: Its built-in compress option works at the framework level, not the HTTP server, so it is applied regardless of runtime. We checked and confirmed that Next.js serves gzip responses with both Node and Watt.

  • TanStack Start: Never had compression configured in any runtime. All three runtimes (Node, PM2, Watt) served uncompressed responses. No inconsistency, but it made the comparison between frameworks unfair.

  • React Router: Does not ship a default server, but offers several templates. In the template we followed, compression was enabled in server.js; Watt does not use that file, so its runs had no compression.

The Fix

We disabled compression on React Router by removing the compression() middleware from server.js and uninstalling the package. We also set compress: false in Next.js’s next.config.mjs so that all three frameworks were tested identically. With compression removed everywhere, every framework now serves uncompressed responses in every runtime.
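For Next.js, the change is a one-line config entry (compress is a documented Next.js option; the rest of this next.config.mjs sketch is illustrative):

```javascript
// next.config.mjs — disable Next.js's framework-level gzip so every
// framework serves uncompressed responses in every runtime
const nextConfig = {
  compress: false,
};

export default nextConfig;
```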

In production, it’s best to handle compression at the reverse proxy or CDN layer, not in the application server.


Corrected Results

All tests ran at 1,000 req/s for 3 minutes with mixed e-commerce traffic (homepage, search, card details, game browsing, sellers - you can read more about the sample app we built for these benchmarks here) on AWS EKS. No compression, no Accept-Encoding headers.
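The p(99) notation below suggests the load generator is k6, which we assume here; a load profile matching the stated parameters would look roughly like this (the scenario name and VU pool sizes are illustrative):

```javascript
// k6 options sketch: open-model arrival rate of 1,000 req/s for 3 minutes.
export const options = {
  scenarios: {
    ecommerce_mix: {
      executor: 'constant-arrival-rate',
      rate: 1000,             // iterations per timeUnit
      timeUnit: '1s',
      duration: '3m',
      preAllocatedVUs: 500,   // illustrative pool size
      maxVUs: 2000,           // illustrative ceiling
    },
  },
};
```

A constant-arrival-rate executor keeps sending 1,000 req/s regardless of how slowly the server responds, which is what lets a struggling framework accumulate in-flight requests rather than simply slowing the test down.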

Software Versions

React Router: Consistent Across All Runtimes

React Router can handle 1,000 requests per second with no failures on any of the three runtimes. Watt and PM2 have almost the same median response time at 15ms. The difference shows up at the higher end: Watt’s p(99) is 83ms, PM2’s is 123ms, and Node’s is 298ms.

TanStack Start: Watt and Node Neck-and-Neck

TanStack with Watt and TanStack with Node perform almost the same: they have the same average, median, and p(95) times. Watt is slightly better at p(99), with 89ms compared to Node’s 121ms.

PM2 stands out as the outlier. With an 81% success rate and a 2.5 second average latency, PM2’s cluster fork model does not work well with Nitro’s srvx server. This is a problem between PM2 and Nitro, not TanStack. The same PM2 cluster mode works perfectly with React Router’s Express server, giving 100% success and a 20ms average.

Next.js: Still Struggling at 1,000 req/s

Next.js cannot handle 1,000 requests per second, no matter which runtime is used. All three runtimes have about a 55% success rate, which shows that the framework itself is the bottleneck, not the runtime. The high tail latencies (p(99) over 60 seconds) mean requests are piling up and timing out.
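The pile-up is consistent with Little’s law (L = λ·W): at an arrival rate of 1,000 req/s and the roughly 9-second average latency seen in these runs, thousands of requests are in flight at once. A back-of-the-envelope check:

```javascript
// Little's law: average in-flight requests L = arrival rate λ × average latency W.
const arrivalRate = 1000; // req/s, the benchmark's target load
const avgLatencySec = 9;  // approximate Next.js average from the corrected runs

const inFlight = arrivalRate * avgLatencySec;
console.log(`~${inFlight} requests in flight at steady state`);
```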


What Changed vs the Original Blog Post

Previous React Router Results (with compression inconsistency)

Corrected React Router Results (no compression anywhere)

The average latency numbers are similar because Node’s response time was dominated by SSR work, not compression. Still, the correction matters methodologically: now we can be confident the gap is real rather than an artifact of the setup.

TanStack’s results were always fair. The numbers changed a bit (from 13ms to 18ms) because of normal differences between runs, not because of compression changes.


Key Takeaways

  1. Benchmark Hygiene Matters
    A single middleware inconsistency, like having compression enabled in server.js but skipped by Watt, was enough to make our results questionable. Always make sure your test conditions are the same for every variant, especially when runtimes load applications in different ways.

  2. Watt’s Real Advantage: Tail Latency
    With compression turned off, Watt and Node have similar average and median latencies on both TanStack and React Router. However, Watt always comes out ahead at p(99):

    This is important for services where tail latency matters, like APIs for user-facing pages or services with strict SLAs.
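To see why averages can hide this, here is a small sketch of a nearest-rank percentile calculation: a handful of slow responses barely moves the median but dominates p(99). The sample values are made up for illustration:

```javascript
// 95 fast responses at 15ms, 5 slow outliers at 300ms.
const latencies = [...Array(95).fill(15), ...Array(5).fill(300)];

const percentile = (values, q) => {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.ceil(q * sorted.length) - 1]; // nearest-rank method
};

// The median is untouched by the outliers; p(99) exposes them.
console.log(`p(50)=${percentile(latencies, 0.5)}ms, p(99)=${percentile(latencies, 0.99)}ms`);
```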

  3. PM2 Cluster Mode Has Compatibility Issues
    PM2 works well with Express (React Router: 100% success, 20ms average), but not with Nitro (TanStack: 81% success, 2.5s average). If you use Nitro-based frameworks like TanStack Start or Nuxt, it’s better to avoid PM2 cluster mode and use Watt or plain Node instead.
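For reference, PM2’s cluster mode is typically configured through an ecosystem file like the following sketch (the app name and instance count are illustrative). This is the mode that works with Express but clashed with Nitro’s srvx:

```javascript
// ecosystem.config.cjs — PM2 cluster mode, forking N workers behind
// PM2's built-in load balancer.
module.exports = {
  apps: [
    {
      name: 'ssr-app',      // illustrative
      script: './server.js',
      instances: 4,         // illustrative worker count
      exec_mode: 'cluster',
    },
  ],
};
```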

  4. Next.js at 1,000 req/s: The Runtime Doesn’t Matter
    All three runtimes, Watt, PM2, and Node, perform the same on Next.js at this load, with about 55% success and a 9-second average. The bottleneck is in Next.js’s SSR pipeline, not in how connections are handled.
    Part of Watt’s advantage comes from better handling of CPU-bound work such as compression, so disabling compression narrows that advantage.

  5. TanStack Start and React Router Are Both Excellent
    With compression handled the same way, TanStack (18ms average) and React Router (19ms average on Watt) are very close in performance. Both can handle 1,000 requests per second with 100% success. So, you should choose between them based on developer experience and ecosystem fit, not just performance.


Reproducing These Benchmarks

The complete benchmark infrastructure is available at:
https://github.com/platformatic/k8s-watt-performance-demo/tree/ecommerce

# Benchmark TanStack Start
AWS_PROFILE=<profile-name> FRAMEWORK=tanstack ./benchmark.sh

# Benchmark React Router
AWS_PROFILE=<profile-name> FRAMEWORK=react-router ./benchmark.sh

# Benchmark Next.js
AWS_PROFILE=<profile-name> FRAMEWORK=next ./benchmark.sh

Conclusion

Getting benchmarks right is tough. Even with the same applications, the same infrastructure, and careful methodology, one small inconsistency, like compression middleware in one framework’s server file that is skipped by one runtime but not the others, can make your results questionable.

The corrected results support the main points from our original post: TanStack Start and React Router can easily handle 1,000 requests per second, Next.js struggles at that level, and Watt gives real improvements across all three frameworks, especially for tail latencies. Now, though, we have more accurate numbers and a better idea of where each runtime really helps.

Being open about our methods is important. We made a mistake, fixed it, and are sharing both the error and the fix so others can learn from our experience.

If you’d like to talk about using Watt in your setup or want to learn more, email us at hello@platformatic.dev or reach out to Luca or Matteo on LinkedIn.