
Watt Now Supports TanStack Start


TL;DR

Watt 3.32 introduces first-class support for TanStack Start, the full-stack React framework from the creators of TanStack Query and TanStack Router. We benchmarked TanStack Start on AWS EKS under extreme load (10,000 req/s) and found that Watt matches single-process Node.js throughput while improving tail latency by roughly 10%.

Both configurations were tested under identical conditions at a 10,000 req/s target load. The following section details the full methodology and raw data.


We’re excited to announce that Watt 3.32 adds native support for TanStack Start, bringing the same performance benefits that Next.js users have enjoyed to this rapidly growing full-stack React framework.

What is TanStack Start?

TanStack Start is a modern full-stack React framework built on top of TanStack Router, Vinxi, and Nitro. It offers:

  • Type-safe routing with first-class TypeScript support

  • Server functions for seamless client-server communication

  • SSR and streaming out of the box

  • File-based routing with nested layouts

  • Built-in data loading patterns from the TanStack Query team

For teams already using TanStack Query and TanStack Router, TanStack Start is a natural progression to full-stack development with familiar patterns and an excellent developer experience.

Why Watt for TanStack Start?

Like Next.js, TanStack Start uses server-side rendering (SSR), which is CPU-bound and poses familiar scaling challenges:

  1. Single-core execution: Node.js runs on a single CPU core by default, underutilizing multi-core servers.

  2. Late load assessment: SSR frameworks must process the full request context before they can gauge load, so excess requests cannot be rejected cheaply up front.

  3. Event loop blocking: CPU-intensive rendering blocks the event loop, causing latency spikes for every concurrent request.
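The third challenge is easy to demonstrate in isolation. In this small sketch (illustrative only, not from the benchmark suite), 100 ms of synchronous "render" work delays a timer that was due immediately, which is how a single slow SSR pass inflates latency for every other request on the same process:

```javascript
// Simulate a CPU-bound SSR pass: a synchronous busy loop that
// monopolizes the event loop.
function busyRender(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) { /* spin */ }
}

let delay = 0;
const t0 = Date.now();

// Due "immediately", but it cannot fire until the sync work finishes.
setTimeout(() => {
  delay = Date.now() - t0;
  console.log(`timer fired after ${delay} ms`); // ~100 ms, not ~0
}, 0);

busyRender(100);
```

Every concurrent request queued behind that busy loop pays the same penalty, which is why spreading work across processes matters for SSR.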

Watt addresses these challenges with SO_REUSEPORT, which distributes incoming connections across workers at the kernel level and removes IPC overhead. The benchmarks below validate this approach.

Benchmark Methodology

Infrastructure

All benchmarks ran on AWS EKS (Elastic Kubernetes Service) with the following infrastructure:

  • EKS Cluster: 4 nodes running m5.2xlarge instances (8 vCPUs, 32GB RAM each)

  • Region: us-west-2

  • Load Testing Instance: c7gn.2xlarge (8 vCPUs, 16GB RAM, network-optimized)

  • Load Testing Tool: Grafana k6

The environment was ephemeral, created on demand via shell scripts and the AWS CLI, then torn down after each test run.

Software Versions

Resource Allocation

Each configuration received identical total CPU resources:

Pods were distributed evenly across all 4 cluster nodes using topologySpreadConstraints.

Load Test Configuration

We tested under extreme load to stress-test both configurations:

import http from 'k6/http';

export const options = {
  scenarios: {
    ramping_load: {
      executor: 'ramping-arrival-rate',
      startRate: 100,
      timeUnit: '1s',
      preAllocatedVUs: 1000,
      maxVUs: 10000,
      stages: [
        { duration: '20s', target: 2000 },   // Ramp to 2,000 req/s
        { duration: '20s', target: 5000 },   // Ramp to 5,000 req/s
        { duration: '20s', target: 8000 },   // Ramp to 8,000 req/s
        { duration: '20s', target: 10000 },  // Ramp to 10,000 req/s
        { duration: '100s', target: 10000 }, // Hold at 10,000 req/s
      ],
    },
  },
};

// Each iteration issues a single GET against the app under test.
// BASE_URL is an illustrative environment variable, not part of the
// original script.
export default function () {
  http.get(`${__ENV.BASE_URL}/`);
}

This configuration ramps up to 10,000 requests per second and holds for 100 seconds, deliberately exceeding the capacity of both configurations to observe behavior under stress.
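As a quick sanity check, the stage durations in the script sum to the 180-second total used by the test protocol. A small illustrative calculation (not part of the k6 script):

```javascript
// Illustrative check: sum the k6 stage durations to confirm the total
// test length (80 s of ramp plus 100 s of hold).
const stages = [
  { duration: '20s', target: 2000 },
  { duration: '20s', target: 5000 },
  { duration: '20s', target: 8000 },
  { duration: '20s', target: 10000 },
  { duration: '100s', target: 10000 },
];
const totalSeconds = stages.reduce((sum, s) => sum + parseInt(s.duration, 10), 0);
console.log(totalSeconds); // 180
```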

Test Protocol

  1. NLB Warm-up Phase: All endpoints received a 60-second warm-up (ramping from 10 to 500 req/s) to ensure AWS Network Load Balancers were properly scaled

  2. Pre-test Warm-up: Each runtime received a 20-second warm-up before its test

  3. Test Execution: 180 seconds total (80s ramp + 100s hold at 10k req/s)

  4. Cooldown: 480 seconds between each test to allow system recovery

Results

Performance Summary

Latency (Successful Requests Only)

Key Observations

1. Equivalent Throughput Under Extreme Load

Both Watt and single-process Node.js achieved nearly identical throughput (~5,958 req/s) under the 10,000 req/s target load, indicating that Watt’s multi-worker architecture adds no measurable throughput overhead compared to running Node.js directly.

2. Better Tail Latency with Watt

While average latencies were equivalent, Watt showed measurably better tail latency:

  • p99: 263ms (Watt) vs 289ms (Node.js) - 9% improvement

  • p95: 221ms (Watt) vs 250ms (Node.js) - 12% improvement

  • p90: 196ms (Watt) vs 216ms (Node.js) - 9% improvement

This improvement comes from SO_REUSEPORT’s kernel-level load distribution, which prevents request pileup on any single worker.
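For reference, the percentages above follow directly from the reported latencies. This quick check is illustrative and not part of the benchmark code:

```javascript
// Reported tail latencies in ms (successful requests only).
const watt = { p90: 196, p95: 221, p99: 263 };
const node = { p90: 216, p95: 250, p99: 289 };

// Relative improvement of Watt over Node.js, rounded to whole percent.
const improvement = (p) => Math.round((1 - watt[p] / node[p]) * 100);

console.log(improvement('p99'), improvement('p95'), improvement('p90')); // 9 12 9
```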

3. Slightly Higher Success Rate

Watt achieved a 79.3% success rate compared to Node.js’s 78.6% - a small but consistent improvement under stress. Both configurations were pushed well beyond their sustainable capacity (the target was 10k req/s, but actual throughput was ~6k req/s), so the high failure rates are expected.

4. Test Was Deliberately Extreme

The 20%+ failure rate across both configurations indicates we successfully stress-tested beyond capacity. Under normal production loads (staying within throughput limits), both configurations would achieve near-100% success rates, as demonstrated in our Next.js benchmarks at 1,000 req/s.

Getting Started with TanStack Start on Watt

Adding Watt support to your TanStack Start application requires minimal configuration:

1. Install Dependencies

npm install wattpm @platformatic/tanstack

2. Create watt.json

{
  "$schema": "https://schemas.platformatic.dev/@platformatic/tanstack/3.32.0.json",
  "application": {
    "outputDirectory": ".output"
  },
  "runtime": {
    "logger": {
      "level": "info"
    },
    "server": {
      "hostname": "0.0.0.0",
      "port": 3000
    },
    "workers": {
      "static": 2
    }
  }
}

3. Update package.json Scripts

{
  "scripts": {
    "build": "vite build",
    "build:watt": "NODE_ENV=production wattpm build",
    "start:watt": "wattpm start"
  }
}

4. Build and Run

npm run build:watt

npm run start:watt

That’s it. Watt will automatically detect your TanStack Start application and configure the appropriate build and runtime settings.

Kubernetes Deployment

For Kubernetes deployments, the same principles from our Next.js guide apply. Here’s a sample deployment configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tanstack-watt
spec:
  replicas: 4
  selector:
    matchLabels:
      app: tanstack-watt
  template:
    metadata:
      labels:
        app: tanstack-watt
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: tanstack-watt
      containers:
        - name: tanstack-watt
          image: your-registry/tanstack-app:latest
          env:
            - name: WORKERS
              value: "2"
          resources:
            requests:
              cpu: '2000m'
              memory: '4Gi'
            limits:
              cpu: '2000m'
              memory: '4Gi'
          ports:
            - containerPort: 3000

Key points:

  • Use topologySpreadConstraints to distribute pods evenly across nodes.

  • Set WORKERS to match your CPU allocation (2 workers for 2 CPUs).

  • Watt’s health monitoring will automatically restart unhealthy workers without terminating the pod.

Conclusion

Watt 3.32 brings the same performance benefits to TanStack Start that Next.js users have enjoyed: kernel-level load distribution via SO_REUSEPORT, zero-overhead multi-worker scaling, and external health monitoring to improve throughput and tail latency.

Our benchmarks show that under extreme load (10,000 req/s), Watt matches Node.js throughput while delivering measurably better tail latency (p99 improved by 9%, p95 by 12%). In production deployments constrained by capacity, both approaches achieve near-complete reliability.

If you’re building with TanStack Start and deploying to Kubernetes or any multi-core environment, Watt provides a straightforward path to better resource utilization and improved tail latency with minimal configuration changes.

The complete benchmark code is available at: https://github.com/platformatic/k8s-watt-performance-demo.

To get started with Watt, visit: https://docs.platformatic.dev.

For questions or enterprise support, reach out to info@platformatic.dev or connect with us on Discord.