Scale Next.js Image Optimization with a Dedicated Platformatic Application

Node.js TSC Member, Principal Engineer at Platformatic, polyglot developer. RPG and LARP addict, and a nerd about plenty more. Surrounded by lovely chubby cats.
Image optimization with Next.js is a popular feature, but one that quietly causes instability, in the form of latency spikes, for your frontend. Image resizing and encoding are CPU- and memory-intensive, and the load peaks exactly when traffic is highest and users expect fast pages. During real launches, 95th-percentile render times often rise from about 600 ms to over 2 seconds when image requests surge, even though the app code stays the same. If image processing shares workers with Server-Side Rendering (SSR), React Server Components (RSC), and API routes, a spike in image requests can slow down everything else, and all of a sudden you’ve got a cascading failure on your hands.
That’s why teams often notice the same pattern during launches and campaigns: /_next/image traffic increases, CPU usage maxes out, render times get longer, and the whole frontend slows down even though the app logic hasn’t changed. In short, image optimization starts to interfere with your most important user flows.
Watt is our open-source Node.js application server that orchestrates frontend frameworks (Next.js, Astro, Remix) and backend services (Node.js, Fastify, Express, Hono, etc.) into a single system, with built-in logging, tracing, and multithreading. It leverages the Linux kernel's SO_REUSEPORT to distribute connections across workers with zero coordination overhead. In our production benchmarks on AWS EKS, Watt delivered 93.6% faster median latency and a 99.8% success rate under a sustained load of 1,000 requests per second. After investigating component rendering, it was only a matter of time before we looked into images.
By moving image optimization into its own Watt Application, you create a clear microservice boundary. The optimizer becomes a focused service in your setup, with an API that only exposes what’s needed for safe and efficient image delivery. This keeps media processing separate from your main frontend. You can then scale image capacity on its own, let rendering workers focus on rendering, and adjust retries, timeouts, and storage for media processing without having to over-provision your whole frontend.
@platformatic/next is the official Platformatic package for running Next.js inside a Watt Application. It’s fully maintained and supported by the Platformatic team, so you get long-term compatibility with Next.js updates, regular security patches, and best-practice defaults for production. Teams can count on ongoing updates and quick fixes, which lowers maintenance risk and avoids the downsides of custom or community-maintained solutions. The package now includes an Image Optimizer mode, letting you run /_next/image as a dedicated Watt Application, scale it separately, and keep your frontend fast even when image traffic increases.
This capability was introduced in PR #4605, and it builds on top of @platformatic/image-optimizer, our dedicated optimization engine. Our image optimizer is built on top of sharp, leveraging @platformatic/job-queue, which adds flexible storage, job deduplication with caching, and producer/consumer decoupling.
If you are self-hosting Next.js and want the same kind of operational separation that mature platforms use internally, this is the missing building block.
In short, you can keep using Next.js as you always have, but with a cleaner architecture that handles high traffic more efficiently.
Why split image optimization from your frontend?
If your frontend handles page rendering, API routes, and image resizing as a single service, any slowdown in one will cascade to the others. That means performance suffers most exactly when traffic peaks: during product launches, campaigns, or social media spikes.
And it goes without saying (although it’s a blog, so yes, we will say it anyway…) that page performance isn’t just a technical issue: even a 100 ms delay can lower conversion rates by up to 7%, making slowdowns expensive during launches and campaigns.
The reason comes down to architecture: resizing and re-encoding images is bursty, CPU-heavy, and often I/O bound, while SSR and API routes usually need lower latency and more consistent resources. Running both in one service means you have to use the same autoscaling and resource pool for two very different types of work.
Splitting these responsibilities and running them as worker threads using Watt eliminates this ‘noisy neighbour’ effect and lets you apply the right scaling strategy to each path: scale optimizer replicas (or threads) when media demand rises, and keep frontend replicas sized for rendering throughput and tail latency.
Running Platformatic’s image optimizer as a dedicated Watt Application gives you:
Independent scaling: add replicas for image workloads without scaling the whole frontend stack.
Operational isolation: image spikes do not starve SSR/RSC rendering.
Centralized controls: enforce width/quality validation, timeout, retry behaviour, and storage in one place.
Flexible queue storage: choose memory, filesystem, or Redis/Valkey depending on your topology.
This setup is especially useful for platform engineering and SRE teams who need predictable performance without over-provisioning the whole frontend. Clear ownership lets these teams align this approach with their KPIs for reliability, scalability, and cost efficiency.
What shipped in Platformatic Next
The new next.imageOptimizer configuration lets you turn on optimizer-only mode in @platformatic/next, so you can run a Watt Application focused just on image processing. In other words: flip one flag and route only /_next/image, making adoption fast and low-friction.
When enabled, the service:
Exposes only the Next.js image endpoint (/_next/image, respecting the base path).
Validates image parameters using Next.js rules.
Resolves relative URLs through a fallback target (URL or runtime service name).
Fetches and optimizes images through a queue-backed pipeline; if the same image is requested by multiple users at the same time, it is processed only once.
Returns optimized image bytes and cache headers.
Under the hood, this relies on @platformatic/image-optimizer, which provides a robust processing pipeline with:
image type detection from magic bytes
optimization for jpeg, png, webp, and avif
animation-aware safeguards
URL fetch + optimize helpers
queue APIs powered by @platformatic/job-queue
The queue can keep its state distributed on Redis/Valkey, so retries, workload distribution, and resilience remain consistent across multiple optimizer replicas.
The main idea is to keep frontend rendering and image optimization separate, while still using the usual Next.js image features.
What this means for teams
Frontend teams keep using next/image as usual, without rewriting application code.
Platform teams get explicit controls for retries, timeout budgets, and queue storage.
Ops teams can scale optimizer replicas independently from the frontend tier.
Product teams get a smoother user experience during peak traffic windows.
The result is a platform that feels (and… is) faster to end users and more controllable to engineering teams. In recent internal benchmarks, shifting image optimization to a dedicated Watt Application reduced 95th-percentile response times during peak traffic by up to 40%, turning previously unpredictable slowdowns into consistently fast delivery even under heavy load.
Choose the right runtime blueprint
The easiest blueprint is a three-application Watt setup:
gateway: Watt’s gateway service, which receives and routes incoming traffic.
frontend: your standard Next.js application
optimizer: @platformatic/next running in Image Optimizer mode
Watt’s Gateway sends only GET /_next/image requests to the optimizer, while everything else goes to the frontend. This gives you a clear separation without needing a complicated network setup.
For relative image URLs (for example /hero.jpg), the optimizer fetches originals from the frontend via runtime service discovery (http://frontend.plt.local). For absolute URLs, it fetches upstream directly.
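For context, the requests the optimizer receives have the URL shape that next/image generates: the url, w, and q query parameters of the /_next/image endpoint. The helper below is purely illustrative, not part of any Platformatic or Next.js package; it just shows which parameter carries the relative or absolute source URL.

```javascript
// Builds a /_next/image URL for a given source, width, and quality,
// mirroring the url/w/q query-parameter convention of Next.js' endpoint.
function imageOptimizerUrl(src, width, quality = 75) {
  const params = new URLSearchParams({
    url: src, // relative ('/hero.jpg') or absolute URL
    w: String(width),
    q: String(quality),
  });
  return `/_next/image?${params.toString()}`;
}

console.log(imageOptimizerUrl('/hero.jpg', 828));
// /_next/image?url=%2Fhero.jpg&w=828&q=75
```

A relative url value like the one above is what triggers the fallback lookup against the frontend service, while an absolute url is fetched upstream directly.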
If you are deploying on Kubernetes, your best bet is to configure your K8s ingress controller to route GET /_next/image to separate pods running the image optimizer. This configuration is supported and documented at https://docs.platformatic.dev/docs/guides/next-image-optimizer#10-kubernetes-ingress-example-nginx-ingress-controller.
How to set this up
Start by creating a Watt workspace with three applications: Gateway, frontend, and optimizer. The frontend remains your existing Next.js app; the optimizer is another @platformatic/next app with next.imageOptimizer.enabled: true; Gateway routes image traffic to the optimizer and everything else to the frontend.
Use this structure as a baseline:
my-runtime/
  watt.json
  web/
    gateway/
      platformatic.json
    frontend/
      platformatic.json
      package.json
      next.config.js
      app/
    optimizer/
      next.config.js
      platformatic.json
      package.json
Then configure it in this order:
1. Enable image optimizer mode in the optimizer Watt Application.
2. Set optimizer.next.imageOptimizer.fallback to frontend so relative image URLs are fetched from http://frontend.plt.local.
3. In Gateway, route only GET /_next/image to optimizer and keep all other routes on frontend.
4. Pick queue storage for your topology: memory for local/dev, filesystem for a single-node persistent disk, Redis/Valkey for distributed replicas.
5. Tune timeout and maxAttempts using your target SLO and expected image profile.
With this setup, app teams can keep using next/image as usual, while platform teams get independent scaling and more control over operations.
Configuration example
In your optimizer application config:
{
  "$schema": "https://schemas.platformatic.dev/@platformatic/next/3.38.1.json",
  "next": {
    "imageOptimizer": {
      "enabled": true,
      "fallback": "frontend",
      "timeout": 30000,
      "maxAttempts": 3,
      "storage": {
        "type": "valkey",
        "url": "redis://localhost:6379",
        "prefix": "next-image:"
      }
    }
  }
}
And in your Gateway config, route only the image endpoint:
{
  "$schema": "https://schemas.platformatic.dev/@platformatic/gateway/3.0.0.json",
  "gateway": {
    "applications": [
      {
        "id": "frontend",
        "proxy": {
          "prefix": "/",
          "routes": ["/*"]
        }
      },
      {
        "id": "optimizer",
        "proxy": {
          "prefix": "/",
          "routes": ["/_next/image"],
          "methods": ["GET"]
        }
      }
    ]
  }
}
Storage choices: what to use and when
memory: local development or simple single-instance setups.
filesystem: single-node deployment with persistent disk.
redis/valkey: distributed production environments with shared queue state.
If you do not specify storage, memory is used by default.
For production multi-instance deployments, Redis/Valkey is usually the best default because it gives shared queue state and predictable behaviour across replicas.
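For reference, the simpler backends follow the same configuration shape as the valkey example earlier. A memory-backed optimizer might look like the sketch below; since memory is the default, the storage block is shown only for clarity.

```json
{
  "next": {
    "imageOptimizer": {
      "enabled": true,
      "storage": { "type": "memory" }
    }
  }
}
```

For a filesystem-backed single node, the storage object would instead declare "type": "filesystem" together with a location on the persistent disk; check the linked Platformatic documentation for the exact option names, as they are not spelled out here.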
Failure handling and reliability
Optimization runs through a queue with explicit timeout and retry controls:
timeout sets the fetch/optimization budget per job.
maxAttempts controls the automatic retry count.
When retries are exhausted, the service returns a 502 Bad Gateway response, keeping failure behaviour explicit, observable, and easier to alert on.
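Conceptually, the failure handling works like the sketch below: each attempt is bounded by the timeout budget, maxAttempts caps the retries, and exhausting them surfaces a 502. This is an illustration of the semantics, assuming per-attempt timeouts; it is not the queue's actual implementation.

```javascript
// Runs a job with a per-attempt timeout and a bounded retry count.
async function runJob(job, { timeout = 30000, maxAttempts = 3 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await new Promise((resolve, reject) => {
        const timer = setTimeout(() => reject(new Error('timeout')), timeout);
        job().then(
          (value) => { clearTimeout(timer); resolve(value); },
          (err) => { clearTimeout(timer); reject(err); }
        );
      });
    } catch (err) {
      if (attempt === maxAttempts) {
        // Retries exhausted: make the failure explicit and observable.
        return { statusCode: 502, error: err.message };
      }
    }
  }
}
```

A job that keeps failing resolves to a 502 payload instead of hanging or retrying forever, which is what makes this failure mode easy to alert on.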
Try it today
If you are self-hosting Next.js and want predictable image performance under load, this capability gives you a practical path that does not require re-architecting your app:
keep your frontend app unchanged,
stand up a dedicated optimizer Watt Application,
route only /_next/image through Watt’s Gateway service,
pick the storage backend that matches your deployment model.
This is a small architectural change with a big benefit: better frontend stability, simpler operations, and image performance that scales when you need it.
If you want to deliver faster and more reliable user experiences as your traffic grows, dedicated image optimization is one of the best upgrades you can make with minimal disruption.