Run Medusa on Kubernetes with Watt as a Monorepo

Node.js TSC Member, Principal Engineer at Platformatic, Polyglot Developer. RPG and LARP addict, and a nerd about a lot more. Surrounded by lovely chubby cats.
Medusa stands out as a flexible open source commerce platform for Node.js. It offers teams a customizable backend, admin tools, and a modern storefront, all without locking you into a strict SaaS model. This makes it ideal for teams who want to move quickly and keep control over their architecture.
Running Medusa in production is more than just starting a single process. The real challenge is keeping the entire commerce stack fast, organized, and easy to update, especially when you have a backend, storefront, admin UI, image optimization, internal networking, and Kubernetes involved.
This is where using a Watt monorepo really helps.
Watt is Platformatic’s tool for combining multiple Node.js apps into one deployable unit by running them as worker threads under a single process.
Medusa can be deployed in a Kubernetes environment, and the Intelligent Command Center (ICC) can manage, monitor, and optimize it there. ICC is a cloud control plane for cloud-native applications running on Kubernetes, with enterprise-grade features for application lifecycle management, intelligent autoscaling, compliance monitoring, and comprehensive observability.
For basic deployment, simply running Watt on Kubernetes is sufficient.
Rather than spreading complexity across multiple repos, custom Dockerfiles, and manual service connections, you can keep everything in one workspace and let Watt manage it as a single platform. This gives you one dependency graph, one build process, one deployment artifact, and a single place to manage the rules that keep your system running smoothly.
In this post, we will look at a working Medusa setup deployed on ICC with:
web/backend: the Medusa backend via @platformatic/node
web/frontend: the Medusa Next.js starter via @platformatic/next
web/gateway: public routing via @platformatic/gateway
image-server: a dedicated @platformatic/next image optimizer application that reuses the same codebase as web/frontend
This setup is both easier to manage and more performant. Let’s explore.
Why a monorepo is a good fit for Medusa
Medusa already pushes you toward a multi-application architecture. Even in a relatively standard deployment, you are dealing with:
a backend API
an admin UI
a storefront
image optimization
environment variables shared across services
public and internal URLs that must stay aligned
You can spread these parts across different repositories and deployment pipelines, but as soon as you do, even simple changes become complicated.
For example, changing a base path means updating several repos. Keeping React versions consistent gets harder. Coordinating Docker changes turns into a big release task. Even figuring out if the storefront is calling the right backend can take more effort than it should.
With Watt, the monorepo becomes the control plane for the whole stack.
Each application stays isolated as a worker thread with Watt.
The whole platform is configured in one place.
Internal service discovery comes for free.
Deployment stays a single build and a single runtime entry point.
This approach gives you the best of both worlds: separation where it matters, and simplicity where you want it.
The workspace layout
The sample project is structured like this:
.
|-- package.json
|-- pnpm-workspace.yaml
|-- watt.json
`-- web
    |-- backend
    |   |-- medusa-config.ts
    |   |-- package.json
    |   |-- url-handler.js
    |   `-- watt.json
    |-- frontend
    |   |-- next.config.js
    |   |-- package.json
    |   |-- watt.image-optimizer.json
    |   |-- watt.json
    |   `-- src
    `-- gateway
        |-- package.json
        `-- watt.json
At the root, watt.json autoloads the web/* applications, sets gateway as the public entrypoint, and adds an extra application called image-server that reuses the frontend codebase with a different config.
This is where the monorepo model really shines. You can easily reuse the same codebase for different runtime roles. There’s no need to create a second Next.js project just to separate /_next/image. Instead, you keep one frontend codebase and let Watt run it in two different ways.
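As a rough sketch, a root watt.json along these lines would express that layout. The key names here are an assumption for illustration and should be checked against the wattpm schema for your Watt version:

```json
{
  "$schema": "https://schemas.platformatic.dev/wattpm/3.44.0.json",
  "entrypoint": "gateway",
  "autoload": {
    "path": "web"
  },
  "web": [
    {
      "id": "image-server",
      "path": "web/frontend",
      "config": "watt.image-optimizer.json"
    }
  ]
}
```

The important idea is visible even in this sketch: the autoloaded web/* applications and the extra image-server role that points back at web/frontend live side by side in one file.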
pnpm workspace setup: one dependency graph, fewer surprises
If you use pnpm, make the workspace explicit with pnpm-workspace.yaml:
packages:
  - web/*
Then pin the React family at the root in package.json:
{
  "pnpm": {
    "overrides": {
      "react": "19.0.4",
      "react-dom": "19.0.4",
      "@types/react": "19.0.4",
      "@types/react-dom": "19.0.4"
    }
  }
}
This is a clear reason why using a monorepo matters. The Medusa storefront, Next.js, and related tools all rely on React. In a multi-repo setup, versions can easily get out of sync. With a Watt monorepo, you set the version once at the root, and every app benefits right away.
This makes building more predictable and keeps maintenance costs much lower.
One .env, clear public and internal boundaries
The root .env needs a few shared values:
REDIS_HOST
MEDUSA_PUBLIC_BACKEND_URL
MEDUSA_BACKEND_URL
The key distinction is this:
MEDUSA_PUBLIC_BACKEND_URL is the externally visible backend URL
MEDUSA_BACKEND_URL is for server-side calls from the frontend
On ICC, this is the ideal setup:
MEDUSA_PUBLIC_BACKEND_URL=https://medusa.plt/backend
MEDUSA_BACKEND_URL=http://backend.plt.local
Why it matters:
browsers and the admin UI use the public backend URL
the frontend server uses http://backend.plt.local and stays on the Platformatic mesh
It’s worth emphasizing that second point, since it provides both great DevEx and a substantial performance boost. Thanks to Watt and inter-thread communication, server-side requests skip the public gateway and stay within the process’s internal network.
Once again, the monorepo helps here. The internal service name and public URL strategy are side-by-side in the same workspace, making them much harder to misconfigure.
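To make the boundary concrete, here is a tiny, hypothetical helper that captures the rule: server-side code uses the internal mesh name, browser code uses the public URL. The function is an illustration, not part of the sample project; the URL values mirror the ICC settings above.

```javascript
// Hypothetical helper illustrating the public/internal URL rule.
// Not part of the sample project; URLs mirror the ICC values above.
function resolveBackendUrl({ isServer, publicUrl, internalUrl }) {
  // Server-side rendering stays on the Watt mesh (no public round-trip);
  // the browser can only reach the public gateway.
  return isServer ? internalUrl : publicUrl
}

const urls = {
  publicUrl: 'https://medusa.plt/backend',
  internalUrl: 'http://backend.plt.local'
}

console.log(resolveBackendUrl({ isServer: true, ...urls }))  // http://backend.plt.local
console.log(resolveBackendUrl({ isServer: false, ...urls })) // https://medusa.plt/backend
```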
Backend: run Medusa as a Watt application
In web/backend/package.json, add @platformatic/node:
{
  "dependencies": {
    "@platformatic/node": "^3.44.0"
  }
}
Then configure web/backend/watt.json:
{
  "$schema": "https://schemas.platformatic.dev/@platformatic/node/3.44.0.json",
  "application": {
    "basePath": "/backend",
    "commands": {
      "development": "npm run dev",
      "build": "npm run build",
      "production": "npm run start"
    },
    "changeDirectoryBeforeExecution": false,
    "entrypointPort": 3000
  },
  "node": {
    "disableBuildInDevelopment": true,
    "dispatchViaHttp": true,
    "absoluteUrl": true
  },
  "watch": false
}
This setup gives Medusa a clear application boundary within the workspace, while still allowing the gateway to publish it under /backend.
The companion change in web/backend/medusa-config.ts is just as important:
import { defineConfig, loadEnv } from '@medusajs/framework/utils'

loadEnv(process.env.NODE_ENV || 'development', process.cwd())

module.exports = defineConfig({
  projectConfig: {
    databaseUrl: process.env.DATABASE_URL,
    http: {
      storeCors: process.env.STORE_CORS!,
      adminCors: process.env.ADMIN_CORS!,
      authCors: process.env.AUTH_CORS!,
      jwtSecret: process.env.JWT_SECRET || 'supersecret',
      cookieSecret: process.env.COOKIE_SECRET || 'supersecret'
    },
    cookieOptions: {
      sameSite: 'lax',
      secure: false
    }
  },
  admin: {
    path: (new URL(process.env.MEDUSA_PUBLIC_BACKEND_URL!).pathname + '/app') as `/${string}`,
    backendUrl: process.env.MEDUSA_PUBLIC_BACKEND_URL,
    vite: config => {
      config.server.allowedHosts ??= []
      config.server.allowedHosts.push('.plt.local')
    }
  }
})
The admin path comes from the public backend URL. So, if ICC publishes the backend at /backend, the admin will automatically be available at /backend/app.
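As a quick sanity check, the derivation can be evaluated in isolation, using the same expression as medusa-config.ts with the ICC value of MEDUSA_PUBLIC_BACKEND_URL:

```javascript
// The expression medusa-config.ts uses to derive the admin path,
// evaluated with the ICC value of MEDUSA_PUBLIC_BACKEND_URL.
const publicBackendUrl = 'https://medusa.plt/backend'
const adminPath = new URL(publicBackendUrl).pathname + '/app'

console.log(adminPath) // "/backend/app"
```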
You should also keep web/backend/url-handler.js in place. Medusa’s API and admin UI do not behave identically when you put them behind a prefixed public path, so Watt’s gateway uses this file to rewrite requests correctly.
The implementation used in the sample project looks like this:
const basePath = process.env.PLT_BASE_PATH ?? ''
const adminPath = new URL(process.env.MEDUSA_PUBLIC_BACKEND_URL).pathname.replace(/\/$/, '')
const adminUiPath = adminPath + '/app'
const adminMatcher = new RegExp(`^${adminPath}`)

export default {
  preRewrite(url) {
    if (basePath && !url.startsWith(basePath)) {
      url = `${basePath}${url}`
    }

    url = url.startsWith(adminUiPath) ? url : url.replace(adminMatcher, '')

    return url
  }
}
This file may be small, but it does important work. It keeps the admin UI path intact while removing the backend prefix for API routes that Medusa expects to serve from the root.
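To see what that means in practice, here is the same logic as a standalone sketch, evaluated with sample URLs and assuming MEDUSA_PUBLIC_BACKEND_URL=https://medusa.plt/backend and an empty PLT_BASE_PATH:

```javascript
// Standalone sketch of the preRewrite logic above, with the base path empty.
const adminPath = new URL('https://medusa.plt/backend').pathname.replace(/\/$/, '')
const adminUiPath = adminPath + '/app'
const adminMatcher = new RegExp(`^${adminPath}`)

function preRewrite(url) {
  // Admin UI URLs keep their prefix; API URLs have it stripped so Medusa
  // can serve them from the root.
  return url.startsWith(adminUiPath) ? url : url.replace(adminMatcher, '')
}

console.log(preRewrite('/backend/store/products')) // "/store/products"
console.log(preRewrite('/backend/app/login'))      // "/backend/app/login"
```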
Frontend: one codebase, two runtime roles
In web/frontend/package.json, add @platformatic/next:
{
  "dependencies": {
    "@platformatic/next": "^3.44.0"
  }
}
The standard frontend config in web/frontend/watt.json is simple:
{
  "$schema": "https://schemas.platformatic.dev/@platformatic/next/3.44.0.json",
  "application": {
    "basePath": "{PLT_BASE_PATH}",
    "changeDirectoryBeforeExecution": true
  },
  "next": {
    "trailingSlash": true
  }
}
And in web/frontend/next.config.js, set:
const nextConfig = {
  reactStrictMode: true,
  logging: {
    fetches: {
      fullUrl: true
    }
  },
  eslint: {
    ignoreDuringBuilds: true
  },
  typescript: {
    ignoreBuildErrors: true
  }
}

module.exports = nextConfig
Here’s where it gets interesting: the monorepo lets you reuse the same frontend codebase as a dedicated image optimization service, with almost no extra work.
Split image optimization without splitting the repo
We recently covered why this architecture matters in our post on scaling Next.js image optimization with a dedicated Platformatic application: image optimization is CPU-heavy and can become a noisy neighbour for SSR traffic.
That is exactly why this Medusa setup runs /_next/image separately.
Create web/frontend/watt.image-optimizer.json:
{
  "$schema": "https://schemas.platformatic.dev/@platformatic/next/3.44.0.json",
  "logger": {
    "level": "trace"
  },
  "application": {
    "basePath": "/",
    "changeDirectoryBeforeExecution": true
  },
  "next": {
    "trailingSlash": true,
    "imageOptimizer": {
      "enabled": true,
      "fallback": "frontend",
      "timeout": 30000,
      "ttl": 3600000,
      "maxAttempts": 3,
      "storage": {
        "type": "valkey",
        "url": "{REDIS_HOST}"
      }
    }
  }
}
This is a great example of why Watt monorepos work so well.
You reuse the same frontend app.
You keep one source tree.
You give it a second runtime role.
You isolate a CPU-heavy path without creating a second frontend project.
This setup improves both maintainability and performance, which is exactly what you want from your platform architecture.
The fallback: "frontend" setting is especially nice here: relative image URLs are resolved through the main storefront service over the runtime network, so the optimizer stays tightly integrated without being coupled to the frontend worker pool.
Next.js build-time pragmatism: force dynamic where it helps
Because the Medusa backend is not available during the wattpm build, the storefront cannot pre-generate some pages safely.
For these files:
web/frontend/src/app/[countryCode]/(main)/products/[handle]/page.tsx
web/frontend/src/app/[countryCode]/(main)/categories/[...category]/page.tsx
web/frontend/src/app/[countryCode]/(main)/collections/[handle]/page.tsx
comment out generateStaticParams and add:
export const dynamic = 'force-dynamic'
This uses Next.js Route Segment Config to force runtime rendering instead of static generation.
In a typical Next.js app, this might seem like a compromise. But in this setup, it’s the right choice. The storefront relies on live Medusa data, and Watt provides that backend at runtime.
This is another area where the monorepo helps. The build behaviour is clear because the backend and frontend are in the same workspace, and their dependencies are easy to see.
Gateway: one public surface for the whole stack
Add @platformatic/gateway in web/gateway/package.json:
{
  "dependencies": {
    "@platformatic/gateway": "^3.44.0"
  }
}
Then define web/gateway/watt.json like this:
{
  "$schema": "https://schemas.platformatic.dev/@platformatic/gateway/3.44.0.json",
  "gateway": {
    "applications": [
      {
        "id": "backend",
        "proxy": {
          "prefix": "/backend",
          "custom": {
            "path": "../backend/url-handler.js"
          }
        }
      },
      {
        "id": "frontend",
        "proxy": {
          "prefix": "/"
        }
      },
      {
        "id": "image-server",
        "proxy": {
          "prefix": "/",
          "routes": ["/_next/image", "/_next/image/*"],
          "methods": ["GET"]
        }
      }
    ]
  }
}
This is where the monorepo approach really starts to feel smooth and efficient.
/backend goes to Medusa
/ goes to the storefront
GET /_next/image goes to the image optimizer
Thanks to @platformatic/gateway, you get one public entry point, but the traffic still lands on the right internal application.
This setup is easier to understand, change, and scale than trying to connect separate services outside the repo.
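Those routing rules can be read as a simple dispatch function. This is an illustration of the config above, not how @platformatic/gateway is actually implemented:

```javascript
// Illustrative dispatch mirroring the three proxy rules in web/gateway/watt.json.
// This is NOT the gateway's internal implementation.
function route(method, path) {
  // Most specific rule first: GET /_next/image goes to the dedicated optimizer.
  if (method === 'GET' && (path === '/_next/image' || path.startsWith('/_next/image/'))) {
    return 'image-server'
  }
  if (path.startsWith('/backend')) {
    return 'backend'
  }
  return 'frontend'
}

console.log(route('GET', '/_next/image'))            // "image-server"
console.log(route('GET', '/backend/store/products')) // "backend"
console.log(route('GET', '/'))                       // "frontend"
```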
A small middleware detail that improves the experience
There is another subtle optimization in the storefront middleware (web/frontend/src/middleware.ts).
When the request already contains a country code in the URL but does not yet have the _medusa_cache_id cookie, the middleware sets that cookie and returns NextResponse.next() instead of forcing another redirect.
It’s a small detail, but it’s the kind of optimization that’s easier to maintain in a monorepo. Storefront routing, Medusa region lookups, and platform-level caching thanks to Watt HTTP caching handling are all managed together.
In practice, this helps the storefront set up its region-aware state smoothly, without extra steps.
The change is small enough to think of as a focused patch:
if (urlHasCountryCode && !cacheIdCookie) {
  const response = NextResponse.next()
  response.cookies.set('_medusa_cache_id', cacheId, {
    maxAge: 60 * 60 * 24
  })
  return response
}
This is the kind of practical improvement that’s easier to maintain when routing logic, storefront behaviour, and platform deployment are all in the same repo.
ICC environment values
In .env.icc, the main settings to align are:
MEDUSA_PUBLIC_BACKEND_URL=https://medusa.plt/backend
STORE_CORS=https://docs.medusajs.com,https://medusa.plt
ADMIN_CORS=https://docs.medusajs.com,https://medusa.plt
AUTH_CORS=https://docs.medusajs.com,https://medusa.plt
NEXT_PUBLIC_BASE_URL=https://medusa.plt
They all reflect the same core rule: the whole application is published under /medusa, so both Medusa and Next.js need to agree on that public shape.
Since these settings are in one workspace and one deployment artifact, keeping them in sync is much easier than with a split-repo setup.
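One hypothetical way to guard that alignment is a tiny origin check. The function is a sketch, not part of the sample project; the values mirror .env.icc above:

```javascript
// Hypothetical sanity check: the public backend URL and the storefront base
// URL must share an origin so Medusa and Next.js agree on the public shape.
function sameOrigin(a, b) {
  return new URL(a).origin === new URL(b).origin
}

console.log(sameOrigin('https://medusa.plt/backend', 'https://medusa.plt')) // true
```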
The Docker build is simple because the repo is simple
The container image is straightforward:
FROM node:22-alpine
# Environment setup
ENV APP_HOME=/home/app/node/
ENV PLT_BASE_PATH="/medusa"
ENV PLT_ICC_URL="http://icc.platformatic.svc.cluster.local"
WORKDIR $APP_HOME
# Install dependencies
RUN npm install -g pnpm wattpm-utils "@platformatic/watt-extra@latest"
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml $APP_HOME
RUN pnpm install --frozen-lockfile --node-linker=hoisted
# Copy application
COPY web $APP_HOME/web
COPY .env.icc watt.json $APP_HOME
RUN mv .env.icc .env
RUN pnpm run build
# Final setup
EXPOSE 3042
EXPOSE 9090
CMD ["watt-extra", "start"]
There are two details worth mentioning.
First, using --node-linker=hoisted with pnpm installs dependencies in a flatter layout, instead of the usual symlink-heavy structure. In a workspace with Medusa, Next.js, shared React versions, and several Watt apps, this makes module resolution more predictable and helps avoid compatibility issues during container builds.
Second, @platformatic/watt-extra is a helper CLI that starts Watt smoothly in container environments like ICC. It adds the operational support you need at runtime, so your container entrypoint remains simple.
This is another area where the monorepo pays off right away: you have one install step, one build step, and one runtime command.
Why this feels better to maintain
The main advantage of this Medusa setup isn’t any single config file. It’s the overall structure:
One repo for backend, frontend, gateway, and optimizer
One dependency strategy
One place to define public and internal URLs
One deployment artifact for Kubernetes and ICC
One runtime that still preserves application boundaries
Since Watt sees the platform as a group of coordinated apps, you can make performance improvements without making the system harder to manage.
You can send image optimization to a dedicated service, keep frontend-to-backend calls on the mesh network, mount everything under a base path, and update all these rules in one place.
That’s the real value of running Medusa in a Watt monorepo on ICC: convenience and performance work together, instead of getting in each other’s way. And because ICC is Kubernetes-native, the monorepo and its services also inherit Kubernetes’s scalability, resilience, and orchestration capabilities without any extra wiring.
If you’re building commerce systems with lots of moving parts, this is the kind of platform setup you want.