Bring Python ASGI to Your Node.js Applications

Today, we are excited to ship @platformatic/python, a new capability for Watt, the Application Server for Node.js, that lets you run Python ASGI applications alongside your existing Node.js workloads. Powered by the lightweight @platformatic/python-node runtime, this release simplifies mixing Python's rich ecosystem with Platformatic's developer experience.
What is @platformatic/python-node?
@platformatic/python-node is a Node.js-native bridge that embeds a Python interpreter and speaks ASGI. It runs a Python app directly inside your Node.js process so you can dispatch requests to it without spawning new processes or going to the network.
But… Why?
Some of you might be wondering, even right now, “Python? Inside my Node.js process? Why would I ever want to do that?” And you know what? That’s a valid question.
First, let’s review how microservices running in containers communicate today. Every time “Microservice A” needs to interact with “Microservice B,” it makes a network call. Simple enough, right? No big deal.
That actually depends. These network calls can be one of the biggest performance bottlenecks in your microservice architecture, and you know what’s faster than reaching out over the network?
Communication within a single process. Threads in the same process communicate through shared memory, and it's not just a little faster - it's much faster (and much more stable, especially at scale; see https://github.com/rozzilla/watt-poc).
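To make the difference concrete, here is an illustrative micro-benchmark (not from the Platformatic codebase) comparing a localhost HTTP round trip against a plain in-process function call, using only the Python standard library:

```python
# Illustrative only: measure localhost HTTP round trips vs. direct in-process calls.
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def handler_logic():
    # The "work" both paths perform: serialize a tiny JSON payload.
    return json.dumps({"ok": True}).encode()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = handler_logic()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Start a throwaway HTTP server on a random free port.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

N = 200
start = time.perf_counter()
for _ in range(N):
    urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
network_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(N):
    handler_logic()
in_process_time = time.perf_counter() - start
server.shutdown()

print(f"network: {network_time:.4f}s, in-process: {in_process_time:.4f}s")
```

Even on loopback, with no real network involved, the in-process path wins by orders of magnitude because it skips sockets, HTTP parsing, and scheduling entirely.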
Now, what are some scenarios where you might want a Python service to handle “some task” (ML, AI, whatever), and it needs to communicate with a JavaScript service handling “another task” (rendering a user interface, etc.), especially when speed and stability are critical?
Consider real-time ML-powered fraud detection, AI-based customer service chatbots with no lag, or content personalization at scale for complex web experiences - integrating Python within your Node process opens up new ways to architect AI and ML-powered applications.
Now, let’s take a look under the hood…
How Does it Work?
When you create a Python instance, the module boots the embedded runtime, watches your docroot, and exposes a small API for translating requests and responses. Each request is wrapped in a Request object, forwarded across the bridge, and handled by the ASGI app you nominate via appTarget. Responses come back as familiar status, headers, and body objects that plug neatly into Express, Fastify, plain Node.js handlers, or any other JavaScript glue you prefer.
Request Flow Architecture
The request handling flow is entirely in-process with zero network overhead:
1. JavaScript Entry Point: When a request arrives at your Node.js application, it's captured as a standard HTTP request with method, URL, headers, and body.
2. Rust Bridge Layer: The request data crosses into Rust-land through N-API bindings. Here, the `http-handler` crate takes over, converting JavaScript objects into efficient Rust representations. This layer handles header normalization, body streaming setup, and prepares the ASGI scope dictionary that Python expects.
3. Python Thread Pool: The Rust layer dispatches the request into one of the pre-warmed Python worker threads managed by the embedded interpreter. No process spawning, no socket connections - the request data is handed directly to Python through the C API.
4. ASGI Application: Inside the Python thread, your ASGI application receives the familiar `scope`, `receive`, and `send` callables. It processes the request using whatever framework you've chosen (FastAPI, Starlette, Django, etc.) and emits response events through the ASGI protocol.
5. Response Return Path: Response data flows back through the same bridge - Python emits ASGI events, Rust collects them into HTTP response primitives, and the N-API layer surfaces them as JavaScript objects that your Node.js application can send to the client.

This architecture means no localhost requests, no Unix sockets, no serialization to HTTP/1.1 text—just direct memory-sharing between the JavaScript event loop and Python worker threads, coordinated by Rust's type-safe bridge code.
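To illustrate what the bridge hands to Python, here is a hypothetical sketch (the real work happens in the Rust `http-handler` crate, not in Python) of assembling an ASGI HTTP scope dictionary from request parts:

```python
# Hypothetical sketch of the ASGI "scope" dictionary a bridge prepares for
# an HTTP request. The real http-handler crate builds the equivalent in Rust.
def build_scope(method, raw_path, headers, query=b""):
    return {
        "type": "http",
        "asgi": {"version": "3.0", "spec_version": "2.3"},
        "http_version": "1.1",
        "method": method.upper(),
        "path": raw_path,
        "raw_path": raw_path.encode(),
        "query_string": query,
        # ASGI requires headers as an iterable of (name, value) byte pairs,
        # with names lower-cased.
        "headers": [(k.lower().encode(), v.encode()) for k, v in headers],
    }

scope = build_scope("get", "/users", [("Host", "localhost"), ("Accept", "application/json")])
print(scope["method"], scope["headers"][0])
```

Because this dictionary is built in memory and handed straight to the interpreter, there is nothing to re-parse on the Python side.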
The http-handler Foundation
At the core of @platformatic/python-node lies the http-handler crate—a specialized Rust library that provides HTTP request/response primitives optimized for cross-language communication. This crate handles the low-level details of HTTP message format handling, ensuring that requests flowing from Node.js to Python maintain their integrity and performance characteristics.
The http-handler crate is the foundation layer that enables seamless HTTP protocol translation between JavaScript objects in Node.js and the necessary data to produce Python ASGI events. It provides type-safe, zero-copy serialization where possible, minimizing the overhead typically associated with cross-language HTTP proxying.
It is an evolution of the lang_handler crate used in php-node, reworked to build on the Rust ecosystem's standard http crate at its core.
Seamless Python Library Integration
One of the technical challenges when embedding Python in native applications is ensuring that dynamically linked Python libraries load correctly across different system configurations. The fix-python-soname utility in python-node addresses this by performing post-install binary patching on Linux and macOS systems.
This process automatically detects your system's Python installation and updates the compiled .node binary library paths to reference the correct libpython.so or Python.framework. Using a WebAssembly-based ELF and Mach-O patcher, the utility rewrites the shared object name (soname) references to match your Python version, ensuring the bindings can locate the correct Python library at runtime.
This behind-the-scenes library path rewriting means you can install @platformatic/python-node on any system with Python 3.8+ and immediately start using it without a manual build or additional configuration.
Key Features
- Warm Python runtime that stays alive for the life of your Node.js worker.
- ASGI request/response translation handled for you - no hand-rolling protocol glue.
- Docroot-aware loader that mirrors Python’s expectations for module discovery.
- Structured logging so stdout/stderr and Python exceptions surface through your Node.js logs.
Use Cases
Integrate AI/ML directly with your Frontend: Embed LLM inference, image processing, or recommendation engines directly in your Node.js app. Use PyTorch, TensorFlow, or LangChain for the heavy lifting while Node.js handles auth, WebSockets, and user sessions. The in-process architecture delivers microsecond latency—critical for real-time AI or ML applications, such as fraud detection and content personalization.
Real-time Data Science Workflows: Combine Python's data processing power (Pandas, NumPy, Matplotlib) with Node.js's frontend capabilities. Build analytics dashboards, generate reports, or run ETL pipelines without the overhead of separate services.
Gradual Migration: Already have a FastAPI or Django app? Mount it in Node.js and incrementally migrate endpoints while sharing deployment pipelines, configuration, and observability tools across both languages.
Getting Started
Let's build a complete FastAPI application with data validation and then show how to run it directly with both @platformatic/python-node and the @platformatic/python Watt capability.
Building a FastAPI Application
First, create a Python virtual environment and install dependencies:
```sh
# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install fastapi pydantic uvicorn
```
Create a `requirements.txt` file to lock your dependencies:
```
fastapi==0.104.1
pydantic==2.5.0
uvicorn==0.24.0
```
Now create your FastAPI application at `public/api.py`:
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field
from typing import Optional, List
from datetime import datetime
import uuid

app = FastAPI(title="User Management API", version="1.0.0")

# Pydantic models for request/response validation
class UserCreate(BaseModel):
    name: str = Field(..., min_length=1, max_length=100)
    email: str = Field(..., pattern=r'^[^@]+@[^@]+\.[^@]+$')
    age: Optional[int] = Field(None, ge=0, le=150)

class User(BaseModel):
    id: str
    name: str
    email: str
    age: Optional[int]
    created_at: datetime

class UserUpdate(BaseModel):
    name: Optional[str] = Field(None, min_length=1, max_length=100)
    email: Optional[str] = Field(None, pattern=r'^[^@]+@[^@]+\.[^@]+$')
    age: Optional[int] = Field(None, ge=0, le=150)

# In-memory storage (use a real database in production)
users_db: List[User] = []

@app.get("/")
async def root():
    return {"message": "Welcome to the User Management API"}

@app.post("/users", response_model=User)
async def create_user(user_data: UserCreate):
    new_user = User(
        id=str(uuid.uuid4()),
        name=user_data.name,
        email=user_data.email,
        age=user_data.age,
        created_at=datetime.now()
    )
    users_db.append(new_user)
    return new_user

@app.get("/users", response_model=List[User])
async def get_users():
    return users_db

@app.get("/users/{user_id}", response_model=User)
async def get_user(user_id: str):
    user = next((u for u in users_db if u.id == user_id), None)
    if not user:
        raise HTTPException(status_code=404, detail="User not found")
    return user

@app.put("/users/{user_id}", response_model=User)
async def update_user(user_id: str, user_data: UserUpdate):
    user = next((u for u in users_db if u.id == user_id), None)
    if not user:
        raise HTTPException(status_code=404, detail="User not found")
    if user_data.name is not None:
        user.name = user_data.name
    if user_data.email is not None:
        user.email = user_data.email
    if user_data.age is not None:
        user.age = user_data.age
    return user

@app.delete("/users/{user_id}")
async def delete_user(user_id: str):
    global users_db
    users_db = [u for u in users_db if u.id != user_id]
    return {"message": "User deleted successfully"}
```

Note: Pydantic v2 (pinned in `requirements.txt`) uses `pattern=` for regex constraints on `Field`; the old v1 keyword `regex=` raises an error.
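As a quick sanity check, the email expression used in the `Field` constraints above behaves as you'd expect with the standard `re` module:

```python
import re

# The same email expression used in the Field constraints above.
EMAIL_RE = re.compile(r'^[^@]+@[^@]+\.[^@]+$')

assert EMAIL_RE.match("ada@example.com")       # accepted
assert not EMAIL_RE.match("not-an-email")      # no @ sign: rejected
assert not EMAIL_RE.match("user@nodotdomain") # no dot after @: rejected
print("pattern behaves as expected")
```

It is intentionally loose (it only requires an `@` and a dot in the domain part); swap in a stricter validator such as Pydantic's `EmailStr` if you need full RFC-style checking.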
Your project structure should now look like:
```
my-python-app/
├── public/
│   └── api.py
├── requirements.txt
└── venv/
```
Running with @platformatic/python-node Directly
Install the runtime and add it to any Node.js project:
```sh
npm install @platformatic/python-node
```
Embed the FastAPI application directly inside a Fastify route:
```js
import Fastify from 'fastify'
import { resolve } from 'node:path'
import { Python, Request } from '@platformatic/python-node'

const fastify = Fastify()

const python = new Python({
  docroot: resolve(import.meta.dirname, 'public'),
  appTarget: 'api:app' // Points to our FastAPI app
})

// Route all requests to Python
fastify.all('/*', async (req, reply) => {
  const headers = Object.fromEntries(
    Object.entries(req.headers).map(([key, value]) => [
      key,
      Array.isArray(value) ? value : [String(value)]
    ])
  )

  const pythonRequest = new Request({
    method: req.method,
    url: new URL(req.raw.url, `http://${req.headers.host ?? 'localhost'}`).toString(),
    headers,
    body: req.body
  })

  const pythonResponse = await python.handleRequest(pythonRequest)

  reply.status(pythonResponse.status)
  for (const [key, value] of pythonResponse.headers.entries()) {
    reply.header(key, value)
  }
  reply.send(pythonResponse.body)
})

await fastify.listen({ port: 3042 })
console.log('Listening on http://localhost:3042')
```
This FastAPI application is now running inside your Node.js process with full request validation, automatic OpenAPI documentation at /docs, and all the FastAPI features you expect.
Integration with Watt: @platformatic/python
Prefer Platformatic to manage routing, static assets, and configuration for you? The new stackable wraps @platformatic/python-node so you can scaffold a full service instantly.
If you’re running on Watt, bootstrap a service that already wires in the stackable:
```sh
npx wattpm@latest create --module=@platformatic/python
```
Replace the generated `public/main.py` with the FastAPI application we created earlier, then update your `platformatic.json` configuration:
```json
{
  "$schema": "https://schemas.platformatic.dev/@platformatic/python/0.7.0.json",
  "module": "@platformatic/python",
  "python": {
    "docroot": "public",
    "appTarget": "main:app"
  },
  "server": {
    "hostname": "{PLT_SERVER_HOSTNAME}",
    "port": "{PORT}",
    "logger": { "level": "{PLT_SERVER_LOGGER_LEVEL}" }
  },
  "watch": true
}
```
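The `{PLT_…}` placeholders in the configuration above are resolved from environment variables. A hypothetical `.env` file for local development (the variable names come from the config; the values here are just examples) might look like:

```ini
# Example local values for the placeholders in platformatic.json
PLT_SERVER_HOSTNAME=127.0.0.1
PORT=3042
PLT_SERVER_LOGGER_LEVEL=info
```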
Start the service:
```sh
npm start
```
Your FastAPI application is now running through Platformatic with automatic route handling, static file serving, and all the Platformatic features like configuration management, logging, and development mode watching.
Writing your own ASGI Handler
If you prefer to manage the ASGI lifecycle yourself, drop a minimal handler in public/main.py:
```python
import json
from datetime import datetime

async def app(scope, receive, send):
    if scope["type"] != "http":
        return

    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [[b"content-type", b"application/json"]],
    })

    payload = {
        "message": "Hello from Python!",
        "timestamp": datetime.now().isoformat(),
        "path": scope.get("path", "/"),
    }

    await send({
        "type": "http.response.body",
        "body": json.dumps(payload).encode("utf-8"),
    })
```
Need a machine learning inference endpoint, a data pipeline, or a compatibility layer for an existing ASGI app? Drop it into your docroot, update appTarget if needed, and you’re ready to deploy.
Combining Next.js and FastAPI inside a single Watt instance
In the repository https://github.com/platformatic/watt-next-fastapi, we have prepared an example of a Next.js application communicating with a FastAPI application within Watt.

Production-Ready From Day One
- Works with Node.js ≥ 22.18 and Python ≥ 3.8.
- Integrates with Platformatic configuration, secrets, and logging.
- Surfaces Python stdout/stderr and unhandled exceptions through Fastify logs.
- Ready for CI/CD: run `npm test` for the JavaScript glue and pair it with the Python tooling you already use.
Benchmarks
We benchmarked various Python ASGI server configurations against @platformatic/python-node using autocannon with 10 connections over 10 seconds. The results show that while performance-centric native servers like Granian and uvicorn deliver the highest raw throughput, @platformatic/python-node offers competitive performance with seamless Node.js integration.



Key Takeaways
- Granian leads in raw performance with 11,270 req/sec average throughput.
- @platformatic/python-node achieves 5,200 req/sec, outperforming `fastapi run` and even beating `daphne` and `hypercorn`.
- Watt integration performance remains competitive with most ASGI servers while providing significant power in combination with Node.js services.
- Latency remains low across all configurations, with most servers averaging under 2ms.
What’s Next
We’re already exploring deeper streaming support, WebSocket handling, and richer observability hooks between Fastify and ASGI. Tell us what you want to see next!
Try @platformatic/python Today
Grab the package on npm, spin up a service in Watt, and share what you build. Drop feedback in GitHub Discussions or our [Discord](https://discord.gg/platformatic)—we can’t wait to see the Python-powered experiences you ship on Platformatic.






