Streaming and WebSocket Support Now Available in @platformatic/python-node

@platformatic/python-node v2.0.0 introduces full support for HTTP response streaming and bidirectional WebSocket communication. This release enables full-stack teams to build real-time, high-performance applications by bridging Python's async ecosystem with Node.js.
For teams running Python ASGI applications alongside Node.js services, this release enables new application types, including real-time dashboards, live data feeds, WebSocket-powered chat systems, progressive file uploads, and server-sent events—all while maintaining the Python-Node.js integration you've come to expect.
If you're new to @platformatic/python-node, it's a module that lets you run Python ASGI applications (such as FastAPI, Starlette, or Django) directly in Node.js processes. No separate Python server, no HTTP proxy overhead, no complex deployment setup.
What's New in v2.0.0
This release brings four major enhancements that align @platformatic/python-node with modern ASGI server capabilities:
HTTP Response Streaming
The new handleStream() method enables streaming of HTTP responses. Instead of buffering the entire response body before returning it to Node.js, chunks are processed incrementally as they arrive from your Python application. This reduces memory usage for large responses and provides immediate access to response headers before the body completes.
Each chunk is pulled from Python only when Node.js is ready to operate on it; Python waits until a chunk is requested before continuing to run the ASGI handler. This architecture provides proper backpressure between the two languages and is fully async on both ends, so either language can do other work while waiting for the other.
const res = await python.handleStream(req)
// Headers available immediately
console.log(res.status) // 200
console.log(res.headers.get('content-type'))
// Body consumed via AsyncIterator as chunks arrive
for await (const chunk of res) {
  console.log(chunk.toString())
}
HTTP Request Streaming
In addition to streaming responses, you can now stream request bodies to Python. This is essential for handling large file uploads, processing data progressively, or implementing custom streaming protocols.
Each write returns a promise that provides backpressure from Python, so Node.js doesn't write data faster than Python can consume it. Writes go through an internal buffer: if the buffer has room for the chunk, the promise resolves immediately; otherwise it resolves once space becomes available.
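Conceptually, this write path behaves like a bounded buffer whose reads wake waiting writers. The sketch below models that behavior only; BoundedBuffer and highWaterMark are illustrative names, not the library's actual internals:

```javascript
// Conceptual model of a promise-based bounded write buffer.
// Not the library's real implementation -- just the backpressure idea.
class BoundedBuffer {
  constructor(highWaterMark) {
    this.highWaterMark = highWaterMark
    this.chunks = []
    this.waiters = [] // resolvers for writers awaiting space
  }

  // Resolves immediately while there is room; otherwise resolves once a
  // read frees up space. This is what slows a too-fast writer down.
  write(chunk) {
    this.chunks.push(chunk)
    if (this.chunks.length <= this.highWaterMark) {
      return Promise.resolve()
    }
    return new Promise(resolve => this.waiters.push(resolve))
  }

  // Consumer pulls a chunk and wakes one waiting writer, if any.
  read() {
    const chunk = this.chunks.shift()
    const waiter = this.waiters.shift()
    if (waiter) waiter()
    return chunk
  }
}
```

A writer that awaits each write() call therefore advances exactly as fast as the reader drains the buffer.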
const req = new Request({
  method: 'POST',
  url: '/upload',
  headers: { 'Content-Type': 'application/octet-stream' }
})

// Dispatch request and write body concurrently
const [res] = await Promise.all([
  python.handleStream(req),
  (async () => {
    // Stream chunks to Python
    await req.write(chunk1)
    await req.write(chunk2)
    await req.write(chunk3)
    await req.end()
  })()
])
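The write-then-await pattern above generalizes to any async-iterable chunk source, such as a Node.js Readable stream. A small helper along these lines (pipeToRequest is a hypothetical name, not a library API) keeps backpressure intact:

```javascript
// Hypothetical helper (not part of @platformatic/python-node): pipe any
// async-iterable chunk source into a streaming request, awaiting each
// write so Python-side backpressure is respected.
async function pipeToRequest(source, req) {
  for await (const chunk of source) {
    await req.write(chunk) // resolves once the internal buffer has room
  }
  await req.end() // signal end-of-body to the ASGI handler
}
```

Since Node.js Readable streams are async iterable, something like fs.createReadStream('./upload.bin') could be passed directly as the source.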
Bidirectional WebSocket Support
Full WebSocket support means your Python ASGI applications can now handle persistent, bidirectional connections. Whether you're building a chat application, live dashboard, or multiplayer game, you can implement the WebSocket logic in Python while integrating seamlessly with your Node.js infrastructure.
const req = new Request({
  url: '/ws',
  websocket: true
})
const res = await python.handleStream(req)

// Send messages to Python
await req.write('Hello from Node.js!')

// Receive messages from Python
for await (const chunk of res) {
  console.log('Received:', chunk.toString())
}
ASGI 3.0 Protocol Implementation
Under the hood, v2.0.0 implements the ASGI 3.0 protocol specification for both HTTP and WebSocket communication. This ensures compatibility with the entire Python async ecosystem, including FastAPI's StreamingResponse, Starlette's WebSocket endpoints, and any other ASGI-compliant framework.
Key Benefits
- Lower Memory Footprint: Stream large responses without buffering everything in memory
- Faster Time-to-First-Byte: Access response headers immediately, before body writing even begins
- Real-Time Capabilities: Build WebSocket applications with bidirectional communication
- Better Resource Utilization: Process chunks as they arrive instead of waiting for completion
- Backward Compatible: Existing code using handleRequest() continues to work almost completely unchanged; the single breaking change is the removal of the req.body setter and getter
HTTP Streaming in Action: Server-Sent Events with FastAPI
One of the most powerful use cases for HTTP streaming is Server-Sent Events (SSE), which enable servers to push real-time updates to clients over a standard HTTP connection. Let's build a live monitoring dashboard that streams system metrics from Python to Node.js.
Here's a FastAPI application that generates streaming metrics:
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
import asyncio
import json
import random
from datetime import datetime

app = FastAPI()

async def generate_metrics():
    """Generate fake system metrics as server-sent events"""
    while True:
        # Simulate collecting system metrics
        metrics = {
            'timestamp': datetime.now().isoformat(),
            'cpu_usage': random.uniform(20, 80),
            'memory_usage': random.uniform(40, 90),
            'active_connections': random.randint(10, 100),
            'requests_per_second': random.randint(50, 500)
        }
        # Format as SSE event
        data = f'data: {json.dumps(metrics)}\n\n'
        yield data.encode()
        # Send update every second
        await asyncio.sleep(1)

@app.get('/metrics/stream')
async def stream_metrics():
    """Endpoint that streams real-time metrics"""
    return StreamingResponse(
        generate_metrics(),
        media_type='text/event-stream',
        headers={
            'Cache-Control': 'no-cache',
            'Connection': 'keep-alive'
        }
    )

@app.get('/health')
async def health_check():
    """Standard health check endpoint"""
    return {'status': 'healthy', 'version': '1.0.0'}
Now let's consume this stream from Node.js:
import { Python, Request } from '@platformatic/python-node'

const python = new Python({
  docroot: './python-apps',
  appTarget: 'metrics_app:app'
})

async function monitorMetrics() {
  const req = new Request({
    method: 'GET',
    url: 'http://localhost/metrics/stream'
  })

  console.log('Connecting to metrics stream...')
  const res = await python.handleStream(req)

  console.log(`Status: ${res.status}`)
  console.log(`Content-Type: ${res.headers.get('content-type')}`)
  console.log('\nReceiving metrics:\n')

  // Process metrics as they arrive
  for await (const chunk of res) {
    const lines = chunk.toString().split('\n')
    for (const line of lines) {
      if (line.startsWith('data: ')) {
        const data = JSON.parse(line.slice(6))
        console.log(`[${data.timestamp}]`)
        console.log(`  CPU: ${data.cpu_usage.toFixed(1)}%`)
        console.log(`  Memory: ${data.memory_usage.toFixed(1)}%`)
        console.log(`  Connections: ${data.active_connections}`)
        console.log(`  RPS: ${data.requests_per_second}`)
        console.log()
      }
    }
  }
}

monitorMetrics().catch(console.error)
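One caveat about the loop above: splitting each chunk on newlines assumes every SSE event arrives in a single chunk. Because events can straddle chunk boundaries, a more defensive consumer keeps a carry-over buffer. A generic sketch, not specific to this library:

```javascript
// Generic SSE parser that tolerates events split across chunks.
// Feed it raw chunks; it returns complete `data:` payloads and keeps
// any trailing partial event buffered for the next call.
function createSSEParser() {
  let buffer = ''
  return function parse(chunk) {
    buffer += chunk.toString()
    const events = []
    let sep
    // A blank line ("\n\n") terminates each SSE event
    while ((sep = buffer.indexOf('\n\n')) !== -1) {
      const raw = buffer.slice(0, sep)
      buffer = buffer.slice(sep + 2)
      for (const line of raw.split('\n')) {
        if (line.startsWith('data: ')) events.push(line.slice(6))
      }
    }
    return events
  }
}
```

In the monitoring loop this would replace the per-chunk split: create one parser before the for await loop, then JSON.parse each payload it returns.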
How It Works
The streaming implementation leverages the interaction between Python's async generators and Node.js's AsyncIterator pattern:
- Python Side: FastAPI's StreamingResponse accepts an async generator that yields chunks. Each yield dispatches data to Rust via a tokio DuplexStream.
- ASGI Bridge: The Rust-based ASGI implementation receives http.response.body events with the more_body flag, queuing chunks as they arrive.
- Node.js Side: The handleStream() method returns a Response object that implements the AsyncIterator protocol. Each iteration of for await...of receives the next chunk.
This architecture means Node.js can start processing data the moment Python sends the first chunk—no waiting for the complete response. But it also means bidirectional backpressure, so each side can only operate as fast as the other and will yield back to its respective event loop when there's no work to be done.
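The consumer side of this contract can be modeled as a small queue whose Symbol.asyncIterator resolves only when a chunk is available. This is a conceptual sketch of the pattern, not the actual Response implementation:

```javascript
// Conceptual model of a streaming response body: for await...of pulls
// chunks one at a time; the producer pushes them as they arrive.
function createChunkStream() {
  const queue = []
  const pending = [] // readers waiting for the next chunk
  let done = false

  return {
    // Producer side: hand a chunk to a waiting reader, or queue it.
    push(chunk) {
      const reader = pending.shift()
      if (reader) reader({ value: chunk, done: false })
      else queue.push(chunk)
    },
    // Producer signals end-of-stream; wake any blocked readers.
    close() {
      done = true
      for (const reader of pending.splice(0)) {
        reader({ value: undefined, done: true })
      }
    },
    [Symbol.asyncIterator]() {
      return {
        next() {
          if (queue.length > 0) {
            return Promise.resolve({ value: queue.shift(), done: false })
          }
          if (done) return Promise.resolve({ value: undefined, done: true })
          // Nothing buffered: the reader parks here until push() or close()
          return new Promise(resolve => pending.push(resolve))
        }
      }
    }
  }
}
```

The key property is that next() parks the consumer when nothing is buffered, which is exactly how the event loop stays free for other work while Python produces the next chunk.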
Real-World Use Cases for HTTP Streaming
Beyond metrics dashboards, HTTP streaming enables:
- Large File Downloads: Stream files from Python (e.g., generated reports, media files) without loading them entirely into memory
- AI/ML Model Outputs: Stream generated content from language models or other AI systems
- Progressive Data Processing: Stream database query results or CSV processing as rows are processed
- Video/Audio Streaming: Deliver media content with Python processing (transcoding, filtering) and Node.js delivery
WebSocket Support: Building Real-Time Applications
While HTTP streaming works well for server-to-client communication, WebSockets provide full bidirectional real-time channels. This is useful for chat applications, collaborative editing, live gaming, and scenarios where both client and server need to send messages independently.
Let's build a conversational AI assistant with FastAPI WebSockets:
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.websocket('/ws/assistant')
async def assistant_endpoint(websocket: WebSocket):
    await websocket.accept()

    # Send welcome message
    await websocket.send_text('Hello! I am your AI assistant. Ask me anything or try /help for commands.')

    try:
        while True:
            # Receive message from client
            message = await websocket.receive_text()

            # Simple routing based on message content
            if message.startswith('/help'):
                response = 'Available commands: /help, /status, /about, or ask any question'
            elif message.startswith('/status'):
                response = 'System status: All services operational'
            elif message.startswith('/about'):
                response = 'AI Assistant v1.0 - Powered by Python and Node.js'
            elif message.lower() in ['hi', 'hello', 'hey']:
                response = 'Hello! How can I help you today?'
            elif 'time' in message.lower():
                from datetime import datetime
                response = f'The current time is {datetime.now().strftime("%H:%M:%S")}'
            else:
                # Echo back with a simulated AI response
                response = f'You said: "{message}". I am processing your request...'

            # Send response back to client
            await websocket.send_text(response)
    except WebSocketDisconnect:
        pass  # Client disconnected
Now let's interact with this assistant from Node.js:
import { Python, Request } from '@platformatic/python-node'

const python = new Python({
  docroot: './python-apps',
  appTarget: 'assistant_app:app'
})

async function runAssistant() {
  // Create WebSocket request
  const req = new Request({
    url: 'http://localhost/ws/assistant',
    websocket: true
  })

  console.log('Connecting to AI Assistant...\n')
  const res = await python.handleStream(req)

  // Messages to send to the assistant
  const messages = [
    'Hello',
    '/help',
    '/status',
    'What is the time?',
    'Tell me about yourself'
  ]

  let messageIndex = 0

  // Clean for-await loop: read response, then write next message
  for await (const chunk of res) {
    const response = chunk.toString()
    console.log(`Assistant: ${response}\n`)

    // Send next message if we have more
    if (messageIndex < messages.length) {
      const nextMessage = messages[messageIndex]
      console.log(`You: ${nextMessage}`)
      await req.write(nextMessage)
      messageIndex++
    } else {
      // No more messages, close the connection
      console.log('Closing connection...')
      await req.end()
      break
    }
  }
}

runAssistant().catch(console.error)
How WebSocket Communication Works
The example above demonstrates the clean request-response pattern enabled by WebSockets:
- Connection Establishment: Node.js creates a Request with websocket: true. Python receives a scope with type: 'websocket' and sends websocket.accept to establish the connection.
- Bidirectional Messaging: The for-await loop reads from the Python server, and each iteration writes a new message back:
  - Python → Node.js: Python's await websocket.send_text() delivers data to Node.js via the AsyncIterator
  - Node.js → Python: await req.write(data) sends data to Python via websocket.receive_text()
- Clean Loop Pattern: Unlike the background task pattern often seen in WebSocket examples, this approach uses a single synchronous-style loop where each receive is followed by a send, making the flow easy to understand and debug.
- Connection Lifecycle: Either side can close the connection. Python receives websocket.disconnect events, while Node.js closes via req.end().
Real-World WebSocket Use Cases
WebSocket support enables full-stack teams to build:
- Conversational AI Assistants: Build chatbots and AI assistants in Python, exposing them via WebSocket for real-time conversations
- Real-Time Chat and Messaging: Interactive chat backends where Python handles message routing and business logic
- Live Data Feeds: Stream stock prices, sports scores, IoT sensor data, or live metrics with bidirectional control
- Interactive Commands: Command-line style interfaces where users send commands and receive structured responses
- Gaming: Real-time multiplayer game state synchronization with player actions and server updates
- Live Customer Support: Real-time support chat with Python AI integration for automated responses
Real-World Integration Scenarios for Full-Stack Teams
The combination of streaming and WebSocket support enables several integration patterns for teams using both Python and Node.js:
Python ML Models with Real-Time Inference
Run machine learning inference in Python (using PyTorch, TensorFlow, or transformers) and expose results via WebSocket for real-time predictions. Your Node.js API gateway can manage authentication and routing while Python handles the heavy computation.
@app.websocket('/ws/inference')
async def ml_inference(websocket: WebSocket):
    await websocket.accept()
    model = load_model()  # Load your ML model
    while True:
        data = await websocket.receive_json()
        result = model.predict(data['input'])
        await websocket.send_json({'prediction': result})
Progressive Data Processing
Stream large dataset processing results back to Node.js as they're computed, enabling progress bars, partial result display, or early termination:
@app.get('/process/dataset')
async def process_dataset():
    async def process_stream():
        for batch in dataset.batches(size=1000):
            result = process_batch(batch)
            yield json.dumps(result).encode() + b'\n'
    return StreamingResponse(process_stream())
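Because the endpoint above emits one JSON document per line, the Node.js side can parse rows as they stream in. Here is a sketch of a newline-delimited JSON (NDJSON) consumer that tolerates rows split across chunk boundaries; it's a generic pattern, not a library API:

```javascript
// Parse newline-delimited JSON (NDJSON) from a stream of chunks,
// handling rows that span chunk boundaries via a carry-over buffer.
async function* parseNDJSON(chunks) {
  let buffer = ''
  for await (const chunk of chunks) {
    buffer += chunk.toString()
    let nl
    while ((nl = buffer.indexOf('\n')) !== -1) {
      const line = buffer.slice(0, nl)
      buffer = buffer.slice(nl + 1)
      if (line.trim() !== '') yield JSON.parse(line)
    }
  }
  if (buffer.trim() !== '') yield JSON.parse(buffer) // trailing row
}
```

Feeding it the streamed response lets you update a progress bar or render partial results per row instead of waiting for the whole dataset.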
Hybrid API Gateway
Use Node.js as your API gateway for authentication, rate limiting, and routing, while leveraging Python's rich ecosystem for specific endpoints that need streaming or WebSocket capabilities.
Existing Python Tools with WebSocket Interfaces
Wrap existing Python command-line tools or libraries with FastAPI WebSocket endpoints, making them accessible to your Node.js infrastructure without rewriting them.
Getting Started and Migration Guide
Installation
Upgrade to v2.0.0 via npm:
npm install @platformatic/python-node@latest
Or with yarn:
yarn add @platformatic/python-node@latest
Choosing Between handleRequest() and handleStream()
The API now offers two methods for handling requests:
Use handleRequest() when:
- Response bodies are small and fit comfortably in memory
- You would need to buffer the complete response body anyway
- Backward compatibility with existing code is required
const res = await python.handleRequest(req)
console.log(res.body.toString()) // Body available immediately
Use handleStream() when:
- Responses are large or potentially unbounded
- You need access to headers before the body completes
- Implementing Server-Sent Events or streaming protocols
- Building WebSocket applications
- Memory efficiency is important
const res = await python.handleStream(req)
console.log(res.body) // `undefined` for streams!
console.log(res.status) // Headers available immediately
for await (const chunk of res) {
  // Process chunks incrementally
}
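If a code path occasionally needs the fully buffered body from a streamed response, a small helper can bridge the two styles (collectBody is illustrative, not part of the library):

```javascript
// Illustrative helper: drain a streamed response into a single Buffer.
// Only use this when the body is known to be small; it gives up the
// memory benefits of streaming.
async function collectBody(res) {
  const chunks = []
  for await (const chunk of res) {
    chunks.push(Buffer.from(chunk))
  }
  return Buffer.concat(chunks)
}
```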
Migration Checklist
Existing applications using handleRequest() should continue to work without changes, with one exception: the req.body setter and getter are no longer available. To adopt streaming:
- Identify endpoints that would benefit from streaming (large responses, real-time data, WebSockets)
- Switch those specific endpoints to handleStream()
- Update response handling to use for await...of iteration
- Test thoroughly, especially error handling during streaming
- Monitor memory usage to verify streaming benefits
Documentation and Resources
- Full documentation: python-node GitHub repository
- ASGI specification: asgi.readthedocs.io
- FastAPI streaming: FastAPI Advanced User Guide
- FastAPI WebSockets: FastAPI WebSockets
Conclusion
The addition of HTTP streaming and WebSocket support in @platformatic/python-node v2.0.0 expands what's possible for full-stack teams building with Python and Node.js. The existing integration now extends to real-time, high-performance use cases that previously required complex workarounds.
Whether you're building live dashboards, chat applications, progressive data processing pipelines, or exposing Python ML models via WebSocket APIs, v2.0.0 provides the necessary foundation while maintaining backward compatibility with existing code.
The implementation follows the ASGI 3.0 specification, ensuring compatibility with the Python async ecosystem, including FastAPI, Starlette, Django Channels, and other ASGI-compliant frameworks. Combined with running Python directly in Node.js processes, this release provides a practical option for teams looking to leverage both ecosystems.
Try out v2.0.0 today and share your feedback. If you encounter issues or have questions, please open an issue on our GitHub repository.






