What's new in Platformatic v2

It has been one year since the release of Platformatic v1, which marked the beginning of a cycle of improvements that has led us to this latest release. Over this last year, we have learned one big lesson: developers want to feel the benefits of Platformatic with their stack (Next.js, Astro, Express, Koa, Fastify, etc…) without having to switch their framework to Platformatic.

With this in mind, this release focused on extracting what made Platformatic Service unique and providing it to all Node.js applications. Through this process, we realized we had created the equivalent of a Node.js Application Server for the modern era.

Introducing Watt, the Application Server for Node.js

Watt allows you to run multiple Node.js applications that are managed centrally; we call them “services”. Some run inside worker threads, allowing them to start faster and with lower overhead, while others are executed as child processes to accommodate their complex start-up sequences.

Wondering how to get started with Watt? Follow our quick start guide.

The virtual mesh network

One of the most frustrating things when developing and deploying a microservice system is remembering to start each application on its own host:port combination and keeping all of those in sync. In this scenario, one simple mistake can cause the entire house of cards to come crashing down. Ever since v1 shipped, Platformatic has been able to route fetch('http://<serviceId>.plt.local') to the service with that given id. With v2, this now works across threads and processes managed by Watt or Platformatic.
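Inside the mesh, a service is therefore addressed purely by its id. As a minimal sketch (the `users` service id and `/profile/42` path are hypothetical), a helper that builds such mesh-internal URLs:

```javascript
// Build the mesh-internal URL for a service managed by Watt.
// Inside Watt, fetch() resolves these .plt.local hostnames to the right
// thread or process, whether exposed in-memory or over TCP.
function serviceUrl (serviceId, path = '/') {
  return `http://${serviceId}.plt.local${path}`
}

// e.g. from any other service in the same Watt instance:
// const res = await fetch(serviceUrl('users', '/profile/42'))
```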

Each service run by Watt defines how it is exposed throughout the mesh network: either via in-memory communication (only available when run as a worker thread) or via a more traditional TCP socket.

This is controlled by how the application is started. To expose an application via in-memory communication, expose a create function in the entrypoint, like so:

import { createServer } from 'node:http';

export function create () {
  let count = 0;

  const server = createServer((req, res) => {
    console.log('received request', req.url);
    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify({ content: `from node:http createServer: ${count++}!` }));
  });

  return server;
}

The server can be any Node core server or a Fastify, Express or Koa instance.

Watt can also expose an application via TCP: it detects that your application has called .listen() on a server. In other words, the “original” Node.js server still works out of the box:

import { createServer } from 'node:http';

let count = 0;

const server = createServer((req, res) => {
  console.log('received request', req.url);
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ content: `from node:http createServer: ${count++}!` }));
});

server.listen(0); // Port number does not matter

Currently, WebSockets are supported only when using TCP.

Processes vs Threads

Watt supports services running either as worker threads or as child processes. The key factor in choosing one or the other is the complexity of some applications (build steps, specific Node.js CLI flags, etc.): whenever a start script is present, Watt calls npm start, so all of those requirements are respected.
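As a sketch, a service whose package.json defines a start script (the script below is hypothetical) will be launched by Watt as a child process through npm start, flags and all:

```json
{
  "name": "my-service",
  "scripts": {
    "start": "node --enable-source-maps server.js"
  }
}
```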

In the future, we will allow the spawning of multiple threads and processes of the same service, allowing the application to use 100% of the resources provided by the hardware.

Monitoring and Logging

The Watt application server allows for three ways of observability:

  1. Monitoring via Prometheus

  2. OpenTelemetry Tracing

  3. Logging Transport via pino

Watt exposes a Prometheus endpoint, which can be enabled by adding the following to the watt.json (or platformatic.json) of the runtime itself:

{
  …
  "metrics": {
    "hostname": "0.0.0.0",
    "port": 9090
  }
  …
}

The Prometheus endpoint is exposed on port 9090, so that Prometheus can scrape the metrics separately (and likely forward them to Grafana, or to our Intelligent Command Center), as recommended. The provided metrics highlight important Node.js metrics by default, such as memory consumption, request latency, and event loop utilization.
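A minimal Prometheus scrape configuration for this endpoint could look like the following sketch (the job name and target host are assumptions; adjust them to your deployment):

```yaml
scrape_configs:
  - job_name: 'watt'
    static_configs:
      - targets: ['localhost:9090']
```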

This is how this data is rendered in our Intelligent Command Center:

The Watt application server can also set up OpenTelemetry Tracing collection.

{
  …
  "telemetry": {
    "serviceName": "test-runtime",
    "version": "1.0.0",
    "exporter": {
      "type": "otlp",
      "options": {
        "url": "http://url/to/otlp/server"
      }
    }
  }
  …
}

This is used by our Intelligent Command Center to automatically create a taxonomy of all the applications running within it:

Watt allows you to forward all the logs and output of your services to any destination supported by pino; pino calls these adapters Transports. Thanks to this integration, your application does not need to use pino itself: even plain console.log() output is captured.

However, you can leverage a globally available logger with:

globalThis.platformatic?.logger.info('hello world')
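Because the global is accessed with optional chaining, the same code is safe to run outside Watt, where the global is absent. A minimal sketch, assuming the standard pino logging signature (the service name below is hypothetical):

```javascript
// Inside Watt, globalThis.platformatic.logger is a logger wired to the
// configured transport; outside Watt the global is undefined, so the
// optional chaining turns the call into a safe no-op.
const logger = globalThis.platformatic?.logger
logger?.info({ service: 'my-service' }, 'hello world')
```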

The transport can be configured like this:

{
  …
  "logger": {
    "level": "trace",
    "transport": {
      "target": "pino-elasticsearch",
      "options": {
        "index": "an-index",
        "node": "http://elasticsearch"
      }
    }
  }
  …
}

For users of our Intelligent Command Center, this is easily accessible in the logs tab:

Supporting frontend applications

Next.js, Remix, Astro, and generic Vite (via fastify-vite) are supported in both their Static Site Generation (SSG) and Server-Side Rendering (SSR) modes. In SSR mode, it’s possible to call the other services in the same Watt instance just by using fetch() (Next.js example):

import Image from "next/image";
import styles from "./page.module.css";

export default async function Home() {
  return (
    <div className={styles.page}>
      <main className={styles.main}>
        <Image
          className={styles.logo}
          src="https://nextjs.org/icons/next.svg"
          alt="Next.js logo"
          width={180}
          height={38}
          priority
        />
        <h1 className={styles.title}>Welcome to Next.js + Platformatic!</h1>
        <h3>Coming from the "basic" Node.js server</h3>
        <div className={styles.ctas}>
          {(await (await fetch("http://node.plt.local/", { cache: 'no-store' })).json()).content}
        </div>
        <h3>Coming from Fastify</h3>
        <div className={styles.ctas}>
          {(await (await fetch("http://fastify.plt.local/", { cache: 'no-store' })).json()).content}
        </div>
      </main>
    </div>
  );
}

To learn more about how to run your Next.js application inside Watt, check this out.

Vite-based applications are also supported via fastify-vite and run transparently to the user.

Depending on which technologies you use, Watt can run your application in two ways:

  1. A separate and dedicated worker thread inside the same Watt process. In this case, the service avoids opening a TCP port where possible.

  2. A separate process exposing a TCP port.

The behaviour of each application is shown in the following table:

| Name | Development Mode | Production Mode |
| --- | --- | --- |
| Generic Node.js HTTP server | Worker Thread | Worker Thread |
| Express | Worker Thread | Worker Thread |
| Fastify | Worker Thread | Worker Thread |
| Koa | Worker Thread | Worker Thread |
| Astro | Worker Thread | Worker Thread (for SSR) |
| Next | Separate Process | Separate Process |
| Remix | Worker Thread | Worker Thread |
| Vite | Worker Thread | Worker Thread (for SSR) |

All the applications above support the following settings in the service’s watt.json file to modify the default behaviour:

| Setting | Effect |
| --- | --- |
| application.commands.dev | The command to execute when running wattpm dev |
| application.commands.start | The command to execute when running wattpm start |

Note that using the properties above will force the application to always be executed as a separate process. Interservice communication is still available.
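For instance, a service’s watt.json could override both commands like this (the commands themselves are hypothetical; substitute whatever your service needs):

```json
{
  "application": {
    "commands": {
      "dev": "next dev",
      "start": "node dist/server.js"
    }
  }
}
```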

Building applications for deployment

When bundling the application for deployment, use wattpm build. This command will perform the necessary actions to build all your services.

All the applications listed above are supported out-of-the-box, but you can optionally specify a custom build command using the application.commands.build setting in the service’s watt.json file.
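As with the dev and start commands, a sketch of a custom build command in the service’s watt.json (the command shown is an assumption):

```json
{
  "application": {
    "commands": {
      "build": "vite build"
    }
  }
}
```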

Fastify v5

Fastify v5 was released on 17 September 2024 (more details here), and all the dependencies of Platformatic have been updated to this latest release. The Fastify v5 release had two key objectives:

  1. Simplify long-term maintenance by dropping support for outdated Node.js versions, with Fastify v5 now targeting Node.js v20 and above.

  2. Streamline the framework by removing all deprecated APIs accumulated over the past two years of improvements.

A complete upgrade guide is available here.

Upgrading to Platformatic v2

If you are a Platformatic v1.x user, upgrading to v2 is simple by following these steps:

  1. Upgrade your dependencies @platformatic/service, @platformatic/db or @platformatic/composer to the latest version (v2).

  2. Upgrade all your @fastify dependencies to the latest version (check each package’s release notes to see which versions support Fastify v5).

  3. Wipe node_modules and the various lock files, then run npm install (or pnpm install, etc).

  4. In the root of your project, run npx platformatic upgrade.

  5. For each service in your application, run npx platformatic upgrade.

Please contact us on Discord if you encounter any issues.

Wrapping Up

We are currently in the process of updating all our tutorials and guides to reflect the new Watt Application Server, but it might take a few days. Stay tuned for more updates.

To learn more about using our Node.js application platform within your team, read more about our latest release and book a demo.