An end to isolation: Introducing Platformatic Runtime
How can we scale software development, and how is it possible that adding more staff to a team can actually slow down delivery?
The answer to those questions is a major theme in our industry. As with all human activities, building software is a social endeavour requiring tight communication between developers: given a team of size n, there will be n * (n - 1) / 2 connections to maintain.
Maintaining all of these communication channels slows development to a crawl. Common wisdom holds that the best team size is between six and eight people (the "two-pizza team" rule). So, how can we divide people into small teams?
Splitting an application into microservices has become one of the go-to approaches to solve this issue.
Microservices are an architectural and organizational approach to software development where software is composed of small independent services that communicate over well-defined APIs to form a larger ecosystem. Individual microservices are owned by small, self-contained teams.
While the self-contained nature of microservices provides an array of benefits, including individual scalability, agility, maintainability and oftentimes resilience, the transition to microservices was not painless.
As the number of services and APIs grew, the network complexity also grew, and developers needed a way to access these services. On one side, the service mesh solved the operational complexity of connectivity. On the other, service registries were born to solve discoverability and distribution. But all of these came with a hefty price tag: we lost the developer experience along the way.
Today, we are changing the game by introducing a solution that enables developers and enterprises to leverage the perks of microservices with the deployment simplicity of a monolith.
Introducing Platformatic Runtime
Runtime allows you to accelerate and simplify the development and execution of microservices by using the power of Platformatic Taxonomy to run your application like a monolith, without losing or compromising the power of distributed and collaborative development.
By consolidating all of an organization’s Node.js applications and microservices into a single deployable unit, Runtime reduces deployment-related risks and bolsters standardization. It enables users to move away from a model where multiple custom systems operate at once in high isolation, causing high rates of deployment errors and slowing down shipping velocity.
How does it work?
The Platformatic Runtime allows you to run multiple Platformatic Service, DB, and Composer applications inside a single process, while exposing its own CLI and programmatic API that will feel familiar to existing users.
Let’s start by exploring an example configuration file, shown below:
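A minimal sketch of such a file, assuming a hypothetical schema version (check the Runtime documentation for the one matching your release):

```json
{
  "$schema": "https://platformatic.dev/schemas/v0.26.0/runtime",
  "autoload": {
    "path": "./packages"
  },
  "entrypoint": "entrypointApp"
}
```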
This simple configuration file allows you to leverage the power of monorepos to start many Platformatic microservices as a single monolithic application. The $schema field tells Platformatic that this configuration is for a Runtime application. The autoload object specifies a directory containing the microservices. In a typical monorepo, each microservice would be in its own directory under a parent packages directory. Assume our monorepo has the following directory structure:
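For illustration, assume two hypothetical services, service-1 and service-2, alongside the docs directory:

```
packages/
├── service-1/
│   └── platformatic.service.json
├── service-2/
│   └── platformatic.db.json
└── docs/
    └── platformatic.service.json
```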
By specifying packages as the autoload path, we are telling the Platformatic Runtime to start each subdirectory of packages, including docs, as a Platformatic application.
Each of these directories is expected to include its own standalone Platformatic application.
The final configuration field, entrypoint, tells the Runtime that entrypointApp is how your users will reach your application. The entrypoint is the only application that binds to a port on the host operating system. Internal service-to-service communication happens by injecting HTTP requests directly into the Platformatic microservice applications without requiring them to bind to a port. For more information on how injection works, check out the fastify-undici-dispatcher documentation.
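To illustrate the idea, here is a dependency-free sketch (with hypothetical service names) of dispatching a request as an in-process function call rather than over a socket; the real mechanism is implemented by fastify-undici-dispatcher:

```javascript
// A map from internal hostnames to handler functions stands in for
// real sockets: no port is ever bound.
const services = new Map()

services.set('users.plt.local', async (req) => ({
  statusCode: 200,
  body: JSON.stringify({ id: 42, name: 'Ada' })
}))

// "Injecting" a request is just an async function call in-process.
async function inject (host, req) {
  const handler = services.get(host)
  if (!handler) return { statusCode: 502, body: 'no such service' }
  return handler(req)
}

inject('users.plt.local', { method: 'GET', url: '/users/42' })
  .then((res) => console.log(res.statusCode, res.body))
```

The dispatcher intercepts requests addressed to internal service hostnames and routes them straight into the target application's request handler, so no network round trip occurs.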
Under the hood, the Runtime analyzes any clients and Composer services to determine which services need to communicate with each other.
Using this dependency graph, the Runtime performs a topological sort to ensure that a service is not started until all of its dependencies have been started. If your dependency graph contains a cycle, the topological sort will fail. You can opt out of this behavior by setting allowCycles to true in your Runtime configuration. In this case, the Runtime will start the services in the order specified in the configuration file.
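Assuming the allowCycles flag (verify the exact key name against the Runtime configuration reference for your version), this looks like:

```json
{
  "$schema": "https://platformatic.dev/schemas/v0.26.0/runtime",
  "autoload": {
    "path": "./packages"
  },
  "entrypoint": "entrypointApp",
  "allowCycles": true
}
```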
We also recognize that not all applications will fit nicely into this monorepo-based configuration. For example, in our monorepo, it is possible that the docs directory is not a Platformatic application. In this case, you can exclude that directory by updating your configuration as shown below:
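A sketch of the exclusion, assuming autoload accepts an exclude list of directory names:

```json
{
  "$schema": "https://platformatic.dev/schemas/v0.26.0/runtime",
  "autoload": {
    "path": "./packages",
    "exclude": ["docs"]
  },
  "entrypoint": "entrypointApp"
}
```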
You can also specify microservices independent of, or in addition to, a monorepo using the services configuration format shown below. In this format, you specify the directory containing the microservice, the name of the configuration file within that directory, and an ID for the service. Under the hood, the Runtime converts the autoload configuration to this format.
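A sketch of the services format, using hypothetical directory and service names:

```json
{
  "$schema": "https://platformatic.dev/schemas/v0.26.0/runtime",
  "entrypoint": "entrypointApp",
  "services": [
    {
      "id": "entrypointApp",
      "path": "./packages/service-1",
      "config": "platformatic.service.json"
    },
    {
      "id": "docs",
      "path": "./packages/docs",
      "config": "platformatic.service.json"
    }
  ]
}
```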
Once you are satisfied with your configuration, save it in a file named platformatic.runtime.json and start your microservices using the following command:
platformatic runtime start -c platformatic.runtime.json
As we have seen, Platformatic Runtime allows developers to accelerate and simplify the development and execution of microservices by using the power of Platformatic Taxonomy to run applications like a monolith, without losing or compromising the power of distributed and collaborative development.
Are you ready to try out our Runtime environment?