Solving Microservice Challenges

A Networkless HTTP Guide with Fastify and Undici

When faced with managing complex codebases, engineering teams will typically adopt the Model View Controller (MVC) architecture. MVC is an architectural pattern that separates an application into three components: model, view and controller. Each component’s business logic and presentation layer is isolated from the others.

However, this option does not always scale, as the number of models can at times grow as high as 2,000.

An alternative way to manage complex codebases is to adopt a microservice architecture, but cost and complex coordination pose problems there too.

In light of these issues, today, many teams prefer to use modular monoliths.

In this article, we will examine how to scale complex backend applications by leveraging networkless HTTP. We will also explore how to use tools like Fastify and Undici to support this.

Ultimately, we will look at how Platformatic makes networkless HTTP implementation simpler for applications using a modular monolith architecture.

The MVC Architecture: Use Cases and Unscalability

Software engineers have always looked for ways to build complex systems more simply. This led to the popularity of the monolithic architecture, which leverages the Model-View-Controller framework.

For an easier understanding of this architecture, let’s look at what each component does.

The Model is the database powerhouse of the application; this is where the development team stores, gets, and structures all their data. The Model implements the business logic, holds data in memory, and manages persistence.

The View component is the part the end-users get to interact with. The team builds this section with frontend libraries and frameworks like React or Next.js. In short, the View component is the user interface where the user can visually interact with the application.

The Controller is the intermediary or facilitator between the View and the Model. Based on users' activities on the View, the Controller connects with the database to generate the necessary response. This is typically your framework “route” handler.

For simplicity, the Model deals with the database; the View deals with the UI; and the Controller deals with request handling.
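
The separation can be sketched in a few lines of plain JavaScript. The names here (UserModel, renderUserView, userController) are purely illustrative, not from any framework:

```javascript
// Model: owns the data and the business logic
const UserModel = {
  users: [{ id: 1, name: 'Ada' }],
  find (id) {
    return this.users.find(u => u.id === id)
  }
}

// View: turns data into something the user can see
function renderUserView (user) {
  return `<h1>${user.name}</h1>`
}

// Controller: mediates between the View and the Model for a request
function userController (req) {
  const user = UserModel.find(req.params.id)
  return renderUserView(user)
}

console.log(userController({ params: { id: 1 } })) // <h1>Ada</h1>
```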

The Model View Controller framework is ultimately not scalable for large codebases. But why? When adding a new feature, there are only a few places to put it, so most of the heavy lifting gets delegated to the models, i.e. the “fat model” architecture. Soon you can have up to 2,000 models, which is impossible for a single developer to understand all at once, and loading them all can take several minutes.

This scalability hassle is the main reason some teams with higher levels of complexity and larger customer bases look for better alternatives to the MVC model.

Are microservices a better alternative?

Unlike the MVC architecture, microservices are not built with a monolithic approach. Different features can be built separately and worked on independently of the rest. Each component or feature will have its own server and database.

This is why some organizations prefer using microservices.

But what are microservices?

Microservices are an architectural pattern in which each component exists independently, thereby enhancing overall efficiency. Their flexibility, scalability and propensity to encourage fault isolation make them appealing to large teams building solutions for large audiences.

However, microservices also have their shortcomings.

The first issue often comes up in leadership and coordination. Because several autonomous teams each handle independent components, decision-making can be slower, as the different teams must agree on overall decisions.

Secondly, the cost of maintaining microservices can be high: the increased number of services may result in higher infrastructure costs, particularly if each service requires its own set of resources. Efficient resource allocation and monitoring are crucial to controlling costs.

At this point, one thing is clear: microservices might not be suitable for most development teams. This leads to the consideration of Modular Monoliths.

The Modular Monolith: A Better Architectural Option

The modular monolith architecture, used by companies such as Spotify, embraces a fair balance of features from monolithic and microservice architectures. It combines the benefits of modular design with the simplicity of a monolithic architecture. Within a modular monolith, loosely coupled modules will have well-defined boundaries and clear dependencies on other modules.

The beauty of the modularity behind modular monoliths is that an engineer can turn a module into a microservice or a pure monolith at will. The concept behind this architecture is that engineers can split their HTTP framework across different isolated features or domains.

For clarity, the domains are the modules. In a modular monolith application, each module can have its App.js file with its routes and plugins.

In this context, domain means the specific subject a project is developed for, such as catalogue, billing, and cart. That said, it follows that a modular monolith is a design-driven architecture.
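
As a rough sketch in plain JavaScript, each domain could be a self-contained module that the monolith composes behind one route table. The module and registry names below are hypothetical illustrations, not Platformatic or Fastify APIs:

```javascript
// Each domain is a self-contained module that declares its own routes
const catalogueModule = {
  routes: { 'GET /products': () => ['book', 'pen'] }
}
const billingModule = {
  routes: { 'POST /invoices': () => ({ status: 'created' }) }
}

// The monolith composes the modules behind a single route table
function composeApp (...modules) {
  const table = {}
  for (const mod of modules) Object.assign(table, mod.routes)
  return {
    handle (method, path) {
      return table[`${method} ${path}`]()
    }
  }
}

const app = composeApp(catalogueModule, billingModule)
console.log(app.handle('GET', '/products')) // [ 'book', 'pen' ]
```

Because each module only exposes its route table, any one of them could later be lifted out into its own service without touching the others.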

As much as modular monoliths are incredibly efficient, there is an initial overhead: standardizing the modular interfaces can be quite difficult.

Developers have to consider how modules should relate (by calling a function?) as well as how easily a module can be converted into a microservice. However, HTTP solves this problem.

There is a new HTTP RFC, RFC 9110, which renders previous RFCs obsolete and makes HTTP semantics identical across all transports. It is therefore possible to create implementations with HTTP semantics atop anything. This is the main idea behind networkless HTTP.
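
As a toy illustration of that idea, the exchange below keeps its HTTP shape (method, path, status code, headers, body) while the “transport” is a plain in-process function call rather than a socket. This is a sketch of the concept only, not a real implementation:

```javascript
// A "server" that speaks HTTP semantics without a network
function handle (req) {
  if (req.method === 'GET' && req.path === '/hello') {
    return {
      statusCode: 200,
      headers: { 'content-type': 'text/plain' },
      body: 'Hello networkless HTTP'
    }
  }
  return { statusCode: 404, headers: {}, body: 'Not Found' }
}

// The "client" call is an in-process function call, not a socket
const res = handle({ method: 'GET', path: '/hello', headers: {} })
console.log(res.statusCode, res.body) // 200 Hello networkless HTTP
```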

Modern Tools for Implementing Networkless HTTP

There are tools that can help implement the concept of networkless HTTP in our Node.js applications. These are Undici and Fastify.

Undici

The HTTP stack in Node.js has performance bottlenecks that cannot be fixed without breaking the API.

This was the motivation behind the creation of Undici, a fast Node.js HTTP library. Undici is the library that powers the fetch API in the Node.js core.

Undici is an HTTP client built for Node.js. Initially, it only supported HTTP/1.1, but as HTTP has evolved, the team behind it has also shipped support for HTTP/2. The library is compatible with the current versions of HTTP.

To get started with Undici, run this command to install it:

npm i undici --force

Note: Include --force to avoid installation issues in case you have incompatible versions of Express or GraphQL.

Recall that Undici powers the fetch API in the Node.js core. This is how to use fetch:

import { fetch } from "undici";

const res = await fetch('https://example.com')
const json = await res.json()

console.log(json)

This is how to use an HTTP Request in Undici:

import { request } from 'undici'

const {
  statusCode,
  headers,
  trailers,
  body
} = await request('http://localhost:3000/foo')

console.log('response received', statusCode)
console.log('headers', headers)

for await (const data of body) {
  console.log('data', data)
}

console.log('trailers', trailers)

Making HTTP calls at scale can be quite complex. This is why Undici adopts a simple dispatcher interface, which uses the dispatch method to define how a request is handled.
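
To get a feel for the shape of that interface, here is a toy dispatcher in plain JavaScript. It mimics the callback style of Undici’s dispatch(options, handler) contract (onHeaders, onData, onComplete), but it is not Undici’s actual implementation:

```javascript
// A toy dispatcher with the same callback shape as
// undici's dispatch(options, handler) contract
const toyDispatcher = {
  dispatch (options, handler) {
    // a real dispatcher would write `options` to a socket;
    // here we synthesize a canned response instead
    handler.onHeaders(200, { 'content-type': 'text/plain' })
    handler.onData(Buffer.from('hello from '))
    handler.onData(Buffer.from(options.path))
    handler.onComplete({})
  }
}

let status
let body = ''
toyDispatcher.dispatch({ method: 'GET', path: '/foo' }, {
  onHeaders (statusCode) { status = statusCode },
  onData (chunk) { body += chunk.toString() },
  onComplete () { console.log(status, body) } // 200 hello from /foo
})
```

Because everything above and below this interface only ever sees dispatch calls, the thing behind it can be a socket, a pool of sockets, or no network at all.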

Notably, in the design of Undici, the API is separated from the internals, which makes it easier to build atop it. This is the dispatcher hierarchy:

  • Dispatcher defines all the public methods

  • DispatcherBase adds supporting code for Dispatcher

  • Client wraps a single HTTP connection: one client, one socket

  • A Pool is a set of Clients; it routes HTTP requests to an origin

  • The generic Pool does round-robin balancing across multiple clients

  • BalancedPool does automatic load balancing between the available sockets, evaluating the latency of each one

  • An Agent directs requests to the right Pool based on their origin
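
The round-robin behaviour of the generic pool can be sketched with a toy pool over toy clients. Again, this is illustrative plain JavaScript, not Undici’s internals:

```javascript
// Toy client: one client wraps one "connection"
function makeClient (id) {
  return { request: path => `client ${id} handled ${path}` }
}

// Toy pool: round-robins requests across its clients
function makePool (clients) {
  let next = 0
  return {
    request (path) {
      const client = clients[next]
      next = (next + 1) % clients.length
      return client.request(path)
    }
  }
}

const pool = makePool([makeClient(1), makeClient(2)])
console.log(pool.request('/a')) // client 1 handled /a
console.log(pool.request('/b')) // client 2 handled /b
console.log(pool.request('/c')) // client 1 handled /c
```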

Undici has a mocking system for a better developer experience while using Node core. We can mock a request with Undici using the same API, which makes it possible to implement networkless HTTP.

For a better understanding of how this works, take a look at the code below:

// here, we mock a request using the same API,
// which hints that we can implement networkless HTTP

import { MockAgent, setGlobalDispatcher } from 'undici'

const mockAgent = new MockAgent()

setGlobalDispatcher(mockAgent)

// pass the base URL of the request

const mockPool = mockAgent.get('http://localhost:3000')

// intercept the request

mockPool.intercept({
  path: '/bank-transfer',
  method: 'POST',
  body: JSON.stringify({
    recipient: '1960',
    amount: '200'
  })
}).reply(200, {
  message: 'processing the transaction. kindly wait'
})

At this point, to implement actual networkless HTTP, we need to create and route an agent to our HTTP server rather than a mock. How do we do that? We can create a module that leverages light-my-request to inject an HTTP request into our server.

import { createServer } from 'node:http'
import inject from 'light-my-request'

const dispatch = function (req, res) {
  const reply = 'Hello everyone'
  res.writeHead(200, {
    'Content-Type': 'text/plain',
    'Content-Length': reply.length
  })
  res.end(reply)
}

// the same dispatch function can back a real server...
const server = createServer(dispatch)

// ...or receive an injected, networkless request
const res = await inject(dispatch, { method: 'get', url: '/' })

So far, we have a public API to make HTTP requests and a module to inject into the server. Can we possibly merge the two modules? This would have been easy if light-my-request were compatible with Express, but it is not.

This is where Fastify comes in.

Fastify

Fastify is a popular open-source framework for Node.js, and it supports light-my-request. You can simply inject with app.inject as seen below:

import fastify from 'fastify'

const app = fastify()
app.get('/', async () => "Hi everyone")

const res = await app.inject('/')
console.log(res.body.toString())

There is a library that makes it easy to use an Undici dispatcher with a Fastify instance: fastify-undici-dispatcher. Here is an example of how to use it:

import { request, Agent } from 'undici'
import fastify from 'fastify'
import FastifyUndiciDispatcher from 'fastify-undici-dispatcher'

const server = fastify()
server.get('/', async (req, reply) => {
  return 'Hello everyone'
})

const dispatcher = new FastifyUndiciDispatcher({
  // the dispatcher and domain options are optional
  dispatcher: new Agent(),
  domain: '.local'
})
dispatcher.route('myserver', server)

const res = await request('http://myserver.local', {
  dispatcher
})

console.log(await res.body.text())

Platformatic: Supercharging Networkless HTTP with Undici and Fastify

Platformatic offers a backend development solution with robust support for Networkless HTTP. Developers can build networkless HTTP servers with Undici and Fastify within their Platformatic runtime.

To do so, start by running this command to scaffold a Platformatic project in your folder:

npm create platformatic@latest

Here, we will create three services: foo, bar, and composer. foo and bar are networkless HTTP services, which the composer makes interact.

Indicate that you want to use TypeScript for foo and bar, but disable it for composer. When asked which service to expose, choose the composer.

You will be prompted to answer a series of questions in this order.

$ npm create platformatic@latest
 Hello User, welcome to Platformatic 1.13.1!
 Let's start by creating a new project.
? Which kind of project do you want to create? Runtime
? Where would you like to create your project? platformatic-runtime
? Where would you like to load your services from? platformatic-runtime\services
? Do you want to create the github action to deploy this application to 
Platformatic Cloud? no
? Do you want to enable PR Previews in your application? no
? Do you want to init the git repository? no
[08:00:37] INFO: Let's create a first service!
? What is the name of the service? foo
? Which kind of project do you want to create? Service
? Do you want to use TypeScript? yes
[08:00:45] INFO: Configuration file platformatic.service.json successfully created.
[08:00:45] INFO: Typescript configuration file C:\Users\USER\johnMans\plt\platformatic-runtime\services\foo\tsconfig.json successfully created.
[08:00:45] INFO: Plugins folder "plugins" successfully created.
[08:00:45] INFO: Routes folder "routes" successfully created.
[08:00:45] INFO: Test folder "tests" successfully created.
✔ Types generated!
? Do you want to create another service? yes
? What is the name of the service? bar
? Which kind of project do you want to create? Service
? Do you want to use TypeScript? yes
[08:01:05] INFO: Configuration file platformatic.service.json successfully created.
[08:01:05] INFO: Typescript configuration file C:\Users\USER\johnMans\plt\platformatic-runtime\services\bar\tsconfig.json successfully created.
[08:01:05] INFO: Plugins folder "plugins" successfully created.
[08:01:05] INFO: Routes folder "routes" successfully created.
[08:01:05] INFO: Test folder "tests" successfully created.
✔ Types generated!
? Do you want to create another service? yes
? What is the name of the service? composer
? Which kind of project do you want to create? Composer
? Do you want to use TypeScript? no
? Which services do you want to expose via Platformatic Composer? foo, bar
[08:01:40] INFO: Configuration file platformatic.composer.json successfully created.
[08:01:40] INFO: Plugins folder "plugins" successfully created.
[08:01:40] INFO: Routes folder "routes" successfully created.
[08:01:40] INFO: Test folder "tests" successfully created.
? Do you want to create another service? no
? Which service should be exposed? composer
? What port do you want to use? 3042
[08:01:57] INFO: Configuration file platformatic.runtime.json successfully created.
✔ Installing dependencies...

All done! Please open the project directory and check the README.

To start your application run 'npm start'.

We created two Platformatic runtime services, "foo" and "bar", as well as a Composer for both services. The services "foo" and "bar" will be exposed via the Platformatic Composer, and the Composer will be configured to use them.

To start the application, run the command npm start. This will start your server on the specified port, in this case, port 3042.

Testing the Code with Platformatic Composer

Once your server is running, you can test your application by making a GET request to the route /foo/example. This can be done by opening the OpenAPI documentation interface in your browser.

Next, we use the command pushd services/foo to change the current working directory and push it onto a stack. This allows us to navigate to the desired directory in our project structure.

To create a Platformatic client, run the command:

npx platformatic client --runtime bar --name bar

This command creates a new client named "bar" in the "foo" sub-directory. This client will be used to interact with the "bar" service in our Platformatic Composer.

Navigate to the “routes” folder inside the “foo” service and add the following code to the “root.ts” file:

/// <reference path="../global.d.ts" />
/// <reference path="../bar/bar.d.ts" />
import { FastifyInstance, FastifyPluginOptions } from 'fastify'

declare module 'fastify' {
  interface FastifyInstance {
    example: string
  }
}

export default async function (fastify: FastifyInstance, opts: FastifyPluginOptions) {
  fastify.get('/example', async (request, reply) => {
    return { hello: fastify.example }
  })

  fastify.get('/temple-bar', async (request, reply) => {
    const res = await request.bar.getExample()
    return { fromBar: res }
  })
}

This code adds a new route, “/temple-bar”, which makes a request to the "bar" service through the generated client and returns the result.

Save the file to restart the server; you will observe that a new route, /foo/temple-bar, has been created.

You can then proceed to make a GET request to this route and observe the results. This demonstrates how the new routes have been successfully added to the client and how they interact with the "bar" service.

In example.ts, we printed “hello: nodeconf” using fastify.decorate.

Wrapping Up

This article explored the deficiencies of the MVC and microservice architectures, and how the modular monolith architecture offers a middle ground with an array of benefits. Notably, modular monoliths have no standard model for interface interaction, which is where networkless HTTP comes in as a scalable way to build complex backend applications easily.

We also explored how to build a networkless HTTP application with Platformatic using Fastify and Undici.

For any questions you encounter while following this guide, or to simply join our community, find us on Discord.