Table of contents
- Prerequisites
- Create an AI Warp Application
- Setting Up Prompt Decorators
- Setting up Rate Limiting
- Creating a Client for AI Warp OpenAPI Server
- Adding Authentication with GitHub OAuth2
- Creating Plugins for Custom OAuth2 Flow Handling
- Building a React application
- Creating Utility Functions for the React App
- Fetching a Prompt from AI Warp
- Handling Prompt Submit
- Creating a Terms Component
- Building our React Application
- Restructuring Static Files
- Dockerizing and Running your Application
- Deploying to Fly.io
- Wrapping Up
In the fast-paced world of software development, the path to learning a new stack can feel overwhelming. With a vast array of programming languages, frameworks, and specialties, it's tough to know where to start and what skills to prioritize.
Now imagine having access to a personalized plan, tailored to your interests and goals.
In this tutorial, we will show you how to build a Developer Roadmap Generator with OpenAI and React using Platformatic AI Warp.
Platformatic AI Warp is a gateway that simplifies integrating AI providers, including OpenAI, into your application. This guide will teach you how to set up a Platformatic AI Warp application, add GitHub OAuth2 for authentication, create a React frontend to interact with AI Warp, and deploy your application to Fly.io.
The complete code is available on GitHub, and a live version of this application can be found here.
Prerequisites
To follow this tutorial, you will need:
Basic understanding of Platformatic and JavaScript
Node.js LTS version
Docker installed on your machine for easy deployment
A GitHub account for OAuth2 authentication
Create an AI Warp Application
To create and run your AI Warp application, run the Platformatic creator and follow the steps below:
npx create-platformatic@latest
Select Application, then @platformatic/ai-warp
Enter your app name, in our case developer-roadmap-generator
Select your AI provider
Enter the model you want to use
Enter the API key if you are using a hosted provider such as OpenAI
Then run the command to start AI Warp:
npm start
You will see your AI Warp application when you open http://localhost:3042/ in your browser.
NOTE: Check out our guide to running a local model of Llama2 and adding it as an AI provider to your platformatic.json file.
Setting Up Prompt Decorators
With Platformatic AI Warp, you can pre-set a system prompt to gain more control over how the LLM service is used by adding promptDecorators.
To do this, navigate to the platformatic.json file in your AI Warp application's services/ai folder and update it as shown below.
// services/ai/platformatic.json
{
  "promptDecorators": {
    "prefix": "You are an expert career advisor specializing in creating detailed and structured developer roadmaps. You will generate an in-depth and comprehensive roadmap for users in markdown format including stages, skills, and resources required to progress from a beginner to an advanced developer.\nThe specific roadmap you need to create is: ",
    "suffix": "Please ensure the roadmap is very detailed, covering key areas such as foundational knowledge, programming languages, frameworks, tools, best practices, and continuous learning strategies."
  }
}
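Conceptually, the decorators wrap every user prompt before it reaches the model: the prefix is prepended and the suffix appended. Here is a minimal sketch of that behavior (the decorate helper is illustrative, not AI Warp's internal API):

```javascript
// Illustrative only: how prefix/suffix decorators wrap a user prompt.
function decorate(userPrompt, { prefix = "", suffix = "" }) {
  return `${prefix}${userPrompt}${suffix}`;
}

const decorators = {
  prefix: "The specific roadmap you need to create is: ",
  suffix: " Please ensure the roadmap is very detailed.",
};
const finalPrompt = decorate("Backend developer roadmap", decorators);
```

This is why the prefix above ends with "The specific roadmap you need to create is: " — the user's raw input slots directly after it.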
Setting up Rate Limiting
We need to set up rate limits and disable AI Warp's default homepage. To do this, navigate to platformatic.json inside the services/ai folder and add the following rate limits.
// services/ai/platformatic.json
{
"showAiWarpHomepage": false,
"rateLimiting": {
"max": 100,
"timeWindow": "1 minute",
"maxByClaims": [
{
"claim": "userType",
"claimValue": "premium",
"max": 1000
}
]
}
}
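With this configuration, ordinary users are limited to 100 requests per minute, while requests whose JWT carries a userType claim of premium get 1000. Conceptually, the limit resolution works like the sketch below (illustrative only, not AI Warp's internal implementation):

```javascript
// Illustrative only: pick the request limit based on the token's claims.
function resolveMax(claims, { max, maxByClaims = [] }) {
  for (const rule of maxByClaims) {
    if (claims[rule.claim] === rule.claimValue) return rule.max;
  }
  return max; // fall back to the default limit
}

const config = {
  max: 100,
  maxByClaims: [{ claim: "userType", claimValue: "premium", max: 1000 }],
};
```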
Creating a Client for AI Warp OpenAPI Server
To create a frontend client for a remote OpenAPI server for AI Warp, run the command below in the services/ai directory:
npx platformatic client --frontend http://0.0.0.0:3042 --name AI
This will create a folder in your project directory with three files:
- AI-types.d.ts: contains the types and interfaces for your AI Warp API.
- AI.mjs: the client, a JavaScript implementation of your OpenAPI server, including the routes and endpoints from AI Warp.
- AI.openapi.json: a JSON object describing your client, including the AI Warp endpoints.
NOTE: Make sure to change the URL and directory name in the command to your application URL and name.
Adding Authentication with GitHub OAuth2
To ensure that only authenticated users can generate developer roadmaps with AI, we will integrate GitHub OAuth2. This section will guide you through the process of setting up and configuring GitHub OAuth2 in your application.
First, follow the steps outlined in the GitHub documentation to create a new OAuth2 application, and save your GitHub OAuth credentials in your .env file.
// .env
GITHUB_OAUTH_CLIENT_SECRET=xxxxxxxx
GITHUB_OAUTH_CLIENT_ID=xxxxxxxxx
Install Fastify Packages
Install the @fastify/oauth2, fastify-cookie, fastify-plugin, and undici packages in the root of your application with the command:
npm i @fastify/oauth2 fastify-cookie fastify-plugin undici
In the services/ai folder, create a new folder named auth. Inside it, create a file utils.js and add the code below:
// services/ai/auth/utils.js
const { request } = require("undici");
async function callGHEndpoint({ path, method, body, accessToken }) {
  const res = await request(`https://api.github.com/${path}`, {
    method: method.toUpperCase(),
    headers: {
      authorization: `Bearer ${accessToken}`,
      accept: "application/json",
      "X-GitHub-Api-Version": "2022-11-28",
      "User-Agent": "A Platformatic App",
    },
    // Only send a body for requests that actually carry one.
    body: body && Object.keys(body).length > 0 ? JSON.stringify(body) : undefined,
  });
  return await res.body.json();
}
module.exports = {
callGHEndpoint,
};
Here, we create a utility function called callGHEndpoint, which makes authenticated requests to the GitHub API: it constructs the request URL, sets the required headers, and parses the response as JSON.
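The headers follow GitHub's REST API conventions: a bearer token for authentication, an explicit API version, and a User-Agent, which GitHub requires on every request. Factored out as a standalone function (an illustrative refactor, not part of the tutorial code), they look like this:

```javascript
// Illustrative refactor: the headers GitHub's REST API expects on every request.
// A User-Agent is mandatory; X-GitHub-Api-Version pins the API version.
function buildGitHubHeaders(accessToken) {
  return {
    authorization: `Bearer ${accessToken}`,
    accept: "application/json",
    "X-GitHub-Api-Version": "2022-11-28",
    "User-Agent": "A Platformatic App",
  };
}
```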
Next, we integrate the OAuth2 plugin into our AI Warp application. Inside the auth folder, create a new file, github.js, and add the code blocks below.
// services/ai/auth/github.js
"use strict";
/// <reference path="../global.d.ts" />
const { callGHEndpoint } = require("./utils");
const oauthPlugin = require("@fastify/oauth2");
const CALLBACK_URL = process.env.CALLBACK_URL || "http://localhost:3042";
/** @param {import('fastify').FastifyInstance} app */
module.exports = async function (app, opts) {
app.register(oauthPlugin, {
name: "githubOAuth2",
credentials: {
client: {
id: opts.GITHUB_OAUTH_CLIENT_ID,
secret: opts.GITHUB_OAUTH_CLIENT_SECRET,
},
auth: oauthPlugin.GITHUB_CONFIGURATION,
},
startRedirectPath: "/login/github",
callbackUri: `${CALLBACK_URL}/login/github/callback`,
cookie: {
path: "/",
secure: true,
sameSite: "none",
},
});
app.get("/login/github/callback", async (req, res) => {
try {
const { token } =
await app.githubOAuth2.getAccessTokenFromAuthorizationCodeFlow(req);
const githubUser = await callGHEndpoint({
path: "user",
accessToken: token.access_token,
method: "GET",
body: {},
});
const user = {
username: githubUser.login,
full_name: githubUser.name,
image: githubUser.avatar_url,
email: githubUser.email,
};
const userToken = Buffer.from(user.email).toString("base64");
const redirectUrl = `${CALLBACK_URL}?username=${encodeURIComponent(user.username)}&token=${userToken}`;
res.redirect(redirectUrl);
} catch (error) {
app.log.error(error);
res.send("Authentication failed");
}
});
}
Here, we added GitHub OAuth2 authentication to our AI Warp application so users can log in with GitHub. First, we imported the utility function that calls the GitHub API. Next, we registered @fastify/oauth2 on the Fastify instance and configured it with our GitHub credentials. The callback URI handles the response from GitHub after authentication, and the cookie settings secure the application's OAuth2 flow.
In the second code block, we defined a route handler for /login/github/callback to handle the OAuth2 callback, which exchanges the authorization code for an access token after the user authorizes the application. It then fetches the user's profile information and creates a user object. Finally, we redirect the user to the CALLBACK_URL, our AI application's homepage.
With this approach, our application initiates the OAuth2 flow when users attempt to log in, redirects them to GitHub for authentication, and handles the callback to complete the login process. This way, only authenticated users can use the developer roadmap application.
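On the frontend side, the React app (built later in this tutorial) reads these query parameters back out of the redirect URL. A small standalone sketch of that extraction (the helper name is illustrative):

```javascript
// Illustrative only: extract the login result that the callback handler
// appends to the redirect URL (?username=...&token=...).
function parseLoginRedirect(url) {
  const params = new URL(url).searchParams;
  const token = params.get("token");
  if (!token) return null; // not a post-login redirect
  return { username: params.get("username"), token };
}
```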
Creating Plugins for Custom OAuth2 Flow Handling
In this section, we will add a custom method to handle actions after the OAuth2 flow is completed. We will use the fastify-plugin module to create a Fastify plugin that manages user login and redirection after successful authentication.
Create an Authentication Plugin
First, in the services/ai/plugins folder, create a new file named authentication.js and add the following code.
// services/ai/plugins/authentication.js
const fp = require("fastify-plugin");
async function plugin(app, options) {
  app.decorate("afterOAuth2Flow", async (user, externalId, req, res) => {
    // Attach the authenticated user to the request.
    req.user = user;
    const loggedInUser = await app.loginUser({
      email: user.email,
      username: user.username,
      externalId,
    });
    app.log.info({ loggedInUser });
    return res.redirect(app.config.PLT_MAIN_URL);
  });
}
plugin[Symbol.for("skip-override")] = true;
module.exports = fp(plugin, {
name: "authentication",
});
We extended our Fastify application with a custom OAuth2 flow-handling plugin to manage user login and redirection after authentication. We defined a plugin function that adds a custom afterOAuth2Flow method to the Fastify instance.
The afterOAuth2Flow method receives the authenticated user's details, attaches the user object to the request, and logs the user in using their email, username, and external ID. It then logs the logged-in user's information and redirects the user to the main URL, the application homepage.
Finally, we exported the plugin using fastify-plugin, naming it 'authentication' so it integrates seamlessly with our AI application.
Setting up Environment Variables
Before building our React application, let's inspect and set up our environment variables. Navigate to the .env file in the root folder of your application and verify that it looks similar to the one below:
PLT_SERVER_HOSTNAME=127.0.0.1
PORT=3042
PLT_SERVER_LOGGER_LEVEL=info
PLT_MANAGEMENT_API=true
PLT_MAIN_URL=http://127.0.0.1:3042
PLT_AI_PROVIDER=openai
PLT_AI_MODEL=gpt-3.5-turbo
PLT_AI_API_KEY=sk-
PLT_AI_TYPESCRIPT=false
PLT_AI_URL=http://0.0.0.0:3042/
PLT_GITHUB_OAUTH_CLIENT_ID=
PLT_GITHUB_OAUTH_CLIENT_SECRET=
VITE_AI_URL=http://127.0.0.1:3042
Building a React application
Follow the official Vite documentation to create a React application with the latest version of React. Run this command in the services/ai folder of your application:
npm create vite@latest
Here, you will be presented with a few options. For this tutorial, choose:
Project name: ai-frontend
Package name: ai-frontend
Select a framework: React
Select a variant: JavaScript
Once you have created the project, go to the project directory and install the packages:
npm i @heroicons/react axios next-themes react-hot-toast react-markdown react-modal remark-gfm
Creating Utility Functions for the React App
Create a new file utils.js inside a utils folder in the src folder of your frontend application, and add the code block below:
// services/ai/src/utils/utils.js
import { useState, useCallback } from "react";
import { toast } from "react-hot-toast";
import { prompt as apiPrompt, setBaseUrl } from "../../AI.mjs";
setBaseUrl(import.meta.env.VITE_AI_URL);
We imported the prompt function (aliased as apiPrompt) and the setBaseUrl function from our generated client at ../../AI.mjs. Using setBaseUrl, we configure our application to use the base URL specified in our environment variables (import.meta.env.VITE_AI_URL).
This approach allows us to dynamically set the base URL for API requests, which is particularly useful for handling different environments such as development, testing, and production.
You can set the base URL directly in your application code using setBaseUrl, either by hardcoding it or by reading it from an environment configuration file (.env).
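For example, a small fallback helper (illustrative; the tutorial itself reads VITE_AI_URL directly) keeps local development working when the variable is unset:

```javascript
// Illustrative only: resolve the API base URL with a local-dev fallback.
// 3042 is the default AI Warp port used throughout this tutorial.
function resolveBaseUrl(env) {
  return env.VITE_AI_URL || "http://127.0.0.1:3042";
}
```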
Fetching a Prompt from AI Warp
Let’s create a utility function to fetch a prompt:
// services/ai/src/utils/utils.js
export const useFetchPrompt = () => {
const [prompt, setPrompt] = useState("");
const [response, setResponse] = useState("");
const [loading, setLoading] = useState(false);
const fetchPrompt = useCallback(async (userPrompt) => {
setLoading(true);
try {
const fetchedPrompt = await apiPrompt({ prompt: userPrompt });
setPrompt(userPrompt);
setResponse(fetchedPrompt.response);
toast.success("Prompt fetched successfully!");
} catch (error) {
console.error("Failed to load prompt:", error);
toast.error("Failed to fetch data.");
} finally {
setLoading(false);
}
}, []);
return {
prompt,
setPrompt,
response,
setResponse,
fetchPrompt,
loading,
setLoading,
};
};
Here, we created the useFetchPrompt hook to store our AI application's prompt, response, and loading status.
Handling Prompt Submit
In this section, we will create the function for submitting our prompt on the front end.
// services/ai/src/utils/utils.js
export const handlePromptSubmit = async (prompt, setResponse, setLoading) => {
setLoading(true);
try {
const res = await fetch(`${import.meta.env.VITE_AI_URL}/api/v1/stream`, {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify({ prompt }),
});
if (!res.ok) {
const errorResponse = await res.json();
setResponse(`Error: ${errorResponse.message} (${res.status})`);
return;
}
const reader = res.body.getReader();
const decoder = new TextDecoder();
let completeResponse = "";
while (true) {
const { done, value } = await reader.read();
if (done) break;
const decodedValue = decoder.decode(value, { stream: true });
const lines = decodedValue.split("\n");
for (let i = 0; i < lines.length; i++) {
const line = lines[i];
if (line.startsWith("event: ")) {
const eventType = line.substring("event: ".length);
const dataLine = lines[++i];
if (dataLine && dataLine.startsWith("data: ")) {
const data = dataLine.substring("data: ".length);
const json = JSON.parse(data);
if (eventType === "content") {
completeResponse += json.response;
} else if (eventType === "error") {
setResponse(`Error: ${json.message} (${json.code})`);
return;
}
}
}
}
}
setResponse(completeResponse);
} catch (error) {
console.error("Failed to fetch data:", error);
setResponse("Failed to fetch data.");
} finally {
setLoading(false);
}
};
export const handleKeyDown = (event, handlePromptSubmitCallback) => {
if (event.key === "Enter" && !event.shiftKey) {
event.preventDefault();
handlePromptSubmitCallback();
}
};
This function sends a user prompt to the /api/v1/stream endpoint of the base URL and handles the response. It reads the body as a stream, parses each server-sent event, and appends the content pieces into a complete response. Any errors during the request or while processing the stream are caught and logged, and an error message is shown to the user.
The handleKeyDown function checks whether the Enter key is pressed without the Shift key. If so, it prevents the default action (e.g., inserting a newline) and triggers the handlePromptSubmitCallback function used to submit the prompt.
Creating a Terms Component
In the src/components folder, create a new file terms.jsx, where you will add a terms and conditions modal for our application.
// services/ai/src/components/terms.jsx
import { useState } from "react";
import Modal from "react-modal";
import "../App.css";
const TermsOfService = () => {
const [modalIsOpen, setModalIsOpen] = useState(false);
const currentDate = new Date().toLocaleString();
const openModal = () => setModalIsOpen(true);
const closeModal = () => setModalIsOpen(false);
return (
<>
<footer className="fixed bottom-0 w-full bg-gray-800 text-white py-2 text-center">
<button onClick={openModal} className="text-sm">
Terms of Service
</button>
</footer>
<Modal
isOpen={modalIsOpen}
onRequestClose={closeModal}
contentLabel="Terms of Service"
className="modal bg-white p-6 rounded-lg shadow-lg max-w-2xl mx-auto my-10"
overlayClassName="overlay fixed top-0 left-0 right-0 bottom-0 bg-black bg-opacity-50 flex items-center justify-center"
>
<div className="max-w-2xl mx-auto">
<h1 className="text-3xl font-bold mb-6">Terms of Service</h1>
<p className="mb-4">
Welcome to Platformatic AI roadmap generator. By using our service,
you agree to the following terms:
</p>
<h2 className="text-2xl font-semibold mb-4">
1. Acceptance of Terms
</h2>
<p className="mb-4">
By accessing and using our services, you accept and agree to be
bound by the terms and provisions of this agreement.
</p>
<h2 className="text-2xl font-semibold mb-4">
2. Description of Service
</h2>
<p className="mb-4">
Our service provides users with access to resources and tools for
managing and interacting with AI models.
</p>
<h2 className="text-2xl font-semibold mb-4">3. User Obligations</h2>
<p className="mb-4">
You must provide accurate registration information and keep it up to
date. You are responsible for maintaining the confidentiality of
your account.
</p>
<h2 className="text-2xl font-semibold mb-4">4. Termination</h2>
<p className="mb-4">
We reserve the right to terminate or suspend your account at our
sole discretion for conduct that we believe violates these terms.
</p>
<p className="mt-8">Last updated: {currentDate}</p>
<button
onClick={closeModal}
className="mt-4 bg-gray-800 text-white py-2 px-4 rounded-lg"
>
Close
</button>
</div>
</Modal>
</>
);
};
export default TermsOfService;
Building our React Application
Update App.jsx with the utility functions and add a React component for our application.
// services/ai/src/App.jsx
import { useEffect, useState } from "react";
import "./App.css";
import { useFetchPrompt, handlePromptSubmit } from "./utils/utils.js";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
function App() {
const {
prompt,
setPrompt,
response,
setResponse,
fetchPrompt,
loading,
setLoading,
} = useFetchPrompt();
const [isAuthenticated, setIsAuthenticated] = useState(false);
const [user, setUser] = useState(null);
useEffect(() => {
const query = new URLSearchParams(window.location.search);
const token = query.get("token");
const username = query.get("username");
const fullname = query.get("fullname");
const image = query.get("image");
const email = query.get("email");
if (token) {
setIsAuthenticated(true);
setUser({ username, fullname, image, email });
}
}, []);
const handlePromptChange = (event) => {
setPrompt(event.target.value);
};
Here, we import our utility functions useFetchPrompt and handlePromptSubmit to handle API requests. ReactMarkdown and remarkGfm render the response from our API as GitHub-flavored markdown.
We then destructured the useFetchPrompt custom hook. The isAuthenticated state tracks whether the user is authenticated, and the user state stores the authenticated user's details; handlePromptChange updates the prompt state whenever the input field value changes.
// services/ai/src/App.jsx
const submit = () => handlePromptSubmit(prompt, setResponse, setLoading);
const handleLogin = () => {
window.location.href = `${import.meta.env.VITE_AI_URL}/login/github`;
};
return (
<div className="min-h-screen bg-gray-900 text-white flex flex-col items-center justify-center">
<div className="p-6 bg-gray-800 rounded-lg shadow-lg w-full max-w-lg">
{!isAuthenticated ? (
<div className="mb-4">
<h1 className="text-2xl font-bold mb-2">
Platformatic AI Roadmap Generator
</h1>
<button
className="w-full bg-indigo-600 text-white py-2 px-4 rounded-lg hover:bg-indigo-700 transition duration-200 mb-4"
onClick={handleLogin}
>
Login with GitHub
</button>
</div>
) : (
<>
<div className="mb-4">
<label
className="block text-xl font-bold mb-2"
htmlFor="prompt-text"
>
Platformatic Roadmap Generator
</label>
</div>
<div className="mb-4">
<textarea
className="w-full p-2 text-black rounded-lg focus:outline-none focus:ring-2 focus:ring-indigo-500"
id="prompt-text"
cols="60"
rows="3"
value={prompt}
onChange={handlePromptChange}
placeholder="Get your AI roadmap here..."
/>
</div>
<div className="mb-4">
<button
className="w-full bg-indigo-600 text-white py-2 px-4 rounded-lg hover:bg-indigo-700 transition duration-200"
onClick={submit}
disabled={loading}
>
{loading ? "Generating..." : "Get Roadmap"}
</button>
</div>
{!loading && response && (
<div className="p-4 bg-gray-700 rounded-lg mt-4 w-full max-h-96 overflow-y-auto">
<div id="messages" className="prose prose-invert">
<ReactMarkdown remarkPlugins={[remarkGfm]}>
{response}
</ReactMarkdown>
</div>
</div>
)}
</>
)}
</div>
</div>
);
}
export default App;
Here, we authenticate users and redirect them to the developer roadmap homepage. A textarea input field allows the authenticated user to enter a prompt and receive an AI-personalized roadmap. The submit button triggers the handlePromptSubmit function, which sends the prompt to our AI provider's API. We then render the response as markdown using ReactMarkdown with remarkGfm.
Restructuring Static Files
In this section, we will restructure our application and serve our frontend application as static files using @fastify/static.
First, move your frontend application's src and public folders into the services/ai folder, and install all your frontend packages in the package.json file in the services/ai folder.
Rendering the Static Files
To serve the frontend application as a static file, first install the @fastify/static package.
npm i @fastify/static
Create a new file named static.js in the plugins folder of your services/ai project directory and add the code below:
// services/ai/plugins/static.js
const fp = require("fastify-plugin");
const path = require("path");
async function plugin(app, options) {
app.register(require("@fastify/static"), {
root: path.join(__dirname, "..", "dist"),
prefix: "/",
});
}
module.exports = fp(plugin, {
name: "static",
});
Here, we created a plugin function that registers the @fastify/static plugin on the Fastify instance. We set the root directory for static files to the dist folder and serve the files from the root URL path (/).
Dockerizing and Running your Application
Before we deploy our application to Fly.io, we first need to dockerize it. To do this, create a Dockerfile in the root directory of your project and add the script.
# Dockerfile
FROM node:20-alpine
ARG VITE_AI_URL
ENV VITE_AI_URL=$VITE_AI_URL
ENV APP_HOME=/home/app/node/
WORKDIR $APP_HOME
COPY package.json package.json
COPY package-lock.json package-lock.json
COPY platformatic.json platformatic.json
COPY services services
RUN npm install
RUN cd services/ai && \
npm install && \
npm run build
EXPOSE 3042
CMD ["npm", "start"]
In this Dockerfile, we use the node:20-alpine base image for our application. We added a build-time argument, VITE_AI_URL, which is also exposed as an environment variable inside the container. We then set the working directory to /home/app/node/, where the application files are copied.
npm install installs the dependencies from our package.json, and the build step inside services/ai installs the frontend dependencies and builds the static assets. We then exposed port 3042 for running the application. The CMD ["npm", "start"] instruction launches the application when the container starts.
To build the Docker image, use the command:
docker build -t roadmap-generator:latest .
Then, run your entire application with the command:
docker run --env-file .env -p 3042:3042 roadmap-generator:latest
You can access the application at http://localhost:3042. Click on the Login with GitHub button, and after authenticating you will be redirected back to the homepage.
Deploying to Fly.io
Now we can deploy our application to Fly.io. To do so, first follow the Fly documentation to install the Fly CLI on your local machine.
Then run the command to deploy your application:
fly deploy
Follow the steps outlined in the Fly documentation to complete deployment.
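Running fly launch generates a fly.toml configuration file for you; the sketch below shows roughly what it should contain for this app (the app name and URL are placeholders). In particular, the internal port must match AI Warp's 3042, and VITE_AI_URL must be passed as a build argument so the Dockerfile above can bake it into the frontend build:

```toml
# fly.toml (sketch; app name and URL are placeholders)
app = "developer-roadmap-generator"

[build.args]
  VITE_AI_URL = "https://developer-roadmap-generator.fly.dev"

[http_service]
  internal_port = 3042
  force_https = true
```

Remember to provide runtime secrets such as PLT_AI_API_KEY and the GitHub OAuth credentials with fly secrets set rather than committing them to your repository.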
Wrapping Up
In this tutorial, we built a developer roadmap generator using Platformatic AI Warp, integrated GitHub OAuth2 authentication, and deployed the application with Docker and Fly.io. We also learned how to restructure our application to serve static files.
To extend the application, you could add PDF downloads for personalized roadmaps.
Further Resources