How to build an AI assistant with OpenAI, Vercel AI SDK, and Ollama with Next.js
In today's blog post, we'll build an AI Assistant using three different AI models: Whisper and TTS from OpenAI, and Llama 3.1 from Meta.
While exploring AI, I wanted to try different things and create an AI assistant that works by voice. This curiosity led me to combine OpenAI's Whisper and TTS models with Meta's Llama 3.1 to build a voice-activated assistant.
Here's how these models will work together:
* First, we'll send our audio to the Whisper model, which will convert it from speech to text.
* Next, we'll pass that text to the Llama 3.1 model. Llama will understand the text and generate a response.
* Finally, we'll take Llama's response and send it to the TTS model, turning the text back into speech. We'll then stream that audio back to the client.
Let's dive in and start building this excellent AI Assistant!
Getting started
We will use different tools to build our assistant. To build our client side, we will use Next.js. However, you could choose whichever framework you prefer.
To use our OpenAI models, we will use their TypeScript / JavaScript SDK. To use this API, we need the following environment variable: OPENAI_API_KEY.
To get this key, we need to log in to the OpenAI dashboard and find the API keys section. Here, we can generate a new key.
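Once we have the key, we can put it in an environment file at the root of our project. In a Next.js app, a .env.local file is a good place for it; the value below is just a placeholder:

```bash
# .env.local
OPENAI_API_KEY=your-api-key-here
```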
Awesome. Now, to use our Llama 3.1 model, we will use Ollama and the Vercel AI SDK, utilizing a provider called ollama-ai-provider.
Ollama will allow us to download our preferred model (we could even use a different one, like Phi) and run it locally. The Vercel SDK will facilitate its use in our Next.js project.
To use Ollama, we just need to download it and choose our preferred model. For this blog post, we are going to select Llama 3.1. After installing Ollama, we can verify if it is working by opening our terminal and writing the following command:
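With Llama 3.1 as the chosen model, that command looks like this:

```bash
ollama run llama3.1
```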
Notice that I wrote "llama3.1" because that's my chosen model, but you should use the one you downloaded.
Kicking things off
It's time to kick things off by setting up our Next.js app. Let's start with this command:
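Using npm, that's:

```bash
npx create-next-app@latest
```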
After running the command, you'll see a few prompts to set the app's details. Let's go step by step:
* Name your app.
* Enable the App Router.
The other steps are optional and entirely up to you. In my case, I also chose to use TypeScript and Tailwind CSS.
Now that's done, let's go into our project and install the dependencies that we need to run our models:
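Based on what we'll use in this post, that means the OpenAI SDK, the Vercel AI SDK, and the Ollama provider:

```bash
npm install openai ai ollama-ai-provider
```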
Building our client logic
Now, our goal is to record our voice, send it to the backend, and then receive a voice response from it.
To record our audio, we need to use client-side functions, which means we need to use Client Components. In our case, we don't want to turn our whole page into a client component and ship the whole tree in the client bundle; instead, we'd rather keep the page as a Server Component and import our client components into it to progressively enhance our application.
So, let's create a separate component that will handle the client-side logic.
Inside our app folder, let's create a components folder, and here, we will be creating our component:
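Assuming we call the component AudioRecorder (the name we'll keep using throughout this post), the structure looks like this:

```
app/
└── components/
    └── AudioRecorder.tsx
```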
Let's initialize our component. I went ahead and added a button with some styles in it:
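Here's a minimal version; the Tailwind classes are just my own styling choices:

```tsx
// app/components/AudioRecorder.tsx
"use client";

export function AudioRecorder() {
  return (
    <button className="rounded-full bg-blue-500 px-6 py-4 text-white shadow-lg">
      Hold to talk
    </button>
  );
}
```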
And then import it into our Page Server component:
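In app/page.tsx, that could look like this:

```tsx
// app/page.tsx
import { AudioRecorder } from "./components/AudioRecorder";

export default function Home() {
  return (
    <main className="flex min-h-screen items-center justify-center">
      <AudioRecorder />
    </main>
  );
}
```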
Now, if we run our app, we should see our button rendered on the page.
Awesome! Our button doesn't do anything yet, but our goal is to record our audio and send it somewhere; for that, let's create a hook that will contain our logic:
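I'm putting it in an app/hooks folder and naming it useRecordVoice.tsx (the name we'll keep referencing later); for now it's just an empty shell:

```tsx
// app/hooks/useRecordVoice.tsx
"use client";

export const useRecordVoice = () => {
  // We'll fill this in step by step below
  return {};
};
```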
We will use two browser APIs to record our voice: navigator and MediaRecorder. The navigator API gives us access to the user's media devices, such as the microphone, and MediaRecorder helps us record the audio from them. This is how they're going to play out together:
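Here's a sketch of the hook at this stage; the state and method names are my own, and we'll walk through each piece next:

```tsx
"use client";

import { useState } from "react";

export const useRecordVoice = () => {
  // Whether we are currently recording
  const [isRecording, setIsRecording] = useState(false);
  // The MediaRecorder instance, once the user grants microphone access
  const [mediaRecorder, setMediaRecorder] = useState<MediaRecorder | null>(null);

  const startRecording = async () => {
    // Bail out if the browser doesn't expose any media devices
    if (!navigator.mediaDevices?.getUserMedia) return;

    // Ask for microphone access and get an audio stream
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

    // Record the stream with MediaRecorder
    const recorder = new MediaRecorder(stream);
    recorder.start();

    setMediaRecorder(recorder);
    setIsRecording(true);
  };

  const stopRecording = () => {
    // Only stop if a recorder actually exists
    if (mediaRecorder) {
      mediaRecorder.stop();
      setIsRecording(false);
    }
  };

  return { isRecording, startRecording, stopRecording };
};
```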
Let's explain this code step by step. First, we create two new states. The first one is for keeping track of when we are recording, and the second one stores the instance of our MediaRecorder.
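From the sketch above:

```tsx
const [isRecording, setIsRecording] = useState(false);
const [mediaRecorder, setMediaRecorder] = useState<MediaRecorder | null>(null);
```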
Then, we'll create our first method, startRecording. Here, we are going to have the logic to start recording our audio.
First, we check whether the user has media devices available, thanks to the navigator API, which gives us information about the user's browser environment. If there are no media devices to record our audio from, we just return. If there are, we create a stream using their audio media device:
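From the startRecording sketch above:

```tsx
// Bail out if the browser doesn't expose any media devices
if (!navigator.mediaDevices?.getUserMedia) return;

// Ask for microphone access and get an audio stream
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
```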
Finally, we go ahead and create an instance of a MediaRecorder to record this audio:
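Still inside startRecording:

```tsx
// Record the stream with MediaRecorder and remember the recorder instance
const recorder = new MediaRecorder(stream);
recorder.start();

setMediaRecorder(recorder);
setIsRecording(true);
```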
Then we need a method to stop our recording, which will be stopRecording. Here, we will just stop the recording if a media recorder exists.
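From the sketch above:

```tsx
const stopRecording = () => {
  // Only stop if a recorder actually exists
  if (mediaRecorder) {
    mediaRecorder.stop();
    setIsRecording(false);
  }
};
```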
We are recording our audio, but we are not storing it anywhere. Let's add a new useEffect and ref to accomplish this.
We need a new ref, and this is where our chunks of audio data will be stored:
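A ref is a good fit here because pushing chunks into it won't trigger re-renders (remember to import useRef from React):

```tsx
// Holds the raw audio chunks emitted by MediaRecorder
const chunks = useRef<Blob[]>([]);
```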
In our useEffect, we are going to do two main things: store those chunks in our ref as they arrive, and, when the recording stops, create a new Blob of type audio/mp3:
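A sketch of that effect (import useEffect from React as well); for now the Blob just sits in a local variable, and we'll send it to the backend later in this post:

```tsx
useEffect(() => {
  if (!mediaRecorder) return;

  // Store each chunk of audio data as it becomes available
  mediaRecorder.ondataavailable = (event: BlobEvent) => {
    chunks.current.push(event.data);
  };

  // When the recording stops, bundle the chunks into a single mp3 Blob
  mediaRecorder.onstop = () => {
    const audioBlob = new Blob(chunks.current, { type: "audio/mp3" });
    chunks.current = [];
    // We'll send audioBlob to the backend once the API route exists
  };
}, [mediaRecorder]);
```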
It is time to wire this hook with our AudioRecorder component:
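Something like this, reusing the button from before and holding it down to record (the event handlers are my own choice):

```tsx
// app/components/AudioRecorder.tsx
"use client";

import { useRecordVoice } from "../hooks/useRecordVoice";

export function AudioRecorder() {
  const { isRecording, startRecording, stopRecording } = useRecordVoice();

  return (
    <button
      onMouseDown={startRecording}
      onMouseUp={stopRecording}
      className="rounded-full bg-blue-500 px-6 py-4 text-white shadow-lg"
    >
      {isRecording ? "Recording..." : "Hold to talk"}
    </button>
  );
}
```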
Let's go to the other side of the coin: the backend!
Setting up our Server side
We want to use our models on the server to keep things safe and run faster. Let's create a new route and add a handler for it using Route Handlers from Next.js. In our app folder, let's make an api folder with the following route in it:
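That is, a chat folder with a route.ts file inside:

```
app/
└── api/
    └── chat/
        └── route.ts
```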
Our route is called "chat". In the route.ts file, we'll set up our handler. Let's start by setting up our OpenAI SDK:
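The SDK reads the API key from the environment variable we set earlier:

```ts
// app/api/chat/route.ts
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```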
In this route, we'll send the audio from the front end as a base64 string. Then, we'll receive it and turn it into a Buffer object.
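A sketch of the handler; I'm assuming the frontend sends the recording under an audio field in the JSON body (we'll match that on the client later):

```ts
export async function POST(req: Request) {
  // The frontend sends the recording as a base64 string
  const { audio } = await req.json();

  // Turn the base64 string back into binary data
  const audioBuffer = Buffer.from(audio, "base64");

  // ...Whisper, Llama 3.1, and TTS will go here
}
```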
It's time to use our first model. We want to turn this audio into text using OpenAI's Whisper speech-to-text model. Whisper needs an audio file to create the text. Since we have a Buffer instead of a file, we'll use their toFile method to convert our audio Buffer into an audio file like this:
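toFile is exported by the openai package:

```ts
import { toFile } from "openai";

// Wrap our Buffer in a file-like object that Whisper accepts
const audioFile = await toFile(audioBuffer, "audio.mp3");
```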
Notice that we specified "mp3". This is one of the many extensions that the Whisper model can use. You can see the full list of supported extensions here: https://platform.openai.com/docs/api-reference/audio/createTranscription#audio-createtranscription-file
Now that our file is ready, let's pass it to Whisper! Using our OpenAI instance, this is how we will invoke our model:
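Using whisper-1, the model behind the transcription endpoint:

```ts
// Speech to text with Whisper
const transcription = await openai.audio.transcriptions.create({
  file: audioFile,
  model: "whisper-1",
});
```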
That's it! Now, we can move on to the next step: using Llama 3.1 to interpret this text and give us an answer. We'll use two things for this. First, we'll use ollama from the ollama-ai-provider package, which lets us use the model running in our local Ollama. Then, we'll use generateText from the Vercel AI SDK to generate the text.
Side note: To make our Ollama run locally, we need to write the following command in the terminal:
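That keeps the Ollama server running so our app can talk to it:

```bash
ollama serve
```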
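With Ollama serving the model, here's a sketch of how this piece could look in our handler; I'm passing the transcription straight in as the prompt:

```ts
import { generateText } from "ai";
import { ollama } from "ollama-ai-provider";

// Ask the locally running Llama 3.1 model to answer the transcribed text
const { text: answer } = await generateText({
  model: ollama("llama3.1"),
  prompt: transcription.text,
});
```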
Finally, we have our last model: TTS from OpenAI. We want to reply to our user with audio, so this model will be really helpful. It will turn our text into speech:
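A sketch using the tts-1 model; the voice is just my pick from the available options:

```ts
// Text to speech with the OpenAI TTS model
const speech = await openai.audio.speech.create({
  model: "tts-1",
  voice: "alloy",
  input: answer,
});
```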
The TTS model will turn our response into an audio file. We want to stream this audio back to the user like this:
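The speech call resolves to a web Response, so one way to stream it on is to hand its body to the Response we return from the route handler:

```ts
// Stream the generated audio straight back to the client
return new Response(speech.body, {
  headers: { "Content-Type": "audio/mpeg" },
});
```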
And that's all of the backend code! Now, back to the frontend to finish wiring everything up.
Putting It All Together
In our useRecordVoice.tsx hook, let's create a new method that will call our API endpoint. This method will also take the response and play the audio that we're streaming from the backend back to the user.
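Here's a sketch; the helper names (blobToBase64, getResponse, playAudio) and the audio field are my own choices, matching the route handler above. We can call getResponse with the Blob we build in the onstop handler from earlier:

```tsx
// Convert the recorded Blob into a base64 string
const blobToBase64 = (blob: Blob): Promise<string> =>
  new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => {
      // Drop the "data:audio/mp3;base64," prefix
      resolve((reader.result as string).split(",")[1]);
    };
    reader.readAsDataURL(blob);
  });

// Send the audio to our route handler and play back the streamed reply
const getResponse = async (audioBlob: Blob) => {
  const audio = await blobToBase64(audioBlob);

  const response = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ audio }),
  });

  // Collect the streamed TTS audio as an ArrayBuffer and play it
  const arrayBuffer = await response.arrayBuffer();
  playAudio(arrayBuffer);
};
```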
Great! Now that we're getting our streamed response, we need to handle it and play the audio back to the user. We'll use the AudioContext API for this. This API allows us to store the audio, decode it, and play it to the user once it's ready:
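A sketch of the playAudio helper referenced above:

```tsx
const playAudio = async (arrayBuffer: ArrayBuffer) => {
  const audioContext = new AudioContext();

  // Decode the raw audio data into an AudioBuffer
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);

  // Play the decoded audio through the default output
  const source = audioContext.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioContext.destination);
  source.start();
};
```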
And that's it! Now the user should hear the audio response on their device. To wrap things up, let's make our app a bit nicer by adding a little loading indicator:
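One simple approach is to expose an isLoading flag from the hook while we wait for the backend; a sketch:

```tsx
// In useRecordVoice: flip a flag while the request is in flight
const [isLoading, setIsLoading] = useState(false);

const getResponse = async (audioBlob: Blob) => {
  setIsLoading(true);
  try {
    const audio = await blobToBase64(audioBlob);
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ audio }),
    });
    playAudio(await response.arrayBuffer());
  } finally {
    setIsLoading(false);
  }
};
```

Then return isLoading from the hook and use it in the button label, for example showing "Thinking..." while it is true.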
Conclusion
In this blog post, we saw how combining multiple AI models can help us achieve our goals. We learned to run AI models like Llama 3.1 locally and use them in our Next.js app. We also discovered how to send audio to these models and stream back a response, playing the audio back to the user.
This is just one of many ways you can use AI; the possibilities are endless. AI models are amazing tools that let us create things that were once hard to achieve with such quality. Thanks for reading; now it's your turn to build something amazing with AI!
You can find the complete demo on GitHub: AI Assistant with Whisper TTS and Ollama using Next.js...