
How to Integrate an AI API into Your Next.js Application: A Step-by-Step Guide

A complete walkthrough for adding AI-powered features to a Next.js app using a REST-compatible AI API, including streaming responses and error handling.

By Daymora

Next.js is one of the most popular frameworks for building full-stack web applications, and it is an excellent platform for integrating AI features. In this guide we will walk through the complete process of connecting a Next.js application to a REST-compatible AI API, handling streaming responses, managing API keys securely, and building a production-ready chat interface.

Prerequisites

This guide assumes you have a Next.js 14 or 15 application set up with the App Router, and that you have an AI API key from a provider that supports the OpenAI-compatible chat completions endpoint.

Setting Up Your API Key

Never expose your AI API key in client-side code. In Next.js, store your API key in a .env.local file at the root of your project:

AI_API_KEY=your_api_key_here
AI_API_BASE_URL=https://api.yourprovider.com/v1

The .env.local file is excluded from version control by default and its values are only accessible on the server side in Next.js.
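Because a missing environment variable only surfaces at request time as a confusing 401 from the provider, it can help to fail fast instead. Here is a minimal sketch of such a guard; the `requireEnv` helper is illustrative and not part of the tutorial code:

```typescript
// Illustrative helper: throws immediately if a required
// environment variable is missing, instead of letting the
// AI API reject the request later with an opaque auth error.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage inside a server-only module:
// const apiKey = requireEnv("AI_API_KEY");
// const baseUrl = requireEnv("AI_API_BASE_URL");
```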

Creating the API Route

Create a server-side API route that acts as a proxy between your frontend and the AI API. This keeps your API key secure and gives you a place to add rate limiting, logging, and error handling.

Create src/app/api/chat/route.ts:

import { NextRequest } from "next/server";

export async function POST(req: NextRequest) {
  const { messages } = await req.json();

  const response = await fetch(
    `${process.env.AI_API_BASE_URL}/chat/completions`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.AI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-4o",
        messages,
        stream: true,
      }),
    }
  );

  if (!response.ok) {
    return new Response("AI API error", { status: response.status });
  }

  return new Response(response.body, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}

This route forwards the request to the AI API and streams the response back to the client. The stream: true parameter tells the AI API to return Server-Sent Events instead of waiting for the full response.
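For reference, the streamed body consists of `data:` lines, each carrying a JSON chunk with a partial `delta`. The exact fields vary slightly by provider, but OpenAI-compatible APIs follow roughly this shape:

```
data: {"id":"chatcmpl-123","choices":[{"index":0,"delta":{"content":"Hello"}}]}

data: {"id":"chatcmpl-123","choices":[{"index":0,"delta":{"content":" there"}}]}

data: [DONE]
```

The client's job is to concatenate the `delta.content` fragments until the `[DONE]` sentinel arrives.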

Building the Chat UI Component

Now create a client component for the chat interface. Create src/components/ChatWidget.tsx:

"use client";

import { useState, useRef, useEffect } from "react";

interface Message {
  role: "user" | "assistant";
  content: string;
}

export function ChatWidget() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState("");
  const [loading, setLoading] = useState(false);
  const bottomRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    bottomRef.current?.scrollIntoView({ behavior: "smooth" });
  }, [messages]);

  async function sendMessage() {
    if (!input.trim() || loading) return;

    const userMessage: Message = { role: "user", content: input };
    const updated = [...messages, userMessage];
    setMessages(updated);
    setInput("");
    setLoading(true);

    try {
      const response = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages: updated }),
      });

      if (!response.ok || !response.body) {
        throw new Error(`Request failed with status ${response.status}`);
      }

      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      let assistantText = "";
      let buffer = "";

      setMessages((prev) => [...prev, { role: "assistant", content: "" }]);

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        // SSE events can be split across network chunks, so buffer
        // partial lines instead of parsing each chunk in isolation.
        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split("\n");
        buffer = lines.pop() ?? "";

        for (const line of lines) {
          if (!line.startsWith("data: ")) continue;
          const data = line.slice(6);
          if (data === "[DONE]") continue;
          try {
            const parsed = JSON.parse(data);
            const delta = parsed.choices[0]?.delta?.content ?? "";
            assistantText += delta;
            setMessages((prev) => {
              const next = [...prev];
              next[next.length - 1] = { role: "assistant", content: assistantText };
              return next;
            });
          } catch {
            // Ignore malformed or non-JSON lines (e.g. keep-alive comments).
          }
        }
      }
    } catch (err) {
      console.error("Chat request failed:", err);
      setMessages((prev) => [
        ...prev,
        { role: "assistant", content: "Something went wrong. Please try again." },
      ]);
    } finally {
      setLoading(false);
    }
  }

  return (
    <div className="flex flex-col h-96 border rounded-xl overflow-hidden">
      <div className="flex-1 overflow-y-auto p-4 space-y-3">
        {messages.map((msg, i) => (
          <div
            key={i}
            className={`flex ${msg.role === "user" ? "justify-end" : "justify-start"}`}
          >
            <div
              className={`max-w-xs px-4 py-2 rounded-xl text-sm ${
                msg.role === "user"
                  ? "bg-zinc-900 text-white"
                  : "bg-zinc-100 text-zinc-900"
              }`}
            >
              {msg.content}
            </div>
          </div>
        ))}
        <div ref={bottomRef} />
      </div>
      <div className="border-t p-3 flex gap-2">
        <input
          className="flex-1 text-sm border rounded-lg px-3 py-2 outline-none"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          onKeyDown={(e) => e.key === "Enter" && sendMessage()}
          placeholder="Ask anything..."
        />
        <button
          onClick={sendMessage}
          disabled={loading}
          className="px-4 py-2 bg-zinc-900 text-white rounded-lg text-sm disabled:opacity-50"
        >
          Send
        </button>
      </div>
    </div>
  );
}

Handling Errors and Edge Cases

Production AI integrations need robust error handling. The AI API may be temporarily unavailable, rate limits may be hit, or the network connection may drop mid-stream. Add error handling to your API route:

try {
  const response = await fetch(...);
  if (!response.ok) {
    const error = await response.json().catch(() => ({}));
    console.error("AI API error:", error);
    return Response.json(
      { error: "Unable to process request. Please try again." },
      { status: 502 }
    );
  }
} catch (err) {
  console.error("Network error:", err);
  return Response.json(
    { error: "Service temporarily unavailable." },
    { status: 503 }
  );
}
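For transient failures such as 5xx responses or dropped connections, a retry with exponential backoff often resolves the request without surfacing an error to the user. Below is a minimal sketch; the `fetchWithRetry` name, retry count, and delays are illustrative choices, not part of the tutorial code:

```typescript
// Illustrative retry helper with exponential backoff.
// Retries on network errors and 5xx responses; 4xx responses are
// returned immediately, since retrying a bad request rarely helps.
async function fetchWithRetry(
  url: string,
  init: RequestInit,
  retries = 3,
  baseDelayMs = 500
): Promise<Response> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const response = await fetch(url, init);
      if (response.status < 500 || attempt === retries) return response;
    } catch (err) {
      if (attempt === retries) throw err;
    }
    // Backoff: 500ms, 1s, 2s, ... before the next attempt.
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
  throw new Error("unreachable");
}
```

In the API route, replace the direct `fetch` call with `fetchWithRetry` and keep the same error responses for the final failure case.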

Managing API Key Rotation

When you need to rotate your API key (a security best practice; many teams rotate every 90 days), you can update the value in your deployment's environment variables without any code changes. In Vercel, navigate to Settings → Environment Variables; on other platforms, update the secret and redeploy.

Monitoring Usage

Add basic logging to your API route to track usage patterns. Log the number of messages in each request, the response time, and any errors. This data helps you understand how your AI features are being used and identify opportunities to optimize prompts.
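As a sketch, a small structured log line covering these three fields is enough to start with; the `ChatLogEntry` shape and `formatChatLog` helper below are illustrative, not part of the tutorial code:

```typescript
// Illustrative structured logging for the chat route.
interface ChatLogEntry {
  messageCount: number; // messages in the request
  durationMs: number;   // time to first byte from the AI API
  ok: boolean;          // whether the upstream call succeeded
}

function formatChatLog(entry: ChatLogEntry): string {
  return `chat request: messages=${entry.messageCount} duration=${entry.durationMs}ms ok=${entry.ok}`;
}

// Inside the route handler:
// const start = Date.now();
// const response = await fetch(...);
// console.log(formatChatLog({
//   messageCount: messages.length,
//   durationMs: Date.now() - start,
//   ok: response.ok,
// }));
```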

Next Steps

Once you have the basic integration working, consider adding:

  • **Conversation history persistence** using a database like MongoDB or PostgreSQL
  • **Rate limiting per user** to prevent abuse using a library like `@upstash/ratelimit`
  • **Prompt templates** for consistent AI behavior across your application
  • **Feedback collection** to gather user ratings on AI responses
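To sketch the rate-limiting idea, here is an in-memory fixed-window limiter. This is for illustration only: it resets on redeploy and does not share state across serverless instances, which is why a store-backed library like `@upstash/ratelimit` is the better fit in production. The `RateLimiter` class and its parameters are assumptions, not part of the tutorial code:

```typescript
// Illustrative in-memory fixed-window rate limiter.
// Not suitable for multi-instance deployments: each instance
// keeps its own counters, and state is lost on restart.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request identified by `key` is allowed.
  allow(key: string, now = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    if (entry.count >= this.limit) return false;
    entry.count++;
    return true;
  }
}

// Usage in the route handler (sketch):
// const limiter = new RateLimiter(20, 60_000); // 20 requests/minute
// if (!limiter.allow(clientIp)) {
//   return new Response("Too many requests", { status: 429 });
// }
```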

With a well-structured integration, your Next.js application will have reliable, streaming AI capabilities that work in development and production.
