One of the most practical developments in the AI API ecosystem is the emergence of a de facto standard: the OpenAI REST API format. Most major AI API providers now support this format, which means switching providers often requires nothing more than updating a base URL and an API key. This guide shows you exactly how to do it.
The OpenAI-Compatible API Standard
The core of the standard is the chat completions endpoint:
```
POST /v1/chat/completions
```

With a JSON body that follows this shape:

```json
{
  "model": "gpt-4o",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "What is the capital of France?" }
  ],
  "stream": false
}
```

If your current provider and your target provider both support this format, the migration is trivial.
Step 1: Check API Compatibility
Before starting the migration, verify your target provider explicitly documents OpenAI API compatibility. Look for documentation phrases like:
- "Compatible with the OpenAI API"
- "Drop-in replacement for OpenAI"
- "Uses the OpenAI REST API format"
- "`/v1/chat/completions` endpoint"
If the provider does not explicitly state this, check their endpoint documentation for the request/response schema.
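One quick way to verify compatibility is a one-off smoke test: send a minimal request and check that the response has the expected shape. Here is a sketch; the `hasOpenAIShape` and `smokeTest` helpers are our own, not part of any SDK:

```typescript
// Returns true if a parsed response body looks like an OpenAI chat completion.
function hasOpenAIShape(data: any): boolean {
  return (
    Array.isArray(data?.choices) &&
    typeof data.choices[0]?.message?.content === "string"
  );
}

// One-off smoke test against a candidate provider (makes a network call; run manually).
async function smokeTest(baseURL: string, apiKey: string, model: string): Promise<boolean> {
  const res = await fetch(`${baseURL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: "Reply with the word: pong" }],
    }),
  });
  if (!res.ok) return false;
  return hasOpenAIShape(await res.json());
}
```

If the smoke test fails, read the raw response body before writing the provider off: some providers are compatible but require a slightly different path or header.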
Step 2: Update Your Environment Variables
If you followed best practices and stored your API configuration in environment variables, the migration is two lines:
```bash
# Old
AI_API_KEY=sk-old-provider-key
AI_API_BASE_URL=https://api.oldprovider.com/v1

# New
AI_API_KEY=your-new-api-key
AI_API_BASE_URL=https://api.newprovider.com/v1
```

Update these in your `.env.local` file for local development and in your deployment platform's environment configuration for production.
Step 3: Update Your HTTP Client Configuration
If you are using the openai npm package, update the base URL:
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.AI_API_KEY,
  baseURL: process.env.AI_API_BASE_URL,
});
```

If you are making direct fetch calls, the base URL is likely already an environment variable. If it is hardcoded, extract it:

```typescript
const response = await fetch(
  `${process.env.AI_API_BASE_URL}/chat/completions`,
  { ... }
);
```

Step 4: Update Model Names
Model names vary between providers. Where you previously used `gpt-4o`, your new provider may use `claude-3-5-sonnet`, `gemini-2.0-flash`, or a custom name. Update your model references:

```typescript
// Before
model: "gpt-4o"

// After — use your new provider's model name
model: process.env.AI_MODEL_NAME ?? "gpt-4o"
```

Making model names configurable via environment variables is good practice — it allows you to change models without code changes.
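If your app references more than one model, a small alias map can centralize the renames so a migration touches one file. A sketch, with the alias names and environment variables entirely hypothetical:

```typescript
// Hypothetical alias table: provider-neutral names on the left,
// the current provider's model ids on the right.
const MODEL_ALIASES: Record<string, string> = {
  default: process.env.AI_MODEL_NAME ?? "gpt-4o",
  fast: process.env.AI_FAST_MODEL_NAME ?? "gpt-4o-mini",
};

// Resolve an alias to a concrete model id; unknown names pass through unchanged.
function resolveModel(alias: string): string {
  return MODEL_ALIASES[alias] ?? alias;
}
```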
Step 5: Test the Migration
Run your test suite against the new provider configuration. Pay particular attention to:
- **Output format consistency**: Does the new provider return the same JSON structure?
- **Streaming behavior**: If you use streaming, does the SSE format match?
- **Error response format**: Do error responses have the same structure?
Most compatible providers return identical response structures, but test edge cases: empty responses, maximum length outputs, and error conditions.
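For the streaming check in particular, it helps to have a tiny parser you can point at both providers' output. A sketch of one, assuming the standard `data: {json}` / `data: [DONE]` SSE framing used by OpenAI-compatible streams:

```typescript
// Extracts the text delta from one SSE line of a streaming chat completion.
// Returns null for non-data lines, the [DONE] sentinel, or chunks with no text.
function parseSSELine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;
  const chunk = JSON.parse(payload);
  return chunk.choices?.[0]?.delta?.content ?? null;
}
```

If this parser produces identical text for the same prompt on both providers, the streaming formats line up.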
Step 6: Test Your Prompts
Your existing prompts will generally work with a new provider, but may produce slightly different outputs due to model differences. Run your most important prompts against the new provider and compare outputs for correctness and quality.
If you have structured output requirements (expecting JSON, specific formats), verify these still work with the new model. Different models respond differently to formatting instructions — you may need minor prompt adjustments.
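When you expect JSON back, one practical guard during this comparison is a parser that tolerates the new model wrapping its answer in a Markdown code fence, a habit that varies by model. A sketch (the `extractJSON` helper is our own):

```typescript
// Parse a JSON reply, tolerating an optional Markdown code fence around it.
function extractJSON(reply: string): unknown {
  const fenced = reply.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/);
  const raw = fenced ? fenced[1] : reply;
  return JSON.parse(raw.trim());
}
```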
Common Migration Issues
**Different tokenization.** Different models tokenize text differently, which can affect context window management and cost calculations. After migrating, re-measure your average token counts.
**Different default behavior.** Some models are more verbose, more concise, or have different defaults for code formatting. Review AI outputs after migration for format changes that might affect downstream processing.
**Rate limit differences.** Your new provider may have different rate limits. Check these before going to production, especially if you have high-volume use cases.
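For the rate-limit differences in particular, retrying 429 responses with exponential backoff and jitter smooths out the transition. A minimal sketch; the base delay and cap are assumptions to tune for your provider:

```typescript
// Delay before retry number `attempt` (0-based): exponential growth with
// jitter, capped at maxMs. Base and cap are tunable assumptions.
function backoffMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  const capped = Math.min(maxMs, baseMs * 2 ** attempt);
  // Jitter in [capped / 2, capped) avoids synchronized retry bursts.
  return capped / 2 + Math.random() * (capped / 2);
}
```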
Testing in Production Safely
For a zero-downtime migration in production:
1. Deploy the new configuration to a staging environment
2. Run your full test suite in staging
3. Perform manual QA on the most critical AI features
4. Deploy to production during low-traffic hours
5. Monitor error rates and response quality for 24 hours post-migration
If something goes wrong, rolling back is as simple as reverting the environment variables to the previous provider's values.
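If you want even more caution than an all-at-once cutover, a canary split lets you route a small fraction of requests to the new provider first and watch error rates before raising the percentage. A minimal sketch; this percentage-based router is our own idea, not a feature of any SDK:

```typescript
// Route roughly `canaryPercent` percent of requests to the new provider.
function pickBaseURL(canaryPercent: number, oldURL: string, newURL: string): string {
  return Math.random() * 100 < canaryPercent ? newURL : oldURL;
}
```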
The Benefit of API Portability
The ability to switch providers in under an hour is a genuine advantage. It means you are never locked into a single vendor's pricing, reliability, or model quality decisions. If a better option emerges or your current provider raises prices, you can move quickly. Build your AI integration with this portability in mind from day one.