AI Market provides a unified API to access many AI models from multiple providers. Use one API key and OpenAI-compatible requests to switch between GPT-4, Claude, Gemini, Llama, and more.
The gateway normalizes routing, billing, and usage logging so your application code stays simple.
Production API root for REST and inference (append path segments as documented below):
https://api.airouter.shop/api/v1

If you self-host the API gateway, replace the example host with your own deployment URL.
OpenAI SDKs often use a base URL ending with /v1. Here, set the base URL to …/api/v1 (see the quick start) or use the dedicated /v1/chat/completions path with the host only; both are supported for inference.
Two credential types exist: (1) a JWT from email/password login, used by the web console for the dashboard, billing, key management, and so on; (2) API keys (sk-am-…), used only for model inference endpoints.
Pass your API key in the Authorization header as a Bearer token. Keys created in the console start with sk-am-.
Authorization: Bearer sk-am-…

Do not use your login JWT as the API key for chat/completions; create a dedicated key under Console → API Keys.
Some public REST routes accept X-Platform: intl or cn for regional catalog and configuration. When you call inference with an API key, the key’s stored platform is applied automatically; you do not need to send X-Platform on those requests.
X-Platform: intl
X-Platform: cn

Invalid X-Platform values are rejected with HTTP 400 on routes that validate the header.
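Because the gateway rejects unknown values with HTTP 400, a client can guard the header before sending. A minimal sketch; the helper name and the set constant are our own, not part of the gateway:

```python
VALID_PLATFORMS = {"intl", "cn"}

def platform_headers(platform: str) -> dict:
    """Build the X-Platform header, rejecting values the gateway would 400."""
    if platform not in VALID_PLATFORMS:
        raise ValueError(f"invalid X-Platform value: {platform!r} (use 'intl' or 'cn')")
    return {"X-Platform": platform}
```

Pass the returned dict as extra headers on public catalog requests; inference calls with an API key do not need it, since the key's stored platform is applied automatically.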
Open Console → API Keys → create a key. Copy the full secret immediately; it is shown only once.
You can create multiple keys, rename them, or revoke them from the same page. Each key has its own optional rate limit.
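Since the secret is shown only once, applications usually read it from the environment. A sketch that loads the key and sanity-checks the sk-am- prefix; the variable name AI_MARKET_API_KEY matches the Node example in the quick start, and the helper itself is our own:

```python
import os

def load_api_key(var: str = "AI_MARKET_API_KEY") -> str:
    """Read the gateway API key from the environment and verify its sk-am- prefix."""
    key = os.environ.get(var, "")
    if not key.startswith("sk-am-"):
        raise RuntimeError(f"{var} is unset or not an sk-am- key; create one in Console → API Keys")
    return key
```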
Use the official OpenAI SDKs: point the base URL at this gateway and set apiKey to your sk-am- key. Example model: openai/gpt-4o (replace with any slug from the model catalog).
Python:

```python
import openai

client = openai.OpenAI(
    base_url="https://api.airouter.shop/api/v1",
    api_key="sk-your-api-key",
)

response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Node.js (TypeScript):

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.airouter.shop/api/v1",
  apiKey: process.env.AI_MARKET_API_KEY!,
});

const response = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0]?.message?.content);
```

curl:

```shell
curl -s https://api.airouter.shop/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-api-key" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Send a POST with Content-Type: application/json. Two equivalent entry points are available:
POST /api/v1/chat/completions
POST /v1/chat/completions

The JSON body is the same as OpenAI's chat/completions: at minimum model (string) and messages (array of objects with role and content fields).
- model — required. Model slug, e.g. openai/gpt-4o, anthropic/claude-3-5-sonnet-latest.
- messages — required. Conversation turns: system / user / assistant roles; content may be a string or structured for multimodal where supported upstream.
- stream — optional. If true, the response is text/event-stream with SSE chunks in OpenAI stream format.
- temperature, top_p, max_tokens — optional. Passed through to the provider when supported.
- tools, tool_choice — optional. Function calling / tool objects when the provider supports them.

Discover models without consuming tokens:
- GET /api/v1/models — Public catalog list (no API key). Optional query parameters may filter by category depending on server version.
- GET /api/v1/models/:slug — Public detail for one model by slug.
- GET /v1/models — OpenAI-compatible list while authenticated with an API key (same slugs as chat).

The website catalog is on the Models page.
Slugs are usually provider/model-name. Use the exact value returned by the API or shown on the model detail page.
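Since slugs follow the provider/model-name convention, client code can split them on the first slash when it needs the parts separately. A small sketch; the function name is ours:

```python
def parse_slug(slug: str) -> tuple[str, str]:
    """Split a provider/model-name slug into its two parts."""
    provider, sep, name = slug.partition("/")
    if not sep or not provider or not name:
        raise ValueError(f"unexpected slug format: {slug!r}")
    return provider, name
```

Note that partition splits only on the first slash, so model names containing further slashes are preserved intact.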
Set "stream": true in the JSON body. The gateway streams tokens using Server-Sent Events (data: lines) compatible with OpenAI streaming clients.
```json
{
  "model": "openai/gpt-4o",
  "messages": [{"role": "user", "content": "Count to 3"}],
  "stream": true
}
```

Consume the body with an SSE-aware client (OpenAI SDK stream helpers, or read the response stream manually).
If the upstream provider does not support streaming for a model, the API returns HTTP 400 with a clear message.
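If you read the stream manually rather than through an SDK helper, each event arrives as a data: line in OpenAI stream format, with data: [DONE] marking the end. A minimal parser sketch over an iterable of decoded lines; the function name is ours:

```python
import json

def iter_sse_chunks(lines):
    """Yield parsed JSON chunks from OpenAI-style 'data:' SSE lines, stopping at [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return  # end-of-stream sentinel
        yield json.loads(payload)
```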
Charges are calculated from actual usage (tokens and provider pricing) after a successful response. Ensure your wallet has sufficient balance before large jobs.
If balance is too low, the gateway responds with HTTP 402 and does not call the upstream provider.
Validation and gateway errors return JSON with code and message. Typical cases: HTTP 400 for invalid input (for example a bad X-Platform value, or streaming requested for a model that does not support it), HTTP 402 for insufficient balance, and rate-limit errors when a key exceeds its configured limit.
Example gateway error payload:
```json
{
  "code": 402,
  "message": "insufficient balance"
}
```

A successful chat/completions call returns the upstream OpenAI-shaped JSON body (not wrapped in an extra code/data envelope).
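Since gateway errors come back as a small {code, message} object while successes are full OpenAI-shaped bodies, a response handler can branch on shape. A sketch under that assumption; the helper name and the exact-keys heuristic are ours:

```python
def classify_response(body: dict):
    """Return ('error', code, message) for a gateway error envelope, ('ok', body) otherwise."""
    if set(body) == {"code", "message"}:
        return ("error", body["code"], body["message"])
    return ("ok", body)
```

In practice you should also check the HTTP status code (402, 400, etc.) before inspecting the body.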
Per-key rate limits can be set when creating or editing a key in the console. When exceeded, the gateway returns an error response describing the limit.
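When a key's limit trips, clients typically wait and retry with increasing delays. A minimal exponential-backoff schedule sketch; the function and its default values are illustrative, not gateway behavior:

```python
def backoff_delays(retries: int = 5, base: float = 0.5, cap: float = 30.0):
    """Exponential backoff delays in seconds: base * 2**attempt, capped at `cap`."""
    return [min(base * (2 ** i), cap) for i in range(retries)]
```

Sleep for each delay in turn between retries, and give up once the schedule is exhausted.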
Install the openai package (v1 SDK). Set base_url to the gateway /api/v1 root and api_key to sk-am-….
```python
import openai

client = openai.OpenAI(
    base_url="https://api.airouter.shop/api/v1",
    api_key="sk-your-api-key",
)

response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Install the openai package. Use the same baseURL and apiKey pattern as Python; works with async/await in Node 18+.
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.airouter.shop/api/v1",
  apiKey: process.env.AI_MARKET_API_KEY!,
});

const response = await client.chat.completions.create({
  model: "openai/gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0]?.message?.content);
```

Send JSON with curl; ensure the Authorization and Content-Type headers are set. Pipe to jq for readable output.
```shell
curl -s https://api.airouter.shop/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-api-key" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Successful chat completion responses follow the OpenAI Chat Completions schema: choices[], message, usage (prompt_tokens, completion_tokens, total_tokens), id, model, object, created.
```json
{
  "id": "chatcmpl-abc123",
  "model": "openai/gpt-4o",
  "choices": [
    {
      "message": {"role": "assistant", "content": "Hello!"}
    }
  ],
  "usage": {"prompt_tokens": 10, "completion_tokens": 5}
}
```
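Since charges are computed from token usage, you may want to track totals from each response. A sketch that reads the usage block; the helper name is ours, and it falls back to summing the parts when total_tokens is absent, as in the example above:

```python
def usage_totals(resp: dict) -> int:
    """Return total token count from a chat completion response's usage block."""
    usage = resp.get("usage", {})
    return usage.get(
        "total_tokens",
        usage.get("prompt_tokens", 0) + usage.get("completion_tokens", 0),
    )
```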