POST /api/ai/chat/completion
Generate chat completion
Example request:

curl --request POST \
  --url https://api.example.com/api/ai/chat/completion \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "openai/gpt-4",
  "messages": [
    {
      "role": "user",
      "content": "<string>"
    }
  ],
  "stream": false,
  "temperature": 1,
  "maxTokens": 123,
  "topP": 0.5,
  "systemPrompt": "<string>"
}
'

Example response:

{
  "success": true,
  "content": "<string>",
  "metadata": {
    "model": "<string>",
    "usage": {
      "promptTokens": 123,
      "completionTokens": 123,
      "totalTokens": 123
    }
  }
}
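
The cURL example above maps directly to a client call. Below is a minimal TypeScript sketch using the Fetch API, assuming a fetch-capable runtime (Node 18+, Deno, or a browser); the bearer token, prompt, and maxTokens value are placeholders, and error handling is reduced to a status check.

// Minimal sketch of a non-streaming call; the base URL is from this reference,
// the bearer token and prompt are placeholders.
async function generateChatCompletion(prompt: string, token: string): Promise<string> {
  const res = await fetch("https://api.example.com/api/ai/chat/completion", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-4",                         // OpenRouter model identifier
      messages: [{ role: "user", content: prompt }], // conversation messages
      stream: false,                                 // request a single JSON response
      temperature: 1,                                // allowed range: 0 <= x <= 2
      maxTokens: 512,                                // illustrative cap on generated tokens
    }),
  });

  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }

  const data = await res.json();
  // Response shape per the schema below: { success, content, metadata: { model, usage } }
  return data.content;
}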

Authorizations

Authorization (string, header, required)

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
model (string, required)

OpenRouter model identifier.

Example: "openai/gpt-4"

messages (object[])

Array of messages in the conversation, each with a role and content.

stream (boolean, default: false)

Enable streaming of the response via Server-Sent Events (see the streaming sketch after the body fields).

temperature (number)

Controls randomness in generation; higher values produce more varied output.

Required range: 0 <= x <= 2

maxTokens (integer)

Maximum number of tokens to generate.

topP (number)

Nucleus sampling parameter.

Required range: 0 <= x <= 1

systemPrompt (string)

System prompt to guide model behavior.
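
When stream is true, the endpoint returns Server-Sent Events instead of the single JSON body shown above. The sketch below reads the raw event stream with the Fetch API; the exact payload of each event is not documented in this reference, so treating every "data:" line as a plain text chunk is an assumption, and buffering of lines split across network chunks is omitted for brevity.

// Sketch of consuming the streamed response (stream: true).
// Assumption: each SSE "data:" line carries a text fragment of the completion.
async function streamChatCompletion(prompt: string, token: string): Promise<string> {
  const res = await fetch("https://api.example.com/api/ai/chat/completion", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-4",
      messages: [{ role: "user", content: prompt }],
      stream: true, // request Server-Sent Events instead of one JSON body
    }),
  });

  if (!res.ok || !res.body) {
    throw new Error(`Request failed with status ${res.status}`);
  }

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let text = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each network chunk may contain one or more "data: ..." lines.
    for (const line of decoder.decode(value, { stream: true }).split("\n")) {
      if (line.startsWith("data: ")) {
        text += line.slice("data: ".length); // assumed: plain text fragment
      }
    }
  }

  return text;
}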

Response

Chat completion response

success (boolean)

content (string)

AI model response.

metadata (object)

Model identifier and token usage for the request (model, usage.promptTokens, usage.completionTokens, usage.totalTokens).
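
For reference, the example response above corresponds to the following TypeScript shape; the interface names are illustrative and not part of the API.

// Illustrative types matching the documented response; names are not part of the API.
interface ChatCompletionUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

interface ChatCompletionMetadata {
  model: string;              // model that produced the completion
  usage: ChatCompletionUsage; // token accounting for the request
}

interface ChatCompletionResponse {
  success: boolean;
  content: string;            // AI model response text
  metadata: ChatCompletionMetadata;
}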