POST /api/v1/chat/completions
Creates a chat completion with any supported text model. The endpoint is fully compatible with the OpenAI Chat Completions API, so existing OpenAI SDKs work unchanged.
Request Body
| Parameter | Type | Description |
|---|---|---|
| model (required) | string | Model slug, e.g. "gpt-4o", "claude-sonnet-4-5-20250929" |
| messages (required) | object[] | Array of {role, content} message objects. |
| temperature | number | Sampling temperature (0-2). Default: 1 |
| max_tokens | number | Maximum number of output tokens. |
| top_p | number | Nucleus sampling probability mass (0-1). Default: 1 |
| frequency_penalty | number | Penalizes repeated tokens (-2 to 2). Default: 0 |
| presence_penalty | number | Penalizes tokens already present in the text (-2 to 2). Default: 0 |
| stop | string \| string[] | Stop sequences at which generation halts. |
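A minimal request body, assembled by hand from the table above, looks like the sketch below. The API key and base URL are placeholders carried over from the examples in this page; sending the request is a plain POST with a Bearer token, shown in a comment so the sketch runs without a live key.

```javascript
// Request body built from the parameters documented above.
const payload = {
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" },
  ],
  temperature: 0.7, // must be within 0-2
  max_tokens: 500,
};

// Sending it without an SDK is an ordinary POST (placeholder key):
// await fetch("https://railwail.com/api/v1/chat/completions", {
//   method: "POST",
//   headers: {
//     "Authorization": "Bearer rw_live_xxxxx",
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify(payload),
// });
```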
Examples
```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "rw_live_xxxxx",
  baseURL: "https://railwail.com/api/v1",
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" },
  ],
  temperature: 0.7,
  max_tokens: 500,
});

console.log(response.choices[0].message.content);
```

Response
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 9,
    "total_tokens": 29
  }
}
```

Compatible Models
- gpt-4o
- gpt-4o-mini
- claude-sonnet-4-5-20250929
- claude-3-haiku
- gemini-pro
- gemini-1.5-flash
- llama-3.1-70b
- llama-3.1-8b
- deepseek-chat
- mistral-large
- …and many more
Browse all available text models at /models.
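To show how the fields in the Response section fit together, here is a small sketch that parses the sample response payload from above and pulls out the assistant message and token usage:

```javascript
// Sample response from the Response section, as received over the wire.
const raw = `{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Hello! How can I help you today?" },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 20, "completion_tokens": 9, "total_tokens": 29 }
}`;

const response = JSON.parse(raw);

// The assistant's reply lives at choices[0].message.content.
const reply = response.choices[0].message.content;

// usage reports token counts, useful for billing and quota accounting.
const { prompt_tokens, completion_tokens, total_tokens } = response.usage;

console.log(reply);        // Hello! How can I help you today?
console.log(total_tokens); // 29
```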