rw.chat()
Create a chat completion with the full OpenAI-compatible response object, including usage stats and finish reason.
Signature

```typescript
rw.chat(model: string, messages: ChatMessage[], options?: ChatOptions): Promise<ChatResponse>
```

Parameters
| Parameter | Type | Description |
|---|---|---|
| `model` (required) | string | Model slug, e.g. `"gpt-4o"`, `"claude-sonnet-4-5-20250929"`, `"gemini-pro"`. |
| `messages` (required) | ChatMessage[] | Array of messages, each with a `role` (`system`, `user`, or `assistant`) and `content`. |
| `options.temperature` | number | Sampling temperature (0 to 2). Higher values produce more varied output. Default: 1. |
| `options.max_tokens` | number | Maximum number of tokens in the response. |
| `options.top_p` | number | Nucleus sampling threshold (0 to 1). Default: 1. |
| `options.frequency_penalty` | number | Penalizes repeated tokens (-2 to 2). Default: 0. |
| `options.presence_penalty` | number | Penalizes tokens already present in the text so far (-2 to 2). Default: 0. |
| `options.stop` | string \| string[] | Stop sequences; generation halts when the model emits one of these. |
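The numeric options above each have a fixed valid range. As an illustrative sketch (not part of the rw SDK, which may well validate these server-side), a local pre-flight check could look like this; the `ChatOptions` shape is redeclared here so the snippet is self-contained:

```typescript
// Hypothetical local validator for the option ranges documented above.
interface ChatOptions {
  temperature?: number;       // 0 to 2
  max_tokens?: number;
  top_p?: number;             // 0 to 1
  frequency_penalty?: number; // -2 to 2
  presence_penalty?: number;  // -2 to 2
  stop?: string | string[];
}

function validateChatOptions(opts: ChatOptions): string[] {
  const errors: string[] = [];
  // Undefined options fall back to their defaults, so they always pass.
  const inRange = (v: number | undefined, lo: number, hi: number) =>
    v === undefined || (v >= lo && v <= hi);
  if (!inRange(opts.temperature, 0, 2)) errors.push("temperature must be in [0, 2]");
  if (!inRange(opts.top_p, 0, 1)) errors.push("top_p must be in [0, 1]");
  if (!inRange(opts.frequency_penalty, -2, 2)) errors.push("frequency_penalty must be in [-2, 2]");
  if (!inRange(opts.presence_penalty, -2, 2)) errors.push("presence_penalty must be in [-2, 2]");
  return errors;
}

console.log(validateChatOptions({ temperature: 0.3, top_p: 1 })); // []
console.log(validateChatOptions({ temperature: 3, presence_penalty: -5 }));
// ["temperature must be in [0, 2]", "presence_penalty must be in [-2, 2]"]
```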
Response
Returns a full OpenAI-compatible ChatResponse object.
```typescript
interface ChatResponse {
  id: string;
  object: "chat.completion";
  created: number;
  model: string;
  choices: {
    index: number;
    message: { role: "assistant"; content: string };
    finish_reason: string;
  }[];
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
```

Examples
Basic chat completion
```typescript
const res = await rw.chat("gpt-4o", [
  { role: "user", content: "Hello!" },
]);
console.log(res.choices[0].message.content);
console.log(res.usage); // { prompt_tokens: 9, completion_tokens: 12, total_tokens: 21 }
```

With system prompt and options
```typescript
const res = await rw.chat("claude-sonnet-4-5-20250929", [
  { role: "system", content: "You are a concise technical writer." },
  { role: "user", content: "Explain WebSockets in 2 sentences." },
], {
  temperature: 0.3,
  max_tokens: 200,
});
console.log(res.choices[0].message.content);
console.log(res.choices[0].finish_reason); // "stop"
```

Multi-turn conversation
```typescript
// Annotating as ChatMessage[] keeps role narrowed to the allowed union;
// without it, TypeScript widens role to string and the rw.chat call fails to type-check.
const messages: ChatMessage[] = [
  { role: "user", content: "What's the capital of France?" },
];
const res1 = await rw.chat("gpt-4o", messages);
const answer = res1.choices[0].message.content;

// Continue the conversation with the assistant's reply in context
messages.push({ role: "assistant", content: answer });
messages.push({ role: "user", content: "What's its population?" });
const res2 = await rw.chat("gpt-4o", messages);
console.log(res2.choices[0].message.content);
```

Supported models
All text/chat models are supported, including GPT-4o, Claude, Gemini Pro, Llama 3.1, DeepSeek, and more. Browse the available models at /models.
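Because every response carries a `usage` object, token spend is easy to accumulate across calls, e.g. over a multi-turn conversation like the one above. A minimal self-contained sketch (the usage stats here are mocked rather than fetched, and the shape mirrors the `usage` field of `ChatResponse`):

```typescript
// Accumulate token usage across several chat completions.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

function sumUsage(usages: Usage[]): Usage {
  return usages.reduce(
    (acc, u) => ({
      prompt_tokens: acc.prompt_tokens + u.prompt_tokens,
      completion_tokens: acc.completion_tokens + u.completion_tokens,
      total_tokens: acc.total_tokens + u.total_tokens,
    }),
    { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 },
  );
}

// Mocked usage stats from two hypothetical calls:
const total = sumUsage([
  { prompt_tokens: 9, completion_tokens: 12, total_tokens: 21 },
  { prompt_tokens: 30, completion_tokens: 45, total_tokens: 75 },
]);
console.log(total); // { prompt_tokens: 39, completion_tokens: 57, total_tokens: 96 }
```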