o1 Mini

OpenAI
Text & Chat

A smaller, faster version of OpenAI's o1 reasoning model, optimized for STEM tasks with lower latency and cost.

Pricing

Price per generation: Free

API Integration

Use our OpenAI-compatible API to integrate o1 Mini into your application.

Install
npm install railwail
JavaScript / TypeScript
import railwail from "railwail";

const rw = railwail("YOUR_API_KEY");

// Simple — just pass a string
const reply = await rw.run("o1-mini", "Hello! What can you do?");
console.log(reply);

// With message history
const reply2 = await rw.run("o1-mini", [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Explain quantum computing simply." },
]);
console.log(reply2);

// Full response with usage info
const res = await rw.chat("o1-mini", [
  { role: "user", content: "Hello!" },
], { temperature: 0.7, max_tokens: 500 });
console.log(res.choices[0].message.content);
console.log(res.usage);
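
Because the API is OpenAI-compatible, you can also call it with the official openai npm package instead of the railwail SDK. A minimal sketch; the base URL shown here is an assumption, so substitute the endpoint from your railwail dashboard:

import OpenAI from "openai";

// Point the official OpenAI client at the railwail endpoint.
// The baseURL below is hypothetical; use the one from your dashboard.
const client = new OpenAI({
  apiKey: "YOUR_API_KEY",
  baseURL: "https://api.railwail.com/v1",
});

const completion = await client.chat.completions.create({
  model: "o1-mini",
  messages: [{ role: "user", content: "Explain quantum computing simply." }],
});

console.log(completion.choices[0].message.content);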
Specifications

Context window: 128,000 tokens
Max output: 65,536 tokens
Provider: OpenAI
Category: Text & Chat
Tags: reasoning, fast, STEM
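
To put the context window and max output figures to work, you can roughly budget tokens before a request: estimate the prompt's token count, then cap max_tokens so prompt plus output stays inside the window. A minimal sketch using a crude 4-characters-per-token heuristic (an assumption, not a real tokenizer):

// Rough heuristic: ~4 characters per token for English text (approximation only).
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

const CONTEXT_WINDOW = 128_000; // o1 Mini context window
const MAX_OUTPUT = 65_536;      // o1 Mini max output tokens

function outputBudget(prompt: string): number {
  const promptTokens = estimateTokens(prompt);
  // Leave room for the prompt, and never exceed the model's max output.
  return Math.min(MAX_OUTPUT, Math.max(0, CONTEXT_WINDOW - promptTokens));
}

console.log(outputBudget("Explain quantum computing simply.")); // 65536 for a short prompt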

Start using o1 Mini today

Get started with free credits. No credit card required. Access o1 Mini and 100+ other models through a single API.