Mistral Large

Mistral
Text & Chat

Mistral's flagship model, with strong reasoning, multilingual, and coding capabilities.

Examples

See what Mistral Large can generate

Multilingual Writing

Write a short product description for a smartwatch in English, then translate it to French and German

English: The AuraFit Pro tracks your heart rate, sleep, and workouts with medical-grade accuracy. Its always-on AMOLED display and 7-day battery life keep you connected without compromise.

French: L'AuraFit Pro suit votre fréquence cardiaque, votre sommeil et vos entraînements avec une précision de qualité médicale. Son écran AMOLED toujours actif et son autonomie de 7 jours vous gardent connecté sans compromis.

German: Die AuraFit Pro überwacht Herzfrequenz, Schlaf und Training mit medizinischer Präzision. Das Always-on-AMOLED-Display und die 7-Tage-Akkulaufzeit halten Sie ohne Kompromisse verbunden.
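A minimal sketch of sending the same prompt through the railwail SDK shown under API Integration below; the prompt text simply mirrors the example above.

import railwail from "railwail";

const rw = railwail("YOUR_API_KEY");

// Prompt mirroring the multilingual example above
const prompt =
  "Write a short product description for a smartwatch in English, " +
  "then translate it to French and German.";

const description = await rw.run("mistral-large", prompt);
console.log(description);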

Technical Documentation

Write a brief API documentation entry for a POST /users endpoint that creates a new user

## POST /users

Creates a new user account.

**Request Body:**
```json
{
  "email": "string (required)",
  "name": "string (required)",
  "password": "string (required, min 8 chars)"
}
```

**Response 201:**
```json
{
  "id": "uuid",
  "email": "string",
  "name": "string",
  "createdAt": "ISO 8601"
}
```

**Errors:** 400 (validation failed), 409 (email already exists), 500 (internal error).

Rate limit: 10 requests per minute per IP.
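The endpoint documented above exists only in this example, but for illustration a matching client call could look like the sketch below; the base URL and field values are placeholders.

// Hypothetical call against the example endpoint above;
// https://api.example.com is a placeholder base URL.
const res = await fetch("https://api.example.com/users", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    email: "ada@example.com",
    name: "Ada Lovelace",
    password: "s3cure-passw0rd", // min 8 chars per the docs above
  }),
});

if (res.status === 201) {
  const user = await res.json(); // { id, email, name, createdAt }
  console.log("Created user", user.id);
} else {
  console.error("Request failed:", res.status); // 400, 409, or 500
}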


Pricing

Price per generation: Free

API Integration

Use our OpenAI-compatible API to integrate Mistral Large into your application.

Install
npm install railwail
JavaScript / TypeScript
import railwail from "railwail";

const rw = railwail("YOUR_API_KEY");

// Simple — just pass a string
const reply = await rw.run("mistral-large", "Hello! What can you do?");
console.log(reply);

// With message history
const reply2 = await rw.run("mistral-large", [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Explain quantum computing simply." },
]);
console.log(reply2);

// Full response with usage info
const res = await rw.chat("mistral-large", [
  { role: "user", content: "Hello!" },
], { temperature: 0.7, max_tokens: 500 });
console.log(res.choices[0].message.content);
console.log(res.usage);
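
Because the API is OpenAI-compatible, you can also point the official OpenAI SDK at it. The base URL below is an assumption for illustration, not a documented endpoint; substitute the actual value for your account.

import OpenAI from "openai";

// OpenAI-compatible usage sketch; baseURL is a placeholder assumption.
const client = new OpenAI({
  apiKey: "YOUR_API_KEY",
  baseURL: "https://api.railwail.example/v1",
});

const completion = await client.chat.completions.create({
  model: "mistral-large",
  messages: [{ role: "user", content: "Hello! What can you do?" }],
  temperature: 0.7,
  max_tokens: 500,
});

console.log(completion.choices[0].message.content);
console.log(completion.usage);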
Specifications

Context window: 128,000 tokens
Max output: 8,192 tokens
Avg. latency: 2.5s
Provider: Mistral
Category: Text & Chat
Tags: multilingual, coding
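
For long conversations, the context window and max output figures above set the budget for how much history you can send. A rough sketch, assuming about four characters per token (a heuristic, not the model's actual tokenizer):

// Keep the prompt inside the 128,000-token context window while leaving
// room for the 8,192-token max output. ~4 chars/token is an assumption.
const CONTEXT_WINDOW = 128_000;
const MAX_OUTPUT = 8_192;
const CHARS_PER_TOKEN = 4;

type Message = { role: "system" | "user" | "assistant"; content: string };

function trimHistory(messages: Message[]): Message[] {
  const charBudget = (CONTEXT_WINDOW - MAX_OUTPUT) * CHARS_PER_TOKEN;
  let used = 0;
  const kept: Message[] = [];
  // Walk from the newest message backwards, keeping whatever still fits.
  for (let i = messages.length - 1; i >= 0; i--) {
    used += messages[i].content.length;
    if (used > charBudget) break;
    kept.unshift(messages[i]);
  }
  return kept;
}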

Start using Mistral Large today

Get started with free credits. No credit card required. Access Mistral Large and 100+ other models through a single API.