OpenVLA

VLA / Robotics

An open-source 7B-parameter vision-language-action (VLA) model built on the Prismatic VLM architecture with a Llama 2 backbone and trained on Open X-Embodiment robot demonstrations. It maps camera observations and natural-language instructions to robot actions.
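
For orientation on the output side: OpenVLA predicts discrete action tokens that are typically decoded into a 7-dimensional end-effector command (translation delta, rotation delta, gripper). The sketch below is illustrative only; the DecodedAction shape and the decodeAction helper are hypothetical, and the exact action space depends on the robot setup a checkpoint was trained for.

TypeScript
// Illustrative sketch of a common 7-DoF action convention for OpenVLA-style
// policies. The DecodedAction shape and decodeAction helper are hypothetical.
interface DecodedAction {
  deltaPosition: [number, number, number];    // Δx, Δy, Δz
  deltaOrientation: [number, number, number]; // Δroll, Δpitch, Δyaw
  gripper: number;                            // e.g. 0 = close, 1 = open
}

function decodeAction(v: number[]): DecodedAction {
  if (v.length !== 7) throw new Error("expected a 7-dimensional action vector");
  return {
    deltaPosition: [v[0], v[1], v[2]],
    deltaOrientation: [v[3], v[4], v[5]],
    gripper: v[6],
  };
}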


Pricing

Price per generation: Free

API Integration

Use our OpenAI-compatible API to integrate OpenVLA into your application.

Install
npm install railwail
JavaScript / TypeScript
import railwail from "railwail";

const rw = railwail("YOUR_API_KEY");

// Simple — just pass a string
const reply = await rw.run("openvla", "Hello! What can you do?");
console.log(reply);

// With message history
const reply2 = await rw.run("openvla", [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Explain quantum computing simply." },
]);
console.log(reply2);

// Full response with usage info
const res = await rw.chat("openvla", [
  { role: "user", content: "Hello!" },
], { temperature: 0.7, max_tokens: 500 });
console.log(res.choices[0].message.content);
console.log(res.usage);
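
Because OpenVLA is a vision-language-action model, a request will normally include a camera frame alongside the instruction. The sketch below assumes the endpoint accepts OpenAI-style image_url content parts for vision input; that support is an assumption about this host, not documented above.

TypeScript
// Sketch only: assumes the OpenAI-compatible endpoint accepts image_url
// content parts. Verify image-input support before relying on this.
import railwail from "railwail";
import { readFileSync } from "node:fs";

const rw = railwail("YOUR_API_KEY");

// Encode a camera frame as a base64 data URL (standard OpenAI chat format).
const frame = readFileSync("observation.jpg").toString("base64");

const res = await rw.chat("openvla", [
  {
    role: "user",
    content: [
      { type: "text", text: "Pick up the red block and place it in the bin." },
      { type: "image_url", image_url: { url: `data:image/jpeg;base64,${frame}` } },
    ],
  },
], { temperature: 0.0, max_tokens: 64 });

console.log(res.choices[0].message.content);
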
Specifications
Provider: OpenVLA team (Stanford-led, open source)
Category: VLA / Robotics
Tags: open-source, 7B, Llama-based

Start using OpenVLA today

Get started with free credits. No credit card required. Access OpenVLA and 100+ other models through a single API.