xAI: Grok 4.20

by x-ai

Input price

$1.25 /1M tok

Output price

$2.50 /1M tok

Context length

2,000,000 tokens (2M)

JSON mode

Yes

Model details

Model ID
x-ai/grok-4.20
Provider
x-ai
Modality
text+image+file->text
Supports response_format
Yes
Added to LLMTest
Jan 21, 1970

API usage

Use this model through the LLMTest proxy: point your OpenAI-compatible client at the LLMTest base URL and set the model ID.

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "your-llmt_-key-here",
  baseURL: "https://llmtest.io/v1",
});

const response = await client.chat.completions.create({
  model: "x-ai/grok-4.20",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello, how are you?" }
  ],
  temperature: 0.7,          // 0-2, higher = more creative
  max_tokens: 1024,          // max tokens to generate
  // stream: true,           // enable streaming (SSE)
  // top_p: 0.9,             // nucleus sampling
  // response_format: { type: "json_object" },  // guaranteed JSON output
});

Supported parameters

Parameter        Type             Description
model            string           Must be x-ai/grok-4.20
messages         array            Array of message objects with role and content
temperature      number           Sampling temperature (0-2). Default varies by model.
max_tokens       integer          Max tokens to generate
top_p            number           Nucleus sampling (0-1)
stream           boolean          Stream response via SSE
stop             string or array  Stop sequences
response_format  object           Set to {"type": "json_object"} for guaranteed JSON output

LLMTest features for this model