BlockRun

Models

BlockRun provides access to models from multiple providers through a unified API.

List Models

GET https://blockrun.ai/api/v1/models

Returns a list of available models with pricing information.
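As a sketch of what this endpoint might return: the `id` and `inputPrice` fields match the SDK examples later in this page, but the overall payload shape shown here is an assumption, not a confirmed schema.

```python
import json

# Hypothetical response body; "id" and "inputPrice" match the SDK usage
# below, but the exact payload structure is an assumption.
sample = json.loads("""
{
  "models": [
    {"id": "openai/gpt-4o", "name": "GPT-4o",
     "inputPrice": 2.50, "outputPrice": 10.00},
    {"id": "openai/gpt-4o-mini", "name": "GPT-4o Mini",
     "inputPrice": 0.15, "outputPrice": 0.60}
  ]
}
""")

# Pick the model with the lowest input price from the listing.
cheapest = min(sample["models"], key=lambda m: m["inputPrice"])
print(cheapest["id"])  # -> openai/gpt-4o-mini
```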

Available Models

All prices shown are provider rates. BlockRun adds a 5% platform fee to cover infrastructure costs.

OpenAI GPT-5 Family

| Model ID | Name | Input Price | Output Price | Context |
|----------|------|-------------|--------------|---------|
| openai/gpt-5.2 | GPT-5.2 | $1.75/M | $14.00/M | 400K |
| openai/gpt-5-mini | GPT-5 Mini | $0.25/M | $2.00/M | 200K |
| openai/gpt-5-nano | GPT-5 Nano | $0.05/M | $0.40/M | 128K |
| openai/gpt-5.2-pro | GPT-5.2 Pro | $21.00/M | $168.00/M | 400K |

OpenAI GPT-4 Family

| Model ID | Name | Input Price | Output Price | Context |
|----------|------|-------------|--------------|---------|
| openai/gpt-4.1 | GPT-4.1 | $2.00/M | $8.00/M | 128K |
| openai/gpt-4.1-mini | GPT-4.1 Mini | $0.40/M | $1.60/M | 128K |
| openai/gpt-4.1-nano | GPT-4.1 Nano | $0.10/M | $0.40/M | 128K |
| openai/gpt-4o | GPT-4o | $2.50/M | $10.00/M | 128K |
| openai/gpt-4o-mini | GPT-4o Mini | $0.15/M | $0.60/M | 128K |

OpenAI O-Series (Reasoning)

| Model ID | Name | Input Price | Output Price | Context |
|----------|------|-------------|--------------|---------|
| openai/o1 | o1 | $15.00/M | $60.00/M | 200K |
| openai/o1-mini | o1-mini | $1.10/M | $4.40/M | 128K |
| openai/o3 | o3 | $2.00/M | $8.00/M | 200K |
| openai/o3-mini | o3-mini | $1.10/M | $4.40/M | 128K |
| openai/o4-mini | o4-mini | $1.10/M | $4.40/M | 128K |

OpenAI GPT-OSS (Open-Weight)

Open-weight models released under the Apache 2.0 license, hosted on NVIDIA infrastructure.

| Model ID | Name | Input Price | Output Price | Context |
|----------|------|-------------|--------------|---------|
| openai/gpt-oss-20b | GPT-OSS 20B | $0.03/M | $0.14/M | 128K |
| openai/gpt-oss-120b | GPT-OSS 120B | $0.18/M | $0.84/M | 128K |

Anthropic Claude

| Model ID | Name | Input Price | Output Price | Context |
|----------|------|-------------|--------------|---------|
| anthropic/claude-opus-4 | Claude Opus 4 | $15.00/M | $75.00/M | 200K |
| anthropic/claude-sonnet-4 | Claude Sonnet 4 | $3.00/M | $15.00/M | 200K |
| anthropic/claude-haiku-4.5 | Claude Haiku 4.5 | $1.00/M | $5.00/M | 200K |

Google Gemini

| Model ID | Name | Input Price | Output Price | Context |
|----------|------|-------------|--------------|---------|
| google/gemini-3-pro-preview | Gemini 3 Pro | $2.00/M | $12.00/M | 1M |
| google/gemini-2.5-pro | Gemini 2.5 Pro | $1.25/M | $10.00/M | 1M |
| google/gemini-2.5-flash | Gemini 2.5 Flash | $0.15/M | $0.60/M | 1M |

DeepSeek

| Model ID | Name | Input Price | Output Price | Context |
|----------|------|-------------|--------------|---------|
| deepseek/deepseek-chat | DeepSeek V3.2 Chat | $0.28/M | $0.42/M | 128K |
| deepseek/deepseek-reasoner | DeepSeek V3.2 Reasoner | $0.28/M | $0.42/M | 128K |

Qwen

| Model ID | Name | Input Price | Output Price | Context |
|----------|------|-------------|--------------|---------|
| qwen/qwen3-max | Qwen3 Max | $0.46/M | $1.84/M | 262K |
| qwen/qwen-plus | Qwen Plus | $0.10/M | $0.30/M | 128K |
| qwen/qwen-turbo | Qwen Turbo | $0.02/M | $0.06/M | 128K |

xAI (Grok)

| Model ID | Name | Input Price | Output Price | Context |
|----------|------|-------------|--------------|---------|
| xai/grok-3 | Grok 3 | $3.00/M | $15.00/M | 128K |
| xai/grok-3-fast | Grok 3 Fast | $5.00/M | $25.00/M | 128K |
| xai/grok-3-mini | Grok 3 Mini | $0.30/M | $0.50/M | 128K |

Coming Soon

These models are configured but not yet available (provider API keys pending):

  • Mistral: Mistral Large 2, Mistral Medium 3, Codestral, Pixtral Large
  • Cohere: Command R+, Command R
  • Perplexity: Sonar Pro, Sonar
  • Meta: Llama 3.3 70B, Llama 3.1 405B

Image Generation

| Model ID | Name | Price |
|----------|------|-------|
| openai/dall-e-3 | DALL-E 3 | $0.04-0.08/image |
| openai/gpt-image-1 | GPT Image 1 | $0.02-0.04/image |
| google/nano-banana | Nano Banana | $0.05/image |
| google/nano-banana-pro | Nano Banana Pro | $0.10-0.15/image |
| black-forest/flux-1.1-pro | Flux 1.1 Pro | $0.04/image |

Model Categories

Models are tagged with capabilities:

  • chat - General conversation
  • reasoning - Complex problem-solving
  • coding - Code generation and analysis
  • vision - Image understanding
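These tags can be used to filter the model list client-side. The sketch below assumes a `tags` field on each model entry; that field name and the sample data are illustrative, not a confirmed part of the API response.

```python
# Sample entries; the "tags" field name is a hypothetical illustration
# of how capability tags might be exposed on each model.
models = [
    {"id": "openai/o3", "tags": ["chat", "reasoning"]},
    {"id": "openai/gpt-4o", "tags": ["chat", "vision"]},
    {"id": "anthropic/claude-sonnet-4", "tags": ["chat", "coding", "vision"]},
]

def with_capability(models, tag):
    """Return the IDs of models carrying the given capability tag."""
    return [m["id"] for m in models if tag in m["tags"]]

print(with_capability(models, "vision"))
# -> ['openai/gpt-4o', 'anthropic/claude-sonnet-4']
```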

Pricing

Prices are per 1 million tokens. Your actual cost depends on:

  1. Input tokens - Length of your prompt and context
  2. Output tokens - Length of the model's response
  3. Platform fee - 5% added to provider rates

The SDK calculates the exact price before each request.
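The arithmetic behind those three factors can be sketched as follows; the helper below is illustrative, not a BlockRun SDK function.

```python
def estimate_cost(input_tokens, output_tokens,
                  input_price_per_m, output_price_per_m,
                  platform_fee=0.05):
    """Estimate request cost in USD: provider rates plus the 5% platform fee."""
    base = (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m
    return base * (1 + platform_fee)

# e.g. GPT-4o ($2.50/M input, $10.00/M output), 1,000 input + 500 output tokens
print(f"${estimate_cost(1000, 500, 2.50, 10.00):.6f}")  # -> $0.007875
```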

Example

Python:

```python
from blockrun_llm import LLMClient

client = LLMClient()
models = client.list_models()

for model in models:
    print(f"{model['id']}: ${model['inputPrice']}/M input")
```

TypeScript:

```typescript
import { LLMClient } from '@blockrun/llm';

const client = new LLMClient({ privateKey: '0x...' });
const models = await client.listModels();

for (const model of models) {
  console.log(`${model.id}: $${model.inputPrice}/M input`);
}
```