Chat Completions API

The Chat Completions API allows you to generate text responses using the Routstr platform’s proxy to OpenRouter.ai.

Base URL

https://api.routstr.com

Endpoints

Create Chat Completion

POST /v1/chat/completions

Generate a completion for the provided messages and parameters.

Request Body

| Parameter | Type | Description |
|-----------|------|-------------|
| model | string | ID of the model to use (e.g., "gpt-4") |
| messages | array | Array of message objects with role and content |
| max_tokens | integer | Maximum number of tokens to generate |
| temperature | number | Sampling temperature (0-2) |
| stream | boolean | Whether to stream the response |

Example Request

{
  "model": "gpt-4",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello, who are you?"
    }
  ],
  "max_tokens": 150,
  "temperature": 0.7
}
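
The following is a minimal sketch of sending this request from Python with the requests library. It assumes authentication is done by passing your Cashu token as a Bearer token in the Authorization header; the token value shown is a placeholder, so adjust both to match how you actually authenticate.

import requests

API_URL = "https://api.routstr.com/v1/chat/completions"
CASHU_TOKEN = "cashuA..."  # placeholder; supply your own Cashu token

payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, who are you?"},
    ],
    "max_tokens": 150,
    "temperature": 0.7,
}

response = requests.post(
    API_URL,
    # Assumption: Cashu token sent as a Bearer token
    headers={"Authorization": f"Bearer {CASHU_TOKEN}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

data = response.json()
# Print the assistant's reply from the first choice
print(data["choices"][0]["message"]["content"])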

Example Response

{
  "id": "chatcmpl-123abc",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "I'm an AI assistant designed to provide helpful and informative responses. How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 18,
    "completion_tokens": 22,
    "total_tokens": 40
  }
}

Streaming Response

To receive a streaming response, set stream: true in your request body:

{
  "model": "gpt-4",
  "messages": [
    {
      "role": "user",
      "content": "Write a short poem about technology."
    }
  ],
  "stream": true
}

The API will then return a stream of server-sent events, each containing a delta of the response.
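Below is a sketch of consuming that stream from Python with the requests library. It assumes the same Bearer-token authentication as above and an OpenAI-style event format, where each event line is prefixed with "data: ", deltas appear under choices[0].delta, and the stream ends with a "[DONE]" sentinel; these details are assumptions, not confirmed by this page.

import json
import requests

API_URL = "https://api.routstr.com/v1/chat/completions"
CASHU_TOKEN = "cashuA..."  # placeholder; supply your own Cashu token

payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "user", "content": "Write a short poem about technology."}
    ],
    "stream": True,
}

with requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {CASHU_TOKEN}"},  # assumed auth scheme
    json=payload,
    stream=True,
    timeout=60,
) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if not line:
            continue  # skip blank lines between server-sent events
        decoded = line.decode("utf-8")
        if not decoded.startswith("data: "):
            continue
        data = decoded[len("data: "):]
        if data == "[DONE]":  # assumed OpenAI-style end-of-stream sentinel
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        # Print each incremental piece of the response as it arrives
        print(delta.get("content", ""), end="", flush=True)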

Error Handling

| Status Code | Description |
|-------------|-------------|
| 400 | Bad Request - Check your request parameters |
| 401 | Unauthorized - Invalid Cashu token |
| 402 | Payment Required - Insufficient funds |
| 404 | Not Found - Model not found |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Server Error - Something went wrong |
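
As a rough client-side sketch, you can branch on these status codes before reading the response body. The error body format is not documented here, so this example only inspects the status code; the function and messages are illustrative, not part of the API.

import requests

def create_chat_completion(payload: dict, token: str) -> dict:
    """Send a chat completion request and surface common error cases."""
    response = requests.post(
        "https://api.routstr.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {token}"},  # assumed auth scheme
        json=payload,
        timeout=60,
    )
    if response.status_code == 402:
        raise RuntimeError("Payment required: insufficient funds on your Cashu token")
    if response.status_code == 429:
        raise RuntimeError("Rate limit exceeded: back off and retry later")
    response.raise_for_status()  # raises for any other 4xx/5xx status
    return response.json()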