POST /llm/chat

General chat endpoint

Example request:

curl --request POST \
  --url https://api.httpayer.com/llm/chat \
  --header 'Content-Type: application/json' \
  --data '
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant"
    },
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "model": "gpt-4o-mini",
  "temperature": 0.7
}
'

Example response:

{
  "response": "<string>",
  "model": "<string>",
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123
  },
  "finish_reason": "<string>"
}

Headers

X-Payment
string

x402 v1 payment authorization header (base64-encoded JSON). Include it on the retry after the first request returns a 402 response.
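
Below is a minimal sketch of the pay-and-retry flow in TypeScript. It assumes the 402 response body carries the x402 payment requirements as JSON; buildPaymentPayload is a hypothetical stand-in for your x402 signing logic, not part of this API.

// Hypothetical helper: sign the x402 payment requirements (assumption).
declare function buildPaymentPayload(requirements: unknown): Promise<unknown>;

async function chatWithPayment(body: unknown): Promise<unknown> {
  const url = "https://api.httpayer.com/llm/chat";
  const init = {
    method: "POST",
    headers: { "Content-Type": "application/json" } as Record<string, string>,
    body: JSON.stringify(body),
  };

  // First request: expect 402 Payment Required with payment requirements.
  const first = await fetch(url, init);
  if (first.status !== 402) return first.json();

  // Second request: retry with the signed authorization, base64-encoded
  // JSON, in the X-Payment header.
  const requirements = await first.json();
  const payload = await buildPaymentPayload(requirements);
  init.headers["X-Payment"] = btoa(JSON.stringify(payload));
  return (await fetch(url, init)).json();
}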

Body

application/json
messages
object[]
required

Array of chat messages, each with a role (e.g., system, user) and a content string

model
string

OpenAI model to use (e.g., gpt-4o-mini)

temperature
number
default:0.7

Sampling temperature (0-2)

max_tokens
integer

Maximum tokens in response
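
Taken together, the body parameters can be modeled as a TypeScript type. The interface names here are illustrative, not part of the API, and the assistant role is assumed from the usual chat-message convention.

// Illustrative types mirroring the body parameters documented above.
interface ChatMessage {
  role: "system" | "user" | "assistant"; // "assistant" is assumed
  content: string;
}

interface ChatRequest {
  messages: ChatMessage[]; // required
  model?: string;          // e.g., "gpt-4o-mini"
  temperature?: number;    // sampling temperature, 0-2 (default 0.7)
  max_tokens?: number;     // cap on response length
}

const body: ChatRequest = {
  messages: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Hello, how are you?" },
  ],
  model: "gpt-4o-mini",
  temperature: 0.7,
  max_tokens: 256,
};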

Response

Chat successful

response
string

AI-generated response

model
string

OpenAI model used

usage
object

Token usage counts (prompt_tokens, completion_tokens, total_tokens)

finish_reason
string

Reason the model stopped generating (e.g., stop or length)
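
The success response can be modeled the same way. The concrete finish_reason values are an assumption here, based on the usual OpenAI-style semantics implied by the model field.

// Illustrative type for the success response documented above.
interface ChatResponse {
  response: string;
  model: string;
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
  finish_reason: string; // assumed values: "stop", "length", ...
}

function handleChat(res: ChatResponse): void {
  // Assumed semantics: "length" means the reply was cut off by max_tokens.
  if (res.finish_reason === "length") {
    console.warn(`Truncated after ${res.usage.completion_tokens} tokens`);
  }
  console.log(`${res.model}: ${res.response}`);
}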