POST /llm/chat

General chat endpoint
Example request:

curl --request POST \
  --url https://api.httpayer.com/llm/chat \
  --header 'Content-Type: application/json' \
  --data '
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant"
    },
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "temperature": 0.7
}
'
Example response:

{
  "response": "I'm doing well, thank you for asking! How can I help you today?",
  "model": "gpt-4o-mini-2024-07-18"
}

Headers

X-Payment
string

x402 v1 payment authorization header (base64-encoded JSON). Include it on the second request, after the first request returns a 402.

Body

application/json
messages
object[]
required

Array of chat messages; each message is an object with a role and a content string (see the example request).

temperature
number
default: 0.7

Sampling temperature (0-2). Controls randomness in responses.

Required range: 0 <= x <= 2
max_tokens
integer

Maximum number of tokens in the response (capped at the server-configured limit)

Example:

500

Response

Chat successful

response
string

The AI-generated response text

Example:

"I'm doing well, thank you for asking! How can I help you today?"

model
string

The OpenAI model used, as configured on the server

Example:

"gpt-4o-mini-2024-07-18"