model, requests route to the project’s default model, or to a model in an active A/B test.
Interactions (prompt + completion pairs) are logged automatically. See Interactions for details.
## Chat completions
| Parameter | Type | Description |
|---|---|---|
| `model` | str | Model key. Omit to use the project default. |
| `messages` | list | Chat messages with `role` and `content`. |
| `labels` | dict | Key-value pairs for filtering interactions. |
| `stream` | bool | Enable streaming (default: `False`). |
| `temperature` | float | Sampling temperature. |
| `max_tokens` | int | Maximum number of tokens to generate. |
| `stop` | list | Stop sequences. |
| `top_p` | float | Top-p sampling threshold. |
| `session_id` | str or UUID | Session ID for KV-cache reuse across turns. |
| `store` | bool | Whether to log the interaction (default: `True`). |
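Putting the parameters together, a minimal non-streaming request might look like the sketch below. The `client.chat.completions.create(...)` entry point is an assumption modeled on OpenAI-style SDKs; only the parameter names come from the table above.

```python
# Request parameters from the table above; all values are illustrative.
params = {
    "model": "my_model",          # omit to route to the project default
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize our return policy."},
    ],
    "labels": {"feature": "support-bot", "env": "staging"},  # filterable later
    "temperature": 0.2,
    "max_tokens": 256,
    "store": True,                # log the interaction (the default)
}

# Hypothetical client call -- the actual SDK entry point may differ:
# completion = client.chat.completions.create(**params)
# print(completion.choices[0].message.content)
```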
## Streaming
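With `stream=True`, the completion arrives as incremental chunks rather than a single response. The sketch below assumes OpenAI-style chunks carrying a `choices[0].delta.content` field (an assumption, not a confirmed SDK shape), and demonstrates the accumulation logic with stand-in deltas:

```python
def accumulate(deltas):
    # Collect incremental text deltas into the final completion text.
    text = []
    for delta in deltas:
        if delta:  # chunks may carry empty or None deltas (e.g., role headers)
            text.append(delta)
            print(delta, end="", flush=True)
    return "".join(text)

# Against a live deployment this would be (names are assumptions):
# stream = client.chat.completions.create(
#     model="my_model",
#     messages=[{"role": "user", "content": "Tell me a story."}],
#     stream=True,
# )
# full_text = accumulate(chunk.choices[0].delta.content for chunk in stream)

# Local demonstration with stand-in deltas:
result = accumulate(["Hel", None, "lo", "!"])  # returns "Hello!"
```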
### Get the completion ID
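Assuming the response follows an OpenAI-style shape with a top-level `id` field (an assumption), the identifier can be read straight off the returned object. The sketch below uses a stand-in object in place of a live call, and the metrics call shown in the comment is a hypothetical placeholder, not a confirmed API:

```python
from types import SimpleNamespace

# Stand-in for the SDK response object; a real call would be
# completion = client.chat.completions.create(...)
completion = SimpleNamespace(id="cmpl_123abc")

completion_id = completion.id  # assuming an OpenAI-style `id` field
print(completion_id)

# Hypothetical metrics call -- placeholder, check the Metrics docs for the real API:
# client.metrics.create(completion_id=completion_id, metric="thumbs_up", value=1)
```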
Use `completion_id` to log Metrics against the response.

## Vision requests
Models with the Multimodal tag accept images alongside text. Images must be base64-encoded data URIs (JPEG, PNG, WebP, or GIF, up to 10 MB each).

## OpenAI compatibility
Use the OpenAI Python library with your Adaptive deployment: set `model` to `project_key/model_key`, and use `metadata` instead of `labels`.
**Image format difference**

The multimodal image format differs between Adaptive and OpenAI.
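On the Adaptive side, images are supplied as base64-encoded data URIs (per the Vision requests section above). A minimal helper for building one is sketched below; the surrounding message shape is left as a hedged comment, since the exact schema isn't shown here:

```python
import base64

def to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    # Adaptive expects images as base64-encoded data URIs
    # (JPEG, PNG, WebP, or GIF, up to 10 MB each).
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"

# Real use: uri = to_data_uri(open("photo.png", "rb").read(), "image/png")
uri = to_data_uri(b"abc")

# Hypothetical message shape (not confirmed -- check the SDK reference):
# {"role": "user", "content": [
#     {"type": "text", "text": "What is in this image?"},
#     {"type": "image_url", "image_url": {"url": uri}},
# ]}
```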
## HTTP requests
Use any HTTP client to call the chat completions endpoint directly.
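A standard-library sketch of such a request; the endpoint URL and key are placeholders, and the `/v1/chat/completions` path assumes an OpenAI-compatible route (an assumption, so check your deployment's docs):

```python
import json
import urllib.request

# Placeholders -- substitute your deployment's base URL and API key.
url = "https://my-deployment.example.com/v1/chat/completions"
payload = {
    "model": "my_model",
    "messages": [{"role": "user", "content": "Hello!"}],
    "labels": {"env": "staging"},
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_ADAPTIVE_API_KEY",
    },
)

# Against a live deployment:
# with urllib.request.urlopen(req) as resp:
#     body = json.load(resp)
#     print(body["choices"][0]["message"]["content"])
```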

