An interaction is a prompt, completion, and any associated feedback or labels. Adaptive logs interactions automatically when you run inference, creating a record you can browse, filter, and use for training.
Log interactions for completions generated outside Adaptive:

```python
adaptive.interactions.create(
    model="model_key",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    completion="Hi there! How can I help you today?",
    labels={"source": "external"},
)
```
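If your existing records are stored in an OpenAI-style chat format, a small adapter can map each record onto the arguments `interactions.create` expects. This is a hypothetical helper (the name `to_interaction_kwargs` and the record layout are assumptions, not part of the SDK), sketched under the assumption that the final message in each record is the assistant's reply:

```python
def to_interaction_kwargs(record: dict, model: str, source: str = "external") -> dict:
    """Map an OpenAI-style chat record onto interactions.create arguments.

    Assumes the last message is the assistant completion and everything
    before it forms the prompt.
    """
    *prompt, last = record["messages"]
    if last.get("role") != "assistant":
        raise ValueError("expected the final message to be the assistant reply")
    return {
        "model": model,
        "messages": prompt,
        "completion": last["content"],
        "labels": {"source": source},
    }


record = {
    "messages": [
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi there!"},
    ]
}
kwargs = to_interaction_kwargs(record, model="model_key")
# The result can then be passed as adaptive.interactions.create(**kwargs).
```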
Use manual logging to import historical data or to record completions from external models.

For multimodal interactions, pass image content parts in `messages` (see Vision models):
```python
adaptive.interactions.create(
    model="model_key",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {"type": "image_url", "image_url": "data:image/png;base64,..."},
            ],
        }
    ],
    completion="The image shows a bar chart comparing model accuracy across three benchmarks.",
    labels={"source": "external"},
)
```
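To build the `image_url` data URL from raw image bytes, standard-library `base64` is enough. A minimal sketch (the helper name `to_data_url` is an assumption, not an SDK function):

```python
import base64


def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data URL for an image_url content part."""
    return f"data:{mime};base64," + base64.b64encode(image_bytes).decode("ascii")


# Hypothetical usage: build the content part from bytes read off disk.
part = {"type": "image_url", "image_url": to_data_url(b"hi")}
```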
Bulk upload with async
Upload large datasets efficiently with async concurrency:
```python
import asyncio
import os

from adaptive_sdk import AsyncAdaptive


async def upload_data(data: list[dict], labels: dict, max_concurrency: int = 30):
    adaptive = AsyncAdaptive(
        base_url=os.environ["ADAPTIVE_URL"],
        api_key=os.environ["ADAPTIVE_API_KEY"],
    )
    adaptive.set_default_project("my-project")
    semaphore = asyncio.Semaphore(max_concurrency)

    async def upload_one(item):
        # The semaphore caps the number of in-flight requests.
        async with semaphore:
            await adaptive.interactions.create(
                messages=item["messages"],
                completion=item["completion"],
                labels=labels,
            )

    await asyncio.gather(*[upload_one(item) for item in data])


# Example data format
data = [
    {
        "messages": [{"role": "user", "content": "Hello!"}],
        "completion": "Hi there!",
    }
]

asyncio.run(upload_data(data, labels={"dataset": "training-v1"}))
```
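If your historical data lives on disk as JSONL (one interaction per line, a common but assumed layout), a small loader can produce the list shape the bulk-upload function above expects. The helper name `load_jsonl` is hypothetical:

```python
import json
from pathlib import Path


def load_jsonl(path: str) -> list[dict]:
    """Read one JSON object per line, skipping blank lines."""
    return [
        json.loads(line)
        for line in Path(path).read_text().splitlines()
        if line.strip()
    ]
```

Each returned dict can then be passed through as an item in `data`.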
Navigate to your project and open the Interactions tab to view logged data. Filter by label, time range, or feedback values to find specific interactions. Click any row to see the full prompt and completion.

Use interactions to:
- Audit model responses
- Find examples for training datasets
- Debug unexpected behavior
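When assembling a training dataset from exported interactions, label-based filtering is often the first step. A minimal sketch, assuming interactions have been exported as a list of dicts with a `labels` field (the helper name `filter_by_labels` and the export shape are assumptions):

```python
def filter_by_labels(interactions: list[dict], required: dict) -> list[dict]:
    """Keep interactions whose labels contain every required key/value pair."""
    return [
        it
        for it in interactions
        if all(it.get("labels", {}).get(k) == v for k, v in required.items())
    ]


logged = [
    {"completion": "Hi there!", "labels": {"source": "external", "dataset": "training-v1"}},
    {"completion": "Hello!", "labels": {"source": "live"}},
]
training = filter_by_labels(logged, {"dataset": "training-v1"})
```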
Multimodal interactions display images inline, so you can review exactly what the model saw.