Client

Adaptive (Sync)

Adaptive(base_url: str, api_key: str | None = None, default_headers: Optional = None, timeout_secs: float | None = 90.0)
Instantiates a new synchronous Adaptive client bound to a use case.
parameters
  • base_url: The base URL for the Adaptive API.
  • api_key: API key for authentication. Defaults to None, in which case environment variable ADAPTIVE_API_KEY needs to be set.
  • timeout_secs: Timeout in seconds for HTTP requests. Defaults to 90.0 seconds. Set to None for no timeout.

AsyncAdaptive (Async)

AsyncAdaptive(base_url: str, api_key: str | None = None, default_headers: Optional = None, timeout_secs: float | None = 90.0)
Instantiates a new asynchronous Adaptive client bound to a use case.
parameters
  • base_url: The base URL for the Adaptive API.
  • api_key: API key for authentication. Defaults to None, in which case environment variable ADAPTIVE_API_KEY needs to be set.
  • timeout_secs: Timeout in seconds for HTTP requests. Defaults to 90.0 seconds. Set to None for no timeout.

Resources

A/B Tests

Resource to interact with A/B tests. Access via adaptive.ab_tests

cancel

cancel(key: str)
Cancel an ongoing AB test.
parameters
  • key: The AB test key.

create

create(ab_test_key: str, feedback_key: str, models: List[str], traffic_split: float = 1.0, feedback_type: Literal['metric', 'preference'] = 'metric', auto_deploy: bool = False, use_case: str | None = None)
Creates a new A/B test in the client’s use case.
parameters
  • ab_test_key: A unique key to identify the AB test.
  • feedback_key: The feedback key against which the AB test will run.
  • models: The models to include in the AB test; they must be attached to the use case.
  • traffic_split: Fraction of production traffic to route to the AB test. traffic_split × 100% of inference requests for the use case will be routed randomly to one of the models included in the AB test.
  • feedback_type: What type of feedback to run the AB test on, metric or preference.
  • auto_deploy: If set to True, when the AB test is completed, the winning model automatically gets promoted to the use case default model.
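The traffic_split semantics can be simulated: with a split of 0.25, roughly a quarter of requests go randomly to one of the A/B test models and the rest to the use case default. A sketch with made-up model names, not SDK code:

```python
import random

def route_request(traffic_split: float, ab_models: list[str],
                  default_model: str, rng: random.Random) -> str:
    """Send a traffic_split fraction of requests randomly to one of the
    A/B test models; everything else goes to the default model."""
    if rng.random() < traffic_split:
        return rng.choice(ab_models)
    return default_model

rng = random.Random(0)
models = ["model-a", "model-b"]
routed = [route_request(0.25, models, "default", rng) for _ in range(10_000)]
share = sum(m != "default" for m in routed) / len(routed)
print(round(share, 2))  # close to 0.25
```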

get

get(key: str)
Get the details of an AB test.
parameters
  • key: The AB test key.

list

list(active: bool | None = None, status: Literal['warmup', 'in_progress', 'done', 'cancelled'] | None = None, use_case: str | None = None)
List the use case AB tests.
parameters
  • active: Filter on active or inactive AB tests.
  • status: Filter on one of the possible AB test statuses.
  • use_case: Use case key. Falls back to client’s default if not provided.

Artifacts

Resource to interact with job artifacts. Access via adaptive.artifacts

download

download(artifact_id: str, destination_path: str)
Download an artifact file to a local path.
parameters
  • artifact_id: The UUID of the artifact to download.
  • destination_path: Local file path where the artifact will be saved.

Chat

Access via adaptive.chat

create

create(messages: List[input_types.ChatMessage], stream: bool | None = None, model: str | None = None, stop: List[str] | None = None, max_tokens: int | None = None, temperature: float | None = None, top_p: float | None = None, stream_include_usage: bool | None = None, session_id: str | UUID | None = None, use_case: str | None = None, user: str | UUID | None = None, ab_campaign: str | None = None, n: int | None = None, labels: Dict[str, str] | None = None, store: bool | None = None)
Create a chat completion.
parameters
  • messages: Input messages, each dict with keys role and content.
  • stream: If True, partial message deltas will be returned. When the stream is finished, chunk.choices will be None.
  • model: Target model key for inference. If None, the requests will be routed to the use case’s default model.
  • stop: Sequences at which the API will stop generating further tokens.
  • max_tokens: Maximum number of tokens to generate.
  • temperature: Sampling temperature.
  • top_p: Threshold for top-p sampling.
  • stream_include_usage: If set, an additional chunk will be streamed with the token usage statistics for the entire request.
  • session_id: Session ID to group related interactions.
  • use_case: Use case key. Falls back to client’s default if not provided.
  • user: ID of user making request. If not None, will be logged as metadata for the request.
  • ab_campaign: AB test key. If set, request will be guaranteed to count towards AB test results, no matter the configured traffic_split.
  • n: Number of chat completions to generate for the input messages.
  • labels: Key-value pairs of interaction labels.
  • store: Whether to store the interaction for future reference. Interactions are stored by default.
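Each entry in messages is a dict with role and content keys. A minimal client-side validation helper, illustrative only; the exact set of accepted roles is an assumption here:

```python
VALID_ROLES = {"system", "user", "assistant"}  # assumed role set

def validate_messages(messages: list[dict]) -> list[dict]:
    """Check that each chat message is a dict with 'role' and 'content' keys."""
    for i, msg in enumerate(messages):
        missing = {"role", "content"} - msg.keys()
        if missing:
            raise ValueError(f"message {i} is missing keys: {missing}")
        if msg["role"] not in VALID_ROLES:
            raise ValueError(f"message {i} has unknown role {msg['role']!r}")
    return messages

messages = validate_messages([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize A/B testing in one sentence."},
])
print(len(messages))  # 2
```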

Compute Pools

Resource to interact with compute pools. Access via adaptive.compute_pools

list

list()
List all compute pools available in the system.

resize_inference_partition

resize_inference_partition(compute_pool_key: str, size: int)
Resize the inference partitions of all harmony groups in a compute pool.

Recipes

Resource to interact with custom scripts. Access via adaptive.recipes

delete

delete(recipe_key: str, use_case: str | None = None)
Delete a recipe.
parameters
  • recipe_key: The key or ID of the recipe to delete.
  • use_case: Optional use case key. Falls back to client’s default.

generate_sample_input

generate_sample_input(recipe_key: str, use_case: str | None = None)
Generate a sample input dictionary based on the recipe’s JSON schema.
parameters
  • recipe_key: The key or ID of the recipe.
  • use_case: Optional use case key. Falls back to client’s default.

get

get(recipe_key: str, use_case: str | None = None)
Get details for a specific recipe.
parameters
  • recipe_key: The key or ID of the recipe.
  • use_case: Optional use case key. Falls back to client’s default.

list

list(use_case: str | None = None)
List all custom recipes for a use case.
parameters
  • use_case: Optional use case key. Falls back to client’s default.

update

update(recipe_key: str, path: str | None = None, entrypoint: str | None = None, name: str | None = None, description: str | None = None, labels: Sequence[tuple[str, str]] | None = None, use_case: str | None = None)
Update an existing recipe.
parameters
  • recipe_key: The key of the recipe to update.
  • path: Optional new path to a Python file or directory to replace recipe code. If None, only metadata (name, description, labels) is updated.
  • entrypoint: Optional path to the recipe entrypoint file, relative to the path directory. Only applicable when path is a directory. Raises ValueError if path is a single file (entrypoint is not supported for single files) or if path is a directory that already contains main.py. Raises FileNotFoundError if the specified entrypoint file doesn't exist in the directory. If path is a directory and entrypoint is None, the directory must contain a main.py file, or FileNotFoundError is raised.
  • name: Optional new display name.
  • description: Optional new description.
  • labels: Optional new key-value labels as tuples of (key, value).
  • use_case: Optional use case key. Falls back to client’s default.

upload

upload(path: str, recipe_key: str | None = None, entrypoint: str | None = None, name: str | None = None, description: str | None = None, labels: dict[str, str] | None = None, use_case: str | None = None)
Upload a recipe from either a single Python file or a directory (path).
parameters
  • path: Path to a Python file or directory containing the recipe.
  • recipe_key: Optional unique key for the recipe. If not provided, it is inferred from: the file name (without .py) if path is a file; "dir_name/entrypoint_name" if path is a directory and a custom entrypoint is specified; or the directory name if path is a directory and no custom entrypoint is specified.
  • entrypoint: Optional path to the recipe entrypoint file, relative to the path directory. Only applicable when path is a directory. Raises ValueError if path is a single file (entrypoint is not supported for single files) or if path is a directory that already contains main.py. Raises FileNotFoundError if the specified entrypoint file doesn't exist in the directory. If path is a directory and entrypoint is None, the directory must contain a main.py file, or FileNotFoundError is raised.
  • name: Optional display name for the recipe.
  • description: Optional description.
  • labels: Optional key-value labels.
  • use_case: Optional use case identifier.

Datasets

Resource to interact with file datasets. Access via adaptive.datasets

delete

delete(key: str, use_case: str | None = None)
Delete dataset.

get

get(key: str, use_case: str | None = None)
Get details for dataset.
parameters
  • key: Dataset key.

list

list(use_case: str | None = None)
List previously uploaded datasets.

upload

upload(file_path: str, dataset_key: str, name: str | None = None, use_case: str | None = None)
Upload a dataset from a file. The file must be JSONL, where each line matches a supported structure.
parameters
  • file_path: Path to jsonl file.
  • dataset_key: New dataset key.
  • name: Optional name to render in UI; if None, defaults to same as dataset_key.
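A JSONL file carries one JSON object per line. A minimal way to produce one for upload; the chat-style record shape shown is a guess at one supported structure, not a confirmed schema:

```python
import json
import os
import tempfile

# Hypothetical records in a chat-style shape (structure is an assumption).
records = [
    {"messages": [{"role": "user", "content": "hi"},
                  {"role": "assistant", "content": "hello"}]},
    {"messages": [{"role": "user", "content": "bye"},
                  {"role": "assistant", "content": "goodbye"}]},
]

path = os.path.join(tempfile.mkdtemp(), "dataset.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")  # one JSON object per line

# Read it back to confirm the line-per-record layout.
with open(path, encoding="utf-8") as f:
    lines = [json.loads(line) for line in f]
print(len(lines))  # 2
```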

Embeddings

Resource to interact with embeddings. Access via adaptive.embeddings

create

create(input: str, model: str | None = None, encoding_format: Literal['Float', 'Base64'] = 'Float', use_case: str | None = None, user: str | UUID | None = None)
Create an embeddings inference request.
parameters
  • input: Input text to embed.
  • model: Target model key for inference. If None, requests will be routed to the use case's default model. The request will fail if the default model is not an embedding model.
  • encoding_format: Encoding format of response.
  • user: ID of user making the request. If not None, will be logged as metadata for the request.

Graders

Resource to interact with grader definitions used to evaluate model completions. Access via adaptive.graders

delete

delete(grader_key: str, use_case: str | None = None)
Delete a grader. Returns True on success.

get

get(grader_key: str, use_case: str | None = None)
Retrieve a specific grader by ID or key.

list

list(use_case: str | None = None)
List all graders for the given use case.

lock

lock(grader_key: str, locked: bool, use_case: str | None = None)
Lock or unlock a grader.
parameters
  • grader_key: ID or key of the grader.
  • locked: Whether to lock (True) or unlock (False) the grader.
  • use_case: Explicit use-case key. Falls back to client.default_use_case.

test_external_endpoint

test_external_endpoint(url: str)
Test external endpoint to check if it is reachable from Adaptive and returns a valid response.

Integrations

Resource to manage integrations and notification subscriptions. Access via adaptive.integrations

create

create(team: str, input: CreateIntegrationInput)
Create a new integration.
parameters
  • team: Team ID or key.
  • input: Integration creation input.

delete

delete(id: str)
Delete an integration.
parameters
  • id: Integration UUID.

get

get(id: str)
Get a specific integration by ID.
parameters
  • id: Integration UUID.

get_provider

get_provider(name: str)
Get a specific provider by name.
parameters
  • name: Provider name.

list

list(team: str)
List integrations for a team.
parameters
  • team: Team ID or key.

list_providers

list_providers()
List available integration providers.

test_notification

test_notification(input: TestNotificationInput)
Test notification delivery.
parameters
  • input: Test notification input with topic, scope, and payload.

update

update(id: str, input: UpdateIntegrationInput)
Update an existing integration.
parameters
  • id: Integration UUID.
  • input: Integration update input.

Jobs

Resource to interact with jobs. Access via adaptive.jobs

cancel

cancel(job_id: str)
Cancel a running job.
parameters
  • job_id: The ID of the job to cancel.

get

get(job_id: str)
Get the details of a specific job.
parameters
  • job_id: The ID of the job to retrieve.

list

list(first: int | None = 100, last: int | None = None, after: str | None = None, before: str | None = None, kind: list[Literal['TRAINING', 'EVALUATION', 'DATASET_GENERATION', 'MODEL_CONVERSION', 'CUSTOM']] | None = None, use_case: str | None = None)
List jobs with pagination and filtering options.
parameters
  • first: Number of jobs to return from the beginning.
  • last: Number of jobs to return from the end.
  • after: Cursor for forward pagination.
  • before: Cursor for backward pagination.
  • kind: Filter by job types.
  • use_case: Filter by use case key.

run

run(recipe_key: str, num_gpus: int, args: dict[str, Any] | None = None, name: str | None = None, use_case: str | None = None, compute_pool: str | None = None)
Run a job using a specified recipe.
parameters
  • recipe_key: The key of the recipe to run.
  • num_gpus: Number of GPUs to allocate for the job.
  • args: Optional arguments to pass to the recipe; must match the recipe schema.
  • name: Optional human-readable name for the job.
  • use_case: Use case key for the job.
  • compute_pool: Optional compute pool key to run the job on.

Feedback

Resource to interact with and log feedback. Access via adaptive.feedback

get_key

get_key(feedback_key: str)
Get the details of a feedback key.
parameters
  • feedback_key: The feedback key.

link

link(feedback_key: str, use_case: str | None = None)
Link a feedback key to the client’s use case. Once a feedback key is linked to a use case, its statistics and associations with interactions will render in the UI.
parameters
  • feedback_key: The feedback key to be linked.

list_keys

list_keys()
List all feedback keys.

log_metric

log_metric(value: bool | float | int, completion_id: str | UUID, feedback_key: str, user: str | UUID | None = None, details: str | None = None)
Log metric feedback for a single completion, which can be a float, int or bool depending on the kind of feedback_key it is logged against.
parameters
  • value: The feedback value.
  • completion_id: The completion_id to attach the feedback to.
  • feedback_key: The feedback key to log against.
  • user: ID of user submitting feedback. If not None, will be logged as metadata for the request.
  • details: Textual details for the feedback. Can be used to provide further context on the feedback value.
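Since a "bool" feedback key only accepts 0, 1, True, or False, while a "scalar" key accepts any integer or float (see register_key), client-side validation could look like this illustrative helper:

```python
def check_metric_value(value, kind: str) -> bool:
    """Validate a feedback value against the key's kind:
    'bool' accepts only 0, 1, True or False;
    'scalar' accepts any int or float."""
    if kind == "bool":
        return value in (0, 1, True, False)
    if kind == "scalar":
        return isinstance(value, (int, float))
    raise ValueError(f"unknown feedback kind: {kind}")

print(check_metric_value(True, "bool"))   # True
print(check_metric_value(0.5, "bool"))    # False
print(check_metric_value(0.5, "scalar"))  # True
```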

log_preference

log_preference(feedback_key: str, preferred_completion: str | UUID | input_types.ComparisonCompletion, other_completion: str | UUID | input_types.ComparisonCompletion, user: str | UUID | None = None, messages: List[Dict[str, str]] | None = None, tied: Literal['good', 'bad'] | None = None, use_case: str | None = None)
Log preference feedback between 2 completions.
parameters
  • feedback_key: The feedback key to log against.
  • preferred_completion: Can be a completion_id or a dict with keys model and text, corresponding to a valid model key and its attributed completion.
  • other_completion: Can be a completion_id or a dict with keys model and text, corresponding to a valid model key and its attributed completion.
  • user: ID of user submitting feedback.
  • messages: Input chat messages, each dict with keys role and content. Ignored if preferred_completion and other_completion are completion IDs.
  • tied: Indicator if both completions tied as equally bad or equally good.
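preferred_completion and other_completion accept either a completion ID or a model/text dict. The two shapes side by side; the IDs and model keys below are made up for illustration:

```python
import uuid

# Shape 1: reference completions already logged in Adaptive by their IDs.
by_id = {
    "preferred_completion": str(uuid.uuid4()),
    "other_completion": str(uuid.uuid4()),
}

# Shape 2: provide raw completions inline, each attributed to a model key;
# in this case `messages` supplies the shared prompt.
inline = {
    "preferred_completion": {"model": "model-a", "text": "Paris."},
    "other_completion": {"model": "model-b", "text": "London."},
    "messages": [{"role": "user", "content": "Capital of France?"}],
}

print(sorted(inline["preferred_completion"].keys()))  # ['model', 'text']
```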

register_key

register_key(key: str, kind: Literal['scalar', 'bool'], scoring_type: Literal['higher_is_better', 'lower_is_better'] = 'higher_is_better', name: str | None = None, description: str | None = None)
Register a new feedback key. Feedback can be logged against this key once it is created.
parameters
  • key: Feedback key.
  • kind: Feedback kind. If "bool", you can log values 0, 1, True or False only. If "scalar", you can log any integer or float value.
  • scoring_type: Indication of what good means for this feedback key: a higher numeric value (or True), or a lower numeric value (or False).
  • name: Human-readable feedback name that will render in the UI. If None, will be the same as key.
  • description: Description of intended purpose or nuances of feedback. Will render in the UI.
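scoring_type determines which of two logged values counts as better, e.g. when comparing models in an A/B test. An illustrative helper capturing that rule:

```python
def is_improvement(new: float, old: float, scoring_type: str) -> bool:
    """Return True if `new` beats `old` under the key's scoring_type."""
    if scoring_type == "higher_is_better":
        return new > old
    if scoring_type == "lower_is_better":
        return new < old
    raise ValueError(f"unknown scoring_type: {scoring_type}")

print(is_improvement(0.9, 0.8, "higher_is_better"))    # True
print(is_improvement(120.0, 95.0, "lower_is_better"))  # False
```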

unlink

unlink(feedback_key: str, use_case: str | None = None)
Unlink a feedback key from the client’s use case.
parameters
  • feedback_key: The feedback key to be unlinked.

Interactions

Resource to interact with interactions. Access via adaptive.interactions

create

create(messages: List[input_types.ChatMessage], completion: str, model: str | None = None, feedbacks: List[input_types.InteractionFeedbackDict] | None = None, user: str | UUID | None = None, session_id: str | UUID | None = None, use_case: str | None = None, ab_campaign: str | None = None, labels: Dict[str, str] | None = None, created_at: str | None = None)
Create/log an interaction.
parameters
  • messages: Input chat messages, each dict should have keys role and content.
  • completion: Model completion.
  • model: Model key.
  • feedbacks: List of feedbacks, each dict with keys feedback_key, value, and optionally details.
  • user: ID of user making the request. If not None, will be logged as metadata for the interaction.
  • session_id: Session ID to group related interactions.
  • use_case: Use case key. Falls back to client’s default if not provided.
  • ab_campaign: AB test key. If set, provided feedbacks will count towards AB test results.
  • labels: Key-value pairs of interaction labels.
  • created_at: Timestamp of interaction creation or ingestion.
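The feedbacks argument is a list of dicts, each carrying feedback_key, value, and an optional details entry. Building one; the keys and values below are illustrative:

```python
# Hypothetical feedback entries attached to a logged interaction.
feedbacks = [
    {"feedback_key": "thumbs_up", "value": True},
    {"feedback_key": "latency_ms", "value": 143.0,
     "details": "measured client-side"},
]

# Each entry must at least carry a feedback_key and a value.
assert all({"feedback_key", "value"} <= fb.keys() for fb in feedbacks)
print(len(feedbacks))  # 2
```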

get

get(completion_id: str, use_case: str | None = None)
Get the details for one specific interaction.
parameters
  • completion_id: The ID of the completion.

list

list(order: List[input_types.Order] | None = None, filters: input_types.ListCompletionsFilterInput | None = None, page: input_types.CursorPageInput | None = None, group_by: Literal['model', 'prompt'] | None = None, use_case: str | None = None)
List interactions in client’s use case.
parameters
  • order: Ordering of results.
  • filters: List filters.
  • page: Paging config.
  • group_by: Retrieve interactions grouped by selected dimension.

Models

Resource to interact with models. Access via adaptive.models

add_external

add_external(name: str, external_model_id: str, api_key: str, provider: Literal['open_ai', 'google', 'azure'], endpoint: str | None = None)
Add a proprietary external model to the Adaptive model registry.
parameters
  • name: Adaptive name for the new model.
  • external_model_id: Should match the model id publicly shared by the model provider.
  • api_key: API Key for authentication against external model provider.
  • provider: External proprietary model provider.

add_hf_model

add_hf_model(hf_model_id: SupportedHFModels, output_model_name: str, output_model_key: str, hf_token: str, compute_pool: str | None = None)
Add model from the HuggingFace Model hub to Adaptive model registry. It will take several minutes for the model to be downloaded and converted to Adaptive format.
parameters
  • hf_model_id: The ID of the selected model repo on HuggingFace Model Hub.
  • output_model_name: Human-readable name for the new model, rendered in the UI.
  • output_model_key: The key that will identify the new model in Adaptive.
  • hf_token: Your HuggingFace token, needed to validate access to gated/restricted models.

add_to_use_case

add_to_use_case(model: str, use_case: str | None = None)
Attach a model to the client’s use case.
parameters
  • model: Model key.
  • use_case: Use case key. Falls back to client’s default if not provided.

attach

attach(model: str, wait: bool = False, make_default: bool = False, use_case: str | None = None, placement: input_types.ModelPlacementInput | None = None)
Attach a model to the client’s use case.
parameters
  • model: Model key.
  • wait: If the model is not deployed already, attaching it to the use case will automatically deploy it. If True, this call blocks until model is Online.
  • make_default: Make the model the use case’s default on attachment.

deploy

deploy(model: str, wait: bool = False, make_default: bool = False, use_case: str | None = None, placement: input_types.ModelPlacementInput | None = None)
Deploy a model for inference in the specified use case.
parameters
  • model: Model key.
  • wait: If True, block until the model is online.
  • make_default: Make the model the use case’s default after deployment.
  • use_case: Use case key.
  • placement: Optional placement configuration for the model.

detach

detach(model: str, use_case: str)
Detach model from client’s use case.
parameters
  • model: Model key.

get

get(model: str)
Get the details for a model.
parameters
  • model: Model key.

list

list(filter: input_types.ModelFilter | None = None)
List all models in Adaptive model registry.

terminate

terminate(model: str, force: bool = False)
Terminate model, removing it from memory and making it unavailable to all use cases.
parameters
  • model: Model key.
  • force: If the model is attached to several use cases, force must be True for the model to be terminated.

update

update(model: str, is_default: bool | None = None, desired_online: bool | None = None, use_case: str | None = None, placement: input_types.ModelPlacementInput | None = None)
Update config of model attached to client’s use case.
parameters
  • model: Model key.
  • is_default: Change the selection of the model as default for the use case. True to promote to default, False to demote from default. If None, no changes are applied.
  • desired_online: Turn model inference on or off for the client use case. This does not influence the global status of the model, it is use case-bounded. If None, no changes are applied.

update_compute_config

update_compute_config(model: str, compute_config: input_types.ModelComputeConfigInput)
Update compute config of model.

Permissions

Resource to list permissions. Access via adaptive.permissions

list

list()
List all available permissions in the system.

Roles

Resource to manage roles. Access via adaptive.roles

create

create(key: str, permissions: List[str], name: str | None = None)
Create a new role.
parameters
  • key: Role key.
  • permissions: List of permission identifiers such as use_case:read. You can list all possible permissions with client.permissions.list().
  • name: Role name; if not provided, defaults to key.

list

list()
List all roles.

Teams

Resource to manage teams. Access via adaptive.teams

create

create(key: str, name: str | None = None)
Create a new team.
parameters
  • key: Unique key for the team.
  • name: Human-readable team name. If not provided, defaults to key.

list

list()
List all teams.

Use Cases

Resource to interact with use cases. Access via adaptive.use_cases

create

create(key: str, name: str | None = None, description: str | None = None, team: str | None = None)
Create a new use case.
parameters
  • key: Use case key.
  • name: Human-readable use case name which will be rendered in the UI. If not set, will be the same as key.
  • description: Description of the use case which will be rendered in the UI.
  • team: Team key to associate the use case with.

get

get(use_case: str | None = None)
Get details for the client’s use case.

list

list()
List all use cases.

share

share(use_case: str, team: str, role: str, is_owner: bool = False)
Share use case with another team. Requires use_case:share permissions on the target use case.
parameters
  • use_case: Use case key.
  • team: Team key.
  • role: Role key.

unshare

unshare(use_case: str, team: str)
Remove use case access for a team. Requires use_case:share permissions on the target use case.
parameters
  • use_case: Use case key.
  • team: Team key.

Users

Resource to manage users and permissions. Access via adaptive.users

add_to_team

add_to_team(email: str, team: str, role: str)
Add a user to a team with a given role.
parameters
  • email: User email.
  • team: Key of the team to which the user will be added.
  • role: Assigned role key.

create

create(email: str, name: str, teams_with_role: Sequence[tuple[str, str]])
Create a user with preset teams and roles.
parameters
  • email: User’s email address.
  • name: User’s display name.
  • teams_with_role: Sequence of (team_key, role_key) tuples assigning the user to teams with specific roles.

delete

delete(email: str)
Delete a user from the system.
parameters
  • email: The email address of the user to delete.

list

list()
List all users registered to the Adaptive deployment.

me

me()
Get details of current user.

remove_from_team

remove_from_team(email: str, team: str)
Remove user from team.
parameters
  • email: User email.
  • team: Key of team to remove user from.