
Client

Adaptive (Sync)

Adaptive(base_url: str, api_key: str | None = None, default_headers: Optional = None, timeout_secs: float | None = 90.0)
Instantiates a new synchronous Adaptive client bound to a project.
parameters
  • base_url: The base URL for the Adaptive API.
  • api_key: API key for authentication. Defaults to None, in which case the environment variable ADAPTIVE_API_KEY must be set.
  • default_headers: Optional HTTP headers to include with every request.
  • timeout_secs: Timeout in seconds for HTTP requests. Defaults to 90.0 seconds. Set to None for no timeout.

AsyncAdaptive (Async)

AsyncAdaptive(base_url: str, api_key: str | None = None, default_headers: Optional = None, timeout_secs: float | None = 90.0)
Instantiates a new asynchronous Adaptive client bound to a project.
parameters
  • base_url: The base URL for the Adaptive API.
  • api_key: API key for authentication. Defaults to None, in which case the environment variable ADAPTIVE_API_KEY must be set.
  • default_headers: Optional HTTP headers to include with every request.
  • timeout_secs: Timeout in seconds for HTTP requests. Defaults to 90.0 seconds. Set to None for no timeout.

Resources

A/B tests

Resource to interact with A/B tests. Access via adaptive.ab_tests

cancel

cancel(key: str)
Cancel an ongoing AB test.
parameters
  • key: The AB test key.

create

create(ab_test_key: str, feedback_key: str, models: List[str], traffic_split: float = 1.0, feedback_type: Literal['metric', 'preference'] = 'metric', auto_deploy: bool = False, project: str | None = None)
Creates a new A/B test in the client’s project.
parameters
  • ab_test_key: A unique key to identify the AB test.
  • feedback_key: The feedback key against which the AB test will run.
  • models: The models to include in the AB test; they must be attached to the project.
  • traffic_split: Fraction of production traffic to route to the AB test: traffic_split × 100% of the project’s inference requests are sent randomly to one of the models included in the test.
  • feedback_type: What type of feedback to run the AB test on, metric or preference.
  • auto_deploy: If set to True, when the AB test is completed, the winning model automatically gets promoted to the project default model.
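The traffic_split semantics can be illustrated with a local simulation (the real routing happens server-side), followed by the corresponding SDK call as a hedged sketch with illustrative keys:

```python
import random

# Local simulation only -- server-side routing is the real mechanism.
random.seed(0)
models = ["model-a", "model-b"]   # must already be attached to the project
traffic_split = 0.25              # 25% of project traffic joins the test

def route_request() -> str:
    if random.random() < traffic_split:
        return random.choice(models)   # this request counts toward the A/B test
    return "project-default"

routed = [route_request() for _ in range(10_000)]
share = sum(r != "project-default" for r in routed) / len(routed)
# share is close to traffic_split (~0.25)

# The corresponding SDK call, assuming a `client` instance:
# client.ab_tests.create(
#     ab_test_key="helpfulness-test",
#     feedback_key="helpfulness",
#     models=models,
#     traffic_split=traffic_split,
#     feedback_type="metric",
#     auto_deploy=False,
# )
```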

get

get(key: str)
Get the details of an AB test.
parameters
  • key: The AB test key.

list

list(active: bool | None = None, status: Literal['warmup', 'in_progress', 'done', 'cancelled'] | None = None, project: str | None = None)
List the project AB tests.
parameters
  • active: Filter on active or inactive AB tests.
  • status: Filter on one of the possible AB test statuses.
  • project: Project key. Falls back to client’s default if not provided.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

Artifacts

Resource to interact with job artifacts. Access via adaptive.artifacts

download

download(artifact_id: str, destination_path: str)
Download an artifact file to a local path.
parameters
  • artifact_id: The UUID of the artifact to download.
  • destination_path: Local file path where the artifact will be saved.

Chat

Access via adaptive.chat

create

create(messages: List[input_types.ChatMessage], stream: bool | None = None, model: str | None = None, stop: List[str] | None = None, max_tokens: int | None = None, temperature: float | None = None, top_p: float | None = None, stream_include_usage: bool | None = None, session_id: str | UUID | None = None, project: str | None = None, user: str | UUID | None = None, ab_campaign: str | None = None, n: int | None = None, labels: Dict[str, str] | None = None, store: bool | None = None)
Create a chat completion.
parameters
  • messages: Input messages, each a dict with keys role and content.
  • stream: If True, partial message deltas will be returned. Once the stream is over, chunk.choices will be None.
  • model: Target model key for inference. If None, the request will be routed to the project’s default model.
  • stop: Sequences where the API will stop generating further tokens.
  • max_tokens: Maximum number of tokens to generate.
  • temperature: Sampling temperature.
  • top_p: Threshold for top-p sampling.
  • stream_include_usage: If set, an additional chunk will be streamed with the token usage statistics for the entire request.
  • session_id: Session ID to group related interactions.
  • project: Project key. Falls back to client’s default if not provided.
  • user: ID of the user making the request. If not None, will be logged as metadata for the request.
  • ab_campaign: AB test key. If set, the request is guaranteed to count towards the AB test results, regardless of the configured traffic_split.
  • n: Number of chat completions to generate for the input messages.
  • labels: Key-value pairs of interaction labels.
  • store: Whether to store the interaction for future reference. Defaults to storing.
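A hedged sketch of chat.create usage; the `client` instance and model key are assumptions. The message shape below is runnable locally, the SDK calls are shown as comments:

```python
# Messages follow the role/content dict shape described above.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What does traffic_split control?"},
]
assert all(set(m) == {"role", "content"} for m in messages)

# Non-streaming call (hypothetical client and model key):
# completion = client.chat.create(messages=messages, model="my-model")

# Streaming call: iterate chunks; chunk.choices is None once the stream
# is over, and with stream_include_usage=True a final usage chunk follows.
# for chunk in client.chat.create(messages=messages, stream=True,
#                                 stream_include_usage=True):
#     if chunk.choices:
#         ...  # handle partial deltas
```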

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

Compute pools

Resource to interact with compute pools. Access via adaptive.compute_pools

list

list()
List all compute pools available in the system.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

resize_inference_partition

resize_inference_partition(compute_pool_key: str, size: int)
Resize the inference partitions of all harmony groups in a compute pool.

Recipes

Resource to interact with custom scripts. Access via adaptive.recipes

delete

delete(recipe_key: str, project: str | None = None)
Delete a recipe.
parameters
  • recipe_key: The key or ID of the recipe to delete.
  • project: Optional project key. Falls back to client’s default.

generate_sample_input

generate_sample_input(recipe_key: str, project: str | None = None)
Generate a sample input dictionary based on the recipe’s JSON schema.
parameters
  • recipe_key: The key or ID of the recipe.
  • project: Optional project key. Falls back to client’s default.

get

get(recipe_key: str, project: str | None = None)
Get details for a specific recipe.
parameters
  • recipe_key: The key or ID of the recipe.
  • project: Optional project key. Falls back to client’s default.

list

list(project: str | None = None)
List all custom recipes for a project.
parameters
  • project: Optional project key. Falls back to client’s default.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

update

update(recipe_key: str, path: str | None = None, entrypoint: str | None = None, entrypoint_config: str | None = None, name: str | None = None, description: str | None = None, labels: Sequence[tuple[str, str]] | None = None, project: str | None = None)
Update an existing recipe.
parameters
  • recipe_key: The key of the recipe to update.
  • path: Optional new path to a Python file or directory to replace recipe code. If None, only metadata (name, description, labels) is updated.
  • entrypoint: Optional path to the recipe entrypoint file, relative to the path directory. Only applicable when path is a directory.
    - Raises ValueError if path is a single file (entrypoint is not supported for single files), or if path is a directory that already contains main.py.
    - Raises FileNotFoundError if the specified entrypoint file doesn’t exist in the directory.
    - If path is a directory and entrypoint is None, the directory must contain a main.py file, or FileNotFoundError is raised.
  • entrypoint_config: Optional path to a separate config file that specifies the InputConfig for the recipe entrypoint. Only applicable when path is a directory.
    - Raises ValueError if path is a single file (entrypoint_config is not supported for single files), or if path is a directory that already contains config.py.
    - Raises FileNotFoundError if the specified entrypoint_config file doesn’t exist in the directory.
    - If path is a directory and entrypoint_config is None: if entrypoint is specified, the InputConfig should be included in it; otherwise main.py should contain the InputConfig, or a config.py file must be present.
  • name: Optional new display name.
  • description: Optional new description.
  • labels: Optional new key-value labels as tuples of (key, value).
  • project: Optional project key. Falls back to client’s default.

upload

upload(path: str, recipe_key: str | None = None, entrypoint: str | None = None, entrypoint_config: str | None = None, name: str | None = None, description: str | None = None, labels: dict[str, str] | None = None, project: str | None = None)
Upload a recipe from either a single Python file or a directory (path).
parameters
  • path: Path to a Python file or directory containing the recipe.
  • recipe_key: Optional unique key for the recipe. If not provided, inferred from:
    - the file name (without .py) if path is a file;
    - “dir_name/entrypoint_name” if path is a directory and a custom entrypoint is specified;
    - the directory name if path is a directory and no custom entrypoint is specified.
  • entrypoint: Optional path to the recipe entrypoint file, relative to the path directory. Only applicable when path is a directory.
    - Raises ValueError if path is a single file (entrypoint is not supported for single files), or if path is a directory that already contains main.py.
    - Raises FileNotFoundError if the specified entrypoint file doesn’t exist in the directory.
    - If path is a directory and entrypoint is None, the directory must contain a main.py file, or FileNotFoundError is raised.
  • entrypoint_config: Optional path to a separate config file that specifies the InputConfig for the recipe entrypoint. Only applicable when path is a directory.
    - Raises ValueError if path is a single file (entrypoint_config is not supported for single files), or if path is a directory that already contains config.py.
    - Raises FileNotFoundError if the specified entrypoint_config file doesn’t exist in the directory.
    - If path is a directory and entrypoint_config is None: if entrypoint is specified, the InputConfig should be included in it; otherwise main.py should contain the InputConfig, or a config.py file must be present.
  • name: Optional display name for the recipe.
  • description: Optional description.
  • labels: Optional key-value labels.
  • project: Optional project identifier.

upsert

upsert(path: str, recipe_key: str | None = None, entrypoint: str | None = None, entrypoint_config: str | None = None, name: str | None = None, description: str | None = None, labels: dict[str, str] | None = None, project: str | None = None)
Upload a recipe if it doesn’t exist, or update it if it does.
parameters
  • path: Path to a Python file or directory containing the recipe.
  • recipe_key: Optional unique key for the recipe. If not provided, inferred from:
    - the file name (without .py) if path is a file;
    - “dir_name/entrypoint_name” if path is a directory and a custom entrypoint is specified;
    - the directory name if path is a directory and no custom entrypoint is specified.
  • entrypoint: Optional path to the recipe entrypoint file, relative to the path directory. Only applicable when path is a directory.
    - Raises ValueError if path is a single file (entrypoint is not supported for single files), or if path is a directory that already contains main.py.
    - Raises FileNotFoundError if the specified entrypoint file doesn’t exist in the directory.
    - If path is a directory and entrypoint is None, the directory must contain a main.py file, or FileNotFoundError is raised.
  • entrypoint_config: Optional path to a separate config file that specifies the InputConfig for the recipe entrypoint. Only applicable when path is a directory.
    - Raises ValueError if path is a single file (entrypoint_config is not supported for single files), or if path is a directory that already contains config.py.
    - Raises FileNotFoundError if the specified entrypoint_config file doesn’t exist in the directory.
    - If path is a directory and entrypoint_config is None: if entrypoint is specified, the InputConfig should be included in it; otherwise main.py should contain the InputConfig, or a config.py file must be present.
  • name: Optional display name for the recipe.
  • description: Optional description.
  • labels: Optional key-value labels.
  • project: Optional project identifier, falls back to client’s default if it is set.

Datasets

Resource to interact with file datasets. Access via adaptive.datasets

delete

delete(key: str, project: str | None = None)
Delete a dataset.

get

get(key: str, project: str | None = None)
Get details for a dataset.
parameters
  • key: Dataset key.

list

list(project: str | None = None)
List previously uploaded datasets.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

upload

upload(file_path: str, dataset_key: str, name: str | None = None, project: str | None = None)
Upload a dataset from a file. The file must be JSONL, where each line matches the supported structure.
parameters
  • file_path: Path to jsonl file.
  • dataset_key: New dataset key.
  • name: Optional name to render in UI; if None, defaults to same as dataset_key.
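Preparing a JSONL file can be sketched as below. The record schema here is an assumption for illustration; consult the dataset documentation for the exact supported structure, and the upload call (commented) assumes a `client` instance:

```python
import json
import os
import tempfile

# Assumed chat-style records -- the exact schema is not specified here.
rows = [
    {"messages": [{"role": "user", "content": "hi"}], "completion": "hello"},
    {"messages": [{"role": "user", "content": "bye"}], "completion": "goodbye"},
]

path = os.path.join(tempfile.mkdtemp(), "train.jsonl")
with open(path, "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")   # one JSON object per line

# Sanity-check the file round-trips line by line.
with open(path) as f:
    parsed = [json.loads(line) for line in f]

# client.datasets.upload(file_path=path, dataset_key="demo-dataset")
```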

Embeddings

Resource to interact with embeddings. Access via adaptive.embeddings

create

create(input: str, model: str | None = None, encoding_format: Literal['Float', 'Base64'] = 'Float', project: str | None = None, user: str | UUID | None = None)
Create an embeddings inference request.
parameters
  • input: Input text to embed.
  • model: Target model key for inference. If None, the request will be routed to the project’s default model. The request will error if the default model is not an embedding model.
  • encoding_format: Encoding format of the response.
  • user: ID of the user making the request. If not None, will be logged as metadata for the request.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

Graders

Resource to interact with grader definitions used to evaluate model completions. Access via adaptive.graders

delete

delete(grader_key: str, project: str | None = None)
Delete a grader. Returns True on success.

get

get(grader_key: str, project: str | None = None)
Retrieve a specific grader by ID or key.

list

list(project: str | None = None)
List all graders for the given project.

lock

lock(grader_key: str, locked: bool, project: str | None = None)
Lock or unlock a grader.
parameters
  • grader_key: ID or key of the grader.
  • locked: Whether to lock (True) or unlock (False) the grader.
  • project: Explicit project key. Falls back to client.default_project.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

test_external_endpoint

test_external_endpoint(url: str)
Test an external endpoint to check that it is reachable from Adaptive and returns a valid response.

Integrations

Resource to manage integrations and notification subscriptions. Access via adaptive.integrations

create

create(team: str, name: str, provider: Literal['slack', 'smtp', 'webhook', 'github'], connection: ConnectionConfigInputSlack | ConnectionConfigInputSmtp | ConnectionConfigInputWebhook | ConnectionConfigInputGitHub, subscriptions: Optional[List[SubscriptionInput]] = None, delivery_policy: Literal['multishot', 'singleshot'] | None = None)
Create a new integration.
parameters
  • team: Team ID or key.
  • name: Human-readable name for the integration.
  • provider: Provider name.
  • connection: Connection config. Use one of:
    - ConnectionConfigInputSlack(webhook_url=..., bot_token=...)
    - ConnectionConfigInputSmtp(host=..., port=..., username=..., password=..., from_email=..., to_emails=[...])
    - ConnectionConfigInputWebhook(url=..., method=..., headers=...)
    - ConnectionConfigInputGitHub(api_token=..., org=..., repo=...)
  • subscriptions: Optional list of SubscriptionInput notification subscriptions.
  • delivery_policy: Delivery policy, either "multishot" or "singleshot".
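A hedged sketch of creating a webhook integration. The import path, endpoint URL, and keys are assumptions; the connection constructor mirrors the form listed in the parameters above:

```python
# Provider must be one of the documented literals.
provider = "webhook"
assert provider in ("slack", "smtp", "webhook", "github")

# SDK call sketch (import path and `client` assumed):
# from adaptive.input_types import ConnectionConfigInputWebhook
# integration = client.integrations.create(
#     team="platform",                                  # illustrative team key
#     name="job-alerts",
#     provider=provider,
#     connection=ConnectionConfigInputWebhook(
#         url="https://hooks.example.com/adaptive",     # hypothetical endpoint
#         method="POST",
#         headers={"X-Source": "adaptive"},
#     ),
#     delivery_policy="singleshot",
# )
```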

delete

delete(id: str)
Delete an integration.
parameters
  • id: Integration UUID.

get

get(id: str)
Get a specific integration by ID.
parameters
  • id: Integration UUID.

get_provider

get_provider(name: str)
Get a specific provider by name.
parameters
  • name: Provider name.

list

list(team: str)
List integrations for a team.
parameters
  • team: Team ID or key.

list_providers

list_providers()
List available integration providers.

test_notification

test_notification(topic: str, payload: NotificationPayload, scope_user: Optional[List[str]] = None, scope_team: Optional[str] = None, scope_organization: bool = False, scope_admin: bool = False)
Test notification delivery.
parameters
  • topic: Notification topic string.
  • payload: Notification payload, e.g. NotificationPayload(job_update=JobUpdatePayload(...)).
  • scope_user: List of user UUIDs to scope the notification to.
  • scope_team: Team ID or key to scope the notification to.
  • scope_organization: If True, scope the notification to the organization.
  • scope_admin: If True, scope the notification to admins.

update

update(id: str, name: Optional[str] = None, enabled: Optional[bool] = None, connection: ConnectionConfigInputSlack | ConnectionConfigInputSmtp | ConnectionConfigInputWebhook | ConnectionConfigInputGitHub | None = None, subscriptions: Optional[List[SubscriptionInput]] = None, delivery_policy: Literal['multishot', 'singleshot'] | None = None)
Update an existing integration.
parameters
  • id: Integration UUID.
  • name: New name for the integration.
  • enabled: Enable or disable the integration.
  • connection: Updated connection config. See create() for the available types.
  • subscriptions: Updated list of SubscriptionInput notification subscriptions.
  • delivery_policy: Updated delivery policy, either "multishot" or "singleshot".

Jobs

Resource to interact with jobs. Access via adaptive.jobs

cancel

cancel(job_id: str)
Cancel a running job.
parameters
  • job_id: The ID of the job to cancel.

get

get(job_id: str)
Get the details of a specific job.
parameters
  • job_id: The ID of the job to retrieve.

list

list(first: int | None = 100, last: int | None = None, after: str | None = None, before: str | None = None, kind: list[Literal['TRAINING', 'EVALUATION', 'DATASET_GENERATION', 'MODEL_CONVERSION', 'CUSTOM']] | None = None, project: str | None = None)
List jobs with pagination and filtering options.
parameters
  • first: Number of jobs to return from the beginning.
  • last: Number of jobs to return from the end.
  • after: Cursor for forward pagination.
  • before: Cursor for backward pagination.
  • kind: Filter by job types.
  • project: Filter by project key.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

run

run(recipe_key: str, num_gpus: int, args: dict[str, Any] | None = None, name: str | None = None, project: str | None = None, compute_pool: str | None = None)
Run a job using a specified recipe.
parameters
  • recipe_key: The key of the recipe to run.
  • num_gpus: Number of GPUs to allocate for the job.
  • args: Optional arguments to pass to the recipe; must match the recipe schema.
  • name: Optional human-readable name for the job.
  • project: Project key for the job.
  • compute_pool: Optional compute pool key to run the job on.
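A hedged sketch of launching a job from a previously uploaded recipe. The recipe key and args keys are illustrative assumptions; args must match the recipe's JSON schema:

```python
# Illustrative args -- real keys come from the recipe's JSON schema.
args = {"dataset": "demo-dataset", "learning_rate": 3e-4, "epochs": 1}

# One way to obtain a valid skeleton is the recipes helper documented above:
# args = client.recipes.generate_sample_input("my-recipe")

# job = client.jobs.run(recipe_key="my-recipe", num_gpus=1, args=args,
#                       name="demo-run")
# client.jobs.get(...)   # poll the returned job ID for status
```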

Feedback

Resource to interact with and log feedback. Access via adaptive.feedback

get_key

get_key(project: str, feedback_key: str)
Get the details of a feedback key.
parameters
  • feedback_key: The feedback key.

list_keys

list_keys()
List all feedback keys.

log_metric

log_metric(value: bool | float | int, completion_id: str | UUID, feedback_key: str, user: str | UUID | None = None, details: str | None = None)
Log metric feedback for a single completion; the value can be a float, int, or bool depending on the kind of the feedback_key it is logged against.
parameters
  • value: The feedback value.
  • completion_id: The completion_id to attach the feedback to.
  • feedback_key: The feedback key to log against.
  • user: ID of user submitting feedback. If not None, will be logged as metadata for the request.
  • details: Textual details for the feedback. Can be used to provide further context on the feedback value.

log_preference

log_preference(feedback_key: str, preferred_completion: str | UUID | input_types.ComparisonCompletion, other_completion: str | UUID | input_types.ComparisonCompletion, user: str | UUID | None = None, messages: List[Dict[str, str]] | None = None, tied: Literal['good', 'bad'] | None = None, project: str | None = None)
Log preference feedback between two completions.
parameters
  • feedback_key: The feedback key to log against.
  • preferred_completion: Either a completion_id or a dict with keys model and text, corresponding to a valid model key and its attributed completion.
  • other_completion: Either a completion_id or a dict with keys model and text, corresponding to a valid model key and its attributed completion.
  • user: ID of the user submitting feedback.
  • messages: Input chat messages, each a dict with keys role and content. Ignored if preferred_completion and other_completion are completion_ids.
  • tied: Indicates whether both completions tied as equally good or equally bad.
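A hedged sketch of logging a preference using the dict form. The model keys, feedback key, and `client` instance are illustrative assumptions:

```python
# ComparisonCompletion dicts carry exactly the keys model and text.
preferred = {"model": "model-a", "text": "Paris is the capital of France."}
other = {"model": "model-b", "text": "I am not sure."}
assert set(preferred) == set(other) == {"model", "text"}

# client.feedback.log_preference(
#     feedback_key="helpfulness",
#     preferred_completion=preferred,
#     other_completion=other,
#     messages=[{"role": "user", "content": "What is the capital of France?"}],
# )
```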

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

register_key

register_key(project: str, key: str, kind: Literal['scalar', 'bool'], scoring_type: Literal['higher_is_better', 'lower_is_better'] = 'higher_is_better', name: str | None = None, description: str | None = None)
Register a new feedback key. Feedback can be logged against this key once it is created.
parameters
  • key: Feedback key.
  • kind: Feedback kind. If "bool", you can log values 0, 1, True or False only. If "scalar", you can log any integer or float value.
  • scoring_type: Indication of what good means for this feedback key: a higher numeric value (or True), or a lower numeric value (or False).
  • name: Human-readable feedback name that will render in the UI. If None, will be the same as key.
  • description: Description of intended purpose or nuances of feedback. Will render in the UI.
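The value rules for the two feedback kinds can be mirrored in a small validator (illustrative, not the library's own check), followed by a hedged registration sketch:

```python
def valid_feedback_value(value, kind: str) -> bool:
    """Mirror the documented rules: bool keys accept 0, 1, True, False;
    scalar keys accept any int or float."""
    if kind == "bool":
        return value in (0, 1, True, False)
    if kind == "scalar":
        return isinstance(value, (int, float))
    return False

print(valid_feedback_value(True, "bool"))    # True
print(valid_feedback_value(2, "bool"))       # False
print(valid_feedback_value(0.73, "scalar"))  # True

# Registration sketch, assuming a `client` instance and illustrative keys:
# client.feedback.register_key(project="my-project", key="helpfulness",
#                              kind="scalar", scoring_type="higher_is_better")
```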

Interactions

Resource to interact with interactions. Access via adaptive.interactions

create

create(messages: List[input_types.ChatMessage], completion: str, model: str | None = None, feedbacks: List[input_types.InteractionFeedbackDict] | None = None, user: str | UUID | None = None, session_id: str | UUID | None = None, project: str | None = None, ab_campaign: str | None = None, labels: Dict[str, str] | None = None, created_at: str | None = None)
Create/log an interaction.
parameters
  • messages: Input chat messages, each dict should have keys role and content.
  • completion: Model completion.
  • model: Model key.
  • feedbacks: List of feedbacks, each a dict with keys feedback_key, value, and optionally details.
  • user: ID of user making the request. If not None, will be logged as metadata for the interaction.
  • session_id: Session ID to group related interactions.
  • project: Project key. Falls back to client’s default if not provided.
  • ab_campaign: AB test key. If set, provided feedbacks will count towards AB test results.
  • labels: Key-value pairs of interaction labels.
  • created_at: Timestamp of interaction creation or ingestion.

get

get(completion_id: str, project: str | None = None)
Get the details for one specific interaction.
parameters
  • completion_id: The ID of the completion.

list

list(order: List[input_types.Order] | None = None, filters: input_types.ListCompletionsFilterInput | None = None, page: input_types.CursorPageInput | None = None, group_by: Literal['model', 'prompt'] | None = None, project: str | None = None)
List interactions in client’s project.
parameters
  • order: Ordering of results.
  • filters: List filters.
  • page: Paging config.
  • group_by: Retrieve interactions grouped by selected dimension.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

Models

Resource to interact with models. Access via adaptive.models

add_external

add_external(name: str, external_model_id: str, api_key: str, provider: Literal['open_ai', 'google', 'azure'], endpoint: str | None = None, extra_params: dict[str, Any] | None = None)
Add a proprietary external model to the Adaptive model registry.
parameters
  • name: Adaptive name for the new model.
  • external_model_id: Should match the model id publicly shared by the model provider.
  • api_key: API Key for authentication against external model provider.
  • provider: External proprietary model provider.
  • extra_params: Additional provider-specific parameters (supported for open_ai and azure).

add_hf_model

add_hf_model(hf_model_id: SupportedHFModels, output_model_name: str, output_model_key: str, hf_token: str, compute_pool: str | None = None)
Add a model from the Hugging Face Model Hub to the Adaptive model registry. It will take several minutes for the model to be downloaded and converted to Adaptive format.
parameters
  • hf_model_id: The ID of the selected model repo on HuggingFace Model Hub.
  • output_model_key: The key that will identify the new model in Adaptive.
  • hf_token: Your Hugging Face token, needed to validate access to gated/restricted models.

add_to_project

add_to_project(model: str, project: str | None = None)
Attach a model to the client’s project.
parameters
  • model: Model key.
  • project: Project key. Falls back to client’s default if not provided.

attach

attach(model: str, wait: bool = False, make_default: bool = False, project: str | None = None, placement: input_types.ModelPlacementInput | None = None, num_draft_steps: int | None = None)
Attach a model to the client’s project.
parameters
  • model: Model key.
  • wait: If the model is not already deployed, attaching it to the project will automatically deploy it. If True, this call blocks until the model is online.
  • make_default: Make the model the project’s default on attachment.
  • num_draft_steps: Optional number of speculative decoding draft steps.

deploy

deploy(model: str, wait: bool = False, make_default: bool = False, project: str | None = None, placement: input_types.ModelPlacementInput | None = None, num_draft_steps: int | None = None)
Deploy a model for inference in the specified project.
parameters
  • model: Model key.
  • wait: If True, block until the model is online.
  • make_default: Make the model the project’s default after deployment.
  • project: Project key.
  • placement: Optional placement configuration for the model.
  • num_draft_steps: Optional number of speculative decoding draft steps.

detach

detach(model: str, project: str)
Detach a model from the client’s project.
parameters
  • model: Model key.

get

get(model)
Get the details for a model.
parameters
  • model: Model key.

list

list(filter: input_types.ModelFilter | None = None)
List all models in Adaptive model registry.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

terminate

terminate(model: str, force: bool = False)
Terminate a model, removing it from memory and making it unavailable to all projects.
parameters
  • model: Model key.
  • force: If the model is attached to several projects, force must be True for the model to be terminated.

update

update(model: str, is_default: bool | None = None, desired_online: bool | None = None, project: str | None = None, placement: input_types.ModelPlacementInput | None = None, num_draft_steps: int | None = None)
Update the config of a model attached to the client’s project.
parameters
  • model: Model key.
  • is_default: Change whether the model is the project default: True to promote to default, False to demote. If None, no change is applied.
  • desired_online: Turn model inference on or off for the client’s project. This does not affect the global status of the model; it is project-scoped. If None, no change is applied.
  • num_draft_steps: Optional number of speculative decoding draft steps.

update_compute_config

update_compute_config(model: str, compute_config: input_types.ModelComputeConfigInput)
Update compute config of model.

Permissions

Resource to list permissions. Access via adaptive.permissions

list

list()
List all available permissions in the system.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

Roles

Resource to manage roles. Access via adaptive.roles

create

create(key: str, permissions: List[str], name: str | None = None)
Create a new role.
parameters
  • key: Role key.
  • permissions: List of permission identifiers such as project:read. You can list all possible permissions with client.permissions.list().
  • name: Role name; if not provided, defaults to key.

list

list()
List all roles.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

Teams

Resource to manage teams. Access via adaptive.teams

create

create(key: str, name: str | None = None)
Create a new team.
parameters
  • key: Unique key for the team.
  • name: Human-readable team name. If not provided, defaults to key.

list

list()
List all teams.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

Projects

Resource to interact with projects. Access via adaptive.projects

create

create(key: str, name: str | None = None, description: str | None = None, team: str | None = None)
Create a new project.
parameters
  • key: Project key.
  • name: Human-readable project name which will be rendered in the UI. If not set, will be the same as key.
  • description: Description of the project which will be rendered in the UI.

get

get(project: str | None = None)
Get details for the client’s project.

list

list()
List all projects.

optional_project_key

optional_project_key(project: str | None)
Get the project key if available, or None.
parameters
  • project: Optional explicit project key.

project_key

project_key(project: str | None)
Get the project key, falling back to the client’s default.
parameters
  • project: Optional explicit project key.

share

share(project: str, team: str, role: str, is_owner: bool = False)
Share project with another team. Requires project:share permissions on the target project.
parameters
  • project: Project key.
  • team: Team key.
  • role: Role key.

unshare

unshare(project: str, team: str)
Remove project access for a team. Requires project:share permissions on the target project.
parameters
  • project: Project key.
  • team: Team key.

Users

Resource to manage users and permissions. Access via adaptive.users

add_to_team

add_to_team(email: str, team: str, role: str)
Add a user to a team with a given role.
parameters
  • email: User email.
  • team: Key of the team to which the user will be added.
  • role: Assigned role key.

create

create(email: str, name: str, teams_with_role: Sequence[tuple[str, str]])
Create a user with preset teams and roles.
parameters
  • email: User’s email address.
  • name: User’s display name.
  • teams_with_role: Sequence of (team_key, role_key) tuples assigning the user to teams with specific roles.
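A hedged sketch of creating a user with preset teams and roles. The team and role keys below are illustrative; real keys come from client.teams.list() and client.roles.list():

```python
# (team_key, role_key) tuples -- keys here are assumptions for illustration.
teams_with_role = [("research", "annotator"), ("platform", "viewer")]
assert all(len(pair) == 2 for pair in teams_with_role)

# client.users.create(email="ada@example.com", name="Ada",
#                     teams_with_role=teams_with_role)
```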

create_service_account

create_service_account(name: str, teams_with_role: Sequence[tuple[str, str]])
Create a service account (system user) with an API key. Service accounts authenticate via API keys only (no OIDC login). The API key is returned once and cannot be retrieved later.
parameters
  • name: Account name. Must contain only lowercase letters (a-z), numbers, hyphens, and underscores.
  • teams_with_role: Sequence of (team_key, role_key) tuples.

delete

delete(email: str)
Delete a user from the system.
parameters
  • email: The email address of the user to delete.

list

list()
List all users registered to the Adaptive deployment.

me

me()
Get details of the current user.

remove_from_team

remove_from_team(email: str, team: str)
Remove user from team.
parameters
  • email: User email.
  • team: Key of team to remove user from.