Connect external models or use Adaptive with LangChain.

External models

Connect proprietary models (OpenAI, Azure, Google, Anthropic, NVIDIA NIM) to use them through the Adaptive API with interaction and metrics logging.

OpenAI (direct)

external_model = adaptive.models.add_external(
    provider="open_ai",
    external_model_id="GPT4O",
    name="GPT-4o",
    api_key="OPENAI_API_KEY"  # your OpenAI API key
)
Supported model IDs: GPT4O, GPT4O_MINI, GPT4, GPT4_TURBO, GPT3_5_TURBO

Azure OpenAI

external_model = adaptive.models.add_external(
    provider="azure",
    external_model_id="DEPLOYMENT_NAME",
    name="Azure GPT-4o",
    api_key="AZURE_API_KEY",
    endpoint="https://aoairesource.openai.azure.com"
)
The external_model_id is your Azure deployment name, and endpoint is your Azure OpenAI resource endpoint.

Google

external_model = adaptive.models.add_external(
    provider="google",
    external_model_id="gemini-1.5-pro",
    name="Gemini 1.5 Pro",
    api_key="GOOGLE_API_KEY"
)
Supported models: Gemini model variants (excluding embedding models).

Anthropic

external_model = adaptive.models.add_external(
    provider="anthropic",
    external_model_id="claude-sonnet-4-5-20250929",
    name="Claude Sonnet 4.5",
    api_key="ANTHROPIC_API_KEY"
)
Use the model ID from Anthropic’s API documentation as the external_model_id. Once connected, attach the model to a project and make inference requests like any other model.
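Once attached, the model is addressed as project_key/model_key, the same form the LangChain examples below use against Adaptive's OpenAI-compatible API. A minimal sketch of building such a request body; the helper name and the exact route are illustrative assumptions, not part of the Adaptive SDK:

```python
from typing import Optional

def build_chat_request(project_key: str, prompt: str,
                       model_key: Optional[str] = None) -> dict:
    """Build an OpenAI-style chat completion body addressed to an
    Adaptive project (and optionally a specific model within it).
    Hypothetical helper for illustration only."""
    model = f"{project_key}/{model_key}" if model_key else project_key
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
```

Omitting model_key leaves routing to the project's default, matching the "model_key is optional" note in the ChatOpenAI example.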

LangChain

Adaptive is compatible with LangChain chat model classes.

ChatOpenAI

from langchain_openai import ChatOpenAI
import os

os.environ["OPENAI_API_KEY"] = "ADAPTIVE_API_KEY"  # set to your Adaptive API key

llm = ChatOpenAI(
    model="project_key/model_key",  # model_key is optional
    base_url="ADAPTIVE_URL/api/v1",
)

messages = [
    ("system", "You are a helpful assistant that translates English to French."),
    ("human", "I love programming."),
]
response = llm.invoke(messages)

ChatGoogleGenerativeAI

from langchain_google_genai import ChatGoogleGenerativeAI
import os

os.environ["GOOGLE_API_KEY"] = "ADAPTIVE_API_KEY"  # set to your Adaptive API key

llm = ChatGoogleGenerativeAI(
    model="project_key",
    client_options={"api_endpoint": "ADAPTIVE_URL/api/v1/google"},
    transport="rest",
)

messages = [
    ("system", "You are a helpful assistant that translates English to French."),
    ("human", "I love programming."),
]
response = llm.invoke(messages)

Platform notifications

Subscribe to platform events (job completion, failures, model status changes) and receive alerts through Slack, email, or webhook.

Topic patterns

Notifications use a topic-based system with pattern matching. Subscribe to specific events or use wildcards:
Pattern | Matches
project:*:job:**:completed | Any job completion in any project
project:my-project:job:**:failed | Failed jobs in a specific project
project:*:model:**:status_changed | Model status changes across all projects
Use * to match a single segment and ** to match multiple segments.
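As a rough illustration, the matching rules above can be sketched in Python. This mirrors the documented semantics; whether ** may also match zero segments is not specified, so this sketch assumes it matches one or more:

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """Match a colon-separated topic against a pattern where '*'
    matches exactly one segment and '**' matches one or more segments."""
    def match(p, t):
        if not p:
            return not t  # both exhausted -> match
        head, rest = p[0], p[1:]
        if head == "**":
            # try consuming 1..len(t) segments
            return any(match(rest, t[i:]) for i in range(1, len(t) + 1))
        if not t:
            return False
        if head == "*" or head == t[0]:
            return match(rest, t[1:])
        return False
    return match(pattern.split(":"), topic.split(":"))
```

For example, project:*:job:**:completed matches project:p1:job:j1:completed and project:p1:job:batch:j1:completed, but not project:p1:job:j1:failed.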

Integration types

Configure notifications in Settings → Integrations. Three delivery methods are available:
Type | Setup | Details
Slack | Webhook URL, optional bot token | Posts to a channel via incoming webhook
Email | SMTP server configuration | Sends to specified recipients
Webhook | HTTP endpoint URL, custom headers | POST request with event payload
Each integration can subscribe to different topic patterns. You can create multiple integrations of the same type for different channels or recipients.
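For the webhook integration, your endpoint receives a POST with an event payload. A minimal handler sketch, assuming a JSON body with topic and data keys; the actual Adaptive payload schema may differ:

```python
import json

def handle_notification(raw_body: bytes) -> str:
    """Parse a webhook notification body and return a log line.
    The 'topic' and 'data' keys are assumptions about the payload shape."""
    event = json.loads(raw_body)
    topic = event.get("topic", "unknown")
    data = event.get("data", {})
    return f"notification on {topic}: {json.dumps(data, sort_keys=True)}"
```

In a real service this function would sit behind your HTTP framework's request handler and should validate any custom headers you configured for the integration.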

Permissions

Each integration action requires the corresponding permission: integration:create, integration:read, integration:update, or integration:delete. See Permissions for role configuration.
No SDK resource exists for notifications yet. To configure integrations programmatically, use the GraphQL API directly. Contact your administrator for the schema details.
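A generic sketch of calling a GraphQL API over HTTP follows. The mutation name, endpoint path, and auth header below are placeholders; the real field names come from the schema your administrator provides:

```python
import json
import urllib.request

def build_graphql_payload(query: str, variables: dict) -> bytes:
    """Encode a GraphQL query and its variables as a JSON request body."""
    return json.dumps({"query": query, "variables": variables}).encode("utf-8")

# Hypothetical mutation -- the real field and argument names come from
# the Adaptive GraphQL schema, which this sketch does not know.
CREATE_INTEGRATION = """
mutation CreateIntegration($input: CreateIntegrationInput!) {
  createIntegration(input: $input) { id }
}
"""

def send_graphql(url: str, api_key: str, query: str, variables: dict) -> dict:
    """POST a GraphQL request; endpoint path and bearer auth are assumptions."""
    req = urllib.request.Request(
        url,  # e.g. ADAPTIVE_URL/graphql -- assumed path
        data=build_graphql_payload(query, variables),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The same send_graphql helper works for queries as well as mutations, since GraphQL transports both through one POST endpoint.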