Adaptive Engine lets you connect proprietary models so you can use them through the Adaptive API alongside other platform features, such as interaction and feedback logging. The only requirement is a valid API key for the external provider.
This feature currently supports OpenAI (both via OpenAI’s API and Azure OpenAI Service) and Google models.
To connect an external model, click Connect external model at the top right of the Models page in the Adaptive Engine UI.
You can also connect an external model using the Adaptive SDK.
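The snippets below assume an initialized SDK client named adaptive. As a minimal setup sketch (the import path and constructor arguments here are assumptions, so check the SDK installation guide for the exact names):

# Assumed client setup; the import path and constructor arguments are
# illustrative rather than confirmed SDK names.
from adaptive_sdk import Adaptive

adaptive = Adaptive(
    base_url="https://your-adaptive-deployment.example.com",  # your Adaptive Engine URL
    api_key="ADAPTIVE_API_KEY",  # your Adaptive Engine API key
)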
# Connect an OpenAI model through OpenAI's API
external_model = adaptive.models.add_external(
    provider="open_ai",
    external_model_id="GPT4O",
    name="GPT-4o",
    api_key="OPENAI_API_KEY",
)
Supported model IDs:

- GPT4O
- GPT4O_MINI
- GPT4
- GPT4_TURBO
- GPT3_5_TURBO
# Connect a model deployed through Azure OpenAI Service
external_model = adaptive.models.add_external(
    provider="azure",
    external_model_id="DEPLOYMENT_NAME",  # your Azure deployment name
    name="Azure GPT-4o",
    api_key="AZURE_API_KEY",
    endpoint="AZURE_OPENAI_SUBSCRIPTION_ENDPOINT",
)
When connecting a model through Azure OpenAI Service:
- the external_model_id should be the deployment name/ID of your model
- the endpoint should be your Azure OpenAI subscription endpoint, which should look something like https://aoairesource.openai.azure.com

See the Azure OpenAI Service documentation for more information.

# Connect a Google model
external_model = adaptive.models.add_external(
    provider="google",
    external_model_id="gemini-1.5-pro",
    name="Gemini 1.5 Pro",
    api_key="GOOGLE_API_KEY",
)
Supported model IDs are those listed in the Gemini models table in Google's documentation, with the exception of embeddings models.
Once the proprietary model is connected to Adaptive Engine, you can attach it to a use case and make inference requests the same way you would for any other Adaptive Engine model.
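As an illustration, here is a sketch of that flow; the models.attach and chat.create method names and parameters are assumptions modeled on the add_external pattern above, so check the SDK reference for the exact calls:

# Hypothetical end-to-end usage; models.attach and chat.create below are
# assumed method names, not confirmed Adaptive SDK calls.

# Attach the connected external model to an existing use case
adaptive.models.attach(model="gpt-4o", use_case="my-use-case")

# Make an inference request through the Adaptive API, as with any other model
response = adaptive.chat.create(
    use_case="my-use-case",
    messages=[{"role": "user", "content": "Summarize our latest feedback."}],
)
print(response)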