Overview

Models are the core components that power your AI applications in Adaptive Engine. This page explains how to manage and organize models in your deployment.

The Models UI lists all the models available for use in your Adaptive Engine deployment.

Supported Model Types

Open Source Models

Your Adaptive model registry can be populated with open-source models of your choice, which you can import directly from the HuggingFace Model Hub. Adaptive Engine supports most transformer-based generative models for both inference and training. This includes but is not limited to all variants of:
  • Llama 3 and 4
  • Qwen 2.5 and 3
  • Gemma 3
  • Mistral
  • DeepSeek Coder and R1
  • Falcon
If your selected model is gated (i.e. it requires a license agreement or data-sharing consent), make sure your HuggingFace account has already been granted access to it.
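A simple pre-flight check can catch a missing HuggingFace credential before an import of a gated model fails. This is an illustrative sketch, not part of the Adaptive API; the environment variable names are the ones the HuggingFace tooling conventionally reads.

```python
import os

def has_hf_token(env=os.environ):
    """Return True if a HuggingFace access token appears to be configured.

    Gated models additionally require that the token's user has accepted
    the model's license terms on the HuggingFace Model Hub.
    """
    token = env.get("HF_TOKEN") or env.get("HUGGING_FACE_HUB_TOKEN")
    return bool(token)

# Example: a configured token passes the check
print(has_hf_token({"HF_TOKEN": "hf_example"}))  # True
```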

Proprietary Models

Adaptive Engine also supports connecting proprietary models, so you can use them through the Adaptive API alongside other platform features, such as A/B tests and interaction and feedback logging. Adaptive Engine supports many external providers, including:
  • OpenAI
  • Anthropic
  • Google
  • Azure
You must supply a valid external API key to connect external models. Read more on this in Connect proprietary models.
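Since each external provider connection needs its own valid API key, it can help to resolve and validate keys up front. The mapping below is a minimal sketch: the provider names come from the list above, but the environment-variable names and the helper itself are assumptions for illustration, not part of the Adaptive API.

```python
import os

# Hypothetical mapping from provider name to the env var holding its key.
PROVIDER_KEY_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
    "azure": "AZURE_OPENAI_API_KEY",
}

def resolve_key(provider, env=os.environ):
    """Look up the API key for a provider, failing fast if it is missing."""
    var = PROVIDER_KEY_VARS[provider]
    key = env.get(var)
    if not key:
        raise KeyError(f"Set {var} before connecting a {provider} model")
    return key
```

Failing at configuration time, rather than on the first inference request, makes a missing or empty key much easier to diagnose.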

Adapters

On the use case and model registry pages, model artifacts that are parameter-efficient adapters are indented under their corresponding backbone model, and embedding-only models are flagged as `Embedding only`.

Model Deployment

To make a model available for inference, you must attach it to a use case. Attaching a model automatically deploys it, loading it into memory. The same model can be attached to multiple use cases. When a model has no attached use cases, it is terminated and removed from memory.
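The attach/deploy lifecycle above can be sketched as a small state machine. The class and method names here are illustrative, not the Adaptive SDK: the point is that a model is loaded into memory on its first attachment and unloaded once its last use case is detached.

```python
class ModelDeployment:
    """Toy model of the deploy-on-attach lifecycle described above."""

    def __init__(self, name):
        self.name = name
        self.use_cases = set()
        self.loaded = False

    def attach(self, use_case):
        self.use_cases.add(use_case)
        self.loaded = True  # attaching automatically deploys the model

    def detach(self, use_case):
        self.use_cases.discard(use_case)
        if not self.use_cases:
            self.loaded = False  # no use cases left: terminated, freed from memory

m = ModelDeployment("llama-3-8b")
m.attach("support-bot")
m.attach("summarizer")    # the same model can serve multiple use cases
m.detach("support-bot")
assert m.loaded           # still attached to "summarizer"
m.detach("summarizer")
assert not m.loaded       # removed from memory
```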

Next Steps

See the SDK Reference for all model-related methods.