The Models UI lists all the models available for use in your Adaptive Engine deployment.

Adaptive Engine supports most transformer-based generative models for both inference and training. Your Adaptive Cluster can be pre-populated with open-source models of your choice.

Adaptive Engine also supports connecting proprietary models, so you can use them through the Adaptive API in tandem with other platform features, such as A/B tests and interaction and feedback logging. To connect an external model, you only need to supply a valid external API key. Read more in Connect proprietary models.

To make a model available for inference, you must attach it to a use case. Attaching a model automatically deploys it, loading it into memory. The same model can be attached to multiple use cases; when a model has no attached use cases, it is terminated and removed from memory.
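The lifecycle above can be sketched in a few lines. Note that `ModelRegistry`, `attach`, and `detach` below are hypothetical names used purely for illustration of the attach/deploy semantics; they are not the actual Adaptive SDK API (see the SDK Reference for the real methods).

```python
# Illustrative sketch only: models the attach/deploy lifecycle described above.
# ModelRegistry, attach, and detach are hypothetical names, not the Adaptive SDK.

class ModelRegistry:
    def __init__(self):
        self._attachments = {}   # model name -> set of attached use cases
        self._deployed = set()   # models currently loaded in memory

    def attach(self, model: str, use_case: str) -> None:
        # Attaching automatically deploys the model (loads it into memory).
        self._attachments.setdefault(model, set()).add(use_case)
        self._deployed.add(model)

    def detach(self, model: str, use_case: str) -> None:
        use_cases = self._attachments.get(model, set())
        use_cases.discard(use_case)
        # With no remaining use cases, the model is terminated and unloaded.
        if not use_cases:
            self._deployed.discard(model)

    def is_deployed(self, model: str) -> bool:
        return model in self._deployed


registry = ModelRegistry()
registry.attach("example-model", "support-chat")
registry.attach("example-model", "summarization")  # same model, multiple use cases
registry.detach("example-model", "support-chat")
print(registry.is_deployed("example-model"))       # still deployed: one use case remains
registry.detach("example-model", "summarization")
print(registry.is_deployed("example-model"))       # no use cases left, so terminated
```

The key behavior mirrored here is that deployment is driven entirely by attachments: a model stays in memory as long as at least one use case references it.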

On the use case and model registry pages, model artifacts that are parameter-efficient adapters are indented under their corresponding backbone model; embedding models are flagged as `Embedding only`.

See the SDK Reference for all model-related methods.