Note: Up-to-date Helm Chart Documentation
This page provides an overview of deploying Adaptive Engine on Kubernetes. For the most up-to-date and detailed instructions on using the Adaptive Helm chart, please refer to the official documentation in the adaptive-helm-chart repository.
Requirements
Before deploying, please ensure your environment meets the prerequisites described in the Helm chart's README.

Getting Started
To get started, we recommend you follow the installation instructions in the Helm chart's README. This will guide you through how to:
- Install the chart from the GitHub OCI Registry.
- Get the default values.yaml configuration file.
- Customize values.yaml for your environment.
- Deploy the chart to your Kubernetes cluster.
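Those steps can be sketched roughly as follows. The OCI chart reference below is a placeholder, not the real registry path; use the exact reference given in the chart's README:

```shell
# Fetch the chart's default configuration (chart reference is illustrative)
helm show values oci://ghcr.io/adaptive-ml/adaptive-engine > values.yaml

# Edit values.yaml for your environment, then deploy
helm install adaptive oci://ghcr.io/adaptive-ml/adaptive-engine -f values.yaml
```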
Configuration Highlights
The values.yaml file contains all the configuration options for the Adaptive Engine Helm chart. Below are some of the key sections you'll need to configure.
Container Registry Information
You will need to provide the details for the container registry where the Adaptive Engine images are stored.

Resource Limits
Adjust the resource limits based on your cluster's capabilities and workload/model requirements. The harmony.gpusPerNode value should match the available GPU resources for each node in the cluster where Adaptive Harmony will be deployed. For example:
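A minimal values.yaml fragment, assuming nodes with 8 GPUs each (the count is illustrative; set it to your nodes' actual GPU capacity):

```yaml
harmony:
  gpusPerNode: 8  # illustrative: match the GPU count of your GPU-enabled nodes
```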
Configuration Secrets
The chart requires several secrets for configuration, such as S3 bucket URLs, database connection strings, and authentication provider details. You can set these directly in values.yaml or use an external secrets manager.
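For illustration only, such values might look like the fragment below. The key names here are invented for the sketch; the real schema is in the chart's default values.yaml:

```yaml
# Hypothetical key names -- check the chart's README for the actual schema
secrets:
  s3BucketUrl: "s3://my-adaptive-bucket"                      # illustrative
  dbUrl: "postgresql://user:password@db-host:5432/adaptive"   # illustrative
```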
Considerations for deployment on shared clusters
When deploying Adaptive Engine in a shared cluster where other workloads are running, there are a few best practices you can implement to enforce resource isolation.

Deploy Adaptive in a separate namespace
When installing the Adaptive Helm chart, you can deploy it into a separate namespace by passing the --namespace option. Add --create-namespace if the namespace does not exist yet.
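A sketch of such an install command (the chart reference and namespace name are placeholders; use the chart reference from the README):

```shell
helm install adaptive oci://ghcr.io/adaptive-ml/adaptive-engine \
  --namespace adaptive \
  --create-namespace
```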
Use Node Selectors to schedule Adaptive on specific GPU nodes
You can use the harmony.nodeSelector value in values.yaml to schedule Adaptive Harmony only on a specific node group.
For example, if you are deploying Adaptive on an Amazon EKS cluster, you might add:
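A possible snippet using the standard EKS managed node group label (the group name gpu-nodes is illustrative):

```yaml
harmony:
  nodeSelector:
    eks.amazonaws.com/nodegroup: gpu-nodes  # illustrative node group name
```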
Dedicated GPU node tenancy
Although the Adaptive control plane can run on any node with available CPU and memory resources, it is recommended that Harmony be scheduled to request and take ownership of all the GPUs available on each GPU-enabled node. Even if you have already ensured that Adaptive Harmony is only scheduled on a designated GPU node group using the instructions in the step above, you might want to guarantee that no other workloads can be scheduled on those nodes. To dedicate a set of GPU nodes to Adaptive Harmony, you can use a combination of:
- Adding a taint to the GPU nodes
- Adding a corresponding toleration to Harmony in the values.yaml of the Adaptive Helm chart
Run kubectl get nodes -o name to see all the existing node names, and then taint them as exemplified below (replacing node_name):
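A sketch of the taint command. The key/value pair dedicated=adaptive is illustrative; any pair works as long as Harmony's toleration matches it:

```shell
kubectl taint nodes node_name dedicated=adaptive:NoSchedule
```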
Then add a matching toleration to Harmony in the chart's values.yaml file (harmony.tolerations), which will allow it to be scheduled on the tainted nodes:
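A hypothetical toleration matching a taint such as dedicated=adaptive:NoSchedule (adjust key, value, and effect to whatever taint you applied):

```yaml
harmony:
  tolerations:
    - key: "dedicated"        # must match the taint key on the GPU nodes
      operator: "Equal"
      value: "adaptive"       # must match the taint value
      effect: "NoSchedule"
```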
Advanced configuration
Database SSL/TLS configuration
Adaptive Engine supports secure TLS connections between the database and the control plane.

Basic setting
If your PostgreSQL database supports TLS, you can enforce encrypted connections by adding the parameter sslmode=require to your PostgreSQL connection string dbUrl in the Helm chart's values.yaml file:
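For instance, with placeholder host and credentials:

```yaml
dbUrl: "postgresql://user:password@db-host:5432/adaptive?sslmode=require"
```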
Note that while sslmode=require encrypts the database connection, it does not verify the server's identity.
Server certificate verification
In order for the application to be able to verify the server certificate, you must set sslmode to verify-ca or verify-full.
- verify-ca will verify the server certificate
- verify-full will verify the server certificate and also that the server host name matches the name stored in the server certificate
verify-full is the recommended option for maximum security.
You will need to provide the application with a root certificate to make server certificate verification possible. You can do so by following these steps:
- Download the DB server certificate (if you're using AWS RDS, for example, refer to this page), for instance rds-ca-rsa2048-g1.pem
- Upload the .pem file to your Kubernetes cluster. As the certificate is non-critical, public information, it can be uploaded as a ConfigMap
- Mount the file as a volume in the control plane deployment by editing values.yaml:
- Use the sslrootcert parameter to refer to the certificate in the PostgreSQL connection URL, specifying mountPath + filename:
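Putting the steps together, a values.yaml sketch might look like the following. The ConfigMap name, mount path, and top-level key names are illustrative assumptions; check the chart's README for the actual schema:

```yaml
# Hypothetical fragment. ConfigMap created beforehand with:
#   kubectl create configmap db-ca-cert --from-file=rds-ca-rsa2048-g1.pem
controlPlane:
  volumes:
    - name: db-ca-cert
      configMap:
        name: db-ca-cert
  volumeMounts:
    - name: db-ca-cert
      mountPath: /etc/ssl/db   # illustrative mount path

# sslrootcert points at mountPath + filename
dbUrl: "postgresql://user:password@db-host:5432/adaptive?sslmode=verify-full&sslrootcert=/etc/ssl/db/rds-ca-rsa2048-g1.pem"
```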

