When you start a run for a recipe in the Adaptive platform, that process is executed on the same infrastructure that hosts Adaptive Engine. This is the appropriate way to execute long-running processes like training or evaluation recipes. However, when you first start writing a custom recipe or a grader, you might want to experiment and debug your logic locally before wrapping the recipe in the appropriate launch syntax and parametrizing its inputs. Thankfully, adaptive_harmony lets you establish a direct connection over secure WebSockets between your local environment and the compute plane of your Adaptive Engine deployment. When you instantiate a HarmonyClient, the desired number of GPUs is allocated directly to you as an interactive session. These GPUs are freed to run other workloads or interactive sessions as soon as the local Python process holding that client in memory exits.
import asyncio
from adaptive_harmony import get_client

client = asyncio.run(
    get_client(
        addr="wss://YOUR_ADAPTIVE_DEPLOYMENT_URL.com",
        num_gpus=2,
        api_key="YOUR_ADAPTIVE_API_KEY",
        use_case="my-use-case"  # must exist in your Adaptive deployment
    )
)
If you use adaptive_harmony in a Jupyter notebook, you can await async methods like get_client directly in a cell (i.e. await get_client(...)); there is no need for asyncio.run.
A Harmony client effectively serves as an LLM API backed by real compute resources. You can locally call methods on this client which are actually executed in the Adaptive compute plane, and which return Python objects to your environment as results. For example, if you run a spawn method to spawn a model on GPU, you get back a new Python handle to a remote model, such as a TrainingModel or InferenceModel, on which you can also call methods (such as .generate(), .train_grpo(), .optim_step(), etc.). This creates a hybrid development environment, where you can step through Python recipe code locally in your IDE, while powerful compute resources execute the methods that require them.
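As a minimal sketch of this hybrid workflow, the function below exercises a client obtained from get_client as shown earlier. The spawn and generate method names follow the description above, but their exact signatures and the model identifier are assumptions that may differ in your adaptive_harmony version:

```python
import asyncio


async def interactive_smoke_test(client):
    """Exercise a HarmonyClient obtained from get_client.

    Method names (spawn, generate) follow the surrounding text; the exact
    signatures and the model identifier are illustrative assumptions.
    """
    # spawn loads a model onto the GPUs reserved for this interactive session;
    # the return value is a local Python handle to a remote model
    # (e.g. an InferenceModel)
    model = await client.spawn("my-model")
    # generate executes in the Adaptive compute plane; the completion comes
    # back to your local environment as a regular Python object
    return await model.generate("Summarize GRPO in one sentence.")
```

Because the client handle behaves like any other Python object, you can set breakpoints around these calls in your IDE and inspect each remote result as it arrives.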

(Top) A Harmony client connects to a desired number of GPUs, reserving them as an interactive session. (Bottom) LLM-specific Python methods are sent to the compute plane for execution, and the results return to your local environment as Python objects.

As you will see in Recipe syntax, when you structure a custom recipe into the required format for execution in the Adaptive Engine, a client is created automatically at recipe launch and passed to the recipe in the RecipeContext (ctx).
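For orientation, a recipe entry point might then look roughly like the skeleton below. The RecipeContext attribute name (ctx.client) is an assumption based on the description above, not a definitive API; see Recipe syntax for the exact format:

```python
import asyncio


async def my_recipe(ctx):
    # ctx is the RecipeContext built by Adaptive Engine at recipe launch; it
    # already carries a connected Harmony client, so unlike local development
    # there is no get_client call here (attribute name is illustrative)
    client = ctx.client
    # from here on, the same hybrid pattern applies: spawn a remote model and
    # call methods on the returned handle
    model = await client.spawn("my-model")
    ...
```

The same recipe body can therefore be debugged locally with a manually created client, then handed the engine-provided client at launch with no other changes.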