Overview
All training classes in `adaptive_harmony` support logging metrics to external tracking services. Metrics include training loss, validation scores, grader evaluations, generation statistics, and any custom metrics from callbacks.
Available Loggers
Adaptive Harmony supports four logging backends:

WandbLogger (Weights & Biases)
Integration with Weights & Biases for experiment tracking and visualization.
- Real-time metric visualization
- Table logging for sample completions and evaluations
- Hyperparameter tracking
- Direct link to the W&B dashboard via `logger.training_monitoring_link`

Setup:
- Set your W&B API key: `export WANDB_API_KEY=your_key_here`
- Or log in via `wandb login`
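For local runs you can also construct the logger yourself. The sketch below assumes an import path, constructor arguments (`project`, `run_name`), and a `log_metrics()` method; check your `adaptive_harmony` version for the exact API. Only `training_monitoring_link` is documented above.

```python
# Sketch only: the import path, constructor arguments, and log_metrics()
# are assumptions; adjust to your adaptive_harmony version.
from adaptive_harmony.logging import WandbLogger

logger = WandbLogger(project="my-project", run_name="sft-baseline")
logger.log_metrics({"loss/train": 0.42}, step=1)

# Documented attribute: direct link to the W&B dashboard for this run.
print(logger.training_monitoring_link)
```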
MLFlowLogger
Integration with MLflow for experiment tracking.
- Metric and parameter logging
- Table logging (exported as JSON artifacts)
- Integration with the MLflow UI
- Support for custom tags and metadata

Setup:
- Deploy an MLflow tracking server (it can be deployed alongside Adaptive; see the Adaptive Helm Chart)
- Set `tracking_uri` to your server URL
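A minimal sketch of pointing the logger at your tracking server; apart from `tracking_uri`, the argument names (`experiment_name`, `tags`), the import path, and `log_metrics()` are assumptions:

```python
# Sketch: tracking_uri is documented above; experiment_name, tags, and the
# import path are illustrative assumptions.
from adaptive_harmony.logging import MLFlowLogger

logger = MLFlowLogger(
    tracking_uri="http://mlflow.example.internal:5000",
    experiment_name="my-use-case",
    tags={"recipe": "sft"},
)
logger.log_metrics({"loss/train": 0.42}, step=1)  # assumed method name
```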
TBMetricsLogger (TensorBoard)
Integration with TensorBoard for visualization.
- Scalar metric tracking
- Table logging (exported as HTML files with an auto-generated index)
- Text logging for string metrics

Setup:
- Specify a directory for logs
- View with: `tensorboard --logdir=/path/to/tensorboard/logs`
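A minimal sketch; the name of the log-directory argument (`log_dir`), the import path, and `log_metrics()` are assumptions:

```python
# Sketch: log_dir and log_metrics() are assumed names.
from adaptive_harmony.logging import TBMetricsLogger

logger = TBMetricsLogger(log_dir="/path/to/tensorboard/logs")
logger.log_metrics({"loss/train": 0.42}, step=1)
```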
StdoutLogger
Simple console output logger. Prints metrics to stdout using rich formatting. No external dependencies required.
- Pretty-printed output to console
- No setup required
- Useful for local development and debugging
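Since it needs no configuration, usage is a one-liner; the import path and the `log_metrics()` method name below are assumptions:

```python
# Sketch: prints metrics to the console, no external service required.
from adaptive_harmony.logging import StdoutLogger

logger = StdoutLogger()
logger.log_metrics({"loss/train": 0.42}, step=1)  # pretty-printed to stdout
```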
Using a globally configured logger
An external logging solution can be deployed globally for all runs in an Adaptive deployment to use. This is accomplished with environment variables defined in the Adaptive Helm Chart. The `get_prod_logger()` function automatically returns the appropriate logger, with an active run, based on your environment configuration. Runs are organized the same way as Adaptive runs: use case first, then run name.
This is the recommended approach for production recipes running on Adaptive.
The backend is selected based on which environment variables are set:
- Weights & Biases - if the `WANDB_API_KEY` environment variable is set
- MLflow - if the `MLFLOW_TRACKING_URI` environment variable is set
- TensorBoard - if the `TENSORBOARD_LOGGING_DIR` environment variable is set
- StdoutLogger - fallback if none of the above are configured
The following environment variables are used by `get_prod_logger()`:
- `HARMONY_JOB_ID` - Unique identifier for this run
- `HARMONY_USE_CASE` - Use case name
- `HARMONY_RECIPE_NAME` - Recipe name
- `HARMONY_JOB_NAME` - Job name
- Plus any logger-specific variables you’ve configured in your deployment
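In a recipe, this typically reduces to a couple of lines. `get_prod_logger()` and `training_monitoring_link` are documented above; the import path and the way the logger is handed to a training class are assumptions for illustration:

```python
# get_prod_logger() picks W&B, MLflow, TensorBoard, or stdout based on the
# environment variables above and returns a logger with an active run.
from adaptive_harmony.logging import get_prod_logger  # import path assumed

logger = get_prod_logger()
print(logger.training_monitoring_link)  # dashboard URL for this run

# Pass it to your training class (trainer name and keyword are illustrative):
# trainer = SFTTrainer(..., logger=logger)
```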
What Gets Logged
Training classes automatically log:

Scalar metrics:
- `loss/train` - Training loss
- `loss/grad_norm` - Gradient norm
- `rewards/*` - Grader scores and statistics
- `kl_divergence` - KL divergence from the reference model (RL methods)
- `learning_rate` - Current learning rate
- `validation/rewards/*` - Validation grader scores (GraderEvalCallback)
- `validation/loss` - Validation loss (ValidationLossCallback)
- `generation/*` - Sample generation metrics (GenerateSamplesCallback)
- Any custom metrics from your callbacks
Tables:
- Sample completions with prompts and responses
- Grader evaluations with reasoning
- Validation results
Hyperparameters:
- All trainer parameters (learning rate, batch size, etc.)
- Model configuration
- Recipe config parameters
Monitoring Links
Training classes automatically link their logger’s monitoring dashboard to Adaptive’s progress reporting UI.
When users view your run’s progress in Adaptive, they can click through to view detailed metrics in the external logging solution you have set up globally for your Adaptive deployment.
See Progress reporting for more details.
Logging Custom Metrics
You can log custom metrics from your training code by calling the logger directly, or from within a callback method (see Training Callbacks).
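A minimal sketch, assuming a `log_metrics()`-style method that takes a dict of named values and a step; the exact method name and import path may differ in your `adaptive_harmony` version:

```python
from adaptive_harmony.logging import get_prod_logger  # import path assumed

logger = get_prod_logger()

# Namespace custom metrics the same way as the built-in ones (e.g. "custom/...").
logger.log_metrics({"custom/data_quality_score": 0.87}, step=100)  # assumed method
```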
Best Practices
- Use `get_prod_logger()` for recipes - Let Adaptive handle logger selection and configuration automatically.
- Use specific loggers for local development - When developing locally, you might want to use a specific logger, for example:
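For instance (constructor arguments and the import path are assumptions):

```python
# Sketch: pick a logger explicitly while iterating locally instead of
# relying on get_prod_logger().
from adaptive_harmony.logging import StdoutLogger, WandbLogger

logger = StdoutLogger()                            # console-only, zero setup
# logger = WandbLogger(project="dev-experiments")  # or a full tracking backend
```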
- Configure one logger - Only set environment variables for one logger when deploying Adaptive to avoid confusion.
- Check monitoring links - Access the dashboard URL via:
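For example (`training_monitoring_link` is the documented attribute; the import path is an assumption):

```python
from adaptive_harmony.logging import get_prod_logger  # import path assumed

logger = get_prod_logger()              # or any of the loggers above
print(logger.training_monitoring_link)  # dashboard URL for the current run
```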
- Close loggers when done - Training classes handle this automatically, but if you’re using a logger manually, close it yourself:
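A sketch of manual cleanup; `log_metrics()` and `close()` are assumed method names, and the import path is an assumption:

```python
from adaptive_harmony.logging import get_prod_logger  # import path assumed

logger = get_prod_logger()
try:
    logger.log_metrics({"loss/train": 0.21}, step=1000)
finally:
    logger.close()  # assumed: flush remaining metrics and end the run
```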

