Adaptive Engine allows you to easily log feedback on completions to monitor and improve your models.

All metric feedback must be logged against a feedback_key (see Feedback).

When you make an inference request, the API response includes a completion_id UUID along with the model’s output (see Make inference requests to learn more). You must log your feedback for an output using its completion_id.

Make sure to use the response’s completion_id for logging, not its id.

You can access the completion_id for a Chat API response as follows:

completion_id = response.json()["choices"][0]["completion_id"]
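For context, the sketch below shows the kind of raw HTTP request that produces this response. The endpoint URL, model name, and payload are illustrative placeholders, not the documented API; see Make inference requests for the actual request format.

import requests

# Illustrative endpoint and payload; see Make inference requests for the real API
response = requests.post(
  "https://your-adaptive-deployment.example.com/v1/chat/completions",
  headers={"Authorization": "Bearer YOUR_API_KEY"},
  json={
    "model": "your-model",
    "messages": [{"role": "user", "content": "Hello!"}],
  },
)

# As above, the completion_id ships alongside the model output in each choice
completion_id = response.json()["choices"][0]["completion_id"]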

If you are passing stream=True to the Chat API to stream completions, you can find the same completion_id in each streamed chunk as follows:

Adaptive SDK / OpenAI Python
for chunk in streaming_response:
  completion_id = chunk.choices[0].completion_id
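Putting it together, here is a minimal streaming sketch that captures the completion_id once and accumulates the streamed output. It assumes an OpenAI-compatible chunk shape (the delta attribute below), which may differ in your setup:

completion_id = None
output_parts = []

for chunk in streaming_response:
  choice = chunk.choices[0]
  # Every chunk carries the same completion_id, so capturing it once is enough
  if completion_id is None:
    completion_id = choice.completion_id
  # Accumulate streamed tokens (OpenAI-style delta shape assumed)
  if choice.delta.content:
    output_parts.append(choice.delta.content)

full_output = "".join(output_parts)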

Log metric feedback

Metric feedback allows you to score a completion with scalar or boolean values. For example, the code snippet below logs that Llama3.1 8B's completion to your prompt received a CSAT (customer satisfaction) score of 5.

Create an Adaptive client first
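A minimal setup sketch follows; the import path and constructor arguments are assumptions, so check the SDK Reference for the exact API.

from adaptive_sdk import Adaptive # assumed import path; see the SDK Reference

client = Adaptive(
  base_url="https://your-adaptive-deployment.example.com", # placeholder URL
  api_key="YOUR_API_KEY",
)

With the client in hand, log the metric: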

response = client.feedback.log_metric(
  value=5,
  feedback_key="CSAT",
  completion_id=completion_id,
  details="This answer was perfect" # optional text details of 
)
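Boolean metrics are logged the same way. A minimal sketch, assuming a boolean feedback_key named thumbs_up has been registered (the key name is illustrative):

response = client.feedback.log_metric(
  value=True, # boolean metric value
  feedback_key="thumbs_up", # illustrative feedback_key
  completion_id=completion_id,
)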

As shown in the snippets above, you can log textual details to add context or justification to the feedback you provide. See the SDK Reference for the full method definition.

Log preference feedback

Preference feedback allows you to log a pairwise comparison between two completions. You can also log a tie, marking both completions as equally good or equally bad.

Adaptive SDK
response = client.feedback.log_preference(
  feedback_key="acceptance",
  preferred_completion="completion_id_1",
  other_completion="completion_id_2",
)
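For example, to compare two candidate completions for the same prompt, capture each response's completion_id and pass both to log_preference. The variable names below are illustrative; how you generate the two completions is up to you:

# completion_ids from two separate inference requests for the same prompt
completion_id_a = response_a.json()["choices"][0]["completion_id"]
completion_id_b = response_b.json()["choices"][0]["completion_id"]

# Log that completion A was preferred over completion B
response = client.feedback.log_preference(
  feedback_key="acceptance",
  preferred_completion=completion_id_a,
  other_completion=completion_id_b,
)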

See the SDK Reference for the full method definition.