A/B test selected models on a percentage of live user traffic, the traffic split %. A/B test requests are then distributed equally among the tested models. For example, if your configured traffic split is 10% and you are A/B testing 2 models, 5% of the full use case traffic will be routed to each model.
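The per-model share is simple arithmetic: the traffic split divided by the number of tested models. The sketch below is illustrative only, since the platform performs this allocation itself.

```python
# Illustrative only: the platform performs this allocation server-side.
traffic_split = 0.10   # 10% of live use case traffic enters the A/B test
num_models = 2         # number of models under test

# Each tested model receives an equal share of the A/B test traffic.
per_model_share = traffic_split / num_models
print(f"{per_model_share:.0%} of total traffic per model")  # -> 5%
```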
A/B tests can run on metric or preference feedback. If you configure the test to run on metric feedback, any preference feedback you log for completions during the course of the test will not count towards its results, and vice versa, even if the request was routed to one of the A/B tested models.
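For instance, a metric-based A/B test only counts metric feedback attached to completions served during the test. A minimal sketch of logging both kinds of feedback follows; the client constructor and the log_metric / log_preference method names are assumptions, not the exact SDK surface, so check the SDK Reference for the real signatures.

```python
from adaptive_sdk import Adaptive  # assumed client entry point

client = Adaptive(api_key="YOUR_API_KEY")  # hypothetical constructor

# Suppose the A/B test is configured to run on metric feedback.
# Only metric feedback attached to completions served during the test
# counts towards its results.
client.feedback.log_metric(          # hypothetical method name
    completion_id="cmpl_123",        # completion returned by the Chat API
    metric="thumbs_up",
    value=1,
)

# Preference feedback (a choice between two completions) would be ignored
# by a metric-based test, and vice versa.
client.feedback.log_preference(      # hypothetical method name
    preferred_completion_id="cmpl_123",
    other_completion_id="cmpl_456",
)
```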
You can create an A/B test as follows: create the Adaptive client first, then create the test with the models you want to compare and the desired traffic split, as in the sketch below.
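The sketch assumes a client constructor and an ab_tests.create method with illustrative parameter names; consult the SDK Reference for the exact call.

```python
from adaptive_sdk import Adaptive  # assumed client entry point

# Create the Adaptive client first.
client = Adaptive(api_key="YOUR_API_KEY")  # hypothetical constructor

# Then create the A/B test: the models to compare, the share of live
# traffic to use, and the feedback type the test should run on.
ab_test = client.ab_tests.create(      # hypothetical method name
    use_case="customer-support",
    models=["model-a", "model-b"],
    traffic_split=0.10,                # 10% of live use case traffic
    feedback_type="metric",            # or "preference"
)
print(ab_test.key)  # key used to target the test from the Chat API
```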
To make sure a request is routed to one of the A/B tested models and guarantee it counts towards the A/B test, you can specify the A/B test key in the ab_campaign parameter when using the Chat API.
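Below is a minimal sketch of a Chat API request that targets the test. The client and chat.create method are assumptions for illustration; ab_campaign is the documented parameter.

```python
from adaptive_sdk import Adaptive  # assumed client entry point

client = Adaptive(api_key="YOUR_API_KEY")  # hypothetical constructor

# Passing the A/B test key in ab_campaign routes this request to one of
# the tested models and guarantees it counts towards the test results.
response = client.chat.create(             # hypothetical method name
    use_case="customer-support",
    messages=[{"role": "user", "content": "How do I reset my password?"}],
    ab_campaign="my-ab-test-key",          # key of the A/B test to target
)
print(response.model)  # which tested model served the completion
```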
See the SDK Reference for all A/B test-related methods.