StringThread is the atomic data element in adaptive_harmony. A thread is a sequence of turns (role + content), combined with turn weights for training and optional metadata (metric feedback, ground truth labels, or any custom key-value pairs).
Create a StringThread
Builder methods
Each method returns a new StringThread with the turn appended:
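The append-and-return contract can be sketched in plain Python (an illustrative model, not the real adaptive_harmony class; builder names like .system() and .user() are assumed here to mirror the documented .assistant()):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Thread:
    """Illustrative model of the builder contract, not the real StringThread."""

    turns: tuple = ()  # (role, content, weight) triples

    def _append(self, role, content, weight):
        # Builders never mutate: each returns a new thread with the turn added.
        return Thread(self.turns + ((role, content, weight),))

    def system(self, content):
        return self._append("system", content, 0.0)

    def user(self, content):
        return self._append("user", content, 0.0)

    def assistant(self, content):
        return self._append("assistant", content, 1.0)


base = Thread().system("You are concise.")
thread = base.user("Hi").assistant("Hello!")
print(len(base.turns), len(thread.turns))  # 1 3 -- base is unchanged
```

Because each builder returns a fresh thread, a common base prompt can be reused safely across many threads.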
Access turns and content
get_turns() returns every turn as a (role, content) tuple. For multimodal turns, images are represented as <|image|> in the string content.
messages() returns all turns except the final one if it has the assistant role. This is useful when you need to split a thread into prompt and completion.
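The behavior of both accessors can be modeled over plain (role, content) tuples (a sketch of the documented semantics, not the library code):

```python
def get_turns(turns):
    """Every turn as a (role, content) tuple."""
    return list(turns)


def messages(turns):
    """All turns except the final one when it is an assistant turn --
    i.e. the prompt part of a prompt/completion split."""
    if turns and turns[-1][0] == "assistant":
        return list(turns[:-1])
    return list(turns)


chat = [("system", "Be brief."), ("user", "Hi"), ("assistant", "Hello!")]
prompt = messages(chat)
completion = chat[len(prompt):]
print(prompt)      # system and user turns only
print(completion)  # [('assistant', 'Hello!')]
```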
Multimodal StringThread
The difference from a text-only thread is that content becomes a list of fragments instead of a plain string. There are two fragment types:
| Type | Content key | Example |
|---|---|---|
| TextFragment | text | {"type": "text", "text": "Describe this image."} |
| ImageFragment | url | {"type": "image", "url": "data:image/png;base64,..."} |
Use StringThread.from_fragments() to create a multimodal thread. Don’t forget the await! from_fragments is async because it loads and decodes images.
A text-only StringThread is equivalent to a fragment thread in which each turn holds a single TextFragment:
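The two fragment shapes, and the text-only equivalence, can be sketched with plain dicts (a model of the format shown in the table above; these helper functions are illustrative, not part of the adaptive_harmony API):

```python
def text_fragment(text: str) -> dict:
    """Harmony-style text fragment."""
    return {"type": "text", "text": text}


def image_fragment(data_uri: str) -> dict:
    """Harmony-style image fragment; expects a full data: URI."""
    return {"type": "image", "url": data_uri}


# A plain string turn carries the same content as one TextFragment:
plain = "Describe this image."
as_fragments = [text_fragment(plain)]

# A multimodal turn mixes both fragment types:
mixed = [
    text_fragment("Describe this image."),
    image_fragment("data:image/png;base64,iVBORw0..."),
]
print(as_fragments[0]["text"] == plain)  # True
```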
The fragment format in Harmony differs from the chat completions API. In Harmony, image fragments use {"type": "image", "url": "..."}, while the chat completions API uses {"type": "image_url", "image_url": {"url": "..."}}.
Image encoding
ImageFragment expects a url field with the full data URI. To base64-encode a local image, use the built-in helper:
image_to_base64 returns the raw base64 string (without the data:... prefix). It also allows you to resize images and convert to grayscale:
| Parameter | Type | Description |
|---|---|---|
| image_path | str or Path | Path to the image file |
| format | str | Output format (default: "PNG") |
| longest_side_max_size | int or None | Resize so the longest side fits this limit |
| black_and_white | bool | Convert to grayscale (default: False) |
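Since the helper returns the raw base64 string, building the full data URI an ImageFragment expects is one extra step. A stdlib-only stand-in (no resizing or grayscale support, unlike the real helper):

```python
import base64
import pathlib
import tempfile


def encode_image(image_path) -> str:
    """Stdlib stand-in for image_to_base64: raw base64, no data: prefix."""
    return base64.b64encode(pathlib.Path(image_path).read_bytes()).decode("ascii")


# Demo file: a few fake header bytes stand in for a real PNG.
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
    f.write(b"\x89PNG\r\n\x1a\n")
    path = f.name

b64 = encode_image(path)
uri = f"data:image/png;base64,{b64}"  # ImageFragment wants the full data URI
print(uri[:22])  # data:image/png;base64,
```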
Supported image formats
The formats accepted depend on the context:
- In recipes (adaptive_harmony): most image formats are supported (PNG, JPEG, GIF, WebP, BMP, TIFF, etc.). Images can be loaded from file paths, HTTP URLs, or data: URIs.
- Via the chat completions API (SDK / OpenAI client): only PNG, JPEG, GIF, and WebP are accepted, and only as data: URIs; HTTP URLs are rejected.
Turn weighting
During training, turn weights control how much each turn contributes to the loss. A weight of 0.0 means the model does not learn from that turn, while 1.0 means it contributes fully. This is how you tell the model which parts of a conversation to learn from: typically you want it to learn from assistant responses, not from user prompts or system messages.
By default, turns added with .assistant() get a weight of 1.0 and all other roles get 0.0. When you load a dataset that contains completions, with_weight_last_assistant_turn() is applied automatically: only the final assistant turn is weighted. You can override this after loading using one of the methods below.
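The default described above can be sketched over plain (role, content) tuples (a model of with_weight_last_assistant_turn(), not the library implementation):

```python
def weight_last_assistant_turn(turns):
    """Weight 1.0 on the final assistant turn, 0.0 everywhere else."""
    last = max(
        (i for i, (role, _) in enumerate(turns) if role == "assistant"),
        default=None,
    )
    return [
        (role, content, 1.0 if i == last else 0.0)
        for i, (role, content) in enumerate(turns)
    ]


turns = [
    ("user", "Hi"),
    ("assistant", "Hello!"),
    ("user", "Bye"),
    ("assistant", "Goodbye!"),
]
weighted = weight_last_assistant_turn(turns)
print([w for _, _, w in weighted])  # [0.0, 0.0, 0.0, 1.0]
```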
Weighting methods
Each method returns a new StringThread with updated weights:
| Method | Behavior |
|---|---|
| with_weight(w) | Set weight w on all turns |
| with_weight_all_assistant_turns() | Weight 1.0 on all assistant turns, 0.0 on others |
| with_weight_last_assistant_turn() | Weight 1.0 on the last assistant turn only, 0.0 on others |
| with_weight_assistant_turns_from_index(i) | Weight 1.0 on assistant turns starting from the i-th assistant turn |
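How these weights enter training can be sketched as a simple masked loss (an illustrative model of the mechanism, not Adaptive's actual training code):

```python
def masked_loss(token_losses, turn_weights, turn_spans):
    """Weighted training loss: a turn with weight 0.0 contributes nothing."""
    return sum(
        w * sum(token_losses[start:end])
        for (start, end), w in zip(turn_spans, turn_weights)
    )


token_losses = [0.5, 0.25, 0.75, 0.25]  # two turns of two tokens each
turn_spans = [(0, 2), (2, 4)]           # user turn, then assistant turn
print(masked_loss(token_losses, [0.0, 1.0], turn_spans))  # 1.0 -- assistant only
```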

