Selecting a Model

Adaline’s model picker dropdown lets you choose which LLM will process your prompts. The dropdown displays all supported LLMs for the providers you’ve configured in your Workspace settings.

  1. Navigate to the model picker dropdown.

  2. Browse through the list of models from your connected providers.

    • Each model name includes a prefix (like “OpenAI (personal)::”) that shows the provider name you configured when adding your API keys.
    • This prefix helps you choose not only the right model but also which specific provider account to use, which is especially useful when you have multiple keys for the same provider, such as “OpenAI-dev” and “OpenAI-prod” setups.
  3. Select the model that best fits your current task.

Adjusting Model Parameters

Adaline lets you fine-tune LLM behavior through adjustable parameters in the model settings panel. Available parameters vary by model, but commonly include temperature, max tokens, top_p, frequency penalty, and stop sequences.

You can access and adjust model parameters by clicking the three dots (⋯) button located to the right of the model picker.

The parameters displayed depend on the selected model; the interface automatically updates to show only the parameters relevant to your chosen model.
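To illustrate what these parameters control, here is a hypothetical request payload using standard OpenAI-style field names. The model name and values are examples for illustration only, not Adaline defaults:

```python
# Illustrative payload showing how common model parameters map to
# standard OpenAI-style Chat Completions fields. Values are examples only.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Summarize this article."}],
    "temperature": 0.7,        # randomness: 0 = near-deterministic, higher = more varied
    "max_tokens": 512,         # cap on the number of tokens generated in the response
    "top_p": 0.9,              # nucleus sampling: sample from the top 90% probability mass
    "frequency_penalty": 0.5,  # discourage repeating the same tokens
    "stop": ["\n\n"],          # stop generation when this sequence appears
}
```

Lower temperature and top_p values make output more predictable; stop sequences are useful for cutting a response off at a known boundary.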

JSON Schema Configuration

To implement JSON schema configuration in Adaline:

  1. Navigate to Response Format under the model configuration settings.

    • It is accessible via the three dots (⋯) menu in the model toolbar.

  2. Locate the JSON schema configuration section.

  3. Define your schema using the OpenAI JSON Schema structure under the response schema role.
To learn more about JSON schema responses, see Response Schema.
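For reference, a response schema in the OpenAI JSON Schema structure typically looks like the following sketch. The schema name and fields (article_summary, title, key_points) are illustrative, not part of Adaline:

```json
{
  "type": "json_schema",
  "json_schema": {
    "name": "article_summary",
    "strict": true,
    "schema": {
      "type": "object",
      "properties": {
        "title": { "type": "string" },
        "key_points": {
          "type": "array",
          "items": { "type": "string" }
        }
      },
      "required": ["title", "key_points"],
      "additionalProperties": false
    }
  }
}
```

Setting "strict": true and "additionalProperties": false constrains the model to return exactly the declared fields.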