This guide walks you through integrating Adaline into your application. A prompt in Adaline is a collection of model parameters, messages, and tools that is sent to a model to generate a response. Any prompt you orchestrate in Adaline can be deployed for your application to fetch at runtime. After your application has completed its workflow, you can send telemetry to Adaline to monitor the performance of your prompt.

Explore your Workspace

Upon signing up at app.adaline.ai, you’ll automatically receive:

  • A private teamspace to organize your projects.
  • A project to contain your prompts and datasets.
  • A default prompt to begin customizing.
  • An empty dataset to store and organize your test cases.

Click on the Prompt in the sidebar to view your default prompt.

Set up an LLM provider

An LLM provider securely stores your API keys and secrets and is used to run your prompts and evaluations.

  • In the sidebar, click Settings → Providers.
  • Click the plus icon to set up a provider of your choice. For this guide, we will use OpenAI. Click here to learn more about all providers.
  • Paste your OpenAI API key and click Create.
  • Your workspace now has access to all OpenAI models.

Set up your Prompt

  • Click the < Back button in the sidebar, then click on Prompt again to view the Editor and Playground.
  • Click on Select a model and choose a model to run your prompt.
  • (Optional) Click the ellipsis (three dots) next to the model to configure model parameters such as temperature, max tokens, etc. (a sketch of how these surface in the deployed prompt’s config follows below).
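
When your application later fetches the deployment, these settings come back as part of the prompt’s config. A rough sketch of that object follows; only providerName and model are read by the integration example later in this guide, and the other field names are assumptions, so check the API reference for the exact schema.

// Illustrative sketch only. `providerName` and `model` are used by the
// integration example later in this guide; other parameter names are assumed.
const exampleConfig = {
  providerName: "openai",   // the provider you configured in Settings → Providers
  model: "gpt-4o-mini",     // the model you selected in the Editor
  // temperature, max tokens, etc. appear here as well (exact field names may differ)
};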

Run your Prompt

Before you run your prompt, notice the Variables section in the bottom right, where you set variable values. Variables are placeholders for values that are filled in at runtime; they usually represent end-user inputs, additional context, outputs from previous prompts, and so on.

  • Click the Run button (top right) in the Playground to run your prompt.

Congratulations! You just ran your first prompt in Adaline.

  • (Optional) This guide assumes the default prompt, but you can edit the prompt and variables to suit your use case.
    • Add as many messages as you need for zero-shot or few-shot prompts.
    • Update roles per message by clicking on the role (e.g., User in the screenshot).
    • Add as many variables as you need by typing a variable name between double curly braces {{}}, as in the sketch after this list.
    • Update the variable values in the Variable Editor.
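
As an illustrative sketch (not the exact Adaline schema), a few-shot prompt with the {{persona}} variable and a hypothetical {{question}} variable could be structured like the messages below; the shape mirrors the role and content fields that the deployment API returns in prompt.messages in the integration example later in this guide.

// Illustrative sketch of a few-shot prompt with variables. The shape mirrors
// the `prompt.messages` read by the integration example later in this guide;
// {{question}} is a hypothetical variable used only for illustration.
const fewShotMessages = [
  { role: "system", content: [{ value: "You are a {{persona}}." }] },
  { role: "user", content: [{ value: "How did revenue trend last quarter?" }] },
  { role: "assistant", content: [{ value: "Revenue grew quarter over quarter, driven by..." }] },
  { role: "user", content: [{ value: "{{question}}" }] },
];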

Set up a Workspace API key

  • In the sidebar, click Settings → API keys.
  • Click on Create API key.
  • Rename the API key to something meaningful.
  • Click the generated API key to copy it, and paste it somewhere secure. It will not be visible again.
  • Click on Create key.
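
In your application, a common pattern is to keep this key out of source control by reading it from an environment variable and sending it as a Bearer token on every Adaline API request, as the integration example below does. A minimal sketch (the environment variable name is just a convention):

// Minimal sketch: read the workspace API key from an environment variable
// (ADALINE_API_KEY is a naming convention, not something Adaline requires)
// and use it as a Bearer token on Adaline API requests.
const ADX_API_KEY = process.env.ADALINE_API_KEY;
if (!ADX_API_KEY) {
  throw new Error("ADALINE_API_KEY is not set");
}
const adalineHeaders = { Authorization: `Bearer ${ADX_API_KEY}` };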

Deploy Your Prompt

  • Click on the Prompt in the sidebar that you want to deploy.
  • Click the Deploy tab in the top bar.
  • Click the Deploy button (top right) to deploy your prompt to the ‘Production’ environment (it can be renamed later).

Since this is a new prompt with no deployments, you will see the entire prompt highlighted to show its diff. Subsequent deployments will only show the changes.

Congratulations! You just deployed your first prompt in Adaline. This prompt is now accessible by API for your application to fetch at runtime.

  • Click on Copy Prompt ID to copy the prompt ID.
  • Click on Environment ID to copy the environment ID.
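
With the prompt ID, the environment ID, and your workspace API key, your application can fetch the latest deployment at runtime. A minimal sketch of that request follows; the full integration example below wraps the same call in getLatestDeployment.

// Minimal sketch: fetch the latest deployment of a prompt at runtime.
// promptId, environmentId, and workspaceApiKey are the values copied above.
// Requires Node.js 18+ for the built-in fetch.
async function fetchLatestDeployment(promptId, environmentId, workspaceApiKey) {
  const url = `https://api.adaline.ai/v2/deployments` +
    `?promptId=${promptId}&deploymentId=latest&deploymentEnvironmentId=${environmentId}`;
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${workspaceApiKey}` },
  });
  if (!response.ok) throw new Error(`API Error: ${response.status}`);
  return response.json();
}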

Integrate with Your Application

Using the example below, you can integrate Adaline with your application. It fetches the latest deployment of a prompt at runtime, runs a sample workflow, and sends telemetry to Adaline.

  • Replace my_workspace_api_key with your workspace API key.
  • Replace my_prompt_id with your prompt ID.
  • Replace my_deployment_environment_id with your deployment environment ID.
  • Replace my_openai_api_key with your OpenAI API key for the sample workflow.
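
// Requires Node.js 18+ (for the built-in fetch) and the uuid package: npm install uuid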
const { v4: uuidv4 } = require('uuid');

// Adaline constants
const ADX_API_KEY = "my_workspace_api_key";
const ADX_PROMPT_ID = "my_prompt_id";
const ADX_DEPLOYMENT_ENVIRONMENT_ID = "my_deployment_environment_id";
const ADX_BASE_URL = "https://api.adaline.ai";

// OpenAI constants
const OPENAI_API_KEY = "my_openai_api_key";

// Replace variables in prompt with your runtime values
function injectVariables(messages, variables) {
  return messages.map(message => ({
    role: message.role,
    content: message.content.map(c => {
      let text = c.value;
      variables.forEach(v => {
        const placeholder = `{{${v.name}}}`;
        text = text.replaceAll(placeholder, v.value);
      });
      return text;
    }).join(" "),
  }));
}

// Fetch latest deployment from Adaline
async function getLatestDeployment(promptId, deploymentEnvironmentId) {
  const response = await fetch(`${ADX_BASE_URL}/v2/deployments?promptId=${promptId}&deploymentId=latest&deploymentEnvironmentId=${deploymentEnvironmentId}`, {
    headers: {
      Authorization: `Bearer ${ADX_API_KEY}`,
    },
  });

  if (!response.ok) {
    const error = await response.json();
    throw new Error(`API Error: ${error.error}`);
  }

  return response.json();
}

// Send log to Adaline after workflow execution
async function sendLog(projectId, trace, span) {
  const response = await fetch(`${ADX_BASE_URL}/v2/logs/trace`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${ADX_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      projectId,
      trace,
      spans: [span],
    }),
  });

  if (!response.ok) {
    const error = await response.json();
    throw new Error(`API Error: ${error.error}`);
  }

  return response.json();
}

// Sample workflow using OpenAI
async function callOpenAIChatCompletion(model, messages) {
  const spanId = uuidv4();
  const spanStartTime = Date.now();

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${OPENAI_API_KEY}`,
      "Content-Type": "application/json",
      "X-Span-Id": spanId,
    },
    body: JSON.stringify({
      model: model,
      messages: messages,
    }),
  });

  const spanEndTime = Date.now();

  if (!response.ok) {
    const error = await response.json();
    throw new Error(`OpenAI API Error: ${error.error.message}`);
  }

  const data = await response.json();

  return {
    spanId,
    spanStartTime,
    spanEndTime,
    rawResponse: data,
    content: data.choices[0].message.content,
  };
}

// Main function
(async () => {
  try {

    // Start trace
    const traceId = uuidv4();
    const traceStartTime = Date.now();

    // Get latest deployment
    const deployment = await getLatestDeployment(ADX_PROMPT_ID, ADX_DEPLOYMENT_ENVIRONMENT_ID);
    const deploymentId = deployment.id;
    const projectId = deployment.projectId;
    const prompt = deployment.prompt;
    const model = prompt.config.model;
    const provider = prompt.config.providerName;

    // Inject variables into prompt
    const inputVariables = [{ name: "persona", value: "financial analyst" }];
    const messages = injectVariables(prompt.messages, inputVariables);

    // Run sample workflow
    const openAIResult = await callOpenAIChatCompletion(model, messages);

    // End trace
    const traceEndTime = Date.now();

    console.log("========== TRACE INFO ==========");
    console.log("Trace ID:", traceId);
    console.log("Trace Start:", new Date(traceStartTime).toISOString());
    console.log("Trace End:  ", new Date(traceEndTime).toISOString());
    console.log("Trace Duration (ms):", traceEndTime - traceStartTime);

    console.log("\n---- Span: OpenAI API Call ----");
    console.log("Span ID:", openAIResult.spanId);
    console.log("Span Start:", new Date(openAIResult.spanStartTime).toISOString());
    console.log("Span End:  ", new Date(openAIResult.spanEndTime).toISOString());
    console.log("Span Duration (ms):", openAIResult.spanEndTime - openAIResult.spanStartTime);

    console.log("\n✅ OpenAI LLM Response:");
    console.log(openAIResult.content);

    // Construct log payload
    const trace = {
      startedAt: traceStartTime,
      endedAt: traceEndTime,
      name: "test-trace",
      status: "success",
      referenceId: traceId,
      attributes: {
        application: "test-app",
        environment: "test-env"
      },
      tags: ["test-trace-tag"]
    };

    const span = {
      promptId: ADX_PROMPT_ID,
      deploymentId,
      startedAt: openAIResult.spanStartTime,
      endedAt: openAIResult.spanEndTime,
      name: "test-span",
      status: "success",
      referenceId: openAIResult.spanId,
      content: {
        type: "Model",
        provider,
        model,
        input: JSON.stringify({
          config: prompt.config,
          messages: prompt.messages,
          tools: prompt.tools
        }),
        output: JSON.stringify(openAIResult.rawResponse),
        variables: inputVariables.reduce((acc, v) => {
          acc[v.name] = {
            modality: "text",
            value: v.value
          }
          return acc;
        }, {})
      },
      attributes: {
        application: "test-app",
        environment: "test-env"
      },
      tags: ["test-span-tag"]
    };

    // Send trace and span to Adaline
    const logIds = await sendLog(projectId, trace, span);

    // Print log IDs from Adaline
    console.log("\n✅ Log IDs:", logIds);
  } catch (error) {
    console.error("\n❌ Error:", error.message);
  }
})();

Upon successful execution, you should see output similar to this:

========== TRACE INFO ==========
Trace ID: 6d2491b5-9cbf-4929-a52d-ff94cb34a8aa
Trace Start: 2025-06-16T21:56:58.842Z
Trace End:   2025-06-16T21:57:05.238Z
Trace Duration (ms): 6396

---- Span: OpenAI API Call ----
Span ID: e1a2041d-4d77-470a-8274-1aa4fbb2314a
Span Start: 2025-06-16T21:56:59.441Z
Span End:   2025-06-16T21:57:05.238Z
Span Duration (ms): 5797

✅ OpenAI LLM Response:
As a financial analyst, I navigate the intricate web of numbers and data with both precision and enthusiasm. Each day is an opportunity to explore the financial landscape, and I approach it with the focus of a monk, immersing myself in spreadsheets, balance sheets, and income statements like a meditative practice.

I find joy in identifying trends and patterns hidden within the data, like discovering secret flavors in a candy store. The thrill of analysis is akin to savoring a rich piece of chocolate—each data point contributes to a larger narrative, and every insight has the potential to shape the future of a business.

Balancing books is not just about adding and subtracting; it’s about creating harmony. I thrive on the interplay of revenues and expenses, equity and liabilities, and I bring a sense of artistry to financial modeling. Like a maestro conducting a symphony, I orchestrate my findings into actionable insights, guiding stakeholders toward strategic decisions that resonate through the organization.

In this realm of numbers, I embrace both the rigor of analysis and the creativity of strategic thinking. I revel in the challenges of financial forecasting, budgeting, and variance analysis, transforming potential complexities into streamlined solutions. Each report I create is a story waiting to be told, providing clarity and direction in the ever-evolving world of finance.

Whether I’m collaborating with colleagues, presenting to executives, or mentoring budding analysts, I carry that childlike wonder for discovery with me. Every financial statement is a chance to uncover the sweet spots of opportunity, and I’m here to ensure that we navigate the journey with both diligence and delight.

✅ Log IDs: { traceId: '4d423841-1d53-4ce3-bffa-7b94d3637ea2', spanIds: [ '5fe854ba-e5ca-4e34-b22d-fef35a802300' ] }

To learn more about the Latest Deployment API, click here.

To learn more about the Log Trace API, click here.

Monitor Your Prompt

After running the above code, you can view the trace and span you just sent to Adaline.

View Spans

  • Click on Prompt in the sidebar.
  • Click the Monitor tab in the top bar.

You should see all the spans you just sent; in this case, just the one span. A span consists of the prompt, the LLM response, variables, latency, token usage, cost, etc. Spans are shown at the prompt’s Monitor level because each span represents a single execution of the prompt.

View Traces

  • Click on the project in the sidebar (Untitled in the screenshot).
  • Click the Monitor tab in the top bar.

You should see all the traces you just sent; in this case, just the one trace. A trace consists of its spans, latency, status, etc. Traces are shown at the project level because each trace represents a single execution of a workflow, which may contain one or more prompts.

You can also view the individual trace in detail by clicking on it.

Next Steps

Now that you’ve integrated a prompt with your application, you can iteratively improve your prompt in Adaline and one-click deploy it to production while monitoring its performance in real-time.

  • Create and deploy more prompts in the same project to build a more complex workflow; a sketch of logging such a multi-prompt workflow as a single trace follows this list.
  • Send more types of spans, such as embedding, retrieval, etc.
  • After sending a few logs, use the search and filters within the Monitor tab to narrow down the traces and spans you want to view.
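
For instance, a workflow that runs two prompts can be logged as a single trace containing two spans. The sketch below reuses the ADX_BASE_URL and ADX_API_KEY constants from the integration example above and assumes the spans field of the Log Trace API accepts multiple entries, as its array shape suggests; consult the Log Trace API reference for the content schema of non-model span types such as embedding or retrieval.

// Sketch: log a two-prompt workflow as one trace with two spans.
// Reuses ADX_BASE_URL and ADX_API_KEY from the integration example above and
// assumes the `spans` array accepts multiple entries. `buildModelSpan` is a
// hypothetical helper standing in for the span construction shown in that example.
async function sendWorkflowLog(projectId, trace, spans) {
  const response = await fetch(`${ADX_BASE_URL}/v2/logs/trace`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${ADX_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ projectId, trace, spans }),
  });
  if (!response.ok) {
    const error = await response.json();
    throw new Error(`API Error: ${error.error}`);
  }
  return response.json();
}

// Hypothetical usage:
// const spans = [buildModelSpan(firstPromptResult), buildModelSpan(secondPromptResult)];
// await sendWorkflowLog(projectId, trace, spans);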