ChatBot API

The ChatBot API allows you to try out different Large Language Models (LLMs) from providers such as Mistral, OpenAI, and Anthropic (Claude).

For more details on the supported LLM models and their capabilities, please refer to the ChatBot page.

Chat with LLM

POST /stream_chat

Headers

| Name | Value |
| ---- | ----- |
| Content-Type | application/json |
| Authorization | Bearer <token> |

Body

The request body accepts the following parameters:

llm (string)

The llm parameter specifies the type of Large Language Model (LLM) to be used. The supported values are:

  • ClaudeAI

  • VertexMistralAI

  • OpenAI

  • GeminiAI

  • GroqAI

  • LLamaAI

llm_model (string)

The llm_model parameter specifies the name of the Large Language Model (LLM) to be used. The supported values for this parameter depend on the llm you have selected.

For example, if the llm is OpenAI, the supported llm_model values include:

  • "gpt-4-turbo-2024-04-09"

  • "gpt-3.5-turbo-0125"

Similarly, if the llm is VertexMistralAI, the supported llm_model values include:

  • "mistral-large-latest"

  • "open-mistral-7b"

Ensure that the llm_model value you provide is compatible with the selected llm.

The full list of supported llm_model values is:

  1. gemini-1.5-pro-latest

  2. gemini-1.5-flash-001

  3. gpt-4o-2024-05-13

  4. gpt-4o-mini

  5. claude-3-haiku@20240307

  6. claude-3-opus@20240229

  7. claude-3-5-sonnet@20240620

  8. meta/llama3-405b-instruct-maas

  9. llama-3.1-70b-versatile

  10. llama-3.1-8b-instant

  11. mistral-large@2407

  12. codestral@2405

history (list of objects)

The history parameter provides the previous chat history along with the latest user message. The history must follow a specific pattern:

  1. The history should start with a user message.

  2. After each user message, there should be a response from the assistant.

  3. The last message in the history should be a user message or query.

The history should be provided as a list of dictionaries, where each dictionary represents one message and has the following structure:

{
    "content": {
        "text": "User query or assistant response",
        "image_data": [
            {
                "url": "URL of the image",
                "details": "low" or "high" (optional, only for OpenAI models)
            }
        ]
    },
    "role": "user" or "assistant"
}

The image_data parameter is a list that can contain up to 5 image URLs. The details parameter is optional and can be used to specify the level of detail required for the image analysis (only for OpenAI models).

Here's an example of the history parameter:

"history": [
    {
        "content": {
            "text": "hey",
            "image_data": []
        },
        "role": "user"
    },
    {
        "content": {
            "text": "Hello! How can I assist you today?",
            "image_data": []
        },
        "role": "assistant"
    },
    {
        "content": {
            "text": "Please analyze this image",
            "image_data": [
                {
                    "url": "https://res.cloudinary.com/qolaba/image/upload/v1695690455/kxug1tmiolt1dtsvv5br.jpg",
                    "details": "low"
                }
            ]
        },
        "role": "user"
    }
]
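The three pattern rules above can be checked client-side before sending a request. A minimal sketch in Python (validate_history is an illustrative helper, not part of the API):

```python
def validate_history(history):
    """Return True if the history follows the required pattern:
    it starts with a user message, user and assistant messages
    alternate, and the final message comes from the user."""
    if not history or history[0]["role"] != "user":
        return False
    for prev, curr in zip(history, history[1:]):
        if prev["role"] == curr["role"]:
            return False  # roles must strictly alternate
    return history[-1]["role"] == "user"
```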

system_msg (string)

The system_msg parameter sets a system message for the Large Language Model (LLM). This message can provide context or instructions to the model, influencing the tone and behavior of the generated responses.

temperature (float)

The temperature parameter accepts a float value between 0 and 1 and controls the level of determinism in the output from the Large Language Model (LLM). Higher values (closer to 1) produce more diverse and creative output, while lower values (closer to 0) lead to more deterministic and conservative responses.

image_analyze (bool)

If you are passing image URLs and want the model to analyze the images, set the image_analyze parameter to true.

enable_tool (bool)

To use the tools supported by the Chat API, set the enable_tool parameter to true. The Chat API currently supports two tools:

  1. Vector Search

  2. Internet Search

After enabling enable_tool, provide the details of the tool you want to use in the tools parameter.

tools (dictionary)

The tools parameter specifies the tool you want to use with the Chat API. The supported tools and their configurations are as follows:

  1. Internet Search Tool:

    • tool_name: "Tavily"

    • tool_type: "InternetSearch"

  2. PDF Search Tool:

    • tool_name: "QdrantDB"

    • tool_type: "DBSearch"

    • pdf_references: A list of PDF IDs from which you want to retrieve details. These PDF IDs can be obtained by using the pdfVectorStore endpoint to index the PDFs.

Provide the tools parameter as a dictionary with the structure above, specifying the desired tool and its configuration.
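As a sketch, the two tool configurations described above might look like this in Python (the ID in pdf_references is a placeholder; real IDs come from the pdfVectorStore endpoint):

```python
# Internet Search tool configuration
internet_search_tool = {
    "tool_name": "Tavily",
    "tool_type": "InternetSearch",
}

# PDF Search tool configuration: pdf_references lists the IDs
# returned by the pdfVectorStore endpoint (placeholder value below)
pdf_search_tool = {
    "tool_name": "QdrantDB",
    "tool_type": "DBSearch",
    "pdf_references": ["<id-from-pdfVectorStore>"],
}
```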

take_route (bool)

Set this parameter to true to enable routing. Routing lets you define two LLM models: a strong model and a weak model. The router selects the weak model for general queries and the strong model for advanced queries, using the more expensive, powerful model only when necessary to optimize costs.

router (dictionary)

Use this dictionary to specify the LLM and LLM model for both the strong and weak cases. For example:

"router": {
    "strong_llm": "OpenAI",
    "strong_llm_model": "gpt-4o-2024-05-13",
    "weak_llm": "OpenAI",
    "weak_llm_model": "gpt-4o-mini"
}

Example request body:

data = {
    "llm_model": "claude-3-opus@20240229",
    "temperature": 0.5,
    "system_msg": "You are a helpful assistant. Follow the user's instructions carefully. Respond using markdown.",
    "llm": "ClaudeAI",
    "image_analyze": False,
    "enable_tool": True,
    "history": [
        {
            "content": {
                "text": "hey",
                "image_data": []
            },
            "role": "user"
        },
        {
            "content": {
                "text": "Hello! How can I assist you today?",
                "image_data": []
            },
            "role": "assistant"
        }
    ],
    "tools": {
        "tool_name": "Tavily",
        "tool_type": "InternetSearch",
        "number_of_context": 3
    },
    "take_route": False,
    "router": {
        "strong_llm": "OpenAI",
        "strong_llm_model": "gpt-4o-2024-05-13",
        "weak_llm": "OpenAI",
        "weak_llm_model": "gpt-4o-mini"
    }
}
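A request with this body could be sent as follows. This is a sketch using only the standard library; the API host and token are placeholders, and it assumes the stream is framed as one JSON object per line:

```python
import json
import urllib.request

def stream_chat(data, token, base_url="https://<api-host>"):
    """POST the request body to /stream_chat and yield each streamed
    chunk as a dictionary. The base_url and the one-JSON-object-per-line
    framing are assumptions, not part of the documented API."""
    req = urllib.request.Request(
        f"{base_url}/stream_chat",
        data=json.dumps(data).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        for raw_line in resp:
            line = raw_line.strip()
            if line:
                yield json.loads(line)
```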

After passing the necessary parameters and executing the Chat API, you will receive a stream response. A successful response will look similar to the following:

Response

{
  "output": null, 
  "error": null, 
  "error_data": null
}

The Chat API response is a streaming response, which means you will receive the output in chunks as the model generates the response. The response will continue to stream until the generation is complete.

The response contains the following element:

  • output: the generated text for the current chunk of the stream.

The text output from the LLM can be obtained from the output parameter. When the response is complete, the final chunk will contain a null value in the output parameter, indicating the end of the stream.
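Client code that consumes the stream might accumulate the text like this. The sketch assumes each chunk is a dictionary of the shape shown in the Response example; collect_output is an illustrative helper, not part of the API:

```python
def collect_output(chunks):
    """Concatenate the text from each streamed chunk, stopping at the
    first chunk whose output is null (None), which marks the end of
    the stream. Raises if the API reports an error."""
    parts = []
    for chunk in chunks:
        if chunk.get("error") is not None:
            raise RuntimeError(chunk["error"])
        if chunk.get("output") is None:
            break  # null output signals the end of the stream
        parts.append(chunk["output"])
    return "".join(parts)
```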

Run the API

To test this API, please use the following link:

Parse File

Use this endpoint to add a PDF, CSV, TXT, or DOC/DOCX document to the Qolaba AI database. The API parses the document, stores it in the vector database, and returns a unique ID that can be used to retrieve information from the document using the Large Language Model (LLM).

Please note the following guidelines when indexing documents:

  • PDF and DOC/DOCX files should not exceed 200 pages.

  • CSV files should not contain more than 30 columns or 500 rows.

  • When uploading a CSV file to the API, the first row must contain the column names. This helps the Large Language Model (LLM) better understand the values in each row of the CSV file.

  • Ensure that the document does not contain any sensitive or confidential information.

The unique ID returned after indexing the document can be used in subsequent requests to the Chat API to retrieve relevant information from the document.

POST /pdfVectorStore

Headers

| Name | Value |
| ---- | ----- |
| Content-Type | application/json |
| Authorization | Bearer <token> |

Body

The request body accepts the following parameter:

url (string)

The url parameter specifies the URL of the document to be indexed.

Response

{
  "output": null, 
  "error": null, 
  "error_data": null
}

Upon successfully indexing a document, the API returns a response with the unique identifier of the indexed document in the output parameter.

You can use this unique identifier in subsequent requests to the Chat API to retrieve information from the indexed document.
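Putting the two endpoints together, indexing a document might look like this. A standard-library sketch with the API host and token as placeholders:

```python
import json
import urllib.request

def index_document(doc_url, token, base_url="https://<api-host>"):
    """POST a document URL to /pdfVectorStore and return the unique
    document ID from the output field. base_url is a placeholder."""
    req = urllib.request.Request(
        f"{base_url}/pdfVectorStore",
        data=json.dumps({"url": doc_url}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    if body.get("error") is not None:
        raise RuntimeError(body["error"])
    return body["output"]
```

The returned ID can then be placed in the pdf_references list of the QdrantDB tool configuration when calling /stream_chat.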

Run the API

To test this API, please use the following link:
