ChatBot
Qolaba's platform offers a robust chatbot feature equipped with a selection of advanced Large Language Models (LLMs). Here’s how to navigate and utilize the chatbot dashboard effectively.
Dashboard Navigation: Once logged in, locate the chatbot option on the left panel of your dashboard.
Chatbot Interface: Clicking the chatbot icon will lead you to the dedicated chatbot dashboard.
The top of the left panel lets you choose from 11 different LLMs, each with unique capabilities and context window sizes. The context window determines how much information the LLM can consider when generating responses; it is akin to the model's memory limit during a conversation. If the conversation exceeds this limit, the model may start "forgetting" earlier parts of it, which can reduce the relevance and coherence of its responses (a rough word-budget check is sketched after the model list below).
Here's a breakdown of the available LLMs:
GPT-4o mini (OpenAI): max input 78,132 words, max output 11,468 words, total context window 89,600 words
GPT-4o (OpenAI): max input 86,733 words, max output 2,867 words, total context window 89,600 words
Claude 3 Haiku (Anthropic): max input 137,133 words, max output 2,867 words, total context window 140,000 words
Claude 3 Opus (Anthropic): max input 137,133 words, max output 2,867 words, total context window 140,000 words
Claude 3.5 Sonnet (Anthropic): max input 134,266 words, max output 5,734 words, total context window 140,000 words
Gemini 1.5 Pro Vision (Google): max input 694,266 words, max output 5,734 words, total context window 700,000 words
Gemini 1.5 Pro Flash (Google): max input 694,266 words, max output 5,734 words, total context window 700,000 words
Mistral Large 2 (MistralAI): max input 86,733 words, max output 2,867 words, total context window 89,600 words
Mistral Small (MistralAI): max input 16,666 words, max output 5,734 words, total context window 22,400 words
Codestral (MistralAI): max input 19,533 words, max output 2,867 words, total context window 22,400 words
Llama 3.1 405B (Meta): max input 86,733 words, max output 2,867 words, total context window 89,600 words
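To make the word limits above concrete, here is a minimal, illustrative sketch of how you might check locally whether a conversation still fits a model's maximum input length. The CONTEXT_LIMITS mapping and fits_in_context helper are hypothetical names used only for this example, not part of Qolaba's platform, and whitespace-based word counting is just an approximation.

```python
# Illustrative sketch only: a rough, local check of whether a conversation
# still fits a model's maximum input length, using the word limits listed above.
# CONTEXT_LIMITS and fits_in_context are hypothetical names for this example.

CONTEXT_LIMITS = {
    "GPT-4o mini": 78_132,        # max input length in words
    "Claude 3.5 Sonnet": 134_266,
    "Gemini 1.5 Pro Flash": 694_266,
}

def fits_in_context(messages: list[str], model: str) -> bool:
    """Return True if the combined word count of all messages stays within
    the selected model's maximum input length (approximated by splitting on whitespace)."""
    total_words = sum(len(m.split()) for m in messages)
    return total_words <= CONTEXT_LIMITS[model]

conversation = ["Summarize this report for me.", "Here is the report text ..."]
print(fits_in_context(conversation, "GPT-4o mini"))  # True for a short chat
```

In practice the platform manages this for you; the point of the sketch is simply that longer conversations consume more of the input budget, so larger context windows tolerate longer chats and documents.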
Below the LLM selection, you'll find pre-defined "Assistant" options. These are system prompts designed to guide the LLM towards specific tasks. For instance, the "Image Prompt Engineer" assistant helps you craft effective prompts for image generation using tools like DALL-E or SDXL.
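Conceptually, an Assistant behaves like a pre-written system prompt placed ahead of your messages. The sketch below illustrates that idea in general terms; the prompt text and the build_messages helper are hypothetical examples, not Qolaba's actual assistant prompts or API.

```python
# Illustrative sketch: an Assistant works like a pre-written system prompt that
# is sent ahead of your message. The prompt text below is hypothetical, not
# Qolaba's actual "Image Prompt Engineer" prompt.

ASSISTANTS = {
    "Image Prompt Engineer": (
        "You help the user write detailed, effective prompts for image "
        "generation tools such as DALL-E or SDXL."
    ),
    "Default": "You are a helpful assistant.",
}

def build_messages(assistant: str, user_input: str) -> list[dict]:
    """Prepend the chosen assistant's system prompt to the user's message."""
    return [
        {"role": "system", "content": ASSISTANTS[assistant]},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Image Prompt Engineer", "A cozy cabin in falling snow")
print(messages[0]["content"])  # the system prompt that steers the model
```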
Text Box: The central text box is where you input your queries or instructions for the LLM.
Internet Search: Enable the toggle button in the left panel to give the LLM access to basic internet information, enhancing its responses with real-time data.
Upload Options: The "+" icon next to the text box provides options to:
Upload Image: Upload up to 5 images for the LLM to analyze and provide information about.
Upload PDF/TXT/DOC/DOCX: Upload a document (max 100 pages for PDF/DOC/DOCX) for the LLM to process and answer your questions based on its content. This is ideal for extracting specific information, generating summaries, or exploring the document in detail (a local page-count check is sketched after the document notes below).
Initial Upload: The first upload of a document may take some time for data extraction and processing.
Document Storage: Uploaded documents are saved for future use and can be accessed from the right panel.
Reusing Documents: Type "#" in the text box to view a dropdown list of your uploaded documents. Select up to 5 documents at a time for the LLM to reference in its responses.
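If you want to verify a document before uploading, a quick local page-count check is one option. The sketch below is an illustrative example using the third-party pypdf package, which is an assumption on our part (it is not part of Qolaba's platform) and only covers PDFs.

```python
# Illustrative sketch: check a PDF's page count locally before uploading, since
# documents are limited to 100 pages. Uses the third-party pypdf package
# (pip install pypdf); this is an example, not part of Qolaba's platform.
from pypdf import PdfReader

MAX_PAGES = 100

def within_upload_limit(path: str) -> bool:
    """Return True if the PDF at `path` has at most 100 pages."""
    reader = PdfReader(path)
    return len(reader.pages) <= MAX_PAGES

print(within_upload_limit("report.pdf"))  # replace with your own file path
```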
History: Review past conversations and resume any chat from where you left off.
Saved Prompts: Store frequently used prompts for easy access and reuse.
Saved Documents: View and manage your uploaded documents.
The left panel provides an option to adjust the temperature parameter, which influences the creativity of the model's output.
Higher temperature values encourage the model to generate more diverse and unexpected responses.
Lower temperature values result in more focused and deterministic outputs, adhering closely to the provided context.
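The exact sampling mechanics are internal to each model provider, but the general effect of temperature can be shown with a small, generic sketch: scores are divided by the temperature before being turned into probabilities, so low values concentrate probability on the top choice while high values spread it out. The function and example values below are purely illustrative.

```python
# Generic illustration of temperature-scaled sampling; the providers' internal
# sampling details are not exposed by the platform, so treat this as a sketch.
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Sample one option: low temperature sharpens the distribution toward the
    highest-scoring option, high temperature flattens it toward uniform."""
    scaled = [score / temperature for score in logits.values()]
    max_s = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - max_s) for s in scaled]
    return random.choices(list(logits), weights=weights, k=1)[0]

logits = {"focused": 2.0, "creative": 1.0, "unexpected": 0.5}
print(sample_with_temperature(logits, temperature=0.2))  # almost always "focused"
print(sample_with_temperature(logits, temperature=2.0))  # noticeably more varied
```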
To initiate a new chat session, simply use the "New Chat" option available on the left panel.