Last updated
The AI integration in Vult is structured to provide real-time AI chat, file-based AI processing, memory retention, and batch processing. The technical implementation consists of frontend components, backend APIs, database interactions, and AI model processing pipelines.
This section details how chat requests, file processing, batch execution, and memory retention are handled within Vult AI.
Users interact with the Vult AI Chat Interface by entering a text prompt, selecting an AI model, and optionally attaching a file.
If the user attaches a file:
The file is privately shared using an AuthTicket and a LookupHash.
The AI model downloads the file securely.
The file content is analyzed using the selected AI model.
The request is sent to the backend API for processing, and the selected AI model generates a response.
Chat responses are saved in the AI Chat Folder, and file-based responses are stored in the AI Agent Folder.
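The flow above can be sketched as assembling a single request payload. The field names and the `build_chat_payload` helper here are illustrative assumptions, not the actual Vult API schema:

```python
import json

def build_chat_payload(prompt, model, lookup_hash=None, auth_ticket=None):
    """Assemble a chat request body (field names are hypothetical)."""
    payload = {"prompt": prompt, "model": model}
    if lookup_hash and auth_ticket:
        # Attached files are referenced via a private share, never uploaded
        # directly to the third-party AI model.
        payload["attachment"] = {
            "lookupHash": lookup_hash,
            "authTicket": auth_ticket,
        }
    return json.dumps(payload)

body = build_chat_payload("Summarize this file", "gpt-4o",
                          lookup_hash="abc123", auth_ticket="dGlja2V0")
```

The attachment is optional: a plain chat message simply omits the `attachment` object, matching the branch in the flow above.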
Handles real-time AI chat interactions.
Endpoint:
Request Payload:
Response:
Processes multiple files in one request for AI-based batch execution.
Endpoint:
Retrieves past user interactions to maintain context across sessions.
Endpoint:
Files are not uploaded to third-party AI models. Instead, a secure private share is created using an AuthTicket, which is a unique Base64-encoded authentication token, and a LookupHash, which serves as a reference hash to identify the file.
The AI model then downloads and processes the file.
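One way to picture how such a share could be derived is sketched below. The actual token format and hashing scheme are not specified in the docs; a random Base64 token and a SHA-256 content hash are illustrative choices only:

```python
import base64
import hashlib
import os

def create_private_share(file_bytes: bytes):
    """Create a hypothetical AuthTicket/LookupHash pair for a private share.

    Assumptions: the AuthTicket is a Base64-encoded random token, and the
    LookupHash is a SHA-256 digest of the file contents used as a reference.
    """
    auth_ticket = base64.b64encode(os.urandom(24)).decode()   # unique auth token
    lookup_hash = hashlib.sha256(file_bytes).hexdigest()      # file reference hash
    return auth_ticket, lookup_hash

ticket, ref = create_private_share(b"example file contents")
```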
The process begins with the user prompt and any attached file being sent to the Backend API. The backend determines the selected AI model. If a file is attached, it is downloaded from Vult Storage.
The AI model then generates a response, which can be text, image, or document-based. This response is stored and displayed in the user interface.
The Mem0 AI API stores previous chat sessions. When a user starts a new conversation, the AI retrieves contextually relevant past interactions.
The retrieved memory is appended to the current prompt before the AI processes it.
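The memory-augmentation step might look like the sketch below. The `augment_prompt` helper and the context formatting are assumptions for illustration; the actual Mem0 retrieval call and prompt template are not specified here:

```python
def augment_prompt(user_prompt, past_interactions):
    """Prepend retrieved memory to the current prompt before AI processing.

    `past_interactions` stands in for whatever the memory API returns;
    the formatting is a hypothetical choice.
    """
    if not past_interactions:
        return user_prompt
    context = "\n".join(f"- {m}" for m in past_interactions)
    return f"Relevant past context:\n{context}\n\nUser: {user_prompt}"

prompt = augment_prompt(
    "What did we decide about the logo?",
    ["User prefers a minimalist logo", "Brand color is teal"],
)
```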
The user begins by selecting a folder containing multiple files. The AI Agent then creates a batch request for processing these files.
All selected files are securely downloaded. These files, along with any user-defined instructions, are packaged into a JSONL file. This JSONL file is then sent to the OpenAI Batch API for processing.
The batch API processes each file independently. The AI responses are stored as individual text files in the AI Agent Folder. For large batches, processing can take up to 24 hours.
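The JSONL packaging step can be sketched as follows. Each line follows the OpenAI Batch API request format (`custom_id`, `method`, `url`, `body`); the prompt wording and model choice are illustrative assumptions:

```python
import json

def build_batch_jsonl(files: dict, instructions: str, model="gpt-4o-mini"):
    """Package file contents plus user instructions into Batch API JSONL.

    `files` maps a filename to its extracted text content; each file becomes
    one independent request line, mirroring per-file batch processing.
    """
    lines = []
    for name, text in files.items():
        lines.append(json.dumps({
            "custom_id": name,  # used to match responses back to files
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,
                "messages": [
                    {"role": "system", "content": instructions},
                    {"role": "user", "content": text},
                ],
            },
        }))
    return "\n".join(lines)

jsonl = build_batch_jsonl({"notes.txt": "Q3 planning notes..."},
                          "Summarize each file.")
```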
Chat interactions between users and the AI agent are stored in a NoSQL database. The schema is designed to be flexible, allowing for variable message formats, timestamps, metadata, and session grouping.
While conversations are stored in the database, media assets and outputs generated by the AI agent are saved on the file system for efficient retrieval, archival, and offline access.
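A stored chat document might be shaped like the example below. The field names and session grouping are illustrative; the docs only state that messages, timestamps, metadata, and sessions are stored in a flexible schema:

```python
# Hypothetical shape of one chat document in the NoSQL store.
chat_document = {
    "session_id": "sess-2024-001",       # groups messages into a conversation
    "user_id": "user-42",
    "messages": [
        {"role": "user", "content": "Summarize my notes",
         "timestamp": "2024-05-01T10:00:00Z"},
        {"role": "assistant", "content": "Here is a summary...",
         "timestamp": "2024-05-01T10:00:03Z"},
    ],
    "metadata": {"model": "gpt-4o", "attachment": None},
}
```

Because the schema is document-oriented, individual messages can carry extra fields (for example, a file reference) without a migration.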
Responses are stored in the AI Agent Folder for future retrieval.
Text responses are stored as .txt or .pdf files.
Image-based responses are processed and saved in the AI Image Directory.
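The routing of responses to disk can be sketched as below; the directory names and `save_response` helper are hypothetical stand-ins for the AI Agent Folder and AI Image Directory:

```python
import tempfile
from pathlib import Path

def save_response(root: Path, name: str, content, kind: str):
    """Route an AI response to the matching on-disk location.

    Text responses land in an 'ai_agent' folder as .txt files; image bytes
    land in an 'ai_images' folder. Paths are illustrative assumptions.
    """
    if kind == "text":
        path = root / "ai_agent" / f"{name}.txt"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(content)
    elif kind == "image":
        path = root / "ai_images" / f"{name}.png"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(content)
    else:
        raise ValueError(f"unsupported response kind: {kind}")
    return path

out = save_response(Path(tempfile.mkdtemp()), "summary", "Key points...", "text")
```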