Batch Processing
Last updated
Batch processing in Vult AI allows users to process multiple files simultaneously using AI-powered automation. This feature is particularly useful for users dealing with large datasets, bulk document analysis, text extraction, or AI-assisted content modification. Unlike single-file processing in AI Agent, batch processing submits a group of files to the AI system and returns structured results after processing.
Batch processing is primarily handled through OpenAI's Batch API, ensuring high efficiency and cost optimization when working with multiple files. However, because many requests are processed together, batch responses may take up to 24 hours to complete.
Users can initiate batch processing by placing multiple files inside a designated folder within the Vult AI All Files section. Supported file types include:
Text Files (.txt, .json, .md)
Code Files (.py, .js, .go, .java, etc.)
Document Files (.pdf, .docx, .ppt)
Image Files (.png, .jpg, .jpeg)
Each batch request is limited to 200MB in total file size.
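As a sketch, a client could pre-validate a folder against these constraints before submission. The extension list and the 200MB cap come from this page; the function name, error strings, and the exact set of code extensions (the list above ends in "etc.") are illustrative assumptions:

```python
# Hypothetical pre-submission check mirroring the limits described above.
# Extension list and the 200MB cap come from the documentation; names and
# error messages are illustrative.

SUPPORTED_EXTENSIONS = {
    ".txt", ".json", ".md",            # text files
    ".py", ".js", ".go", ".java",      # code files (non-exhaustive)
    ".pdf", ".docx", ".ppt",           # document files
    ".png", ".jpg", ".jpeg",           # image files
}
MAX_BATCH_BYTES = 200 * 1024 * 1024    # 200MB total per batch request

def validate_batch(files):
    """files: list of (filename, size_in_bytes). Returns a list of problems;
    an empty list means the folder is eligible for batch processing."""
    problems = []
    total = 0
    for name, size in files:
        ext = ("." + name.rsplit(".", 1)[-1].lower()) if "." in name else ""
        if ext not in SUPPORTED_EXTENSIONS:
            problems.append(f"unsupported file type: {name}")
        total += size
    if total > MAX_BATCH_BYTES:
        problems.append(f"batch too large: {total} bytes exceeds the 200MB limit")
    return problems
```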
To start batch processing, the user must:
Navigate to the "All Files" section.
Right-click on the folder containing multiple files.
Select "AI Agent" to initiate batch processing.
Enter a processing prompt—this tells the AI what to do with the files (e.g., "Summarize all PDF documents" or "Extract key points from text files").
Submit the request.
Once the batch request is submitted:
The system downloads all files from the selected folder.
A JSONL file (JSON Lines format) is generated, containing file metadata, prompt text, and structured content for processing.
The request is then sent to OpenAI’s Batch API, which processes all files in parallel.
The batch processing system does not combine all files into a single response; instead, each file is processed independently and returns individual results.
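The JSONL step above can be sketched as follows: each file becomes one independent request line in OpenAI's documented Batch API format (`custom_id`, `method`, `url`, `body`), which is why each file returns its own result. The model name, `custom_id` scheme, and prompt wrapping are assumptions for illustration, not details taken from this page:

```python
import json

def build_batch_jsonl(files, prompt, model="gpt-4o-mini"):
    """files: list of (filename, text_content). Emits one Batch API request
    line per file, so every file is processed independently."""
    lines = []
    for i, (name, content) in enumerate(files):
        request = {
            "custom_id": f"file-{i}-{name}",       # ties the result back to the file
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,
                "messages": [
                    {"role": "system", "content": prompt},   # the user's processing prompt
                    {"role": "user", "content": content},    # the file's text
                ],
            },
        }
        lines.append(json.dumps(request))
    return "\n".join(lines)
```

The resulting `.jsonl` file would then be uploaded and submitted with the Batch API's `completion_window="24h"` option, which matches the up-to-24-hour turnaround described here.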
Single-file processing results are typically available instantly in the AI Agent Folder. Batch processing results take longer, usually up to 24 hours, depending on the request size and complexity.
Users receive notifications once processing is complete. Processed files and AI-generated responses are saved automatically in a dedicated Batch API folder within All Files.
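Because each line in the batch output carries a `custom_id`, the saved responses can be mapped back to their source files when they are written into the Batch API folder. A minimal sketch of that step, assuming the documented Batch API output format (the function name and error handling are illustrative):

```python
import json

def collect_batch_results(output_jsonl):
    """Map each custom_id in a Batch API output file to its AI-generated
    text. Lines that carry an error are reported separately."""
    results, errors = {}, {}
    for line in output_jsonl.splitlines():
        row = json.loads(line)
        cid = row["custom_id"]
        if row.get("error"):
            errors[cid] = row["error"]
            continue
        body = row["response"]["body"]
        results[cid] = body["choices"][0]["message"]["content"]
    return results, errors
```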
Batch processing features include scalability and efficiency: a single batch request can contain hundreds of files, which the AI processes in parallel, significantly reducing manual effort.
It is also cost-effective, as batch processing is optimized for bulk processing, making it more economical compared to real-time AI queries.
Automated processing with AI models uses OpenAI’s Batch API to automatically analyze, summarize, or extract data from files, and works with different AI models, both text-based and image-based.
Additionally, session tracking and storage ensure that batch results are stored in the AI Agent Folder or Batch API Folder, allowing users to track processing progress and easily retrieve past batch results.
Batch processing can be used for large-scale text analysis, such as summarizing multiple research papers and extracting key information from contracts or legal documents.
It is also useful for bulk document formatting and grammar correction, improving grammar and structure in multiple text-based files like emails and reports, with AI-assisted proofreading for large document sets.
Additionally, batch processing can be applied to code review and analysis, running AI-powered code explanation or debugging on multiple scripts simultaneously, and optimizing and refactoring code across different programming languages.
For image processing, batch processing can enhance or modify multiple images using AI models and convert multiple images into videos or animations using Lightricks AI.
Batch requests take significantly longer to process compared to single-file AI Agent requests, with processing times extending up to 24 hours.
Model availability is also limited: only OpenAI models support batch processing, while other models such as Grok, DeepSeek, or Lightricks are not available for batch requests.
Additionally, the total file size per batch request is capped at 200MB. Unlike interactive AI chat, users do not receive step-by-step responses; all files are processed at once, and the results are returned only when the batch completes.
Batch-processed files and responses are automatically stored in a dedicated Batch API folder.
Each file's result can be reviewed, downloaded, or shared via Vult AI’s file management system. Users can access past batch processing results under their file history.