Workers
The 0Box platform employs a suite of background workers to manage asynchronous tasks, ensuring robust system operation and scalability without blocking user-facing APIs.
These workers handle critical background functions including Stripe-funded allocation processing, restricted blobber ticket generation, expiring storage notifications, free token disbursement, AI batch job submission, and external resource cleanup.
Each worker is designed to be:
Isolated – Responsible for a single well-defined function (e.g., creating allocations, sending funding).
Resilient – Capable of retrying failed operations automatically or on the next scheduled cycle.
Asynchronous – Operates outside the main request/response loop, allowing longer-running or delayed tasks to be completed safely.
Workers are typically triggered by periodic schedules, database state conditions, or webhook/task insertion. Most follow a producer-executor model:
Producers scan the system for pending work (e.g., Stripe payments needing allocation creation).
Executors carry out the actual task (e.g., calling the blockchain SDK or OpenAI API).
All workers log activity and errors for traceability. Some support idempotent execution, ensuring tasks can be retried without duplicating side effects. Each worker's impact on data is scoped to its purpose and may involve the database, external APIs (like Stripe or OpenAI), or in-memory state.
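As a rough illustration of this pattern, the sketch below shows a generic producer-executor loop in Go (the language of the 0Box backend). The Task type, function shapes, and retry-by-rescan policy are assumptions for illustration, not the actual 0Box interfaces.

```go
package worker

import (
	"context"
	"log"
	"time"
)

// Task is one unit of pending work found by a producer (hypothetical type).
type Task struct{ ID string }

// Producer scans for pending work; Executor carries it out. These shapes are
// assumptions for illustration, not the actual 0Box interfaces.
type (
	Producer func(ctx context.Context) ([]Task, error)
	Executor func(ctx context.Context, t Task) error
)

// Run drives one worker on a fixed interval. A failed task is simply left
// pending, so the next producer scan picks it up again (retry-by-rescan).
func Run(ctx context.Context, interval time.Duration, produce Producer, execute Executor) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			tasks, err := produce(ctx)
			if err != nil {
				log.Printf("producer scan failed: %v", err) // retried next tick
				continue
			}
			for _, t := range tasks {
				if err := execute(ctx, t); err != nil {
					log.Printf("task %s failed: %v", t.ID, err) // stays pending
				}
			}
		}
	}
}
```

Because the producer re-scans persistent state on every cycle, retries come for free: a worker only has to leave a failed task in its pending state.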
Worker Categories
create_allocation (Storage / Stripe) – Ensures successful creation of allocations from paid Stripe tasks
authorized_blobbers (Blobber Access Control) – Issues and stores access tickets for restricted blobbers
expiring_allocations (User Notifications) – Detects allocations nearing expiry and generates alerts
delete_wallet_transactions (Database Maintenance) – Cleans historical wallet spend records for performance/compliance
free_storage (Blockchain Config Sync) – Updates free storage parameters from smart contract config
funding_zbox (Token Grants) – Transfers free tokens from team wallet to eligible users
openai_batch_request (AI / OpenAI Integration) – Submits user file/prompt batches to OpenAI asynchronously
delete_openai_files (OpenAI Storage Cleanup) – Periodically deletes old or unneeded files from OpenAI storage
1. create_allocation
Purpose: Automatically creates or upgrades storage allocations for users who have completed Stripe payments but whose blockchain transaction failed or was never initiated. This ensures allocations paid for via Stripe are eventually fulfilled without user intervention.
Trigger Mechanism:
Runs periodically. Monitors the Stripe allocation task table for entries with TxSuccess = false.
Producer Logic:
Queries for allocation tasks where:
A Stripe payment was received
TxSuccess = false (transaction not yet successful)
Batches eligible tasks for execution, respecting concurrency limits
Executor Logic:
Determine task type:
If allocation exists → perform an upgrade
If not → create a new allocation
New Allocation Flow:
Calculate required tokens (based on data/parity shards, size, price)
Generate blobber authorization tickets (if needed)
Call CreateAllocationForOwner via the blockchain SDK
Record funding via CreateAllocationStripeRecord
Capture blockchain transaction hash
Upgrade Flow:
Load existing allocation
Calculate additional token cost
Add/remove blobbers as needed (with auth tickets)
Call UpdateAllocation via the SDK
Record funding used
Capture updated allocation ID
Finalise Task:
Call UpdateStripeAllocationTask
Mark TxSuccess = true and update the DB status
Completed tasks are skipped in future runs
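A minimal Go sketch of this decide-create-or-upgrade flow, with stubs standing in for the real SDK and repository calls (the struct fields and helper signatures are illustrative, not the actual schema):

```go
package worker

import (
	"context"
	"errors"
)

// StripeAllocationTask mirrors the shape implied above; field names are
// illustrative, not the real schema.
type StripeAllocationTask struct {
	ID           string
	AllocationID string // empty when no allocation exists yet
	TxSuccess    bool
}

var errStub = errors.New("stub: wire to the gosdk / repository layer")

// Stubs standing in for the SDK and repository calls named above.
func createAllocationForOwner(ctx context.Context, t StripeAllocationTask) (string, error) {
	return "", errStub
}
func updateAllocation(ctx context.Context, t StripeAllocationTask) (string, error) {
	return "", errStub
}
func updateStripeAllocationTask(ctx context.Context, id, txHash string) error {
	return errStub
}

// executeStripeTask shows the decide-create-or-upgrade flow: a failed SDK
// call returns early, leaving TxSuccess = false so a future run retries.
func executeStripeTask(ctx context.Context, t StripeAllocationTask) error {
	var txHash string
	var err error
	if t.AllocationID == "" {
		txHash, err = createAllocationForOwner(ctx, t) // new allocation
	} else {
		txHash, err = updateAllocation(ctx, t) // upgrade existing one
	}
	if err != nil {
		return err // task stays pending; retried by the next producer run
	}
	// Finalise: record the tx hash and flip TxSuccess so the task is skipped.
	return updateStripeAllocationTask(ctx, t.ID, txHash)
}
```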
Error Handling:
If the blockchain call fails, the task remains TxSuccess = false
Retries are handled by future producer runs
Logs all errors per attempt
Data Affected:
StripeAllocationTask table
Allocation records
Funding record (via CreateAllocationStripeRecord)
Blockchain transaction (via SDK)
2. authorized_blobbers
Purpose: Generates and stores authorization tickets for blobbers marked as restricted. These tickets allow authenticated clients to interact with blobbers that require access control.
Trigger Mechanism:
Runs periodically. Scans for blobbers where auth_ticket is null and restricted = true.
Producer Logic:
Queries restricted blobbers missing auth_ticket
Batches up to a configured limit
Enqueues tasks for processing
Executor Logic:
For each blobber:
Call the blobber’s API endpoint to generate an auth_ticket
Store the ticket in the DB (RestrictedBlobber record)
If it fails, apply cooldown (extend retry delay)
Tracks retry attempts or last-attempt time (for cooldown management)
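A sketch of how such a cooldown might work; the exponential policy, cap, and field names here are assumptions, since the actual delay policy is not specified:

```go
package worker

import (
	"context"
	"errors"
	"time"
)

// restrictedBlobber mirrors the fields mentioned above; illustrative only.
type restrictedBlobber struct {
	ID          string
	AuthTicket  string
	LastAttempt time.Time
	Retries     int
}

// cooldown grows with each failed attempt so an unreachable blobber is not
// hammered every cycle; the exponential policy here is an assumption.
func cooldown(retries int) time.Duration {
	if retries > 6 {
		retries = 6 // cap the backoff at roughly an hour
	}
	return time.Duration(1<<uint(retries)) * time.Minute // 1m, 2m, 4m, ...
}

// fetchAuthTicket stands in for the blobber API call (hypothetical name).
func fetchAuthTicket(ctx context.Context, blobberID string) (string, error) {
	return "", errors.New("stub: call the blobber's ticket endpoint")
}

// process issues a ticket for one restricted blobber, respecting cooldown.
func process(ctx context.Context, b *restrictedBlobber, now time.Time) {
	if now.Sub(b.LastAttempt) < cooldown(b.Retries) {
		return // still cooling down; skip this cycle
	}
	b.LastAttempt = now
	ticket, err := fetchAuthTicket(ctx, b.ID)
	if err != nil {
		b.Retries++ // extends the delay before the next attempt
		return
	}
	b.AuthTicket = ticket // would be persisted to the RestrictedBlobber record
	b.Retries = 0
}
```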
Failure Handling:
If blobber is unreachable or errors, task is not deleted
Retry is delayed using a cooldown mechanism
Data Affected:
RestrictedBlobber DB table
Fields: auth_ticket, timestamp, retry state
Use Cases:
Enables clients to use restricted blobbers without manual ticket generation
Ensures blobbers are ready to serve storage securely
3. expiring_allocations
Purpose: Detects allocations that are nearing expiration and triggers user notifications, allowing proactive renewals or downloads before access ends.
Trigger Mechanism: Scheduled task (e.g. daily). Checks for allocations expiring in X days (e.g. 1, 3, or 7).
Producer Logic:
Queries for allocations with expiration in the configured future window
Creates one task per matching allocation
Batches them per cycle
Executor Logic:
For each expiring allocation:
Creates a notification entry in Elasticsearch
Message: “Your allocation expires in X days”
User ID and allocation ID are attached
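A small Go sketch of the two pieces above: the producer’s expiry window and the notification document written to Elasticsearch. The JSON field names and one-day window granularity are assumptions:

```go
package worker

import (
	"fmt"
	"time"
)

// notification is the document written to Elasticsearch; the JSON field
// names here are assumptions based on the description above.
type notification struct {
	UserID       string `json:"user_id"`
	AllocationID string `json:"allocation_id"`
	Message      string `json:"message"`
	CreatedAt    int64  `json:"created_at"`
}

// buildExpiryNotice produces the alert for an allocation expiring in `days`.
func buildExpiryNotice(userID, allocID string, days int) notification {
	return notification{
		UserID:       userID,
		AllocationID: allocID,
		Message:      fmt.Sprintf("Your allocation expires in %d days", days),
		CreatedAt:    time.Now().Unix(),
	}
}

// windowBounds returns the [start, end) range a producer query could use to
// find allocations expiring `days` from now, at one-day granularity.
func windowBounds(days int) (time.Time, time.Time) {
	start := time.Now().AddDate(0, 0, days).Truncate(24 * time.Hour)
	return start, start.Add(24 * time.Hour)
}
```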
Side Effects:
Elasticsearch notifications may be consumed by:
User dashboard
Mobile/app notification service
(Possibly triggers push/email separately)
Error Handling:
If Elasticsearch fails, logs the error
Task will be retried on the next cycle if window still valid
Data Affected:
Writes new notification documents to Elasticsearch
Does not modify allocations themselves
4. delete_wallet_transactions
Purpose: Performs daily cleanup of historical entries in the TeamWalletSpending table, removing old wallet spending records to comply with data retention policies and improve database efficiency.
Trigger Mechanism: Runs on a fixed schedule: once every 24 hours.
Operation Logic:
Begins a new database transaction
Calls fundingRepo.DeleteWalletSpendingData to delete outdated rows (retention cutoff not specified; likely configurable)
Commits the transaction if successful
Logs and aborts if any error occurs
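A minimal sketch of the transactional cleanup using Go’s database/sql; the table and column names and the 90-day cutoff are assumptions, since the retention cutoff is unspecified:

```go
package worker

import (
	"context"
	"database/sql"
	"time"
)

// deleteOldWalletSpending sketches the transactional cleanup: rows older
// than the cutoff are removed in one transaction, rolled back on any error.
// The table/column names and the 90-day cutoff are assumptions.
func deleteOldWalletSpending(ctx context.Context, db *sql.DB) error {
	cutoff := time.Now().AddDate(0, 0, -90)
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	if _, err := tx.ExecContext(ctx,
		`DELETE FROM team_wallet_spending WHERE created_at < $1`, cutoff,
	); err != nil {
		_ = tx.Rollback() // abort; the job simply runs again tomorrow
		return err
	}
	return tx.Commit()
}
```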
Retry Behaviour:
No immediate retry
Job will automatically run again the next day
Data Affected:
Deletes old rows from the TeamWalletSpending table only
Side Effects:
None — purely internal cleanup
Notes:
All deletions are performed transactionally
This job ensures the table only contains recent, relevant data
5. free_storage
Purpose: Fetches the latest free storage and pricing configuration from the blockchain and updates the application’s global storage config accordingly.
Trigger Mechanism: Runs hourly on a schedule
Operation Logic:
Calls gosdk.zboxcore.UpdateStorageConfig()
Retrieves config from the blockchain smart contract
Updates the system’s global storage settings:
FreeAllocationSettingsSize
FreeAllocationSettingsDataShards
FreeAllocationSettingsParityShards
MaxWritePrice
CancellationCharge
Changes are applied globally and are visible across all services and UIs that reference the StorageConfig
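A sketch of how a global, hot-swappable config of this shape might be held. The fetchFromChain parameter is a hypothetical stand-in for the gosdk call; the real worker uses gosdk.zboxcore.UpdateStorageConfig():

```go
package worker

import (
	"log"
	"sync"
)

// StorageConfig lists the fields named above; a minimal stand-in for the
// application's real global config type.
type StorageConfig struct {
	FreeAllocationSettingsSize         int64
	FreeAllocationSettingsDataShards   int
	FreeAllocationSettingsParityShards int
	MaxWritePrice                      float64
	CancellationCharge                 float64
}

var (
	mu     sync.RWMutex
	global StorageConfig
)

// refreshStorageConfig swaps in the latest on-chain values; fetchFromChain
// is a hypothetical stand-in for the gosdk call.
func refreshStorageConfig(fetchFromChain func() (StorageConfig, error)) {
	cfg, err := fetchFromChain()
	if err != nil {
		log.Printf("storage config refresh failed, retrying next hour: %v", err)
		return
	}
	mu.Lock()
	global = cfg // instantly visible to every caller of CurrentStorageConfig
	mu.Unlock()
}

// CurrentStorageConfig is what services and UIs would read.
func CurrentStorageConfig() StorageConfig {
	mu.RLock()
	defer mu.RUnlock()
	return global
}
```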
Error Handling:
If the fetch or update fails, logs the error
Retries on the next scheduled hour
Data Affected:
In-memory global configuration (and/or cached config store)
No user-specific data is changed
Side Effects:
Ensures current system behaviour reflects updated economic limits or campaign settings
6. funding_zbox
Purpose: Processes pending funding tasks that send tokens from the team wallet to users (e.g. free signup tokens, promotional grants).
Trigger Mechanism: Runs periodically (frequency not specified; likely every few minutes). Scans for incomplete funding tasks.
Producer Logic:
Queries DB for all funding tasks marked as incomplete
Executor Logic:
For each pending task:
Performs token transfer using blockchain SDK (from team wallet to user wallet)
Marks the task as complete in DB
Sends a user notification (via Firebase Cloud Messaging)
Records transaction hash and updates audit logs if needed
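A Go sketch of one funding execution, with stubs for the SDK transfer, DB update, and FCM call (all names and signatures here are illustrative):

```go
package worker

import (
	"context"
	"errors"
)

// fundingTask mirrors the incomplete-task rows the producer scans for;
// field names are illustrative.
type fundingTask struct {
	ID     string
	UserID string
	Amount uint64 // tokens to grant
}

var errNotWired = errors.New("stub: wire to gosdk / repo / FCM client")

// Stubs for the SDK transfer, DB update, and Firebase notification.
func transferFromTeamWallet(ctx context.Context, userID string, amount uint64) (string, error) {
	return "", errNotWired
}
func markComplete(ctx context.Context, taskID, txHash string) error { return errNotWired }
func notifyUser(ctx context.Context, userID, msg string) error      { return errNotWired }

// executeFunding performs one grant. On transfer failure the task stays
// incomplete and is retried on a future run.
func executeFunding(ctx context.Context, t fundingTask) error {
	txHash, err := transferFromTeamWallet(ctx, t.UserID, t.Amount)
	if err != nil {
		return err // remains incomplete; picked up again later
	}
	if err := markComplete(ctx, t.ID, txHash); err != nil {
		return err
	}
	// Best-effort notification; a failure here should not undo the grant.
	_ = notifyUser(ctx, t.UserID, "You have received free tokens")
	return nil
}
```

Note the ordering: the on-chain transfer happens before the DB update, so a crash between the two can leave a completed transfer marked incomplete; this is why idempotent or duplicate-safe execution matters for workers that move tokens.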
Error Handling:
On transfer failure:
Task remains marked as incomplete
Will be retried in future run
Logs errors
Data Affected:
Updates funding task status in DB
May log transaction details or audit trail
Affects user balance on-chain (not stored locally)
Side Effects:
Triggers Firebase notification: “You have received X free tokens”
7. openai_batch_request
Purpose: Handles asynchronous batch processing of files or prompts using OpenAI’s GPT-based APIs. Used for large-scale summarisation, document parsing, etc.
Trigger Mechanism:
Runs periodically to pick up pending tasks from the OpenAiBatchEntity table.
Producer Logic:
Queries DB for tasks with status = pending
Locks or tags tasks to prevent duplicate execution
Executor Logic:
Reads task input: prompt, system prompt, model, ShareInfoIds
Downloads files from blobbers (via auth_ticket + lookup_hash)
Prepares a .jsonl file:
If image: uploads to S3, includes URL in prompt
If text: embeds content directly
Constructs structured OpenAI prompts per file
Uploads the .jsonl file to the OpenAI Files API
Submits batch job to OpenAI’s chat/completion API
Stores the returned Batch ID in the database
Deletes any temp files/S3 uploads
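A sketch of the .jsonl preparation step. The per-line shape (custom_id, method, url, body) follows OpenAI’s published Batch API format; the surrounding function and its parameters are illustrative:

```go
package worker

import (
	"encoding/json"
	"os"
)

// batchLine follows OpenAI's published Batch API .jsonl format: one request
// object per line, each routed to /v1/chat/completions.
type batchLine struct {
	CustomID string         `json:"custom_id"`
	Method   string         `json:"method"`
	URL      string         `json:"url"`
	Body     map[string]any `json:"body"`
}

// writeBatchJSONL emits one chat-completion request per input file. Each
// value in contents is either extracted text or an S3 image URL, matching
// the branch above; the function and its parameters are illustrative.
func writeBatchJSONL(path, model, systemPrompt string, contents map[string]string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	enc := json.NewEncoder(f) // Encode appends '\n' after every value
	for id, content := range contents {
		line := batchLine{
			CustomID: id,
			Method:   "POST",
			URL:      "/v1/chat/completions",
			Body: map[string]any{
				"model": model,
				"messages": []map[string]string{
					{"role": "system", "content": systemPrompt},
					{"role": "user", "content": content},
				},
			},
		}
		if err := enc.Encode(line); err != nil {
			return err
		}
	}
	return nil
}
```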
Post-Processing:
Worker does not fetch batch results
Retrieval is triggered separately via GET /openai_batch_response
Error Handling:
On error:
Marks task as failed
Logs details
Task is not retried automatically unless re-submitted
Data Affected:
OpenAiBatchEntity table (status, batch ID, etc.)
Temporary S3 buckets and local temp files (cleaned up)
Notes:
System ensures task idempotency
Actual result delivery is separated for responsiveness
8. delete_openai_files
Purpose: Cleans up stale or unnecessary files from the OpenAI account to control storage costs and enforce data retention limits. Targets JSONL or temporary files uploaded during batch requests.
Trigger Mechanism: Runs hourly on a fixed schedule.
Operation Logic:
List Files from OpenAI:
Uses OpenAI’s API to fetch all files
Each file includes metadata:
created_at, purpose
Filter for Deletion:
File must meet both conditions:
Created more than 24 hours ago
purpose is not "batch"
Delete Eligible Files:
For each matching file:
Calls OpenAI’s API to delete it permanently
Log Results:
Logs all deletions
Logs any API errors or failures
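The deletion filter itself is small. A sketch of the two conditions, assuming created_at is a Unix timestamp as in OpenAI’s file metadata:

```go
package worker

import "time"

// fileMeta holds the two fields the filter needs; OpenAI's list-files
// endpoint returns created_at as a Unix timestamp.
type fileMeta struct {
	ID        string
	CreatedAt int64
	Purpose   string
}

// shouldDelete applies both conditions above: older than 24 hours AND not a
// "batch" file (batch files are retained).
func shouldDelete(f fileMeta, now time.Time) bool {
	age := now.Sub(time.Unix(f.CreatedAt, 0))
	return age > 24*time.Hour && f.Purpose != "batch"
}
```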
Error Handling:
If listing or deletion fails:
Errors are logged
No retry is attempted within the same hour
Files will be re-evaluated in the next cycle
Data Affected:
Only affects OpenAI’s file storage
No changes made to 0Box database or internal services
Side Effects:
Helps reduce cost and risk by removing stale uploaded content
Prevents build-up of unused temporary files
Considerations:
Files marked as "batch" are not deleted; they are assumed to still be in use or retained for audit/debug purposes
Schedule can be tuned if batch jobs complete faster/slower
Policy may evolve based on OpenAI’s file lifecycle guarantees