Technical Specification: Auto Report
This document details the technical architecture, ingestion pipeline, and delivery mechanisms for the Auto Report module.
Version: 1.0 (Approved) | Platform: VENI-AI Shell | Stack: Bun/Ignis/Hono
1. High-Level Architecture
The Auto Report module operates as a task-oriented system: it queries multiple satellite APIs to gather context, then invokes an LLM to generate the report.
Component Diagram (figure omitted)
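The end-to-end flow can be sketched as a simple orchestration function. This is a minimal illustration only; the function and parameter names (`runReport`, `fetchContext`, `compress`, `generate`) are hypothetical and not part of the actual module API.

```typescript
// Illustrative sketch of one Auto Report run: fetch -> compress -> generate.
// All names here are hypothetical, not actual module APIs.

type ReportTemplate = {
  modules: string[];     // e.g. ["HRM"]
  endpoints: string[];   // satellite API endpoints to fetch
  systemPrompt: string;  // prompt containing a {{context}} placeholder
};

type RunResult = { status: "success" | "error"; report?: string };

async function runReport(
  template: ReportTemplate,
  fetchContext: (endpoints: string[]) => Promise<string[]>,
  compress: (payloads: string[]) => string,
  generate: (prompt: string) => Promise<string>,
): Promise<RunResult> {
  try {
    // 1. Gather raw payloads from the satellite APIs named by the template.
    const payloads = await fetchContext(template.endpoints);
    // 2. Compress them to fit the LLM context window.
    const context = compress(payloads);
    // 3. Inject the compressed context into the template's system prompt.
    const prompt = template.systemPrompt.replace("{{context}}", context);
    // 4. Invoke the LLM for generation.
    const report = await generate(prompt);
    return { status: "success", report };
  } catch {
    return { status: "error" };
  }
}
```

In a real deployment the three injected functions would wrap the satellite API clients, the summarization step, and the GPT-4o-mini call respectively.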
2. Technology Stack
Backend (API & Worker)
- Runtime: Bun
- Framework: Ignis Framework
- Task Runner: Mastra (AI-first workflow engine)
- LLM: OpenAI GPT-4o-mini
- Database: PostgreSQL
Frontend (UI)
- Library: React 18
- Build Tool: Vite
- Reporting UI: Recharts for visual data mapping
- PDF Generation: Puppeteer (Headless Chrome) for high-fidelity exports
3. Implementation Logic
3.1 Data Ingestion Pipeline
- Source Discovery: The template defines which modules (e.g., HRM) and endpoints to fetch.
- Context Compression: Large data payloads are summarized or truncated to fit the LLM context window using a hierarchical summarization technique.
- Prompt Injection: The compressed context is injected into the template's system prompt.
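The compression step above can be sketched as a recursive hierarchical summarization: oversized payloads are split into chunks that each fit the budget, each chunk is summarized, and the concatenated summaries are compressed again until the result fits. This is a simplified sketch; the `summarize` parameter stands in for an LLM summarization call and is a hypothetical name, not the module's actual API.

```typescript
// Hierarchical summarization sketch. `summarize` is a stand-in for an
// LLM summarization call and must return something shorter than its input,
// otherwise the recursion would not terminate.
function compressContext(
  text: string,
  maxChars: number,
  summarize: (chunk: string) => string,
): string {
  if (text.length <= maxChars) return text;
  // Split the payload into budget-sized chunks.
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  // Summarize each chunk, then recurse on the merged summaries
  // until the whole context fits the budget.
  const merged = chunks.map(summarize).join("\n");
  return compressContext(merged, maxChars, summarize);
}
```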
3.2 Scheduling & Queuing
- Automated runs are managed via a distributed cron engine within the Mastra framework.
- Each run is recorded as an `execution_log` entry with its status (`pending`, `running`, `success`, `error`).
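The status lifecycle implied above can be modeled as a small state machine. The `execution_log` table and its status values are taken from the spec; the `ExecutionLog` shape and the `advance` helper are illustrative assumptions, not the actual schema.

```typescript
// Sketch of the execution_log status lifecycle. The status values come
// from the spec; the record shape and helper are hypothetical.
type ExecutionStatus = "pending" | "running" | "success" | "error";

interface ExecutionLog {
  id: number;
  templateId: number;
  status: ExecutionStatus;
}

// Allowed transitions: pending -> running -> success | error.
const transitions: Record<ExecutionStatus, ExecutionStatus[]> = {
  pending: ["running"],
  running: ["success", "error"],
  success: [],
  error: [],
};

function advance(log: ExecutionLog, next: ExecutionStatus): ExecutionLog {
  if (!transitions[log.status].includes(next)) {
    throw new Error(`invalid transition: ${log.status} -> ${next}`);
  }
  return { ...log, status: next };
}
```

Enforcing transitions in one place keeps the worker from, for example, marking a run `success` that never entered `running`.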
3.3 Snapshot Integrity
To ensure report consistency, data fetched during the "Running" state is saved as a JSON snapshot in the database. This allows admins to see exactly what data the AI used to generate a specific report.
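A minimal sketch of this capture step, assuming a simple keyed snapshot store (the `SnapshotStore` interface and `captureSnapshot` helper are hypothetical names for illustration):

```typescript
// Snapshot capture sketch: serialize the fetched data once, during the
// "Running" state, so later reads see exactly what the LLM saw even if
// the satellite APIs have changed since. Interface names are hypothetical.
interface SnapshotStore {
  save(executionId: number, json: string): void;
  load(executionId: number): string | undefined;
}

function captureSnapshot(
  store: SnapshotStore,
  executionId: number,
  fetchedData: unknown,
): void {
  // JSON.stringify at fetch time freezes the data for later audit/replay.
  store.save(executionId, JSON.stringify(fetchedData));
}
```

In production the store would be a JSON/JSONB column in PostgreSQL keyed by the execution log ID.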
4. Security & Compliance
- Multi-tenancy: The executor only fetches data from satellite APIs using the organization's service-to-service (S2S) credentials or the triggering user's identity.
- Retention Policy: Organizations can configure a retention period (e.g., 90 days) after which report snapshots are archived or deleted.
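The retention check can be sketched as a pure selection over stored snapshots; everything here (`Snapshot`, `expiredSnapshots`) is an illustrative name, and in practice this would be a `WHERE created_at < now() - interval` query in PostgreSQL rather than in-memory filtering.

```typescript
// Retention sketch: select snapshots older than the organization's
// configured retention window for archival or deletion. Names are
// illustrative, not the actual schema.
interface Snapshot {
  executionId: number;
  createdAt: Date;
}

function expiredSnapshots(
  snapshots: Snapshot[],
  retentionDays: number,
  now: Date,
): Snapshot[] {
  const cutoff = now.getTime() - retentionDays * 24 * 60 * 60 * 1000;
  return snapshots.filter((s) => s.createdAt.getTime() < cutoff);
}
```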