StackNova

Lumina — AI-Powered Business Intelligence Platform

AI / Business Intelligence

Product overview

Lumina is an AI-native business intelligence platform that lets teams ask questions in plain language, get instant answers from connected data sources, and receive proactive alerts and trend summaries. It combines semantic search over structured and unstructured data with a conversational interface and a flexible report builder.

Problem statement

Product and operations leaders were drowning in dashboards and one-off SQL requests. Data lived in data warehouses, CRMs, support tools, and spreadsheets. Answering a single business question could take days and require engineering or analytics handoffs. Decision-making was delayed and insights were often stale by the time they reached stakeholders.

Product vision

A single place where any stakeholder can ask 'What's driving churn this quarter?' or 'Which segments are most at risk?' and get accurate, sourced answers in seconds—without writing queries or waiting on analysts. The product should feel like a capable colleague that knows the business and can explain its reasoning.

Key features

  • Natural-language query interface with clarification and follow-up
  • Unified semantic layer across Snowflake, BigQuery, Postgres, and REST APIs
  • Proactive insights and anomaly detection with configurable thresholds
  • Report builder with AI-suggested charts and narrative summaries
  • Role-based access and audit trails for compliance
  • Slack and email digests for key metrics and exceptions
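The proactive-insights feature above rests on threshold-based anomaly detection. A minimal sketch of the shape of that check, assuming a simple z-score rule; the metric name, sample data, and threshold are illustrative, and a production detector would account for seasonality and trend:

```python
from statistics import mean, stdev

def anomalies(series: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of points more than `threshold` std devs from the mean."""
    if len(series) < 2:
        return []
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

# Hypothetical daily metric with one obvious spike:
daily_signups = [120.0, 118.0, 125.0, 122.0, 119.0, 300.0, 121.0]
print(anomalies(daily_signups, threshold=2.0))  # → [5]
```

The threshold maps directly to the "configurable thresholds" in the feature list: lowering it surfaces more borderline anomalies, raising it keeps digests quiet.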

UX / product design approach

We ran discovery with the client’s core product and ops leads to map how they ask questions and use data today, then prioritized a single, chat-first entry point with a sidebar for saved questions and reports. Answers lead with a simple summary, with optional drill-down and explain controls. The experience is desktop-first on a shared design system; mobile is reserved for alerts and quick checks.

Technical architecture

  • Next.js front end with a real-time WebSocket channel for long-running queries
  • Python backend services for query parsing, planning, and execution; a query planner compiles natural language to optimized SQL via an internal DSL
  • Vector store (pgvector) for semantic indexing of schemas and docs
  • Caching layer (Redis) for repeated queries and precomputed aggregates
  • Event-driven pipeline for syncing metadata and sample data from source systems
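The planner idea can be sketched in a few lines, assuming a toy intermediate form: a parsed question becomes a small query object, which then compiles down to SQL. The node shape, table, and column names here are made up for illustration; the actual internal DSL is more expressive:

```python
from dataclasses import dataclass, field

# Toy stand-in for the planner's intermediate representation.
@dataclass
class Query:
    table: str
    metrics: list[str]
    dimensions: list[str] = field(default_factory=list)
    filters: list[str] = field(default_factory=list)

def to_sql(q: Query) -> str:
    """Compile a Query node into a SQL string."""
    cols = ", ".join(q.dimensions + q.metrics)
    sql = f"SELECT {cols} FROM {q.table}"
    if q.filters:
        sql += " WHERE " + " AND ".join(q.filters)
    if q.dimensions:
        sql += " GROUP BY " + ", ".join(q.dimensions)
    return sql

# A question like "What's driving churn this quarter?" might parse to:
q = Query(
    table="customers",
    metrics=["COUNT(*) AS churned"],
    dimensions=["segment"],
    filters=["churned_at >= date_trunc('quarter', current_date)"],
)
print(to_sql(q))
```

Keeping an explicit intermediate form like this is what makes the "explain" controls possible: the plan can be shown to the user before or alongside the generated SQL.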

Technology stack

  • Next.js 14, React, TypeScript
  • Python (FastAPI), Celery
  • PostgreSQL, pgvector
  • Snowflake, BigQuery (as data sources)
  • Redis, WebSockets
  • OpenAI / compatible LLM APIs for NL understanding and summarization

Challenges solved

  • Translating vague questions into valid, performant queries across multiple schemas
  • Handling permission and row-level security when executing user-initiated queries
  • Keeping latency under 3 seconds for common questions via caching and incremental execution
  • Explaining AI-generated answers in a way that builds trust and supports audit
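The caching side of the latency work can be sketched as follows. Repeated questions hit a cache keyed by a normalized form of the compiled SQL, so common questions skip the warehouse round trip entirely. A dict stands in for Redis here, and the key layout, TTL, and normalization rule are illustrative assumptions, not the production scheme:

```python
import hashlib
import time

CACHE: dict[str, tuple[float, object]] = {}  # Redis stand-in for this sketch
TTL_SECONDS = 300

def cache_key(sql: str) -> str:
    """Normalize whitespace and case, then hash, so trivially different
    renderings of the same query share one cache entry."""
    normalized = " ".join(sql.lower().split())
    return "lumina:query:" + hashlib.sha256(normalized.encode()).hexdigest()

def run_cached(sql: str, execute) -> object:
    key = cache_key(sql)
    hit = CACHE.get(key)
    if hit and time.time() - hit[0] < TTL_SECONDS:
        return hit[1]          # cache hit: skip the warehouse round trip
    result = execute(sql)      # cache miss: run against the warehouse
    CACHE[key] = (time.time(), result)
    return result

# Demonstration with a fake executor that records each warehouse call:
calls = []
def fake_execute(sql):
    calls.append(sql)
    return [("enterprise", 42)]

run_cached("SELECT segment, COUNT(*) FROM churn", fake_execute)
run_cached("select segment,  count(*) from churn", fake_execute)  # same key after normalization
print(len(calls))  # → 1: the second call was served from cache
```

In production the same idea extends to precomputed aggregates: expensive rollups are written into the cache ahead of time so first-time questions can still land under the latency target.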

Business impact

The client’s team reports noticeably faster answers to ad-hoc questions without writing SQL or waiting on analysts. The product is live internally and the client is using it to validate the approach before a wider rollout.

Visual elements

Suggested UI highlights for this product.

  • Main dashboard: chat interface with sample questions and recent answers
  • Report builder: drag-and-drop chart types with AI-suggested visualizations
  • Analytics view: trend lines, cohort tables, and anomaly highlights
  • Mobile: notification center and quick-answer cards

Outcome

MVP shipped on time; now in use by the client’s internal team with positive early feedback. Foundation in place for broader rollout.

Services

  • AI Solutions
  • SaaS
  • Custom Development
  • Design