2026 - Present
AI Brainstorm
Get better answers by letting multiple LLMs think together
A full-stack web application that lets users assemble teams of AI models, chat with them individually, and run multi-model brainstorm sessions that produce a unified answer.
Role: Solo Developer - Full Stack
AI Brainstorm is a multi-model AI platform designed to help users get more thoughtful, well-rounded answers. Instead of relying on a single LLM, users can build a custom 'brainstorm' of models from providers like OpenAI, Anthropic, Google, Mistral, and xAI, then bring them into structured discussions. The product supports both one-on-one conversations and collaborative AI meetings, where models respond, challenge each other, and a designated leader synthesizes the final result. I built the platform as a modern SaaS application with authentication, subscriptions, token-based usage tracking, file attachments, and a polished responsive frontend.
Key Results
Supports multiple leading AI providers
Multi-model discussion and summary workflow
Token-based billing and subscription system
Full-stack SaaS architecture
Features
Multi-Model Brainstorms
Users can assemble custom groups of LLMs and run structured discussions where each model contributes its perspective before a leader generates a final synthesis.
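A minimal sketch of what such a discussion loop could look like. The `Member`, `runBrainstorm`, and `ask` names are hypothetical illustrations, not the actual implementation: each member answers the question, later rounds show members the prior transcript so they can refine or challenge each other, and a designated leader produces the final synthesis.

```typescript
// Hypothetical sketch of a multi-round brainstorm with leader synthesis.
type Member = { id: string; ask: (prompt: string) => Promise<string> };

async function runBrainstorm(
  members: Member[],
  leader: Member,
  question: string,
  rounds = 2
): Promise<string> {
  const transcript: string[] = [];
  for (let round = 1; round <= rounds; round++) {
    const context = transcript.join("\n");
    // Each member contributes in parallel; after round 1 they see the discussion so far.
    const replies = await Promise.all(
      members.map(async (m) => {
        const prompt =
          round === 1
            ? question
            : `${question}\n\nPrior discussion:\n${context}\n\nRefine or challenge the answers above.`;
        return `${m.id} (round ${round}): ${await m.ask(prompt)}`;
      })
    );
    transcript.push(...replies);
  }
  // The leader synthesizes a single unified answer from the full transcript.
  return leader.ask(
    `Synthesize a single final answer to "${question}" from:\n${transcript.join("\n")}`
  );
}
```

The parallel fan-out per round keeps latency close to the slowest single model rather than the sum of all of them.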
Shared Conversation Context
Chat history is preserved and condensed into reusable summaries so models can maintain context across longer conversations without excessive token usage.
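The condensing step could be sketched roughly as follows. The `condenseHistory` function, the 4-characters-per-token estimate, and the `keepRecent` cutoff are illustrative assumptions, not the real code: once history exceeds a token budget, older messages are folded into one summary message while recent turns stay verbatim.

```typescript
type Message = { role: "user" | "assistant" | "summary"; content: string };

// Hypothetical sketch: when history exceeds a rough token budget,
// condense the oldest messages into a single summary message.
function condenseHistory(
  history: Message[],
  summarize: (msgs: Message[]) => string, // e.g. an LLM call in practice
  maxTokens = 1000,
  keepRecent = 4
): Message[] {
  // Crude token estimate: ~4 characters per token.
  const estimate = (m: Message) => Math.ceil(m.content.length / 4);
  const total = history.reduce((n, m) => n + estimate(m), 0);
  if (total <= maxTokens || history.length <= keepRecent) return history;

  const older = history.slice(0, history.length - keepRecent);
  const recent = history.slice(history.length - keepRecent);
  // Older turns collapse into one reusable summary; recent turns stay verbatim.
  return [{ role: "summary", content: summarize(older) }, ...recent];
}
```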
Provider-Agnostic Model Selection
Supports models across multiple AI providers, allowing users to mix strengths like reasoning, nuance, and breadth in a single workflow.
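One common way to achieve this kind of provider independence is a thin adapter layer; the `ChatProvider` interface and registry below are an assumed sketch, not the platform's actual abstraction. Each provider SDK is wrapped behind one shared interface so the brainstorm logic never touches provider-specific code.

```typescript
// Hypothetical provider abstraction: each vendor SDK is adapted to one
// common interface, keeping orchestration code provider-agnostic.
interface ChatProvider {
  name: string;
  complete(model: string, prompt: string): Promise<string>;
}

// A stand-in provider for illustration; real adapters would call the
// OpenAI, Anthropic, Google, Mistral, or xAI SDKs here.
class MockProvider implements ChatProvider {
  constructor(public name: string) {}
  async complete(model: string, prompt: string): Promise<string> {
    return `[${this.name}/${model}] echo: ${prompt}`;
  }
}

const registry = new Map<string, ChatProvider>();

function register(p: ChatProvider): void {
  registry.set(p.name, p);
}

function getProvider(name: string): ChatProvider {
  const p = registry.get(name);
  if (!p) throw new Error(`Unknown provider: ${name}`);
  return p;
}
```

With this shape, mixing models in one brainstorm is just a list of `(provider, model)` pairs resolved through the registry.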
Usage and Token Accounting
Built-in token reservation, balance enforcement, monthly allowances, and purchased-token tracking support predictable billing and cost control.
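A reserve-then-settle ledger is one plausible shape for this kind of accounting; the `TokenLedger` class below is an illustrative sketch, not the production system. Tokens are held before a request is sent (enforcing the balance up front), then the actual usage is charged on completion, with the unused remainder released.

```typescript
// Hypothetical token ledger: reserve before an AI request, settle with
// actual usage afterward, or release the hold entirely on failure.
class TokenLedger {
  private reserved = new Map<string, number>();

  constructor(private balance: number) {}

  // Balance minus all outstanding holds.
  available(): number {
    let held = 0;
    for (const n of this.reserved.values()) held += n;
    return this.balance - held;
  }

  // Place a hold; rejected if it would overdraw the available balance.
  reserve(requestId: string, tokens: number): boolean {
    if (tokens > this.available()) return false;
    this.reserved.set(requestId, tokens);
    return true;
  }

  // Charge actual usage, capped at the hold, and drop the reservation.
  settle(requestId: string, usedTokens: number): void {
    const held = this.reserved.get(requestId) ?? 0;
    this.reserved.delete(requestId);
    this.balance -= Math.min(usedTokens, held);
  }

  // Drop a hold without charging (e.g. the request failed).
  release(requestId: string): void {
    this.reserved.delete(requestId);
  }
}
```

Reserving up front means a burst of concurrent requests can never spend more than the user's balance, even before any of them finish.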
Authentication and Billing
Secure user accounts, subscription flows, and payment handling are integrated with Clerk and Stripe for a complete SaaS experience.
Technical Challenges
Challenge
Managing context across long-running AI conversations without letting inference costs grow unchecked
Solution
Implemented conversation summarization and token-budgeting workflows so the app can preserve important context while keeping prompts efficient.
Challenge
Coordinating multiple LLMs in a single discussion flow
Solution
Designed a meeting system with configurable members, multi-round discussion support, and leader-based summary generation to create coherent final outputs.
Challenge
Building sustainable SaaS billing around variable AI costs
Solution
Created a token accounting system with reservations, monthly subscription allowances, overflow handling, and one-off token purchases tied into Stripe.