Gurujada — Building a 147K LOC EdTech Platform with Phoenix LiveView (and No Frontend Framework)
TL;DR: We built a full education platform — whiteboard canvas, audio/video players, real-time presence, AI assistants, 325 business components, 80 LiveView pages — entirely in Phoenix LiveView. No React. No Vue. No
SPA. This is the first in what I hope will be a series of posts sharing what we learned.
The Problem We’re Solving
I recently published an eight-part series on LinkedIn about why we’re building an education product in the age
of AI. The short version:
A few months ago, an institutional administrator told me: “We’ve given every student and teacher free access to ChatGPT. What can you do that it can’t?” When I asked how her teachers were using it — long pause. “Most
of our students use it more than our teachers do.”
A competitive exam coaching founder told me something similar: his top teachers were each using different AI tools with zero coherence and zero visibility. Students who scored brilliantly on internal tests (with AI
casually available) collapsed in the actual exam, where they sat alone with a paper and their own mind.
Both conversations revealed the same thing: every AI tool in education is designed for the student. The teacher — the person who should be most empowered by AI — has been almost entirely ignored.
So we built Gurujada (Guru = teacher, Jada = path in Sanskrit). A platform where the teacher designs learning spirals — Learn,
Practice, Assess — and AI acts as the teacher's intelligent secretary, not the student's shortcut. The AI sits beside every student, but its true loyalty is to the teacher. It observes where students struggle, fake understanding, or break through — and gives the teacher complete visibility.
If you’re curious about the full argument — the death of productive struggle, why institutions are trapped, what the classroom taught us that no pitch deck could — the summary
article links to all eight parts.
Why I’m Posting This on ElixirForum
Because this entire thing is built on Elixir, Phoenix, and LiveView. And I think the story of how we built it is as interesting to this community as what we built.
Over the coming weeks, I want to share detailed technical lessons from the trenches. Three themes in particular:
1. The magnitude of a real LiveView codebase
This isn’t a side project or a CRUD app. The numbers:
- 147,000+ lines of Elixir/HEEx
- 728 Elixir modules
- 325 business components, 80 LiveView pages, 33 contexts
- 63 migrations, 41 AI modules, 15 Oban workers
- Real-time collaboration, PubSub-driven presence, institutional multi-tenancy
I want to share how we organized this — the component architecture, the context boundaries, the patterns that scaled and the ones that didn’t. What happens when your LiveView app grows past the “nice small app” phase
that most blog posts cover.
2. No frontend framework — and we have a whiteboard
This is the one that surprises people. We have:
- A collaborative whiteboard with a full canvas (draw, erase, shapes, real-time sync)
- Audio and video players with timeline markers, bookmarks, and transcription sync
- Rich text editors (Tiptap integration)
- Drag-and-drop exercise builders
- Real-time presence and live collaboration on walls/discussions
- AI streaming responses in-browser
All of it runs on LiveView with JS hooks. No React. No Vue. No npm-installed component library. The JavaScript we do have is targeted — thin hooks that bridge LiveView to Canvas APIs, media APIs, and a handful of
browser-native capabilities. The server owns the state. Always.
I want to write honestly about where this approach shines (and it shines in places you might not expect), where it hurts, and the specific patterns we developed to make complex client-side interactions work within
LiveView’s model.
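To make the "thin hook" idea concrete, here is a rough sketch of what a canvas-bridging LiveView hook can look like. This is illustrative only — the names (`WhiteboardCanvas`, `"stroke"`, `"remote_stroke"`) are hypothetical and not from the Gurujada codebase; the hook callbacks themselves (`mounted`, `this.el`, `this.pushEvent`, `this.handleEvent`) are the standard `phoenix_live_view` client API:

```javascript
// Hypothetical sketch of a thin LiveView hook bridging a <canvas> element
// to the server. The server owns canonical whiteboard state; the client
// only captures input and renders strokes.
const WhiteboardCanvas = {
  mounted() {
    const canvas = this.el;
    const ctx = canvas.getContext("2d");

    // Local input: push each stroke point up to the LiveView process,
    // which validates it and rebroadcasts to other users via PubSub.
    canvas.addEventListener("pointermove", (e) => {
      if (e.buttons !== 1) return; // only draw while the primary button is held
      const stroke = { x: e.offsetX, y: e.offsetY };
      this.drawPoint(ctx, stroke);
      this.pushEvent("stroke", stroke);
    });

    // Remote input: render stroke points broadcast from collaborators.
    this.handleEvent("remote_stroke", (stroke) => this.drawPoint(ctx, stroke));
  },

  // Render a single stroke point as a small filled dot.
  drawPoint(ctx, { x, y }) {
    ctx.beginPath();
    ctx.arc(x, y, 2, 0, 2 * Math.PI);
    ctx.fill();
  },
};
```

You would register it on the client with `new LiveSocket("/live", Socket, { hooks: { WhiteboardCanvas } })` and attach it via `phx-hook="WhiteboardCanvas"` on the canvas element. The point is the division of labor: the hook is a dumb pipe between browser APIs and LiveView events, and all authoritative state stays on the server.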
3. AI writing most of the code
Here’s the part that might be controversial. A significant portion of this codebase was generated by AI — Claude, specifically. Not copy-paste from ChatGPT. A genuine workflow where AI is a production-grade
collaborator: writing contexts, composing LiveView pages, generating business components, even debugging its own output.
I want to share what that workflow actually looks like at scale. What you need to get right (project instructions, architectural guardrails, review discipline). What breaks. How it changes the economics of building a
product as a small team. And whether “AI-generated code” can be good code — not just working code, but code that a human would be proud to maintain.
What’s Next
This post is the introduction. In the follow-ups, I’ll dig into specifics — with code, with numbers, with honest assessments of what worked and what we’d do differently.
If you’re building something non-trivial with LiveView, or curious about what a large LiveView codebase looks like in practice, I hope this series will be useful.
Happy to answer questions in the meantime. And if you want the full context on why we built this — the education problem, the teacher’s crisis, the institutional trap — the LinkedIn
series goes deep.