We built Gurujada (an education platform) with 147K lines of LiveView, no frontend framework

Gurujada — Building a 147K LOC EdTech Platform with Phoenix LiveView (and No Frontend Framework)

TL;DR: We built a full education platform — whiteboard canvas, audio/video players, real-time presence, AI assistants, 325 business components, 80 LiveView pages — entirely in Phoenix LiveView. No React. No Vue. No
SPA. This is the first in what I hope will be a series of posts sharing what we learned.


The Problem We’re Solving

I recently published an eight-part series on LinkedIn about why we’re building an education product in the age
of AI. The short version:

A few months ago, an institutional administrator told me: “We’ve given every student and teacher free access to ChatGPT. What can you do that it can’t?” When I asked how her teachers were using it — long pause. “Most
of our students use it more than our teachers do.”

A competitive exam coaching founder told me something similar: his top teachers were each using different AI tools with zero coherence and zero visibility. Students who scored brilliantly on internal tests (with AI
casually available) collapsed in the actual exam, where they sat alone with a paper and their own mind.

Both conversations revealed the same thing: every AI tool in education is designed for the student. The teacher — the person who should be most empowered by AI — has been almost entirely ignored.

So we built Gurujada (Guru = teacher, Jada = path in Sanskrit). A platform where the teacher designs learning spirals — Learn,
Practice, Assess — and AI acts as the teacher’s intelligent secretary, not the student’s shortcut. The AI sits beside every student but its true loyalty is to the teacher. It observes where students struggle, fake
understanding, or break through — and gives the teacher complete visibility.

If you’re curious about the full argument — the death of productive struggle, why institutions are trapped, what the classroom taught us that no pitch deck could — the summary article links to all eight parts.


Why I’m Posting This on ElixirForum

Because this entire thing is built on Elixir, Phoenix, and LiveView. And I think the story of how we built it is as interesting to this community as what we built.

Over the coming weeks, I want to share detailed technical lessons from the trenches. Three themes in particular:

1. The magnitude of a real LiveView codebase

This isn’t a side project or a CRUD app. The numbers:

  • 147,000+ lines of Elixir/HEEx
  • 728 Elixir modules
  • 325 business components, 80 LiveView pages, 33 contexts
  • 63 migrations, 41 AI modules, 15 Oban workers
  • Real-time collaboration, PubSub-driven presence, institutional multi-tenancy

I want to share how we organized this — the component architecture, the context boundaries, the patterns that scaled and the ones that didn’t. What happens when your LiveView app grows past the “nice small app” phase
that most blog posts cover.

2. No frontend framework — and we have a whiteboard

This is the one that surprises people. We have:

  • A collaborative whiteboard with a full canvas (draw, erase, shapes, real-time sync)
  • Audio and video players with timeline markers, bookmarks, and transcription sync
  • Rich text editors (Tiptap integration)
  • Drag-and-drop exercise builders
  • Real-time presence and live collaboration on walls/discussions
  • AI streaming responses in-browser

All of it runs on LiveView with JS hooks. No React. No Vue. No npm-installed component library. The JavaScript we do have is targeted — thin hooks that bridge LiveView to Canvas APIs, media APIs, and a handful of
browser-native capabilities. The server owns the state. Always.

I want to write honestly about where this approach shines (and it shines in places you might not expect), where it hurts, and the specific patterns we developed to make complex client-side interactions work within
LiveView’s model.

3. AI writing most of the code

Here’s the part that might be controversial. A significant portion of this codebase was generated by AI — Claude, specifically. Not copy-paste from ChatGPT. A genuine workflow where AI is a production-grade
collaborator: writing contexts, composing LiveView pages, generating business components, even debugging its own output.

I want to share what that workflow actually looks like at scale. What you need to get right (project instructions, architectural guardrails, review discipline). What breaks. How it changes the economics of building a
product as a small team. And whether “AI-generated code” can be good code — not just working code, but code that a human would be proud to maintain.


What’s Next

This post is the introduction. In the follow-ups, I’ll dig into specifics — with code, with numbers, with honest assessments of what worked and what we’d do differently.

If you’re building something non-trivial with LiveView, or curious about what a large LiveView codebase looks like in practice, I hope this series will be useful.

Happy to answer questions in the meantime. And if you want the full context on why we built this — the education problem, the teacher’s crisis, the institutional trap — the LinkedIn series goes deep.


Let me explain the tech stack:

The Core

  • Phoenix 1.8 + LiveView 1.1 — The backbone. Every page, every interaction, every real-time feature runs
    through LiveView. 80 LiveView pages, 325 business components. The server owns the state, always.

  • Ecto + PostgreSQL — 63 migrations, 33 contexts. We lean heavily on Ecto’s composable queries and changesets. Nothing exotic here — just solid, well-organized data access.
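That composable-query style can be sketched like this — the module, schema, and field names are invented for illustration, not Gurujada’s actual code:

```elixir
# Hypothetical context module showing composable Ecto queries: a public
# function builds its query out of small, reusable private helpers.
defmodule Gurujada.Courses do
  import Ecto.Query

  alias Gurujada.{Course, Repo}

  def list_published(tenant_id) do
    Course
    |> for_tenant(tenant_id)
    |> published()
    |> Repo.all()
  end

  # Each helper takes a query and returns a narrowed query,
  # so they compose in any order via the pipe operator.
  defp for_tenant(query, tenant_id) do
    where(query, [c], c.tenant_id == ^tenant_id)
  end

  defp published(query) do
    where(query, [c], c.status == :published)
  end
end
```

The payoff is that multi-tenancy filters like `for_tenant/2` get written once and reused across every listing function in the context.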


The Libraries That Made This Possible

Phoenix Vite — This deserves special mention. Moving from esbuild to Vite was transformative. We have a non-trivial JS surface — Tiptap editors, Fabric.js canvases, MathLive for equations, PDF rendering, media players — and Vite handles all of it cleanly. Hot reload that actually works. Tree shaking that matters when you’re pulling in libraries like pdfjs-dist and mermaid. @LostKobrakai made a library that should probably be the default Phoenix asset pipeline.

Oban + Oban Web — 15 workers handling everything from email delivery to media transcription to AI processing. Oban is one of those libraries where the more you use it, the more you appreciate the design. We started with simple perform/1 workers and gradually leaned into uniqueness constraints, scheduled jobs, and the pruning system. Oban Web gives us production visibility without bolting on a separate monitoring stack. Parker Selbert and Shannon built something exceptional.
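The uniqueness constraints mentioned above look roughly like this — worker and queue names are illustrative, not Gurujada’s actual modules:

```elixir
# Sketch of an Oban worker with a uniqueness constraint: duplicate
# transcription jobs for the same media item enqueued within 60 seconds
# are dropped instead of running twice.
defmodule Gurujada.Workers.TranscribeMedia do
  use Oban.Worker,
    queue: :media,
    max_attempts: 3,
    unique: [period: 60, keys: [:media_id]]

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"media_id" => media_id}}) do
    # ... fetch the media file and run transcription here ...
    {:ok, media_id}
  end
end
```

Enqueuing is then just `%{media_id: id} |> Gurujada.Workers.TranscribeMedia.new() |> Oban.insert()`, and the uniqueness check happens at insert time.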

ReqLLM — Our entire AI layer runs through ReqLLM. 41 AI modules — tool-using assistants, streaming responses, teaching analytics, content generation — all talking to multiple LLM providers through a single, composable interface built on Req. When you’re building AI features in Elixir, the ecosystem is thinner than Python’s. ReqLLM fills that gap elegantly. The plugin architecture means swapping providers or adding new ones is a config change, not a rewrite.
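A call through that single interface looks roughly like this — the function name, model-spec format, and response helper are assumptions on my part, so verify them against the ReqLLM docs for your version:

```elixir
# Illustrative only: one "provider:model" string selects the backend,
# so swapping providers is a string change, not a rewrite.
{:ok, response} =
  ReqLLM.generate_text(
    "anthropic:claude-3-5-haiku",
    "Summarize this student's last three practice attempts."
  )

# Extracting the generated text; the helper name is an assumption.
text = ReqLLM.Response.text(response)
```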

SutraUI — This is ours. A component library for Phoenix LiveView with shadcn-style styling — our answer to needing consistent, polished UI primitives without pulling in a JavaScript component framework. Buttons, modals, drawers, dropdowns, tooltips — all server-rendered, all composable with LiveView’s assigns and slots. We built it because the alternative was writing the same phx-click-away and focus-trap logic in every project. It’s open source and we use it as the foundation for all 325 business components in Gurujada.
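This is not SutraUI’s actual API — just a generic `Phoenix.Component` sketch of the assigns-and-slots style such a library builds on:

```elixir
# A server-rendered button component: attrs declare and validate the
# API, slots carry the caller's markup, and :global forwards
# pass-through HTML attributes like `disabled`.
defmodule MyAppWeb.Components.Button do
  use Phoenix.Component

  attr :variant, :string, default: "primary"
  attr :rest, :global, include: ~w(disabled form type)
  slot :inner_block, required: true

  def button(assigns) do
    ~H"""
    <button class={["btn", "btn-#{@variant}"]} {@rest}>
      {render_slot(@inner_block)}
    </button>
    """
  end
end
```

Callers write `<.button variant="ghost">Save</.button>` and get validated attributes and compile-time warnings for free.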

ErrorTracker — Error tracking that lives in your own database. No external SaaS, no data leaving your system. For an education platform handling student data, this matters. The built-in LiveView dashboard is clean and genuinely useful.

Image — Kip Cole’s library handles all our image processing — avatar uploads, cover images, S3 optimization. Wraps Vips/libvips with an Elixir-native API that composes beautifully with our upload pipeline.


The JS Layer (Yes, There Is One)

We said “no frontend framework” — we didn’t say “no JavaScript.” The distinction matters.

Our package.json has real dependencies: Fabric.js for the collaborative whiteboard canvas, Tiptap for rich text editing, MathLive for equation input, media-chrome for audio/video players,
pdfjs-dist for document rendering, mermaid for diagrams, SortableJS for drag-and-drop.

But these are all leaf dependencies — they render into specific DOM nodes that LiveView hooks manage. The hooks are thin bridges: LiveView pushes state down, the JS library renders it, user interactions push events
back up. The server remains the source of truth. No client-side routing. No state management library. No build-time framework overhead.
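Here is the server side of that thin-hook pattern, with invented names — the client-side hook (not shown) listens for the `"canvas:state"` event and pushes `"canvas:draw"` back up:

```elixir
# LiveView owning a canvas rendered by a JS library. All durable state
# lives in assigns; the hook is only a renderer and input bridge.
defmodule GurujadaWeb.WhiteboardLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    {:ok, assign(socket, strokes: [])}
  end

  def render(assigns) do
    ~H"""
    <%!-- phx-update="ignore" keeps LiveView's DOM patching away from
          the canvas node the JS library owns. --%>
    <canvas id="board" phx-hook="Whiteboard" phx-update="ignore"></canvas>
    """
  end

  # The hook pushed a completed stroke up; the server records it and
  # pushes the authoritative state back down for the hook to render.
  def handle_event("canvas:draw", %{"stroke" => stroke}, socket) do
    socket = update(socket, :strokes, &[stroke | &1])
    {:noreply, push_event(socket, "canvas:state", %{strokes: socket.assigns.strokes})}
  end
end
```

For multi-user sync, the same `handle_event` would also broadcast over PubSub so every connected LiveView pushes the new state to its own client.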

Tailwind CSS v4 handles all styling, with the @tailwindcss/vite plugin.


Infrastructure

Hetzner — Production runs on a single Hetzner server. No Kubernetes. No multi-region. No container orchestration. A Phoenix release, a systemd service, and a deploy script. For a platform serving Indian schools and colleges, Hetzner’s price-to-performance is hard to beat. We’ll scale when we need to — but one BEAM VM handles a surprising amount of concurrent users before you need to think about it.
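The systemd side of that setup is a few lines — the paths, user, and release name below are illustrative, not our actual config:

```ini
# Minimal systemd unit for a Phoenix release (sketch).
[Unit]
Description=Gurujada Phoenix release
After=network.target

[Service]
Type=exec
User=deploy
WorkingDirectory=/opt/gurujada
; PHX_SERVER=true tells the release to start the HTTP endpoint.
Environment=PHX_SERVER=true
EnvironmentFile=/opt/gurujada/.env
ExecStart=/opt/gurujada/bin/gurujada start
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Deploys then reduce to copying the new release over and running `systemctl restart gurujada`.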


Dev & Quality

  • Credo + Styler — Credo for linting, Styler for consistent formatting beyond what mix format does. With AI generating much of the code, automated style enforcement isn’t optional — it’s essential.
  • MJML — Transactional emails that actually render correctly across email clients. Because HTML email is still a nightmare in 2026.
  • Hammer — Rate limiting. Simple, effective, saved us from some creative abuse patterns.
  • Tidewave — AI-assisted dev tooling for Phoenix. We use a fork — we had to add a few things to make it work for our workflow. More on that in a future post.
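For the Credo + Styler point above: Styler registers as a formatter plugin, so `mix format` applies its rewrites automatically. A typical `.formatter.exs` (paths illustrative):

```elixir
# .formatter.exs — Styler runs as a plugin inside `mix format`;
# Credo runs separately via `mix credo`.
[
  plugins: [Styler],
  inputs: ["{mix,.formatter}.exs", "{config,lib,test}/**/*.{ex,exs}"]
]
```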

Thank You

This post is really a thank-you note disguised as a tech stack list.

@josevalim and @chrismccord — Phoenix and LiveView are the reason a small team can build a 147K LOC platform with real-time collaboration, AI streaming, and a collaborative whiteboard without ever reaching for a
frontend framework. The programming model is right. Server-rendered HTML over WebSockets sounds like a limitation until you realize it eliminates entire categories of bugs, state sync issues, and architectural
complexity. Every time someone asks “but can LiveView handle X?” — we have one more feature that proves it can.

To the maintainers of every library listed above — you’re the reason the Elixir ecosystem punches so far above its weight. Each of these libraries is small, focused, well-designed, and composes cleanly with the rest.
That’s not an accident. It’s a culture.



Thanks for having AI write all of that :upside_down_face:

I’m interested in learning how you did the canvas and JS parts. It would be nice to read your own writing on it, not AI’s.


I thought I would get more eyeballs if I said AI wrote it. :wink: Unfortunately, everyone seems to have skipped the thread after that.

Sure, I will be putting out a few more detailed writeups on some interesting facets of development.

Till then, between you and me: I am responsible for the entire codebase. Not a line came in without a human in the loop. What we preached at https://gurujada.com, we practiced in the code as well. :slight_smile:
