sTELgano - a zero-knowledge messaging app on Phoenix 1.8 + LiveView (feedback welcome on the unauthenticated channel design)

Hi all,

Wanted to share a side project I just launched, and open a thread on a few Phoenix patterns I went with that I’m not 100% sure are idiomatic.

sTELgano (pronounced stel-GAH-no — a portmanteau of stegano-graphy and TEL) is a privacy-focused messaging app where the “shared secret” between two people is a fake phone number they each save in the other’s real contact card. You enter that number and a PIN at https://stelgano.com, and the browser derives all keys locally. The server only ever sees SHA-256 hashes and AES-256-GCM ciphertext.
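To make "the server only sees hashes" concrete: every identifier the backend stores is a 64-char lowercase hex SHA-256 digest. A toy server-side illustration of that shape (module name invented; the real derivation happens client-side via Web Crypto, with the exact inputs defined in the spec):

```elixir
defmodule Stelgano.HashExample do
  @doc """
  Shape of the identifiers the server sees: 64-char lowercase hex
  SHA-256 digests. Purely illustrative; the real client derives
  these in the browser and the server never sees the inputs.
  """
  def hex_sha256(input) when is_binary(input) do
    :crypto.hash(:sha256, input)
    |> Base.encode16(case: :lower)
  end
end
```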

The threat model is deliberately narrow and stated clearly throughout: it protects against an intimate-access attacker (a partner who picks up your unlocked phone), not state actors. I wanted to be upfront about this rather than imply more than the system actually provides.

Worth acknowledging up front: this sits alongside other zero-knowledge Phoenix/LiveView projects the forum has featured — [Mosslet](https://mosslet.com/) and Metamorphic (a zero-knowledge, E2E encrypted habit tracker built with Phoenix LiveView) in particular. Different product categories (privacy-first social, habit tracking) but a shared server-blind philosophy; sTELgano is the messaging-shaped sibling, with an unusually narrow threat model as its differentiator.

There are three parts I’d most like feedback on from the Phoenix crowd:

1. Fully unauthenticated socket, auth inside `join/3`

The chat uses a raw Phoenix Channel on a session-less socket — no cookie, no token, no `connect/3` auth:

```elixir
def connect(_params, socket, _connect_info), do: {:ok, socket}
```

All access control lives inside `AnonRoomChannel.join/3`, which validates a `(room_hash, access_hash, sender_hash)` triple — each a 64-char hex SHA-256 — against the DB. It felt unusual to have a socket with no notion of identity at all, but it keeps the auth surface tiny and the socket stateless. Is there a more idiomatic way to model this in Phoenix that I'm missing?
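For context, the decision the join callback makes can be reduced to a pure function (module and function names here are invented for illustration, not the actual layout; the real `join/3` would call something like this and map the result to `{:ok, socket}` / `{:error, %{reason: ...}}`):

```elixir
defmodule Stelgano.AnonJoin do
  @moduledoc """
  Pure core of a channel join with no socket-level identity:
  validate the hash-triple shape first, then defer the actual
  room lookup to an injected function (the DB check in production).
  """

  def authorize(room_hash, access_hash, sender_hash, lookup)
      when is_function(lookup, 3) do
    if Enum.all?([room_hash, access_hash, sender_hash], &hex64?/1) do
      case lookup.(room_hash, access_hash, sender_hash) do
        {:ok, room_id} -> {:ok, %{room_id: room_id, sender_hash: sender_hash}}
        :error -> {:error, :unauthorized}
      end
    else
      # Reject malformed input before it ever reaches the database.
      {:error, :unauthorized}
    end
  end

  # A 64-char lowercase hex string, i.e. a SHA-256 digest.
  defp hex64?(h) when is_binary(h) and byte_size(h) == 64 do
    h |> String.to_charlist() |> Enum.all?(fn c -> c in ?0..?9 or c in ?a..?f end)
  end

  defp hex64?(_), do: false
end
```

Keeping this core pure makes the "no identity until join" property easy to unit-test without a socket at all.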

2. N=1 invariant enforced at two layers

At most one message ever exists per room. Replying atomically deletes the previous one:

```elixir
def send_message(room_id, sender_hash, ciphertext, iv) do
  Repo.transaction(fn ->
    existing = current_message(room_id)

    if existing && existing.sender_hash == sender_hash do
      Repo.rollback(:sender_blocked)
    end

    if existing, do: Repo.delete!(existing)

    %Message{} |> Message.changeset(...) |> Repo.insert!()
  end)
end
```

Application-layer transaction plus a UNIQUE index on messages.room_id as a backstop against concurrent inserts under READ COMMITTED. I went back and forth on whether the DB guard is belt-and-braces or actually necessary — curious how others would model this.
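For reference, the DB backstop is a one-line migration (module name invented). It is arguably necessary rather than belt-and-braces: under READ COMMITTED, two concurrent calls can both observe "no existing message" and both attempt an insert, and only the index guarantees at most one survives.

```elixir
defmodule Stelgano.Repo.Migrations.AddUniqueIndexOnMessagesRoomId do
  use Ecto.Migration

  def change do
    # DB-level backstop for the N=1 invariant: the application-layer
    # transaction can race with itself under READ COMMITTED, so the
    # index ensures at most one row per room ever commits.
    create unique_index(:messages, [:room_id])
  end
end
```

With the index in place, the losing side of a race raises from `Repo.insert!`; pairing the changeset with `Ecto.Changeset.unique_constraint(:room_id)` instead turns that into a changeset error the caller can retry.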

3. LiveView as a pure state machine, crypto in JS hooks

`ChatLive` holds no crypto state server-side. It flips between `:entry → :deriving → :connecting → :chat → :locked → :expired` atoms and delegates every cryptographic operation to a colocated JS hook, which handles PBKDF2 key derivation (600k iterations, the OWASP 2023 recommendation), AES-GCM encrypt/decrypt, and the Channel lifecycle. The LiveView orchestrates screens and pushes events; the hook holds the keys.

This is the first time I’ve deliberately kept secrets out of LiveView assigns — would be interested if anyone has shipped something similar and hit sharp edges I haven’t.
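A reduced sketch of the screen state machine, as a pure transition table (state and event names follow the atoms above, but the function and module names are illustrative; the real ChatLive drives these from hook events):

```elixir
defmodule Stelgano.ChatState do
  @moduledoc """
  Pure transition table for the chat screens. The LiveView keeps only
  the current atom in assigns; keys never leave the JS hook.
  """

  # current state        event                next state
  def next(:entry,       :derive),      do: :deriving
  def next(:deriving,    :keys_ready),  do: :connecting
  def next(:connecting,  :joined),      do: :chat
  def next(:chat,        :lock),        do: :locked
  def next(_any,         :expire),      do: :expired
  # Ignore events that are invalid for the current state.
  def next(state,        _event),       do: state
end
```

Pulling the table out like this keeps `handle_event/3` down to "compute next state, assign it, render", which is what makes the no-secrets-in-assigns discipline easy to audit.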

Stack: Elixir 1.18, Phoenix 1.8, LiveView, PostgreSQL, Oban (for TTL-based room expiry), Req, Tailwind v4. Zero npm cryptographic libraries — Web Crypto API only.

- Repo: sTELgano/sTELgano on GitHub — "private messaging that hides in your contacts" (AGPL-3.0)

- Crypto spec (sTELgano-std-1): the "Spec" page on stelgano.com

- Single-file crypto implementation: `assets/js/crypto/anon.js`

Happy to answer anything about the design. Particularly interested in:

- whether the unauthenticated-socket pattern has a more idiomatic counterpart

- thoughts on the N=1 transaction (races, pitfalls, better ways to express “turn-based”)

- whether :telemetry would be worth wiring in for the aggregate country/daily counters (currently plain Ecto writes)
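On the `:telemetry` question, the wiring is cheap enough to try. A hypothetical sketch (event names and the `Stelgano.Stats` helper are invented; `:telemetry` itself ships with every Phoenix app):

```elixir
# Emit at the point where a room is created:
:telemetry.execute([:stelgano, :room, :created], %{count: 1}, %{country: country_code})

# Attach once at application start; the handler performs the Ecto write,
# keeping the counter concern out of the request path:
:telemetry.attach(
  "stelgano-aggregate-counters",
  [:stelgano, :room, :created],
  fn _event, %{count: n}, %{country: cc}, _config ->
    Stelgano.Stats.increment_daily_counter(cc, n)
  end,
  nil
)
```

The payoff is decoupling: the room-creation code no longer knows counters exist, and swapping plain Ecto writes for batched ones later only touches the handler.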

Cheers.


Not my area of expertise, so apologies if this sounds naive, but can the security be upgraded, or is it the platform itself (client-side and/or server-side) that you deem unsuitable, rendering any such upgrade futile? Please expand on that.

Thanks for the question — I’m not a security researcher either, so let me answer in plain terms based on what I learned while designing the threat model.

The short answer: the platform (browser + server) is the ceiling, not the crypto. I could swap PBKDF2 for Argon2, or AES-GCM for XChaCha20, and it wouldn’t move sTELgano into “protects against governments” territory — because the web itself isn’t suited to that threat tier. A few well-known reasons:

  1. The server ships the code. Every visit re-downloads anon.js from stelgano.com. If the server is compromised or legally compelled, it can serve a modified version that leaks the key. Native apps can be code-signed and pinned; websites can’t. This is the main reason Signal is an app and not a site — Tony Arcieri’s 2013 essay “What’s wrong with in-browser cryptography?” is still the canonical reference.
  2. Metadata leaks outside the crypto. TLS SNI and DNS tell any upstream observer that a device connected to stelgano.com. No amount of payload encryption hides the connection itself.
  3. No hardware-backed key storage in the browser. Native apps can use Secure Enclave / StrongBox. Browsers give you sessionStorage, which is cleartext to anyone with the device unlocked.

So I’d say the upgrades would be mostly futile for the state-actor threat tier — it’s a platform limit, not an implementation one. Which is why I scoped sTELgano to the intimate-access attacker specifically: the web platform is actually well-suited to that problem. Anyone who needs protection from law enforcement should use Signal on a hardened device; I’m not trying to duplicate that.

Worth noting that Arcieri’s post argues against in-browser crypto for all threat models, including mine. I disagree on that specific point — for the intimate-access attacker, the server-compromise risk he focuses on is much less salient than for the nation-state case. But his mechanical analysis of why the platform is a weak substrate is spot-on, and that’s what I’m pointing to.

Happy to be corrected by anyone with more security background than me.


But no. 2 applies to Signal and the like as well. As long as there's a server, there will be an IP address, with or without a domain name.

No. 1 can be solved by the likes of Electron, or even elixir-desktop/desktop on GitHub (building local-first apps for Windows, macOS, Linux, iOS and Android using Phoenix LiveView & Elixir).

When it comes to no. 3 it’s likely already too late, so it’s not that it would matter much in real life.

All fair points, you’re right on at least two of three. Let me engage with them.

On no. 2 (metadata): I overclaimed. Any client-server app leaks IP/SNI, Signal included. The real differentiator at that layer is things like Sealed Sender and whether the app can be routed over Tor — not web vs. native. Should have cut that point.

On no. 3 (hardware keys): Also fair, and even more so for sTELgano specifically. The design derives keys on demand from phone + PIN — no persistent keys sitting on the device for hardware storage to protect. You’re right that it barely matters here.

On no. 1 (code delivery) — this is the interesting one, because you’re technically correct that Electron or elixir-desktop solves the “server ships the code” problem. But doing so would destroy the thing sTELgano is actually trying to be.

The core design constraint isn’t “maximum cryptographic security” — established tools already occupy that slot and do it better. The constraint is simpler: a partner unlocks your phone, scrolls through apps, opens messages, checks recent activity. What do they find? With sTELgano today, no app icon, no app-drawer entry, no home-screen tile, no entry in Settings → Apps. The moment I ship a bundled binary, there’s a “sTELgano” icon somewhere on the device. The passcode test fails before crypto enters the picture at all. I rejected shipping even a PWA for the same reason — install banners, chrome://apps, iOS long-press “Add to Home Screen” all leak.

To be honest about where this argument stops working, though — “invisible” isn’t absolute. The passcode test protects against casual inspection, not forensic inspection. A few residual weaknesses a careful partner could exploit:

  1. Browser history / URL autocomplete. The site sets `Cache-Control: no-store` and recommends incognito; a planned improvement is to keep the URL pinned at `/` regardless of in-app state via `history.replaceState`, so history shows only stelgano.com and not any in-app route. But the domain itself will always appear in browser history outside incognito — that's a browser-level behaviour a site can't override. Residual concern, partially mitigated.
  2. The homepage reveals the product. Anyone who types the URL learns what it is. So invisibility is conditional on them never getting the URL in the first place — which is exactly what the browser-history issue compromises.
  3. The fake number in contacts. Mitigated by the usage pattern (save it as a second number under an existing contact, not as a new entry), but still requires user judgment.

So the honest framing is: sTELgano raises the cost of discovery against a casual intimate-access attacker. It does not make you invisible to a forensic one.

On who this is actually for — sTELgano isn't trying to be a better messenger than established tools; those are excellent, and I recommend them to most people. It's trying to serve a specific user that the messenger category as a whole structurally excludes: someone whose partner actively monitors their phone, including installed apps, Settings → Apps, recent SMS, and account activity.

For that user, the bottleneck isn’t “which messenger has the strongest crypto.” It’s “can I adopt a private channel at all, without the act of adoption being itself what gets me caught.” Every native app — however well-designed — appears in the app drawer, the app list, OS-level backups, the recent-apps switcher. Every phone-verified account generates a verification SMS. For most people that’s fine. For the specific demographic I built this for, each of those is the exact signal they’re trying to avoid.

The web sidesteps both. A URL opened in incognito leaves no installed app, no account, no SMS, no settings-panel entry. That’s not a cryptographic argument — it’s a “can the tool be used at all by this user” argument, and it’s why I chose the web despite knowing the crypto tradeoffs it costs me.

If your threat model doesn’t include being monitored to that degree, you have better options than sTELgano, and I’d point you to them. The product is narrowly aimed at people whose adoption visibility is itself the threat.

Good thread, appreciate your feedback.

got it