AtomVM won’t work with my approach. AtomVM’s own docs say it targets tiny/lightweight systems, implements only a subset of BEAM/OTP, and is “much slower than BEAM” even with JIT, while using less RAM.
Sorry. That’s the wrong syntax. This is what’s working ‘so far’ but the planned syntax is below.
The actual syntax is very much inspired by LiveView / LVN. Part of my goal was that existing LVN code could go straight across with minor changes, although I'm not bending over backwards to make that happen.
LVN did a lot of work on the layout and styling so I’m intending to copy what works well there and otherwise adapt it.
```elixir
defmodule HelloScreen do
  use Mob.Screen

  def mount(_params, _session, socket) do
    {:ok, assign(socket, :greeting, "Hello, Mob!")}
  end

  def render(assigns) do
    ~M"""
    <.column padding={16}>
      <.text size={24}><%= @greeting %></.text>
    </.column>
    """
  end
end
```
I also have planned, although this may be pie-in-the-sky thinking, three tiers: a "common path" approach that gets close to the desired result on both platforms; a native iOS / Android-only syntax; and a pass-through syntax, like SQL snippets, which is "trust me bro" code. Hopefully this is the most robust way for developers to always deliver to the intended platform without waiting for the library to catch up.
I actually prefer a plain data structure instead of sigils pretending to be some kind of pseudo-HTML. A plain data structure, if designed properly, is quite readable and much easier to refactor.
```elixir
def render(assigns) do
  {:column, {:text, assigns.greeting, size: 24}, padding: 16}
end
```
Ideas:
- use tagged tuples for elements: an element is a 3-tuple {tag, children, props} or a 2-tuple {tag, children}
- put props last, as a keyword list
- auto-promote a bare element to a list of elements for children
- allow a bare string as an element
It is fairly easy to implement the above and make this relaxed AST easy to write.
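The normalization described above can be sketched in a few clauses. This is a minimal illustration, not code from any released library; the module and function names (`Mob.AST`, `normalize/1`) are made up here:

```elixir
# Sketch of normalizing the relaxed element AST: bare strings are
# leaves, 2-tuples get empty props, and a bare child is promoted
# to a singleton child list.
defmodule Mob.AST do
  # A bare string is already a leaf element.
  def normalize(text) when is_binary(text), do: text

  # 2-tuple {tag, children}: promote to a 3-tuple with empty props.
  def normalize({tag, children}) when is_atom(tag),
    do: normalize({tag, children, []})

  # 3-tuple {tag, children, props}: auto-promote a bare child to a list
  # and normalize every child recursively.
  def normalize({tag, children, props}) when is_atom(tag) and is_list(props) do
    children = if is_list(children), do: children, else: [children]
    {tag, Enum.map(children, &normalize/1), props}
  end
end

Mob.AST.normalize({:column, {:text, "Hello", size: 24}, padding: 16})
# => {:column, [{:text, ["Hello"], [size: 24]}], [padding: 16]}
```

Because every relaxed form funnels into the strict `{tag, children_list, props}` shape, a renderer only ever has to handle one case.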
That makes more sense now.
I think I understand your “heavy” point as: if the BEAM is already embedded locally, then adding a Phoenix endpoint + LiveView transport layer on top can feel like extra machinery compared with just sending events into local Elixir processes and driving native UI more directly.
That part seems fair.
I’m not shipping a separate browser runtime. On Android/desktop I’m reusing the platform’s built-in WebView/browser surface, so part of the tradeoff for me is that I get to reuse existing system components rather than owning a full native widget abstraction myself.
Still I think the distinction is a bit goal-dependent. If the goal is maximum reuse of the existing Phoenix/LiveView model, those layers are not wasted. If the goal is the thinnest possible local-native runtime, then they probably are.
I explored a similar direction myself: a small Rust project that defines a typed JSON UI schema + reducer/runtime, then renders that into Jetpack Compose over JNI. I actually liked that style a lot.
What I’m still not convinced about is the long-term maintenance boundary.
Here is a short video of the wrapper experiment:
https://streamable.com/r6q3ld
So the other issue with reusing LiveView + WebView is that your app now looks like a website. This is an uphill battle that React Native has been fighting for a long time, and I'm not sure they're any closer. There are sets of libraries that look like native buttons, etc., but those can change per Android / iOS version so you're still out of sync. They also have various attempts at tying into "actual native" elements.
If you just want to build an app and aren't trying to satisfy designers who want a native feel, this approach can be fine. Given the market penetration of React Native and its cousins, a non-native-looking app is mostly accepted by average users, though some will notice the difference.
There was also a ton of effort put into making things like lists as performant on RN as they are natively. By piggybacking on the system we get this essentially for free, although there may be a bit of work down the line for things like streaming to lists the way LiveView does. That said, we're 90% of the way there just by using the system utilities.
The shortest path for me is to just ride what the native OS is already delivering and work with that. This is very akin to the spirit of LiveView, combined with the pragmatism that the device itself has to have processing power, since network connections may not be good enough to support remote server access the way LiveView does.
Nice.
This would be much nicer as {@greeting}. I think the <%= %> tags are an artifact of the initial Phoenix templating approach. It's a bit odd to me that they still exist even after the recent move to curly braces; I think they should have been eliminated at that point for consistency across the board.
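For context, current HEEx accepts both forms side by side (fragment only; this assumes Phoenix LiveView 1.0+, where curly-brace interpolation landed, and an assigns map in scope):

```elixir
# HEEx fragment: the older EEx-style tag and the newer curly-brace
# interpolation render the same value.
~H"""
<span><%= @greeting %></span>
<span>{@greeting}</span>
"""
```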
Agreed. I’ll update my plan.
This is truly awesome to see! I gave up on running the BEAM on iPhone after some time. Android worked fine. I used Scenic for the UX: borodark/probnik on GitHub. Claude was a great help. OpenGL is opening a new UX boundary, but I am not qualified to design one.
Scenic is another interesting path. You would access it via NIF and have to implement some sort of interaction method on your own. I briefly considered this for this framework but I don’t want to build out the whole interaction system when there is already native available and native is what people normally want.
This highlights another interesting aspect, in the fantasy future for this framework: high-performance apps like mobile games, where you dedicate the bulk of the app to the game running as a NIF and then have thousands of processes (or however many you need) servicing all the network requests and other events happening in the game.
I played around a bit more and translated my Rust → Jetpack Compose schema experiment into Elixir via NIF.
Fun stuff, but staying in the “experiment” bucket for now.
Just a side comment from my niche perspective. This coupled with the iroh nif opens up a wealth of possibilities. <3 I love the idea.
Yes.
mob is double named
- short for mobile
- a (large) group
This is almost exactly what I’ve been thinking about!
One difference in my approach: instead of embedding BEAM in each app, I’m considering a shared BEAM VM running as an Android service that multiple apps can connect to.
The idea:
- One BEAM VM service running in background (~80-120MB total)
- Apps bind to it via HTTP (localhost:8080) or AIDL
- Each app spawns its own processes/GenServers on the shared VM
- Or optionally spawn separate BEAM instances for isolation
Why shared VM:
- One 80MB runtime instead of N copies
- Apps can message each other via Erlang distribution if needed
- Update the VM once, all apps benefit
- Standard Android service patterns
Two modes:
- Shared: All apps in same VM (lightweight, can communicate)
- Isolated: Service spawns separate BEAM per app (better isolation)
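The shared mode maps directly onto Erlang distribution, so a minimal sketch might look like this. Everything here is hypothetical: the node names, the cookie, and MyApp.Worker are made up, and it assumes the service node is already running and reachable over loopback distribution:

```elixir
# Hypothetical sketch: an app joins the shared BEAM service over local
# Erlang distribution and starts its own worker there.
{:ok, _} = Node.start(:"app_a@127.0.0.1")
Node.set_cookie(:shared_beam_cookie)
true = Node.connect(:"beam_service@127.0.0.1")

# Start this app's GenServer on the shared VM; it could then message
# workers from other apps on the same node via registered names.
{:ok, pid} =
  :rpc.call(:"beam_service@127.0.0.1", GenServer, :start, [MyApp.Worker, []])
```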
Questions:
- Why did you choose embedded-per-app vs shared service?
- How do you handle the ~80MB ERTS size per app?
I was originally thinking Phoenix LiveView for UI, but your native component approach makes a lot of sense. Avoiding the server dependency and latency is huge.
Haven’t started building yet, just exploring the architecture. Would love to hear your thoughts on the shared VM approach!
The main reason the BEAM needs to be in the app bundle is that the view tree is being directly modified by the BEAM via NIF. This is similar to if I were running Pythonx, Zigler, or Rustler. I think one shared BEAM per device would complicate my life a lot, and I don't think devices are really that constrained for storage.
If I wasn't using the BEAM in the app, I guess I'd have some sort of internal service that calls it, just to facilitate multiple apps. That's too much overhead for a feature I don't think anyone is asking for, just to avoid depositing an extra 80 MB, which is not much given every phone has many GB these days.
I don’t think people are picking app frameworks based on build size.
I asked Claude to give a summary here:
- The BEAM VM itself (libharry.so at 4.2 MB uncompressed) is smaller than Flutter’s engine (~8 MB) and vastly smaller than the V8/Hermes engines React Native ships.
- The OTP stdlib (the ~80 MB on device) is the real cost — that’s all the .beam files for kernel, stdlib, crypto, etc. A production build could strip unused OTP applications and cut this dramatically. React Native has the same problem with its JS bundle + native modules.
- Debug vs release: debug APKs carry unstripped symbols. A release build with --strip would shrink libharry.so significantly.
- NativeScript at 97 MB APK is in a different category entirely — it bundles the V8 engine, the whole Node.js runtime, and the app JS in one shot.
Bottom line: Mob is very competitive. The VM overhead is lower than every cross-platform framework except Flutter, and the stdlib bloat is fixable with OTP app selection at build time. That’s a strong story.
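For what it's worth, stock mix releases already expose knobs for both of these savings. A hedged sketch, using standard `Mix.Release` options; the app name and the application selection are illustrative, and the actual savings depend on which OTP apps your code touches:

```elixir
# mix.exs — release settings that strip debug chunks from .beam files
# (strip_beams) and pin which OTP applications ship (applications).
def project do
  [
    app: :my_app,
    version: "0.1.0",
    releases: [
      my_app: [
        strip_beams: true,      # drop debug_info and docs chunks
        applications: [
          crypto: :permanent,   # keep only the OTP apps you use
          logger: :permanent,
          my_app: :permanent
        ]
      ]
    ]
  ]
end
```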
I’m betting on Elixir because I think it brings enormous advantages. Many places are already using Elixir for backend (and frontend with LiveView) so you can also get both mobile platforms natively with the same language.
I’m betting on the BEAM because all the mobile work I’ve ever done I wished I had the BEAM. JS is ok but everything is built on callbacks. It doesn’t have a great fail mode story but it does have a pretty good async story. I don’t know enough about Flutter but people are learning Dart just to use Flutter. One language and one framework seems like a weird stretch to me. The alternative to those are basically native, so you pretty much need two teams or have extremely adaptable people.
This way combines all the benefits of native with the benefits of the BEAM and unifies on Elixir as the language. There will be a bit of platform glue but I’m hoping to keep that to a minimum.
Process separation on the BEAM cannot be used as a security boundary because everything can call everything else. I do not want to share a VM with other apps, say, apps from Meta.
So far I've got the onboarding generator started, which will deliver a hello-world style app. It also gives you a random icon; if you knew where to put it you could supply your own, so that's already supported, but that will be documented later.
If you have the Android emulator and / or iOS simulator open it will push to those devices. I’ve also pushed to my Android phone manually.
I don’t have an iOS device so if anyone wants to give that a shot and report back that would be great.
```shell
mix archive.install hex mob_new
mix mob.new my_app
cd my_app
mix mob.install

# both platforms
mix mob.deploy --native

# iOS only
mix mob.deploy --native --ios

# Android only
mix mob.deploy --native --android
```
Give it a shot and tell me if it works for you.
I’ve only tested it on Mac. I’d be pleasantly surprised if it works on Linux. It would only be Android if it did.
Maybe long term we’d just support other platforms with a cloud service as most mobile development is pretty tightly coupled to Mac due to Apple’s requirement to build iOS apps on their machines.
I was wondering if the BEAM was too heavy for mobile - that’s what I had heard - but it’s really not.
Claude's summary of my benchmark:
With screen-on the numbers were noise, but here you can see:
- E (Nerves) at 202 mAh/h is essentially identical to A (no-beam) at 200 mAh/h — the BEAM overhead is within measurement noise. That’s a very positive result.
- D at 184 mAh/h looks suspiciously better than no-beam, which is probably just battery state variation (different starting voltages, temperatures, cell wear).
- B (untuned) at 250 mAh/h — 25% worse than no-beam, confirming the busy-wait flags matter.
The headline: with Nerves tuning flags, the idle BEAM is essentially free. That's what you wanted to know.
I copied Nerves flags and those seem to be fine.
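For anyone curious, the flags in question are the scheduler busy-wait settings that Nerves puts in vm.args. These are standard ERTS flags; they make idle schedulers sleep instead of spin-waiting, which is the roughly 25% idle-power gap in run B above:

```
# vm.args — disable scheduler busy waiting, as Nerves does
+sbwt none
+sbwtdcpu none
+sbwtdio none
```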
The stdlib we use for Popcorn, which should be sufficient for most apps, weighs ~6.8 MB uncompressed. It includes both Erlang and Elixir. We use AtomVM's packbeam tool, which strips docs, debug info, etc. from the .beam files.
Hi, I have been working on Emerge for the past few months. I am planning on getting to mobile eventually with basically the same idea you have here. It is just not the priority.
I think BEAM + EmergeSkia can be pretty small. The EmergeSkia NIF ends up being around 6 MB; there would need to be some glue for mobile handling, but I think it is possible to make the whole mobile app base around 12 MB.
I am deliberately not following the LiveView model, since templating like HEEx is a direct result of HTML + CSS baggage, and the assigns model is a copy of React's stateful model, which is not the happiest path to take, especially in functional programming languages. If you have ever written a non-trivial LiveView/stateful React application, you know the model starts falling apart quite quickly.
If you are interested in making mobile story around it I would be very happy to collaborate on that effort.
I'm planning on using a format similar to LiveView; more or less what was in use by LiveView Native. This tracks with users' expectations and knowledge of how LiveView works, and we get native-looking apps, so there is no second-tier appearance like with React Native or JS tools like Cordova that are some version of wrapped HTML. Users and designers consistently want native-looking apps, so this is the shortest path to get there.
I briefly considered Scenic, which also draws via NIF, but going that way we would have to build our own layout engine. By using the native view tree we get that more or less for free, and it lines up with how developers already work and understand the system. It's intuitive for LiveView devs, and for devs coming from other frameworks like React / React Native there will only be a short learning curve.
I have thought about the NIF route and there are interesting things there and perhaps we could use both. For example, if you wrote a mobile game and most of it was happening in the NIF, you could still have 1000s of processes making network calls and handling other aspects of game state. So I’d like to leave the door open to NIFs but that’s not my focus right now.
I need to build this out to stable and I’ll reach back out to you to see what a collaboration would look like there.