Love to see the interest. Was just reading the above thread…
Think of it as a retained-mode model specifically designed to take advantage of OTP.
All UI scenes are GenServers. Scenes can reference (embed) other scenes. Input is all message passing/filtering/handling. This also means that developers can build “components”, which are just scenes/GenServers that can be reused by other devs. Nice for extensibility.
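To make that concrete, here's a minimal sketch of what a scene looks like (the module name is made up, and the exact callback signatures have shifted between versions):

```elixir
# A scene is just a specialized GenServer that owns a retained graph.
defmodule MyApp.Scene.Hello do
  use Scenic.Scene
  alias Scenic.Graph
  import Scenic.Primitives

  # Build the graph once; the ViewPort retains it and drivers render it.
  @graph Graph.build(font_size: 24)
         |> text("Hello, world", translate: {20, 40})

  def init(_args, _opts) do
    # Hand the graph to the ViewPort on startup.
    {:ok, @graph, push: @graph}
  end
end
```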
Drivers (rendering) are abstracted away from the scenes so that they can be swapped to run on different hardware without rewriting the UI logic. This lets me run my app on a Mac to debug, change the target, and build firmware for a Raspberry Pi on Nerves. Everything looks and works the same across the two.
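Concretely, switching targets is mostly a config change. The shape is roughly this (app and driver module names here are illustrative, so check the docs for the exact ones):

```elixir
# config/config.exs — dev build rendering locally in a GLFW window
import Config

config :my_app, :viewport, %{
  name: :main_viewport,
  size: {800, 480},
  default_scene: {MyApp.Scene.Hello, nil},
  drivers: [
    %{module: Scenic.Driver.Glfw, opts: [title: "my_app"]}
  ]
}

# For the Nerves firmware, only the drivers list changes, e.g.
#   drivers: [%{module: Scenic.Driver.Nerves.Rpi}]
```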
The driver model has also been really good for getting remoting working. More on that later…
A big thing I’m going for is reliability. Devices in the field are going to encounter errors, whether from bad code, unexpected data, malfunctioning sensors, or whatever. This was harder to get right than I expected, but now any piece of UI can crash and things recover sensibly into a good state.
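The recovery falls out of standard OTP supervision. Here's a toy sketch of the principle (plain OTP, not Scenic's actual internal tree): a scene that crashes on bad input gets restarted in a clean state.

```elixir
defmodule Demo.FlakyScene do
  use GenServer

  def start_link(arg), do: GenServer.start_link(__MODULE__, arg, name: __MODULE__)
  def init(_arg), do: {:ok, %{clicks: 0}}

  # A bad input crashes just this process...
  def handle_cast(:bad_input, _state), do: raise("sensor went sideways")
  def handle_cast(:click, state), do: {:noreply, %{state | clicks: state.clicks + 1}}
end

# ...and its supervisor brings it back with fresh state:
{:ok, _sup} = Supervisor.start_link([Demo.FlakyScene], strategy: :one_for_one)
GenServer.cast(Demo.FlakyScene, :bad_input)  # crashes the scene process
Process.sleep(50)                            # give the supervisor a moment
GenServer.cast(Demo.FlakyScene, :click)      # the restarted process handles this fine
```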
Multiple people have said they can tell it looks like something built by someone with game experience. Partly that was about keeping my dependencies down to just OpenGL, but it's also because transforms are just a great way to manipulate things.
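For example, transforms compose down the graph, so rotating a group rotates everything inside it, exactly like a game scene graph (the layout values here are made up):

```elixir
alias Scenic.Graph
import Scenic.Primitives

# The group's translate/rotate apply to the whole subtree at once.
graph =
  Graph.build()
  |> group(
    fn g ->
      g
      |> rect({120, 60}, fill: :blue)
      |> text("dial", translate: {10, 80})
    end,
    translate: {200, 150},
    rotate: 0.4  # radians
  )
```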
As far as overall status goes, I’ve stopped adding features and am now focused on the large amount of supporting work around it (including stripping out code from various dead ends and experiments). I want the API to be pretty stable before it goes out to lots of people.