What are the remaining gaps in the Elixir ecosystem?

I’m excited about all of Hologram’s goals, but it’s a completely different topic. Creating native apps would always be best for system integrations. Of course, in theory we can bundle a browser or use a similar solution and do the same, but it’s very tricky and requires far more resources. The Zed code editor does all of its drawing on the GPU - there is no HTML or JavaScript work happening on the CPU, so that’s a huge difference. While Hologram may be the best web solution, I would always prefer native apps.

1 Like

I don’t know, maybe I’ll just type it here briefly.

So what I learned throughout 20 years of building web apps in different stacks is that you basically have one interface that serves both API-style requests and web pages.

Nowadays I call the entities involved a “Form” and a “Workflow”, regardless of whether I use an API or not, and whether the actions taken are READ, WRITE, or both.

A “Form” can be a set of filters for a listing page, and a Workflow would return you the list of entries. You can think of them as input objects and resolvers in GraphQL, if you’re familiar with it.

Let’s take user registration. In Elixir pseudocode we have:

form = %Registration.Form{username: "", password: "", password_confirmation: ""}

The form’s responsibility is defining which fields are expected from the user, and of which types. The library that defines the form does some preliminary data casting, and then data validation. You can use Ecto schemas without backing them with database tables to perform that action.
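
For illustration, a minimal sketch of such a form, assuming the Registration.Form name from the pseudocode above - an Ecto embedded schema needs no database table behind it:

defmodule Registration.Form do
  use Ecto.Schema
  import Ecto.Changeset

  # No table behind this schema - it only describes the expected input
  embedded_schema do
    field :username, :string
    field :password, :string
    field :password_confirmation, :string
  end

  # Preliminary casting plus validation, as described above
  def changeset(form, params) do
    form
    |> cast(params, [:username, :password, :password_confirmation])
    |> validate_required([:username, :password])
    |> validate_confirmation(:password)
  end
end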

Then, let’s say it’s a form submit action in a controller. It takes in the username, password, and password confirmation and, if valid?, calls Registration.Workflow.run(form).
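
In a Phoenix controller that could look roughly like this (a rough sketch only - the render/redirect targets are made up; apply_action/2 turns the changeset into a struct or errors):

def create(conn, %{"registration" => params}) do
  changeset = Registration.Form.changeset(%Registration.Form{}, params)

  case Ecto.Changeset.apply_action(changeset, :insert) do
    {:ok, form} ->
      case Registration.Workflow.run(form) do
        {:ok, _account} -> redirect(conn, to: "/welcome")
        {:error, changeset} -> render(conn, :new, changeset: changeset)
      end

    {:error, changeset} ->
      render(conn, :new, changeset: changeset)
  end
end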

That Workflow may re-run validations, possibly in a transaction, create database records (insert one into “users” and another into “roles”, for example), and return {:ok, account} or {:error, form_with_errors}.
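
A hedged sketch of such a workflow - Accounts.User, Accounts.Role, and Repo are assumed names here, not anything from a real library:

defmodule Registration.Workflow do
  alias Ecto.Multi

  def run(%Registration.Form{} = form) do
    Multi.new()
    # Both inserts run in one transaction; either all records exist or none do
    |> Multi.insert(:account, Accounts.User.changeset(%Accounts.User{}, Map.from_struct(form)))
    |> Multi.insert(:role, fn %{account: account} ->
      Accounts.Role.changeset(%Accounts.Role{}, %{user_id: account.id, name: "member"})
    end)
    |> Repo.transaction()
    |> case do
      {:ok, %{account: account}} -> {:ok, account}
      {:error, _step, changeset, _changes} -> {:error, changeset}
    end
  end
end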

The same approach applies to API calls: the parameters sent by the client are mapped to form fields, then some workflow is executed and some data is returned.

That’s the gist of it. I have some supplementary libraries I use in my projects that implement some wrappers around what I described above but basically that’s the approach.

With this approach there is no business logic on the Ecto schemas at all. I actually use a custom macro that adds a changeset/2 function to all of them that accepts all fields. I never pass parameters from an HTTP request to an Ecto changeset directly, however, and use the form structs as a strong-parameters-style filter.
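
One possible shape of that macro (a sketch with assumed names, not the actual library code):

defmodule MyApp.Schema do
  defmacro __using__(_opts) do
    quote do
      use Ecto.Schema
      import Ecto.Changeset

      # Accepts all fields; incoming params are already filtered by the form structs
      def changeset(struct, params) do
        cast(struct, params, __schema__(:fields) -- [:id])
      end

      defoverridable changeset: 2
    end
  end
end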

As I mentioned, I have built some libraries that help me with that; for one of them you can find examples in the test suite to see what it does. That one was for a LiveView project I was working on, and it works pretty well in a form-heavy application.

In my mind, Ecto schemas or ActiveRecord models are just an implementation detail used by Workflows.

2 Likes

I will have a look at Hologram. BTW: did you resolve the copyright issue with the agency that accused you of plagiarizing name and logo? :wink:

TLDR: There was no copyright issue. I own the copyright to the logo, and the agency’s claims were baseless. After I hired a law firm to analyze the situation, they confirmed my position. The agency CEO has agreed to issue a public apology on LinkedIn.

Longer version:

For those unaware of the situation: After releasing Hologram, I announced it on LinkedIn with our project logo and name. Shortly after, the CEO of a design agency publicly accused me of stealing both their company name and logo. Instead of reaching out privately first, he sent me an email demanding I change the project’s name (and the logo).

The situation was quite stressful. I released Hologram at around 3-4 AM and announced it on various platforms, and his aggressive accusations were literally the first thing I saw when I woke up. What made it worse was that his employees then flooded the LinkedIn announcement thread, engaging in public shaming and bullying behavior.

The logo concept itself is based on common iconography - a minimalist representation of a 2D projection of a hologram machine projecting a hologram. While it’s true that one element of the logo (the icon) is similar, the text treatment is completely different, and they typically only use text in their branding anyway. More importantly, there are thousands of companies and GitHub projects using “Hologram” in their names, and the agency doesn’t hold any trademark rights.

I took this seriously and hired a law firm specializing in trademarks and patents. They thoroughly analyzed the situation and confirmed that the agency’s claims were completely unreasonable. I have clear documentation proving independent creation of our logo and hold the copyright.

The CEO has since removed his LinkedIn post, and we’re awaiting his public rectification. While I understand his initial reaction seeing a similar name and icon, this could have been handled much more professionally through direct communication. If he doesn’t follow through with the agreed-upon public apology, we’ll have to resolve this in court.

Interesting tidbit: After dozens of iterations and finalizing all the geometric calculations, I actually generated the final logomark using an Elixir script that outputs an SVG:

# alfa = 60 degrees - the angle of the equilateral triangle
sin_alfa = :math.sqrt(3) / 2
tan_alfa = :math.sqrt(3)

# Outer triangle: base from (x1, y1) to (x2, y2), apex at (x3, y3)
x1 = 0
y1 = 0

x2 = 100
y2 = 0

x3 = 50
y3 = tan_alfa * 50

# Stroke thickness a; b is the matching offset measured along a slanted edge
a = 18
b = a / sin_alfa

# Inner triangle, inset by the stroke - cut out via fill-rule="evenodd" below
x4 = (a + b * tan_alfa) / tan_alfa
y4 = a

x5 = 100 - x4
y5 = a

x6 = 50
y6 = tan_alfa * (50 - b)

File.write!("logomark.svg", """
<svg width="100" height="#{y3 + a}" xmlns="http://www.w3.org/2000/svg">
  <path d="
    M #{x1} #{y1} L #{x2} #{y2} L #{x3} #{y3} Z
    M #{x4} #{y4} L #{x5} #{y5} L #{x6} #{y6} Z
  " fill="#a78bfa" fill-rule="evenodd" />

  <rect x="#{a}" y="#{y3}" width="#{100 - 2 * a}" height="#{a}" fill="#a78bfa" />
</svg>
""")

This whole experience has been quite draining… I appreciate your concern about this situation! @hubertlepicki

18 Likes

Awesome!

I bet it was. I was on the receiving end of something like that at one point. In that particular case, both of us used the same agency, who just reused the same template for multiple clients… kinda shitty move.

3 Likes

Definitely not ready yet - maybe in a couple months! :slight_smile:

I definitely agree with you @Eiji that native technologies are the optimal choice when you need maximum performance and total control. However, I’d like to offer a different perspective on web UI rendering requirements.

For typical UI applications, we only need to target 60 Hz refresh rates (~17ms per frame) since human perception of smoothness diminishes beyond this point. It’s interesting to note that even Zed’s impressive demo video was displayed at 60 Hz - while their actual rendering speed is mind-blowing, the demo couldn’t actually showcase that extra performance because it goes beyond what we can perceive or what typical displays can show.

While higher refresh rates can benefit very dynamic animations (and yes, there are gaming monitors that go beyond 200 Hz for competitive gaming scenarios), standard web UIs are relatively simple scenes compared to games with thousands of polygons. Based on my preliminary testing with Hologram’s upcoming Bitstring and Renderer modules overhaul, I’m confident we can achieve this target, and then target even higher values in later stages.

Also, you make a great point about GPU rendering (like in Zed editor). While that approach has clear benefits for certain use cases as I discussed before, CPU-based rendering through WebView becomes increasingly viable as CPUs continue to advance. Moore’s law may have slowed but hasn’t stopped - we’re seeing exciting advances in CPU technology through chiplets, transistor stacking, new materials, and gate-all-around transistors. These improvements directly benefit WebView performance, making it an increasingly practical choice for many applications.

What’s particularly exciting is how far Progressive Web Apps (PWAs) have come. Looking at whatpwacando.today, modern PWAs can now handle a wide range of native-like capabilities. Hologram will be able to leverage all of these capabilities while providing a unified development experience. And we all want to use Elixir wherever we can, right? :wink:

The key point here is that for typical web UI scenarios, we don’t need to be the fastest - we just need to be fast enough that users can’t perceive any lag or stuttering. As long as we can consistently render within that ~17ms window, which is very achievable for standard UI components, the end user experience will be indistinguishable from native performance in these use cases. Having the ability to render much faster than what humans can perceive or displays can show, while impressive technically, doesn’t necessarily translate to better user experience.

To sum this up: In my opinion - while native apps will always have their place, especially for performance-critical applications, the continuous improvement in CPU performance makes WebView-based solutions increasingly viable for a broad range of applications.

5 Likes

I agree with your overall point, but I disagree with this bit. Anyone who has used 120 or 240 Hz displays knows that each step brings a very noticeable improvement in smoothness. I have not yet witnessed a 500+ Hz display, but I have heard the difference is again very noticeable, and I believe it.

The only reason we focus on 60 Hz is that it has been the standard for a while - but this is changing. For example, nearly every major Apple product line now features 120 Hz displays.

But for web apps I think your overall point is again correct: the “re-render” speed of a React app (for example) is decoupled from the “animation” frame rate, which dictates things like scrolling. It’s the same way in games - often the game simulation is ticked at a much lower rate and then everything is interpolated for rendering at a high refresh rate. (Famously, some old games tied physics to the rendering framerate, causing amusing broken behavior.)

1 Like

ODBC doesn’t support UTF-8-encoded data; I would not recommend anyone use it seriously in 2025.

1 Like

No, no - it’s not Moore’s law if it’s slower than Moore’s law :joy:

Definitely not - we have heard about dozens if not hundreds of new battery technologies, and what has changed in practice? There are indeed many new technologies, but first and foremost there are companies behind them. Before they switch to the next technology they want to make enough money from the current one, and obviously there is never enough money… :money_mouth_face:

You can watch presentations from top tech companies that have technologies planned out for years. How is that possible? How do they know what kind of tech they will have after x years? The only reasonable explanation is that they already have this tech, but are in no rush to introduce it. What’s worse, companies are owned by other companies, holdings, and corporations. At the top there are at most a few of them, and they do not plan to compete with each other on new technologies and prices. :handshake:

In other words… don’t optimise code, optimise hardware. What would you say if the producers “played the UNO reverse card”? It’s an obvious waste of resources when you use your hardware in the wrong way. That said, I’m speaking generally, since you mentioned work happening on the CPU. I believe modern browsers use the GPU for rendering. :thinking:

That’s a false assumption. Of course everyone is different, but people can see up to 200-300 Hz. You may be used to a low refresh rate, and you may need to train a lot to be at the top of that range, but it does not mean your eyes are too weak for it. I have vision problems, and after I bought a monitor with a 240 Hz refresh rate I was able to have really long sessions without my eyes getting tired. It really matters; even if you can’t describe it, you still feel it.

Of course I don’t mean every app needs to render at a 300 Hz refresh rate. I just wanted to say it’s a false assumption and may affect other people’s decisions. For example, why should I buy a 240 Hz monitor if I can only notice 60 Hz? That’s a huge deal many people forget about.

True, but it started with my proposition, and I was talking about native desktop apps, not about how to improve web apps. For sure I would be interested in desktop apps as described above, but I would still find Hologram amazing.

1 Like

I’d suggest that this is a common misconception, typically driven by past experiences with things that resemble Ash. The tools we provide work independently of data interactions (although there are some cases where you can’t put X & Y together w/o a data layer, or only with certain data layers, but that is just a reality of software engineering).

This is a perfectly valid resource:

defmodule MyApp.Hello do
  use Ash.Resource

  actions do
    action :say_hello, :string do
      argument :to, :string, allow_nil?: false

      run fn input, _context -> 
        {:ok, "Hello #{input.arguments.to}"}
      end
    end
  end
end

If you want to add that to a GraphQL API:

defmodule MyApp.Hello do
  use Ash.Resource,
    extensions: [AshGraphql.Resource]

  graphql do
    queries do
      action :say_hello
    end
  end

  actions do
    action :say_hello, :string do
      argument :to, :string, allow_nil?: false

      run fn input, _context -> 
        {:ok, "Hello #{input.arguments.to}"}
      end
    end
  end
end

Or if you want to build a form for it:

# make a form
form = AshPhoenix.Form.for_action(MyApp.Hello, :say_hello)

# validate it with inputs from the client
form = AshPhoenix.Form.validate(form, %{to: "fred"})

# submit it
AshPhoenix.Form.submit(form)
# => {:ok, "Hello fred"}

The benefit that Ash provides is that, in the cases where you need persistence (as you often do when building applications), you can opt into it without a bunch of choreography.

defmodule MyApp.Person do
  use Ash.Resource,
    extensions: [AshGraphql.Resource],
    data_layer: AshPostgres.DataLayer

  postgres do
    table "table"
    repo Repo
  end

  graphql do
    queries do
      action :say_hello
    end

    mutations do
      create :create
    end
  end

  actions do
    defaults [:create]

    action :say_hello, :string do
      argument :to, :uuid, allow_nil?: false

      run fn input, _context ->
        with {:ok, person} <- Ash.get(__MODULE__, input.arguments.to) do
          {:ok, "Hello #{person.name}"}
        end
      end
    end
  end

  attributes do
    uuid_primary_key :id
    attribute :name, :string, allow_nil?: false
  end
end

Oftentimes folks will have resources with data layers and without.

Ash isn’t about slapping an API on top of a database. Things like calculations, generic actions (that action/3 that I showed up there), and tons of configuration for how various extensions work with your resources give you the best of both worlds: simple, powerful tools derived from resources, with the ability to progressively enhance as complexity increases.
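
As a tiny illustration, the kind of calculation meant here can be a one-liner on the resource (a sketch assuming hypothetical first_name/last_name attributes):

calculations do
  # Derived value, computable in the data layer via the expression
  calculate :full_name, :string, expr(first_name <> " " <> last_name)
end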

Obviously I’m biased, so grain of salt and all that :person_shrugging:

17 Likes

Check out Freedom Formatter.

4 Likes

No, you appear to be completely right and I was wrong. This indeed looks very nice. I apologize and stand corrected.

Looks like you did titanic work with generating the database from the schema, but it appears you are saying I can keep using Ecto directly too if I want.

This is very cool, I will be looking closer into Ash, thank you! :heart_eyes:

13 Likes

I can completely agree with that!

I have never used Ash; however, I naturally arrived at the conclusion that such an approach is useful in a lot of cases, as it allows more flexibility between the declarative part and the implementation part. There were projects where I implemented a small part of what Ash is from scratch, and I can say that it is worthwhile to go with this approach.

Nonetheless, writing implementations for DSLs can be tedious and extremely complex. Ash levels the playing field by providing a batteries-included solution for the implementation, which is usually the hard part, so you can say it’s a win-win situation as long as the framework offers the solution.

2 Likes

I was waiting to see if you’d respond and was just coming back to deliver a much less coherent version of your response :sweat_smile:

Glad you are more convinced, @hubertlepicki. I’m also a big fan of fully top-down (from the UI) design, and I was really excited about Ash. Unfortunately, I was unable to convince my company to use it for their greenfield project, so my Ash learning is on hold (my hobby programming time has been spent on other projects, atm).

Even though it’s two years old, there are a lot of “a-ha” moments in this video. It’s quite long (there is a part two) and these moments are a bit spread out, but I think it still does a good job of dispelling—with examples—a lot of the common misconceptions about what Ash is.

I’m only bothering to write this because this isn’t the first time I’ve seen someone say that they don’t want to use Ash and then describe Ash when trying to explain what it is they do want :sweat_smile: So, coming from someone who was a previous skeptic and has zero bias (I’ve never contributed to it and don’t even currently use it in a real project), you should really check it out if you’re a current skeptic.

3 Likes

I’d like to see better tooling for exploring and understanding both the static and dynamic structure of complex Elixir (et al) systems. People talk about how Elixir systems can have millions of processes, but generally this involves a lot of replication. So, in Rich Hickey’s terms (e.g., Simple Made Easy), it doesn’t complect things very much.

The real challenge (IMHO) is making it easier to comprehend the relationships among thousands of data structures, files, functions, process and message types, etc. Observer hints at this when it draws diagrams of supervision trees, but that is only one type of connectivity. Each instance of message transmission or process spawning has the potential to create new relationships among sets of entities (e.g., CPUs, nodes, processes).

I’ve imagined setting up a system that could harvest these relationships from static (e.g., code) files and dynamic (e.g., trace) data, then record them in (say) a graph database such as ArangoDB or Neo4j. This would make them available to generate diagrams, sets of interlinked web pages, etc. There is also the possibility of using an LLM to examine and explicate this information. And a pony…
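
As a flavor of the dynamic side, here is a minimal sketch (module name hypothetical) that uses the BEAM’s built-in tracing to collect sender-to-receiver message edges - the raw material such a system could feed into a graph database:

defmodule EdgeHarvester do
  # Trace :send events on all processes for duration_ms and return the
  # distinct {sender, receiver} pairs observed. Note: tracing everything
  # can be heavy on a busy node; this is a sketch, not production code.
  def collect(duration_ms) do
    :erlang.trace(:all, true, [:send])
    deadline = System.monotonic_time(:millisecond) + duration_ms
    edges = loop(MapSet.new(), deadline)
    :erlang.trace(:all, false, [:send])
    MapSet.to_list(edges)
  end

  defp loop(edges, deadline) do
    timeout = deadline - System.monotonic_time(:millisecond)

    if timeout <= 0 do
      edges
    else
      receive do
        # Trace messages arrive as {:trace, sender, :send, message, receiver}
        {:trace, from, :send, _msg, to} ->
          loop(MapSet.put(edges, {from, to}), deadline)
      after
        timeout -> edges
      end
    end
  end
end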

5 Likes

Same. LiveView is cool, but IMO the apps with the best UX are the opposite, i.e. local-first with server sync.

Currently I’m building my own solution involving Svelte, IndexedDB, and Phoenix Channels. CRDTs are fascinating, but I think they’re only truly needed for something like collaborative document editing. This was an interesting read: Architectures for Central Server Collaboration - Matthew Weidner, along with the follow-up discussion.

This sounds amazing.

4 Likes

When I read your post, this association came to mind: Kino.Process — Kino v0.14.2

Kino.Process can be used to draw traces of messages, application trees, and supervision trees in Livebook. The traces overlap a bit with what you described, I guess.
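
For example, in a Livebook cell (assuming a recent Kino version):

# Render the supervision tree of a running application
Kino.Process.app_tree(:kino)

# Render a sequence diagram of the messages sent while the function runs
Kino.Process.seq_trace(fn ->
  {:ok, agent} = Agent.start_link(fn -> 0 end)
  Agent.get(agent, & &1)
end)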

2 Likes

What I would love is something similar to Phoenix LiveView but JUST the data store itself. We use a hacked together internal implementation that sits on top of Phoenix Channels and sends diffs over to a data store on the frontend.

The boundary where the server renders the markup has not personally been a good fit for me because I work on a lot of applications with high levels of interactivity.

Offline-first seems interesting, but for many apps it could be a lot more complex than necessary. Whereas something like LiveView that syncs data (rather than HTML) over to an in-memory store on the frontend has similar trade-offs and complexities.
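
To make that concrete, a rough sketch of the idea - module and event names are made up, and the diff is a naive shallow one:

defmodule MyAppWeb.StoreChannel do
  use Phoenix.Channel

  def join("store:" <> _id, _params, socket) do
    state = %{count: 0, items: []}
    # The client seeds its local store with the full state on join
    {:ok, state, assign(socket, :state, state)}
  end

  def handle_in("event", payload, socket) do
    old = socket.assigns.state
    new = apply_event(old, payload)
    # Only the changed keys travel over the wire
    push(socket, "diff", diff(old, new))
    {:noreply, assign(socket, :state, new)}
  end

  # Naive shallow diff: keep only keys whose values changed
  defp diff(old, new) do
    for {k, v} <- new, Map.get(old, k) != v, into: %{}, do: {k, v}
  end

  defp apply_event(state, %{"action" => "inc"}), do: %{state | count: state.count + 1}
  defp apply_event(state, _), do: state
end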

Thoughts?

1 Like

You are kinda describing LiveState: LiveState — live_state v0.8.2

2 Likes