Reality check your library idea

I don’t know about you but in my time working with Elixir I’ve had numerous ideas about libraries which would be cool/interesting/helpful to work on.

At the same time I was never sure if the community would actually be interested in the idea, or if they would deem it a waste of time (to exaggerate).

From there I thought “why not create a thread to reality check the idea” and here we are. Consider this the place to get feedback on your library idea!


This is a great idea! Well I have been busy due to the whole… global situation, and have several projects in varying states of bakedness.

Fully baked (but I’m not sure if this library is “literally the worst” or not, so I’m too scared to advertise it widely):

Half-baked WIP, and you should probably use Matrex for now:

Cookie Dough: compile-time differentiable programming in elixir:

And I have one project that I just need to untangle from some other software, write better tests for, and document (and come up with a name for it — I’m thinking “Fakebooks”). It’s basically “ansible for elixir”, except using actual Elixir code instead of YAML (so when debugging it you can drop IO.inspect calls in, etc.).


Making a finite-state automata library for Elixir might be worth it. People around here regularly use state machines to enforce proper state transitions and to ensure that certain persistent workers aren’t stuck in an invalid state that the code authors have no answer for.

FSMs sound like a good answer for some of these scenarios.
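To give a flavour of the idea, here is a minimal, hypothetical sketch (not the API of any existing library) of encoding the legal transitions as data and checking them before a worker is allowed to change state:

```elixir
defmodule Order.FSM do
  # Hypothetical example: each state maps to the states it may transition to.
  @transitions %{
    pending: [:paid, :cancelled],
    paid: [:shipped],
    shipped: []
  }

  @doc "Returns {:ok, to} if the transition is legal, an error tuple otherwise."
  def transition(from, to) do
    if to in Map.get(@transitions, from, []) do
      {:ok, to}
    else
      {:error, {:invalid_transition, from, to}}
    end
  end
end
```

A persistent worker could call `transition/2` before persisting a new state, so an illegal jump fails loudly instead of leaving the worker stuck.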

EDIT: There is such a library already: fsmx.


This is probably born of my ignorance, but aren’t there already options for state machines on the BEAM? How would this be different?

Before I write everything down, I want to say that I’m really interested in any kind of feedback you might have on this idea, because I’m not certain if there is actually any interest in it. So please tell me what you think.

I’ve been considering creating a bunch of micro frameworks to help with doing DDD-style development in Elixir. What do I mean by that?

DDD is about actually understanding the problem domain and then translating this understanding into code which actually solves the underlying problems. While there is no “one true way” of doing that, there are a bunch of patterns which tend to be very helpful, such as aggregates, events, commands etc.

All of these are provided by Commanded, and while Commanded is a fine piece of software, it comes with a heavy buy-in. As far as I know there is no option to “pick and choose”. Only want to use commands and aggregates but no events? Well, that’s too bad.

Here is where the idea of DDD micro frameworks comes in. The thought is that there are multiple parts to it, each focusing on specific patterns while providing entry points to hook in functionality.

You might have a command library, but where those commands get routed is under your control.

You could have an aggregate modeling library, spinning up an aggregate as a process from a data store, but where the data is stored and how (snapshots or events) is under your control.

You might choose to go the event route, and publish and subscribe to them, but where they come from and where they go is under your control.

Each of these parts would then provide well defined APIs to hook into the edges of their functionality, and each of them could provide adapters for hooks of the other parts.
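To make the “well defined APIs at the edges” idea concrete, here is a rough sketch of what the command part’s extension point could look like (all module names are hypothetical; nothing like this exists yet):

```elixir
defmodule CommandRouter do
  @moduledoc "Hypothetical edge API: where commands go is up to the host app."
  @callback dispatch(command :: map()) :: :ok | {:error, term()}
end

defmodule LoggingRouter do
  @moduledoc "A toy implementation of the hook that just logs each command."
  @behaviour CommandRouter

  @impl true
  def dispatch(command) when is_map(command) do
    # The host application decides routing; here we only inspect and accept.
    IO.inspect(command, label: "dispatching")
    :ok
  end

  def dispatch(_other), do: {:error, :invalid_command}
end
```

The command library would only ever call the behaviour; whether commands end up in an aggregate process, a queue, or plain function calls stays under the user’s control.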

At the end of the day this would leave you with a modular group of libraries from which you can pick and choose what you need. Want to go full-blown? Take them all, probably via a single dependency which includes them all. Want to do only this one thing and maybe that other one? Pick the respective parts.

The end result is that you stay in control of how much buy-in you accept. After all, it is you who knows best what is needed in your particular problem domain.


Yeah, I got a brain fart and I forgot that there are libraries like that already. Sorry. :slight_smile:


What might be interesting in that direction could be an implementation of Petri Nets in Elixir.

I saw an interesting talk at the Code BEAM STO a few years ago about them, and was under the impression that they can solve certain kinds of problems which are hard to model in a state machine.
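For anyone unfamiliar with them, the core firing rule is tiny. A hypothetical sketch, assuming a marking is a map of place to token count and a transition lists its input and output places:

```elixir
defmodule Petri do
  @doc """
  Fires a transition if it is enabled, i.e. every input place holds
  at least one token. Firing consumes one token per input place and
  produces one token per output place.
  """
  def fire(marking, %{in: inputs, out: outputs}) do
    if Enum.all?(inputs, fn p -> Map.get(marking, p, 0) > 0 end) do
      marking =
        Enum.reduce(inputs, marking, fn p, m -> Map.update!(m, p, &(&1 - 1)) end)

      marking =
        Enum.reduce(outputs, marking, fn p, m -> Map.update(m, p, 1, &(&1 + 1)) end)

      {:ok, marking}
    else
      {:error, :not_enabled}
    end
  end
end
```

Because a transition can wait on several input places at once, Petri nets model concurrent joins and forks that a single flat state machine cannot express.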


Haha no prob that’s what this thread is for! I also have one that’s been humming along in prod for a year now:


It is in Erlang, but it’s already there:


As promised, the untangled “ansible for elixir”, very much version 0.1.0… I haven’t tested it against deploying my lab or production machines in the “untangled” state yet, so expect errors unless my tests actually do recapitulate everything correctly…


I’ve spent the day working on an idea of mine which had been lying in my backlog for too long:

Easy schema validation for incoming requests on a per-route basis.

In past projects I’ve occasionally used JSON schemas to ensure that incoming data actually looked like I was expecting it, which is especially relevant when doing API development.

This boiled down to either assigning the path to the schema in conn.private and evaluating it in a plug:

pipeline :api do
  # ...
  plug JSONSchemaValidator, private_key: :json_schema
end

scope "/", MyAppWeb do
  post "/thing", MyController, :create, private: %{json_schema: "path/to/schema.json"}
end

or defining a plug per-route in the controller:

plug JSONSchemaValidator, [schema: "path/to/schema.json"] when action == :create

While this works it feels a bit clunky, so I dabbled a bit and ended up at this (I’m using Validation as a placeholder here, don’t have a name yet):

defmodule MyAppWeb.MyController do
  use Validation,
    adapter: Validation.JSONSchema,
    resolver: Validation.JSONSchema

  use MyAppWeb, :controller

  @validate schema: "path/to/schema.json"
  def create(conn, params) do
    # ...
  end
end

This actually compiles down to the plug + action version from above, but I feel like it’s so much clearer. The adapter and resolver options can of course also be defined in the config. The idea here is to stay agnostic of the kind of schema and provide these as separate libraries: one for JSON Schema, one for Protobuf, etc.

What do you think? And do you have any name suggestions (I’m considering Gandalf - you shall not pass)?


Even with your explanations I’d still be very confused about what @validate, adapter and resolver mean. It sounds too enterprise-y somehow and could literally mean anything.

I’d probably replace this:

@validate schema: "path/to/schema.json"

With this:

@request_schema "path/to/schema.json"

This immediately makes it clear that we’re validating request parameters and not anything else.

I’m also not clear on the difference between an adapter and a resolver. Can’t they be one module or even just separate functions, described as function captures? Like this:

use RequestValidator, schema: :json
use RequestValidator, schema: :protobuf
use RequestValidator, module: MyApp.JsonSchemaValidator
use RequestValidator, adapter: &MyApp.Contexts.Validator.json/1, resolver: &MyApp.Contexts.Validator.something_else/1

…etc. And then those atoms could resolve to actual modules inside your library. I think it’s very non-ergonomic to make people write your library’s module prefix several times like in your example – Validation once and then Validation.JSONSchema twice – so I’d go the extra mile to introduce a terse DSL.


I have a terrible jsonschema validator library here: it’s actually been running in prod for about a year with no hiccups. If you’re interested in using that code, feel free to without restriction. Or, if you are interested in collaborating at some point, I’d like to refactor it to “not be terrible”; it’s just that I don’t actively use jsonschema very much anymore since I keep more and more things in the BEAM these days.


I have been looking at CRDTs recently; it would be nice to have CRDT datatypes, because then we could write an amazing… rich text editor for Phoenix.

I know Phoenix uses CRDTs with its trackers, but not datatypes (arrays, strings, etc.).

We can see Action Text in Rails, and many other frameworks have their custom editor.

Often I use draftjs, or ckeditor, or tinymce, but I am sure we can do better :slight_smile:

JS can do this

and I hope Elixir will too.
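As a taste of what a CRDT datatype library would contain, here is the simplest of them all, a grow-only counter (text-editing CRDTs like RGA are far more involved, but the merge idea is the same):

```elixir
defmodule GCounter do
  @moduledoc "Grow-only counter CRDT: one slot per replica, merge by maximum."

  def new, do: %{}

  # Each replica only ever increments its own slot.
  def increment(counter, replica), do: Map.update(counter, replica, 1, &(&1 + 1))

  # Taking the per-replica maximum makes merge commutative,
  # associative, and idempotent -- the CRDT properties.
  def merge(a, b), do: Map.merge(a, b, fn _replica, x, y -> max(x, y) end)

  def value(counter), do: counter |> Map.values() |> Enum.sum()
end
```

Two replicas can increment independently and merge in any order, any number of times, and still converge on the same value.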


I think this text-editing CRDT is promising: Martin has given talks at CodeSync in the past, IIRC.


I have been following some of his videos :slight_smile:

Thanks for the feedback! I really like the aliasing, definitely something I’ll consider!

The example I chose probably wasn’t ideal, but your feedback definitely helped push the API in a better direction. The idea behind adapter and resolver is a distinct separation between “validation” and “schema loading” (see below for details on what that means).

In my current iteration you (could) now use them like this (name is as before a placeholder):

use ValidationLibrary.Phoenix.Controller,
  adapter: ValidationLibrary.Adapter.JSONSchema,
  resolver: {ValidationLibrary.Resolver.File, directory: "priv/schemas"}

use MyAppWeb, :controller

@validation_library schema: "show.json"
def show(conn, params) do
  # ...
end

Of course the whole adapter and resolver config can also be moved into config/:

config :validation_library,
  adapter: ValidationLibrary.Adapter.JSONSchema,
  resolver: {ValidationLibrary.Resolver.File, directory: "priv/schemas"}

Which would allow you to reduce the use to this:

use ValidationLibrary.Phoenix.Controller
use MyAppWeb, :controller

# ...

There are a number of reasons I chose this API:

adapter and resolver being distinct

This allows the user to combine the actual validation adapter with whatever kind of schema storage, be it local storage, remote storage (such as an S3 bucket), or even a full-blown schema registry.

A reasonable default for resolver would be local storage (ValidationLibrary.Resolver.File) which is probably perfectly fine for most use-cases.

The adapter can in turn focus on actually performing the validation, without having to care about where the schema came from. It gets the resolved schema, the data and then validates.

(In the earlier example I used JSONSchema as the resolver because the local storage loading “just” loads the file without JSON parsing. I’ve now added an optional prepare/2 callback to the adapter which transforms the schema from a plain string into an actual JSON schema.)
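To make the split concrete, here is a rough sketch of what the two behaviours could look like, plus a toy resolver (all module names are placeholders, not a finished API):

```elixir
defmodule ValidationLibrary.Resolver do
  @moduledoc "Turns a schema reference into the raw schema (file, S3, registry, ...)."
  @callback resolve(ref :: String.t(), opts :: keyword()) ::
              {:ok, binary()} | {:error, term()}
end

defmodule ValidationLibrary.Adapter do
  @moduledoc "Validates data against an already-resolved (and prepared) schema."
  @callback prepare(raw :: binary(), opts :: keyword()) :: {:ok, term()} | {:error, term()}
  @callback validate(schema :: term(), data :: term()) :: :ok | {:error, term()}
  @optional_callbacks prepare: 2
end

defmodule InMemoryResolver do
  @moduledoc "Toy resolver that serves schemas from a compile-time map."
  @behaviour ValidationLibrary.Resolver

  @schemas %{"show.json" => ~s({"type":"object"})}

  @impl true
  def resolve(ref, _opts) do
    case Map.fetch(@schemas, ref) do
      {:ok, raw} -> {:ok, raw}
      :error -> {:error, :not_found}
    end
  end
end
```

A resolver only knows how to fetch and an adapter only knows how to validate, so either side can be swapped independently.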

@validation_library schema: "..."

The reason for using a module attribute with the same name as the library is that you can override any configuration here.

If you for example have this one endpoint which needs to accept XML to interact with some kind of legacy system, you could then do something like this:

@validation_library schema: "priv/schemas/legacy/create.xml", adapter: ValidationLibrary.Adapter.XML

But you’re certainly right that this isn’t really self-explanatory. It’s certainly feasible to allow “aliases” for the “full” module attribute, such as the one you proposed (I would opt for namespacing though, to reduce the risk of name clashes; the final library name would be a lot shorter anyway).

@validation_library_request_schema "show.json"

Does that clarify things a bit, @dimitarvp? Any additional thoughts?


Here is the next crazy idea from the place that is my brain (I’m committed to finishing Selectrix, don’t worry).

LiveView over WebRTC. The idea being that you could have a server behind a firewall, connecting up to a handshake server. Then you could from your device or terminal connect to the handshake server, and it would perform STUN/TURN and connect you P2P (ideally) to your server on the other side of the firewall.

I’m thinking the best use case would be a privacy-oriented storage device. You could build out the interface using phoenix liveview, and owners could connect into their own servers easily.


Ok… Now for a super half-baked idea that I don’t want to work on (but if someone wants to do this and wants help I can give some help):

Serverless Elixir/Erlang

Compile a module, or a bundle of modules, and ship it to a service. The server disassembles the BEAM bytecode, identifies operations that need to be redacted (String.to_atom) or sandboxed (for example, calls to the File module, send, or the Node and Process modules), recompiles the module, and then launches it into the VM, co-tenanted with a bunch of other modules. It might be possible to do things like track reductions spent in modules, etc., if you wanted to make a hosting service out of this.
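To illustrate just the “identify operations” step, here is a toy, source-level scan for calls into forbidden modules. A real implementation would have to work on the BEAM bytecode, and static scanning alone cannot catch dynamic dispatch like apply/3; this only sketches the idea:

```elixir
defmodule Sandbox.Scan do
  @moduledoc "Toy scanner: finds remote calls to a forbidden-module list."

  @forbidden [File, Node, Process, :erlang]

  @doc "Returns the forbidden remote calls found in an Elixir source string."
  def scan(source) do
    {:ok, ast} = Code.string_to_quoted(source)

    {_ast, calls} =
      Macro.prewalk(ast, [], fn
        # Elixir-style remote call: Mod.fun(args)
        {{:., _, [{:__aliases__, _, parts}, fun]}, _, _} = node, acc ->
          mod = Module.concat(parts)
          if mod in @forbidden, do: {node, [{mod, fun} | acc]}, else: {node, acc}

        # Erlang-style remote call: :mod.fun(args)
        {{:., _, [mod, fun]}, _, _} = node, acc when is_atom(mod) ->
          if mod in @forbidden, do: {node, [{mod, fun} | acc]}, else: {node, acc}

        node, acc ->
          {node, acc}
      end)

    Enum.reverse(calls)
  end
end
```

A hosting service would use the flagged calls to reject the upload or rewrite them into sandboxed equivalents before loading the module.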

Sorry for being a showstopper, but without spawning an additional BEAM VM in a restricted environment it is not really possible to make this safe. The main problem? apply(m, f, a), where you cannot guarantee that it will not call some unsafe code.