Marker 2.0.1 - small and performant library for writing HTML markup

Hi all,

I just released Marker 2.0.1 at:

Marker is a small and performant library for writing HTML markup in Elixir. It provides both templates and components as convenient abstractions. An example of Marker syntax:

defmodule Example do
  use Marker

  component :simple_list do
    items = for c <- @__content__, do: li c
    ul items
  end

  template :example do
    html do
      body do
        h1 "Hello " <> @name
        p "You can find more information about Marker at:"
        simple_list do
          a "Github", href: ""
          a "", href: ""
          a "HexDocs", href: ""
        end
      end
    end
  end
end

You can now call the template like this:

Example.example name: "World"
=> {:safe,
    "<!doctype html>\n<html><body><h1>Hello World</h1><p>You can find more information about Marker at:</p><ul><li><a href=''>Github</a></li><li><a href='/marker'></a></li><li><a href=''>HexDocs</a></li></ul></body></html>"}

Marker can be used with Phoenix simply by calling Marker templates from your view’s render functions. See marker-phoenix-example for more information.

The performance of Marker is pretty good, since it tries to precompile as much as possible during macro expansion. For example:

div do
  span 1 + 1
end

This is expanded at compile time to:

"<div><span>" <> Marker.Encode.encode(1 + 1) <> "</span></div>"
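To see what this buys at runtime, the same expression can be evaluated by hand. Here `to_string/1` stands in for `Marker.Encode.encode/1`, assuming the encoder renders a plain integer as its decimal string (an assumption on my part, not Marker’s documented behaviour):

```elixir
# Only the dynamic expression `1 + 1` is evaluated at runtime; the
# surrounding markup is already a single literal binary.
fragment = "<div><span>" <> to_string(1 + 1) <> "</span></div>"
IO.puts(fragment)
# → <div><span>2</span></div>
```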

Marker is a sort of successor to Eml, a library for generating, parsing and querying HTML. Although Eml has many more features, I personally mostly used the markup DSL, which had some unpleasant corner cases due to its design. Marker does only the markup DSL, and does it simpler and better than Eml. I might later release the parsing and querying capabilities of Eml as a separate, improved library.

It has been used internally for some months. I was a bit too fast in releasing the 1.0 version, which is why it is already at major version 2. I don’t expect any breaking changes for a long time, and it should be stable enough for production use.



Is there a reason why the library is not using iolists? That seems surprising for a library that claims good performance. Or maybe you’ve measured that it doesn’t improve performance in this case?


The Eml library used to have an option to render to iolists, but after some benchmarking I removed it, since it didn’t improve performance at all. Because Marker’s compiler is derived from Eml’s, I didn’t bother trying it with Marker. However, intuitively I would say that iolists should indeed be faster to generate, so it might be worth checking with Marker again.

On the other hand, in most situations Marker’s current performance is about the same as EEx templates, and I think those are pretty fast.


The performance benefit of iolists is twofold: first, they gather many fragments into a single list instead of building a potentially huge single binary; second, a list can be sent through, say, a network socket without ever building that binary. I would expect the performance to be fairly similar between these two:

"<div><span>" <> Marker.Encode.encode(1 + 1) <> "</span></div>"
["<div><span>", Marker.Encode.encode(1 + 1), "</span></div>"]

In such a small case the two will be close; once the output starts getting larger, though, is when it will start to hurt. So you should not only benchmark the ‘generation’ step of large Marker templates, but also how fast large ones are sent through a socket.
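A rough sketch of such a comparison, covering only the generation step (timed with `:timer.tc/1`; the item count is arbitrary and real numbers will vary per machine):

```elixir
# Build the same list markup as one flat binary vs. as a nested iolist.
items = for n <- 1..1_000, do: "<li>item #{n}</li>"

{binary_us, binary} = :timer.tc(fn -> "<ul>" <> Enum.join(items) <> "</ul>" end)
{iolist_us, iolist} = :timer.tc(fn -> ["<ul>", items, "</ul>"] end)

# Both represent the same document when flattened:
true = binary == IO.iodata_to_binary(iolist)

IO.puts("binary: #{binary_us} µs, iolist: #{iolist_us} µs")
```

As the post notes, this misses the second half of the story: a fair benchmark would also measure writing each form to a socket.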

There are times when it makes sense to resolve an iolist into a binary, but if you are streaming it to a socket and its components may each be a significant fraction of a packet in size, there will almost never be a performance benefit in resolving it to a binary first. If the list had tons and tons of tiny little cells then I could see it maybe helping, but even then I would not do so without testing. IOLists are HIGHLY optimized in the EVM/BEAM.


I did some benchmarking again and binaries still seem faster than iolists. I did not include sending data through a socket in the benchmark. I understand that iolists are highly optimized in the EVM and that it’s faster to send an iolist through a socket directly, instead of converting it to a binary first when your end result is a list, but I find it hard to believe that sending an iolist through a socket is actually faster than sending the equivalent binary.

I think you gain most from iolists when working primarily with charlists, because something like :erlang.iolist_to_binary([?H, ?e, ?l, ?l, ?o] ++ [32, ?W, ?o, ?r, ?l, ?d]) should perform much worse than something like [[?H, ?e, ?l, ?l, ?o] | [32, ?W, ?o, ?r, ?l, ?d]], but since appending data to binaries is also highly optimized by the EVM, it seems that ["Hello", " World"] is not necessarily much faster than "Hello" <> " World".
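The equivalence of those forms is easy to check: `IO.iodata_to_binary/1` (a wrapper around `:erlang.iolist_to_binary/1`) flattens charlists, binaries, and nested lists of either to the same binary:

```elixir
# Charlists, binaries, and nested lists of both are all valid iodata.
a = :erlang.iolist_to_binary([[?H, ?e, ?l, ?l, ?o] | [32, ?W, ?o, ?r, ?l, ?d]])
b = IO.iodata_to_binary(["Hello", " World"])
c = "Hello" <> " World"

IO.inspect(a == b and b == c)
# → true
```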

Still, I don’t really understand why iolists perform worse in my benchmarks. My expectation was that they would perform a little better.

Regarding the benchmark itself, the input for the benchmark can be found here:

Results of the benchmark on my three-year-old i5 iMac are:

## CompileBench
benchmark name        iterations   average time 
marker binary flat        100000   12.08 µs/op
marker binary small       100000   14.00 µs/op
marker iolist small       100000   16.90 µs/op
marker binary med         100000   23.16 µs/op
marker binary simple      100000   24.89 µs/op
marker iolist simple      100000   28.95 µs/op
marker iolist med          50000   30.26 µs/op
marker iolist flat         50000   45.27 µs/op

The implementation to switch between iolist and binary is a little bit hacky at the moment, so I haven’t published these modifications to Marker’s repo yet, but here is the generated output for the ‘small’ test when compiling to iolists:

{:safe,
 ["<!doctype html>\n<html><head><meta charset='utf-8'/><meta http-equiv='X-UA-Compatible' content='IE=edge'/><meta name='viewport' content='width=device-width, initial-scale=1'/><meta name='description' content='result type benchmarking'/><meta name='author' content='zambal'/></head>",
  ["<body>", "<h2>", "Lists", "</h2>", "<ul>", "<li>", "Strawberry", "</li>",
   "<li>", "Banana", "</li>", "<li>", "Apple", "</li>", "<li>", "Orange",
   "</li>", "</ul>", "</body>"], "</html>"]}

EDIT: some typos


AFAIK, those are indeed nearly equivalent; the binary can even be a little faster, depending on the amount of data and the number of items in the list, since no list traversal is needed. The main benefit of iolists is when you have a list of items. That said, if you are pasting together a bunch of small bits into a single binary then, depending on the size of the result, it may still be cheaper to put them into an iolist where they won’t be copied (especially out into shared memory vs. the process-local heap) before shoving them through a socket. There are a lot of moving parts in the BEAM, as we all know, and performance can shift depending on what kind of data you have, how much of it, and where it is used.

One of my favourite blog entries on this topic -> … I mean, c’mon, “Template of Doom” … has to be great! :wink:


The gist I’m getting from this exchange is that it would be a nice convenience to be able to use plain iolists with the library, without being forced to convert them to binaries, even at a potential performance cost within the library itself. That would allow staying with iolists until the “last responsible moment”, letting the rest of the system/application reap whatever performance benefits iolists afford within the EVM/BEAM.
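A small illustration of that “last responsible moment” idea: most IO functions on the BEAM accept iodata directly, so an un-flattened iolist can be handed straight to the sink. File IO here stands in for a socket; `:gen_tcp.send/2` accepts iodata the same way:

```elixir
iolist = ["<ul>", ["<li>", "Apple", "</li>"], "</ul>"]

# Length and flattening are available without manual traversal:
IO.puts(IO.iodata_length(iolist))       # → 23
IO.puts(IO.iodata_to_binary(iolist))    # → <ul><li>Apple</li></ul>

# File.write!/2 (like :gen_tcp.send/2) accepts the iolist as-is,
# so the flattening is left to the runtime at the very last step:
path = Path.join(System.tmp_dir!(), "marker_iolist_demo.html")
File.write!(path, iolist)
true = File.read!(path) == "<ul><li>Apple</li></ul>"
```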

Nanoseconds per iteration is only one kind of performance; I’d be very curious about memory use and GC pressure.

Is the code up anywhere for the iolist version of your library?


Yeah, I’d be curious about running benchee on it as well.

I first pushed some planned refactoring, but I have now also pushed a new branch to the GitHub repo that tries to output iolists as optimally as possible within the limits of the current compiling/encoding design. I’m curious what results you get with it and/or whether you see room for improvement.

