Grammar is a very simple Elixir library that helps build parsers / transformers for LL(1) structured data, via two macros and a protocol.
The main idea is to declare reduction rules in an Elixir-ish way, a bit like regular functions.
e.g.
```elixir
defmodule MyStuff do
  use Grammar

  rule start("hello", :finish, :bang) do
    [_, finish, bang] = params
    "I see #{finish} #{bang || ""}"
  end

  rule finish("world") do
    "the world"
  end

  rule? bang("!"), do: "¡"
  rule? bang("?"), do: "¿"
end
```
```elixir
iex> MyStuff.parse("hello world !")
{:ok, "I see the world ¡"}
```
It started as a “How would I…?” question and ended up as something that works quite well.
Hi!
Sorry, I didn’t see your post!
Frankly speaking, I didn’t know about NimbleParsec or any other tool of the same kind when I started thinking about the topic.
From the quick look I took at NimbleParsec and xpeg, it seems to me that those tools follow a different path: they are based on a more formal description of the grammar, with actions interleaved at different points during parsing. I chose a more “declarative” way, where an action occurs only once its rule is fully and successfully parsed and its results are collected (as in the example above, where the rule body runs with the collected `params`).
I also followed another path by delegating token extraction to a dedicated protocol, which allows for a high level of customization. As an example, one can choose to extract a whole block of data as a single token, because this data block has a variable length.
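To make that concrete, here is a minimal sketch of what such a pluggable extractor could look like. The protocol and names below (`TokenExtractor`, `try_read/2`, `LengthPrefixedBlock`) are my own illustrative assumptions, not Grammar’s actual API:

```elixir
# Illustrative only: names and signatures are assumptions, not Grammar's API.
defprotocol TokenExtractor do
  @doc "Tries to read one token from `input`; returns {token, rest} or nil."
  def try_read(prototype, input)
end

# A prototype describing a length-prefixed binary block: a 16-bit big-endian
# length followed by that many bytes of payload.
defmodule LengthPrefixedBlock do
  defstruct []
end

defimpl TokenExtractor, for: LengthPrefixedBlock do
  # The whole variable-length payload is returned as a single token.
  def try_read(%LengthPrefixedBlock{}, <<len::16, payload::binary-size(len), rest::binary>>) do
    {payload, rest}
  end

  def try_read(%LengthPrefixedBlock{}, _input), do: nil
end
```

With that shape, `TokenExtractor.try_read(%LengthPrefixedBlock{}, <<0, 3, "abc", "tail">>)` would return `{"abc", "tail"}`: the three-byte payload comes out as one token, whatever its length.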
I’m also pretty sure it allows for sub-byte token extraction, and hence parsing of binary protocols.
My feeling is that those tools are more advanced, polished, and efficient than mine, which makes sense given the time spent on each project.
Good to know: I’m currently rewriting the internals of my humble tool, for better space efficiency compared to the current naive approach.
This version enables parsing at the bit level. Thanks to the design decision of delegating token extraction to a protocol, this feature comes almost for free (I must confess I was pretty happy with that).
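As a rough illustration of why Elixir makes this cheap (again a sketch under assumed names, not the library’s code), an extractor only has to pattern-match on a `bitstring` instead of a `binary`:

```elixir
defmodule BitExtractor do
  # Illustrative sketch, not Grammar's API: reads a `width`-bit unsigned
  # integer as one token, returning {token, rest} or nil.
  def try_read(input, width) when bit_size(input) >= width do
    <<value::size(width), rest::bitstring>> = input
    {value, rest}
  end

  def try_read(_input, _width), do: nil
end
```

For example, `BitExtractor.try_read(<<0b101::3, 0b01::2>>, 3)` yields `{5, <<1::size(2)>>}`: a 3-bit token pulled out of a 5-bit input.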
As usual, a Livebook illustrates this “new” feature.