How difficult would it be to add the infix notation à la Haskell?

To back this up: Elm makes a special allowance for the custom infix operators used in its applicative-style combinator parsing library, even though it removed the ability to define custom infix operators in libraries generally.

(I’ve been meaning to convert one of my Elm combinator parsers to Elixir to see how much expressiveness and clarity is lost when using NimbleParsec. And to compare speed.)

1 Like

The argument above is that, for those coming from Haskell, the code will have more (not less) clarity thanks to the expressiveness of its infix operators. A comparison is made to the alien-ness of the Plug macros when first encountered.

One can think of Witchcraft library users as ivory tower kids gentrifying the Elixir neighborhood, or as squatters from Haskell-land. Changes to infix operators have kicked them out without an eviction notice.

Perhaps from Elixir citizens’ perspective they were always being tolerated rather than invited. (99% were unaware they had moved in.) But I think it’s worth making allowances for them, even if there’s only a 1% chance that their code expressiveness bleeds into Elixir proper.

PS: I presume the Elixir formatter currently handles (custom) infix operators. And it sounds like they’re willing to make the pull requests to keep it compatible. I’m not sure if there are other “accommodations” that need to be made.

1 Like

Dear Robert,

Won’t that seriously complicate the understanding of other people’s code?

We’ve tried to address the first point when we mentioned C++-style operator overloading and in our response to the “adding more ways to do things” argument.

Custom operators are just symbols, same as human-readable names. Either way, when we’re looking at a function called map without knowing which library map comes from, we have to use our experience to guess the semantics. For some it will be obvious that map(%{a: :b}, fn x -> x end) will return [a: :b]; someone else may expect %{a: :b}.
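To make that ambiguity concrete, here is what the standard library actually does, using only Enum and Map from Elixir’s stdlib:

```elixir
# Enum.map/2 always returns a list, even when given a map:
# each map element arrives as a {key, value} tuple.
Enum.map(%{a: :b}, fn x -> x end)
# => [a: :b]

# A reader expecting a map back would need Map.new/2 instead:
Map.new(%{a: :b}, fn {k, v} -> {k, v} end)
# => %{a: :b}
```

So the guessing game exists with named functions today; the operator spelling doesn’t create it.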

Would seeing the code fn x -> x end <$> %{a: :b} or, in the current Witchcraft notation, %{a: :b} <~ fn x -> x end change anything? I hardly think so.

Note that in the absence of operator overloading, the intent to use any infix operator absent from Kernel has to be made explicit by the programmer, just like the intent to use any normal function.

To use Witchcraft’s <>, we have to explicitly hide Kernel.<> using import Kernel, except: [<>: 2].
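As a sketch of what that explicit opt-in looks like in plain Elixir (MyAppend is a hypothetical module, not Witchcraft’s; redefining <> as list concatenation is purely for illustration):

```elixir
defmodule MyAppend do
  # Shadow Kernel's <>/2 inside this module...
  import Kernel, except: [<>: 2]

  # ...and redefine it, here as list concatenation.
  def a <> b when is_list(a) and is_list(b), do: a ++ b
end

# Callers must opt in just as explicitly:
import Kernel, except: [<>: 2]
import MyAppend

[1, 2] <> [3]
# => [1, 2, 3]
```

Without both imports, the call sites keep Kernel’s binary concatenation, so nothing changes behind anyone’s back.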

And how should/would the formatter handle these things?

At the end of the day, infix operators are atoms, so as far as we can tell, neither the AST nor the pretty-printing facilities will be affected.

P.S.

I forgot to add $ and = to the opC grammar entry. Silly me!

I wouldn’t emphasise ivory-towerness. Believe it or not, at doma.dev we’re all about pragmatism. All we use from the Witchcraft ecosystem on a day-to-day basis is:

  • map.
  • Algebraic data types as a UX improvement over defstruct.
  • Typeclasses as a UX improvement over defprotocol / defimpl.
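For context, the plain-Elixir mechanism the last bullet improves upon looks like this (a sketch with a hypothetical Mappable protocol, not Witchcraft’s actual Functor class):

```elixir
# defprotocol/defimpl: Elixir's built-in ad-hoc polymorphism,
# which typeclass libraries layer a nicer UX on top of.
defprotocol Mappable do
  def fmap(container, fun)
end

defimpl Mappable, for: List do
  def fmap(list, fun), do: Enum.map(list, fun)
end

Mappable.fmap([1, 2, 3], fn x -> x * 10 end)
# => [10, 20, 30]
```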

On a rare occasion, when we have computations encoded as data (see the “pattern” of building a computation and running it at different sites), we reach for more powerful operators that take data and either spread it over the built computation or thread it through a chain of computations.
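A minimal sketch of that “computation as data” pattern in plain Elixir (no Witchcraft operators; run_chain is a made-up helper):

```elixir
# Site 1: build the computation as plain data (a list of functions).
steps = [
  fn x -> x + 1 end,
  fn x -> x * 2 end
]

# Site 2: run it later, threading a value through the chain.
run_chain = fn input, fns ->
  Enum.reduce(fns, input, fn f, acc -> f.(acc) end)
end

run_chain.(3, steps)
# => 8   # (3 + 1) * 2
```

The operators we reach for are essentially ergonomic shorthand for this shape of code.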

We’re working on moving the bits that we use frequently and want to actively support into a separate, integrated library. Then we’ll modify some behaviours to be more pragmatic under “offensive programming”, as well as, hopefully, make it play nicer with Dialyzer. (We lose time and debugging cycles to the fact that wrapping something of type t0() in an algebraic data type A currently loses the type information about t0(), collapsing the type to A.t().)

However, we would also like to keep helping the original authors of Witchcraft maintain compatibility with newer versions of Elixir.

I guess my point here is that it’s not about Haskell at all. The library family is pragmatic, with the added benefit of powerful tools for execution control beyond what is available OOTB.

1 Like

We have deprecated arrows and are now enjoying code like this:

map(
  compose(
    compose(&Binary.new!/1, fn x -> x.raw end),
    &B.mk_url!/1
  )
)

instead of map(&Binary.new!/1 <|> fn x -> x.raw end <|> &B.mk_url!/1).

Is that really more clear/expressive than this?

fn x ->
  x
  |> Binary.new!()
  |> then(& &1.raw)
  |> B.mk_url!()
end

or even

fn x ->
  binary = Binary.new!(x)
  B.mk_url!(binary.raw)
end
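(For readers who don’t use the library: compose/2 above just chains one-argument functions. A plain-Elixir sketch, with the left-to-right direction inferred from the example rather than checked against Witchcraft’s docs:)

```elixir
# compose.(f, g) builds a function that applies f first, then g.
compose = fn f, g ->
  fn x -> x |> f.() |> g.() end
end

add_then_double = compose.(fn x -> x + 1 end, fn x -> x * 2 end)
add_then_double.(3)
# => 8
```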

I guess it’s a matter of preference, but the only valuable thing I see in your example is map as something more general (a generic functor map) instead of just Enum.map.

I get the value of a Functor, and maybe even a Monad or a Monoid, if you enjoy these generalised operations in your codebase, but when it comes to function composition I find the examples in this thread rather alien, and I fail to see what’s so pragmatic about them in this language.

2 Likes