Central repository for protocols?

Would it make sense to have some central repository for protocols, to make interop between apps/libs easier?
So we would have packages that just define a protocol, which could then be used by various libs/apps?
(That way protocols would be decoupled from the packages that implement them.)

1 Like

What is the problem you’re trying to solve? What interop problems exist when the protocol is inside the library?

2 Likes

Multiple libs can implement the same common protocol, so why would I want to depend on a package just to pull in the definition of the protocol if I want to use an implementation from a different package? As an example, say we had a CC processing protocol and a bunch of libs implementing it.

1 Like

By CC processing do you mean credit card processing? For that I think it’s wrong to use protocols, because a credit card is a single data structure. Or can you show where the extension points are?

You might have the inversion of control in the wrong direction. It’s usually the library that defines the protocol that calls it; for example, with a JSON encoder protocol, only the JSON library itself calls the protocol.
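A minimal sketch of that direction, with hypothetical names throughout (`MyJSON` is invented): the library defines the encoder protocol and is also the only thing that ever calls it; users just call `MyJSON.encode!/1`.

```elixir
defprotocol MyJSON.Encoder do
  @doc "Returns the JSON representation of `value` as iodata."
  def encode(value)
end

defimpl MyJSON.Encoder, for: Integer do
  def encode(int), do: Integer.to_string(int)
end

defimpl MyJSON.Encoder, for: Atom do
  def encode(true), do: "true"
  def encode(false), do: "false"
  def encode(nil), do: "null"
end

defmodule MyJSON do
  # The protocol is invoked here, inside the library, not by users.
  def encode!(value), do: IO.iodata_to_binary(MyJSON.Encoder.encode(value))
end
```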

Can you give a practical example of a current protocol existing inside a library that would be better outside of the library?

1 Like

I agree credit card processing is a contrived example, but if we extend it to payment processing in general it might be more practical. You might be doing a charge against a credit card, against a bank account, or against whatever other payment method, etc.
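Something like this hypothetical sketch, where the shared protocol dispatches on the payment method’s struct type (all names invented):

```elixir
defprotocol PaymentMethod do
  @doc "Charges `amount` (in cents) against the given payment method."
  def charge(method, amount)
end

defmodule CreditCard do
  defstruct [:number, :expiration]
end

defmodule BankAccount do
  defstruct [:aba_number, :account_number]
end

defimpl PaymentMethod, for: CreditCard do
  def charge(card, amount), do: {:ok, {:card_charge, card.number, amount}}
end

defimpl PaymentMethod, for: BankAccount do
  def charge(acct, amount), do: {:ok, {:ach_debit, acct.aba_number, amount}}
end

# PaymentMethod.charge(%CreditCard{number: "4111111111111111"}, 10_00)
```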

1 Like

What is the data structure you are implementing the protocol for in your example, and what is the purpose of the protocol? Credit card processing can mean many things: is it validation of the card, making a charge against the card, or associating the card with a subscription or customer? Is the data structure the transaction itself? Why should the transaction data structure be different per payment method?

I can’t think of any protocol in the wild where the point of the protocol is that the user should call it. For all the protocols I can think of, it is actually the library that defines the protocol that calls it. I am probably missing some instances, but this makes me feel you are trying to use protocols for something where there are better abstractions.

I think you need to show an example of something existing in the wild that would be improved by your proposed change. It’s hard to discuss hypotheticals, and hypothetical improvements shouldn’t be the basis of a large change to the library ecosystem.

3 Likes

I am definitely not suggesting any major change, just wanted to get opinions. OK, let’s use JSON, which you brought up.
Say my app depends on Poison and implements the relevant protocol for my %DomainSpecificThing{}. Now, for the sake of argument, @OvermindDL1 created a super-optimized, mega-fast JSON lib and I want to switch to it; if the protocol were defined in some separate package, that would not need many changes on my end. Getting back to payment processing: the operations are generally the same, yet the data structures would be different, e.g. for a bank account in the US it would be an ABA number and a bank account number, for a credit card it would be the card number and expiration date, etc.
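For concreteness, the Poison-specific piece would look roughly like this (using the hypothetical %DomainSpecificThing{} struct from above):

```elixir
defimpl Poison.Encoder, for: DomainSpecificThing do
  # Note the coupling: this delegates back into Poison's own Map
  # implementation, so the impl is tied to Poison specifically.
  def encode(thing, options) do
    thing
    |> Map.from_struct()
    |> Poison.Encoder.Map.encode(options)
  end
end
```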

2 Likes

Hearing about an ultra fast JSON lib, I feel somehow summoned :laughing:

Chances are that the protocol requirements for that ultra-fast lib would be different, and it might offer different callbacks or options; maintaining 100% compatibility across all features is generally hard. Additionally, when implementing a protocol for some library you usually use functions from that library in the implementation (Poison is a great example: you usually delegate the rendering back to Poison after building some data structure, as in the sketch above). There’s a very high chance that switching the consumer of the protocol would require switching the implementation.

5 Likes

This sounds more like a plugin API rather than a protocol API. Though I’ve thought about building an in-system optimizing plugin API, I’ve not done so yet.

The reason it sounds like a plugin API instead of a protocol API is that the examples you give seem to be:

  • There is an implementation/interface; its users do not know what is actually implementing it.
  • There is a single library that fulfills that interface; even though many such libraries exist, you will only be using one at a time.

Protocols, on the other hand, are for dispatching to different code based on the type/shape of a binding’s value.

Although using something like ProtocolEx you could define such a ‘plugin’ interface and even enforce only one thing implementing it pretty easily (the resolve function!), I’d not do so.

The way I would do it is via a method that even Erlang has built-in, behaviours. :slight_smile:

Specifically, you have a behaviour. You have the application configuration (in Elixir it is those nice config.exs and so forth files), one of the options specifies the module to use, and the behaviour-defining module just passes calls on to that configured module. It would not be quite as efficient as doing the same in ProtocolEx (you can get rid of one of the two cross-module calls, though technically if you made macro bouncers in the behaviour that would fix it too), but it is the method the BEAM is built for. :slight_smile:

And with behaviours you can even enforce at compile-time whether the specific module implements it fully or not. ^.^
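A minimal sketch of that behaviour-plus-config pattern, with hypothetical names (`JSONCodec`, `:my_app`, and so on):

```elixir
defmodule JSONCodec do
  @callback encode!(term) :: iodata
  @callback decode!(iodata) :: term

  # Facade: every call bounces to whichever module the app configured.
  def encode!(term), do: impl().encode!(term)
  def decode!(iodata), do: impl().decode!(iodata)

  defp impl, do: Application.get_env(:my_app, :json_codec, PoisonCodec)
end

defmodule PoisonCodec do
  @behaviour JSONCodec  # compiler warns if a callback is missing

  @impl true
  def encode!(term), do: Poison.encode!(term)
  @impl true
  def decode!(iodata), do: Poison.decode!(iodata)
end

# config/config.exs -- swap implementations without touching any callers:
#   config :my_app, :json_codec, SomeFasterCodec
```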

Hehe, use Jiffy I think it’s called; it is a C NIF JSON parsing library that blows Poison away (though no derivations, I think). ^.^

2 Likes

You are prob. right :slight_smile:

1 Like

I’ve wondered the same thing for stream-based processing. For example, in the JVM world we have Reactive Streams: http://www.reactive-streams.org/. It defines a low-level API for other projects to implement, which lets those projects interoperate while each provides higher-level APIs to its users.

This is how the JVM supports multiple RS projects (Akka Streams, JDK9 Stream, Swave, etc.), with each of these projects able to interoperate with the others.

In this case Elixir could have defined a reactive-streams protocol and then implemented GenStage and Flow on top of this protocol. Another project could implement this protocol and support additional DSLs or processing semantics while supporting interop (in theory).

At least I think it could have gone this way. I’m still not quite up to speed on advanced Elixir though.

2 Likes

GenStage defines the underlying message protocol, so in theory anybody could implement it and be compatible.

https://hexdocs.pm/gen_stage/GenStage.html#module-message-protocol-overview
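For illustration, a plain process can speak that protocol directly. Here is a rough sketch of a hand-rolled consumer, based on the message shapes in those docs (treat the exact tuple details as approximate):

```elixir
defmodule RawConsumer do
  # Subscribe to a GenStage producer and pull events with raw messages.
  def run(producer, demand \\ 10) do
    ref = Process.monitor(producer)
    send(producer, {:"$gen_producer", {self(), ref}, {:subscribe, nil, []}})
    send(producer, {:"$gen_producer", {self(), ref}, {:ask, demand}})
    loop(producer, ref)
  end

  defp loop(producer, ref) do
    receive do
      {:"$gen_consumer", {_pid, ^ref}, events} when is_list(events) ->
        IO.inspect(events, label: "events")
        # Re-issue demand for what we just consumed.
        send(producer, {:"$gen_producer", {self(), ref}, {:ask, length(events)}})
        loop(producer, ref)

      {:"$gen_consumer", {_pid, ^ref}, {:cancel, reason}} ->
        {:shutdown, reason}
    end
  end
end
```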

2 Likes

This defines a message protocol for interoperating with GenStage. It doesn’t provide a general stream processing API (with backpressure). That API shouldn’t require message passing.

So while this would help those wanting to communicate with GenStage only, a standard protocol would allow standard interop in both directions (Something <-> GenStage <-> SomethingElse).

1 Like

Also, GenStage itself is a useful user-level component abstraction, but Flows are likely where the interop would happen.

There is already interop between Flow and existing Elixir protocols like Enumerable. Now, if Flow/GenStage were more protocol-driven, it would be trivial to chain two completely separate projects together by integrating via the standard protocol.
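That existing interop already works today, for example (assuming the flow package is a dependency):

```elixir
# Any Enumerable can feed a Flow, and a Flow is itself Enumerable,
# so plain Enum functions can consume it.
1..1_000
|> Flow.from_enumerable()
|> Flow.map(&(&1 * 2))
|> Enum.sum()
```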

Expose a function that turns a sink in one project into a standard publisher, then use this as a source in the next project. Each project keeps its own user-level component model and flow/graph abstraction but can integrate with any other project that implements the same underlying standard protocol, just like many things integrate today via Collectable/Enumerable.
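Purely as a hypothetical sketch (none of these modules or helpers exist today), the protocol-driven version might reduce to:

```elixir
# A hypothetical shared protocol that both projects would implement.
defprotocol StreamInterop.Publisher do
  @doc "Subscribes `subscriber`; events then flow according to its demand."
  def subscribe(publisher, subscriber)
end

# Chaining two unrelated projects would then be (hypothetical helpers):
#
#   publisher = ProjectA.sink_to_publisher(sink)
#   ProjectB.source_from_publisher(publisher)
```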

2 Likes