Spear - a sharp EventStoreDB 20+ gRPC client backed by Mint

EventStoreDB is an append-only stream database designed for Event Sourcing. It champions immutable data streams and real-time subscriptions, making it easy to build reactive, eventually consistent systems with a built-in audit log of how data has changed. Version 20 deprecated the TCP+protobuf client-server interface and added a new gRPC+protobuf interface (among other improvements).

Spear (hex) (github) is a gRPC client library for EventStoreDB 20+. It provides a familiar t:Enumerable.t/0 interface for interacting with EventStoreDB using functions from Enum and Stream.

For example, you can stream a large CSV into events lazily: the enumeration isn't run until each event goes over the wire.

{:ok, conn} = Spear.Connection.start_link(connection_string: "esdb://localhost:2113")

# e.g. a MyCsvParser module using c:NimbleCSV.parse_stream/1
File.stream!("large.csv")
|> MyCsvParser.parse_stream()
|> Stream.map(&MyCsvParser.turn_csv_line_into_spear_event/1)
|> Spear.append(conn, "ChargesFromCsvs", timeout: 15_000)
# => :ok

You can also read EventStoreDB streams lazily via Elixir Streams:

# say we have 125 events in this EventStoreDB stream
iex> stream = Spear.stream!(conn, "SomeLongStream", chunk_size: 25)
#Stream<[
  enum: #Function<62.80860365/2 in Stream.unfold/2>,
  funs: [#Function<48.80860365/1 in Stream.map/2>]
]>
# the returned stream will fetch events in chunks of 25
iex> Enum.count(stream)
125

You can also work with subscriptions (regular and persistent) through asynchronous message passing:

iex> Spear.subscribe(conn, self(), "MyStream")
{:ok, #Reference<0.1469255564.3441164290.87228>}
iex> flush
%Spear.Event{..}
%Spear.Event{..}
%Spear.Event{..}
%Spear.Event{..}
..
:ok
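Since subscription events arrive as plain messages, any process can consume them with ordinary handle_info/2 callbacks. Here's a minimal sketch of a GenServer subscriber (the module name and logging are made up; it assumes events arrive as %Spear.Event{} structs, as the flush output above shows):

```elixir
defmodule MySubscriber do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl GenServer
  def init(opts) do
    conn = Keyword.fetch!(opts, :conn)
    # subscribe this process; events will be delivered as messages
    {:ok, subscription} = Spear.subscribe(conn, self(), "MyStream")
    {:ok, %{conn: conn, subscription: subscription}}
  end

  @impl GenServer
  def handle_info(%Spear.Event{} = event, state) do
    # handle each event as the server pushes it
    IO.inspect(event, label: "received")
    {:noreply, state}
  end
end
```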

Or if you’re looking for something a bit more fancy for real-time event processing with back-pressure, there are GenStage and Broadway producers in Volley (hex) for regular and persistent subscriptions, respectively.

Even if you’re not in the market for a new EventStoreDB client, Spear still has something :sunglasses: to offer: it’s an example of a gRPC client written with just Mint (hex)! gRPC is a pretty slim specification which extends HTTP2 with a message format and some headers, so it’s not such a heavy lift to implement a gRPC client given a nice HTTP2 client like Mint. Plus, Mint’s fine-grained control over each connection makes it possible to add features like efficient request streams which respect HTTP2 window-size back-pressure (via t:Enumerable.continuation/0s) and to be certain about blocking and multiplexing.
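To make that concrete, here's a rough sketch of gRPC framing over a raw Mint HTTP/2 connection. Each gRPC message is a protobuf-encoded payload prefixed by a one-byte compression flag and a four-byte big-endian length, sent as the body of an HTTP/2 POST. The service path and `encoded_request` below are placeholders, not Spear's actual internals:

```elixir
# gRPC length-prefixed framing: 1 flag byte (0 = uncompressed),
# 4 bytes of big-endian message length, then the protobuf payload
frame = fn message when is_binary(message) ->
  <<0, byte_size(message)::unsigned-big-integer-32, message::binary>>
end

{:ok, conn} = Mint.HTTP.connect(:http, "localhost", 2113, protocols: [:http2])

headers = [
  {"content-type", "application/grpc+proto"},
  {"te", "trailers"}
]

# open the request with a streaming body so messages can be written lazily,
# respecting the connection's HTTP2 window size as we go
{:ok, conn, request_ref} =
  Mint.HTTP.request(conn, "POST", "/some.package.Service/SomeRpc", headers, :stream)

{:ok, conn} = Mint.HTTP.stream_request_body(conn, request_ref, frame.(encoded_request))
{:ok, conn} = Mint.HTTP.stream_request_body(conn, request_ref, :eof)
# response data and trailers then arrive via Mint.HTTP.stream/2
```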

Give Spear and Volley a try and see what you think! Issues and PRs are always welcome :slightly_smiling_face:


Nice stuff! Learned about Spear from the EventStoreDB newsletter and looking forward to trying it out :slight_smile:
I’m so tired of rolling my own versions of ES :grinning_face_with_smiling_eyes:


Spear v0.10.0 has been released!

This release focuses on implementing the new features introduced in EventStoreDB v21.6.0. Some additions to Spear compatible with the new EventStoreDB version include:

  • connecting/creating/updating/deleting persistent subscriptions to the $all stream with server-side filtering
  • a new Spear.append_batch/5 function which takes advantage of a new RPC that can optimize append throughput
    • plus a Spear.append_batch_stream/2 function which wraps a stream of event batches in the necessary calls to Spear.append_batch/5, for convenience
  • a new Spear.subscribe_to_stats/3 function which can be used to form a subscription to the Monitoring API
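As a hedged sketch of how batch appending might look in practice (the exact shape of each enumerable element and the available options are best confirmed against the Spear HexDocs):

```elixir
# build events lazily and chunk them into batches of 500 before
# handing them to the batch-append wrapper
1..10_000
|> Stream.map(&Spear.Event.new("NumberNoted", %{number: &1}))
|> Stream.chunk_every(500)
|> Spear.append_batch_stream(conn)
```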

See the EventStoreDB 21.6 release post on the EventStore blog or the Spear HexDocs for more information about the new features.

Since Spear was first published, it has been featured on the EventStore blog and we (@NFIBrokerage) have begun using it for production connections.

Check out the Spear changelog for a breakdown of all the changes.


v0.11.0 was just published!

This release corresponds to the new v21.10.0 EventStoreDB release. v21.10.0 is the new LTS release, so it mostly focuses on stability and performance improvements.

There are only two new functions added in this version:

  • Spear.get_supported_rpcs/2 returns the list of implemented RPCs in the connected EventStoreDB
  • Spear.get_server_version/2 gets the connected EventStoreDB’s version string
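Both take an existing connection (plus optional opts); the return values below illustrate the shape rather than exact output:

```elixir
iex> Spear.get_server_version(conn)
{:ok, "21.10.0.0"}
iex> Spear.get_supported_rpcs(conn)
{:ok, [%Spear.SupportedRpc{..}, ..]}
```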

Also new in this release is the ability to use server-side filtering on the magic $all stream with Spear.stream!/3 and Spear.read_stream/3 via the :filter option:

filter = %Spear.Filter{on: :stream_name, by: ["StreamPrefix.A-", "StreamPrefix.B-"]}
Spear.stream!(conn, :all, filter: filter)

A feature we wanted so badly that we went and implemented it upstream in the database ourselves! :stuck_out_tongue:

Server-side filtering allows clients to efficiently read from multiple streams or event types at once, functionality that could previously only be accomplished with projections. While it’s unclear so far which approach uses fewer resources (something we intend to test), server-side filtering is nicer to work with than projections because filters leave no trace on the database, which means less operational burden for you! Server-side filtering was implemented in earlier EventStoreDB versions, but only for subscription-style reads, which lack back-pressure. With this change, our GenStage producer for Spear (Volley) can now use server-side filtering while still providing proper back-pressure.
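For illustration, the same :filter option should work for event types as well as stream-name prefixes (the event-type names below are made up; see the Spear.Filter docs for the full set of forms, including regular expressions):

```elixir
# filter the $all read down to specific event types
filter = %Spear.Filter{on: :event_type, by: ["OrderPlaced", "OrderShipped"]}
Spear.stream!(conn, :all, filter: filter)
```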

Check out the Spear changelog, and be sure to read through the wonderful v21.10.0 server release notes.


Hi @the-mikedavis,

Please forgive me if this is a stupid question, but can Spear do “quorum reads”?

There’s no support for this in Spear out of the box, but I believe you could craft your own by connecting to a cluster, reading Spear.cluster_info/2 to find all the replicas, then forming connections to a subset of those and performing a read. If you’re just looking for the most up-to-date information, I believe you can simply read from the :Leader replica, though.

I haven’t heard much about quorum reads/writes in EventStoreDB, I think because you’re usually leaning into eventual consistency: usually you’re reading information that’s somewhat behind the source of truth.
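To sketch that idea (the field names and member-state atoms here are assumptions; check the Spear.cluster_info/2 and Spear.ClusterMember docs):

```elixir
{:ok, members} = Spear.cluster_info(conn)

# find the leader for the most up-to-date reads
leader = Enum.find(members, &(&1.state == :Leader))

# or gather a subset of followers for a DIY quorum read
followers = Enum.filter(members, &(&1.state == :Follower))
```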

Thank you, that makes sense.

How are you guys going about isolating data between Acceptance Tests?
We manage our suite with Cypress, not in parallel mode so far.

Btw, kudos for NFIBrokerage/beeline (GitHub), a tool for building in-order GenStage topologies for EventStoreDB. We are just starting out with ESDB and might give it a try for projecting read models.