We need to keep a lot of logs for our live Phoenix application, as it's transaction related. In production we realised the Logger was being overwhelmed and forced to discard important logs - a tough lesson we had to learn. As a workaround, we deployed HAProxy and multiple instances of the Phoenix application, which solves the Logger issue. Is this the right solution, or could we design our system differently and do away with HAProxy?
I feel that you are using Logger differently than it is meant to be used. If you want transactional logs, then you probably want all of them to be synchronous, whereas Logger stays asynchronous as long as possible (that is why there is dropping: to not overwhelm the backends).
I might have missed setting sync_threshold to 0 to force sync mode. Would a single instance of the application suffice on a 32-core machine with 32 GB of RAM, compared to having multiple instances load balanced with HAProxy?
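Something like the following is what I had in mind - a sketch assuming an Elixir version where Logger still does its own overload protection; the exact values are just examples from my side:

```elixir
# config/prod.exs
config :logger,
  # switch to synchronous logging as soon as any messages are queued;
  # 0 effectively forces sync mode for every Logger call
  sync_threshold: 0,
  # messages are only discarded once the queue grows past this size
  discard_threshold: 10_000
```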
Why not use the distribution built into the BEAM instead of doing it externally?
Also, you may not want to use Logger for something where you care about not losing data, because Logger was not built for that. You may want to try:
This is used in Event Sourcing to fulfill the same needs you mentioned:
@slashdotdash may give you some more pointers on how you can use it at scale.
Why not use the distribution built into the BEAM instead of doing it externally?
Kindly elaborate on this or share a resource; would e.g. libcluster replace HAProxy seamlessly and provide sticky sessions, etc.?
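For context, this is roughly the setup I picture from the libcluster README (module names are placeholders and the Gossip strategy is just an assumption; I haven't tried this yet):

```elixir
# config/config.exs - pick the strategy that matches your environment
config :libcluster,
  topologies: [
    my_app_cluster: [
      strategy: Cluster.Strategy.Gossip
    ]
  ]

# lib/my_app/application.ex
defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    topologies = Application.get_env(:libcluster, :topologies, [])

    children = [
      # connects the BEAM nodes into a cluster
      {Cluster.Supervisor, [topologies, [name: MyApp.ClusterSupervisor]]}
      # ... Repo, Endpoint and the rest of the supervision tree
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```

My understanding is that this only clusters the nodes; something still has to route HTTP traffic, hence the question about sticky sessions.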
Also, you may not want to use Logger for something where you care about not losing data, because Logger was not built for that.
The transaction is eventually persisted into PostgreSQL; logging is to help reconstruct the full request/API calls in the event of a failure. I will, however, peruse the resource and see how best to use it.
That’s exactly what Event Sourcing does: it stores events to be able to replay them later when needed, and that’s why @slashdotdash built the EventStore. So, in part, you are doing a kind of Event Sourcing without knowing you are doing it:
It does? But the library specifically states that it uses PostgreSQL, and the guide pages assume it. How do you use another database, or something that isn’t a database at all, like Kafka?
@kodepett For you to understand better how an event store works, I recommend you read these articles by @alvises:
or
In the articles you will learn that a disk-based event store appends key:value entries to one file and keeps an index of them in another file, just like Kafka does:
Reading the above articles will give you a very deep understanding of how event logs work and how they can be stored, leaving you in a much better position to understand the trade-offs when deciding how to proceed in your application.
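To make that concrete, here is a minimal sketch of the log-plus-index idea in Elixir - hypothetical module name, no fsync or crash recovery, purely to illustrate the principle the articles describe:

```elixir
defmodule SketchLog do
  @moduledoc """
  Toy append-only store: binary values are appended to a single log file,
  and an in-memory index maps each key to {offset, size} within that file.
  """

  defstruct device: nil, index: %{}

  def open(path \\ "events.log") do
    # :read + :write so the existing file is not truncated and we can pread
    {:ok, device} = File.open(path, [:read, :write, :binary])
    %__MODULE__{device: device}
  end

  # Append the value at the end of the file and record where it lives.
  def put(%__MODULE__{device: device, index: index} = store, key, value)
      when is_binary(value) do
    {:ok, offset} = :file.position(device, :eof)
    :ok = IO.binwrite(device, value)
    %{store | index: Map.put(index, key, {offset, byte_size(value)})}
  end

  # Read a value back via the index, without scanning the whole log.
  def get(%__MODULE__{device: device, index: index}, key) do
    case Map.fetch(index, key) do
      {:ok, {offset, size}} -> :file.pread(device, offset, size)
      :error -> :not_found
    end
  end
end
```

A real store would also persist the index to its own file so it can be reloaded or rebuilt on restart, which is the second file the articles (and Kafka's segment indexes) refer to.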
A simple disk-based Event Log library whose code you can read to understand the principles of key:value storage with indexes:
A distributed and production-ready library for Kafka-like event logs:
Vonnegut is an append-only log that follows the file format and API of Kafka 1.0. The server can be run standalone, with 1 or more chains each with 1 or more replicas, or as part of another Erlang release which can talk to it directly.
Thanks, but doesn’t that mandate you to use Commanded as well? I guess it’s been a while since I last checked it, but for me using a relational DB for an event store is a no-go. It doesn’t scale well.
After being in a number of such projects I’ll just opt for Kafka or RabbitMQ. It’s too much pain trying to use an RDBMS as an append-only log with data expiration.
Kafka is amazing but the deployment story is quite bad – you need a Zookeeper most of the time, too. So either use a heavily managed Kafka hosting or suffer through putting Kafka in Docker/Kubernetes and never do it again.
In this series of articles we will see the different concepts behind a key-value store engine, implementing a simple engine in Elixir.
Excellent stuff. I’m done going through it and implementing the solution. I need to read a bit more on the binary aspect covered in part 2, which is what I’m currently doing. I will update you once I’m done with the resources shared, along with my general opinion based on the knowledge acquired.
Can you share a resource that delves into Elixir Binaries/Bitstrings?
I’m perusing one resource, and I need clarification on storing integers/floats as binaries, e.g.
<<1, 2, 3>>
Is the above equivalent to 123 or `"123"`? A quick check in iex returns false. What does the above represent? Sorry, I have to ask a lot of questions.
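For context, here is the quick check I ran, with what I think (guessing from what I’ve read so far) each side actually is:

```elixir
iex> <<1, 2, 3>> == 123      # a 3-byte binary vs. an integer
false
iex> <<1, 2, 3>> == "123"    # "123" seems to be the bytes 49, 50, 51 (ASCII), not 1, 2, 3
false
iex> "123" == <<49, 50, 51>>
true
```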