Logging too many log requests

We need to keep a lot of logs for our live Phoenix application, as it is transaction related. In production we realised the logger was being overwhelmed, forcing it to discard important logs. It was a tough lesson, but one we had to learn. As a workaround, we deployed HAProxy and multiple instances of the Phoenix application. This solves the logger issue, but is this solution right, or could we design our system differently and do away with HAProxy?

I feel that you are using Logger differently from how it is meant to be used. If you want transactional logs, then you probably want to make all of them synchronous, whereas Logger stays asynchronous for as long as possible (that is why there is dropping: to avoid overwhelming the backends).

Are you using distributed Elixir?

I’m using a single node with 32 cores.

I might have missed setting sync_threshold to 0 to force sync mode. Will a single instance of the application suffice on a 32-core machine with 32 GB of RAM, compared to running multiple instances and load balancing with HAProxy?
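For reference, this is roughly what that configuration looks like — a sketch assuming the classic Elixir Logger options (`sync_threshold` and `discard_threshold` were part of Logger's own config before Elixir 1.15 moved to Erlang's `:logger`); the exact values here are illustrative, not recommendations:

```elixir
# config/prod.exs
# Sketch of forcing Logger into synchronous mode on pre-1.15 Elixir.
config :logger,
  # Logger switches to sync mode once this many messages are queued;
  # 0 makes every Logger call block until the backend has handled it.
  sync_threshold: 0,
  # Messages are dropped once the queue exceeds this; raise it if you
  # would rather apply backpressure than discard.
  discard_threshold: 10_000
```

Note that forcing sync mode trades throughput for delivery: every process that logs will now wait on the logger.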

Curious, what kind of volume were you doing?

What Logger backend?

I’m using the default console backend. Kindly clarify what you mean by “what kind of volume were you doing?”

Why not use the distribution built into the BEAM instead of doing it externally?

Also, you may not want to use Logger for data you care about not losing, because Logger was not built for that. You may want to try:

This is used in Event Sourcing to fulfil the same needs you mentioned:

@slashdotdash may give you some more pointers on how you can use it at scale.

Why not use the distribution built into the BEAM instead of doing it externally?

Kindly elaborate on this or share a resource; will e.g. libcluster replace HAProxy seamlessly and provide sticky sessions, etc.?

Also, you may not want to use Logger for data you care about not losing, because Logger was not built for that.

The transaction is eventually persisted into PostgreSQL; logging is there to help reconstruct the full request/API calls in the event of a failure. I will, however, peruse the resources and see how best to use them.

In that case Logger isn’t what you are looking for. You need an event store instead of a logging platform. Both are in fact logs, but they serve different purposes.

That’s exactly what Event Sourcing does: it stores events so they can be replayed later when needed, and that’s why @slashdotdash built the EventStore. So, in part, you are already doing a kind of Event Sourcing without knowing it:

https://cqrs.nu/

Check the RealWorld example of CQRS in Elixir by @slashdotdash:

Now, if you don’t want to use Postgres as the persistence layer for the events, the EventStore allows you to provide another storage backend.

You can also read his in-progress book:

What you don’t want to do is use any logging library, because logging libraries are built for logging, and in logging it is OK to lose data.


Maybe this guide can help you:

Or:

The library from Discord to solve distributed sessions:

Kudos for the rich resources provided; I’ve got a lot of reading to do. I will revert with any questions should the need arise.

It does? But the library specifically states that it uses PostgreSQL, and the guide pages assume it. How do you use another database, or something that isn’t even a DB, like Kafka?

@kodepett For you to understand better how an event store works, I recommend you read these articles by @alvises:

or

In the articles you will learn that a disk event store uses a key:value append-only approach into a file and keeps its index in another file, just like Kafka does:

Reading the above articles will give you a deep understanding of how event logs work and how they can be stored, leaving you in a much better position to understand the trade-offs when deciding how to proceed in your application.
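To make the principle concrete, here is a toy append-only log in Elixir: values are appended to a single data file, and an in-memory map serves as the index from key to `{offset, size}` of the latest value. The module name and API are hypothetical, purely for illustration; a real store would also persist the index, checksum entries, and handle crashes mid-write.

```elixir
defmodule ToyLog do
  # Open (or create) the data file and start with an empty index.
  def open(path) do
    {File.open!(path, [:append, :read, :binary]), %{}}
  end

  # Append the value at the end of the file and record where it landed.
  # Updating a key never rewrites old data, it only moves the index entry.
  def put({file, index}, key, value) do
    {:ok, offset} = :file.position(file, :eof)
    IO.binwrite(file, value)
    {file, Map.put(index, key, {offset, byte_size(value)})}
  end

  # Look up the key in the index, then read exactly that slice of the file.
  def get({file, index}, key) do
    case Map.fetch(index, key) do
      {:ok, {offset, size}} ->
        {:ok, data} = :file.pread(file, offset, size)
        {:ok, data}

      :error ->
        :not_found
    end
  end
end
```

Usage would look like `log = ToyLog.open("events.dat"); log = ToyLog.put(log, "greeting", "hej"); ToyLog.get(log, "greeting")`. Kafka applies the same idea per partition, with the index itself stored in a sidecar file so it survives restarts.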

A simple disk-based Event Log library whose code you can read to understand the principles of key:value storage with indexes:

A distributed, production-ready library for Kafka-like event logs:


Related links in the forum:

See this reply from the author of the library:

Maybe this one:

Vonnegut is an append-only log that follows the file format and API of Kafka 1.0. The server can be run standalone, with one or more chains each with one or more replicas, or as part of another Erlang release, which can talk to it directly.

Did not know about that vonnegut one. Good point.

Thanks, but doesn’t that mandate you to use Commanded as well? I guess it has been a while since I last checked it, but for me, using a relational DB for an event store is a no-go. It doesn’t scale well.

As far as I am aware you don’t need to use Commanded, but @slashdotdash is the best one to answer that :wink:

I am of the same opinion, and that’s why I am looking at alternatives: I need a safe backup so that I can recover from a Mnesia disaster:

Probably I will replace the default Mnesia with:

Despite looking at the safer Mnevis as an alternative to Mnesia, I still want to use a proper event store to be on the safe side… just in case :wink:

After being on a number of such projects, I’ll just opt for Kafka or RabbitMQ. It’s too much pain trying to use an RDBMS as an append-only log with data expiration.

Kafka is amazing, but the deployment story is quite bad; you need a ZooKeeper most of the time, too. So either use a heavily managed Kafka hosting or suffer through putting Kafka in Docker/Kubernetes and never do it again. :003:

In this series of articles, we will see the different concepts behind a key-value store engine, implementing a simple engine in Elixir.

Excellent stuff. I’m done going through it and implementing the solution. I need to read a bit more on the binary aspect covered in part 2, which is what I’m currently doing. I will update you once I’m done with the resources shared, along with my general opinion based on the knowledge acquired.

Can you share a resource that delves into Elixir binaries/bitstrings?
I’m perusing one resource and need clarification on storing integers/floats as binaries, e.g.

<<1, 2, 3>>

Is the above equivalent to 123 or to `"123"`? A quick check in iex returns false. What does the above represent? Sorry, I have to ask a lot of questions.
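It is neither. `<<1, 2, 3>>` is a binary of three raw bytes whose values are 1, 2 and 3, while `"123"` is the three bytes 49, 50, 51 (the UTF-8/ASCII codes of the characters). To store the *number* 123 you choose an encoding, for example a fixed-width integer segment. A quick sketch you can paste into iex:

```elixir
# <<1, 2, 3>> is three raw bytes, not the number 123 and not the string "123".
true = <<1, 2, 3>> != "123"
# "123" is itself a binary: the ASCII bytes of the characters '1', '2', '3'.
true = "123" == <<49, 50, 51>>

# To store the integer 123 you pick a width: one unsigned byte,
# or e.g. a 64-bit big-endian integer (the default endianness).
<<123>> = <<123::8>>
<<0, 0, 0, 0, 0, 0, 0, 123>> = <<123::64>>

# Floats are encoded as 64-bit IEEE 754 doubles by default,
# and can be pattern-matched back out:
<<f::float>> = <<1.5::float>>
1.5 = f
```

So when an event store writes an integer or float to disk, it writes a fixed-width segment like `<<value::64>>`, which is why the index can know each entry’s byte size in advance.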