Best product/practice for aggregating mix test results from CI systems

This is rather a broad question, so let me start with some background.

Let's say we had a customer, and the customer wanted a way to view project progress - meaning modules, tests, passing tests, failing tests, and test coverage.

So what we implemented was a system whereby, every time the CI system ran, the test results were parsed and uploaded to a web server somewhere, and the aggregate test results were rendered into an HTML table.
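For illustration, the parsing half of that kind of ad-hoc setup might look something like this. This is only a sketch of one possible approach, not the original implementation; the module name, the regex, and the map keys are all assumptions:

```elixir
# Illustrative sketch: extract the counts from the summary line that
# `mix test` prints at the end of a run, e.g. "12 tests, 2 failures",
# so they can be uploaded and rendered into a table elsewhere.
defmodule TestResultParser do
  def parse_summary(output) do
    case Regex.run(~r/(\d+) tests?, (\d+) failures?/, output) do
      [_, total, failures] ->
        total = String.to_integer(total)
        failures = String.to_integer(failures)
        %{total: total, failures: failures, passing: total - failures}

      nil ->
        # Summary line not found (e.g. the run crashed before reporting)
        :error
    end
  end
end
```

In a CI step you would capture the output of `mix test`, feed it through something like the above, and POST the resulting map to wherever the aggregate page is served from.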

This was very ad-hoc, and quite ugly. Has anyone used a product that provided a nicer, standard way of achieving the same thing?

Something like this maybe?

Does Coveralls work if you split tests into pipelines?

IIRC it has such an option. You can even mark which step each report comes from and check them one by one as well as merged.
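For reference, the common way to wire coverage reporting into an Elixir project is via excoveralls in `mix.exs`. A sketch of that standard setup (`:my_app` and the version numbers are placeholders); each CI job can then run `mix coveralls.post` to upload its own report:

```elixir
# Sketch of the usual excoveralls wiring in mix.exs; :my_app is a placeholder.
def project do
  [
    app: :my_app,
    version: "0.1.0",
    elixir: "~> 1.14",
    # Route `mix test --cover` through ExCoveralls
    test_coverage: [tool: ExCoveralls],
    # Make the coveralls mix tasks run in the :test environment
    preferred_cli_env: [
      coveralls: :test,
      "coveralls.post": :test,
      "coveralls.html": :test
    ],
    deps: deps()
  ]
end

defp deps do
  [
    {:excoveralls, "~> 0.18", only: :test}
  ]
end
```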


Why is the customer monitoring such developer-specific concerns rather than being interested in a more product-oriented set of release notes or change logs? This is a strange thing to solve for, IMO. It feels micro-manage-y, since those things (especially test coverage) rarely speak to “does this provide good utility to me?”

Most orgs don’t expose, for example, internal task-tracking data to their customers, but rather a more curated, high-level roadmap. I don’t know that higher fidelity would be productive. This feels like a similar decision to face.


Ah yes, like that, but self-hosted

I found OpenCov, which is even written in Elixir, but I haven’t tried it out yet.


That, sir, is exactly what I’m looking for, thank you


It’s for the case where we have to overhaul a big financial system that has major performance/reliability issues, and NO automated testing (that’s a LOT more common than you’d think haha).

We aren’t tasked with implementing new functionality, just lots of bug fixes, rewriting, and optimisation.

I agree, this would not be something you’d bother sharing with the client in the usual case - developers tend to be self-regulating.