Dear Elixir community,
I want to make a short announcement about my open-source web scraping project, which lets you:
- Manage your crawl jobs
- Visualize extracted data
- Perform a search on extracted fields
One of the challenges of web scraping is data visualization. Many projects only let you fetch the data and export it as JSON somewhere. However, we’ve noticed that delivering good-quality datasets requires a clear process that includes QA of the data.
You can fork it here: https://github.com/oltarasenko/crawly_ui
Alternatively, you can play with a deployed version of it: http://crawlyui.com/
We hope you find it useful!
If you have a suggestion or a production use case you’re happy for us to share, please get in touch.