My current company’s ecommerce website is on Drupal 7. I am trying to find a way (excuse) to use Elixir more in my workplace. In order to achieve this, I was thinking of creating a front end testing tool using Elixir.
My Question:
Has anyone here created a front end testing tool using Elixir which tests a website on a recurring basis?
Is this a good idea? Should I abandon Elixir for front end testing and stick to something like Cypress?
Hound works very well for this. I'm currently deploying Hound against a Django BFF that runs in front of a Django backend, and it has been incredibly effective at demonstrating that the Elixir replacement backend I wrote is a solid substitute that doesn't die after long-term burn-in testing.
I can't compare it to much else (it's been ages since I used Capybara), but I wrote a repeating test, and having it be a supervised process in Elixir meant I didn't have to nail down every corner case (my tests also called out to other services, like SSH, to confirm side effects): if it crashed, the tests would restart from a safe state. A rough sketch of the pattern is below.
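For anyone curious, a minimal sketch of what that supervised, repeating Hound test can look like. The module name, URL, and interval here are illustrative (not the actual test suite), and it assumes `{:hound, "~> 1.1"}` in your deps with chromedriver already running:

```elixir
# Rough sketch: a permanently restarted Task that replays a user story with Hound.
# Assumes `config :hound, driver: "chrome_driver"` and a running chromedriver.
defmodule QaRunner do
  use Task, restart: :permanent
  use Hound.Helpers

  def start_link(_arg) do
    Task.start_link(__MODULE__, :loop, [])
  end

  def loop do
    Hound.start_session()
    navigate_to("https://staging.example.com/")

    # ...walk through the user story here; if anything crashes, the
    # supervisor restarts this process and the loop resumes from scratch.

    Hound.end_session()
    Process.sleep(:timer.minutes(1))
    loop()
  end
end

# Somewhere in the application's supervision tree:
# Supervisor.start_link([QaRunner], strategy: :one_for_one)
```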
I tried Wallaby, but there were a few things it simply couldn't trigger as reliably as Hound does, so I stuck with Hound, even though its imperative syntax is less to my liking. I got used to it, though.
Thanks @tme_317 and @ityonemo for your insight. I'm going to take a look at both Wallaby and Hound. Ideally I would like to go with the library that seems to be more actively built out.
## Question
I'm bringing this post back to life to ask the following question: is it possible to use Wallaby or Hound to test external sites, outside of ExUnit?
## Context
I want to build front end QA tooling that exposes a set of endpoints I can call from another website. Essentially, I want to use the QA tool from website A to test website A. I understand this seems unnecessary, but I feel it would be a good way for me to introduce Elixir to my company without impacting business logic.
Yes, 100%. At work I replaced a Python backend with Elixir (I'm coming for the Python frontend next) and tested that it worked by running a user story continuously with Hound for one week in our staging/lab environment (against the Python frontend). That uncovered several errors, and since deploying to prod there have been zero backend errors.
As a pro tip, if you do this, BEAM-supervised restarts are key. I found that I needed to `System.cmd("killall", ["chromedriver"])` on each restart, because chromedriver could become hopelessly lost in the face of our high-latency Python frontend, which doesn't handle conflicting state very well, but that's another story altogether.
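Concretely, that cleanup can sit at the top of the loop from the sketch earlier in the thread. This is just an illustration: it assumes `killall` exists on the host and that something outside the BEAM (a process manager, the release) respawns chromedriver after it's killed:

```elixir
# Variation on the earlier loop: kill any stray chromedriver before each pass,
# so a wedged driver from a previous crash can't poison the next session.
# The exit status is ignored when no chromedriver process was found.
def loop do
  _ = System.cmd("killall", ["chromedriver"], stderr_to_stdout: true)

  Hound.start_session()
  # ...user story steps...
  Hound.end_session()

  Process.sleep(:timer.minutes(1))
  loop()
end
```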
Sadly, I don't have an open source version of the tester. I did kind of write a testing framework, but it's not really ready to be open sourced anytime soon. It's also somewhat tightly integrated into a set of releases, which is probably more than you would need.
In your case I would recommend making a custom Mix task, something along the lines of the sketch below.
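A minimal sketch of such a task, running Hound against an external site entirely outside of ExUnit. The task name, URL, and title check are illustrative assumptions; it presumes Hound is configured (`config :hound, driver: "chrome_driver"`) and chromedriver is already running:

```elixir
# lib/mix/tasks/qa.smoke.ex — hypothetical smoke-check task, not a real package.
defmodule Mix.Tasks.Qa.Smoke do
  use Mix.Task
  use Hound.Helpers

  @shortdoc "Runs a browser smoke check against an external site"

  @impl Mix.Task
  def run(_args) do
    # Hound is an OTP app, so start it explicitly since we're not in ExUnit.
    {:ok, _} = Application.ensure_all_started(:hound)

    Hound.start_session()
    navigate_to("https://www.example.com/")
    title = page_title()
    Hound.end_session()

    if String.contains?(title, "Example") do
      Mix.shell().info("Smoke check passed: #{title}")
    else
      Mix.raise("Unexpected page title: #{title}")
    end
  end
end
```

You could then run it with `mix qa.smoke`, or wrap it in a scheduler or endpoint if you want it triggered from another site.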