Wallaby vs Hound?



There's a new tool for acceptance testing called Wallaby, and I was wondering if anyone has experience with it. Specifically, I was wondering how it compares to Hound, as that's what I'm using right now.

From this blog post, it seems like it might be faster because it takes advantage of Ecto 2.0 features and runs all the tests concurrently?

Selenium driver for Elixir, with JavaScript support

@keathley is a member here so hopefully he will spot this and add his thoughts :003:


I like the way Wallaby tests read, and the concurrent testing is super attractive as well. Hound doesn't seem as readable to me, though both get the job done.


Sorry for the late reply. I’ve been traveling for a few days.

Hound and Wallaby are similar in a lot of ways but different in a few key areas. I’ll try to break down what I think are the key differences and highlight why we wanted to build Wallaby in the first place.


The API design is the biggest difference between the two libraries. Both Wallaby and Hound use WebDriver to talk to the browser, so under the hood they're using a lot of the same functionality. The key differences are in the developer experience.

Wallaby's API was designed in the same spirit as Capybara's API. It defaults to finding elements by CSS and allows you to interact with form elements by their label text, name, or id. Under the hood it uses XPath to do these more dynamic selections. This allows you to write interactions like:

```elixir
session
|> fill_in("First Name", with: "Chris")
|> fill_in("last_name", with: "Keathley")
```

I find that writing tests this way conveys more meaning, both to you and to other people working with the code.

By default, Wallaby will only allow you to interact with elements that a user would see. If an element exists on the page but isn't visible to a user, Wallaby will block until it either becomes visible or times out and raises an error.

Hound's API is a bit more explicit than Wallaby's. For instance, with finders you specify the "strategy" that you want to use to select elements on the page:

```elixir
find_element(:name, "message")
find_element(:css, ".message")
```

As far as I know, Hound will allow you to select any element on the page, even if it's not visible to the user. I hope that HashNuke or tuvistavie will correct me if I'm wrong about this.


Wallaby (currently) only supports PhantomJS. However, we manage the PhantomJS processes for you. When Wallaby starts up, we start a pool of Phantoms, and each test case checks out one of them before it runs. That way we get test isolation between browsers and don't pay the cost of starting and stopping PhantomJS for every test.
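If you want to tune how many Phantoms are in the pool, that's a one-line config. A minimal sketch (the option name and default have been stable in the README, but may differ between Wallaby versions):

```elixir
# config/test.exs — a sketch; check your Wallaby version's docs
use Mix.Config

# Number of PhantomJS processes Wallaby keeps in its pool.
# Defaults to a multiple of your scheduler count if unset.
config :wallaby, pool_size: 4
```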

Hound supports multiple browser tools (Selenium, PhantomJS, etc.). However, it doesn't manage any of these for you: you have to start the driver yourself and tell Hound which one you're using.
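For comparison, pointing Hound at a driver is also just config, but you're responsible for running the driver process (e.g. `phantomjs --wd`) before the tests start. A sketch, assuming PhantomJS:

```elixir
# config/test.exs — a sketch; start `phantomjs --wd` yourself before
# running the suite, since Hound doesn't manage the process for you
use Mix.Config

config :hound, driver: "phantomjs"
```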


Wallaby and Hound BOTH support concurrent tests with Phoenix using the SQL Sandbox plug in phoenix_ecto. The difference is that Wallaby uses multiple browsers simultaneously. I haven't benchmarked Hound against Wallaby, so I can't speak to the relative performance of either.
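The sandbox wiring looks roughly the same for either library. A sketch, assuming an app named `MyApp` (all module names here are illustrative; the phoenix_ecto docs have the canonical version):

```elixir
# Prerequisite: add `plug Phoenix.Ecto.SQL.Sandbox` to MyApp.Endpoint
# for the test environment.

# test/test_helper.exs
ExUnit.start()

# Manual mode: each test explicitly checks out its own connection.
Ecto.Adapters.SQL.Sandbox.mode(MyApp.Repo, :manual)

# In each feature test's setup:
:ok = Ecto.Adapters.SQL.Sandbox.checkout(MyApp.Repo)

# Encode this test process's sandbox ownership into metadata that the
# browser sends back (via the user agent), so server-side requests
# join the same transaction as the test.
metadata = Phoenix.Ecto.SQL.Sandbox.metadata_for(MyApp.Repo, self())
{:ok, session} = Wallaby.start_session(metadata: metadata)
```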

There are pros and cons to both tools. I have a ton of respect for the work that HashNuke and tuvistavie are doing. Neither tool is necessarily better. They just have different design goals and opinions. I recommend that you check out both of them and go with the one that makes the most sense for you and your team.

Running async tests with Hound?

Sorry to bring back this old post, but does Wallaby support testing of SPAs?


Totally! In fact, the majority of applications I've worked on with Wallaby have been large JS clients in front of a Phoenix API.


Great to hear! I'm trying to use it on a new SPA project, and also in a different scenario: web crawling.


@keathley: I'm on a team that uses HTTPoison, Hound, Floki, and ExVCR to scrape data from pages. I want to ask whether we should consider your library for our next projects. We mostly parse old ASP.NET pages, and we've had some problems with them.

The biggest problems are with sessions, identical URLs, and caching responses. Imagine a slow site with a limited session lifetime; for one site we need to start a new session in the middle of a run of requests. Caching responses was also hard in some situations: a page that depends on data in the session (or part of it) returns different responses for the same URL. For example, you have to submit a form n times, where n is the number of options in a select tag. And that's not the end of it. Some pages use JavaScript to make changes in the session and then redirect to another page. We also have a topic about one such problem: how long do we need to wait for JavaScript execution?

Some sites make our work really hard, and any new helpful libraries are welcome. :smile: Is your API useful, or at least better than other packages, when scraping in edge cases like these? I'm not thinking about changing our current project(s), but this could affect my future projects with similar targets.


Thank you very much for this write up, I appreciate it!