Assert_value - ExUnit's assert on steroids that writes and updates tests for you

Hi everyone,

We are releasing assert_value. assert_value is ExUnit’s assert on steroids that writes and updates tests for you.

You can use assert_value instead of ExUnit’s assert. It makes Elixir tests interactive and lets you create and update expected values with a single key press.

Here is a simple example. Start with a broken test:

assert_value "foo\n" == """
bar
"""

Run tests as usual. assert_value will show the diff and ask you what to do. Here we like the new value and tell assert_value to accept it:

~/> mix test

test/my_test.exs:6:"test example" assert_value "foo\n" == "bar\n" failed

-bar
+foo

Accept new value? [y,n,?] y

Finished in 1.0 seconds
1 test, 0 failures

Your test will be automatically updated:

assert_value "foo\n" == """
foo
"""


assert_value:

  • makes writing tests easier by automatically generating expected values
  • makes maintaining tests and refactoring code much easier
  • improves test readability

You will find usage examples and documentation in the README on GitHub.

We appreciate all feedback.

P.S. We also have a Ruby version of this library. The Elixir version turned out to be a substantial improvement over the Ruby one. Thanks to macros we are able to use a more natural, composable, and extensible syntax. Another huge advantage is async tests, which assert_value fully supports.


This is very, very cool. Mutating the test file interactively is a great way of getting over the natural laziness of writing tests in cases that are not a good fit for property testing. There seems to be a limitation in that it only works for strings, right? Why don’t you just convert the value into an Erlang term and store it in a file (or even write it inline)? It’s strictly better than using strings, in my opinion.

Now, I might have a use for this. I’m the author of the Makeup syntax highlighting library. It badly needs some automated tests (it’s > 1000k lines of code with almost no tests), as currently all testing is visual integration testing by examining the output (there is some low-hanging fruit to be had from property testing, but that’s limited).

assert_value seems like a good tool for the job: I can just dump some source code snippets somewhere, run assert_value on them, and compare the output (at least the tokenization; comparing the HTML output is harder). Ultimately, this is what I’d like to have:

I’m comparing the visual output of my program, and it happens to be in HTML. I’d like to select some code snippets, have them converted into HTML and shown in some form of HTML canvas somewhere so that I could mark them as correct. The most immediate way I can think of to implement this is to have the tests start an HTTP server, start a web browser, and then ask the user whether the output is correct or not.

For example (ridiculous mockup I’ve done in 5mins):

And then repeat for all test cases.


This is awesome - had a similar idea recently about updating tests like this, so glad somebody else made it! Awesome work!


Co-author here. Thank you!

The left argument can be anything that implements the String.Chars protocol (String, Atom, Integer, Float, etc.). In all other cases you have to serialize the left argument to a string. The right argument should always be a string heredoc.

We use heredocs because we need to modify the test source code. Currently it is hard to search/replace values in the code because Elixir’s AST gives us only the line number without a column offset. With heredocs it is easy to search for """ and replace everything in between.

We are planning to support all argument types in Elixir 1.6. It will have formatter metadata in the AST, which will help a lot.
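In the meantime, serializing a non-string left argument can be as simple as calling inspect/1 on it first (a general Elixir sketch, not an assert_value API):

```elixir
# A Map does not implement String.Chars, so (before 0.8.2) it cannot be
# the left argument directly. inspect/1 turns it into a plain String
# that can then be compared against a heredoc.
value = %{a: 1, b: [2, 3]}
serialized = inspect(value)
IO.puts(serialized)
```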

Exactly! assert_value can test for expected values stored in files. You can even compare the contents of two files:

assert_value File.read!("input.txt") == File.read!("reference.txt")

It is smart enough to recognize File.read! as the right argument and will update the contents of the file. With Elixir macros you can easily create tests for a bunch of files in some directory:

defmodule Support do
  defmacro build_tests do
    filenames =
      Path.expand("data/*.txt", __DIR__)
      |> Path.wildcard

    for filename <- filenames do
      quote do
        test unquote(filename) do
          assert_value File.read!(unquote(filename)) ==
                         File.read!(unquote(filename <> ".ref"))
        end
      end
    end
  end
end

defmodule MyTest do
  use ExUnit.Case
  import AssertValue
  import Support

  build_tests()
end

Having something like this, you will have a separate test for each file, and any failed test will not break the other tests. And you can start with just input files, without reference files:

~/> mix test
test/my_test.exs:34:"test test/data/foo.txt" assert_value!("/home/examp... failed


Accept new value? [y,n,?] y
test/my_test.exs:34:"test test/data/bar.txt" assert_value!("/home/examp... failed


Accept new value? [y,n,?] y

Finished in 3.6 seconds
2 tests, 0 failures

Hope this helps.


This reminds me very much of the idea of approval tests.


Yes. Actually, I think the idea is very simple: you just generate data once, confirm that it’s the correct result and then check for regressions.

The trick is having a good implementation for the real bottleneck, which is human inspection of the results the first time they’re generated. assert_value seems to provide a good UI.

But there are other possible design decisions that might lead to different (and possibly better) UIs. For example:

  • assert_value is synchronous. It stops at each test result that hasn’t been checked. Why not make that async after all tests have run? One could gather all test results pending confirmation and show them all at once to the user once the test suite has run. I don’t know if this brings any benefits, just thinking about it.

  • Another possible improvement (although a bit specialized) for programs that generate reproducible HTML would be to show the results in a web browser through an embedded Plug app. The user could inspect the result visually instead of parsing it tag by tag. This is what I discussed above regarding Makeup. Often you want to use serializers like the authors of assert_value suggest, of course.
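The first bullet’s “gather and review later” idea could be sketched with an Agent that accumulates pending diffs during the run (PendingDiffs is an invented name, not part of assert_value):

```elixir
# Hypothetical sketch: instead of prompting as each diff appears,
# accumulate pending diffs in an Agent and review them all afterwards.
defmodule PendingDiffs do
  use Agent

  # Start the collector before the suite runs.
  def start_link(_opts \\ []) do
    Agent.start_link(fn -> [] end, name: __MODULE__)
  end

  # Called where assert_value would normally prompt the user.
  def record(test_name, diff) do
    Agent.update(__MODULE__, &[{test_name, diff} | &1])
  end

  # Called once after the suite finishes, to review everything at once.
  def review_all do
    Agent.get(__MODULE__, &Enum.reverse/1)
  end
end
```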

One could also generalize the concept to something like property testing, but in which the property is visual inspection by a human (subsequent runs would compare to the reference result, of course).

EDIT: removed last paragraph because I’ve just reread the links and it turns out I had misinterpreted it.


As somebody who generally hates unit tests because of exactly the problem this solves…thank you


assert_value is asynchronous. While assert_value waits for user input the other tests continue to run. Only interactions with the user are synchronous.

You will want to split things across multiple tests. Each test should invoke Makeup on a single input file and assert the output. You will want to separate tests like this because a test terminates when its first assert fails, and to get maximum parallelism. That’s why my code above generates one test per input file instead of one big test for all files. I should have made this more clear.

The way we could do it is something like GIT_EXTERNAL_DIFF. That would make sense.


This should certainly be made more clear! I haven’t tried your library yet but I was under the completely wrong impression that it would pause waiting for input.


Hm… Doesn’t seem ideal for the use case of visually inspecting Makeup’s output. In that case I don’t care about diffs; I want to look at the rendered HTML. But in any case that’s probably not a good idea. Inspecting the list of tokens is probably much more useful than the rendered HTML, and that can be done with serializers like your webpage example.

My idea of using a web browser was that you could compare things visually that might be hard to compare textually. I admit I don’t have a concrete example for that (even Makeup probably isn’t the best example). Code that generates images is an obvious case, of course. You’d probably never do that in Elixir, so that’s not a problem. I guess I was fantasizing about an architecture that could be useful someday for something, without thinking too hard about it.

I agree, the intent is quite similar. Thanks for the links!

To correct my own stupidity: You were probably talking about diffing HTML in a meaningful way (that is, graphically in a web canvas). This makes total sense, of course.

Still, I wonder if separating the test running part and the approval part could be helpful, even though the prompts don’t actually block test execution. As soon as I have the time for it, I’ll test your approach and explore a separate workflow with separate test running and approval steps.

Right, we would pass you the full actual and expected values for you to do with as you please. This is how external/visual diff tools work with source control. And our use seems similar.

We’re happy to have you use our assert_value any way you’d like. But my advice would be to not worry too much about inspecting freshly generated new values for new tests for now. Makeup already works, and you have done a lot of work to make sure it works well. I would focus on thinking of good test cases and just accept whatever output they currently give. That way you capture the value of engineering and manual testing you have done already into automated tests.

The real benefit will come next, once you have a solid regression test suite to know when Makeup’s behavior changes. Those changes will be worth inspecting in a lot of detail to see whether they’re good or bad.

And once you have this going for a while you will know the best way to serialize the data: HTML or AST or whatever. Then you can change the serializer with little effort.


Well, Jest has been doing this for a long while with snapshot testing, with the additional bonus of not having to provide an initial value either :slight_smile:


New version 0.8.2 has been released

New Features

The previous version supported only argument types that implement the String.Chars protocol on the left and string heredocs on the right. 0.8.2 supports all argument types (e.g. Integer, List, Map) except non-serializable ones (e.g. Function, PID, Reference). So now you can write:

assert_value 2 + 2 == 4
assert_value %{a: true} == %{a: true}


  • Better parser now supports any kind of formatting (expressions, parentheses, multi-line, etc.)
  • Better formatter (smarter formatting of one-line and multi-line strings)
  • Better error reporting
  • Ensure compatibility with Elixir 1.6

Upgrade Instructions

  • Run mix test. You may get diffs because assert_value no longer converts everything to a string
  • Run ASSERT_VALUE_ACCEPT_DIFFS=reformat mix test to take advantage of improved formatter
  • Add this to .formatter.exs so the Elixir formatter does not add parens around assert_value arguments:
      import_deps: [:assert_value]
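Put together, a minimal .formatter.exs could look like this (a sketch; the inputs line is the mix new default and may differ in your project):

```elixir
# .formatter.exs
[
  inputs: ["{mix,.formatter}.exs", "{config,lib,test}/**/*.{ex,exs}"],
  # don't add parens around assert_value arguments
  import_deps: [:assert_value]
]
```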

New version 0.9.1 has been released

New Feature: Elixir Formatter

assert_value now uses Elixir Formatter to format values. This improves formatting of big data structures. Previously assert_value formatted expected values other than strings as one line. This made code hard to read when using big data structures like long arrays and nested maps. Now we use Elixir Formatter and it formats them correctly on multiple lines.


Before:

assert_value foo() == [foo: "foo", bar: "bar", baz: "baz", qux: "qux", xyz: "xyz"]

After:

assert_value foo() == [
               foo: "foo",
               bar: "bar",
               baz: "baz",
               qux: "qux",
               xyz: "xyz"
             ]
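The formatter entry point involved here is Code.format_string!/2 (available since Elixir 1.6). A minimal sketch of the effect, with foo and the option values purely illustrative:

```elixir
# Format a long one-line expression at a small line length; the
# formatter splits the keyword list across lines. locals_without_parens
# keeps assert_value paren-free, like import_deps: [:assert_value] does
# in .formatter.exs.
code = ~s(assert_value foo() == [foo: "foo", bar: "bar", baz: "baz"])

code
|> Code.format_string!(line_length: 40, locals_without_parens: [assert_value: 1])
|> IO.iodata_to_binary()
|> IO.puts()
```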

Enhancement: Easier to Understand Diff

assert_value now shows more context in diffs. Compare these two outputs:


Before:

test/example_test.exs:7:"test example" assert_value 2 + 2 failed

After:

test/example_test.exs:7:"test example" assert_value failed

-    assert_value 2 + 2
+    assert_value 2 + 2 == 4


assert_value now requires Elixir >= 1.6

Upgrade Instructions

  • Add this to .formatter.exs:
      # don't add parens around assert_value arguments
      import_deps: [:assert_value],
      # use this line length when updating expected value
      line_length: 98 # whatever you prefer, default is 98
  • Run ASSERT_VALUE_ACCEPT_DIFFS=reformat mix test to take advantage of improved formatter