You seem to have written type-specific generators which can be composed. This is probably the most common approach in property testing, but I'd like to point to a Python library which does things differently. Please note that I may be misrepresenting the way your library works - maybe the LazyTree module takes the place of the random bytes in my explanation below.
In Hypothesis, instead of writing custom generators and shrinkers for each type (they call that approach "type-based shrinking"), they generate a stream of random bytes and write generators that turn the random bytes into useful types. Instead of shrinking the generated data structure, they shrink the byte stream (first by deleting bytes, then by trying to lower the values of individual bytes). The generators are written in such a way that most of the time shrinking the byte stream also shrinks the generated value. This decouples the shrinking and generating parts of the code. It also makes it very easy for users to write their own custom generators, which shrink automatically and do the right thing most of the time. It's also very elegant.
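To make the idea concrete, here's a minimal sketch in plain Python of what I mean (the `Stream`, `draw_byte`, `gen_small_int` and `gen_int_list` names are mine, not Hypothesis's real API): generators consume bytes from a shared buffer, and because of how they decode those bytes, a shorter or lexicographically smaller buffer decodes to a simpler value.

```python
# Illustrative sketch of byte-stream-based generation (not Hypothesis's
# actual implementation): generators draw from a shared byte buffer, and
# shrinking operates on the bytes, not on the generated values.

class Stream:
    """Wraps a byte buffer; generators consume bytes from it."""
    def __init__(self, data: bytes):
        self.data = data
        self.pos = 0

    def draw_byte(self) -> int:
        if self.pos >= len(self.data):
            return 0  # out of data: pad with zeros (the "simplest" byte)
        b = self.data[self.pos]
        self.pos += 1
        return b

def gen_small_int(s: Stream) -> int:
    # Two bytes, big-endian: a lexicographically smaller buffer decodes
    # to a smaller integer, so shrinking the bytes shrinks the value.
    return s.draw_byte() * 256 + s.draw_byte()

def gen_int_list(s: Stream) -> list:
    # A leading byte chooses the length, so deleting bytes from the
    # buffer also shortens the generated list.
    length = s.draw_byte() % 8
    return [gen_small_int(s) for _ in range(length)]

# The same buffer always produces the same value...
print(gen_int_list(Stream(bytes([3, 1, 0, 0, 200, 2, 5]))))  # [256, 200, 517]
# ...and a shrunk (shorter, smaller) buffer produces a simpler value.
print(gen_int_list(Stream(bytes([1, 0, 2]))))                # [2]
```

The point is that the shrinker never needs to know anything about lists or integers - it only manipulates bytes, and the decoding discipline does the rest.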
Additionally, this has the advantage that all values can be serialized (just store the random bytes and rebuild the value from the byte sequence). This allows Hypothesis to save failing examples in a database and test them again on the next run.
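A tiny sketch of why that works (again, illustrative names, not Hypothesis's real API): the byte buffer *is* the serialized form, so replaying a failure is just re-running the generator on the stored bytes.

```python
# Illustrative sketch: because generation is a deterministic function of
# the byte buffer, the buffer itself is a serialized failing example.

def value_from_bytes(data: bytes) -> int:
    # Deterministic decoding: the same buffer always yields the same value.
    return int.from_bytes(data[:4].ljust(4, b"\x00"), "big")

failing_buffer = bytes([0, 0, 1, 7])   # imagine this buffer made a test fail
stored = failing_buffer.hex()          # persist it (e.g. in a database)

# Next test run: reload the hex string and replay the exact same example.
replayed = value_from_bytes(bytes.fromhex(stored))
assert replayed == value_from_bytes(failing_buffer)
print(replayed)  # 263
```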
In his blog, the author claims that this approach is superior to type-based shrinking (a claim I'm mostly convinced is true, but I don't have any data to back my intuition). This post explains some of the differences: http://hypothesis.works/articles/how-hypothesis-works/
The core of Hypothesis seems to be more complex than the "core" of StreamData, but Hypothesis seems to be smarter about shrinking. It's still quite simple, though, and I've been working on and off on porting it to Elixir. For example, you write: https://hexdocs.pm/stream_data/StreamData.html#binary/0-shrinking. I don't know if this is a purposeful design decision or a technical limitation of the way you've written StreamData, but in Hypothesis a smart bytestring generator could very easily shrink both the length AND the bytes (which I think would be the correct API).
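Here's what I mean, as a plain-Python sketch (my own toy names, not real Hypothesis or StreamData code): a bytestring generator over a byte stream where deleting buffer bytes shortens the result and lowering buffer bytes lowers the content, so both shrink for free.

```python
# Illustrative sketch: a bytestring generator over a byte stream where
# shrinking the underlying buffer shrinks both length AND content.

class Stream:
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0

    def draw_byte(self) -> int:
        b = self.data[self.pos] if self.pos < len(self.data) else 0
        self.pos += 1
        return b

def gen_binary(s: Stream, max_len: int = 8) -> bytes:
    # One byte picks the length, the rest are the contents. Deleting
    # buffer bytes shortens the result; lowering them lowers its bytes.
    length = s.draw_byte() % (max_len + 1)
    return bytes(s.draw_byte() for _ in range(length))

print(gen_binary(Stream(bytes([3, 250, 10, 128]))))  # b'\xfa\n\x80'
print(gen_binary(Stream(bytes([1, 10]))))            # shrunk: b'\n'
```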
EDIT: Something that Hypothesis makes easy is generating an integer according to a distribution and then generating a list with that number of items, or even more complex dependent generation, which I've never done in practice. For example, it can generate a pair of integers (n, m) and then generate an (n, m) matrix of items defined by another generator, and the matrix will shrink properly. I don't think StreamData supports that except by writing a generator from scratch, right?
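In the byte-stream model this kind of dependent generation falls out naturally; a sketch (my toy names again - in real Hypothesis you'd reach for something like `@composite` or `flatmap` instead):

```python
# Illustrative sketch of dependent generation over a byte stream: draw the
# dimensions first, then fill the matrix. Because everything comes from
# one buffer, shrinking the buffer shrinks dimensions and entries together.

class Stream:
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0

    def draw_byte(self) -> int:
        b = self.data[self.pos] if self.pos < len(self.data) else 0
        self.pos += 1
        return b

def gen_matrix(s: Stream) -> list:
    # The first two bytes choose (n, m); the rest fill the n x m matrix.
    n = s.draw_byte() % 4
    m = s.draw_byte() % 4
    return [[s.draw_byte() for _ in range(m)] for _ in range(n)]

print(gen_matrix(Stream(bytes([2, 2, 9, 8, 7, 6]))))  # [[9, 8], [7, 6]]
print(gen_matrix(Stream(bytes([1, 1, 5]))))           # shrunk: [[5]]
```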