Ignoring the data-structure part, what you’re suggesting is very similar to a JIT (just-in-time) compiler, like say V8, which optimizes your code as it learns about it at runtime. A JIT is something that has been discussed as a future option for the BEAM, though I don’t know whether any project toward it has been started.
But your question is specifically about data structures. An example of a data structure that does change as it’s being used is the Elixir/Erlang map: at small sizes it uses a more efficient flat representation (a sorted tuple of keys plus a parallel tuple of values, not the same as but similar to a keyword list), but once it grows past a certain size (currently 32 entries, though that limit could change) it is automatically converted to a HAMT (hash array mapped trie).
But part of the value in letting developers choose their own data structures is that they convey information to the runtime (“I want this behavior”) and in turn get guarantees back (“your code will behave the way you expect”). The fact that Elixir maps change their behavior at runtime actually causes confusion and even bugs! If you don’t pay attention to the documentation and only experiment with small maps, maps appear to keep their keys sorted: calling Map.keys/1 returns the keys in sorted order. But once you grow the map beyond 32 entries that stops being true, and the keys come back in a different, undefined order. If your code relied on the keys being sorted and you at some point have a map larger than 32 entries, you’ve got a bug.
```elixir
iex(1)> m = for key <- 1..32, into: %{}, do: {key, :val}
iex(2)> Map.keys(m)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]
iex(3)> m = for key <- 1..33, into: %{}, do: {key, :val}
iex(4)> Map.keys(m)
[11, 26, 15, 20, 17, 25, 13, 8, 7, 1, 32, 3, 6, 2, 33, 10, 9, 19, 14, 5, 18, 31,
 22, 29, 21, 27, 24, 30, 23, 28, 16, 4, 12]
```
To clarify, I’m not saying it’s a bad thing that maps are optimized for smaller sizes. I’m saying this type of optimization does have trade-offs.
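If your code does need the keys in a deterministic order, the safe approach is to sort explicitly rather than rely on the map’s internal layout. A minimal sketch (the variable names are just for illustration):

```elixir
# Build a map larger than the 32-entry small-map threshold,
# so the HAMT representation is in effect.
m = for key <- 1..33, into: %{}, do: {key, :val}

# Map.keys/1 returns keys in an undefined order for large maps,
# so sort explicitly whenever order matters.
sorted_keys = m |> Map.keys() |> Enum.sort()

# Iterate in a deterministic order regardless of map size:
pairs = for key <- sorted_keys, do: {key, Map.fetch!(m, key)}
```

This costs an O(n log n) sort per traversal, but it works identically for maps on either side of the threshold.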