Function visibility has no effect on performance. There were some reports that having a lot of public functions in a module slows down compilation, but it has no effect on runtime performance, and if it ever does, that is a limitation of the current compiler implementation.
Oh and
That type of optimization is out of scope of the current type-system work; I asked about it some time ago on the forum. I hope that at least the dot (.) operator will finally be optimized.
Not true, there is a difference. It used to be that private functions took longer to compile because more optimizations were applied to them; now it has changed to public functions being slower, as they have to be exported.
I’m not sure that all the optimizations that are done for private functions are possible for public or even anonymous functions.
But I agree that in the end this is all about improving the compiler as it shouldn’t be like this.
All of them are possible. Anonymous functions are lifted (they used to be literally lifted; now they are compiled in a different way, but logically it is still lifting). Public functions can also be compiled the way private functions are.
The only difference I know of is with inlining. First of all, inlining must be explicitly enabled. Then, if the compiler sees that a public or private function fits the inlining heuristics, it inlines the function. The difference is that if a private function is inlined at every call site, the function is removed from the module body, while a public function remains there, because it can be called from other modules.
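To make that concrete, here is a minimal sketch (the module and function names are my own, not from the thread). Inlining is opted into explicitly via `@compile`, and whether it actually happens still depends on the compiler's heuristics:

```elixir
defmodule InlineDemo do
  # Ask the compiler to inline both helpers at their call sites.
  @compile {:inline, double: 1, triple: 1}

  # Private helper: if it is inlined at every call site,
  # its standalone definition can be dropped from the module body.
  defp double(x), do: x * 2

  # Public helper: even when inlined locally, its exported
  # definition must stay, because other modules may call it.
  def triple(x), do: x * 3

  def compute(x), do: double(x) + triple(x)
end

IO.inspect(InlineDemo.compute(5))
# Exported-function checks: public functions stay exported,
# defp is never exported, inlined or not.
IO.inspect(function_exported?(InlineDemo, :triple, 1))
IO.inspect(function_exported?(InlineDemo, :double, 1))
```

Note that `function_exported?/3` only reflects the export table, so it can show that `triple/1` remains callable from outside while `double/1` never was, regardless of inlining.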
You are confusing the upcoming Elixir gradual type system with JIT type tracking, where Erlang is able to omit some guards for private functions when it can guarantee that they will only be called with values that do not require a type check. For example, if you do:
defmodule Foo do
  def foo(a, b) when is_integer(a) and is_integer(b), do: a + bar(b)
  def bar(b) when is_integer(b), do: b + 1
end
The compiler needs to keep the guards in both functions, as it does not know whether bar/1 will be called only with an integer. However, if we write it as:
defmodule Foo do
  def foo(a, b) when is_integer(a) and is_integer(b), do: a + bar(b)
  defp bar(b) when is_integer(b), do: b + 1
end
Then it can “deduce” that bar/1 is called only with an integer (as it is private, and its only caller already checks whether the argument is an integer). So it can optimise the second guard away, since it has a guarantee that the value is an integer.
That’s true, but you are one step away from overcoming this problem, because it is always possible to split one public function into a public function plus a private clone, like this:
defmodule Foo do
  def foo(a, b) when is_integer(a) and is_integer(b), do: a + bar_in_foo(b)
  def bar(b) when is_integer(b), do: b + 1
  defp bar_in_foo(b) when is_integer(b), do: b + 1
end
Every compiler can do this automatically. This optimization is called “speculative specialization”; while it increases the module size, it is fairly cheap in all the BEAM languages and can be applied almost everywhere.
This would be great for anonymous functions. I’ll be doing some benchmarking with decode specialization in my SQL library, and I’m expecting anonymous functions to be slower than named private functions.
Anonymous functions are “slower” because they involve dynamic dispatch: the runtime needs to check that the variable contains a function and that the function has the correct arity, and then, if the function holds context (i.e. it is a closure), it needs to provide that too.
But given how these anonymous functions can be optimized, there are no limits.
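A rough way to observe that dispatch difference yourself is sketched below (module and function names are mine; for real measurements you would want a proper benchmarking tool rather than a single `:timer.tc` run):

```elixir
defmodule DispatchDemo do
  # Static call: the compiler knows exactly which function is called.
  defp add_named(x, y), do: x + y

  def run_named(n) do
    Enum.reduce(1..n, 0, fn i, acc -> add_named(acc, i) end)
  end

  def run_anon(n) do
    # The closure captures nothing here, but the call site still goes
    # through dynamic dispatch: check it is a fun, check arity, then call.
    f = fn x, y -> x + y end
    Enum.reduce(1..n, 0, fn i, acc -> f.(acc, i) end)
  end
end

{t_named, sum1} = :timer.tc(fn -> DispatchDemo.run_named(1_000_000) end)
{t_anon, sum2} = :timer.tc(fn -> DispatchDemo.run_anon(1_000_000) end)
IO.puts("named: #{t_named}µs, anon: #{t_anon}µs, same result: #{sum1 == sum2}")
```

Both paths compute the same sum; the timing gap, if visible at all on a given run, illustrates how small the per-call dispatch cost is in practice.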
It is possible in some cases. The compiler needs to be able to prove that (for example) in every possible case the code f.() will be called with f equal to fn -> IO.puts "hello" end. In BEAM languages, it is very hard to come up with such a proof.
For example, if my code looks like
defmodule X do
  def adder(y) do
    fn x -> x + y end
  end

  def add(x, y) do
    adder(y).(x)
  end
end
The compiler can prove that adder(y).(x) always equals (fn x -> x + y end).(x), or just x + y. (However, for this to happen, you need to add @compile :inline.)
But if I do something like
defmodule X do
  def adder(y) do
    fn x -> x + y end
  end
end

defmodule Y do
  def add(x, y) do
    X.adder(y).(x)
  end
end
The compiler can’t prove it. And it can’t prove it because module X can change at runtime (due to hot code reloading) independently of Y, and this feature is a hard requirement of the runtime.
That may sound like a problem, but honestly, the runtime overhead of anonymous function dispatch is extremely low and won’t be noticeable in any real-world program. For example, the way your code fits into the CPU cache has much more impact on performance, and that is nearly random.
This looks really cool, but I’m getting the following error:
mix test --trace test/problem11_test.exs:13
warning: function data/0 is unused
│
16 │ defp data do
│ ~
│
└─ lib/problem11.ex:16:8: Problem11 (module)
Running ExUnit with seed: 86835, max_cases: 1
Excluding tags: [:test]
Including tags: [location: {"test/problem11_test.exs", 13}]
Problem11Test [test/problem11_test.exs]
* test greets the world (77.5ms) [L#12]
1) test greets the world (Problem11Test)
test/problem11_test.exs:12
** (UndefinedFunctionError) function Problem11."REPATCH-PRIVATE-data"/0 is undefined or private
code: assert "sucka" == private Problem11.data()
stacktrace:
(ex_euler 0.1.0) Problem11."REPATCH-PRIVATE-data"()
test/problem11_test.exs:13: (test)
Finished in 0.2 seconds (0.2s async, 0.00s sync)
1 test, 1 failure
Here’s the spec:
defmodule Problem11Test do
  use ExUnit.Case, async: true
  use Repatch.ExUnit

  import Repatch, only: [private: 1]

  doctest Problem11

  setup do
    Repatch.spy(Problem11)
  end

  test "greets the world" do
    assert "sucka" == private Problem11.data()
  end
end
Unused private functions are not compiled into the module, so Repatch can’t access their code to make them public in tests. Try using this private function in a public one.
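For example, something like this sketch (I’m guessing at the shape of Problem11; only data/0 and its return value come from your snippet). Any public function that calls data/0 keeps it from being pruned as unused, so its code stays in the module and Repatch can reach it:

```elixir
defmodule Problem11 do
  # Public entry point that uses the private helper. Because data/0
  # now has a caller, the compiler keeps it in the module body.
  def solve do
    data()
  end

  # Hypothetical body; in your project this presumably returns the
  # puzzle input rather than a placeholder string.
  defp data do
    "sucka"
  end
end
```

With a call path like this in place, the unused-function warning goes away and `private Problem11.data()` in the test should be able to find the function’s code.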