Choosing between keyword arguments and config values

I have a library that sets the name of a module based on a value in the config.exs file. I’m thinking of changing this so that the module name is set by using a keyword argument in the calling function.
The main reason for making this change is that it seems like it would be more efficient, so am I right in thinking it would be?


Although I honestly don’t think it will affect efficiency much, the only way to know for sure is by measuring it yourself, for instance using a tool like Benchwarmer. No use case is the same, and claiming that ‘one solution always trumps the others’ is something that often happens in the programming world, but the real world is not that simple.

“Meten is weten” (to measure is to know), as they say in Dutch

Efficiency aside, being able to pass configuration settings to the function that ‘does the work’ as parameters often makes it a lot simpler to test such a function.

You’re right, of course, but I thought I’d take the lazy way out and ask around to see if anyone had already looked into it :slight_smile:
My intuition is that setting the module in the config would mean that every time the function is called, the Agent that maintains the state for the config would then get called, and so the keyword argument way would be more efficient - it would result in fewer function calls in total. I could be totally wrong about this, and maybe someone can correct me later.

Can you elaborate on your use case? Passing in values vs retrieving them from config both have their uses because there are a number of very important differences, and it’s a bit of an apples and oranges comparison to try to compare them without a scenario in mind.

On a related note, readability, correctness, and ease of testing are all wildly higher priorities than saving a microsecond here or there on fetching config. The performance differences will rarely matter here. However, we can’t really answer those first questions without context.

Yes, it’s this module, which is a module plug. On line 93 I read a config value, which in turn calls Application.get_env(:openmaize, :db_module). I could also add an extra keyword argument to the init function, which then gets passed on to the handle_login function. As I mentioned before, I believe that using a keyword argument would be more efficient, and it would be nice to have this confirmed / disproved.
I realize that speed should not be the only consideration, but I want to understand the mechanics a little better.
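
To make the two approaches concrete, here is a minimal sketch (the :openmaize app and :db_module key come from the post above; the module body is hypothetical and heavily simplified):

defmodule Openmaize.LoginSketch do
  # Plug entry point: opts is whatever was given at the `plug` call site.
  def init(opts), do: opts

  def call(conn, opts) do
    # Keyword-argument approach: use :db_module from opts if it was given,
    # otherwise fall back to the application environment (config.exs).
    db_module =
      Keyword.get(opts, :db_module) || Application.get_env(:openmaize, :db_module)

    handle_login(conn, db_module)
  end

  defp handle_login(conn, _db_module) do
    # the real login work would go here
    conn
  end
end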

Application.get_env is backed by :ets. When it comes to getting such a small value the cost is hilariously small, particularly in the context of handling an HTTP request. My local benchmarks indicate that retrieving a small config value takes 0.23 MICROseconds. Pulling a value from a keyword list could be 10 or 100x faster and it wouldn’t matter.
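
As a sketch of how one might measure this locally (using the Benchee library here; Benchwarmer, mentioned earlier, would work too; MyApp.Repo is just a stand-in value):

# Compare reading a small value from the application environment (ETS-backed)
# with reading it from a keyword list. The absolute numbers vary per machine
# and are tiny either way compared to the cost of handling an HTTP request.
Application.put_env(:openmaize, :db_module, MyApp.Repo)
opts = [db_module: MyApp.Repo]

Benchee.run(%{
  "Application.get_env" => fn -> Application.get_env(:openmaize, :db_module) end,
  "Keyword.get"         => fn -> Keyword.get(opts, :db_module) end
})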

Don’t get me wrong, I’m not saying that keyword options to the plug call are the wrong choice here. Rather I’m arguing that concerns about efficiency are misplaced. Go with whatever produces the best API.

As an aside, be very careful with returning anonymous functions here. https://github.com/riverrun/openmaize/blob/master/lib/openmaize/login.ex#L68

The return value of init is cemented at compile time. The &foo/2 form of anonymous functions CAN be safely escaped into AST, but any other anonymous function form cannot be.

Thanks for that detailed reply. I’ll probably keep it as it is, but I wanted to have a better idea of how it was working before I make that decision, and your answer provided me with that.
About the anonymous function, I’m not really sure I understand your point. Is this way of passing around functions not recommended?

The following will work
Given:

defmodule Foo do
  def init(opts), do: opts
  def call(conn, _), do: conn
end
plug Foo, callback: &IO.puts/1

The following will not work

plug Foo, callback: &IO.puts("Hello: #{&1}")
plug Foo, callback: fn x -> IO.puts(x) end

The reasoning is this. In most places you call plug, the Foo.init function is called at compile time. This is often desirable because it’s supposed to set up one-time initialization values so that they don’t need to be handled on every request. The values returned from Foo.init are transformed into AST and unquoted into the body of the module so that they can be handed, already nicely prepared, to call at runtime.

Thus Foo.init is passed [callback: anon_function], which it simply returns. This now needs to be transformed into AST. Problematically, only the &function/arity form can be transformed from a real value into AST. My understanding of why this is the case is that anonymous functions form closures, and if the values they close over exist only at compile time, they cannot be used at runtime. The &function/arity form is the only form the compiler can prove doesn’t close over any values.

Long story short, I think your use case is fine, there’s just some gotchas with passing anonymous functions as plug options.
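
If you do want a configurable callback, one common workaround (just a sketch, not something from your library) is to pass a {module, function} tuple instead of an anonymous function: a tuple of atoms is plain data, so it escapes to AST without trouble, and the function is only looked up at runtime.

defmodule Foo do
  def init(opts), do: opts

  def call(conn, opts) do
    # {module, function} is ordinary data, so it survives the compile-time init/1 step.
    {mod, fun} = Keyword.fetch!(opts, :callback)
    apply(mod, fun, ["hello"])
    conn
  end
end

# inside a module that uses Plug.Builder (or a router):
plug Foo, callback: {IO, :puts}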

Ok, I got it now.
Thanks for the detailed replies. That’s a real help.

I basically agree with @benwilson512’s points, especially about the fact that a saving of a few microseconds rarely amounts to anything significant in the grand scheme of things.

A more important question is what is a better interface for the user of the functionality. The answer of course varies from case to case, but I have a feeling people tend to overuse app env where plain arguments would work better. That’s especially true for writers of library applications.

The main problem I have with app env is that it’s a global, system-wide parameter. This has two implications. First, the parameter is not visible in the code invocation. When I call foo() I have to know that some app env parameter will affect the outcome of the function call. That makes the parameterization implicit, and the function harder to use.

Another problem is that since the setting is global, you can’t use different values concurrently. You can set a value to bar, and then later to baz, but you can’t have two processes use bar and baz at the same time. This can sometimes present problems for tests: if you want to test the function with different values of app env, you need to run those tests synchronously.

In contrast, plain parameters don’t have such problems. All options are passed explicitly, and concurrent clients can use different options. Both properties make such code easier to test.
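
A tiny sketch of the testing point (MyLib.login and the two Fake* modules are made-up names): with explicit options, async tests can exercise different values at the same time, which a single global app env setting would not allow.

defmodule MyLib.LoginTest do
  use ExUnit.Case, async: true  # safe, because nothing global is mutated

  test "uses the repo module it is given" do
    assert {:ok, _} = MyLib.login(%{}, db_module: FakeRepoA)
  end

  test "a concurrently running test can use a different repo module" do
    assert {:ok, _} = MyLib.login(%{}, db_module: FakeRepoB)
  end
end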

Consequently, I think app env makes more sense for system-wide options. A nice example of this is Logger where app env can be used to affect e.g. logger level and logger backends. These are the things we usually want to configure once, and then just invoke log functions without caring about whether it’s logged and to which backend.
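
For instance, in config.exs (these are standard Logger keys):

# config/config.exs: system-wide settings configured once, not per call
config :logger,
  level: :info,
  backends: [:console]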

So in your case the question IMO boils down to: is the client allowed to vary the option from call to call, or is the setting global? If the former, then I’d propose using explicit arguments. Otherwise, app config might work better, although even then I’d consider explicit arguments if we’re talking about a few invocations of a few functions.


Thanks for the input.
Actually, I chose to make the change to use arguments instead of app env a couple of days ago, mainly because of the second point you made, about using different values concurrently.
Thanks again for your help.