Keyword lists or individual function parameters?

Hi…

Certain behaviour in my application is defined by the contents of my JSON file and I am wondering whether anyone has recommendations on how I should implement this…

Here is an example of some JSON from my application:

        "effect_fortitude": {
            "type": "MyApp.Modifier.Health",
            "params": {
                "health_property": "intestinal_fortitude",
                "transition_type": "linear",
                "start_time": 0,
                "end_time": 4000,
                "relative_amount": -10
            }
        }

Currently, my module looks like this:

defmodule MyApp.Modifier.Health do
  def execute(health_property, transition_type, start_time, end_time, relative_amount) do 
    #do things here based on the parameters
  end
end

But I have been doing some more reading on using keyword lists for the optional parameters that can be passed to a function.

I can see pros and cons of doing it either way.

For example, by taking a keyword list, I might be better able to handle changes to my JSON schema down the track. Say I add a new property to the params, “max_amount_of_damage”: with a keyword list I wouldn’t have to change the arity of the function and could handle the new key accordingly. That said, I would probably still need to update the inner workings of my execute() function.
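For illustration, the keyword-list version might look something like this (a sketch only: the option names come from the JSON above, the defaults and the return value are placeholders):

```elixir
defmodule MyApp.Modifier.Health do
  # Required value first, everything else as options with defaults.
  def execute(health_property, opts \\ []) do
    transition_type = Keyword.get(opts, :transition_type, "linear")
    start_time = Keyword.get(opts, :start_time, 0)
    end_time = Keyword.get(opts, :end_time, 0)
    relative_amount = Keyword.get(opts, :relative_amount, 0)

    # A later schema addition only needs another Keyword.get here,
    # without changing the function's arity:
    _max_damage = Keyword.get(opts, :max_amount_of_damage)

    {health_property, transition_type, start_time, end_time, relative_amount}
  end
end
```

Called as `MyApp.Modifier.Health.execute("intestinal_fortitude", end_time: 4000, relative_amount: -10)`, with any omitted options falling back to their defaults.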

The other approach I see working (if I keep it like I have it now) is that if the JSON schema changes, I simply refer to a different module in my JSON, e.g. MyApp.Modifier.HealthVersion2, which might keep my code better organised and avoid conditionals in my execute() function that the keyword-list route would require…

Has anyone done anything similar and had success or failure?

There seems to be a style consensus that the lower the arity of a function, the better. I think cramming a bunch of parameters into a keyword list is slightly more opaque than multiple parameters, but it enables a bunch of syntactic sugar (especially when parameters have default values or need to be pattern matched). I think it depends on the function: if it’s a general function that you foresee adding fields to later (such as one storing a JSON payload), then use a keyword list. If the function accepts a discrete number of parameters that shouldn’t change, use individual parameters.

Here’s an example of a high-arity function which needs explicit parameters (it calculates cumulative interest):

  # Base case: once the period counter passes `count`, return the total.
  defp cumulate(_rate, _balance, _payment, cum_interest, count, acc) when acc > count do
    cum_interest
  end

  # Recursive case: apply one period of interest and recurse.
  defp cumulate(rate, balance, payment, cum_interest, count, acc) do
    interest = balance * rate
    principal = payment - interest
    balance = balance - principal
    cum_interest = cum_interest + interest
    cumulate(rate, balance, payment, cum_interest, count, acc + 1)
  end

A third option could be passing params as a map. It gives you the benefits of both: ensuring parameters are set via pattern matching, as with individual parameters (you can sort of get this with keyword lists too, but then you have to worry about order), while maintaining a single-arity function, as with keyword lists.
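A sketch of the map version (the key names come from the JSON above; the body is a placeholder):

```elixir
defmodule MyApp.Modifier.Health do
  # Required keys are enforced by the pattern match: a missing key
  # raises FunctionClauseError instead of silently passing nil along.
  def execute(%{health_property: property, relative_amount: amount} = params) do
    # Optional keys can still fall back to a default.
    transition = Map.get(params, :transition_type, "linear")
    {property, transition, amount}
  end
end
```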

I don’t know if a single arity is a huge benefit, but in my opinion it’s nice because it normalizes the Modifier API. I’m extrapolating here from what you’ve shown, so your application might not work like I’m assuming. But it seems like the type param refers to a module which has an execute function. By using a map instead of multiple arguments, you are able to have execute as a single-arity function for all of the different Modifier modules, rather than each module having an execute function with a varying arity. This lets you use something like behaviours if it’s appropriate, and also keeps things consistent: all Modifier modules have an execute function that takes a single argument.

Also, this means that whatever is dispatching these calls doesn’t have to know anything about the specific module it is calling. It can take the type and params directly from the JSON and do apply(type, :execute, [params]) without having to worry about whether this particular type takes such-and-such arguments while another takes so-and-so. But be careful that you’re not letting users call just any function in your application with this. If this is coming from user input, you probably don’t want them being able to specify an arbitrary function and parameters.
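One way to guard that dispatch could look like the following (a sketch; the `MyApp.SafeDispatcher` name, the whitelist, and the stub module are all assumptions for illustration):

```elixir
defmodule MyApp.Modifier.Health do
  # Stub implementation so the module (and its atom) exists.
  def execute(params), do: {:ok, params}
end

defmodule MyApp.SafeDispatcher do
  # Only types in this list may ever be executed.
  @allowed ["MyApp.Modifier.Health"]

  def run(%{"type" => type, "params" => params}) when type in @allowed do
    # to_existing_atom only succeeds for atoms that already exist,
    # so unknown input can't grow the atom table.
    module = String.to_existing_atom("Elixir." <> type)
    apply(module, :execute, [params])
  end

  def run(_), do: {:error, :unknown_type}
end
```

Anything whose `"type"` is not in the whitelist falls through to the `{:error, :unknown_type}` clause before any atom conversion happens.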

The context is vague, but based on my understanding of the problem, I’d convert this into a map (or a struct) with well-defined keys, the keys being atoms. This can be done easily with an Ecto changeset. Here’s a simple example:

data = %{
  "health_property" => "intestinal_fortitude",
  "start_time" => 0,
}

map_spec = %{health_property: :string, start_time: :integer}
fields = Map.keys(map_spec)
empty_instance = Map.new(fields, &{&1, nil})

decoded_params =
  {empty_instance, map_spec}
  |> Ecto.Changeset.cast(data, fields)
  |> Ecto.Changeset.apply_changes()
  # %{health_property: "intestinal_fortitude", start_time: 0}

So at this point you can pass decoded_params around, and access individual fields with decoded_params.health_property, etc.


To be honest, in this case I’d even pass the map extracted from the "params" key straight into your execute function.

This way, you can define a proper @behaviour module, give it an execute/1 @callback, and let the actual implementer deal with the details of the type.

Currently your dispatcher needs to know details about each object type that it shouldn’t even care about. You’d have to touch your dispatcher AND create a new module when you add a new “type”, while with my approach you only need to add the module.

A rough draft:

defmodule Foo.Event do
  @callback execute(params :: %{optional(String.t()) => any}) :: {:ok, any} | {:error, any}
end

defmodule Foo.Modifier.Health do
  @behaviour Foo.Event

  @impl true
  def execute(_params), do: {:ok, :done}
end

defmodule Foo.Dispatcher do
  @spec dispatch(type :: String.t(), params :: %{optional(String.t()) => any}) :: {:ok, any} | {:error, any}
  def dispatch(type, params) do
    module = Module.concat([type])
    module.execute(params)
  end
end

This is very roughly drafted, and can easily be DoS’d through atom-table overflow by throwing invalid types at it. Therefore some kind of type whitelist should be used, or one should make sure all modules are preloaded and then use String.to_existing_atom("Elixir." <> type).


Thanks for your detailed replies everyone.

Much appreciated.

Ok, so I have gone off and done some more reading and I think I will head down the map route. Makes sense to me.

With behaviours, is there any particular call I can make that will confirm whether a module implements a given behaviour? If so, I could use that as another security measure to make sure only the modules I allow are being executed from the external JSON, and it would be a way of avoiding manually maintaining a whitelist…

No matter what approach I take to locking down which modules can be “executed”, I will use String.to_existing_atom().

Check the code linked in this post: Testing that a module implements a given behaviour
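In the same spirit as that linked code, a compiled module’s @behaviour declarations can be read back from its attributes. A sketch (`BehaviourCheck.implements?/2` is a made-up helper name; the Foo modules mirror the draft earlier in the thread):

```elixir
defmodule Foo.Event do
  @callback execute(map()) :: {:ok, any} | {:error, any}
end

defmodule Foo.Modifier.Health do
  @behaviour Foo.Event

  @impl true
  def execute(_params), do: {:ok, :done}
end

defmodule BehaviourCheck do
  # Compiled modules expose their @behaviour declarations via module_info/1;
  # each declaration is stored as a separate :behaviour attribute entry.
  def implements?(module, behaviour) do
    module.module_info(:attributes)
    |> Keyword.get_values(:behaviour)
    |> List.flatten()
    |> Enum.member?(behaviour)
  end
end
```

So `BehaviourCheck.implements?(Foo.Modifier.Health, Foo.Event)` returns true, while a module that never declared the behaviour returns false.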

Alternatively, if you keep all of the modifier modules behind a common prefix, like MyApp.Modifier, then you can just provide the particular modifier ‘submodule’ name rather than the fully qualified module name. In other words, the JSON has type: "Health" and you do "MyApp.Modifier." <> params["type"]. That way you provide a scope for which modules can be executed.
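A sketch of that scoping idea, using Module.safe_concat/2, which, unlike plain string concatenation, resolves to a module atom and refuses to create new atoms for unknown names (the stub module is an assumption for illustration):

```elixir
defmodule MyApp.Modifier.Health do
  # Stub implementation so the module exists to be resolved.
  def execute(params), do: {:ok, params}
end

# "Health" would come from the JSON's "type" field; only names that
# resolve to an already-existing module atom will succeed, and only
# modules under the MyApp.Modifier prefix are reachable.
module = Module.safe_concat(MyApp.Modifier, "Health")
module.execute(%{})
```

An unknown name like `Module.safe_concat(MyApp.Modifier, "Nope")` raises ArgumentError rather than minting a fresh atom, which dovetails with the String.to_existing_atom advice earlier in the thread.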

Thanks…

I like your suggested approach too… I was thinking more this morning that it’s probably overkill to do what I was suggesting anyway, especially if part of the namespace is locked down via code (as per your suggestion)…