Need help with app architecture - how to dynamically run code?

Problem Statement:

I am trying to build an ‘action runner’ for judging JSON rules. In a nutshell, say a JSON-defined rule matches and includes an action string:

{
...rule
    "action": "collect_signature"
}

I’m trying to think of the best architecture to run that ‘collect_signature’ code. It could be a module name or a script file. The intention is for rules to trigger actions dynamically, so actions are easy to maintain separately.

Desired Requirements:

  • the action code has its own dependencies/Hex packages
  • adding new actions does not require a reload/recompile/redeploy of the ‘action runner’, i.e. the actions are dynamically loaded and run somehow.

Idea 1:

Standalone exs scripts - actions are defined in standalone exs files with their own Mix.install calls. I tried this already by using Code.require_file/1 to dynamically run an exs file, but ran into dependency errors: the ‘action runner’ runtime dependencies conflicted with the dependencies loaded by the exs file. I need a way to get around that. This still seems the most promising approach, however.

Idea 2:

Umbrella project where each action is a separate application. I don’t have experience with umbrella projects, but this might be a feasible path. It seems a bit overkill, though, and I think the action runner would have to be reloaded every time a new action is created (which is a big con).

Idea 3:

Similar to the umbrella idea: load actions as private local packages via a dependency path. The ‘action runner’ would then dynamically load dependencies in its mix.exs. Pseudocode:

defp deps do
  # one path dependency per package found in the actions folder
  for action_name <- File.ls!("actions") do
    {String.to_atom(action_name), ">= 0.0.0", path: "actions/#{action_name}"}
  end
end

Then maybe add a Mix task ‘new action’ that generates the boilerplate for a new action in the correct folder. The main con with this is the same as #2: the action runner would have to be recompiled with every new action.
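A hedged sketch of such a generator task; the task name (new_action), the actions/ folder, and the generated mix.exs contents are all assumptions for illustration, not anything defined yet:

defmodule Mix.Tasks.NewAction do
  use Mix.Task

  @shortdoc "Generates boilerplate for a new action under actions/"
  def run([name]) do
    path = Path.join("actions", name)
    File.mkdir_p!(Path.join(path, "lib"))
    File.write!(Path.join(path, "mix.exs"), mix_exs(name))
    Mix.shell().info("Created action skeleton at #{path}")
  end

  # Minimal mix.exs for the new action package
  defp mix_exs(name) do
    """
    defmodule #{Macro.camelize(name)}.MixProject do
      use Mix.Project

      def project do
        [app: :#{name}, version: "0.1.0", deps: []]
      end
    end
    """
  end
end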

Or you can do what Livebook does. Use a separate BEAM node.


Ok cool, I tried that out and I’m still getting dependency errors. Do nodes have their own isolated dependencies?

I have a simple test setup. A file called ‘action_test.exs’ contains:

Mix.install([
  {:req, "~> 0.4.1"}
])

Req.get!("https://hex.pm/api/packages/req").body["meta"]["description"]
|> IO.inspect()

Then from the ‘action runner’ I have tried:

Code.require_file(Path.join(__DIR__, "action_test.exs"))

and

Node.spawn(:node1@localhost, Code.require_file(Path.join(__DIR__, "action_test.exs")))

but getting error:

** (Mix.Error) Mix.install/2 can only be called with the same dependencies in the given VM
(mix 1.15.2) lib/mix.ex:577: Mix.raise/2

Not sure if it’s the problem, but note that Node.spawn expects a function as its second argument and you’re trying to pass the result of Code.require_file there instead. Maybe you need a wrapping fn -> ... end?


ah nice catch. I retried with:

Node.spawn(:node1@localhost, fn -> Code.require_file(Path.join(__DIR__, "action_test.exs")) end)

And it’s running now, but with a warning:
** Can not start :erlang::apply,[#Function<43.125776118/0 in :erl_eval.expr/6>, []] on :node1@localhost **

In order to see output I tweaked action_test.exs to be:

Mix.install([
  {:req, "~> 0.4.1"}
])

msg = Req.get!("https://hex.pm/api/packages/req").body["meta"]["description"]

File.write(Path.join(__DIR__, "action_test.txt"), msg)

After re-running

Node.spawn(:node1@localhost, fn -> Code.require_file(Path.join(__DIR__, "action_test.exs")) end)

I see no output :smiling_face_with_tear: so I’m not sure what’s wrong

That message comes from an internal Erlang function, crasher.

When spawn_opt can’t connect to the destination node to spawn the requested process, it spawns crasher instead and returns that PID.

How are you creating node1@localhost and connecting to it?

Node.spawn/2 is a low-level function; try :erpc.call/2 instead, which will handle redirection of errors and output.
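A minimal sketch of that suggestion, assuming node1@localhost is already started, connected, and shares the cookie. The MFA form (:erpc.call/4) avoids shipping an anonymous function to the other node:

path = Path.join(__DIR__, "action_test.exs")

# Runs Code.require_file/1 on the remote node; errors are re-raised locally.
:erpc.call(:node1@localhost, Code, :require_file, [path])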


I’m running tests from a Livebook:

Node.spawn(:node1@localhost, fn -> Code.require_file(Path.join(__DIR__, "action_test.exs")) end)

No luck

I also tried:

Node.start(:node1@localhost)
Node.set_cookie(:foo)

Node.connect(:node2@localhost)
Node.spawn(:node2@localhost, fn -> Code.require_file(Path.join(__DIR__, "action_test.exs")) end)

still no luck

Ok, I gave up on nodes as I’m not very confident with them. I’m also not sure how I would make that production-ready.

Another idea I didn’t think of is dead simple:

System.cmd("elixir", [Path.join(__DIR__, "action_test.exs")])

Getting back to the desired requirements, this could work. Put all the action scripts in a cloud directory, then have the action runner check that a script exists before running it.
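A minimal sketch of that idea, assuming a local actions/ directory and a one-to-one mapping from action name to <action>.exs (ActionRunner and the directory layout are made up for illustration):

defmodule ActionRunner do
  @actions_dir Path.expand("actions", __DIR__)

  def run(action) when is_binary(action) do
    script = Path.join(@actions_dir, action <> ".exs")

    if File.exists?(script) do
      # The script runs in its own VM, so its Mix.install deps
      # cannot clash with the runner's dependencies.
      System.cmd("elixir", [script], stderr_to_stdout: true)
    else
      {:error, :unknown_action}
    end
  end
end

ActionRunner.run("collect_signature")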

When I tried running multiple nodes before I used the Erlang :peer module.

Here’s a few snippets from a semi-defunct unreleased project I have:

    {:ok, node} =
      :peer.start_link(%{
        host: :localhost,
        name: 'peer_node',
        args: peer_args()
      })

    IO.puts("DONE Starting peer node!")
    add_code_paths(node)
    ensure_applications_started(node)

  defp add_code_paths(node) do
    rpc(node, :code, :add_paths, [:code.get_path()])
  end

  @apps_to_start [
    # :iex,
    :logger,
    # :file_system,
    # :jason,
    :erlexec,
    :runtime_tools,
    # :inets,
    :stdlib,
    :crypto,
    # :hex,
    :elixir,
    # :public_key,
    # :gviz,
    # :mix,
    # :gettext,
    :kernel,
    :ssl,
    :compiler
    # :asn1,
  ]

  defp ensure_applications_started(node) do
    rpc(node, Application, :ensure_all_started, [:mix])
    rpc(node, Mix, :env, [Mix.env()])

    # for {app_name, _, _} <- Application.loaded_applications() do
    #   rpc(node, Application, :ensure_all_started, [app_name])
    # end
    for app_name <- @apps_to_start do
      IO.inspect(app_name, label: "starting app_name")

      rpc(node, Application, :ensure_all_started, [app_name])
      |> IO.inspect(label: "started #{inspect(app_name)}")
    end
  end

  defp rpc(node, module, function, args) do
    :rpc.block_call(node, module, function, args)
  end

  defp peer_args do
    Enum.join(
      [
        "-loader inet -hosts 127.0.0.1",
        "-setcookie #{:erlang.get_cookie()}",
        # "-env MIX_BUILD_ROOT /tmp/gviz_build"
      ],
      " "
    )
    |> to_charlist()
  end

Ok I gave it a spin with a module wrapper like so:

defmodule PeerNode do

  def start(name) do
    {:ok, node} =
      :peer.start_link(%{
        host: :localhost,
        name: name,
        args: peer_args()
      })

    IO.puts("DONE Starting peer node!")
    add_code_paths(node)
    ensure_applications_started(node)
  end

  def peer_args do
    Enum.join(
      [
        "-loader inet -hosts 127.0.0.1",
        "-setcookie #{:erlang.get_cookie()}",
        # "-env MIX_BUILD_ROOT /tmp/gviz_build"
      ],
      " "
    )
    |> to_charlist()
  end

  defp rpc(node, module, function, args) do
    :rpc.block_call(node, module, function, args)
  end

  defp add_code_paths(node) do
    rpc(node, :code, :add_paths, [:code.get_path()])
  end

  @apps_to_start [
    # :iex,
    :logger,
    # :file_system,
    # :jason,
    :erlexec,
    :runtime_tools,
    # :inets,
    :stdlib,
    :crypto,
    # :hex,
    :elixir,
    # :public_key,
    # :gviz,
    # :mix,
    # :gettext,
    :kernel,
    :ssl,
    :compiler
    # :asn1,
  ]

  defp ensure_applications_started(node) do
    rpc(node, Application, :ensure_all_started, [:mix])
    rpc(node, Mix, :env, [Mix.env()])

    # for {app_name, _, _} <- Application.loaded_applications() do
    #   rpc(node, Application, :ensure_all_started, [app_name])
    # end
    for app_name <- @apps_to_start do
      IO.inspect(app_name, label: "starting app_name")

      rpc(node, Application, :ensure_all_started, [app_name])
      |> IO.inspect(label: "started #{inspect(app_name)}")
    end
  end
end

Then I ran:

PeerNode.start("first_peer_node")
Node.list(:hidden)

But it’s complaining about the :peer args:

** (ErlangError) Erlang error: {:invalid_arg, 45}
(stdlib 5.0.2) peer.erl:563: :peer."-verify_args/1-lc$^0/1-0-"/1
(stdlib 5.0.2) peer.erl:563: :peer.verify_args/1
(stdlib 5.0.2) peer.erl:626: :peer.start_it/2

For debugging I tried both with no luck:

:peer.start_link(%{
        host: :localhost,
        name: name,
        args: ["-loader inet -hosts 127.0.0.1 -setcookie #{:erlang.get_cookie()}"]
      })
:peer.start_link(%{
        host: :localhost,
        name: name,
        args: ~c"-loader inet -hosts 127.0.0.1 -setcookie hardcoded_cookie"
      })

I hit this same :invalid_arg error and got some help from the guys in the Elixir Slack (@LostKobrakai included :wink: ).
I needed to change the args to a list of char lists like so:

defp peer_args do
    [~c"-loader", ~c"inet", ~c"-hosts", ~c"127.0.0.1", ~c"-setcookie", ~c"#{:erlang.get_cookie()}"]
end

Very nice, thank you for that. So I updated peer_args and progressed to a new error :smiling_face_with_tear:

Evaluation process terminated - an exception was raised:
    ** (ArgumentError) errors were found at the given arguments:

  * 2nd argument: invalid option in list

        :erlang.open_port({:spawn_executable, ~c"/Applications/Livebook.app/Contents/Resources/rel/vendor/otp/erts-14.0.2/bin/erl"}, [{:args, [~c"-sname", [102, 105, 114, 115, 116, 95, 112, 101, 101, 114, 95, 110, 111, 100, 101, 64 | :localhost], ~c"-loader", ~c"inet", ~c"-hosts", ~c"127.0.0.1", ~c"-setcookie", ~c"NwYcxH4MNAGsdadsjh1TP7kefauOlwoRXq1", ~c"-detached", ~c"-peer_detached", ~c"-user", ~c"peer", ~c"-origin", ~c"g1h3KDN0aW9sbDRd0LWxpdmVsdfib29rX2FwcEsadfdsafdBEYXZpZHMtTWFjQm9vay1Qcm8AAAeIiAAAAAGT7Reg="]}, {:env, []}, :hide, :binary])
        (stdlib 5.0.2) peer.erl:309: :peer.init/1
        (stdlib 5.0.2) gen_server.erl:962: :gen_server.init_it/2
        (stdlib 5.0.2) gen_server.erl:917: :gen_server.init_it/6
        (stdlib 5.0.2) proc_lib.erl:241: :proc_lib.init_p_do_apply/3

So bringing it back to the original goals of this post:

The best approach so far seems to be

System.cmd("elixir", [Path.join(__DIR__, "action_test.exs")])

This starts a node under the hood with no worries about dependency overlap with the parent process. I’m starting to R&D how to refine it and deal with possible elixir --no-halt scenarios, cleanup timeouts, etc.
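A hedged sketch of one way to bound a run, assuming a per-action timeout is acceptable. Task.shutdown/2 kills the waiting BEAM process and closes the port, but a stubborn --no-halt script may still need explicit OS-level cleanup:

defmodule ActionTimeout do
  # Run the script but give up after `timeout` milliseconds.
  def run(script, timeout \\ 30_000) do
    task =
      Task.async(fn ->
        System.cmd("elixir", [script], stderr_to_stdout: true)
      end)

    case Task.yield(task, timeout) || Task.shutdown(task, :brutal_kill) do
      {:ok, {output, exit_status}} -> {:ok, output, exit_status}
      {:exit, reason} -> {:error, reason}
      nil -> {:error, :timeout}
    end
  end
end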

In general though, I really like exs scripts with Mix.install. I also want to explore storing script files in a DB, then dynamically loading and running them in my app. I think this will check all my desired requirements.

If you need an isolated VM you can use :peer to start one. No need to drop to shells or CLI stuff.

Absolutely not, though maybe for you. The problem with starting random processes like this is that sooner or later you will create a memory leak, and this will render all the fault-tolerance features useless.

@LostKobrakai All the debugging above is for the :peer approach in PeerNode
@D4no0 instead of ‘best’ perhaps I should have said ‘easiest’ so far

I’m learning and trying to understand more. If both the :peer and CLI approaches start a new BEAM instance, why is one considered better or worse?


Architecture ideas so far:

R&D Notes:

  • The CLI approach has the big benefit of caching Mix dependencies locally on the container across consecutive action runs (see the sketch after this list)
  • It was good to read over the initial proposal for Mix.install and to look over its source code.
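A hedged sketch of pinning that cache, assuming the container has a writable directory for it. Mix.install respects the MIX_INSTALL_DIR environment variable, so consecutive runs can reuse already-fetched deps (the cache path here is an assumption):

# `script` is the action .exs path from above; the cache dir is illustrative.
System.cmd("elixir", [script],
  env: [{"MIX_INSTALL_DIR", "/var/cache/mix_install"}],
  stderr_to_stdout: true
)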

Why all this?

I’m working on an open-source app idea for event handling. It will be an ‘if-this-then-that’ application where the ‘if-this’ logic is a no-code GUI frontend and the ‘then-that’ portion is Elixir code (probably exs Mix.install scripts). The essence of the idea is to find the right balance between code and no-code / low-code tools.