We are integrating our Elixir and Python codebases. Until now, the Python code ran on a separate server and exposed AI model inference endpoints over an API. We want to unify the two codebases so that Python runs alongside Elixir and the two communicate internally.
We plan to use FLAME to offload CPU-intensive work, such as loading the AI models and running inference, to worker pools so the main server is not overloaded, and to use Elixir Ports for communication between the Elixir and Python processes.
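For context, the pool we call into is an ordinary FLAME.Pool started under our application supervisor, roughly as sketched below (the application module name and the pool sizes are placeholders; only the pool name PoolModule matches the code further down):

# Hypothetical application module; the pool name matches the PoolModule
# used in FLAME.call/2 below, the sizes are placeholders.
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # ...other children (Repo, Endpoint, etc.)...
      {FLAME.Pool,
       name: PoolModule,
       min: 0,
       max: 5,
       max_concurrency: 10,
       idle_shutdown_after: 30_000}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end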
Our challenge is keeping the Python processes alive so the AI models don't have to be reloaded on every request. How can this be achieved?
defp do_something(...) do
  FLAME.call(PoolModule, fn ->
    # Run some logic
    ...

    # Prepare payload
    payload = ...

    # Decide which function to call in the Python code
    # (previously these were separate API endpoints)
    function_name = ...

    # Spawn the Python script and send the request as one JSON line on stdin
    port = Port.open({:spawn, "<path to python file here>"}, [:binary, :exit_status])

    send(
      port,
      {self(), {:command, Jason.encode!(%{function: function_name, data: payload}) <> "\n"}}
    )

    # Wait for the JSON response, or for the script to exit with an error
    receive do
      {^port, {:data, result}} ->
        case Jason.decode(result) do
          {:ok, response} -> {:ok, response}
          {:error, error} -> {:error, error}
        end

      {^port, {:exit_status, status}} when status != 0 ->
        {:error, "Python script exited with status #{status}"}
    end
  end)
end
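Conceptually, what we want is closer to the sketch below: a long-lived GenServer (MyApp.PythonWorker is a made-up name, as are the :script option and the timeouts) that opens the port once in init/1, so the Python process loads the models a single time and then serves many requests over the same stdin/stdout channel. What we can't figure out is how to keep such a process alive on a FLAME runner between FLAME.call invocations, instead of spawning a fresh port and reloading the models inside every call. Is FLAME.place_child meant for this, or is there a better pattern?

# Hypothetical sketch only: the module name, :script option, and timeouts are made up.
defmodule MyApp.PythonWorker do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Synchronous request: send one JSON line, wait for one JSON line back.
  def call_python(function_name, payload, timeout \\ 30_000) do
    GenServer.call(__MODULE__, {:run, function_name, payload}, timeout)
  end

  @impl true
  def init(opts) do
    # The port (and therefore the Python process with its loaded models)
    # is opened once here and reused for every request.
    script = Keyword.fetch!(opts, :script)
    port = Port.open({:spawn, script}, [:binary, :exit_status, {:line, 1_048_576}])
    {:ok, %{port: port}}
  end

  @impl true
  def handle_call({:run, function_name, payload}, _from, %{port: port} = state) do
    request = Jason.encode!(%{function: function_name, data: payload}) <> "\n"
    send(port, {self(), {:command, request}})

    # Assumes the Python side answers each request with a single JSON line.
    receive do
      {^port, {:data, {:eol, line}}} ->
        {:reply, Jason.decode(line), state}

      {^port, {:exit_status, status}} ->
        {:stop, {:python_exited, status}, {:error, {:python_exited, status}}, state}
    after
      30_000 -> {:reply, {:error, :timeout}, state}
    end
  end
end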