Yes, this is possible. If you create a bash script at .deliver/strategies/erlang-console containing a run() function and an optional help() function, you should be able to run edeliver console. If you want to pass custom parameters, you can add a .deliver/help script implementing an accepts_custom_command_argument() function, which receives the command as its first and the argument as its second parameter and should return 0 for accepted arguments, otherwise 1. The output of the optional print_custom_commands_help() function is appended to the edeliver help output.
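A minimal sketch of such a .deliver/help script, assuming the custom command is named console and accepts only the environments staging and production (the command name and accepted values are examples, not part of edeliver itself):

```shell
#!/usr/bin/env bash

# Called by edeliver with the command as first and the argument as second
# parameter; return 0 to accept the argument, 1 to reject it.
accepts_custom_command_argument() {
  local _command="$1" _argument="$2"
  [[ "$_command" = "console" ]] || return 1
  case "$_argument" in
    staging|production) return 0 ;;
    *)                  return 1 ;;
  esac
}

# The output of this optional function is appended to `edeliver help`.
print_custom_commands_help() {
  echo "  console [staging|production]  open a remote console on the deploy host"
}
```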
I was able to create the command, but how can I run the command on my staging or production host? I tried __sync_remote but it runs the command on my build host.
True, __sync_remote() executes commands on the build host. __remote() executes commands on the deploy hosts, but asynchronously, which will neither work for nor make sense with a remote console. Starting a remote console works for a single node only in any case, so you must ensure that there is only one deploy host in your config, or set the deploy host config to a single host according to your console command line arguments. Having a look at the __execute_node_command_synchronously() function might help. You could also source that file and call that function or __execute_node_command() directly. A custom .deliver/strategies/erlang-console script could look like this:
#!/usr/bin/env bash
REQUIRED_CONFIGS+=("APP")
# contains __execute_node_command
source "$(dirname $(dirname ${BASH_SOURCE[0]}))/strategies/erlang-node-execute"
# contains require_node_config
source "$(dirname $(dirname ${BASH_SOURCE[0]}))/libexec/erlang"
# sets the deploy hosts from the config (PRODUCTION_HOSTS or STAGING_HOSTS)
# there must be only one host configured to be able to use a remote console
require_node_config
run() {
  # ssh setup
  authorize_hosts
  # again make sure that only one node is used (take the first one)
  local _node=${NODES%% *}
  # run command on deploy host / node
  __execute_node_command "single_node" "$_node" "remote_console"
}
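As an aside on taking the first node: the parameter expansion that reliably yields the first entry of a space-separated list is `${NODES%% *}`, which removes everything from the first space onward (`${NODES# *}`, by contrast, would only strip a single leading space). A quick illustration:

```shell
# Hypothetical node list, as edeliver would assemble it from the config.
NODES="kura@host1 kura@host2 kura@host3"

# `%% *` deletes the longest suffix starting at a space, leaving the first node.
first_node=${NODES%% *}
echo "$first_node"   # → kura@host1
```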
#!/usr/bin/env bash
REQUIRED_CONFIGS+=("APP")
REQUIRED_CONFIGS+=("NODE_ENVIRONMENT")
REQUIRED_CONFIGS+=("NODE_ACTION")
NODE_ACTION="remote_console"
# default to "staging" if no deploy environment was given
NODE_ENVIRONMENT=${DEPLOY_ENVIRONMENT:-staging}
source ${BASE_PATH}/strategies/erlang-node-execute
run() {
  # ssh setup
  authorize_hosts
  local _node_command=$NODE_ACTION
  # take the first node only
  local _nodes=($NODES)
  local _node=${_nodes[0]}
  __execute_node_command "single_node" "$_node" "$_node_command"
}
This works, except for one detail: the Erlang node name is modified, and I end up in iex(remsh3379d012-kura@127.0.0.1)1> instead of iex(kura@127.0.0.1)1>.
I was able to trace the problem to the absence of a TTY. If my app is called without a TTY, Erlang seems to randomize the node name.
I can just replace __execute_node_command with a direct ssh call now that I am more familiar with the edeliver internals. But I wonder whether this is a bug, because I noticed that ssh was called with -t from __execute_node_command.
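For reference, a direct call might look like the following sketch. The user, host, and release path are hypothetical placeholders, not values from edeliver; the command is only assembled and echoed here, whereas a real strategy script would execute it with "${console_command[@]}":

```shell
# -t forces pseudo-TTY allocation so the release keeps its configured
# node name instead of a randomized one.
console_command=(ssh -t "deploy@deploy.example.com" \
  "/home/deploy/apps/kura/bin/kura remote_console")

# Echo instead of executing, so the sketch stays side-effect free.
echo "${console_command[*]}"
```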