I also posted this question on Stack Overflow: "How to get the loss history using Elixir's Axon library?"
Basically, I can see how to view the loss over time, but I don't see how to extract it after the loop has run:
something = fn state ->
  IO.inspect(state, label: "state_is")
  {:continue, state}
end

Axon.Loop.trainer(model, loss, optimizer)
|> Axon.Loop.handle_event(:epoch_completed, something)
|> Axon.Loop.run(train_data, %{}, epochs: epochs)
With Keras it’s done like so:
history = model.fit(X_train, y_train, epochs=2)
loss_history = history.history["loss"]
You can just accumulate the loss (or any other value you want to track) in a separate process using send, or something like that, instead of just IO.inspect.
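Something like this, roughly. This is a minimal sketch using an Agent rather than raw send/receive, and it assumes the trainer keeps the running loss under state.metrics["loss"]; the handler signature and metric key may differ between Axon versions, so check against your version's docs.

{:ok, collector} = Agent.start_link(fn -> [] end)

log_loss = fn %Axon.Loop.State{epoch: epoch, metrics: metrics} = state ->
  # Assumes the trainer tracks the running loss under the "loss" metric key
  epoch_loss = Nx.to_number(metrics["loss"])
  Agent.update(collector, fn history -> [{epoch, epoch_loss} | history] end)
  {:continue, state}
end

Axon.Loop.trainer(model, loss, optimizer)
|> Axon.Loop.handle_event(:epoch_completed, log_loss)
|> Axon.Loop.run(train_data, %{}, epochs: epochs)

# After training, read the history back out (oldest epoch first)
loss_history = collector |> Agent.get(& &1) |> Enum.reverse()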
Did you give Axon.Loop.metric/5 a try?
@krasenyp
I’m not sure metric/5 will work. I read it as: it accepts an array and then applies either running_avg or running_sum to those values.
The docs say: "By default, metrics keep a running average of the metric calculation. You can override this behavior by changing accumulate."
My search boils down to: how do I get Axon.Loop.run to return the accumulated state?
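For context, attaching a metric looks something like this. This is just a sketch based on my reading of the metric/5 docs, so the exact arguments might be off for your Axon version:

Axon.Loop.trainer(model, loss, optimizer)
|> Axon.Loop.metric(:mean_absolute_error, "mae")
|> Axon.Loop.run(train_data, %{}, epochs: epochs)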
Adding a metric does not change the output, which with a single dense layer looks like:
%{
  "dense_0" => %{
    "bias" => #Nx.Tensor<
      f32[1]
      [0.036935850977897644]
    >,
    "kernel" => #Nx.Tensor<
      f32[1][1]
      [
        [1.0034809112548828]
      ]
    >
  }
}
So no metrics available here.
I had thought of that, but it felt like I was working around Axon rather than with it. On that same note, I could also (I think) write the state to disk.
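Something along these lines, presumably. This is just plain Elixir serialization from a handler, not an Axon-specific API, and it assumes the values I want end up in state.metrics:

dump_metrics = fn %Axon.Loop.State{epoch: epoch, metrics: metrics} = state ->
  # Convert metric tensors to plain numbers, then write one file per epoch
  plain = Map.new(metrics, fn {name, value} -> {name, Nx.to_number(value)} end)
  File.write!("metrics_epoch_#{epoch}.etf", :erlang.term_to_binary(plain))
  {:continue, state}
end

Axon.Loop.trainer(model, loss, optimizer)
|> Axon.Loop.handle_event(:epoch_completed, dump_metrics)
|> Axon.Loop.run(train_data, %{}, epochs: epochs)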
I was really hoping there would be something built in, like:
Axon.Loop.run(loop, data, %{}, [return_history: [filter: :epoch]]) # 😅
which would return the history (values/metrics/etc.) from every epoch.
Right now I feel like I have to run training multiple times to pull out the data I want if I make a mistake. On a larger dataset with a more complex model that may be too time-consuming.