# Translating some Python calculations into Elixir (for extra learning fun)

Hi,

So, I’m doing an online Machine Learning course at the moment which involves playing with some Python / NumPy in Jupyter notebooks. For reinforcement, and to get to grips with `nx`, I’ve been translating some things into Livebook.

I’ve had a bit of a time figuring out how to translate a cost function for logistic regression (multiple inputs / features mapping to a 0 or 1). Here’s the Python:

```python
def compute_cost_logistic(X, y, w, b):
    """
    Computes cost

    Args:
      X (ndarray (m,n)): Data, m examples with n features
      y (ndarray (m,)) : target values
      w (ndarray (n,)) : model parameters
      b (scalar)       : model parameter

    Returns:
      cost (scalar): cost
    """
    m = X.shape[0]
    cost = 0.0
    for i in range(m):
        z_i = np.dot(X[i], w) + b
        f_wb_i = sigmoid(z_i)
        cost += -y[i] * np.log(f_wb_i) - (1 - y[i]) * np.log(1 - f_wb_i)

    cost = cost / m
    return cost
```
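For reference, that per-row loop can be collapsed into whole-array operations in NumPy (a sketch of my own, not from the course; `sigmoid` is defined inline here since the notebook normally provides it, and the function name is made up):

```python
import numpy as np

def sigmoid(z):
    # Logistic function, applied elementwise
    return 1 / (1 + np.exp(-z))

def compute_cost_logistic_vec(X, y, w, b):
    # One matrix-vector product replaces the per-example loop
    f_wb = sigmoid(X @ w + b)
    # Mean of the per-example losses, same as summing and dividing by m
    return np.mean(-y * np.log(f_wb) - (1 - y) * np.log(1 - f_wb))
```

This gives the same scalar as the loop version, which makes it a closer starting point for the tensor-at-a-time style `nx` encourages.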

Basically, the cost formula is such that when the target value (`y` for the row) is 1, the loss for that row is based on the log of the sigmoid of the calculated output. When the target value is 0, it’s based on `log(1 - calculated)`.
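Written out, that’s the standard binary cross-entropy cost (I’m assuming the course’s notation here):

```latex
J(\mathbf{w}, b) = -\frac{1}{m} \sum_{i=1}^{m}
  \Big[ y^{(i)} \log f_{\mathbf{w},b}\big(\mathbf{x}^{(i)}\big)
      + \big(1 - y^{(i)}\big) \log\Big(1 - f_{\mathbf{w},b}\big(\mathbf{x}^{(i)}\big)\Big) \Big],
\qquad
f_{\mathbf{w},b}(\mathbf{x}) = \sigma(\mathbf{w} \cdot \mathbf{x} + b)
```

Each example contributes exactly one of the two terms, since `y` is always 0 or 1.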

Here’s what I got to with `nx`:

```elixir
def cost_logistic(x, y, w, b) do
  {m, _} = Nx.shape(x)
  one_minus_y = y |> Nx.multiply(-1) |> Nx.add(1)

  f_wb =
    x
    |> Nx.dot(w)
    |> Nx.add(b)
    |> Nx.sigmoid()

  neg_cost =
    f_wb
    |> Nx.multiply(-1)
    |> Nx.add(1)
    |> Nx.log()
    |> Nx.dot(one_minus_y)

  pos_cost =
    f_wb
    |> Nx.log()
    |> Nx.dot(y)

  pos_cost
  |> Nx.add(neg_cost)
  |> Nx.multiply(-1)
  |> Nx.divide(m)
end
```

The result matches the Python, but splitting the cost between the cases where `y` is 1 and where it is 0 feels a bit meh. I’m really new to `nx`, and I was wondering if anyone had a cunning plan that would avoid the split.
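For the record, one possible shape for this (a sketch only — the module and function names are mine, and I haven’t checked it against the course’s numbers): compute both loss terms elementwise over the whole tensor in one expression, letting the `y` / `1 - y` factors zero out the irrelevant term per row, then take a single `Nx.mean/1`.

```elixir
defmodule LogisticSketch do
  import Nx.Defn

  defn cost_logistic(x, y, w, b) do
    f_wb = x |> Nx.dot(w) |> Nx.add(b) |> Nx.sigmoid()

    # Elementwise losses for every example at once; rows where y == 1
    # zero out the second term and rows where y == 0 zero out the
    # first, so there's no explicit case split.
    y
    |> Nx.multiply(Nx.log(f_wb))
    |> Nx.add(Nx.multiply(Nx.subtract(1, y), Nx.log(Nx.subtract(1, f_wb))))
    |> Nx.negate()
    |> Nx.mean()
  end
end
```

The two terms of the formula are still there (they have to be), but it reads as one expression rather than two separately-accumulated costs.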


I think that the GitHub repository by @NickGnd (shared here) contains some relevant code. In particular, see the classifier Livebook in chapter 5.

BTW, the Programming Machine Learning book is a masterpiece for understanding ML (in my very personal opinion).


Thanks! That looks very relevant.

Thanks again for sharing that repository by @NickGnd. Reading through it, it follows pretty much the same approach for calculating loss: separating the cases where `y == 0` and where `y == 1`, so that’s reassuring.

The main thing is that it’s great to see relevant examples of how to use things. For instance, I completely missed that there was an `Nx.negate/1`, and that `Nx.mean/1` can stand in for dividing by the number of examples.

Also, I hadn’t bothered with `defn` & `exla`, but now realise I should embrace them.
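For anyone else landing here later, the wiring is roughly this (a sketch — it assumes `:nx` and `:exla` are already in your Mix / Livebook deps, and `Logistic` is a made-up module name):

```elixir
# Route default tensor operations through the EXLA (XLA) backend.
Nx.global_default_backend(EXLA.Backend)

defmodule Logistic do
  import Nx.Defn

  # defn compiles the whole numerical definition as one program
  # (e.g. via EXLA) instead of dispatching each Nx call separately.
  # Inside defn, arithmetic operators like `-` are overloaded to
  # broadcast over tensors.
  defn cost(x, y, w, b) do
    f_wb = Nx.sigmoid(Nx.dot(x, w) + b)

    y
    |> Nx.multiply(Nx.log(f_wb))
    |> Nx.add(Nx.multiply(1 - y, Nx.log(1 - f_wb)))
    |> Nx.mean()
    |> Nx.negate()
  end
end
```

The nice part is that the function body barely changes; `defn` mostly affects how it gets executed.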
