Hi,

So, I’m doing an online Machine Learning course at the moment which involves playing with some Python / NumPy in Jupyter notebooks. For reinforcement, and to get to grips with `nx`, I’ve been translating some things into Livebook.

I’ve had a bit of a time figuring out how to translate a cost function for logistic regression (multiple inputs / features mapping to a 0 or 1). Here’s the Python:

```python
import numpy as np

def compute_cost_logistic(X, y, w, b):
    """
    Computes cost
    Args:
      X (ndarray (m,n)): Data, m examples with n features
      y (ndarray (m,)) : target values
      w (ndarray (n,)) : model parameters
      b (scalar)       : model parameter
    Returns:
      cost (scalar): cost
    """
    m = X.shape[0]
    cost = 0.0
    for i in range(m):
        z_i = np.dot(X[i], w) + b
        f_wb_i = sigmoid(z_i)  # sigmoid is defined elsewhere in the course notebook
        cost += -y[i] * np.log(f_wb_i) - (1 - y[i]) * np.log(1 - f_wb_i)
    cost = cost / m
    return cost
```

Basically the cost formula is such that when the target value (y for the row) is 1, the loss for the row is based on the log of the sigmoid of the calculated output; when the target value is 0, it’s based on log(1 - calculated).
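For reference, that loop collapses into a single vectorized NumPy expression, since the two terms select themselves row by row depending on whether `y` is 0 or 1. This is just a sketch, with a plain `sigmoid` helper defined inline because the course notebook’s version isn’t shown here:

```python
import numpy as np

def sigmoid(z):
    # Plain logistic function; the course notebook presumably has its own.
    return 1.0 / (1.0 + np.exp(-z))

def compute_cost_logistic_vec(X, y, w, b):
    # f_wb has shape (m,): predicted probability for each example.
    f_wb = sigmoid(X @ w + b)
    # Elementwise loss for every row at once, then the mean over m examples.
    return np.mean(-y * np.log(f_wb) - (1 - y) * np.log(1 - f_wb))
```

Because `y` is either 0 or 1, one of the two products vanishes per row, so no explicit split between the two cases is needed.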

Here’s what I got to with `nx`:

```elixir
def cost_logistic(x, y, w, b) do
  {m, _} = Nx.shape(x)

  one_minus_y = y |> Nx.multiply(-1) |> Nx.add(1)

  f_wb =
    x
    |> Nx.dot(w)
    |> Nx.add(b)
    |> Nx.sigmoid()

  neg_cost =
    f_wb
    |> Nx.multiply(-1)
    |> Nx.add(1)
    |> Nx.log()
    |> Nx.dot(one_minus_y)

  pos_cost =
    f_wb
    |> Nx.log()
    |> Nx.dot(y)

  neg_cost
  |> Nx.add(pos_cost)
  |> Nx.multiply(-1)
  |> Nx.divide(m)
end
```

The result matches the Python, but splitting the cost between the rows where yⁱ is 1 and those where it is 0 feels a bit meh. I’m really new to `nx`, and I was wondering if anyone had a cunning plan which would avoid the split.