Logistic Regression: Gradient Descent

Visualizing Optimization on the Log-Loss (Binary Cross-Entropy) Surface

$$ \hat{y} = \sigma(mx + b) $$
$$ J(m,b) = -\frac{1}{n}\sum_{i=1}^{n}\left[ y_i \log(\hat{y}_i) + (1-y_i)\log(1-\hat{y}_i) \right] $$
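Gradient descent needs the partial derivatives of $J$ with respect to $m$ and $b$. Because the sigmoid's derivative cancels neatly against the cross-entropy terms, they reduce to the familiar "prediction minus label" form:

$$ \frac{\partial J}{\partial m} = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)\,x_i, \qquad \frac{\partial J}{\partial b} = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i) $$

Each update step moves $(m, b)$ against this gradient: $m \leftarrow m - \alpha\,\partial J/\partial m$ and $b \leftarrow b - \alpha\,\partial J/\partial b$, where $\alpha$ is the learning rate.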
[Interactive figure: the left panel, "Model Space (2D)", plots the data points, Passed (y=1) versus Failed (y=0), together with the current sigmoid fit; the right panel, "Log-Loss Surface (BCE)", traces the descent path over the (m, b) plane. Live readouts show Epoch, Slope (m), Intercept (b), Current Loss, and Gradient Norm, all starting at 0.]
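The loop the visualization animates can be sketched in a few lines of plain Python. This is a minimal sketch, not the demo's actual source: the pass/fail dataset (`xs` as study hours, `ys` as outcomes), the learning rate, and the epoch count are all assumptions chosen for illustration. It starts from the demo's initial point $(m, b) = (0, 0)$ and reports the same quantities shown in the readouts.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_loss(m, b, xs, ys):
    """J(m,b) = -(1/n) * sum[y*log(yhat) + (1-y)*log(1-yhat)]"""
    eps = 1e-12  # guard against log(0) at saturated predictions
    total = 0.0
    for x, y in zip(xs, ys):
        yhat = sigmoid(m * x + b)
        total += y * math.log(yhat + eps) + (1 - y) * math.log(1 - yhat + eps)
    return -total / len(xs)

def gradient_descent(xs, ys, lr=0.1, epochs=500):
    m, b = 0.0, 0.0  # the demo's starting point
    n = len(xs)
    for _ in range(epochs):
        # dJ/dm and dJ/db: mean of (prediction - label), weighted by x for m
        dm = sum((sigmoid(m * x + b) - y) * x for x, y in zip(xs, ys)) / n
        db = sum((sigmoid(m * x + b) - y) for x, y in zip(xs, ys)) / n
        m -= lr * dm
        b -= lr * db
    grad_norm = math.sqrt(dm ** 2 + db ** 2)  # "Gradient Norm" readout
    return m, b, grad_norm

# Hypothetical data: hours studied vs. passed (y=1) / failed (y=0)
xs = [1, 2, 3, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]

m, b, grad_norm = gradient_descent(xs, ys)
print(f"m={m:.3f}  b={b:.3f}  loss={bce_loss(m, b, xs, ys):.4f}  |grad|={grad_norm:.4f}")
```

At $(0, 0)$ every prediction is $\sigma(0) = 0.5$, so the starting loss is $\ln 2 \approx 0.693$ regardless of the data; each epoch then lowers the loss and shrinks the gradient norm as the point slides down the BCE surface.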