"We multiply the model's outputted probabilities together for the actual ouotcomes."
Log Loss (also known as Cross Entropy Loss) is used frequently in classification problems.
Log Loss is the most important classification metric based on probabilities.
It's hard to interpret raw log-loss values, but log-loss is still a good metric for comparing models. For any given problem, a lower log-loss value means better predictions.
Log Loss is a slight twist on something called the Likelihood Function.
Log Loss = (-1) * log(Likelihood)
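The relationship can be sketched in plain Python: multiply the predicted probabilities for the actual outcomes to get the likelihood, then take the negative log (averaged per sample, as most libraries report it). The example data below is hypothetical, chosen only to illustrate the calculation.

```python
import math

def likelihood(probs, outcomes):
    # Product of the probabilities the model assigned to what actually happened.
    # probs[i] is the predicted P(y=1); outcomes[i] is the true label (0 or 1).
    p = 1.0
    for prob, y in zip(probs, outcomes):
        p *= prob if y == 1 else (1.0 - prob)
    return p

def log_loss(probs, outcomes):
    # Log Loss = (-1) * log(likelihood), averaged over the samples
    return -math.log(likelihood(probs, outcomes)) / len(probs)

# hypothetical predictions for two positive examples
preds = [0.8, 0.7]
labels = [1, 1]
```

Because the likelihood is a product of many numbers below 1, it underflows quickly; taking the log turns the product into a sum, which is why log loss (not raw likelihood) is used in practice.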
" It penalizes heavily for being
very confident and
very wrong."
Loss functions & optimizers (Gradient Descent, for example) work together to fit the algorithm to the dataset in the best way possible.
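The interplay can be sketched with a minimal, assumed setup: a one-feature logistic model whose log-loss gradient is followed by plain gradient descent. The data, learning rate, and step count are all hypothetical choices for illustration.

```python
import math

def sigmoid(z):
    # Squashes a raw score into a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def fit(xs, ys, lr=0.1, steps=1000):
    # Gradient descent minimizing average log loss for a logistic model
    # p = sigmoid(w*x + b); the gradient of log loss w.r.t. the score is (p - y).
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            grad_w += (p - y) * x / n
            grad_b += (p - y) / n
        w -= lr * grad_w   # step opposite the gradient: the "descent"
        b -= lr * grad_b
    return w, b

# hypothetical toy data: negatives at x < 0, positives at x > 0
w, b = fit([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1])
```

The loss function scores the current parameters; the optimizer uses the loss's gradient to nudge them toward lower loss, repeating until the fit is good.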