Evaluation metric, objective function, loss function, cost function, scoring function, error function

Keesari Vigneshwar Reddy - Aug 14 - Dev Community

Well, if you are new to machine learning, just getting started and seeing ML content in your feed, then you have probably run into the terms mentioned in the title.

Even in a Kaggle competition there is an Evaluation tab that gives you the details of the evaluation metric.

I would bet you have been confused by these terms at some point. This discussion should be all you need to understand the differences and use the right one.

There are six terms in total: evaluation metric, objective function, loss function, cost function, scoring function, error function.

First, let's classify the terms with a visual flowchart.

[Flowchart classifying the six terms]

Now let's discuss each term.

Evaluation metric

An evaluation metric scores the model after training. It does not influence how the model fits the data during the training process.
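Here is a minimal sketch, assuming scikit-learn and a synthetic dataset: the metric is computed only after fitting and plays no role in how the model was trained.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data just for illustration
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression().fit(X_train, y_train)   # training: the metric is not involved
y_pred = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))   # evaluation: computed after training
```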

Scoring function

A scoring function indicates better predictions when its value is higher, so evaluating with it implies a maximization process.
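As a quick sketch with made-up predictions: accuracy (like R² for regression) is a scoring function, so the higher value points to the better set of predictions.

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0, 1]
print(accuracy_score(y_true, [0, 1, 1, 0, 1]))  # 1.0 -> better
print(accuracy_score(y_true, [0, 0, 1, 0, 0]))  # 0.6 -> worse
```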

Error function

An error function, by contrast, indicates better predictions when it reports smaller values, so evaluating with it implies a minimization process.
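A quick sketch with made-up values: mean squared error (like mean absolute error) is an error function, so the smaller value points to the better predictions.

```python
from sklearn.metrics import mean_squared_error

y_true = [3.0, 5.0, 2.5]
print(mean_squared_error(y_true, [3.1, 4.9, 2.4]))  # small error -> better
print(mean_squared_error(y_true, [5.0, 2.0, 6.0]))  # large error -> worse
```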

Loss function

A loss function calculates a penalty from the ground truth and the prediction of a single data point. Training aims to minimize this loss.
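A minimal sketch: the loss is computed for one data point at a time. Here is the squared error loss for a single (ground truth, prediction) pair.

```python
def squared_error_loss(y_true: float, y_pred: float) -> float:
    # Penalty for a single data point
    return (y_true - y_pred) ** 2

print(squared_error_loss(3.0, 2.5))  # 0.25, the penalty for this one point
```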

Cost function

A cost function calculates a penalty over the whole dataset or batch used for training, usually by summing or averaging the loss penalties of its data points.

It can also include further terms, such as L1 or L2 penalties. The cost function directly affects how training happens.
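A minimal sketch (with a hypothetical weight vector w): the cost averages the per-point losses over the whole batch and can add an L2 penalty on the weights.

```python
import numpy as np

def cost(y_true, y_pred, w, l2=0.1):
    losses = (np.asarray(y_true) - np.asarray(y_pred)) ** 2   # per-point losses
    return losses.mean() + l2 * np.sum(np.asarray(w) ** 2)    # average + L2 penalty

# w is a made-up weight vector for illustration
print(cost([3.0, 5.0, 2.5], [3.1, 4.9, 2.4], w=[0.5, -1.2]))
```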

Objective function

An objective function is the general, safe-to-use term for whatever is being optimized during machine learning training. It comprises the cost function and can also take into account additional goals that are beneficial for training.

For instance, requiring sparse coefficients in the estimated model, or keeping the coefficients' values small, as in L1 and L2 regularization.

Moreover, whereas loss and cost functions imply optimization by minimization, an objective function is neutral: it can imply either a maximization or a minimization performed by the learning algorithm.
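A small sketch of that neutrality, assuming scikit-learn: Ridge's own objective is a minimization (squared error plus an L2 penalty), while make_scorer can flip an error function's sign so that the same goal is framed as a maximization.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import make_scorer, mean_squared_error
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, random_state=0)

# Error function reframed as a score to maximize: -MSE has the same optimum as MSE
neg_mse = make_scorer(mean_squared_error, greater_is_better=False)

scores = cross_val_score(Ridge(alpha=1.0), X, y, scoring=neg_mse)
print(scores.mean())  # higher (closer to 0) is better
```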

Having a basic understanding of these terms is important, because they are what you will be working with in a Kaggle competition. The difference between a real-world scenario and a Kaggle competition is that in the real world your model is evaluated on multiple metrics, whereas in Kaggle it is evaluated on a single metric.

These results are what you will share with your team members and colleagues.

Comment below with the metrics and functions you have used, and mention the categories they belong to.
