The Value of Model Accuracy
This article was originally published at Algorithmia’s website. The company was acquired by DataRobot in 2021. This article may not be entirely up-to-date or refer to products and offerings no longer in existence.
There are many metrics by which one can measure the performance of a model. One possible measure is the mean absolute percent error (MAPE). It is calculated by taking the mean, over all observations, of the absolute difference between the actual value and the prediction, divided by the actual value. Another measure of performance is the receiver operating characteristic (ROC) curve.
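As a minimal sketch, the MAPE calculation described above can be written in a few lines of plain Python (the function name `mape` is my own, not a standard API):

```python
def mape(actuals, predictions):
    """Mean absolute percent error: mean of |actual - prediction| / |actual|."""
    return sum(abs(a - p) / abs(a) for a, p in zip(actuals, predictions)) / len(actuals)

# Errors of 10%, 10%, and 0% average out to roughly 6.7%.
print(mape([100, 200, 400], [110, 180, 400]))
```

Note that MAPE is undefined when an actual value is zero, which is one reason it suits regression targets that stay away from zero.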
The ROC curve is created by plotting the true positive rate of the classifier against the false positive rate at various threshold settings. One can then assess model performance based on this curve; models that achieve a high AUC (area under the curve) are favored.
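The threshold sweep behind an ROC curve can be sketched directly. The helper below (a hypothetical function, not from any library) classifies a score at or above each threshold as positive and collects the resulting (false positive rate, true positive rate) pairs:

```python
def roc_points(labels, scores, thresholds):
    """Sweep thresholds; classify score >= t as positive; return (FPR, TPR) pairs."""
    pos = sum(labels)            # number of actual positives
    neg = len(labels) - pos      # number of actual negatives
    points = []
    for t in thresholds:
        tp = sum(1 for y, s in zip(labels, scores) if s >= t and y == 1)
        fp = sum(1 for y, s in zip(labels, scores) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# A very low threshold labels everything positive (top-right of the curve);
# a threshold above every score labels everything negative (bottom-left).
curve = roc_points([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8], [0.0, 0.3, 0.5, 1.1])
```

Computing AUC is then a matter of integrating under these points, for example with the trapezoidal rule.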
One can also measure performance based on a model’s confusion matrix. In binary classification problems, the confusion matrix is a 2×2 matrix containing the counts of true positives, false positives, true negatives, and false negatives.
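For a binary classifier, those four counts are a straightforward tally. A minimal sketch, with `1` denoting the positive class:

```python
def confusion_matrix(labels, predictions):
    """Return (tp, fp, tn, fn) counts for a binary classifier."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 0)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    return tp, fp, tn, fn
```

Many other metrics, including accuracy, precision, and recall, can be derived from these four counts alone.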
Finally, there is model accuracy, which is an important but shallow way to evaluate a model: as a single number, it can only go so far compared to the richer measurement methods outlined above. Nevertheless, it’s important to understand how model accuracy is calculated and what it tells you.
What is model accuracy?
Model accuracy is defined as the number of classifications a model correctly predicts divided by the total number of predictions made. It’s one way of assessing the performance of a model, but certainly not the only way. In fact, a wide variety of richer measurements serve this purpose, and considering several of them together, rather than any single one in isolation, gives the best perspective on how well a model is performing on a given dataset.
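The definition above translates directly into code. A minimal sketch:

```python
def accuracy(labels, predictions):
    """Fraction of predictions that match the true labels."""
    correct = sum(1 for y, p in zip(labels, predictions) if y == p)
    return correct / len(labels)

# 3 of 4 predictions match the labels, so accuracy is 0.75.
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))
```

One caveat worth remembering: on a heavily imbalanced dataset, a model that always predicts the majority class can score high accuracy while being useless, which is why the richer metrics above matter.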
There are many instances in which a first pass at model training is insufficient and the accuracy needs to be increased. This is normal in machine learning: it’s nearly impossible to create a model that perfectly meets all the objectives you initially set out to achieve on your first try, and some iteration on your initial models is necessary.
How to improve model accuracy
One way to improve model accuracy is by tuning model parameters. For example, if one has a 3-layer neural network classifier with 100 nodes in its hidden layer, one could retry training the neural network with more or fewer nodes. This is called a parameter search, and is best implemented by first making relatively large changes to the parameter one is tuning, and then successively smaller changes as one settles closer to an optimum value.
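The coarse-to-fine parameter search described above can be sketched as a simple loop that evaluates candidate values around the current best and halves the search span each round. The `score` function here stands in for training and evaluating a model at a given hidden-layer size; the quadratic used in the example is purely illustrative:

```python
def coarse_to_fine_search(score, center=100, span=80, rounds=3, points=5):
    """Evaluate `score` at `points` candidates around the current best value,
    halving the search span each round (large steps first, then finer ones)."""
    best = center
    for _ in range(rounds):
        step = max(1, (2 * span) // (points - 1))
        candidates = [max(1, best - span + i * step) for i in range(points)]
        best = max(candidates, key=score)
        span = max(1, span // 2)
    return best

# Stand-in score peaking at 64 "hidden nodes"; a real search would call
# a (slow) train-and-validate routine here instead.
best_nodes = coarse_to_fine_search(lambda n: -(n - 64) ** 2)
```

In practice, libraries provide this machinery, e.g. scikit-learn’s `GridSearchCV` and `RandomizedSearchCV`, but the coarse-then-fine principle is the same.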
The risks of switching models
In addition, one could improve accuracy by using a fundamentally different model altogether. For example, it is now widely known that convolutional neural networks are among the best models for classifying images. If one had started out on an image classification problem with a simple logistic regression over pixel values and wasn’t getting the hoped-for performance, one could switch to a completely different model, like a convolutional neural network, for better accuracy.
The danger in this approach is that one may have to throw away all the work done with the previous model or models, and face all the risks and unknowns of creating a completely new model. It’s best to take the risk of constructing a completely new model to improve accuracy only when a substantial improvement in accuracy is obtainable and necessary.
Sometimes it is not possible to tell, without substantial effort and testing, whether a new model will provide a substantial improvement, and the risk of investing effort in it is not worth taking. At other times it can be determined in advance that a particular new model will provide a substantial improvement, and the risk is worth taking.
Retrain on new data
A final way to improve the accuracy of a model is by improving the data that the model is trained on. Self-driving cars are an example of using better data to improve model accuracy.
Though many aspects of a self-driving car are not classification problems (like determining a speed for the car), many others are, such as determining whether to turn on the blinkers, or take a left turn. Sometimes it is not the sheer amount of data that must be improved but the diversity of data.
A major challenge of self-driving cars is coverage of edge cases, or events that happen very rarely. In most modern machine learning models, the only way to achieve high accuracy on edge cases is to have seen those edge cases in training—hence why it’s a challenge.
One way to improve coverage of edge cases when training a self-driving model is to obtain driving data from purposefully eccentric driving environments, like San Francisco, where edge cases happen more frequently. Sometimes it’s the sheer amount of data that matters to a model’s accuracy.
The neural networks used in self-driving cars are widely known to be very data-hungry, requiring huge amounts of training data to reach optimal performance.
A model that doesn’t come close to achieving accuracy objectives on one training set can meet them on a similarly constructed but much larger training set. How closely the training data matches the conditions the model faces at test time also matters and can improve accuracy. It is for this reason that self-driving training sets are often obtained from actual driving rather than from simulations.
How to think about model accuracy
Model accuracy is a very important component in assessing model performance, but it is in no way the only metric. When evaluating a model, it is important to do so holistically, taking into account a variety of metrics and heuristics to assess how the model is performing on a specific dataset. On its own, the accuracy metric gives an incomplete picture.