
Machine learning is good for search! Why?

The machine learning pipeline is an iterative process involving data curation, feature engineering, model training and deployment, and monitoring and maintenance of the deployed models. In a large system with many downstream tasks, a feature store is important for standardizing and managing feature generation and the workflows that consume those features. With the advent of self-supervised pre-trained embedding models as features, feature stores face new challenges in managing embeddings.

This time in machine learning, we look into biases in AI systems. The article gives a nice overview of the biases that can be present in our AI systems. Here are my short notes.

Biases in AI Systems

Machine learning is a complex system that involves learning from a large dataset with a predefined objective function. It is also well known that machine learning systems exhibit biases that come from every part of the machine learning pipeline: from dataset development to problem formulation to algorithm development to the evaluation stage.

Detecting, measuring, and mitigating biases in machine learning systems, and furthermore developing fair AI algorithms, are not easy and remain active research areas. This article provides a taxonomy of biases across the machine learning pipeline.

The machine learning pipeline begins with dataset creation, a process that includes data collection and data annotation. During this process, we may encounter four types of biases:

  • Sampling bias: caused by selecting particular types of instances more than others.
  • Measurement bias: caused by errors in human measurement or due to certain intrinsic habits of people in capturing data.
  • Label bias: associated with inconsistencies in the labeling process.
  • Negative set bias: caused by not having enough negative samples.

The next stage is problem formulation. In this stage, biases are caused by how a problem is defined. For example, in creditworthiness prediction using AI, the problem can be formulated around various business goals (such as maximizing profit margin) rather than fairness and non-discrimination.

Taxonomy of bias types along the AI pipeline.

During algorithm development and data analysis, several types of biases can potentially occur in your system:

  • Sample selection bias: caused by the selection of data instances as a result of conditioning on some variables in the dataset.
  • Confounding bias: occurs because the machine learning algorithm does not take into account all of the information in the data. There are two types of confounding bias:
    • omitted variable bias
    • proxy variable bias; for example, a zip code might be indicative of race
  • Design bias: caused by the limitations of the algorithm or other constraints on the system. It can take the form of algorithm bias, ranking bias, or presentation bias.

Here are several types of biases in the evaluation and validation stage:

  • Human evaluation bias: caused by the human evaluators who validate the machine learning model's performance.
  • Sample treatment bias: the test set selected for machine learning evaluation may itself be biased.
  • Validation and test dataset bias: due to the selection of an inappropriate dataset for testing.

Besides the taxonomy of bias types, the authors also give some practical guidelines for machine learning developers:

  • Domain-specific knowledge is crucial in defining and detecting bias.
  • It is important to understand features that are sensitive to the application.
  • Datasets used for analysis should be representative of the true population under consideration, as much as possible.
  • Have an appropriate standard for data annotation to get consistent labels.
  • Identifying all features that may be associated with the target feature is important. Features that are associated with both input and output can lead to biased estimates.
  • Restricting to some subset of the dataset can lead to unwanted sample selection bias.
  • Avoid sample treatment bias when evaluating machine learning performance.

1. How to avoid machine learning pitfalls: a guide for academic researchers

This guide aims to help newcomers avoid some of the mistakes that can occur when using machine learning (ML) within an academic research context. It’s written by an academic, and focuses on lessons that were learnt whilst doing ML research in academia, and whilst supervising students doing ML research. It is primarily aimed at students and other researchers who are relatively new to the field of ML, and only assumes a basic knowledge of ML techniques. Unlike similar guides aimed at a more general ML audience, it reflects the scholarly concerns of academia: such as the need to rigorously evaluate and compare models in order to get work published. However, most of the lessons are applicable to the broader use of ML, and it could be used as an introductory guide to anyone getting started in this field. To make it more readable, the guidance is written informally, in a Dos and Don’ts style. It’s not intended to be exhaustive, and references are provided for further reading.

2. Before you start to build models

It’s normal to want to rush into training and evaluating models, but it’s important to take the time to think about the goals of a project, to fully understand the data that will be used to support these goals, to consider any limitations of the data that need to be addressed, and to understand what’s already been done in your field. If you don’t do these things, then you may end up with results that are hard to publish, or models that are not appropriate for their intended purpose.

Do take the time to understand your data

Eventually you will want to publish your work. This is a lot easier to do if your data is from a reliable source, has been collected using a reliable methodology, and is of good quality. For instance, if you are using data collected from an internet resource, make sure you know where it came from. Is it described in a paper? If so, take a look at the paper; make sure it was published somewhere reputable, and check whether the authors mention any limitations of the data. Do not assume that, because a data set has been used by a number of papers, it is of good quality — sometimes data is used just because it is easy to get hold of, and some widely used data sets are known to have significant limitations. If you train your model using bad data, then you will most likely generate a bad model: a process known as garbage in garbage out. So, always begin by making sure your data makes sense. Do some exploratory data analysis. Look for missing or inconsistent records. It is much easier to do this now, before you train a model, rather than later, when you’re trying to explain to reviewers why you used bad data.

Don’t look at all your data

As you look at data, it is quite likely that you will spot patterns and make insights that guide your modeling. This is another good reason to look at data. However, it is important that you do not make untestable assumptions that will later feed into your model. The “untestable” bit is important here; it’s fine to make assumptions, but these should only feed into the training of the model, not the testing. So, to ensure this is the case, you should avoid looking closely at any test data in the initial exploratory analysis stage. Otherwise you might, consciously or unconsciously, make assumptions that limit the generality of your model in an untestable way. This is a theme I will return to several times, since the leakage of information from the test set into the training process is a common reason why ML models fail to generalize.

Do make sure you have enough data

If you don’t have enough data, then it may not be possible to train a model that generalizes. Working out whether this is the case can be challenging, and may not be evident until you start building models: it all depends on the signal to noise ratio in the data set. If the signal is strong, then you can get away with less data; if it’s weak, then you need more data. If you can’t get more data — and this is a common issue in many research fields — then you can make better use of existing data by using cross-validation. You can also use data augmentation techniques, and these can be quite effective for boosting small data sets (though see Don’t do data augmentation before splitting your data). Data augmentation is also useful in situations where you have limited data in certain parts of your data set, e.g. in classification problems where you have fewer samples in some classes than others — a situation known as class imbalance (see Don’t use accuracy with imbalanced data sets). However, if you have limited data, then it’s likely that you will also have to limit the complexity of the ML models you use, since models with many parameters, like deep neural networks, can easily overfit small data sets. Either way, it’s important to identify this issue early on, and come up with a suitable strategy to mitigate it.
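As a minimal sketch of the cross-validation suggestion, assuming Python and scikit-learn (which the guide itself mentions later), the snippet below scores a single model with 5-fold cross-validation rather than relying on one train/test split; the dataset and model are illustrative choices only:

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Every sample is used for both training and validation across the folds,
    # which makes better use of a limited data set than a single split would.
    X, y = load_breast_cancer(return_X_y=True)
    model = make_pipeline(StandardScaler(), LogisticRegression())
    scores = cross_val_score(model, X, y, cv=5)
    print(scores.mean(), scores.std())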

Do talk to domain experts

Domain experts can be very valuable. They can help you to understand which problems are useful to solve, they can help you choose the most appropriate feature set and ML model to use, and they can help you publish to the most appropriate audience. Failing to consider the opinion of domain experts can lead to projects which don’t solve useful problems, or which solve useful problems in inappropriate ways. An example of the latter is using an opaque ML model to solve a problem where there is a strong need to understand how the model reaches an outcome, e.g. in making medical or financial decisions. At the beginning of a project, domain experts can help you to understand the data, and point you towards features that are likely to be predictive. At the end of a project, they can help you to publish in domain-specific journals, and hence reach an audience that is most likely to benefit from your research.

Do survey the literature

You’re probably not the first person to throw ML at a particular problem domain, so it’s important to understand what has and hasn’t been done previously. Other people having worked on the same problem isn’t a bad thing; academic progress is typically an iterative process, with each study providing information that can guide the next. It may be discouraging to find that someone has already explored your great idea, but they most likely left plenty of avenues of investigation still open, and their previous work can be used as justification for your work. To ignore previous studies is to potentially miss out on valuable information. For example, someone may have tried your proposed approach before and found fundamental reasons why it won’t work (and therefore saved you a few years of frustration), or they may have partially solved the problem in a way that you can build on. So, it’s important to do a literature review before you start work; leaving it too late may mean that you are left scrambling to explain why you are covering the same ground or not building on existing knowledge when you come to write a paper.

Do think about how your model will be deployed

Why do you want to build an ML model? This is an important question, and the answer should influence the process you use to develop your model. Many academic studies are just that — studies — and not really intended to produce models that will be used in the real world. This is fair enough, since the process of building and analyzing models can itself give very useful insights into a problem. However, for many academic studies, the eventual goal is to produce an ML model that can be deployed in a real world situation. If this is the case, then it’s worth thinking early on about how it is going to be deployed. For instance, if it’s going to be deployed in a resource-limited environment, such as a sensor or a robot, this may place limitations on the complexity of the model. If there are time constraints, e.g. a classification of a signal is required within milliseconds, then this also needs to be taken into account when selecting a model. Another consideration is how the model is going to be tied into the broader software system within which it is deployed. This procedure is often far from simple. However, emerging approaches such as ML Ops aim to address some of the difficulties.

3. How to reliably build models

Building models is one of the more enjoyable parts of ML. With modern ML frameworks, it’s easy to throw all manner of approaches at your data and see what sticks. However, this can lead to a disorganized mess of experiments that’s hard to justify and hard to write up. So, it’s important to approach model building in an organized manner, making sure you use data correctly, and putting adequate consideration into the choice of models.

Don’t allow test data to leak into the training process

It’s essential to have data that you can use to measure how well your model generalizes. A common problem is allowing information about this data to leak into the configuration, training or selection of models. When this happens, the data no longer provides a reliable measure of generality, and this is a common reason why published ML models often fail to generalize to real world data. There are a number of ways that information can leak from a test set. Some of these seem quite innocuous. For instance, during data preparation, it is easy to use the means and ranges of variables computed over the whole data set to carry out variable scaling; in order to prevent information leakage, this kind of thing should only be done with the training data. Other common examples of information leakage are carrying out feature selection before partitioning the data (see Do be careful where you optimize hyperparameters and select features), using the same test data to evaluate the generality of multiple models (see Do use a validation set and Don’t always believe results from community benchmarks), and applying data augmentation before splitting off the test data (see Don’t do data augmentation before splitting your data). The best thing you can do to prevent these issues is to partition off a subset of your data right at the start of your project, and only use this independent test set once to measure the generality of a single model at the end of the project (see Do save some data to evaluate your final model instance). Be particularly careful if you’re working with time series data, since random splits of the data can easily cause leakage and overfitting. Data leakage is discussed more broadly in the wider literature.
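As a minimal sketch of the variable-scaling point, assuming Python and scikit-learn (the synthetic dataset and SVM are illustrative), the scaler can be fitted inside a pipeline so that only training-set statistics are used:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # The scaler is fitted on X_train only; at test time it reuses the training
    # means and variances rather than recomputing them on the test set.
    model = make_pipeline(StandardScaler(), SVC())
    model.fit(X_train, y_train)
    print(model.score(X_test, y_test))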

Do try out a range of different models

Generally speaking, there’s no such thing as a single best ML model. In fact, there’s a proof of this, in the form of the No Free Lunch theorem, which shows that no ML approach is any better than any other when considered over every possible problem. So, your job is to find the ML model that works well for your particular problem. There may be some a priori knowledge of this, in the form of good quality research on closely related problems, but most of the time you’re operating in the dark. Fortunately, modern ML libraries in Python (e.g. scikit-learn), R, Julia (e.g. MLJ), etc. allow you to try out multiple models with only small changes to your code, so there’s no reason not to try out multiple models and find out for yourself which one works best. However, Don’t use inappropriate models, and Do use a validation set, rather than the test set, to evaluate them. When comparing models, Do optimize your model’s hyperparameters and Do evaluate a model multiple times to make sure you’re giving them all a fair chance, and Do correct for multiple comparisons when you publish your results.
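A hedged sketch of what this looks like in practice, assuming scikit-learn and an invented synthetic dataset; each candidate model is scored on a validation split rather than the test set:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

    # Swapping model families requires only a different entry in this dictionary.
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(random_state=0),
        "svm": SVC(),
    }
    for name, model in candidates.items():
        model.fit(X_train, y_train)
        print(name, model.score(X_val, y_val))  # validation score, not the test set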

Don’t use inappropriate models

By lowering the barrier to implementation, modern ML libraries also make it easy to apply inappropriate models to your data. This, in turn, could look bad when you try to publish your results. A simple example of this is applying models that expect categorical features to a dataset containing numerical features, or vice versa. Some ML libraries allow you to do this, but it may result in a poor model due to loss of information. If you really want to use such a model, then you should transform the features first; there are various ways of doing this, ranging from simple one-hot encodings to complex learned embeddings. Other examples of inappropriate model choice include using a classification model where a regression model would make more sense (or vice versa), attempting to apply a model that assumes no dependencies between variables to time series data, or using a model that is unnecessarily complex (see Don’t assume deep learning is best). Also, if you’re planning to use your model in practice, Do think about how your model will be deployed, and don’t use models that aren’t appropriate for your use case.
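As an illustrative sketch of the transformation step, assuming pandas and scikit-learn (the column names and toy data are made up), a categorical column can be one-hot encoded before it reaches a model that expects numerical features:

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder

    df = pd.DataFrame({
        "colour": ["red", "green", "blue", "green"],  # categorical feature
        "size": [1.0, 2.5, 3.1, 0.7],                 # numerical feature
    })
    y = [0, 1, 1, 0]

    # One-hot encode the categorical column and pass the numerical one through.
    preprocess = ColumnTransformer(
        [("onehot", OneHotEncoder(handle_unknown="ignore"), ["colour"])],
        remainder="passthrough",
    )
    model = make_pipeline(preprocess, LogisticRegression())
    model.fit(df, y)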

Don’t assume deep learning is best

An increasingly common pitfall is to assume that deep neural networks (DNNs) will provide the best solution to any problem, and consequently fail to try out other, possibly more appropriate, models. Whilst DNNs are great at certain tasks, they are not good at everything; there are plenty of examples of them being out-performed by “old fashioned” machine learning models such as random forests and SVMs, and there are also theoretical reasons why they won’t always be the best choice (see Do try out a range of different models). In particular, a deep neural network is unlikely to be a good choice if you have limited data, if domain knowledge suggests that the underlying pattern is quite simple, or if the model needs to be interpretable. This last point is particularly worth considering: a deep neural network is essentially a very complex piece of decision making that emerges from interactions between a large number of non-linear functions. Non-linear functions are hard to follow at the best of times, but when you start joining them together, their behaviour gets very complicated very fast. Whilst explainable AI methods (see Do look at your models) can shine some light on the workings of deep neural networks, they can also mislead you by ironing out the true complexities of the decision space. For this reason, you should take care when using either DNNs or explainable AI for models that are going to make high stakes or safety critical decisions.

Do optimize your model’s hyperparameters

Many models have hyperparameters — that is, numbers or settings that affect the configuration of the model. Examples include the kernel function used in an SVM, the number of trees in a random forest, and the architecture of a neural network. Many of these hyperparameters significantly affect the performance of the model, and there is generally no one-size-fits-all. That is, they need to be fitted to your particular data set in order to get the most out of the model. Whilst it may be tempting to fiddle around with hyperparameters until you find something that works, this is not likely to be an optimal approach. It’s much better to use some kind of hyperparameter optimization strategy, and this is much easier to justify when you write it up. Basic strategies include random search and grid search, but these don’t scale well to large numbers of hyperparameters or to models that are expensive to train, so it’s worth using tools that search for optimal configurations in a more intelligent manner. It is also possible to use AutoML techniques to optimize both the choice of model and its hyperparameters, in addition to other parts of the data mining pipeline.
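A minimal sketch of a grid search, assuming scikit-learn; the SVM and the parameter grid shown here are purely illustrative:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, random_state=0)

    # Search the grid with 5-fold cross-validation on the training data.
    param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
    search = GridSearchCV(SVC(), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)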

Do be careful where you optimize hyperparameters and select features

Another common stage of training a model is to carry out feature selection. However, when carrying out both hyperparameter optimization and feature selection, it is important to treat them as part of model training, and not something more general that you do before model training. A particularly common error is to do feature selection on the whole data set before model training begins, but this will result in information leaking from the test set into the training process. So, if you optimize the hyperparameters or features used by a model, you should ideally use exactly the same data that you use to train the model. A common technique for doing this is nested cross-validation (also known as double cross-validation), which involves doing hyperparameter optimization and feature selection as an extra loop inside the main cross-validation loop. See Do evaluate a model multiple times for more information about cross-validation.
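Here is a hedged sketch of nested cross-validation using scikit-learn; the hyperparameter grid and fold counts are illustrative assumptions:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, random_state=0)

    # The inner loop tunes hyperparameters on training folds only;
    # the outer loop estimates generalization performance.
    inner_search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)
    outer_scores = cross_val_score(inner_search, X, y, cv=5)
    print(outer_scores.mean(), outer_scores.std())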

4. How to robustly evaluate models

In order to contribute to progress in your field, you need to have valid results that you can draw reliable conclusions from. Unfortunately it’s really easy to evaluate ML models unfairly, and, by doing so, muddy the waters of academic progress. So, think carefully about how you are going to use data in your experiments, how you are going to measure the true performance of your models, and how you are going to report this performance in a meaningful and informative way.

Do use an appropriate test set

First of all, always use a test set to measure the generality of an ML model. How well a model performs on the training set is almost meaningless, and a sufficiently complex model can entirely learn a training set yet capture no generalizable knowledge. It’s also important to make sure the data in the test set is appropriate. That is, it should not overlap with the training set and it should be representative of the wider population. For example, consider a photographic data set of objects where the images in the training and test set were collected outdoors on a sunny day. The presence of the same weather conditions means that the test set will not be independent, and by not capturing a broader variety of weather conditions, it will also not be representative. Similar situations can occur when a single piece of equipment is used to collect both the training and test data. If the model overlearns characteristics of the equipment, it will likely not generalize to other pieces of equipment, and this will not be detectable by evaluating it on the test set.

Don’t do data augmentation before splitting your data

Data augmentation (see Do make sure you have enough data) can be a useful technique for balancing datasets and boosting the generality and robustness of ML models. However, it’s important to do data augmentation only on the training set, and not on data that’s going to be used for testing. Including augmented data in the test set can lead to a number of problems. One problem is that the model may overfit the characteristics of the augmented data, rather than the original samples, and you won’t be able to detect this if your test set also contains augmented data. A more critical problem occurs when data augmentation is applied to the entire data set before it is split into training and test sets. In this scenario, augmented versions of training samples may end up in the test set, which in the worst case can lead to a particularly nefarious form of data leakage in which the test samples are mostly variants of the training samples. This problem has been shown to affect entire fields of research.
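A minimal sketch of the correct ordering, assuming NumPy and scikit-learn; the Gaussian-noise "augmentation" is a stand-in for whatever augmentation technique you actually use:

    import numpy as np
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = rng.integers(0, 2, size=200)

    # Split first, so the test set never contains augmented copies of training samples.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Augment the training portion only; the test set keeps its original samples.
    X_aug = X_train + rng.normal(scale=0.05, size=X_train.shape)
    X_train_full = np.vstack([X_train, X_aug])
    y_train_full = np.concatenate([y_train, y_train])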

Do use a validation set

It’s not unusual to train multiple models in succession, using knowledge gained about each model’s performance to guide the configuration of the next. When doing this, it’s important not to use the test set within this process. Rather, a separate validation set should be used to measure performance. This contains a set of samples that are not directly used in training, but which are used to guide training. If you use the test set for this purpose, then the test set will become an implicit part of the training process, and will no longer be able to serve as an independent measure of generality, i.e. your models will progressively overfit the test set. Another benefit of having a validation set is that you can do early stopping, where, during the training of a single model, the model is measured against the validation set at each iteration of the training process. Training is then stopped when the validation score starts to fall, since this indicates that the model is starting to overfit the training data.
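A hedged sketch of a three-way split with scikit-learn; the split proportions and synthetic dataset are arbitrary assumptions:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, random_state=0)

    # Carve off the test set first, then split the remainder into training and
    # validation portions; the validation set guides model selection and early
    # stopping, while the test set is touched only once at the end.
    X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)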

Do evaluate a model multiple times

Many ML models are unstable. That is, if you train them multiple times, or if you make small changes to the training data, then their performance varies significantly. This means that a single evaluation of a model can be unreliable, and may either underestimate or overestimate the model’s true potential. For this reason, it is common to carry out multiple evaluations. There are numerous ways of doing this, and most involve training the model multiple times using different subsets of the training data. Cross-validation (CV) is particularly popular, and comes in numerous varieties. Ten-fold CV, where training is repeated ten times, is arguably the standard, but you can add more rigour by using repeated CV, where the whole CV process is repeated multiple times with different partitionings of the data. If some of your data classes are small, it’s important to do stratification, which ensures each class is adequately represented in each fold. It is common to report the mean and standard deviation of the multiple evaluations, but it is also advisable to keep a record of the individual scores in case you later use a statistical test to compare models (see Do use statistical tests when comparing models).
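As a minimal sketch, assuming scikit-learn, repeated stratified cross-validation can be expressed as follows; the model, fold count, and repeat count are illustrative:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

    X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=0)

    # Stratified folds keep the class balance in each split; the whole CV
    # process is repeated three times with different partitionings.
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
    scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv)
    print(scores.mean(), scores.std())  # also keep `scores` itself for statistical tests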

Do save some data to evaluate your final model instance

I’ve used the term model quite loosely, but there is an important distinction between evaluating the potential of a general model (e.g. how well a neural network can solve your problem), and the performance of a particular model instance (e.g. a specific neural network produced by one run of back-propagation). Cross-validation is good at the former, but it’s less useful for the latter. Say, for instance, that you carried out ten-fold cross-validation. This would result in ten model instances. Say you then select the instance with the highest test fold score as the model which you will use in practice. How do you report its performance? Well, you might think that its test fold score is a reliable measure of its performance, but it probably isn’t. First, the amount of data in a single fold is relatively small. Second, the instance with the highest score could well be the one with the easiest test fold, so the evaluation data it contains may not be representative. Consequently, the only way of getting a reliable estimate of a model instance’s generality may be to use another test set. So, if you have enough data, it’s better to keep some aside and only use it once to provide an unbiased estimate of the final selected model instance.

Don’t use accuracy with imbalanced data sets

Be careful which metrics you use to evaluate your ML models. For instance, in the case of classification models, the most commonly used metric is accuracy, which is the proportion of samples in the data set that were correctly classified by the model. This works fine if your classes are balanced, i.e. if each class is represented by a similar number of samples within the data set. But many data sets are not balanced, and in this case accuracy can be a very misleading metric. Consider, for example, a data set in which 90% of the samples represent one class, and 10% of the samples represent another class. A binary classifier which always outputs the first class, regardless of its input, would have an accuracy of 90%, despite being completely useless. In this kind of situation, it would be preferable to use a metric such as F1 score, Cohen’s kappa coefficient (κ) or Matthews Correlation Coefficient (MCC), all of which are relatively insensitive to class size imbalance. Methods for dealing with imbalanced data are reviewed more broadly in the wider literature.
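The 90/10 example above can be made concrete with a hedged sketch, assuming the scikit-learn metrics module; the always-majority "classifier" is simulated directly:

    import numpy as np
    from sklearn.metrics import (accuracy_score, cohen_kappa_score, f1_score,
                                 matthews_corrcoef)

    y_true = np.array([0] * 90 + [1] * 10)   # 90% majority class, 10% minority
    y_pred = np.zeros(100, dtype=int)        # always predict the majority class

    print(accuracy_score(y_true, y_pred))    # 0.90, looks deceptively good
    print(f1_score(y_true, y_pred))          # 0.0
    print(cohen_kappa_score(y_true, y_pred)) # 0.0
    print(matthews_corrcoef(y_true, y_pred)) # 0.0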

5. How to compare models fairly

Comparing models is the basis of academic research, but it’s surprisingly difficult to get it right. If you carry out a comparison unfairly, and publish it, then other researchers may subsequently be led astray. So, do make sure that you evaluate different models within the same context, do explore multiple perspectives, and do make correct use of statistical tests.

Don’t assume a bigger number means a better model

It’s not uncommon for a paper to state something like “In previous research, accuracies of up to 94% were reported. Our model achieved 95%, and is therefore better.” There are various reasons why a higher figure does not imply a better model. For instance, if the models were trained or evaluated on different partitions of the same data set, then small differences in performance may be due to this. If they used different data sets entirely, then this may account for even large differences in performance. Another reason for unfair comparisons is the failure to carry out the same amount of hyperparameter optimization (see Do optimize your model’s hyperparameters) when comparing models; for instance, if one model has default settings and the other has been optimized, then the comparison won’t be fair. For these reasons, and others, comparisons based on published figures should always be treated with caution. To really be sure of a fair comparison between two approaches, you should freshly implement all the models you’re comparing, optimize each one to the same degree, carry out multiple evaluations (see Do evaluate a model multiple times), and then use statistical tests (see Do use statistical tests when comparing models) to determine whether the differences in performance are significant.

Do use statistical tests when comparing models

If you want to convince people that your model is better than someone else’s, then a statistical test is a very useful tool. Broadly speaking, there are two categories of tests for comparing individual ML models. The first is used to compare individual model instances, e.g. two trained decision trees. For example, McNemar’s test is a fairly common choice for comparing two classifiers, and works by comparing the classifiers’ output labels for each sample in the test set (so do remember to record these). The second category of tests are used to compare two models more generally, e.g. whether a decision tree or a neural network is a better fit for the data. These require multiple evaluations of each model, which you can get by using cross-validation or repeated resampling (or, if your training algorithm is stochastic, multiple repeats using the same data). The test then compares the two resulting distributions. Student’s t-test is a common choice for this kind of comparison, but it’s only reliable when the distributions are normally distributed, which is often not the case. A safer bet is the Mann-Whitney U test, since this does not assume that the distributions are normal. Also see Do correct for multiple comparisons and Do be careful when reporting statistical significance.
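A minimal sketch of the Mann-Whitney U test, assuming SciPy; the two score lists are invented stand-ins for the per-fold results of two models:

    from scipy.stats import mannwhitneyu

    scores_a = [0.81, 0.83, 0.80, 0.85, 0.82, 0.84, 0.79, 0.83, 0.82, 0.81]
    scores_b = [0.78, 0.80, 0.77, 0.79, 0.81, 0.78, 0.76, 0.80, 0.79, 0.77]

    # No normality assumption is made about the two score distributions.
    stat, p_value = mannwhitneyu(scores_a, scores_b)
    print(stat, p_value)  # a small p-value suggests a real difference in performance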

Do correct for multiple comparisons

Things get a bit more complicated when you want to use statistical tests to compare more than two models, since doing multiple pairwise tests is a bit like using the test set multiple times — it can lead to overly-optimistic interpretations of significance. Basically, each time you carry out a comparison between two models using a statistical test, there’s a probability that it will discover significant differences where there aren’t any. This is represented by the confidence level of the test, usually set at 95%: meaning that 1 in 20 times it will give you a false positive. For a single comparison, this may be a level of uncertainty you can live with. However, it accumulates. That is, if you do 20 pairwise tests with a confidence level of 95%, one of them is likely to give you the wrong answer. This is known as the multiplicity effect, and is an example of a broader issue in data science known as data dredging or p-hacking. To address this problem, you can apply a correction for multiple tests. The most common approach is the Bonferroni correction, a very simple method that lowers the significance threshold based on the number of tests that are being carried out. However, there are numerous other approaches, and there is also some debate about when and where these corrections should be applied.
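The Bonferroni correction is simple enough to sketch in a few lines of plain Python; the p-values below are invented for illustration:

    # With k pairwise tests, compare each p-value against alpha / k instead of alpha.
    p_values = [0.04, 0.01, 0.20, 0.03]   # p-values from 4 pairwise model comparisons
    alpha = 0.05
    threshold = alpha / len(p_values)      # 0.0125 after correction

    for p in p_values:
        print(p, "significant" if p < threshold else "not significant")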

Don’t always believe results from community benchmarks

In certain problem domains, it has become commonplace to use benchmark data sets to evaluate new ML models. The idea is that, because everyone is using the same data to train and test their models, then comparisons will be more transparent. Unfortunately this approach has some major drawbacks. First, if access to the test set is unrestricted, then you can’t assume that people haven’t used it as part of the training process. This is known as “developing to the test set”, and leads to results that are heavily overoptimistic. A more subtle problem is that, even if everyone who uses the data only uses the test set once, collectively the test set is being used many times by the community. In effect, by comparing lots of models on the same test set, it becomes increasingly likely that the best model just happens to over-fit the test set, and doesn’t necessarily generalize any better than the other models (see Do correct for multiple comparisons). For these, and other reasons, you should be careful how much you read into results from a benchmark data set, and don’t assume that a small increase in performance is significant. Also see Do report performance in multiple ways.

Do consider combinations of models

Whilst this section focuses on comparing models, it’s good to be aware that ML is not always about choosing between different models. Often it makes sense to use combinations of models. Different ML models explore different trade-offs; by combining them, you can sometimes compensate for the weaknesses of one model by using the strengths of another model, and vice versa. Such composite models are known as ensembles, and the process of generating them is known as ensemble learning. There are lots of ensemble learning approaches. However, they can be roughly divided into those that form ensembles out of the same base model type, e.g. an ensemble of decision trees, and those that combine different kinds of ML models, e.g. a combination of a decision tree, an SVM, and a deep neural network. The first category includes many classic approaches, such as bagging and boosting. Ensembles can either be formed from existing trained models, or the base models can be trained as part of the process, typically with the aim of creating a diverse selection of models that make mistakes on different parts of the data space. A general consideration in ensemble learning is how to combine the different base models; approaches to this vary from very simple methods such as voting, to more complex approaches that use another ML model to aggregate the outputs of the base models. This latter approach is often referred to as stacking or stacked generalization.
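As a hedged sketch of stacking, assuming scikit-learn; the base models and the meta-model are illustrative choices, not a recommendation:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=300, random_state=0)

    # A logistic regression aggregates the outputs of the two base models.
    ensemble = StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier()), ("svm", SVC())],
        final_estimator=LogisticRegression(),
    )
    ensemble.fit(X, y)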

6. How to report your results

The aim of academic research is not self-aggrandisement, but rather an opportunity to contribute to knowledge. In order to effectively contribute to knowledge, you need to provide a complete picture of your work, covering both what worked and what didn’t. ML is often about trade-offs — it’s very rare that one model is better than another in every way that matters — and you should try to reflect this with a nuanced and considered approach to reporting results and conclusions.

Do be transparent

First of all, always try to be transparent about what you’ve done, and what you’ve discovered, since this will make it easier for other people to build upon your work. In particular, it’s good practice to share your models in an accessible way. For instance, if you used a script to implement all your experiments, then share the script when you publish the results. This means that other people can easily repeat your experiments, which adds confidence to your work. It also makes it a lot easier for people to compare models, since they no longer have to reimplement everything from scratch in order to ensure a fair comparison. Knowing that you will be sharing your work also encourages you to be more careful, document your experiments well, and write clean code, which benefits you as much as anyone else. It’s also worth noting that issues surrounding reproducibility are gaining prominence in the ML community, so in the future you may not be able to publish work unless your workflow is adequately documented and shared. Checklists are a handy tool for making sure your workflow is complete. You might also find experiment tracking frameworks, such as MLflow, useful for recording your workflow.

Do report performance in multiple ways

One way to achieve better rigour when evaluating and comparing models is to use multiple data sets. This helps to overcome any deficiencies associated with individual data sets (see Don’t always believe results from community benchmarks) and allows you to present a more complete picture of your model’s performance. It’s also good practice to report multiple metrics for each data set, since different metrics can present different perspectives on the results, and increase the transparency of your work. For example, if you use accuracy, it’s also a good idea to include metrics that are less sensitive to class imbalances (see Don’t use accuracy with imbalanced data sets). If you use a partial metric like precision, recall, sensitivity or specificity, also include a metric that gives a more complete picture of your model’s error rates. And make sure it’s clear which metrics you are using. For instance, if you report F-scores, be clear whether this is F1, or some other balance between precision and recall. If you report AUC, indicate whether this is the area under the ROC curve or the PR curve.
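A minimal sketch of multi-metric reporting with scikit-learn; the dataset and model are placeholders:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report, roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, weights=[0.7, 0.3], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # Per-class precision, recall and F1, plus the area under the ROC curve.
    print(classification_report(y_test, model.predict(X_test)))
    print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))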

Don’t generalize beyond the data

It’s important not to present invalid conclusions, since this can lead other researchers astray. A common mistake is to make general statements that are not supported by the data used to train and evaluate models. For instance, if your model does really well on one data set, this does not mean that it will do well on other data sets. Whilst you can get more robust insights by using multiple data sets (see Do report performance in multiple ways), there will always be a limit to what you can infer from any experimental study. There are numerous reasons for this, many of which are to do with how datasets are curated. One common issue is bias, or sampling error: the data may not be sufficiently representative of the real world. Another is overlap: multiple data sets may not be independent, and may have similar biases. There’s also the issue of quality; this is a particular concern in deep learning datasets, where the need for large quantities of data limits the amount of quality checking that can be done. So, in short, don’t overplay your findings, and be aware of their limitations.

Do be careful when reporting statistical significance

I’ve already discussed statistical tests (see Do use statistical tests when comparing models), and how they can be used to determine differences between ML models. However, statistical tests are not perfect. Some are conservative, and tend to under-estimate significance; others are liberal, and tend to over-estimate significance. This means that a positive test doesn’t always indicate that something is significant, and a negative test doesn’t necessarily mean that something isn’t significant. Then there’s the issue of using a threshold to determine significance; for instance, a 95% confidence threshold (i.e. when the p-value < 0.05) means that 1 in 20 times a difference flagged as significant won’t be significant. In fact, statisticians are increasingly arguing that it is better not to use thresholds, and instead just report p-values and leave it to the reader to interpret these. Beyond statistical significance, another thing to consider is whether the difference between two models is actually important. If you have enough samples, you can always find significant differences, even when the actual difference in performance is minuscule. To give a better indication of whether something is important, you can measure effect size. There are a range of approaches used for this: Cohen’s d statistic is probably the most common, but more robust approaches, such as Kolmogorov-Smirnov, are preferable. You might also consider using Bayesian statistics; although there’s less guidance and tool support available, these theoretically have a lot going for them, and they avoid many of the pitfalls associated with traditional statistical tests.
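Cohen's d is easy to compute directly; the sketch below (plain NumPy, with invented score lists) uses one simple pooled-standard-deviation formulation:

    import numpy as np

    def cohens_d(a, b):
        a, b = np.asarray(a), np.asarray(b)
        # Pooled standard deviation of the two score samples.
        pooled_std = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        return (a.mean() - b.mean()) / pooled_std

    scores_a = [0.81, 0.83, 0.80, 0.85, 0.82]
    scores_b = [0.78, 0.80, 0.77, 0.79, 0.81]
    print(cohens_d(scores_a, scores_b))  # |d| around 0.8 is conventionally "large"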

Do look at your models

Trained models contain a lot of useful information. Unfortunately many authors just report the performance metrics of a trained model, without giving any insight into what it actually learnt. Remember that the aim of research is not to get a slightly higher accuracy than everyone else. Rather, it’s to generate knowledge and understanding and share this with the research community. If you can do this, then you’re much more likely to get a decent publication out of your work. So, do look inside your models and do try to understand how they reach a decision. For relatively simple models like decision trees, it can also be beneficial to provide visualizations of your models, and most libraries have functions that will do this for you. For complex models, like deep neural networks, consider using explainable AI (XAI) techniques to extract knowledge; they’re unlikely to tell you exactly what the model is doing (see Don’t assume deep learning is best), but they may give you some useful insights.
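As a hedged sketch of the visualization point, assuming scikit-learn and matplotlib, a small decision tree can be inspected directly; the dataset and tree depth are illustrative:

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, plot_tree

    X, y = load_iris(return_X_y=True)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # plot_tree renders the learned splits, giving some insight into what the model learnt.
    plot_tree(tree, filled=True)
    plt.show()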

7. Final thoughts

Machine learning is the latest in a long line of attempts to distill human knowledge and reasoning into a form that is suitable for constructing machines and engineering automated systems. As machine learning becomes more ubiquitous and its software packages become easier to use, it is natural and desirable that the low-level technical details are abstracted away and hidden from the practitioner. However, this brings with it the danger that a practitioner becomes unaware of the design decisions and, hence, the limits of machine learning algorithms.

This document doesn’t tell you everything you need to know, the lessons sometimes have no firm conclusions, and some of the things I’ve told you might be wrong, or at least debatable. This, I’m afraid, is the nature of research. The theory of how to do ML almost always lags behind the practice, academics will always disagree about the best ways of doing things, and what we think is correct today may not be correct tomorrow. Therefore, you have to approach ML in much the same way you would any other aspect of research: with an open mind, a willingness to keep up with recent developments, and the humility to accept you don’t know everything.


The enthusiastic practitioner who is interested in learning more about the magic behind successful machine learning algorithms currently faces a daunting set of prerequisite knowledge:

  • Programming languages and data analysis tools
  • Large-scale computation and the associated frameworks
  • Mathematics and statistics and how machine learning builds on them

Machine learning builds upon the language of mathematics to express concepts that seem intuitively obvious but that are surprisingly difficult to formalize. Once formalized properly, we can gain insights into the task we want to solve. One common complaint of students of mathematics around the globe is that the topics covered seem to have little relevance to practical problems. We believe that machine learning is an obvious and direct motivation for people to learn mathematics.