The fraction of actual positives correctly identified; sensitive to false negatives.
Why It Matters
Recall is critical in fields where identifying all positive cases is essential, such as healthcare and security. High recall ensures that important cases are not overlooked, making it a key metric for evaluating the effectiveness of classification models.
Definition
Recall, also known as the true positive rate or sensitivity, is the ratio of true positive predictions to the total number of actual positive instances in the dataset: Recall = TP / (TP + FN).

Recall is sensitive to the number of false negatives: a high recall indicates that most actual positives are correctly identified, while a low recall means that many are missed. This makes the metric crucial in scenarios where the cost of a false negative is high. In terms of the confusion matrix, recall is derived from the TP and FN counts, making it an essential component of classification evaluation, particularly in applications such as disease detection and fraud identification, where failing to identify a positive case can have serious consequences.
Recall measures how well a model can find all the actual positive cases. It’s calculated by dividing the number of true positives (correctly identified positives) by the total number of actual positives (true positives plus false negatives). For example, if there are 100 sick patients and the model correctly identifies 80 of them, the recall is 80%. High recall is important in situations like medical testing, where missing a sick patient can lead to serious health issues.
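The formula and the sick-patient example above can be sketched in a few lines of Python; the function name and counts here are illustrative, not part of any specific library:

```python
def recall(tp: int, fn: int) -> float:
    """Recall = TP / (TP + FN): the fraction of actual positives found."""
    if tp + fn == 0:
        raise ValueError("No actual positives in the data: recall is undefined.")
    return tp / (tp + fn)

# 100 sick patients, 80 correctly identified, 20 missed (false negatives).
print(recall(tp=80, fn=20))  # 0.8
```

Note that recall ignores false positives entirely; a model that flags every patient as sick achieves recall of 1.0, which is why recall is usually reported alongside precision.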