5.4.7 R5. Predictive Coding - Video 6: Evaluating the Model | Summary and Q&A

480 views
December 13, 2018
by MIT OpenCourseWare

TL;DR

The CART model's accuracy on the test set is 85.6%, a slight improvement over the baseline model's accuracy of 83.7%.


Key Insights

  • The CART model shows a slight improvement in accuracy over the baseline model.
  • False negatives are more costly than false positives in document retrieval applications.
  • Adjusting the cutoff on the ROC curve helps optimize the trade-off between false positives and false negatives.
  • Manual review is still required for all documents predicted to be responsive.
  • With an unbalanced data set, the baseline accuracy is already high, so accuracy gains from more complex models tend to be small.
  • A model's accuracy should be evaluated in light of the specific costs and consequences of false positives and false negatives.
  • The baseline model's performance provides a reference point for evaluating more complex models.

Transcript

Now that we've trained a model, we need to evaluate it on the test set. So let's build an object called pred that has the predicted probabilities for each class from our CART model. So we'll use predict of emailCART, our CART model, passing it newdata=test, to get test set predicted probabilities. To recall the structure of pred, we can look at …
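A minimal sketch of the step described in the excerpt, assuming the variable names used in the lecture (a CART model emailCART fitted with the rpart package and a test data frame test):

    # Predicted class probabilities on the test set; for a
    # classification tree, predict() returns a matrix with one
    # column per class
    pred <- predict(emailCART, newdata = test)

    # Column 2 holds the probability of the "responsive" class
    pred.prob <- pred[, 2]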

Questions & Answers

Q: How is the accuracy of the CART model on the test set calculated?

The accuracy is calculated by comparing the true outcomes with the predicted outcomes at a cutoff of 0.5. The number of correctly predicted responsive and non-responsive documents is divided by the total number of documents in the test set.
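A sketch of that calculation, continuing with pred.prob from above and assuming (as in the lecture's data set) that test$responsive holds the true 0/1 labels:

    # Confusion matrix at a cutoff of 0.5
    cmat <- table(test$responsive, pred.prob >= 0.5)

    # Accuracy = correctly classified documents / all test documents
    accuracy <- sum(diag(cmat)) / sum(cmat)
    accuracy   # about 0.856, per the summary above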

Q: How does the accuracy of the CART model compare to the accuracy of the baseline model?

The CART model has an accuracy of 85.6% on the test set, while the baseline model has an accuracy of 83.7%. This indicates a small improvement in accuracy using the CART model.
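The baseline comparison can be sketched the same way: since the baseline simply predicts every document as non-responsive, its accuracy is the share of non-responsive documents in the test set (again assuming 0 codes "non-responsive" in test$responsive):

    # Baseline model: predict "non-responsive" for every document
    baseline <- sum(test$responsive == 0) / nrow(test)
    baseline   # about 0.837, per the summary above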

Q: What are the consequences of false positives and false negatives in document retrieval applications?

False positives, where a non-responsive document is labeled as responsive, require additional work in the manual review process but do not cause further harm. False negatives, where a responsive document is labeled as non-responsive, result in the document being missed entirely in the predictive coding process, which is more costly.

Q: Why is it important to experiment with different cutoffs on the ROC curve?

Different cutoffs on the ROC curve can help find the optimal balance between false positives and false negatives. By adjusting the cutoff, the trade-off between incorrectly labeling non-responsive documents as responsive (false positives) and missing responsive documents (false negatives) can be optimized.
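One common way to explore cutoffs in R is the ROCR package; a sketch, assuming pred.prob and test$responsive from the earlier snippets:

    library(ROCR)

    # Build a prediction object from the probabilities and true labels
    predROCR <- prediction(pred.prob, test$responsive)

    # True positive rate vs. false positive rate across all cutoffs
    perfROCR <- performance(predROCR, "tpr", "fpr")

    # colorize = TRUE shades the curve by cutoff value, so the cutoff
    # corresponding to any point on the trade-off curve can be read off
    plot(perfROCR, colorize = TRUE)

    # Area under the curve as a single summary of performance
    performance(predROCR, "auc")@y.values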

Summary & Key Takeaways

  • An object called "pred" is built containing the predicted probabilities for each class from the CART model.

  • The accuracy of the CART model on the test set is calculated using a cutoff of 0.5.

  • The accuracy of the baseline model, which predicts all documents as non-responsive, is compared to the CART model's accuracy.
