So in this video, we talked about the notion of trading off between precision and recall, and how we can vary the threshold we use to decide whether to predict y=1 or y=0. This is the threshold that says whether we need to be at least 70% confident, or 90% confident, or whatever, before we predict y=1, and by varying the threshold you can control the trade-off between precision and recall. We then talked about the F score, which takes precision and recall and gives you a single real-number evaluation metric. And of course, if your goal is to set that threshold automatically, one pretty reasonable way to do that would be to try a range of different threshold values, evaluate each of them on, say, your cross-validation set, and then pick whichever threshold value gives you the highest F score on your cross-validation set. That would be a pretty reasonable way to automatically choose the threshold for your classifier as well.