In the third year of my residency, I was given Clinical Epidemiology: A Basic Science for Clinical Medicine.1 Although that wonderful book has since gone out of print, it was transformative for my nascent practice of medicine. Written by the founders of the evidence-based medicine movement, it cogently argued in favor of using heady things such as likelihood ratios, treatment thresholds, and nomograms at the bedside to guide clinical decision making. I devoured it and taught its contents to the residents I subsequently supervised.

While I remain a firm believer in the philosophy of evidence-based medicine, I have grown increasingly skeptical about how it is to be operationalized. In the years since I read the book by Sackett et al,1 hundreds of decision tools in a variety of forms—guidelines, practice parameters, prediction rules—have been generated. Some have been good, some bad; some have been validated, others not. What they all have in common is that their overall uptake remains poor at best. In the meantime, those of us in academia continue to create them, and those of us on editorial boards continue to vet them for methodological rigor. The cottage industry of decision tools has at least the appearance of an academic jobs program since, to clinicians in the real world, their utility remains largely unproven. For example, there are no fewer than 10 clinical prediction rules for something as common as streptococcal pharyngitis, and I would be surprised if most clinicians use even one.