It’s always been the case that humans crave simplicity, even when it masks reality. Hence the tendency to construct, measure, and argue over scalar (one number) metrics, even though they almost always ignore nuance. Even better, in many people’s eyes, are scalar, binary metrics, such as picking winners and losers. These metrics are almost always problematical.
But they’re fun.
And now we have a name for one class of such metrics: GOAT, or greatest of all time.
A recent DPR MF forum thread wound its way from considering how differences in cameras affected the resultant images, to how different guitars affected the resulting music, to who’s the greatest guitarist of all time. It was amusing, but, as you’d expect, there was a lot more heat than light.
In this post, I’m going to try to analyze why these discussions hardly ever bear fruit.
Almost any device, practitioner, or artistic product should be judged by many criteria. Such an evaluation is mathematically a vector, with each dimension representing one of the criteria. To convert that vector to a scalar, you have to decide how each component should be weighted and scaled before the components are summed. Humans are used to doing this, and do it intuitively. What they’re not good at is understanding and articulating the weights and scaling they’re using. If I ask you to rate a lens and you say you’d give it a 7 out of 10, I don’t know what you consider important in using that lens, or how you evaluate the tradeoffs among its measurable and visible parameters.
If I ask you whether the 183.5 mm f/1.05 NikTakicron lens you’re holding is a good lens, I’m asking you to convert the numerical scalar you gave me before into a binary. To tell me it’s a good lens, you’ve taken your rating of 7, compared it to some threshold for goodness that only you know (maybe the bar is 6), and found that it clears the bar. I may be able to get that internal threshold out of you, but it’s not much help if I don’t understand your scaling and weighting, which you probably don’t fully understand yourself.
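To make the hidden machinery explicit, here’s a minimal sketch in Python. The criteria, weights, scaling, and threshold are all invented for illustration; the point is only that the single number you hand me conceals every one of them.

```python
# A minimal sketch of turning a criteria vector into a scalar, then a binary.
# The criteria, weights, and threshold below are made up; every evaluator
# carries a different (and usually unexamined) set.

criteria = {          # per-criterion scores, each already scaled to 0-10
    "sharpness": 8,
    "bokeh": 9,
    "size_and_weight": 4,
    "price": 5,
}

weights = {           # how much each criterion matters to this evaluator
    "sharpness": 0.4,
    "bokeh": 0.3,
    "size_and_weight": 0.2,
    "price": 0.1,
}

# Vector -> scalar: a weighted sum of the scaled criterion scores.
rating = sum(weights[k] * criteria[k] for k in criteria)   # 7.2 with these numbers

# Scalar -> binary: compare against a personal threshold for "goodness".
threshold = 6
is_good = rating > threshold

print(f"rating = {rating:.1f}, good lens? {is_good}")
```

Two people can agree that the lens is "good" while disagreeing about every weight and score that got them there, which is why the yes/no answer tells you so little.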
If I ask you what’s the greatest lens ever made, and you are courageous and foolish enough to answer by spitting out a name, I’m asking you to mentally come up with scalars for each of what you consider to be contenders, stack-rank the lenses according to those scalars, and give me the one with the highest rating. Here are some of the many possible answers:
- The James Webb telescope
- The Leica 50mm f/0.95 Noct
- The optics of EUV lithography machines
- A 4.4 inch Goerz Dagor Gold Dot
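Spelled out, the GOAT question asks for something like the sketch below. The ratings are placeholders, not measurements; change the weighting behind them and the "winner" changes too.

```python
# Stack-ranking hypothetical contenders by one evaluator's private scalar.
# The ratings are placeholders for illustration only.

ratings = {
    "James Webb telescope optics": 9.1,
    "Leica 50mm f/0.95 Noct": 8.7,
    "EUV lithography optics": 9.4,
    "4.4 inch Goerz Dagor Gold Dot": 7.9,
}

# Sort contenders from highest to lowest scalar and declare a "greatest".
ranked = sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)
goat = ranked[0][0]

print(goat)  # whichever contender this particular weighting happens to favor
```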
It’s kind of silly, isn’t it? It ignores the whole horses for courses thing.
But we have these discussions all the time, and it seems that the less people know about the subject at hand, the more happily they leap into GOAT discussions with gleeful abandon. Some of the people in the greatest-guitarist discussion have never played a guitar.
Can you imagine Andres Segovia, Eric Clapton, Tony Rice, and Pat Metheny, should they somehow have gotten together, talking about who is the greatest guitarist of all time? Or knowledgeable photographers trying to pick the greatest lens of all time? If you’re an expert, you know too much to go down that rabbit hole.
I think the search for GOATs has a relationship to the Dunning–Kruger effect, which is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities. So, to a nonexpert, some field may look simple, but to the practitioner in that area, it looks complex.
This effect is related to intelligence and general competence, but in my entirely anecdotal experience, in a counterintuitive manner. Highly intelligent and competent people, when faced with a topic in which they have no expertise, tend to think they know more than they do. I surmise that it’s because they’ve gotten used to being the most knowledgeable person in the room. I’ve noticed the DK effect in myself; when I encounter a new area, I tend to think I understand it better than I actually do, and have to continually test myself to avoid overconfidence.
I’ve noticed that the DK effect is diminished when the discussion turns quantitative. The process of calculating appropriate metrics tends to expose the gaps in one’s knowledge. That’s a good reason to turn these discussions in a quantitative direction. Numbers provide grounding.