The Kindness of Strangers vs. The Wisdom of Crowds
I’m so old I remember when looking things up on the internet was a leap of faith. Ancient history now. Maybe five years ago. You’d type a question into the little box, hit enter, and hope that somewhere out there, a stranger would be kind to you.
The intertunnel was Blanche DuBois central. If you’re not familiar with Blanche, you should watch “A Streetcar Named Desire” and see Vivien Leigh fade away while Marlon Brando becomes the next big thing. At any rate, just like Blanche, you had to be off your rocker to trust the internet, but by gad, people sure did. It involved more self-deception than trust, and a healthy dose of bad judgment.
No matter what it said on the masthead of whatever site Google sent you to:
- You didn’t know who answered your question
- You didn’t know why they answered it
- You didn’t know whether they were brilliant, biased, drinking heavily, goofing off at work, or trying to sell you something on the sly
But there they were, in the first ten results on Google. Strangers. And you depended on them.
Lording over this whole mess was Google itself. The ultimate stranger. Google never claimed to be your friend, exactly. It claimed to be helpful. Reached out an elbow for you to grasp on the way to the funny farm, and you took it. It claimed to be neutral and objective. That was a laugh. Its methods of curation were always opaque, its incentives were never purely aligned with your best interests. Google didn’t wonder what was true or even what was useful. Its ranking system consisted entirely of discerning who was kissing Google’s buttocks sufficiently to make Google some money.
What followed was inevitable. If Google rewarded visibility, then visibility became the greasy pole every website operator tried to climb. Entire industries sprang up to reverse-engineer the G’s algorithm. SEO experts. Content farms. Listicles with suspiciously specific headlines. “Doctors Hate This One Simple Trick” became a genre.
The internet quickly filled up with very strange people indeed, strangers shouting, waving, and stuffing keywords, all angling for a click. To be fair, no matter how bad it got, some people actually knew useful and amusing things, and offered them to the public, free of charge. These benighted souls, through titanic efforts, could climb to be found on page 114 of the Google results. It was left to the brave internaut to sort sincerity from strategy in real time. Pretty much, we all failed at that. Hence, Buzzfeed!
The kindness of strangers, it turned out, was unreliable, and just like Blanche lying on the floor, often exhausting. We were all ripe for something different. Google had hogtied the whole internet. The only place sadder than page 114 of Google was the top of the Bing results. You could hide dead bodies on Bing. The internet went from sclerotic to petrified. Only a completely different way to look for information could save us.
That’s what Large Language Models actually represent. They’re not glorified autofill, as many would characterize them. They’re not intelligent, either, in the true sense of the word, but so what? Unlike Google, which claimed to have the world’s digital information all curated for you, LLMs like Chad (ChatGPT) have read the whole internet, and then some, and settle on a crowdsourced answer for you. Not original thinking. Not thinking at all, really. Just paying attention to everything, everywhere, more or less. Instead of being handed a ranked list of links curated by an inscrutable and avaricious stranger, you were handed a synthesis. Not a single authoritative voice, but an average. A blending. A statistical distillation of countless human scribblings. The good, the bad, and the ugly.
The prime idea behind this has a name: The Wisdom of Crowds. The term was popularized by James Surowiecki in the early 2000s. The observation itself is much older. Francis Galton came up with the concept, more or less, back in 1906. He spent half his time being pretty smart about statistics, and the other half writing a rough first draft of Idiocracy. He didn’t have faith in any single member of a crowd, not by a long shot. Galton’s classic illustration is a fairground guessing game. A crowd is asked to estimate the weight of an ox or the number of jellybeans in a jar. Individual guesses are all over the place: too high, too low, and usually confidently wrong. But if you take the average of all those guesses, the result is often eerily accurate. No single person knew the answer, but the crowd, in aggregate, effectively did.
This core insight is counterintuitive. Under the right conditions, large groups of ordinary people can collectively make better judgments than a small group of experts, or even the smartest individual you can find. That includes me, I guess. I’m the smartest individual I can find, but then again, I’m alone in my apartment right now. I’d have to put on pants and go outside and look for someone smarter than me. It could take minutes. Never fear. The wisdom of crowds doesn’t work because people are especially wise. It works because their mistakes are scattered in every direction. Errors cancel out. Overconfidence gets diluted. One person’s blind spot is offset by another person’s idiocy.
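Galton’s parlor trick is easy to reproduce in a few lines. The sketch below is a simulation, not his data: the 1,198-pound ox and the roughly 787 guessers come from his 1907 “Vox Populi” write-up, but the 150-pound spread of the simulated guesses is an assumption for illustration.

```python
import random

random.seed(0)

# Galton's 1907 account: 787 fairgoers guessed the weight of an ox
# that actually weighed 1,198 pounds.
TRUE_WEIGHT = 1198

# Simulate the crowd: each guess is the truth plus that person's own
# scattered error. (The 150-pound spread is assumed, not Galton's data.)
guesses = [TRUE_WEIGHT + random.gauss(0, 150) for _ in range(787)]

# The crowd's collective answer is just the average of everyone's guess.
crowd_estimate = sum(guesses) / len(guesses)

# Compare the crowd's error to how far off a typical individual was.
typical_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"crowd estimate:           {crowd_estimate:.0f} lbs")
print(f"crowd's error:            {abs(crowd_estimate - TRUE_WEIGHT):.1f} lbs")
print(f"typical individual error: {typical_individual_error:.1f} lbs")
```

Run it and the crowd’s average lands within a few pounds of the truth, while the typical individual is off by more than a hundred. No one in the simulated crowd is any good; the errors just point in different directions and cancel.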
Large language models are sorta like that. They are not intelligent in the human sense. They don’t reason or understand, and probably never will. They’ve been trained on enormous amounts of human-created text: high-quality scholarship mixed with drunken Reddit screeds, “journalism” (tee hee) mixed with marketing copy, insight mixed with the comments under cat videos. Much of it, taken individually, is not to be believed, never mind trusted. But when the model predicts answers based on patterns across all of that stuff, what emerges is something like a crowd’s best guess. It ain’t truth, exactly, but it’s at least a probabilistic consensus shaped by millions of whoevers rather than one loud stranger.
This is a subtle but profound shift. Before, you depended on the kindness of strangers. You hoped that someone, somewhere, had taken the time to answer your exact question thoughtfully. Then you hoped Google had decided this person deserved to be seen. Good luck with that. “Don’t be evil” is right up there with “Arbeit Macht Frei” in the accurate slogan department.
Now, you depend on mediation. The LLM doesn’t care about clicks (yet). It doesn’t care about ad revenue (right this minute). It doesn’t care about SEO tricks or keyword density (don’t worry, it will eventually). It doesn’t wake up hoping to sell you a multi-level marketing membership. Its incentives are different: produce something that sounds coherent, relevant, and responsive. That hardly makes it perfect or unbiased. Far from it. The wisdom of crowds can be wise under the right circumstances, it’s true. But crowds can also be lynch mobs. Garbage in, garbage out, averaged.
And since LLMs are programmed never to say “I don’t know,” you end up reading hallucinations. You wish you’d get Sergeant Schultz, and end up with Cliff Clavin instead.
The experience feels fundamentally different. You’re no longer wandering a digital marketplace, hoping to bump into a benevolent stranger. You’re having a conversation with a synthesized amalgam of John von Neumann and Cliff Clavin. Good luck figuring out who is who.