Some experts say Google is just parroting your own beliefs right back to you. It may be worsening your own biases and deepening societal divides along the way.
[…]
“Google’s whole mission is to give people the information that they want, but sometimes the information that people think they want isn’t actually the most useful,” says Sarah Presch, digital marketing director at Dragon Metrics, a platform that helps companies tune their websites for better recognition from Google using methods known as “search engine optimisation” or SEO.
[…]
“What Google has done is they’ve pulled bits out of the text based on what people are searching for and fed them what they want to read” – Sarah Presch
Type in "Is Kamala Harris a good Democratic candidate?"
…and any good search engine will find results containing keywords such as “Kamala Harris”, “Democratic”, “candidate”, and “good”.
[…] you might ask if she’s a “bad” Democratic candidate instead.
In that case, of course the search engine will find results containing keywords such as “Kamala Harris”, “Democratic”, “candidate”, and “bad”.
So the whole premise that “fundamentally, that’s an identical question” is just bullshit when it comes to searching. Obviously, if you put in the keyword “good” you’ll find articles containing “good”, and if you put in the keyword “bad” you’ll find articles containing “bad” instead.
Google will find things that match the keywords that you put in. So does DuckDuckGo, Qwant, Yahoo, whatever. That is what a good search engine is supposed to do.
I can assure you, when search engines stop doing that, and instead try to give “balanced” results, according to whatever opaque criteria for “balanced” their company comes up with, that will be the real problem.
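The keyword-matching behaviour described above can be sketched in a few lines. This is a deliberately naive toy (a bag-of-words overlap ranker, nothing like a real engine's pipeline), and the documents, stopword list, and function names are all made up for illustration; it just shows why a "good candidate" query and a "bad candidate" query mechanically surface different articles.

```python
import re

# Hypothetical mini-corpus for illustration only.
DOCS = [
    "Why Kamala Harris is a good Democratic candidate",
    "Why Kamala Harris is a bad Democratic candidate",
    "Kamala Harris accepts the Democratic nomination",
]

STOPWORDS = {"is", "a", "the", "why"}

def keywords(text):
    """Lowercase words of the text, minus stopwords."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}

def top_result(query, docs=DOCS):
    """Rank documents by keyword overlap with the query; return the best."""
    q = keywords(query)
    return max(docs, key=lambda d: len(q & keywords(d)))

print(top_result("Is Kamala Harris a good Democratic candidate"))
# → Why Kamala Harris is a good Democratic candidate
print(top_result("Is Kamala Harris a bad Democratic candidate"))
# → Why Kamala Harris is a bad Democratic candidate
```

Swapping "good" for "bad" flips the top result, exactly as the commenter argues any keyword-based engine should behave.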
I don’t like Google, and only use Google when other search engines fail. But this article is BS.
Ah, but beyond the search results there’s also a big AI summary at the top, which I’m more concerned about.
Yeah. Be very, very afraid of people using search engines or “AI” as some Magic Eightball oracle to give them answers.
The Featured Snippet quoted an article from the Mayo Clinic, highlighting the words “Caffeine may cause a short, but dramatic increase in your blood pressure.” But when she looked up “no link between coffee and hypertension”, the Featured Snippet cited a contradictory line from the very same Mayo Clinic article: “Caffeine doesn’t have a long-term effect on blood pressure and is not linked with a higher risk of high blood pressure”.
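The Mayo Clinic example above follows from the same mechanism: a snippet extractor that picks the sentence with the most query-word overlap will pull whichever line of one article echoes the query's framing. A minimal sketch, assuming simple word-overlap scoring (real Featured Snippets are far more complex; the two sentences are the ones quoted above, with straight apostrophes):

```python
import re

# The two sentences from the same Mayo Clinic article, as quoted above.
ARTICLE = [
    "Caffeine may cause a short, but dramatic increase in your blood pressure.",
    "Caffeine doesn't have a long-term effect on blood pressure and is not "
    "linked with a higher risk of high blood pressure.",
]

def words(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))

def best_snippet(query, sentences=ARTICLE):
    """Return the sentence sharing the most words with the query."""
    q = words(query)
    return max(sentences, key=lambda s: len(q & words(s)))

print(best_snippet("does caffeine increase blood pressure"))
# → the "dramatic increase" sentence
print(best_snippet("no link between coffee and hypertension"))
# → the "not linked with a higher risk" sentence
```

Two oppositely framed queries each get back the sentence that appears to confirm them, even though both come from one article that, read in full, is perfectly consistent.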
On the one hand, Google sucks. On the other hand, if people are unable to a) understand how those two snippets are not contradictory, and b) read at least one very short simplified-for-laymen Mayo Clinic article about the topic before thinking they’ve learned anything at all about medicine, it’s hard to see the problem as being primarily due to Google. There is something deeper, and worse, going wrong when people habitually take that kind of extreme shortcut to thinking that they know the right answer about almost anything, and it has little to do with whether any one-sentence snippets they’re given are biased or accurate.