Salon is reporting about a proposed Knowledge-Based Trust score that Google might implement to keep "bad information" at bay.
Google could launch an effort to keep trolls and bad information at bay, with a program that would rank websites according to veracity and sort search results according to those rankings. Currently, the search engine ranks pages largely by popularity, which means that pages containing unsubstantiated celebrity gossip or conspiracy theories, for example, can rank very high.
New Scientist’s Hal Hodson reports on the proposed Knowledge-Based Trust score:
The software works by tapping into the Knowledge Vault, the vast store of facts that Google has pulled off the internet. Facts the web unanimously agrees on are considered a reasonable proxy for truth. Web pages that contain contradictory information are bumped down the rankings.
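The mechanism Hodson describes can be sketched as a toy model. To be clear, this is my own illustration of the idea, not Google's implementation: build a consensus "vault" by majority vote over facts extracted from pages, then score each page by how much of what it says agrees with that consensus, and rank contradictory pages lower.

```python
# Toy sketch of the Knowledge-Based Trust idea (an illustration only,
# not Google's actual algorithm). Each page is a dict mapping a
# (subject, attribute) pair to a claimed value.
from collections import Counter

def build_vault(pages):
    """Take the majority value for each (subject, attribute) pair
    across all pages as the consensus 'fact'."""
    votes = {}
    for facts in pages.values():
        for key, value in facts.items():
            votes.setdefault(key, Counter())[value] += 1
    return {key: counter.most_common(1)[0][0] for key, counter in votes.items()}

def trust_score(facts, vault):
    """Fraction of a page's claims that match the consensus."""
    if not facts:
        return 0.0
    agree = sum(1 for key, value in facts.items() if vault.get(key) == value)
    return agree / len(facts)

# Hypothetical pages: site-c contradicts the consensus on both claims.
pages = {
    "site-a.example": {("obama", "born_in"): "honolulu", ("earth", "shape"): "round"},
    "site-b.example": {("obama", "born_in"): "honolulu", ("earth", "shape"): "round"},
    "site-c.example": {("obama", "born_in"): "kenya",    ("earth", "shape"): "flat"},
}

vault = build_vault(pages)
ranking = sorted(pages, key=lambda p: trust_score(pages[p], vault), reverse=True)
print(ranking)  # the contradictory site-c.example ranks last
```

The key difference from popularity ranking is visible even in this toy: a page's score depends on what it says, not on how many people link to or share it.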
Vetting for truth is a good thing. People seem willing to believe almost anything on the Internet so long as it gets enough views or shares. Because Google's current algorithm takes popularity into consideration, "bad information" can rise to the top of the search results, which has a circular effect: high rankings lead more people to believe the story is true, which generates more shares, which push the ranking higher still.
However, technology acting as a gatekeeper to information is not so good, particularly when it comes to science: what was a "truth" yesterday is sometimes "bad information" today. We need the ability to vet information for ourselves, and to bring information together in ways that create breakthroughs. If we have a machine do this for us, we lose the practice of using our critical thinking skills and making connections between various sources of information.
Vetting information is an integral part of the research process, and it often leads to new ideas. I'm not so sure this is a function we want to relieve ourselves of in favor of artificial intelligence.