Will ChatGPT kill the search engine?
In an era of fake news, clickbait and crypto-scams, the public’s trust in online information is on the decline, so it’s perhaps no surprise that some people think OpenAI’s ChatGPT may well spell the end of search engines.
A Harvard study about the impact of misinformation and fake news online showed that consuming misinformation was associated with a general decrease in trust.
Given the lower levels of trust in information sourced online, it makes sense that people may consider ChatGPT as a tool to filter out biased or incorrect information and simply get an answer.
At face value, this theory rings true, as some search results are filled with websites built by bots purely for SEO and AdSense clicks.
However, there is a converse scenario. What happens when someone who believes the earth is flat can no longer search the internet and find new information? How does this group react when the likes of Microsoft or Google decide that their chatbots won’t tell you the earth is flat? While bots will make mistakes on both sides of the information spectrum, the point is that people still want to be able to choose what information they consume.
An analysis of 4m Google search results found that the top three results get 54.4% of all clicks – which means the remaining 45.6% of searchers choose other sources of information.
A colleague refers to search engines as librarians – the websites are the books available in the library, and the search engine is the librarian. When you walk into the library and ask for a book on the history of a sport, the librarian won’t be able to find the right book if they don’t understand your request (the search query) correctly.
Let’s use a real-world example and see how three librarians react to the same search query differently. A search for the term “goggles” on Google shows what you would likely expect to see – swimming goggles. However, competitor search engines Bing and Yahoo assume you’ve misspelt Google and show you that result first.
Google’s search engine dominates the search landscape, with a 93% share of the market worldwide. This dominance raises concerns about how relying on a single “librarian” can prove dangerous and feed you the wrong information. You could also argue that two out of the three results provided are incorrect.
So, if we can’t even get accurate search results, how can we put so much faith into AI-powered chatbots?
Consider, for example, that when Google promoted its AI chatbot, Bard, it provided an incorrect answer to a question about the James Webb Space Telescope’s discoveries. Alphabet’s share price consequently dropped by 8% in one day. And Bing’s version of a search-powered chatbot went rogue a lot faster, gaslighting users, arguing with them and telling them they “haven’t been a good user”.
In defence of these chatbots, the technology is still nascent and needs more user testing before it is widely used and accepted. To put this development journey into context, Google turns 25 this September. During those 25 years the company has gone through many iterations of its search algorithm to make it as useful and relevant as possible to users.
Both Google’s and Bing’s chatbots are not yet generally available to the public – you’ll need to sign up on a waiting list to get access. Until that changes, search engines will remain the primary way we find and source information online.
Dylan Balouza is head of digital operations at CBR Marketing.
The big take-out: If we can’t even get accurate search results, how can we put so much faith into AI-powered chatbots?