Did Google move too slowly with AI? Is that why Google is now scrambling to put AI into everything? Two new reports paint two entirely different pictures of Google before – and since – the launch of ChatGPT.
The Google search revolution that never happened. More than two years ago, two Google researchers created a chatbot that supposedly would “revolutionize the way people searched the internet and interacted with computers,” as reported by The Wall Street Journal.
But executives were reportedly risk-averse, fearing that releasing the AI product could hurt Google’s $200 billion-plus search advertising business and its reputation. And sure enough, Google took a significant reputational hit with its rushed Bard debut.
What are Google’s AI principles? One reason for Google’s slow approach could be its AI principles. Google believes AI applications should:
- Be socially beneficial.
- Avoid creating or reinforcing unfair bias.
- Be built and tested for safety.
- Be accountable to people.
- Incorporate privacy design principles.
- Uphold high standards of scientific excellence.
- Be made available for uses that accord with these principles.
So if Google had this AI technology ready more than two years ago, perhaps its leadership simply didn’t consider it as ready as those researchers did.
Lack of sourcing was another internal worry. Beyond safety and accuracy, the WSJ points out another big concern:
“Integrating programs like LaMDA, which can synthesize millions of websites into a single paragraph of text, could also exacerbate Google’s long-running feuds with major news outlets and other online publishers by starving websites of traffic. Inside Google, executives have said Google must deploy generative AI in results in a way that doesn’t upset website owners, in part by including source links, according to a person familiar with the matter.”
Yet when Google showed off its new AI capabilities in search, there were no links to sources. And that omission caused some outrage.
And along came OpenAI’s ChatGPT and Google’s Code Red. Google co-founder Larry Page, a decade ago, warned that “incrementalism leads to irrelevance over time, especially in technology, because change tends to be revolutionary, not evolutionary.”
Love it or hate it, ChatGPT is a revolutionary technology. Shortly after the launch of ChatGPT in late November, Google declared a “code red” and sought help from Page and co-founder Sergey Brin. This was part of an effort to add chatbot features to Google Search this year.
Then Google rushed to introduce Bard, its answer to ChatGPT, on Feb. 6. That was one day before Microsoft planned to unveil the new Bing with ChatGPT.
Since that announcement, Google has tried to clarify that Bard is not search. The AI-powered chatbot features coming to search are based on similar technology, but Bard is a standalone product.
Google AI = the new Google Plus? Google is now reportedly “stuffing” generative AI into more products, according to Bloomberg:
“Some Google alumni have been reminded of the last time the company implemented an internal mandate to infuse every key product with a new idea: the effort beginning in 2011 to promote the ill-fated social network Google+. It’s not a perfect comparison—Google was never seen as a leader in social networking, while its expertise in AI is undisputed. Still, there’s a similar feeling.”
Google pushed back on this, saying much of Google’s internal efforts involve having Googlers test and improve Bard. One Googler also told Bloomberg:
- “There is an unhealthy combination of abnormally high expectations and great insecurity about any AI-related initiative.”
Why we care. Is Google panicking or moving too slowly? Both could be true – or the truth may lie somewhere in the middle, with Google actually living by its AI principles. Call it a slow rush: Google can afford to sit back, watch and learn from Microsoft and other generative AI players, and avoid any (further) costly mistakes.