What Google has demonstrated to counter ChatGPT

Some have predicted that AI chat will eventually replace the traditional search engine, thanks to ChatGPT’s ability to respond to questions conversationally and directly. Google is taking this seriously and, based on what it has already demonstrated, should be able to compete. The real question is the user experience.
There are essentially two parts to Google’s mission, which is “to organize the world’s information and make it universally accessible and useful.”

ChatGPT excels at the second part of that mission because it generates answers directly rather than sending you somewhere else. It can also ask clarifying questions so that it answers your question in the way you want. Meanwhile, it can write essays (to a specified number of paragraphs), summarize, explain, and debug code, among other things.


LaMDA

Although less flashy, Google has been working for some time on the same language model technology that powers ChatGPT. In fact, at its past two developer conferences, it has given its work on large language models and natural language understanding (NLU) top billing.

Google’s most advanced conversational AI yet is LaMDA, or Language Model for Dialogue Applications. It was unveiled at I/O 2021, with the caveat that it was still in the R&D stage, as a model that can converse on any topic. LaMDA has picked up on several of the nuances that distinguish open-ended conversation, including sensible and specific responses that encourage further back-and-forth. Google’s demos of talking to Pluto and to a paper airplane were meant to demonstrate this.

Other qualities Google is aiming for are “interestingness” (whether responses are insightful, unexpected, or witty) and “factuality,” or sticking to the facts.

Google AI LaMDA

LaMDA 2 was announced a year later, and Google began letting the general public try three distinct LaMDA demos through the AI Test Kitchen app.

MUM

With MUM (Multitask Unified Model), Google has highlighted multimodal models that “allow people to naturally ask questions across different types of information.” Notably, Google provided an example of a query that a search engine cannot currently answer but that this new technology can address:

I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?

MUM would be aware that you are comparing two mountains and that the period you provided corresponds to Mt. Fuji’s rainy season, which necessitates waterproof gear. It could also bring up articles written in Japanese, where there is more information about the local area. The most impressive example, though, involved Google Lens.

Now imagine taking a photo of your hiking boots and asking, “Can I use these to hike Mt. Fuji?” MUM would understand both the content of the image and the intent behind your query, let you know that your hiking boots would work just fine, and then point you to a list of recommended gear and a Mt. Fuji blog.

Google has now made it clear that it will be adding MUM to Lens so that you can take a picture of a broken part of your bicycle and receive repair instructions.

Google MUM

PaLM

If MUM allows questions to be asked across different mediums and LaMDA can sustain conversations, PaLM (Pathways Language Model) is about answering questions. It was announced in April and mentioned on stage at I/O. PaLM’s demonstrated capabilities include:

Question Answering, Semantic Parsing, Proverbs, Arithmetic, Code Completion, General Knowledge, Reading Comprehension, Summarization, Logical Inference Chains, Common-Sense Reasoning, Pattern Recognition, Translation, Dialogue, Joke Explanations, Physics QA, and Language Understanding.

It is powered by Pathways, a next-generation AI architecture that can “train a single model to do thousands or millions of things” as opposed to the current approach, which is highly individualized.

Google PaLM

To the products

When Google announced LaMDA in 2021, Sundar Pichai stated that its “natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use.”

It specifically mentioned Google Assistant, Search, and Workspace as products where it intends to “incorporate better conversational features.” Google could also provide these capabilities to developers and enterprise customers.

Google has consistently emphasized significant safety and accuracy concerns when demoing these models. The biggest obstacle appears to be that they can make things up.

Google is said to be at “code red” over ChatGPT and has reassigned teams to work on competing AI products and demos. The technology will likely be demonstrated again at I/O 2023, but it is unclear whether LaMDA, MUM, and PaLM will be prominently integrated into Google’s most important products.

“Conversation and natural language processing are powerful ways to make computers more accessible to everyone,” Pichai reiterated in May. The ultimate objective of everything the company has shown is for Google Search to be able to answer questions like a human.

Google has the technology to get there, but the company’s perennial challenge is turning R&D into real products. Rushing does not seem wise for a search engine the world relies on to be consistently right.

