Google Introduces LaMDA with natural conversation capabilities

Google announced this week that usage of its translation app is four times what it was last year. At today's Google I/O 2021 conference, the company said that its latest progress in automatically understanding images in Google Photos has enabled more than 2 billion Memories to be viewed and enjoyed by users. Google Lens is used 3 billion times a month, and Google says it has always been excited about how its technology is used to translate and interpret information.

Google's use of WaveNet enables Google Assistant to support up to 51 new languages, and recent updates to Google's natural-language understanding allow Search to respond to users' queries in a way that sounds reasonable and natural.

Google's latest technological achievement in natural language is LaMDA, a language model for dialogue applications. It is still under research and development, but will soon be available for third-party testing.


LaMDA may look a bit like a chatbot from ten years ago, but Google's system now allows the computer not only to select the most fitting reply from a pre-configured response list, but also to learn and change as the conversation unfolds.

LaMDA was demonstrated at the Google I/O conference as a language model personifying a planet: when a user talked to the planet, the system responded in character. Google even showed a user's dialogue with a paper airplane (again, powered by LaMDA). Google eventually plans to bring LaMDA to Google Search, Google Assistant, and Google Workspace.

