I just read the following post, which walks readers through how to test experimental AI systems like LaMDA. It was written by Kyle Wiggers on August 25, 2022, and I thought it was important for my readers to see its connection to my latest articles. I'm providing it in its entirety below:
Google today launched AI Test Kitchen, an Android app that allows users to try out experimental AI-powered systems from the company’s labs before they make their way into production. Beginning today, interested users can complete a sign-up form as AI Test Kitchen begins to roll out gradually to small groups in the U.S.
As announced at Google’s I/O developer conference earlier this year, AI Test Kitchen will serve rotating demos centered around novel, cutting-edge AI technologies, all from within Google. The company stresses that these aren’t finished products; rather, they’re intended to give a taste of the tech giant’s innovations while offering Google an opportunity to study how they’re used.
The first set of demos in AI Test Kitchen explores the capabilities of the latest version of LaMDA (Language Model for Dialogue Applications), Google’s language model that queries the web to respond to questions in a human-like way. For example, you can name a place and have LaMDA offer paths to explore, or share a goal and have LaMDA break it down into a list of subtasks.
Google says it’s added “multiple layers” of protection to AI Test Kitchen in an effort to minimize the risks around systems like LaMDA, such as bias and toxic output. As illustrated most recently by Meta’s BlenderBot 3.0, even the most sophisticated chatbots today can quickly go off the rails, delving into conspiracy theories and offensive content when prompted with certain text.
Systems within AI Test Kitchen will attempt to automatically detect and filter out objectionable words or phrases that might be sexually explicit, hateful or offensive, violent or illegal, or that divulge personal information, Google says. But the company warns that offensive text might still occasionally make it through.
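Google doesn’t say how these filters are implemented, but to make the idea concrete, here is a minimal sketch of what word- and score-based filtering can look like in Python. Everything in it (the placeholder blocklist, the looks_objectionable helper, the toxicity score and threshold) is a hypothetical illustration, not Google’s actual system:

```python
# Toy illustration only: Google has not disclosed how AI Test Kitchen's
# filters work. The blocklist, scoring, and threshold below are
# hypothetical stand-ins for a real moderation pipeline.
BLOCKLIST = {"blockedterm1", "blockedterm2"}  # placeholder terms, not a real list

def looks_objectionable(reply: str, toxicity_score: float, threshold: float = 0.8) -> bool:
    """Flag a candidate reply if it contains a blocked term, or if an
    upstream classifier (not shown here) scored it above the threshold."""
    words = {w.strip(".,!?").lower() for w in reply.split()}
    return bool(words & BLOCKLIST) or toxicity_score >= threshold

def filter_replies(candidates: list[tuple[str, float]]) -> list[str]:
    """Keep only replies that pass the filter. A production system would
    also need checks for personal information and other policy violations."""
    return [text for text, score in candidates if not looks_objectionable(text, score)]
```

Note that even a layered version of this approach is probabilistic, which is consistent with Google’s warning that offensive text can still slip through.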
“As AI technologies continue to advance, they have the potential to unlock new experiences that support more natural human-computer interactions,” Google product manager Tris Warkentin and director of product management Josh Woodward wrote in a blog post. “We’re at a point where external feedback is the next, most helpful step to improve LaMDA. When you rate each LaMDA reply as nice, offensive, off topic or untrue, we’ll use this data – which is not linked to your Google account – to improve and develop our future products.”
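To make the feedback mechanism the quote describes more concrete, here is a minimal sketch of a per-reply feedback record using the four labels named above. The ReplyFeedback type, its field names, and the anonymous session_id are assumptions made for illustration, not Google’s actual schema:

```python
# Hypothetical sketch of the kind of per-reply feedback record the quote
# describes, kept unlinked from any account identifier. Names are
# assumptions, not Google's schema.
from dataclasses import dataclass
from enum import Enum

class Rating(Enum):
    NICE = "nice"
    OFFENSIVE = "offensive"
    OFF_TOPIC = "off topic"
    UNTRUE = "untrue"

@dataclass
class ReplyFeedback:
    session_id: str  # random per-session token, not a Google account ID
    reply_text: str
    rating: Rating

# Example: a user rates one of LaMDA's suggestions.
feedback = ReplyFeedback(session_id="anon-42",
                         reply_text="Try the night market.",
                         rating=Rating.NICE)
```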
AI Test Kitchen is part of a broader, recent trend among tech giants to pilot AI technologies before they’re released into the wild. No doubt informed by snafus like Microsoft’s toxicity-spewing Tay chatbot, Google, Meta, OpenAI and others have increasingly opted to test AI systems among small groups to ensure they’re behaving as intended, and to fine-tune their behavior where necessary.
For example, OpenAI several years ago released its language-generating system, GPT-3, in a closed beta before making it broadly available. GitHub initially limited access to Copilot, the code-generating system it developed in partnership with OpenAI, to select developers before launching it in general availability.
The approach wasn’t necessarily born out of the goodness of anyone’s heart. By now, top tech players are well aware of the bad press that AI gone wrong can attract, and by exposing new AI systems to external groups while attaching broad disclaimers, they can advertise the systems’ capabilities while mitigating the more problematic components. Whether this is enough to ward off controversy remains to be seen (even prior to the launch of AI Test Kitchen, LaMDA made headlines for all the wrong reasons), but an influential slice of Silicon Valley seems to have confidence that it will.