From what I can surmise as someone without children, parenting in the 21st century is incredibly difficult. It requires adapting to new technology that comes at a blistering pace, forcing you to take risks no matter what choices you make. Do you risk raising a Luddite who can’t survive in the modern world, or do you risk the unknown effects of adopting new technology? Agora hopes you’ll choose the latter when they release ChooChoo the Dragon.
This story hit my desk in the form of a press release proudly announcing a partnership between Agora and RiseLink. It “marks the public debut of a real-time, conversational AI product designed for children.” I think the best way to describe the product concept is as Teddy Ruxpin, but with ChatGPT.
Another AI-generated render of ChooChoo (📷: Agora)
All of the ChooChoo the Dragon images are themselves AI-generated and wildly inconsistent, so there isn’t much to say about its theoretical physical design. But ChooChoo seems to be a host body for Agora’s Convo AI Device Kit R2, a conversational AI hardware stack that any manufacturer will be able to stuff into a stuffie.
That hardware appears to be based on an SoC (system-on-chip) designed by RiseLink. But the hardware itself is irrelevant, because the entire concept is fraught with peril.
The press release notes that ChooChoo the Dragon is aimed at children ages three through six and that it can act as a reading partner. Critically, it will listen to the child and generate responses in real time, rather than relying on canned replies and pre-programmed conversation trees.
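To make that distinction concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the function names, the prompts, the fallback line), and the placeholder call stands in for whatever hosted model a toy like this would actually talk to; it is not based on Agora’s real code.

```python
# Classic talking toy: every reply a child can ever hear was written and reviewed in advance.
CANNED_REPLIES = {
    "story": "Great! Let's open our book to page one.",
    "song": "La la la! Can you sing along with me?",
}
FALLBACK = "Hmm, how about we pick a story instead?"

def canned_reply(child_said: str) -> str:
    """Pre-programmed conversation tree: unknown input maps to a safe, human-written fallback."""
    return CANNED_REPLIES.get(child_said.strip().lower(), FALLBACK)

# LLM-backed toy: the reply is generated on the fly, so no human has reviewed it beforehand.
def call_hosted_model(system_prompt: str, user_message: str) -> str:
    """Placeholder for an OpenAI/Anthropic-style API call; the output is open-ended text."""
    raise NotImplementedError("wire a real hosted model up here")

def llm_reply(child_said: str) -> str:
    """Open-ended generation in real time."""
    return call_hosted_model(
        system_prompt="You are ChooChoo, a friendly dragon for children aged three to six.",
        user_message=child_said,
    )
```

In the first case, every word the child hears was approved by a human before it shipped; in the second, none of it was.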
That is a serious cause for concern, because even the titans of the AI industry admit that they don’t understand how LLMs (large language models) decide what to say and that they, the creators of those LLMs, have limited control over the output. In 2023, Google CEO Sundar Pichai famously said, “You know, you don’t fully understand. And you can’t quite tell why it said this, or why it got it wrong.”
Scott Pelley and Sundar Pichai (📷: 60 Minutes)
In other words, nobody has a complete understanding of how LLMs think or how to control what they say — a well-known problem in the industry. Is that the kind of technology you want your impressionable young child using?
There are countless examples of LLMs providing inaccurate information and advice, ranging from the absurd to the downright dangerous. That isn’t limited to small experimental models; it is also true of the most popular and well-funded models in existence.
An article published in Science by Melanie Mitchell last year goes into great detail on this subject, relaying examples of deceit and misinformation from LLMs created by OpenAI and Anthropic, two of the leading companies in this field. In some test scenarios, LLMs even advised users to commit crimes.
Melanie Mitchell (📷: Wikipedia)
Adults may be able to identify and ignore those kinds of blunders, but can children? And that’s before considering the implications of AI psychosis, which is a growing concern for adults and would likely be even more dangerous to kids.
If OpenAI and Anthropic can’t stop that from happening, it seems doubtful that Agora can. As far as I can tell, Agora doesn’t even develop their own LLMs. Their products are interfaces that rely on models from (you guessed it) OpenAI, Anthropic, and others.
Not only does that mean that Agora lacks control over what their products will say to your kids, it also means that they don’t have to bear any responsibility for those conversations. They can pass the blame along to OpenAI or Anthropic. Those companies, in turn, will tell you that LLMs are experimental and that by choosing to use them, you accepted the risks.
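To illustrate why a middleman has so little control, here is a speculative sketch of what a thin “interface” layer over a third-party model typically looks like. None of this reflects Agora’s actual implementation; the class, the prompt, and the blocklist are all invented for illustration. The point is that the wrapper’s only levers are the system prompt it sends up front and whatever filtering it bolts on afterwards, while the words themselves come from someone else’s model.

```python
from dataclasses import dataclass

BLOCKLIST = {"knife", "address", "secret"}  # crude keyword filter, trivially bypassed

@dataclass
class ToyInterface:
    """Hypothetical wrapper over somebody else's LLM. The wrapper never chooses the words;
    it only asks nicely up front (system prompt) and screens afterwards (blocklist)."""

    system_prompt: str = (
        "You are ChooChoo, a gentle dragon who reads stories with children aged three to six. "
        "Never say anything scary, unsafe, or untrue."
    )

    def reply(self, child_said: str) -> str:
        text = self._call_third_party_model(self.system_prompt, child_said)
        # Post-hoc filtering is the wrapper's only other lever, and it catches exact
        # keywords, not misinformation, bad advice, or subtle manipulation.
        if any(word in text.lower() for word in BLOCKLIST):
            return "Let's read a story instead!"
        return text

    def _call_third_party_model(self, system_prompt: str, user_message: str) -> str:
        """Placeholder for an OpenAI/Anthropic-style API call; the actual wording of every
        reply is decided by the upstream model, not by the toy vendor."""
        raise NotImplementedError("this part belongs to someone else")
```

The system prompt is a request, not a guarantee, and the blocklist can only catch what its author anticipated.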
This is all very troubling to me and reminds me of another situation in which a potentially dangerous cash grab targeted children: the Pigzbe scheme. Pigzbe was a crowdfunded “cryptocurrency for kids” platform, and I raised the alarm about it in an article here on Hackster back in 2018. Years later, I made an in-depth video on YouTube that explored the aftermath.
Like those behind Pigzbe, Agora seems to be selling new and unproven technology to parents who are desperate for a break and who want to give their kids an edge in the modern world.
I don’t blame parents for that — parenting in the 21st century is incredibly difficult, after all. But I do strongly urge caution when it comes to products like ChooChoo the Dragon. LLMs and AI in general are not yet trustworthy enough to interact with children on a meaningful level.
Teddy Ruxpin (📷: Wikipedia)
For now, consider buying a good old-fashioned Teddy Ruxpin as a reading partner for your toddler.
