    The breakthrough that makes robot faces feel less creepy

By Admin | January 20, 2026


When people talk face to face, nearly half of their attention is drawn to the movement of the lips. Despite this, robots still have great difficulty moving their mouths in a convincing way. Even the most advanced humanoid machines often rely on stiff, exaggerated mouth motions that look puppet-like, assuming they have a face at all.

    Humans place enormous importance on facial expression, especially subtle movements of the lips. While awkward walking or clumsy hand gestures can be forgiven, even small mistakes in facial motion tend to stand out immediately. This sensitivity contributes to what scientists call the “Uncanny Valley,” a phenomenon where robots appear unsettling rather than lifelike. Poor lip movement is a major reason robots can seem eerie or emotionally flat, but researchers say that may soon change.

    A Robot That Learns to Move Its Lips

    On January 15, a team from Columbia Engineering announced a major advance in humanoid robotics. For the first time, researchers have built a robot that can learn facial lip movements for speaking and singing. Their findings, published in Science Robotics, show the robot forming words in multiple languages and even performing a song from its AI-generated debut album “hello world_.”

    Rather than relying on preset rules, the robot learned through observation. It began by discovering how to control its own face using 26 separate facial motors. To do this, it watched its reflection in a mirror, then later studied hours of human speech and singing videos on YouTube to understand how people move their lips.

    “The more it interacts with humans, the better it will get,” said Hod Lipson, James and Sally Scapa Professor of Innovation in the Department of Mechanical Engineering and director of Columbia’s Creative Machines Lab, where the research took place.


    Robot Watches Itself Talking

    Creating natural-looking lip motion in robots is especially difficult for two main reasons. First, it requires advanced hardware, including flexible facial material and many small motors that must operate quietly and in perfect coordination. Second, lip movement is closely tied to speech sounds, which change rapidly and depend on complex sequences of phonemes.

    Human faces are controlled by dozens of muscles located beneath soft skin, allowing movements to flow naturally with speech. Most humanoid robots, however, have rigid faces with limited motion. Their lip movements are typically dictated by fixed rules, which leads to mechanical, unnatural expressions that feel unsettling.
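To see why that rule-based approach feels mechanical, the toy sketch below shows the kind of fixed phoneme-to-mouth-pose lookup the article is contrasting against. The phoneme symbols and motor values here are hypothetical and purely illustrative, not taken from any particular robot.

```python
# Illustrative only: a toy version of the fixed-rule approach described above,
# where each phoneme is mapped to a single canned mouth pose.
# Phoneme symbols and motor values are hypothetical.

VISEME_TABLE = {
    "AA": {"jaw_open": 0.8, "lip_round": 0.1},   # as in "father"
    "IY": {"jaw_open": 0.2, "lip_round": 0.0},   # as in "see"
    "UW": {"jaw_open": 0.3, "lip_round": 0.9},   # as in "boot"
    "B":  {"jaw_open": 0.0, "lip_round": 0.2},   # lips pressed shut
    "W":  {"jaw_open": 0.2, "lip_round": 0.8},   # puckered
}

NEUTRAL = {"jaw_open": 0.1, "lip_round": 0.1}

def pose_for_phoneme(phoneme: str) -> dict:
    """Return a fixed mouth pose for a phoneme, falling back to a neutral pose."""
    return VISEME_TABLE.get(phoneme, NEUTRAL)

if __name__ == "__main__":
    # Every occurrence of a phoneme produces the exact same pose, with no
    # coarticulation or timing nuance -- which is why the result looks puppet-like.
    for ph in ["HH", "AA", "B", "IY", "UW"]:
        print(ph, pose_for_phoneme(ph))
```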

    To address these challenges, the Columbia team designed a flexible robotic face with a high number of motors and allowed the robot to learn facial control on its own. The robot was placed in front of a mirror and began experimenting with thousands of random facial expressions. Much like a child exploring their reflection, it gradually learned which motor movements produced specific facial shapes. This process relied on what researchers call a “vision-to-action” language model (VLA).
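As a rough illustration of that self-modeling stage, the sketch below shows a "motor babbling" loop under stated assumptions: random commands are sent to the 26 facial motors, the resulting face is observed, and an inverse model learns to predict which commands produce a given facial shape. The network architecture, embedding size, and stand-in camera are assumptions for the sketch, not the published system.

```python
# A minimal sketch, not the authors' code: send random motor commands, observe
# the resulting face in a mirror camera, and train an inverse model that
# predicts which commands produce a given face.
import torch
import torch.nn as nn

NUM_MOTORS = 26          # the article reports 26 facial motors
IMG_FEATURES = 512       # assumed size of a face-image embedding

class InverseFaceModel(nn.Module):
    """Maps an observed face embedding to the motor commands that produced it."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_FEATURES, 256), nn.ReLU(),
            nn.Linear(256, NUM_MOTORS), nn.Sigmoid(),  # motors normalized to [0, 1]
        )

    def forward(self, face_embedding):
        return self.net(face_embedding)

def babble_step(model, optimizer, capture_face_embedding):
    """One round of 'motor babbling': random command -> observe -> learn."""
    commands = torch.rand(1, NUM_MOTORS)             # a random facial expression
    observed = capture_face_embedding(commands)      # stand-in for mirror camera + encoder
    predicted = model(observed)
    loss = nn.functional.mse_loss(predicted, commands)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = InverseFaceModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Fake "camera": a fixed random projection from motor space to image space.
    projection = torch.randn(NUM_MOTORS, IMG_FEATURES)
    capture = lambda cmds: cmds @ projection
    for step in range(1000):
        loss = babble_step(model, optimizer, capture)
    print(f"final reconstruction loss: {loss:.4f}")
```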

    Learning From Human Speech and Song

    After understanding how its own face worked, the robot was shown videos of people talking and singing. The AI system observed how mouth shapes changed with different sounds, allowing it to associate audio input directly with motor movement. With this combination of self-learning and human observation, the robot could convert sound into synchronized lip motion.
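A minimal sketch of that second stage, under assumptions about the feature representation, is shown below: a small model maps a short window of audio features to a frame of motor commands, trained against mouth poses recovered from video. The mel-spectrogram size, context length, and dummy training data are illustrative only.

```python
# A hedged sketch of mapping a window of audio features (e.g. a mel-spectrogram)
# to synchronized facial motor commands. Feature sizes, window length, and
# training data are assumptions; the real system learns from videos of people
# talking and singing.
import torch
import torch.nn as nn

NUM_MOTORS = 26
N_MELS = 80              # mel-spectrogram bins (assumed)
CONTEXT_FRAMES = 20      # audio frames of context per prediction (assumed)

class AudioToLipModel(nn.Module):
    """Predicts one frame of facial motor commands from a short audio context."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(input_size=N_MELS, hidden_size=128, batch_first=True)
        self.head = nn.Sequential(nn.Linear(128, NUM_MOTORS), nn.Sigmoid())

    def forward(self, mel_window):                   # (batch, CONTEXT_FRAMES, N_MELS)
        _, hidden = self.encoder(mel_window)         # hidden: (1, batch, 128)
        return self.head(hidden.squeeze(0))          # (batch, NUM_MOTORS)

if __name__ == "__main__":
    model = AudioToLipModel()
    # Dummy batch standing in for mel features extracted from speech or singing
    # video, paired with mouth poses recovered from the corresponding frames.
    mel = torch.randn(8, CONTEXT_FRAMES, N_MELS)
    target_poses = torch.rand(8, NUM_MOTORS)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):
        pred = model(mel)
        loss = nn.functional.mse_loss(pred, target_poses)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"training loss on dummy batch: {loss.item():.4f}")
```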

    The research team tested the system across multiple languages, speech styles, and musical examples. Even without understanding the meaning of the audio, the robot was able to move its lips in time with the sounds it heard.

    The researchers acknowledge that the results are not flawless. “We had particular difficulties with hard sounds like ‘B’ and with sounds involving lip puckering, such as ‘W’. But these abilities will likely improve with time and practice,” Lipson said.

    Beyond Lip Sync to Real Communication

    The researchers stress that lip synchronization is only one part of a broader goal. Their aim is to give robots richer, more natural ways to communicate with people.

    “When the lip sync ability is combined with conversational AI such as ChatGPT or Gemini, the effect adds a whole new depth to the connection the robot forms with the human,” said Yuhang Hu, who led the study as part of his PhD work. “The more the robot watches humans conversing, the better it will get at imitating the nuanced facial gestures we can emotionally connect with.”

    “The longer the context window of the conversation, the more context-sensitive these gestures will become,” Hu added.

    Facial Expression as the Missing Link

    The research team believes that emotional expression through the face represents a major gap in current robotics.

    “Much of humanoid robotics today is focused on leg and hand motion, for activities like walking and grasping,” Lipson said. “But facial affection is equally important for any robotic application involving human interaction.”

    Lipson and Hu expect realistic facial expressions to become increasingly important as humanoid robots are introduced into entertainment, education, healthcare, and elder care. Some economists estimate that more than one billion humanoid robots could be produced over the next decade.

    “There is no future where all these humanoid robots don’t have a face. And when they finally have a face, they will need to move their eyes and lips properly, or they will forever remain uncanny,” Lipson said.

    “We humans are just wired that way, and we can’t help it. We are close to crossing the uncanny valley,” Hu added.

    Risks and Responsible Progress

    This work builds on Lipson’s long-running effort to help robots form more natural connections with people by learning facial behaviors such as smiling, eye contact, and speech. He argues that these skills must be learned through observation rather than programmed through rigid instructions.

    “Something magical happens when a robot learns to smile or speak just by watching and listening to humans,” he said. “I’m a jaded roboticist, but I can’t help but smile back at a robot that spontaneously smiles at me.”

    Hu emphasized that the human face remains one of the most powerful tools for communication, and scientists are only beginning to understand how it works.

    “Robots with this ability will clearly have a much better ability to connect with humans because such a significant portion of our communication involves facial body language, and that entire channel is still untapped,” Hu said.

    The researchers also acknowledge the ethical concerns that come with creating machines that can emotionally engage with humans.

    “This will be a powerful technology. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks,” Lipson said.


