
    Facial Recognition Errors Affect Millions Globally

By Admin · March 30, 2026


    Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into more useful—and menacing—territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life.

    Yet the story those photos can tell inevitably has errors. FRT makers, like those of any diagnostic technology, must balance two types of errors: false positives and false negatives. Any single comparison therefore has three possible outcomes: a correct result, a false positive (two different faces declared a match), or a false negative (the same face missed).

    In best-case scenarios—such as comparing someone’s passport photo to a photo taken by a border agent—false-negative rates are around two in 1,000 and false positives are less than one in 1 million.
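    The expected error counts implied by those best-case rates can be sketched with some simple arithmetic. This is a hedged illustration, not the article's own model; the one-million-crossing figure is a made-up example.

    ```python
    # Expected errors at the article's best-case rates:
    # ~2-in-1,000 false negatives, under 1-in-1,000,000 false positives.
    FALSE_NEGATIVE_RATE = 2 / 1_000
    FALSE_POSITIVE_RATE = 1 / 1_000_000

    def expected_errors(comparisons: int) -> tuple[float, float]:
        """Expected false negatives and false positives over N one-to-one checks."""
        return comparisons * FALSE_NEGATIVE_RATE, comparisons * FALSE_POSITIVE_RATE

    # Hypothetical: a million passport-vs-photo checks at a border.
    fn, fp = expected_errors(1_000_000)
    print(f"~{fn:,.0f} travelers wrongly rejected, ~{fp:,.0f} wrongly accepted")
    ```

    At one-to-one verification, false negatives dominate by three orders of magnitude, which is why the failure mode at a border is usually a second look rather than a mistaken arrest.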

    In the rare event you’re one of those false negatives, a border agent might ask you to show your passport and take a second look at your face. But as people ask more of the technology, more ambitious applications could lead to more catastrophic errors. Let’s say that police are searching for a suspect, and they’re comparing an image taken with a security camera with a previous “mug shot” of the suspect.

    Training-data composition, differences in how sensors detect faces, and intrinsic differences between groups, such as age, all affect an algorithm’s performance. The United Kingdom estimated that its FRT exposed some groups, such as women and darker-skinned people, to risks of misidentification as high as two orders of magnitude greater than it did to others.

    [Image: Five faces arranged left to right, from easy to hard to recognize. Less clear photographs are harder for FRT to process. Credit: iStock]

    What happens with photos of people who aren’t cooperating, or vendors that train algorithms on biased datasets, or field agents who demand a swift match from a huge dataset? Here, things get murky.

    Consider a busy trade fair using FRT to check attendees against a database, or gallery, of images of its 10,000 registrants. Even at 99.9 percent accuracy, you'll get about a dozen false positives or negatives, which may be a trade-off worth making for the fair's organizers. But if police start using something like that across a city of 1 million people, the number of potential victims of mistaken identity rises, as do the stakes.
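    The trade-fair arithmetic above scales linearly with gallery size, which is the point of the city comparison. A minimal sketch, using the 99.9 percent accuracy figure quoted in the text (the city-scale call is purely illustrative):

    ```python
    def expected_mistakes(gallery_size: int, accuracy: float = 0.999) -> float:
        """Expected erroneous matches/non-matches across a gallery at a
        fixed per-comparison accuracy."""
        return gallery_size * (1 - accuracy)

    print(round(expected_mistakes(10_000)))     # trade fair: about a dozen
    print(round(expected_mistakes(1_000_000)))  # a city: roughly a thousand
    ```

    Nothing about the algorithm has changed between the two calls; only the population it is pointed at. That is why the same accuracy number can be acceptable in one deployment and hazardous in another.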

    What if we ask FRT to tell us if the government has ever recorded and stored an image of a given person? That’s what U.S. Immigration and Customs Enforcement agents have done since June 2025, using the Mobile Fortify app. The agency conducted more than 100,000 FRT searches in the first six months. The size of the potential gallery is at least 1.2 billion images.

    At that size, assuming even best-case images, the system is likely to return around 1 million false matches, but at a rate at least 10 times as high for darker-skinned people, depending on the subgroup.

    Responsible use of this powerful technology would involve independent identity checks, multiple sources of data, and a clear understanding of the error thresholds, says computer scientist Erik Learned-Miller of the University of Massachusetts Amherst: “The care we take in deploying such systems should be proportional to the stakes.”

