
    DeepSeek may have found a new way to improve AI’s ability to remember

    October 29, 2025


    Currently, most large language models break text down into thousands of tiny units called tokens. This turns the text into representations that models can understand. However, these tokens quickly become expensive to store and compute with as conversations with end users grow longer. When a user chats with an AI for lengthy periods, this challenge can cause the AI to forget things the user has already told it and get information muddled, a problem some call “context rot.”
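
    To make that cost concrete, here is a minimal Python sketch of how the token count of a chat grows when the full history is re-encoded on every turn. It uses OpenAI's tiktoken library purely as an illustrative stand-in for a tokenizer; nothing here is DeepSeek-specific.

```python
# Illustrative only: shows why a growing chat history becomes expensive in
# text tokens, since the whole history is tokenized and attended over each turn.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # an arbitrary tokenizer choice

history = []
turns = [
    "My name is Dana and I'm planning a trip to Lisbon in March.",
    "I prefer boutique hotels and I'm vegetarian.",
    "Can you suggest a three-day itinerary?",
]

for turn in turns:
    history.append(turn)
    context = "\n".join(history)         # the full history rides along every turn
    n_tokens = len(enc.encode(context))  # tokens the model must store and attend to
    print(f"turn {len(history)}: {n_tokens} tokens in context")
```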

    The new methods developed by DeepSeek (and published in its latest paper) could help to overcome this issue. Instead of storing words as tokens, its system packs written information into image form, almost as if it’s taking a picture of pages from a book. This allows the model to retain nearly the same information while using far fewer tokens, the researchers found. 
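
    The rough shape of that idea can be sketched in a few lines of Python. Everything below is an assumption made for illustration (the page size, the one-visual-token-per-region budget, and the tiktoken tokenizer); it is not DeepSeek's pipeline, only a way to see how the same content can be accounted for as image regions rather than text tokens.

```python
# Illustrative sketch, not DeepSeek's method: render conversation text onto a
# page image ("take a picture of the page") and compare its text-token count
# with an assumed per-page visual-token budget.
from PIL import Image, ImageDraw
import tiktoken

text = ("Meeting notes: the user is vegetarian, prefers boutique hotels, "
        "and is travelling to Lisbon in March. ") * 60

# Cost of keeping this content as ordinary text tokens.
n_text_tokens = len(tiktoken.get_encoding("cl100k_base").encode(text))

# Render the same content as a single page image.
page = Image.new("L", (1024, 1024), color=255)
wrapped = "\n".join(text[i:i + 80] for i in range(0, len(text), 80))
ImageDraw.Draw(page).multiline_text((8, 8), wrapped, fill=0)

# Assumption: the vision encoder emits one token per 64x64 region of the page.
region = 64
n_visual_tokens = (page.width // region) * (page.height // region)

print(f"text tokens:   {n_text_tokens}")    # on the order of a thousand here
print(f"visual tokens: {n_visual_tokens}")  # 256 under the assumed budget
```

    Whether the visual side actually comes out cheaper depends on how densely the page is rendered and how aggressively the encoder downsamples it; the point of the sketch is only the bookkeeping.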

    Essentially, DeepSeek's OCR model serves as a testbed for these new methods, which allow more information to be packed into AI models more efficiently.

    Besides using visual tokens instead of text-only ones, the model relies on a form of tiered compression not unlike the way human memories fade: older or less critical content is stored in a slightly blurrier form to save space. Even so, the paper's authors argue, this compressed content remains accessible in the background while the system stays highly efficient. A loose illustration of that fading-memory scheme is sketched below.
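
    The following sketch uses assumptions only (page size, downsampling factors, and a one-token-per-64x64-region budget) and is not the paper's algorithm; it simply pictures re-rendering older pages of context at lower resolution so they occupy a smaller visual-token budget.

```python
# Hedged illustration of tiered compression: older pages are stored at
# progressively lower resolution, so they cost fewer visual tokens but come
# back blurrier. All sizes and budgets here are assumptions for the sketch.
from PIL import Image

REGION = 64  # assumed pixels of page area per visual token

def store_page(page: Image.Image, age: int) -> Image.Image:
    """Downsample a page more aggressively the older it is."""
    factor = 2 ** min(age, 3)  # cap the blurring for very old pages
    return page.resize((page.width // factor, page.height // factor))

def visual_token_count(page: Image.Image) -> int:
    return max(1, (page.width // REGION) * (page.height // REGION))

# The same-sized page of context at ages 0 (newest), 1, and 2.
for age in range(3):
    fresh = Image.new("L", (1024, 1024), color=255)
    stored = store_page(fresh, age)
    print(f"age {age}: stored at {stored.size}, ~{visual_token_count(stored)} visual tokens")
```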

    Text tokens have long been the default building block in AI systems. Using visual tokens instead is unconventional, and as a result, DeepSeek’s model is quickly capturing researchers’ attention. Andrej Karpathy, the former Tesla AI chief and a founding member of OpenAI, praised the paper on X, saying that images may ultimately be better than text as inputs for LLMs. Text tokens might be “wasteful and just terrible at the input,” he wrote. 

    Manling Li, an assistant professor of computer science at Northwestern University, says the paper offers a new framework for addressing the existing challenges in AI memory. “While the idea of using image-based tokens for context storage isn’t entirely new, this is the first study I’ve seen that takes it this far and shows it might actually work,” Li says.



