
    LLMs contain a LOT of parameters. But what’s a parameter?

By Admin | January 7, 2026


    When a model is trained, each word in its vocabulary is assigned a numerical value that captures the meaning of that word in relation to all the other words, based on how the word appears in countless examples across the model’s training data.

    Each word gets replaced by a kind of code?

    Yeah. But there’s a bit more to it. The numerical value—the embedding—that represents each word is in fact a list of numbers, with each number in the list representing a different facet of meaning that the model has extracted from its training data. The length of this list of numbers is another thing that LLM designers can specify before an LLM is trained. A common size is 4,096.
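The idea of "each word gets replaced by a list of numbers" can be sketched in a few lines. This is a toy illustration, not how any real LLM is implemented: the vocabulary, the 8-dimensional embeddings, and the random initial values are all made up for the example (real models use tens of thousands of tokens and embeddings thousands of numbers long, such as 4,096).

```python
import numpy as np

# Toy vocabulary and embedding table (hypothetical values for illustration).
# Each row is one word's embedding: a list of `embedding_dim` numbers.
# During training, every one of these numbers would be adjusted.
vocab = {"table": 0, "chair": 1, "astronaut": 2, "moon": 3, "cat": 4}
embedding_dim = 8  # a real LLM might use 4,096 here

rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), embedding_dim))

def embed(word: str) -> np.ndarray:
    """Replace a word with its list of numbers (its embedding)."""
    return embedding_table[vocab[word]]

print(embed("chair").shape)  # (8,)
```

Looking up a word is just selecting its row; the interesting part is that training slowly nudges those rows until nearby rows mean similar things.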

    Every word inside an LLM is represented by a list of 4,096 numbers?  

    Yup, that’s an embedding. And each of those numbers is tweaked during training. An LLM with embeddings that are 4,096 numbers long is said to have 4,096 dimensions.
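Since every number in every embedding is a trainable parameter, the embedding table alone accounts for a large block of a model's parameter count. A rough back-of-the-envelope calculation, assuming a hypothetical 50,000-token vocabulary and 4,096 dimensions (figures chosen for illustration; real models vary):

```python
# Hypothetical sizes, for illustration only.
vocab_size = 50_000
embedding_dim = 4_096

# One parameter per number in the table: rows x columns.
embedding_params = vocab_size * embedding_dim
print(f"{embedding_params:,}")  # 204,800,000
```

That is just the embedding table; the rest of a model's billions (or trillions) of parameters live in its other layers.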

    Why 4,096?

    It might look like a strange number. But LLMs (like anything that runs on a computer chip) work best with powers of two—2, 4, 8, 16, 32, 64, and so on. LLM engineers have found that 4,096 is a power of two that hits a sweet spot between capability and efficiency. Models with fewer dimensions are less capable; models with more dimensions are too expensive or slow to train and run. 

    Using more numbers allows the LLM to capture very fine-grained information about how a word is used in many different contexts, what subtle connotations it might have, how it relates to other words, and so on.

    Back in February, OpenAI released GPT-4.5, the firm’s largest LLM yet (some estimates have put its parameter count at more than 10 trillion). Nick Ryder, a research scientist at OpenAI who worked on the model, told me at the time that bigger models can work with extra information, like emotional cues, such as when a speaker’s words signal hostility: “All of these subtle patterns that come through a human conversation—those are the bits that these larger and larger models will pick up on.”

    The upshot is that all the words inside an LLM get encoded into a high-dimensional space. Picture thousands of words floating in the air around you. Words that are closer together have similar meanings. For example, “table” and “chair” will be closer to each other than they are to “astronaut,” which is close to “moon” and “Musk.” Way off in the distance you can see “prestidigitation.” It’s a little like that, but instead of being related to each other across three dimensions, the words inside an LLM are related across 4,096 dimensions.
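The notion of "closer together" in that high-dimensional space is usually measured with cosine similarity. Below is a minimal sketch using hand-crafted 3-dimensional vectors chosen so that related words point in similar directions; real LLM embeddings are learned, not hand-written, and have thousands of dimensions.

```python
import numpy as np

# Hand-crafted toy "embeddings" (assumed values for illustration).
vectors = {
    "table":     np.array([0.9, 0.1, 0.0]),
    "chair":     np.array([0.8, 0.2, 0.1]),
    "astronaut": np.array([0.0, 0.9, 0.4]),
    "moon":      np.array([0.1, 0.8, 0.5]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 = pointing the same way (similar meaning); near 0 = unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "table" is closer to "chair" than to "moon".
print(cosine_similarity(vectors["table"], vectors["chair"]))
print(cosine_similarity(vectors["table"], vectors["moon"]))
```

The same formula works unchanged whether the vectors have 3 numbers or 4,096; the geometry just becomes impossible to picture.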

    Yikes.

    It’s dizzying stuff. In effect, an LLM compresses the entire internet into a single monumental mathematical structure that encodes an unfathomable amount of interconnected information. It’s both why LLMs can do astonishing things and why they’re impossible to fully understand.    



