    LLMs contain a LOT of parameters. But what’s a parameter?

    By Admin | January 7, 2026


    When a model is trained, each word in its vocabulary is assigned a numerical value that captures the meaning of that word in relation to all the other words, based on how the word appears in countless examples across the model’s training data.

    Each word gets replaced by a kind of code?

    Yeah. But there’s a bit more to it. The numerical value—the embedding—that represents each word is in fact a list of numbers, with each number in the list representing a different facet of meaning that the model has extracted from its training data. The length of this list of numbers is another thing that LLM designers can specify before an LLM is trained. A common size is 4,096.

    Every word inside an LLM is represented by a list of 4,096 numbers?  

    Yup, that’s an embedding. And each of those numbers is tweaked during training. Numbers like these, the ones that training adjusts, are the model’s parameters. An LLM with embeddings that are 4,096 numbers long is said to have 4,096 dimensions.
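
    To make that concrete, here is a minimal sketch in Python of what an embedding table boils down to: a matrix with one 4,096-number row per word, looked up by position. The tiny vocabulary and the random values are assumptions for illustration; a real model has tens of thousands of tokens, and training tunes the values rather than leaving them random.

```python
import numpy as np

# Assumed toy vocabulary, just four words for illustration.
vocab = {"table": 0, "chair": 1, "astronaut": 2, "moon": 3}
embedding_dim = 4096  # the "dimensions" discussed above

# One row of 4,096 numbers per word. In a real model these start out
# random and get tweaked during training; each one is a parameter.
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), embedding_dim))

# "Looking up" a word's embedding is just indexing into the matrix.
chair_embedding = embedding_table[vocab["chair"]]
print(chair_embedding.shape)  # (4096,)
```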

    Why 4,096?

    It might look like a strange number. But LLMs (like anything that runs on a computer chip) work best with powers of two—2, 4, 8, 16, 32, 64, and so on. LLM engineers have found that 4,096 is a power of two that hits a sweet spot between capability and efficiency. Models with fewer dimensions are less capable; models with more dimensions are too expensive or slow to train and run. 

    Using more numbers allows the LLM to capture very fine-grained information about how a word is used in many different contexts, what subtle connotations it might have, how it relates to other words, and so on.
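
    One way to see why more dimensions get expensive: every extra dimension adds a whole column of trainable numbers to the embedding table alone. A back-of-the-envelope sketch, using an assumed (made-up) vocabulary size of 50,000 tokens:

```python
# Back-of-the-envelope: parameters in the embedding table alone.
vocab_size = 50_000   # assumed figure for illustration, not from any real model
embedding_dim = 4096  # the dimension size discussed above

embedding_params = vocab_size * embedding_dim
print(f"{embedding_params:,}")  # 204,800,000 -- about 200 million
```

    And that is before counting any of the model’s other weights, which in large models make up the bulk of the parameter count.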

    In February 2025, OpenAI released GPT-4.5, the firm’s largest LLM yet (some estimates have put its parameter count at more than 10 trillion). Nick Ryder, a research scientist at OpenAI who worked on the model, told me at the time that bigger models can work with extra information, like emotional cues, such as when a speaker’s words signal hostility: “All of these subtle patterns that come through a human conversation—those are the bits that these larger and larger models will pick up on.”

    The upshot is that all the words inside an LLM get encoded into a high-dimensional space. Picture thousands of words floating in the air around you. Words that are closer together have similar meanings. For example, “table” and “chair” will be closer to each other than they are to “astronaut,” which is close to “moon” and “Musk.” Way off in the distance you can see “prestidigitation.” It’s a little like that, but instead of being related to each other across three dimensions, the words inside an LLM are related across 4,096 dimensions.

    Yikes.

    It’s dizzying stuff. In effect, an LLM compresses the entire internet into a single monumental mathematical structure that encodes an unfathomable amount of interconnected information. It’s both why LLMs can do astonishing things and why they’re impossible to fully understand.    
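
    If you want to poke at the idea of “closeness” yourself, one common measure is cosine similarity, which scores how much two embedding vectors point in the same direction. The sketch below uses invented 3-dimensional vectors so the numbers fit on a page; the same math applies unchanged across 4,096 dimensions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 means pointing the same way; values near 0 mean unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented 3-dimensional "embeddings", purely for illustration.
table     = np.array([0.9, 0.8, 0.1])
chair     = np.array([0.8, 0.9, 0.2])
astronaut = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(table, chair))      # high (~0.99): related meanings
print(cosine_similarity(table, astronaut))  # lower (~0.30): unrelated meanings
```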


