    Posit AI Blog: mall 0.2.0

By Admin, October 29, 2025


mall uses Large Language Models (LLMs) to run
Natural Language Processing (NLP) operations against your data. The package
is available for both R and Python. Version 0.2.0 has been released to
CRAN and
PyPI, respectively.

    In R, you can install the latest version with:
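The code block was lost in extraction; for a CRAN package, this is presumably the standard command:

```r
# Install the released version of mall from CRAN
install.packages("mall")
```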

    In Python, with:
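This code block was also lost in extraction; for a PyPI package, this is presumably:

```shell
# Install the released version of mall from PyPI
pip install mall
```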

This release expands the number of LLM providers you can use with mall. In
Python, it also introduces the option to run the NLP operations over string
vectors, and in R, it adds support for parallelized requests.

We are also excited to announce a brand new cheatsheet for this package. It
is available in print (PDF) and HTML formats!

    More LLM providers

The biggest highlight of this release is the ability to use external LLM
providers such as OpenAI, Gemini
and Anthropic. Instead of writing an integration for
each provider one by one, mall uses specialized integration packages as
intermediaries.

    In R, mall uses the ellmer package
    to integrate with a variety of LLM providers.
    To access the new feature, first create a chat connection, and then pass that
    connection to llm_use(). Here is an example of connecting and using OpenAI:

    install.packages("ellmer")
    
    library(mall)
    library(ellmer)
    
    chat <- chat_openai()
    #> Using model = "gpt-4.1".
    
    llm_use(chat, .cache = "_my_cache")
    #> 
    #> ── mall session object 
    #> Backend: ellmer
    #> LLM session: model:gpt-4.1
    #> R session: cache_folder:_my_cache
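Once the connection is registered, the NLP verbs work as before. Here is a minimal sketch, assuming the `reviews` example data that ships with mall and a valid OpenAI key in the environment (the column name `review` is an assumption):

```r
library(mall)
library(ellmer)

# Register an external provider for this session
chat <- chat_openai()
llm_use(chat)

# Run a sentiment classification over the `review` column
data("reviews", package = "mall")
llm_sentiment(reviews, review)
```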

    In Python, mall uses chatlas as
    the integration point with the LLM. chatlas also integrates with
    several LLM providers.
    To use, first instantiate a chatlas chat connection class, and then pass that
    to the Polars data frame via the .llm.use() function:

    pip install chatlas
    
    import mall
    from chatlas import ChatOpenAI
    
    chat = ChatOpenAI()
    
    data = mall.MallData
    reviews = data.reviews
    
    reviews.llm.use(chat)
    #> {'backend': 'chatlas', 'chat': <chatlas Chat object>, '_cache': '_mall_cache'}

Connecting mall to external LLM providers introduces a cost consideration.
Most providers charge for API use, so running the operations over a large
table with long texts could be expensive.
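Before pointing mall at a paid provider, a back-of-the-envelope token estimate can help. The sketch below is not part of mall; the 4-characters-per-token heuristic and the per-million-token price are illustrative assumptions only:

```python
def estimate_prompt_cost(texts, usd_per_million_tokens=0.40, chars_per_token=4):
    """Very rough estimate: total characters / chars-per-token, priced per million tokens."""
    total_tokens = sum(len(t) for t in texts) / chars_per_token
    return total_tokens / 1_000_000 * usd_per_million_tokens

# 100,000 rows of ~200-character reviews at an assumed $0.40 per million input tokens
print(estimate_prompt_cost(["x" * 200] * 100_000))
# → 2.0
```

This ignores the per-row instructions mall adds to each prompt and the output tokens, so treat it as a lower bound.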

    Parallel requests (R only)

A new feature introduced in ellmer 0.3.0
makes it possible to submit multiple prompts in parallel, rather than in sequence.
This makes it faster, and potentially cheaper, to process a table. If the provider
supports this feature, ellmer is able to leverage it via the
parallel_chat()
function. Gemini and OpenAI both support it.
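You normally will not call it directly, since mall does so internally, but a minimal ellmer sketch of the feature looks like this (the prompts are illustrative):

```r
library(ellmer)

chat <- chat_openai()

# Several prompts submitted at once instead of one by one
prompts <- list(
  "Classify the sentiment of: 'I love this product'",
  "Classify the sentiment of: 'This was a waste of money'"
)
chats <- parallel_chat(chat, prompts)
```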

In the new release of mall, the integration with ellmer has been written
specifically to take advantage of parallel chat. The internals have been re-written to
submit the NLP-specific instructions as a system message in order to
reduce the size of each prompt. Additionally, the cache system has also been
re-tooled to support batched requests.

    NLP operations without a table

    Since its initial version, mall has provided the ability for R users to perform
    the NLP operations over a string vector, in other words, without needing a table.
    Starting with the new release, mall also provides this same functionality
    in its Python version.

mall can process vectors contained in a list object. To use it, initialize a
new LLMVec class object with either an Ollama model or a chatlas Chat
object, and then access the same NLP functions as in the Polars extension.

    # Initialize a Chat object
    from chatlas import ChatOllama
    chat = ChatOllama(model = "llama3.2")
    
    # Pass it to a new LLMVec
    from mall import LLMVec
    llm = LLMVec(chat)    

    Access the functions via the new LLMVec object, and pass the text to be processed.

    llm.sentiment(["I am happy", "I am sad"])
    #> ['positive', 'negative']
    
    llm.translate(["Este es el mejor dia!"], "english")
    #> ['This is the best day!']

    For more information visit the reference page: LLMVec

    New cheatsheet

The brand new official cheatsheet is now available from Posit:
Natural Language Processing using LLMs in R/Python.
Its main feature is that one side of the page is dedicated to the R version,
and the other side to the Python version.

A web page version is also available on the official cheatsheet site
here. It takes
advantage of a tab feature that lets you select between R and Python
explanations and examples.

    Enjoy this blog? Get notified of new posts by email:

    Posts also available at r-bloggers


