    Anchoring AI to a reference application

    By Admin · November 14, 2025


    Service templates are a typical building block in the “golden paths” organisations build for their engineering teams, to make it easy to do the right thing. The templates are supposed to be the role models for all the services in the organisation, always representing the most up-to-date coding patterns and standards.

    One of the challenges with service templates, though, is that once a team has instantiated a service with one, it’s tedious to feed template updates back to those services. Can GenAI help with that?

    Reference application as sample provider

    As part of a larger experiment that I recently wrote about here, I created an MCP server that gives a coding assistant access to coding samples for typical patterns. In my case, this was for a Spring Boot web application, where the patterns were repository, service and controller classes. It is a well-established prompting practice at this point that providing LLMs with examples of the outputs we want leads to better results. To put “providing examples” into fancier terms, this is also called “few-shot prompting” or “in-context learning”.
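    To make this concrete, here is a minimal sketch of the sample-lookup logic such an MCP server tool might wrap; the MCP protocol wiring itself is left out, and the paths, class names and pattern names are hypothetical rather than taken from the original project.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.Map;

        // Sketch of the lookup behind a "get pattern sample" tool: each pattern name
        // maps to a representative class in the reference application, and the tool
        // returns that class's source so the agent can use it as a few-shot example.
        public class PatternSampleProvider {

            // Hypothetical mapping; a real reference application would define its own.
            private static final Map<String, String> SAMPLES = Map.of(
                    "controller", "src/main/java/com/example/reference/order/OrderController.java",
                    "service", "src/main/java/com/example/reference/order/OrderService.java",
                    "repository", "src/main/java/com/example/reference/order/OrderRepository.java");

            private final Path referenceAppRoot;

            public PatternSampleProvider(Path referenceAppRoot) {
                this.referenceAppRoot = referenceAppRoot;
            }

            // Returns the source of the reference class for the given pattern,
            // ready to be embedded into the assistant's context.
            public String sampleFor(String pattern) throws IOException {
                String relativePath = SAMPLES.get(pattern);
                if (relativePath == null) {
                    throw new IllegalArgumentException("Unknown pattern: " + pattern);
                }
                return Files.readString(referenceAppRoot.resolve(relativePath));
            }
        }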

    When I started working with code samples in prompts, I quickly realised how tedious this was, because I was working in a natural-language markdown file. It felt a little bit like writing my first Java exams at university, in pencil: you have no idea if the code you’re writing actually compiles. And what’s more, if you’re creating prompts for multiple coding patterns, you want to keep them consistent with each other. Maintaining code samples in a reference application project that you can compile and run (like a service template) makes it a lot easier to provide AI with compilable, consistent samples.

    [Diagram: Anchoring AI to a reference application]

    Detect drift from the reference application

    Now back to the problem statement I mentioned at the beginning: Once code is generated (be that with AI, or with a service template), and then further extended and maintained, codebases often drift away from the role model of the reference application.

    So in a second step, I wondered how we might use this approach to do a “code pattern drift detection” between the codebase and the reference application. I tested this with a relatively simple example: I added a logger and log.debug statements to the reference application’s controller classes:

    [Screenshot: a git commit diff in the reference application, showing a controller with an added @Slf4j annotation and a log.debug statement in one of the endpoint mappings]
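    In code, the change looks roughly like the following sketch (a hypothetical controller, not the actual class from the reference application): Lombok’s @Slf4j adds a static log field, and the endpoint gains a debug statement.

        import lombok.extern.slf4j.Slf4j;
        import org.springframework.http.ResponseEntity;
        import org.springframework.web.bind.annotation.GetMapping;
        import org.springframework.web.bind.annotation.PathVariable;
        import org.springframework.web.bind.annotation.RequestMapping;
        import org.springframework.web.bind.annotation.RestController;

        // Hypothetical controller after the reference-application change: the added
        // @Slf4j annotation and the log.debug call are exactly the drift that
        // downstream services will be missing.
        @Slf4j
        @RestController
        @RequestMapping("/api/orders")
        public class OrderController {

            @GetMapping("/{id}")
            public ResponseEntity<String> getOrder(@PathVariable Long id) {
                log.debug("Fetching order {}", id);
                return ResponseEntity.ok("order " + id);
            }
        }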

    Then I expanded the MCP server to provide access to the git commits in the reference application. Asking the agent to first look for the actual changes in the reference gives me some control over the scope of the drift detection: I can use the commits to communicate to the AI exactly what type of drift I’m interested in. Before I introduced this, when I just asked the AI to compare the reference controllers with the existing controllers, it went a bit overboard with lots of irrelevant comparisons; this commit-scoping approach had a noticeably good impact.
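    The tool behind that could be as simple as the following sketch (again leaving out the MCP server wiring, and not necessarily how the original server does it): it shells out to git in the reference repository and returns the latest commit together with its diff, which the agent then uses to scope its comparison.

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Path;

        // Sketch of a "latest reference commit" lookup: returns the most recent commit
        // message and patch from the reference application's repository.
        public class ReferenceCommitProvider {

            private final Path referenceAppRoot;

            public ReferenceCommitProvider(Path referenceAppRoot) {
                this.referenceAppRoot = referenceAppRoot;
            }

            public String latestCommitWithDiff() throws IOException, InterruptedException {
                Process process = new ProcessBuilder("git", "log", "-1", "--patch")
                        .directory(referenceAppRoot.toFile())
                        .redirectErrorStream(true)
                        .start();
                // Read the output before waiting, so a large diff cannot block the process.
                String output = new String(process.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
                int exitCode = process.waitFor();
                if (exitCode != 0) {
                    throw new IOException("git log failed (exit " + exitCode + "): " + output);
                }
                return output;
            }
        }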

    [Diagram: an expanded version of the previous setup, now showing the drift detection. The prompt asks the agent to find the latest changes; the agent gets the latest commit from the reference application via the MCP server, looks at the diff, uses it to analyse the target application, and creates a drift report. In a second step, the user can then ask the agent to write code that closes the gaps identified in the report.]

    In the first step, I just asked AI to generate a report for me that identified all the drift, so I could review and edit that report, e.g. remove findings that were irrelevant. In the second step, I asked AI to take the report and write code that closes the gaps identified.

    When is AI bringing something new to the table?

    A thing as simple as adding a logger, or changing a logging framework, can also be done deterministically by codemod tools like OpenRewrite. So bear that in mind before you reach for AI.

    Where AI can shine is whenever we have drift that requires changes more dynamic than regular-expression-based codemod recipes can express. In an advanced form of the logging example, this might be turning non-standardised, rich log statements into a structured format, where an LLM might be better at mapping a wide variety of existing log messages onto the respective structure.
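    As an illustration of that kind of transformation (my own example, here using SLF4J’s fluent key-value API as the target, which is not something the original article prescribes), the rewrite might look like this:

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        // Free-form vs. structured logging: the kind of rewrite where an LLM can map
        // many differently worded messages onto one target structure.
        public class StructuredLoggingExample {

            private static final Logger log = LoggerFactory.getLogger(StructuredLoggingExample.class);

            void before(String orderId, int attempt) {
                // Free-form: the interesting values are buried in the message text.
                log.debug("Retrying payment for order " + orderId + " (attempt " + attempt + ")");
            }

            void after(String orderId, int attempt) {
                // Structured: the same information as key-value pairs (SLF4J 2.x fluent API),
                // which log backends can index and query.
                log.atDebug()
                   .addKeyValue("event", "payment_retry")
                   .addKeyValue("orderId", orderId)
                   .addKeyValue("attempt", attempt)
                   .log("Retrying payment");
            }
        }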

    The example MCP server is included in the repository that accompanies the original article.



    Source link
