    Fragments: April 14

By Admin | April 18, 2026


I attended the first Pragmatic Summit early this year, where host
Gergely Orosz interviewed Kent Beck and me on stage. The video runs for about half an hour.


    I always enjoy nattering with Kent like this, and Gergely pushed into some worthwhile topics. Given
    the timing, AI dominated the conversation – we compared it to earlier
    technology shifts, the experience of agile methods, the role of TDD, the
    danger of unhealthy performance metrics, and how to thrive in an AI-native
    industry.

     ❄                ❄                ❄                ❄                ❄

Perl is a language I used a little, but never loved. However, the definitive book on it, by its designer Larry Wall, contains a wonderful gem: the three virtues of a programmer are hubris, impatience, and, above all, laziness.

    Bryan Cantrill also loves this virtue:

    Of these virtues, I have always found laziness to be the most profound: packed within its tongue-in-cheek self-deprecation is a commentary on not just the need for abstraction, but the aesthetics of it. Laziness drives us to make the system as simple as possible (but no simpler!) — to develop the powerful abstractions that then allow us to do much more, much more easily.

Of course, the implicit wink here is that it takes a lot of work to be lazy.

Understanding how to think about a problem domain by building abstractions (models) is my favorite part of programming. I love it because it gives me a deeper understanding of the problem domain, and because once I find a good set of abstractions, I get a buzz from the way they make difficulties melt away, allowing me to achieve much more functionality with fewer lines of code.

Cantrill worries that AI is so good at writing code that we risk losing that virtue, something reinforced by brogrammers bragging about producing thirty-seven thousand lines of code a day.

    The problem is that LLMs inherently lack the virtue of laziness. Work costs nothing to an LLM. LLMs do not feel a need to optimize for their own (or anyone’s) future time, and will happily dump more and more onto a layercake of garbage. Left unchecked, LLMs will make systems larger, not better — appealing to perverse vanity metrics, perhaps, but at the cost of everything that matters. As such, LLMs highlight how essential our human laziness is: our finite time forces us to develop crisp abstractions in part because we don’t want to waste our (human!) time on the consequences of clunky ones. The best engineering is always borne of constraints, and the constraint of our time places limits on the cognitive load of the system that we’re willing to accept. This is what drives us to make the system simpler, despite its essential complexity.

This reflection particularly struck me this Sunday evening. I’d spent a bit of time modifying how my music playlist generator worked. I needed a new capability, spent some time adding it, got frustrated at how long it was taking, and wondered about throwing a coding agent at it. More thought led me to realize that I was doing it in a more complicated way than I needed to. I was including a facility I didn’t need, and by applying yagni, I could make the whole thing much easier, doing the task in just a couple of dozen lines of code.

If I had used an LLM for this, it may well have done the task much more quickly, but would it have made a similar over-complication? If so, would I just shrug and say LGTM? Would that complication cause me (or the LLM) problems in the future?

     ❄                ❄                ❄                ❄                ❄

    Jessica Kerr (Jessitron) has a simple example of applying the principle of Test-Driven Development to prompting agents. She wants all updates to include updating the documentation.

    Instructions – We can change AGENTS.md to instruct our coding agent to look for documentation files and update them.

    Verification – We can add a reviewer agent to check each PR for missed documentation updates.

    This is two changes, so I can break this work into two parts. Which of these should we do first?

Of course, my initial comment about TDD answers that question.
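To make the first half of Jessitron's example concrete, here is a sketch of what such an instruction might look like. The post names AGENTS.md but doesn't show its contents, so the wording below is purely my own illustration:

```markdown
<!-- Illustrative only: the post names AGENTS.md but does not show its contents. -->
## Documentation

When a change alters behavior, configuration, or a public API:

1. Search the repository for documentation files (e.g. README.md, docs/).
2. Update every file whose content the change makes stale.
3. In your summary, list the documentation files you updated,
   or state why none applied.
```

Following the TDD parallel, the verification step (the reviewer agent checking PRs for missed documentation updates) is the one to build first, so you can watch it fail before the instruction exists.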

     ❄                ❄                ❄                ❄                ❄

Mark Little prodded an old memory of mine as he wondered how to work with AIs that are over-confident in their knowledge and thus prone to making up answers to questions, or to acting when they should be more hesitant. He draws inspiration from an old, low-budget, but classic sci-fi movie: Dark Star. I saw that movie once in my 20s (i.e. a long time ago), but I still remember the crisis scene where a crew member has to use philosophical argument to prevent a sentient bomb from detonating.

    Doolittle: You have no absolute proof that Sergeant Pinback ordered you to detonate.
    Bomb #20: I recall distinctly the detonation order. My memory is good on matters like these.
    Doolittle: Of course you remember it, but all you remember is merely a series of sensory impulses which you now realize have no real, definite connection with outside reality.
    Bomb #20: True. But since this is so, I have no real proof that you’re telling me all this.
    Doolittle: That’s all beside the point. I mean, the concept is valid no matter where it originates.
    Bomb #20: Hmmmm….
    Doolittle: So, if you detonate…
    Bomb #20: In nine seconds….
    Doolittle: …you could be doing so on the basis of false data.
    Bomb #20: I have no proof it was false data.
    Doolittle: You have no proof it was correct data!
    Bomb #20: I must think on this further.

    Doolittle has to expand the bomb’s consciousness, teaching it to doubt its sensors. As Little puts it:

    That’s a useful metaphor for where we are with AI today. Most AI systems are optimised for decisiveness. Given an input, produce an output. Given ambiguity, resolve it probabilistically. Given uncertainty, infer. This works well in bounded domains, but it breaks down in open systems where the cost of a wrong decision is asymmetric or irreversible. In those cases, the correct behaviour is often deferral, or even deliberate inaction. But inaction is not a natural outcome of most AI architectures. It has to be designed in.

In my more human interactions, I’ve always valued doubt, and I distrust people who operate under undue certainty. Doubt doesn’t necessarily lead to indecisiveness, but it does mean factoring the risk of inaccurate information or faulty reasoning into decisions with profound consequences.

    If we want AI systems that can operate safely without constant human oversight, we need to teach them not just how to decide, but when not to. In a world of increasing autonomy, restraint isn’t a limitation, it’s a capability. And in many cases, it may be the most important one we build.
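Little's point that inaction "has to be designed in" can be made concrete with a small sketch: a decision wrapper that refuses to act when confidence falls below a bar, and applies a stricter bar when the action is irreversible. All the names and thresholds here are my own illustration, not from any real framework:

```python
# Illustrative sketch of "designed-in inaction": deferral is a
# first-class outcome, not a failure mode. Names and thresholds
# are hypothetical, chosen only to show the shape of the idea.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Decision:
    action: Optional[str]  # None means "defer to a human"
    reason: str


def decide(confidence: float, action: str, *,
           irreversible: bool, threshold: float = 0.9) -> Decision:
    # Irreversible actions demand a stricter bar: the cost of a
    # wrong decision is asymmetric, as Little notes.
    bar = 0.99 if irreversible else threshold
    if confidence < bar:
        return Decision(None, f"confidence {confidence:.2f} below bar {bar:.2f}; deferring")
    return Decision(action, "confident enough to act")


# Reversible action at high confidence: act.
print(decide(0.95, "apply-patch", irreversible=False).action)   # apply-patch
# Same confidence, irreversible action: defer to a human.
print(decide(0.95, "delete-prod-db", irreversible=True).action)  # None
```

The design choice worth noticing is that `None` is a legitimate return value with an attached reason, so restraint shows up in the system's interface rather than being bolted on as error handling.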


