
    The Five Skills I Actually Use Every Day as an AI PM (and How You Can Too) – O’Reilly

By Admin · January 31, 2026


    This post first appeared on Aman Khan’s AI Product Playbook newsletter and is being republished here with the author’s permission.

    Let me start with some honesty. When people ask me “Should I become an AI PM?” I tell them they’re asking the wrong question.

    Here’s what I’ve learned: Becoming an AI PM isn’t about chasing a trendy job title. It’s about developing concrete skills that make you more effective at building products in a world where AI touches everything.

    Every PM is becoming an AI PM, whether they realize it or not. Your payment flow will have fraud detection. Your search bar will have semantic understanding. Your customer support will have chatbots.

    Think of AI product management less as an OR and more as an AND. For example: AI x health tech PM, or AI x fintech PM.

    The Five Skills I Actually Use Every Day

    This post was adapted from a conversation with Aakash Gupta on The Growth Podcast. You can find the episode here.

    After ~9 years of building AI products (the last three of which have been a complete ramp-up using LLMs and agents), here are the skills I use constantly—not the ones that sound good in a blog post but the ones I literally used yesterday.

    • AI prototyping
    • Observability, akin to telemetry
    • AI evals: The new PRD for AI PMs
    • RAG versus fine-tuning versus prompt engineering
    • Working with AI engineers

    1. Prototyping: Why I code every week

    Last month, our design team spent two weeks creating beautiful mocks for an AI agent interface. It looked perfect. Then I spent 30 minutes in Cursor building a functional prototype, and we immediately discovered three fundamental UX problems the mocks hadn’t revealed.

    The skill: Using AI-powered coding tools to build rough prototypes.
    The tool: Cursor. (It’s VS Code, but you can describe what you want in plain English.)
    Why it matters: AI behavior is impossible to understand from static mocks.

    How to start this week:

    1. Download Cursor.
    2. Build something stupidly simple. (I started with a personal website landing page.)
    3. Show it to an engineer and ask what you did wrong.
    4. Repeat.

    You’re not trying to become an engineer. You’re trying to understand constraints and possibilities.
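
    To make "build a functional prototype" concrete, here is a minimal sketch of the kind of throwaway chat-loop prototype an AI coding tool can generate in minutes. Everything here is illustrative: the model call is a stub (no API key needed), so you can exercise the interaction flow before wiring up a real LLM.

    ```python
    # Throwaway chat-loop prototype. The "model" is a stub so the UX can
    # be exercised without any API key or network access.

    def fake_model(prompt: str) -> str:
        """Stand-in for a real LLM call; returns a canned reply."""
        if not prompt.strip():
            return "Could you say a bit more?"
        return f"(stub) You asked about: {prompt[:40]}"

    def chat_once(history: list, user_input: str) -> str:
        """One turn of the chat loop: record input, call model, record reply."""
        history.append(f"user: {user_input}")
        reply = fake_model(user_input)
        history.append(f"assistant: {reply}")
        return reply

    if __name__ == "__main__":
        history = []
        print(chat_once(history, "What does our refund policy say?"))
    ```

    Swapping `fake_model` for a real API call is the only change needed to go from this sketch to something you can put in front of an engineer.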

    2. Observability: Debugging the black box

    Observability is how you actually peek underneath the hood and see how your agent is working.

    The skill: Using traces to understand what your AI actually did.
    The tool: Any APM that supports LLM tracing. (We use our own at Arize, but there are many.)
    Why it matters: “The AI is broken” is not actionable. “The context retrieval returned the wrong document” is.

    Your first observability exercise:

    1. Pick any AI product you use daily.
    2. Try to trigger an edge case or error.
    3. Write down what you think went wrong internally.
    4. This mental model building is 80% of the skill.
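
    The difference between "the AI is broken" and "the context retrieval returned the wrong document" is exactly what a trace gives you. Here is a minimal sketch of the idea, not any particular vendor's API: each pipeline step records its name, input, and output into a shared trace, so a bad answer can be walked back to the step that produced it. The toy retriever and step names are invented for the example.

    ```python
    # Minimal LLM-pipeline tracing sketch: every traced step appends a
    # record of what it saw and what it returned.
    import functools

    TRACE = []

    def traced(step_name):
        """Decorator that logs a step's input and output into TRACE."""
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                out = fn(*args, **kwargs)
                TRACE.append({"step": step_name, "input": args, "output": out})
                return out
            return inner
        return wrap

    @traced("retrieve")
    def retrieve(query):
        # Toy retriever: maps the first query word to a document.
        docs = {"refunds": "refund-policy.md", "billing": "pricing.md"}
        return docs.get(query.split()[0], "faq.md")

    @traced("generate")
    def generate(query, doc):
        return f"Answer to '{query}' based on {doc}"

    answer = generate("refunds how long", retrieve("refunds how long"))
    # Inspecting TRACE shows exactly which document retrieval returned,
    # which is the actionable detail when the final answer looks wrong.
    ```

    Real tracing tools capture the same shape of data (spans with inputs and outputs); the skill is knowing to read them step by step rather than judging only the final answer.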

    3. Evaluations: Your new definition of “done”

    Vibe coding works if you’re shipping prototypes. It doesn’t really work if you’re shipping production code.

    The skill: Turning subjective quality into measurable metrics.
    The tool: Start with spreadsheets, graduate to proper eval frameworks.
    Why it matters: You can’t improve what you can’t measure.

    Build your first eval:

    1. Pick one quality dimension (conciseness, friendliness, accuracy).
    2. Create 20 examples of good and bad (e.g., for conciseness, label them “verbose” or “concise”).
    3. Score your current system. Set a target: 85% of responses should be “just right.”
    4. That number is now your new North Star. Iterate until you hit it.
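
    The four steps above fit in a few lines of code before they ever need a proper eval framework. Here is a spreadsheet-scale sketch under stated assumptions: the labeled examples are invented, and the word-count threshold is a deliberately crude stand-in for a real conciseness judge.

    ```python
    # Tiny eval sketch: labeled examples, a scoring rule, and a pass rate
    # compared against a target. Examples and threshold are illustrative.
    labeled = [
        ("Yes, refunds take 3-5 days.", "concise"),
        ("Well, to answer your question, let me first explain the entire "
         "history of our refund process before getting to the point.", "verbose"),
        ("Your order shipped today.", "concise"),
    ]

    def judge(response: str) -> str:
        """Crude conciseness judge: over 15 words counts as verbose."""
        return "verbose" if len(response.split()) > 15 else "concise"

    def pass_rate(examples) -> float:
        """Fraction of examples where the judge agrees with the label."""
        hits = sum(judge(text) == label for text, label in examples)
        return hits / len(examples)

    TARGET = 0.85
    print(f"eval pass rate: {pass_rate(labeled):.0%} (target {TARGET:.0%})")
    ```

    In practice the judge is usually another LLM or a human rubric rather than a word count, but the loop is the same: score, compare to the target, iterate.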

    4. Technical intuition: Knowing your options

    Suppose you need your AI assistant to match your brand voice. Your options:

    Prompt engineering (1 day): Add brand voice guidelines to the system prompt.

    Few-shot examples (3 days): Include examples of on-brand responses.

    RAG with style guide (1 week): Pull from our actual brand documentation.

    Fine-tuning (1 month): Train a model on our support transcripts.

    Each has different costs, timelines, and trade-offs. My job is knowing which to recommend.
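
    To make the RAG option above concrete, here is a toy sketch: retrieve the most relevant style-guide snippet by word overlap and prepend it to the prompt. The snippets and scoring are made up for the example; production systems use embeddings and a vector store, but the shape of the approach is the same.

    ```python
    # Toy RAG sketch: keyword-overlap retrieval over a "style guide",
    # with the winning snippet prepended to the prompt.
    import re

    STYLE_GUIDE = [
        "Tone: friendly and direct; avoid jargon.",
        "Refunds: always state the 3-5 business day window.",
        "Greetings: open with the customer's first name when known.",
    ]

    def tokens(text):
        """Lowercased alphanumeric words, punctuation stripped."""
        return set(re.findall(r"[a-z0-9]+", text.lower()))

    def retrieve_snippet(query: str) -> str:
        """Pick the snippet sharing the most words with the query."""
        q = tokens(query)
        return max(STYLE_GUIDE, key=lambda s: len(q & tokens(s)))

    def build_prompt(query: str) -> str:
        return f"Style guide: {retrieve_snippet(query)}\n\nUser: {query}"

    print(build_prompt("how do refunds work"))
    ```

    The trade-off the table of options describes is visible even here: this took minutes, stays current when the style guide changes, and costs nothing per query, while fine-tuning would bake the voice into weights at far greater cost.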

    Building intuition without building models:

    1. When you see an AI feature you like, write down three ways they might have built it.
    2. Ask an AI engineer if you’re right.
    3. Wrong guesses teach you more than right ones.

    5. The new PM-engineer partnership

    The biggest shift? How I work with engineers.

    Old way: I write requirements. They build it. We test it. Ship.

    New way: We label training data together. We define success metrics together. We debug failures together. We own outcomes together.

    Last month, I spent two hours with an engineer labeling whether responses were “helpful” or not. We disagreed on a lot of them. This taught me that I need to start collaborating on evals with my AI engineers.
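
    Disagreement like that can be put on the table with a number. Here is a minimal sketch, with invented labels, of measuring raw agreement between two raters labeling the same responses “helpful” or “unhelpful”; a low number is the signal that the eval criteria need to be written down before they can be automated.

    ```python
    # Inter-rater agreement sketch: two people label the same responses,
    # and raw agreement shows how far apart their definitions are.
    # Labels are invented for the example.
    pm_labels  = ["helpful", "helpful", "unhelpful", "helpful", "unhelpful"]
    eng_labels = ["helpful", "unhelpful", "unhelpful", "helpful", "helpful"]

    def agreement(a, b) -> float:
        """Fraction of items both raters labeled the same way."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    print(f"raw agreement: {agreement(pm_labels, eng_labels):.0%}")
    ```

    Chance-corrected measures like Cohen’s kappa are the usual next step, but even raw agreement turns “we disagreed a lot” into something you can track as the rubric improves.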

    Start collaborating differently:

    • Next feature: Ask to join a model evaluation session.
    • Offer to help label test data.
    • Share customer feedback in terms of eval metrics.
    • Celebrate eval improvements like you used to celebrate feature launches.

    Your Five-Week Transition Plan

    Week 1: Tool setup

    • Install Cursor.
    • Get access to your company’s LLM playground.
    • Find where your AI logs/traces live.
    • Build one tiny prototype (took me three hours to build my first).

    Week 2: Observation

    • Trace five AI interactions in products you use.
    • Document what you think happened versus what actually happened.
    • Share findings with an AI engineer for feedback.

    Week 3: Measurement

    • Create your first 20-example eval set.
    • Score an existing feature.
    • Propose one improvement based on the scores.

    Week 4: Collaboration

    • Join an engineering model review.
    • Volunteer to label 50 examples.
    • Frame your next feature request as eval criteria.

    Week 5: Iteration

    • Take your learnings from prototyping and build them into a production proposal.
    • Set the bar with evals.
    • Use your AI intuition to iterate: Which knobs should you turn?

    The Uncomfortable Truth

    Here’s what I wish someone had told me three years ago: You will feel like a beginner again. After years of being the expert in the room, you’ll be the person asking basic questions. That’s exactly where you need to be.

    The PMs who succeed in AI are the ones who are comfortable being uncomfortable. They’re the ones who build bad prototypes, ask “dumb” questions, and treat every confusing model output as a learning opportunity.

    Start this week

    Don’t wait for the perfect course, the ideal role, or for AI to “stabilize.” The skills you need are practical, learnable, and immediately applicable.

    Pick one thing from this post, commit to doing it this week, and then tell someone what you learned. This is how you’ll begin to accelerate your own feedback loop for AI product management.

    The gap between PMs who talk about AI and PMs who build with AI is smaller than you think. It’s measured in hours of hands-on practice, not years of study.

    See you on the other side.


