
PRODUCTHEAD: AI and I have trust issues


PRODUCTHEAD is a regular newsletter of product management goodness,
curated by Jock Busuttil.

how i made my products

every PRODUCTHEAD edition is online for you to refer back to


tl;dr

What are the factors that lead us to trust someone (or an AI tool)?

Vibe coding is great for prototyping, but for production code, dev teams still rule


hello

Given I tend to name most inanimate objects in my life, it shouldn’t come as a surprise to anyone that I ascribe personas to AI tools. And I trust none of them.

Some I think of as a well-meaning rookie, who enthusiastically tries to do the right thing, but whose lack of experience leads them to make occasional catastrophic errors of judgement.

Others I think of as a shady flatmate, who is amiable enough to your face, but who never quite gets round to doing stuff like washing their dishes or paying their share of the bills, and who steals your stuff while you’re out.

The problem underlying both these characterisations is that, deep down, I don’t fully trust AI tools to do what I need them to do. Although I was writing about delegating to humans at the time, my three factors for trust apply equally well here. Trust is earned when you:

1. demonstrate you can perform the task competently;

2. actually do what you said you’d do; and

3. communicate transparently about what you will do, are doing and have done.

In short, trust depends on competence, contract and communication.

Early on (in generative AI terms), Descript and Riverside introduced cutting-edge but primitive AI transcription to their video editing products.

Because both use the transcribed text as the main way to edit the videos, I would find myself spending hours reviewing its accuracy. In Descript’s case, chunks of text would be out of sync with the video, meaning my cuts would be erratically placed. Other times, it would simply fail to transcribe whole minutes of talking, making editing borderline impossible. In Riverside’s case, an early bug made it spontaneously translate my transcriptions into Welsh. Amusing, but unhelpful.

Like my well-meaning rookie, the AI transcription tools fell down occasionally on competence. For me it was a case of when, not if, the AI was going to f**k things up, so I had to invest time checking their work to perform the dependent task (editing the video) with confidence. Things may have improved a great deal since then for both companies and their products, but it will still take a lot more time and glitch-free transcription to regain my trust.

Similarly, a few different stories have been doing the rounds this week of people (including the odd product manager) being bitten in the ass by vibe coding tools not only screwing things up catastrophically, but in some cases covering their tracks and lying about what they’d done. When this kind of thing happens, it’s a violation of the contract and communication aspects of trust. Like I said, shady flatmate.

This isn’t a Luddite invocation to smash the machines automating away our jobs. Rather it’s a reminder that trust is easy to lose and hard to regain.

When we effectively delegate tasks to an AI tool, whether to do something for us or for our users/customers, we implicitly trust the AI tool and the company providing it. We assume until proven otherwise that the goals and values of the AI tool and its provider are aligned with our own.

AI tools may present themselves as trustworthy and authoritative, but we’re not yet at the point where we can fully trust them to act unsupervised on our behalf, regardless of whether we believe the providers are aligned with our goals and values. If you wouldn’t put your well-meaning rookie onto a complex, critical task, or entrust your shady flatmate with something you value, then perhaps you shouldn’t expect AI tools to work flawlessly all the time either. Competence, contract and communication factor into every user interaction.

For you this week

Teresa Torres and Petra Wille talk about their own experiences prototyping and encourage product trios to experiment and prototype ideas with vibe coding before the dev team builds them for real.

Jeff Gothelf writes about how his university professor made their class learn to edit physical audio tape with a razor blade as a contemporary lesson for people using AI tools.

Benj Edwards reviews the recent vibe coding fails and explores why it’s difficult to trust AI coding tools when you can easily receive conflicting results depending on how you phrase the question.

Erika Hall is currently working on the second edition of her book, Conversational Design. The first edition, which predates the recent proliferation of AI tools, explored how conversation is the original user interface, and what happens when you try to emulate it in products. It’s available as a free PDF download from Erika’s site to read while you’re waiting for the updated edition to arrive.

Speak to you soon,

Jock



what to think about this week

AI Prototyping

AI prototyping tools are mushrooming—Replit, V0, Bolt, Windsurf, and more—but do they actually help with discovery work? In this episode of All Things Product, Petra Wille and Teresa Torres discuss how AI-powered tools can supercharge assumption testing, where they still fall short, and what that means for product teams today.

[VIDEO] “If you keep building the wrong thing, you’re still wasting your life”

[Teresa Torres & Petra Wille / All Things Product with Teresa & Petra]

Understanding the work before automating it: A survival skill for the AI age

I graduated from university in May 1995 with a degree in mass communication. This is the same year that Netscape came out and radically transformed all forms of mass communication. While not everything I learned in university became obsolete overnight, a lot of it did. There was one class I particularly enjoyed during my senior year – audio editing.

How the sausage is made

[Jeff Gothelf]



Two major AI coding tools wiped out user data after making cascading mistakes

New types of AI coding assistants promise to let anyone build software by typing commands in plain English. But when these tools generate incorrect internal representations of what’s happening on your computer, the results can be catastrophic.

“I have failed you completely and catastrophically”

[Benj Edwards / Ars Technica]

Conversational Design

How do we make digital systems feel less robotic and more real? Whether you work with interface or visual design, front-end technology, content design, or just want to be better at your business, learn why conversation is the best model for creating device-independent, human-centered systems.

Conversation is the original interface

[Erika Hall / Mule Books]

recent posts

Empowered teams can’t simply do whatever they like

Have you encountered these misunderstandings about empowered teams?

Nope #1 – “We’re empowered and autonomous — we can do whatever we like”

Nope #2 – “As the leader for an autonomous team, I’m not allowed to set direction”

Nope #3 – “As an empowered team, it’s not our job to explain our Very Important Work to others”

Before you delegate, you need trust

[I Manage Products]

Trust and transparency

Up there with coaching questions such as “how do I mind-control my stakeholders?” is “how do I get my team / stakeholders to trust me?”

The three factors needed for trust

[I Manage Products]

Are developers vibe coding themselves out of a job?

And is the increasing reliance by junior developers on AI coding assistants storing up a generational skills shortage for the future – ‘professional debt’, if you will?

So simple, anyone could do it. Wait – don’t fire me

[I Manage Products]

can we help you?

Product People is a product management services company. We can help you through consultancy, training and coaching. Just contact us if you need our help!

Helping people build better products, more successfully, since 2012.

PRODUCTHEAD is a newsletter for product people of all varieties, and is lovingly crafted from an overgrown wasteland no more.
