
PRODUCTHEAD: New tech = new responsibilities


PRODUCTHEAD is a regular newsletter of product management goodness,
curated by Jock Busuttil.

product of war

every PRODUCTHEAD edition is online for you to refer back to


tl;dr

The negative effects of social media algorithms were an early warning for the consequences of generative AI

As product managers, we have a duty of care to our users

The unpredictability of genAI poses varying risks of harm to users depending on the context


hello

“When you invent a new technology, you uncover a new class of responsibilities.”

This was one of the concluding points from Tristan Harris and Aza Raskin’s 2023 talk, ‘The A.I. Dilemma’. Far more than grooming backlogs, constructing roadmaps or managing stakeholders, product management for me has always been about responsibility for the product and its knock-on effects on the people who use it, create it and work alongside it.

We may not individually possess all the skill sets we need to research the potential harms and unintended consequences of our products, nor to redesign the product to remove or mitigate them. This is why we collaborate with multidisciplinary teams of specialists. Nevertheless, when we decide which products we release to market, we assume responsibility for the human consequences of those products.

As with any technological innovation that goes mainstream, understanding and adoption rarely go hand in hand, and the distribution of each is uneven. Some companies are racing ahead with generative AI despite little understanding of its strengths, weaknesses and consequences, while others are taking a more considered approach precisely because they do understand those things.

Harris and Raskin’s talk alludes to the 1983 television film The Day After. Depicting the aftermath of nuclear war through the eyes of rural America, the film is cited as the cause of President Ronald Reagan’s abrupt reversal of public stance from nuclear proliferation to disarmament. Harris and Raskin said that the intent of their talk was to provoke similar concern about the possible adverse consequences of unchecked advancement in generative AI while there was still time to change course.

Two years later, despite some initial steps towards a more considered approach, both the UK and US declined to sign a declaration at the 2025 AI Action Summit in Paris to ensure that AI-related technologies were “safe, secure and trustworthy”. While the stance taken by the US is entirely predictable given the current political climate, the UK’s is more mystifying, or possibly born of a misguided overestimation of our capability and global influence.

If you’re a product manager, odds are you’ll be working with some kind of generative AI. Many product managers are relatively new to the profession and have come to realise that the role is rather different from the utopian ideal peddled by many influencers. Coupled with a ‘founder mode’-fuelled breakdown in trust in product management as a profession, it’s understandable that this new cohort is asking itself: if not this, what should I be doing?

The product management role is nuanced, and differs depending on the context (stage of company, maturity of product being just two of the factors). However, what is fundamental is that you are responsible for your products and their consequences. You owe a duty of care to your users not to expose them to harm through use of your products.

Large language models (LLMs) are inherently unpredictable. Even if you know the foundational model, weightings, safeguards and everything else that went into creating the LLM (which is unlikely), you never truly know how the LLM will combine all that to respond to prompts. Some particularly harmful examples have been well publicised. Even research scientists are finding that LLMs can be unexpectedly proficient at tasks they were never specifically trained for, such as predicting chemical properties.

When you incorporate generative AI capabilities from any technology provider, you are effectively working with a black box that has the propensity to generate harmful responses, and with a technology provider that has a vested financial or political interest in downplaying those potential harms. Whatever their PR may claim to the contrary, with such strong incentives in play their interests are not aligned with yours or your users’.

As you hold the responsibility for your products, you should be asking how genAI in your product could harm your users, whether that’s an acceptable risk, and how you and your team should mitigate or remove that risk. You may need to couch that risk in terms of financial or reputational damage to the business to be heard by certain audiences, but it’s still down to you to make the case. Whatever your expectations may have been, product management was never going to be easy.

Speak to you soon,

Jock



what to think about this week

The A.I. Dilemma

Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world.

[VIDEO] Change course while there’s still time

[Tristan Harris & Aza Raskin / YouTube]

‘The A.I. Dilemma’, a 2023 talk given by Tristan Harris and Aza Raskin

Five lessons for careful AI adoption

At Careful Industries, we’re turning some of the deep research we do into the social and democratic impacts of AI into training courses and group sessions so that more people can understand what AI means for both the organisations they work at and for society as a whole.

Whatever the enthusiastic sales email might tell you, AI won’t transform everything equally or in the same way for everyone.

The field of AI is almost intentionally confusing

[Rachel Coldicutt / Careful Industries]

AI has social consequences, but who pays the price?

As public concern about the ethical and social implications of artificial intelligence keeps growing, it might seem like it’s time to slow down. But inside tech companies themselves, the sentiment is quite the opposite. As Big Tech’s AI race heats up, it would be an “absolutely fatal error in this moment to worry about things that can be fixed later,” a Microsoft executive wrote in an internal email about generative AI, as The New York Times reported.

Tech companies’ problem with ‘ethical debt’

[Casey Fiesler / The Conversation]



recent posts

The unifying principles of product management

There’s certainly a lot more discussion about product management than there used to be. Some 10–15 years ago, the product manager’s complaint was that nobody knew what a product manager was. These days people in tech generally know that the product manager role exists, but we still struggle to describe the breadth of the role in practice.

Become one with everything

[I Manage Products]

Good product management

Perhaps we’ve been caught a little off-guard by the implications of these new technologies. These have presented product managers with yet another new challenge to add to the growing list: how to create products that are not only successful but also ethical.

Technology isn’t to blame. It’s how people use it that’s the problem.

[I Manage Products]

What freelance product management is really like with Jock Busuttil

Off the back of his recent article for Mind The Product, Liam Smith interviewed me about my experiences in freelance product management.

We cover topics including:

» Should you hire freelancers in your product team?

» How to be successful as an external hire

+ more :-)

If this doesn’t put you off, nothing will

[I Manage Products]

can we help you?

Product People is a product management services company. We can help you through consultancy, training and coaching. Just contact us if you need our help!

Helping people build better products, more successfully, since 2012.

PRODUCTHEAD is a newsletter for product people of all varieties, and is lovingly crafted from a new, permanent tinnitus.
