PRODUCTHEAD: On OpenAI’s ideological conflict
PRODUCTHEAD is a regular newsletter of product management goodness,
curated by Jock Busuttil.
man of product
every PRODUCTHEAD edition is online for you to refer back to
By reinstating Sam Altman, OpenAI has chosen financial success over its altruistic principles
When Steve Jobs left Apple in 1985, the board believed he wasn’t ready to be CEO
Google was quick to fire ethicist Timnit Gebru when she challenged its lucrative search advertising business
It was going to be difficult to write an edition of PRODUCTHEAD without making some mention of the OpenAI drama from the last week or so.
If there’s a moral to this story, and similar ones from Google and Apple before it, it’s that no matter how strongly a company builds its ethical safeguards, when there’s a conflict between ethics and making billions of dollars, the money-makers tend to call the shots.
Speak to you soon,
A boardroom power struggle at OpenAI
Events developed so quickly last week that you’d be forgiven for not keeping up. It started with the board’s abrupt firing of CEO and co-founder Sam Altman on Friday 17th November 2023. Five days and two interim CEOs later, Altman had been reinstated. The Verge has a helpful timeline summarising the coverage as it was reported.
No doubt there will be plenty more speculation and analysis of why this chain of events was set in motion and how the matter was resolved. As is often the case, it all boils down to people, politics and ideologies, and the fireworks that result when they come into conflict. Boardroom dramas like this unfold all the time in different companies and organisations, though few happen quite so publicly, or at places quite as high profile as OpenAI.
OpenAI is an unusual case because it had attempted to enshrine its guiding principle (to create artificial general intelligence for the benefit of humanity) in its corporate structure. The for-profit bit of OpenAI that’s making billions of dollars through ChatGPT and massive investment from Microsoft is itself owned by the non-profit bit of OpenAI. In theory this meant that the board of directors at the non-profit could drag the for-profit back on track if it strayed too far from its altruistic imperative.
The events of the last week or so have highlighted how difficult doing that was in practice when the for-profit is making more money than Midas and most of the people concerned would much rather keep things that way.
It’s like when Steve Jobs got fired, right?
Many journalists have already drawn the superficial parallel between Sam Altman and Steve Jobs. Jobs was similarly ousted from Apple in 1985 by its board of directors, again when the company’s fortunes were in the ascendant. However, that’s where the similarities end.
Whether triggered by then-CEO John Sculley trying to sideline Jobs, or because of their disagreement on product pricing, Jobs went in protest to the board of directors. Soon after, depending on the account you read, Jobs was fired or resigned. Of course, we know that Steve Jobs returned to Apple in 1997 as (initially interim, later permanent) CEO until he handed over the reins to Tim Cook in 2011.
At Apple, a scenario familiar to many technology startups and scale-ups played out: a brilliant but commercially inexperienced co-founder leads the company from 0 to 1, then the board of directors (rightly or wrongly) brings in a more experienced CEO to grow the company from 1 to N, irking the co-founder who wanted to be CEO instead.
OpenAI’s situation was a conflict of ideology, not opinion
The situation at OpenAI is different. Its governance model — which I predict will almost certainly be changed in the light of recent events — established a deliberate tension between effective altruism and capitalism:
“We designed OpenAI’s structure—a partnership between our original Nonprofit and a new capped profit arm—as a chassis for OpenAI’s mission: to build artificial general intelligence (AGI) that is safe and benefits all of humanity.”
“Our structure”, OpenAI (updated 28 June 2023, retrieved 25 November 2023)
This split was reflected in the personal beliefs of members of OpenAI’s board of directors. Altruists Ilya Sutskever, Tasha McCauley and Helen Toner staunchly defended the mission of creating artificial general intelligence (AGI) for the betterment of humanity. Co-founders Sam Altman and Greg Brockman saw the continued wave of commercial success as the way forward.
The details are not yet clear, but this commercial direction was apparently incompatible with the altruistic mission. Remaining board member Adam D’Angelo’s stance appears to have been sufficiently neutral (or bi-partisan) to spare him from the cull precipitated by Altman’s reinstatement.
For sure, the reality is going to be more nuanced than I’ve represented: I would expect different accounts and perspectives to emerge once the outgoing board members start talking to journalists.
The canary in the mine
This wasn’t even OpenAI’s first ideological schism. A team led by Dario Amodei, its VP of research, left OpenAI in 2020 to found rival firm Anthropic in 2021. Unlike OpenAI, Anthropic would place greater emphasis on ethical safeguards for its language model:
“So there was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things. … One was the idea that if you pour more compute into these models, they’ll get better and better and that there’s almost no end to this. … And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety. … And so we went off and started our own company with that idea in mind.”
Anthropic CEO Dario Amodei in conversation with Fortune’s Jeremy Kahn (paywalled, 26 September 2023, retrieved 25 November 2023)
Similar to OpenAI’s original intent, Anthropic was set up as a “public benefit corporation” (PBC), but also as a for-profit from the outset. However, the choice to incorporate in Delaware, where companies can self-certify their PBC status with minimal scrutiny, certainly leaves ‘public benefit’ open to interpretation and places no restriction on enriching shareholders. As Sebastian Moss writes for AI Business:
“Had Anthropic been incorporated as a PBC in California, the company would not have been able to distribute profits, gains, or dividends to any person.”
“Eleven OpenAI Employees Break Off to Establish Anthropic, Raise $124 Million”, Sebastian Moss, AI Business (2 June 2021, retrieved 25 November 2023)
It would seem as though Amodei and his co-founders have set themselves up for a similar conflict to OpenAI. However, their choice to make Anthropic’s public benefit status effectively optional would suggest they’ve already chosen their side.
Who’s the real victim here?
Returning to the cloud of intrigue at OpenAI, Altman has been portrayed by the media as the victim of a boardroom coup. Notably absent has been a detailed description of why the board chose to fire him in the first place. If he was prioritising profits in a way that conflicted with the governing principle of effective altruism, then maybe it’s explicable why the board acted as it did.
But that narrative doesn’t seem to be playing much in the media. Rather, there’s the implicit endorsement that AI poster child Altman was in the right (whatever he was trying to do), and that the board was wrong to attempt to rein in the wildly successful commercial arm of OpenAI. The letter threatening mass resignations if Altman wasn’t reinstated, signed by 95 percent of the 700+ staff, certainly put the weight of opinion behind him and arguably was what forced the board’s hand.
What if we’ve got it backwards and Sutskever, McCauley and Toner were the ones in the right?
Touching a nerve at Google
We only need to look back almost exactly three years to November 2020 to see a similar conflict playing out, again surrounding the development of large language model AIs. As co-lead of Google’s Ethical AI team, Timnit Gebru co-authored an academic paper that shouldn’t have been controversial.
The paper entitled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” highlighted how a large language model AI would perpetuate the biases inherent in the data the model was trained on, and that the servers training that model would consume vast amounts of energy. Both these points were already well-established. (I’m certainly no AI expert and even I spoke about machine learning biases in 2018.)
And yet, the paper touched a nerve at Google:
“[Jeff] Dean [head of research] became the face of Google’s displeasure with the “Stochastic Parrots” paper. He sent an email to the members of Google Research, also released publicly, saying the work “didn’t meet our bar for publication,” in part because one of its eight sections didn’t cite newer work showing that large language models could be made less energy-hungry.
“Others, including Gebru, offered a different explanation from Dean’s: Google had used an opaque internal process to suppress work critical of a technology that had commercial potential. ‘The closer the research started getting to search and ads, the more resistance there was,’ one Google employee with experience of the company’s research review process says. ‘Those are the oldest and most entrenched organizations with the most power.’ ”
“What Really Happened When Google Ousted Timnit Gebru”, Tom Simonite, Wired (8 June 2021, retrieved 25 November 2023)
Senior management gave Gebru an ultimatum: retract the paper or remove her and her co-authors’ names from it. Gebru refused and says she was fired; Google maintains she resigned. Not entirely surprising behaviour for a company that had watered down its original guiding principle “Don’t be evil” to “Do the right thing” in 2015.
Ideology either wins or loses, no middle ground
Steve Jobs’s departure from Apple boiled down to a difference of opinion, not ideology: Jobs wanted to be CEO, the board of directors at the time did not believe him qualified. In contrast, Sam Altman’s dramatic firing and reinstatement brought to a head the existing conflict in ideologies within OpenAI’s board, with money defeating altruism. Likewise, Timnit Gebru’s ousting from Google could be seen as the result of Gebru’s ethical ideology conflicting with the actions of an ethically ambivalent and highly lucrative advertising business.
It may be trite to observe that ideological belief is far more entrenched than ‘regular’ beliefs and opinions. People truly committed to an ideology are not going to be talked round to a compromise: it’s their way or no way. A disagreement with a person’s ideology is taken as not just a challenge to their beliefs, but a personal attack. When ideology is involved on one or both sides of a conflict, there’s little room for negotiation. One side will win outright or lose outright.
The success of most companies, but especially tech companies, is measured by financial worth, or at least the market’s perception of it. When that imperative to generate profits and investor value comes into conflict with the ideological principles of the company, it boils down to a false dichotomy* of ‘succeeding (financially)’ or ‘failing (remaining true to its ethical guiding principles)’.
[ * Because there are options other than the two alternatives presented. ]
When the choice is made to embrace vast profits over principles, ideology loses outright, with no middle ground. OpenAI’s board of directors has just made that choice, and we’re going to have to live with the consequences of their decision.
Final thoughts
Is it ever possible for billion-dollar companies to stay true to an ethical ideology? If an absolutist ethical ideology is thrown out as soon as it conflicts with profit-making, does this mean we have to demote such ideologies to more loosely-held beliefs, reopening the door to compromise? That way there can at least be some middle ground, even if that results in — at best — morally grey companies.
I hope not.
But until the preferred yardstick for company worth switches from financial success and shareholder enrichment to ethical impact and humanitarian betterment, I don’t see things changing soon.
what to think about this week
After five days of chaos triggered by OpenAI’s firing of CEO Sam Altman, the executive is set to return to the company, while the board of directors that fired him is to be almost entirely remade. OpenAI said last night that it “reached an agreement in principle for Sam Altman to return to OpenAI as CEO.”
[Jon Brodkin / Ars Technica]
The rise, fall, and return of Steve Jobs is a big part of the Apple founder’s legend.
Ousted from Apple after a failed boardroom coup, Jobs formed his own startup. That startup was eventually purchased by a desperate Apple, which was in dire need of product leadership at the time. Not long after, Jobs would become interim CEO, then permanent CEO, and Apple would go from tech industry punchline to the most valuable company in the world.
[Matt Weinberger / Business Insider]
‘There was all sorts of toxic behaviour’: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases
The Ethiopian-born computer scientist lost her job after pointing out the inequalities built into AI. But after decades working with technology companies, she knows all too much about discrimination
[John Harris / The Guardian]
Job adverts present a chicken-and-egg problem: they all need you to have product management experience to secure a job, but you don’t yet have a product management job to gain that experience.
Don’t let this discourage you!
[I Manage Products]
Recently I was explaining to a client why I focus my efforts on finding “force multipliers”. These are what I call activities that allow us to extract multiple benefits from a single piece of work. You could think of it a little like a workplace fusion reaction, where the output ends up far greater than the input effort.
[I Manage Products]
When the vision and strategy are focused and clear, they allow product managers to prioritise and filter the possible options for their products more easily.
[I Manage Products]
can we help you?
Product People is a product management services company. We can help you through consultancy, training and coaching. Just contact us if you need our help!
Helping people build better products, more successfully, since 2012.
PRODUCTHEAD is a newsletter for product people of all varieties, and is lovingly crafted from burning billions of dollar bills.
Read more from Jock
The Practitioner's Guide To Product Management
by Jock Busuttil
“This is a great book for Product Managers or those considering a career in Product Management.”— Lyndsay Denton