
Shameless Foomerism and S-Curves

We need to talk about how we talk about AI progress


Why do blatant lies about AI spread further and faster than truths?

  • But it isn’t limited to hustle bros – even Wharton professors are getting in on it, claiming that ChatGPT “clearly passes” medical, legal, and MBA exams and only clarifying the overstatement “in thread”.
  • The latest in irresponsible #foomerism this week is lies about ChatGPT user numbers, which two publications estimated at 100m users (UBS/Reuters) and 5m users (Forbes) on the same day.
    • Forbes’ reporting involved two reporters spending multiple days embedding themselves in the OpenAI offices and interviewing more than 60 people, including two OpenAI founders.
    • UBS’ estimate involved some guy opening Similarweb for 2 minutes (an inexact estimate at best) and wildly extrapolating daily traffic numbers.
    • Guess which number went viral, presented as fact?

It’s not new that people only read headlines, or that a lie will run around the world before the truth can get its pants on. But there is something about AI that makes it inherently susceptible to suspension of disbelief. Perhaps this is built into the architecture of deep learning – involving many hidden layers and bitter lessons that defy human intelligent design.

The Origins of #Foomerism

I’ve taken to calling this behavior #foomerism, an ironic malapropism for a third extreme, playing on both doomerism (AGI fatalists) and FOMO (hustle bros). The hashtag #foom itself has been gaining steam among techno-optimists, reflecting the onomatopoeia of the “slowly, then all at once” nature of exponential growth:

The nature of AI discourse tends to be as binary as the activation functions on which it is based – either you are a #foomer techno-optimist, or you are an AGI-fearing Luddite, with no position in between. The #foomer-in-chief of the Noughties was probably Ray Kurzweil, who popularized the awkward name of The Singularity, and the Tens brought us Tim Urban, who was much better at illustrating it:

The Long AI Summer of the roaring 2020s has finally democratized #foomerism, and now everybody is free to repost big-circle-small-circle JPEGs, spreading ideas from big brains to smaller ones.

AI Moloch and #Foomerism

We saw the beginnings of AI Moloch in January, when Sundar Pichai hit the Code Red. Now they have laid off 12,000 people (including 36 Massage Therapist Level IIs), and Larry and Sergey are back committing code.

AI Moloch is now spreading to social media as shameless #foomerism. In response to my observation about the 20x difference in ChatGPT user estimates, Forbes’ Alex Konrad admitted their estimate was “likely conservative”, but how likely is he to maintain that conservatism given his incentives and the clear results?

Don’t hate the players, hate the game, right?

And so the hustle bro fistpumps at getting a “hit”.

The Ivy League professor willfully overstates paper results.

The career reporter kicks himself for being outdone by a faceless bank analyst.

Fundamentally, we suspend our disbelief on AI, and knowingly retweet lies and half-truths about AI, because deep down we want to be lied to. We want Santa to deliver gifts, we want the tooth fairy swapping our teeth for coin, we want deontological absolutes with no utilitarian consequences. We want magic in our lives, we want to see superhuman feats, we want to have been there for the major advances in human civilization.

I assume you’re still reading this boring essay, bereft of wild overstatements and unfounded speculations on the future, because you are interested in staying somewhat close to practical reality.

How can I “keep it real”, you ask?

Exponential #foomers don’t talk about S-curves

We have in fact lived through another #foom in recent years: another time when the “you just don’t understand exponentials” people talked down to the unwashed, uneducated masses too stupid to look up from their phones.

Remember Covid?

Every day we retweeted and obsessed over big numbers going up. These same people stopped commenting when the numbers stopped going up as much, and went completely silent when they started going down, piping back up on each aftershock, but in smaller numbers and less stridently each time.

I write this not to question aggressive pre-emptive Covid action, which of course had the better risk-reward, but to point out the more nuanced, harder-to-meme reality: exponential curves do exist, but they tap out, run into invisible asymptotes and real-world limits, and are extremely hard to forecast. By far my favorite visualization of this is from systems modeler Dr. Constance Crozier (play the video and imagine yourself getting excited about The Singularity on the way up, then falling silent as the asymptote arrives):
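To make the forecasting problem concrete, here is a minimal sketch (mine, not from the original post; the growth rate, starting value, and carrying capacity are all illustrative assumptions) comparing a pure exponential with a logistic S-curve. Early on the two are nearly indistinguishable, which is exactly why mid-curve extrapolation goes wrong:

    import math

    # Illustrative parameters (assumptions, not real-world data).
    r = 0.5        # growth rate
    x0 = 1.0       # starting value
    K = 1000.0     # carrying capacity of the S-curve (the "invisible asymptote")

    def exponential(t):
        # Unbounded exponential growth from the same start and rate.
        return x0 * math.exp(r * t)

    def logistic(t):
        # Standard logistic (S-curve) solution with the same start and rate,
        # but saturating at K.
        return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

    for t in range(0, 31, 5):
        print(f"t={t:>2}  exponential={exponential(t):>12.1f}  logistic={logistic(t):>8.1f}")

    # For small t the curves track each other almost exactly; later the
    # exponential explodes while the logistic flattens out toward K.

The point of the toy numbers is the shape of the divergence, not the values: anyone standing mid-curve sees the same data whether they are on an exponential or an S-curve.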

S-curves are everywhere in tech. Both Fred Wilson and Marc Andreessen are particularly fond of the Carlota Perez Framework, which I have written about in my prior blog posts and book. Every technology has a #foom period and then a more “boring” (but profitable) maturity phase, the layers of civilized technology growth alternating neatly between invention and infrastructure.

Multiple S-curves can daisy-chain into an unending but undulating tapestry of human progress (thanks to CRV’s Brittany Walker for pointing me to this):

Next time you run into a #foomer, ask them about S-curves.

If they don’t know, show this to them and ask about limits and practical constraints.

And if they don’t care: run.
