If you’ll get into my time machine and follow me back to 1979, you’ll find me in a computer store selling Apple II after Apple II. All to the same basic guy in a suit who was managing others at a Fortune 500 company.
Our store was literally at the center of much of the Really Big 500, so this parade of middle-management suits was pretty constant. Each one wanted an Apple II and VisiCalc. Why? Because they had heard that they could learn how VisiCalc worked in a few hours, plug their numbers in, and get information back out near instantly.
The alternative at the time was a game we called Analyst Agony. Said middle manager went to his company's IT department, where a Programmer Analyst would listen to his problem, spend a week or two thinking about how to attack it, come back with lots of questions and suggestions (none of which fulfilled the original request), and eventually book a coding effort from someone the middle manager never met—one that produced some iffy or useless results several months after the request.
The first manager who figured out that a bit of VisiCalc effort gave him useful answers the same day told other managers, who told other managers, who shared that with still other managers at trade shows, who shared that with… You get the idea. But it's from there that the stream of middle managers got the news and came wandering into the store seeking Nirvana. Viral has been happening, and been important, for much longer than Millennials believe.
What was amusing about all this is that almost none of those managers could accept an invoice that said “computer” and get it paid by the company, because anything that mentioned the C word had to go through and be approved by IT. IT, of course, realized that if they did approve such requests, they’d be out of jobs Real Soon Now. So: “no on that computer purchase, dude.”
Which is why I spent much of my time writing invoices for a 6502 chip, some RAM, and a bunch of other odds and ends with as much obfuscation as possible to hide the fact that what was actually being bought was an Apple II computer.
This long preamble brings me to my topic today: VisiCalc Mentality, and its updated version, AI Mentality.
All those using VisiCalc slowly developed the notion that “if the spreadsheet says that’s the number, then that’s a real number. Trust it.”
Well, maybe. If you made a formula error, though, the number you decided to depend on was wrong.
A lot of those spreadsheets being built and middle-managed were about forecasts, too. Plug in past results, make some assumptions about the future, then run the calculation. Bingo, “boss, my division will have 15% growth this year, give me a raise.”
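The whole forecast hinges on that one plugged-in assumption. Here’s a minimal sketch of the calculation (the revenue figure and growth rates are made up for illustration) showing how a single assumed rate, compounded forward, manufactures every “real number” in the projection:

```python
# Toy revenue forecast: the kind of calculation a VisiCalc user
# would drop into a spreadsheet. One assumption (the growth rate)
# drives every number that comes out the other end.

def forecast(revenue, growth_rate, years):
    """Project revenue forward by compounding a single assumed rate."""
    projections = []
    for _ in range(years):
        revenue = revenue * (1 + growth_rate)
        projections.append(round(revenue, 2))
    return projections

# Same past results, two different beliefs about the future:
print(forecast(1_000_000, 0.15, 5))  # the "give me a raise" scenario
print(forecast(1_000_000, 0.02, 5))  # the less exciting scenario
```

The formula itself is trivially correct; the spreadsheet will calculate either scenario flawlessly. Nothing in it tells you whether 15% or 2% was the honest assumption.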
Things got worse when Silicon Valley started to get more organized in approaching venture capitalists. Every startup had a spreadsheet. Sometimes pages upon pages of spreadsheet (“more” out-impresses “less”). Such spreadsheets always showed 100%+ growth, 100%+ profit improvements, 100%+ more customers. That had to be right, because, well, VisiCalc (and its successors, such as today’s Doesn’t Excel) doesn’t make mistakes! It’s just a formula, and the spreadsheet calculates it reliably.
The term VisiCalc Mentality was invented by some of us to reference these delusional numbers in business plans. Behind virtually every formula was at least one assumption, and most of the time the assumptions were questionable, or worse, impossible. The thing that astonished me was that there were VCs who helped create these impossible predictions. One once told me “I need to get some other VCs in on this deal, and they need to see a certain thing in the numbers," so we made sure to “highlight” that thing in our business plan. “Highlight” as in “made it up and then called attention to it.”
Assumptions are basically beliefs. Belief is a strange beast. Once you’ve crossed over into belief, no one’s going to convince you that you’re wrong. At least until the market does and the numbers in your spreadsheet come tumbling down to reality.
I saw the height of this in the dot-com build-up that came crashing down around the turn of the century.
In one meeting I attended, a brand-new startup with US$14m in raised capital demanded that my half-billion-a-year company just let them use all of our content for free, “because we’re going to be huge.” If we didn’t give in to that unreasonable demand, we were warned, “we’ll eventually get content that will put you out of business.” Guess who’s out of business today?
I spent the meeting asking pointed questions. For example, one of their PowerPoint slides touted their industry-leading technology. When I asked what that technology actually was, the answer was that they were just now contracting with someone else to write it. Plus it was going to cost them US$8m, or nearly two-thirds of their war chest. Oh dear. Yes, Virginia, there is also a PowerPoint Mentality. If it’s on a slide, it must be true. Slides never lie. I think Microsoft wrote a law about that where they’d rescind your PowerPoint license if you ever used it for falsehoods. Oh, wait, no, we’re talking about Microsoft. I think what they actually said was that “PowerPoint makes your points look better than they really are.”
Okay, I’ve rambled long enough. It’s time to bring this home, so hop back into the time machine and set the date for today.
What you see today is that a lot of tech companies—at least the ones that survived incorrect spreadsheets and misleading presentations—are grabbing the AI wagon’s rails as it races past. Billions have been spent on this without any real understanding of its usefulness or eventual payoff. The spreadsheets say these companies are worth billions. Many billions. The presentations—now organized into a tidy “deck” of no more than 20 nothing-usefully-said slides—say that AI is the greatest tech ever. AI can do anything (and then some).
Don’t get me wrong, I use AI all the time, but partly because I know how it developed, what it’s doing in the background, and how to work with it. I use it more as a sounding board—a minnie Minnie!—than a solution. AI, like VisiCalc and Powerpoint and every other big new software tool that came along and promised everyone the world, has its uses.
What I see happening, though, is all of today’s middle managers opening ChatGPT or an equivalent, typing a short query, then accepting the results. No, strike that. They believe that the results must be infallible, because they’re based upon the data of all mankind. The I in AI is “intelligence,” after all.
There’s that belief word again. If you want a useful takeaway from today’s monologue, it’s this: if someone tells you that they believe something their computer told them, run from the room as fast as you can. Knowledge is different from belief. Belief is behind virtually all of the ruinous periods of history, almost all failed companies, and every argument that you’ll never win. Facts and knowledge don’t trump belief for most people. They have to be bludgeoned with their failure before they even start to wonder whether their beliefs might have been misplaced. Even then, stubbornness can set in, and you have that Mentality thing out and about causing havoc again.
The thing that makes me wonder about the ultimate usefulness of AI is this: what it ultimately outputs comes from something that was already done. Somewhere. By someone. Somehow. Personally, I feel that using AI to provide final anything (report, code, calculation, summary, etc.) means that you’re limiting your scope to what’s been done in the past, and you still need to analyze whether that is a correct way to proceed or not. In the tech business I’ve been so successful at, you don’t succeed by doing what’s already done, you invent what doesn’t yet exist.
Most of our scientific, knowledge, and idea breakthroughs come from either errors in the system causing someone to question something, or from two very disparate things coming together in the right mind at the right time. So I don’t mind AI confabulations (see footnote 1), but I also know that I need to spend as much time examining how the engine came up with them as I do when it comes up with a more standard and defensible answer. More often than not when I chat with AI these days, it really is just a chat.
Many of my Silicon Valley friends who are still doing Big Things—admittedly a dwindling number now—use AI as a creative assistant with a wide range of knowledge, one that has to be quizzed and monitored constantly. They don’t fall for AI Mentality (i.e., they never assume what the AI says is absolutely correct). Indeed, if they did, they’d likely be out of a job, because anyone can ask AI questions and get those same answers.
AI is going to go through the same hype cycle as every other software miracle. We’re currently at the “gotta have it, it’s always the best choice” phase. The smarter group already considers it just another tool on the workbench.
Not ironically, a LinkedIn post came across my screen not long after I began writing the above. Here’s the quote:
“This morning, I woke up to a fully written newsletter for my pickleball paddle company, ready to send, thanks to AWeber's Newsletter Assistant. [i.e., AI] 👉 My effort? Zero. 👉 No research, no writing, no editing.”
Here we have someone who has not just drunk the Kool-Aid, but thinks they’re enjoying it. The likelihood that the AI is going to say something new about pickleball that hasn’t been said before is zilch. The likelihood that this newsletter is going to seem repetitive and useless over time is the Full Monty.
Said LinkedIn post also indicated that the newsletter was written perfectly in “their style.” Styles should and do change. Today, for instance, I write with more knowledge and less making things up than I did back in the 1980s. I spell and grammar better, too, ha ha.
Life is about adapting and evolving, not codifying your “style” into a fixed thing that can be replicated by a machine.
(Footnote 1) The word you usually see here is "hallucinations." It's the wrong word. The correct word is confabulation, which is "a memory error that unintentionally creates false or fabricated information, typically without realizing the information is incorrect."