Are Computers Starting to Think?
I Don't Think So
Will AI Take Your Job?
It depends on the timeframe.
There’s no doubt in my mind that AI is over-hyped if your timeframe is a few years, and I have some fear that we’ll never know how it turns out if we become over-dependent on it during those few years.
But first, a little background. While I’m far from a Luddite, my long experience with computers tells me that most major developments at first trigger unwarranted optimism about how quickly they will change the world.
I first became aware of the possibility of intelligent machines while at M.I.T. in the early 60s. In 1957, a team at Carnegie Mellon University had predicted a computer would defeat the human world champion in chess by 1967.
That prediction was off by three decades. IBM’s Deep Blue defeated World Champion Garry Kasparov in 1997.
Human chess masters glance at a chess board and immediately sense the weaknesses and strengths of both sides. They choose a move by considering a very limited set of moves and possible continuations.
Researchers readily acknowledge that Deep Blue didn’t become a chess champion by imitating human thought processes. The program depended primarily on raw data-processing speed. Rather than intuitively deciding which few moves to consider, it examined every possible move in as much detail as time allowed and then chose the one that simple math indicated was most likely to lead to a favorable position.
As programming effectiveness and computer speed improved, so did Deep Blue’s playing strength, and it eventually defeated Kasparov because it had run through many more possible sequences of moves than a human possibly could in the same amount of time.
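For readers curious what “running through sequences of moves” looks like in code, here is a toy sketch in Python. The tiny game tree and its scores are invented purely for illustration; Deep Blue’s actual search and evaluation were vastly more elaborate, but the basic idea is the same: examine every branch, score the end positions with simple arithmetic, and pick the move whose worst-case outcome is best.

# A toy sketch of brute-force game-tree search: walk every branch of a
# (tiny, made-up) game tree, score the end positions with simple numbers,
# and choose the move with the best guaranteed result.

def minimax(node, maximizing=True):
    """Exhaustively search a nested-dict game tree; leaves are numeric scores."""
    if not isinstance(node, dict):          # leaf: the "simple math" evaluation
        return node, None
    best_move = None
    best_score = float("-inf") if maximizing else float("inf")
    for move, child in node.items():        # every possible move is examined
        score, _ = minimax(child, not maximizing)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Invented two-ply position: our move, then the opponent's reply, then a score.
tree = {
    "Nf3": {"d5": 0.3, "Nc6": 0.1},
    "e4":  {"e5": 0.2, "c5": -0.4},
}
print(minimax(tree))   # -> (0.1, 'Nf3'): the move with the best worst case

Scale that idea up to hundreds of millions of positions per second and you have, roughly, what defeated Kasparov.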
Self-Driving Vehicles
Now consider the highest level of decision making needed for fully autonomous driving, a level that, every year for at least the last 15, has been predicted to arrive within a few years.
In the late 60s I was briefly responsible for the software Ford Motor’s Scientific Research Lab needed for some of its earliest experiments with computer-controlled engines. I soon discovered how difficult it was to program a fail-safe way for a computer to control a single critical function such as a vehicle’s engine speed, even when it wasn’t necessary to know anything about the surrounding environment.
Full vehicle autonomy, usually referred to as Level 5, requires continuous processing of data on the location, size, texture, speed and direction of anything moving in a vehicle’s surroundings under any weather conditions. It also requires vehicles to correctly interpret road signs and markings and signals coming from people.
And who is signaling: a policeman, a worker directing traffic, a hijacker or a bear?
An impossible-to-predict flood of input data determines a continuous stream of decisions on how fast to go, when to brake, and when and how to adjust steering.
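To give a rough idea of the shape of that problem, here is a bare-bones Python sketch of such a sense-decide-act loop. Every name in it (read_sensors, classify_objects, plan_motion, apply_controls) is hypothetical; a real Level 5 system involves dozens of cooperating subsystems, not one tidy loop.

# A bare-bones sketch of the continuous sense-decide-act cycle described
# above. All method names are hypothetical placeholders for illustration.

import time

def drive_loop(vehicle, hz=20):
    """Repeatedly turn an unpredictable flood of sensor data into throttle,
    brake, and steering decisions, many times per second."""
    period = 1.0 / hz
    while vehicle.is_running():
        scene = vehicle.read_sensors()             # cameras, radar, lidar, GPS...
        objects = vehicle.classify_objects(scene)  # pedestrians? signs? a bear?
        plan = vehicle.plan_motion(objects)        # how fast, when to brake, how to steer
        vehicle.apply_controls(plan)
        time.sleep(period)                         # then do it all again

Each pass through that loop has to be right, in any weather, with no human fallback.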
An experienced driver does all this, usually without consciously thinking about it, just as a chess master immediately knows which moves to consider and which to ignore.
Level 5 vehicles must reach or exceed human-level driving skill. But even if that does finally happen in the next few years (which I still don’t believe it will, however often Tesla claims it already has), all the vehicle will be able to do is drive itself. What about playing chess, putting cars together, performing surgery, writing stories and replacing psychiatrists?
At the current state of AI development, AI researchers are almost always working on programs that will duplicate a specific ability the brain has. One plays chess, another drives an SUV and robots assemble cars.
And I’d prefer that a robot with other programmed skills take out my appendix if necessary.
Programs such as the frequently updated ChatGPT seem to be a major exception. But storytelling and responding to questions about your mental state are related skills that require selecting and combining words into sentences.
The latest AI craze is based on programs that seem to understand words. They don’t, unless it’s happening by some mysterious process that is different from the equally mysterious way a human brain understands them.
AI researchers use computer models they believe duplicate the brain’s decision making, but neither they nor anyone else knows how the brain actually works.
Some of the most respected AI gurus are saying that so-called deep learning software is making decisions they couldn’t have predicted, and they don’t understand how. Neither do I, but again, I’m pretty sure it isn’t because there’s some mysterious digital voodoo going on that is akin to human unconscious decision making.
Instead, it’s a more complex variety of the digital and mathematical manipulation Deep Blue performed when it came up with a move Kasparov didn’t expect. That wasn’t human-like understanding of chess, and ChatGPT will never shed a tear over a poignant story it produces. (But it could be programmed to seem to be doing that.) It doesn’t understand the stories it tells, much less the emotions they may generate.
Hold that thought. More next time.


