Plato, speaking through the central character of his famous dialogues, the philosopher Socrates, tells us that the invention of writing severely impaired human memory. The impairment resulted in part from disuse: we humans no longer had to commit to memory important information that could now be set down on the page. Socrates insists that living memory is far better, and far more responsive to inquiry, than the written word.
Human learning has not, however, disappeared or even diminished in the age of the written word; it has prospered, as the wisdom of the ages can now be readily passed down to each generation. The invention of moveable type in the 15th century spread the written word across the world, making it accessible as never before. Plato must have appreciated the irony that, by writing it down, he was preserving Socrates’ argument against writing for future generations. Plato could not have guessed, however, that 2,500 years later his writings would be part of the canon of Western philosophy, and that moveable type and modern transportation and communications would make them available practically anywhere.
Modern communications devices, and adjuncts to learning and investigation such as artificial intelligence (AI) programs, bid us remember how writing itself was once critiqued and how that critique was largely dispelled by subsequent events. But ought we to be so sanguine about our reliance on such devices as cellphones, computers, and the emerging AI programs? Do they aid us or dull our abilities? Do they let us inquire more deeply into the human world, or do they divorce us further from it?
I suppose the general answer is: It all depends on how we use these tools.
I’m old enough to remember when handheld electronic calculators were first introduced, and only a few of my fellow high-school students chose to spend about $150 to get one (the equivalent of almost $1,000 today). I learned manual methods in my math classes for calculating answers, and I found one of them so useful that I still use it today for adding a written column of numbers: it is quicker than entering all the numbers into a spreadsheet or calculator.
So, when a grandfather who is also a computer scientist recently asked his grandchildren what one-third of nine is, those grandchildren, cellphones already in hand, immediately went to those phones to find the answer. I’m used to doing math in my head to get a quick estimate of the answer to a real-world problem, an estimate I can confirm later with written or computer calculations. Will these young people never face a situation where estimating the answer to a math problem in their heads is useful? I cannot foresee such a time short of the fall of our technical civilization.
But, just as Plato explained, this grandfather believes that “our cognitive abilities weaken when we let technology think for us.” Defaulting to Google for every question we cannot answer ourselves weakens our minds.
Richard Murphy, an accountant by trade but now a critic who writes extensively on public finance, explained in a recent piece that accounting firms are no longer seeking to train employees in taxation because, or so the firms believe, tax questions will be answered by AI programs. But Murphy counters that “[t]he way you become good at tax is by reading a lot about it; by writing a lot about it (usually to advise a client); and by having to correct your work when someone superior to you says you have not got it right. There is a profoundly iterative process in human learning.”
When I was a freshman in college, my advisor told me that whatever profession I chose, I should seek experience in all its jobs from the bottom up. That way, when I became a manager, I could not be fooled by the people under me. The accounting firms do NOT understand that the managers they are now creating won’t know whether the firm’s AI tax program has answered a tax question correctly. The program will become the equivalent of the “people under me,” and the new managers will be easily fooled by an authoritative-seeming piece of software.
Murphy adds that AI programs answer only the question they are given. They cannot know whether that question is the right one under the circumstances. In other words, AI cannot detect a wrong question and reorient the user toward the right one. It turns out that the only way to detect a wrong question is through extensive experience with the subject matter and with the people you serve.
Nassim Nicholas Taleb, the self-styled student of risk and author of The Black Swan, summed up this problem very succinctly in a recent post on X (formerly Twitter): “VERDICT ON ChatGPT: It is ONLY useable if you know the subject very, very well. It makes embarrassing mistakes that only a connoisseur can detect.” So, the AI programs that accounting firms are counting on to answer tax questions will only be useful to someone who is already thoroughly trained in tax law and tax accounting. Who knew?
Now think about the mess AI will make if it is used without respect for its limitations in fields such as medicine and law, where the honed judgment of seasoned professionals who know the subject matter extremely well is crucial.
One psychology professor explained AI this way: “It’s a machine algorithm that’s really good at predicting the next word. Full stop.” The psychologist added that humans learn best in situations that include meaning, emotion, and social interaction; AI learns only from the data that people give it.
This raises the question: Where will all the expert data and words come from if no one is being trained to be an expert because “AI will take care of that”? We are back, once again, to having to become experts ourselves in order to know whether AI is giving us correct information.
It’s worth noting that expertise does not actually reside on the page. It resides in the minds of a community of interacting experts who constantly debate and renew their expertise by evaluating new information, insights, and data from experiments and real-world situations.
So, it turns out we never really abandoned the mind as a repository of memory. These communities of experts rely on a sort of common mind, which they create together to hold the community’s evolving information and views. Socrates would be pleased. But would AI be able to explain WHY he was pleased?