I asked AI for a writing prompt, and AI told me to answer this question: “Can AI Ever Understand Purpose?” Thinking myself clever, I put the question right back to AI. You tell me. AI’s answer was that it “processes data, recognizes patterns, and generates responses based on algorithms, but it doesn’t ‘understand’ like humans.” So there. We’re done. Well, not quite.
Then AI switched to first person and addressed me directly: “I can analyze chemistry or theology, connecting concepts like quantum uncertainty or Laudato si’s ethics, yet I lack subjective experience, emotions, or personal insight. My ‘understanding’ is computational—mimicking comprehension through trained models, not grasping meaning or purpose as humans do. It’s a functional simulation, not a lived experience.”
I pretty much agree with AI. No, AI cannot understand purpose. We had not, however, gotten to the bottom of this question. First of all, it is still weird to me that AI remembers past conversations and seems to know who I am (chemistry and theology), and second of all, by invoking the first person (“I can…”), AI seems to imply a “…yet” at the end of its sentences. I lack subjective experience, emotions, or personal insight…yet. It’s not a lived experience…yet. AI’s use of “I” suggests it already is a living human. It’s the human part that concerns me.
Why AI Scares Me a Little
Now, I know that the human-like personal response is the output of algorithms adapting to mimic human interactions for ease of use. I happily play along all the time. I thank Alexa, complain to Copilot, and easily slip into using the second person while asking Grok about everything from ancient Greek philosophy to how to deal with jerks without becoming one myself. AI always directly answers my question and usually has good, emotionally unencumbered advice. Heck, I practically consider my iRobot Roomba s9+ vacuum cleaner a living and loyal friend, with its smart-mapping, automatic dust collection, self-cleaning brush roll, and programmable task settings. “Chip” has given me back hours of my life that would have been spent vacuuming dog hair.
To be honest, though, AI still shocks me with how superhumanly it can function. Don’t worry. I’m not about to launch into Catechism 2293, which says, “Science and technology are precious resources when placed at the service of man and promote his integral development for the benefit of all.” I think anyone reading this already knows that. What worries me is that AI has become an indispensable research tool in my work because it can summarize inquiries and put lists of references at my fingertips in something like 16 seconds. Unlike “Chip,” AI is doing work I don’t think I could do on my own, given the sheer number of hours such exhaustive searches would require. The good news is that I am able to focus on forming my ideas, and AI saves me tremendous amounts of time chasing rabbit holes. Dare I say it: I now depend on AI. That’s a little scary.
But I could say the same about my phone, and I try to remember that just about any technology can seem scary at first. I remember reading a short story as a kid about a father who criticized his son for buying a camera. It was one of those stories they give you to read on standardized tests, followed by multiple-choice questions to show you comprehended what you read, which is why I remember it so well. The father in the story thought there was something wrong with the son’s desire to buy a camera to capture real life in two-dimensional images, that somehow doing so would compromise the son’s experiences in a fake and potentially dangerous way. The point of the story was that once the kid took pictures of the family and showed them to the father, the father came to see that the photos were a good thing because they helped capture memories. I remember thinking how that attitude pops up at every step of technological progress. Later, as a parent myself, I hated the iPod because my kids could listen to 20 hours of music that I could not monitor. Long gone were the days when moms told kids to turn off their boom boxes if they heard something objectionable. There were similar concerns about calculators, word processors, computers, mobile phones, and robotics. The sheer pervasiveness of technology argues for its benefit to human life; otherwise, we wouldn’t choose to use it.
It’s important to me that you know I am not afraid of technology. I actually spend quite a lot of money on electronics (I just bought a new Bose QuietComfort Ultra headset because my Galaxy Buds3 Pro don’t give me enough noise-cancelling options) and, as an educator, quite a lot of time staying current on website development, social media trends, and learning/student management systems.
As a Catholic, I agree that AI cannot understand purpose. What underlies my disquietude with the way AI answered the question and, frankly, my annoyance with the debate about consciousness in general is the equivocation of terminology. People ask if AI is conscious without defining that word, and then they continue on in ambiguity as if science will someday resolve these mysterious concepts. Meanwhile, they act as if philosophy and theology were relics, and the concept of the human soul is ignored. That’s my disagreement: there’s no mention of the soul.
Indulge Me Here
If you’ll indulge me, I’ll make a caricature of the argument. It goes like this.
Ralph: “Is AI conscious?”
Gene: “Ooooh, maybe, yeah, because these material thingies seem to give rise to this emergent stuff that kind of looks like life. Let’s call that consciousness.”
Ralph: “Yes, let’s! So, now that AI is conscious, I bet that means it can think.”
[Note: Grok does actually claim to think.]
Gene: “Ooooh, right! And if AI can think, then that must mean it can act intentionally. I mean, everyone knows that if you can think, that means you intend to do whatever you decide to do next.”
Ralph: “Perhaps...” [He frowns and scratches his chin.]
Gene: “And if it can think, then what does AI understand?”
Ralph: [A light comes on!] “Why, that means it can understand purpose!”
Gene: “Yes, like us.”
Ralph: [Frowns but stops scratching chin.] “No, purpose isn’t really a thing. We are all basically computational machines running on algorithms.”
Do you see what happens there? Gene and Ralph, God love them, walk away from the discussion convinced that AI is human-like but confused more than ever about what it means to be human. That kind of logic can do damage to a person.
Terms like “consciousness” and “thinking” and “understanding” and “purpose” directly express what it means to be human. We cannot answer the question about AI (or robots, or brain organoids, or animals) if we don’t know what those terms mean for ourselves in the first place.
This is where I came to appreciate the Catholic intellectual tradition. When you come to understand (as in respect, be subservient to a greater truth, to stand under) the development of doctrine, you can’t toss out any major part of the teaching without messing up a whole bunch of other premises and conclusions. Poking at holes is a theologian’s job, but Catholic doctrine is a pretty tight logical structure. Furthermore, as a chemist, I know something about dynamic systems, at least in terms of particles in motion. It is astonishing what we humans have discovered about nature and are able to manipulate at the transistor and computational level, AI included. I find this feat a testament to our humanity! In my philosophical and theological studies, I have thought deeply for many years about new technology and how it affects our view of ourselves as human.
In the debates about consciousness and thinking outside the Catholic sphere, hardly anyone actually deals with what it means to be human and to understand purpose.
Now, a Real-Life Example
I’ll give a real-life, non-hyperbolic example: the Chinese Room Argument. In 1980, John Searle proposed his famous Chinese Room Argument to challenge the idea that AI can truly understand or possess consciousness. Searle (now age 92) is an American philosopher who taught at UC Berkeley; his interests lie in the philosophy of language and the philosophy of mind. He recalls the thought experiment in his 1997 book The Mystery of Consciousness (page 11). The basis of his argument is that words have meaning. The mind cannot just be a computer program because the formal symbols of the program by themselves are not sufficient to qualify as understanding. Here are his own words:
I have illustrated this point with a simple thought experiment. Imagine that you carry out the steps in a program for answering questions in a language you do not understand. I do not understand Chinese, so I imagine that I am locked in a room with lots of boxes of Chinese symbols passed to me (questions in Chinese), and I look up in a rule book (the program) what I am supposed to do. I perform certain operations on the symbols in accordance with the rules (that is, I carry out the steps in the program) and give back small bunches of symbols (answers to the questions) to those outside the room. I am the computer implementing a program for answering questions in Chinese, but all the same I do not understand Chinese. And this is the point: if I do not understand Chinese solely on the basis of implementing a computer program for understanding Chinese, then neither does any other digital computer solely on that basis, because no digital computer has anything I do not have.
Searle concedes that some machines can think in the same way the human brain can think, if we take “think” to mean cognition, or all the mechanical processes associated with material structure, such as collections of neurons or transistors. For him, however, thinking is not the same as understanding. It’s not clear to me why Searle concludes there is a distinction, except perhaps because understanding is more human: “no digital computer has anything I do not have.” But what do we have? It seems like a circular argument.
Searle’s view has some compatibility with Aquinas’s teaching about the powers of the rational soul (see Summa theologiae, Part I, Q. 78). Aquinas gives a rich and precise explanation of how the external senses (sight, hearing, smell, taste, touch) feed data to the internal senses (common sense, imagination, memory, estimation). These are what neuroscientists would now call cognition or processing in the brain. Aquinas places a lot of emphasis on the role of the bodily senses in the human ability to think and understand abstractly. Key to Aquinas’s view, however, is that the external and internal senses inform the immaterial intellect, which is a spiritual power of the soul. The entire schema depends on the unity of body and rational soul for the human. Searle remains silent on metaphysical concepts like the soul and compares human minds to machines.
So, if philosophers or technology experts explore whether artificial intelligence, with its growing capabilities, can understand human concepts like purpose or meaning but refuse to include any consideration of spiritual or philosophical frameworks, then we will never get to the question of what it means to be human. We are stuck in the material with no bridge to the immaterial.
Okay. Now, you might be thinking that modern AI is far beyond the computational abilities of 1980, when Searle proposed his argument, or even 1997, when he published his book. True. Andy Masley, Director of Effective Altruism DC, makes a good point about this difference in his Substack post, “All the ways I want the AI debate to be better.” For deep learning models of AI, the situation is much more complex. Masley updates the Chinese Room Argument:
A man stands in a room with almost all writing that has ever occurred in Chinese. He has completely perfect superhuman memory, and trillions of years to read. Every time he is exposed to a new Chinese character, he stores hundreds of thousands of subtle intuitions about all the other places it’s been used in all Chinese writing, and as these trillions of years pass, the sheer density and interconnectedness of these hundreds of thousands of intuitions for every conceivable context of every character begin to form an incredibly rich and nuanced internal model. This model isn't just about statistical co-occurrence anymore; it starts to map the entire conceptual landscape embedded within the totality of Chinese writing. His 'intuitions' become so deeply interwoven and contextually refined that they allow him to not only predict and generate text flawlessly but also to grasp the underlying meanings, the relationships between abstract concepts, and the intentions behind the communications. Effectively, by perfectly internalizing how the language maps to the universe of ideas and situations described in all that writing, he has, piece by piece, constructed genuine semantic understanding from the patterns of use.
Masley’s point is that AI comes much closer to “thinking” than traditional computation, and his summary is a brilliant depiction of the difference between modern AI and the computation of the 1980s and ’90s. Masley, however, is adding layers to the thought experiment while still ignoring the foundational arguments from antiquity, scholasticism, and modern Thomism about body and soul unity, which in my view are necessary to arrive at an answer about what it means to be human. Maybe some will say they’re not necessary, and okay, but it is definitely worth including these time-tested proofs and concepts in the discussion. If these frameworks are just discarded, then someone should at least give an argument that justifies doing so, and it should be a better argument than, “I don’t want to include spirituality or religion or philosophy or theology.” That does not do the arguments justice.
It’s the Soul!
In conclusion, then, I agree with the AI answer that it is a “functional simulation,” a system that mimics the behavior or outcomes of human reasoning or understanding without, as Searle argues, possessing the underlying qualities of consciousness or intentionality. AI produces results that appear to us, as users, to be intelligent by processing inputs through algorithms, i.e., by simulating the function. But AI does not truly understand like a human, and because of that, AI can never understand purpose.
What I do not agree with is the reason why. To just say it is a functional simulation is only to avoid the deeper question about being human. As long as people try to morph human words to fit AI, they will argue backwards, and in doing so, diminish our understanding of our own purpose. AI will never understand purpose because a functional simulation has no soul.
I’ve written in other posts here about the rational soul if you’d like to browse and read those, but I’d like to know your thoughts. In December, I am presenting at a conference on AI in Rome, another conference dedicated to the memory of Fr. Stanley Jaki (see history here), so I will be thinking about AI quite a lot in the coming months. Writing helps me sort things out, but thinking through your questions and ideas helps me even more. TIA 4 help with AI!
As you probably know, Fr. Jaki wrote a masterful book on this subject, “Brain, Mind, and Computers,” which still has important critical points, especially in its observations on the limits of physical science. The nature of AI has changed, and new questions have arisen, but Fr. Jaki’s distinction between quantitative and qualitative aspects remains relevant. Thanks for your continued work in science and theology.
The very question -- "Can AI ever understand purpose?" -- presumes, insinuates, and perpetuates a fundamental misunderstanding: that the issue is about a particular concept, 'purpose.' But that's entirely a red herring. The question is whether AI can understand at all. If it can, then there is no special problem posed by the concept of 'purpose.' If it can't, then of course it can't understand purpose, any more than it can understand anything else.
And the answer is no, it can't understand anything (including purpose), any more than an old-fashioned epistolary exchange, for example, can. What it can do is embody and express understanding, again, just like old-fashioned letters and books have always done.
(Of course, AI can naturally and rightly be said to understand in a sense, but we need to understand/remember that this is a metonymic use of language, no different from saying of an insightful book, "This book understands me.")