In the Machine Age, What Makes Us Human?
And what would a "Syllabus for Humanity" need to include . . .
A few years ago, an old friend, aware of my endless search for what defines us, insisted I read John O’Donohue’s Eternal Echoes.
O’Donohue was an Irish philosopher and poet who grew up on his family farm in County Clare -- an experience he described later as "a huge wild invitation to extend your imagination into an ancient conversation between the land and sea."
Eternal Echoes provides a similar invitation, urging us to assume our proper place in the universe. “We live in a world that responds to our longing,” he writes. “The hunger to belong is at the heart of our nature. The sense of belonging is the natural balance of our lives.
“Distance awakens longing; closeness is belonging. Yet they are always in a dynamic interflow with each other.”
O’Donohue’s book changed the way I view human nature by convincing me there’s a universal three-word story that describes the essence of what makes us all human:
Longing and Belonging.
I was reminded of that three-word story this week, as another school year kicked off with not just the usual avalanche of back-to-school pictures, but also the heightened concerns about what the hell schools are supposed to be doing these days.
If the future of work is largely unknown, what does it actually mean to be ‘prepared’?
If the fate of the planet (and our democracy) hangs in the balance, what’s the point of our most timeworn courses and curricula?
And if the age of the machines is upon us, what exactly is the role of the humans in the years ahead?
The comedy of our errors was especially on display in Los Angeles, where, mere months after the city’s school district held itself up as a trailblazer for its embrace of artificial intelligence (read: charming chatbots), the whole program was unceremoniously scrapped. “There’s a dream that A.I. is just more or less automatically going to solve all or many problems [of K-12],” said Ashok Goel, a professor of computer science and human-centered computing in the School of Interactive Computing at Georgia Institute of Technology. “But that’s not how learning and education works.”
Unfortunately, too many of our most commonly accepted answers to how learning and education should work remain baked in by a century’s worth of habits and rituals.
In too many places, we are still perfecting our ability to succeed in systems that no longer serve our interests.
Yet the arrival of A.I. feels like a renewed opportunity to return, once again, to the eternal echo of this magical world — and our mysterious place in it.
I’m not talking about twenty-first-century skills, or competency-based education, or a new kind of transcript. And I’m definitely not talking about a great new technology policy.
I’m talking about the irreducible elements of Homo sapiens -- the things that make us who we are -- and finding a way to more intentionally lean into those things as our primary point of inquiry as a species.
The New Yorker’s Joshua Rothman is talking about it, too. In a recent article for the magazine, Rothman catalogs A.I.’s current pantry of party tricks, before reminding us that “as artificial intelligence proliferates, more and more hinges on our ability to articulate our own value.
“What will get left out when A.I. steps in?”
This feels like the question for educators everywhere -- albeit one we’re not (yet) asking pointedly enough.
So let’s get specific: if you were to draw up a discrete list of the qualities that make us human, what would you add, and why?
Ideally, this is not merely a list; it’s a syllabus, a set of dispositions we could seek to better understand and, eventually, embody throughout our lives.
As I’ve written before, Minerva University has already completed a version of this assignment to reimagine higher education -- albeit in service of a different goal. Every aspect of Minerva’s curriculum is designed to provide students with four core competencies: Thinking Critically, Thinking Creatively, Communicating Effectively, and Interacting Effectively. Underneath these four competencies are more than one hundred specific learning objectives, which represent a blend of habits of mind -- cognitive skills that with practice come to be triggered automatically -- and foundational concepts -- fundamental knowledge that is broadly applicable. “The Minerva curriculum is designed to help our students understand leadership and working with others,” explained its founder, Ben Nelson.
To which I say, yes -- and what I’m talking about here is not a curriculum for leadership; it’s a way to understand ourselves more fully, amidst the slowly boiling pot of the machine age.
Our hardwired tendency for longing, for example -- alongside our equally hardwired need to belong.
How powerful would it be if all of us were given the space to better understand the constant interplay between those two yearnings? How might that impact our overall health and well-being?
What else feels essential to who we are?
Perhaps empathy, creativity, and resilience.
The quest for meaning and purpose.
Perhaps beauty.
As it turns out, the University of Edinburgh’s Shannon Vallor has spent a lot of time thinking about this question, and in her new book, “The A.I. Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking,” she reminds us that being a good person isn’t a fixed state, but something we are constantly navigating and striving towards.
“This struggle is the root of existentialist philosophy,” she writes. “At each moment we must choose to exist in a particular way. Then, even as we make the choice—to love another, to shoulder a duty, to take up a cause, to rebuke a faith or throw ourselves into it—the choice opens up again. It will not hold itself there without our commitment to choose it again, and again.”
For Vallor, this is where the distinction between human and machine thinking becomes clearest. Her worry is not that the computers will take over, but that in our immersion in a world of artificially intelligent technologies, we’ll lose track of what our most essential human intelligences really are.
“We are caught in the grip of a gradual and accelerating mechanization of the human personality,” she argues. “The systematic replacement of reflective discernment with mindless prediction; the efficient sacrifice of shared flourishing to expected utility; the exchange of humane creativity and open-ended progress for local optimization of content delivery. In short, the surrender of humane wisdom to machine thinking.
“In the face of accelerating climate crisis and other existential risks,” Vallor writes, “the future desperately needs us to understand — more fully than ever — who human beings can be, and what we can do together. Yet, because today’s A.I. mirrors face backward, the more we rely on them to know who we are, the more the fullness of our humane potential recedes from our view.
“The call is coming from inside the house. A.I. can devalue our humanity only because we already devalued it ourselves.”
So what would you add to a Syllabus for Humanity, to ensure we’d more explicitly revalue and reinvest in the vocabulary of what makes us human before the modern metaverse robs it of all meaning?
For what, ultimately, do we wish to stand?
"Preparation", as a goal of education, is fundamentally different, and much, much more important than vocational training. The latter has its place but is more targeted and comes later. Always more "educere" than "educare", more drawing out than bringing up. Education is never one and the same thing for all but is tailored to the individual. It fuels in an ever compounding manner a desire to know, first oneself, and then others, to appreciate, to create, to connect and to question. We need to put epistemology at the center of our approach. How do we know what we know? I appreciate the power of the Turning Test as we wrestle with the emergence of A.I. but I'm reminded of a section in Benjamin Latabut's latest book, Maniac, (fiction? non-fiction?)about many things including John von Neumann. "When asked what it would take for a computer to begin to think like a human being?" von Neumann said "it would have to grow, not be built, it would have to understand language, to read, to write, to speak. And he said it would have to play, like a child."
I actually think the Inner Development Goals are a decent start, though I would add "honesty" (as in being brutally honest about the challenges we face) to the "Acting" section. (https://innerdevelopmentgoals.org/ -- make sure to "Explore the full framework.")
But, yeah, those first two questions you ask feel like where the work is. I don't think we can grapple with the third one until we bring to it the lenses we arrive at from trying to answer the first two. "Future Ready Schools" is unserious. Current curricula and education narratives are increasingly irrelevant, and dangerous. And the fate of all of this rests on our ability to understand the violence and disconnection that fuel our current singular story of "progress" and "success" -- a story that is unsustainable and now clearly an existential threat to life on the planet.
Lots of shifting ahead if we're going to learn our way out of this.