Every time I teach an artificial intelligence ethics class, which is most semesters, the same pattern emerges in the first few weeks. The students never seem to feel that they know “enough.”

This initial uncertainty, prompted by a perceived lack of expertise, mirrors a phenomenon I have observed in society more broadly: the widespread tendency to assume that AI is not just complex, but uniquely complex; that regular citizens cannot possibly begin to form opinions on what to do about AI’s future, or what place to accord it in our lives, because AI — whether we view it as a magic-like tool or a doomsday-inducing threat — seems inherently inscrutable; and that, in contrast to other scientific and policy domains, participating in public discourse on AI innovation and AI governance requires especially deep technical, and possibly philosophical, expertise. In my forthcoming book Democratizing AI, I call this phenomenon “AI exceptionalism.” It’s the implicit or explicit view that AI-related matters belong on a special kind of pedestal, out of reach for most of us.
Given AI’s dazzling complexity and light-speed evolution, AI exceptionalism is an understandable view. But it is also a misleading and counterproductive one. Within a few weeks, the same students who start out intimidated by this interdisciplinary and rapidly evolving field begin to realize that the foundation they need is actually within reach. With the right structure and guidance, they develop the competence necessary to navigate key philosophical and technical topics — algorithmic bias, privacy, explainability, whether AI learns just like a human child does, and whether AI can ever be truly creative. They engage thoughtfully with questions about fairness, autonomy and the possibility of AI having moral status. They test each other’s ethical intuitions and develop a deeper understanding of AI’s distinctive technological characteristics in comparison to other technologies. They begin to understand that expressing a goal in mathematical terms is itself a choice in favor of a specific set of ethical values, such as the choice to optimize for equal opportunity or for equal outcomes for different groups in society. They come to see that the conversations shaping AI ethics and policy are not closed to them after all.
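To make that last point concrete, here is a minimal sketch in Python. The hiring scenario and the numbers are hypothetical, invented purely for illustration; the point is only that two intuitive fairness goals, once written down as formulas, can pull apart on the very same set of decisions.

```python
# A minimal sketch (hypothetical data) of how two fairness goals,
# once expressed mathematically, can conflict with each other.

# Hiring decisions for two groups: 1 = selected, 0 = not selected.
# "qualified" marks applicants who would succeed in the role.
group_a = [{"qualified": 1, "selected": 1}, {"qualified": 1, "selected": 1},
           {"qualified": 0, "selected": 0}, {"qualified": 0, "selected": 0}]
group_b = [{"qualified": 1, "selected": 1}, {"qualified": 0, "selected": 0},
           {"qualified": 0, "selected": 0}, {"qualified": 0, "selected": 0}]

def selection_rate(group):
    # "Equal outcomes" (demographic parity): same share selected overall.
    return sum(p["selected"] for p in group) / len(group)

def true_positive_rate(group):
    # "Equal opportunity": same share of *qualified* applicants selected.
    qualified = [p for p in group if p["qualified"]]
    return sum(p["selected"] for p in qualified) / len(qualified)

print(selection_rate(group_a), selection_rate(group_b))          # 0.5 vs 0.25
print(true_positive_rate(group_a), true_positive_rate(group_b))  # 1.0 vs 1.0
```

In this toy example, the procedure treats qualified applicants in both groups identically (equal opportunity is satisfied) while selecting the two groups at very different overall rates (equal outcomes are not). Deciding which of these formulas a system must satisfy is not a technical detail; it is exactly the kind of value choice the students learn to recognize.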
Here’s my takeaway from several years of teaching AI ethics to very different groups of students: When it comes to reasoning better about AI’s place in our lives, competence, not deep expertise, is what really matters.
If students from all different majors and backgrounds can gain the necessary skills to assess AI’s ethical and societal implications, then so can policymakers, journalists and the broader public. Understanding what’s really at stake for us in the age of AI does not require knowing exactly how to build the next generation of AI tools. The ability to reason through trade-offs, to question underlying values and to critically assess societal impacts is within reach for anyone willing to engage with the topic, and it can help them avoid falling into AI exceptionalist thinking.
This has implications beyond the classroom. If we insist that only those with advanced technical knowledge can shape AI policy, we risk shutting out crucial perspectives. Many of the most urgent ethical questions about AI — how it affects justice, autonomy and human dignity — do not depend solely, or even primarily, on advanced programming skills. They depend on being able to deliberate with others about value-based questions, such as: Do we as a society really want to automate this task, or that one? How comfortable are we as a group with, say, a social credit system that would require large-scale surveillance? Should we prioritize long-term or short-term AI safety risks? And should we prioritize safety over personal autonomy? These are areas where interdisciplinary dialogue is not just valuable but essential.
In the closing line of his pathbreaking article “Computing Machinery and Intelligence,” published in 1950 in the philosophy journal Mind, Alan Turing wrote, “We can only see a short distance ahead, but we can see plenty there that needs to be done.” His words remain as relevant as ever. AI is advancing rapidly, and its effects on society are profound. We cannot afford to wait for perfect knowledge before acting. Public discourse on AI’s role in our lives ought to be broad, inclusive and ongoing. That means ensuring that regular citizens — not just specialists — have the tools to participate meaningfully.
—
About the Author
Annette Zimmermann is an assistant professor of philosophy, an affiliate professor of statistics and the co-lead of the Uncertainty and AI research group at the Institute for Research in the Humanities at UW–Madison. She specializes in research on the ethics and politics of artificial intelligence, machine learning and big data. She has received UW–Madison’s Vilas Early-Career Investigator Award, and her previous research has been supported by research fellowships at Princeton University and Harvard University.

