Title

AI, human-capacity habituation, and the deskilling problem

Abstract

AI tools replace, or stand to replace, human activity with non-human activity, via automated decision-making, recommender systems and content generation. The more AI replaces valuable human activity, the more it risks deskilling humans of their human capacities. While others have warned of moral deskilling caused by AI warfare and social robotics, I argue that AI deskilling could encompass other valuable human capacities: the epistemic, the social, the creative, the physical, and the capacity to will. On an Aristotelian view, deskilling of these capacities leads to capacity-impoverishment and ultimately to unflourishing lives.

AI is uniquely positioned to accelerate deskilling, because it compromises the conditions necessary for cultivating capacities. Competently exercising human capacities is like cultivating a virtue: it requires habituation — practice, refinement and judgment. AI offers the possibility of replacing various human tasks via automated decision-making, recommendation, diagnostics, content creation, and robotics. While some replacements may not diminish skill levels, such as automating repetitive tasks, others could erode skills if they impede the conditions necessary for capacity development and habituation.

To determine when AI replacement is benign or ethically problematic, I offer an analytic framework for evaluating the goodness of AI tools and systems, distinguishing between two types of AI environments: environments that afford capacity habituation (e.g. practice, encountering challenge, discernment, flow), and capacity-hostile environments that narrow the ‘field of affordances’ necessary for capacity habituation. Notably, this helps identify AI environments that undermine the habituation of the capacity to will, which is a meta-capacity required for cultivating other capacities.

Focusing on human capacities as an organizing idea is useful for AI ethics because it offers both a vision of living well with AI through the cultivation and competent exercise of capacities, and a political morality that regulates against ‘capacity-hostile’ AI environments.

About Avigail

Avigail Ferdman is a Research Fellow at the Department of Humanities and Arts, Technion - Israel Institute of Technology. At the Technion she also leads the Embedded Ethics program.