Moral Status and AI

Abstract

Does it matter how we humans treat social robots, and if so, why? In the current literature, there are arguments both for and against the direct and indirect moral status of robots. Here, indirectness means that robots are not moral patients per se; we do not morally owe them anything for their own sake. Rather, actions towards robots become morally relevant because of what we owe to ourselves and to other direct moral patients (humans, animals, the environment, etc.).

I do not think that robots developed within the current paradigm of robotics and AI can have a direct moral status in any substantial way. Rather, I argue that there are sufficient reasons for social robots to be given an indirect moral standing. That is, it matters to us how we treat robots, although it does not and cannot matter to the robots themselves. I derive the reasons for this indirect status from the nature of our social cognition: the way we come to be, and remain, social beings, and how this mechanism works in our cognition of the sociality of robots.

About Arzu

Arzu Formánek (BSc in Mathematics, MA in Philosophy) is pursuing her PhD within the FWF-funded project FoNTI (Forms of Normativity - Transitions and Intersections).