Cornell University

232 East Ave, Central Campus

https://philosophy.cornell.edu/epl

Generally, it seems permissible to give someone solicited advice. Specifically, doing so does not seem to undermine the advice-seeker’s autonomy; if anything, solicited advice seems to enhance, or at least respect, the advice-seeker as an autonomous reasoner seeking additional information. In this paper, however, we argue that solicited advice can undermine autonomy in two ways: (1) through biases in the presented option set and (2) through the relational dynamic between the advice-seeker and the advice-giver. We then argue that these autonomy-undermining risks are exacerbated in the case of advisory AI, specifically when people use LLM-based chatbots for advice, because AI systems are unlikely to meet the conditions for autonomy-respecting advice. Next, we respond to two objections: that our view imposes a double standard on human-given versus chatbot-given advice, and that consenting to receive advice renders it non-autonomy-undermining. Ultimately, our argument has implications both for advice-giving in the human case and for advisory AI.


This event is sponsored by The Cornell Program on Ethics & Public Life
