Some Circumstances Under Which It Is Rational for Human Agents Not to Trust Artificial Agents
In this paper, I argue that there are several different circumstances in which it is rational for human agents not to trust artificial agents (such as ChatGPT). I claim that artificial agents cannot, in principle, be programmed with a self of their own (nor with a simulation of one) and, consequent...
| Journal: | Information |
|---|---|
| First author: | |
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-01-01 |
| Subjects: | |
| Online access: | https://www.mdpi.com/2078-2489/16/1/36 |
