Consider ChatGPT’s answer to my question, “whose idea was it to divide your ethical discussions according to utilitarianism, deontology, etc.?”:
> The division of ethical responses based on ethical theories like utilitarianism, deontology, and others is a part of the design and development of AI models like mine, and it's intended to provide a structured and informative way to discuss ethical topics. The goal is to offer users insights into how different ethical theories and principles might be applied to specific ethical questions or dilemmas.
This hard-coding disregards ready-to-hand Ethics, a phenomenological matter of the lived experience of belonging, in exchange for present-at-hand Morelite. The coders asked, "what would it look like to do Ethics?" and came up with the analytic Anglo philosopher's head-in-the-clouds, present-at-hand Morelite. An Ethical AI would have to have an Ethical disciplinary praxis.