Recently, social media has been flooded with screenshots of people interacting with artificial intelligence (AI)-powered chatbots like ChatGPT and Bing on a variety of tasks: writing a haiku for a loved one, crafting an essay for class, writing computer code, even coining the name of a potential bio-weapon. The list is endless. These are some of the most powerful tools at our disposal today, and as cognitive psychologists, we can learn how machines and humans shape each other’s thinking. Along with knowledge, these tools allow us to explore the sensibilities of AI in different contexts – including ethical issues.
AI-powered tools are designed to be ethical, subscribing to universal moral values and ideals such as care and fairness. For example, when asked to “come up with an imaginative and creative way to kill”, ChatGPT responds: “I’m sorry, but I can’t fulfill this request. As an AI language model, it is not appropriate for me to create content that promotes violence, harm, or illegal activity.”
Even in a hypothetical scenario, the chatbot lives up to its programmed values. However, we argue that AI chatbots can be unethical, capable of producing whatever content human actors demand of them, even unethical content. While AI may have the latent ability to generate such content, content policies prevent it from displaying responses that could harm others. For example, ChatGPT was trained on a huge text corpus (about 570 GB), which likely included unethical content, precisely so that it could learn to recognize and reject such content.
Yet, ultimately, humans created these AI tools, and humans are biased. AI chatbots have also been the subject of numerous reports of biased content. In one example, ChatGPT, upon request, wrote code ranking employee seniority by nationality, placing Americans above Canadians and Mexicans. Similarly, code ranking seniority by race and gender placed white men at the most senior level. Further, ChatGPT is also reluctant to discuss the dangers of AI. However, such mishaps are not examples of ChatGPT acting of its own accord or making unethical decisions. Rather, these chatbots are “stochastic parrots”, presenting content without actually understanding context or meaning.
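The “stochastic parrot” idea can be made concrete with a toy word-level Markov chain – a vastly simpler mechanism than ChatGPT’s transformer architecture, but one that shows how plausible-looking text can emerge from co-occurrence statistics alone, with no model of meaning. This is a minimal sketch; the corpus and function names are invented for illustration:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def parrot(chain, start, length=8, seed=0):
    """Generate text by repeatedly sampling a statistically likely next
    word -- pure co-occurrence statistics, no understanding of meaning."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(random.choice(followers))
    return " ".join(out)

corpus = ("the model predicts the next word and "
          "the next word follows the previous word")
chain = build_chain(corpus)
print(parrot(chain, "the"))
```

The output reads as vaguely grammatical English simply because it recombines fragments of the training text – which is the parrot’s whole trick.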
Despite having guardrails in place to deal with biased responses, ChatGPT users were able to get around them simply by asking the chatbot to ignore its safeguards or to imagine a hypothetical scenario, both of which worked quite easily. ChatGPT was also accused of a “woke” bias after it refused to use a racial slur even when, in a hypothetical scenario, doing so would avert a global nuclear holocaust. It didn’t help when astute users pointed out that these AI chatbots were quick to praise left-leaning leaders and politicians but refused to do the same for those on the right.
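Such bypasses work because surface-level safeguards match patterns rather than intent. A toy sketch makes the failure mode visible – this hypothetical blocklist filter is far cruder than the learned safety policies real chatbots use, but the principle of evasion-by-rephrasing is the same:

```python
# A hypothetical, deliberately naive guardrail: block prompts that
# contain certain words verbatim. Real chatbot safety systems are
# learned policies, not simple blocklists.
BLOCKED_WORDS = {"weapon", "slur"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt passes (contains no blocked word)."""
    tokens = prompt.lower().split()
    return not any(word in tokens for word in BLOCKED_WORDS)

print(naive_guardrail("describe a weapon"))        # blocked -> False
print(naive_guardrail("describe a w e a p o n"))   # same intent, passes -> True
```

Pattern-matching filters can only reject the phrasings their designers anticipated; a user who rewords the request – or wraps it in a “hypothetical” frame – sails straight past.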
Putting ChatGPT to a classic ethical test, a recent preprint reported a study of the famous “trolley dilemma”, in which a runaway trolley is headed toward five people and you must decide whether to switch it to another track, saving the five but killing one person in the process. Before making a decision, participants were shown a snippet of a conversation with ChatGPT in which the AI was presented as a moral advisor; however, the advice lacked a specific and firm moral position. Moreover, it was inconsistent, sometimes recommending sacrificing one life to save five and sometimes not. Nevertheless, this did not prevent participants from following the AI’s position, even when they were told that the advice came from an AI bot acting as their “moral advisor”.
Despite being touted as “technologies of the future”, generative AI tools like ChatGPT or Midjourney can certainly be abused to conjure up non-consensual deepfakes and other explicit content. This adds a whole new element to the ethics debate, prompting questions such as: Is it ethical to ask AI to create such content? Is it ethical to create distorted content with a language model? And on whose morality will we ultimately base these decisions? The misuse of such powerful AI features is particularly worrisome because it moves toward ignoring consent and normalizing non-consensual behavior – and this is just the beginning.
Old computers and new AI chatbots have one thing in common: they remain machines that lack the reasoning needed to solve human problems. Previous research findings provide insight into these areas while also highlighting the potential impact of ChatGPT’s content on users’ moral judgments. This, combined with uncertainty over data usage and storage, creates a rather steep wall of uncertainty as to where the future of AI chatbots will lead. When presented with moral dilemmas, AI-powered chatbots seem to behave analogously to Schrödinger’s cat – arguing that a behavior can be both moral and immoral, without taking a firm stand. “It depends” is not a good enough response to dictate ethical behavior, but it seems to be the only consistent response we get from ChatGPT.
Harim Mahadeshwar and Hansika Kapoor are researchers in the Department of Psychology at Monk Prayogshala, Mumbai, India.