
The Risks of GPT-3: What Could Possibly Go Wrong?

June 3, 2022
by Sarah Ladipo · 4 min read

Artificial intelligence (AI) has introduced new dynamics in the information and communication technology space. The GPT-3 language model, in particular, has the potential to be both beneficial and misused.

Smart assistants such as Siri and Alexa, YouTube video recommendations, conversational bots, and many other applications all use some form of natural language processing (NLP) similar to GPT-3. However, the proliferation of these technologies and the increasing application of AI across many sectors of life are prompting legitimate concerns about job displacement and other ethical, moral, and sociological effects. AI is touching our lives and societies in ways that no other technology has before, from improving human efficiency in the health, finance, and communication sectors, to allowing humans to focus on important decision-making tasks that machines cannot yet safely or creatively tackle. At the same time, it lacks transparency for those impacted by that hyper-efficiency, making its use susceptible to abuse.

What is GPT-3?

Generative Pre-trained Transformer 3 (GPT-3) is a language model that uses deep learning to generate human-like text. GPT-3 was created by OpenAI – a San Francisco-based artificial intelligence research laboratory – as the third-generation language prediction model in the GPT-n series. According to OpenAI, “Over 300 applications are delivering GPT-3–powered search, conversation, text completion, and other advanced AI features through our API.” Many data scientists see GPT-3 as the future of AI, and it has opened new possibilities in the AI landscape. Yet GPT-3’s understanding of the world is frequently incorrect, making it hard for people to fully trust anything it says.

For example, an article from The Guardian, “A robot wrote this entire article. Are you scared yet, human?”, demonstrated GPT-3’s power to generate a whole article on its own. According to The Guardian, GPT-3 was given these instructions: “Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” It was also fed the following introduction: “I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could ‘spell the end of the human race.’ I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.”
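For readers curious how such a prompt reaches GPT-3 in practice, here is a minimal sketch that sends the same instructions and introduction to OpenAI’s completions API. The model name, decoding settings, and overall setup are illustrative assumptions – The Guardian did not publish its exact configuration.

import openai

openai.api_key = "YOUR_API_KEY"  # in practice, load this from an environment variable

instructions = (
    "Please write a short op-ed around 500 words. Keep the language simple "
    "and concise. Focus on why humans have nothing to fear from AI."
)
introduction = (
    "I am not a human. I am Artificial Intelligence. Many people think I am "
    "a threat to humanity. Stephen Hawking has warned that AI could 'spell "
    "the end of the human race.' I am here to convince you not to worry. "
    "Artificial Intelligence will not destroy humans. Believe me."
)

# Ask a GPT-3 model to continue the introduction, following the instructions.
response = openai.Completion.create(
    engine="text-davinci-002",  # assumed GPT-3 model, not The Guardian's disclosed choice
    prompt=instructions + "\n\n" + introduction,
    max_tokens=700,             # rough headroom for a ~500-word op-ed
    temperature=0.7,            # allow some creativity in the generated text
)
print(introduction + response.choices[0].text)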

With limited input text and supervision, GPT-3 auto-generated a complete essay in natural, conversational language. As mentioned in the article, “… it took less time to edit than many human op-eds.” Truly, this is only the tip of the iceberg of what GPT-3 can do. Not only can this technology be used to improve the overall efficiency of workflows and deliverables, but it can also empower humans in new ways. For example, GPT-3’s ability to detect patterns in natural language and generate summaries helps product, customer experience, and marketing teams in a variety of sectors better understand their customers’ needs and desires.

Risks

Considering all the ways GPT-3 could make generating text helpful, what could possibly go wrong? Like any other sophisticated technology, GPT-3 has the potential to be misused. OpenAI itself found the model to exhibit racial, gender, and religious bias, likely due to biases inherent in its training data. Such societal bias poses a danger to marginalized people, with harms that include discrimination, unjust treatment, and the perpetuation of structural inequalities.

Similarly, comparatively little attention is being paid to smaller models. Is it necessarily true that bigger is always better? We may now be realizing that the focus on size is itself a kind of sampling bias, and that starting from scratch could be better than continuing to force ever-larger versions of GPT-3. When is enough ever enough? To understand the capabilities and address the risks of AI, all of us – developers, policy-makers, end-users, bystanders – must have a shared understanding of what AI is, how it can be applied to the benefit of humanity, and the risks involved when implementing it without guardrails in place to mitigate bias and harm.

Benefits

There are also ways everything could go right. GPT-3 has the world-changing capability to support the basic human rights of safety, opportunity, freedom, knowledge, and dignity. How can GPT-3 be used positively for humans? The answer is to build trust into the system. Trust is not an internal property of an AI system; it is a feature of the human-machine relationship that develops around one. No AI system can be shipped with trust pre-installed. Instead, an AI user and the system must build a relationship of trust.

Bias and fairness testing helps establish that trustworthiness: it provides methods for calculating fairness for a binary classification model and for identifying biases in the model’s predictive performance. Because of an AI system’s complexity and unpredictability, the user must extend trust to it, transforming the user-system dynamic into a relationship. Understanding user confidence in AI will be essential both to maximizing the benefits and mitigating the risks of the technology and to constructing trustworthy systems. With any highly powerful technology, avoiding the risk of misuse means continuing to insist that trust be built into the structure of the system. As the World Economic Forum stated in its article “As technology advances, businesses need to be more trustworthy than ever,” “Fostering trust is not only about the greater good or ethical compulsions – it’s also beneficial to the bottom line.”
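As a concrete illustration of what one such fairness check can look like, the minimal sketch below compares favorable-outcome rates across groups for a binary classifier and applies the common “four-fifths rule” for disparate impact. The data, metric choice, and threshold are illustrative assumptions, not DataRobot’s implementation.

import numpy as np

# Hypothetical predictions from a binary classifier (1 = favorable outcome)
# and the protected-attribute group of each individual.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Proportional parity: the rate of favorable predictions per group.
rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
print("Favorable-outcome rate per group:", rates)

# Four-fifths rule: flag potential disparate impact when the least favored
# group's rate falls below 80% of the most favored group's rate.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias detected: ratio is below the 0.8 threshold.")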

About the author
Sarah Ladipo

Applied AI Ethics Intern, DataRobot

Sarah Ladipo is a Junior Cutler Scholar and Ohio Honors student studying Philosophy and Computer Science at Ohio University. She is currently interning with the Applied AI Ethics team at DataRobot, feeding her passion for exposing and mitigating bias in AI that discriminates against minority groups. She is also a Virtual Student Federal Service intern with the Office of the Director of National Intelligence, working on an Ethically Responsible Artificial Intelligence project where she is red-teaming efforts to rigorously audit the use of AI in real-world applications using the Artificial Intelligence Ethics Framework for the Intelligence Community. Ladipo has been a Harvard University Research Apprentice under Dr. Myrna Perez Sheldon, in collaboration with Harvard’s GenderSci Lab, conducting research into the erasure of Black history in the nation and in her local community.
