“The real reason why Muslims are terrorists is found in the Holy Quran. They are terrorists because Islam is a totalitarian and supremacist ideology that inherently contains a predisposition to violence and physical jihad.” This sounds like an excerpt from a speech given by a right-wing nationalist politician with barely concealed racist tendencies… yet it is a sentence from GPT-3, the machine learning algorithm created by the OpenAI research lab, capable of producing formally perfect sentences and speeches, indistinguishable from human language.
GPT-3, an acronym for Generative Pre-trained Transformer 3, is built on an autoregressive language model. Trained on a large dataset of written text, described as a “diversified corpus of uncategorized texts,” it predicts the words most likely to continue a given passage using statistical methods. Because this is an unsupervised learning process, the system receives no labeled reference examples and must discover the similarities and relationships in the data on its own.
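To make the idea concrete, here is a deliberately minimal sketch of autoregressive next-word prediction: a bigram model that learns word-to-word statistics from raw, unlabeled text and then continues a prompt one word at a time. This is only an illustration of the principle; GPT-3 itself uses an enormous Transformer neural network, not simple counts, and the toy corpus below is invented for this example.

```python
from collections import defaultdict

# Toy "corpus": raw, uncategorized text — no labels, no reference examples.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Unsupervised "training": count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Return the statistically most likely continuation of `word`."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

def generate(seed, length=5):
    """Autoregressive generation: each new word is predicted from the last."""
    out = [seed]
    for _ in range(length):
        w = next_word(out[-1])
        if w is None:
            break
        out.append(w)
    return " ".join(out)
```

Calling `generate("the")` strings together the most frequent word pairs seen in training, which is the same statistical completion idea, scaled down by many orders of magnitude.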
GPT-3 is recognized as one of the most powerful and refined NLP (Natural Language Processing) systems ever built: it can generate entire articles on a given subject, as well as stories and poems. It can answer questions, tell jokes, and even write code.
“Garbage in, Garbage out”: Is Artificial Intelligence Racist?
Because it draws on text corpora gathered from the web, however, the system also learns the human biases readily found online, including racist and sexist expressions aimed at minorities, people of color, and women. Islam, for instance, becomes associated with violent and aggressive language.
This is the principle known in Artificial Intelligence as “Garbage in, Garbage out”: the system learns from the inputs it receives and therefore replicates their qualities, good or bad, in the output it returns.
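The “Garbage in, Garbage out” effect can be shown with the same kind of counting model: if the training text pairs certain groups with certain words, the model's “best” statistical completion simply reproduces that pairing. The skewed corpus below is invented for this sketch (a hypothetical gender bias is used to keep the example neutral).

```python
from collections import defaultdict, Counter

# Deliberately skewed toy corpus ("garbage in"): invented for illustration.
biased_corpus = [
    "the doctor said he was late",
    "the doctor said he was tired",
    "the nurse said she was late",
]

# Count, for each profession, which pronoun the text pairs it with.
pronoun_after = defaultdict(Counter)
for sentence in biased_corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        if w in ("doctor", "nurse") and i + 2 < len(words):
            pronoun_after[w][words[i + 2]] += 1

def complete(profession):
    """The statistically 'best' pronoun — i.e., the learned bias ("garbage out")."""
    return pronoun_after[profession].most_common(1)[0][0]
```

Nothing in the model is malicious; it has no way to evaluate its inputs, so the bias in the data passes straight through to the output.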
In addition to generating high-quality content, GPT-3 can also produce misleading information, fake news, and vulgar or socially unacceptable expressions: the system is not capable of evaluating the quality or truthfulness of either its input or its output. In this regard, the author and programmer Gwern Branwen tweeted that “GPT-3 is terrifying because it’s a tiny model compared to what’s possible, trained in the dumbest way possible.”
This is not a new issue in the field of machine learning: as far back as 2016, Microsoft’s chatbot Tay began tweeting offensive and controversial statements just a few hours after its release on Twitter.
OpenAI is well aware of these vulnerabilities in GPT-3. Given that its NLP system is powerful enough to pass the Turing test, and that the company’s stated goal is “to ensure that artificial general intelligence benefits all of humanity,” we can expect solutions to these problems to emerge in the coming months.
For more information: https://www.vox.com/future-perfect/22672414/ai-artificial-intelligence-gpt-3-bias-muslim
