As The Verge notes, Norman is only an extreme version of something with equally horrifying effects that is far easier to imagine happening: "What if you're not white and a piece of software predicts you'll commit a crime because of that?"
This AI is dubbed Norman after Norman Bates, the antagonist of the Alfred Hitchcock classic "Psycho". Norman was trained to perform image captioning, a deep learning method used to generate a description of an image.
Norman is an AI experiment born from "extended exposure to the darkest corners of Reddit", according to MIT, designed to explore how datasets and bias can influence the behavior and decision-making capabilities of artificial intelligence. All the image data MIT fed Norman came from what it calls "an infamous subreddit" that the researchers refuse to name specifically due to its graphic content. The blunt, simple statements Norman uses to describe the Rorschach inkblots do make it read like a poster on such a subreddit.

"Then, we compared Norman's responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders", the team explains. What a "normal" AI sees as a bird on a wire, Norman identifies as a man being electrocuted to death. Nice, eh? Similarly, a standard AI saw a "photo of a baseball glove" in the same inkblot where Norman saw a "man murdered by machine gun in broad daylight". Where one AI saw a vase with flowers, Norman saw a man shot in front of his "screaming" wife. Per MIT, Norman's psychopathic tendency "represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms".
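The mechanism MIT is demonstrating can be sketched in a few lines. The following toy Python example is not MIT's actual model (which is a deep image-captioning network); it is a deliberately simple nearest-neighbour "captioner", with made-up feature vectors and captions, showing that the same code trained on differently labeled data describes the identical ambiguous input in completely different ways. The bias lives in the data, not in the algorithm.

```python
def train_captioner(examples):
    """examples: list of (feature_vector, caption) pairs.
    Returns a captioner that answers with the caption of the
    nearest training example (a stand-in for a trained network)."""
    def caption(features):
        def sq_dist(ex):
            vec, _ = ex
            return sum((a - b) ** 2 for a, b in zip(vec, features))
        return min(examples, key=sq_dist)[1]
    return caption

# Two training sets with identical inputs but different labels,
# standing in for MSCOCO-style captions vs. a violent subreddit.
neutral_data = [((0.2, 0.8), "a bird sitting on a wire"),
                ((0.9, 0.1), "a vase with flowers")]
biased_data  = [((0.2, 0.8), "a man being electrocuted"),
                ((0.9, 0.1), "a man shot in front of his wife")]

normal_ai = train_captioner(neutral_data)
norman = train_captioner(biased_data)

# The same ambiguous "inkblot" input goes to both models.
inkblot = (0.25, 0.75)
print(normal_ai(inkblot))  # -> a bird sitting on a wire
print(norman(inkblot))     # -> a man being electrocuted
```

Both captioners run the identical procedure; only the captions they were trained on differ, which is exactly the point the Norman experiment makes at scale.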
In the first inkblot, a normally trained AI saw "a group of birds sitting on top of a tree branch". The objective of the Norman AI is to demonstrate that artificial intelligence does not become unfair and biased unless biased data is fed into it. AI can also be used for good, as when MIT created an algorithm called "Deep Empathy" a year ago to help people relate to victims of disaster.