Terminator 2: Judgment Day has been on my mind a lot lately, for two reasons.
One, I am planning the movie watch-list for my annual post-semester winter-holiday decorating and gingerbread baking.
Two, SkyNet’s sentience and destruction of humanity are feeling more and more real these days.
Well, perhaps not SkyNet exactly. But it certainly seems like technology is hurtling us toward something, and it ain’t good.
Andy Baio’s critique of art created by OpenAI’s DALL-E 2, “Opening the Pandora’s Box of AI Art,” touched on a visceral unease with the pace, acceptance, and championing of Artificial Intelligence, Machine Learning, and data collection for algorithms. At the heart of this critique are questions, and very good ones at that, about the ethics of AI-generated “art.”
- Is it ethical to train an AI on a huge corpus of copyrighted creative work, without permission or attribution?
- Is it ethical to allow people to generate new work in the styles of photographers, illustrators, and designers without compensating them?
- Is it ethical to charge money for that service, built on the work of others?
These, in my mind, all have straightforward answers. No! It is not ethical. But what makes these questions good and very important are not what they are asking but why they need to be asked.
Ethics are not being fully considered in the development of these technologies, or, it seems at best, a perverted, self-serving form of ethics is being pushed by the technocratic/techno-utopist developers.
In a 2022 interview with the New Yorker, Sam Altman, the CEO of OpenAI (the company that created DALL-E), said, “There was this belief that creativity is this deeply special, only-human thing…maybe not so true anymore…[DALL-E 2 is] an extension of your own creativity.”
This is deeply unsettling.
There was a belief?
As in, no longer?
What does he mean by this?
What does “was” indicate about the direction technocrats are trying to push humanity (for their own gain, ‘cause nothing tech in this world is free)?
What are we training Artificial Intelligence and algorithms for?
Why are this and similar technologies being developed, refined, and pushed? I have a hard time believing it is solely for the few “squirts” of endorphins delivered when DALL-E 2 spits out a Basquiat-esque bowl-of-soup dimensional portal.
Pandora’s Box of Ethical Nightmares
These ethical questions are not limited to AI art. Take, for example, TikTok, owned by the Chinese company ByteDance. In 2021, user agreements were quietly updated to inform people that their “faceprints” and “voiceprints” were now being collected and added to a database (though who really believes that was not happening before?). The social media company claims this is not true and that it is not actively collecting this information. However, Chinese companies have a legal obligation to hand over any data or information to the ruling Chinese Communist Party, and, in 2021, it became law that all Chinese companies must scrape as much data as possible.
What are we training Artificial Intelligence and algorithms for? The technology that makes AI art and social media possible can easily be (and has easily been) weaponized.
There are many people attempting to respond to the ethics of these emerging technologies and doing their best to inform the public. Ryan Cordell’s report to the Library of Congress on the state of Machine Learning and Libraries is an example of just such an effort. In addition to critically reviewing ethical uses and applications, as well as ethical abuses, it makes a distinct call to educate the public on Machine Learning literacy.
But even this is concerning. In my limited time in “library-land” (I joke that the ink on my diploma has hardly dried) and my experience working with high-school students, media and internet literacy is an enormous issue. It is difficult to empower people to be critical consumers of information and media on the internet. The CRAAPP test used to be a handy, go-to acronym for this. CRAAPP stands for:
- Currency: the timeliness of the information
- Relevance: the importance of the information for your needs
- Authority: the source of the information
- Accuracy: the reliability, truthfulness, and correctness of the content
- Purpose: the reason the information exists
- Pay: the organizations or individuals paying for this information to be published or who may make money from it
But in the age of social media manipulation, algorithms, echo chambers, and the removal of humans and human agency from the creation of media, does this test even hold up? How can people walking down the street, worried about inflation and making rent, be empowered to care about this?
The algorithms seem to have rendered so many people capable of information retrieval while massively reducing their information-seeking capabilities. And why would anyone need to seek information when they can just retrieve it from an algorithm?
Should I wear a tin-foil hat?
This post has taken a dark turn (and I didn’t even get into GANs, deepfakes, the incredibly uncanny repulsion I feel toward CGI, social media use, neo-liberalism, or the many conversations Sam Harris has had with experts about the dangers of AI, machine learning, and algorithms). These technologies are not entirely bad, but they are not nearly as great, or even “ok,” as technocrats want the public to think.