As the AI hype train keeps chugging along, you might have noticed that those who raise concerns about the dangers of AI often get dismissed or even labeled as “propagandists” for questioning the dominant narratives. “Propagandist” is a strange label for simply having an opinion, unless, I guess, you want to label everyone in the world a propagandist. The hype has gotten so wild that it has brought Audrey Watters back to commenting on Ed-Tech. Some of you really need to take a long, hard look at what you did to get us here.
Despite the dismissive hand-waving, or even the odd claim that there is somehow money to be made in being anti-AI (despite clear indications of where all the investment, grants, speaking opportunities, etc. are mostly going), very real concerns are still going largely unaddressed in most of the hype. There are (at least) four real-world concerns that many wish the pro-AI crowd would address more often and more seriously:
- The actual dangers of AI, such as AI-driven cars killing people, AI-driven bots harassing people online, bad AI psychological advice being handed out, and the vast amounts of natural resources currently being consumed by data centers.
- Inaccurate descriptions of what AI actually is, or even what counts as AI, leading to bad business decisions that loop back to #1 in the form of lost jobs, harder-to-obtain jobs, and so on.
- Overblown descriptions of the ability of AI to replicate human activities; see #2 for where that leads.
- Interest in AI and the effectiveness of AI tools being overblown and/or inaccurately reported, usually erasing the difficulties these tools cause for users, leading of course back to #2 and #1.
A recent article by Eryk Salvaggio (who is himself a pro-AI user) finally seems to break through the hype and BS from the pro-“AI is the future of everything” crowd. In Challenging The Myths of Generative AI, Salvaggio examines the problematic descriptions and metaphors that are very popular in AI circles:
- Control Myths, including the “AI makes people more productive” myth (showing how the big wigs at the top who force AI on others view it as increasing productivity, while the workers who actually use it often don’t) and the “better prompts produce better results” myth.
- Intelligence Myths, such as the “AI is learning” myth (and its various flavors) and the “AI is creative” myth (one that artists and creative types everywhere are getting very frustrated with), which has also been explored here as well.
- Futurist Myths, including the “AI will get better with more data and time” myth and the “complex behaviors will soon emerge from AI” myth.
Again, Salvaggio is not anti-AI; he even talks about how he uses AI. This is a common mistake made by those who criticize those of us who criticize AI: assuming we don’t understand AI because we don’t like it or don’t look at it the same way they do. A rookie mistake, of course, but one that many seasoned commentators are making.
Of course, even though Salvaggio provides some sources to back up his points, his goal is really to address the language we use as a society about AI, and he offers some better myths to use at the end. It should be noted, however, that his points are also coming up in data-driven articles.
For example, in Watching the Generative AI Hype Bubble Deflate by David Gray Widder and Mar Hicks, you see almost all of the same points addressed, but this time with data and information from many sources (it is also worth noticing who is not quoted in that article). Widder and Hicks give a much more dire reason to care about these problems: over-investing in a hype bubble could have dangerous effects on the environment (as it has in other hype cycles in the past).
After I wrote about the three possible ways that I saw the AI hype dying, it looks like the first option is what is happening… so far. A harder crash or even a sudden death is always possible, but so far I see no signs of either. No single option is really that clear yet, though.
So is concern for the harms of AI propaganda? Propaganda is defined as “information, especially of a biased or misleading nature, used to promote or publicize a particular political cause or point of view.” All sides have a bias and want to promote a point of view. The side that is concerned about the harms of AI feels that it is trying to cut through the misleading myths to promote more complex, accurate mythology that can “help us understand these technologies” and “animate how designers imagine these systems,” as Salvaggio says. AI might even be too complex for myths, so it is long past time to stop treating concerns over AI as propaganda, or even as mere myths. We need real talk and real action on the problems that are quickly arising from the unfettered adoption of unproven technology.
Matt is currently an Instructional Designer II at Orbis Education and a Part-Time Instructor at the University of Texas Rio Grande Valley. Previously he worked as a Learning Innovation Researcher with the UT Arlington LINK Research Lab. His work focuses on learning theory, Heutagogy, and learner agency. Matt holds a Ph.D. in Learning Technologies from the University of North Texas, a Master of Education in Educational Technology from UT Brownsville, and a Bachelor of Science in Education from Baylor University. His research interests include instructional design, learning pathways, sociocultural theory, heutagogy, virtual reality, and open networked learning. He has a background in instructional design and teaching at both the secondary and university levels and has been an active blogger and conference presenter. He also enjoys networking and collaborative efforts involving faculty, students, administration, and anyone involved in the education process.