Is Concern for the Harms of Ai Propaganda, Myth, or Something Else?

As the Ai hype train keeps chugging along, you might have noticed that those who raise concerns about the dangers of Ai often get dismissed or even labeled as “propagandists” for questioning the dominant narratives. “Propagandist” is a weird thing to call someone for having an opinion, unless, I guess, you want to label everyone in the world a propagandist. The hype has gotten so wild that it has brought Audrey Watters back to commenting on Ed-Tech. Some of you really need to take a long hard look at what you did to get us here.

Despite the dismissive hand-waving, or even weird claims that there is somehow money to be made in being anti-Ai (despite clear indications of where all the investment, grants, speaking opportunities, etc. are mostly going), very real concerns are still largely not being addressed in most of the hype. There are (at least) four real-world concerns that many wish the pro-Ai crowd would address more often and more seriously:

  1. The actual dangers of Ai, such as Ai-driven cars killing people, Ai-driven bots harassing people online, bad Ai psychological advice being handed out, vast amounts of natural resources currently being consumed by data centers, etc, etc.
  2. Inaccurate descriptions of what Ai actually is, or even what counts as Ai, leading to bad business decisions that circle back to #1 in the form of lost jobs, harder-to-obtain jobs, etc.
  3. Overblown descriptions of the abilities of Ai to replicate human activities, see #2 for where that leads.
  4. Interest in Ai and the effectiveness of Ai tools being overblown and/or inaccurately described, usually erasing the difficulties they cause for users, leading of course back to #2 and #1.

A recent article by Eryk Salvaggio (who is a pro-Ai user) finally seems to break through the hype and BS from the pro-“Ai is the future of everything” crowd. In Challenging The Myths of Generative Ai, Salvaggio examines the problematic descriptions and metaphors that are very popular in Ai circles:

  1. Control Myths, including the “Ai makes people more productive” myth (showing how the big wigs at the top who force Ai on others view it as increasing productivity, while the workers who actually use it often don’t) and the “better prompts produce better results” myth.
  2. Intelligence Myths, such as the “Ai is learning” myth (and its various flavors) and the “Ai is creative” myth (one that artists and creative types everywhere are getting very frustrated with) – which has also been explored here.
  3. Futurist Myths, including the “Ai will get better with more data and time” myth and the “complex behaviors are soon to emerge from Ai” myth.

Again, Salvaggio is not anti-Ai – he even talks about how he uses Ai. This is a common mistake made by those who criticize those of us who criticize Ai: assuming we don’t understand Ai because we don’t like it or don’t look at it the same way. Rookie mistake, of course, but one many seasoned commentators are making.

Of course, even though Salvaggio provides some sources to back up his points, his goal is really more about addressing the language we use as a society about Ai. He even offers some better myths to use at the end. However, it should be noted that his points are also coming up in data-driven articles.

For example, in Watching the Generative Ai Hype Bubble Deflate by David Gray Widder and Mar Hicks, you see almost all of the same points addressed, but this time with data and information from many sources (it is important to notice who is not quoted in this article). Widder and Hicks give a much more dire reason to care about these problems, as the effects of over-investing in a hype bubble could be dangerous for the environment (as they have been for other hype cycles in the past).

After I wrote about the three possible ways that I saw the Ai hype dying, it looks like the first option is what is happening… so far. A harder crash or even sudden death are always possible, but so far I see no signs of that happening. But no one option is really that clear yet.

So is concern for the harms of Ai propaganda? Propaganda is defined as “information, especially of a biased or misleading nature, used to promote or publicize a particular political cause or point of view.” All sides have a bias and want to promote a point of view. The side that is concerned about the harms of Ai feels that it is trying to cut through the misleading myths to promote a more complex, accurate mythology that can “help us understand these technologies” and “animate how designers imagine these systems,” as Salvaggio says. Ai might even be too complex for myths, so it is long past time to stop looking at concerns over Ai as propaganda or even mere myths. We need real talk and real action on the problems that are quickly arising from the unfettered adoption of unproven technology.

Pokemon Go and the Gimmickification of Education

I almost dread looking at my social media feed today. Pokemon Go (GO? G.O.? (wake me up before you) Go-Go?) received a huge amount of media attention this weekend, apparently already spawning posts about how it will revolutionize education and tweets about how we need what it produces in education:

https://twitter.com/audreywatters/status/751938460466880512

All I could think about is: how did we get to this point? Every single tech trend turns into a gimmick to sell education mumbo jumbo kitsch tied to every cool, hip trend that pops up on the social media radar. I guess I shouldn’t have been that surprised once Block-chain became educational, or Second Life was used to deliver classes, or Twitter replaced LMSs, or MySpace became the University of the future, or DVDs saved public schools, and so on and so forth. I bet at some point thousands of years ago there was a dude in a white toga standing up in an agora somewhere telling Plato how chariots would revolutionize how he taught his students.

I’m all for examining new trends through an educational lens, but every time I just want to say “too far, Ed-Tech®, too far!”

We all know education needs to change. It always has been changing, it always will, and will always need to have a critical lens applied to how and why it is changing. But with every new technology trend that gets re-purposed into the next savior of education, I can’t stop this gnawing feeling that our field is becoming a big gimmick to those outside of it.

A gimmick is basically just a trick intended to attract attention. One or two seem harmless enough. Well, not that harmful, at least? But once everything that comes down the pipe starts becoming this trick to get people to look at education, the gimmick gets old. People are still asking what happened to Second Life, to Google Wave, to you name the trend. After a while, they stop buying into the notion that any of us know what we are talking about. Just think of the long-term effect on the larger discourse of so many people declaring so many things to be the savior of education, only to abandon each one after a year or two.

The problem with the hype cycle of Ed-Tech is that it buries the real conversations that have been happening for a long time about whatever the hype-du-jour is. Do you want the Pokemon Go for education, where students are engaged, active, social, etc.? We already have a thousand projects that have done that to some degree. Those projects just can’t get attention because everyone is saying “Pokemon Go will revolutionize education!” (well, at least those who say that unironically – sarcastic commentary that apparently went over many people’s heads not included).

(see also “Pokemon GO is the xMOOC of Augmented Reality”)