Or maybe it’s not. I recently finished going through the AI For Everyone course on Coursera, taught by Andrew Ng. It’s a very high-level look at the current landscape of AI. I didn’t really run into anything that anyone who has followed AI for any length of time wouldn’t already know, but it does give you a good overview if you just haven’t had time to dive in, or maybe need a good structure to tie it all together.
One of the issues Ng addressed in one session was the possibility of another AI winter coming. He described it this way:
This may be true, but there are some important caveats to consider here. People said these exact same things before the previous AI winters as well. Many of the people who invest in AI see it as an automation tool, and that was true in the past too. Even with the limitations of earlier AI implementations, proponents still saw tremendous economic value and a clear path for AI to create more value across multiple industries. The problem was that soon after AI was implemented, investors lost their faith. The shortcomings of AI were too great for the perceived economic value to overcome… and for all of the improvements in AI since then, it still hasn’t really escaped the same core limitations it has always had.
If you are not sure what I am talking about, you can look around and see for yourself. Just a quick look at Twitter (outside of your own timeline, that is) reveals a lot of disillusionment and concern about outputs. My own students won’t touch ChatGPT because the output is usually bad, and it takes longer to fix it than it does to just do the work on their own. This has always been the problem that caused past AI winters – companies implement it and are not happy with the results. I know of advertising agencies that are banning the use of AI because their internal polls show customers don’t trust AI-generated content. People who once advocated for AI implementation are starting to back away due to quality and safety concerns.
That has always happened, and it always will, with AI. The question is whether or not the financial numbers hold up well enough to overcome the loss of faith that is happening and will continue to grow. Anyone who says either outcome is obvious is either underestimating how unready current AI is for real-world usage, or overestimating the economic impact that the inevitable loss of faith will generate.
It’s basically a race between mounting disillusionment and increasing economic value. Whichever one outpaces the other in the next year or two will determine whether we get another AI winter or not. I will say that losing faith in tech is very easy (AOL, MySpace, Second Life, Google Wave, Bitcoin, etc., etc., etc.), while economic value is hard to build when there are problems baked into the system (hallucinations, bias, errors, etc.). The logical conclusion is to brace yourself: the AI winter is coming. But never underestimate the power of a tech bro to squeeze economic value out of a dying idea at the last minute, either.
Matt is currently an Instructional Designer II at Orbis Education and a Part-Time Instructor at the University of Texas Rio Grande Valley. Previously he worked as a Learning Innovation Researcher with the UT Arlington LINK Research Lab. His work focuses on learning theory, Heutagogy, and learner agency. Matt holds a Ph.D. in Learning Technologies from the University of North Texas, a Master of Education in Educational Technology from UT Brownsville, and a Bachelor of Science in Education from Baylor University. His research interests include instructional design, learning pathways, sociocultural theory, heutagogy, virtual reality, and open networked learning. He has a background in instructional design and teaching at both the secondary and university levels and has been an active blogger and conference presenter. He also enjoys networking and collaborative efforts involving faculty, students, administration, and anyone involved in the education process.