You might have noticed this Twitter thread about improvements in AI-generated artwork. Well, if you are still on Twitter, that is. Here is the thread – well, at least until You-Know-Who “MySpaces” Twitter out of service:
If you last checked in on AI image makers a month ago & thought “that is a fun toy, but is far from useful…” Well, in just the last week or so two of the major AI systems updated.
You can now generate a solid image in one try. For example, “otter on a plane using wifi” 1st try: pic.twitter.com/DhiYeVMEEV
— Ethan Mollick (@emollick) November 26, 2022
So let’s take a look at this claim that AI-generated artwork is coming to disrupt people’s jobs in the very near future. First of all, yes it is really cool to be able to enter a prompt like that and get results like this. There is obviously a lot of improvement in the AI. It actually looks useful now. But saying “a less capable technology is developing faster than a stable dominant technology (human illustration)”…?
[tweet 1596581772644569090 hide_thread=’true’]
Whoa, now. Time for a reality check. AI art is just now somewhat catching up with where human art has been for hundreds of years. AI was programmed by people who had thousands of years of artistic development easily available in a textbook. So saying that it is “developing faster”? With humans already able to create photo-realistic drawings as well as illustrate any idea that comes to mind – where is there left to “develop” in art?
That is like a new car company saying they are “developing new cars faster than the stable industry.” Or someone saying that they have blazed new technology in travel because they can cross the country faster in a car than a horse and wagon did in the past. The art field had to blaze trails for thousands of years to get where it is, and the AI versions are just basically cheating to play catch up (and it is still not there yet).
The big question is: can this technology come up with a unique, truly creative piece of artwork on its own? The answer is still “no.” And beating the Lovelace Test is not proof that the answer is “yes,” because the Lovelace test is not really a true test of creativity.
[tweet 1596680005638975489 hide_thread=’true’]
Yes, all artists stand on the shoulders of others, but there is still an element to creativity that involves blending those influences into something else that transcends being strictly derivative of existing styles. Every single example of AI artwork so far has been very derivative of specific styles of art, usually on purpose because you have to pick an existing artistic style just to get your images in the first place.
But even the example above of an “otter making pizza in Ancient Rome” is NOT a “novel, interesting way” by the standards that true artists use. I am guessing that Mollick is referring to the Lovelace 2.0 Test, whose creator stated that “I didn’t want to conflate intelligence with skill: The average human can play Pictionary but can’t produce a Picasso.”
Of course, the average artist can’t produce an original painting on the level of Picasso either (unless they are just literally re-painting a Picasso, which many artists do to learn their craft). The people working on this particular AI Art Generator have basically advanced the skill of their AI to where it can pass the Lovelace 2.0 Test without really becoming truly creative. And honestly, “Draw me a picture of a man holding a penguin” is a sad measure of artistic creativity – no matter how complex you make that prompt as the test goes along.
But Mollick’s claims in this thread are just an example of people not understanding the field that they say is going to be disrupted. For example, marveling over correct lighting and composition? We have had illustration software that could do this correctly for decades.
[tweet 1596621251627335680 hide_thread=’true’]
Artists will tell you that in real-world situations, the time-consuming part of creating illustrations is figuring out what the human who wants the art… actually wants. “The otter just looks wrong – make it look right!” is VERY common feedback. The client probably also has several specific details in mind about the otter, the plane, the positions of things, etc. that have to be present in any artwork they want. Then there are all of the things they had in their head that they didn’t write down. Pulling those details out of clients is what professional artists are trained to do.
This is where AI in art, education, etc. always falls apart: programmers always have to start with the assumption that people actually know what they want out of the AI generator in the first place. The clients that professionals work with rarely ever want something as simple as “otter on a plane using wifi.” The reality is that they rarely even have that specific or defined an idea of what they want in the beginning. Figuring out what people actually want is a difficult skill – one that the experts in AGI/strong AI/etc. tell us is probably never going to be possible for AI.
So, is this a cool development that will become a fun tool for many of us to play around with in the future? Sure. Will people use this in their work? Possibly. Will it disrupt artists across the board? Unlikely. There might be a few places where really generic artwork is the norm, and the people that were paid very little to crank it out will be paid very little to input prompts. Look, Photoshop and asset libraries made creating company logos very, very easy a long time ago. But people still don’t want to take the 30 minutes it takes to put one together, because thinking through all the options is not their thing. You still have to think through those options to enter an AI prompt. And people just want to leave that part to the artists. The same thing was true about the printing press. Hundreds of years of innovation have taught us that the hard part of the creation of art is the human coming up with the ideas, not the tools that create the art.
Matt is currently an Instructional Designer II at Orbis Education and a Part-Time Instructor at the University of Texas Rio Grande Valley. Previously he worked as a Learning Innovation Researcher with the UT Arlington LINK Research Lab. His work focuses on learning theory, Heutagogy, and learner agency. Matt holds a Ph.D. in Learning Technologies from the University of North Texas, a Master of Education in Educational Technology from UT Brownsville, and a Bachelor of Science in Education from Baylor University. His research interests include instructional design, learning pathways, sociocultural theory, heutagogy, virtual reality, and open networked learning. He has a background in instructional design and teaching at both the secondary and university levels and has been an active blogger and conference presenter. He also enjoys networking and collaborative efforts involving faculty, students, administration, and anyone involved in the education process.