What Do You Mean By “Cognitive Domain” Anyways? The Doom of AI is Nigh…

One of the terms that one sees while reading AI-Hype is “cognitive domain” – often just shortened to “domain.” Sometimes it is obvious that people using the term are referring to content domains – History, Art, English, Government, etc. Or maybe even smaller divisions within the larger fields, like the various fields of science or engineering and so on. Other times, it is unclear if a given person really knows what they mean by this term other than “brain things” – but I choose to believe they have something in mind. It is just hard to find people going into much detail about what they mean when they say something like “AI is mastering/understanding/conquering more cognitive domains all the time.”

But if cognitive domains are something that AI can conquer or master, then they surely exist in real life and we can quantitatively prove that this has happened – correct? So then, what are these cognitive domains anyways?

Well, since I have three degrees in education, I can only say that occasionally I have heard “cognitive domains” used in place of Bloom’s Taxonomy when one does not want to draw the ire of people that hate Bloom’s. Other times it is used to just sound intellectual – as a way to deflect from serious conversation about a topic because most people will sit there and think “I should know what ‘cognitive domain’ means, but I’m drawing a blank so I will nod and change the subject slightly.” (uhhhh… my “friend” told me they do that…)

It may also be that there are new forms of cognitive domains that exist since I was in school, so I decided to Google it to see if there is anything new. There generally aren’t any new ones it seems – and yes, people that are using the term ‘cognitive domains’ correctly are referring to Bloom’s Taxonomy.

For a quick reference, this chart on the three domains from Bloom (cognitive, affective, and psychomotor) puts the various levels of each domain in context with each other. The levels of the cognitive domain might have been revised since you last looked at them, so for the purposes of this post I will reference the revised version: Remember, Understand, Apply, Analyze, Evaluate, and Create.

So back to the big question at hand: where do these domains exist? What quantitative measure are scientists using to prove that AI programs are, in fact, conquering and mastering new levels all the time? What level does AI currently stand at in conquering our puny human brains anyways?

Well, the weird thing is… cognitive domains actually don’t exist – not in a quantifiably real kind of way.

Before you get your pitchforks out (or just agree with me and walk away), let me point out that taxonomies in general are often not quantifiably real, even though they do sometimes describe “real” attributes of life.  For example, you could have a taxonomy for candy that places all candies of the same color into their own level. But someone else might come along and say “oh, no – putting green M&Ms and green Skittles into the same group is just gross.” They might want to put candies in levels of flavors – chocolate candies like M&Ms into one, fruit candies like Skittles into others, and so on.

While the candy attributes in those various taxonomies are “real,” the taxonomies themselves are not. They are just organizational schemes to help us humans understand the world, and therefore have value to the people that believe in one or both of them. But you can’t quantifiably say that “yes, this candy color taxonomy is the one way that nature intended for candy to be grouped.” It’s just a social agreement we came up with as humans. A hundred years from now, someone might come along and say “remember when people thought color was a good way to classify candy? They had no idea what color really even was back then.”
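
If it helps to see that point in code, here is a minimal sketch (the candy names and attributes are made up for illustration) showing the same candies grouped under two different, equally workable taxonomies:

```python
from collections import defaultdict

# Hypothetical candies with two observable attributes.
candies = [
    {"name": "green M&M", "color": "green", "flavor": "chocolate"},
    {"name": "green Skittle", "color": "green", "flavor": "fruit"},
    {"name": "red M&M", "color": "red", "flavor": "chocolate"},
    {"name": "red Skittle", "color": "red", "flavor": "fruit"},
]

def group_by(items, attribute):
    """Build a 'taxonomy' by collecting items under one chosen attribute."""
    groups = defaultdict(list)
    for item in items:
        groups[item[attribute]].append(item["name"])
    return dict(groups)

# Two different organizational schemes for the exact same candies.
print(group_by(candies, "color"))   # {'green': [...], 'red': [...]}
print(group_by(candies, "flavor"))  # {'chocolate': [...], 'fruit': [...]}
```

The attributes being grouped (“green,” “chocolate”) are observable; which attribute you organize around is just a choice someone made.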

To make it even more complex – Bloom’s Taxonomy is a way to organize processes in the brain that may or may not exist. We can’t peer into the brain and say “well, now we know they are remembering” or “now we know they are analyzing.” Someday, maybe we will be able to. Or we may gain the ability to look in the brain and find out that Bloom was wrong all along. The reality is that we just look at the artifacts that learners create to show what they have learned, and we classify those according to what we think happened in the head: “They picked the correct word for this definition, so they obviously remembered that fact correctly” or “this is a good summary of the concept, so they must understand it well” and so on.

If you have ever worked on a team that uses Bloom’s to classify lessons, you know that there is often wild disagreement over which lessons belong to which level: “no, that is not analyzing, that is just comprehending.” Even the experts will disagree – and often come to the conclusion that there is no one correct level for many lessons.

So what does that mean for AI “mastering” various levels of the cognitive domain?

Well, there is no way to tell – especially since human brains are no longer thought to work like computers, so what computers do is not directly comparable to how human brains work.

I mean think about it – AI has access to massive databases of information that are transferred into active use with a simple database query (sorry AI people, I know, I know – I am trying to use terms people would be most familiar with). AI can never, ever remember as it technically never forgets or loses connection with the information like our conscious mind does (also, sorry learning science people – again, trying to summarize a lot of complexity here).
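
Since I am leaning on the database analogy anyway, here is a tiny sketch of what I mean (this is the analogy only, not an actual description of how LLMs store or retrieve anything):

```python
import sqlite3

# The analogy: "remembering" a fact is just a lookup against data that is
# always there and never forgotten. The table contents are made up.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE facts (term TEXT, definition TEXT)")
db.execute("INSERT INTO facts VALUES ('photosynthesis', 'how plants turn light into energy')")

# A "simple database query" pulls the fact into active use every single time.
row = db.execute(
    "SELECT definition FROM facts WHERE term = ?", ("photosynthesis",)
).fetchone()
print(row[0])  # always succeeds while the data exists; nothing was "remembered"
```

Retrieval like that never has to reconstruct anything from a fallible memory, which is why “remember” in the Bloom’s sense does not map cleanly onto it.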

This reality is also why it is never impressive when some AI tool passes or scores well on various Math, Law, and other exams. If you could actually hook a human brain up to the Internet, would it be that impressive if that person could pass any standardized exam? Not at all. Google has been able to pass these exams for… how many decades now?

(My son taught me years ago how you can type any Math question into a Chrome search bar and it will give you the answer, FYI.)

I’m not sure it is very accurate to even use “cognitive domains” anywhere near anything AI, but then again I can’t really find many people defining what they mean when they do that – so maybe giving that definition would be a good place to start for anyone that disagrees.

(Again, for anyone in any field of study, I know I am oversimplifying a lot of complex stuff here. I can only imagine what some people are going to write about this post to miss the forest for the trees.)

But – let’s say you like Bloom’s and you think it works great for classifying AI advancement. Where does that currently put AI on Bloom’s Taxonomy? Well, sadly, it has still – after more than 50 years – not really “mastered” any level of the cognitive domain.

Sure, GPT-4 can be said to be close to mastering the remember level completely. But AI has been there since the beginning – GPT-4 just has the benefit of faster computing power and massive data storage to make it happen in real time. But it still has problems with things like recognizing, knowing, relating, etc – at least, the way that humans do these tasks. As many people have pointed out, it starts to struggle with the understand level. Sure, GPT-4 can explain, generalize, extend, etc. But many other things – comprehend, distinguish, infer, etc – it really struggles with. Sometimes it gets close, but usually not. For example, what GPT-4 does for “predict” is really just pattern recognition that technically extends the pattern into the future (true prediction requires moving outside extended patterns). Once you start getting to apply, analyze (the way humans do it, not computers), evaluate, and create? GPT-4 is not even scratching the surface of those.
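
To illustrate what I mean by “extending the pattern” (and only that; this toy bigram counter is nowhere near how GPT-4 actually works, it just makes the idea concrete), here is a “predictor” that can do nothing except continue sequences it has already seen:

```python
from collections import Counter, defaultdict

# Toy "training data": sequences the model has seen before.
training_text = "the cat sat on the mat . the cat sat on the rug .".split()

# Count which word tends to follow which (a bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """'Predict' by extending the most common pattern seen in training."""
    if word not in follows:
        return None  # nothing outside the learned patterns can be produced
    return follows[word].most_common(1)[0][0]

print(predict_next("cat"))   # 'sat'  -- pattern continuation
print(predict_next("dog"))   # None   -- no pattern to extend
```

The toy model never reasons about anything; it replays statistical regularities from its training data, which is the distinction I am drawing between pattern extension and true prediction.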

And yes, I did say create. Yes, I know there are AI programs that “create” artwork, music, literature, etc. Those are still pattern recognition systems that convert and generalize existing art according to the parameters sent to them – no real “creation” is happening there.

A lot of this has to do with calling these programs “artificial intelligence” in the first place, despite the fact that AI technically doesn’t exist yet. Thankfully, some people have started calling these programs what they actually are: AutoRegressive Large Language Models – AR-LLMs or usually just LLMs for short. And in case you think I am the only one pointing out that there really is nothing cognitive about these LLMs, here is a blog post from the GDELT Project saying the same thing:

Autoregressive Large Language Models (AR-LLMs) like ChatGPT offer at first glance what appears to be human-like reasoning, correctly responding to intricately complex and nuanced instructions and inputs, compose original works on demand and writing with a mastery of the world’s languages unmatched by most native speakers, all the while showing glimmers of emotional reactions and empathic cue detection. The unfortunate harsh reality is that this is all merely an illusion coupled with our own cognitive bias and tendency towards extrapolating and anthropomorphizing the small cues that these algorithms tend to emphasize, while ignoring the implications of their failures.

Other experts are saying that ChatGPT is doomed and can’t be fixed – only replaced (also hinted at in the GDELT blog post):

There is no alt-text with the LeCun slide image, but his points basically say that because LLMs are trained on faulty data, that data makes them untrustworthy, AND since they use their own outputs to train future outputs for others, the problems just increase over time. His points from the slide image above are listed below (with a quick numerical illustration of his formula after the list):

  • Auto-Regressive LLMs are doomed
  • They cannot be made factual, non-toxic, etc.
  • They are not controllable
  • Probability e that any produced token takes us outside of the set of correct answers
  • Probability that an answer of length n is correct: P(correct) = (1-e)^n
  • This diverges exponentially
  • It’s not fixable (without a major redesign)
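
To make the “diverges exponentially” point concrete, here is a quick numerical sketch of the formula from the slide (the per-token error rate e is a made-up number for illustration, not a measured value):

```python
# P(correct) = (1 - e)**n : probability that an n-token answer stays entirely
# inside the set of correct answers, assuming an independent per-token error
# probability e (the simplifying assumption on the slide).
e = 0.02  # hypothetical 2% chance that any single token goes off the rails

for n in (10, 50, 100, 500):
    p_correct = (1 - e) ** n
    print(f"answer length {n:>3} tokens -> P(correct) = {p_correct:.3f}")

# answer length  10 tokens -> P(correct) = 0.817
# answer length  50 tokens -> P(correct) = 0.364
# answer length 100 tokens -> P(correct) = 0.133
# answer length 500 tokens -> P(correct) = 0.000
```

You can argue with the independence assumption baked into the formula, but the arithmetic shows why he calls it divergence: even a small per-token error rate compounds quickly as answers get longer.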

So… why are we unleashing this on students… or people in general? Maybe it was a good idea for universities to hold back on this unproven technology in the first place? More on that in the next post I am planning. We will see if LeCun is right or not (and then see what lawsuits will happen against whom if he is correct). LeCun posts a lot about what is coming next – which may happen. But not if people keep throwing more and more power and control to OpenAI. They have a very lucrative interest in keeping GPT going. And let’s face it – their CEO really, really does not understand education.

I know many people are convinced that the “shiny new tech tool hype” is different this time, but honestly – this is exactly what happened with Second Life, Google Wave, you name it: people found evidence of flaws and problems that those who were wowed by the flashy, shiny new thing ignored – and in the rush to push unproven tech on the world, the promise fell apart underneath them.