Previously, we saw how “Ghiblification” blurred the line between human touch and AI creativity. But does AI-generated creative output truly reflect creative understanding?
Scientists tested humans and AI chatbots using the Alternate Uses Task (AUT) test. In this test, both human participants and AI chatbots had to come up with unusual and creative uses for everyday objects like rope, boxes, or pencils. The AUT test measures divergent thinking. It’s basically open-ended thinking that leads to many alternate ideas for one goal, often tied to innovation. When prompted with “rope,” a convergent thinker might say it’s for tying things. A divergent thinker might imagine making snake toys or crafting a hammock for lounging on a lazy afternoon.
In the AUT, humans on average scored the lowest across all tasks, while AI chatbots gave consistently creative responses. Surprisingly, when the highest scores were compared, top-performing humans often beat the best AI. This shows that while most humans lag behind, human creativity varies widely—from poor to impressively high. In contrast, AI maintains a steady, competent level of output.
The study’s authors explained that humans generate lower-quality ideas because they often can’t move beyond an object’s typical use. In contrast, “AI chatbots executed the task always at the best of their capabilities from a computational point of view”. This suggests that most humans are convergent thinkers, and only a few excel at divergent thinking. But we already know this, right? Not every one of us is a creative genius.
AI, on the other hand, behaves like a robot—sticking to the task, keeping its eyes on the instruction, and performing with precision.
Well, it is a robot anyway!
We often credit AI for producing ideas, sometimes even calling them creative or original. But originality comes from exploring a boundless space of possibilities, driven by intent. It’s not just shuffling or extrapolating known ideas.
Take Salvador Dalí, the iconic surrealist artist of the 20th century. In his famous painting ‘The Persistence of Memory’, he depicted a coastal landscape where clocks melt like soft cheese—a symbol of time’s fluidity. Dalí reportedly said the image was inspired by hallucinations after eating Camembert cheese. Here, his idea didn’t just appear; it aligned with a purpose: to express the nature of time. Without this intent, it would merely be an exercise in divergent thinking—an odd take on what a clock could look like. So, while an advanced AI may generate something that appears creative, it lacks ‘true’ intention.
So, where does AI really stand when compared to human intelligence or creativity? And, turning the mirror around, where do we stand in relation to AI?
In 1950, the father of modern computer science, Alan Turing, proposed the ‘Imitation Game’, now known as the ‘Turing Test’. A human judge chats with both a human and a machine through text, trying to tell who’s who. If the judge can’t tell them apart, the machine passes—it has successfully imitated human intelligence. Recently, researchers at UC San Diego tested OpenAI’s GPT-4.5 model, and it fooled human judges 73% of the time, more often than the actual humans did.
Does that mean that machines are now more human than we are?
Not quite! As the study puts it, “The Turing Test is not a direct test of intelligence, but a test of human-likeness”. Earlier, a machine that passed the test was assumed to behave like a human, and hence possibly to possess human-like intelligence. But now, as machines begin to sound more like us, other distinctions become more evident. Intelligence alone is not enough to appear convincingly human. Trained on vast amounts of human-generated data, AI has learned how to sound human in conversation without truly comprehending words, their context, or the intent behind them.
Most importantly, its response does not originate from genuine thoughts or subjective experiences.
The Lovelace Test, designed in the early 2000s by Selmer Bringsjord and colleagues, is a more rigorous test for machine intelligence that emphasizes the authenticity of AI’s creation. It is named after Ada Lovelace, the 19th-century English mathematician who argued that a machine cannot have human-like intelligence until it creates something it wasn’t programmed to do. So, to pass the test, AI must produce something original and authentic entirely on its own, without a clear input-output trail tied to its training data.
So far, no AI is known to have passed it. As I mentioned earlier, AI-generated art, poetry, and music might look impressive, but they’re fundamentally the result of statistically learned patterns, not authentic creativity. And with the way current models are developed, authenticity does not seem to be a bright prospect, as authenticity lies “in accordance with desires, motives, ideals or beliefs”.
The creation of AI is one of the greatest achievements of human civilization, one that can surpass many of the limitations that come from simply being human. Humans are plagued with biases, and on average, we’re not always motivated to live up to our true potential. But a few among us are, and always have been. That’s why we have art, science, culture, and society. We’re not just a mindless horde of a species called humans. So, until we see AI that can truly invent its own style of drawing, music, or writing, create something original, and express something resembling a real sense of self, the masters can rest easy. Their light still shines. Perhaps, in an uncertain and distant future, people may find comfort in the glamour of AI-generated creativity. And then, maybe only then, we may truly lose the human touch.
For Further Reading:
1. Scientific Reports, 13, 13601 (2023)
2. Journal of Creativity, 33(3) (2023)
3. Large Language Models Pass the Turing Test
Intelligence doesn’t require neurons, nor does it need conscious awareness of surroundings. So, what does it require to emerge?
It’s a great question! And we may well fail to answer it, even in the most rhetorical sense.
Slime molds solve mazes by optimizing nutrient paths with no nervous system, ant colonies show swarm intelligence with no central brain, and AI is just algorithms running on silicon chips. So, in a broader context, maybe we can think of intelligence as the ability to adapt to surroundings and respond in the best possible way. And I think, whether it’s conscious or unconscious, that’s the active trigger point for its emergence.
I would love to see a separate blog on this topic.