In a recent study published in Scientific Reports, AI chatbots powered by large language models (LLMs), including ChatGPT3, ChatGPT4, and Copy.Ai, scored higher on average than humans on a creative thinking task. The study focused on divergent thinking, specifically brainstorming alternative uses for common objects such as ropes, boxes, pencils, and candles. Although divergent thinking is only one aspect of creativity, the chatbots' average advantage was clear; even so, the top-scoring individuals still outperformed the best chatbot results.
Divergent thinking involves generating numerous diverse ideas or solutions for a given task and is commonly assessed using the Alternate Uses Task (AUT). In this task, participants are asked to propose as many alternative uses for an everyday object as possible within a limited time frame. Responses are evaluated based on fluency, flexibility, originality, and elaboration.
Researchers Mika Koivisto and Simone Grassini measured the originality of responses along two dimensions: semantic distance (how far a proposed use was from the object's original purpose, with higher values meaning less related) and creativity. Semantic distance was quantified on a scale from 0 to 2 using a computational method, while human evaluators, blind to the source of the responses, subjectively rated creativity on a scale from 1 to 5. On average, chatbot-generated responses scored significantly higher than human responses on both semantic distance (0.95 vs. 0.91) and creativity (2.91 vs. 2.47).
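The article does not detail the computational method, but one common way to quantify semantic distance is the cosine distance between vector embeddings of the object and the proposed use, which naturally ranges from 0 to 2, matching the scale the study reports. A minimal sketch of that idea, using made-up toy vectors (the embeddings and example words here are illustrative, not the study's data or model):

```python
import math

def cosine_distance(u, v):
    """1 minus cosine similarity; ranges from 0 (identical direction) to 2 (opposite)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Toy embedding vectors, invented for illustration only
embeddings = {
    "rope":     [0.9, 0.1, 0.0],
    "climbing": [0.8, 0.3, 0.1],  # close to the object's typical use
    "necklace": [0.1, 0.2, 0.9],  # a more semantically distant use
}

def semantic_distance(obj, response, emb):
    # Higher distance = less related to the object's original purpose = more original
    return cosine_distance(emb[obj], emb[response])

d_common = semantic_distance("rope", "climbing", embeddings)
d_novel = semantic_distance("rope", "necklace", embeddings)
assert d_novel > d_common  # the unusual use scores as more distant
```

In practice, embeddings would come from a trained language model rather than hand-written vectors, but the scoring logic is the same: a response that sits far from the object in semantic space earns a higher originality score.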