by Petro Poutanen
Based on our recent contemplations, we concluded that the extended mind does not do very well in creative thinking. Machines are notoriously bad at it. For computers, “being creative” is of course one of the fundamental challenges on the way toward human-like computer intelligence, or so-called artificial intelligence (AI). Back in 1995, a research group named the Fluid Analogies Research Group was exploring how human intelligence could be replicated through computer algorithms and modeling. They suggested that making analogies is one of the fundamental ways in which the human mind solves problems creatively.
One of the most interesting outcomes of this project was a program called Copycat. Copycat is built on the idea of a complex system: a group of individual agents operating with no centralized control and collectively producing emergent properties. Since the brain can itself be described as a complex system, Copycat models human cognition as one. Analogies are what we need when linking things together at some abstract, conceptual level. Humor, for instance, is based on analogies. This example comes from the Writing English blog: “Her vocabulary was as bad as, like, whatever.” Obviously, the humor in this sentence is perfectly understandable to anyone who knows some English. But how about a computer? How could it work this out by computing?
At the moment, we are able to fool computers even with the most elementary analogies, such as the letter-recognition tests on websites’ registration forms that prevent attacks by web bots. According to the Copycat developers, recognizing such “fluid” similarity would be overwhelming for a computer, because there is no single reliable clue in the picture indicating that it is a letter. According to the programmers, the key to making analogies is “conceptual slippage” in response to perceived contextual changes. The program was developed for solving letter-string problems (if abc => abd, then ijk => ?). It comprises three elements: a long-term memory of concepts at various degrees of abstraction, in the form of an evolving network; a short-term memory module for building and evaluating different structures; and a collection of pieces of raw material, each with an individual probability weight determining its chance of being selected. The conclusion was that Copycat could mimic human behavior: it most often found the most obvious solutions, but was more “satisfied” with remote ones, in other words, “more creative” solutions.
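To make the letter-string task concrete, here is a minimal toy sketch in Python. It is emphatically not Copycat’s actual architecture (which uses stochastic agents, a concept network, and probabilistic selection); the function names and the rigid “successor at one position” rule here are illustrative assumptions that only cover the simplest case.

```python
# Toy letter-string analogy solver: given the example abc => abd,
# answer the question ijk => ?
# This is a deliberate simplification of the task Copycat addresses;
# it only handles rules of the form "replace one letter with its successor".

def successor(ch):
    """Next letter in the alphabet, e.g. 'c' -> 'd'."""
    return chr(ord(ch) + 1)

def infer_rule(source, target):
    """Find the single position where target differs from source
    by a successor step (e.g. abc -> abd changes position 2)."""
    for i, (s, t) in enumerate(zip(source, target)):
        if s != t:
            if t == successor(s):
                return i
            raise ValueError("change is not a successor step")
    raise ValueError("strings are identical")

def apply_rule(string, position):
    """Apply the 'replace with successor' rule at the same position."""
    chars = list(string)
    chars[position] = successor(chars[position])
    return "".join(chars)

pos = infer_rule("abc", "abd")   # the last letter was replaced by its successor
print(apply_rule("ijk", pos))    # prints "ijl"
```

Note what this rigid version misses: it encodes the rule as a literal position, whereas Copycat’s conceptual slippage would describe it abstractly (“replace the rightmost letter with its successor”), letting the same rule apply to strings of different lengths or structures.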
Although Copycat could behave in a psychologically plausible way, the problem is that such a program can only work in the predetermined context for which it was originally programmed. It cannot solve problems that come from an unknown conceptual world. For example, to produce a funny analogy akin to the one mentioned above, it would need to know the English language, have a sense of humor (“what is considered funny?”), and know some unofficial conventions of spoken language. At a theoretical level, it would need to be able to learn from its environment and generate its own meta-languages explaining the underlying rules of a given situation, such as social and cultural codes, norms, the use of language, visual and audible representations, and related emotions, all of which are constantly evolving and contextually varying. This is why the extended mind does not work without a network of human intelligence giving it the required “sense” and “meaning.” Therefore, it might well be that instead of artificial intelligence, social singularity – using extended mind technologies – is the next “big thing” in problem solving and other intelligent endeavors.