
5 Ways School Damages Us – And 5 Ways to Fix It

Some years ago Sir Ken Robinson gave a massively popular TED talk, entitled “Do Schools Kill Creativity?”. The modern school has taken a hit or two since, being criticized for having students listen to mind-numbingly boring lectures, or for directing their motivation to the wrong things, like grades and SAT scores, instead of learning.

Having looked in depth at how learning takes place, I must admit much of this criticism is on point. That being said, the modern school does a tremendously good job in many ways. After all, ubiquitous Western literacy is not something to be trifled with, if you take a look at countries where it does not exist. Armed with only the critic’s lens, it is easy to miss all the good things we have achieved.

Be that as it may, while we have come a long way, there’s still work to be done. And while leveling criticisms can be fruitful, I believe it is better to also offer alternatives to the objects of critique. Hence, here are five ways in which I think school really damages us – and five ways to fix them.

1. Fixed Mindset

The Damage: Schools teach what the Stanford professor Carol Dweck calls the ‘fixed mindset’: a way of regarding oneself as consisting of a fixed set of properties and skills. In other words, you are either talented at something or you are not, period. The standard testing regime of schools is strongly inclined to maintain this point of view, since grades are read not so much as guides to future work as judgments of current competence.

The Fix: Instead of locking our kids into fixed mindsets, we should help them embrace Dweck’s ‘growth mindset’: skills and talents are not fixed, but grow with practice. Modern science seems to point towards this being the more realistic view of the two. To establish a growth mindset, students should be allowed to progress at their own pace, with grading used to identify where further work is needed. In the growth mindset there are no grade C students. There are just “not quite yet A” students.

2. Focus on Weaknesses

The Damage: Here is another artifact of the modern school grading system: it tends to direct focus to a student’s weaknesses. While weakness-based grading may be useful for pointing out where work is needed (as above), the fact is that nobody can be great at everything. (Yes, even straight-A students have their weaknesses.)

The Fix: Instead of systematically hunting for students’ weaknesses, we should focus on emphasizing and clarifying their strengths. Studies show that focusing on strengths not only makes learning more fun but also contributes to general well-being. There are also existing pedagogical models to tap into, such as the Montessori model, which constantly seeks to single out the students’ most enthusiastic interests and focus on those.

3. Extrinsic Motivation

The Damage: By motivating children to aspire towards better grades, and scaring them with sanctions if they do not succeed, school creates a strongly extrinsically motivating environment. Unfortunately, studies show that extrinsic motivation not only kills creativity but also seriously hampers even basic learning.

The Fix: Schools should tap into intrinsic motivation, which builds on fulfilling the basic psychological needs of autonomy, competence and relatedness. Autonomy means that students should have more freedom in how they study. Competence means what was said above: personal growth and a focus on strengths. And relatedness means working with people who make you feel genuinely accepted. By tapping into these three areas, I am sure teachers will see learning experiences soar.

4. Lack of Collaboration

The Damage: It is utterly idiotic that at the most critical moments in school life, children are stripped of their most powerful tool: collaboration. It does not really help that much to have kids participate in tedious group work, if when push comes to shove, working together is interpreted as cheating.

The Fix: There are practically no situations in real life like test taking, where you simply cannot ask around or go look for an answer to a problem. Why then do we drill our kids for up to 12 years with this idiotic exercise? The brain is social to begin with. Let it work the way it is meant to. Kids will flourish and thrive when they get a tough problem to solve – and can dive into it with all their social and networked might.

5. Killing Creativity

The Damage: When kids aged seven were asked whether they were creative, everybody said yes. When kids aged twelve were asked the same question, only about half said yes. And when high schoolers were asked, only a fraction considered themselves creative. Why? If you have spent over ten years in an environment that signals there are right answers and only right answers, it is rather hard to be creative.

The Fix: Questions are wonderful things that should not be killed with right answers. Moreover, when coming up with solutions (note: not answers) to questions, there are a million avenues of inquiry one could pursue. Consider for a moment what a wonderland of creativity the school could be if only this one little thing were changed: when a question is asked, there are no wrong answers. And when a solution is generated, there are no wrong methods.

The problem is, this puts a lot of pressure on coming up with interesting enough questions.

But that, I guess, is what teaching is all about.


Always Do

I watched a great video some time ago by a fellow who gave a fabulous flip-chart presentation on why we should act on global warming whether it’s true or not. The argument in a nutshell: while not acting may have terrible consequences, acting will at worst tie up some resources, and at best save us from a massive catastrophe. So no matter what the facts are, we should act as if global warming were true, because the consequences of not acting are too dire.
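The nutshell argument can be sketched as a toy expected-value comparison. All the numbers below are invented purely for illustration – they are not real climate estimates, just a way to show why acting dominates waiting under uncertainty.

```python
# A toy expected-value sketch of the "act regardless" argument.
# Every number here is made up for illustration only.
p_real = 0.5               # assumed probability the threat is real
cost_acting = -1.0         # acting always ties up some resources
cost_catastrophe = -100.0  # cost of doing nothing if the threat is real

ev_act = cost_acting                 # we pay the acting cost either way
ev_wait = p_real * cost_catastrophe  # gamble on the threat being false

print(ev_act, ev_wait)  # -1.0 -50.0: acting wins unless p_real is tiny
```

As long as the catastrophe cost dwarfs the cost of acting, acting remains the better bet for all but the smallest probabilities.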

This made me think about how to act in an information-constrained environment in general. If (and when, as is practically always the case) we do not have complete visibility into an ecosystem, and we come up with a strategy with a viable future outcome, we should act on that strategy rather quickly. The thing is, we spend a great deal of time analyzing and pondering, but the value of such activity is substantial only if we have enough information – which we seldom do.

This is emphasized even more these days – and in the near future – as visibility into the future becomes ever more opaque, thanks to the accelerating evolution of various technologies. So instead of succumbing to analysis paralysis, we should act, gather data and act again. Because if we don’t act, the answer – as Shervin Pishevar succinctly put it at the Dublin Web Summit – is already ‘no’.

But if we should act first, won’t this lead to a world of aimlessly fumbling headless chickens? Of course not. Acting first does not mean acting with no information whatsoever.

Let’s look at global warming again. The case is not that we have absolutely no projections on global warming. The case is that the probabilities of the positive and the negative outcomes are held to be competitive enough not to warrant an immediate commitment of resources. (Although one could argue this depends a lot on whose projections we are talking about, what with the scientific community being pretty much aligned on this one.)

And in this case it is the active strategy that enables us to quickly update and iterate on our knowledge, whereas the passive strategy leads us nowhere, leaving us astray at the mercy of whatever the actual facts are. Facts may be what they are, but the only way to change them to better match our needs is to act.

Therefore, whenever you are presented with a choice with a more than negligible possibility of a positive future outcome, you should always do.

Then follow up on results, reiterate where necessary and do again.


Scarcity, Excess and Abundance

Professor Andrew Abbott gave an awesome lecture last year at the London School of Economics. Abbott argued that while many of today’s problems seem to arise from scarcity – the lack of resources such as food or money – many of our problems are in fact quite the opposite: problems of excess.

Scarcity means that we do not have enough to get by. Excess means that we have too much, and that somehow distracts us. There is a third quantity, abundance, which means we have enough of a resource not to be troubled by it.

I suppose nobody can argue against abundance. In fact, this should probably be the goal of the human race: to distribute resources so that they are abundant for every person on the planet. But in aiming for abundance, we have in several places erred on the side of excess.

Western countries already have a huge excess of food. This creates a weird global paradox: half of the world is starving to death while the other half is eating itself to death.

We also have an excess of unwanted byproducts of our culture, such as pollution. Chinese factories, or Fukushima, have produced an excess of material that troubles us.

But the most pressing problem arising from the constantly accelerating development of technology is the excess of information. We are bombarded to death with information, while our conscious minds can only process about three or four things at a time.

Yet a few decades ago, areas of life such as research and product development suffered from a scarcity of information. Sometimes you had to go to the other side of the world to get the information you wanted. I remember contacting the British Library to photocopy and fax me a research paper I could not find anywhere else in the world.

Now it’s all online, and the problem is the opposite: we have an excess of information, and discovering the piece of information you need right now is sometimes even harder than before.

The solution to the problem of excess is focus. Focus on the essential; on what really counts at each given moment.

We need to focus on what we eat – that we get the nutrients that we really need, and not let our lizard brain guide us to chomping up on calories that serve no purpose in our daily lives.

We need to focus on what to produce – that we get hardware that we really need and not let our need to placate our stressed selves send us into a spending spree, cramming our houses full of useless clutter.

And we need to focus on what we really need to know – to zero in on the pieces of information that really count, and ignore the rest.

The problem is, all this is easier said than done. Everybody knows we should consume fewer calories than we burn. Yet almost nobody does.

This is why we need tools.

For balancing diets, the wearable tech revolution that we are witnessing right now may for the first time give us a universal toolkit for managing dietary inputs and outputs. Check out, for example, the amazing lineup from Fitbit to see where we’re at right now.

For consuming, algorithms employed by e.g. Amazon help us zoom in better on what really makes us feel great and consume more of that stuff, as compared to picking up whatever’s stocked on the store shelves.

As for cognition, services such as Simplenote or Any.do help us really focus better on what’s going on in our everyday lives, not to speak of Google, whose search algorithms are getting more amazing by the day.

There is a really cool passage in the Sherlock Holmes short story “The Five Orange Pips”, where Sherlock tells Watson: “a man should keep his little brain-attic stocked with all the furniture that he is likely to use, and the rest he can put away in the lumber-room of his library, where he can get it if he wants it.”

So should we. Suffering from a daily excess of information, we should eliminate the sources of information and interruption that serve no real function, and focus on the ones that do.

Turn off email notifications.

Unsubscribe from newsletters.

Sort out your Facebook newsfeed.

And get your hands on the best tools out there.

While excess may be as bad a source of ill-being as scarcity, it has one upside going for it.

If we cut down from excess, we will eventually end up with abundance.


It Ain’t All in the Head

by Lauri Calonius

There is a growing interest in the idea that cognitive processes are not solely confined to the head, nor explained simply in terms of brain processes. The type of body we possess and the natural and cultural environment that surrounds us are increasingly taken into account in explanations of cognitive phenomena such as memory and problem solving.

In my thesis It Ain’t All in the Head: Situating Cognition to the Body and the Surrounding World, I look into four approaches to cognition that conceive of it in this bodily and/or worldly situated way. More specifically, “embodied-embedded cognition”, “enactive cognition”, “extended cognition” and “distributed cognition” are compared and contrasted with each other and with the more orthodox “brain-bound” conception.

In addition, criticism of the more unorthodox positions is examined; this examination ultimately levels the ground between the unorthodox and the orthodox positions, highlighting the viability of positions that credit a greater role to the body and the world in explaining cognitive phenomena.

Finally, the issue of cognitive agency (i.e. which elements of the body and the world may be said to be responsible for a given cognitive property or process) is also examined in the light of these different approaches.

The main goal of the thesis, then, is to elucidate different positions that depart from the traditional brain-centered conception of cognition, and to draw out the similarities as well as the differences between the approaches.

Moreover, even if these approaches remain distinct, without a clear unified conception of cognition, there could be said to be the kindling of an emerging paradigm that could be applied to other interesting philosophical questions, such as the issue of cognitive agency.

The take-home message from the thesis is that even if the liberation of cognition from the confines of the head is a complex issue, being open to this kind of possibility will nevertheless bring forth new and interesting ways of understanding cognitive phenomena.

You can read the entire thesis here.


XKCD: The Extended Mind


On Neuroplasticity, the Extended Mind and the Intelligence Explosion

This posting is a reply to this response by Daniel Estrada to my paper The Coming Social Singularity.

Mr. Estrada argues that my basic position requires a strong differentiation between the technological and the cultural. This is, however, not what I intended to convey. My paper rather concerns an argument comparing the plausibility of Vernor Vinge’s AI (artificial intelligence) and IA (intelligence amplification) hypotheses. In other words, I agree with much that Mr. Estrada writes. We are, in many senses, “tools all the way down”. As for the nature of the mind, it is in a very profound sense extended to begin with. If an AI were forthcoming, it would in many senses contribute as an extended resource to the human mind.

My claim in the paper is not intended so much as a comparison of the intrinsic nature of a biological mind with that of a simulated mind (which, as I think Mr. Estrada rightly points out, cannot justly be separated), but rather as an assessment of whether an IA or an AI explosion will take place sooner – in other words, of where the focal point of the intelligence explosion will be: in the network itself (IA), or in identifiable components of it (AI).

The problem with the plausibility of the AI hypothesis is not that it would be impossible or somehow IA-incompatible. It is rather that we are not very likely to reach it before an IA explosion takes place. In addition to the complexity of the nervous system that can be postulated on the grounds of the Stanford experiment, the integration of nervous and extended processes is of a far higher order than a simple sensory coupling or a feedback loop. The nervous system is dynamic to a far greater degree than any known computational system, as the massive literature on neuroplasticity demonstrates. The nervous system does not compute – synaptic connections grow and shrink. The brain is not a machine. It is a garden.

In the light of what we now know about brain function, the hardware and the software of the nervous system are intrinsically intertwined. In other words, the brain is not a static processing and memory system where information is stored, but rather a dynamic feedback mechanism that *produces* information by creating complex enough connections. Once you add to this the ability to augment these connections by using the environment, there is a very profound sense in which human intelligence differs dramatically from what has been postulated as machine intelligence. Using Searle as an example was simply meant to show that there are some dramatic difficulties in attributing intelligence to a machine (whereas attributing intelligence to a human-machine coupling is by no means problematic).

Incidentally, this is not to say there could not be an intelligent machine. I do not subscribe to the fundamental Searlean assumption that this would be philosophically impossible. Quite the opposite: if a machine is constructed that for all intents and purposes acts like an intelligent agent, it should be treated as an intelligent agent, even if this behavior came about through complex enough computation. But this is not at all the point I am trying to make in the paper.

What I am arguing is that while an AI explosion may well be on the way, it is not very likely to happen soon. But once real-time networking of human beings is achieved (which should happen within a few years now), the IA explosion will take place. I have no doubt that this will also contribute to the AI explosion, whatever that will mean by then, which will in turn augment the capacity of the IA system, and so on and so forth.

To sum up, none of this is to say that human intelligence and machine intelligence should be intrinsically separated, or that the simulation of intelligence is impossible. It is just to say that an intelligence explosion involving the real-time networking of existing human nervous systems is somewhat likelier to happen sooner than a significant enough advance in computing technology.


Extended Mind and Thinking Creatively

by Petro Poutanen

Based on our recent contemplations, we concluded that the extended mind does not do very well in creative thinking. Machines are notoriously bad at it. For computers, “being creative” is of course one of the fundamental challenges on the way towards human-like computer intelligence, or so-called artificial intelligence (AI). Back in 1995, a research group named the Fluid Analogies Research Group was exploring ways in which human intelligence could be replicated through computer algorithms and modeling. They suggested that making analogies is one of the fundamental mechanisms by which the human mind solves problems creatively.

One of the most interesting outcomes of this project was a program called Copycat. Copycat is built on the idea of a complex system: a group of individual agents operating with no centralized control and collectively producing emergent properties. As the brain can be described as a complex system, Copycat models human cognition as one. Analogies are what we need when linking things together at some abstract, conceptual level. For instance, humor is based on analogies. This example comes from the Writing English blog: “Her vocabulary was as bad as, like, whatever”. Obviously, the humor in this sentence is completely understandable to anyone who knows some English. But how about a computer? How could it work that out by computing?

At the moment, we are able to fool computers with even the most elementary analogies, such as the letter-recognition tests on website registration forms that prevent attacks by bots. According to Copycat’s developers, recognizing such “fluid” similarity would be overwhelming for a computer, because there is no single reliable clue in the picture indicating that it is a letter. According to the programmers, the key to making analogies is “conceptual slippage” in response to perceived contextual changes. The program was developed for solving letter-string problems (if abc changes to abd, what does ijk change to?). It comprises three elements: a long-term memory of concepts at various degrees of abstraction in the form of an evolving network, a short-term memory module for building and evaluating different structures, and a collection of pieces of raw material, each with an individual probability weight determining its chance of being selected. The conclusion was that Copycat could mimic human behavior in finding the most adjacent solutions while being more “satisfied” with remote – in other words, “more creative” – solutions.
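The letter-string domain itself is simple enough to sketch in a few lines. The following is a naive, hypothetical illustration of the abc-to-abd problem; the real Copycat uses a stochastic, agent-based architecture, not this kind of direct rule inference, and the function names here are invented for the sketch.

```python
# A naive, hypothetical sketch of the letter-string analogy domain.
# The real Copycat is a stochastic, agent-based system; this direct
# rule inference only illustrates the problem itself.

def infer_rule(source, target):
    """Infer a per-position letter shift from source to target."""
    return [ord(t) - ord(s) for s, t in zip(source, target)]

def apply_rule(shifts, string):
    """Apply the inferred shifts to a new string."""
    return "".join(chr(ord(c) + d) for c, d in zip(string, shifts))

shifts = infer_rule("abc", "abd")  # [0, 0, 1]: only the last letter advances
print(apply_rule(shifts, "ijk"))   # ijl
```

The sketch works only because both strings line up position by position; it is exactly the “fluid”, context-sensitive cases this breaks on that Copycat was built to handle.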

Although Copycat behaves in a psychologically plausible way, the problem is that such a program can only work in the predetermined context for which it was originally programmed. It cannot solve problems from an unknown conceptual world. For example, to produce a funny analogy akin to the one mentioned above, it would need to know the English language, have a sense of humor (“what is considered funny?”) and know some unofficial conventions of spoken language. At a theoretical level, it would need to be able to learn from its environment and generate its own meta-languages explaining the underlying rules of a given situation – social and cultural codes, norms, the use of language, visual and audible representations, related emotions and so on – all of which are constantly evolving and contextually varying. This is why the extended mind does not work without a network of human intelligence giving it the required “sense” and “meaning”. Therefore, it may well be that instead of artificial intelligence, social singularity – using extended mind technologies – is the next “big thing” in problem solving and other intelligent endeavors.


What the Extended Mind Does Well – And What It Doesn’t

What EM Does Well

Declarative Memory

It is relatively easy to dig up trivia and tidbits if you have a good enough archiving system and/or search engine. With biological memory, the information must be relatively significant to be remembered.

Volitional Recollection

Directly related to the above: it is difficult to volitionally remember many things, whereas digging them up from an archive is easy.

Information Management

Again, directly related to the above: information management is massively easier with pen, paper and libraries, not to speak of the digital realm. Furthermore, with the advent of ubiquitous connectivity, we can push digital retrieval response times close to those of spontaneous recollection, which will no doubt produce interesting results.

Ubiquitous Availability

That is, of course, unless things crash or break apart. But digital technologies give us increasingly constant access to EM capacities, whereas the availability of biological mental capacities varies.

Task Management

Externalizing information works particularly well for tasks and other repetitive declarative information.

Organizing

Directly related to task and information management.

Generating Randomness

This should be rather obvious: the biological mind cannot produce statistically reliable randomness. A program can produce high-quality pseudorandomness, and, with a hardware entropy source, effectively genuine randomness.
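For concreteness, here is a minimal Python illustration of what a program offers: reproducible pseudorandomness from a seeded generator, and operating-system entropy for cases where unpredictability matters.

```python
# A minimal illustration of programmatic randomness: a seeded,
# deterministic pseudorandom generator versus OS-level entropy.
import random
import secrets

random.seed(42)                                   # reproducible PRNG
rolls = [random.randint(1, 6) for _ in range(5)]  # pseudorandom dice
token = secrets.token_hex(8)                      # 16 hex chars of OS entropy

print(rolls, token)
```

The random module is fine for simulations; secrets (or a hardware entropy source) is the right tool when predictability would be a problem.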

Calculation

Once again, rather obvious: all rule-following is massively easier for an algorithm-driven program than for a human being.

Collective Thinking

This is only beginning to emerge, but we can do more and more together with the aid of EM technologies, whereas in biological connectivity, we are limited to very small groups.

Social Networking

Directly related to the above: real-time social networks are relatively small, whereas a digital network can consist of hundreds of active participants.

What EM Does Not Do Well

Creative Thinking

Machines do not, as of now, think creatively. Furthermore, while EM can augment creativity (think mind maps), it does not alone produce creative thought.

Emotions

This is actually more relevant to AI than to EM; it is also arguable that EM can be used to induce and direct emotions. But once again, it is a subtle interplay between the biological mind and EM.

Reflection

It is very hard to say what EM reflection would even mean. Reflection is quite directly tied to the biological mind, though it may of course involve EM components.

Metacognition

It seems metacognition is hard for both BM and EM. Perhaps a solution will emerge later? Thinking about thinking, it appears, is not a very easy skill to learn.

Humor, analogies and irony

These require a human interpreter, and do not have an intrinsic EM dimension to them.

Evaluating Information

This is a field where EM will no doubt soon catch up. Nonetheless, right now the automatic evaluation of information is still very elementary and gives very mixed results.


What the Mind Does Well – and What it Doesn’t

In our latest session we sat down to think over what the biological mind does well, and what it does not. Likewise, we considered what the extended mind does well and what it does not. Here are the results we brainstormed; coming up next week, the respective EM ones.

What the Mind Does Well

Creative Thinking

We have yet to build a machine capable of genuine creativity. Whether this is a question of complexity, hidden variables or something we do not yet understand remains open. Nonetheless, the biological mind is by far superior to technology in creativity.

Intuitive (Aschematic) Thinking

Same as above: machine intelligence is still for the most part schematic thinking, whereas intuitive thinking, at least according to researchers such as Dijksterhuis and Nordgren, is aschematic.

Reflection

Machine intelligence is paradigm-constrained, whereas human intelligence can reposition and view things from various perspectives. Also relevant to empathy.

Semantic Processing

EM is catching up here, but humans are still superior in understanding meaning.

Irony

I think this one will take too long to explain.

Humor

Directly relevant to the two of the above. Also to the first item: whether this is a question of complexity, or of something deeper is still an open question.

Analogies

Same as above.

Imagination

Do androids dream of electric sheep? This leads to a can of worms of a question with respect to AI and EM, that is to say, can even the most complex of machines have phenomenal consciousness?

Association

Here too, technology is catching up fast, but biological mind still prevails.

Beliefs

Similar question as imagination.

Dogmas

Only an agent can have dogmas (i.e. axiomatic beliefs). Does this require a biological mind?

Image recognition

This is similar to semantic recognition: machines still have some way to go, but they are catching up.

 

Things the Mind Does Not Do Well

Tedious Tasks

We tend to get bored quickly with repetitive tasks.

Massive Information Storage

What did you have for lunch a month ago?

Trivial Declarative Just-In-Case Recall

What is the tenth digit of pi?

Volitional Recollection

See massive information storage.

Metacognition

What do you think about what you think about right now?

Calculation

8433953 x 234235?

Task Management

This is an interesting tangent to EM in terms of information processing. Tasks consist of declarative memory items, and they are hard to recall volitionally.

Information Management

As David Allen put it, the brain is a great place to have ideas, but a lousy place to store them. The same memory constraints apply to managing any large amount of non-consolidated information, for example raw data.

Cognitive Multitasking

Here, the constraints of the working memory (the magical number seven) make it hard to focus on several processes at the same time.

Thinking by Negations

The biological mind seems to have a hard time grasping the word ‘no’. Try not to think of a pink elephant.

Rational Thinking

Whether we like good old Aristotle or not, we are not really very rational animals. Human decision making seems driven by a huge number of cognitive biases and other effects that have nothing to do with rational inference.

Analysis

Directly related to the above. Moreover, even the most rigorous mind must commit to some axioms and make intuitive choices about its rules of inference. The human mind just does not seem to be cut out for pure rational analysis.

 

Next week, a similar breakdown of what the extended mind does and does not do well.
