Human Capital in the Twenty-First Century: Piketty, Progress and the Distribution of Wealth

Thomas Piketty’s magnum opus Capital in the Twenty-First Century made a huge splash, especially after the publication of its English translation last spring. The bombshell argument of the book, supported by a massive amount of data and analysis, is two-fold. First, income inequality is increasing. And second, this will propel the economy back to the patrimonial capitalism dominated by inherited wealth.

Piketty recently gave a wonderful talk at the London School of Economics about his book. One thing that struck me at the very beginning of the talk was how much he emphasized his book’s reliance on history. While I cannot argue with Piketty’s economic argumentation, which I, along with many of his supporters, believe to be mostly correct, I do believe that its applicability to the present-day world is somewhat suspect. The reason for this is simple: progress.

I believe that with that fancy academic two-word disclaimer, ceteris paribus, Piketty is correct. That is to say, all other things remaining the same, as income inequality grows, capital will accumulate in clusters of already wealthy families and individuals. But I believe the situation is complex enough to warrant further scrutiny.

First, the global economy appears to be in flux, with structural changes happening faster each year. The dispersion of information, and as its consequence the creation of new innovations (which in turn disperse information faster), creates a spiralling motion where the structure of supply and demand keeps shifting faster and faster. Applying historical evidence to a world in such turmoil is perhaps not the best strategy. Modelling the market on the patrimonial capitalism of the 19th century may not be viable at all, even if capital itself has accumulated to wealthy individuals. Having lots of money is not worth much if most of it is tied up in sinking assets.

Second, there is the question of whether income inequality is a bad thing in itself. Of course, I would not argue against the massive amount of correlative evidence showing that in countries with more dramatic income inequality, the general well-being of the people is also worse. But there is more to well-being than just income.

Interestingly, while income inequality has grown in the last few decades, so has the general overall wealth of people. While capital has accrued to the wealthiest individuals, absolute poverty has also been halved from where it stood twenty years ago. While the comparative wealth gap has grown, absolute wealth has been distributed more widely.

I have been a big critic of Adam Smith’s idea of the invisible hand, but in a sense something like that is happening here. The reason, though, is not crumbs of capital falling from the super-rich to the poor, but once again progress, and the consequent increase in the efficiency of producing goods and services.

The luxury of yesterday is the norm of today. This applies as much to luxury goods like TV sets and computers as to basic everyday needs like food and clean water. Producing these goods and services is far more efficient today than it was twenty years ago, and the trend toward ever greater efficiency looks set to continue.

The third and what I believe to be the most important feature of progress is the increasing relevance of human capital. That is to say, understanding, knowledge, skills and creativity. Interestingly, with information moving faster and faster, with production efficiency growing, and with the availability of both knowledge and the means of production, the tables are turning.

The significance of monetary capital is diminishing in a world where you can build an app worth millions of dollars in a few months in your bedroom. Such leaps to success would not have been possible even a hundred years ago, when, owing to the market structure, just entering an existing market would have required tens or hundreds of thousands of dollars. Now a cheap laptop is enough.

The true challenge in capital of the 21st century is, I believe, the equal distribution of human capital. That is to say, guaranteeing equal rights to education and to the knowledge bases we have available to us. While monetary capital will continue to play a significant part in a capitalist market economy, I believe that with the continuing acceleration of both innovation and changes in market structure, it will increasingly be human capital that differentiates the future successes from the failures.

The take-home messages, I believe, are the following. First, we must take seriously the fact that the world of today is not the world of the past. And more so, the world of tomorrow will be something that none of us understands perfectly well. So we need humility, and cautious optimism, in facing the future.

Second, given the increase in production efficiency, we should move from measuring monetary exchange to measuring actual well-being. With more efficient production, better products are created at a fraction of the previous price. This will be reflected in slowing economic growth – but also in the increased well-being of the people consuming those better goods.

And finally, we should take seriously the fact that the capital of the 21st century is not measured in dollars, but in ideas.

We are, indeed, moving towards a post-capitalist market economy, where true added value for commerce will arise from innovation, not from applied funds.


It’s the End of the World As We Know It?

The first thing that struck me when I saw the Guardian’s apocalyptic headline and its derivatives hit social media was this:

NASA probably really didn’t fund a study on the end of the world, and the world probably is not going to end.

The title was “Nasa-funded study: industrial civilisation headed for ‘irreversible collapse’?”

First of all, no, this is not a NASA study, but rather a study that received some minor derivative funding from NASA. The media storm is embarrassing, to the point that NASA issued an official statement emphatically distancing itself from the arguments proposed in the paper.

Now, I am obviously inclined towards the kind of techno-optimism criticized in the study, so in that sense my position is far from neutral to this topic. But seriously, drawing an argument from the demise of the Roman Empire and the Han Dynasty to predict the fall of present day civilization? It’s like saying it’s stupid to think that you could get from Beijing to New York in twelve hours, since you couldn’t do that a hundred years ago.

To boot, it is even arguable that, in a very relevant sense, neither the Roman Empire nor the Han Dynasty really ended. Throughout human history, most civilizations have not in fact collapsed (in the sense of being wiped out permanently from the face of the Earth), but rather have morphed into something new – often, of course, following some dramatic cultural shifts.

The case of the sustainability of resource use is, however, a very important one. While we have come a long way, we still have a long way to go. And with developments in some areas, new problems have arisen that need to be addressed.

To this end, we need both solutions arising from technological breakthroughs, as well as changes in our mindsets regarding consuming material stuff. It is by pushing forward where we can, and holding back where we need to that we can resolve these issues.

But these are issues that we can emphatically resolve. No dynamic system moves inevitably along a mechanistic track, least of all a system composed of beings as complicated as humans.

I suppose the real historical lesson to draw regarding apocalypses is that there have always been doomsayers certain that the end is looming.

Yet here we are.


Reality is Breaking Down

Our reality is breaking down. It is becoming more virtual by the day.

In a sense, reality has always been partly virtual, at least ever since we learned to use language. By being able to reference times past, we bring them to play in the present moment.

Constructs such as national borders or even money are to a great extent more virtual than real. If we had not agreed to a complex behavioral pact, they would not exist.

But with the advent of technology, the borders between the virtual and the real are starting to blur in ways we have never seen before.

As Ray Kurzweil said in a recent article, even the telephone is a type of virtual reality: it brings a person far away from you virtually close. But the telephone is a baby step compared to what is about to shake the very foundations of our reality.

With the advent of wearable tech and augmented reality, the next generation of computing is around the corner. The scope of this leap is similar to moving from huge computers to desktops, from desktops to laptops and from laptops to mobile.

New digital layers will permeate our everyday life.

And with the advent of augmented reality, these layers will be harder and harder to tell apart from our physical world. With something like Google Glass, you can have virtual objects to manipulate.

You can have overlays such as translations displayed in real time over what you see. I tried the Spanish translator Word Lens with Google Glass. It was spooky to see an English text scramble into Spanish right in front of me.

Our lives will have more and more virtual elements. Maybe a virtual pet one day, like a real-life Tamagotchi. Or overlays displaying the very headlines you want to see, instead of the tabloid attention grabbers. Perhaps, as Vernor Vinge riffs in his novel Rainbows End, even building facades designed according to your preferences.

But the real and the virtual are merging on a far deeper level than just a digital overlay.

3D printing will also make the reverse true. Whereas wearable tech and AR bring the digital layer as an integrated part of physical experience, 3D printing will also convert parts of the digital layer into actual objects when needed.

So the road from the real to the virtual is getting shorter, as is the road from the virtual to the real.

What the combination of the two will bring, one can only speculate. Prototype cases already exist where an object has been designed using AR (i.e. simulating the physical object) and then reproduced using 3D printing. What happens when these two technologies become part of our everyday life is anybody’s guess.

However these developments do pan out, one thing is for sure. What we used to think of as reality is breaking down as we speak.

The reality of the future will be more virtual than we can imagine.


Towards a Post-Work Society

Western countries have two problems. Problems which, I suspect, may share much the same solution.

The first problem is the constantly looming economic crisis, indicated by economic troubles especially in the Southern EU and the USA. It seems that we are perpetually on the verge of an economic crisis in the West, owing mostly to the offshoring of heavy industry, to fluctuations in the financial market and to the increasingly skewed demographic structure of our nations.

The second problem is the prospect of automation in the job market. It is practically guaranteed that with the second wave of automation, a huge number of jobs will simply vanish. Just as no horse-cart drivers exist anymore, in the future we’ll have no bus drivers, service clerks or call center assistants. If a job can be replaced by a robot, it will be replaced by a robot.

The first problem is a productivity problem. If we are losing our industry, if we cannot operate in the financial market and if we are running out of able-bodied workforce, our productivity is going to tank. And at the end of the day, it is not the hours we pour into our work that create the revenue that makes our pay, but what we get done. So we need to get more done with fewer people, and with less time to do it in.

The second problem is a social and a moral problem. If we are growing towards a situation where there will simply not be enough work to go around for everybody, how should we treat those who do not get to work?

Like I said, the solution to both problems is probably the same: we need to help our people figure out what they really want to do, and we need to let them do exactly that.

In order to meet the productivity demands of the near future, we need to get more done in less time. And as studies show, people who are really into what they do get a huge amount more done than those who are not. As the ex-CTO of a major corporation said a couple of weeks ago, an enthusiastic coder can be a thousand times more productive than a frustrated one.

And if we are truly entering a post-work world, those people not working are in even more pressing need of something fun and engaging to do with their time. Right now, people without jobs can tap into welfare, at least in Scandinavia. While that may be enough to pay the bills, if unemployed people don’t find new jobs soon, they become frustrated and alienated. This frustration can, with time, create a massive social problem.

If a post-work world segregates people into valuable people who work and not-so-valuable people who don’t, we will still have a problem, even if we can fill everybody’s stomach and put roofs over their heads. But if, instead of economic success, we learned to emphasize the importance of doing interesting things, of passion, of finding one’s vocation, the situation might be different.

By taking the trouble to direct one’s passion toward an immediately pressing need, people could, in addition to working on interesting things, also boost their material well-being above the minimum provided by society. But even people who would not or could not contribute in such a way would be not merely a welfare burden, but in fact a valuable part of society in another way.

Much of innovation works like this: in order to create something new and useful, you first have to fool around with a lot of old and useless stuff. People dedicated to non-work activities might in fact massively boost the innovative capacity of the human race.

A post-work society could distribute labor so that people could tap into what truly interests them and work on that, eventually producing something of compensable value – or not. We could have generative people who are not immediately productive, and executive people who are, with the two even working in some kind of unison.

By encouraging people to work on what truly interests them, the work itself would be of value, even if it did not immediately enter the marketplace. And by this I do not mean only some intrinsic human value, but also the very bottom line. In a changing world we need to be constantly innovative to keep up with the market.

I believe that the impending productivity crisis will require us to rethink the way we work pretty soon. And while I am not entirely sure how we should start to address the moral conundrums involved in letting some people grasshopper their way through life while the ants provide, it is certainly interesting to think about.

A new world needs new perspectives. Be it a world without jobs, or a world without work.


Being Human in a Post-Human World

What happens when man and machine mix?

In the 1960s, I. J. Good proposed an upcoming intelligence explosion. Riffing on Good’s work, Vernor Vinge developed the notion further. In his famous paper on the technological singularity, Vinge proposed two alternative scenarios for an intelligence explosion: the AI, or artificial intelligence, scenario, and the IA, or intelligence amplification, scenario.

We are on the brink of this predicted explosion right now. In a matter of a few years, man-machine integration will be smooth enough to allow us to tap into vast amounts of information in our everyday life.

But it’s crazy how mundane it has become to google things in everyday situations. Yet less than ten years ago, the idea that you could check a fact straight away, on a bus or mid-conversation in a bar, was pure science fiction.

Now, we are looking at the next wave of man-machine integration: wearable technology. When technology moved from the warehouse to the desktop, the way we think changed radically. We could already amplify our intelligence a great deal with a computer in the house.

When technology moved from the desktop into the pocket, this integration deepened. Now we can do amazing things with our portable computers. Yet they have integrated into our everyday life astoundingly well. Having a Star Trek tricorder in the pocket just doesn’t seem that big a deal once you have one.

And I predict that in a couple of years, once the integration of augmented reality displays and other wearable tech has been cracked properly, having a digital overlay on our everyday life won’t feel much more special than being able to draw cash from an ATM.

The interesting thing is that all the while our collective intellectual capacity is increasing exponentially, and is about to explode into something very hard to predict, we are staying emphatically human.

I believe that the AI hypothesis, with its Terminator and Matrix corollaries, is far more imaginary than people tend to think. After all, we still hardly understand how we ourselves think. Going from this to actually building a machine that thinks requires, for the time being, at least some kind of leap of faith.

On the other hand, AI will also play a great part in our next level of thought. It too, I believe, will integrate into the cognitive whole formed by us humans and our tools. Just as a navigator or a smart search algorithm can boost our intellect, think how much more a strong AI could boost it.

In a sense, then, the AI hypothesis and the IA hypothesis may well merge in the future into something far more potent than AI by itself, much less human beings without the tools we use.

While human-like AI does present some philosophical and pragmatic problems, the IA hypothesis does not. After all, we are tool-using animals, and have been for tens of thousands of years. And with each tool, we can think better and smarter. A pen helps us pull our thoughts together. A computer helps us manage vast amounts of information. And a mobile phone helps us share that information.

Wearable tech will make all this much easier. It will ease the distribution of labor between the biological mind(s) and the digital mind(s). In a sense, as a Wired article recently put it, it will reduce the number of seconds in a day that we are confused.

We are at the brink of an intelligence explosion, and that explosion is, I am quite sure, far more human than we think. The technology we bring to play in our everyday life has followed a beautifully exponential curve for at least the last thousand years. And while the world has changed a great deal, our lives remain emphatically human.

It is the case, drawing on the ideas of Pierre Teilhard de Chardin, that as we learn to distribute labor among ourselves and our machines, we do not meld into a formless mass of drones.

Rather, our individuality is increased by this amplified collective intelligence.

With the intelligence explosion and its man-machine integration, we can all be far more human, far more ourselves, than we have ever been able to be in the entire history of humankind.


The Digital Divide

World leaders have gathered in Davos to think about the economic future of the planet. In particular, the question of income inequality gives cause for worry.

An economy where the majority of resources is isolated in the hands of the few cannot thrive. An economy is a moving, living thing. When its lifeblood is stored away in a vault, be it a digital stash or a cave guarded by a dragon, the economy stagnates.

But there is an even worse gap brewing. I mean the Digital Divide that is building up as we speak. This division of intellectual resources will also no doubt contribute to increasing income inequality.

After all, income is not generated out of thin air, but out of activity resulting in smart allocation of resources. Be they capital, gold, water or coal. Or level design for a computer game.

What I mean by the Digital Divide is the phenomenon of unequally distributed learning. Intellectual resources such as tablet universities, online scholarly databases and even children’s learning apps mean that those with access to ubiquitous digital connectivity can also constantly develop their intellectual skillset – the skillset that contributes to general success in life.

A five-year-old who can start learning algebra by playing DragonBox, a teenager who can study differential calculus on Khan Academy and a thirty-something career changer who can take a Princeton MBA-level class via Coursera are at a massive advantage compared to those whose learning is limited to the classical schooling model.

School is simply outdated in the current learning ecosystem, yet the New Learning is accessible only to a scarce few.

What causes the Digital Divide are these three elements:

  1. Lack of access to digital devices.
  2. Lack of knowledge of available digital services.
  3. Lack of understanding of how to employ those services.

We need to build an ecosystem where practically everybody has immediate (mobile) internet access. We need a system for communicating the top-notch learning services to those mobile users. And we need schooling that helps people use those services properly and sort the wheat from the chaff.

Unless we do this, and do it pretty soon, we will have a society divided in two. On one side will be the supersmart people who have been able to tap into their personally interesting fields from early on in their lives – I’m pretty sure we will shortly see an explosion of future Kurzweils and Einsteins.

But meanwhile we will have a growing number of people who have been thrown off the wagon owing to old structures not yielding fast enough to a world that is changing more rapidly every day.

Teachers resisting “bring your own device” practices or parents being wary of digital learning will leave their students and children with a legacy that I suppose will cause far more dire problems than the present income inequality.

An intellect inequality caused by the Digital Divide will at best create an economy where a great deal of people will have tremendous problems accessing the job market.

At worst, it could cause a division like the one H. G. Wells depicted in his dystopian The Time Machine: a society split between the feeble Eloi and the brutish Morlocks.

Add to this the fact that, owing to automation, in a couple of decades there will be scarce few jobs left for those not versed in complex subject matter, and one can only speculate on the scope of such dystopian visions.

The silver lining here is that unlike with income inequality, the intellect inequality is something we can deal with pretty much straight away, both as individuals and as a society.

By giving our children, our students and our adult learners access to digital devices through parental and peer support, school reform and legislation; by creating web portals collecting the best learning platforms; and by incorporating learning skills themselves (“learning to learn”) into the school curriculum, we can, at a moderate cost, prevent such a division from building up in the first place.

We might not have a future society of Einsteins, nor should we. But we can have a future society where the great majority of people can first tap into what they are truly interested in, and second develop considerable skills in those fields from early on in life.

The digital tools we have available make this possible for a growing number of people right now. To these people the future is now.

The society I believe we should be building is one where the future is eventually equally distributed.

A society where, instead of a Digital Divide, intellectual abundance touches not only the fortunate few, but the great majority of humankind – if not, indeed, eventually every single human being.


What Does the Future Look Like?

What does the future look like? Paraphrasing William Gibson, the future looks like now, only more equally distributed.

Right now, we are tapping into an increasing capacity of processing and managing information. Mobile devices, big data, social media, wearable computing, augmented reality. You name it, it’s giving your cognition a boost.

Vernor Vinge proposed in a 1993 paper that we are looking at two possible scenarios for a massive acceleration in the advance of technology. The AI hypothesis builds on Moore’s law and presumes that at some point our tools will become so smart that they can develop smart tools themselves, at which point we will see an almost immediate explosion in computing capacity.

An intelligence explosion.

But the more interesting of Vinge’s hypotheses, I think, is the IA hypothesis: Intelligence Amplification. It is more interesting because AI is still a trouble-ridden concept both practically and philosophically, whereas intelligence amplification is something we already have in our hands.

Is Google making us stupid? Emphatically, no. And the studies back this up too.

Yes, Google is changing the way we process information, and it is changing our brains. But for the better.

By outsourcing the management of trivia to Google, we have been able to release cognitive capacity that was previously tied up with trifles. We have become more innovative.

And what with the increasingly fast dispersion and utilization of information (think social media and big data), the future of man-machine integration won’t be limited to just looking up tidbits online.

I am pretty confident we still need a mushy electrochemical component in the intelligence explosion. The brain is a nexus in an ever-growing, ever-integrating network of information, a network better optimized every day for our actual practical needs.

As our tools keep getting better and as we learn to better optimize how the biological and the digital minds work together, we will be looking at an amazing world, a world that despite all of the technological advance will be an emphatically human world.

It’s a world that is already here. It’s just not yet evenly distributed.


Extended Mind and Thinking Creatively

by Petro Poutanen

Based on our recent contemplations, we concluded that the extended mind does not do very well at creative thinking. Machines are notoriously bad at it. For computers, “being creative” is of course one of the fundamental challenges on the way towards human-like computer intelligence, or so-called artificial intelligence (AI). Back in 1995, a research group named the Fluid Analogies Research Group explored ways in which human intelligence could be replicated through computer algorithms and modeling. They suggested that making analogies is one of the fundamental mechanisms by which the human mind solves problems creatively.

One of the most interesting outcomes of this project was a program called Copycat. As the brain can be described as a complex system, Copycat models human cognition in the same way: as a complex system consisting of a group of individual agents operating with no centralized control and collectively producing emergent properties. Analogies are what we need when linking things together at some abstract, conceptual level. Humor, for instance, is based on analogies. This example comes from the Writing English blog: “Her vocabulary was as bad as, like, whatever.” Obviously, the humor in this sentence is completely understandable to anyone who knows some English. But how about a computer? How could it work that out by computing?

At the moment, we are able to fool computers even with the most elementary analogies, as with the letter-recognition tests on website registration forms used to prevent attacks by web bots. According to Copycat’s developers, recognizing such “fluid” similarity would be overwhelming for a computer, because there is no single reliable clue in the picture indicating that it is a letter. According to the programmers, the key to making analogies is “conceptual slipping” in response to perceived contextual changes. The program was developed for solving letter-string problems (if abc => abd, then ijk => ?). It comprises three elements: a long-term memory of various degrees of abstraction in the form of an evolving network, a short-term memory module for calculating and evaluating different structures, and a collection of pieces of raw material, each with an individual probability weight determining its chance of being selected. The conclusion was that Copycat could mimic human behavior by finding the most adjacent solutions while being more “satisfied” with remote – in other words, more “creative” – ones.
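To make the letter-string task concrete, here is a deliberately naive sketch in Python. This is emphatically not Copycat – there is no evolving network, no short-term workspace, no probability-weighted selection, no conceptual slipping – just the most literal rule possible: find which positions changed, and replay the same alphabetic shift.

```python
def solve_analogy(src: str, dst: str, probe: str) -> str:
    """If src changes to dst, apply the same literal change to probe.

    Handles only positionwise alphabet shifts, e.g. 'c' -> 'd' is +1.
    """
    result = list(probe)
    for i, (a, b) in enumerate(zip(src, dst)):
        if a != b:
            shift = ord(b) - ord(a)  # how far the letter moved in the alphabet
            result[i] = chr(ord(probe[i]) + shift)
    return "".join(result)

print(solve_analogy("abc", "abd", "ijk"))  # -> ijl
print(solve_analogy("abc", "abd", "xyz"))  # -> xy{ (walks off the end of the alphabet)
```

The second call shows exactly the brittleness the Copycat group was interested in: the literal rule marches past z into punctuation, whereas a human – and Copycat, via conceptual slipping – would prefer an answer such as wyz or xyd.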

Although Copycat can behave in a psychologically plausible way, the problem is that such a program can only work in the predetermined context for which it was originally programmed. It cannot solve problems from an unknown conceptual world. For example, to produce a funny analogy akin to the one mentioned above, it would need to know the English language, have a sense of humor (“what is considered funny?”) and know some unofficial conventions of spoken language. At a theoretical level, it would need to be able to learn from its environment and generate its own meta-languages explaining the underlying rules of a given situation – social and cultural codes, norms, the use of language, visual and audible representations and related emotions, and so on – all of which would be constantly evolving and contextually varying. This is why the extended mind does not work without the network of human intelligence giving it the required “sense” and “meaning”. Therefore, it might well be that instead of artificial intelligence, the social singularity – using extended-mind technologies – is the next “big thing” in problem solving and other intelligent endeavors.


Interesting Links on EM and Social Singularity

Bees Solve Complex Problems Faster than Supercomputers:

This is directly relevant to both swarm intelligence and AI. Also bears links to embodied cognition.


Getting Things Done: The Science Behind Stress-Free Productivity:

A very interesting paper on the science of the productivity method GTD, embodied cognition, and swarm intelligence. The concept of stigmergy links nicely with the article above. See also Olli’s post on ants.



A Finnish company dedicated to bringing about a new level of crowdsourcing. Once again, links to the above.


Daily Crowdsource

An interesting source for the latest in crowdsourcing.


Top Ten Mobile Trends

Mobile is reaching critical mass as we speak. How soon can we integrate all this connectivity into working information-sharing services?


Facebook’s Questions

Can Facebook’s questions platform grow to be the social singularity?


Brain-Computer Implants

On the tech side, some interesting developments.


Human Exoskeletons

This too shows some promising man-machine integration paths.



How the Social Singularity Makes Creative Collaboration Possible

by Petro Poutanen

The collaborative capacity of the Web might bring about a new era of human intelligence: the social singularity. The social singularity refers to collective human intelligence enabled by a huge number of interconnected individuals. Imagine the possibilities of the enormous information pool that millions of web users comprise. Examples include Wikipedia and, more recently, Aardvark, which provides an extended social network for answering people’s unique questions. Firms, too, are seeking ways to benefit from this collaborative capacity. For example, a firm called InnoCentive provides a common platform for companies looking for solutions and people willing to solve companies’ problems. What is important here is that people are contributing creative outcomes without centrally planned organizations. So the question goes: how does the emergence of the social singularity make collaborative creativity possible?

At the moment, we lack a common, coherent theory of network-based collaboration, though we have multiple terms for the phenomenon (crowdsourcing, collective intelligence, open innovation, the wisdom of the crowd – to mention but a few). Clearly, we are talking about some kind of “self-organizing” – uncoordinated behavior of a large mass that collectively produces something coherent and cogent – yet we don’t know whether a single logic lies behind the different kinds of self-organizing systems, such as ant colonies and human brains.

I have tried to figure out how to describe a system of collaborative creativity online. What happens when people solve problems collaboratively? I have come to think about this as a system of creativity. The famous systems model of creativity suggested by Mihaly Csikszentmihalyi consists of three parts: the cultural domain, the field of experts and the individual. For creativity to emerge, the individual must produce a novel variation of cultural information, which is then selected by the field for inclusion in the cultural domain. Thus, creativity is the product of all the constituent parts of the system and emerges from their interplay.

What would this model look like in a collaborative online environment? First of all, the creator (the individual) and the evaluator (the field of experts) can be the same person. A person participating in a programming project might contribute a single line of code and simultaneously, by that very same contribution, act as a peer to another contributor by suggesting a modification to that participant’s code. So, in a collaborative field, anyone can be an expert and a creator at the same time. Secondly, we have knowledge that on the one hand belongs to the cultural background of a participant, and on the other hand to a field of continuous negotiation.

This perspective reveals the dual nature of both the contributor and the knowledge in a collaborative field: the contributor as “creator” and “modifier”, and the knowledge as “cultural” and “shared/negotiated”. And when thinking about this collaboration at the systemic level, it is the group in collaboration that decides whether a variation produced by an individual (on the basis of the collective work) is selected or not. The picture below illustrates these dynamics.

Figure 1. A model for creative collaboration in an online environment. The process starts when a problem or a need for change occurs. An individual contributor proposes a solution to it in the form of a variation drawn from the cultural knowledge. Subsequently, other participants act as peers to the proposed variation and give feedback to the contributor. The variation is then modified, if necessary, and finally selected. After the requisite amount of iteration, the final solution is moved into the area of cultural knowledge.
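The loop described in Figure 1 can be sketched as a toy simulation. Everything here is a stand-in of my own invention, not a model from the literature: variations are plain numbers, a fixed threshold stands in for the group’s collective judgment, and peer feedback simply nudges a proposal upward until it passes.

```python
import random

random.seed(1)

cultural_knowledge = [1.0]  # the shared pool contributors draw from
THRESHOLD = 2.0             # stand-in for the group's acceptance bar

def propose(pool):
    """An individual draws on cultural knowledge and adds a novel variation."""
    return random.choice(pool) + random.uniform(0.0, 1.0)

def peer_feedback(variation):
    """Peers suggest modifications that incrementally improve the proposal."""
    return variation + random.uniform(0.0, 0.5)

for _ in range(10):
    v = propose(cultural_knowledge)
    while v < THRESHOLD:          # feedback -> modification -> re-evaluation
        v = peer_feedback(v)
    cultural_knowledge.append(v)  # the selected variation joins the pool

print(len(cultural_knowledge))    # -> 11: one selected variation per round
```

The point of the sketch is structural rather than numerical: selected variations feed back into the cultural pool, so later proposals start from a richer base – the same circularity the figure describes between individual, peers and cultural knowledge.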