
Human Capital in the Twenty-First Century: Piketty, Progress and the Distribution of Wealth

Thomas Piketty’s magnum opus Capital in the Twenty-First Century made a huge splash, especially after the publication of its English translation last spring. The bombshell argument of the book, supported by a massive amount of data and analysis, is twofold. First, income inequality is increasing. And second, this will propel the economy back to a patrimonial capitalism dominated by inherited wealth.

Piketty recently gave a wonderful talk at the London School of Economics about his book. One thing that struck me at the very beginning of the talk was how much he emphasized his book’s reliance on history. While I cannot argue with Piketty’s economic argument, which I, along with many of his supporters, believe to be mostly correct, I do believe that its applicability to the present-day world is somewhat suspect. The reason for this is simple: progress.

I believe that, with that fancy academic two-word disclaimer, ceteris paribus, Piketty is correct. That is to say, all other things remaining the same, as income inequality grows, capital will accumulate in clusters of already wealthy families and individuals. But I believe the situation is complex enough to warrant further scrutiny.
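For reference, the engine behind that claim can be put in a single inequality. This is my gloss on Piketty’s well-known r > g relation, not a formula quoted from the talk:

```latex
% Piketty's central relation (my gloss, not quoted from the book):
% r = average net return on capital, g = growth rate of the economy.
% If r > g, fortunes compound faster than incomes, so the
% wealth-to-income ratio keeps rising.
\[
  W_t = W_0 (1+r)^t, \qquad
  Y_t = Y_0 (1+g)^t, \qquad
  r > g \;\Longrightarrow\; \frac{W_t}{Y_t} \text{ rises without bound.}
\]
```

In other words, absent shocks or redistribution, inherited wealth mechanically gains ground on earned income.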

First, the global economy appears to be in flux, with structural changes happening faster each year. The dispersion of information, and consequently the creation of new innovations (which in turn disperse information faster), creates a spiralling motion in which the structure of the supply and demand of goods keeps shifting ever faster. Applying historical evidence in a world in such turmoil is perhaps not the best strategy. Modelling the market on the patrimonial capitalism of the 19th century may not be viable at all, even if capital itself does accumulate to wealthy individuals. Having lots of money is not worth much if most of it is tied up in sinking assets.

Second, there is the question of whether income inequality is a bad thing. Of course, I would not argue against the massive amount of correlative evidence showing that in countries with more dramatic income inequality, the general well-being of the people is also worse. But there is more to well-being than just income.

Interestingly, while income inequality has grown in the last few decades, so has the overall wealth of people. While capital has accumulated to the wealthiest individuals, absolute poverty has also been halved from where it stood twenty years ago. While the comparative wealth gap has grown, absolute wealth has been distributed more widely.

I have been a big critic of Adam Smith’s idea of the invisible hand, but in a sense something like that is happening here. The reason, though, is not crumbs of capital falling from the super-rich to the poor, but, once again, progress, and the consequent increase in the efficiency of producing goods and services.

The luxury of yesterday is the norm of today. This applies as much to luxury goods like TV sets and computers as to basic everyday needs like food and clean water. Producing these goods and services is far more efficient today than it was twenty years ago, and the trend towards even greater efficiency looks good.

The third, and I believe the most important, feature of progress is the increasing relevance of human capital: understanding, knowledge, skills and creativity. Interestingly, with information moving faster and faster, with production efficiency growing, and with both knowledge and the means of production becoming widely available, the tables are turning.

Monetary capital is becoming less important in a world where you can build an app worth millions of dollars in a few months in your bedroom. Such leaps to success would not have been possible even a hundred years ago, when, owing to the market structure, just entering an existing market would have required tens or hundreds of thousands of dollars. Now it’s enough that you have a cheap laptop.

The true challenge of capital in the 21st century is, I believe, the equal distribution of human capital. That is to say, guaranteeing equal rights to education and to the knowledge bases we have available to us. While monetary capital will continue to play a significant part in a capitalist market economy, I believe that with the continuing acceleration of both innovation and structural change in the market, it will increasingly be human capital that differentiates the future successes from the failures.

The take-home messages, I believe, are the following. First, we must take seriously the fact that the world of today is not the world of the past. More to the point, the world of tomorrow will be something that none of us understands perfectly well. So we need humility, and careful optimism, in facing the future.

Second, given the increase in production efficiency, we should move from measuring monetary exchange to measuring actual well-being. With more efficient production, better products are created at a fraction of their previous price. This will be reflected in slowing economic growth – but also in the increased well-being of the people consuming the better goods.

And finally, we should take seriously the fact that the capital of the 21st century is not measured in dollars, but in ideas.

We are, indeed, moving towards a post-capitalist market economy, where the true added value in commerce will arise from innovation, not from invested funds.


It’s the End of the World As We Know It?

The first thing that struck me when I saw the apocalyptic Guardian headline and its derivatives hit social media was this:

NASA probably really didn’t fund a study on the end of the world, and the world probably is not going to end.

The title was “Nasa-funded study: industrial civilisation headed for ‘irreversible collapse’?”

First of all, no, this is not a NASA study, but rather a study that received some minor derivative funding from NASA. The media storm was embarrassing to the point that NASA issued an official statement emphasizing that it does not support the arguments proposed in the paper.

Now, I am obviously inclined towards the kind of techno-optimism criticized in the study, so in that sense my position is far from neutral on this topic. But seriously, drawing an argument from the demise of the Roman Empire and the Han Dynasty to predict the fall of present-day civilization? It’s like saying it’s stupid to think you could get from Beijing to New York in twelve hours, since you couldn’t do that a hundred years ago.

To boot, it is even arguable that in a very relevant sense neither the Roman Empire nor the Han Dynasty really ended. Throughout human history, most civilizations have not in fact collapsed (in the sense of being wiped out permanently from the face of the Earth); rather, they have morphed into something new – often, of course, following some dramatic cultural shifts.

The question of the sustainability of resource use is, however, a very important one. While we have come a long way, we still have a long way to go. And with developments in some areas, new problems have arisen that need to be addressed.

To this end, we need both solutions arising from technological breakthroughs and changes in our mindset regarding material consumption. It is by pushing forward where we can, and holding back where we need to, that we can resolve these issues.

But these are issues that we emphatically can resolve. No dynamic system moves inevitably along a mechanistic track, least of all a system composed of beings as complicated as humans.

I suppose the real historical lesson you should draw regarding apocalypses is that there have always been doomsayers certain of the looming apocalypse.

Yet here we are.


Reality is Breaking Down

Our reality is breaking down. It is becoming more virtual by the day.

In a sense, reality has always been partly virtual, at least ever since we learned to use language. By being able to reference times past, we bring them into play in the present moment.

Constructs such as national borders or even money are to a great extent more virtual than real. If we had not agreed to a complex behavioral pact, they would not exist.

But with the advent of technology, the borders between the virtual and the real are starting to blur like never before.

As Ray Kurzweil said in a recent article, even the telephone is a type of virtual reality. It brings a person far away from you virtually close. But the telephone is a baby step compared to what is about to shake the very foundations of our reality.

With the advent of wearable tech and augmented reality, the next generation of computing is around the corner. The scope of this leap is similar to moving from huge computers to desktops, from desktops to laptops and from laptops to mobile.

New digital layers will permeate our everyday life.

And with the advent of augmented reality, these layers will be harder and harder to tell apart from our physical world. With something like Google Glass, you can have virtual objects to manipulate.

You can have overlays such as translations displayed in real time over what you see. I tried the Spanish translator Word Lens with Google Glass. It was spooky to see an English text scramble into Spanish right in front of me.

Our lives will have more and more virtual elements. Maybe a virtual pet one day, like a real-life Tamagotchi. Or overlays displaying the very headlines you want to see, instead of the tabloid attention grabbers. Perhaps, as Vernor Vinge riffs in his novel Rainbows End, even building facades designed according to your preferences.

But the real and the virtual are merging on a far deeper level than just a digital overlay.

3D printing will also make the reverse true. Whereas wearable tech and AR bring the digital layer in as an integrated part of physical experience, 3D printing will convert parts of the digital layer into actual objects when needed.

So the road from the real to the virtual is getting shorter, as is the road from the virtual to the real.

What the combination of the two will bring, one can only speculate. Already there are prototype cases where an object has been designed using AR (i.e. simulating the physical object) and then reproduced using 3D printing. What happens when these two technologies become part of our everyday life is anybody’s guess.

However these developments do pan out, one thing is for sure. What we used to think of as reality is breaking down as we speak.

The reality of the future will be more virtual than we can imagine.


Towards a Post-Work Society

Western countries have two problems. Problems which, I suppose, may have quite a similar solution.

The first problem is the constantly looming economic crisis, signalled by economic troubles especially in the southern EU and the USA. It seems that we are constantly on the verge of an economic crisis in the West, owing mostly to the offshoring of heavy industry, to the fluctuations of the financial market and to the increasingly skewed demographic structure of our nations.

The second problem is the prospect of automation in the job market. It is practically guaranteed that with the second wave of automation, a huge number of jobs will simply vanish. Just as there are no horse-cart drivers anymore, in the future we’ll have no bus drivers, service clerks or call center assistants. If a job can be replaced by a robot, it will be replaced by a robot.

The first problem is a productivity problem. If we are losing our industry, if we cannot operate in the financial market and if we are running out of an able-bodied workforce, our productivity is going to tank. And at the end of the day, it is not the hours we pour into our work that create the revenue that makes our pay, but what we get done. So we need to get more done with fewer hands, and with less time to do it in.

The second problem is a social and a moral problem. If we are growing towards a situation where there will simply not be enough work to go around for everybody, how should we treat those who do not get to work?

As I said, the solution to both problems is probably the same: we need to help our people figure out what they really want to do, and we need to let them do exactly that.

In order to meet the productivity demands of the near future, we need to get more done in less time. And as studies show, people who are really into what they do get a huge amount more done than those who are not. As the ex-CTO of a major corporation said a couple of weeks ago, an enthusiastic coder can be a thousand times more productive than a frustrated one.

And if we are truly entering a post-work world, the people not working have an even more pressing need to figure out something fun and engaging to do with their time. Right now, people without jobs can tap into welfare, at least in Scandinavia. While that may be enough to pay the bills, if unemployed people don’t find new jobs soon, they become frustrated and alienated. This frustration can, with time, create a massive social problem.

If a post-work world segregates people into the valuable people who work and the not-so-valuable who don’t, we’ll still have a problem, even if we can fill everybody’s stomach and put a roof over their heads. But if, instead of economic success, we learned to emphasize the importance of doing interesting things, of passion, of finding one’s vocation, the situation might be different.

By going to the trouble of directing their passion towards an immediately pressing need, people could, in addition to working on interesting things, also boost their material well-being above the minimum provided by society. But even people who would not or could not contribute in such a way would not merely be a welfare burden; they would in fact be a valuable part of society in another way.

Much of innovation works like this: in order to create something new and useful, you first have to fool around with a lot of old and useless stuff. People dedicated to non-work activities might in fact massively boost the innovative capacity of the human race.

A post-work society could distribute labor so that people could tap into what truly interests them and work on that, eventually either producing something of marketable value or not. We could have generative people who are not immediately productive, and executive people who are, with the two even working in some kind of unison.

If we encouraged people to work on what truly interests them, the work itself would be of value, even if it did not immediately enter the marketplace. And by this I do not mean only some intrinsic human value, but also the very bottom line. In a changing world we need to be constantly innovative to keep up with the market.

I believe that the impending productivity crisis will require us to rethink the way we work pretty soon. And while I am not entirely sure how we should start to address the moral conundrums involved in letting some people grasshopper their way through their lives while the ants provide, it is certainly interesting to think about.

A new world needs new perspectives. Be it a world without jobs, or a world without work.


Being Human in a Post-Human World

What happens when man and machine mix?

In the 1960s, I. J. Good proposed that an intelligence explosion was coming. Riffing on Good’s work, Vernor Vinge developed the notion further. In his famous paper on the technological singularity, Vinge proposed two alternative scenarios for an intelligence explosion: the AI, or artificial intelligence, scenario, and the IA, or intelligence amplification, scenario.

We are at the brink of this predicted explosion right now. In a matter of a few years, man-machine integration will be smooth enough to allow us to tap into vast amounts of information in our everyday life.

Already, it’s crazy how mundane it has become to be able to google things in everyday situations. Yet less than ten years ago, the idea that you could check a fact straight away on a bus or in a bar conversation was pure science fiction.

Now, we are looking at the next wave of man-machine integration: wearable technology. When technology moved from the warehouse to the desktop, the way we think changed radically. We could already amplify our intelligence a great deal with a computer in the house.

When technology moved from the desktop into the pocket, this integration deepened. Now we can do amazing things with our portable computers. Yet they have integrated into our everyday life astoundingly well. Having a Star Trek tricorder in the pocket just doesn’t seem that big a deal once you have one.

And I predict that in a couple of years, once the integration of augmented reality displays and other wearable tech has been cracked properly, having a digital overlay on our everyday life won’t feel much more special than being able to draw cash from an ATM.

The interesting thing is that, all the while our collective intellectual capacity is increasing exponentially, about to explode into something very hard to predict, we are staying emphatically human.

I believe that the AI hypothesis, with its Terminator and Matrix corollaries, is far more imaginary than people tend to think. After all, we still hardly understand how we ourselves think. Going from this to actually building a machine that thinks requires, for the time being at least, some kind of leap of faith.

On the other hand, AI will also play a great part in our next level of thought. It too, I believe, will integrate into the cognitive whole formed by us humans and our tools. Just as a navigator or a smart search algorithm can boost our intellect, think how much more a strong AI could boost it.

In a sense, then, the AI hypothesis and the IA hypothesis may well merge in the future, into something that is far more potent than AI by itself, much less human beings without the tools we use.

While human-like AI does present some philosophical and pragmatic problems, the IA hypothesis does not. After all, we are tool-using animals, and have been for tens of thousands of years. And with each tool, we can think better and smarter. A pen helps us pull our thoughts together. A computer helps us manage vast amounts of information. And a mobile phone helps us share that information.

Wearable tech will make all this much easier. It will ease the distribution of labor between the biological mind(s) and the digital mind(s). In a sense, as a Wired article recently put it, it will reduce the number of seconds in a day that we are confused.

We are at the brink of an intelligence explosion, and that explosion is, I am quite sure, far more human than we think. The technology we bring into play in our everyday life has followed a beautifully exponential curve for at least the last thousand years. And while the world has changed a great deal, our lives remain emphatically human.

It is the case, drawing from the ideas of Pierre Teilhard de Chardin, that as we learn to distribute labor among ourselves and our machines, we do not meld into a formless mass of drones.

Rather, our individuality is increased by this amplified collective intelligence.

With the intelligence explosion and its man-machine integration, we can all be far more human, far more ourselves, than we have ever been able to be in the entire history of humankind.


The Digital Divide

World leaders have gathered in Davos to think about the economic future of the planet. In particular, the question of income inequality gives cause for worry.

An economy where the majority of resources is isolated in the hands of the few cannot thrive. An economy is a moving, living thing. When its lifeblood is stored in a vault, be it a digital stash or a cave guarded by a dragon, the economy stagnates.

But there is an even worse gap brewing: the Digital Divide that is building up as we speak. This division of intellectual resources will no doubt also contribute to increasing income inequality.

After all, income is not generated out of thin air, but out of activity resulting in the smart allocation of resources, be they capital, gold, water or coal. Or level design for a computer game.

What I mean by the Digital Divide is the phenomenon of unequally distributed learning. Intellectual resources such as tablet universities, online scholarly databases and even children’s learning apps mean that those with access to ubiquitous digital connectivity also have the means to constantly develop their intellectual skillset – the skillset that will also contribute to general success in life.

A five-year-old who can start learning algebra by playing DragonBox, a teenager who can study differential calculus at Khan Academy and a thirty-something career changer who can take a Princeton MBA-level class via Coursera are at a massive advantage compared to those whose learning is limited by the classical schooling model.

School is simply outdated in the current learning ecosystem, yet the New Learning is accessible only to a scarce few.

The Digital Divide is caused by three elements:

  1. Lack of access to digital devices.
  2. Lack of knowledge of digital services.
  3. Lack of understanding of how to employ those services.

We need to build an ecosystem where practically everybody has immediate (mobile) internet access. We need a system for communicating the top-notch learning services to those mobile users. And we need schooling that helps people use those services properly and sort the wheat from the chaff.

Unless we do this, and unless we do it pretty soon, we will have a society divided in two. On one side will be the supersmart, who have been able to tap into the fields that truly interest them from early on in their lives; I’m pretty sure we will shortly see an explosion of future Kurzweils and Einsteins.

But meanwhile we will have a growing number of people who have been thrown off the wagon, owing to old structures not yielding fast enough to a world that is changing more rapidly every day.

Teachers resisting “bring your own device” practices or parents wary of digital learning will leave their students and children with a legacy that, I suppose, will cause far more dire problems than present income inequality does.

An intellect inequality caused by the Digital Divide will, at best, create an economy where a great number of people have tremendous problems accessing the job market.

At worst, it can cause a division into something like the one H. G. Wells predicted in his dystopian The Time Machine: a society of smart but weak Eloi, direly contrasted with the aggressive but slow-thinking Morlocks.

Add to this the fact that, owing to automation, in a couple of decades there will be scarce few jobs left for those not versed in complex subject matter, and one can only speculate on the scope of such dystopian visions.

The silver lining here is that, unlike income inequality, intellect inequality is something we can deal with pretty much straight away, both as individuals and as a society.

By giving our children, our students and our adult learners access to digital devices through parental and peer support, school reform and legislation; by creating web portals collecting the best learning platforms; and by incorporating learning skills themselves (“learning to learn”) into the school curriculum, we can, at a moderate cost, prevent such a division from building up in the first place.

We might not have a future society of Einsteins, nor should we. But we can have a future society where the great majority of people can first tap into what they are truly interested in, and second develop considerable skills in those fields from early on in life.

The digital tools we have available make this possible for a growing number of people right now. To these people the future is now.

The society I believe we should be building is one where the future is eventually equally distributed.

A society where, instead of a Digital Divide, intellectual abundance touches not only the fortunate few but the great majority of humankind – if not, indeed, eventually every single human being.


What Does the Future Look Like?

What does the future look like? Paraphrasing William Gibson, the future looks like now, only more equally distributed.

Right now, we are tapping into an increasing capacity of processing and managing information. Mobile devices, big data, social media, wearable computing, augmented reality. You name it, it’s giving your cognition a boost.

Vernor Vinge proposed in a 1993 paper that we are looking at two possible scenarios for massively accelerating the advance of technology. The AI hypothesis is based on Moore’s law and presumes that at some point our tools will become so smart that they can develop smarter tools themselves, at which point we will see an almost immediate explosion in computing capacity.

An intelligence explosion.
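To make the arithmetic behind that claim concrete (this is my illustration, not a calculation from Vinge’s paper):

```latex
% A rough Moore's-law sketch (my numbers, not from Vinge):
% capacity C after t years, with a fixed doubling period of d years.
\[
  C(t) = C_0 \cdot 2^{t/d}, \qquad
  \text{e.g. } d = 2 \text{ years} \;\Rightarrow\; C(20) = 2^{10} C_0 = 1024\, C_0 .
\]
```

The explosion scenario is precisely the case where smarter tools start shrinking d itself, so the growth compounds on its own compounding.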

But the more interesting of Vinge’s hypotheses, I think, is the IA hypothesis: Intelligence Amplification. It is more interesting because AI is still a trouble-ridden concept, both practically and philosophically, whereas intelligence amplification is something we already have in our hands.

Is Google making us stupid? Emphatically, no. And the studies back this up too.

Yes, Google is changing the way we process information, and it is changing our brains. But for the better.

By outsourcing the management of trivia to Google, we have been able to release cognitive capacity that was previously tied up with trifles. We have become more innovative.

And with the increasingly fast dispersion and utilization of information (think social media and big data), the future of man-machine integration won’t be limited to just looking up tidbits online.

I am pretty confident we still need a mushy electrochemical component in the intelligence explosion. The brain is a nexus in an ever-growing, ever-integrating network of information, a network optimized better every day for our actual practical needs.

As our tools keep getting better, and as we learn to better optimize how the biological and the digital minds work together, we will be looking at an amazing world, a world that despite all the technological advances will be an emphatically human world.

It’s a world that is already here. It’s just not yet evenly distributed.
