20150209

I'm with stupid


A couple of days ago, it took several humans to rescue a Korean lady who had been savagely assaulted by her robot vacuum cleaner. 




Some may consider this incident to be the first post-singularity crime, because the machine was obviously gifted with an intelligence superior to that of its owner (yes, robot cleaners are great and long hair is great, but even if ondol is great, you simply don’t sleep with the latter in the path of the former).



Still, singularity is not for today. And if we’re not there yet, that’s also because stupidity is precisely what makes humans superior to machines. But we’ll have to get smarter. Not in order to win the evolution race against a more competitive opponent, but in order not to lose ourselves.




In this transitional period, humans keep ruling because they’re stupid


Making mistakes is essential in any learning process, and humans will remain superior to machines as long as machines can’t cope alone with major disruptions, or spot and fix issues without rules and decisions based on human expertise or judgment. I don’t mean bugs, basic problems, or even simple loopholes, but issues that truly require cognitive leaps to identify and brand-new approaches to solve.

One of the beauties of computing is that you can easily track back decisions, and implement and improve rules. But for the moment, most learning systems keep exposing our own flaws, because they are not truly self-learning, and ultimately rely on human expertise. I once fell for their devilish attractiveness: our information systems needed a scoring solution, and here was this new, agile, future-proof, and ego-flatteringly smart concept waiting to be toyed with. But we quickly understood that humans were the weaker link: at that early technological stage, making sure the gizmo remained relevant would demand rules that no organization would ever be able to maintain on a sustainable basis. Because our dream of simplified decision-making processes promised to be the mother of all Rube Goldberg machines, we opted for an off-the-shelf solution that did the trick for a fraction of the cost. Yes, neural networks will unlock new realms for innovation, but back then we simply couldn’t rely only on them to run our whole business. From a risk management perspective, that’s a no-brainer.
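To make the Rube Goldberg point concrete, here is a minimal sketch of a hand-maintained scoring function; the credit-style use case, field names, and thresholds are my own illustrative assumptions, not the actual system from this anecdote. Every rule encodes a human judgment that someone has to keep revisiting as the business drifts.

```python
# Illustrative only: a "smart" scorer that in practice rests on hand-maintained rules.
# Field names and thresholds are hypothetical; the point is the maintenance burden.

def score_applicant(profile: dict) -> float:
    """Return a risk score in [0, 1]; higher means riskier."""
    score = 0.5  # neutral starting point, chosen by a human

    # Rule 1: owned by one team, reviewed only when someone remembers to.
    if profile.get("missed_payments", 0) > 2:
        score += 0.3

    # Rule 2: added after a single incident, never re-validated since.
    if profile.get("account_age_months", 0) < 6:
        score += 0.1

    # Rule 3: an exception to the exceptions; the Rube Goldberg machine grows.
    if profile.get("vip", False) and profile.get("missed_payments", 0) <= 4:
        score -= 0.2

    return max(0.0, min(1.0, score))


if __name__ == "__main__":
    # Late payments on a young account: the human-written rules push the score up.
    print(score_applicant({"missed_payments": 3, "account_age_months": 4}))
```

The off-the-shelf alternative wins here not because any single rule is wrong, but because each one quietly creates a maintenance commitment that somebody has to honor forever.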

Of course, we remain stupid animals, and financial organizations keep using algorithmic trading to bang their heads against the Wall with greater force and speed every day…

Still, we cannot afford to keep developing Artificial Intelligence in machines and Real Stupidity in ourselves at the same time.

‘Machines smarter than humans’ must not mean ‘machines out of human reach’


In singularity times, machines will be able to take more complex decisions, to better learn from indirect experiences, to invent radically disruptive rules and programs, to be more proactive. What will be the role of humans in these processes? More often pupils than teachers? More often followers than leaders? More often puppets than puppeteers? On which side of the food chain? Will humans become mere servers, accepting requests from machine-clients that only need them to perfect their own capacity to deal with imperfection? Will we become the only true ‘terminals’ in a pervasively connected world or, at the other extreme, become things among other things in the almighty yet hackable internet of things? In any case, you don’t want Web 3.0 to be controlled by a happy few. It is essential to guarantee transparency, democracy, trust.

More than ever, humans will have to learn how to think by themselves, without machines. We must learn to understand and challenge our technological environments, and to communicate with each other, not only through machines.

Yet of course, we must also learn to communicate with machines, to get inside their brains as much as they’ll get inside ours. Programming must not remain a foreign language mastered by a minority, and machines must not become black boxes. Because computers will first be modeled after our brains, then improved from there, we will need increasingly simple interfaces to cope with increasingly complex systems.

We will spend more time with machines, including, perhaps, more time serving them. Information Services already rationalize human resources by project, calculating man-hour units that often measure time devoted to a human-machine dialog. And today, we already measure the time individuals spend on average in front of a screen (TV, smartphone, laptop…), without distinguishing between active and passive periods. Tomorrow, we might need to track the time spent on active contributions, or to count inputs that may also become sources of income. And that’s without counting the invisible contributions we’ll all make as we move through always-on environments that evolve by learning from our very behaviors.
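As a purely illustrative sketch of that metric shift (the event names and the 60-second ‘activity window’ are my assumptions, not anything from an existing product), active contribution time could be tallied from an input log rather than from raw screen time:

```python
# Hypothetical sketch: credit only the time around active inputs,
# instead of counting every minute spent in front of a screen.

from datetime import datetime, timedelta

ACTIVE_EVENTS = {"click", "keypress", "post", "upload"}   # assumed event names
ACTIVITY_WINDOW = timedelta(seconds=60)                   # time credited per input

def active_time(events: list[tuple[str, datetime]]) -> timedelta:
    """Sum time spent actively contributing, merging overlapping windows."""
    stamps = sorted(t for name, t in events if name in ACTIVE_EVENTS)
    total, window_end = timedelta(0), None
    for t in stamps:
        start = t if window_end is None or t > window_end else window_end
        window_end = t + ACTIVITY_WINDOW
        total += window_end - start
    return total

if __name__ == "__main__":
    now = datetime(2015, 2, 9, 9, 0)
    log = [("scroll", now),                                  # passive: not counted
           ("click", now + timedelta(minutes=1)),
           ("keypress", now + timedelta(minutes=1, seconds=30)),
           ("click", now + timedelta(minutes=30))]
    print(active_time(log))  # ~2.5 minutes of contribution, not 30 minutes of screen time
```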

Our role won’t be just to help machines get smarter, but to constantly challenge the decisions taken and rules set by supposedly superior intelligences, evaluating how far is too far, and proposing simple ways of activating or deactivating key functionalities. The value will lie less in designing or debugging algorithms than in questioning their very purpose. We will enjoy a lot of fancy autopilot systems, but we should remain the captain on board.
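A minimal human-in-the-loop sketch of that ‘captain on board’ idea, with entirely hypothetical names and no real autopilot API behind it: the machine proposes, but a trivial on/off switch and a human veto stay in the loop.

```python
# Illustration only: the "autopilot" proposes, a kill switch and a human veto dispose.

class Autopilot:
    def __init__(self):
        self.enabled = True  # the simple on/off the text argues for

    def propose(self, situation: str) -> str:
        # Stand-in for whatever clever decision the machine would make.
        return f"auto-action for {situation!r}"

def decide(autopilot: Autopilot, situation: str, human_approves) -> str:
    """The human remains the captain: nothing executes without a yes."""
    if not autopilot.enabled:
        return "manual control"
    proposal = autopilot.propose(situation)
    return proposal if human_approves(proposal) else "manual control"

if __name__ == "__main__":
    ap = Autopilot()
    print(decide(ap, "lane change", human_approves=lambda p: True))
    ap.enabled = False  # deactivating the key functionality stays trivial
    print(decide(ap, "lane change", human_approves=lambda p: True))
```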

In fuzzy and disruptive environments: stay up to date, but stay true


We know singularity is coming, we suspect it’s going to be big and pervasive, but we can’t be sure we’ll be ready in time, as individuals and as corporations.

Don’t panic: humans have always been laggards scared by the unknown, trying to figure out how their environments worked, including the ones they built themselves (particularly the ones they built themselves). The only thing is that, at a certain stage called singularity, this man-made environment may understand us better than we understand ourselves.

Now you can panic.

But don’t start running around like headless chickens; cool down a bit. You only need to develop the minimal level of paranoia and schizophrenia required for good strategic intelligence: ‘paranoia’ because playing with worst-case scenarios years ahead is more fun than cleaning up the mess after the disruptions actually hit the fan, and ‘schizophrenia’ because you understand your environment much more clearly when you alternate between different points of view.

Of course, it helps to follow what’s going on in innovation, to see how leaders try to remain at the top of the game. Look how Google beefs up its core assets (the deepest and fastest reach in requests), and in which fields Larry Page & Co. venture to cover key entry points (Google X, Calico, Singularity University…). Beyond the usual suspects and highly innovative sectors, combine key enablers with key players in key fields (and beyond, in political or social domains), and let your imagination roll. Then consider your own environment, your own companies, your own jobs from different viewpoints… and see how it could play out.

The way stakeholders interact in a community will necessarily evolve. For instance, to stay within business fields, don’t the old employer-employee or provider-consumer pairs already sound obsolete? Who’s hiring whom? Who’s providing a service to whom? Look how individuals, groups, brands, services, or corporations evolve in the LinkedIn marketplace. Look how journalism has evolved: time now spent crowdsourcing upstream and broadcasting downstream, sometimes with the same provider/consumer at the other end…

Companies themselves are becoming more agile networks, and depending on the field, some already function around a very limited core or, on the contrary, through a pervasive, collaborative ‘human fog’.

The internet revolution not only permeated societies; it paved the way for singularity and for the pedagogy of its key concepts, and it trained us for the changes to come.

Remember when the internet became a mainstream technology, in the mid-nineties: most businesses considered it an external phenomenon, more ‘a new business’ than ‘a new way of doing business’. They didn’t pay attention to the potential impacts on their own activity until they actually perceived them, by which time it was often too late. Even in the major telecom group I worked for, most people initially treated the internet as a new line of products rather than as a revolution in services. A lot of pedagogy was needed to change mindsets, and one easy way was to project decision makers into an environment where all the other players would have evolved. Not just our competitors, but key players in fields that a priori didn’t seem connected to our own, some of which were bound to become coopetitors.

Guessing how players would evolve required understanding and challenging their environments, and fundamentally answering the question ‘what is their métier?’ I use this old-fashioned word because it reaches deeper than ‘occupation’, ‘trade’, ‘craft’, ‘calling’, ‘work’, ‘profession’, or ‘job’. It defines you more by what you are than by what you do. For instance, companies apparently providing exactly the same service and present at the same levels of the value chain could have different core métiers: this one would fundamentally be more of a designer, that one more of an editor. As proprietary value chains became open value galaxies, players chose different paths to focus on core dimensions, outsource or abandon others, venture into new territories, and build innovative partnerships. In those utterly uncertain times, you could tell who was true to themselves, who understood their métier well enough to see, if not precisely how, at least in which direction they would have to evolve.

In shifting environments, identity demands clear values. It’s not what you do, but what you are, and what you won’t do. And in singularity times more than ever, transparency will be key to trust in a relationship.

*

Is the question ‘will technology create or destroy jobs?’* or ‘how far will technology redefine employment, organizations, ourselves, our relations with others, with our environment…?’ And ultimately, ‘how must we anticipate, as individuals and societies, to cope in sustainable ways?’

Anticipating singularity is a chance for us to reaffirm our humanity.

And to push stupidity to new levels.


* see my answers to the Singularity 99 questionnaire


mot-bile 2015




