20150228

Google's Gtown wins over ZeeTown and the Large Apple Collider

As soon as Facebook revealed its plans for Zuckerbergville ("Zee Talk of Zee Town"), Google presented its future headquarters on its official blog*.

Big G will stay in Mountain View, but move to the North Bayshore District, and double its capacity to 20,000 employees.

Waterway? Check. Highway? Check. But the rendering is much sexier and more futuristic, and there's a nice video to support it (YouTube of course):






With its organic touch and absence of cars, Gtown looks more like a city of the future than ZeeTown, probably because Google is much more into hardware innovation than Facebook. Furthermore, the architects are not an 85-year-old legend (Zuck's Gehry) but two fun-looking young guns: Bjarke Ingels (Bjarke Ingels Group) and Thomas Heatherwick (Heatherwick Studio). Where Z looked for a brand, G went for sense and the essence.

Granted, I'm unfairly comparing headquarters with employee quarters, and the constraints are not the same. But these projects speak volumes about differences in approaches and projected images.

Nothing as scary as the dystopian panopticon / large apple colliderish Apple Campus 2 announced two years ago. Now that's One Infinite Loop if I ever saw one:


Consistent with a hardware manufacturer used to proprietary systems...

mot-bile 2015

* see ""



20150227

Zee Talk of Zee Town

Welcome to Zuckerbergville, 200 acres of sentient brick-and-mortar right across from the Menlo Park headquarters, a paradise where Facebook will be able to monitor every move of 10,000 employees: who's buying what, where? who's sleeping with whom, when? Just like your good old Facebook, but In Real Life.

Looking for the Hatespeech Arena? Start from Hacker Way, follow Push Marketing Avenue, and at the Cookie Roundabout, take a right turn. Never leave the NSA lanes, and try to avoid the Snowden Cul-de-Sac.

Frank Gehry has been invited to run his software and to propose a curvier cityscape than this dull 2012 project:

The 2012, Pre-Gehry Zuckerbergville
Gehry's fee must be comfortable, but only a tiny drop in Zuckerberg's multibillion-dollar Zanadu. The land itself didn't cost too much: $400M for the last 55 acres, that's only $1,800 per sqm. You're on the waterfront, but also on a highway, and your HQ is on the other side of the Bayfront Expressway. Maybe they'll dig a tunnel for the commuters, and build sound walls to make sure employees get some sleep. Some privacy? Don't even think about it. Your contract stipulates that you leave your Samsung Smart TV on night and day*.

Ironically, the main street in the area is called Constitution Drive. Maybe Zuckerberg, who likes to build "the Hacker Way"**, also has his own special definition of the word 'Constitution'. I wonder if We the People of Zee-Town will enjoy Net Neutrality. And if visitors will need to sign special terms of service when they enter the Zee-Zone.


mot-bile 2015

* EPIC (the Electronic Privacy Information Center) just filed a complaint with the FTC (Federal Trade Commission) over privacy concerns regarding Samsung Smart TVs recording private conversations.
** "As part of building a strong company, we work hard at making Facebook the best place for great people to have a big impact on the world and learn from other great people. We have cultivated a unique culture and management approach that we call the Hacker Way. The word “hacker” has an unfairly negative connotation from being portrayed in the media as people who break into computers. In reality, hacking just means building something quickly or testing the boundaries of what can be done. Like most things, it can be used for good or bad, but the vast majority of hackers I’ve met tend to be idealistic people who want to have a positive impact on the world. The Hacker Way is an approach to building that involves continuous improvement and iteration. Hackers believe that something can always be better, and that nothing is ever complete. They just have to go fix it — often in the face of people who say it’s impossible or are content with the status quo."



20150209

I'm with stupid


A couple of days ago, it took several humans to rescue a Korean lady who had been savagely assaulted by her robot vacuum cleaner. 




Some may consider this incident to be the first post-singularity crime, because the machine was obviously gifted with an intelligence superior to that of its owner (yes, robot cleaners are great and long hair is great, but even if ondol is great, you simply don’t sleep with the latter on the path of the former).



Still, singularity is not for today. And if we’re not there yet, that’s also because stupidity is precisely what makes humans superior to machines. But we’ll have to get smarter. Not in order to win the evolution race against a more competitive opponent, but in order not to lose ourselves.




In this transitional period, humans keep ruling because they’re stupid


Making mistakes is essential in any learning process, and humans will remain superior to machines as long as machines can't cope with major disruptions on their own, or spot and fix issues without rules and decisions based on human expertise or judgment. I don't mean bugs, basic problems, or even simple loopholes, but issues that truly require cognitive leaps to identify and brand new approaches to solve.

One of the beauties of computing is that you can easily track back decisions, and implement and improve rules. But for the moment, most learning systems keep exposing our own flaws, because they are not truly self-learning, and ultimately rely on human expertise. I once fell for their devilish attractiveness: our information systems needed a scoring solution, and here was this new, agile, future-proof, and ego-flatteringly smart concept waiting to be toyed with. But we quickly understood that humans were the weakest link, and that at that early technological stage, making sure the gizmo remained relevant would demand rules that no organization would ever be able to implement on a sustainable basis. Because our dream of a simplified decision-making process promised to be the mother of all Rube Goldberg machines, we opted for an off-the-shelf solution that did the trick for a fraction of the cost. Yes, neural networks will unlock new realms for innovation, but back then we simply couldn't rely only on them to run our whole business. From a risk management perspective, that's a no-brainer.

Of course, we remain stupid animals, and financial organizations keep using algorithmic trading to bang their heads against the Wall with greater force and speed every day…

Still, we cannot afford to keep developing Artificial Intelligence in machines and Real Stupidity in ourselves at the same time.

‘Machines smarter than humans’ must not mean ‘machines out of human reach’


In singularity times, machines will be able to make more complex decisions, to better learn from indirect experiences, to invent radically disruptive rules and programs, to be more proactive. What will be the role of humans in these processes? More often pupils than teachers? More often followers than leaders? More often puppets than puppeteers? On which side of the food chain? Will humans become mere servers, accepting requests from machine-clients that only need them to perfect their own capacity to deal with imperfection? Will we become the only true 'terminals' in a pervasively connected world or, at the other extreme, become things among other things in the almighty yet hackable internet of things? In any case, you don't want Web 3.0 to be controlled by a happy few. It is essential to guarantee transparency, democracy, trust.

More than ever, humans will have to learn how to think by themselves, without machines. We must learn to understand and to challenge our technological environments, we must learn to communicate with each other, not only through machines.

Yet of course, we must also learn to communicate with machines, to get inside their brains as much as they’ll get inside ours. Programming must not be a foreign language mastered by a minority, machines must not become black boxes. Because computers will be first modeled after our brains, then improved from there, we will need increasingly simple interfaces to cope with increasingly complex systems.

We will spend more time with machines, including maybe more time serving them. Information Services departments are already accustomed to rationalizing human resources by project, calculating man-hour units that often measure the time devoted to a human-machine dialog. And today, we're already measuring the time individuals spend on average in front of a screen (TV, smartphone, laptop…), without distinguishing between active and passive periods. Tomorrow, we might need to track the time spent on active contributions, or to count inputs that may also become sources of income. And that's without counting the invisible contributions we'll all make as we move through always-on environments that evolve by learning from our very behaviors.

Our role won’t be just to help machines get smarter, but to constantly challenge decisions taken and rules fixed by supposedly superior intelligences, evaluating how far is too far, proposing simple ways of activating or deactivating key functionalities. The value will be less in designing or debugging algorithms than in questioning their very purpose. We will enjoy a lot of fancy autopilot systems, but should remain the captain on board.

In fuzzy and disruptive environments: keep updated, but stay true


We know singularity is coming, we suspect it’s going to be big and pervasive, but we can’t be sure we’ll be ready on time as individuals and as corporations.

Don’t panic: humans have always been laggards scared by the unknown, trying to figure out how their environments worked, including the ones they built themselves (particularly the ones they built themselves). The only thing is that, at a certain stage called singularity, this man-made environment may understand us better than we understand ourselves.

Now you can panic.

But don’t start running like headless chickens, and cool down a bit. You only need to develop the minimal level of paranoia and schizophrenia required for good strategic intelligence: ‘paranoia’ because playing with worst case scenarios years ahead is more fun than cleaning up the mess after the disruptions actually hit the fan, and ‘schizophrenia’ because you understand your environment much more clearly when you alternate between different points of view.

Of course, it helps to follow what’s going on in innovation, to see how leaders try to remain at the top of the game. Look how Google beefs up its core assets - the deepest and fastest reach in requests - and in which fields Larry Page & Co. venture to cover key entry points (Google X, Calico, Singularity University…). Beyond the usual suspects and highly innovative sectors, combine key enablers with key players in key fields - and beyond, in political or social domains - and let your imagination roll. Then consider your own environment, your own companies, your own jobs from different viewpoints… and see how it could play out.

The way stakeholders interact in a community will necessarily evolve. For instance, to remain in business fields, don’t the old employer-employee or provider-consumer pairs already sound obsolete? Who’s hiring whom? Who’s providing a service to whom? Look how individuals, groups, brands, services, or corporations evolve in the LinkedIn marketplace. Look how journalism evolved: time is now spent crowdsourcing upstream and broadcasting downstream, sometimes with the same provider/consumer at the other end…

Companies themselves are becoming more agile networks, and depending on the field, some already function around a very limited core, or to the contrary through a pervasive, collaborative ‘human fog’.

The internet revolution not only permeated societies, but paved the way for singularity and the pedagogy of key concepts related to it. It also trained us for the changes to come.

Remember when the internet became a mainstream technology, in the mid-nineties: most businesses considered it an external phenomenon, more ‘a new business’ than ‘a new way of doing business’. They didn’t pay attention to the potential impacts on their own activity until they actually felt them, by which time it was often too late. Even in the major telecom group I worked for, most people at first treated the internet like a new line of products rather than a revolution in services. A lot of pedagogy was needed to change mindsets, and one easy way was to project decision makers into an environment where all the other players would have evolved: not just our competitors, but key players in fields that a priori didn’t seem connected to our own, some of which were bound to become coopetitors.

Guessing how players would evolve required understanding and challenging their environments, and fundamentally answering the question ‘what is their métier?’ I use this old-fashioned word because it reaches deeper than ‘occupation’, ‘trade’, ‘craft’, ‘calling’, ‘work’, ‘profession’, or ‘job’. It defines you more by what you are than by what you do. For instance, companies providing apparently the exact same service, and present at the same levels of the value chain, could have different core métiers: this one fundamentally more a designer, that one more an editor. As proprietary value chains became open value galaxies, players chose different paths: focus on core dimensions, outsource or abandon others, venture into new territories, build innovative partnerships. In these utterly uncertain times, you could tell who was true to themselves, who truly understood their métier enough to see, if not precisely how, at least in which ways they would have to evolve.

In shifting environments, identity demands clear values. It’s not what you do, but what you are, and what you won’t do. And in singularity times more than ever, transparency will be key to trust in a relationship.

*

Is the question ‘will technology create or destroy jobs?’* or ‘how far will technology redefine employment, organizations, ourselves, our relations with others, with our environment…?’ And ultimately, ‘how must we anticipate, as individuals and societies, to cope in sustainable ways?’

Anticipating singularity is a chance for us to reaffirm our humanity.

And to push stupidity to new levels.


* see my answers to the Singularity 99 questionnaire


mot-bile 2015




20150205

KAIST's Wearable Thermo-Element Wins Netexplo Grand Prix 2015

We've seen cloth that generates electricity before, but this concept keeps it smart and simple: no need to work out like a madman, your own body temperature can do the trick.

This 10 cm strip can produce 40 mW.

50 cm to 1 m of this 'Wearable Thermo-Element' designed by KAIST Professor JO Byeong-jin can generate the 2W needed to power your smartphone.
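As a rough sanity check on those numbers - a hedged sketch assuming each 10 cm strip keeps contributing its 40 mW independently, which ignores the actual element's width and the available temperature gradient:

```python
# How many 40 mW, 10 cm demo strips would it take to reach 2 W?
STRIP_POWER_W = 0.040   # 40 mW per 10 cm strip, per the post
TARGET_W = 2.0          # power quoted as needed for a smartphone

strips_needed = TARGET_W / STRIP_POWER_W
print(strips_needed)  # 50.0 strips, i.e. 5 m of single demo strips
# The quoted '50 cm to 1 m' therefore implies a wider or more efficient
# band than the 10 cm demo strip - geometry and body heat do the rest.
```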

Presented last year, the concept just won the Netexplo Grand Prix 2015.

Wearable Thermo-element by netexplo

Now a start-up coveted by major wearable and apparel manufacturers, from Nike to Apple, this thermoelectric marvel will certainly fuel many of your gizmos in the years to come, from your smartglasses to your smartwristband, to your smartjacket, and of course your smartunderwear (you smartass you).

mot-bile 2015




20150123

No Singularity Without Transparency

I was planning a post on singularity when this form came up ahead of a Singularity99 event. I might as well put down my answers here:

Q - Will technology create or destroy jobs?
A - Technology redefines employment. You won't employ a person, but hire skills, connections, shares of time and people. Jobs will be more contextual, evolutive, shared. There will be fewer permanent jobs, which doesn't necessarily mean that fewer people will have a job.

Q - Relation between technology, development, unemployment:
A - Technology, as an accelerator and a revealer, accelerates unemployment, exposes discrepancies.
Some jobs are naturally made irrelevant or obsolete, new ones emerge, but we're reaching the point when the old employment model itself is obsolete.

Q - Is it the same as previous industrial revolutions, will new jobs compensate for the lost ones?
A - This is a more fundamental revolution than the previous ones.
The question remains what and where will be the added value of humans, but will we still be looking for the optimal return on investment, or at long last consider a sounder trade off?
Instant players will destroy jobs, humanists will work on a more sustainable, fair, open, shared platform.

Q - A.I., deep learning, etc. a threat to humans?
A - Since the stakes (economy, social trends, politics, environment, ethics...) and the complexity are maximal, so is the risk of seeing a minority trying to control key entry points.
The only answer is transparency: an open debate on the risks and opportunities, on who's doing what, on who's behind which initiative. Anyone can contribute on any issue, each decision can be monitored.

Also:

. There is a need for monitoring, regulation, ethics, but innovation demands reaching beyond limits. The 99% must act as moderators, not as censors. It is essential to not kill the game, but also to expose unfair play.

. High frequency trading proves that the frontier between intelligence and stupidity isn't that clear. As a reminder, here's my definition of both, along with other terms:




At a personal level, I don't want an enhanced brain. 

I have the right to remain stupid, and to write silly stuff about hacked transhumanism, such as my old "Rise Of The Nork Zombies".


mot-bile 2015




20150107

CES 2015 - Sensors and Sensibility - Internet Everywhere, Everyware, and Everywear

This excuse for a blog is starting its Season XI, at its own pace - about a post a month, far from the twice-a-week routine of its heyday.

And as usual, the new year resonates with echoes from Vegas glitz and the CES.

So far, nothing disruptive, only recurrent stuff looming on a less distant horizon. No brain implants, mind you - Apple won't release iSingularity for a while - but driverless cars (Audi A7, BMW i3...), pervasive sensors, drones, cameras, and virtual reality, and an internet that keeps going everywhere, everyware, and everywear.

Samsung promises IoT on all its devices by 2017, and Tizen on its TVs as early as next month. But still no TTM for Tizen on smartphones.

Intel showed its RealSense 3D sensors, and smaller players their latest face-recognition-enabled cameras, ideal for a family, household, or small community/company: ArcSoft's Simplicam can deal with up to 10 regulars, and Netatmo's Welcome (TTM Q2 2015) can extend its reach through "Welcome Tags". Look how the Simplicam exudes big-brotherian power, while the Welcome plays on a rather 'air freshener' mode (Netatmo again on the glam side of the force - remember last year's JUNE? - see "CES 2014: Beep Beep Goes Bling Bling"):


Welcome by netatmo
Simplicam by ArcSoft

Interesting to see how marketers try to give different flavors to similar enablers or functionalities. Camera-wise, for instance, the Narrative Clip 2 proposes a GoAm answer to GoPro - or is it dull vs bull? Even though this clippable mini camera can also record everything as you go, it is marketed as a simple diary for ordinary people. Low expectations as a new driver for innovation, I like that.

And yes, we've got the unavoidable collection of more-or-less-pseudo-healthy wearables. At least, from the look of it, Withings's Activite Pop won't appear too obsolete next year, because it pretty much looks like a normal watch:



NeuroMetrix's Quell aims at actual medical relief: this Bluetooth wrap-on sends neuro-signals to relieve pain.

But the winner of Day 1 remains Emiota's Belty, a Bluetooth belt that adjusts to your eating record. A must for anyone willing to cope with European austerity measures.

emiota's Belty - austerity rules!


mot-bile 2015




20141222

Digital payment: Apple Pay catching up with Google... and shaking up the market?

According to ITG*, 1% of all digital payments in USD last November were made through Apple Pay, compared to 4% for Google Wallet. The latter was launched on May 26th, 2011, the former on October 20th, and judging by its apparent success among early adopters**, Cupertino may already have seized an even more significant chunk of this fat holiday season pie. Over November, Apple barely scratched the surface, with a very early-adopter kind of retailer leading the pack: Whole Foods Market claimed 20% of Apple Pay transactions (28% in value)***.
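Those paired percentages also hint at basket size: a bigger share of dollars than of transactions means bigger-than-average tickets. A quick sketch using only the ITG figures quoted in this post (transaction share / value share per retailer):

```python
# Share of Apple Pay transactions vs share of Apple Pay dollars,
# per ITG as quoted in the post (transactions, value).
retailers = {
    "Whole Foods Market": (0.20, 0.28),
    "Walgreens": (0.19, 0.12),
    "McDonald's": (0.11, 0.03),
}

for name, (tx_share, usd_share) in retailers.items():
    # value share / transaction share = this retailer's average ticket
    # relative to the overall Apple Pay average (1.0 = average basket)
    relative_ticket = usd_share / tx_share
    print(f"{name}: {relative_ticket:.2f}x the average ticket")
```

Unsurprisingly, grocery baskets at Whole Foods run about 1.4 times the average Apple Pay ticket, while McDonald's runs far below it.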

But here, once again, Apple is leading in innovation rather than in invention, and this market pedagogy could also benefit Google itself, which hasn't promoted its own solution very much so far, but can leverage much wider platforms. The biggest loser could be PayPal: ITG's Steve Weinstein thinks they are likely to suffer against Apple Pay's much more user-friendly solution.

Needless to say, bigger players in finance are also paying attention. Many consumers still feel reluctant to make payments through other players than genuine financial institutions, particularly the ones that issue the reassuring plastic fetishes that, not so long ago, used to be referred to as 'smart cards'.



mot-bile 2014


* see "ITG Investment Research Report Finds Strong Apple Pay Momentum"
** key findings by ITG:
  • 60% of new Apple Pay customers used Apple Pay on multiple days through November, suggesting strong customer engagement. In comparison, new PayPal customers used the service on multiple days during the same period just 20% of the time.
  • Apple Pay customers used the service roughly 1.4 times per week and used Apple Pay at the same merchant for future transactions roughly 66% of the time.
  • Upon adoption of Apple Pay, the average consumer uses the service for approximately 5.3% of all future card transactions and 2.3% of all future card dollars spent.
*** Walgreens comes second (19%/12%), McDonald's third (11%/3%).



Copyright Stephane MOT 2003-2009