Monday, May 30, 2016

Exponential Information Systems




1. Introduction


I had the chance, two weeks ago, to listen to a dual talk by Norm Judah, the Service CTO from Microsoft, and Bernard Ourghanlian, Microsoft France CTO, when they visited the ICT commission at the NATF. The ICT commission will produce a full report about our investigation into « AI renewal and the explosion of machine learning » next year (similar to our Big Data report last year), but let me share a few random notes. The topic of these two lectures was Machine Learning, and they both started by stating the obvious: Machine Learning is already everywhere in our digital lives. We are using it implicitly all the time in most digital products and services that we use every day. What makes ML the « hot topic of the year … or decade » is both the constant progress and the acceleration of that progress over the past few years. The following illustration shows the reduction of the word error rate in speech recognition from Nuance (the Microsoft version of this curve that Bernard Ourghanlian showed is even more impressive).



Similar curves exist for many other ML domains, such as image recognition, and they show the same pattern. Microsoft is investing massively in ML, notably through its Cortana platform (and I mean massively – Microsoft CEO Satya Nadella has made Machine Learning the #1 priority for the next decade). I will not cover all that I learned about Cortana today, such as « the more you use it, the better it works », but I was struck by two things.

First, we are still, mostly, in the days of supervised ML, which means that data (samples for learning) collection, selection and curation are the fundamental skills of the next 10 years. I had heard this idea expressed earlier by Pierre Haren, former CEO of ILOG: the IP (intellectual property) of most next-generation technology will be about data sets and learning protocols. The formalization and optimization of these protocols is going to be a wonderful discipline from an abstract computer science point of view, but that will be the topic of another post. This is very much related to the “data is the new code” claim.
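
To make the “data is the new code” point a little more concrete, here is a minimal sketch of a supervised learning protocol – illustrative only, with made-up samples, labels and an off-the-shelf model – where the value sits in how the samples are collected, curated and split rather than in the model itself:

```python
# Illustrative only: the data set, labels and model are hypothetical. The point
# is that the model is commodity machinery; the IP lies in the collection,
# curation and split protocol of the samples ("data is the new code").
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# 1. Data collection & curation: in real life this is the expensive, valuable
#    part (deduplication, label review, bias checks). Here, a tiny made-up set.
samples = pd.DataFrame({
    "text": ["check my balance", "how much credit do I have", "what is my balance",
             "remaining credit please",
             "top up my line", "recharge 10 euros", "add credit to my account",
             "refill my plan",
             "talk to an agent", "I need a human", "connect me to support",
             "can someone call me back"],
    "intent": ["balance"] * 4 + ["top_up"] * 4 + ["agent"] * 4,
}).drop_duplicates("text")

# 2. Learning protocol: a documented, reproducible split.
train, test = train_test_split(samples, test_size=0.25, random_state=42,
                               stratify=samples["intent"])

# 3. The model itself is generic, off-the-shelf machinery.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train["text"], train["intent"])

# 4. Evaluation closes the loop and tells us which samples to collect next.
print(classification_report(test["intent"], model.predict(test["text"])))
```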

Second, the next wave is indeed the coming of really smart interactive assistants, mostly chatbots. There is so much hype about chatbots that this does not sound like much, but Norm Judah’s talk convinced me that the wave that is coming is bigger than I thought. One of his most striking arguments is that conversational assistants will give access to digital services to new categories of people, such as:
  • People who do not have a smartphone yet (in emerging countries), using SMS instead,
  • People who have a smartphone but are not comfortable with apps (people over 80 for instance),
  • People who own smartphones and use simple mobile applications, but who lack the level of abstraction and education to feel at ease, or empowered, with many of the navigation schemes that corporations propose in their commercial apps.

According to Norm Judah, bots are the revolution of the next 10 years. This revolution is coming faster than the next one: the revolution of business capabilities through general-purpose AI (the special-purpose version is already there, as shown by fraud detection). He thinks that most digital applications will have conversational interfaces (such as chatbots) within the next 4 years. Bots will become a prevalent interface for many digital services (but not for everything; sometimes it is still faster to point and click than to tell). A good summary (in French) about the current state of chatbots may be found here, but the key message is that this is only the beginning and that text recognition and understanding are about to make spectacular progress in the years to come. Obviously, this is in no way restricted to Microsoft / Cortana; this trend is everywhere: Facebook, Apple or Google for instance.
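
As a toy illustration of why this interface lowers the barrier to entry, here is a minimal sketch of an SMS-style bot front-end: free text in, a routed answer out. The intents, keywords and replies are hypothetical, and a real assistant would replace the keyword rules with the language-understanding models discussed above.

```python
# Toy SMS-style bot: maps free text onto a small catalogue of services.
# Intents, keywords and replies are hypothetical; real assistants use
# trained language-understanding models instead of keyword rules.
INTENTS = {
    "check_balance": ["balance", "how much", "credit"],
    "top_up":        ["top up", "recharge", "refill"],
    "human_agent":   ["agent", "help", "someone"],
}

def route(message: str) -> str:
    """Return the name of the matched intent, or 'fallback'."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "fallback"

def reply(message: str) -> str:
    """Produce the SMS answer for an incoming message."""
    answers = {
        "check_balance": "Your remaining credit is ...",  # would call a billing API
        "top_up":        "Reply with the amount to add to your account.",
        "human_agent":   "Connecting you with an agent.",
        "fallback":      "Sorry, I did not understand. Try 'balance' or 'top up'.",
    }
    return answers[route(message)]

if __name__ == "__main__":
    print(reply("How much credit do I have left?"))  # -> the balance answer, over SMS
```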

The upcoming revolution of Machine Learning and Artificial Intelligence is one of the key themes of the bestseller “Exponential Organizations”, a summary of which may be found here. I decided to call this blog post “Exponential Information Systems” because I believe that the general pitch of the book – that the coming exponential wave of technological progress demands a new organizational architecture – works for information systems as well. To be more specific, here are three key ideas from that book:

  • The exponential rate of change of technologies, such as machine learning or IoT, demands a new kind of organization for enterprises, with control distributed towards a network of autonomous cells, so that companies adapt continuously to their changing environment and the associated opportunities
  • Such organizations – so-called “exponential organizations” – exhibit a few SCALE-able traits: they use on-demand resources, leverage the power of communities, master their own field of algorithms – I will underline this aspect in the conclusion – and engage their customers and partners.
  • The set of founding principles for such organizations is summarized with the IDEAS acronym: Interfaces (to attract external contributions), Dashboards (to support decisions from facts), Experimentation, Autonomy (which is the heart of agility and speed) and Social.

I strongly urge you to read this book; it really helps to understand the “exponential” changes of the decade to come, and how to adapt. Today’s post is focused on how this applies – logically – to information systems as well. Section 2 is a transposition of the first idea: information systems also need to continuously adapt to their digital environment (both from the outside – the boundaries – and the inside – the underlying technologies), which requires a rate of internal change that was unheard of two decades ago. Section 3 develops the importance of boundaries, interfaces and the enrolment of communities to create ecosystems. There is no surprise here: the networked aspect of the digital economy is rooted in API and cloud-service ecosystems. Section 4 focuses on two key characteristics of the digital economy that we just mentioned: the need to base decisions on facts – and to re-evaluate them regularly as everything changes – and the need for “tinkering”, the name given by Nassim Taleb and Salim Ismail to experimentation. It should be obvious to anyone that these two characteristics are inherited from information systems capabilities. The last section summarizes the principle of “capability architecture”, whose goal is to build a scalable playground for tomorrow’s algorithms. The new name of the enterprise architecture game is symbiosis, as is wonderfully explained in Erik Campanini and Kyle Hutchins’ new book “Darwinism in a consumer-driven world”.

To summarize, one could say that Exponential Information Systems are those whose architecture, software culture and lifecycle make them ready to leverage the exponential wave of new digital technology. Let me end this introduction with a great quote from Pierre-Jean Benghozi in his article “Digital Economy: A Disruptive Economy?”: “… Eventually, it is the constant transformation of their models that grants companies a kind of stability and resilience … not simply the better use of their strategic resources or the answer to changes of their environment through innovation”.

2. Digital World Refresh Rate is a Game Changer


I have spent a decade experimenting – when I was the CIO of Bouygues Telecom – and theorizing about the need for constant change and refresh of information system elements. This is a key focus of my lecture at the Ecole Polytechnique, of my second book and of some of these posts. Information systems must be seen as living organisms, which was one of the reasons to start this blog in the first place. I have developed techniques and formulas to build sustainable and cost-efficient information systems based on the proper refresh rate, one that ensures that the technical debt stays moderate and that the benefits of new technology (e.g. Moore’s Law) may be leveraged.

However, my 10-year-old thinking is now totally obsolete, as we need to increase the refresh rate by one order of magnitude. This is the key message from Exponential Organizations: get ready for wave after wave of technologies for building better web sites, better mobile apps, better big data systems, better machine learning algorithms, better artificial intelligence frameworks, smarter connected objects, etc. I could reuse the modeling approach that I developed earlier but I will spare you the unnecessary formalization: the first consequence is that information systems must indeed increase their refresh rate to incorporate these technology waves without too much delay. The reason is not only that the exponential wave of new technologies opens a wave of new opportunities – which it does – but also that new technologies allow things to be done much more cheaply and much faster, as illustrated by Big Data technology, creating huge opportunities and risks of disruption. The second consequence is that, to maintain a high refresh rate while keeping the IT budget moderate, one needs to reduce the footprint. Modern information systems are required to move fast, constantly, and must, therefore, reduce their inertia. This is closely related to Klaus Schwab’s famous quote: in the new world, it is not the big fish which eats the small fish, it’s the fast fish which eats the slow fish.
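
To illustrate the second consequence with a deliberately naive back-of-the-envelope model (a sketch only, not the formalization I just promised to spare you; all numbers are made up): if yearly refresh spending is roughly proportional to the footprint times the refresh rate, then raising the rate at constant budget mechanically forces the footprint down.

```python
# Back-of-the-envelope model (illustrative only): yearly refresh cost is
# assumed proportional to footprint * refresh_rate. All numbers are made up.
def refresh_budget(footprint_keuros: float, refresh_rate: float,
                   unit_cost_factor: float = 1.0) -> float:
    """Yearly spending needed to renew a fraction `refresh_rate` of a system
    whose build value is `footprint_keuros`."""
    return footprint_keuros * refresh_rate * unit_cost_factor

legacy = refresh_budget(100_000, 1 / 7)   # renew a 100 M EUR asset every 7 years
digital = refresh_budget(100_000, 1 / 3)  # same asset renewed every 3 years

print(f"7-year cycle: {legacy:,.0f} kEUR/year, 3-year cycle: {digital:,.0f} kEUR/year")
# To keep the 7-year budget while moving to a 3-year cycle, the footprint
# must shrink by the ratio of the rates, i.e. to 3/7 of its current size.
print(f"Sustainable footprint at constant budget: {100_000 * 3 / 7:,.0f} kEUR")
```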

Somehow these ideas were already true 20 years ago; it is just that the stakes have been raised and these principles have become mandatory. When I give a lecture about information systems in the digital age, the pivotal moment is the need to write systems that will change constantly: this is why continuous build & delivery is so important, and this is why we need new ways of writing code. The mandate to keep IS small and as free of technical debt as possible is a tough one. It is not a universal or homogeneous requirement throughout the information system: as in any living organism, each part has its own renewal rate. For reasons that would be too long to explain here, refresh rates are expected to follow a power law; some hubs need to evolve constantly while some leaves may enjoy a peaceful life. Indeed, a good architecture is one that lets each component evolve at its own rate – a key lesson from my tenure at Bouygues Telecom that was the topic of my first book and which I will address in section 4. This brings us to the quest for modularity, an architecture mandate which is more relevant than ever in the digital world.
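
As a small illustration of the power-law intuition (toy numbers, purely illustrative): a handful of hub components concentrate most of the yearly change, while the long tail of leaves barely moves.

```python
# Toy illustration of a power-law distribution of refresh rates across
# components: a few hubs concentrate most of the change. Numbers are made up.
components = 100
# Zipf-like refresh rates: component k changes proportionally to 1/k.
rates = [1.0 / k for k in range(1, components + 1)]
total = sum(rates)
top10_share = sum(rates[:10]) / total

print(f"Share of change absorbed by the 10 'hub' components: {top10_share:.0%}")
# With 100 components, the top 10 absorb roughly 56% of the total change,
# which is why the architecture must let each part evolve at its own pace.
```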

An ideal information system, like an ideal organization as defined in BetaCodex, is one that is built like a living cell (hence the illustration). The membrane and everything close to it evolves faster than the core of the cell. Thinking about change from an edge/frontier perspective is a key insight from books such as “Exponential Organizations” – cf. “scaling the edge” in this Deloitte summary – or from system thinkers such as Idriss Aberkane, since adaptation in living systems usually starts from the edge, not the center. Here is another quote, from “Complex Systems” by Terry R. J. Bossomaier and David G. Green: “Because systems near the edge have the richest, most complex behavior, such systems can adapt better and therefore have a selective advantage over other systems”. We will see this principle put to action in the capability map of Section 5. Sustainability and systemic thinking are critical when designing information systems; otherwise massive amounts of refresh spending may be applied without changing much, because the burden / mass of legacy is too large. I have seen too many times, in my days as CIO, people reasoning about change from the effort that was made to add new things, without reflecting on the inertia of the technical debt.


3. Do Less, Hence Leverage Others' Work


This section will be short since I have already developed this idea in other posts: the only way to churn out more software, with better quality and without raising costs, is to massively reuse more from outside. The associated illustration is a living tissue, as a metaphor for cells working together in an ecosystem. In the same way that change happens both from the outside in and from within, software reuse also happens through the frontiers and from within. Without any attempt to be exhaustive, there are four major “engines” to churn out software faster, and I will focus today on the last one since it is related to the previous section:

  • The first pillar is the use of open source software. This is not the only way to reuse, but it is mandatory in the digital world because the open-source community is where a lot of software innovation occurs. There is a Darwinian advantage for open-source software: the way it is built – incrementally, around communities, with continuous testing and debugging by thousands of eyeballs, constant change, etc. – produces the traits that are necessary to fit the digital environment. It is not a simple mandate since it requires strong changes in software architecture and culture for many companies.
  • Reuse “at the frontier” means developing APIs to propose and to consume web services. The importance of APIs does not need to be emphasized since the “Jeff Bezos memo”. There is also a cultural dimension, since APIs are attached to ecosystems and the best way to learn the API game is to participate in software communities (a minimal sketch of such an API follows this list).
  • Delivering software faster requires full automation, hence the rise of DevOps and continuous build & delivery. There is more to DevOps than continuous delivery, but the key point here is that modern software factories come with automation tools that cannot be ignored. This is the first lesson of my four years building set-top boxes at Bouygues Telecom.
  • Heterogeneous change-architecture with a focus on “agile borders” or an “adaptive edge”. The most common pattern for this idea is the bi-modal architecture. Bi-modal is a very old idea in IS architecture: for instance, Octo and Pierre Pezziardi developed the empire/barbarian metaphor 15 years ago (the empire evolves more slowly than the barbarian outposts). Bi-modal is an obvious simplification since the core-to-edge differential design may be decomposed in many ways (cf. Section 5).
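
As a minimal illustration of the “reuse at the frontier” pillar, here is a sketch of an internal capability exposed as a web API. The framework choice (Flask), the endpoint name and the toy business rule are all hypothetical:

```python
# Minimal sketch of "reuse at the frontier": an internal capability exposed
# as a web API so that partners (and other internal teams) can mash it up.
# Framework choice (Flask), endpoint and field names are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/v1/eligibility", methods=["POST"])
def eligibility():
    """Expose a business rule as a service instead of burying it in one app."""
    customer = request.get_json(force=True)
    eligible = customer.get("tenure_months", 0) >= 12   # toy business rule
    return jsonify({"customer_id": customer.get("id"), "eligible": eligible})

if __name__ == "__main__":
    # Anyone inside or outside the company can now reuse the rule with one call:
    #   curl -X POST http://localhost:5000/api/v1/eligibility \
    #        -H "Content-Type: application/json" -d '{"id": 42, "tenure_months": 18}'
    app.run(port=5000)
```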

Bi-modal is both a great and a dangerous idea. Bi-modal defines two parts of the information system realm, one where the legacy stands and one where the new digital frontier is being implemented. In his article, Bernard Golden develops similar arguments about why DevOps matters. Bi-modal also leverages a key idea of the digital world: the best way to learn how to develop like the best digital companies is to use the same tools. Bi-modal is a great idea because it defines a new domain where the “new rules of the digital world” may be introduced. It makes “experimentation” much easier and supports the introduction and development of a new software culture. Jay Fry’s blog post develops the same ideas, with a focus on the necessary disruption, which is precisely the same as the requirement for a new rate of change.
However, warnings started to be heard not long after Gartner re-introduced this idea in 2014. If taken literally, it may be seen as an excuse not to change the legacy, which would be wrong according to the previous section and according to Jason Bloomberg. In his article “Bimodal IT: Gartner recipe for disaster”, he proposes a caricatured version (slow versus fast IT) which he rightly criticizes. He makes a thoroughly correct argument that the digital revolution needs to embrace the whole information system. In a more recent post entitled “Saying goodbye to bimodal IT”, Mark Campbell harshly criticizes the temptation to keep the legacy untouched and, above all, the silo culture that would prevent global change. Change and new capabilities are indeed required everywhere; I will return to this point in the next two sections.

This being said, I still believe that “bi-modal” is necessary because of the speed requirement (back to the “rate of change” topic of the previous section). Everything mentioned in this section requires speedy transformation: the speed to adapt to a new open-source culture, the speed to adopt new software factory tools, the speed to learn how to play the API game. All these changes are mandates for the whole information system, but if there is one common rate of change, it is mechanically too slow, because of the weight of legacy.

4. Fractal Capability Design


I started using the concept of capabilities in 2004 after reading the wonderful book by François Jullien, “A Treatise on Efficacy”. Capabilities are the practical tool that one may use as a CIO or VP of engineering to develop a company’s situation potential. Very crudely summarized, François Jullien’s suggestion is to borrow from Chinese philosophy to develop strategies that are better suited to the complex and impossible-to-forecast world of our 21st century. François Jullien opposes the Greek philosophy, which is governed by goals and action plans, where the player applies his or her will to the world to change outcomes in a top-down manner, to the Chinese philosophy, which is governed by situation potential and opportunities, where the player uses the environment as it comes and builds her or his potential in a bottom-up manner. In terms of IT strategy, it means that we stop building strategic plans where we forecast how the information system will be used 5 years from now, and that we think instead about what opportunities could exist, what would be needed to best leverage these opportunities – the capabilities – and how to best use the next 5 years to grow those capabilities. If the difference is not clear, please read the book; it is simply the most relevant business book I have ever read.

There are many capabilities that are required in the digital world, but I would like to focus on three key capabilities:
  • Mashup, that is, the ability to combine different services swiftly and deftly into new ones. The newer term is composite applications, but I like this reference to the “old web days”. Being able “to mashup” is the first digital capability, derived from constant practice and open choices. The mashup capability draws on many paradigms: source code integration, (micro) service-oriented architecture, tinkering culture, open source community adoption, etc.
  • Complex event processing, which is the ability to collect, process and react to complex collections of events. CEP also sits on a large array of technologies, from event-driven architecture to rule-based automation. Complex event processing is the critical capability for conversations, and we know that in the digital world, “markets are conversations”. Obviously the arrival of massive amounts of artificial intelligence will make contextual and ambient intelligence a must for future digital services (see the sketch after this list).
  • Machine learning, as said in the introduction, plays a critical role everywhere. It is used to generate insights, to forecast and simplify usage, to personalize without becoming cumbersome, to detect threats and fraud. Machine learning is the critical capability for relevance, and no customer has time for irrelevant experiences in this new century.
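
To make the complex event processing capability a little more tangible, here is a minimal rule-based sketch – the event types, threshold and time window are hypothetical – that reacts when a pattern of events is detected within a window:

```python
# Minimal complex-event-processing sketch: detect a pattern of events within
# a time window and react. Event types, threshold and window are hypothetical.
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    kind: str         # e.g. "cart_add", "payment_failure", "support_call"
    timestamp: float  # seconds
    customer: str

class ChurnRiskDetector:
    """Fire an alert when a customer has 3+ payment failures within 10 minutes."""
    def __init__(self, threshold: int = 3, window_s: float = 600.0):
        self.threshold, self.window_s = threshold, window_s
        self.failures: dict[str, deque] = {}

    def on_event(self, event: Event) -> None:
        if event.kind != "payment_failure":
            return
        history = self.failures.setdefault(event.customer, deque())
        history.append(event.timestamp)
        # Drop events that fall outside the sliding window.
        while history and event.timestamp - history[0] > self.window_s:
            history.popleft()
        if len(history) >= self.threshold:
            print(f"ALERT: churn risk for {event.customer}, start a conversation")

detector = ChurnRiskDetector()
for t in (0, 120, 300):
    detector.on_event(Event("payment_failure", t, "customer-42"))
```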


The concepts behind these capabilities are not new, but the level of excellence that one may reach is constantly changing, hence the situation potential is changing. These capabilities intersect and reinforce each other. The consequence of the digital challenges is that information systems must develop these three capabilities in each of their components. I remember an old debate in 2007 when we discussed the ideal service architecture for telcos. I came up with the motto “mashup everywhere”, which meant that we had to build this “on-the-fly-composite-service” capability in the network, on our service delivery platform (SDP) and on the devices themselves – at the “edge” of the network. This is truer than ever: the “exponential” wave of technologies means that these capabilities will have major impacts in all components of the information system. This leads to the concept of “fractal capabilities”, which I have illustrated with the snowflake metaphor. “Fractal capability” means that the capability is implemented in a recursive manner, at different scales. To take another example from the previous three, I would propose today that “analytics capabilities must be everywhere”: the ability to take decisions based on facts is a given of the digital age, from core to edge.


The consequence, and the best illustration of this concept, is that the EIP (Enterprise Integration Platform) is a concept more than a physical platform. As a system, it is a fractal architecture made of multiple integration patterns, at different scales. This is definitely not new; it was the topic of my first book more than 10 years ago and the lesson learned from many years of experience. Even in those years there were many reasons for the “fractal” design (a small federated-bus sketch follows this list):
  • There are too many constraints (different integration challenges for each frontier) and it is hard to find a technology that matches all of them,
  • Different components have different lifecycles; thus a fractal / recursive implementation (a federated collection of platforms) is more flexible and adapts more easily to different “takt times”,
  • Integration is about ecosystems: open integration is even more challenging, since each frontier comes with its own culture, not simply technical constraints,
  • Resilience: fractal design supports the independence of sub-regions and reduces the possibility of a single point of failure (SPOF).
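
Here is a minimal sketch of the federated flavor of this design (bus, domain and topic names are hypothetical): each domain runs its own local event bus and only selected events are bridged to the enterprise-level bus, so that each region keeps its own lifecycle and failure domain.

```python
# Minimal sketch of a fractal / federated integration pattern: each domain
# has its own local bus; a bridge forwards only selected events upward.
# Bus, domain and topic names are hypothetical.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self, name: str):
        self.name = name
        self.subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self.subscribers[topic]:
            handler(event)

enterprise_bus = EventBus("enterprise")
billing_bus = EventBus("billing-domain")     # local bus, local lifecycle

# Bridge: only "customer_impact" events cross the frontier; the rest stays
# local, which keeps sub-regions independent and avoids a single point of failure.
billing_bus.subscribe("customer_impact",
                      lambda e: enterprise_bus.publish("customer_impact", e))
enterprise_bus.subscribe("customer_impact",
                         lambda e: print(f"[enterprise] notified: {e}"))

billing_bus.publish("customer_impact", {"customer": 42, "issue": "double charge"})
billing_bus.publish("billing_internal", {"job": "nightly-invoicing"})  # stays local
```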

“Exponential times” amplify this set of constraints: seeing the EIP as a single platform becomes a juggling act. The successive waves and forms of integration technologies, patterns and ecosystems are much better suited to a fractal approach, with the flexibility of organic thinking.

When I speak about system capabilities, it relates to people (teams) as much as to technology. Capabilities are born from skills, and skills in an exponential world are developed through practice, through experimentation. Experimentation is a key pillar of the « Exponential Organizations » book: « Constant experimentation and process iteration are now the only ways to reduce risk. Large numbers of bottom-up ideas, properly filtered, always trump top-down thinking, no matter the industry or organization »; « As Nassim Taleb explains, “Knowledge gives you a little bit of an edge, but tinkering (trial and error) is the equivalent of 1,000 IQ points. It is tinkering that allowed the Industrial Revolution.” » In a world of constant exponential change, tinkering often beats expertise. A truly interesting example is given about a natural language contest where the most innovative team was a cross-functional team with diverse backgrounds and no true experts: « none of the winners had prior experience with natural language processing (NLP). Nonetheless, they beat the experts, many of them with decades of experience in NLP under their belts ». As I have expressed many times in this blog, “tinkering capabilities” are crucial to be able to import innovative technologies from the outside (you cannot buy an innovative digital platform that you do not understand, and understanding is achieved through practice).


5. Thinking Outside The Box: Four-Tier Playground Landscape


I will end this post with a simple chart (below) which summarizes the different ideas expressed so far as a “capability landscape”. It is a follow-up to a previous post entitled “software ecosystems and application sustainability”. There is a tension expressed here: on the one hand, many capabilities are “fractal” and must be developed everywhere; on the other hand, it is always better to reuse what exists, and organic design shows a way to deploy change faster. This landscape (not to be confused with a system architecture chart) is made of four domains, which are interwoven through the “fractal enterprise integration platform” that was just discussed:

  1. Business / core capabilities: the traditional domain of IT as a function for business support. A key challenge is to make business capabilities easy to reuse and to share, hence the focus on micro-service architecture and internal APIs. As expressed in the previous section, mashup, tinkering (CORE requires its own sandbox) and CEP must be grown as capabilities in this domain.
  2. Engagement capabilities: a first frontier towards customers and partners, which is characterized by strong tinkering & integration capabilities. Since the first role of the engagement frontier is to “start conversations”, capabilities such as contextual intelligence or personalization are critical. This domain, called “matrix” on the schema, is the frontier with the outside world; it is the place where outside innovation gets mashed up with the enterprise’s own capabilities.
  3. External engagement capabilities: where the state of the art of advanced capabilities lives. The point of this whole post is to get ready to ingest the massive amounts of innovation to come, and this is done by mashing up the “agile frontier” of the information system with services delivered by advanced partners.
  4. Last, I have represented the “digital customer world”, with its own capabilities. There is a larger world outside, where our customers live, and it is also getting richer in advanced capabilities (such as smart assistants or machine learning) at an exponential rate. Companies need to develop a symbiosis with this ever-richer digital environment and try not to compete with it.


To understand the value behind the decomposition of the “capability landscape” into four domains, it is interesting to look at the three following pairs. First, the matrix/edge distinction is important because the edge is bigger than the frontier, so companies must learn to become “parasites” of the software ecosystems that live on the edge. Leveraging the smartphone and the application stores is the most obvious example. Second, I have kept a bimodal core/matrix distinction for the reasons exposed in Section 3, but remember that the core domain needs to evolve and that it is also under disruptive attack from “barbarians”. Last, I have emphasized a matrix/cloud difference because exponential technologies will grow – very fast – outside the enterprise, and our challenge is to learn to use these capabilities before we master them (mastery comes as a consequence of practice, not as a prerequisite).

To conclude, I will rephrase the importance of algorithms, which is a cornerstone of the “Exponential Organizations” book, by borrowing three quotes. The first one tells about Allstate, shown as an example of a company that has learned to use algorithm competitions – similar to the ROADEF annual challenges – to improve its own state of the art: « It turned out that the Allstate algorithm, which has been carefully optimized for over six decades, was bested within three days by 107 competing teams. Three months later, when the contest ended, Allstate’s original algorithm had been improved 271 percent. And while the prize set the company back $10,000, the resulting cost savings due to better algorithms was estimated to be in the tens of millions annually ». The second quote tells about the UPS example of improving their telematics algorithms and shows that algorithms – which have always been important – are becoming mission-critical for many businesses as they become digital: « Given that the 55,000 trucks in UPS’s American fleet make sixteen million deliveries daily, the potential for inefficient routing is enormous. But by applying telematics and algorithms, the company saves its drivers eighty-five million miles a year, resulting in a cost savings of $2.55 billion. Similar applications in healthcare, energy and financial services mean that we’re entering a world of Algorithms R Us ». The last quote is a nice way to come full circle with my introduction on the importance and exponential progress of Machine Learning: “Machine Learning is the ability to accurately perform new, unseen tasks, built on known properties learned from training or historic data, and based on prediction”.

Wednesday, March 2, 2016

Consumer Electronics technologies: need for global warming


Introduction



I was fortunate to attend both CES (the Consumer Electronics Show in Las Vegas) and MWC (the Mobile World Congress in Barcelona) this year, and I came back twice with mixed feelings and similar conclusions. This short post will explore some of them. The title is a provocative summary: the visible part of technologies in consumer electronics, such as screens or device size, has reached a plateau, while the “inside technologies” continue to grow at Moore’s law rate, which is cool but not necessarily impactful. You end up visiting the TV stands of CES, or the smartphone stands of MWC, without the excited feeling that “this year’s generation of devices is so much better than last year’s”, which had been the case for the past 15 years.

The abundance of technological innovation and progress, which has not slowed down, is finding an outlet in the multiplication of gadgets and accessory devices. However, most of them are “cool” but not “warm”: they do not address a user pain point nor demonstrate an immediate benefit for our daily lives. What I mean by “global warming” is the necessary massive embedding of design thinking and customer centricity into the next generations of consumer electronics.

This blog post is organized as follows. I will first explain my double tweet about the Bazaar (which is thriving) and the Cathedral (which has stalled). I will then focus on the smartphone and explain why I see a plateau in its current evolution. The third section is a refresher on a theme that is common to my two blogs: the need for storytelling and design thinking. The last section is a follow-up about the coming of AI into our smartphones to transform them into smart assistants.

 1. The Bazaar and the Cathedral   

      

I am borrowing the metaphor from Eric Raymond’s bestseller. In his book, the bazaar is the world of open source software, compared to the cathedral, the world of commercial software sold by ISVs (independent software vendors). In this post, the Cathedral is the set of large, expensive booths from the well-known brands of Consumer Electronics. They have always been the stars of shows such as CES or MWC: very large, very crowded, beautiful and innovative displays, entertaining hosts and hostesses, gifts and joyful excitement. The Bazaar is the grid of tiny booths rented by startups and small technology players – most of the time, less than 10 square feet and no special-effect displays. A few years ago, one would spend most of the time in the Cathedral – there was so much to see – and do a quick visit to the huge bazaar (thousands of small booths) in the hope of serendipity: to detect an early product or startup innovation that could complement the rising tide of CE products.

In 2016, the Cathedral has stalled and the Bazaar is thriving. This is very striking for someone who has been visiting CES for over a decade. The huge booths of the Cathedral are surprisingly similar to what they looked like last year or two years ago. The flagship products, TVs or smartphones, are also very similar. Some booths actually make this very clear: Samsung in Barcelona used a tiny fraction of the space for the new S7 flagship and most of the booth for a retrospective of past innovations. The crowd is still there, but there are no huge lines to try new smartphones or see new TVs. The “shows within the booth”, a trademark of cathedral organizations, are much scarcer and much less joyful than in previous years. On the other side, the Bazaar is bursting with newfound energy and vastly improved self-organization. These actors have always been there, but it is clear that the ecosystem is changing: the Internet of Things (an explosion of sensor technologies), the ubiquity of the smartphone, faster and easier access to computing power, etc. Many of the small players now come with innovations that are much closer to (a) user needs and (b) easy delivery to customers than what we would have seen in the past. The common lore that “it is today much easier to build a high quality product with less resources” is clearly shown to be true if we consider the quality of what small companies are able to present at CES or MWC. There is also much better organization, from both a geographical and a topical perspective: pavilions have emerged to create hot spots, such as the FrenchTech, Israel Mobile Nation, or standard-focused associations.

The combination of these two trends still made for exciting 2016 editions of both CES and MWC. At CES, new cathedrals are being built with the explosion of connected cars. The technology innovation stream still produces a continuous exponential increase of raw computing power – as shown at CES by the Drive PX2 board from NVidia for embedded smart-car computing, with the CPU/GPU power of 150 MacBook Pros on a single board, ready for embedded deep machine learning. Similarly, the continuous exponential improvement of video processing capabilities is quite spectacular, with examples such as 360° real-time video stitching. The constant improvement of sensor capabilities is also pretty amazing. Smart objects for e-Health are now embedding medical-quality sensors (in response to previous concerns such as those with Fitbit) with impressive capabilities (for instance, the electrocardiogram wristband Qi from Heha). This improvement of sensing goes hand in hand with miniaturization, which fuels the IoT explosion that was very visible both at CES and MWC. New domains such as smart clothing are bursting onto the scene, while more usual CES IoT sections such as e-Health or Smart Home are bubbling with new energy. This continuous growth is fueled by constant progress with the silicon, as well as the emergence of de facto API-based ecosystems such as IFTTT, Alexa (Amazon), SmartThings (Samsung) or ThinQ (LG) … with many cross-fertilizations such as this or that.

However, this explosion of cool technology does not necessarily leave the visitor with the warm feeling of usefulness. The idea that the exponential progress of IoT is “cool” but not warm enough is not new. I made a similar comment in 2013 when visiting the “Smart Home Conference” in Amsterdam. I was already quite impressed by the availability of all the connected objects that are required to make one’s house smartly heated, lighted, filled with music, more secure, etc. However, what was cruelly lacking then, and still is, is the availability of a true “user-centric proposition” delivered by a credible brand. What I mean by “user-centric” is not a surprise:
  1. to solve a real pain point,
  2. to deliver clear and immediate benefits with a promise from a brand,
  3. to own the promise and help to set up the “smart system”,
  4. to face the customer (accessibility) and not to shy away from troubleshooting and customer service.
As an Uber or AirBnB user, it is not the technology that impresses me, but the fact that there is a reachable company that fulfills the promise and takes accountability.

2. The Plateau of the Smartphone Platform



We all know what we expect from a smartphone: great battery life but light to carry, a beautiful screen with great resolution and vibrant colors, thin and elegant but robust, fast so that it responds to touch instantly and runs our preferred apps. There are obvious conflicts in these goals, which makes the engineering problem interesting. It looks like we are reaching a plateau in smartphone evolution for two reasons:

  • For some of the goals, we are near the optimal performance that a human user may appreciate. It is very clear for screen resolution (this is why the “retina display” label was proposed by Apple) and it is also true for screen size (I find it interesting that 5 inches seems to be the “average optimal size”, since this is the value that Intel announced more than 10 years ago, before the iPhone was introduced, as the result of an in-depth customer study).
  • Some of the constraints sit between “click and mortar” disciplines, between exponential technologies and mechanical engineering, and the rate of improvement is much lower for the second group. For instance, it is clear that the improvement of CPUs/GPUs comes with a power consumption price and that battery capacity is slowing down the evolution. Similarly, thinness comes at the expense of sturdiness.

As a consequence, 2016 smartphones are no more exciting than those of last year. 2015 was full of really thin smartphones (from 5 to 6 mm); they are all gone. Following Apple, which made the iPhone 6S heavier than the iPhone 6, manufacturers are shying away from the ultrathin design. On the contrary, Samsung’s new flagship (S7) is 152 g and 7.9 mm thick, compared to the S6’s 138 g and 6.8 mm. This is the price to pay to get better battery life coupled with faster processors. Consequently, when you play with the new models in 2016, there is not much excitement compared to the same MWC visit in 2015. Sure, the processors are slightly faster (although 8 cores seems to be enough for everyone, and they were there last year already), but it does not make a big difference (yet … cf. the last section). I could tell exactly the same story for smart watches: they are strikingly similar to last year’s models, still too thick, with battery life that is too short and app performance that remains sluggish. I am a great believer in smart watches and I live with one, but we are still in the infancy of the product. Today it remains a geek product.

Because I am conjecturing a plateau, I would not be surprised to see (Android) smartphone prices go down sharply in the next few years (when the feature race stalls, commoditization kicks in). This is in itself a joyously disruptive piece of news, since it means that the smartphone will continue its replacement of the “feature phone” in all markets, including emerging countries. Meanwhile, cathedral brands need new products to keep us (richer customers) spending our money, and the star of the year is clearly the VR (virtual reality) headset. Like all visitors, I have been quite impressed with the deeply immersive capability of these devices when playing a game. I can also see a great future for business augmented-reality applications. On the other hand, I do not see these headsets as a mass-market / everyday-use product, any more than 3D TV convinced me 5 years ago.

3. Fed up with data, looking for stories


I have been really excited about quantified self for the past five years. I have bought and enjoyed a large number of connected gadgets, adding the apps and their dashboards to my iPhone. I still appreciate some of them, because they helped me learn something about myself, but most of them are standing in my own “useless gadgets museum”. While still at Bouygues Telecom, I developed the theory that, in its current state, “quantified self” addresses a unique group of people with (a) a “geek mindset” to cope with the setup and maintenance of these gadgets, (b) a “systemic” interest to see value in dashboards and to learn from figures and charts, and (c) a good measure of ego-centricity – not to say narcissism :). This is a hefty niche where I find myself comfortably standing, but it is still too small to scale most existing “quantified-self value propositions” linked to wearable connected devices. We know that many connected-object companies have not met success in the past 18 months. I have heard the same story many times from some of the “quantified self players” that I have interviewed: the launch starts well with a small group of enthusiasts, but does not scale.

For most people, there are two missing ingredients to fulfill the promise of quantified self: a story and a coach. There is indeed a great promise, since science tells us every day that we can improve our health and our well-being by knowing ourselves better and changing our behaviors accordingly. The need for storytelling, in the sense of Daniel Pink in his best-seller “A Whole New Mind”, and for coaching (personalized customer assistance) is not limited to e-Health; it is also true for other fields such as Smart Home or Smart Transportation. For most of us, self-improvement needs to be fueled by emotions, not dashboards. To paraphrase Daniel Pink, I could say that connected devices must address our right brain more than our left brain :) The importance of design thinking is one of the reasons startups seem to have an innovation edge. Focusing too much on data and data mining fails to recognize the true difficulty of self-change – one must read Alan Deutschman’s excellent book “Change or Die”. I will make “Lean User Experience” the theme of my blog post next month to speak about this in more depth. It is very clear when wandering around CES booths that too many connected wearables are still focused on delivering data to their users, not a clear self-improvement story.

This is not to say that there is no value in Big Data for e-health or well-being, quite the contrary. As explained earlier, science shows that we have a wealth of knowledge in our hands from these digital traces of our lives. If you have any doubts, read “Brain Rules”, the bestseller from John Medina. You will learn what science says about the effect of exercise on your brain capacity (IQ) through better oxygenation, just to name one example. However, delivering value from data mining requires customer intimacy, for a large number of reasons ranging from relevance to acceptance. This obviously requires trust and the respect of data privacy, but it also requires a strong and emotional story to be told. I have no doubt that big data will change the game in the field of e-health, but as a “systemic reinforcer”. One needs a warm story and a clear value proposition to start with.

4. Waiting for smart assistants?


I strongly recommend reading Chris Dixon’s post “What’s next in computing?”. I share a number of his ideas about the current “plateau state” and the fact that the next big thing is probably the ubiquity of AI in our consumer electronics devices. He also points out the coming revolution of miniaturization / commoditization of computers, and the impact of the smartphone as a billion-unit computing platform. The image of a “plateau” is not one of an asymptotic limit (i.e., I do not mean that we have reached a limit on how smartphones may be improved). I have no doubt that engineering constraints will be resolved by technological progress and that new generations of smartphones will delight us with considerably longer battery lives, thinner designs and reduced weight. The image of the transparent smartphone in Section 2 shows that there is still much to come for this ubiquitous device. My point is that we have some hard engineering / chemical / mechanical constraints in front of us, so it will take a few years to solve them.

Meanwhile, the exponential growth of smartphone performance will come from within, from the software capabilities (fueled by the exponential growth of hardware performance). Both because GPUs are getting amazingly powerful and because specialized deep-learning chipsets will become available for embedding into smartphones in the next few years, it is easy to predict that deep learning, and other AI / machine learning techniques, will bring new life to what a smartphone can do. We may expect surprising capabilities in the fields of speech and sound recognition, image recognition, natural language understanding and, more generally, pattern recognition applied to context (contextual or ambient intelligence). Consequently, it is easy to predict that the ability of the smartphone to help us as a smart assistant will grow exponentially in the decade to come.
Artificial Intelligence on the phone is already there. “Contextual intelligence services” are already the heart of Siri, Google Now or Microsoft Cortana. To the large players I should add a large number of startups playing in this field, such as SNIPS, Weave.ai or Neura. But two things will happen in the years to come: first, the exponential wave of available computing power is likely to raise these services to a new level of performance – starting with natural language understanding; second, the capabilities of the smartphone will make it possible to do part of the processing locally. Today the “AI part” of the service sits in the cloud, leveraging the necessary computing power of large server farms. Tomorrow, the smartphone itself will be able to run some of the AI algorithms, both to increase the level of satisfaction and to better respect privacy. This is the application of the multi-scale intelligent system design which I covered in my previous post. Neura, for instance, is precisely positioned on how to better protect the privacy of the user’s context (personal events collected by the smartphone or other connected/wearable devices). By running some of the pattern recognition activity on the smartphone, a smart assistant system may avoid moving “private events” from the smartphone onto a less secure, distributed cloud environment. This is also the positioning of SNIPS: to develop AI technology that runs on the smartphone to keep some of the ambient intelligence private. I start to recognize a pattern which I call “Tiny Data”: the use of data mining and machine learning techniques applied to local data, kept on the smartphone for privacy reasons, using the smartphone as a computing platform because of its ever-increasing capacities.
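
Here is a minimal sketch of this “Tiny Data” pattern (all names are hypothetical): raw personal events stay on the device, a small local model turns them into a coarse-grained inference, and only that inference, if anything, is shared with the cloud.

```python
# Minimal "Tiny Data" sketch: raw personal events never leave the device;
# only a coarse, derived insight is (optionally) shared. Event types, the
# scoring rule and the cloud endpoint are hypothetical.
from dataclasses import dataclass

@dataclass
class PrivateEvent:
    kind: str      # e.g. "location", "calendar", "heart_rate"
    value: float

def on_device_inference(events: list[PrivateEvent]) -> str:
    """Tiny local model: classify the user's current context from raw events."""
    heart_rates = [e.value for e in events if e.kind == "heart_rate"]
    average = sum(heart_rates) / max(1, len(heart_rates))
    return "exercising" if average > 120 else "resting"

def share_with_cloud(insight: str) -> None:
    # Only the derived, coarse-grained label would be sent, not the raw events.
    print(f"POST https://assistant.example.com/context  body={{'context': '{insight}'}}")

raw_events = [PrivateEvent("heart_rate", 135), PrivateEvent("location", 48.85)]
share_with_cloud(on_device_inference(raw_events))
```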

I have selected the movie “Her” as an illustration of this section because I think it is remarkably prescient about what is to come: the massive arrival of AI into our personal lives, the key role of the personal assistant (and all the dangers that this may cause) and the combination of smart devices and massive cloud AI.


 