Wednesday, March 2, 2016

Consumer Electronics Technologies: the Need for Global Warming


Introduction



I was fortunate to attend both CES (the Consumer Electronics Show in Las Vegas) and MWC (the Mobile World Congress in Barcelona) this year, and I came back twice with mixed feelings and similar conclusions. This short post will explore some of them. The title is a provocative summary: the visible part of consumer electronics technologies, such as screens or device size, has reached a plateau, while the “inside technologies” continue to grow at Moore’s-law rate, which is cool but not necessarily impacting. You end up visiting the TV stands of CES, or the smartphone stands of MWC, without the excited feeling that “this year’s generation of devices is so much better than last year’s”, which had been the case for the past 15 years.

The abundance of technological innovation and progress, which has not slowed down, is finding an outlet in the multiplication of gadgets and accessory devices. However, most of them are “cool” but not “warm”: they do not address a user pain point nor demonstrate an immediate benefit for our daily life. What I mean by “global warming” is the necessary massive embedding of design thinking and customer centricity into the next generations of consumer electronics.

This blog post is organized as follows. I will first explain my double tweet about the Bazaar (which is thriving) and the Cathedral (which has stalled). I will then focus on the smartphone and explain why I see a plateau in its current evolution. The third section is a refresher on a theme that is common to my two blogs: the need for storytelling and design thinking. The last section is a follow-up about the coming of AI in our smartphones to transform them into smart assistants.

 1. The Bazaar and the Cathedral   

      

I am borrowing the metaphor from Eric Raymond’s bestseller. In his book, the bazaar is the world of open source software, compared to the cathedral, the world of commercial software sold by ISV, independent software vendors. In this post, the Cathedral is the set of large, expensive booths from the well-known brands of Consumer Electronics. They have always been the stars of shows such as CES or MWC : very large, very crowded, beautiful and innovative displays, entertaining hosts and hostesses, gifts and joyful excitement. The Bazaar is the grid of tiny booths rented by startups and small technology players - most of the time, less than 10 square feet and no special-effect-displays. A few years ago, one would spend most of the time in the Cathedral – there was so much to see – and do a quick visit to the huge bazaar (thousands of small booths) in the hope of serendipity: to detect an early product or startup innovation that could complement the rising tide of CE products.

In 2016, the Cathedral has stalled and the Bazaar is thriving. This is very striking for someone who has been visiting CES for over a decade. The huge booths of the Cathedral are surprisingly similar to what they looked like last year or two years ago. The flagship products, TVs or smartphones, are also very similar. Some booths are actually making this very clear: Samsung in Barcelona used a tiny fraction of its space for the new S7 flagship and most of the booth for a retrospective of past innovations. The crowd is still there, but there are no huge lines to try new smartphones or see new TVs. The “shows within the booth”, a trademark of cathedral organizations, are much scarcer and much less joyful than in previous years. On the other side, the Bazaar is bursting with newfound energy and vastly improved self-organization. These actors have always been there, but it is clear that the ecosystem is changing: the Internet of Things (an explosion of sensor technologies), the ubiquity of the smartphone, faster and easier access to computing power, etc. Many of the small players now come with innovations that are much closer to (a) user needs and (b) easy delivery to customers than what we would have seen in the past. The common lore that “it is today much easier to build a high quality product with less resources” is clearly shown to be true if we consider the quality of what small companies are able to present at CES or MWC. There is also much better organization, from both a geographical and a topical perspective: pavilions have emerged to create hot spots, such as the FrenchTech, Israel Mobile Nation, or standards-focused associations.

The combination of these two trends still makes for exciting 2016 editions of CES and MWC. At CES, new cathedrals are being built with the explosion of connected cars. The technology innovation stream still produces a continuous exponential increase of raw computing power – as shown at CES by the Drive PX2 board from NVidia for embedded smart-car computing, with the CPU/GPU power of 150 MacBook Pros on a single board, ready for embedded deep machine learning. Similarly, the continuous exponential improvement of video processing capabilities is quite spectacular, with examples such as real-time 360° video stitching. The constant improvement of sensor capabilities is also pretty amazing. Smart objects for e-Health are now embedding medical-quality sensors (in response to previous concerns such as those with Fitbit) with impressive capabilities (for instance, the electrocardiogram wristband Qi from Heha). This improvement of sensing goes hand in hand with miniaturization, which fuels the IoT explosion that was very visible both at CES and MWC. New domains such as smart clothing are bursting out, while more usual CES IoT sections such as e-Health or Smart Home are bubbling with new energy. This continuous growth is fueled by constant progress with the silicon, as well as the emergence of de facto API-based ecosystems such as IFTTT, Alexa (Amazon), SmartThings (Samsung) or ThinQ (LG), with many cross-fertilizations such as this or that.

However, this explosion of cool technology does not necessarily leave the visitor with the warm feeling of usefulness. The idea that exponential IoT progress is “cool” but not warm enough is not new. I made a similar comment in 2013 when visiting the “Smart Home Conference” in Amsterdam. I was already quite impressed by the availability of all the connected objects that are required to make one’s house smartly heated, lighted, filled with music, more secure, etc. However, what was cruelly lacking then, and still is, is the availability of a true “user-centric proposition” delivered by a credible brand. What I mean by “user-centric” is no surprise:
  1. to solve a real pain point,
  2. to deliver clear and immediate benefits with a promise from a brand,
  3. to stand behind the promise and help set up the “smart system”,
  4. to face the customer (accessibility) and not to shy away from troubleshooting and customer service.
As an Uber or AirBnB user, it is not the technology that impresses me, but the fact that there is a reachable company that fulfills the promise and takes accountability.

2. The Plateau of the Smartphone Platform



We all know what we expect from a smartphone: great battery life but light to carry, a beautiful screen with great resolution and vibrant colors, thin and elegant but robust, fast so that it responds to touch instantly and runs our preferred apps. There are obvious conflicts among these goals, which makes the engineering problem interesting. It looks like we are reaching a plateau in smartphone evolution for two reasons:

  • For some of the goals, we are near the optimal performance that the human user can appreciate. This is very clear for screen resolution (which is why the “retina display” label was proposed by Apple) and it is also true for screen size (I find it interesting that 5 inches seems to be the “average optimal size”, since this is the value that Intel announced more than 10 years ago, before the iPhone was introduced, as the result of an in-depth customer study).
  • Some of the constraints sit between “click and mortar” disciplines – between exponential technologies and mechanical engineering – and the rate of improvement is much lower for the second group. For instance, it is clear that the improvement of CPU/GPU comes at an electric-consumption price and that battery capacity is slowing down the evolution. Similarly, thinness comes at the expense of sturdiness.

As a consequence, 2016 smartphones are no more exciting than those of last year. 2015 was full of really thin smartphones (from 5 to 6 mm); they are all gone. Following Apple, which made the iPhone 6S heavier than the iPhone 6, manufacturers are shying away from ultrathin designs. On the contrary, Samsung’s new flagship (S7) is 152 g and 7.9 mm thick, compared to the S6’s 138 g and 6.8 mm. This is the price to pay for better battery life coupled with faster processors. Consequently, when you play with the new 2016 models, there is not much excitement compared to the same MWC visit last year. Sure, the processors are slightly faster (although 8 cores seems to be enough for everyone, and they were already there last year), but it does not make a big difference (yet – cf. the last section). I could tell exactly the same story about smart watches: they are strikingly similar to last year’s models, still too thick, with battery life that is too short and app performance that remains sluggish. I am a great believer in smart watches and I live with one, but we are still in the infancy of the product. Today it remains a geek product.

Because I am conjecturing a plateau, I would not be surprised to see (Android) smartphone prices go down sharply in the next few years (when the feature race stalls, commoditization kicks in). This is in itself a joyously disruptive piece of news, since it means that the smartphone will continue its replacement of the “feature phone” in all markets, including emerging countries. Meanwhile, cathedral brands need new products to keep us (richer customers) spending our money, and the star of the year is clearly the VR (virtual reality) headset. Like all visitors, I have been quite impressed with the deeply immersive capability of these devices when playing a game. I can also see a great future for business augmented-reality applications. On the other hand, I do not see these headsets as a mass-market, everyday-use product, no more than 3D TV convinced me 5 years ago.

3. Fed up with data, looking for stories


I have been really excited about quantified self for the past five years. I have bought and enjoyed a large number of connected gadgets, adding the apps and their dashboards to my iPhone. I still appreciate some of them, because they helped me learn something about myself, but most of them are standing in my own “useless gadgets museum”. While still at Bouygues Telecom, I developed the theory that, in its current state, “quantified self” addresses a specific group of people with (a) a “geek mindset”, to cope with the setup and maintenance of these gadgets, (b) a “systemic” interest, to see value in dashboards and learn from figures and charts, and (c) a good measure of ego-centricity – not to say narcissism :). This is a hefty niche, where I find myself comfortably standing, but it is still too small to scale most existing “quantified-self value propositions” linked to wearable connected devices. We know that many connected-objects companies have not met success in the past 18 months. I have heard the same story many times from the “quantified self players” that I have interviewed: the launch starts well with a small group of enthusiasts, but does not scale.

For most people, two ingredients are missing to fulfill the promise of quantified self: a story and a coach. There is indeed a great promise, since science tells us every day that we can improve our health and well-being by knowing ourselves better and changing our behaviors accordingly. The need for storytelling, in the sense of Daniel Pink’s bestseller “A Whole New Mind”, and for coaching (personalized customer assistance) is not limited to e-Health; it is also true for other fields such as Smart Home or Smart Transportation. For most of us, self-improvement needs to be fueled by emotions, not dashboards. To paraphrase Daniel Pink, I could say that connected devices must address our right brain more than our left brain :) The importance of design thinking is one of the reasons startups seem to have an innovation edge. Focusing too much on data and data mining fails to recognize the true difficulty of self-change – one must read Alan Deutschman’s excellent book “Change or Die”. I will make “Lean User Experience” the theme of my blog post next month to discuss this in more depth. It is very clear when wandering around CES booths that too many connected wearables are still focused on delivering data to their users, not a clear self-improvement story.

This is not to say that there is no value in Big Data for e-Health or well-being, quite the contrary. As explained earlier, science shows that we hold a wealth of knowledge in these digital life tracers. If you have any doubts, read “Brain Rules”, the bestseller from John Medina. You will learn what science says about the effect of exercise on your brain capacity (IQ) through better oxygenation, to name just one example. However, delivering value from data mining requires customer intimacy, for a large number of reasons ranging from relevance to acceptance. This obviously requires trust and respect for data privacy, but it also requires a strong and emotional story to be told. I have no doubt that big data will change the game in the field of e-Health, but as a “systemic reinforcer”. One needs a warm story and a clear value proposition to start with.

4. Waiting for smart assistants?


I strongly recommend reading Chris Dixon’s post “What’s next in computing?”. I share a number of his ideas about the current “plateau state” and the fact that the next big thing is probably the ubiquity of AI in our consumer electronics devices. He also points out the coming revolution of miniaturization/commoditization of computers, and the impact of the smartphone as a billion-unit computing platform. The image of a “plateau” is not one of an asymptotic limit (i.e., I do not mean that we have reached a limit on how smartphones may be improved). I have no doubt that engineering constraints will be resolved by technological progress and that new generations of smartphones will delight us with considerably longer battery lives, thinner designs and reduced weight. The image of the transparent smartphone in Section 2 shows that there is still much to come for this ubiquitous device. My point is that we have some hard engineering/chemical/mechanical constraints in front of us, so it will take a few years to solve them.

Meanwhile, the exponential growth of smartphone performance will come from within, from software capabilities (fueled by the exponential growth of hardware performance). Both because GPUs are getting amazingly powerful and because we will see specialized deep-learning chipsets become available for embedding into smartphones in the next few years, it is easy to predict that deep learning, and other AI/machine-learning techniques, will bring new life to what a smartphone can do. We may expect surprising capabilities in the fields of speech and sound recognition, image recognition, natural language understanding and, more generally, pattern recognition applied to context (contextual or ambient intelligence). Consequently, it is easy to predict that the ability of the smartphone to help us as a smart assistant will grow exponentially in the decade to come.
Artificial Intelligence on the phone is already there. “Contextual intelligence services” are already at the heart of Siri, Google Now or Microsoft Cortana. To these large players I should add a large number of startups playing in this field, such as SNIPS, Weave.ai or Neura. But two things will happen in the years to come: first, the exponential wave of available computing power is likely to raise these services to a new level of performance – starting with natural language understanding – and second, the capabilities of the smartphone will make it possible to do part of the processing locally. Today the “AI part” of the service sits in the cloud, leveraging the necessary computing power of large server farms. Tomorrow, the smartphone itself will be able to run some of the AI algorithms, both to increase the level of satisfaction and to better respect privacy. This is the application of multi-scale intelligent system design, which I covered in my previous post. Neura, for instance, is precisely positioned on how to better protect the privacy of the user’s context (personal events collected by the smartphone or other connected/wearable devices). By running some of the pattern-recognition activity on the smartphone, a smart assistant system may avoid moving “private events” from the smartphone onto a less secure distributed cloud environment. This is precisely the positioning of SNIPS: to develop AI technology that runs on the smartphone to keep some of the ambient intelligence private. I am starting to recognize a pattern which I call “Tiny Data”: the use of data-mining and machine-learning techniques applied to local data, kept on the smartphone for privacy reasons, using the smartphone as a computing platform because of its ever-increasing capacities.
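As a toy sketch of this “Tiny Data” pattern (all names and the “commuting” example are my own illustrations, not taken from any of the companies mentioned), the key design point is the boundary: raw events are mined on the device, and only an abstracted summary ever crosses the network.

```python
from collections import Counter

class TinyDataAssistant:
    """'Tiny Data' sketch: private events never leave the device; only a
    coarse, non-identifying summary may cross the network boundary."""

    def __init__(self):
        self._events = []   # raw events stay on the smartphone

    def record(self, place, hour):
        self._events.append((place, hour))

    def local_habit(self):
        # on-device mining: the most frequent (place, hour) pattern
        return Counter(self._events).most_common(1)[0][0]

    def share_with_cloud(self):
        # only an abstracted context label is sent to the cloud
        place, _ = self.local_habit()
        return {"context": "commuting" if place == "train" else "other"}

assistant = TinyDataAssistant()
for _ in range(5):
    assistant.record("train", 8)
assistant.record("gym", 19)
habit = assistant.local_habit()          # detected on-device, never uploaded
summary = assistant.share_with_cloud()   # the only thing the cloud sees
```

The `most_common` call stands in for whatever pattern-recognition algorithm actually runs on the device; the point is the asymmetry between what `local_habit` knows and what `share_with_cloud` reveals.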

I have selected the movie “Her” as an illustration of this section because I think it is remarkably prescient about what is to come: the massive arrival of AI into our personal lives, the key role of the personal assistant (and all the dangers that this may cause), and the combination of smart devices and massive cloud AI.


Sunday, December 20, 2015

Event-Driven Architecture and Biomimicry



1.  Introduction


Ten years ago I simultaneously discovered the concept of Autonomic Computing and the fascinating book “Out of Control – The New Biology of Machines, Social Systems and the Economic World” by Kevin Kelly. This came at a moment when I was still the CIO of Bouygues Telecom and was getting puzzled by the idea of “organic operations”. I had become keenly aware that high availability and reliability were managed – on paper – using a mechanistic and probabilistic vision of systems engineering, while real-life crises were solved using a more “organic” approach to how systems worked. This is described in more detail in my first book. Autonomic computing gave me a conceptual framework for thinking about organic and self-repairing systems design. I then had the chance to learn about Google operations in 2006, including a long discussion with Urs Hölzle, and found that many of these ideas were already applied. It became clear that complex properties such as high availability, adaptability or smart behavior could be seen as emergent properties that were grown and not designed, and this led to the opening of this blog.

I decided to end this year with a post that fits squarely into this blog’s positioning – i.e., what can we learn from biological systems to design distributed information systems? – with a focus on event-driven architectures. The starting point for this post is the reading of the report “Inside the Internet of Things (IoT)” from Deloitte University Press. This is an excellent document, which I found interesting from a technology perspective, but which I thought could be expanded with a more “organic” vision of IoT systems design. The “Information Value Loop” proposed by Deloitte advocates for augmented intelligence and augmented behavior, which is very much aligned with my previous post on the topic of IoT and Smart Systems. The following schema is extracted from this report; it shows a stack of technology capabilities that may be added to the stream of information collected from connected objects. From a technologist’s standpoint, I like this illustration: it captures a lot of key capabilities without losing clarity. However, it portrays a holistic, unified, structured vision which is, in my opinion, too far removed from the organic nature of the Systems of Systems that will likely emerge in the years to come.



The first section of this post will cover event-driven architectures, which are a natural framework for such systems. They also make perfect instances of the “Distributed Information Systems” to which this blog is dedicated. The next section will introduce Complex Event Processing (CEP) as a platform for smart and adaptive behavior. I will focus mostly on how such systems should be grown and not designed, following in the footsteps of Kevin Kelly. The last section will deal with the “cognitive computing” ambition of tomorrow’s smart systems. I will first propose a view that complements what is shown in the document from Deloitte, borrowing from biology to enrich the pattern of a “smart adaptive IoT system”. I will also advocate for a more organic, recursive, fractal vision of System of Systems design, in the spirit of the IRT SystemX.

I use the concept of biomimicry in a loose sense here, which is not as powerful or elegant as the real thing, as explained by Idriss Aberkane. In this blog, biomimicry (which I have also labelled “biomimetism” in the past) means looking to nature as a source of inspiration for complex system design – hence the title of the blog. In today’s post, I will borrow a number of design principles for “smart systems of systems” from what I can read in biology about the brain or the human body, but a few of these principles come directly from readings about complex systems.

2. Event-Driven Architectures



Event-Driven Architectures (EDA) are well suited to designing systems around smart objects, such as smart homes. Event-driven architectures are naturally scalable, open and distributed. The “publish/subscribe” pattern is the cornerstone of application integration and modular system design. This was incidentally the foundation of application integration two decades ago, so there is no surprise that EDA has found its way back into SOA 2.0. I will not talk about technology solutions in this post, but a number of technologies that fit EDA systems, such as Kafka or Samza, have appeared in the open source community. There is a natural fit with the Internet of Things (IoT) – the need for scalability, openness, decoupling – which is illustrated, for instance, in Cisco’s paper “Enriching Business Process through Internet of Everything”. Its reference to IfThisThenThat (IFTTT), one of the most popular smart-object ecosystems, is a perfect example: IFTTT has built its strategy on an open, API-based, event-driven architecture. The smart home protection service provided by myLively.com is another great instance of event-driven architecture at work to deliver a “smart experience” using sensors and connected devices.
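To make the publish/subscribe cornerstone concrete, here is a minimal sketch of an event bus (a few lines of illustrative Python, not the API of Kafka, Samza or IFTTT): publishers and subscribers never reference each other, only topics, which is what makes the architecture open and modular.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus: publishers and subscribers are
    decoupled, which is what keeps an EDA open and modular."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # deliver the event to every handler registered on the topic
        for handler in self._subscribers[topic]:
            handler(event)

# A smart-home sensor publishes without knowing who listens.
bus = EventBus()
alerts = []
bus.subscribe("door/opened", lambda e: alerts.append("alert: " + e["where"]))
bus.publish("door/opened", {"where": "front door"})
# alerts now holds one entry for the single subscriber
```

Adding a second subscriber (a logger, an analytics pipeline) requires no change to the publisher, which is the decoupling property the post relies on.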


In a smart system that adapts continuously to its environment, the preferred architecture is to distribute control and analytics. This is our first insight, drawn from both complex systems and biological systems analysis. There are multiple possible reasons for this – the variety of control & analytics needs, the need for redundancy and reliability, performance constraints, … – but this should be taken more as an observation than as a rational law (and it is more powerful as such). It is clear that “higher-level” control functions are more prone to errors and failure, and that they typically react more slowly, which is why nature seems to favor redundant designs with multiple control engines and failover modes. Translated into the smart systems world, this means that we should avoid both single points of failure (SPOF) and single points of decision (SPOD). In a smart home system, it is good to keep manual control of the command layer if the automated system is down, and to keep the automated system on if the “smart” system is not operating properly. By contrast with centralized control, the distributed decision architecture designed decades ago by Marvin Minsky in his Society of Mind is a better pattern for robust smart systems. From a System of Systems design perspective, distributed control and analytics is indeed a way to ensure better performance (placing the decision closer to where the action is, which recalls the trend towards edge computing, as exemplified by Cisco’s fog computing). It is also a way to adapt the choice of technology and analytics paradigms to the multiple situations that occur in a large distributed system.
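The failover idea in the smart-home example can be sketched in a few lines (a hypothetical illustration of mine, not a real home-automation API): control layers are ordered from “smart” to “manual”, and a command falls through to the next live layer, so there is no single point of decision.

```python
class ControlLayer:
    """One level of control: 'smart' layers are richer but more fragile."""
    def __init__(self, name, decide):
        self.name, self.decide, self.alive = name, decide, True

def command(layers, event):
    # Try the smartest available layer first; fall back when one is
    # down, so there is no single point of decision (SPOD).
    for layer in layers:
        if layer.alive:
            return layer.name, layer.decide(event)
    raise RuntimeError("no control layer available")

layers = [
    ControlLayer("smart",     lambda e: "anticipate: pre-heat living room"),
    ControlLayer("automated", lambda e: "rule: heat if temp < 19"),
    ControlLayer("manual",    lambda e: "await user command"),
]

first = command(layers, {"temp": 17})   # handled by the "smart" layer
layers[0].alive = False                  # the smart layer crashes...
second = command(layers, {"temp": 17})  # ...the automated layer takes over
</```

The house keeps working, in degraded but deterministic mode, exactly the property the redundant designs of nature suggest.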



A natural consequence of control distribution is the occurrence of redundant distributed storage. Although this is implicit in the Deloitte document, it is worth underlining. Most complex control and decision systems require efficient access to data, hence distribution and redundancy are a matter of fact. This leaves us with age-old data-flow and synchronization issues (I write “age-old” since both Brewer’s theorem and snapshot complexity show that these problems are here to stay). This topic is out of the scope of this post, but I strongly suggest reading the Accenture document “Data Acceleration: Architecture for the Modern Data Supply Chain”. Not only does the document illustrate the “flow dimension” of the data architecture, which is critical to designing adaptive and responsive systems based on EDA, but it explains the concept of data architecture patterns that may be used in various pieces of a system of systems. There is a very good argument, if one was necessary, made for data caching, including main-memory systems. There are two pitfalls that must be avoided when dealing with data architecture design issues: focusing on a static view of data as an asset, and searching for a unifying holistic design (more about this in the next section: hierarchical decomposition and encapsulation still have merit in the 21st century).

Smart biological systems operate on a multiplicity of time scales, irrespective of their degree of “smartness”. What I mean by this is that smart living systems have developed control capabilities that operate on different time horizons: they are not different because of their deductive/inductive capabilities, but because their decision cycles run at completely different frequencies. A very crude illustration of this idea could distinguish between reaction (short-term, with an emphasis on guaranteed latency), decision (still fast but less deterministic), and adaptation (longer term). We shall see in the next section that the same distinction applies to learning, where adaptation may be distinguished from learning and reflection. Using the vocabulary of optimization theory, adaptation learns about the problem through variable adjustment, learning produces new problem formulations, and reflection improves the satisfaction function. It is important to understand that really complex – or simple – approaches may be applied at each time scale: the short term is not a degraded version of long-term decision-making, nor is the long term an amplified and improved version of the short term. This is the reason for the now common pattern of the lambda architecture, which combines both hot and cold analytics. This understanding of multiple time scales is critical to smart System of Systems design. It has deep consequences, since most of what will be described later (goals, satisfaction criteria, learning feedback loops, emotion/pleasure/desire cycles) needs to be thought about at different time scales. In a smart home, we equally need fast, secure and deterministic response times for user commands, smart behavior that requires complex decision capabilities, and longer-term adaptive learning capabilities such as those of the ADHOCO system, which I have quoted previously in this blog.
In this post I consider a single system of its kind (even if it is a system of systems), but this should be developed further if the system is part of a population, which leads to collective learning (think of Tesla cars learning from one another) and population evolution (cf. Michio Kaku’s vision of emotion as a Darwinian expression of population learning).
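The hot/cold split of the lambda architecture can be sketched as follows (a toy of my own invention, not ADHOCO or any real product): the hot path makes a bounded-latency, per-event decision against a fixed set-point, while the cold path periodically recomputes that set-point from the full history.

```python
class SmartHomeControl:
    """Hot path reacts to each event with bounded work; the cold path
    periodically recomputes a model over the full history (lambda style)."""

    def __init__(self):
        self.history = []
        self.threshold = 25.0   # set-point, later adapted by the cold path

    def hot(self, temperature):
        # reaction: guaranteed-latency, per-event decision
        self.history.append(temperature)
        return "open window" if temperature > self.threshold else "ok"

    def cold(self):
        # adaptation: slow batch pass re-derives the set-point
        if self.history:
            self.threshold = sum(self.history) / len(self.history) + 2.0

home = SmartHomeControl()
reactions = [home.hot(t) for t in (20, 22, 30)]
home.cold()   # "nightly" batch: threshold becomes mean(20, 22, 30) + 2 = 26.0
```

Note that `cold` is not a slower version of `hot`: it solves a different problem (choosing the set-point) on a different cycle, which is exactly the multi-time-scale point above.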

3. Emergent EDA Systems



Most systems produced by nature are hierarchical; this also applies to event architectures, which must distinguish between different levels of events. Failure to do so results in systems that are too expensive (for instance, too much is stored) and too difficult to operate. For the architects reading this, please note that an event “system hierarchy” is not an “event taxonomy” (which is supported out of the box by most frameworks): it is an abstraction hierarchy, not a specialization hierarchy (both are useful). A living organism uses a full hierarchy of events: some are very local, some get propagated, some get escalated to another scale of the system, etc. To distinguish between different levels of events, we need to introduce into smart systems what is known as Complex Event Processing (CEP). CEP is able to analyze and aggregate events to produce simple decisions which may trigger other events. You may find a more complete description of CEP in the following pages taken from theCEPblog, from which I have borrowed the illustration on the right. Similarly, you can learn a lot by watching YouTube videos about related open-source technology platforms such as Storm.
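The escalation from low-level events to a higher-level event can be illustrated with a toy CEP rule (my own sketch, not the API of Storm or any CEP engine): correlate events inside a sliding window and emit one event at the next abstraction level when a pattern completes.

```python
from collections import deque

class WindowRule:
    """Toy CEP rule: correlate low-level events inside a sliding window
    and emit a single higher-level event when the pattern is complete."""

    def __init__(self, pattern, emitted, size=5):
        self.pattern, self.emitted = pattern, emitted
        self.window = deque(maxlen=size)

    def push(self, event):
        self.window.append(event)
        if self.pattern.issubset(set(self.window)):
            self.window.clear()   # the low-level events are consumed
            return self.emitted   # escalated to the next abstraction level
        return None

# Three distinct sensor events escalate into one "intrusion" event.
rule = WindowRule({"alarm/armed", "motion", "door/opened"}, "intrusion")
out = [rule.push(e) for e in ("alarm/armed", "motion", "door/opened")]
```

The emitted `"intrusion"` event is an abstraction of the three sensor events, not a specialization of any of them, which is the system-hierarchy vs. taxonomy distinction made above.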

A key feature of CEP is the ability to analyze and correlate events from a lower level to produce a higher-level event. It is the foundation for event control logic in a “system of systems” architecture, moving from one level of abstraction to another. This is not, however, the sole responsibility of the CEP system. True to our “analytics everywhere” philosophy, “smarter” analytics systems, such as Big Data machine learning systems, need to be integrated into the EDA to participate in the smart behavior of the global system, in a fashion that is very similar to the organization of a living being.

Kevin Kelly’s advice to grow, rather than design, emergent systems becomes especially relevant as soon as there is a human in the loop. A key insight of smart system design is to let the system learn about the user, and not the opposite (although one may argue that both are necessary in most cases, cf. the fourth section of my previous post). Systems that learn from their users’ behaviors are hard to design; it is easier to start from user feedback and satisfaction and let adaptation run its course than to get the “satisfaction equation” right from the start. This is a key area of expertise of the IRT SystemX, whose scientific and technology council I have the pleasure to lead. A number of the ideas expressed here may be found in my inaugural talk of 2013. Emergence derives from feedback loops, which may be construed as “conversations”. CEP is the proper technology for developing a conversation with the user in the loop, following the insight of Chris Taylor, who is obviously referring to the Cluetrain Manifesto’s “markets are conversations”. The “complex” element of CEP is what makes the difference between a conversation (with the proper level of abstraction, listening and silence) and an automated response.

Another lesson from complex systems is that common goals should be reified (made first-class objects of the system) and distributed across smart adaptive distributed systems. There are two aspects to this rule. First, it states that complex systems with distributed control are defined by their “finality”, which must be uniquely defined and shared. Second, the finality is transformed into actions locally, according to the local context and environment. This is both a key principle for designing Systems of Systems and a rule which has found its way into modern management theory. It is a lesson that distributed-systems practitioners have discovered over and over. I found a vivid demonstration when working on OAI (optimization of application integration) over a decade ago. The best way to respect centrally defined SLAs (service level agreements) is through policies that are distributed over the whole system and interpreted locally, as opposed to implementing a centralized monitoring system. This may be found in my paper about self-adaptive middleware. In the inaugural IRT speech that I mentioned earlier, I talked about SlapOS, the cloud programming OS, because Jean-Paul Smets told me a very similar story about the SlapOS mechanism for maintaining SLAs, which is also based on the distribution of goals, not commands. Commands are issued locally, in the proper context and environment, which is perfectly aligned with the control distribution strategy described in the previous section.
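The “distribute goals, not commands” rule can be sketched as follows (an illustrative toy of mine, unrelated to the actual SlapOS or OAI mechanisms): the SLA is reified as a shared first-class object, and each node derives its own action from it locally.

```python
# The shared goal (the SLA) is reified as a first-class object and
# distributed; each node interprets it locally to derive its own action.
GOAL = {"p99_latency_ms": 200}

class Node:
    def __init__(self, name, observed_p99):
        self.name, self.observed_p99 = name, observed_p99

    def local_policy(self, goal):
        # local interpretation: no central controller issues commands
        if self.observed_p99 > goal["p99_latency_ms"]:
            return "shed load / scale out"
        return "steady"

nodes = [Node("edge-a", 250), Node("edge-b", 120)]
actions = {n.name: n.local_policy(GOAL) for n in nodes}
```

Only `GOAL` is centrally defined; the two nodes reach different decisions because their local contexts differ, which is precisely what a centralized command stream cannot do as cheaply.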

We should build intelligent capabilities the way nature builds muscles: by growing the areas that are getting used. In the world of digital innovation, learning happens by doing. This simple but powerful idea is a roadmap for growing emergent systems: start simple, observe what is missing, but mostly reinforce what gets used. Examples of reinforcement abound in biology, from ant stigmergy to muscle growth through adaptation to effort. Learning by doing is the heart of the lean startup approach, but it also applies to complex system design and engineering. This biology metaphor is well suited to avoiding the pitfall of top-down, feature-based design. Smart (hence emergent, if we follow Kevin Kelly’s axiom) systems must be grown in a bottom-up manner, by gradually reinforcing what matters most. This is especially useful when designing truly complex systems with cognitive capabilities (the topic of the next section). Nature tells us to think recursively (think fractal) and to grow through reinforcement (strengthening what is useful). If we throw a little bit of agile thinking into the picture, we understand why it is better to start small when building an adaptive event-driven system.


4. Cognitive EDA Systems


As John E. Kelly from IBM rightly points out, we are entering the new era of cognitive computing, with systems that grow by machine learning, not by programmatic design. This is precisely the vision Kevin Kelly laid out two decades ago. Cognitive systems, says John Kelly, “reason from a purpose”, which means that emergent systems emerge from their finality. The more the “how” is grown from experience (for instance, from data analysis in a Big Data setting), the more the definition and reification of goals become important (cf. the previous section). One could argue that this is already embedded in the Deloitte picture that I showed in the introduction, but there is a deeper transformation at work, which is why machine learning will play a bigger and more central role in IoT EDA architectures. I strongly suggest that you watch Dario Gil’s video about the rise of cognitive computing for IoT. His argument about the usefulness of complex inferred computer models with no causality validation is very similar to what is said in the recently issued NATF report on Big Data.

Biology obviously has a lot to teach us about cognitive, smart and adaptive systems. A simplistic view of our brain and nervous system distinguishes between different zones:
  • Reflexes (medulla oblongata and cerebellum) – these parts of the brain handle unconscious regulation (medulla oblongata) and fine motor skills (cerebellum).
  • Emotions (amygdala) play a critical role in our decision process. There is an interplay between rational and emotional thoughts that has been popularized by Antonio Damasio’s best-seller. In a previous post, I referred to Michio Kaku’s analysis which makes emotion the equivalent of stored evaluation functions, honed through the evolution process.
  • Inductive thinking (cortex), since the brain is foremost a large associative memory.
  • Deductive thinking (front cortex), with a part of the brain that came later in the species evolution process and which is the last to grow in our own development process.

You may look at the previous link or at this one for more detailed information. I take this simplified view as input for the following pattern for cognitive event-driven architecture (see below). This is my own version of the introductory schema, with a few differences. Event-driven architecture is the common glue and Complex Event Processing is the common routing technology. CEP is used to implement reflexes for the smart adaptive system that is connected to its environment (bottom part of the schema). Reflex decisions are based on rules wired with CEP, but also on “emotions”, that is, valuation heuristics applied to input signals. Actions are either the result of reflexes or the result of planning. Goals are reified, as explained in the previous section.

This architecture pattern distinguishes between many different kinds of analytics and control capabilities. It would be even richer if the multiple-time-scale aspect were clearly shown: as said earlier, a number of these components (goals, emotions, anticipation) should be further specialized according to the time horizon under which they operate. Roughly speaking, one may recognize the earlier distinction between reflexes (CEP), decisions (with a separation between decision and planning, because planning is a specialized skill whereas decision could be left to a wide range of Artificial Intelligence technologies) and learning. Learning – which is meant to be covered by Big Data and Machine Learning capabilities – produces both adaptation (optimizing existing concepts) and “deep learning” (deriving new concepts). Learning is also leveraged to produce anticipation (forecasting), which is a key capability of advanced living beings.

A specialized form of long-term learning, called reflection, is applied to question emotions against long-term goals (reflection is a long-term process that assesses the validity of the heuristic cost functions used to make short-term decisions with respect to longer-term goals). Although this schema is a very simplified form of a learning system, it already shows multiple levels of feedback learning loops (meant to operate at different time scales).
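The fast-path/slow-path split at the heart of this pattern can be sketched as follows: a reflex stage (a CEP-style rule weighted by an “emotion” heuristic) reacts immediately when urgency is high, and everything else falls through to a slower planning stage driven by reified goals. Every name, threshold and weight here is an illustrative assumption, not part of any real CEP engine.

```python
# Cognitive EDA sketch: reflex (rule + emotional valuation) vs. planning.

EMOTIONS = {"temperature": 0.9, "humidity": 0.2}   # learned valuations

def reflex(event):
    """Fast path: a rule plus an emotional weight decide without planning."""
    urgency = event["value"] * EMOTIONS.get(event["kind"], 0.5)
    return "react" if urgency > 0.8 else None

def plan(event, goals):
    """Slow path: a deliberate action derived from reified goals."""
    return f"schedule check of {event['kind']} against {goals}"

def handle(event, goals):
    # Reflexes short-circuit; planning only runs when no reflex fires.
    return reflex(event) or plan(event, goals)

GOALS = {"comfort": True}
print(handle({"kind": "temperature", "value": 1.0}, GOALS))  # reflex fires
print(handle({"kind": "humidity", "value": 1.0}, GOALS))     # falls to plan
```

A learning loop would sit above this, periodically adjusting the `EMOTIONS` weights against the goals, which is exactly the “reflection” process described above.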



It is important to notice that the previous picture is an incomplete representation of what has been said in this post. The picture represents a pattern, which is meant to be instantiated in a “multi-scale” / “fractal” design, as opposed to a holistic system design view. The fractal architecture pattern was a core concept of the enterprise architecture book that I wrote in 2004. An organic design for enterprise architecture creates buffers, “isolation gateways” and redundancy that make the overall system more robust than a fully integrated design.
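An “isolation gateway” can be sketched as a deliberately minimal circuit-breaker-style wrapper: a subsystem’s failures are buffered locally instead of propagating through an integrated whole. This is an illustration of the organic-design idea under assumed names, not a production pattern.

```python
# A buffer between subsystems: failures are absorbed locally, and after a
# threshold the gateway degrades gracefully rather than cascading the fault.

class IsolationGateway:
    def __init__(self, subsystem, max_failures=3):
        self.subsystem = subsystem
        self.failures = 0
        self.max_failures = max_failures

    def call(self, request):
        if self.failures >= self.max_failures:
            return "degraded-mode response"        # isolate, don't propagate
        try:
            return self.subsystem(request)
        except Exception:
            self.failures += 1
            return "buffered retry later"          # absorb the failure

def flaky(request):
    raise RuntimeError("subsystem down")           # simulated broken part

gw = IsolationGateway(flaky, max_failures=2)
print([gw.call("ping") for _ in range(3)])
```

The caller never sees the subsystem’s exception: the redundancy and buffering live at the boundary, which is what makes the overall system more robust than a fully integrated design.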

It is easier to build really smart small objects than large systems, so they will appear first and “intelligence” will come locally before coming globally. This is the Darwinian consequence of the organic design principle. When one tries to develop a complex system in the spirit of the previous pattern, it is easier to do so with a more limited scope (input events, intended behaviors, …). Why, you may ask? Because intelligence comes from feedback-loop analysis, and it is easier to design and operate such a loop in a closed system with a unique designer than in a larger-scope open system. Nothing in the previous schema says that it describes a big system: it could apply to a smart sensor or an intelligent camera. As a matter of fact, smart cameras such as Canary or Netatmo Welcome are good examples of the integration of advanced cognitive functions. A consequence is that the “System of Systems” organic approach is more likely to leverage advanced cognitive capabilities than more traditional integrated or functionally specialized designs (such as one might infer from the Deloitte picture in the introduction). Fog computing makes a good case for edge computing, but it also promotes a functional architecture which I believe to be too homogeneous and too global.


 