May 30, 2020

A Framework for AI and IoT

Early in my career, I had the good fortune to work on several research and innovation projects on the topics of AI and ML. These projects used mathematical techniques to model the dynamic behavior of machines and to work out whether they were developing faults. These are known as ‘condition monitoring’ and ‘predictive maintenance’ procedures. The projects involved data from real machines, not simulations, including an industrial-scale diesel generator, a gas turbine used for ship propulsion and black-box data from military aircraft [1].

In modern terminology, these projects involved the creation of digital twins from IoT data. They began by collecting time series data around events such as a change in operating speed. This matters because systems do not produce dynamically rich data under static operating conditions. Think of a lightbulb with a hairline crack in its filament. Unless you have incredible eyesight, it is impossible to tell whether the lightbulb still works. However, if you gently tap the lightbulb, you will hear the filament vibrate. That is what reveals that the bulb is broken. In addition to the signal processing aspects, this diagnostic and testing process relies on our mental model of how filament lightbulbs work.

With access to dynamic data, we can apply AI and ML techniques to learn patterns of behavior. From these, pattern recognition techniques can detect trigger events and classify whether a machine is operating normally or signaling one of several possible modes of failure.

None of this was straightforward in the past. Computers were slow and data storage was expensive. I spent many hours and learned a lot from modifying a real-time data acquisition system to add data handling, visualization, and analytics components to it.

Device and data management functions are the basic building blocks that IoT platform providers offer. Now, they are starting to integrate analytics and AI/ML modules. Much of the hype around AI/ML for IoT is about developing a new market. The fundamental building blocks to address most applications have been around for a long time. I revisited these issues because I am seeing serious investment interest in AI/ML for IoT [2] and because of a current project [3] that addresses some of the strategic implications for technology and solution providers.

I plan to write a few articles on the topic of AI/ML for IoT over the coming months. This post begins by setting the scene. In future posts I will describe pitfalls in the use of AI/ML as well as new requirements at the intersection of AI and IoT. In the rest of this post, I use ‘AI’ as an umbrella term for AI and ML.

A Logical Framework for AI and IoT

Before we can explore the opportunities for AI in IoT, let us define a generic system. This consists of three domains, beginning with an array of sensors attached to a machine or family of connected devices. In a propulsion engine, one sensor might be attached to the fuel injector (the controlling variable), with others measuring outputs such as acceleration, pressure, or temperature. By deploying pressure or temperature sensors along a jet engine, it is possible to study variations in pressure differentials and temperature gradients. These will vary over time as the engine ages. The rate of change is one way to detect whether certain parts of the engine are deteriorating more quickly than predicted by a digital twin.
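One way to quantify that rate of change is to fit a trend to the gradient between two measurement stations across successive inspections. A minimal sketch, where the station readings and regular inspection intervals are illustrative assumptions:

```python
import numpy as np

def deterioration_rate(station_a, station_b):
    """Estimate how fast the temperature gradient between two engine
    stations drifts across successive, regularly spaced inspections.
    Returns the change in gradient per inspection interval."""
    gradients = np.asarray(station_b) - np.asarray(station_a)
    t = np.arange(len(gradients))
    slope, _intercept = np.polyfit(t, gradients, 1)  # linear trend fit
    return float(slope)
```

A slope much larger than the digital twin predicts for an engine of that age would flag accelerated wear at that section.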

The second important component in the system is the IoT platform. This can support a variety of service enabling tasks including but not limited to registration (to identify individual sensors and the applications that draw on their data), data management (data transfer and storage), device management (calibration and remote configuration of sensors) and visualization.
Data then enters the application domain via the IoT platform. The first step involves the application of signal processing and machine learning techniques. Example signal processing techniques include data filtering (e.g. for noise attenuation or to detect outliers using standard deviation thresholds), normalization (e.g. to scale all readings onto a -100 to 100 range) and feature extraction (e.g. the amount of time a signal spends above a preset threshold). Some of this data might then be processed using ML techniques to create a simplified, dynamic model of the machine. Think of this as a digital twin of the machine, ‘learned’ from historical data. A technician can then operate this digital twin in parallel with the real machine. This would involve feeding the fuel-injector signal into the dynamic model, which would then predict the temperature at a key location in the engine. A comparison of actual and predicted temperatures, over time, would allow the technician to detect changes in behavior.
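The signal processing steps above can be sketched in a few lines. This is a minimal illustration, with the outlier strategy, scale, and thresholds chosen purely for demonstration:

```python
import numpy as np

def filter_outliers(signal, k=3.0):
    """Flag samples more than k standard deviations from the mean as
    outliers and replace them with the mean (one simple filtering choice)."""
    mean, std = signal.mean(), signal.std()
    cleaned = signal.copy()
    cleaned[np.abs(signal - mean) > k * std] = mean
    return cleaned

def normalize(signal, lo=-100.0, hi=100.0):
    """Linearly rescale readings onto a [lo, hi] range, e.g. -100 to 100."""
    span = signal.max() - signal.min()
    return lo + (signal - signal.min()) * (hi - lo) / span

def time_above_threshold(signal, threshold, dt=1.0):
    """Feature: total time (samples * dt seconds) spent above a threshold."""
    return float(np.sum(signal > threshold) * dt)
```

Applied in sequence, these yield cleaned, comparable features that a downstream ML model (the ‘learned’ digital twin) can consume.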

In practice an AI application would take the role of the technician. This would enable continuous monitoring and automate problem detection by mapping features extracted from the data onto a map that captures different modes of failure. The technician’s expert knowledge and mental model of the engine’s behavior corresponds to another form of digital twin based on logical relationships (e.g. a governor controls fuel flow into a manifold and then into several piston chambers that contain pistons with sealant rings that bear the brunt of wear) and causal reasoning (e.g. if there is a high-frequency noise and a rise in lubricant temperature then a bearing might be about to fail).
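The kind of causal rule a technician applies can be encoded directly as a mapping from extracted features to candidate failure modes. A hypothetical sketch, where the feature names and thresholds are invented for illustration only:

```python
def diagnose(features):
    """Map extracted features onto candidate failure modes using simple
    causal rules of the kind a technician might apply mentally."""
    findings = []
    # High-frequency noise plus rising lubricant temperature -> bearing
    if features["hf_noise_db"] > 60 and features["lubricant_temp_c"] > 90:
        findings.append("possible bearing failure")
    # Pressure loss in a piston chamber -> worn sealant rings
    if features["ring_pressure_drop_bar"] > 0.5:
        findings.append("possible sealant ring wear")
    return findings or ["normal operation"]
```

Real systems replace such hand-written rules with learned classifiers, but the structure, features in, failure modes out, is the same.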

Taking AI from Pilots into Business-Critical Environments

Systems that use AI techniques to analyze IoT data focus on software algorithms and knowledge representation capabilities in the application domain. The market has not yet evolved to a stage of maturity where AI technologies are integrated with components in the sensor or the platform domains. There is a separation of domains with AI and IoT operating in silos. As illustrated, the flow of information between the sensor and IoT domains is bidirectional. IoT platforms are a conduit for data. They also exert control over sensors and devices for identification, firmware updating and security policy functions. However, AI in the application domain is a unidirectional process. It consumes data but rarely interacts with upstream resources.

As the deployment of AI solutions matures and large numbers of firms rely on them in business-critical situations, there will be a tighter and bi-directional integration of the end-to-end solution stack. Consider the use of an AI system to detect and report incipient faults. Before taking the system out of service for inspection and maintenance purposes, operational managers will want to validate the reasons behind the reported fault. Might this be a case of a faulty sensor or is there a likely fault in the machine? I will discuss the implications of such requirements in forthcoming posts. These will touch on new opportunities to build explainable-AI systems and what this means for communications service and IoT platform providers.

[1] K. Figueredo, Coupling numerical & symbolic techniques for dynamic data validation, First International Conference on Intelligent Systems Engineering, 1992 (Conf. Publ. No. 360), Edinburgh, UK, 1992, pp. 35-40. 

[2] White Paper on Artificial Intelligence: a European approach to excellence and trust, European Commission, 2020

[3] ETSI Specialist Task Force 584: Artificial Intelligence for IoT Systems

Image Credit: Alessandro Bianchi via


  1. 30 May 2020 update

    Google’s medical AI was super accurate in a lab. Real life was a different story

  2. 10 June 2020 update

    Achieving trustworthy AI with standards

    The TR identifies possible measures that improve trustworthiness of an AI system by mitigating vulnerabilities across its lifecycle. These relate to:

    - transparency for the features, components and procedures of an AI system
    - controllability by providing reliable mechanisms for an operator to take over control from the AI system
    - aspects of data handling, such as strategies for reducing unfair bias and maintaining privacy by de-identifying personal data
    - robustness, or the ability of a system to maintain its level of performance under any circumstances, including external interference or harsh environmental conditions
    - testing, evaluation and use

    The TR notes that AI systems are complex and their impact on stakeholders should be considered carefully, case by case, to decide whether their use is appropriate. A risk-based approach helps organizations identify vulnerabilities in relevant AI systems and their possible impacts on organizations, their partners, intended users and society, and will help mitigate risks. It is important that all stakeholders understand the nature of the potential risks and the mitigation measures implemented. SC 42 is working on ISO/IEC 23894, Information technology – Artificial intelligence – Risk management, which will define how to apply the risk-based approach to AI.

  3. 10 June 2020 update

    UK's Office for AI issues guidelines on procurement of AI systems

    Important points about vendor lock in and explainable AI.

  4. 22 July 2020 update

    Fawkes: Digital Image Cloaking

    Fawkes is a system for manipulating digital images so that they aren't recognized by facial recognition systems.

    At a high level, Fawkes takes your personal images, and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these "cloaked" photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, "cloaked" images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g. a photo taken in public), they will fail.