May 30, 2020

A Framework for AI and IoT

Early in my career, I had the good fortune to work on several research and innovation projects on the topics of AI and ML. These projects used mathematical techniques to model the dynamic behavior of machines and to work out whether they were developing faults. These are known as ‘condition monitoring’ and ‘predictive maintenance’ procedures. The projects involved data from real machines, not simulations: an industrial-scale diesel generator, a gas turbine used for ship propulsion, and black-box data from military aircraft [1].

In modern terminology, these projects involved the creation of digital twins from IoT data. They began by collecting time series data around events such as a change in operating speed. This is important because systems do not provide dynamically rich data under static operating conditions. Think of a lightbulb with a hairline crack in its filament. Unless you have incredible eyesight, it is impossible to tell whether the lightbulb works or not. However, if you gently tap the lightbulb, you will hear the filament vibrate. That is what reveals that the bulb is broken. In addition to the signal processing aspects, this diagnostic and testing process relies on our mental model of how filament lightbulbs work.

With access to dynamic data, we can apply AI and ML techniques to learn about patterns of behavior. From these, it is possible to detect trigger events and apply pattern recognition techniques to classify whether a machine is operating normally or signaling one of several possible modes of failure.

None of this was straightforward in the past. Computers were slow and data storage was expensive. I spent many hours, and learned a lot, modifying a real-time data acquisition system to add data handling, visualization, and analytics components.

Device and data management functions are the basic building blocks that IoT platform providers offer. Now, they are starting to integrate analytics and AI/ML modules. Much of the hype around AI/ML for IoT is about developing a new market. The fundamental building blocks to address most applications have been around for a long time. I revisited these issues because I am seeing serious investment interest in AI/ML for IoT [2] and because of a current project [3] that addresses some of the strategic implications for technology and solution providers.

I plan to write a few articles on the topic of AI/ML for IoT over the coming months. This post begins by setting the scene. In future posts I will describe pitfalls in the use of AI/ML as well as new requirements at the intersection of AI and IoT. In the rest of this post, I use ‘AI’ as an umbrella term for AI and ML.

A Logical Framework for AI and IoT


Before we can explore the opportunities for AI in IoT, let us define a generic system. This consists of three domains, beginning with an array of sensors attached to a machine or family of connected devices. In a propulsion engine, sensors might be attached to the fuel injector (the controlling variable) and at various output points to measure acceleration, pressure, or temperature. By deploying pressure or temperature sensors along a jet engine, it is possible to study variations in pressure differentials and temperature gradients. These will vary over time as the engine ages. The rate of change is one way to detect whether certain parts of the engine are deteriorating more quickly than predicted by a digital twin.
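
To make this concrete, here is a minimal sketch in Python of how the rate of change of a temperature gradient might be monitored. The temperature readings are invented, and the drift limit standing in for the digital twin’s prediction is an assumed figure:

```python
import numpy as np

# Hypothetical temperatures (degrees C) at four stations along the
# engine, sampled once per day. Rows: days; columns: stations from
# inlet to exhaust. All values are invented for illustration.
temps = np.array([
    [420.0, 610.0, 835.0, 640.0],
    [421.5, 612.0, 839.0, 642.5],
    [423.0, 615.5, 844.0, 646.0],
])

# Temperature gradient between adjacent stations on each day.
gradients = np.diff(temps, axis=1)

# Day-to-day rate of change of each gradient (degrees C per day).
gradient_drift = np.diff(gradients, axis=0)

# Assume the digital twin predicts that normal ageing drifts a
# gradient by at most 1.0 degree C per day (an invented figure).
PREDICTED_MAX_DRIFT = 1.0

for day, drift in enumerate(gradient_drift, start=1):
    for station in np.flatnonzero(np.abs(drift) > PREDICTED_MAX_DRIFT):
        print(f"Day {day}: gradient between stations {station} and "
              f"{station + 1} is drifting faster than predicted")
```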

The second important component in the system is the IoT platform. This can support a variety of service-enabling tasks including, but not limited to, registration (to identify individual sensors and the applications that draw on their data), data management (data transfer and storage), device management (calibration and remote configuration of sensors) and visualization.
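
As an illustration, registering a sensor often boils down to a single call against the platform’s device API. The sketch below is Python against a hypothetical REST endpoint; the URL, payload fields, and token are all invented for the example:

```python
import requests

# Hypothetical platform endpoint and credentials -- not a real API.
PLATFORM_URL = "https://iot-platform.example.com/api/v1/devices"
API_TOKEN = "..."  # issued by the platform operator

# Register a temperature sensor so that applications can
# discover it and subscribe to its data stream.
registration = {
    "device_id": "engine-7/turbine/temp-03",
    "type": "temperature",
    "units": "celsius",
    "sample_rate_hz": 10,
    "location": "combustor outlet",
}

response = requests.post(
    PLATFORM_URL,
    json=registration,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
print("Registered:", response.json().get("device_id"))
```

Data management and device management calls would follow a similar request/response pattern.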

Data then enters the application domain via the IoT platform. The first step involves the application of signal processing and machine learning techniques. Example signal processing techniques include data filtering (e.g. for noise attenuation, or to detect outliers based on standard deviation thresholds), normalization (e.g. to scale all readings onto a -100 to 100 scale) and feature extraction (e.g. the amount of time a signal exceeds a preset threshold). Some of this data might then be processed using ML techniques to create a simplified, dynamic model of the machine. Think of this as a digital twin of the machine, ‘learned’ from historical data. A technician can then operate this digital twin in parallel with the real machine. This would involve feeding the signal from the fuel injector into the dynamic model, which would then predict the temperature at a key location in the engine. A comparison of actual and predicted temperatures, over time, would allow the technician to detect changes in behavior.
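
The following sketch (Python, with synthetic data and arbitrary thresholds of my own choosing) strings these steps together: filter, detect outliers, normalize, extract a feature, and compare actual readings against a simple model ‘learned’ from history:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Synthetic raw temperature signal: a slow drift plus measurement
# noise, with a step change late on to simulate an incipient fault.
t = np.linspace(0.0, 10.0, 1000)
raw = 75.0 + 0.5 * t + rng.normal(0.0, 2.0, size=t.shape)
raw[t > 7.0] += 4.0  # simulated change in behavior

# 1. Filtering: a simple moving average attenuates the noise.
window = 25
smoothed = np.convolve(raw, np.ones(window) / window, mode="same")

# 2. Outlier detection: flag samples more than 3 standard
#    deviations away from the smoothed signal.
noise = raw - smoothed
outliers = np.abs(noise) > 3 * noise.std()

# 3. Normalization: rescale onto the -100 to 100 range.
normalized = 200 * (smoothed - smoothed.min()) / (smoothed.max() - smoothed.min()) - 100

# 4. Feature extraction: fraction of time above a preset threshold.
THRESHOLD = 50.0  # arbitrary threshold on the normalized scale
time_above = (normalized > THRESHOLD).mean()

# 5. Digital-twin comparison: a model learned from historical data
#    (a trivial linear fit on the first half here) predicts the
#    expected temperature; a growing residual signals a change.
coeffs = np.polyfit(t[:500], smoothed[:500], deg=1)
predicted = np.polyval(coeffs, t)
model_residual = smoothed - predicted

print(f"outliers flagged: {outliers.sum()}")
print(f"fraction of time above threshold: {time_above:.2f}")
print(f"max model residual: {np.abs(model_residual).max():.2f} degrees C")
```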

In practice, an AI application would take the role of the technician. This would enable continuous monitoring and automate problem detection by mapping features extracted from the data onto a map that captures different modes of failure. The technician’s expert knowledge and mental model of the engine’s behavior corresponds to another form of digital twin, based on logical relationships (e.g. a governor controls fuel flow into a manifold and then into several piston chambers that contain pistons with sealant rings that bear the brunt of wear) and causal reasoning (e.g. if there is a high-frequency noise and a rise in lubricant temperature then a bearing might be about to fail).
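
A minimal version of that feature-to-failure-mode mapping is a nearest-centroid classifier. In the Python sketch below, the feature values and centroids are invented for illustration; in practice they would be learned from labeled historical data:

```python
import numpy as np

# Feature vectors: (high-frequency noise level, lubricant temperature
# rise in degrees C). Centroids are invented for illustration; in a
# real system they would be learned from labeled history.
FAILURE_MODES = {
    "normal operation":     np.array([0.1, 0.5]),
    "bearing wear":         np.array([0.8, 4.0]),
    "sealant ring leakage": np.array([0.3, 6.5]),
}

def classify(features: np.ndarray) -> str:
    """Map an extracted feature vector to the nearest failure mode."""
    distances = {
        mode: np.linalg.norm(features - centroid)
        for mode, centroid in FAILURE_MODES.items()
    }
    return min(distances, key=distances.get)

# High-frequency noise plus a rising lubricant temperature: the causal
# rule "a bearing might be about to fail" falls out of the geometry.
print(classify(np.array([0.7, 3.5])))  # -> bearing wear
```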

Taking AI from Pilots into Business-Critical Environments


Systems that use AI techniques to analyze IoT data focus on software algorithms and knowledge representation capabilities in the application domain. The market has not yet evolved to a stage of maturity where AI technologies are integrated with components in the sensor or the platform domains. There is a separation of domains with AI and IoT operating in silos. As illustrated, the flow of information between the sensor and IoT domains is bidirectional. IoT platforms are a conduit for data. They also exert control over sensors and devices for identification, firmware updating and security policy functions. However, AI in the application domain is a unidirectional process. It consumes data but rarely interacts with upstream resources.

As the deployment of AI solutions matures and large numbers of firms rely on them in business-critical situations, there will be a tighter, bidirectional integration of the end-to-end solution stack. Consider the use of an AI system to detect and report incipient faults. Before taking the system out of service for inspection and maintenance purposes, operational managers will want to validate the reasons behind the reported fault. Might this be a case of a faulty sensor, or is there a likely fault in the machine? I will discuss the implications of such requirements in forthcoming posts. These will touch on new opportunities to build explainable-AI systems and what this means for communications service and IoT platform providers.


[1] K. Figueredo, Coupling numerical & symbolic techniques for dynamic data validation, First International Conference on Intelligent Systems Engineering (Conf. Publ. No. 360), Edinburgh, UK, 1992, pp. 35-40.

[2] White Paper on Artificial Intelligence: a European approach to excellence and trust - https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en 

[3] ETSI Specialist Task Force 584: Artificial Intelligence for IoT Systems https://portal.etsi.org/STF/STFs/STF-HomePages/STF584


Image Credit: Alessandro Bianchi via unsplash.com

14 comments:

  1. 30 May 2020 update

    Google’s medical AI was super accurate in a lab. Real life was a different story

    https://www.technologyreview.com/2020/04/27/1000658/google-medical-ai-accurate-lab-real-life-clinic-covid-diabetes-retina-disease/

  2. 10 June 2020 update

    Achieving trustworthy AI with standards

    The TR identifies possible measures that improve trustworthiness of an AI system by mitigating vulnerabilities across its lifecycle. These relate to:

    - transparency for the features, components and procedures of an AI system
    - controllability by providing reliable mechanisms for an operator to take over control from the AI system
    - aspects of data handling, such as strategies for reducing unfair bias and maintaining privacy by de-identifying personal data
    - robustness or the ability of a system to maintain its level of performance under any circumstances including external interference or harsh environmental conditions
    - testing, evaluation and use


    The TR notes that AI systems are complex and their impact on stakeholders should be carefully considered case by case to decide whether their use is appropriate. A risk-based approach helps organizations identify relevant AI systems’ vulnerabilities and their possible impacts on organizations, their partners, intended users and society, and will help mitigate risks. It is important that all stakeholders understand the nature of the potential risks and the mitigation measures implemented. SC 42 is working on ISO/IEC 23894, Information technology – Artificial intelligence – Risk management, which will define how to apply the risk-based approach to AI.

    https://etech.iec.ch/issue/2020-03/achieving-trustworthy-ai-with-standards

  3. 10 June 2020 update

    UK's Office for AI issues guidelines on procurement of AI systems

    Important points about vendor lock-in and explainable AI.

    https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement

  4. 22 July 2020 update

    Fawkes: Digital Image Cloaking

    Fawkes is a system for manipulating digital images so that they aren't recognized by facial recognition systems.

    At a high level, Fawkes takes your personal images, and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these "cloaked" photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, "cloaked" images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone uses an unaltered image of you (e.g. a photo taken in public) and tries to identify you, they will fail.


    https://www.schneier.com/blog/archives/2020/07/fawkes_digital_.html

  5. 16 October 2020 update

    Since some of the weaknesses of neural nets are the strengths of symbolic AI and vice versa, neurosymbolic AI would seem to offer a powerful new way forward. Roughly speaking, the hybrid uses deep nets to replace humans in building the knowledge base and propositions that symbolic AI relies on. It harnesses the power of deep nets to learn about the world from raw data and then uses the symbolic components to reason about it.


    https://knowablemagazine.org/article/technology/2020/what-is-neurosymbolic-ai

  6. 20 July 2021

    AI Standardisation Landscape: state of play and link to the EC proposal for an AI regulatory framework

    https://publications.jrc.ec.europa.eu/repository/handle/JRC125952

  7. 24 Aug 2021 update

    An interesting way to look at digital twins.

    One of the strangest aspects of quantum physics is entanglement: If you observe a particle in one place, another particle—even one light-years away—will instantly change its properties, as if the two are connected by a mysterious communication channel.

    https://www.sciencemag.org/news/2018/04/einstein-s-spooky-action-distance-spotted-objects-almost-big-enough-see

  8. 7 Sep 2021 update

    In this article, we are looking into predictive maintenance for pump sensor data. Our approach is quite generic towards time-series analysis, even though each step might look slightly different in your own project. The aim is to give you an idea of the general thought process and how you would attack such a problem.

    https://towardsdatascience.com/lstm-for-predictive-maintenance-on-pump-sensor-data-b43486eb3210

  9. 18 July 2022 update

    UK Policy Paper on Establishing a pro-innovation approach to regulating AI

    We are therefore proposing to establish a pro-innovation framework for regulating AI which is underpinned by a set of cross-sectoral principles tailored to the specific characteristics of AI, and is:

    Context-specific. We propose to regulate AI based on its use and the impact it has on individuals, groups and businesses within a particular context, and to delegate responsibility for designing and implementing proportionate regulatory responses to regulators. This will ensure that our approach is targeted and supports innovation.

    Pro-innovation and risk-based. We propose to focus on addressing issues where there is clear evidence of real risk or missed opportunities. We will ask that regulators focus on high risk concerns rather than hypothetical or low risks associated with AI. We want to encourage innovation and avoid placing unnecessary barriers in its way.

    Coherent. We propose to establish a set of cross-sectoral principles tailored to the distinct characteristics of AI, with regulators asked to interpret, prioritise and implement these principles within their sectors and domains. In order to achieve coherence and support innovation by making the framework as easy as possible to navigate, we will look for ways to support and encourage regulatory coordination - for example, by working closely with the Digital Regulation Cooperation Forum (DRCF) and other regulators and stakeholders.

    Proportionate and adaptable. We propose to set out the cross-sectoral principles on a non-statutory basis in the first instance so our approach remains adaptable - although we will keep this under review. We will ask that regulators consider lighter touch options, such as guidance or voluntary measures, in the first instance. As far as possible, we will also seek to work with existing processes rather than create new ones.

  10. 1 August 2022

    System Cards, a new resource for understanding how AI systems work

    https://ai.facebook.com/blog/system-cards-a-new-resource-for-understanding-how-ai-systems-work/

  11. 9 August 2022 update

    Our central message is that data quality matters more than data quantity, and that compensating the former with the latter is a mathematically provable losing proposition.

    https://www.nature.com/articles/s41586-021-04198-4

  12. 29 August 2022 update

    How to recognize AI snake oil, Arvind Narayanan, Associate Professor of Computer Science, Princeton University

    Takeaways
    - AI excels at some tasks, but can’t predict social outcomes.
    - We must resist the enormous commercial interests that aim to obfuscate this fact.
    - In most cases, manual scoring rules are just as accurate, far more transparent, and worth considering.

    https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf


  13. 24 Dec 2022 update

    AI Governance Needs Technical Work

    People who want to improve the trajectory of AI sometimes think their options for object-level work are (i) technical safety work and (ii) non-technical governance work. But that list misses things; another group of arguably promising options is technical work in AI governance, i.e. technical work that mainly boosts AI governance interventions. This post provides a brief overview of some ways to do this work—what they are, why they might be valuable, and what you can do if you’re interested. I discuss:

    - Engineering technical levers to make AI coordination/regulation enforceable (through hardware engineering, software/ML engineering, and heat/electromagnetism-related engineering)
    - Information security
    - Forecasting AI development
    - Technical standards development
    - Grantmaking or management to get others to do the above well
    - Advising on the above
    - Other work

    https://forum.effectivealtruism.org/posts/BJtekdKrAufyKhBGw/ai-governance-needs-technical-work#Technical_standards_development

  14. 3 August 2023 update

    The Need for Trustworthy AI

    Across the internet, devices and services that seem to work for you already secretly work against you. Smart TVs spy on you. Phone apps collect and sell your data. Many apps and websites manipulate you through dark patterns, design elements that deliberately mislead, coerce or deceive website visitors. This is surveillance capitalism, and AI is shaping up to be part of it.
    ---
    You have no reason to trust today’s leading generative AI tools. Leave aside the hallucinations, the made-up “facts” that GPT and other large language models produce. We expect those will be largely cleaned up as the technology improves over the next few years.

    But you don’t know how the AIs are configured: how they’ve been trained, what information they’ve been given, and what instructions they’ve been commanded to follow. For example, researchers uncovered the secret rules that govern the Microsoft Bing chatbot’s behavior. They’re largely benign but can change at any time.

    Many of these AIs are created and trained at enormous expense by some of the largest tech monopolies. They’re being offered to people to use free of charge, or at very low cost. These companies will need to monetize them somehow. And, as with the rest of the internet, that somehow is likely to include surveillance and manipulation.
    ---
    We believe that people should expect more from the technology and that tech companies and AIs can become more trustworthy. The European Union’s proposed AI Act takes some important steps, requiring transparency about the data used to train AI models, mitigation for potential bias, disclosure of foreseeable risks and reporting on industry standard tests.

    Most existing AIs fail to comply with this emerging European mandate, and, despite recent prodding from Senate Majority Leader Chuck Schumer, the US is far behind on such regulation.



    https://www.schneier.com/blog/archives/2023/08/the-need-for-trustworthy-ai.html
