Jul 7, 2020

Opportunities to Apply AI and ML in IoT Systems

My last article [1] introduced a framework to explain the basic elements of an IoT system with the aim of highlighting where Artificial Intelligence (AI), Machine Learning (ML) and Digital Twin (DT) components are typically added. 

The aim of this article is to explore the longer-term opportunities for AI/ML technologies and how these will shape mobile operator and technology provider business strategies. There are two developments to consider in drawing out this roadmap. The first is the tighter integration of IoT and AI/ML technologies vertically along the technology stack. Think of this as a way of improving how well different components interact in order to improve reliability and service quality. The second concerns a new set of requirements that users and regulatory agencies will expect from AI/ML systems. As an illustrative example, consider an AI application that issues an alarm that a machine is about to fail, with some probabilistic context such as “greater than 75% chance of failure in the next month”. Is it enough to stop a production line based on this readout? In practice, there is likely to be a higher-level requirement that determines the trustworthiness of this alarm based on its performance over time. Like the boy who cried ‘wolf’, does a sequence of alarms point to a deteriorating piece of equipment or to a faulty sensor? The judgement required here involves a different set of data and, potentially, the involvement of other, supervisory AI/ML sub-systems.
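To make the ‘cried wolf’ example concrete, here is a minimal sketch of the kind of supervisory logic involved. It is written in Python with purely hypothetical names, priors and thresholds, and simply discounts an alarm’s reported failure probability by the alarm source’s historical precision; a production system would need a far richer model of sensor health and fault history.

```python
# Minimal sketch (names, priors and the 0.5 threshold are illustrative assumptions).
# A supervisory tracker records whether past alarms turned out to be real faults,
# then discounts new alarms by the source's historical precision.

from dataclasses import dataclass

@dataclass
class AlarmTrustTracker:
    confirmed: int = 1  # prior pseudo-count: alarms later confirmed as real faults
    spurious: int = 1   # prior pseudo-count: alarms traced to sensor or data problems

    def record_outcome(self, was_real_fault: bool) -> None:
        if was_real_fault:
            self.confirmed += 1
        else:
            self.spurious += 1

    def precision(self) -> float:
        # Beta-Bernoulli style estimate of how often this source's alarms are genuine.
        return self.confirmed / (self.confirmed + self.spurious)

    def adjusted_probability(self, reported_probability: float) -> float:
        # Weight the application's own probability by the source's track record.
        return reported_probability * self.precision()


tracker = AlarmTrustTracker()
tracker.record_outcome(was_real_fault=True)   # a past alarm that preceded a real failure
tracker.record_outcome(was_real_fault=False)  # a past alarm traced to a faulty sensor

p = tracker.adjusted_probability(0.75)        # the "greater than 75% chance" alarm
print(f"trust-adjusted failure probability: {p:.2f}")
if p > 0.5:  # illustrative decision threshold
    print("escalate to maintenance planning")
```

The point of the sketch is simply that the decision to act draws on a different data set (the alarm source’s track record) than the alarm itself.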

The following decomposition of an IoT system helps to explain these issues and the differing levels of complexity that apply to AI/ML capabilities. A simplified view of the combination of AI/ML and IoT treats the IoT portion as the domain responsible for collecting and feeding data into an application that relies on AI/ML functionality. This activity maps to the user plane, as illustrated in the leftmost structure.
The next way to improve the performance of IoT systems involves the application of AI/ML within the IoT domain. This could be a background process that checks whether a data stream is valid or possibly corrupted due to a faulty sensor or intermittent transmission of data. This type of processing maps to the control plane.
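As a minimal sketch of such a control-plane check (Python; the window size, gap limit and outlier threshold are illustrative assumptions rather than recommended values), a background process could flag a stream when readings stop arriving on schedule or jump far outside their recent range:

```python
# Hypothetical control-plane validity check for a single sensor stream.
# Two simple symptoms are covered: transmission gaps and out-of-range readings.

import statistics
import time
from collections import deque

WINDOW = 60           # number of recent samples to keep
MAX_GAP_SECONDS = 30  # longest acceptable silence between readings
Z_LIMIT = 4.0         # readings this many standard deviations out are suspect

readings = deque(maxlen=WINDOW)
last_seen = time.time()

def check_sample(value, timestamp):
    """Return a list of issues detected for this sample (empty means OK)."""
    global last_seen
    issues = []

    # Intermittent transmission: too long since the previous reading arrived.
    if timestamp - last_seen > MAX_GAP_SECONDS:
        issues.append("transmission gap")
    last_seen = timestamp

    # Possible faulty sensor: value far outside the recent operating range.
    if len(readings) >= 10:
        mean = statistics.fmean(readings)
        stdev = statistics.pstdev(readings) or 1e-9
        if abs(value - mean) / stdev > Z_LIMIT:
            issues.append("out-of-range reading")

    readings.append(value)
    return issues

# Example: a reading that arrives 45 seconds after the previous one.
print(check_sample(9.7, time.time() + 45))
```

Flags like these feed the control plane rather than the application itself, so that a corrupted stream can be repaired or excluded before it distorts the AI/ML output.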

The rightmost structure in the illustration shows a tighter and bi-directional relationship between the application and IoT domains. This applies to requirements for enabling explainable-AI and trustworthy-AI, among other knowledge-related capabilities. Consider the case of an AI application that signals an incipient fault in a mission-critical machine. Before that machine is taken out of service for maintenance, a technician would seek an explanation of the chain of logic, the provenance of the data, and the anomalies that led to the fault being deduced. One strategy might rely on the use of semantic modeling and automated fault-tree reasoning functions. An example of trustworthy-AI would use sensor profile and historical metadata from IoT sensors. Profile data might include information about the supplier as well as certification credentials or security capabilities engineered into the data source. Historical data might include information about the regularity of calibration tests and updates to firmware and security credentials over time. These kinds of interactions map to the knowledge plane between the application and IoT modules.
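A minimal sketch of the knowledge-plane idea follows (Python; the field names, the certification shown and the scoring rule are illustrative assumptions). It captures the kind of profile and historical metadata a trustworthy-AI check might consult before acting on a sensor’s data:

```python
# Hypothetical knowledge-plane record for one sensor: profile data (supplier,
# certifications) plus historical data (calibration, firmware updates), with a
# crude 0..1 trust score. Everything here is an illustrative assumption.

from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SensorProvenance:
    sensor_id: str
    supplier: str
    certifications: list[str] = field(default_factory=list)
    last_calibration: date | None = None
    last_firmware_update: date | None = None

    def trust_score(self, today: date) -> float:
        """Combine certification status and maintenance recency into one score."""
        score = 0.0
        if self.certifications:
            score += 0.4
        if self.last_calibration and (today - self.last_calibration).days < 365:
            score += 0.3
        if self.last_firmware_update and (today - self.last_firmware_update).days < 180:
            score += 0.3
        return score


sensor = SensorProvenance(
    sensor_id="pump-07-vibration",
    supplier="Example Sensors Ltd",
    certifications=["security development lifecycle certificate"],
    last_calibration=date(2020, 3, 12),
    last_firmware_update=date(2020, 6, 1),
)
print(sensor.trust_score(date(2020, 7, 7)))  # 1.0 for this well-maintained sensor
```

In practice such records would be shared across the application and IoT domains through agreed information models rather than hard-coded structures, which is exactly where the collaboration and standardization challenge discussed below lies.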

This framework leads to the following implications for mobile operators and technology providers:

  • User Plane – there is a growing segment of specialist firms offering AI/ML solutions based on condition-monitoring algorithms or digital twin technology capabilities. These specialists are interested in partnering with data-rich firms and service providers that offer monitoring and maintenance solutions to building managers, facilities owners, and transportation network operators, for example. Another route to market involves embedding AI/ML capabilities in machines via critical and data-rich components such as bearing units, compressors, and pumps. The application domain is typically beyond the addressable market of mobile operators. Nevertheless, they have an opportunity to assemble a portfolio of AI/ML specialists that their customers can draw upon. This could be an extension of the partner programs that mobile operators deployed in the heyday of the M2M market to simplify access to connectivity hardware (e.g. network-certified modules, gateways etc.) and systems integration capabilities. The value proposition in this case is one of making it easy for end-user organizations and their systems integration teams to source AI/ML expertise in ‘plug-and-play’ fashion.
  • Control Plane – to some degree, IoT platform providers are already building basic AI/ML capabilities into their platform service offerings. There is scope to add more sophistication to the AI/ML toolkit over time, driven by rising quality-of-service and advanced-analytics expectations. Since mobile operators focus on connectivity solutions and services, the Control Plane opportunity is to engineer AI/ML capabilities out of network service functions. This should make it easier to leverage synergies between communications networks and IoT platforms, creating more intelligent IoT platforms rather than treating them as ‘over-the-top’ solutions. In a similar manner, edge computing vendors can add AI/ML capabilities to edge devices and gateways to offload fog traffic in ways that enhance end-to-end measures of service quality.
  • Knowledge Plane – opportunities in the knowledge plane depend on collaboration and standardization because they rely on data and intelligence sharing between different components in IoT systems. This is new territory. Ground rules are currently being shaped through numerous initiatives to develop the rules, best practices, and regulations around ethical AI, initially focused on personally identifiable data. In the realm of IoT data, the opportunities map onto enablers such as interoperable information models, semantics, and IoT workflow management. The mobile operator challenge is to figure out how to make the data from their networks and operational support systems knowledge-ready and commercially shareable with different categories of users outside the traditional mobile ecosystem. To avoid disintermediation risks, mobile operators can capture a share of the IoT platform services value by establishing knowledge and information links with end users.



[1] A Framework for AI and IoT - https://www.more-with-mobile.com/2020/05/a-framework-for-ai-and-iot.html

17 comments:

  1. 7 July 2020 update

    Guidance on the AI auditing framework

    https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf

  2. 8 July 2020 update

    South Korean carriers increase presence in the smart factory field

    https://enterpriseiotinsights.com/20200706/smart-factory/south-korean-telcos-increase-presence-smart-factory-field

  3. 16 July 2020 update

    TOWARDS A TRUSTWORTHY AI

    https://www.iso.org/news/ref2530.html

  4. 20 July 2020 update

    The EU's ALTAI - The Assessment List for Trustworthy Artificial Intelligence

    The concept of Trustworthy AI was introduced by the High-Level Expert Group on AI (AI HLEG) in the Ethics Guidelines for Trustworthy Artificial Intelligence (AI) and is based on seven key requirements:

    - Human Agency and Oversight;
    - Technical Robustness and Safety;
    - Privacy and Data Governance;
    - Transparency;
    - Diversity, Non-discrimination and Fairness;
    - Environmental and Societal well-being; and
    - Accountability.

    https://futurium.ec.europa.eu/en/european-ai-alliance/pages/altai-assessment-list-trustworthy-artificial-intelligence

  5. 28 July 2020 update

    Extract from AI Now Institute comments on the EU's AI Strategy White Paper

    Need For A Shift In Business Models To Incentivize Safety

    The lack of investment in security during the production of IoT devices, motivates the EY expert to conclude that “[i]t is critically urgent that the organizations and institutions developing and using these technologies adopt a Security by Design perspective.” However, the ability to ensure AI systems are safe or secure by design is compromised by the economic incentives and power structures in the software industry. As Gürses et al. show, more and more tools are hidden away in “services” in the form of software libraries, toolchains and modular application programming interfaces (APIs). These are controlled by a handful of tech companies which effectively reign over both software tooling and the accompanied digital infrastructure. As a result, building a system may be more efficient in terms of coding. However, the same modularity makes addressing the vulnerabilities of the overall system, which is now a composition of modular services (consisting of different APIs, toolchains and libraries), a much more complex task, as the tech companies that provide these often don't enable access to documentation that would allow these users to validate their security. This is further intensified when certain service modules are developed by third parties and hidden away behind APIs. To make matters more challenging, different modules tend to be developed by different teams or companies and may change in an asynchronous manner.

    https://ainowinstitute.org/ai-now-comments-to-eu-whitepaper-on-ai.pdf

  6. 23 Oct 2020

    Natural Language Explanations of Deep Networks

    How to explain, in natural language terms, the meaning of a 'lit-up' neuron in a deep neural network.

    https://youtu.be/uZUC1zfx4MA



  7. 24 Dec 2020 update

    Re-usable ML driven by concerns over energy usage


    Given the ever-increasing computational costs of modern machine learning models, we need to find new ways to reuse such expert models and thus tap into the resources that have been invested in their creation. Recent work suggests that the power of these massive models is captured by the representations they learn. Therefore, we seek a model that can relate between different existing representations and propose to solve this task with a conditionally invertible network. This network demonstrates its capability by (i) providing generic transfer between diverse domains, (ii) enabling controlled content synthesis by allowing modification in other domains, and (iii) facilitating diagnosis of existing representations by translating them into interpretable domains such as images. Our domain transfer network can translate between fixed representations without having to learn or finetune them. This allows users to utilize various existing domain-specific expert models from the literature that had been trained with extensive computational resources. Experiments on diverse conditional image synthesis tasks, competitive image modification results and experiments on image-to-image and text-to-image generation demonstrate the generic applicability of our approach. For example, we translate between BERT and BigGAN, state-of-the-art text and image models to provide text-to-image generation, which neither of both experts can perform on their own.

    https://arxiv.org/abs/2005.13580

  8. 19 March 2021 update

    Parsing Interpretability

    Shah identifies three main types of AI interpretability: The engineers’ version of explainability, which is geared toward how a model works; causal explainability, which relates to why the model input yielded the model output; and trust-inducing explainability that provides the information people need in order to trust a model and confidently deploy it.

    https://hai.stanford.edu/news/should-ai-models-be-explainable-depends

  9. 1 April 2021 update

    A SIMPLE GUIDE TO EXPLAINABLE ARTIFICIAL INTELLIGENCE

    https://www.ai4eu.eu/simple-guide-explainable-artificial-intelligence

  10. 2 April 2021 update

    Data provenance & lineage: technical guidance on the tracing of data - Part 1 - https://eudatasharing.eu/technical-aspects/data-provenance-part-1

    Data provenance & lineage: technical guidance on the tracing of data - Part 2 - https://eudatasharing.eu/technical-aspects/data-provenance-part-2



  11. 7 April 2021 update

    Should AI Models Be Explainable? That depends.

    A Stanford researcher advocates for clarity about the different types of interpretability and the contexts in which it is useful.

    Shah identifies three main types of AI interpretability:
    i) the engineers’ version of explainability, which is geared toward how a model works;

    ii) causal explainability, which relates to why the model input yielded the model output; and

    iii) trust-inducing explainability, which provides the information people need in order to trust a model and confidently deploy it.

    https://hai.stanford.edu/news/should-ai-models-be-explainable-depends

  12. 18 May 2021 update

    5G PPP Technology Board publishes a position paper on "AI and ML – Enablers for Beyond 5G Networks"

    https://5g-ppp.eu/wp-content/uploads/2021/05/AI-MLforNetworks-v1-0.pdf

  13. 20 May 2021 update

    an observation on the importance of causality in AI

    "By the way, Judea Pearl has written a book called The book of Why. Pearl believes that current AI is mostly just glorified curve fitting, and that adding causal intelligence to current AI will be necessary in order to endow machines with conscience, empathy, free will, etc. Pearl defines a conscious, self-aware machine as one that has access to a rough blueprint of its personal bnet structures."

    https://www.datasciencecentral.com/profiles/blogs/causal-ai-amp-bayesian-networks

  14. 18 June 2021 update

    It’s hard to understate just how strange the state of “explainable AI” is today. For nearly all “local explanation” techniques, the only people who understand them would never use them, and the only people who use them do not understand them.

    https://twitter.com/zacharylipton/status/1405693787867914242?s=20

  15. 17 July 2021

    Computational Trust

    https://scholar.google.co.uk/citations?view_op=view_citation&hl=en&user=Qz73wh4AAAAJ&citation_for_view=Qz73wh4AAAAJ:u5HHmVD_uO8C

  16. 28 June 2022

    Understanding and explaining the mistakes made by trained models is critical to many machine learning objectives, such as improving robustness, addressing concept drift, and mitigating biases. However, this is often an ad hoc process that involves manually looking at the model’s mistakes on many test samples and guessing at the underlying reasons for those incorrect predictions.

    In this paper, we propose a systematic approach, conceptual counterfactual explanations (CCE), that explains why a classifier makes a mistake on a particular test sample(s) in terms of human-understandable concepts (e.g. this zebra is misclassified as a dog because of faint stripes). We base CCE on two prior ideas: counterfactual explanations and concept activation vectors, and validate our approach on well-known pretrained models, showing that it explains the models’ mistakes meaningfully. In addition, for new models trained on data with spurious correlations, CCE accurately identifies the spurious correlation as the cause of model mistakes from a single misclassified test sample.

    On two challenging medical applications, CCE generated useful insights, confirmed by clinicians, into biases and mistakes the model makes in real-world settings. The code for CCE is publicly available at https://github.com/mertyg/debug-mistakes-cce

    https://arxiv.org/pdf/2106.12723.pdf

  17. 22 Nov 2022 update

    Scientists Increasingly Can’t Explain How AI Works

    AI researchers are warning developers to focus more on how and why a system produces certain results than the fact that the system can accurately and rapidly produce them. 

    https://www.vice.com/en/article/y3pezm/scientists-increasingly-cant-explain-how-ai-works
