BOOK

Table of Contents

IOT-OPEN.EU Reloaded

Authors

IOT-OPEN.EU Reloaded Consortium partners proudly present the Advanced IoT Systems book. The complete list of contributors is presented below.

Riga Technical University
  • Agris Nikitenko, Ph. D., Eng.
  • Karlis Berkolds, M. sc., Eng.
Silesian University of Technology
  • Piotr Czekalski, Ph. D., Eng.
  • Krzysztof Tokarz, Ph. D., Eng.
  • Godlove Suila Kuaban, M. sc., Eng.
Tallinn University of Technology
  • Raivo Sell, Ph. D., ING-PAED IGIP
External Contributors
  • DCB Distribution & Consulting Becker, Friedhelm Becker

Graphic Design and Images

  • Blanka Czekalska, M. sc., Eng., Arch.
  • Piotr Czekalski, Ph. D., Eng.

Versions

This page keeps track of the content reviews and versions as part of a continuous maintenance process.

Table 1: Versions and Content Updates
Version | Update Date | Content updates summary | Other comments
v 0.1   | 08.08.2023  | ToC created             |


Project Information

This book was implemented under the wings of the following projects:

  • Cooperation Partnerships in Higher Education, 2022, IOT-OPEN.EU Reloaded: Education-based strengthening of the European universities, companies and labour force in the global IoT market, project number: 2022-1-PL01-KA220-HED-000085090,
  • Horizon 2020 Research Innovation and Staff Exchange Programme (RISE) under the Marie Skłodowska-Curie Action, Programme H2020-EU.1.3.3. - Stimulating innovation by means of cross-fertilisation of knowledge, Grant Agreement No 871163: Reliable Electronics for Tomorrow’s Active Systems.

Erasmus+ Disclaimer
This project has been funded with support from the European Commission.
This publication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

Copyright Notice
This content was created by the IOT-OPEN.EU Reloaded Consortium 2022-2025.
The content is copyrighted and distributed under the Creative Commons CC BY-NC licence, free for non-commercial use.

CC BY-NC

In case of commercial use, please get in touch with an IOT-OPEN.EU Reloaded Consortium representative.

Introduction

General audience classification icon

In recent years (2024-2025), we have seen rapid growth in the Internet of Things (IoT) domain, as reflected in the number of scientific publications, the market volume, and other indicators suggesting that IoT is here to stay. IoT is one of the top priorities in Horizon Europe's Research and Innovation strategic plan, which, among its thematic areas, recognises IoT as one of the most important under the Technology thematic group [European Commission, Directorate-General for Research and Innovation, Synopsis report – Looking into the R&I future priorities 2025-2027, Publications Office of the European Union, 2023, https://data.europa.eu/doi/10.2777/93927]. Given the importance of IoT technologies, how can one contribute to the domain by designing, developing and using IoT systems for different applications? This book, a continuation of the previous one, “Introduction to the IoT,” provides the background needed for design methods, IoT data analysis, cybersecurity essentials, and other vital topics.

Content classification hints

The book constitutes a comprehensive guide for a variety of education levels. A brief classification of the contents by target group may help with selective reading of the book and ease finding the right chapters for the desired education level. To inform the reader about the proposed target group, icons are assigned to first-level (top) and second-level chapters. The list of icons and the target groups they represent is presented in Table 2.

Table 2: List of icons presenting content classification and corresponding target groups
Icon Target group
 General audience classification icon General Public audience: all those who want to get familiar with basic concepts but do not necessarily step into technical details.
 Bachelors (1st level) classification icon Bachelor and Engineering level students
 Masters (2nd level) classification icon Masters students
 Enterprise and VETS classification icon Enterprise, VETS and technical professionals

— MISSING PAGE —

IoT Design Methodologies

IoT systems are software-intensive smart cyber-physical systems (CPS) that include components from three main domains: hardware, mostly electromechanical devices; software, mostly microcontroller-specific process control software; and communication infrastructure. To develop an IoT solution, all aspects of these three domains must be designed in close synergy. At the component level, the main building block of an IoT system is a node. A node is usually a microcontroller-powered device dedicated to performing a specific task. The most common task is to take measurements from the environment, but a node can also act as an actuator or a user interface. In addition, IoT nodes can provide all kinds of supportive functions, such as logging, timekeeping, storage, etc. Still, the main function is always one of the three core ones: sensing the environment, actuating, or interfacing with humans via user interfaces.

Today, CPS are created by expanding mechatronic systems with additional inputs and outputs and coupling them to the IoT. In principle, an IoT system is similar to classical smart systems, e.g., robots or mechatronic systems. These systems can be decomposed into three interconnected domains: process control by software, mechanical movements, and sensing of physical parameters from the system's environment. The figure below demonstrates how these domains are interconnected to act as a smart system.

Figure 2: Smart system components and interactions
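The node roles described above can be illustrated with a minimal sketch. This is not a real IoT framework: the class and method names (`IoTNode`, `NodeRole`, `perform`) are our own, chosen only to show how a node couples one primary role (sensing, actuating, or a user interface) with supportive functions such as logging.

```python
from enum import Enum


class NodeRole(Enum):
    SENSOR = "sensor"
    ACTUATOR = "actuator"
    USER_INTERFACE = "user_interface"


class IoTNode:
    """Minimal model of an IoT node: one primary role plus support functions."""

    def __init__(self, node_id: str, role: NodeRole):
        self.node_id = node_id
        self.role = role
        self.log: list[str] = []  # supportive function: local logging

    def perform(self, payload) -> dict:
        """Carry out the node's primary task and record it in the local log."""
        message = {"node": self.node_id, "role": self.role.value, "payload": payload}
        self.log.append(f"{self.role.value}: {payload}")
        return message


# A sensing node producing a measurement message for the rest of the system
temperature_node = IoTNode("plant1-temp-07", NodeRole.SENSOR)
msg = temperature_node.perform({"temperature_c": 21.4})
```

In a deployed system, the returned message would be handed to the communication infrastructure rather than kept locally.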

An IoT system has a similar purpose to general smart systems. The main difference is that an IoT system is a distributed solution that delivers smart functions over internet infrastructure. Similar functionality is decomposed into smaller devices, each acting as a single-function device rather than a complex system. Nevertheless, when all the small nodes are interconnected and can exchange messages with each other regardless of their location, we get a powerful system able to perform automation tasks in very wide application domains. The following figure represents the IoT system architecture, its distributed nature, and its communication function.

Figure 3: Smart IoT system components and interactions

Even if the IoT system has a different component architecture from a regular mechatronic system, the development methodologies can be easily adapted from the mechatronic system design and software system design domains. An IoT system has its own specifics, but at the conceptual level it is like any other smart software-intensive system. Thus, the methodologies are not IoT-specific but combinations and adaptations from related domains.

Product development process

The product development process is a well-established domain, and many different concepts exist. Over time, as the software share of today's technical systems has grown, more and more software development methodologies have been integrated into the physical product development process. At the component level, IoT systems are similar to cyber-physical systems, combining characteristics and features of mechatronic, software and network systems. Thus, existing product design methodologies are also logical choices to apply to the IoT system design process. The general product design process, whatever the product's nature, iterates several times through the design stages.

The classical product design process starts with requirement analysis, followed by conceptual design. When a design candidate is selected, the detail design stage develops domain-specific solutions: mechanical, electrical, software, etc. The next stage is to integrate the domain-specific design results into one product and validate the solution. In addition, the product design process must deal with manufacturing preparation, maintenance and utilization planning. The figure below illustrates the general process for most technical system designs, regardless of the application field. However, depending on the system's specifics, several other stages and procedures might also be important to complete.

Figure 4: General product design stages

V-model

IoT systems are a combination of mechatronic and distributed software systems. Therefore, design methodologies from these domains are the most relevant for IoT systems. For example, the well-known V-model has long been used for the software development process but has also been adapted to the mechatronic design process. The Association of German Engineers has issued the guideline VDI 2206 - Design methodology for mechatronic systems (Entwicklungsmethodik für mechatronische Systeme) [1]. This guideline adopts the V-model as a macro-cycle process. The V-model is in line with the general product design stages but emphasises verification and validation throughout the whole development process. The processes are executed sequentially, following a V-shape, hence the name. The actual design process runs through a number of V-shaped macro-cycles, and every cycle increases the product's maturity. For example, the output of the first iteration can be just an early proof-of-concept prototype, while the output of the last iteration is a ready-to-deploy system. How many iterations are needed depends on the complexity of the final product. The figure below presents the V-model adapted to IoT system design. The only difference from mechatronic systems is the domain-specific design stage. However, every general stage has several internal procedures and IoT-specific sub-design stages which must be addressed.

Figure 5: V-model for IoT systems
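The macro-cycle idea can be sketched in a few lines of code. This is only an illustration of the iteration logic, not part of VDI 2206: the stage names and the numeric maturity levels below are our own simplification (e.g. 1 = proof of concept, 2 = deployable system).

```python
# Hypothetical stage names; the guideline itself structures the V differently.
V_MODEL_STAGES = [
    "requirements elicitation",
    "system architecture and design",
    "domain-specific design",  # hardware, software, network, ...
    "integration",
    "verification and validation",
]


def run_v_cycles(target_maturity: int) -> list[str]:
    """Run V-model macro-cycles until the desired maturity level is reached.

    Each completed cycle raises the product maturity by one level.
    """
    history = []
    maturity = 0
    while maturity < target_maturity:
        for stage in V_MODEL_STAGES:
            history.append(f"cycle {maturity + 1}: {stage}")
        maturity += 1  # every completed cycle increases product maturity
    return history


# Two cycles: e.g. a proof of concept followed by a deployable iteration
trace = run_v_cycles(target_maturity=2)
```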

New product development starts with customer input or another motivation, e.g., a business case, which must be carefully analysed and specified in a structured way. Requirements are not always clearly defined, and the effort put into proper requirements engineering pays off by saving significant work in later design stages. It is not good practice to start designing a new system or solution when requirements are not adequately defined. At the same time, rarely is all the information available initially, and requirements may be refined or even changed during the design process. Nevertheless, a well-defined and analysed requirement specification simplifies the later design process and reduces the risk of expensive change handling at later stages. The initial requirements are articulated from the stakeholders' perspective, focusing on their needs and desires rather than on the system itself. In the subsequent step, these requirements are translated into a system-oriented perspective. The specification resulting from the requirements elicitation process provides a detailed description of the system to be developed.

The second design stage is system architecture and design, dedicated to developing concepts for the whole system. Concept development and evaluation are decomposed into several sub-steps and procedures: for example, developing different concept candidates, assessing them, and selecting the best concept for further development. Once the concept solution is selected and validated against the requirements, the final solution candidate can be frozen, and development enters the detailed design stage. In the detailed design stage, domain-specific development occurs, including hardware, software, network structure, etc. Once the domain-specific solutions reach the specified maturity, integration and validation follow. The final step before the first prototype solution is full system testing and, again, verifying and validating against the system requirements.

The whole process may be repeated as many times as necessary, depending on the required maturity level of the final system. If only a proof of concept is needed, one iteration might be enough, which is often the case for educational projects. But for real customer systems, many V-cycle iterations are usually performed. Once the design process is completed, the system enters the production stage, and the focus then shifts to system/user support and maintenance. However, as in modern software-intensive systems, constant development, bug fixes, upgrades, and new feature development are common practice.

Challenges

When designing an IoT system, there are common design challenges as in any other systems engineering project, but also a few IoT-specific aspects. The engineering team must deal with challenges similar to those of mechatronic and software system design. Some key aspects to address when designing and deploying a new IoT system include:

  • A new IoT system often requires organizational and working-culture changes. Changing workers' mindsets to collaborate with the new IoT system can be a critical issue and is frequently underestimated during the design process.
  • Due to their complex nature and dependence on several existing systems, IoT projects tend to take much longer to implement than anticipated.
  • IoT systems are multi-domain solutions and thus require engineering skills from very different fields, which might not be available, including microcontroller programming, sensors, data communication, cybersecurity, etc.
  • Interconnectivity issues can be critical, as the IoT system components must be able to communicate with each other, yet there is a multitude of protocols, network architectures, and even electrical connectors that may cause failures.
  • Data security is often underestimated. An IoT system is not a standalone system but, in most cases, a set of systems interconnected through the public internet. This makes implementing cybersecurity very challenging because the overall system security is defined by its weakest segment.
  • Scalability and dealing with legacy equipment. IoT systems often upgrade old heavy machinery in industry, combining old and new technologies. This might be more challenging than expected, and eliminating all interfacing issues can, in some cases, be extremely costly.

— MISSING PAGE —

System Thinking and IoT Design Methodology

The need for system-based IoT design methods

The Internet of Things (IoT) is still in its formative phase, presenting a critical window of opportunity to design and implement IoT systems that are not only scalable and cost-effective but also energy-efficient and secure. These systems must be developed with an emphasis on delivering acceptable Quality of Service (QoS) while meeting essential requirements such as interoperability to enable seamless integration across different devices and platforms.

Achieving these ambitious design objectives requires a comprehensive, system-based approach that takes into account the diverse priorities of various stakeholders, including network operators, service providers, regulatory bodies, and end users. Each group brings its own set of requirements and constraints, and balancing these is essential to ensure the system's overall success.

To support this, there is a significant need for the development of robust formal methods, advanced tools, and systematic methodologies aimed at the design, operation, and ongoing maintenance of IoT systems, networks, and applications. Such tools and methods should be capable of guiding the process to align with stakeholder goals while minimizing potential unintended consequences. This approach will help create resilient and adaptive IoT ecosystems that not only meet current demands but are also prepared for future technological advancements and challenges.

System thinking, design thinking, and systems engineering methodologies provide powerful frameworks for developing formal tools essential for the design and deployment of complex IoT systems. These interdisciplinary approaches enable a comprehensive understanding of how interconnected components interact within a larger ecosystem, allowing for the creation of more resilient, efficient, and effective IoT solutions.

A practical example of leveraging these methodologies can be found in the work referenced in [2], where system dynamics tools were applied to design IoT systems for smart agriculture. In this study, researchers constructed causal loop diagrams to map and analyze the intricate interplay between multiple factors impacting rice farming productivity. By visually representing the causal relationships within the agricultural system, they identified key drivers and dependencies that influence outcomes. This insight allowed them to propose an IoT-based smart farming solution designed to optimize productivity through data-driven decision-making informed by these interdependencies.

The value of system dynamics and systems engineering tools extends beyond smart agriculture. These methods can be employed to simplify the design and analysis of complex IoT systems, networks, and applications across various sectors. They offer a structured way to break down the complexity of interconnected systems, ensuring that the resulting IoT solutions are not only cost-effective and reliable but also secure and energy-efficient. This approach ensures that the needs of diverse stakeholders—including developers, network operators, regulatory bodies, and end-users—are met effectively.

Moreover, system dynamics tools have proven beneficial in educational contexts, particularly for teaching IoT courses. By adopting a system-centric approach, educators can help students grasp the complexity of IoT systems and concepts more intuitively. This holistic teaching method supports learners in understanding how various components and processes interact within an IoT ecosystem, thereby fostering a deeper comprehension of the subject matter and preparing them for real-world IoT challenges, as demonstrated in the findings of [3].

While numerous IoT-based systems are being individually developed and tested by both practitioners and researchers, these efforts often fall short of addressing the practical reality that IoT systems must ultimately interact with each other and with human users. This interconnectedness underscores the need for a holistic, system-centric design methodology that can effectively manage the complexity and interdependencies of IoT systems. The design of these systems should move beyond isolated functionalities to consider the broader ecosystem in which they operate, including human interaction, cross-system communication, and scalability.

Several studies have ventured into leveraging methods and tools for the design of IoT systems. For example, research referenced in [4] utilized causal loop diagrams to study the intricate interactions between different systems and stakeholders, identifying key feedback loops that influence productivity. This approach provided actionable insights and recommendations on improving efficiency and performance within specific applications, such as smart agriculture. The use of causal loop diagrams in such studies highlights the importance of visualizing and understanding the relationships and feedback mechanisms within complex IoT ecosystems.

However, to advance the design and operational robustness of IoT systems, it is crucial to incorporate both qualitative and quantitative system dynamics tools. While causal loop diagrams are effective for modelling qualitative interactions and identifying feedback structures, quantitative methods are needed to simulate and analyze the dynamic behaviour of IoT systems under various conditions. By integrating both approaches, it becomes possible to model not just the structure but also the real-time, data-driven interactions among different IoT components.
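To make the qualitative/quantitative distinction concrete, a quantitative system-dynamics model can be sketched in a few lines. The example below is our own illustration, not taken from the cited studies: a single stock (deployed IoT nodes) driven by a reinforcing growth loop and a balancing capacity loop, i.e. the classic logistic model dN/dt = r·N·(1 − N/K), integrated with simple Euler steps. The growth rate and capacity values are assumed for illustration only.

```python
def simulate_adoption(steps: int, dt: float = 1.0) -> list[float]:
    """Quantitative system-dynamics sketch: logistic growth of deployed nodes.

    Reinforcing loop: more nodes drive more adoption (r * n).
    Balancing loop: remaining capacity slows growth (1 - n / K).
    """
    r, K = 0.3, 1000.0  # assumed growth rate and network capacity
    n = 10.0            # initial number of deployed nodes
    trajectory = [n]
    for _ in range(steps):
        n += dt * r * n * (1.0 - n / K)  # Euler integration step
        trajectory.append(n)
    return trajectory


levels = simulate_adoption(steps=50)
```

Where a causal loop diagram only shows that the two loops exist, running such a simulation shows how their balance evolves over time, which is exactly the added value of the quantitative tools discussed above.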

This highlights the urgent need to develop a comprehensive, multi-faceted framework that blends system thinking, design thinking, and systems engineering tools. Such an integrated approach would support the end-to-end design, operation, and maintenance of IoT systems, networks, and applications. The goal would be to create systems that align with the objectives of various stakeholders, including developers, service providers, network operators, regulators, and end-users while minimizing unintended consequences such as system inefficiencies, vulnerabilities, or user dissatisfaction.

System thinking enables a broad, interconnected view that helps identify and understand the relationships and dependencies across components. Design thinking ensures that solutions are user-centric, addressing real needs through iterative prototyping and feedback. Systems engineering brings discipline and structure, employing established methodologies and tools to optimize system performance and reliability.

By developing a framework that synergizes these approaches, IoT systems can be designed to be not only technically proficient but also adaptable, scalable, and aligned with stakeholder needs. This will foster sustainable, resilient IoT ecosystems capable of evolving alongside technological advancements and societal demands, paving the way for a future where IoT seamlessly integrates into everyday life, supporting everything from smart cities to connected healthcare with minimal risk and maximal benefit.

In conclusion, integrating system thinking, design thinking, and systems engineering methodologies into the development of IoT systems can significantly enhance their design and implementation. These approaches facilitate the creation of robust, scalable, and efficient IoT solutions tailored to the complex requirements of modern applications while addressing the needs of all stakeholders involved.

IoT linear thinking design methodology

IoT design thinking methodology

Design Thinking is a powerful, human-centered methodology that places a strong emphasis on understanding users and their experiences. This approach encourages designers to dig deeply into the needs, motivations, and challenges of their target audience to create solutions that resonate and provide real value. By focusing on empathy and user-centricity, Design Thinking transforms traditional problem-solving into an iterative, flexible, and collaborative process. It is composed of several distinct phases, each targeting a crucial aspect of design development and refinement:

Empathize: The foundation of Design Thinking starts with building a deep understanding of the users. This phase involves immersing oneself in the users' environment, observing behaviors, conducting interviews, and gathering insights to uncover latent needs and pain points. Empathy is not just about asking questions—it is about listening and connecting with users to see the world through their eyes.

Define: Armed with the knowledge gained from the empathize phase, designers move on to clearly articulating the problem. This step involves synthesizing observations and insights into a user-centric problem statement. The goal is to frame the challenge in a way that inspires creative solutions. Instead of defining the problem from the company's perspective (e.g., “We need to increase sales”), it is reformulated from the user’s standpoint (e.g., “How might we make it easier for customers to find what they need quickly?”).

Ideate: In this phase, creativity takes the spotlight. Designers brainstorm a wide array of potential solutions without judgment or constraint. The ideation stage encourages thinking outside the box, combining and expanding on ideas to generate a range of possibilities. Diverse teams collaborate to pool their perspectives and expertise, fostering a dynamic space where even unconventional concepts are welcomed. Techniques such as mind mapping, sketching, and rapid prototyping can be employed to spark inspiration.

Prototype: Once a range of ideas is developed, the next step is to create low-fidelity prototypes. These can be simple models or mock-ups that bring concepts to life, allowing designers and users to interact with them and visualize potential solutions. Prototyping is an experimental phase where the focus is on building to think and exploring how each idea can be translated into a tangible product or experience. The goal is to learn and iterate quickly by observing how users respond to the prototypes.

Test: The final phase involves sharing prototypes with real users to gather feedback and insights. Testing helps identify strengths, weaknesses, and areas for improvement. This phase is critical for refining the solution and ensuring it meets user needs effectively. The testing phase is iterative—feedback leads to modifications and adjustments, often cycling back to earlier stages, such as ideation or prototyping, to further enhance the solution. Through this continuous feedback loop, the design evolves to become more attuned to user expectations and more robust in its final form.

Iterate: Design Thinking is inherently non-linear, meaning that designers may return to previous phases multiple times as they learn and gather new insights. Iteration is a hallmark of this methodology, as it allows for continual refinement and optimization. This flexibility ensures that the final solution is not only functional but also aligned with users' true needs and expectations.

Refine:

Design Thinking’s structured yet adaptable framework encourages innovation and problem-solving across industries, from product development and digital services to organizational strategy and social impact initiatives. By emphasizing user empathy, collaboration, and iterative refinement, it empowers teams to create solutions that are meaningful, effective, and poised to make a positive difference.

IoT system thinking design methodology


System Modelling

Model-based Systems Engineering (MBSE) is a systems engineering approach that prioritizes the use of models throughout the system development lifecycle. Unlike traditional document-based methods, MBSE focuses on developing and using various models to depict different facets of a system, including its requirements, behaviour, structure, and interactions.

System Modeling Language

The Systems Modeling Language (SysML) [5] is a general-purpose modelling language for systems engineering applications. It supports the specification, analysis, design, verification and validation of a broad range of systems and systems-of-systems. SysML plays a crucial role in the MBSE methodology and provides nine diagram types to represent different aspects of a system. These diagram types help modellers visualize and communicate various perspectives of a system's structure, behaviour, and requirements.

Figure 6: Diagrams in SysML

Requirements

Product development, including IoT systems development, commences with the proper engineering of requirements and the definition of use cases. The customer establishes requirements, and here, the term “customer” encompasses a broad spectrum. In most instances, the customer is an individual or organization commissioning the IoT system. However, it could also be an internal customer, such as a different department within the same organization or entity. In the latter case, the customer and the developer are the same. Nonetheless, this scenario is the exception rather than the rule. The importance of conducting a thorough requirement engineering process remains constant across all cases.


In reality, requirements are often inadequately defined by the customer, and many parameters or functions remain unclear. In such cases, the requirements engineering stage assumes pivotal importance, as poorly defined system requirements can lead to numerous changes in subsequent design phases, resulting in an overall inefficient design process. In the worst case, this may culminate in significant resource wastage and necessitate restarting system development mid-project. Such occurrences are not only costly but also time-consuming. While it is impossible to completely avoid changes during the design and development process, proper change management procedures and resource allocation can significantly mitigate their impact on the overall design process.

In this section, we use an industrial IoT system as a case study to present examples of SysML diagrams. The context of this case study revolves around a wood and furniture production company with multiple factories across the country. Each factory specializes in various stages of the production chain, yet all factories are interconnected. The first factory processes raw wood and prepares building elements for the subsequent two. The second factory crafts furniture from the prepared wood elements, while the third factory assembles customized products by combining building elements and production leftovers. These factories utilize a range of machinery, some modern and automated, while others employ classical mechanical machines with limited automation.

The company seeks an IoT solution to ensure continuous production flow, minimize waste, and implement predictive maintenance measures to reduce downtime. In the following examples, we utilize this case study, presenting fragments as examples without covering the entire system through diagrams.

Let's consider a fragment of customer input regarding functional requirements for the system:

  • The system must provide real-time machine status (ok, err, waiting for service) for every machine requiring periodic maintenance (totalling 54 machines across three plants).
  • The system must measure critical machine parameters linked to the most frequent failures.
  • The system must enable authorized operators to manually change the machine status to “requires maintenance”.
  • [functional requirements continue]

Furthermore, the non-functional requirements include:

  • The developed system must utilize the existing wireless or wired internal network, and no new cables or wireless networks should be installed.
  • Installed devices and sensors must not obstruct or interfere with production units or production processes.
  • The cost per unit must not exceed 50 €.
  • [non-functional requirements continue]

Based on fragments of the requirement list like those above, we can construct a hierarchical requirement diagram (req) with additional optional parameters to precisely specify all individual requirements. Not all individual requirements need to be defined at the same level: if insufficient information is available at the current stage, requirements can be refined further in subsequent design iterations.
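The same hierarchy that a req diagram visualizes can be expressed as a simple data structure. The sketch below is an illustrative simplification, not a SysML tool: the `Requirement` class, its `derive` method, and the requirement IDs are our own, loosely mirroring the case-study fragment above.

```python
from dataclasses import dataclass, field


@dataclass
class Requirement:
    """Simplified mirror of a SysML «requirement»: id, text, sub-requirements."""
    req_id: str
    text: str
    children: list["Requirement"] = field(default_factory=list)

    def derive(self, req_id: str, text: str) -> "Requirement":
        """Attach a derived (child) requirement and return it."""
        child = Requirement(req_id, text)
        self.children.append(child)
        return child

    def count(self) -> int:
        """Total number of requirements in this subtree, including itself."""
        return 1 + sum(c.count() for c in self.children)


# Fragment of the case-study hierarchy, with hypothetical requirement IDs
root = Requirement("R1", "Provide real-time machine status for all 54 machines")
root.derive("R1.1", "Measure critical machine parameters linked to frequent failures")
root.derive("R1.2", "Let operators manually set status to 'requires maintenance'")
```

Keeping requirements in such a structure makes later refinement iterations and traceability checks straightforward to automate.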

Figure 7: Requirement diagram of the IoT system

Use case diagrams (uc) at the requirements engineering stage allow for the visualization of higher-level services and the identification of the main external actors interacting with the system services, or use cases. Use case diagrams can subsequently be decomposed into lower-level subsystems. Still, at the requirements design stage, they facilitate a common understanding of the IoT system under development among different stakeholders, including management, software engineers, hardware engineers, customers, and others.

The following use case diagrams describe the high-level context of the IoT system.

Figure 8: Use Case diagram of high-level context of the IoT system

System Architecture

System architecture defines the system's physical and logical structure and the interconnections between subsystems and components. For example, block definition diagrams (bdd) can define the system's hierarchical decomposition down to subsystem and even component level. The figure below shows a simple decomposition example of one IoT sensing node. It is important to understand that blocks are one of the main elements of SysML and, in general, can represent either a definition or an instance. This is the fundamental system design concept and the pattern used in system modelling. Blocks are named with the stereotype notation «block» and their name. A block may also contain several additional compartments, such as parts, references, values, constraints, operations, etc. In this example, operations and values are demonstrated. Relationships between blocks describe the nature of and requirements for the blocks' external connections. The most common relationships are associations, generalizations and dependencies. Each of these has a specific arrowhead that defines the particular relationship. In the following example, a composite association relationship (dark diamond arrowhead) is used to represent the structural decomposition of the subsystem.

Figure 9: Block Definition diagram of sensing node

With the internal block diagram (ibd), one can define component interactions and flows. Cross-domain components and flows can be used in one single diagram, especially in the conceptual design stage. The ibd is closely related to bdd and describes the usages of the blocks. The interconnections between parts of blocks can be very different by their nature; in one diagram, you can describe flows of energy, matter, and data, as well as services that are required or provided by connections. The following example shows how the data flows from the sensor to the UI node in a simplified way.

Figure 10: Internal Block diagram of data flow between nodes

System Behaviour

The behaviour of an IoT system defines how its services and functionality are implemented. The combination of hardware, software, and interconnections enables the system to offer the required services and functionality, and this combination defines the system behaviour. It consists of cyber-physical system activities, actions, state changes, and algorithms. For example, an activity diagram (act) can define the software algorithm of a sensing node. In the figure below, the main critical parameters of a production system are measured.

Figure 11: Activity diagram of sensing node software algorithm

Requirement Verification and Validation

Property prognosis and assurance are conducted during the complete development process. Expected system properties are forecasted early on using models. Property validation, which accompanies development, continuously examines the pursued solution through investigations with virtual or physical prototypes or a combination of both. The property validation includes verification and validation. Verification is the confirmation by objective proof that a specified requirement is fulfilled, while validation proves that the work result can be used by the user for the specified application [6].

SysML enables the tracking and connecting of requirements with different elements and procedures of the model. For example, the SysML requirement diagram captures requirements hierarchies and the derivation, satisfaction, verification, and refinement relationships. The relationships provide the capability to relate requirements to one another and to relate requirements to system design models and test cases.

Figure 12: Requirement validation and verification

IoT Architectures


Figure 13: Four-layered IoT architecture model

Components of IoT network architectures

IoT nodes

IoT network nodes are often connected directly with each other or an access point (which connects them to the internet) using low-power communication technologies (LPCT). These technologies are essential for enabling cost-effective connectivity among energy-constrained electronic devices. These technologies include wireless access technologies used at the physical layer to establish connectivity over physical mediums and communication protocols at the application layer to facilitate communication over IP networks.

Wireless Access Technologies

Wireless access technologies are categorized into long-range, short-range, licensed, and unlicensed technologies, with the choice of technology depending on the specific application. For example, LoRaWAN (Long Range Wide Area Network) is preferred for open-field farming due to its long-range capabilities. Examples of short-range wireless access technologies include ZigBee, Bluetooth, Bluetooth Low Energy (BLE), Z-Wave, IEEE 802.15.4, and Near Field Communication (NFC). In contrast, examples of long-range technologies include LoRaWAN, Sigfox, Weightless-P, INGENU RPMA, TELENSA, NB-IoT, and LTE Cat-M.

Unlicensed technologies often prove more cost-effective in the long term compared to licensed technologies offered by cellular network providers. However, IoT operators must build and maintain their infrastructure for unlicensed technologies, which can involve significant initial costs.

Low Power Wide Area Networks (LPWAN)

LPWAN technologies are pivotal for the broader adoption of IoT, as they maintain connectivity with battery-operated devices for up to ten years over distances spanning several kilometres. Key advantages of LPWAN technologies include:

  • Reliable wide-area coverage, enabling communication over long distances.
  • Ultra-low power communication, ideal for battery-powered devices.
  • Low-cost network connectivity, significantly reducing both capital expenditures (CAPEX) and operational expenditures (OPEX) for IoT operators.
  • Support for scalable IoT solutions, allowing for the connection of vast numbers of sensors.
  • Acceptable Quality of Service (QoS) for many IoT applications.

Well-established LPWAN communication protocols such as LoRaWAN, Sigfox, and NB-IoT are suitable for IoT systems designed to cover wide areas due to their low power consumption and reliable transmission over long distances. These protocols are optimized for transmitting text data; however, certain IoT applications, such as crop and livestock monitoring in agriculture, may require multimedia data transmission. In such cases, image and sound compression techniques must be applied, balancing the trade-off between data quality and bandwidth requirements.

Application Layer Communication Protocols

Application layer communication protocols ensure reliable interaction between IoT devices and data analytics platforms, addressing the limitations of traditional HTTP in constrained networks. The Constrained Application Protocol (CoAP) is a UDP-based request-response protocol standardized by the IETF (RFC 7252) for use with resource-constrained devices. CoAP enables lightweight and efficient communication, making it suitable for IoT.
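CoAP's lightness comes largely from its compact, fixed 4-byte binary header. The following Python sketch encodes a minimal confirmable GET request header to illustrate the wire format defined in RFC 7252; it is a teaching aid, not a production CoAP stack.

```python
import struct

# CoAP message types (RFC 7252): CON = confirmable, NON = non-confirmable
CON, NON, ACK, RST = 0, 1, 2, 3
GET = 0x01  # method code 0.01 (class 0, detail 1)

def encode_coap_header(msg_type: int, code: int, message_id: int,
                       token: bytes = b"") -> bytes:
    """Encode the fixed 4-byte CoAP header plus an optional token.

    Byte 0: version (2 bits) | type (2 bits) | token length (4 bits)
    Byte 1: code (3-bit class, 5-bit detail)
    Bytes 2-3: message ID, big-endian
    """
    version = 1
    if len(token) > 8:
        raise ValueError("CoAP token length is at most 8 bytes")
    byte0 = (version << 6) | (msg_type << 4) | len(token)
    return struct.pack("!BBH", byte0, code, message_id) + token

header = encode_coap_header(CON, GET, 0x1234)
print(header.hex())  # 40011234 -> version 1, CON, no token, GET, ID 0x1234
```

The entire request metadata fits in four bytes, compared with the dozens of bytes of a minimal HTTP request line and headers, which is why CoAP suits constrained links.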

The MQTT protocol follows a publish-subscribe model, with a message broker distributing packets between entities. It uses TCP as the transport layer but also has an MQTT-SN (MQTT for Sensor Networks) specification that operates over UDP. Other notable communication protocols include the Advanced Message Queuing Protocol (AMQP), Lightweight Machine-to-Machine (LWM2M), and UltraLight 2.0, all designed to support efficient and reliable communication within IoT networks.
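The publish-subscribe model described above decouples producers from consumers through topics. The following Python sketch is an in-memory stand-in for an MQTT broker, showing only the routing idea with MQTT-style topic wildcards ('+' matches one level, '#' matches the remainder); it implements none of the actual MQTT wire protocol, and the topic names are illustrative.

```python
from typing import Callable, List, Tuple

def topic_matches(pattern: str, topic: str) -> bool:
    """MQTT-style matching: '+' matches one level, '#' matches the rest."""
    p_levels, t_levels = pattern.split("/"), topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":
            return True
        if i >= len(t_levels):
            return False
        if p != "+" and p != t_levels[i]:
            return False
    return len(p_levels) == len(t_levels)

class MiniBroker:
    """In-memory illustration of a broker: routes messages to subscribers."""
    def __init__(self) -> None:
        self._subs: List[Tuple[str, Callable[[str, str], None]]] = []

    def subscribe(self, pattern: str,
                  callback: Callable[[str, str], None]) -> None:
        self._subs.append((pattern, callback))

    def publish(self, topic: str, payload: str) -> None:
        for pattern, callback in self._subs:
            if topic_matches(pattern, topic):
                callback(topic, payload)

broker = MiniBroker()
received = []
broker.subscribe("farm/+/temperature", lambda t, p: received.append((t, p)))
broker.publish("farm/barn1/temperature", "21.5")
broker.publish("farm/barn1/humidity", "60")  # not delivered: level mismatch
print(received)  # [('farm/barn1/temperature', '21.5')]
```

Because the publisher never names its subscribers, sensors and consumers can be added or removed independently, which is the main reason the pattern scales well in IoT deployments.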

IoT Gateways

The Internet of Things (IoT) Gateway serves as a critical connection point that facilitates the interaction between sensors, actuators, and various other IoT devices with the broader Internet. This gateway plays an essential role by enabling communication not only between connected devices and the cloud but also by acting as a bridge for IoT nodes that cannot communicate directly with each other. Such gateways ensure seamless data transmission, device management, and integration into larger IoT networks, supporting both upstream and downstream data flow.

The type of wireless access technology employed influences the specific implementation of an IoT gateway. Different use cases and deployment scenarios may require specific types of gateways to ensure efficient connectivity and data handling. Several widely adopted IoT gateway solutions utilize LoRaWAN, Sigfox, WiFi, and NB-IoT technologies. Each of these protocols brings unique advantages tailored to distinct use cases. For instance, LoRaWAN and Sigfox are well-suited for long-range, low-power communication, which is essential for connecting dispersed agricultural sensors in rural areas. WiFi provides robust, high-speed connectivity for scenarios requiring larger data payloads. At the same time, NB-IoT offers cellular-based connectivity with low power consumption, ideal for areas where cellular infrastructure is present.

Resource-constrained computing devices such as Raspberry Pi, Orange Pi, and NVIDIA Jetson Nano Developer Kit can be utilized to handle networking and computational tasks at the edge. These devices, known for their affordability and energy efficiency, are capable of running lightweight algorithms that manage data preprocessing, real-time decision-making, and local storage. By leveraging these compact yet powerful computing nodes, organizations can implement IoT solutions that are scalable, cost-effective, and adaptable to various operational demands. The use of such technologies not only enhances connectivity but also paves the way for smart IoT solutions.

Fog and Edge Computing Nodes

In some IoT deployments, computationally lightweight fog or edge computing nodes are deployed between the IoT nodes and cloud computing data centres. Fog or edge computing offloads some of the computation or processing workloads from cloud data centres to fog or edge computing nodes closer to the IoT device (data sources). The concepts of fog computing and “edge” computing are frequently mentioned together and often used interchangeably. While they share a common goal of decentralizing computational resources and bringing them closer to the source of data generation, there are nuanced distinctions between the two. Fog computing, in particular, can be viewed as a broader system that encompasses edge computing within its scope, extending its capabilities across a wider network infrastructure. Both approaches represent an architectural design paradigm that moves computation, communication, control, and data storage closer to the end-users and data sources, enhancing overall system efficiency and responsiveness.

The advantages of fog and edge computing

Traditional cloud computing models centralize data processing power in large data centres, which are often located at considerable distances from the IoT (Internet of Things) devices that generate data. While this centralized approach offers significant computational capacity and scalability, it introduces certain limitations, particularly for applications that require low latency and real-time data processing. The inherent latency in cloud computing arises from the physical distance between IoT devices and data centres, as well as potential network congestion. This latency can lead to delays that undermine the performance of critical applications, such as those in industrial automation, autonomous vehicles, healthcare monitoring systems, augmented reality, and smart city management. In these use cases, even slight delays can be detrimental, affecting decision-making processes and overall system effectiveness.

Cisco introduced fog computing to address these shortcomings by extending the cloud’s functionality closer to the data source, effectively forming an intermediary layer between IoT devices and centralized cloud data centres. This layer, often referred to as the “fog layer,” provides localized computing, storage, and networking capabilities, enabling data to be processed at or near the point of generation. By leveraging fog nodes, which can be routers, gateways, or other network devices with processing capabilities, fog computing supports data preprocessing, filtering, and real-time analysis before sending only relevant or summarized information to the cloud for further storage and processing. This approach reduces the amount of raw data transmitted over the network, thus minimizing bandwidth usage and enhancing overall system efficiency.
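The data preprocessing and filtering step can be sketched as follows: a fog node collects a window of raw sensor readings locally and forwards only a compact summary to the cloud. This Python sketch is a simplified illustration of the idea; the window size and statistics chosen are assumptions, not a prescribed design.

```python
from statistics import mean

def summarize_window(samples: list) -> dict:
    """Fog-node preprocessing: reduce a window of raw readings to a summary
    so only a few values, not every sample, travel over the core network."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": round(mean(samples), 2),
    }

# 60 raw temperature readings collected locally over one minute (synthetic)
raw = [20.0 + 0.1 * (i % 5) for i in range(60)]
summary = summarize_window(raw)
print(summary)  # one small record uplinked instead of 60 raw values
```

Sixty raw values collapse into a four-field record, which is exactly the bandwidth reduction the fog layer is meant to provide; richer deployments would add timestamps, quality flags, or change-detection before the uplink.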

Edge computing, on the other hand, refers more specifically to processing that takes place directly on the devices at the network’s edge or very close to the data source. Edge devices, such as sensors, cameras, and IoT-enabled machinery, are equipped with sufficient processing power to handle basic data analysis and decision-making without the need to communicate with distant servers. This direct processing enables faster response times and reduces the dependency on continuous connectivity to a central cloud infrastructure.

Both fog and edge computing offer significant advantages over traditional cloud models by addressing latency and bandwidth limitations. They allow data to be processed, stored, and acted upon closer to where it is generated, which is particularly beneficial in scenarios involving massive data production and real-time decision-making. For instance, in an industrial setting with automated machinery, real-time data analysis can help identify and mitigate potential equipment failures before they escalate into major issues. In the realm of autonomous vehicles, local processing facilitated by edge computing ensures rapid response to dynamic road conditions and safety hazards, enhancing vehicle control and passenger safety.

Moreover, healthcare monitoring systems that rely on continuous data streams from patient devices, such as heart rate monitors and wearable sensors, benefit from the reduced latency and improved reliability offered by fog and edge computing. These technologies ensure that critical health data is analyzed promptly, enabling timely alerts and interventions that could be life-saving.

Smart cities represent another domain where the combination of fog and edge computing can play a transformative role. The vast array of sensors and IoT devices deployed for traffic management, energy distribution, public safety, and environmental monitoring produce an overwhelming amount of data. Processing this data locally through edge and fog nodes helps manage resources efficiently, reduce congestion, and respond to incidents in real-time.

The proximity enabled by fog and edge computing not only reduces latency but also enhances the security and privacy of data. Since data can be processed locally without needing to traverse long distances to central servers, there is a reduced risk of interception and unauthorized access. This local processing can comply better with data protection regulations that require sensitive data to remain within certain geographical boundaries.

Overall, fog and edge computing contribute to a more robust, adaptable, and scalable system architecture. They facilitate real-time analytics and empower IoT applications across multiple industries by delivering the responsiveness and efficiency needed in today’s data-driven world. By complementing traditional cloud services and addressing their inherent limitations, these technologies are poised to play an increasingly pivotal role in the future of distributed computing.

Fog computing and AI

Fog computing offers a promising approach to harness artificial intelligence (AI) as a mediator between edge and cloud devices, providing an effective solution for improving overall system performance and resource utilization. Due to the inherent limitations in computational and communication capacities of the cloud, there is a growing need for transforming edge computing devices and connected devices into more intelligent entities. This transformation is critical to addressing the challenges posed by cloud computing's constrained resources and the ever-expanding needs of Internet of Things (IoT) networks.

By incorporating a fog computing layer between the IoT layer and the cloud computing layer, a more efficient and responsive system architecture can be established. This setup allows for the offloading of lightweight processing tasks, such as real-time data stream processing and the execution of simple AI algorithms, directly to the edge devices within the network (e.g., low-cost computing platforms like Raspberry Pi or Orange Pi). These edge devices, or fog nodes, which are co-located with IoT gateways, can perform local AI processing without needing to rely on the cloud for every task.

Moreover, more complex and resource-intensive computations, such as big data analytics, can be handled at the network edge, thus alleviating the burden on cloud infrastructure. This approach significantly enhances system efficiency by reducing the time spent transmitting data to and from the cloud. The reduced dependency on centralized cloud servers also lowers communication latency, enabling faster decision-making, which is especially valuable in time-sensitive applications.

The fog computing paradigm not only optimizes computational load distribution but also facilitates the scalability of IoT systems, enabling them to adapt to increasing demands without overwhelming centralized cloud systems. It further supports the mobility of devices and users, allowing seamless transitions between network zones while maintaining consistent performance. Additionally, by processing data closer to where it is generated, fog computing minimizes the volume of traffic transmitted across the internet backbone, easing congestion and reducing the strain on cloud data centers. This improvement is crucial in optimizing network performance and ensuring that both IoT devices and cloud systems operate efficiently, particularly as IoT networks continue to grow in size and complexity.

Internet core networks

Internet core networks play an indispensable role in supporting the vast infrastructure underpinning the Internet of Things (IoT). These core networks form the backbone that facilitates seamless data flow between billions of interconnected devices and cloud computing platforms. IoT systems are composed of an array of devices and sensors, commonly referred to as IoT nodes, that capture and generate significant volumes of data. This data, often complex and voluminous, needs to be transmitted to cloud platforms where it undergoes sophisticated processing and analysis to yield actionable insights. The journey of this data begins with its transmission from IoT nodes to the cloud, known as the uplink. Once processed, the cloud platforms send the analyzed data, control commands, or feedback back to the IoT nodes via the downlink. This bidirectional communication is critical for enabling various IoT applications such as smart cities, industrial automation, and advanced healthcare systems, where data-driven decision-making and real-time responsiveness are imperative.

Challenges in Handling IoT Traffic over Core Networks

While the role of internet core networks in IoT ecosystems is undeniably significant, the exponential increase in IoT traffic introduces several challenges that must be addressed to ensure reliable and secure operations.

1. Security Vulnerabilities

One of the primary challenges associated with transmitting large volumes of IoT traffic through traditional core networks is the heightened risk of security breaches. As IoT ecosystems continue to grow, they become increasingly attractive targets for cyber-attacks, including data interception, unauthorized access, and distributed denial-of-service (DDoS) attacks. These vulnerabilities pose significant threats to the integrity, confidentiality, and availability of data. Ensuring robust security measures, such as end-to-end encryption, secure authentication protocols, and continuous monitoring, is critical for protecting IoT data during transmission. Without adequate security frameworks, IoT systems could be compromised, leading to data leaks, operational disruptions, or unauthorized control of IoT nodes.

2. Maintaining Quality of Service (QoS)

The surge in data traffic generated by billions of IoT devices places immense pressure on core networks, potentially leading to congestion and latency issues. QoS is a crucial factor in maintaining the performance and reliability of IoT services. Any degradation in QoS can disrupt applications that require seamless communication and real-time responses, such as autonomous vehicle navigation, industrial process control, and remote medical monitoring. High latency or data loss in these scenarios could result in severe consequences, including safety hazards and operational failures. To combat these issues, implementing traffic management strategies, network optimization protocols, and prioritization mechanisms is essential for ensuring consistent QoS.
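One of the prioritization mechanisms mentioned above can be illustrated with a strict-priority scheduler: latency-critical IoT traffic is always served before bulk traffic. The Python sketch below is a conceptual model only (real core routers implement DiffServ-style queueing in hardware), and the traffic-class names are illustrative.

```python
import heapq
import itertools

class PriorityScheduler:
    """Strict-priority packet scheduler: lower number = served first.
    A monotonic counter preserves FIFO order within one traffic class."""
    def __init__(self) -> None:
        self._queue = []
        self._counter = itertools.count()

    def enqueue(self, priority: int, packet: str) -> None:
        heapq.heappush(self._queue, (priority, next(self._counter), packet))

    def dequeue(self) -> str:
        return heapq.heappop(self._queue)[2]

sched = PriorityScheduler()
sched.enqueue(2, "firmware-chunk")       # bulk, delay-tolerant
sched.enqueue(0, "vehicle-brake-alert")  # latency-critical
sched.enqueue(1, "telemetry-sample")
order = [sched.dequeue() for _ in range(3)]
print(order)  # ['vehicle-brake-alert', 'telemetry-sample', 'firmware-chunk']
```

Strict priority guarantees the alert is never queued behind bulk data, at the cost of possible starvation of low-priority classes; production schedulers therefore often combine it with weighted fair queueing.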

3. Energy Consumption

The continuous transmission and processing of IoT data through core networks (as they are transported from IoT devices to cloud platforms) demands substantial energy resources. This persistent energy requirement not only results in higher operational costs but also contributes to environmental concerns due to increased carbon emissions. As the scale of IoT networks expands, sustainable energy management becomes an urgent necessity. Strategies to improve energy efficiency include optimizing data routing, using energy-efficient network equipment, and leveraging edge computing to reduce the load on core networks by processing data closer to its source. Adopting these strategies can help balance energy consumption and support the sustainability of IoT infrastructures.

4. Network Management Complexity

Effectively managing the ever-increasing data traffic from IoT nodes presents significant challenges for network administrators. Coordinating between a multitude of data flows, ensuring optimal routing paths, and balancing the load across various network nodes require advanced and adaptable network management techniques. Traditional network management approaches often struggle to keep up with the scale and dynamic nature of IoT traffic. Innovations such as software-defined networking (SDN) and network function virtualization (NFV) offer promising solutions. SDN provides enhanced flexibility by decoupling network control from the hardware, allowing centralized management and automation of traffic flows. NFV, on the other hand, enables the deployment of network functions as software, facilitating rapid scaling and efficient resource allocation. Together, these technologies enhance network agility and streamline the administration of complex IoT environments.

The internet core networks are fundamental to the operation and success of IoT ecosystems, enabling the transmission and processing of massive volumes of data. However, the rapid expansion of IoT introduces a series of challenges, including security vulnerabilities, QoS maintenance, energy consumption, and network management complexities. Addressing these challenges is vital for fostering a sustainable, secure, and efficient IoT landscape. By implementing comprehensive security measures, prioritizing QoS, optimizing energy use, and adopting advanced network management technologies like SDN and NFV, the infrastructure supporting IoT can continue to evolve and thrive in an increasingly connected world.

Cloud computing data centres

Since IoT devices possess limited computational capabilities and memory, the vast amounts of data collected by IoT devices are sent to cloud data centres for advanced analytics and storage. IoT cloud computing represents the convergence of cloud technology with the rapidly expanding field of the Internet of Things (IoT). Cloud computing, recognized as a highly dynamic and transformative paradigm, has revolutionized how individuals and organizations manage, store, and utilize IT resources. It offers significant benefits in terms of cost-effectiveness, scalability, and operational flexibility, making it indispensable to contemporary IT strategies. The integration of cloud computing and IoT enhances these advantages by enabling on-demand, remote access to diverse computing resources—such as software, infrastructure, and platform services—delivered seamlessly over the internet. This convergence provides IoT devices with the ability to connect to cloud-based environments from virtually anywhere and at any time, tailored to their specific data processing and storage needs. This accessibility allows organizations to leverage cloud capabilities without facing the complexities and financial burdens associated with the setup and maintenance of dedicated infrastructure, significantly reducing the time and cost involved in scaling IT services.

One of the fundamental advantages of IoT cloud computing is its potential to reduce the costs associated with building and maintaining physical infrastructure. In the past, organizations had to make substantial capital investments to set up and manage on-premises data centers, which required continuous maintenance, security updates, and hardware upgrades. These costs represented a significant barrier, especially for smaller enterprises with limited financial resources. Cloud computing shifts this responsibility to cloud service providers, who take on the procurement, installation, and maintenance of the necessary hardware and software. This approach frees up financial and human resources, allowing organizations to focus on their core business activities rather than IT infrastructure management. For small to medium-sized enterprises (SMEs), this shift can be particularly transformative, enabling them to access state-of-the-art computing power and data management capabilities without the prohibitive cost of running their own data centers.

In addition to cost savings, IoT cloud computing offers enhanced security, storage, and management efficiencies. Leading cloud providers implement comprehensive security measures to safeguard data and applications from unauthorized access, cyber threats, and breaches. This level of security would require significant investment and expertise if handled internally by an organization. By outsourcing security management to cloud providers, users benefit from sophisticated and continually updated defenses without needing to maintain in-house security teams. Moreover, cloud platforms offer flexible and scalable storage solutions that can be adjusted to meet fluctuating data volumes, ensuring that users only pay for the storage they actually need. These managed services also handle essential updates and maintenance automatically, reducing the risk of software vulnerabilities and downtime while ensuring systems remain up-to-date.

For application developers working in the IoT ecosystem, cloud computing provides a cutting-edge development environment replete with advanced tools, frameworks, and services. This environment allows developers to create, test, and deploy IoT applications with greater efficiency and speed than traditional development methods would allow. With cloud computing, developers can bypass concerns related to managing infrastructure, which enables them to concentrate on the functionality and innovation of their applications. The cloud’s collaborative capabilities also facilitate teamwork, as developers can work simultaneously on projects in real-time from different locations. This collaboration enhances productivity and accelerates project timelines, leading to faster rollouts of new IoT applications and services.

The proliferation of IoT devices has underscored the need for integrated cloud solutions tailored specifically to IoT applications. In response, a variety of IoT cloud platforms have been developed, each offering a unique array of services to support IoT ecosystems. These platforms provide essential capabilities such as data storage, real-time data processing, device management, analytics, and application hosting. Public cloud services like Microsoft Azure IoT Suite, Amazon AWS IoT, and DeviceHive are designed to meet the demands of IoT users by providing robust, scalable solutions that support a wide range of use cases—from simple consumer applications to intricate industrial IoT systems. These platforms allow businesses and developers to deploy IoT solutions without needing extensive, costly in-house infrastructure.

The use of cloud-based IoT platforms extends well beyond mere convenience. By streamlining the process of integrating IoT devices into cloud environments, these platforms make it possible for businesses to implement IoT solutions quickly and affordably. This capability encourages innovation and supports operational efficiency by allowing organizations to analyze and act upon real-time IoT data. Leveraging cloud-based solutions helps businesses optimize workflows, improve decision-making, and deliver better services to their customers. Additionally, ongoing advancements in cloud technology and specialized IoT services highlight the critical role cloud computing plays in supporting the continued growth and success of IoT implementations. The combination of these technologies sets the stage for an interconnected, data-driven future where cloud computing and IoT work hand-in-hand to drive progress and enhance global connectivity.

IoT Software applications

The value of IoT lies not just in the devices themselves but in the software applications that leverage the data generated by these devices to provide actionable insights and drive automation. These software applications are at the heart of IoT solutions and can be designed for a wide range of purposes. Let's explore the various aspects of IoT applications in detail:

1. Monitoring

Monitoring is one of the most common IoT application categories. In this use case, IoT devices (such as sensors, cameras, or smart meters) continuously collect data about the environment, processes, or systems they are designed to observe. The role of the software application is to:

  • Collect and aggregate data: The software interfaces with the devices to retrieve real-time data, such as temperature, humidity, energy consumption, or security status.

  • Analyze the data: Through visualization tools and dashboards, users can view trends and patterns in real-time, making it easy to monitor critical metrics.
  • Alert and notify: When the system detects anomalies or values that exceed predefined thresholds, the software can send alerts or notifications to stakeholders, such as technicians or facility managers.

For example, in industrial applications, IoT sensors might monitor equipment for signs of wear and tear, allowing a company to detect potential failures before they cause disruptions. In healthcare, IoT devices can continuously monitor patient vitals and send updates to doctors or hospitals for immediate action.
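The alert-and-notify behaviour described above boils down to comparing each incoming metric against configured limits. The Python sketch below shows one minimal way to express that check; the metric names and threshold values are illustrative assumptions.

```python
from typing import Callable, Dict, List, Tuple

def check_thresholds(readings: Dict[str, float],
                     limits: Dict[str, Tuple[float, float]],
                     notify: Callable[[str], None]) -> None:
    """Compare each metric against its (low, high) limits and raise alerts."""
    for metric, value in readings.items():
        low, high = limits[metric]
        if not (low <= value <= high):
            notify(f"{metric} out of range: {value} (allowed {low}..{high})")

alerts: List[str] = []
limits = {"temperature_c": (18.0, 27.0), "humidity_pct": (30.0, 60.0)}
check_thresholds({"temperature_c": 31.2, "humidity_pct": 45.0},
                 limits, alerts.append)
print(alerts)  # ['temperature_c out of range: 31.2 (allowed 18.0..27.0)']
```

In a deployed system, the `notify` callback would send an email, SMS, or dashboard event to the responsible technician rather than append to a list.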

2. Control

Control-oriented IoT applications allow users to interact with and manage devices or systems remotely. This can include turning devices on or off, adjusting settings, or configuring them to operate in specific modes. Control applications offer the following capabilities:

  • Remote Device Management: Users can remotely access devices (such as smart thermostats, lights, or machinery) to change their configurations, reset them, or check their operational status.
  • Automation and Scheduling: IoT devices can be controlled based on automated rules or schedules. For example, an IoT-enabled irrigation system can be set to water crops at specific times of the day based on weather conditions or soil moisture levels.
  • Access Control: In security systems, IoT devices such as smart locks or cameras can be controlled to allow or deny access to a specific location. Users can lock/unlock doors remotely or view live feeds to ensure security.

For example, in a smart home, IoT applications might control lighting, heating, and even security systems from a central interface like a smartphone app.

3. Automation

Automation is one of the most transformative aspects of IoT applications. By automating processes based on real-time data, IoT can eliminate the need for manual intervention and optimize systems for greater efficiency. Key functions of IoT automation applications include:

  • Smart Decision-Making: Automation is driven by data insights. For instance, an IoT-enabled HVAC system can automatically adjust the temperature based on the number of people in a room or the outside weather.
  • Process Optimization: In manufacturing, IoT sensors may monitor machine performance and trigger automated actions, such as switching production lines or adjusting settings for energy savings. This ensures optimal performance without requiring human oversight.
  • Predictive Automation: Leveraging advanced analytics and machine learning, IoT systems can predict future trends or events, triggering automatic actions. For example, a smart fridge might reorder items when it detects that supplies are running low or based on usage patterns.

In agriculture, IoT-enabled irrigation systems can automatically adjust water flow based on soil moisture readings, ensuring that crops receive optimal care without human input.
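The irrigation example above can be reduced to a small rule with hysteresis, so the valve does not chatter around a single threshold. The moisture thresholds and sensor readings below are invented for illustration.

```python
# Minimal sketch of rule-based irrigation automation with hysteresis.
MOISTURE_LOW = 30   # % below which watering starts (illustrative value)
MOISTURE_HIGH = 60  # % above which watering stops (illustrative value)

def irrigation_decision(moisture_pct, valve_open):
    """Return the new valve state for one control cycle."""
    if moisture_pct < MOISTURE_LOW:
        return True            # soil too dry: open the valve
    if moisture_pct > MOISTURE_HIGH:
        return False           # soil wet enough: close the valve
    return valve_open          # inside the band: keep the current state

# Replay a day's worth of hypothetical soil-moisture readings.
readings = [55, 42, 28, 25, 33, 48, 63, 70]
valve = False
log = []
for m in readings:
    valve = irrigation_decision(m, valve)
    log.append(valve)
print(log)  # the valve opens at 28 and stays open until a reading exceeds 60
```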

4. Data-Driven Insights

One of the most significant advantages of IoT applications is their ability to extract valuable insights from the vast amounts of data generated by devices. These insights can inform business decisions, optimize operations, and improve outcomes across a range of sectors. Key capabilities of data-driven IoT applications include:

  • Data Analytics: IoT applications often incorporate advanced analytics tools that process and analyze data to generate insights. This can include historical trend analysis, predictive analytics, and anomaly detection.
  • Reporting: The data collected can be presented in comprehensive reports, giving users a detailed view of system performance or activity. This is especially useful for management or decision-makers who rely on actionable insights to make informed choices.
  • Machine Learning and AI: Many IoT systems incorporate machine learning algorithms that allow the system to learn from the data over time, improving its ability to predict future events or optimize performance automatically.

In the automotive industry, IoT data can be used to track vehicle performance, predict maintenance needs, and enhance fuel efficiency. Similarly, in the energy sector, IoT applications help to analyze consumption patterns and make adjustments that improve energy efficiency and reduce costs.
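The anomaly-detection capability mentioned above can be sketched with a rolling mean and standard deviation: a reading is flagged when it deviates from the recent window by more than k standard deviations. The window size, threshold, and temperature feed are arbitrary illustrative choices.

```python
# Sketch of simple statistical anomaly detection over a sensor stream.
import statistics
from collections import deque

def detect_anomalies(stream, window=5, k=3.0):
    recent = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(stream):
        if len(recent) == window:
            mu = statistics.mean(recent)
            sigma = statistics.pstdev(recent)
            if sigma > 0 and abs(x - mu) > k * sigma:
                anomalies.append((i, x))
        recent.append(x)
    return anomalies

# Hypothetical temperature feed with one faulty spike.
feed = [21.0, 21.2, 20.9, 21.1, 21.0, 35.0, 21.1, 21.0]
print(detect_anomalies(feed))  # → [(5, 35.0)]
```

Production systems typically replace this with more robust detectors (e.g. median-based or learned models), but the sliding-window structure is the same.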

5. Security and Privacy

IoT applications also play a critical role in securing IoT devices and the data they generate. As the number of connected devices increases, ensuring the privacy and security of sensitive information is essential. IoT security applications focus on:

  • Device Authentication: Ensuring that devices accessing the network are authorized and cannot be tampered with.
  • Data Encryption: Securing data both in transit and at rest to prevent unauthorized access or breaches.
  • Real-time Monitoring: Constantly monitoring the health and security of IoT devices and systems to detect and respond to potential threats.

For example, in a smart home, an IoT security system could monitor unauthorized access attempts and alert homeowners while also enabling remote surveillance.

6. Integration with Other Systems

Many IoT applications are not standalone but integrate with other systems or platforms to provide enhanced functionality. These integrations can span various sectors, including enterprise resource planning (ERP), customer relationship management (CRM), and cloud platforms. Some common integrations include:

  • ERP Systems: In manufacturing, IoT data can feed into an ERP system, automatically updating inventory levels, tracking production progress, and informing supply chain decisions.
  • Cloud Computing: Many IoT applications rely on cloud infrastructure to store and analyze large datasets, providing scalability and reducing the need for on-premise hardware.
  • Third-Party Services: IoT applications often integrate with third-party platforms, enabling additional capabilities such as weather forecasting, supply chain logistics, or data analytics.

For example, in smart cities, IoT applications integrate with traffic management systems, environmental sensors, and city services, enabling more efficient and responsive urban management.

The true value of IoT applications lies in their ability to convert raw data from connected devices into actionable insights, drive automation, and improve decision-making. Whether for monitoring, control, or automation, IoT applications are revolutionizing industries by improving efficiency, reducing costs, and enhancing user experiences. As IoT technology continues to evolve, the potential for even more advanced, intelligent, and integrated applications will only grow, further embedding IoT into our daily lives and business operations.

IoT network security systems

As the number of IoT devices continues to grow, the need for robust security measures becomes ever more critical. Protecting the sensitive data collected by these devices from unauthorized access, tampering, or misuse is paramount to ensuring the integrity and privacy of users and organizations. Network security systems should therefore be considered when designing IoT networks and systems, so that they are secure by design.

Security in IoT Networks: Security within IoT networks is a multifaceted concern, as IoT devices often operate in decentralized and dynamic environments. These devices communicate through wireless networks, making them vulnerable to various types of cyberattacks. Given that IoT systems are often connected to the cloud or other external networks, vulnerabilities in one device can expose the entire network to risks. Hence, strong security protocols are essential for the protection of data in these networks.

Key Security Measures

  • Encryption: Encryption is one of the most fundamental techniques used to protect data transmitted across IoT networks. It ensures that even if data is intercepted by malicious actors, it remains unreadable without the appropriate decryption key. Both data at rest (stored data) and data in transit (data being transmitted) can be encrypted. IoT devices often use the Advanced Encryption Standard (AES) together with Transport Layer Security (TLS) or its deprecated predecessor, Secure Sockets Layer (SSL), to safeguard the communication between devices and the cloud or other endpoints. This makes it difficult for attackers to gain meaningful access to sensitive data.
  • Authentication: Authentication verifies the identity of both the devices and the users interacting with the IoT network. With IoT systems often comprising many different types of devices, each with varying levels of capabilities, ensuring that only legitimate devices can join the network is critical. Authentication mechanisms can include device certificates, biometrics, and multi-factor authentication (MFA) for users. Device authentication ensures that only authorized devices are able to communicate within the network, reducing the risk of a rogue or compromised device gaining access to sensitive information.
  • Authorization: Once authenticated, the authorization process dictates what actions a device or user is permitted to perform within the network. Authorization systems define roles and permissions, ensuring that devices only have access to data and resources necessary for their function. For example, a smart thermostat may be authorized to adjust temperature settings but not to access user data stored in the cloud. This limits the potential impact of a compromised device by preventing it from performing unauthorized actions that could lead to data breaches or system failures.
  • Data Integrity: Ensuring data integrity involves preventing unauthorized alteration of data. Integrity measures like hash functions or digital signatures are used to verify that the data sent from one device to another has not been tampered with. This is essential in IoT networks where real-time data is constantly being exchanged, as any modification in this data can result in inaccurate readings, malicious activities, or faulty system behavior.
  • Intrusion Detection and Prevention Systems (IDPS): IoT networks are prone to cyberattacks such as denial-of-service (DoS) attacks, malware, or unauthorized access attempts. Intrusion detection systems (IDS) and intrusion prevention systems (IPS) play a critical role in identifying and blocking suspicious activities in real time. These systems monitor the network for unusual patterns of behavior or unauthorized actions and respond promptly to mitigate potential threats before they can cause harm.
  • Firmware and Software Updates: Keeping devices' firmware and software up to date is an essential aspect of IoT security. Security vulnerabilities can be discovered in IoT devices over time, and if these devices are not regularly updated with patches or new software versions, they can become easy targets for attackers. Many IoT devices now include features that allow for remote updates, ensuring that the system remains protected against newly discovered threats.
  • Secure Network Architecture: The design of the IoT network itself plays a crucial role in security. Segmentation of the network can limit the scope of damage in case a device is compromised. By creating isolated segments, IoT networks can minimize the impact of a breach, preventing attackers from moving laterally across the entire system. In addition, the use of virtual private networks (VPNs) and private communication channels can enhance security further, protecting communication between devices and their control centers.
  • Physical Security: In addition to cyber threats, physical security is also an important aspect of IoT device protection. Devices located in publicly accessible places or vulnerable environments can be tampered with or stolen, leading to a loss of control or misuse of data. Protecting IoT devices physically, through tamper-resistant hardware, secure storage solutions, and proper disposal methods, ensures that attackers cannot easily gain unauthorized access by physically compromising a device.

Challenges in IoT Security: While these security measures are critical, implementing them in IoT networks presents several challenges. Many IoT devices have limited computational power and storage, which can make implementing complex encryption or authentication mechanisms difficult. Additionally, the sheer volume of IoT devices increases the attack surface, making it more challenging to monitor and respond to every threat. Moreover, the rapid pace of IoT innovation and the frequent introduction of new devices and technologies can lead to inconsistent security practices across the industry, leaving gaps that attackers can exploit.
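The data-integrity measure described earlier (hash functions and digital signatures) can be sketched with an HMAC: the sender signs each message with a shared key, and the receiver recomputes the tag to detect tampering. The key and payload below are illustrative only; in practice the key would be provisioned securely during device onboarding.

```python
# Sketch of HMAC-based message integrity for IoT telemetry.
import hmac, hashlib, json

SHARED_KEY = b"example-device-key"  # hypothetical pre-shared key

def sign(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def verify(message: dict) -> bool:
    expected = hmac.new(SHARED_KEY, message["body"].encode(),
                        hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(expected, message["tag"])

msg = sign({"sensor": "temp-01", "value": 21.5})
assert verify(msg)                       # untampered message passes

tampered = dict(msg, body=msg["body"].replace("21.5", "99.9"))
assert not verify(tampered)              # any alteration is detected
```

Note that an HMAC provides integrity and origin authentication but not confidentiality; it is normally combined with encryption such as TLS.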

Securing IoT networks requires a comprehensive, multi-layered approach that addresses various aspects of security. By implementing measures like encryption, authentication, authorization, and regular software updates, organizations can significantly reduce the risk of data breaches and unauthorized access to IoT systems. While IoT security presents significant challenges, these challenges can be mitigated with careful planning, robust protocols, and a proactive security strategy.

IoT networks

An IoT (Internet of Things) network is composed of interconnected IoT nodes, which can include sensors, actuators, and fog nodes. Each IoT node typically comprises several key components: a power supply system, a processing unit (such as microprocessors, microcontrollers, or specialized hardware like digital signal processors), communication units (including radio, Ethernet, or optical interfaces), and additional electronic elements (e.g., sensors, actuators, and cooling mechanisms). These components work in unison to enable the node to collect, process, and transmit data effectively, supporting various IoT applications.

The architecture of a typical IoT network is structured into four main layers: the perception layer, the fog layer, the Internet core network (transport layer), and the cloud data centre (cite fig.). This multi-layered structure allows for scalability, efficiency, and optimized data processing.

Fig. here

  • Perception Layer (IoT network layer): This foundational layer consists of IoT devices, such as sensors and actuators, that are responsible for collecting data from their surrounding environment. These devices can range from simple temperature and humidity sensors in smart homes to complex monitoring systems in industrial settings. Depending on their configuration, these devices may perform preliminary data processing to filter or compress data before transmission. For example, motion sensors in a security system might only transmit data when movement is detected, thereby conserving energy and bandwidth. This layer consists primarily of a network of IoT nodes connected directly to each other or to an access point via low-power wireless communication technologies, depending on the network topology chosen for the given IoT deployment scenario.
  • Fog computing Layer: The fog computing layer acts as an intermediary between the IoT devices at the IoT network layer and the cloud. It provides localized, lightweight processing capabilities that help reduce latency and bandwidth usage. By processing data closer to the source, the fog layer can handle tasks such as real-time data analysis, decision-making, and local storage. This is particularly useful in applications where immediate responses are crucial, such as in autonomous vehicles, healthcare monitoring, and smart manufacturing systems. The use of fog computing enhances the network’s overall performance and reduces the burden on centralized cloud resources.
  • Transport Layer (Internet Core Network): This layer is responsible for the transmission of data between the perception and fog layers and the cloud data centre. It serves as the backbone of IoT communication, leveraging a variety of networking technologies such as wireless networks (e.g., Wi-Fi, LTE, 5G), wired connections (e.g., Ethernet), and even optical networks for high-speed data transfer. The transport layer ensures reliable and secure data flow, using protocols that safeguard data integrity and reduce transmission errors. This layer's efficiency directly impacts the overall responsiveness and performance of the IoT network.
  • Cloud Data Center layer: The cloud data centre layer represents the centralized processing hub where advanced data analytics, complex computation, and long-term data storage occur. It can handle vast amounts of data generated by IoT devices across the network. The cloud layer employs powerful data analytics tools, machine learning algorithms, and big data technologies to extract insights and generate actionable outcomes. For instance, data collected from smart grids can be analyzed to optimize energy distribution, while data from medical sensors can support remote patient monitoring and predictive healthcare interventions. The processed information is then sent back to users or devices to facilitate informed decision-making or automated physical responses (control of physical systems).

In an IoT network, the seamless integration of these layers enables efficient data collection, processing, and transmission. This layered approach supports diverse applications, ranging from smart homes equipped with automated climate control and security systems to large-scale industrial automation, smart cities, and agricultural monitoring. The robust structure of IoT networks allows for scalable solutions that can adapt to the needs of various industries, enhancing productivity, efficiency, and quality of life.
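The data flow through these layers can be sketched end to end: perception nodes emit raw readings, a fog node filters and aggregates them locally, and only a compact summary crosses the transport layer into the cloud. All function names, thresholds, and readings below are invented for the sketch.

```python
# Illustrative layered data flow: perception -> fog -> transport -> cloud.

def perception_layer():
    # Raw sensor readings (hypothetical temperatures, one obvious glitch).
    return [21.3, 21.4, -99.0, 21.5, 21.6]

def fog_layer(readings, lo=-40.0, hi=85.0):
    # Local pre-processing: drop out-of-range values and compute one
    # summary so only a small payload crosses the transport layer.
    valid = [r for r in readings if lo <= r <= hi]
    return {"count": len(valid), "avg": sum(valid) / len(valid)}

cloud_store = []  # stand-in for the cloud data-centre layer

def transport(summary):
    # In a real deployment this would be TLS over Wi-Fi, 5G, or Ethernet.
    cloud_store.append(summary)

transport(fog_layer(perception_layer()))
print(cloud_store)  # one compact summary instead of five raw samples
```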

— MISSING PAGE —

IoT Network Design Consideration and Challenges

Hardware limitations

Range

Bandwidth

Energy consumption and battery life

Quality of Service (QoS)

Intermittent connectivity, collisions, and interference

Low need for frequent maintenance (low breakdown rate)

Security

Flexibility

Orchestration and programmability (e.g., SDN)

Cost

Interoperability

User interface requirements

Standardisation

IoT Communication and Networking Technologies

The IoT Network Access Technologies

Short-range technologies

Radio Frequency Identification (RFID)

Near Field Communication (NFC)

Bluetooth Low Energy (BLE)

Long-range technologies

Low Power Wide Area Networks (LPWAN)

  1. LoRa
  2. Sigfox
  3. Haystack

Cellular IoT

  1. NB-IoT
  2. LTE-M

Wi-Fi

Ethernet

The IoT Networking Technologies

IPv6

IPv6 Low Power Wireless Personal Area Network (6LoWPAN)

IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL)

ZigBee

The IoT High Level Communication Technologies

Message Queue Telemetry Transport (MQTT)

Advanced Message Queuing Protocol (AMQP)

Extensible Messaging and Presence Protocol (XMPP)

Constrained Application Protocol (CoAP)

Lightweight Machine-to-Machine (LwM2M)

UltraLight 2.0

IoT Network Design Methodologies

IoT Network Design Methodologies Designing a network for the Internet of Things (IoT) requires a strategic approach that integrates scalability, security, efficiency, and interoperability. IoT network design methodologies revolve around creating robust, flexible, and efficient networks capable of supporting diverse devices, applications, and services. These methodologies emphasize handling large volumes of data, ensuring real-time communication, and maintaining high levels of security and reliability.

This section explores the principles, methodologies, challenges, and best practices for designing IoT networks.

Key Principles of IoT Network Design

  • Scalability: IoT networks must accommodate the addition of millions of devices without degrading performance. This includes planning for future expansion in terms of devices, data traffic, and services.
  • Interoperability: IoT systems often comprise devices from various vendors using different communication protocols. Designing for interoperability ensures seamless communication and data exchange.
  • Low Latency: Real-time applications like autonomous vehicles or healthcare monitoring require minimal latency to ensure timely actions and responses.
  • Energy Efficiency: Many IoT devices operate on battery power. Networks must minimize energy consumption to prolong device lifespans.
  • Security and Privacy: IoT networks must protect sensitive data from unauthorized access, breaches, and malicious attacks through encryption, secure protocols, and access controls.
  • Reliability: Networks should offer high uptime and ensure consistent performance, even during peak usage or failures.
  • Cost-Effectiveness: The design should balance performance with budget constraints, ensuring efficient resource utilization.

IoT Network Design Methodologies

1. Hierarchical Design

A hierarchical approach organizes the IoT network into distinct layers, typically categorized as:

  • Perception Layer (Device Layer): Includes sensors, actuators, and devices that collect data.
  • Network Layer: Responsible for data transmission between devices and processing units via communication protocols.
  • Application Layer: Handles data processing, storage, and service delivery to end users.

Advantages: simplifies management, optimizes resource allocation at each layer, and enhances scalability and modularity.

2. Edge-Centric Design

Focuses on processing data closer to where it is generated, at the network edge. Edge devices like gateways and edge servers handle computation, storage, and analysis.

Advantages: reduces latency for time-sensitive applications, decreases data transmission costs by minimizing reliance on cloud services, and enhances privacy by processing sensitive data locally.

3. Mesh Networking

Employs a decentralized design where devices connect directly to each other in a peer-to-peer manner. Mesh networks are often used in smart homes, industrial IoT, and smart cities.

Advantages: high reliability due to redundant paths, simple network expansion, and fewer single points of failure.

4. Centralized Design

Involves a hub-and-spoke model where devices connect to a central controller, gateway, or server for data processing and management.

Advantages: simplifies monitoring and control, suits small-scale IoT deployments, and centralizes security measures.

5. Cloud-Based Design

Data from IoT devices is transmitted to a centralized cloud platform for processing, storage, and management. Cloud providers also offer analytics, machine learning, and application integration services.

Advantages: virtually unlimited scalability and computing power, simplified data analysis and application deployment, and built-in security and redundancy.

6. Hybrid Design

Combines edge and cloud computing to leverage the benefits of both. Critical, low-latency tasks are processed at the edge, while large-scale analytics and storage are handled in the cloud.

Advantages: balances latency and scalability, optimizes resource utilization, and enhances flexibility for diverse applications.

Steps in IoT Network Design

  • Requirement Analysis: Identify the purpose of the IoT system, including device types, communication needs, expected data volumes, and performance requirements.
  • Topology Selection: Choose the most suitable topology (e.g., star, mesh, tree, hybrid) based on the use case, device distribution, and scalability needs.
  • Protocol and Communication Technology: Select protocols and technologies for connectivity: short-range (Bluetooth, Zigbee, Wi-Fi); long-range (LoRaWAN, NB-IoT, LTE-M); wired (Ethernet, powerline communication); or hybrid combinations of short-range and long-range technologies.
  • Bandwidth and Capacity Planning: Ensure the network can handle peak data loads without performance degradation.
  • Security Architecture: Integrate encryption, authentication, and access control mechanisms, and implement intrusion detection and prevention systems (IDPS).
  • Energy Management: Design for energy efficiency by using low-power communication protocols and scheduling device wake-up times.
  • Testing and Optimization: Conduct rigorous testing for performance, reliability, and security under real-world conditions, and optimize the design based on feedback and test results.

Challenges in IoT Network Design

  • Device Diversity: Supporting multiple device types, protocols, and standards is complex and may lead to compatibility issues.
  • Scalability: Managing millions of devices and their data streams requires robust and scalable solutions.
  • Security Threats: IoT networks are vulnerable to attacks such as DDoS, data breaches, and device hijacking.
  • Latency Sensitivity: Real-time applications demand ultra-low latency, which can be challenging in distributed environments.
  • Resource Constraints: Balancing performance and energy efficiency for resource-constrained devices is a persistent challenge.
  • Regulatory Compliance: IoT networks must adhere to regional and industry-specific regulations on data privacy and security.

Best Practices for IoT Network Design

  • Use Standardized Protocols: Ensure compatibility and interoperability by adopting widely accepted standards like MQTT, CoAP, and IPv6.
  • Implement Redundancy: Incorporate failover mechanisms and redundant pathways to enhance reliability.
  • Prioritize Security: Encrypt data, use secure boot processes, and enforce least-privilege access policies.
  • Adopt Modular Architecture: Design the network in modular components to simplify maintenance and scalability.
  • Monitor and Manage: Deploy monitoring tools to track performance, detect anomalies, and optimize resource utilization.
  • Optimize for Energy Efficiency: Use low-power wireless technologies and energy-efficient hardware.

Emerging Trends in IoT Network Design

  • 6G Networks: Future IoT networks will leverage 6G technologies to achieve ultra-low latency, massive connectivity, and enhanced reliability.
  • AI-Driven Network Management: Artificial intelligence (AI) and machine learning (ML) are being used to optimize IoT network performance and predict potential failures.
  • Blockchain for Security: Blockchain technology is increasingly used to secure IoT networks by providing immutable, decentralized record-keeping.
  • Digital Twins: Digital twins enable real-time simulation and optimization of IoT networks, improving design and operation.
  • Fog Computing: Extending the capabilities of edge computing, fog computing processes data closer to devices, enhancing speed and efficiency.

IoT network design methodologies are critical for creating robust, scalable, and secure ecosystems that can handle the diverse demands of IoT applications. By adhering to structured methodologies and staying informed about emerging trends, organizations can build IoT networks that are efficient, reliable, and prepared for future challenges.
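The protocol-selection step of the design process can be sketched as a simple rule-based chooser over range, power, and data-rate requirements. The thresholds below are illustrative only, not normative figures for any of these technologies, and the function name is invented for the example.

```python
# Illustrative rule-based selection of an access technology.
def suggest_technology(range_m, battery_powered, data_rate_kbps):
    if range_m <= 10 and battery_powered:
        return "BLE"
    if range_m <= 100:
        return "Wi-Fi" if data_rate_kbps > 250 else "Zigbee"
    if battery_powered and data_rate_kbps < 50:
        return "LoRaWAN or NB-IoT"   # LPWAN class
    return "LTE-M or wired backhaul"

print(suggest_technology(5, True, 10))        # wearable sensor
print(suggest_technology(50, False, 5000))    # smart camera
print(suggest_technology(5000, True, 1))      # field soil probe
```

A real selection would also weigh cost, licensing, interference, and regional spectrum regulations, which a rule table like this cannot capture.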

IoT Network Design Tools

The design of a robust IoT (Internet of Things) network is fundamental to the success of any IoT project. A well-architected network ensures reliable communication between IoT devices, minimizes latency, optimizes power consumption, and enables efficient data transfer. However, building an IoT network is complex, requiring the integration of various technologies, protocols, and platforms. IoT network design tools assist in modeling, simulating, and managing the networks that interconnect the myriad IoT devices. This section explores the types of IoT network design tools, their features, and their use cases.

Categories of IoT Network Design Tools

IoT network design tools can be classified into the following categories:

  1. Network Simulation Tools
  2. Network Protocol Design Tools
  3. IoT Connectivity and Communication Tools
  4. IoT Network Topology Design Tools
  5. Performance and Load Testing Tools
  6. Security Testing and Validation Tools
  7. End-to-End IoT Network Platforms

1. Network Simulation Tools

Network simulation tools allow developers to create and test IoT networks virtually before actual deployment. These tools simulate the behavior of devices, communication protocols, and network conditions, allowing for better planning, optimization, and troubleshooting. Common tools include:

  • Cisco Packet Tracer: A network simulator and visual tool for IoT networks. It is widely used for learning and testing IoT network designs and allows the simulation of network protocols like TCP/IP, HTTP, and MQTT. Key benefits: low cost, an easy-to-use interface, and the ability to simulate IoT device configurations.
  • OMNeT++: An open-source, modular simulation framework for simulating IoT and wireless networks. Primarily used for academic research, OMNeT++ allows simulation of large-scale IoT networks, including the modeling of communication protocols like Zigbee, LoRa, and NB-IoT. Key benefits: flexibility in modeling network conditions, protocol analysis, and support for various IoT scenarios.
  • NS3 (Network Simulator 3): A discrete-event network simulator with support for IoT protocols, 5G, and Wi-Fi simulations. Ideal for testing network performance, including IoT communication methods such as LoRaWAN, Zigbee, and NB-IoT. Key benefits: high-level simulation capabilities, scalability, and integration with real-world traffic patterns.
  • Castalia: A simulation environment for wireless sensor networks, including IoT devices. Often used in academic research to simulate low-power IoT networks and energy consumption. Key benefits: a focus on energy-efficient devices, low-power sensor networks, and resource-constrained environments.

2. Network Protocol Design Tools

IoT networks require robust communication protocols to enable devices to exchange data efficiently. Network protocol design tools help in defining and optimizing these protocols, ensuring they meet the specific needs of IoT environments. Common tools include:

  • Wireshark: A popular network protocol analyzer that supports many IoT protocols like MQTT, CoAP, and HTTP. Wireshark is used to capture and analyze packets in the network to diagnose issues with IoT protocol communication. Key benefits: real-time packet inspection, detailed protocol analysis, and customizable filters.
  • Mininet: A network emulator that allows the creation of custom virtual network topologies for testing network protocols. Used for testing the interaction of IoT protocols and evaluating their scalability. Key benefits: high flexibility in designing and emulating IoT network topologies and protocols.
  • MQTT.fx: A tool for MQTT protocol testing, providing a client interface to monitor and interact with MQTT brokers. Used for testing communication between IoT devices using the MQTT protocol. Key benefits: allows testing and troubleshooting of MQTT-based communication, including message payload inspection.

3. IoT Connectivity and Communication Tools

Connectivity is at the heart of any IoT network. These tools are designed to help manage and optimize the communication between IoT devices and their associated infrastructure (gateways, clouds, etc.).

Common Tools: LoRaWAN Network Server (LNS)

Features: A tool for managing LoRaWAN (Long Range Wide Area Network) devices, which is commonly used for low-power, long-range IoT communication. Use Case: It is widely used in applications like smart agriculture and remote monitoring where long-range connectivity is critical. Key Benefits: Efficient management of LoRaWAN devices, monitoring of network traffic, and data encryption. Zigbee2MQTT

Features: Connects Zigbee devices to an MQTT broker, providing a standardized way of communicating with Zigbee IoT devices. Use Case: Commonly used for home automation applications like smart lighting and thermostats. Key Benefits: Enables seamless communication between Zigbee and MQTT systems, supporting a wide range of Zigbee devices. NB-IoT (Narrowband IoT) Design Tools

Features: Tools designed to simulate and optimize narrowband IoT networks that use cellular connectivity. Use Case: Ideal for smart city applications, asset tracking, and industrial IoT solutions where low bandwidth and energy efficiency are critical. Key Benefits: Enables the design and optimization of networks with low power and high device density. 4. IoT Network Topology Design Tools Designing an efficient network topology is critical in IoT systems. These tools help in creating the architecture of an IoT network, determining how devices communicate with each other, and ensuring data flows efficiently.

Common Tools: Fritzing

Features: A tool for designing and simulating electronic circuits and IoT networks. Use Case: Used for creating the layout of IoT devices and their connections, particularly in prototype stages. Key Benefits: Visual interface for creating circuit diagrams and prototypes, easy export to production-ready files. Lucidchart

Features: A web-based diagramming tool for designing IoT network topologies. Use Case: Ideal for creating detailed network topology diagrams that represent device connections, data flow, and communication protocols. Key Benefits: Intuitive drag-and-drop interface, real-time collaboration, and extensive template library. Autocad Electrical

Features: A design tool specifically for electrical circuit and IoT network layouts. Use Case: Used in industrial IoT designs that require precise electrical schematics and connectivity. Key Benefits: Industry-standard tool for electrical network design, extensive component libraries. 5. Performance and Load Testing Tools IoT networks need to be able to handle high device densities and traffic loads without compromising performance. These tools allow for testing the performance of IoT networks under varying conditions.

Common Tools: iPerf

Features: Network testing tool that measures bandwidth and performance between two devices. Use Case: Used for testing network throughput and latency in IoT systems. Key Benefits: Measures critical network metrics and helps to optimize network conditions. JMeter

Features: Open-source performance testing tool that supports IoT network stress testing. Use Case: Used to test the scalability and load handling capabilities of IoT networks, including simulated device traffic. Key Benefits: Detailed reporting, scalability, and extensibility. LoadRunner

Features: A performance testing tool that can simulate the load from thousands of IoT devices. Use Case: Employed to understand how IoT networks perform under heavy loads and ensure optimal configuration before full deployment. Key Benefits: Scalable testing, detailed performance metrics, and compatibility with IoT protocols. 6. Security Testing and Validation Tools Security is a significant concern in IoT networks. These tools help to identify vulnerabilities and ensure that IoT systems are secure against cyber threats.

Common Tools: Wireshark (as mentioned above)

Use Case: Analyzes network traffic for vulnerabilities, including IoT-specific communication protocols like MQTT, CoAP, and Zigbee. Key Benefits: Helps identify potential security gaps in IoT network communication. Nessus

  • Features: A vulnerability scanning tool that checks for known security issues.
  • Use Case: Used to perform security audits on IoT devices and networks, identifying vulnerabilities before deployment.
  • Key Benefits: Comprehensive vulnerability scanning, frequent updates, and user-friendly reporting.

Kali Linux

  • Features: A security-focused operating system with a suite of penetration testing tools.
  • Use Case: Employed to test IoT network security, including the identification of insecure communication channels or exposed devices.
  • Key Benefits: A comprehensive suite of tools for ethical hacking and security validation.

7. End-to-End IoT Network Platforms

End-to-end IoT network platforms provide a complete solution for managing IoT networks from device connectivity to cloud-based data analytics and security.


IoT system architectures

Industrial IoT Systems

IoT Data Analysis

General audience classification icons

IoT systems, in their essence, are built to act as a tool for getting better insights into different processes and systems in order to make better decisions. The insights are provided by measuring the statuses of the systems or process elements represented by data. Unfortunately, without properly interpreting the data content, they turn into useless bits and bytes. Therefore, providing a means for understanding data is an essential property of a modern IoT system. Today, IoT systems produce a vast amount of data, which is very hard to use manually. Thanks to modern hardware and software developments, it is possible to develop fully or semi-automated systems for data analysis and interpretation, which may go further into decision-making and acting according to the decisions.

As various resources have stated, IoT in most cases complies with the so-called 5Vs of Big Data, where correspondence to even one of them is enough to make it a Big Data problem. As explained by Jain et al. [7], Big Data might be of different forms, volumes and structures, and in general, the 5Vs, i.e. Volume, Variety, Veracity, Velocity and Value, might be interpreted as follows:

Volume

This characteristic is the most obvious and refers to the size of the data. In most practical applications of IoT systems, large volumes of data are reached through intensive production and collection of sensor data. It usually rapidly populates existing operational systems and requires dedicated IoT data collection systems to be upgraded or, more advisably, developed from scratch.

Variety

As Jain explained, big data is highly heterogeneous in terms of source, kind, and nature. Having different systems, processes, sensors, and other data sources, variety is usually a distinctive feature of practical IoT systems. For instance, a system of intelligent office buildings would need data from a building management system, appliances and independent sensors, and external sources like weather stations or forecasts from appropriate external weather forecast APIs (Application programming interfaces). Additionally, the given system might require historical data from other sources, like XML documents, CSV files or other sources, diversifying the sources even more.

Veracity

Unfortunately, volume or diversity of data alone does not bring value; the data needs to be reliable and clean. In other words, data has to be of good quality; otherwise, the analysis might not bring additional value to the system's owner or might even compromise the decision-making process. The quality of data is represented by Veracity. In IoT applications, it is easy to lose data quality due to malfunctioning sensors producing missing or false data. Since hardware is an essential part of IoT, the data must be preprocessed in most cases.

Velocity

Data velocity characterises the data bound to the time and its importance during a specific period or at a particular time instant. A good example might be any real-time system like an industrial process control system, where reactions or decisions must be made during a fixed period of time, requiring data at particular time instants. In this case, data has a flow nature of a particular density.

Value

Since IoT systems and their data analysis subsystems are built to add value for their owner, the returned value should exceed the costs of development and ownership. If this does not hold, the system is of low or no value.

Dealing with big data requires specific hardware and software infrastructure. While there is a certain number of typical solutions and many more customised ones, some of the most popular are explained here:

Relational DB-based systems

Those systems are based on well-known relational data models and appropriate database management systems like MS SQL Server, Oracle Server, MySQL, etc. There are some advantageous features of those systems, for instance:

  • Advantages of SQL (Structured Querying Language): enabling easy manipulation of the data while maintaining a relatively good expressiveness of the data model;
  • A well-designed set of software tools and interfaces enabling integration with a large number of different systems;
  • A lot of built-in data processing routines (stored procedures) provide higher development productivity.
  • Enables asynchronous reactions to events by triggering internal events.
  • Data reading might be scaled out using multiple entities, while writing might be scaled up using more productive servers.

Unfortunately, scaling out data writing is not always possible and, for commercial software products, usually comes at a high cost.

 Relational DBMS scaling options
Figure 14: Relational DBMS scaling options

Complex Event Processing (CEP) systems

CEP systems are very application-tailored, enabling significant productivity at a reasonable cost. High productivity is usually needed for processing data streams, such as voice or video. Maintaining a limited time window for data processing is possible, which is relevant for systems that are close to real-time. Some of the most common drawbacks to be considered are:

  • It might be scaled up only by introducing higher productivity hardware, which is limited by the application-specific design. To some extent, the design might be more flexible if microservices and containerisation are applied.
  • Due to the factors mentioned above and the overall complexity, the maintenance costs are usually higher than for a universal design.
 CEP systems
Figure 15: CEP systems

NoSQL systems

As the name suggests, the main characteristic is higher flexibility of data models, which overcomes the limitations of highly structured relational data models. NoSQL systems are usually distributed, where the distribution is the primary tool to enable supreme flexibility. In IoT systems, software typically ages faster than hardware, which requires maintaining many versions of communication protocols and data formats to ensure backward compatibility. Another reason is the variety of hardware suppliers, where some protocols or data formats are specific to a given vendor. NoSQL also provides a means for scaling both out and up, enabling high future tolerance and resilience. A typical approach is to use a key-value or key-document model, where a unique key indexes incoming data blocks or documents (JSON, for instance). Some other designs might extend SQL data models with others – object models, graph models, or the mentioned key-value models – providing highly purpose-driven and, therefore, productive designs. However, the complexity of the design raises problems of data integrity as well as the complexity of maintenance.

 NoSQL systems
Figure 16: NoSQL systems
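To make the key-value/key-document idea concrete, the following minimal Python sketch mimics such a store with a plain dictionary. The device identifier, key layout and field names are invented for illustration and do not follow any particular NoSQL product's API; note how the second document can carry an extra field without any schema change.

```python
import json

# A toy key-document store: each reading is a JSON document
# indexed by a unique key built from device id and time stamp.
store = {}

def put(device_id, timestamp, payload):
    """Serialise the payload and index it by a unique key."""
    store[f"{device_id}:{timestamp}"] = json.dumps(payload)

def get(device_id, timestamp):
    """Fetch and deserialise a document by its key."""
    return json.loads(store[f"{device_id}:{timestamp}"])

put("sensor-42", 1700000000, {"temp": 21.5, "fw": "1.2"})
put("sensor-42", 1700000060, {"temp": 21.7, "fw": "1.2", "rh": 48})  # extra field, no schema change
print(get("sensor-42", 1700000060)["temp"])  # 21.7
```

The flexibility comes at a price: nothing in the store itself enforces that all documents share the same structure, which is exactly the data integrity concern mentioned above.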

In-memory data grids

This is probably the most productive type of system, providing high flexibility, productivity and scalability. Because these systems are designed to operate in servers' RAM, in-memory data grids are the best choice for data preprocessing in IoT systems due to their high productivity and ability to scale dynamically depending on actual workloads. They provide all the benefits of the CEP and relational systems, adding scale-out functionality for data writing. There are only two major drawbacks: limited RAM and high development costs. Some examples of available solutions:

This chapter is devoted to the main groups of algorithms for numerical data analysis and interpretation, covering both mathematical foundations and application specifics in the context of IoT. The chapter is split into the following subchapters:

Data Products Development

General audience classification icons

In the previous chapter, some essential properties of Big Data systems were discussed, as well as how and why IoT systems relate to Big Data problems. In any IoT implementation, data processing is the heart of the system, which ultimately takes the form of a data product. While it is still mainly a software subsystem, its development differs significantly from that of a regular software product. The difference is expressed through the roles involved and the lifecycle itself. It is often wrongly assumed that the main contributor is the data scientist responsible for developing a particular data processing or forecasting algorithm. This is somewhat valid, except that other roles are also vital to success. The team of developers playing the roles might be as small as three or as large as 20 people, depending on the scale of the project. The main roles are explained below.

Business user

Business users have good knowledge of the application domain and, in most cases, benefit significantly from the developed data product. They know how to transform data into a business value in the organisation. Typically, they take positions like Production manager, Business/market analyst, and Domain expert.

Project sponsor

The project sponsor defines the business problem and triggers the birth of the project. He defines the project's scope and volume and ensures the necessary provisions are met. While he defines project priorities, he usually does not have deep knowledge of the technologies, algorithms or methods used.

Project manager

As in most software projects, the project manager is responsible for meeting project requirements and specifications within the given time frame and available provisions. He selects the needed talents, chooses development methods and tools, and selects goals for the development team members. Usually, he reports to the project sponsor and ensures that information flows within the team.

Business information analyst

He possesses deep knowledge in the given business domain, supported by his skills and experience. Therefore, he is a valuable asset for the team in understanding the data's content, origin, and possible meaning. He defines the key performance indicators (KPI) and metrics, which are to be measured to assess the project's success level. He selects information and data sources to prepare information and data dashboards for the organisation's decision-makers.

Database administrator

He is responsible for the configuration of the development environment and the database (one, many, or a complex distributed system). In most cases, the configuration must meet specific performance requirements, which must be maintained. He ensures secure access to the data for the team members. During the project, he performs data backups and restores, applies configuration updates, and provides other support.

Data engineer

Data engineers usually have deep technical knowledge of data manipulation methods and techniques. During the project, he tunes data manipulation procedures, SQL queries, and memory management and develops specific stored or server-side procedures. He is responsible for extracting specific data chunks for the Sandbox environment and formats and tunes them according to the data scientist's needs.

Data scientist

Develops or selects data processing models needed to meet the project specifications. Develops, tests and implements data processing methods and algorithms; develops decision-making support methods and their implementations for some projects. Provides needed research capacities for selecting and developing the data processing methods and models.

As might be noticed, there is no doubt that the Data Scientist plays a vital role, but only in cooperation with the other roles. For a single person, depending on his or her competencies and capacities, roles might overlap, or several roles might be provided by a single team member. Once the team is built, the development process can start. As with any other product development, data product development follows a specific life cycle consisting of phases. Depending on particular project needs, there might be variations, but in most cases the data product development follows the well-known waterfall pattern. The phases are explained below:

 Data product life cycle
Figure 17: Data product life cycle
Discovery

The project team learns about the problem domain, the problem itself, its structure, and possible data sources and defines the initial hypothesis. The phase involves interviewing the stakeholders and other potentially related parties to reach as broad an insight as necessary. It is said that during this phase the problem is framed – the analytical problem is defined, together with success indicators for the potential solutions, business goals and scope. To understand business needs, the project sponsor is involved in the process from the very beginning. The identified data sources might include external systems or APIs, sensors of different types, static data sources, official statistics and other vital sources. One of the primary outcomes of the phase is the Initial Hypothesis (IH), which concisely represents the team's vision of both the problem and a potential solution, for instance: “Introduction of deep learning models for sensor time series forecast provides at least 25% better performance over statistical methods used at the moment.” Whatever the IH is, it is a much better starting point than defining the hypothesis during project implementation in later phases.

Data preparation

The phase focuses on creating a sandbox system by extracting, transforming and loading the data into it (ETL – Extract, Transform, Load). This is usually the most prolonged phase in terms of time and can take up to 50% of the total time allocated to the project. Unfortunately, most teams tend to underestimate this time consumption, which costs the project manager and analysts dearly, leading to a loss of trust in the project's success. Data scientists who are given a special role and authority in the team tend to “skip” this phase and go directly to phase 3 or 4, which proves costly when the data turns out to be incorrect or insufficient to solve the problem.

  1. Data analysis sandbox - The client's operational data, logs (windows), raw streams, etc., are copied. There is a possibility of a natural conflict where data scientists want everything, and the IT “service” provides a minimum. The needs must, therefore, be explained through a thorough argument. The sandbox can be 5 – 10 times larger than the original dataset!
  2. Carrying out ETLs - The data is retrieved, transformed and loaded back into the sandbox system. Sometimes, simple data filtering excludes outliers and cleans the data. Due to the volume of data, there may be a need for parallelisation of data transfers, which leads to the need for appropriate software and hardware infrastructure. In addition, various web services and interfaces are used to obtain context.
  3. Exploring the content of the data - The main task is to get to know the content of the extracted data. A data catalogue or vocabulary is created (small projects can skip this step). Data research allows for identifying data gaps and technology flaws, as well as the team's own and extraneous data (for determining responsibilities and limitations).
  4. Data conditioning - Slicing and combining are the most common actions in this step. The compatibility of data subsets with each other after the performed manipulations is checked to exclude systematic errors – errors that occur as a result of incorrect manipulation (formatting of data, filling in voids, etc.). The team ensures the time, metadata, and content match during this step.
  5. Reporting and visualising - This step uses general visualisation techniques, providing a high-level overview – value distributions, histograms, correlations, etc., explaining the data content. It is necessary to check whether the data represent the problem sphere, how the value distributions “behave” throughout the dataset, and whether the details are sufficient to solve the problem.
Model planning

The main task of the phase is to select model candidates for data clustering, classification or other needs that are consistent with the Initial Hypothesis from Phase 1.

  1. Exploring data and selecting variables - The aim is to discover and understand variables' interrelationships through visualisations. The identified stakeholders are an excellent source of relevant insights about internal data relationships – even if they do not know the reasons! These steps allow the selection of key factors instead of checking all against all;
  2. Selection of methods or models - During this step, the team creates a list of methods that match the data and the problem. A typical approach is creating many lightweight prototype models using ready-made tools and prototyping packages, such as R, SPSS, Excel, Python, and other specific tools. Tools typical of the phase might include but are not limited to R or Python, SQL and OLAP, Matlab, SPSS, and Excel (for simpler models);
Model development

During this phase, the initially selected prototype models are implemented on a full scale with respect to the gathered data. The main question is whether the data is enough to solve the problem. There are several steps to be performed:

  1. Data preparation - Specific subsets of data are created, such as training, testing, and validation. The data is adjusted to the selected initial data formatting and structuring methods.
  2. Model development - Usually, conceptually, it is very complex but relatively short in terms of time.
  3. Model testing - The models are operated and tuned using the selected tools and training datasets to optimise the models and ensure their resilience to incoming data variations. All decisions must be documented! This is important because all other team roles require detailed reasoning about decisions, especially during communication and operationalisation.
  4. Key points to be answered during the phase are:
    • Is the model accurate enough?
    • Are the results obtained meaningful in relation to the objectives set?
    • Do the models avoid making unacceptable mistakes?
    • Is the data enough?

In some areas, false positives are more dangerous than false negatives. For example, targeting systems may inadvertently target “their own”.

Communication

During this phase, the results must be compared against the established quality criteria and presented to those involved in the project. It is important not to present any drafts outside the group of data scientists! The methods used are too complex for most of those involved, which leads to incorrect conclusions and unnecessary communication back to the team. Usually, the team is biased towards not accepting results that falsify the hypotheses, taking it too personally. However, the data led the team to the conclusions, not the team itself! In any case, it must be verified that the results are statistically reliable; if not, the results are not presented. It is also essential to present all the obtained side results, as they almost always provide additional value to the business. The general conclusions need to be complemented by sufficiently broad insights into the interpretation of the results, which is necessary for users of the results and decision-makers.

Operationalisation

The results presented are first integrated into a pilot project before full-scale implementation, after which the widespread roll-out follows the pilot's tests in the production environment. During this phase, closing performance gaps may require replacing, for instance, Python or R code with compiled code. Expectations for each of the roles during this phase:

  • Business user: Identifiable benefits of the model for the business;
  • Project sponsor: return on investment (ROI) and impact on the business as a whole – how to highlight it outside the organisation / other business;
  • Project manager: the completion of the project within the expected deadlines with the intended resources;
  • Business Information Analyst: add-ons to existing reports and dashboards;
  • Data scientist: convenient maintenance of models after preparation of detailed documentation of all developments and explanation of the work performed to the team;

Data Preparation for Data Analysis

General audience classification icons

Introduction

In most cases, data must be prepared before analysing it or applying some processing methods. There might be different reasons for it, for instance, missing values, sensor malfunctioning, different time scales, different units, a specific format needed for a given method or algorithm, and many more. Therefore, data preparation is as necessary as the analysis itself. While data preparation is usually very specific to a given problem, some common general cases and preprocessing tasks prove to be very useful. Data preprocessing also depends on the data's nature – preprocessing is usually very different for data where the time dimension is essential (time series) and for data where it is not, such as a log of discrete cases for classification with no internal causal dependencies among entries. It must be emphasised that whatever data preprocessing is done, it needs to be carefully noted, and the reasoning behind it must be explained to allow others to understand the results acquired during the analysis.

"Static data"

Some of the methods explained here might also be applied to time series but must be done with full awareness of possible implications. Usually, the data should be formatted as a table consisting of rows representing data entries or events and fields representing features of the event entry. For instance, a row might represent a room climate data entry, where fields or factors represent air temperature, humidity level, CO2 level and other vital measurements. For the sake of simplicity in this chapter, it is assumed that data is formatted as a table.

Filling the missing data

One of the most common situations is missing sensor measurements, which might be caused by communication channel issues, IoT node malfunctioning or other reasons. Since most of the data analysis methods require complete entries, it is necessary to ensure that all data fields are present before applying the analysis methods. Usually, there are some common approaches to deal with the missing values:

  • Random selection – the method, as suggested by the name, allows randomly selecting one of the possible values of the data field. If the field value list is categorical, representing a limited set of possible values, for instance, a set of colours or operation modes, one value from the list is randomly selected. In the case of a continuous value, a random value from an interval is selected. Besides its simplicity, the method allows for filling gaps in data in cases where a fraction of missing values is insignificant. In case of a significant fraction of missing values, the method should not be applied due to implications on the data analysis.
  • Informed selection – the method, in essence, does the same as random selection, except that additional information on the value distribution of the field (factor) is used. In other words, the most common value might be selected for discrete factors, while for continuous values, an average value might be selected according to the distribution characteristics. There might be more complex situations which cannot be described by a Gaussian distribution. In those cases, the data analyst needs to make an informed decision on the particular selection mechanism, representing the distribution's specifics.
  • Value marking – this approach might be applied for cases where there is the chance that missing data is a consequence of some critical processes; for instance, whenever the engine's temperature reaches a critical value, the pressure sensor stops functioning due to overheating. Analysts might know the issue or not; in any case, it is essential to mark those situations to find possible causalities in the data. A dedicated new category might be introduced if the factor is categorical, like “empty”. In the case of continuous values, a dedicated “impossible” value might be assigned, such as max integer value, minimum integer value, zero, and others.
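The three approaches above can be sketched in a few lines of Python; the function name, the strategy labels and the `mark_value` default are illustrative choices for continuous measurements, not a standard API:

```python
import random
import statistics

def fill_missing(values, strategy="informed", mark_value=-9999):
    """Fill None entries in a list of continuous measurements.

    Strategies (illustrative only):
      - "random":   draw uniformly between the observed min and max;
      - "informed": use the mean of the observed values;
      - "mark":     insert a dedicated 'impossible' value.
    """
    observed = [v for v in values if v is not None]
    filled = []
    for v in values:
        if v is not None:
            filled.append(v)
        elif strategy == "random":
            filled.append(random.uniform(min(observed), max(observed)))
        elif strategy == "informed":
            filled.append(statistics.mean(observed))
        else:  # "mark"
            filled.append(mark_value)
    return filled

readings = [21.5, 21.7, None, 22.0]
print(fill_missing(readings, "informed"))  # the gap is replaced by the mean of the observed values
```

For categorical factors, the same idea applies with the most common category ("informed"), a random category ("random"), or a dedicated "empty" category ("mark").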

Scaling

Scaling is a frequently used method for continuous numerical factors. The main reason is that different factors are observed on different value intervals. It is essential for methods like clustering, where a multi-dimensional Euclidean distance is used; in the case of different scales, one of the dimensions might overwhelm others just because of a higher order of the numerical values. Usually, scaling is performed by applying a linear transformation of the data with set min and max values, which mark the desired value interval. In most software packages, like Python Pandas [8], scaling is implemented as a simple-to-use function. However, it might be done manually if needed as well:

 Scaling
Figure 18: Scaling

, where:
  • Vold – the old measurement;
  • Vnew – the new (scaled) measurement;
  • mmin – minimum value of the measured interval;
  • mmax – maximum value of the measured interval;
  • Imin – minimum value of the desired interval;
  • Imax – maximum value of the desired interval.
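Assuming the measured interval is taken from the data itself, the linear transformation above can be implemented manually as follows (a sketch; packages like Pandas provide ready-made equivalents):

```python
def rescale(values, i_min=0.0, i_max=1.0):
    """Linearly map measurements from [m_min, m_max] (taken from the
    data) to the desired interval [i_min, i_max]."""
    m_min, m_max = min(values), max(values)
    return [i_min + (v - m_min) * (i_max - i_min) / (m_max - m_min)
            for v in values]

temps = [18.0, 20.0, 26.0]
print(rescale(temps))  # [0.0, 0.25, 1.0]
```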

Normalisation

Normalisation is effective when the data distribution is unknown or known to be non-Gaussian (not following the bell curve of the Gaussian distribution). It is beneficial for data with varying scales, especially when using algorithms that do not assume any specific data distribution, such as k-nearest neighbours and artificial neural networks. Normalisation changes not the scale of the values but their distribution, so that it resembles a Gaussian distribution. This technique is mainly used in machine learning and is performed with appropriate software packages due to the complexity of the calculations compared to scaling.
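One of several ways to reshape values towards a Gaussian distribution is a rank-based inverse-normal transform, sketched below using only the Python standard library; dedicated machine-learning packages implement more elaborate variants of such transforms:

```python
from statistics import NormalDist

def normalise_to_gaussian(values):
    """Map values onto a standard Gaussian via their ranks:
    each value's rank is converted to a quantile in (0, 1),
    then to the corresponding z-score of a standard normal."""
    n = len(values)
    nd = NormalDist()  # standard normal: mean 0, sigma 1
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = nd.inv_cdf((rank + 0.5) / n)
    return out
```

The ordering of the values is preserved; only their distribution changes, which is the essential property described above.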

Adding dimensions

Sometimes, it is necessary to emphasise a particular phenomenon in the data. For instance, it might be very helpful to amplify the changes in the factor value, i.e., values more distant from 0 should become even larger, while those closer to 0 should not grow as much. In this case, a simple technique is applying an exponent function to the factor values – squaring or raising to the power of 4. If negative values are present, odd powers might be used to preserve the sign. A variation of the technique might be summing up different factor values before or after applying the exponent; in this case, a group of similar values representing the same phenomenon emphasises it. Any other function can be applied to represent the specifics of the problem.
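As a minimal illustration, raising deviations to an odd power amplifies large changes while preserving the sign of negative values:

```python
def emphasise(values, power=3):
    """Raise values to an odd power: deviations far from 0 grow,
    deviations near 0 shrink, and the sign is preserved."""
    return [v ** power for v in values]

deltas = [-2.0, -0.5, 3.0]
print(emphasise(deltas))  # [-8.0, -0.125, 27.0]
```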

Time series

Time series usually represent the dynamics of some process, and therefore, the order of the data entries has to be preserved. This means that in most cases, all of the mentioned methods might be used as long as the data order remains the same. A time series is simply a set of data – usually events – arranged by a time marker. Typically, time series are arranged in the order in which events occur or are recorded. Several important consequences follow from this simple fact:

  • The sequence of events must be followed for any data manipulation;
  • The arrangement of events in time is not only the order of data arrival but is a reflection of a certain process and its development in time.
  • The sequence of events reflects the causal relations of this process, which we try to discover through data analysis;
Time Series Analysis Questions

Therefore, there are several questions that data analysis typically tries to answer:

  • Is the process stationary, i.e., do its statistical properties remain constant over time?
  • If the process is dynamic, is there a direction of development:
    • The process is chaotic or regular;
    • There is periodicity in the dynamics of the process:
  • Are there any regularities between the individual changes of the parameters characterising the process – correlation?
  • Does the dynamics of the process depend on changes in parameters of the external environment that we can influence, i.e. is the process adaptive?
Some definitions

Autocorrelation - A process is autocorrelated if the similarity of the values of a given observation is a function of the time between observations. In other words, the difference between the values of the observations depends on the interval between the observations. This does not mean that the process values are identical but that the difference between them is similar. The process can equally well be decaying or growing in the mean value or amplitude of the measurements, but the difference between subsequent measurements is always the same (or close).

Seasonality - The process is seasonal if the deviation from the average value is repeated periodically. This does not mean the values must match perfectly, but there must be a general tendency to deviate periodically from the average value. A perfect example is a sinusoid.

Stationarity - A process is stationary if its statistical properties do not change over time. Generally, the mean and variance over a period serve as good measures. In practice, a certain tolerance interval is used to tell whether a process is stationary since ideal cases (no noise) do not tend to occur in practice. For example, temperature measurements over several years are stationary and seasonal. It is not autocorrelated because temperatures are still relatively variable across days. Numerically, stationarity is evaluated with the so-called Dickey-Fuller test [9], which uses a linear regression model to measure change over time at a given time step. The model's t-test [10] indicates how statistically strong the hypothesis of process stationarity is.
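A proper stationarity test such as Dickey-Fuller requires a statistics package (for instance, statsmodels provides an implementation). As a rough illustration of the idea only, the sketch below checks whether window means and variances stay within an arbitrary tolerance of the overall ones:

```python
from statistics import mean, variance

def roughly_stationary(series, n_windows=4, tolerance=0.2):
    """Crude stationarity check: split the series into windows and
    require each window's mean and variance to stay within a relative
    tolerance of the overall mean and variance. The thresholds are
    arbitrary; this is a stand-in for a proper statistical test."""
    w = len(series) // n_windows
    overall_m = mean(series)
    overall_v = variance(series)
    for k in range(n_windows):
        chunk = series[k * w:(k + 1) * w]
        if abs(mean(chunk) - overall_m) > tolerance * max(abs(overall_m), 1e-9):
            return False
        if abs(variance(chunk) - overall_v) > tolerance * overall_v:
            return False
    return True
```

An alternating signal passes this check, while a steadily trending one fails it, matching the intuition that a trend violates stationarity.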

Time series modelling

In many cases, it is necessary to emphasise the main pattern of the time series while removing the “noise”. In general, there are two main techniques – decimation and smoothing. Both are widely used but need to be treated carefully.

Moving average (sliding average)

The essence of the method is to obtain an average value within a certain time window M, thereby giving inertia to the incoming signal and reducing the noise's impact on the overall analysis result. Different effects might be obtained depending on the size of the time window M.

 Moving average
Figure 19: Moving average

, where:
  • SMAt – the new smoothed value at time instant t;
  • Xi – the ith measurement at time instant i;
  • M – the time window.

The image below demonstrates the effects of time window sizes of 10 and 100 measurements on an incoming signal from a freezer's thermometer.

  • At first, it needs to be emphasised that the moving average adds a slight lag in the incoming data, i.e., the rise and fall of the values are slightly behind the original values.
  • In the case of M = 10, the overall shape of the time series is preserved, while noise is removed.
  • In the case of M = 100, the time series shape is transformed into a new function which does not represent the main features of the original measurements. For instance, rises are replaced by falls and vice versa, while the data spike merges with the subsequent rise and forms one larger rise of the signal. Thus, the result annihilates the initial features of the signal.
 Moving average
Figure 20: Moving average
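The simple moving average described above can be sketched in a few lines of Python; this illustrative version keeps a running window sum so each step costs constant time:

```python
def moving_average(values, m):
    """Simple moving average with window M: each output value is the
    mean of the M most recent measurements (output starts once a full
    window is available, so it is M-1 entries shorter than the input)."""
    if m < 1 or m > len(values):
        raise ValueError("window must fit inside the series")
    sma = []
    window_sum = sum(values[:m])
    sma.append(window_sum / m)
    for i in range(m, len(values)):
        window_sum += values[i] - values[i - m]  # slide the window by one
        sma.append(window_sum / m)
    return sma
```

Note that the shorter output and the lag it implies correspond directly to the delay effect discussed above.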
Exponential moving average

The exponential moving average is widely used in noise filtering, for example, in the analysis of changes in stock markets. Its main idea is that each measurement's weight (influence) decreases exponentially with its age. Thus, the evaluation emphasises more recent measurements and gives less consideration to older ones.

 Exponential moving average
Figure 21: Exponential moving average

, where: EMAt – the new smoothed value at time instant t; Xi – the ith measurement at time instant i; alpha – the smoothing factor between 0 and 1, which reflects the weight of the last, i.e., the most recent, measurement.

As seen in the figure below, the exponential moving average preserves the shape of the initial signal for different weighting factor values. It has a minimal lag while removing the noise, which makes it a handy smoothing technique.

 Exponential moving average
Figure 22: Exponential moving average
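The recurrence EMA(t) = alpha * X(t) + (1 - alpha) * EMA(t-1) can be sketched as follows; seeding the recurrence with the first measurement is an assumption (a common convention, not the only one):

```python
def exponential_moving_average(values, alpha):
    """EMA_t = alpha * X_t + (1 - alpha) * EMA_(t-1).
    alpha in (0, 1] weights recent measurements more heavily; alpha = 1
    reproduces the raw signal, small alpha smooths aggressively."""
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    ema = [values[0]]  # assumption: seed with the first measurement
    for x in values[1:]:
        ema.append(alpha * x + (1 - alpha) * ema[-1])
    return ema
```

Unlike the simple moving average, the output has the same length as the input, which is one reason for its minimal lag.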
Decimation

Decimation is a technique of excluding some entries from the initial time series to reduce overwhelming or redundant data. As the name suggests, to reduce the data by 10%, every tenth entry is excluded. It is a simple method that brings significant benefits for over-measured processes with slow dynamics. With preserved time stamps, the data still allows the application of general time-series analysis techniques like forecasting.
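A minimal sketch of the exclusion rule described above (drop every k-th entry, preserving the order and time stamps of the remaining entries):

```python
def decimate(entries, k=10):
    """Drop every k-th entry (1-based) from a time series, reducing the
    data volume by roughly 1/k while preserving order. Entries may be
    raw values or (timestamp, value) pairs."""
    return [entry for i, entry in enumerate(entries, start=1) if i % k != 0]
```

For k = 10 this removes 10% of the entries, matching the classical meaning of the term.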

Regression Models

General audience classification iconGeneral audience classification iconGeneral audience classification icon

Introduction

While AI and especially Deep Learning techniques have advanced tremendously, the fundamental data analysis methods still provide a good and, in most cases, efficient way of solving many data analysis problems. Linear regression is one of those methods, providing at least a good starting point for an informative and insightful understanding of the data. Linear regression models are relatively simple and in most cases do not require significant computing power, which makes them widely applied in different contexts. The term regression (towards a mean value of a population) was widely promoted by Francis Galton, who introduced the term “correlation” in modern statistics [11] [12] [13].

Linear regression model

Linear regression is an algorithm that computes the linear relationship between the dependent variable and one or more independent features by fitting a linear equation to observed data. In essence, linear regression builds a linear function – a model that approximates a set of numerical data in a way that minimises the squared error between the model prediction and the actual data. The data consist of at least one independent variable (usually denoted by x) and the function or dependent variable (usually denoted by y). If there is just one independent variable, it is known as Simple Linear Regression, while in the case of more than one independent variable, it is called Multiple Linear Regression. In the same way, in the case of a single dependent variable, it is called Univariate Linear Regression, while in the case of many dependent variables, it is known as Multivariate Linear Regression. For illustration purposes, the figure below shows a simple data set that F. Galton used while studying the relationship between parents' and their children's heights. The data set might be found here: [14]

 Galton's data set
Figure 23: Galton's data set

If the children's heights are Y and their fathers' heights are X, the linear regression algorithm looks for a linear function that, in the ideal case, fits all the children's heights to their fathers' heights. So, the function would look like the following equation:

 Linear model
Figure 24: Linear model

Where:

  • Yi – ith child height
  • Xi – ith father height
  • β0 and β1 – the y-axis crossing (intercept) and slope coefficients of the linear function, respectively

Unfortunately, in the context of the given example, finding such a function is not possible for all x-y pairs at once, since the x and y values differ from pair to pair. However, it is possible to find a linear function that minimises, over all x-y pairs, the distance between the given y and the y' produced by the function or model. Here, y' is an estimated or forecasted y value, and the distance between each y-y' pair is called an error. Since the error might be positive or negative, a squared error is used to estimate it. It means that the following equation might describe the model:

 Linear model
Figure 25: Linear model with estimated coefficients

where

  • Y'i – ith child height estimated by the model
  • Xi – ith father height
  • β’0 and β’1 – estimates of the y-axis crossing (intercept) and slope coefficients of the linear function, respectively, which minimise the error term:
 Model error
Figure 26: Model error

The estimated beta values might be calculated as follows:

 Coefficient values
Figure 27: Coefficient values

Where:

  • Cor(X,Y) – correlation between X and Y (capital letters denote vectors of the individual x and y values)
  • σx and σy – standard deviations of vectors X and Y
  • µx and µy – mean values of the vectors X and Y
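Using these definitions, the closed-form estimates can be sketched in pure Python (illustrative only; in practice a statistics package would be used, as noted below):

```python
import statistics

def fit_simple_linear(xs, ys):
    """Closed-form estimates for simple linear regression:
    beta1 = Cor(X, Y) * sigma_y / sigma_x,
    beta0 = mu_y - beta1 * mu_x
    (equivalent to the least-squares solution)."""
    mu_x, mu_y = statistics.mean(xs), statistics.mean(ys)
    sd_x, sd_y = statistics.stdev(xs), statistics.stdev(ys)
    # Sample correlation coefficient Cor(X, Y)
    cor = sum((x - mu_x) * (y - mu_y) for x, y in zip(xs, ys)) / (
        (len(xs) - 1) * sd_x * sd_y)
    beta1 = cor * sd_y / sd_x
    beta0 = mu_y - beta1 * mu_x
    return beta0, beta1
```

For perfectly linear data such as y = 2x + 1, the estimates recover the intercept 1 and slope 2 up to floating-point precision.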

Most modern data processing packages possess dedicated functions for building linear regression models with few lines of code. The result is illustrated in the following figure:

 Galton's data set
Figure 28: Galton's data set with linear model

Errors and their meaning

As discussed previously, an error in the context of the linear regression model represents the distance between the true dependent variable values and the estimates provided by the model, which the following equation might represent:

 Model error
Figure 29: Error term

where,

  • y'i – ith child height estimated by the model
  • yi – the true value of the ith child height
  • ei – the error of the model's ith output

Since the error for a given yi might be positive or negative and the model itself minimises the overall error, one might expect the error to be normally distributed around the model, with a mean value of 0 and a sum close or equal to 0. Errors for a few randomly selected data points are depicted in red in the following figure:

 Galton's data set
Figure 30: Galton's data set with linear model and its errors

Unfortunately, these facts alone do not always provide enough information about the modelled process. In most cases, due to the dynamic features of the process, the distribution of the errors is as important as the model itself. For instance, a motor shaft wears out over time, and the fluctuations steadily increase from the centre of rotation. To estimate the overall wear of the shaft, a maximum amplitude measurement is enough; however, it is not enough to understand the dynamics of the wearing process. Another important aspect is the order of magnitude of the errors compared to the measurements: small error quantities might be impossible to notice even when the model is plotted. The following figure illustrates such a situation:

 Error distribution example
Figure 31: Error distribution example

In this figure, both small error quantities and their progression dynamics are illustrated. Another example, with a cyclic error distribution, is provided in the following figure:

 Error distribution example
Figure 32: Error distribution example

From this discussion, a few essential notes have to be taken:

  • Error distributions (around 0) should be treated as carefully as the models themselves;
  • In most cases, regularities in the error distribution are hard to notice, even if the errors are illustrated;
  • It is essential to look into the distribution to ensure that there are no regularities.

If any regularities are noticed, whether a simple variance increase or a cyclic nature, they point to something the model does not consider. This might indicate a lack of data, i.e., other factors that influence the modelled process but are not part of the model, which is therefore exposed through the nature of the error distribution. It might also point to an oversimplified view of the problem, in which case more complex models should be considered. In any of the mentioned cases, a deeper analysis should be performed. In a more general way, the linear model might be described with the following equation:

 Linear model
Figure 33: General notation of a linear model

Here, the error is considered to be normally distributed around 0, with its standard deviation sigma and variance sigma squared. Variance provides at least a numerical insight into the error distribution; therefore, it should be considered as an indicator for further analysis. Unfortunately, the true value of sigma is not known; therefore, its estimated value should be used:

 Sigma estimate
Figure 34: Sigma estimate

Here, the expected value of the estimated variance equals the true variance:

 Variance estimate
Figure 35: Variance estimate
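As a small illustrative sketch (assuming a simple linear model with coefficients beta0 and beta1 has already been fitted), the residual-based variance estimate can be computed as follows; the n - 2 divisor is the standard unbiased estimator, reflecting the two estimated coefficients:

```python
def residual_variance(xs, ys, beta0, beta1):
    """Unbiased estimate of the error variance sigma^2 for a fitted
    simple linear model: sum of squared residuals divided by (n - 2),
    the two lost degrees of freedom coming from beta0 and beta1."""
    residuals = [y - (beta0 + beta1 * x) for x, y in zip(xs, ys)]
    n = len(residuals)
    return sum(e * e for e in residuals) / (n - 2)
```

Tracking this estimate alongside the fitted coefficients gives the numerical insight into the error distribution discussed above.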

Multiple linear regression

In many practical problems, the target variable Y might depend on more than one independent variable X; for instance, wine quality depends on its sweetness, amount of sugars, acidity and other factors. Applying a linear regression model may then seem much more complicated, but it is still a linear model of the following form:

 Multiple linear model
Figure 36: Multiple linear model

During the application of the linear regression model, the error term to be minimised is described by the following equation:

 Multiple linear model error estimate
Figure 37: Multiple linear model error estimate

Unfortunately, the results of multiple linear regression cannot be visualised in the same way as for simple linear regression due to the number of factors (dimensions). Therefore, numerical analysis and interpretation of the model should be performed. In many situations, numerical analysis is complicated and requires a semantic interpretation of the data and the model. To support it, visualisations reflecting the relation between the dependent variable and each independent variable are produced, resulting in multiple graphs. Otherwise, the quality of the model is hardly assessable, or even unassessable.

Piecewise linear models

Piecewise linear models, as the name suggests, allow splitting the overall data sample into pieces and building a separate model for every piece, thus achieving better prediction for the data sample. The formal representation of the model is as follows:

 Piecewise linear model
Figure 38: Piecewise linear model

As might be noticed, the individual models are still linear and individually simple. However, the main difficulty is setting the threshold values b that split the sample into pieces. To illustrate the problem better, one might consider the following artificial data sample:

 Complex data example
Figure 39: Complex data example

Intuition suggests splitting the sample into two pieces, with the boundary b around 0, and fitting a linear model for each piece separately:

 Piecewise linear model
Figure 40: Piecewise linear model with 2 splits

Since we do not know the exact best split, it might seem logical to play with different numbers of splits at different positions. For instance, a random number of splits might generate the following result:

 Piecewise linear model
Figure 41: Piecewise linear model with many splits

It is evident from the figure above that some of the individual linear models do not reflect the overall trends, i.e., the slope steepness and direction (positive or negative) seem incorrect. However, it is also apparent that those individual models might fit their limited sample splits better. This simple example shows how confusing the selection of the number of splits and their boundaries can be. Unfortunately, there is no simple answer, and a possible solution might be one of the following:

  • Using contextual information, the model developer might select a particular number of splits and boundaries based on the context;
  • Some additional methods might be used to find the best split automatically. In this case, software packages usually have tools for this. For Python developers, a very handy package mlinsights [15] provides a set of such tools, including regression trees and other methods.
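A minimal sketch of the first option, i.e., a single, manually chosen boundary b (illustrative only; packages such as mlinsights automate the split search, and the sketch assumes both sides of b contain data):

```python
import statistics

def fit_line(xs, ys):
    """Ordinary least squares for one segment; returns (beta0, beta1)."""
    mu_x, mu_y = statistics.mean(xs), statistics.mean(ys)
    beta1 = sum((x - mu_x) * (y - mu_y) for x, y in zip(xs, ys)) / \
        sum((x - mu_x) ** 2 for x in xs)
    return mu_y - beta1 * mu_x, beta1

def piecewise_fit(xs, ys, b):
    """Fit one linear model below threshold b and another at/above it."""
    left = [(x, y) for x, y in zip(xs, ys) if x < b]
    right = [(x, y) for x, y in zip(xs, ys) if x >= b]
    return fit_line(*zip(*left)), fit_line(*zip(*right))
```

For a V-shaped sample split at b = 0, the two fitted segments recover the opposite slopes on each side of the boundary.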

Clustering Models

General audience classification iconGeneral audience classification iconGeneral audience classification icon

Introduction

Clustering is a methodology that belongs to the class of unsupervised machine learning. It allows finding regularities in data when a group or class identifier or marker is absent. To do this, the structure of the data itself is used as the tool for finding the regularities. Because of this powerful feature, clustering is often used as part of a data analysis workflow prior to classification or other data analysis steps to find natural regularities or groups that may exist in the data.

This provides very insightful information about the data's internal organisation, possible groups, their number and distribution, and other internal regularities that might be used to better understand the data content. To explain clustering better, one might consider grouping customers by income estimate. It is very natural to assume some threshold values of 1 KEUR per month, 10 KEUR per month, etc. However:

  • Do the groups reflect a natural distribution of customers by their behaviour?
  • For instance, does a customer with 10KEUR behave differently from the one with 11KEUR per month?

It is obvious that, most probably, customers' behaviour depends on factors like occupation, age, total household income, and others. While the need for considering other factors is obvious, grouping is not – how exactly different factors interact to decide which group a given customer belongs to. That is where clustering exposes its strength – revealing natural internal structures of the data (customers in the provided example).

In this context, a cluster refers to a collection of data points aggregated together because of certain similarities [16]. Within this chapter, two different approaches to clustering are discussed:

  • Cluster centroid-based, where the main idea is to find an imaginary centroid point representing the “centre of mass” of the cluster or, in other words, the centroid represents a “typical” member of the cluster that, in most cases, is an imaginary point.
  • Cluster density-based, where the density of points around the given one determines the membership of a given point to the cluster. In other words, the main feature of the cluster is its density.

In both cases, a distance measure estimates the distance among points or objects and the density of points around a given one. Therefore, all factors used should generally be numerical, assuming a Euclidean space.

To illustrate the mentioned algorithm groups, the following algorithms are discussed in detail:

  • K-Means - a widely used algorithm that uses distance as the main estimate to group objects;
  • DBSCAN - a good example of density-based algorithm widely used in signal processing;

Data preprocessing before clustering

Before starting clustering, several important steps have to be performed:

  • Check whether the data used is metric: In clustering, the primary measure is (in most cases) Euclidean distance, which requires numeric data. While it is possible to encode some arbitrary data using numerical values, the encoding must maintain the semantics of numbers, i.e. 1 < 2 < 3. Good examples of naturally metric data are temperature, exam assessments, and the like. Bad examples: gender, colour.
  • Select the proper scale: For the same reasons as the distance measure, the values of each dimension should be on the same scale. For instance, customers' monthly incomes in euros and their credit ratios are typically at different scales – the incomes in thousands, while ratios between 0 and 1. If scales are not adjusted, the income dimension will dominate distance estimation among points, deforming the overall clustering results. A universal scale is usually applied to all dimensions to avoid this trap. For instance:
    • Unity interval: the minimal factor value is subtracted from the given point's value, and the result is divided by the length of the value interval, giving results from 0 to 1.
    • Z-scale: the factor's average value is subtracted from the original value of the given point and then divided by the factor's standard deviation, which yields results distributed around 0 with a standard deviation of 1.
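The two scaling schemes above can be sketched directly from their definitions:

```python
import statistics

def unity_scale(values):
    """Unity-interval (min-max) scaling to [0, 1]:
    (v - min) / (max - min)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def z_scale(values):
    """Z-score scaling: (v - mean) / standard deviation, giving values
    distributed around 0 with a standard deviation of 1."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]
```

Applying either transform to every dimension before clustering prevents a large-valued dimension (e.g., income in euros) from dominating the distance computation.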

Summary about clustering

  • Besides those discussed, there are many other clustering methods; however, all of them, including the ones discussed here, require prior knowledge about the problem domain;
  • All clustering methods require setting some parameters that drive the algorithms. In most cases, setting the values might not be intuitive and may require extensive fine-tuning;
  • With proper data coding, clustering may provide significant value even in complex application domains, including medicine, customer behaviour analysis, and the fine-tuning of other data analysis algorithms;
  • In data analysis, clustering is used among the first methods to acquire the internal structure of the data before applying more informed methods.

K-Means

General audience classification iconGeneral audience classification iconGeneral audience classification icon

The first method discussed here is one of the most commonly used – K-Means. K-means clustering splits the initial set of points (objects) into groups using a distance measure: the distance from a given point to the centre of its group, which represents the group's prototype, the centroid. The result of clustering is N points grouped into K clusters, where each point is assigned a cluster index, meaning that the point is closer to its cluster's centroid than to the centroids of all other clusters. The distance measure employs Euclidean distance, which requires scaled or normalised data to avoid the dominance of a single dimension over others. The algorithm's steps are represented schematically in the following figure:

 K-means steps
Figure 42: K-means steps

In the figure:

  • STEP 1: Initial data set, where points do not belong to any of the clusters;
  • STEP 2: Initial cluster centres are selected randomly;
  • STEP 3: For each point, the closest cluster centre is selected, which becomes the point's marker;
  • STEP 4: The cluster mark is assigned to each point;
  • STEP 5: The initial cluster centres are refined to minimise the average distance from each cluster point to its centre. As a result, cluster centres might no longer be physical points; instead, they become imaginary ones;
  • STEP 6: The cluster marks of the points are updated;

Steps 4-6 are repeated until the cluster positions do not change or the changes are insignificant. The distance is measured using Euclidean distance:

  Euclidean distance
Figure 43: Euclidean distance

, where:

  • Data points – points {xi}, i = 1, …, N in multi-dimensional Euclidean space, i.e. each point is a vector;
  • K – the number of clusters, set by the user;
  • rnk – an indicator variable with values {0,1}, indicating whether data point xn belongs to cluster k;
  • mk – the centroid of the kth cluster;
  • D – the sum of all squared distances di from the points to their assigned cluster centroids;
  • The goal is to find values of the variables rnk and mk that minimise D.
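The iteration described above can be sketched as a minimal K-means for 2-D points (an illustration of the principle, not a production implementation; initial centroids are drawn from the data, and the iteration cap is an assumption):

```python
import random

def kmeans(points, k, iterations=100, seed=0):
    """Minimal K-means on 2-D points: random initial centroids, then
    alternating assignment (nearest centroid by squared Euclidean
    distance) and centroid refinement (mean of assigned points)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:  # STEP 3-4: assign each point to nearest centre
            idx = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                      + (p[1] - centroids[c][1]) ** 2)
            clusters[idx].append(p)
        # STEP 5: refine centres as the mean of each cluster's points
        new_centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)]
        if new_centroids == centroids:  # converged: positions stable
            break
        centroids = new_centroids
    return centroids, clusters
```

On two well-separated groups of points, the algorithm recovers one cluster per group after a few iterations.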

Example of initial data and assigned cluster marks with cluster centres after running the K-means algorithm:

  K-means example
Figure 44: K-means example with two clusters

Unfortunately, the K-Means algorithm does not possess an automatic mechanism to select the number of clusters K, i.e., the user must set it. An example of setting different numbers of cluster centres:

  K-means example
Figure 45: K-means example with three clusters
Elbow method

In K-means clustering, a practical method, the Elbow method, is used to select a particular number of clusters. The elbow method is based on finding the point at which adding more clusters does not significantly improve the model's performance. As explained, K-means clustering minimises the sum of squared errors (SSE): the squared distances between each point and its corresponding cluster centroid. Since the optimal number of clusters (NC) is not known initially, it is wise to increase NC iteratively. The SSE decreases as the number of clusters increases because the distances to the cluster centres also decrease. However, there is a point where the improvement in SSE diminishes significantly. This point is referred to as the “elbow” [17].

Steps of the method:

  1. Plot SSE against the number of clusters:
    • Computing the SSE for different values of NC, typically starting from NC=2 up to a reasonable maximum value (e.g., 10 or 20).
    • Plotting the SSE values on the y-axis and the number of clusters NC on the x-axis.
  2. Observe the plot:
    • As the number of clusters NC increases, the SSE will decrease because clusters become more specialised.
    • Initially, adding clusters will result in a significant drop in SSE.
    • After a certain point, the reduction in SSE will slow down, not showing a significant drop in the SSE.
  3. The “elbow” point:
    • The point on the curve where the rate of decrease sharply levels off forms the “elbow.”
    • This is where adding more clusters beyond this point doesn't significantly reduce SSE, indicating that the clusters are likely well-formed.
  4. Select optimal NC:
    • The value of NC at the elbow point is often considered the optimal number of clusters because it balances the trade-off between model complexity and performance.

Since the method requires iteratively running the K-means algorithm, which might be resource-demanding, a subset of the data might be employed to determine NC first, and then K-means can be run on the whole dataset.
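Given a precomputed table of SSE values per NC, the elbow can be picked programmatically. The largest-second-difference rule used below is one simple heuristic among several (an assumption for illustration, not a formal criterion):

```python
def elbow_k(sse_by_k):
    """Pick the 'elbow' from a dict {k: SSE}: the k where the drop in
    SSE improvement is largest (maximum second difference).
    Illustrative rule of thumb only; flat curves may have no clear elbow."""
    ks = sorted(sse_by_k)
    best_k, best_curvature = None, float("-inf")
    for prev, cur, nxt in zip(ks, ks[1:], ks[2:]):
        # improvement before cur minus improvement after cur
        curvature = (sse_by_k[prev] - sse_by_k[cur]) - (sse_by_k[cur] - sse_by_k[nxt])
        if curvature > best_curvature:
            best_k, best_curvature = cur, curvature
    return best_k
```

For an SSE curve that falls steeply up to NC=3 and flattens afterwards, the heuristic selects 3.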

Limitations:

  • The elbow point is not always obvious; in some cases, the curve may not show a distinct “elbow.”
  • The elbow method is heuristic and might not always lead to the perfect number of clusters, especially if the data structure is complex.
  • Other methods, like the Silhouette score, can complement the elbow method to help determine the optimal NC.
  Elbow example
Figure 46: Elbow example on two synthetic data sets

The figure above demonstrates more and less obvious “elbows”, where users could select the number of clusters equal to 3 or 4.

Silhouette Score

The Silhouette Score is a metric used to evaluate the quality of a clustering result. It measures how similar an object (point) is to its own cluster (cohesion) compared to other clusters (separation). The score ranges from −1 to +1, where higher values indicate better-defined clusters [18].

The Silhouette score considers two main factors for each data point:

  • Cohesion (a(i)) – the cohesion measure for the ith point is the average distance between the point and all other points in the same cluster. It measures the point's proximity to the other points in its cluster. A low a(i) value indicates that the point is tightly grouped with the other points in the same cluster;
  • Separation (b(i)) – the separation measure for the ith point estimates the average distance between the point and the points in the nearest neighbouring cluster, i.e., the cluster that is not its own but is closest to it. A large b(i) value indicates that the point is far away from the closest other cluster, meaning it is well separated;

The silhouette score for a point i is then calculated as:

  Silhouette score
Figure 47: Silhouette score

,where:

  • s(i) is the silhouette score for point i.
  • a(i) is the average distance from point i to all other points in the same cluster.
  • b(i) is the average distance from point i to all points in the nearest other cluster.
  • s(i) ≈ +1 indicates that point i is well clustered;
  • s(i) around 0 indicates that the point lies close to the boundary between clusters;
  • s(i) ≈ -1 indicates that point i was most probably wrongly assigned to its cluster;
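The per-point score s(i) = (b - a) / max(a, b) can be sketched as follows (an illustrative single-point computation; clusters are given as lists of coordinate tuples):

```python
import math

def silhouette(point, own_cluster, other_clusters):
    """Silhouette s(i) = (b - a) / max(a, b) for one point, where a is
    the mean distance to the other points of its own cluster and b the
    mean distance to the nearest other cluster."""
    others_in_own = [p for p in own_cluster if p != point]
    a = sum(math.dist(point, p) for p in others_in_own) / len(others_in_own)
    b = min(sum(math.dist(point, p) for p in cl) / len(cl)
            for cl in other_clusters)
    return (b - a) / max(a, b)
```

A point deep inside a tight cluster far from the others scores close to +1, as described above.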

Steps of the method:

  1. Plot silhouette score (SC) against the number of clusters:
    • Computing the SC for different values of NC, typically starting from NC=2 up to a reasonable maximum value (e.g., 10 or 20).
    • Plotting the SC values on the y-axis and the number of clusters NC on the x-axis.
  2. Observe the plot:
    • As the number of clusters NC increases, the SC shows different score values, which may or may not gradually decrease, as in the case of the “elbow” method.
    • The main goal is to observe the maximum SC value and the corresponding NC value;
  3. Select optimal NC:
    • The value of NC at the maximum SC value is often considered the optimal number of clusters because it balances the trade-off between model complexity and performance.

Limitations:

  • It may not perform well if the data does not have a clear structure or if the clusters are of very different densities or sizes.
  • The Silhouette score might not always match intuitive or domain-specific clustering insights.

An example is provided in the following figure:

  Silhouette example
Figure 48: Silhouette example on a synthetic data set

The user should look for the highest score, which in this case is for the 3-cluster option.

— MISSING PAGE —

Decision tree-based classification Models

General audience classification iconGeneral audience classification iconGeneral audience classification icon

Introduction

Classification assigns a class mark to a given object, indicating that the object belongs to the selected class or group. In contrast to clustering, the classes must pre-exist. In many cases, clustering might be a prior step to classification. Classification might be understood slightly differently in different contexts; however, in the context of this book, it describes the process of assigning marks of pre-existing classes to objects depending on their features.

Classification is used in almost all domains of modern data analysis, including medicine, signal processing, pattern recognition, different types of diagnostics and other more specific applications.

Within this chapter, two very widely used algorithm groups are discussed:

  • Decision trees - a fundamental set of methods and their variants are discussed;
  • Random forests - one of the best out-of-the-box methods widely used by data analysts;

Interpretation of the model output

The classification process consists of two steps: first, an existing data sample is used to train the classification model; then, in the second step, the model is used to classify unseen objects, thereby predicting which class each object belongs to. As with any other prediction, in classification, the output of the model is described by the rate of error, i.e., true predictions vs. wrong predictions. Usually, objects that belong to a given class are called positive examples, while those that do not belong are called negative examples.

Depending on a particular output, several cases might be identified:

  • True positive (TP) – the object belongs to the class and is classified as a class member.

Example: A SPAM message is classified as SPAM, or a patient classified as being in a certain condition is in fact, experiencing this condition.

  • False positive (FP) – the object that does not belong to the class is classified as a class member.

Example: A harmless message is classified as SPAM, or a patient who is not experiencing a certain condition is classified as being in this condition;

  • True negative (TN) – the object that is classified as not being a member of the class, in fact, is not a member;

Example: A harmless message is classified as harmless, or a patient not experiencing a certain condition is classified as not experiencing;

  • False negative (FN) – the object that belongs to the class is classified as not belonging to it.

Example: A SPAM message is classified as harmless, or a patient experiencing a certain condition is classified as not experiencing.

While training the model and counting the number of training samples falling into the mentioned cases, it is possible to describe its accuracy mathematically. Here are the most commonly used statistics:

  • Sensitivity = TP / (TP + FN)
  • Specificity = TN / (FP + TN)
  • Positive predictive value = TP / (TP + FP)
  • Negative predictive value = TN / (TN + FN)
  • Accuracy = (TP + TN) / (TP + FP + TN + FN)
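These statistics follow directly from the four confusion-matrix counts, as in this short sketch:

```python
def classification_stats(tp, fp, tn, fn):
    """Standard accuracy statistics computed from the confusion-matrix
    counts: true/false positives and true/false negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (fp + tn),
        "positive_predictive_value": tp / (tp + fp),
        "negative_predictive_value": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

For example, with 40 true positives, 10 false positives, 45 true negatives and 5 false negatives, the accuracy is (40 + 45) / 100 = 0.85.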

Training the models

The classification model is trained using the initial sample data, which is split into training and testing subsamples. Usually, the training is done using the following steps:

  1. The sample is split into training and testing subsamples;
  2. Training subsample is used to train the model;
  3. Test subsample is used to acquire accuracy statistics as described earlier;
  4. Steps 1 – 3 are repeated several times (usually at least 10 – 25) to acquire average model statistics;

The average statistics are used to describe the model.

The model's results on the test subsample depend on different factors: noise in the data, the proportion of classes represented in the data (how evenly the classes are distributed), and others outside the developer's reach. However, by manipulating the split of the sample, it is possible to provide more data for training and thereby expect better training results, since seeing more examples might lead to a better grasp of the class features. However, seeing too much might lead to a loss of generality and, consequently, reduced accuracy on test subsamples or previously unseen examples. Therefore, it is necessary to maintain a good balance between testing and training subsamples, usually 70% for training and 30% for testing, or 60% for training and 40% for testing. In real applications, if the initial data sample is large enough, a third subsample is used: a validation set used only once to acquire the final statistics and not provided to the developers. It is usually a small but representative subsample of 1-5% of the initial data sample.

Unfortunately, in many practical cases, the data sample is not large enough. Therefore, several testing techniques are used to ensure the reliability of the statistics while respecting the scarcity of data. The approach is called cross-validation; it reuses the training and testing subsets and saves data by not requiring a separate validation set.

Random sample

 Random sample
Figure 49: Random sample

In the random sample case, most of the data is used for training, and only a few randomly selected samples are used to test the model. The procedure is repeated many times to estimate the model's average accuracy. The random selection has to be made without replacement. If the selection is made with replacement, the method is called bootstrapping, which is also widely used and generally yields more optimistic estimates.
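The difference between selection without and with replacement can be sketched as follows (illustrative only; the 30% test fraction and the out-of-bag convention for bootstrapping are assumptions):

```python
import random

def random_split(data, test_fraction=0.3, seed=0):
    """Random train/test split WITHOUT replacement: shuffle, then cut."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

def bootstrap_sample(data, seed=0):
    """Bootstrapping: draw len(data) items WITH replacement for training;
    items never drawn form the held-out ('out-of-bag') test set."""
    rng = random.Random(seed)
    train = [rng.choice(data) for _ in data]
    test = [d for d in data if d not in train]
    return train, test
```

Repeating either procedure with different seeds and averaging the resulting accuracies gives the average model statistics described earlier.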

K-folds

 K-folds
Figure 50: K-folds

This approach splits the training set into smaller sets called splits (in the figure above, there are three splits). Then, for each split, the following steps are performed:

  • The model is trained using k-1 folds; in the figure above, every split (row) is divided into k folds, where, split by split, the ith fold is used for testing and the remaining k-1 folds for training;
  • The model's accuracy is assessed on the remaining fold, iteratively for each split;

The overall performance for the k-fold cross-validation is the average performance of the individual performances computed for each split. It requires extra computing but respects data scarcity, which is why it is used in practical applications.
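The fold rotation can be sketched as an index generator (illustrative; the round-robin assignment of indices to folds is one simple convention):

```python
def kfold_indices(n, k):
    """Partition indices 0..n-1 into k folds; each fold serves once as
    the test set while the remaining k-1 folds form the training set."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test
```

Training and evaluating the model once per yielded (train, test) pair and averaging the k accuracies gives the overall k-fold performance.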

One out

 One out
Figure 51: One out

This approach splits the training set in the same way as the previous methods described here (in the figure above, there are three splits). Then, for each split, the following steps are performed:

  • The model is trained using n-1 samples, and only one sample is used for testing the model's performance;
  • The overall performance of one-out cross-validation is the average of the individual performances computed for each iteration.

This method requires many iterations due to the limitations of the testing set.


Introduction to Time Series Analysis

General audience classification iconGeneral audience classification iconGeneral audience classification icon

As discussed previously in the data preparation chapter, a time series usually represents the dynamics of some process; therefore, the order of the data entries has to be preserved. As emphasised there, a time series is simply a set of data entries, usually events, arranged by a time marker, typically in the order in which the events occur or are recorded.

In the context of IoT systems, there might be several reasons why time series analysis is needed. The most common ones are the following:

  • Process dynamics forecasting for higher-performing decision support systems. An IoT system, coupled with appropriate cloud computing or other computing infrastructure, can provide not only a rich insight into the process dynamics but also a reliable forecast using regression algorithms like the ones discussed in the regressions section or more advanced like autoregressive integrated moving average (ARIMA) and seasonal ARIMA (SARIMA) [19] [20].
  • Anomaly detection is one of the highly valued features of IoT systems. In its essence, anomaly detection is a set of methods enabling the recognition of unwanted or abnormal behaviour of the system over a specific time period. Anomalies might be expressed in data differently:
    • A certain event in time: for instance, a measurement jumps over a defined threshold value. This is the simplest type of anomaly, and most control systems cope with it by setting appropriate threshold values and alerting mechanisms;
    • Change of a data fragment's shape: this might happen in technical systems where a typical response to control inputs has changed to some shape that is not anticipated or planned. A simple example is an engine's response to being turned on and reaching typical rpm values. Due to overloads, worn-out mechanics or other reasons, the response might take too long, signalling that the device has to be repaired;
    • Event density: in many technical systems, behaviour is seasonal or cyclic. Changes in the periods and their absolute values, or in the response shapes within a period, are excellent predictors of current or future malfunctioning. Recognition of typical period and response shapes over time is therefore of high value for predictive maintenance, process control, and other applications of IoT systems;
    • Event value distribution: in most measuring systems, measurements are distributed around some actual value due to the imperfection of sensors or systems, providing an estimate of the true value with some variance. Due to mechanical wear, the variance or the value distribution might change over time, which is a good indicator and predictor of malfunctioning or possible failures of the system.

Due to this diversity, a wide range of algorithms might be used in anomaly detection, including those covered in previous chapters: for instance, clustering for finding typical response clusters, regression for estimating normal future states and measuring the distance between forecast and actual measurements, and classification for labelling normal or abnormal states. An excellent example of a classification-tree-based method for anomaly detection is the Isolation Forest [21].

  • Understanding of system dynamics, where the system owner is interested in having insightful information on the system functioning to make good decisions on its control or further development. Typical applications are system monitoring, the production of dashboards, different industrial research, and the study of system prototypes.

While most of the methods covered here might be employed in time series analysis, this chapter outlines the anomaly detection and classification cases through the example of an industrial cooling system.
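As a minimal illustration of the simplest anomaly type listed above (a measurement crossing a defined threshold), a stdlib-only Python sketch might look as follows; the values and the threshold band are hypothetical:

```python
def threshold_alerts(series, low, high):
    """Flag the indices where a measurement escapes the [low, high] band,
    the simplest point-anomaly detector."""
    return [i for i, v in enumerate(series) if v < low or v > high]

# Freezer temperatures (degrees C): normal maintenance around -18, a defrost
# spike near -5, and one anomalous jump above 0 (a power interruption).
temps = [-18.2, -17.9, -5.0, -18.1, 2.5, -18.0]
print(threshold_alerts(temps, low=-25.0, high=0.0))  # [4]
```

Anything more subtle than a point anomaly (shape changes, density changes, distribution changes) requires the richer methods discussed below.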

A cooling system case

A given industrial cooling system has to maintain a specific temperature mode of around −18 °C. Due to the specifics of the technology, it goes through a defrost cycle every few hours to avoid ice deposits, which would lead to inefficiency and potential malfunction. At some point, however, a relatively short power supply interruption was noticed, which needs to be recognised in the future so that it can be reported appropriately. The logged data series is depicted in the following figure:

Figure 52: Cooling system

It is easy to notice that there are two normal behaviour patterns, defrost (the small spikes) and temperature maintenance (the data between spikes), and one anomaly: the high spike.

One possible approach to building a classification model is to use K-nearest neighbours (KNN): whenever a new data fragment is collected, it is compared to the closest stored fragments, and a simple majority principle determines its class. In this example, three behaviour patterns have to be recognised; therefore, a sample collection must be composed for each pattern. This might be done by hand since, in this case, the time series is relatively short.

Examples of the collected patterns (defrost on the left and temperature maintenance on the right):

Figure 53: Example patterns

Unfortunately, in this example, only one anomaly is present:

Figure 54: Anomaly pattern

To overcome data scarcity, a data augmentation technique might be applied, in which a number of additional samples are produced from the given data sample. Here, this is done by applying Gaussian noise and randomly changing the length of the sample (for the sake of the example, the original anomaly sample is not used for the model). Altogether, the initial data collection might be represented by the following figure:
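The augmentation step described above (Gaussian noise plus random length changes) can be sketched as follows (a stdlib-only illustration; `augment` and its parameters are hypothetical, not taken from the book's actual code):

```python
import random

def augment(sample, n_copies, noise_sd=0.3, seed=0):
    """Produce n_copies variants of one time series by adding Gaussian
    noise and randomly trimming the length, as described above."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_copies):
        # Randomly shorten the series by up to ~20% from either end.
        max_cut = max(0, len(sample) // 5)
        start = rng.randint(0, max_cut)
        end = len(sample) - rng.randint(0, max_cut)
        # Add zero-mean Gaussian noise to every remaining point.
        out.append([v + rng.gauss(0.0, noise_sd) for v in sample[start:end]])
    return out

anomaly = [-18, -17, 3, 4, 2, -17, -18]   # hypothetical anomaly fragment
copies = augment(anomaly, n_copies=5)
print(len(copies))  # 5
```

Each copy differs from the original in both values and length, which mimics the variability a real system would produce.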

Figure 55: Data collection

One might notice that:

  • Samples of different patterns differ in length;
  • Samples of the same pattern also differ in length;
  • The phenomena of interest (the spikes) are located at different positions within the samples and differ slightly in shape.

All of the issues above complicate the calculation of distances between examples, since a simple point-by-point comparison of the data will produce misleading distance values. To avoid this, a Dynamic Time Warping (DTW) metric has to be employed [22]. For practical implementations in Python, it is highly recommended to consult the tslearn library documentation [23].
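For intuition, the classic dynamic-programming formulation of DTW can be sketched in a few lines of pure Python (an illustrative implementation, not the optimised one found in tslearn):

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW: the cost of the cheapest monotone
    alignment between two sequences of possibly different lengths."""
    INF = float("inf")
    n, m = len(a), len(b)
    # d[i][j] = best cost of aligning a[:i] with b[:j]
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch a
                                 d[i][j - 1],      # stretch b
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# A shifted/stretched copy of a spike aligns at zero cost under DTW...
print(dtw_distance([0, 0, 5, 0], [0, 5, 5, 0, 0]))  # 0.0
# ...while a plain point-by-point comparison would report a large mismatch.
```

In practice, tslearn provides DTW-based distances and a DTW-enabled k-nearest-neighbours classifier, so a hand-rolled version like this is mainly useful for understanding the metric.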

Now, once the distance metric is selected and the initial dataset is produced, the KNN might be implemented. Given a “query” data sequence, the closest stored sequences can be determined using DTW. As an example, a simple query is depicted in the following figure:

Figure 56: Single query

For the practical implementation, the tslearn package is used. In the following example, 10 randomly selected data sequences are produced from the initial data set. While the data set is the same, none of the selected sequences is actually “seen” by the model, due to the randomness. The following figure shows the results:

Figure 57: Multiple test queries

As might be noticed, the query samples (black) are rather different from the ones found to be “closest” by the KNN. Nevertheless, thanks to the advantages of DTW, the classification is done perfectly. The same idea might be used for detecting unknown anomalies by setting a similarity threshold on the DTW distance, for classifying known anomalies as shown here, or even for simple forecasting.

Hints for Further Readings on AI

General audience classification iconGeneral audience classification iconGeneral audience classification icon

This chapter has covered some of the most widely used data analysis methods applicable to sensor data analysis, which is typical for IoT systems. However, this is only the surface of the exciting world of data analytics and AI. Besides the well-known online learning platforms, the authors suggest the following online resources for diving deeper into this world.

Useful Python libraries
  • SciKit-learn: a very useful Python library for general data analysis and fundamental AI algorithms, complemented by detailed documentation and example code snippets;
  • TSlearn: a time series library providing very insightful comments and documentation on the different algorithms and approaches widely used in time series analysis;
  • PyTorch and Keras: community pages for those who seek deep-learning resources and more complex models than those covered in this chapter;
  • SciPy: a very rich library for statistical models in Python.
Useful tools
  • Orange: a visual programming tool for data analysis and visualisation;
  • Weka: a ready-to-use data analysis and visualisation tool.

Cybersecurity in IoT Systems

There is widespread adoption of IoT systems and services in various industries, such as health care, agriculture, smart manufacturing, smart energy systems, intelligent transport systems, logistics (supply chain management), smart homes, smart cities, and security and safety. The primary goal of incorporating IoT into existing systems in various industries is to improve productivity and efficiency. Despite the enormous advantages of integrating IoT into existing systems in various industries, including critical infrastructure, there are concerns about the security vulnerabilities of IoT systems. Businesses are increasingly anxious about the possible risks introduced by IoT systems into their existing infrastructures and how to mitigate them.

One of the weaknesses of IoT devices is that they can easily be compromised. This is because some manufacturers of IoT devices fail to incorporate security mechanisms into them, resulting in security vulnerabilities that can easily be exploited. Manufacturers and developers often focus on device usability and on adding features that satisfy the needs of users, while paying little or no attention to security measures. Another reason IoT device manufacturers and developers pay little attention to security is that they are often focused on getting the device to market as soon as possible. Also, some IoT users focus mainly on the price of the devices and ignore security requirements, incentivising manufacturers to minimise the cost of the devices while trading off their security.

Also, IoT hardware constraints make it difficult to implement reliable security mechanisms, leaving the devices vulnerable to cyber-attacks. Since IoT devices are powered by batteries with limited energy capacities, they use low-power computing and communication systems, making it hard to implement strong security mechanisms. Using power-hungry computing and communication systems that would permit the incorporation of reliable security mechanisms would significantly reduce the lifetime of the device (the time from when the device is deployed until the energy stored in its battery is completely drained). As a result, manufacturers and developers tend to trade off the security of the device against its reliability and lifetime.

A successful malicious attack on an IoT system could result in data theft, loss of data privacy, and the further compromise of other critical systems connected to the IoT system. IoT systems are increasingly targeted due to the relative ease with which they can be compromised. They are also increasingly incorporated into critical infrastructure such as energy, water, transportation, health care, education, communication, security, and military infrastructures, making them attractive targets, especially during conventional, hybrid, and cyber warfare. In such cases, the goal of the attackers is not only to compromise the IoT systems themselves but to exploit their vulnerabilities with the aim of compromising or damaging critical infrastructures. Some examples of attacks that have been orchestrated by exploiting vulnerabilities of IoT devices include:

  • The Mirai Botnet attack: An IoT botnet (a network of IoT devices, each of which runs bots) was used to conduct a massive Distributed Denial of Service (DDoS) attack against the internet's domain name system (DNS) provider Dyn in October 2016. The traffic from the IoT botnet, which included devices such as cameras and DVR players, was coordinated to bombard Dyn's DNS servers until they became overwhelmed and collapsed under the strain. The assault, sustained for several hours, disrupted the services of websites such as Twitter, the Guardian, Netflix, Reddit, CNN and many others in Europe and the US.
  • The Stuxnet attack: One of the most well-known IoT-related attacks, designed to target the Iranian uranium enrichment plant in Natanz, Iran. The attack compromised the Siemens Step7 software running on a Windows operating system, giving the malicious software (a worm) access to the industrial programmable logic controllers. The attack resulted in the damage of several uranium centrifuges, demonstrating the extent to which IoT-based attacks could damage energy systems and critical infrastructure.
  • The Jeep Hack: This was a test attack conducted by a group of researchers in July 2015 on a Jeep SUV. They successfully took control of the vehicle by exploiting a firmware update vulnerability. They demonstrated that this kind of attack can be used to control the speed of the vehicle and also steer it off the road. Therefore, as more IoT sensors are added to vehicles, there is a serious risk that they can be exploited to cause a massive attack on cars, which could result in massive accidents. This kind of vulnerability can be exploited for terror attacks or targeted killings.
  • Cold in Finland: Cybercriminals conducted an IoT-based attack on heating systems in the Finnish city of Lappeenranta by turning off the heating system. They also conducted a DDoS attack on the heating infrastructure, forcing the heating controllers to reboot the system repeatedly and preventing the heating system from ever turning on. This is a serious attack, given the cold temperatures in Finland during the Winter season. A similar kind of attack may be conducted against air conditioning systems in a hot environment, which may cause serious problems for inhabitants. Thus, IoT systems may be leveraged to conduct attacks on critical civilian infrastructures to disrupt the proper functioning of society.
  • The Verkada hack: This attack was conducted against Verkada, a cloud-based video surveillance service provider. The attackers compromised the privacy of Verkada's customers (including factories, hospitals, schools, and prisons) by gaining access to live feeds from about 150,000 cameras. This shows the risk that a successful compromise of an IoT cloud/fog computing service provider poses to its customers, especially those providing critical services for society.

The attacks mentioned above are just a few examples of how cybercriminals may exploit the vulnerabilities of IoT devices to compromise and disrupt services in other sectors, especially critical infrastructure. These examples demonstrate the urgent need to incorporate security mechanisms into IoT infrastructures, especially those integrated with critical infrastructures. They also indicate that the threat posed by IoT is real: it can seriously disrupt the functioning of society, result in huge financial and material losses, and may even cost lives. Thus, if serious attention is not given to IoT security, the IoT will soon be an Internet of Threats rather than an Internet of Things.

IoT security, therefore, comprises the design and operational strategies intended to protect IoT devices and other systems against cyberattacks. It includes the various techniques and systems developed to ensure the confidentiality and integrity of IoT data and the availability of IoT data and systems. These strategies and systems are designed to prevent IoT-based attacks and to ensure the security of IoT infrastructures. In this chapter, we discuss IoT security concepts, IoT security challenges, and techniques that can be deployed to protect IoT data and systems from being compromised by attackers and used for malicious purposes.

Cybersecurity concepts

IoT designers and engineers need to have a good understanding of cybersecurity concepts. This will help them understand the various kinds of attacks that can be conducted against IoT devices and how to implement security mechanisms on the devices to protect them against cyber attacks. In this section, we discuss some cybersecurity concepts that are required to understand IoT security.

What is cybersecurity

Cybersecurity refers to the technologies, strategies, and practices designed to prevent cyberattacks and to mitigate the risk such attacks pose to information systems and other cyber-physical systems. It is sometimes referred to as information technology security, as it involves the design and implementation of technologies, protocols, and policies to protect information systems against data theft, illegal manipulation, and service interruption. The main goal of cybersecurity systems is to protect the hardware and software systems, networks, and data of individuals and organisations against attacks that may breach the confidentiality, integrity, or availability of these systems.

After understanding what cybersecurity is, it is also important to understand what a cyberattack is. A cyberattack can be considered any deliberate compromise of the confidentiality, integrity, or availability of an information system: that is, unauthorised access to a network, computer system, or digital device with the malicious intention of stealing, exposing, altering, disabling, or destroying data, applications, or other assets. A successful cyberattack can cause a lot of damage to its victims, ranging from loss of data to financial losses. An organisation whose systems have been compromised could also lose its reputation and be forced to pay for damages incurred by its customers as a result of the attack.

The question is why we should be worried about cyberattacks, especially in the context of IoT. The widespread adoption of IoT to improve business processes and personal well-being has exponentially increased the options available to cybercriminals for conducting attacks, increasing cybersecurity-related risks for businesses and individuals. This underscores the need for IoT engineers, IT engineers, and other non-IT employees to understand cybersecurity concepts.

The confidentiality, integrity and availability (CIA) triad

The CIA triad is a conceptual framework that combines three cybersecurity concepts, confidentiality, integrity, and availability, to provide a simple and complete checklist for implementing, evaluating, and improving cybersecurity systems. That is, they form a set of requirements that must be satisfied by any well-designed cybersecurity system to ensure the confidentiality, integrity, and availability of information systems. The triad provides a powerful approach for identifying vulnerabilities and threats in information systems and then implementing appropriate technologies and policies to protect them from being compromised. It is a high-level framework that guides organisations and cybersecurity experts when designing, implementing, evaluating, and auditing information systems. In the following paragraphs, we briefly discuss its elements.

Confidentiality

It involves the technologies and strategies designed to ensure that sensitive data is kept private and not accessible to unauthorised individuals. That is, sensitive data should be viewed only by authorised individuals within the organisation and kept private from unauthorised individuals. Some of the data collected by IoT sensors is very sensitive, and it is required that it is kept private and should not be viewed by unauthorised individuals with malicious intentions. Data confidentiality involves a set of technologies, protocols, and policies designed and implemented to protect data against unintentional, unlawful, or unauthorized access, disclosure, or theft. To ensure data confidentiality, it is important to answer the following questions:

  • Who should be able to view the data or have access to the data?
  • Are there laws, regulations, or contracts that require the data to be confidential?
  • Are there certain conditions under which the data may be used or disclosed?
  • How sensitive is the data, and what are the consequences that may be faced if unauthorised individuals access the data?
  • How useful can the data be to unauthorised individuals (e.g., cybercriminals) if they have access to it?

In order to ensure the confidentiality of the data stored in computer systems and transported through computer and telecommunication networks, some security guidelines should be followed:

  • Encrypt sensitive data during storage in computer systems and transportation through computer and telecommunication networks. The process of encryption renders the data unreadable or unintelligible to unauthorised persons, and only those who possess the appropriate keys can decrypt and access the data. By encrypting the data, it is kept confidential, and unauthorised individuals cannot access it unless the encryption scheme used is compromised.
  • Proper management of data access is needed to ensure that only authorised individuals who have the proper privileges can access the data. Users should always authenticate themselves using strong passwords, and where possible, multi-factor (e.g., two-factor) authentication should be used. Also, there should be a regular review of the access rights or privileges of users, and unnecessary rights or privileges should be revoked.
  • The physical location of hardware systems and paper documents should be properly secured. Just as it is very important to control remote access to digital systems, there should also be thorough control of the access to the physical location where the hardware and other critical assets are stored. Even paper documents should be properly sorted and stored in secure locations, and access to those locations must be controlled.
  • Any data, hardware devices, and paper documents that are no longer needed should be securely disposed of as soon as possible.
  • When collecting data, care must be taken to ensure that its privacy or confidentiality is not compromised, especially for sensitive data. One of the best ways to avoid the risks that come with handling sensitive data is simply not to collect it in the first place, whenever it is possible to do without it.
  • Sensitive data should be used only when necessary; otherwise, it should not be used at all to preserve its confidentiality.
  • Appropriate security systems should be implemented to ensure the confidentiality of data. Such measures include access control systems (e.g., firewalls), threat management systems, and attack detection and prevention systems.
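As a concrete illustration of the authentication guideline above, passwords should never be stored in plain text but only as salted, slow hashes. A minimal sketch using Python's standard library follows (the iteration count is an illustrative value, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store only a salted, slow hash of the password, never the
    password itself; 100_000 PBKDF2 iterations is an illustrative value."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```

Even if the stored digests leak, the attacker still has to brute-force each password; the per-user salt prevents precomputed (rainbow-table) attacks.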

Integrity

Integrity in cybersecurity involves the technologies and strategies designed to ensure that data is not modified or deleted by unauthorised persons during storage or transportation. Maintaining the integrity of the data is essential to ensuring that it is consistent, accurate, and reliable. In the context of IoT, integrity is the assurance that the data collected by the IoT sensors is not illegally altered during transportation, processing, or storage in a way that would make it incomplete, inaccurate, inconsistent, or unreliable; the data may be modified only by those authorised to do so. The collected data must be kept complete, accurate, consistent, and safe throughout its entire lifecycle in the following ways [24]:

  • The data must be maintained in its full form with no data elements filtered, truncated or lost to ensure that the data is complete.
  • The accuracy of the data is preserved by ensuring that the data is not altered or aggregated either by human error or malicious attacks in such a way that affects the results of further processing and analysis of the data.
  • The consistency of the data should be maintained by ensuring that the data is unchanged regardless of how or how often it's accessed and no matter how long it's stored.
  • The safety of the data should be ensured by making sure that it is securely maintained and accessed only by authorised applications and individuals. Data security methods such as authentication, authorisation, encryption, and backups can be used to ensure that the data is not altered or destroyed by unauthorised applications or individuals.

The IoT system designers, manufacturers, developers, and operators should ensure that the data collected is not lost, leaked, or corrupted during transportation, processing, or storage. As the data collected by IoT sensors is growing rapidly and lots of companies are depending on the results from the processing of IoT data for decision-making, it is very important to ensure the integrity of the data. It must be ensured that the IoT data collected is complete, accurate, consistent and secure throughout its lifecycle, as compromised data is of little or no interest to organisations and users. Also, data losses due to human error and cyberattacks are undesirable for organisations and users. Physical and logical factors can influence the integrity of the data.

  • Physical integrity: It covers the various ways in which the integrity of the data can be compromised during transportation, storage, and retrieval. During transportation, parts of the data could be lost due to packet losses occurring at the network equipment or packet errors caused by disturbances in the transmission media. Data could also be lost due to physical damage to the storage or computing devices. The integrity of the data could be compromised for the following reasons:
    • Hardware failures and faults;
    • Design failures and negligence;
    • Natural failures resulting from the deterioration of the hardware device (e.g., corrosion);
    • Power outages;
    • Natural disasters;
    • Environmentally induced failures resulting from extreme environmental conditions such as high temperatures;
    • Cyberattacks designed to cause hardware or power failures (e.g., energy depletion attacks).

The physical integrity of data could be enforced by:

  • Implementing redundancy in data storage systems to ensure that the failure of one storage device will not result in data losses;
  • Implementing a battery-protected write cache;
  • Deploying storage systems with advanced error-correcting memory devices;
  • Implementing clustered and distributed file systems;
  • Implementing error-detection algorithms to detect any changes in the data during transportation;
  • Deploying backups in different physical locations;
  • Implementing network protection mechanisms to ensure that the data is not corrupted or lost during transportation.
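The error-detection point above can be strengthened with a keyed digest (HMAC), which detects both accidental corruption and deliberate tampering in transit, provided the key remains secret. A minimal stdlib-only Python sketch (the key and payload are hypothetical):

```python
import hashlib
import hmac

SECRET = b"shared-device-key"   # hypothetical key shared with the gateway

def tag(message):
    """Attach a keyed digest so any in-transit change becomes detectable."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message, received_tag):
    """Recompute the digest and compare in constant time."""
    return hmac.compare_digest(tag(message), received_tag)

reading = b'{"sensor": "t1", "temp": -18.2}'
t = tag(reading)
print(verify(reading, t))                         # True
print(verify(b'{"sensor": "t1", "temp": 0}', t))  # False
```

Unlike a plain checksum or CRC, an attacker who alters the message cannot forge a matching tag without knowing the key.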

IoT system designers, manufacturers, and developers can adopt a variety of technologies and policies to ensure the physical integrity of the data all the way from the IoT devices, through the communication infrastructure, to the fog/cloud data centres.

  • Logical integrity: Even when there are no hardware issues, there can still be unintended or malicious alterations in the data or data losses during transportation, storage, and retrieval that could alter its integrity. Logical integrity can be compromised by software design flaws and bugs, poor network configurations, as well as human error and cyberattacks. Some of the data integrity risks include:
    • Data may be deleted, wrongly entered, and illegally altered in the storage system.
    • Data may be damaged, lost, or illegally altered during transportation.
    • Data may be stolen, damaged, or illegally altered by a malicious hacker after a successful cyberattack.
    • Data may be stolen, damaged, lost, or illegally altered due to poor network and infrastructure configuration.

Enforcing data integrity is a complex task that requires a careful integration of cybersecurity tools, policies, regulations, and people. Some of the ways that data integrity can be enforced include but are not limited to the following strategies:

  • There should be strict control of access to the data using effective authentication and authorisation tools to ensure that unauthorised persons do not manipulate data.
  • Logs on the actions performed by users should be created and carefully audited to keep track of the changes made by users.
  • Data should be encrypted during transportation and storage to ensure that it is not altered or damaged during transportation or storage.
  • Data protection mechanisms should be used to prevent data losses, e.g., data should be backed up regularly, and error detection and correction communication algorithms should be used.
  • When accessing data to process or analyse it, necessary steps should be taken to ensure that it is not corrupted, lost, or damaged, especially when it is accessed by third parties for analysis.
  • The employees and other stakeholders should be trained to handle the data in such a way that its integrity is not lost, altered, or damaged.

Availability

The computing, communication, and data storage and retrieval systems should be accessible whenever needed. Availability, in the context of cybersecurity, is the ability of authorised users or applications to have reliable access to information systems at any time they require it. It is one of the elements of the CIA triad and thus one of the requirements for designing secure and reliable information and communication systems such as the IoT. Given that IoT nodes are being integrated into critical infrastructure and into the existing infrastructure of companies and individuals, long downtimes cannot be tolerated, making availability a critical requirement. A loss of availability could result from any of the following causes:

  • Hardware failures, which may result from natural deterioration;
  • Software failures, which may result from software design flaws or bugs;
  • Cyberattacks, e.g., DoS/DDoS or, in the case of an IoT node, energy depletion attacks;
  • Power failures, which may result from power outages or, in the case of IoT nodes, depletion of the energy stored in the battery;
  • Data damage, corruption, or loss during transportation, storage, or retrieval that prevents authorised users and applications from accessing the data when needed;
  • Bandwidth bottlenecks and link failures in the communication network that interfere with the transfer of data to the users and applications that need it;
  • Downtimes resulting from the failure, misbehaviour, or malfunctioning of the cybersecurity systems themselves;
  • Damage to the computing, communication, and storage infrastructure resulting from natural disasters, theft, vandalism, political unrest, or conflict.

Some of the ways to ensure the availability of information systems and data include the following:

  • Creating data backups and storing the backup systems in different geographical locations.
  • Ensuring effective operation and maintenance processes.
  • Ensuring effective and efficient energy sources and energy storage systems.
  • Minimising the energy consumption of IoT nodes to increase the lifetime of the devices.
  • Resolving software design flaws and bugs as quickly as possible to minimise downtime.
  • Carefully securing the physical locations where hardware infrastructure is stored.
  • Using effective authentication and authorisation mechanisms to ensure that authorised users have access to the systems when needed.
  • Carefully implementing and configuring cybersecurity systems so that performance degradation and downtimes resulting from their malfunctioning are minimised.
  • Ensuring that the networking systems are properly configured with appropriate security mechanisms and that networking failures are quickly resolved.
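
The first measure above, keeping verifiable backups, can be sketched in a few lines of Python. This is a minimal illustration (the file name and payload are hypothetical): storing a SHA-256 digest alongside the payload lets the restore step detect corruption instead of silently returning damaged data, which supports both availability and integrity.

```python
import hashlib
import json
import os
import tempfile

def backup(data: bytes, path: str) -> None:
    # Store the payload together with its SHA-256 digest for later verification.
    record = {"sha256": hashlib.sha256(data).hexdigest(), "payload": data.hex()}
    with open(path, "w") as f:
        json.dump(record, f)

def restore(path: str) -> bytes:
    with open(path) as f:
        record = json.load(f)
    data = bytes.fromhex(record["payload"])
    # Fail loudly on corruption instead of silently returning damaged data.
    if hashlib.sha256(data).hexdigest() != record["sha256"]:
        raise ValueError("backup corrupted")
    return data

path = os.path.join(tempfile.gettempdir(), "sensor_backup.json")
backup(b"sensor readings", path)
print(restore(path) == b"sensor readings")  # True
```

In a real deployment, copies of such checksummed backups would be replicated to storage in different geographical locations, as the list above recommends.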

Some commonly used cybersecurity terms

In order to understand advanced cybersecurity concepts and technologies, it is important to first have a good understanding of some basic cybersecurity terms, which we present below.

Cybersecurity risk: It is the probability of being exposed to a cybersecurity attack, or of any of the cybersecurity requirements of confidentiality, integrity, or availability being violated, which may result in data theft, leakage, damage or corruption. It may also result in service disruption or downtime that may cause the company to lose revenue and damage its infrastructure. An organisation that falls victim to a successful cyber-attack may lose its reputation and be compelled to pay damages to its customers or to pay a fine to regulatory agencies. Thus, a cybersecurity risk is the potential loss that an organisation or individual may experience as a result of successful cyberattacks or failures of the information systems, which may result in the loss of data, customers, revenues, and resources (assets and financial losses).

Threat: It is an action performed with the intention of violating any of the cybersecurity requirements, which may result in data theft, leakage, damage, corruption, or loss. The action performed may either disclose the data to unauthorised individuals or alter the data illegally. It may equally result in the disruption of services due to system downtime, system unavailability, or data unavailability. Actions that could be considered threats include the infection of devices with viruses or malware, ransomware attacks, denial of service, phishing attacks, social engineering attacks, password attacks, SQL injection, data breaches, man-in-the-middle attacks, energy depletion attacks (in the case of IoT devices), and many other attack vectors. Cybersecurity threats can originate from threat actors such as nation states, cybercriminals, hacktivists, disgruntled employees, terrorists, and spies, as well as from design errors, misconfiguration of systems, software flaws or bugs, errors by authorised users, and natural disasters [25].

Cybersecurity vulnerability: It is a weakness, flaw, or error found in an information system or a cybersecurity system that cybercriminals could exploit to compromise the security of an information system. Many cybersecurity vulnerabilities are known, and new ones are continually being discovered; the most common include SQL injection, buffer overflows, cross-site scripting, security misconfiguration [26], weak authentication and authorisation mechanisms, and unencrypted data during transportation or storage. Security vulnerabilities can be identified using vulnerability scanners and by performing penetration testing. When a vulnerability is detected, the necessary steps should be taken to eliminate it or to mitigate its risk.

Cybersecurity exploit: A cybersecurity exploit is any of the various ways that cybercriminals take advantage of cybersecurity vulnerabilities to conduct cyberattacks in order to compromise the confidentiality, integrity, and availability of information systems. The exploit may involve the use of advanced techniques (e.g., commands, scripting, or programming) and software tools (proprietary or open-source) to identify and exploit vulnerabilities with the intention of stealing data, disrupting services, damaging or corrupting data, or hijacking data or systems in exchange for money.

Attack vector: It is any of the various ways that attackers may compromise the security of an information system, such as computing, communication, or data storage and retrieval systems. Some of the common attack vectors include:

  • Phishing attacks,
  • Malicious email attachments,
  • Credential theft using various social engineering techniques,
  • Account takeover to steal or damage data and other resources and to conduct further attacks,
  • Cryptanalysis of encrypted data,
  • Man-in-the-middle attacks,
  • Cross-site scripting,
  • SQL injection,
  • Insider threats,
  • Vulnerability exploits (e.g., vulnerabilities in unpatched software, servers, and operating systems),
  • Browser-based attacks and application compromise,
  • Brute-force attacks to compromise passwords,
  • Using malware to take over devices, gain unauthorised access, and cause damage to data or information systems,
  • Exploiting the presence of open ports.
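
To see why weak passwords make brute force such an effective attack vector, consider the sketch below. The "leaked" hash and the four-character lowercase password are hypothetical; the point is that such a tiny search space (26^4 ≈ 457,000 candidates) can be exhausted in well under a second on commodity hardware.

```python
import hashlib
import itertools
import string

# Hypothetical leaked hash of a weak 4-character lowercase password.
target = hashlib.sha256(b"pass").hexdigest()

def brute_force(target_hash, length=4, alphabet=string.ascii_lowercase):
    # Try every combination of the alphabet until one hashes to the target.
    for combo in itertools.product(alphabet, repeat=length):
        guess = "".join(combo)
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

print(brute_force(target))  # prints "pass"
```

Longer passwords drawn from a larger alphabet grow the search space exponentially, which is why password length and complexity policies, together with slow password-hashing functions, are listed among the standard countermeasures.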

The various approaches to eliminate attack vectors to reduce the chances of a successful attack include the following [27]:

  • Encryption of data during transportation, storage, and retrieval.
  • Designing effective security policies and training and compelling employees and stakeholders to apply them.
  • Patching security vulnerabilities by regularly updating the software and hardware and checking the various system configurations to identify any vulnerabilities.
  • Implementing secure network access mechanisms.
  • Performing regular security audits in order to identify and eliminate threats and vulnerabilities before cybercriminals exploit them.
  • Deploying threat (intrusion) detection and prevention systems.

Attack surface: An attack surface is a location or set of possible attack vectors that cybercriminals can target or use to compromise the confidentiality, integrity, and availability of data and information systems. Organisations and individuals should always strive to minimise their attack surfaces, as the smaller the attack surface, the smaller the likelihood that their data or information systems will be compromised. So, they have to constantly monitor their attack surfaces in order to detect and block attacks as soon as possible and to minimise the potential risk of a successful attack. Some of the common attack surfaces are poorly secured devices (e.g., computers, mobile phones, hard drives, and IoT devices), weak passwords, a lack of email security, open ports, and a failure to patch software, which offers an open backdoor for attackers to target and exploit users and organisations. Another common attack surface is weak web-based protocols, which hackers can exploit to steal data through man-in-the-middle (MITM) attacks. There are two categories of attack surfaces [28]:

  • Digital attack surface: This kind of attack surface consists of all the software and hardware systems found within the infrastructure of an organisation. These include applications, code, ports, servers, websites, and sensor devices (in the case of IoT systems). With the deployment of tens of millions to hundreds of millions of IoT devices, the attack surfaces created by IoT infrastructure, from the sensor layer through the networking infrastructure to the fog/cloud computing infrastructure, are huge.
  • Physical attack surface: This kind of attack surface consists of all endpoint devices that an attacker can gain physical access to, such as desktop computers, hard drives, laptops, mobile phones, Universal Serial Bus (USB) drives, and IoT devices (in the case of IoT systems). Some physical attack surfaces include carelessly discarded hardware that contains user data and login credentials, user passwords that are written on pieces of paper, and unauthorised access to the physical location where sensitive assets are stored.

Effective attack surface management provides the following advantages to organisations and individuals:

  • Identifying vulnerabilities and eliminating them.
  • Mitigating the risk posed by cybersecurity threats.
  • Identifying new attack surfaces that are created as they expand their infrastructure and adopt new services.
  • Effectively managing access to critical resources and data, minimising the chances of any form of security breach.
  • Minimising the possibility of successful cybersecurity attacks.

As IT infrastructures increase in size and are connected to external IT systems over the internet, they become more complex, harder to secure, and more frequently targeted by cybercriminals. Some of the ways to minimise attack surfaces in order to reduce the risk of cyberattacks include:

  • Implementing zero-trust policies to ensure that only authorised users and applications can have access to information resources (computing devices, sensor devices, networks, servers, databases, etc.). This eliminates or reduces the chances of unauthorised access that could compromise the confidentiality, integrity, or availability of these resources.
  • Reducing unnecessary complexity by turning off or removing unused hardware devices and software from the IT infrastructure, reducing the attack surfaces that can be exploited by cybercriminals.
  • Performing regular security audits and scanning the entire network and IT systems to identify vulnerabilities (both hardware and software) that could be exploited by cybercriminals, and resolving them to reduce the attack surfaces that cybercriminals can exploit.
  • Segmenting the network into smaller networks using firewalls and micro-segmentation strategies to add more barriers that restrict the spread of attacks and reduce attack surfaces.
  • Regularly training employees so that they adopt security best practices and respect the security policies designed to enhance the security of data and information systems.
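
A basic building block of the security audits mentioned above is checking which TCP ports actually accept connections. The sketch below is a minimal, assumption-laden illustration (a real audit would use a dedicated scanner such as nmap): it probes a list of ports and reports the open ones, and the demonstration opens a local listener so the scan has something to find.

```python
import socket

def scan_open_ports(host, ports, timeout=0.2):
    """Return the subset of `ports` that accept TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Demonstration: open a listening socket on an ephemeral port and detect it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0 lets the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

found = scan_open_ports("127.0.0.1", [port])
print(found == [port])  # True: the listener is detected as an open port
listener.close()
```

Any port that shows up unexpectedly in such a scan is part of the attack surface and should be closed or firewalled, in line with the recommendations above.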

Encryption: Encryption is the process of scrambling data into a secret code (encrypted data) so that it can only be transformed back into the original data (decrypted) with a unique key by authorised users or applications. It ensures that the confidentiality and integrity of the data are not compromised. That is, it prevents the data from being stolen or illegally altered by cybercriminals. Encryption is often used to protect data during transportation, storage, and processing/analysis. The process of encryption involves the use of a mathematical cryptographic algorithm (encryption algorithm) to scramble data (plaintext) into a ciphertext that can only be unscrambled back into the plaintext using another cryptographic algorithm (decryption algorithm) and the appropriate unique key. The cryptographic keys should be long enough that cybercriminals cannot easily guess them, whether through a brute-force attack or cryptanalysis. The goals of implementing encryption algorithms in information systems are:

  • To ensure the confidentiality of data, preventing unauthorised users from having access to the data and ensuring that the data is kept secret.
  • To ensure the integrity of the data by ensuring that it is not altered, damaged, or corrupted during storage or transportation.
  • To authenticate the users by verifying the origin of the data to ensure that the users are who they say they are.
  • To ensure non-repudiation by ensuring that a sender of data cannot deny that they are the origin of the data.
  • To enable organisations to comply with the security requirements of regulators, which demand that sensitive data be adequately protected from theft, corruption and illegal alteration.
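
Two of the goals above, integrity and origin authentication, can be illustrated with a keyed hash (HMAC) from the Python standard library. The pre-shared key and messages are hypothetical. Note that HMAC alone does not provide non-repudiation: both parties hold the same key, so either could have produced the tag; non-repudiation requires asymmetric digital signatures.

```python
import hashlib
import hmac

secret = b"pre-shared-device-key"  # hypothetical key shared by sender and receiver
message = b"valve=open"

# Sender attaches a tag computed over the message with the shared key.
tag = hmac.new(secret, message, hashlib.sha256).digest()

# Receiver recomputes the tag; compare_digest avoids timing side channels.
authentic = hmac.compare_digest(
    tag, hmac.new(secret, message, hashlib.sha256).digest())
tampered = hmac.compare_digest(
    tag, hmac.new(secret, b"valve=shut", hashlib.sha256).digest())

print(authentic, tampered)  # True False
```

Any alteration of the message in transit changes the recomputed tag, so the receiver can detect tampering and confirm that the sender knew the shared key.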

Cryptographic algorithms can be categorised into two main types as follows:

  • Symmetric encryption: In this type of encryption, the same key is used for encryption and decryption; hence, it is sometimes called private key or shared key encryption. The encryption key is sent through a secured channel so that it can be used to decrypt the data. The main advantage of this type of encryption scheme is that it is relatively inexpensive to create the cipher, making it less computationally expensive and faster to decrypt. A major disadvantage is that the encryption key could be compromised while it is being transferred from the sender to the receiver. If a third party obtains the key, that person or application could use it to decrypt the data, compromising the confidentiality and integrity of the data. Some common examples of symmetric encryption algorithms are Data Encryption Standard (DES), Triple DES (3DES), Advanced Encryption Standard (AES), and Twofish.
  • Asymmetric encryption: In this type of encryption, two different types of keys (private and public keys) are used to encrypt and decrypt the data; hence, it is sometimes called a public key encryption scheme. The public key is shared among the communicating parties (senders) so that it can be used to encrypt the data, but only the receiver with the appropriate private key can decrypt the data. Asymmetric cryptographic algorithms are relatively secure, but it is relatively expensive to generate a cipher, and it is also computationally expensive to decrypt the ciphertext into the original plaintext. Some examples of public key encryption algorithms include RSA (Rivest–Shamir–Adleman) and Elliptic Curve Cryptography (ECC).
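
The defining property of symmetric encryption, that the same key both encrypts and decrypts, can be shown with a deliberately simplified XOR cipher. This toy scheme is NOT secure and is for illustration only; real systems should use a vetted algorithm such as AES. The key and plaintext below are hypothetical.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying the same key twice
    # restores the original data, so encryption and decryption are identical.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)                     # the single shared secret key
plaintext = b"temperature=21.5C"
ciphertext = xor_cipher(plaintext, key)  # encrypt
recovered = xor_cipher(ciphertext, key)  # decrypt with the same key
print(recovered == plaintext)            # True
```

The sketch also makes the key-distribution weakness concrete: anyone who intercepts `key` on its way to the receiver can decrypt everything, which is exactly the disadvantage of symmetric schemes described above.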

Although encryption is very valuable for securing data during transportation, processing, and storage, it is not without drawbacks. Some of the drawbacks of encryption are:

  • Cybercriminals can use it to hijack the data of individuals and organisations, demanding a ransom to be paid before they can access their data, the so-called ransomware attack.
  • Effective management of encryption keys to ensure that they cannot be compromised is challenging, making it possible for cybercriminals to access the keys and use them to compromise the confidentiality and integrity of the data.
  • There is a growing anxiety that when quantum computing technologies become mature, they will be able to break advanced encryption schemes that we now depend on for the protection of our data.

Authentication: Authentication is an access control mechanism that makes it possible to verify that a user, device, or application is who they claim to be. The authentication credentials (username and password) are matched against a database of authorised users or against an authentication server to verify their identities and to ensure that they have access rights to the device, server, application or database. The use of a username or ID and a password for authentication is called single-factor authentication. Recently, organisations, especially those dealing with sensitive data (e.g., banks), require their users and applications to provide multiple factors for authentication (rather than only an ID and password), resulting in what is now known as multi-factor authentication; in the case of two factors, it is known as two-factor authentication. The use of human features such as fingerprint scans, facial or retina scans, and voice recognition is known as biometric authentication [29]. Authentication ensures the confidentiality and integrity of data and information systems by allowing only authenticated users, applications, and processes to have access to valuable and sensitive resources (e.g., computers, wireless networks, wireless access points, databases, websites, and other network-based applications and services).
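
A standard way to implement the password check described above is to store a salted, slow hash of the password rather than the password itself. The sketch below uses PBKDF2 from the Python standard library; the iteration count and example passwords are illustrative.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    # A fresh random salt per user defeats precomputed (rainbow-table) attacks.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong password", salt, digest))                # False
```

Because only the salt and digest are stored, a database breach does not directly reveal the passwords, and the deliberately slow key-derivation function makes brute-force guessing far more expensive.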

Authorisation: Just like authentication, authorisation is another process that is often used to protect data and information systems from being abused or misused by cybercriminals and unintended (or intended) actions of authorised users. Authorisation is the process of determining the access rights of users and applications to ensure that they have the right to perform the action that they are trying to perform. That is, unlike authentication, which verifies the identities of the users and then grants them access to the systems, authorisation determines the permissions that they have to perform specific actions. One example of authorisation is the Access Control List (ACL), which allows or denies users and applications access to specific information system resources and to perform certain actions. General users may be allowed to perform some actions, but they may be denied permission to perform certain actions. In contrast, super users or system administrators are allowed to perform almost every action in the system. Also, some users are authorised to have access to some data and are denied access to more sensitive data; thus, in database systems, general users may be permitted to access less sensitive data, and the administrator is permitted to have access to more sensitive data.
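
The Access Control List idea described above can be sketched as a simple role-to-permissions mapping. The roles and actions below are hypothetical; a real system would typically load such a table from configuration or a policy engine.

```python
# Hypothetical role-based access control list: roles map to permitted actions.
ACL = {
    "user":  {"view"},
    "admin": {"view", "modify", "update", "install", "delete"},
}

def is_authorised(role: str, action: str) -> bool:
    # An unknown role gets an empty permission set, i.e. default deny.
    return action in ACL.get(role, set())

print(is_authorised("user", "view"))     # True
print(is_authorised("user", "delete"))   # False
print(is_authorised("admin", "delete"))  # True
```

This mirrors the example in the text: general users may view data but are denied destructive actions, while administrators are permitted almost every action, and anything not explicitly granted is denied.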

Access control: It consists of the various mechanisms designed and implemented to grant authorised users access to information system resources and to control the actions that they are allowed to perform (e.g., view, modify, update, install, delete). It can also be the control of physical access to critical resources of an organisation. It ensures that the confidentiality and integrity of data and information systems are not compromised. Thus, physical access control restricts physical access to critical resources, while logical access control restricts access to information systems (networks, computing nodes, servers, files, and databases). Access to locations where critical assets (servers, network equipment, files) are stored is restricted using electronic access control systems that use keys, access card readers, personal identification number (PIN) pads, and auditing and reports to track employee access to these locations. Access to information systems (networks, computing nodes, servers, files, and databases) is restricted using authentication and authorisation mechanisms that evaluate the required user login credentials, which can include passwords, PINs, biometric scans, security tokens or other authentication factors [30].

Nonrepudiation: It is a way to ensure that the sender of data cannot refute having sent the data and that the receiver cannot deny having received it. It also ensures that an entity that signs a document cannot refute its signature. It is a concept adopted from the legal field and has become one of the five pillars of information assurance, alongside confidentiality, integrity, availability, and authentication. It ensures the authenticity and integrity of the message: it provides the identity of the sender to the receiver and assures the sender that the message was delivered without being altered along the way. In this way, the sender and receiver are unable to deny that they sent, received, or processed the data. Signatures can be used to ensure nonrepudiation as long as they are unique to each entity.

Accountability: Accountability requires that organisations take all the necessary steps to prevent cyberattacks and also mitigate the risk of a possible attack. In case an attack occurs, the organisation must take responsibility for the damages and engage relevant stakeholders to handle the consequences and prevent future attacks from happening. That is, it must accept responsibility for dealing with security challenges and fallouts from security breaches.

IoT Hardware and Cybersecurity

A typical IoT architecture consists of the physical layer, comprising IoT sensors and actuators, which may be connected in a star, linear, mesh, or tree network topology. The data collected by the IoT sensors can be processed by the IoT devices at the physical layer or sent to the fog/cloud computing layers for analysis through IoT access and Internet core networks. The fog/cloud computing nodes perform lightweight or advanced analytics on the data, and the result may be sent to users for decision-making or to IoT actuators to perform a specific task or control a given system or process. This implies that in an IoT infrastructure, we may have IoT devices, wireless access points, gateways, fog computing nodes, internet routers and switches, telecommunication transmission equipment, cellular base stations, servers, databases, cloud computing nodes, mobile applications, and web applications. All these hardware devices and applications constitute attack surfaces that cybercriminals can target to compromise IoT systems.

In implementing IoT security, it is important to consider the kind of hardware found in IoT systems, from the IoT device level through the IoT networks, fog computing nodes, and Internet core networks to the cloud. Securing traditional Internet and cloud-based infrastructure is very complex but less challenging because massive computing and communication resources can be deployed to run the cybersecurity algorithms and applications used to eliminate vulnerabilities and to detect and prevent cyberattacks, ensuring the confidentiality, integrity, and availability of data and information systems. In the case of IoT devices, the computing and communication resources are very limited due to the limited energy available to power the IoT device. Hence, energy-hungry and computationally expensive cybersecurity algorithms and applications cannot be used to secure IoT nodes. This hardware limitation makes IoT devices vulnerable to cyberattacks and easy to compromise.

IoT hardware vulnerabilities

IoT devices are vulnerable to certain types of security attacks due to the nature of IoT hardware. Some of these vulnerabilities or weaknesses resulting from IoT hardware limitations include:

  • The confidentiality and integrity of sensitive data collected by sensor devices can easily be compromised due to a lack of appropriate cryptographic algorithms or the use of weak ones. It is difficult to implement strong cryptographic algorithms that are hard for cybercriminals to compromise because of the limited computing resources in IoT devices: IoT devices use microcontrollers for computing, which are not able to handle strong but computationally expensive cryptographic algorithms. This makes IoT devices vulnerable to man-in-the-middle attacks, where cybercriminals can capture and analyse the wireless IoT traffic to gain access to the data if it is not encrypted or if the encryption scheme used is weak.
  • Device manufacturers introduce some of the vulnerabilities of IoT devices. They are often focused on minimising the cost of the devices and the time to market, paying little or no attention to the security requirements or needs of the customers, partly because customers are often more concerned about the prices of the devices, their ease of use, and their functionality. In this way, they sometimes ship devices with default passwords, with no encryption algorithms implemented, and sometimes without any mechanisms for authentication. This makes the devices vulnerable to attacks.
  • In some IoT deployments, the IoT devices share the same communication channels, making them vulnerable to packet collision attacks, where compromised IoT devices are used to create packet collisions on the channels, forcing a device to deplete its stored energy rapidly, which may eventually shut the device down.
  • Since the communication between the IoT devices and between the IoT devices and the access point or gateway is through wireless radio communication channels, the IoT devices are vulnerable to jamming attacks that are designed to force the IoT devices to deplete their stored energy rapidly.
  • IoT devices are also vulnerable to flooding attacks that are designed to flood IoT devices with benign or useless packets so that they will spend more energy in processing these useless packets, rapidly depleting their stored energy and eventually shutting down the device.
  • Since IoT devices are relatively easy to infect with malware, they are vulnerable to a kind of malware attack in which the attacker infects the device with malware that forces the device to perform more computations, rapidly depleting the energy stored in the device and eventually shutting it down.
  • Another type of IoT hardware vulnerability is route poisoning, in which the attacker creates routing loops, turns some devices into sinkholes, or lengthens routing paths with the aim of forcing the devices to spend more energy, eventually depleting their energy and reducing the lifetimes of some of the devices in the network.
  • IoT devices can easily be infected and turned into botnets, which can then be used to conduct sophisticated large-scale attacks such as distributed denial of service attacks that can paralyse IT assets (servers and gateways) on a large scale.
  • Another IoT hardware vulnerability is the lack of visibility. Many IoT devices are deployed without appropriate identification numbers (IP addresses), creating blind spots because the devices are not visible to security monitoring tools and can be exploited. Also, the fact that various devices may have different protocols makes it difficult to monitor all the devices within the network, making them weak points for the network.
  • An inefficient firmware verification mechanism makes it possible to tamper with the firmware or reverse engineer it, making the device vulnerable to attacks. Attackers may illegally update the firmware of the device or tamper with it in such a way that they can easily capture the device and use it for further attacks.
  • As a result of poor device management strategies, some organisations or individuals sometimes fail to attend to some devices to ensure that they are well-secured (failing to install necessary updates and to patch security holes), leaving them vulnerable to attacks from cybercriminals.
  • Some hardware security vulnerabilities are hard to eliminate, such as those enabling side-channel attacks, reverse engineering of the hardware, malware infection, and data extraction, which could be exploited, resulting in a data breach.
  • IoT devices are vulnerable to physical attacks, where a criminal can destroy or vandalise the device, or even access it manually.

IoT hardware attacks

IoT hardware attacks are the various ways that security weaknesses resulting from limitations in IoT hardware can be exploited to compromise the security of IoT data and systems. An attacker may install malware on IoT devices, manipulate their functionality, or exploit their weaknesses to gain access in order to steal or damage data, degrade the quality of services, or disrupt the services. An attacker could also compromise IoT devices with the aim of using them for a more sophisticated large-scale attack on ICT infrastructures and critical systems. There is an increase in the scale and frequency of IoT attacks due to the increase in IoT attack surfaces, the ease with which IoT devices can be compromised, and the integration of IoT devices into existing systems and critical infrastructure. Some of the common IoT hardware attacks include:

  • Unauthorised access: Some IoT device manufacturers use weak or no security mechanisms as they strive to minimise manufacturing costs and reduce the time to market in order to meet the increase in market demand. They sometimes do not provide mechanisms for necessary updates to patch up security holes. Some create backdoors for remote servicing, which malicious hackers can exploit. In contrast, others use default passwords or no passwords at all, making it easier for attackers to access the device and exploit it to escalate their attacks.
  • Emulation of fake IoT devices: A third party that knows the communication protocol could develop software that can emulate common functionalities between IoT devices and then get the leverage to share false information.
  • Identity Theft: An attacker could steal the identification of legitimate devices and then perform malicious actions within the network without being identified.
  • Injection of fake information: An attacker can inject fake or misleading information to disrupt the intended functionalities. For example, in the case of a food supply chain, a third party could inject false information about the ethylene sensor and make the system think that the transported commodity is already rotten. Therefore, mechanisms must be put in place to protect the system from fake information injection.
  • Firmware-based attacks: When a new security threat is discovered, a firmware update is required to obtain an updated version that addresses the security threat. The firmware, security configuration, and other features of the device can be cloned. The attacker can also upgrade the firmware of a device with malicious software [31][32].
  • Eavesdropping and man-in-the-middle attacks: Data exchange should be performed in a secure manner that makes data interception by a third party impossible. Traditional data encryption schemes cannot be implemented in IoT devices, requiring lightweight encryption, which is not straightforward and is sometimes ignored by many manufacturers. Transmitting unencrypted IoT data, including security data, makes IoT networks susceptible to eavesdropping and man-in-the-middle attacks.
  • Energy depletion attacks: In this kind of attack, an attacker tries to significantly increase the energy consumption of a battery-powered IoT device, drain the battery of the device, and eventually shut down the device. Examples of such attacks include Denial of Sleep (DoS), flooding, carousel, and stretch attacks [33].
  • Vampire attacks: A class of energy depletion attacks carried out at the routing layer, in which an attacker crafts messages that traverse unnecessarily long or looping routes so that the network spends far more energy forwarding them than necessary; carousel and stretch attacks are typical examples [34].
  • Routing attacks: An attacker may manipulate the routing information of the devices to create routing loops, selectively forward packets or intend to use longer routes in order to increase energy consumption. Some of the routing attacks include sinkholes, selective forwarding, wormholes, and Sybil attacks [35].
  • Jamming attacks: A kind of denial of service attack on a shared wireless communication channel, in which an attacker prevents other users from using the shared channel [36]. It is an attack targeted at the physical layer or data link layer of the IoT wireless network.
  • Brute-force attacks: This kind of attack is aimed at obtaining the login credentials of the device to gain unauthorised access to it. For devices with default passwords, commonly used passwords (e.g., admin), or weak passwords, attackers can recover these credentials and use them to gain illegal access to the IoT devices.
  • DoS/DDoS attacks: Because adequate security mechanisms are not implemented to harden the security of IoT devices, they can easily be compromised. A large number of compromised IoT devices can be assembled into an army of botnets to conduct DDoS attacks that saturate the buffers and other resources of access points, fog nodes, and cloud platforms.
  • Packet collision attacks: This attack is common in IoT applications where the devices share the wireless communication channel. An attacker can capture some of the devices and use them to create packet collisions in the communication channel to disrupt communication and force the devices to consume more energy by transmitting packets multiple times and by increasing the time the devices stay awake to perform communication (or, equivalently, decreasing their sleep time). This kind of attack is a type of energy depletion attack.
  • Physical attack on the device: An IoT device may be physically manipulated to extract some vital information or may be physically damaged. This is an important consideration in applications such as IoT-based agriculture, where the IoT infrastructure deployed in the fields can be vandalised.

IoT hardware security

It is very difficult to eliminate IoT hardware vulnerabilities due to the hardware resource constraints of IoT devices. Some of the measures for securing IoT devices and mitigating the risk posed by IoT security vulnerabilities include the following:

  • Implementing lightweight encryption schemes on IoT devices: The data stored in the IoT devices (e.g., device authentication data and other sensitive data) should be encrypted to ensure that its confidentiality and integrity are not compromised. The IoT data should be encrypted before being transmitted through any transmission medium. Since traditional cryptographic algorithms are computationally expensive and require strong and energy-hungry computing systems, it is preferable to implement lightweight cryptographic algorithms that require relatively less energy.
  • Implementing robust authentication mechanisms on IoT devices: Robust authentication mechanisms should be implemented to restrict access to IoT devices and to ensure that all IoT devices that connect to access points and servers are authenticated. This ensures that access to critical resources like access points, gateways, and servers can be controlled and the authenticity of the communication guaranteed. It is also important to avoid purchasing devices with hardcoded passwords, to change default passwords, and to create strong passwords for devices.
  • Configuring firewalls to protect devices from traffic-based attacks: The perimeter of the network can be protected by firewalls that reject malicious traffic at the edge of the network; that is, they allow only traffic from legitimate sources and block traffic from sources deemed malicious. Firewalls can also be used to segment the network so that the IoT network is isolated from other networks and attacks on it cannot spread to other networks. Software firewalls can be configured on individual devices to stop traffic from unauthorised sources from reaching them.
  • Ensure that the software and hardware components are not compromised: The software and hardware components used in IoT devices should be well-tested to ensure that there are no security vulnerabilities in them that malicious attackers may exploit to compromise the security of the data and the devices. Security measures should be included at every stage of the device lifecycle to ensure that well-known vulnerabilities are resolved and that there are security strategies to ensure that the device and data security is not compromised.
  • Implement dedicated security hardware to improve the security of the devices: Dedicated hardware components designed specifically to perform security-related functions (e.g., secure communications, energy-efficient cryptographic operations, and key management) can provide real-time security for the devices. Some dedicated hardware components also facilitate the implementation of secure boot processes and authentication operations. Another advantage of dedicated IoT security hardware is that much of it is designed with the goal of striking a balance between IoT hardware constraints, energy consumption, and security.
  • Always verify the validity and trustworthiness of the software and firmware of the IoT devices: Reliable mechanisms should be implemented to verify the validity and trustworthiness of the software and firmware of IoT devices. In this way, we can check if the software or the operating system of the device has been tampered with or manipulated in such a way that the device is vulnerable to attacks.
  • Regular security checks and updates: Mechanisms should be implemented to check whether the device has been tampered with. Also, the firmware and software of the device should be updated regularly to patch any security holes.
  • Regular security audits should be performed: The IoT network should be regularly audited using vulnerability scanning and security auditing tools to ensure that IoT vulnerabilities (including hardware vulnerabilities) and threats can be detected and resolved before criminals can exploit them.
  • Enforcement of security policies: Sound security policies should be designed and enforced to ensure that IoT devices and data are not easily compromised. For example, the principle of security by design should be applied when designing and implementing IoT hardware and software. All IoT devices must be identified, monitored continuously, and regularly audited so that known vulnerabilities can be resolved on time, and attacks against IoT devices should be detected and blocked promptly.
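
As an illustration of the lightweight-encryption point above, the following sketch implements XTEA, a classic lightweight 64-bit block cipher often cited for constrained devices. It is an educational single-block example, not a production scheme: a real deployment would add a mode of operation and message authentication on top.

```python
# Educational sketch of the XTEA lightweight block cipher.
DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF  # keep all arithmetic in 32 bits

def xtea_encrypt(block, key, rounds=32):
    """block: pair of 32-bit words; key: tuple of four 32-bit words."""
    v0, v1 = block
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(block, key, rounds=32):
    """Inverse of xtea_encrypt: run the rounds backwards."""
    v0, v1 = block
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)  # example key only
ct = xtea_encrypt((0x12345678, 0x9ABCDEF0), key)
assert xtea_decrypt(ct, key) == (0x12345678, 0x9ABCDEF0)  # round trip
```

XTEA needs only shifts, XORs, and additions, which is precisely why ciphers of this family fit microcontrollers that cannot afford AES hardware.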
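
The firmware-verification point above can be sketched as a simple digest comparison. The image bytes and digest below are illustrative; note that a digest only proves integrity, while authenticity additionally requires a digital signature checked against a trusted public key, ideally anchored in secure-boot hardware.

```python
import hashlib
import hmac

def firmware_digest(image: bytes) -> str:
    """SHA-256 digest of a firmware image."""
    return hashlib.sha256(image).hexdigest()

def verify_firmware(image: bytes, known_good_digest: str) -> bool:
    """Accept the image only if its digest matches the expected value.

    compare_digest avoids timing side channels when comparing digests.
    """
    return hmac.compare_digest(firmware_digest(image), known_good_digest)

good = b"\x7fFIRMWARE-v1.2"          # stands in for a real image
expected = firmware_digest(good)      # published by the vendor, for example
assert verify_firmware(good, expected)
assert not verify_firmware(good + b"\x00", expected)  # tampered image rejected
```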

IoT Cybersecurity challenges

The security of computer systems and networks has garnered significant attention in recent years, driven by the ongoing exploitation of these systems by malicious attackers, which leads to service disruptions. The increasing prevalence of both known and unknown vulnerabilities has made the design and implementation of effective security mechanisms increasingly complex and challenging. In this section, we discuss the challenges and complexities of securing IoT systems and networks.

Complexities in Security Implementation

Implementing robust security in IoT ecosystems is a multifaceted challenge that involves satisfying critical security requirements, such as confidentiality, integrity, availability, authenticity, accountability, and non-repudiation. While these principles may appear straightforward, the technologies and methods needed to achieve them are often complex. Ensuring confidentiality, for example, may involve advanced encryption algorithms, secure key management, and secure data transmission protocols. Similarly, maintaining data integrity requires comprehensive hashing mechanisms and digital signatures to detect any unauthorized changes.
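
One way to see why keyed mechanisms are needed for integrity: a bare hash can simply be recomputed by an attacker who modifies the message, whereas an HMAC tag cannot be forged without the shared key. A minimal sketch, in which the key and message are illustrative:

```python
import hashlib
import hmac

KEY = b"hypothetical-pre-shared-device-key"

def tag(message: bytes) -> str:
    """Keyed integrity tag: without KEY, an attacker who alters the
    message cannot produce a matching tag, unlike a plain hash."""
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(tag(message), received_tag)

msg = b"temperature=21.5C;valve=open"
t = tag(msg)
assert verify(msg, t)                                   # intact message
assert not verify(b"temperature=21.5C;valve=shut", t)   # tampering detected
```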

Availability is another essential aspect that demands resilient infrastructure to protect against Distributed Denial-of-Service (DDoS) attacks and ensure continuous access to IoT services. The requirement for authenticity involves the use of public key infrastructures (PKI) and digital certificates, which introduce challenges related to key distribution and lifecycle management.

Achieving accountability and non-repudiation involves detailed auditing mechanisms, secure logging, and tamper-proof records to verify user actions and device interactions. These systems must operate seamlessly within constrained IoT environments, which may have limited processing power, memory, or energy resources. Implementing these mechanisms thus demands not only technical expertise but also the ability to reason through subtle trade-offs between security, performance, and resource constraints. The complexity is compounded by the diversity of IoT devices, communication protocols, and the potential for vulnerabilities arising from the integration of these devices into broader networks.

Inability to Exhaust All Possible Attacks

When developing security mechanisms or algorithms, it is essential to anticipate and account for potential attacks that may target the system's vulnerabilities. However, fully predicting and addressing every conceivable attack is often not feasible. This is because malicious attackers constantly innovate, often approaching security problems from entirely new perspectives. By doing so, they are able to identify and exploit weaknesses in the security mechanisms that were not initially apparent or considered during development. This dynamic nature of attack strategies means that security features, no matter how well-designed, can never be fully immune to every potential threat. As a result, the development process must involve not just defensive strategies but also ongoing adaptability and the ability to respond to novel attack vectors that may emerge quickly. The continuous evolution of attack techniques, combined with the complexity of modern systems, makes it nearly impossible to guarantee absolute protection against all threats.

The problem of Where to Implement the Security Mechanism

Once security mechanisms are designed, a crucial challenge arises in determining the most effective locations for their deployment to ensure optimal security. This issue is multifaceted, involving both physical and logical considerations.

Physically, it is essential to decide at which points in the network security mechanisms should be positioned to provide the highest level of protection. For instance, should security features such as firewalls and intrusion detection systems be placed at the perimeter, or should they be implemented at multiple points within the network to monitor and defend against internal threats? Deciding where to position these mechanisms requires careful consideration of network traffic flow, the sensitivity of different segments of the network, and the potential risks posed by various entry points.

Logically, the placement of security mechanisms also needs to be considered within the structure of the system’s architecture. For example, within the TCP/IP model, security features could be implemented at different layers, such as the application layer, transport layer, or network layer, depending on the nature of the threat and the type of protection needed. Each layer offers different opportunities and challenges for securing data, ensuring privacy, and preventing unauthorized access. The choice of layer for deploying security mechanisms affects how they interact with other protocols and systems, potentially influencing the overall performance and efficiency of the network.

In both physical and logical terms, selecting the right placement for security mechanisms requires a comprehensive understanding of the system’s architecture, potential attack vectors, and performance requirements. Poor placement can leave critical areas vulnerable or lead to inefficient use of resources, while optimal placement enhances the overall defence and response capabilities of the system. Thus, strategic deployment is essential to achieving robust and scalable security for modern networks.

The problem of Trust Management

Security mechanisms are not limited to the implementation of a specific algorithm or protocol; they often require a robust system of trust management that ensures the participants involved can securely access and exchange information. A fundamental aspect of this is the need for participants to possess secret information—such as encryption keys, passwords, or certificates—that is crucial to the functioning of the security system. This introduces a host of challenges regarding how such sensitive information is generated, distributed, and protected from unauthorized access.

For instance, the creation and distribution of cryptographic keys need to be handled with care to prevent interception or theft. Secure key exchange protocols must be employed, and mechanisms for storing keys securely—such as hardware security modules or secure enclaves—must be in place. Additionally, the management of trust between parties is often based on these secrets being kept confidential. If any party loses control over their secret information or if it is exposed, the entire security framework may be compromised.

Beyond the management of secrets, trust management also involves the reliance on communication protocols whose behaviour can complicate the development and reliability of security mechanisms. Many security mechanisms depend on the assumption that certain communication properties will hold, such as predictable latency, order of message delivery, or the integrity of data transmission. However, in real-world networks, factors like varying network conditions, congestion, and protocol design can introduce unpredictable delays or alter the sequence in which messages are delivered. For example, if a security system depends on setting time-sensitive limits for message delivery—such as in time-based authentication or transaction protocols—any communication protocol or network that causes delays or variability in transit times may render these time limits ineffective. This unpredictability can undermine the security mechanism's ability to detect fraud, prevent replay attacks, or ensure timely authentication.
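
The time-limit pitfall described above can be illustrated with a token that embeds a timestamp and an HMAC: a perfectly valid token is rejected once network delay exceeds the validity window. The key, device ID, and 30-second window below are illustrative assumptions:

```python
import hashlib
import hmac
import time

KEY = b"hypothetical-shared-secret"
WINDOW = 30  # seconds a token remains valid

def make_token(device_id, now=None):
    """Token = timestamp plus an HMAC binding it to the device ID."""
    ts = int(now if now is not None else time.time())
    mac = hmac.new(KEY, f"{device_id}|{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{ts}|{mac}"

def verify_token(device_id, token, now=None):
    ts_str, mac = token.split("|")
    expected = hmac.new(KEY, f"{device_id}|{ts_str}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False  # forged or bound to another device
    age = (now if now is not None else time.time()) - int(ts_str)
    return 0 <= age <= WINDOW  # reject delayed or replayed tokens

tok = make_token("sensor-42", now=1_000_000)
assert verify_token("sensor-42", tok, now=1_000_010)      # within the window
assert not verify_token("sensor-42", tok, now=1_000_100)  # delay > window
```

The last assertion is the trust-management problem in miniature: the token is cryptographically valid, yet an unpredictable transit delay alone makes the receiver reject it.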

Moreover, issues of trust management also extend to the trustworthiness of third-party services or intermediaries, such as certificate authorities in public key infrastructures or cloud service providers. If the trust assumptions about these intermediaries fail, it can lead to a cascade of vulnerabilities in the broader security system. Thus, a well-designed security mechanism must account not only for the secure handling of secret information but also for the potential pitfalls introduced by variable communication conditions and the complexities of establishing reliable trust relationships in a decentralized or distributed environment.

Continuous Development of New Attack Methods

Computer and network security can be viewed as an ongoing battle of wits, where attackers constantly seek to identify and exploit vulnerabilities while security designers and administrators work tirelessly to close those gaps. One of the inherent challenges in this battle is the asymmetry of the situation: the attacker only needs to discover and exploit a single weakness to compromise a system, while the security designer must anticipate and mitigate every potential vulnerability to achieve what is considered “perfect” security.

This stark contrast creates a significant advantage for attackers, as they can focus on finding just one entry point, one flaw, or one overlooked detail in the system's defences. Moreover, once a vulnerability is identified, it can often be exploited rapidly, sometimes even by individuals with minimal technical expertise, thanks to the availability of tools or exploits developed by more sophisticated attackers. This constant risk of discovery means that the security landscape is always in a state of flux, with new attack methods emerging regularly.

On the other hand, the designer or administrator faces the monumental task of not only identifying every potential weakness in the system but also understanding how each vulnerability could be exploited in novel ways. As technology evolves and new systems, protocols, and applications are developed, new attack vectors emerge, making it difficult for security measures to remain static. Attackers continuously innovate, leveraging new technologies, techniques, and social engineering strategies, further complicating the task of defence. They may adapt to changes in the environment, bypassing traditional security mechanisms or exploiting new weaknesses introduced by system updates or third-party components.

This dynamic forces security professionals to stay one step ahead, often engaging in continuous research and development to identify new threat vectors and implement countermeasures. It also underscores the impossibility of achieving perfect security. Even the most well-designed systems can be vulnerable to the next wave of attacks, and the responsibility to defend against these evolving threats is never-ending. Thus, the development of new attack methods ensures that the landscape of computer and network security remains a complex, fast-paced arena in which defenders must constantly evolve their strategies to keep up with increasingly sophisticated threats.

Security is Often Ignored or Poorly Implemented During Design

One of the critical challenges in modern system development is that security is frequently treated as an afterthought rather than being integrated into the design process from the outset. In many cases, security considerations are only brought into the discussion after the core functionality and architecture of the system have been designed, developed, and even deployed. This reactive approach, where security is bolted on as an additional layer at the end of the development cycle, leaves systems vulnerable to exploitation by malicious actors who are quick to discover and exploit flaws that were not initially considered.

The tendency to overlook security during the early stages of design often stems from a focus on meeting functionality requirements, deadlines, or budget constraints. When security is not a primary consideration from the start, it is easy for developers to overlook potential vulnerabilities or fail to implement adequate protective measures. As a result, the system may have critical weaknesses that are difficult to identify or fix later on. Security patches or adjustments, when made, can become cumbersome and disruptive, requiring substantial changes to the architecture or design of the system, which can be time-consuming and expensive.

Moreover, systems that were not designed with security in mind are often more prone to hidden vulnerabilities. For example, they may have poorly designed access controls, insufficient data validation, inadequate encryption, or weak authentication methods. These issues can remain undetected until an attacker discovers a way to exploit them, potentially leading to severe breaches of data integrity, confidentiality, or availability. Once a security hole is identified, patching it in a system that was not built with security in mind can be challenging because it may require reworking substantial portions of the underlying architecture or logic, which may not have been anticipated during the initial design phase.

The lack of security-focused design also affects the scalability and long-term reliability of the system. As new features are added or updates are made, vulnerabilities can emerge if security isn't continuously integrated into each step of the development process. This results in a system that may work perfectly under normal conditions but is fragile or easily compromised when exposed to malicious threats.

To address this, security must be treated as a fundamental aspect of system design, incorporated from the very beginning of the development lifecycle. It should not be a separate consideration but rather an integral part of the architecture, just as essential as functionality, performance, and user experience. By prioritizing security during the design phase, developers can proactively anticipate potential threats, reduce the risk of vulnerabilities, and build systems that are both robust and resilient to future security challenges.

Difficulties in Striking a Balance Between Security and Customer Satisfaction

One of the ongoing challenges in information system design is finding the right balance between robust security and customer satisfaction. Many users, and even some security administrators, perceive strong security measures as an obstacle to the smooth, efficient, and user-friendly operation of a system or the seamless use of information. The primary concern is that stringent security protocols can complicate system access, slow down processes, and interfere with the user experience, leading to frustration or dissatisfaction among users.

For example, implementing strong authentication methods, such as multi-factor authentication (MFA), can significantly enhance security but may also create additional steps for users, increasing friction during login or access. While this extra layer of protection helps mitigate security risks, it may be perceived as cumbersome or unnecessary by end-users who prioritize convenience and speed. Similarly, the enforcement of strict data encryption or secure communication protocols can slow down system performance, which, while important for protecting sensitive information, may result in delays or decreased efficiency in routine operations.
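
The MFA friction mentioned above usually comes from one-time codes. The sketch below implements the standard HOTP/TOTP construction (RFC 4226 / RFC 6238) that underlies most authenticator apps; the secret shown is the RFC test key, used here only to check the implementation against the published test vectors.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP applied to the current 30-second time step."""
    return hotp(secret, unix_time // step, digits)

# RFC test vectors confirm the implementation:
assert hotp(b"12345678901234567890", 0) == "755224"             # RFC 4226
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"  # RFC 6238
```

The 30-second step is exactly the usability trade-off discussed above: shorter steps shrink the replay window but force users to type faster.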

Furthermore, security mechanisms often introduce complexities that make the system more difficult for users to navigate. For instance, complex password policies, regular password changes, or strict access control rules can lead to confusion or errors, especially for non-technical users. The more stringent the security requirements, the more likely users may struggle to comply or even bypass security measures in favour of convenience. In some cases, this can create a dangerous false sense of security or undermine the very protections the security measures are designed to enforce.
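
A password-policy checker of the kind discussed above can be only a few lines of code; the specific rules below are one illustrative policy, not a recommendation:

```python
import re

def check_password(pw: str, min_len: int = 12) -> list:
    """Return a list of policy violations; an empty list means acceptable.

    The rules are an illustrative composition policy; overly strict
    variants of exactly this kind drive the user frustration described
    in the text.
    """
    problems = []
    if len(pw) < min_len:
        problems.append(f"shorter than {min_len} characters")
    if not re.search(r"[a-z]", pw):
        problems.append("no lowercase letter")
    if not re.search(r"[A-Z]", pw):
        problems.append("no uppercase letter")
    if not re.search(r"\d", pw):
        problems.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", pw):
        problems.append("no symbol")
    return problems

assert check_password("Tr1cky-Passphrase!") == []
assert "no digit" in check_password("Weak-password-here")
```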

Moreover, certain security features may conflict with specific functionalities that users require for their tasks, making them difficult or impossible to implement in certain systems. For example, ensuring that data remains secure during transmission often involves limiting access to certain ports or protocols, which could impact the ability to use certain third-party services or applications. Similarly, achieving perfect data privacy may necessitate restricting the sharing of information, which can limit collaboration or slow down the exchange of essential data.

The challenge lies in finding a compromise where security mechanisms are robust enough to protect against malicious threats but are also sufficiently flexible to avoid hindering user workflows, system functionality, and overall satisfaction. Striking this balance requires careful consideration of the needs of both users and security administrators, as well as constant reassessment as technologies and threats evolve. To achieve this, designers must work to develop security solutions that are both effective and as seamless as possible, protecting without significantly disrupting the user experience. Effective user training and clear communication about the importance of security can also help mitigate dissatisfaction by fostering an understanding of why these measures are necessary. In the end, the goal should be to create an information system that delivers both a secure environment and a positive, user-centric experience.

Users Often Take Security for Granted

A common issue in the realm of cybersecurity is that users and system managers often take security for granted, not fully appreciating its value until a security breach or failure occurs. This tendency arises from a natural human inclination to assume that systems are secure unless proven otherwise. When everything is functioning smoothly, users are less likely to prioritize security, viewing it as an invisible or abstract concept that doesn't immediately impact their day-to-day experience. This attitude can lead to a lack of awareness about the potential risks they face or the importance of investing in strong security measures to prevent those risks.

Many users, especially those looking for cost-effective solutions, are primarily concerned with acquiring devices or services that fulfil their functional needs—whether it’s a smartphone, a laptop, or an online service. Security often takes a backseat to factors like price, convenience, and performance. In the pursuit of low-cost options, users may ignore or undervalue security features, opting for devices or platforms that lack robust protections, such as outdated software, weak encryption, or limited user controls. While these devices or services may meet the immediate functional demands, they may also come with hidden security vulnerabilities that leave users exposed to cyber threats, such as data breaches, identity theft, or malware infections.

Additionally, system managers or administrators may sometimes adopt a similar mindset, focusing on operational efficiency, functionality, and cost management while overlooking the importance of implementing comprehensive security measures. Security features may be treated as supplementary or even as burdens, delaying or limiting their integration into the system. This results in weak points in the system that are only recognized when an attack happens, and by then, the damage may already be significant.

This lack of proactive attention to security is further compounded by the false sense of safety that can arise when systems appear to be running smoothly. Without experiencing a breach, many users may underestimate the importance of security measures, considering them unnecessary or excessive. However, the absence of visible threats can be deceiving, as many security breaches happen subtly without immediate signs of compromise. Cyber threats are often sophisticated and stealthy, evolving in ways that make it difficult for the average user to identify vulnerabilities before it’s too late.

To counteract this complacency, it’s essential to foster a deeper understanding of the value of cybersecurity among users and system managers. Security should be presented as an ongoing investment in the protection of personal and organizational assets rather than something that can be taken for granted. Education and awareness campaigns can play a crucial role in helping users recognize that robust security measures not only protect against visible threats but also provide long-term peace of mind. By prioritizing security at every stage of device and system use—whether in design, purchasing decisions, or regular maintenance—users and system managers can build a more resilient, secure environment that is less vulnerable to emerging cyber risks.

Security monitoring challenges in IoT infrastructures

One of the key components of maintaining strong security is continuous monitoring, yet in today's fast-paced, short-term, often overloaded environment, this is a difficult and resource-intensive task. Security is not a one-time effort or a set-it-and-forget-it process; it requires regular, and sometimes even constant, oversight to identify and respond to emerging threats. However, the demand for quick results and the drive to meet immediate business objectives often lead to neglect in long-term security monitoring efforts. In addition, many security teams are stretched thin with multiple responsibilities, making it difficult to prioritize and maintain the level of vigilance necessary for effective cybersecurity.

This challenge is particularly evident in the context of the Internet of Things (IoT), where security monitoring becomes even more complex. The IoT ecosystem consists of a vast and ever-growing number of connected devices, many of which are deployed across different environments and serve highly specific, niche purposes. One of the main difficulties in monitoring IoT devices is that some of them are often hidden or not directly visible to traditional security monitoring tools. For example, certain IoT devices may be deployed in remote locations, embedded in larger systems, or integrated into complex networks, making it difficult for security teams to gain a comprehensive view of all the devices in their infrastructure. These “invisible” devices are prime targets for attackers, as they can easily be overlooked during routine security assessments.

The simplicity of many IoT devices further exacerbates the monitoring challenge. These devices are often designed to be lightweight, inexpensive, and easy to use, which means they may lack advanced security features such as built-in encryption, authentication, or even the ability to alert administrators to suspicious activities. While their simplicity makes them attractive from a consumer standpoint—offering ease of use and low cost—they also make them more vulnerable to attacks. Without sophisticated monitoring capabilities or secure configurations, these devices can be exploited by attackers to infiltrate a network, launch DDoS attacks, or compromise sensitive data.

Moreover, many IoT devices are deployed without proper oversight or follow-up, as organizations may prioritize functionality over security during the procurement process. This “set-and-forget” mentality means that once IoT devices are installed, they are often left unchecked for long periods, creating a window of opportunity for attackers to exploit any weaknesses. Additionally, many IoT devices may not receive regular firmware updates, leaving them vulnerable to known exploits that could have been patched if they had been regularly monitored and maintained.

The rapidly evolving landscape of IoT, combined with the sheer number of devices, makes it almost impossible for security teams to stay on top of every potential threat in real time. To address this challenge, organizations need to adopt more robust, continuous monitoring strategies that can detect anomalies across a wide variety of devices, including IoT. This may involve leveraging advanced technologies such as machine learning and AI-based monitoring systems that can automatically detect and respond to suspicious behaviour without the need for constant human intervention. Additionally, IoT devices should be integrated into a broader, cohesive security framework that includes regular updates, vulnerability assessments, and comprehensive risk management practices to ensure that these devices are secure and that any potential security gaps are identified and addressed in a timely manner.
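
As a toy stand-in for the ML-based anomaly detection mentioned above, even a simple statistical rule can flag a device whose traffic deviates sharply from its baseline. The message counts and the 3-sigma threshold below are illustrative assumptions:

```python
import statistics

def detect_anomalies(history, new_samples, threshold=3.0):
    """Flag samples more than `threshold` standard deviations from the
    historical mean -- a minimal stand-in for the learning-based
    monitoring systems described in the text."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return [x for x in new_samples if abs(x - mean) / stdev > threshold]

# Hypothetical per-minute message counts from one sensor:
baseline = [20, 22, 19, 21, 20, 23, 18, 21, 20, 22]
incoming = [21, 19, 240, 20]  # 240 could indicate a compromised, chatty device

print(detect_anomalies(baseline, incoming))  # prints [240]
```

Real deployments replace the fixed baseline with models that adapt over time, but the principle is the same: characterise normal behaviour per device, then alert on deviations.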

Ultimately, as IoT continues to grow in both scale and complexity, security teams will need to be more proactive in implementing monitoring solutions that provide visibility and protection across all layers of the network. This requires not only advanced technological tools but also a cultural shift toward security as a continuous, ongoing process rather than something that can be handled in short bursts or only when a breach occurs.

The Procedures Used to Provide Particular Services Are Often Counterintuitive

Security mechanisms are typically designed to protect systems from a wide range of threats, yet the procedures used to implement these mechanisms are often counterintuitive or not immediately obvious to users or even to those implementing them. In many cases, security features are complex and intricate, requiring multiple layers of protection, detailed configurations, and extensive testing. When a user or system administrator is presented with a security requirement—such as ensuring data confidentiality, integrity, or availability—it is often not clear that such elaborate and sometimes cumbersome measures are necessary. At first glance, the measures may appear excessive or overly complicated for the task at hand, leading some to question their utility or necessity.

It is only when the various aspects of a potential threat are thoroughly examined that the need for these complex security mechanisms becomes evident. For example, a seemingly simple requirement, such as ensuring the secure transfer of sensitive data, may involve a series of interconnected security protocols, such as encryption, authentication, access control, and non-repudiation, which are often hidden from the end user. Each of these mechanisms serves a critical role in protecting the data from potential threats—such as man-in-the-middle attacks, unauthorized access, or data tampering—but this level of sophistication is not always apparent at first. The complexity is driven by the diverse and evolving nature of modern cyber threats, which often require multi-layered defences to be effective.

The necessity for such intricate security procedures often becomes clearer when a more in-depth understanding of the potential threats and vulnerabilities is gained. For instance, an attacker may exploit seemingly minor flaws in a system, such as weak passwords, outdated software, or unpatched security holes. These weaknesses may not be immediately obvious or may seem too trivial to warrant significant attention. However, once a security audit is conducted and the full scope of potential risks is considered—ranging from insider threats to advanced persistent threats (APTs)—it becomes apparent that a more robust security approach is required to safeguard against these risks.

Moreover, the procedures designed to mitigate these threats often involve trade-offs in terms of usability and performance. For example, enforcing stringent authentication methods may slow down access times or require users to remember complex passwords, which may seem inconvenient or unnecessary unless the potential for unauthorized access is fully understood. Similarly, implementing encryption or firewalls may add processing overhead or introduce network delays, which might seem like a burden unless it is clear that these measures are essential for defending against data breaches or cyberattacks.

Ultimately, security mechanisms are often complex and counterintuitive because they must account for a wide range of potential threats and adversaries, some of which may not be immediately apparent. The process of securing a system involves considering not only current risks but also future threats that may emerge as technology evolves. As such, security measures must be designed to be adaptable and resilient in the face of new and unexpected challenges. The complexity of these measures is a reflection of the dynamic and ever-evolving nature of the cybersecurity landscape, where seemingly simple tasks often require sophisticated, multi-faceted solutions to provide the necessary level of protection.

The Complexity of Cybersecurity Threats from the Emerging Field of Artificial Intelligence (AI)

As Artificial Intelligence (AI) continues to evolve and integrate into various sectors, the cybersecurity landscape is becoming increasingly complex. AI, with its advanced capabilities in machine learning, data processing, and automation, presents a double-edged sword. While it can significantly enhance security systems by improving threat detection and response times, it also opens up new avenues for sophisticated cyberattacks. The growing use of AI by malicious actors introduces an entirely new dimension to cybersecurity threats, making traditional defence strategies less effective and increasing the difficulty of safeguarding sensitive data and systems.

One of the primary challenges AI presents in cybersecurity is its ability to automate and accelerate the process of identifying and exploiting vulnerabilities. AI-driven attacks can adapt and evolve in real-time, bypassing traditional detection systems that rely on predefined rules or patterns. For example, AI systems can use machine learning algorithms to continuously learn from the behaviour of the system they are attacking, refining their methods to evade security measures, such as firewalls or intrusion detection systems (IDS). This makes detecting AI-based attacks much harder because they can mimic normal system behaviour or use techniques that were previously unseen by human analysts.

Furthermore, AI’s ability to process and analyze vast amounts of data makes it an ideal tool for cybercriminals to mine for weaknesses. With AI-powered tools, attackers can sift through large datasets, looking for patterns or anomalies that could indicate a vulnerability. These tools can then use that information to craft highly targeted attacks, such as spear-phishing campaigns, that are more convincing and difficult to detect. Additionally, AI can be used to automate social engineering attacks by personalizing and optimizing messages based on available user data, making them more effective at deceiving individuals into divulging sensitive information or granting unauthorized access.

Another layer of complexity arises from the potential misuse of AI in creating deepfakes or synthetic media, which can be used to manipulate individuals or organizations. Deepfakes, powered by AI, can generate realistic videos, audio recordings, or images that impersonate people in positions of authority, spreading misinformation or causing reputational damage. In the context of cybersecurity, such techniques can be employed to manipulate employees into granting access to secure systems or to convince stakeholders to make financial transactions based on false information. The ability of AI to produce high-quality, convincing fake content complicates the detection of fraud and deception, making it harder for both individuals and security systems to discern legitimate communication from malicious ones.

Moreover, AI’s influence in the cyber world is not limited to the attackers; it also has significant implications for the defenders. While AI can help improve security measures by automating the analysis of threats, predicting attack vectors, and enhancing decision-making, it also presents challenges for security professionals who must stay ahead of increasingly sophisticated AI-driven attacks. Security systems that rely on traditional, signature-based detection methods may struggle to keep pace with the dynamic and adaptive nature of AI-driven threats. AI systems in cybersecurity must be continually updated and refined to combat new and evolving attack techniques effectively.

The use of AI in cybersecurity also raises concerns about vulnerabilities within AI systems themselves. AI algorithms, especially those based on machine learning, are not immune to exploitation. For instance, attackers can manipulate the training data used to teach AI systems, introducing biases or weaknesses that can be exploited. This is known as an “adversarial attack,” where small changes to input data can cause an AI model to make incorrect predictions or classifications. Adversarial attacks pose a significant risk, particularly in systems relying on AI for decision-making, such as autonomous vehicles or critical infrastructure systems.
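The adversarial-attack idea above can be illustrated on a toy linear classifier. The sketch below is purely illustrative (real adversarial attacks, such as FGSM, target deep models); the weights and inputs are made-up values chosen to show how a tiny, targeted perturbation of the input flips the model's decision even though the input barely changes.

```python
# Minimal adversarial-perturbation sketch on a linear classifier.
# Illustrative only: weights, bias, and inputs are hypothetical.

def predict(w, b, x):
    """Classify x as 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def adversarial(w, b, x, eps):
    """Nudge each feature by eps in the direction that lowers the score
    (opposite the sign of the corresponding weight), mimicking an
    FGSM-style single step."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w, b = [0.9, -0.4, 0.7], -0.5   # hypothetical trained weights
x = [0.6, 0.2, 0.3]             # a legitimate input, classified as 1

x_adv = adversarial(w, b, x, eps=0.2)
print(predict(w, b, x))      # original prediction: 1
print(predict(w, b, x_adv))  # flipped to 0 by a small perturbation
```

The perturbation changes each feature by at most 0.2, yet the prediction flips; in high-dimensional models such as image classifiers, the required per-feature change can be imperceptibly small, which is why adversarial inputs are so hard to detect.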

As AI continues to advance, it is clear that cybersecurity strategies will need to adapt and evolve in tandem. The complexity of AI-driven threats requires a more dynamic and multifaceted approach to defence, combining traditional security measures with AI-powered tools that can detect, prevent, and respond to threats in real-time. Additionally, as AI technology becomes more accessible, organizations need to invest in training and resources to ensure that their cybersecurity teams can effectively navigate the complexities introduced by AI in both attack and defence scenarios. The convergence of AI and cybersecurity is a rapidly evolving field, and staying ahead of emerging threats will require constant vigilance, innovation, and collaboration across industries and sectors.

The Difficulty in Maintaining a Reasonable Trade-off Between Security, QoS, Cost, and Energy Consumption

One of the key challenges in modern systems design, particularly in areas like network architecture, cloud computing, and IoT, is balancing the competing demands of security, Quality of Service (QoS), cost, and energy consumption. Each of these factors plays a critical role in the performance and functionality of a system, but prioritizing one often comes at the expense of others. Achieving an optimal trade-off among these elements is complex and requires careful consideration of how each factor influences the overall system.

Security is a critical component in ensuring the protection of sensitive data, system integrity, and user privacy. Strong security measures—such as encryption, authentication, and access control—are essential for safeguarding systems from cyberattacks, data breaches, and unauthorized access. However, implementing high-level security mechanisms often increases system complexity and processing overhead. For example, encryption can introduce delays in data transmission, while advanced authentication methods (e.g., multi-factor authentication) can slow down access times. This can negatively impact the Quality of Service (QoS), which refers to the performance characteristics of a system, such as its responsiveness, reliability, and availability. In environments where low latency and high throughput are essential, such as real-time applications or high-performance computing, security measures that introduce delays or bottlenecks can degrade QoS.

Cost is another critical consideration, as organizations need to manage both the upfront and ongoing expenses associated with system development, implementation, and maintenance. Security mechanisms often involve significant costs, both in terms of the resources required to design and deploy them and the ongoing monitoring and updates needed to keep systems secure. Similarly, ensuring high QoS may require investments in premium infrastructure, high-bandwidth networks, and redundant systems to ensure reliability and minimize downtime. Balancing these costs with budget constraints can be difficult, particularly when investing in top-tier security or infrastructure can result in higher operational expenses.

Finally, energy consumption is an increasingly important factor, particularly in the context of sustainable computing and green technology initiatives. Systems that require constant security monitoring, high-level encryption, and redundant infrastructures tend to consume more energy, which not only increases operational costs but also contributes to environmental concerns. In energy-constrained environments, such as IoT devices or mobile applications, managing power usage is particularly challenging. Energy-efficient security measures may not be as robust or may require trade-offs in terms of the level of protection they provide.

Striking a reasonable balance among these four factors requires careful optimization and decision-making. In some cases, prioritizing security can lead to a reduction in system performance (QoS) or increased energy consumption, while focusing on minimizing energy usage might result in security vulnerabilities. Similarly, trying to cut costs by opting for cheaper, less secure solutions can lead to higher long-term expenses if a security breach occurs.

To achieve an effective balance, organizations must take a holistic approach, considering the specific requirements of the system, the potential risks, and the constraints on resources. For example, in critical infrastructure or financial systems, security may need to take precedence over cost or energy consumption, as the consequences of a breach would be too significant to ignore. In contrast, consumer-facing applications may place more emphasis on maintaining QoS and minimizing energy consumption while adopting security measures that are adequate for the threat landscape but not as resource-intensive.

Advanced technologies, such as machine learning and AI, can help in dynamically adjusting the trade-offs based on real-time conditions. For example, AI-powered systems can adjust security measures based on the sensitivity of the data being transmitted or the load on the system, optimizing for both security and performance. Similarly, energy-efficient algorithms and hardware can be employed to minimize power usage without sacrificing too much security or QoS.
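The dynamic adjustment described above can be sketched as a simple policy function. This is a hypothetical rule set, not a real product API: it picks a protection profile per message from the data's sensitivity and the current system load, trading cipher strength and multi-factor authentication against QoS and energy cost.

```python
# Sketch of a dynamic security/QoS trade-off policy.
# The profile names and thresholds are illustrative assumptions.

def choose_profile(sensitivity, load):
    """sensitivity: 'low' | 'medium' | 'high'; load: CPU utilisation 0.0-1.0.
    Returns (cipher_profile, mfa_required)."""
    if sensitivity == "high":
        # Sensitive data always gets strong protection, regardless of load.
        return ("aes256-gcm", True)
    if sensitivity == "medium":
        # Under heavy load, skip MFA and use a cheaper cipher to preserve QoS.
        return ("aes128-gcm", load < 0.8)
    # Low-sensitivity telemetry: integrity-only protection to save energy.
    return ("integrity-only", False)

print(choose_profile("high", 0.95))   # ('aes256-gcm', True)
print(choose_profile("medium", 0.9))  # ('aes128-gcm', False)
print(choose_profile("low", 0.1))     # ('integrity-only', False)
```

A real system would learn or tune such thresholds from observed conditions rather than hardcode them, but the structure is the same: security level becomes a runtime decision instead of a fixed configuration.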

Ultimately, achieving a reasonable trade-off between security, QoS, cost, and energy consumption requires a careful, context-specific approach, ongoing monitoring, and the ability to adjust strategies as system requirements and external conditions evolve.

Neglecting to Invest in Cybersecurity

Failing to allocate adequate resources to cybersecurity is a critical mistake that many organizations, especially smaller businesses and startups, make. The consequences of neglecting cybersecurity investments can be far-reaching, with potential damages affecting both the organization's immediate operations and its long-term viability. In today's increasingly digital world, where sensitive data and critical infrastructure are interconnected through complex networks, cybersecurity is no longer a luxury or a secondary concern—it is an essential element of any business strategy. Ignoring or underestimating the importance of cybersecurity exposes an organization to a wide range of threats, ranging from data breaches to ransomware attacks, each of which can result in significant financial losses, reputational damage, and legal ramifications.

One of the most immediate risks of neglecting cybersecurity is the increased vulnerability to cyberattacks. Hackers and cybercriminals are continuously evolving their techniques, using sophisticated methods to exploit weaknesses in systems, networks, and applications. Without adequate investment in cybersecurity measures, such as firewalls, encryption, intrusion detection systems (IDS), and multi-factor authentication (MFA), organizations create a fertile ground for these attacks. Once a system is compromised, the damage can be extensive: sensitive customer data may be stolen, intellectual property could be leaked, and systems may be crippled, leading to prolonged downtime and operational disruptions.

Beyond the immediate damage, neglecting cybersecurity can also have a long-term impact on an organization's reputation. In today's hyper-connected world, news of a data breach or cyberattack spreads quickly, potentially causing customers and partners to lose trust in the organization. Consumers are increasingly concerned about the privacy and security of their personal information, and a single breach can lead to a loss of customer confidence that may take years to rebuild. Moreover, businesses that fail to protect their customers' data may also face significant legal and regulatory consequences. Laws such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) impose strict requirements on data protection, and failure to comply with these regulations due to inadequate cybersecurity measures can result in heavy fines, lawsuits, and other legal penalties.

Another key consequence of neglecting cybersecurity is the potential for operational disruptions. Cyberattacks can cause significant downtime, rendering critical business systems inoperable and halting normal operations. For example, a ransomware attack can lock organizations out of their systems, demanding a ransom payment for the decryption key. During this period, employees may be unable to access important files, emails, or customer data, and business processes may come to a standstill. This operational downtime not only disrupts the workflow but also results in lost productivity and revenue, with some companies facing weeks or even months of recovery time.

Additionally, the cost of dealing with the aftermath of a cyberattack can be overwhelming. Organizations that do not invest in proactive cybersecurity measures often find themselves spending significantly more on recovery efforts after an incident. These costs can include legal fees, public relations campaigns to mitigate reputational damage, and the implementation of new security measures to prevent future breaches. In many cases, these costs far exceed the initial investment that would have been required to establish a robust cybersecurity program.

Neglecting cybersecurity also puts an organization at risk of missing out on potential opportunities. As businesses increasingly rely on digital technologies, clients, partners, and investors are placing more emphasis on the security of an organization's systems. Organizations that cannot demonstrate strong cybersecurity practices may find themselves excluded from partnerships, denied contracts, or even losing out on investment opportunities. For example, many companies today require their suppliers and partners to meet specific cybersecurity standards before entering into business agreements. Failing to meet these standards can limit growth potential and damage business relationships.

Furthermore, as technology evolves and the digital threat landscape becomes more complex, cybersecurity requires ongoing attention and adaptation. A one-time investment in security tools and protocols is no longer sufficient to keep systems protected. Cybercriminals constantly adapt their tactics, developing new types of attacks and finding innovative ways to bypass traditional defences. Therefore, cybersecurity is an ongoing effort that requires regular updates, continuous monitoring, and employee training to stay ahead of the latest threats. Neglecting to allocate resources for regular security audits, patch management, and staff education leaves an organization vulnerable to these evolving threats.

In conclusion, neglecting to invest in cybersecurity is a risky and potentially catastrophic decision for any organization. The consequences of a cyberattack can be severe, ranging from financial losses and operational downtime to reputational harm and legal penalties. By making cybersecurity a top priority and investing in the right tools, processes, and expertise, organizations can protect their data, systems, and reputation from the growing threat of cybercrime. Cybersecurity is not just a technical necessity; it is a critical business strategy that can safeguard an organization's future and foster trust with customers, partners, and investors.

Vulnerabilities in IoT systems

In order to secure IoT systems and ensure data confidentiality, privacy, and integrity, it is important to understand the various vulnerabilities or security weaknesses of IoT systems that can be exploited by cybercriminals. Most of the security vulnerabilities of IoT are found at the physical layer of the IoT reference architecture, which consists of the IoT devices. As discussed in the previous sections, IoT devices have limited computing and communication resources, making it difficult to implement strong security protocols and algorithms that satisfy the confidentiality, integrity, availability, accountability, and non-repudiation requirements of IoT data and systems. Hence, the security measures designed and implemented to secure IoT data and systems are often insufficient, making IoT systems vulnerable to several types of cybersecurity attacks and easier to compromise.

As IoT devices are being integrated into existing business systems, personal devices, household systems, and critical infrastructure, they are becoming attractive targets for cybercriminals and are under constant attack. Cybercriminals continually search for security weaknesses (vulnerabilities) in IoT devices that they can exploit in order to steal or damage data, disrupt the quality of service, or coordinate the devices to conduct large-scale attacks such as DoS/DDoS attacks, or any attack to compromise other systems, especially critical infrastructures.

Some common IoT vulnerabilities

Given the serious risk posed by security weaknesses in IoT systems to IoT services and other services in society, including the possibility of causing loss of human lives or disrupting society as a whole, it is important to identify IoT security vulnerabilities and address them before cybercriminals can exploit them. The proliferation of diverse IoT devices across various sectors in society with very little or no standardisation and regulation has increased IoT vulnerabilities and attack surfaces that can be leveraged by cybercriminals to compromise the data that is collected using IoT devices and to compromise existing systems. Some of the IoT security vulnerabilities include the following:

  • Embedded passwords on IoT devices: To facilitate remote technical support, IoT engineers and developers need remote access to devices for configuration during deployment and for troubleshooting during the operation and maintenance of IoT networks with many devices, so manufacturers often embed fixed credentials in the devices. These embedded passwords make it easy for cybercriminals to gain access to the devices and exploit them for malicious purposes.
  • Lack of authentication mechanisms: Some IoT manufacturers ship devices without incorporating any authentication mechanism, making the devices vulnerable to unauthorised access by malicious attackers, which violates the confidentiality, privacy, and integrity of IoT data. Attackers may also take over the devices and use them for malicious purposes. Thus, devices without any form of authentication can be used as an attack surface to conduct advanced attacks on IoT systems and other critical resources.
  • Weak passwords: To make their devices easy to use, manufacturers ship devices with weak default security, such as hardcoded passwords that users cannot change, default usernames and passwords, or overly simple login procedures. Since the credentials set by the manufacturer are easy to guess and are often never changed, attackers usually exploit them to gain access to the device, compromising the confidentiality and integrity of the data, and can then use the devices for further attacks.
  • Backdoors: Many IoT manufacturers create hidden access mechanisms called backdoors (a user-id/password pair or open ports) to allow them to support the devices. Attackers often discover these backdoors and exploit them to launch attacks (e.g., botnets and other malware attacks).
  • Failure to install security patches and updates: Some IoT manufacturers do not provide a simple and effective way to install security patches and updates, making it difficult for IoT service providers to resolve security vulnerabilities before they can be exploited by cybercriminals. Unlike traditional computer systems, which have mechanisms for continuous installation of security updates and notification of security changes, IoT devices are very simple and lack these features, making them vulnerable to cyberattacks. Their simplicity also exposes them to attacks such as unauthorised software and firmware updates. Some IoT manufacturers never release patches or updates for the software that ships on their devices, and attackers exploit this. Even when patches and updates are released, users often have difficulty installing them, so most of the vulnerabilities in these devices are never patched.
  • Poorly protected network services: The wireless communication channel between the IoT device and the access point or gateway is a major attack surface. One network vulnerability resulting from unprotected network services is unencrypted communication channels: because of energy, cost, and processing-power constraints, many IoT manufacturers do not implement any cryptographic mechanism to secure communication. This makes it easier for attackers to launch man-in-the-middle attacks on IoT networks. Without protection of the communication between IoT devices and servers, confidential data, including authentication credentials, can be compromised and used to launch further attacks, such as DoS/DDoS attacks. Unnecessary services, such as unprotected ports, can also be exploited; failure to disable unused ports or to protect used ports with a firewall leaves them vulnerable to cybersecurity attacks.
  • Internet exposure: Some IoT devices are connected directly to the internet without firewalls or any form of security mechanism and are likely to be attacked.
  • Unprotected interfaces: Some vulnerabilities in IoT systems are introduced by poorly secured or unprotected interfaces (e.g., web, backend API, cloud, and fog interfaces), which expose IoT devices and other resources to cyberattacks. Weak (and sometimes absent) authentication/authorisation and cryptographic mechanisms leave communication through these interfaces vulnerable, as there is no access control over important resources, no accountability, and no protection of data and systems from being compromised.
  • Use of outdated components: Sometimes IoT device manufacturers are not able to resolve hardware or software security vulnerabilities that have been discovered in IoT devices, forcing IoT service providers to keep using the devices without any security improvements to address the known vulnerabilities. These outdated devices with well-known security vulnerabilities become easy targets for cybercriminals to exploit, compromise, and damage IoT systems and resources.
  • Supply chain vulnerabilities: The IoT supply chain consists of manufacturers (of semiconductor chips, hardware parts, IoT devices, and software), distributors, vendors, service providers, and users. Vulnerabilities may be introduced into IoT devices at any stage of the supply chain, for example in the form of compromised software or hardware that has been manipulated or installed to introduce security weaknesses that make the devices vulnerable to attack or easy to compromise. The objective of supply chain attacks may be cyberespionage (data theft or compromise) or exploiting the devices to launch sophisticated cyberattacks. The use of poorly designed third-party components (such as libraries, drivers, kernels, or hardware parts) that are installed on the devices or form part of other applications or firmware may introduce vulnerabilities that are eventually exploited to compromise the devices or use them for further attacks on infrastructures. One source of supply chain vulnerabilities is the use of third-party software and hardware components without properly checking for security vulnerabilities and resolving them before incorporating the components into IoT products; for instance, IoT device developers sometimes copy code from online sources into their programs with the sole purpose of getting the desired functionality of the device running. Another form of supply chain vulnerability is the implementation of little or no security on the devices, either by the manufacturers or by the developers deploying them, making the devices vulnerable to attacks. A major challenge of supply chain attacks is that users are hardly aware of these weaknesses, or of how many devices in their infrastructure, from different manufacturers, carry such vulnerabilities.
  • Outdated firmware: After IoT devices are deployed, some IoT service providers do not update the firmware or software running on the devices for a very long time. Some do not update at all, leaving them with vulnerabilities that may be exploited.
  • Poor device management strategies and policies: Some IoT devices are deployed without unique identifiers that would enable tracking, monitoring, and management. As a result, some IoT nodes sit on the infrastructure without being properly monitored and managed to ensure that any vulnerability is identified and resolved. If the cybersecurity department is not aware of the presence of some IoT nodes, it cannot protect them, leaving them vulnerable to attack. Some IT administrators neglect IoT nodes, not giving them the same security attention as traditional computing and networking nodes and omitting them from the inventory of assets that need to be protected; thus, the devices are rarely updated and maintained to ensure that they cannot be compromised or exploited.
  • Poor security key management protocols: If the cryptographic keys are compromised, the IoT devices become vulnerable to man-in-the-middle attacks and other kinds of attacks that could disrupt the IoT service or compromise the IoT data.
  • Poor physical hardening of the IoT devices: The fact that IoT nodes are often deployed in outdoor or remote environments makes them physically accessible to criminals who could compromise them. A criminal could physically damage the device, extract information, or manipulate the device so that it cannot perform its normal functions. For example, an attacker may copy the data stored in the memory of the device, or even replace some components with compromised ones that grant remote access to the device.
  • Data management vulnerabilities: For large-scale IoT deployments with thousands, tens of thousands or hundreds of thousands of IoT nodes, the sheer volume of IoT data collected is so huge that traditional data management systems may not be able to handle them securely. That is, the confidentiality and integrity of the data may be compromised due to data storage, processing, and retrieval vulnerabilities in data management systems, which get worse with the scalability of IoT assets.
  • Lack of standardisation: Although there are many efforts to ensure proper standardisation in the IoT ecosystem, standardisation and interoperability issues remain. This makes it difficult to design an integrated security system that protects IoT devices from different manufacturers with diverse vulnerabilities. The sheer diversity of IoT devices from various manufacturers also makes it difficult to integrate them into existing security frameworks, resulting in weak IoT security, or in security being taken for granted, leaving the devices vulnerable to attacks.
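Several of the vulnerabilities above (unencrypted channels, man-in-the-middle tampering, weak key management) come down to messages travelling without integrity protection. The sketch below uses Python's standard `hmac` module to show how a shared-key message authentication code lets a gateway detect a tampered sensor reading; the key value and message format are illustrative assumptions, not part of any real protocol.

```python
import hmac
import hashlib

SHARED_KEY = b"per-device-secret-key"  # hypothetical key provisioned at deployment

def tag(message: bytes) -> bytes:
    """Sender side: compute an HMAC-SHA256 tag over a sensor message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Receiver side: recompute the tag and compare in constant time
    (compare_digest avoids timing side channels)."""
    return hmac.compare_digest(tag(message), received_tag)

reading = b"temp=21.5;device=42"
t = tag(reading)

print(verify(reading, t))                 # True: message arrived untampered
print(verify(b"temp=99.9;device=42", t))  # False: modified in transit, rejected
```

Note that an HMAC provides integrity and authenticity but not confidentiality; in practice the channel would also be encrypted (e.g., with TLS or an AEAD cipher), and the per-device key would be provisioned and rotated by a proper key management scheme.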

Security strategies to mitigate IoT vulnerabilities

Although IoT vulnerabilities cannot all be eliminated, there are best practices that can be adopted to ensure that IoT vulnerabilities are not easily exploited to compromise IoT data and systems. Some of the security measures and practices that can be adopted to harden IoT security and mitigate the risk of an IoT attack resulting from the exploitation of any of the IoT vulnerabilities include the following:

  • Adoption of security-by-design principles: At every stage of the lifecycle of IoT systems, from design, manufacturing, deployment, operation, and maintenance to decommissioning and disposal, security control measures should be considered and incorporated to ensure that IoT data is not compromised and that devices are not exploited to conduct sophisticated attacks. In this way, every stakeholder in the IoT device supply chain is aware of the various vulnerabilities and implements appropriate measures to resolve them before they can be exploited to compromise IoT devices or data. Security by design requires close collaboration between IoT designers, engineers, and cybersecurity experts to ensure that security is among the key design criteria. Before IoT devices are released to the market, and again when they are deployed, a thorough security assessment (e.g., penetration testing or vulnerability scanning) should be carried out to identify potential vulnerabilities in IoT hardware, software components, and communication protocols. Any vulnerabilities found should be resolved as quickly as possible.
  • Design and enforcement of strong password policies: Devices with hardcoded or embedded passwords should not be deployed in IoT infrastructures; rather than hardcoding passwords, manufacturers should be required to let users create their own usernames and passwords. Default usernames and passwords on IoT devices, access points, and gateways should be changed. Passwords should be strong, and simple or overused passwords should be avoided; it is important to use new, unique, and complex passwords that follow strong password policies. Effective password management policies should be implemented so that passwords can be updated and reset easily and securely.
  • Mandatory authentication: Every IoT device should be required to authenticate before joining the network, and those without authentication mechanisms should be rejected. This implies that every IoT device must be identifiable and can only be admitted into the network after proper authentication. If possible, multifactor (e.g., two-factor) authentication should be implemented. These measures will ensure that only authorised users and IoT devices can have access to IoT resources, reducing the risk of a security breach.
  • Implementation of effective network security mechanisms: IoT network services and protocols should be properly protected. Port forwarding should be disabled, and ports that are not needed should be closed. Authentication should be required to access IoT networks. Also, network security tools such as firewalls, intrusion detection systems, and intrusion prevention systems should be used to inspect the traffic coming from various sources, and malicious traffic sources should be blocked. Secure network protocols such as TLS/SSL and cryptographic protocols should be used to secure the communication channels. Network segmentation techniques should also be employed to isolate IoT networks from the rest of the infrastructure and to isolate the various IoT networks (especially those integrated with critical assets) to contain potential attacks on isolated segments and to mitigate the risk of compromising critical assets.
  • Regular update of software and firmware: Regular installation of software and firmware updates ensures that the latest security patches are applied to close security holes or gaps, reducing the chances that existing software security vulnerabilities can be exploited. Manufacturers should make the process of installing software and firmware updates or patches as simple and easy as possible. Ideally, it should be automatic, or require just a single click without complex installation procedures.
  • Avoid prioritising ease of use over security: Plug-and-play devices require few or no additional settings or configurations, which introduces vulnerabilities that can easily be exploited. Avoid plug-and-play devices and other systems that are easier to deploy and use but also easier to compromise.
  • Securing the APIs: The APIs that facilitate the communication between the IoT devices, data collection points, and user interfaces should be properly secured by the implementation of strong authentication (e.g., OAuth for secure authentication), encryption (HTTPS to ensure that the data is encrypted), and access control mechanisms (e.g., validating every input to prevent injection attacks) [37]. Thus, the implementation of API security techniques prevents unauthorised devices and users from accessing the IoT devices and compromising the IoT systems or data.
  • Validating firmware using secure boot mechanism: This ensures that the device is running authorised firmware, protecting the device against malicious software and firmware tampering. In this way, the device verifies the digital signature of the firmware during the boot process. It prevents the execution of unauthorised or modified firmware, ensuring the integrity of the device. Thus, manufacturers should incorporate mechanisms to verify the authenticity of the firmware at startup and to securely update the device, which will ensure the security of the devices throughout their lifecycle [38].
  • Use of secure key management systems: Cryptographic keys should be properly managed. In the case of an asymmetric encryption scheme used to secure communication with servers in IoT infrastructures, a PKI and digital certificate infrastructure should be used to ensure the secure management of the keys and to maintain trust.
  • Mitigate risk from outdated components: Vulnerable devices should be updated, replaced, or removed from the network. That can be achieved by deploying an effective monitoring system to ensure tighter monitoring and controls to spot vulnerabilities and resolve them quickly.
  • Implement and enforce zero-trust policies: This means that all devices and users inside and outside of the IoT network/infrastructure must be verified, authorised, and evaluated continuously to ensure that they are not a threat or could introduce some vulnerabilities. Over time, users or devices may be compromised and become a threat to critical resources. Thus, automated zero-trust policies are very important and must be enforced.
  • Leverage machine learning tools: Machine learning tools can automate security tasks such as vulnerability and attack detection and mitigation. The use of AI tools has been shown to be an effective approach to detecting vulnerabilities and attacks in IoT networks, and it is particularly useful for very large IoT networks. Such tools have been added to security systems such as SIEM systems for the detection of vulnerabilities, threats, and attacks.
  • Training of staff: IoT designers, developers, and engineers should be trained continuously in security best practices so that they do not design, manufacture, or deploy devices with vulnerabilities resulting from errors or carelessness in the design, manufacturing, and deployment process.
  • Continuous education of consumers: Many manufacturers neglect security features because users are more focused on their desired functionality, ease of use, and cost, and rarely pay serious attention to security. Users also sometimes misuse the devices and fail to install updates and patches. Continued education of users can therefore be very useful.
  • Physical protection of the devices: Appropriate measures should be taken to ensure that the device is not physically compromised, and if such an event should occur, it should be easily detected. Appropriate measures should be taken to ensure that data is not compromised and that the device is not exploited for further attacks.
  • Implement cyber supply chain best practices: In order to reduce supply chain vulnerabilities, follow secure software development lifecycle methods, conduct a thorough review of code from internal and external sources, avoid using counterfeit hardware and software from untrusted sources, and review the design and development processes for software and hardware from third parties. Also, check the processes for addressing vulnerabilities by vendors [39].
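
Two of the practices above, strong password policies and secure credential management, can be made concrete with a short sketch. The policy rules, iteration count, and example credential below are illustrative assumptions, not requirements from this book; the sketch uses Python's standard library to enforce a minimal password policy and to store only a salted PBKDF2 hash rather than the password itself.

```python
import hashlib
import os
import secrets

def is_strong(password: str, min_length: int = 12) -> bool:
    """A minimal example policy: length plus mixed character classes."""
    return (len(password) >= min_length
            and any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password))

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store a random salt and a PBKDF2-HMAC-SHA256 hash, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, digest)  # constant-time comparison

# Hypothetical credential for illustration only.
salt, digest = hash_password("Blue-Kettle-2024")
assert is_strong("Blue-Kettle-2024") and verify_password("Blue-Kettle-2024", salt, digest)
```

Because only the salt and the slow, salted hash are stored, a leaked credential database does not directly reveal passwords, and the constant-time comparison avoids timing side channels during verification.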

IoT Attack Vectors

In this section, we discuss the concept of IoT attack vectors, attack surfaces, and threat vectors to clarify the difference between these cybersecurity terms that are often used interchangeably. We discuss some IoT attack vectors that should be taken into consideration when designing cybersecurity strategies for IoT networks and systems. We also discuss some strategies that can be used to eliminate or mitigate the risk posed by IoT attack vectors.

IoT attack vector, attack surface, and threat vector

IoT attack vectors are the various methods that can be used by cybercriminals to access IoT devices in order to launch cyberattacks on the IoT infrastructure or other information system infrastructure of an organisation or the Internet as a whole. They provide a means for cybercriminals to exploit security vulnerabilities to compromise the confidentiality, integrity, and availability of sensitive data. It is very important to minimise the attack vectors to reduce the risk of a security breach, which may cost an organisation a lot of money and damage its reputation.

The number of attack vectors keeps growing as cybercriminals keep developing numerous simple and sophisticated methods to exploit unresolved security vulnerabilities and zero-day vulnerabilities in computer systems and networks. In this way, there is no single solution to mitigate the risk posed by the growing number of attack vectors in classical computer systems and networks. As the number of IoT devices connected to the internet increases, the number of IoT-related attack vectors also increases, requiring the development of a holistic cybersecurity strategy that handles the traditional attack vectors (e.g., malware, viruses, email attachments, web pages, pop-ups, instant messages, text messages, social engineering, credential theft, vulnerability exploits, and insufficient protection against insider threats) and those that are designed to target IoT systems (e.g., exploitation of IoT-based vulnerabilities such as weak or no passwords, lack of firmware and software updates, unencrypted communications).

In order to defend IoT networks and systems, it is important to understand the various ways a cybercriminal can use to gain unauthorised access to IoT networks and systems. The term threat vector is often used interchangeably with attack vector. An IoT threat vector is the total number of potential ways or methods that cybercriminals can use to compromise the confidentiality, integrity, or availability of IoT data and systems. As IoT networks grow in size and are integrated with other IT and cyber-physical systems, the complexities of managing them increase, and the number of threat or attack vectors increases. Therefore, it is very challenging to eliminate all threat or attack vectors, but IoT-based cybersecurity systems are designed to eliminate threat or attack vectors whenever possible.

An IoT attack surface is the total number of attack vectors that cybercriminals can use to manipulate an IoT network or system to compromise its data confidentiality, integrity, or availability. That is, it is the combination of all IoT attack vectors available to cybercriminals to use to compromise IoT data and systems. It implies that the more IoT attack vectors an organisation has due to the deployment of IoT systems, the larger their cybersecurity attack surface and vice versa. Therefore, in order to minimise the attack surface, organisations must minimise the number of attack vectors.

Some IoT attack vectors

In order to eliminate IoT attack vectors, it is important to understand the nature of some of these attack vectors and their sources and then develop comprehensive security strategies to deal with them. In this section, we will discuss IoT attack vectors from the perception layer to the application layer. Some of the IoT attack vectors or ways in which cybercriminals can gain illegal access to IoT networks and systems (to compromise data security or launch further attacks) include the following:

  • Compromised user or device credentials: Password compromise is one of the most common ways for cybercriminals to gain unauthorised access to IoT systems. This is partly because some IoT device manufacturers ship devices with hardcoded passwords, and sometimes with default passwords that are rarely changed. This gives cybercriminals easy access to IoT devices, which they use to conduct sophisticated attacks such as DDoS attacks. Credentials used to log in to IoT mobile and web applications can also be compromised by cybercriminals through data leaks, phishing scams, malware, and brute-force attacks.
  • Weak cryptographic algorithms: It is very challenging to implement strong cryptographic algorithms in IoT devices due to hardware constraints, making it easy for cybercriminals to access IoT data transported over wireless communication channels. Also, the confidentiality of sensitive data stored on IoT devices can easily be compromised. Hence, weak cryptographic algorithms, or the absence of data encryption altogether, make it attractive for cybercriminals to try to access IoT data through man-in-the-middle attacks.
  • Open communication ports: Unsecured and unnecessarily open ports (virtual entry points into a device that associate network traffic with a given application or process) can be exploited by cybercriminals to gain access to the device. Every open, unsecured port is a threat vector that cybercriminals can exploit to attack IoT devices, servers, and applications.
  • Misconfigurations: Poorly configured IoT devices, network devices, servers, computing nodes, and applications can serve as weak points that cybercriminals can exploit to attack the IoT network and systems. Thus, exploitation of vulnerabilities created by misconfiguration is one of the ways in which attackers can gain unauthorised access to IoT networks and systems.
  • Firmware vulnerabilities: Since IoT firmware and software are not regularly updated to patch security holes and to protect IoT devices from newly discovered security vulnerabilities, cybercriminals can exploit unresolved firmware and software vulnerabilities to gain unauthorised access to IoT devices and data. Thus, the exploitation of firmware and software vulnerabilities is one of the ways cybercriminals can easily compromise the security of IoT networks and systems.
  • Zero-day vulnerabilities: New security vulnerabilities (flaws in hardware or software) are discovered regularly. If a vulnerability exists for which the developer has not yet released a security patch, or for which the user has not installed the available update, attackers are likely to exploit it to gain unauthorised access to IoT networks and systems. The exploitation of an unknown vulnerability or software flaw before a security patch is released is called a zero-day attack. Together with the exploitation of known but unresolved vulnerabilities, this is one of the attack vectors that cybercriminals use to compromise the security of IoT networks and systems.
  • Cross-site scripting (XSS): This is a browser-based attack vector in which malicious code is injected into a browser-based application designed for users to access IoT services. For many IoT applications, end-users access the IoT services hosted on cloud computing platforms through web and mobile applications using their browsers. Cybercriminals can inject malicious code into IoT web applications, redirect users to fake websites, and trick the browser into executing malicious code that downloads malware infecting user devices. That is, the injected code can launch a malicious script that infects the user's device and steals information. Hence, since IoT services are provided to users through web-based applications, this kind of attack vector will be targeted by cybercriminals.
  • SQL injection: A lot of IoT data is stored in structured databases and then accessed through web and mobile applications by users and other applications. The data stored in structured databases is often managed using SQL (Structured Query Language), a kind of programming language that is used to administer or interact with the database to store, access, and manipulate the data. An SQL injection attack vector is one in which an attacker leverages known vulnerabilities to inject malicious SQL statements into an application to trick the server into allowing the attacker to illegally extract, alter, or delete information. In the case of IoT applications in which sensor data is collected and stored in structured databases, this type of attack vector will likely be targeted.
  • Distributed Denial of Service (DDoS) attacks: This type of attack vector involves the use of bots to infect IoT devices and then create a botnet (network of bots) that can be controlled to overwhelm IoT gateways, services, data centres, and web applications with a massive amount of traffic or requests. This type of attack aims to cause the IoT gateways, services, data centres, and web applications to crash, depriving users of access to IoT services. That is, the attacker takes over a large number of IoT devices, creates a botnet, and redirects traffic from those devices to IoT gateways, services, data centres, and web applications with the goal of disrupting IoT services.
  • Session hijacking: Cybercriminals can gain unauthorised access to sensitive IoT data through session hijacking. When IoT users log in to access IoT services, they are provided with a session key or cookie so that they do not need to log in again. This cookie can be hijacked by an attacker, who uses it to gain access to sensitive IoT information [40].
  • Malware infection: This attack vector involves the use of malicious software (malware) designed to take control of an IoT network or system. Malware may corrupt and steal data and can also be used to carry out malicious attacks on multiple IoT devices and other systems. Examples of malware that can be used to target IoT networks and systems include ransomware (malware that encrypts valuable IoT data, or the data of IoT users, to deprive legitimate access to the data until a ransom is paid) and trojans (malware that can be used to create a backdoor that gives attackers unauthorised access to IoT networks and systems).
  • Phishing: This type of attack vector may be targeted at employees of IoT organisations or at users to compromise their login credentials. It involves social engineering strategies in which the target is contacted by email, telephone, or text message by someone posing as a legitimate colleague or institution to trick them into providing sensitive data, credentials, or personally identifiable information (PII). It is one of the most commonly used attack vectors for gaining unauthorised access to sensitive information, and it is also the starting point for many other forms of attack, such as ransomware attacks (which often start with phishing campaigns against their targets) and spyware infections (spyware is malware that can leak sensitive IoT data to attackers).
  • Brute-force attack: This attack vector aims to compromise authentication credentials and encryption keys in order to gain unauthorised access to IoT data. It uses trial and error to guess the password or encryption key. If the password or the encryption key is not strong enough, the attacker can illegally gain access to IoT devices. The use of default passwords and weak encryption schemes in IoT devices makes them susceptible to these kinds of attacks.
  • Physical attacks: This type of attack vector involves the adversary's physical access to the IoT device. If an attacker can physically access deployed IoT devices, it is possible to steal sensitive data, and also to compromise the devices and later use them to conduct attacks on IoT networks and other systems.
  • Insider attack: It is also important to consider the fact that legitimate users or employees could decide to leak sensitive IoT data to external entities, compromising the confidentiality of the data. An insider may also delete sensitive data intentionally or unintentionally. This kind of attack vector should be considered when designing a cybersecurity strategy for IoT networks and systems.
  • Exploitation of supply chain vulnerabilities: This kind of attack vector involves the exploitation of vulnerabilities present in third-party hardware and software systems. Attackers could go after vulnerabilities that the supplier of the hardware or software system may not yet have discovered. Therefore, vulnerabilities present in third-party products may become entry points for attackers to gain unauthorised access to IoT networks and systems.
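
To see why weak or default credentials make the brute-force vector above so effective, the sketch below exhaustively searches for a short, all-lowercase password given only its unsalted SHA-256 hash. The password "doll" and the leaked hash are hypothetical; the point is that a four-letter lowercase password has under half a million candidates, which a laptop exhausts in well under a second.

```python
import hashlib
import itertools
import string

def brute_force_sha256(target_hash, length=4):
    """Try every lowercase candidate of the given length until one matches."""
    for combo in itertools.product(string.ascii_lowercase, repeat=length):
        candidate = "".join(combo)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None  # password is not in the searched space

# Hypothetical leaked hash of a weak device password.
leaked = hashlib.sha256(b"doll").hexdigest()
print(brute_force_sha256(leaked))  # recovers "doll" almost instantly
```

The same search becomes computationally infeasible once passwords are long, mixed-character, and stored with a salted, deliberately slow hash, which is exactly what the password-policy recommendations earlier in this chapter call for.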

The attack vectors discussed above can be grouped into two categories: passive and active attack vectors. Passive attack vector exploits are the various ways that attackers can gain unauthorised access to IoT networks and systems without intruding on or interfering with their operation; examples include phishing and other social engineering-based attack vectors. Active attack vector exploits, on the other hand, are those that interfere with the operation of the IoT network and system; examples include DDoS attacks, brute-force attacks, malware attacks, etc.

Strategies to defend against well-known IoT attack vector exploits

In order to address common attack vectors, it is important to understand the nature of the attack vector exploits, including passive and active ones. Most attack vector exploits share some common characteristics, which include the following:

  • The attackers first identify targets that they intend to go after.
  • The attackers use social engineering strategies, malware, phishing, and vulnerability scanning tools to scan the IoT network and other information systems of the targeted victim to identify the vulnerabilities that they intend to exploit.
  • The attackers set out to identify a set of attack vectors that they intend to exploit and then search for the tools required to carry out the attack vector exploits.
  • Attackers gain unauthorised access to the IoT systems, steal sensitive data, install malware, and sometimes escalate the attack by using the devices that they have compromised to carry out further attacks to compromise other system resources.
  • The attackers try to cover their tracks to remain undetected while they steal valuable data or use computing and communication resources.

It is essential to identify and deploy effective security tools and policies to deal with IoT attack vectors. These security tools and policies should be designed to effectively eliminate or reduce the risk from IoT attack vectors from the IoT perception layer to the application layers. Some of the strategies that can be designed to defend IoT networks and systems against well-known IoT attack vectors include the following:

  • Create secure authentication policies: Ensure that default passwords are replaced with strong passwords. Also, encourage the use of password managers to ensure that login credentials are strong and resilient to brute force attacks.
  • Implementation of strong energy-efficient cryptographic schemes: The IoT data stored in IoT devices, computing devices, network devices, and databases should be encrypted or transformed to a format that is unintelligible to unauthorised entities. Data should be encrypted before being transported over communication networks.
  • Secure communication ports: All communication ports should be secured, and unused ports should be closed to ensure that they are not exploited.
  • Identify and resolve vulnerabilities: Use security monitoring tools to identify and resolve vulnerabilities as quickly as possible to ensure that they are not exploited to compromise the security of the IoT network and systems. Also, install or apply security updates as soon as they are released in order to quickly patch security vulnerabilities that may be targeted by attackers.
  • Enforce the principle of least privilege: Grant only the necessary permissions to firmware components and processes. Also, at the networking and application layers, users should be granted only the privileges that they need, and privileges that a user no longer needs should be revoked.
  • All IoT devices in the network should be identifiable. In order to avoid unwanted access, every device should have a distinct identity so that it can be effectively monitored, and every device must authenticate before it can access IoT networks and systems.
  • Adoption of secure software development methods. The code should be well-tested and reviewed to ensure that security vulnerabilities can be identified and resolved. Also, we should ensure that the libraries used to implement the device firmware are secured and well-tested. When programming IoT devices, copying of already written code from the internet should be minimised to ensure that it does not introduce security vulnerabilities.
  • Continuous monitoring of the IoT devices: Keep an up-to-date inventory of all connected devices and monitor the activities within IoT devices and other systems. Automated tools should be used to discover all connected devices and continuously scan them to identify vulnerabilities and deal with them.
  • Regular security update and patching: Although managing and installing security updates and patching security gaps for thousands of devices can be challenging, Remote Management and Monitoring (RMM) tools can be used to perform regular security updates and patching. This will ensure that IoT device firmware and software are always up to date.
  • Decommission unused IoT devices: Unused IoT devices should be removed from the IoT network. A device that is not being used may not be regularly updated or properly secured, which poses a risk to the IoT network and systems. Thus, any IoT device, and any other hardware or software system, that is no longer in use should be removed from the IoT network.
  • Implement centralised management for IoT devices: Managing IoT devices, network traffic and data flow from a single point facilitates the detection of malicious events and swiftly addresses them. It also facilitates the implementation of integrated cybersecurity systems that enforce the implementation of security controls throughout the network.
  • Isolate IoT devices from critical system resources and data: By isolating IoT devices from critical system resources and data, we ensure that even if the IoT network is compromised, the attacker cannot move laterally across the network to compromise critical system resources and networks. Segmenting the network and isolating the IoT devices from some of the organisation's networks also gives the organisation more visibility and control over the network.
  • Use updated antimalware software: Keep antimalware software up to date to ensure that it can protect against the latest malware.
  • Deploy attack detection and response tools: Deploy automated attack detection and response tools that can quickly detect and stop cyberattacks as soon as they are launched. AI and machine learning tools should be leveraged to design automated attack prevention, detection and response tools for IoT.
  • Regular and effective training of employees: Employees should be well trained to handle cybersecurity tools and to be able to detect social engineering and phishing attacks designed to trick them into leaking sensitive information.
  • Ensuring supply chain security: Ensuring that third-party hardware and software tools are well-secured so that they do not introduce security vulnerabilities that attackers can exploit. Also, ensure that third-party software is regularly updated on time.
  • Zero-trust security approach: Apply the Zero Trust (ZT) security framework to ensure that all users, whether in or outside the organization’s network, are authenticated, authorized, and continuously validated for security configuration and posture before being granted or keeping access to IoT networks, systems, applications and data.
  • System-based security approach: The IoT security landscape is very complex and is constantly changing, requiring integration of security tools, security policies, people, and diverse types of information and cyber-physical systems. The best way to manage the complex and dynamic interaction of complex components that constitute the IoT infrastructure is to use a system-based approach. Concepts from the growing fields of systems thinking, systems dynamics, and software engineering can be borrowed to model and design robust and secure cybersecurity systems for IoT networks and systems.
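
Several of the measures above, mandatory authentication, unique device identities, and strong credentials, can be combined in a simple challenge-response scheme. The sketch below is a minimal illustration, assuming a hypothetical per-device pre-shared key provisioned at manufacturing time: the gateway sends a fresh random nonce, and the device proves possession of its key with HMAC-SHA256, so the key itself never crosses the network.

```python
import hashlib
import hmac
import os

# Hypothetical per-device pre-shared keys, provisioned at manufacturing time.
DEVICE_KEYS = {"sensor-01": b"\x13\x37" * 16}

def make_challenge() -> bytes:
    """Gateway side: a fresh random nonce defeats replay of old responses."""
    return os.urandom(32)

def device_response(device_id: str, challenge: bytes) -> bytes:
    """Device side: prove possession of the key without transmitting it."""
    return hmac.new(DEVICE_KEYS[device_id], challenge, hashlib.sha256).digest()

def gateway_verify(device_id: str, challenge: bytes, response: bytes) -> bool:
    """Gateway side: recompute the expected response and compare safely."""
    expected = hmac.new(DEVICE_KEYS[device_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time comparison

challenge = make_challenge()
assert gateway_verify("sensor-01", challenge, device_response("sensor-01", challenge))
```

In a real deployment the pre-shared key would live in secure storage on the device, and the exchange would run over an encrypted channel; this sketch only illustrates the authentication logic.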

IoT Security Technologies

In the previous section of this chapter, we discussed the various IoT vulnerabilities, cybersecurity attacks, and attack vectors and the various best practices to address these vulnerabilities, threats, and attack vectors. In this section, we present the various IoT security technologies and a general methodology for securing IoT networks and systems.

Security technologies for various IoT layers

In order to design a robust and comprehensive cybersecurity system, a variety of cybersecurity tools are deployed. No single cybersecurity tool can handle security issues at all the layers of the IoT reference architecture; appropriate security tools must therefore be implemented at each layer, from the IoT perception or device layer to the application layer. IoT security can thus be divided into the following categories:

  • IoT device security
  • IoT network security
  • IoT fog/cloud security
  • IoT application security

The hardware constraints of IoT devices make it hard to deploy traditional end-node security tools like firewalls and anti-malware software to secure them. It is also very difficult to update and patch these devices in the way we update and install security patches on traditional end nodes. However, a lot of effort is still being made to adapt traditional security technologies to secure IoT devices, although there is a growing need for security technologies that can address the specific security needs of IoT nodes at a lower energy and communication cost. Some of the technologies designed to secure IoT devices include:

Lightweight Energy-efficient Encryption Algorithms

To enhance the security of data transmitted by IoT devices, it is critical to implement lightweight cryptographic encryption algorithms designed for efficient performance on devices with limited processing power and energy constraints. Algorithms such as the Advanced Encryption Standard (AES) and other optimized, energy-efficient cryptographic schemes play a pivotal role in protecting data integrity and confidentiality; the older Data Encryption Standard (DES) is now considered insecure and should be avoided in new designs.

Importance of Lightweight Encryption Algorithms for IoT

  1. Efficiency and Suitability: Unlike traditional computing systems, many IoT devices operate with constrained computational resources, limited memory, and reduced battery capacity. Therefore, lightweight cryptographic algorithms are essential because they provide robust encryption without overburdening device capabilities. AES-128, for example, strikes a balance between security and efficiency, and dedicated lightweight block ciphers such as PRESENT and Speck have been designed for even more constrained hardware. These choices ensure that IoT devices can encrypt data effectively without significant energy drain or processing delays.
  2. Securing Data in Transit: Encryption algorithms protect data as it is transmitted from IoT devices to central servers, cloud platforms, or other networked endpoints. By encoding the data, these algorithms prevent unauthorized interception or tampering during transmission, ensuring that sensitive information—such as health metrics, industrial sensor readings, or home security footage—remains confidential and intact.
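
As a concrete illustration of how small a lightweight block cipher can be, the sketch below implements Speck64/128, a cipher designed for constrained devices, using only 32-bit additions, rotations, and XORs. This is an educational sketch following the published round structure, not a vetted production implementation; real deployments should use an audited library.

```python
MASK = 0xFFFFFFFF  # Speck64 operates on 32-bit words

def ror(x, r):  # rotate a 32-bit word right by r bits
    return ((x >> r) | (x << (32 - r))) & MASK

def rol(x, r):  # rotate a 32-bit word left by r bits
    return ((x << r) | (x >> (32 - r))) & MASK

def expand_key(k0, l_words, rounds=27):
    """Derive the 27 Speck64/128 round keys from one k word and three l words."""
    keys, l = [k0], list(l_words)
    for i in range(rounds - 1):
        l.append(((keys[i] + ror(l[i], 8)) & MASK) ^ i)
        keys.append(rol(keys[i], 3) ^ l[i + 3])
    return keys

def encrypt(x, y, round_keys):
    """One add-rotate-xor round per round key over the 64-bit block (x, y)."""
    for k in round_keys:
        x = ((ror(x, 8) + y) & MASK) ^ k
        y = rol(y, 3) ^ x
    return x, y

def decrypt(x, y, round_keys):
    """Apply the exact inverse of each round, in reverse order."""
    for k in reversed(round_keys):
        y = ror(y ^ x, 3)
        x = rol(((x ^ k) - y) & MASK, 8)
    return x, y

# Round-trip check with an arbitrary key and block.
rks = expand_key(0x03020100, [0x0b0a0908, 0x13121110, 0x1b1a1918])
assert decrypt(*encrypt(0x3b726574, 0x7475432d, rks), rks) == (0x3b726574, 0x7475432d)
```

The entire cipher fits in a few dozen lines with no multiplications or lookup tables, which is precisely why such designs suit 8- and 16-bit microcontrollers where AES hardware acceleration is unavailable.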

Data Protection During Storage and Transmission

  1. Encryption of Data at Rest: Encryption algorithms extend their utility beyond data transmission; they are also vital for securing data at rest. Data stored in device memory, cloud databases, or on-premise servers must be encrypted to mitigate the risk of data breaches. This is especially critical for IoT applications in sectors such as healthcare, finance, and smart cities, where breaches could lead to significant privacy violations or operational disruptions.
  2. Securing Communication Channels: For data in transit, encryption protocols ensure that communication channels are secure. This can include the use of Transport Layer Security (TLS) in combination with lightweight encryption algorithms to create a secure communication pathway. By encrypting the data packets before transmission and decrypting them at the receiving end, IoT systems can prevent man-in-the-middle (MitM) attacks and other types of eavesdropping.
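
In Python, for example, a client connecting to an IoT backend can obtain an authenticated, encrypted channel from the standard library's ssl module; the hostname below is a placeholder, and the sketch shows the configuration rather than a specific product's API.

```python
import socket
import ssl

# create_default_context() enables certificate validation, hostname checking,
# and sensible cipher choices out of the box.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS/SSL versions

def fetch_securely(host: str, request: bytes) -> bytes:
    """Open a TLS-protected TCP connection; a MitM attempt fails the handshake."""
    with socket.create_connection((host, 443), timeout=10) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            tls_sock.sendall(request)
            return tls_sock.recv(4096)

# Usage with a placeholder backend host (not contacted here):
# reply = fetch_securely("iot-backend.example.com",
#                        b"GET / HTTP/1.0\r\nHost: iot-backend.example.com\r\n\r\n")
```

Because the default context requires a valid certificate chain and a matching hostname, a man-in-the-middle presenting a forged certificate causes the handshake to fail before any application data is sent.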

Firmware Integrity Verification

  1. Ensuring Authentic Firmware Updates: Maintaining the integrity of IoT device firmware is essential for preventing the deployment of malicious updates that could compromise device functionality or provide attackers with unauthorized access. Cryptographic digital signatures play a vital role in this process. Before an IoT device accepts and installs firmware updates, the device verifies the cryptographic signature attached to the update.
  2. Process of Verification: Digital signatures utilize public key cryptography to ensure authenticity. When a firmware update is created, it is signed with a private key held by the manufacturer or trusted source. The IoT device, which holds the corresponding public key, verifies the signature upon receiving the update. If the signature matches, the device confirms that the update has not been tampered with and originates from an authentic source. If the signature fails, the device rejects the update to prevent the installation of potentially harmful software.
  3. Protection Against Unauthorized Modifications: This verification process ensures that firmware updates remain secure from unauthorized modifications, safeguarding devices from potential exploitation. Attackers often attempt to inject malicious code through spoofed or altered firmware. By requiring cryptographic signature verification, IoT ecosystems can defend against these risks and maintain trust in device operation.
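
The sign-then-verify flow described in steps 1–3 can be sketched with textbook RSA on deliberately tiny parameters. The key pair below (n = 3233, e = 17, d = 2753) is a toy example and is completely insecure; real firmware signing uses 2048-bit-plus RSA or Ed25519 with proper padding via a vetted cryptographic library. The sketch only illustrates the flow: the manufacturer signs the firmware digest with the private exponent, and the device checks it with the public exponent.

```python
import hashlib

# Toy RSA key pair (p=61, q=53): INSECURE, for illustration of the flow only.
N, E, D = 3233, 17, 2753

def firmware_digest(firmware: bytes) -> int:
    """Hash the image with SHA-256 and reduce it into the toy modulus."""
    return int.from_bytes(hashlib.sha256(firmware).digest(), "big") % N

def sign_firmware(firmware: bytes) -> int:
    """Manufacturer side: sign the digest with the private exponent d."""
    return pow(firmware_digest(firmware), D, N)

def verify_firmware(firmware: bytes, signature: int) -> bool:
    """Device side: recover the digest with the public exponent e and compare."""
    return pow(signature, E, N) == firmware_digest(firmware)

image = b"firmware v1.2.3"  # hypothetical firmware image
signature = sign_firmware(image)
assert verify_firmware(image, signature)  # authentic update is accepted
```

A device holding only (N, E) can check any update, but without D it cannot forge a signature, so a tampered image or a spoofed update fails verification and is rejected before installation.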

Enhanced Security Through Layered Cryptographic Solutions

  1. Combining Encryption with Other Security Measures: While encryption is a powerful tool, comprehensive IoT security involves a layered approach that integrates encryption with other security protocols. This can include network segmentation, multi-factor authentication (MFA), and intrusion detection systems (IDS). Combining encryption with these practices helps create a robust defence strategy that protects data and infrastructure from a variety of attack vectors.
  2. Future-Proofing with Emerging Cryptographic Techniques: As IoT technology evolves, so too do the methods employed by cybercriminals. To stay ahead, organizations should look into adopting emerging cryptographic techniques like elliptic curve cryptography (ECC), which offers strong security with lower computational overhead than traditional algorithms. Such advancements ensure that IoT systems remain secure, even as processing power and attack sophistication increase over time.

Implementing efficient cryptographic algorithms, such as AES or dedicated lightweight ciphers for highly constrained devices, is fundamental for ensuring that data transmitted by IoT devices is secure; legacy algorithms such as DES are no longer considered secure and should be avoided. These algorithms not only safeguard data during storage and communication but also play a critical role in verifying the integrity of firmware updates. By utilizing cryptographic digital signatures, IoT systems can confirm that updates are authentic and unaltered, reinforcing the trustworthiness of the entire IoT ecosystem. For comprehensive security, integrating these cryptographic practices with other proactive measures ensures resilience against a range of cyber threats.

Secure Firmware Verification and Update Mechanisms

The security and reliability of IoT devices are heavily dependent on their firmware, which acts as the foundational software layer that controls the hardware’s functions. Because IoT devices are typically connected to the internet 24/7, they are exposed to a wide range of cybersecurity threats. This makes regular and secure firmware updates critical to patch vulnerabilities, enhance functionality, and defend against new attack vectors. Without secure mechanisms for firmware verification and updates, IoT devices can become entry points for attackers to compromise network security, disrupt services, or steal sensitive data.

Common Firmware-Based Security Risks in IoT Devices

  1. Weak or No Encryption: Many IoT devices have firmware that lacks sufficient encryption protocols. This oversight leaves the device vulnerable to eavesdropping and unauthorized access by malicious actors who can intercept unencrypted data and use it to compromise the device or network. Implementing robust encryption standards is essential to ensure that data communicated between the device and servers remains secure.
  2. Weak Authentication Measures: IoT firmware often includes hardcoded or weak credentials, which attackers can easily exploit. Such vulnerabilities provide an entry point for unauthorized users to gain control over the device. To mitigate this risk, firmware should be designed to support strong, configurable authentication methods that require users to implement unique, complex credentials.
  3. Absence of Secure Update Mechanisms: The lack of secure update procedures poses significant risks. Firmware that cannot be securely updated or patched leaves devices exposed to known vulnerabilities, allowing attackers to exploit these weaknesses to launch cyberattacks. Secure update mechanisms that involve digital signatures and integrity checks should be incorporated to ensure only authentic and authorized updates are applied.
  4. Risk of Tampering and Alteration: IoT devices without secure boot and update procedures are highly susceptible to tampering. Attackers can modify or replace firmware with malicious code, enabling them to control the device or create persistent backdoors. Implementing secure boot processes ensures that the device only loads firmware that has been verified and authenticated, preventing unauthorized code from executing during start-up.
  5. Threats from Poor Development Practices: Insufficient security measures during the firmware development phase can result in built-in vulnerabilities that attackers can exploit. Poor coding practices or the introduction of security flaws by malicious insiders increase the risk of firmware being compromised. Ensuring robust security protocols during development, such as code reviews, automated security testing, and secure development lifecycles, is crucial for minimizing these risks.

Best Practices for Secure Firmware Verification and Updates

  1. Secure Boot Processes: A secure boot process is vital for protecting IoT devices from running unauthorized or malicious firmware during start-up. This process involves cryptographic verification, where the device’s firmware is digitally signed by the manufacturer. The device’s hardware checks this signature before loading the firmware, ensuring that only firmware verified by the manufacturer is allowed to run. This step prevents tampering, unauthorized modifications, and malware injection attacks.
  2. Digital Signatures for Verification: Digital signatures provide an additional security layer by authenticating the source and integrity of firmware updates. The use of public-key cryptography ensures that the firmware has not been altered in transit and comes from a trusted source. Any update that fails the signature verification is rejected, safeguarding the device from potentially harmful code.
  3. Secure Over-the-Air (OTA) Update Mechanisms: An over-the-air (OTA) update remotely updates the software or firmware of an IoT device without the need for physical intervention, allowing manufacturers and network administrators to efficiently distribute patches, feature enhancements, security fixes, and bug resolutions to devices connected over a network. This remote update capability is crucial for maintaining device performance, addressing emerging vulnerabilities, and ensuring that devices operate with the latest security protocols, while reducing the downtime and logistical challenges associated with manual updates. To be secure, OTA updates should include encrypted data transmission, authentication protocols to verify the source of the update, and integrity checks to confirm that the update has not been tampered with in transit. Proper implementation of OTA mechanisms not only enhances the functionality and security of IoT devices but also strengthens the overall resilience of the IoT ecosystem.
  4. Integrity Checks and Fail-Safe Mechanisms: Incorporating integrity checks during the update process helps ensure that firmware has not been altered or corrupted. Devices should be equipped with rollback mechanisms that revert to a known safe state if an update fails validation or disrupts functionality. This ensures continuous operation and protects against accidental or malicious firmware corruption.
  5. Regular Security Audits and Patch Management: Firmware should be regularly audited for vulnerabilities, even post-deployment. Manufacturers should maintain a proactive approach to identifying potential weaknesses and releasing patches promptly. IoT devices should support automated patch management to streamline the distribution and application of updates while ensuring that each update passes security checks before installation.
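The OTA integrity checks and fail-safe rollback described above can be illustrated with a minimal two-slot (A/B) update sketch. The class, field names, and slot model are assumptions for illustration, not a specific vendor's API:

```python
import hashlib

class Device:
    """Toy model of an A/B-slot OTA scheme (names are illustrative)."""

    def __init__(self, firmware: bytes):
        self.active_slot = firmware   # last known-good image keeps running
        self.staging_slot = None

    def apply_ota(self, image: bytes, expected_sha256: str) -> str:
        # 1. Stage the download; never overwrite the running image directly.
        self.staging_slot = image
        # 2. Integrity check: reject corrupted or tampered payloads.
        if hashlib.sha256(image).hexdigest() != expected_sha256:
            self.staging_slot = None  # fail-safe: active image is untouched
            return "rejected"
        # 3. Swap slots only after the check passes.
        self.active_slot, self.staging_slot = image, None
        return "installed"

dev = Device(b"fw-v1")
new_image = b"fw-v2"
print(dev.apply_ota(new_image, hashlib.sha256(new_image).hexdigest()))   # installed
print(dev.apply_ota(b"garbled", hashlib.sha256(new_image).hexdigest()))  # rejected
print(dev.active_slot == b"fw-v2")  # True: device kept the last good image
```

Because the running image is only replaced after the staged copy validates, a failed or corrupted download leaves the device operating on its previous known-good firmware.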

The Role of Standards and Regulations

Adhering to industry standards and regulations, such as those outlined by the Internet Engineering Task Force (IETF) and National Institute of Standards and Technology (NIST), can bolster the security of IoT firmware. These guidelines provide best practices for secure development, encryption protocols, and authentication mechanisms. Compliance with these standards helps establish trust among users and aligns with global cybersecurity expectations.

Manufacturers and businesses deploying IoT devices should ensure that their firmware update processes and verification mechanisms comply with relevant security standards. This not only protects devices from cyberattacks but also demonstrates a commitment to security and can provide competitive advantages in industries where data protection is paramount.

Secure firmware verification and update mechanisms are indispensable for maintaining the security and integrity of IoT devices. Implementing a secure boot process that loads and executes only trusted, digitally signed firmware is essential to prevent unauthorized or tampered firmware from running. This measure protects IoT devices from malware injection attacks during start-up. Additionally, secure over-the-air (OTA) update mechanisms should be established to enable the safe delivery of patches and security updates to IoT devices, safeguarding against man-in-the-middle attacks and unauthorized modifications during the update process [41]. These strategies, combined with rigorous development practices and compliance with industry standards, create a robust security framework that supports the safe operation of IoT ecosystems.

Blockchain-based firmware updates

Regular firmware updates for IoT devices are essential to maintain security and functionality; however, ensuring the authenticity, integrity, and compatibility of these updates poses significant challenges. Leveraging blockchain technology can enhance the security and reliability of the entire update process—from generation and signing to distribution, verification, and installation. This approach greatly reduces the risk of malicious tampering, unauthorized modifications, or errors that could compromise devices or networks.

Blockchain technology facilitates transparent collaboration among multiple stakeholders, allowing them to contribute to and review firmware code while maintaining a clear, traceable record of versions and code changes. Digital signatures and cryptographic hashes can be employed to confirm the source's identity and the integrity of the update content. Additionally, blockchain consensus mechanisms and smart contracts provide a robust framework for verifying and executing updates, as well as recording and auditing the results. This ensures a comprehensive and secure process for firmware updates, safeguarding both devices and connected networks.
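A minimal sketch of the hash-linking idea behind such a firmware ledger, assuming a simple list of blocks with SHA-256 links (a real blockchain adds consensus mechanisms, timestamps, and digital signatures on top of this structure):

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    """Deterministic SHA-256 over the block body (keys sorted for stability)."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_release(chain: list, version: str, firmware_sha256: str) -> None:
    """Append a firmware-release record linked to the previous block."""
    body = {
        "version": version,
        "firmware_sha256": firmware_sha256,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    chain.append({**body, "hash": block_hash(body)})

def chain_valid(chain: list) -> bool:
    """Re-derive every hash and link so any tampering with history shows up."""
    for i, blk in enumerate(chain):
        body = {k: v for k, v in blk.items() if k != "hash"}
        if blk["hash"] != block_hash(body):
            return False
        if i > 0 and blk["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
append_release(ledger, "1.0.0", hashlib.sha256(b"fw-1").hexdigest())
append_release(ledger, "1.1.0", hashlib.sha256(b"fw-2").hexdigest())
print(chain_valid(ledger))      # True
ledger[0]["version"] = "9.9.9"  # attempt to rewrite release history
print(chain_valid(ledger))      # False: the broken link is detected
```

A device or auditor holding only the chain can thus verify that the recorded release history and firmware hashes have not been altered after the fact.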

Antimalware tools for IoT security

Cybercriminals are creating increasingly sophisticated malware to target the specific vulnerabilities of IoT devices. These attacks can vary in severity, from harmless pranks, such as altering the temperature on a smart thermostat, to more serious threats, like taking control of security cameras or compromising industrial control systems. IoT malware differs significantly from traditional computer viruses. These malicious programs are typically engineered to function on devices with limited processing power and memory, making detection and removal more difficult. Additionally, they can quickly propagate through networks of connected devices, forming extensive botnets capable of carrying out powerful distributed denial-of-service (DDoS) attacks.

The variety of IoT malware showcases the ingenuity of cybercriminals, who are continually devising new methods to exploit these devices—often outpacing manufacturers' ability to release timely patches for vulnerabilities [42]. It is advisable to implement comprehensive security technologies to safeguard IoT devices from malware-based threats. Deploying robust anti-malware solutions, including antivirus, antispyware, anti-ransomware, and anti-trojan software, can significantly enhance the protection of IoT devices. These security measures help detect, prevent, and neutralize malicious programs before they can compromise device functionality or data integrity. Given the unique vulnerabilities and limited processing power of many IoT devices, it is crucial to choose lightweight, efficient security solutions tailored to their specific needs. Additionally, integrating these anti-malware tools with real-time threat monitoring and automatic updates can further bolster the defence against rapidly evolving cyber threats.

Effective authentication management technologies, such as password management systems and multi-factor authentication, should be adopted to ensure secure credential management.

Secure Credential Management: Avoid using default or hardcoded credentials in firmware, as attackers can easily discover them and gain unauthorized access. Instead, implement strong authentication mechanisms such as multi-factor authentication to enhance security. Encourage users to change default passwords during the initial setup of the IoT device to prevent potential attacks based on known credentials.

Leveraging SNMP Monitoring for IoT Device Security

Simple Network Management Protocol (SNMP) plays an essential role in maintaining the security and operational integrity of IoT devices within a network. This widely adopted protocol is designed to collect data and manage network-connected devices, ensuring they remain protected against unauthorized access and other security threats. However, to effectively harness the capabilities of SNMP, organizations should utilize robust monitoring and management tools tailored for comprehensive oversight.

The Importance of SNMP Monitoring and Management: SNMP serves as a communication protocol that facilitates the exchange of management information between network devices and monitoring systems. It allows network administrators to oversee a range of connected devices such as routers, switches, IoT sensors, and other hardware. The information collected through SNMP can be invaluable for identifying potential security risks, detecting performance bottlenecks, and preemptively addressing issues before they escalate.

Key Features and Capabilities of SNMP Monitoring Solutions

Centralized Monitoring Platform: SNMP monitoring solutions provide a unified platform for administrators to keep track of all network-connected devices. This centralized approach simplifies the complex task of managing diverse IoT devices, enabling administrators to monitor device traffic, access points, and overall activity in real-time. Such comprehensive visibility ensures that any potential security breach or abnormal behavior can be quickly detected and addressed.

  • Traffic and Activity Analysis: With SNMP tools, network traffic can be analyzed to detect unusual patterns that might indicate malicious activity or unauthorized access attempts. Administrators can identify spikes in data flow, unexpected communication with external servers, or other anomalies that suggest the presence of malware or a cyberattack.
  • Hardware Performance Monitoring: Beyond security, SNMP solutions help monitor the health and performance of network devices. This includes tracking critical metrics such as CPU usage, memory load, and device uptime. By continuously assessing these parameters, administrators can detect signs of hardware failure or performance degradation, allowing for timely maintenance and minimizing the risk of downtime.
  • Custom Alerts and Notifications: One of the standout features of advanced SNMP management tools is the ability to create customized alerts. Administrators can set thresholds for various performance and security indicators, such as bandwidth usage or login attempts. When these thresholds are breached, the system sends out alerts, empowering the team to respond swiftly to potential issues. Customizable notifications ensure that threats are not overlooked and that teams remain proactive in addressing vulnerabilities.
  • Device Discovery and Classification: High-quality SNMP management solutions, such as NinjaOne, offer automated device discovery capabilities. This means that new devices added to the network are immediately identified and logged. The system can then classify these devices based on authentication credentials, device type, and other criteria. This feature is especially useful for maintaining an accurate inventory of all network assets and ensuring that unknown or rogue devices are promptly flagged for review.
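The threshold-based alerting described above can be sketched in a few lines. This is illustrative logic only: a real deployment would poll these metrics over SNMP (preferably SNMPv3) using a management tool or library rather than receive them as dictionaries, and the metric names and threshold values here are assumptions:

```python
# Assumed per-metric alert thresholds; tune per deployment.
THRESHOLDS = {"cpu_percent": 85.0, "failed_logins": 5, "bandwidth_mbps": 900.0}

def evaluate(device_id, metrics):
    """Return one alert message per metric that breaches its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"{device_id}: {name}={value} exceeds {limit}")
    return alerts

print(evaluate("sensor-12", {"cpu_percent": 92.5, "failed_logins": 2}))
# ['sensor-12: cpu_percent=92.5 exceeds 85.0']
print(evaluate("sensor-13", {"cpu_percent": 40.0}))  # []
```

In practice the returned messages would feed a notification channel (email, ticketing, or a SIEM), so the team is alerted only when a configured limit is actually breached.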

Enhancing IoT Security with SNMP: By integrating SNMP monitoring tools into the broader security strategy, organizations can bolster their defense mechanisms and strengthen their IoT ecosystem's resilience. Regular audits and real-time oversight provided by SNMP solutions enable better compliance with security protocols and help maintain the integrity of sensitive data transmitted through IoT devices. Additionally, integrating SNMP data with other cybersecurity tools, such as Security Information and Event Management (SIEM) systems, can provide deeper insights and enhance incident response capabilities.

Best Practices for Implementing SNMP Solutions

  • Configure Access Controls: Ensure that SNMP access is restricted to trusted administrators. Using SNMPv3, which supports encryption and enhanced security features, is recommended to safeguard the data being transmitted.
  • Regularly Update SNMP Software: Keep your SNMP management tools updated to protect against known vulnerabilities and ensure compatibility with the latest device firmware.
  • Utilize Multi-Layered Security: Combine SNMP monitoring with other security measures such as firewalls, intrusion detection systems (IDS), and endpoint protection solutions to create a multi-layered defence strategy.
  • Conduct Training and Awareness: Equip IT teams with the knowledge and training to leverage SNMP tools effectively. Understanding how to interpret SNMP data and respond to alerts is critical for maintaining network security.

Therefore, SNMP monitoring and management are vital for organizations looking to safeguard their IoT infrastructure. By implementing advanced SNMP solutions, businesses can achieve better visibility, proactive threat detection, and comprehensive control over their network, thus enhancing overall security and operational efficiency.

Network Security for IoT: Implementing Robust Encryption Protocols

Ensuring the security of communication between IoT devices and backend servers is a fundamental aspect of a strong network security framework. As IoT ecosystems continue to grow in complexity and scale, protecting the integrity, confidentiality, and authenticity of data transmissions becomes increasingly critical. One of the most effective strategies for securing these interactions is the implementation of robust encryption protocols, such as Transport Layer Security (TLS).

The Importance of Robust Encryption in IoT Security: IoT devices often transmit sensitive data, ranging from personal user information to industrial control signals. This data, if intercepted or tampered with, can lead to severe consequences, including data breaches, unauthorized access, and disruption of essential services. Encryption protocols act as a protective barrier, ensuring that data remains confidential and unaltered as it moves between devices and servers. By encrypting data in transit, organizations can minimize the risks associated with data interception and ensure secure communication.

How TLS Enhances IoT Security

Transport Layer Security (TLS) is a widely recognized encryption protocol designed to secure data transmitted over networks. TLS establishes an encrypted connection between IoT devices and backend servers, protecting data from eavesdropping and tampering. Here’s how TLS helps fortify network security in IoT ecosystems:

  • Data Encryption: TLS uses cryptographic algorithms to encrypt data before it is transmitted. This ensures that even if a malicious actor intercepts the communication, they would be unable to decipher the content without the appropriate decryption key. Encrypted data appears as a random, unreadable sequence, making it highly resistant to unauthorized access.
  • Authentication: TLS supports authentication mechanisms that verify the identities of communicating parties. This prevents man-in-the-middle (MitM) attacks, where attackers could impersonate a device or server to intercept and alter data. Mutual authentication, which can involve both device and server certificates, strengthens trust within the network by confirming that data is only exchanged between verified parties.
  • Data Integrity: TLS protocols incorporate hashing functions that maintain data integrity during transmission. These functions generate a unique checksum or hash value for each data packet. Upon reaching the destination, the hash value is compared to ensure that the data has not been tampered with in transit. If discrepancies are detected, the transmission is flagged as compromised.

Implementing TLS in IoT Networks

Implementing TLS across an IoT network involves several best practices and considerations:

  • Use TLS 1.2 or Higher: It is crucial to use the latest versions of TLS (preferably TLS 1.2 or TLS 1.3) to take advantage of enhanced security features and avoid vulnerabilities present in older versions. TLS 1.3, for instance, simplifies the handshake process and removes outdated algorithms, resulting in stronger security and faster connection establishment.
  • Certificate Management: The implementation of TLS relies on digital certificates issued by trusted Certificate Authorities (CAs). Proper certificate management is essential to maintain secure communications. Organizations should automate the certificate renewal process to prevent disruptions caused by expired certificates. Additionally, IoT devices must be capable of securely storing and managing certificates to protect against theft or misuse.
  • Device Compatibility and Resource Constraints: Given that many IoT devices are constrained by limited processing power, memory, and battery life, it’s important to optimize the implementation of TLS to avoid performance issues. Lightweight versions of TLS, as well as hardware acceleration for cryptographic operations, can be employed to strike a balance between security and device functionality.
  • Regular Security Updates and Patch Management: To keep TLS secure and effective, organizations must stay vigilant about applying security patches and updates. Cybercriminals are constantly developing new techniques to exploit vulnerabilities, so keeping devices and backend servers updated ensures that the encryption mechanisms remain resilient against emerging threats.
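Using Python's standard ssl module, a client-side context that enforces TLS 1.2 or higher while keeping certificate and hostname verification enabled can be configured as follows. This is a minimal sketch of the configuration step, not a complete IoT client:

```python
import ssl

def make_tls_context(ca_file=None):
    """Build a client TLS context: create_default_context() enables
    certificate verification and hostname checking; minimum_version
    then refuses the deprecated TLS 1.0/1.1 protocols."""
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = make_tls_context()
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)           # True
```

A device would then wrap its socket with something like `ctx.wrap_socket(sock, server_hostname="broker.example.com")` (the hostname is illustrative); passing the server name is what enables hostname verification against the certificate.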

Complementary Security Measures

While TLS is a powerful tool for securing data in transit, it should be part of a comprehensive security strategy that includes:

  • End-to-end Encryption: Implement end-to-end encryption (E2EE) to secure data from the moment it leaves the source until it reaches the destination. This further prevents data exposure in intermediate points of the network.
  • Strong Access Controls: Limit access to encryption keys and certificates by implementing strict access controls and multi-factor authentication (MFA) for administrative roles.
  • Secure Configuration Practices: Ensure that all IoT devices are configured securely to prevent vulnerabilities that could undermine TLS encryption, such as weak default passwords or open ports.

Robust encryption protocols like TLS are essential for safeguarding the communication channels between IoT devices and backend servers. By encrypting data, authenticating parties, and ensuring data integrity, TLS minimizes the risk of unauthorized access and data breaches. However, effective TLS implementation should be complemented with continuous monitoring, updates, and a layered security approach to maximize protection in an increasingly interconnected world.

SIEM Systems Technologies for Integrated IoT Security

Logging and Monitoring for Comprehensive Threat Management

Security Information and Event Management (SIEM) systems play a vital role in the protection of IoT ecosystems by combining logging, monitoring, and advanced data analysis to safeguard devices and networks. These technologies provide a unified platform for collecting and analyzing security data, which is essential for maintaining a secure environment in an increasingly interconnected landscape. Below, we break down how logging and monitoring capabilities contribute to comprehensive IoT security and why they are indispensable for modern organizations.

Real-Time Monitoring and Live Tracking

  1. Continuous Monitoring for Rapid Response: SIEM systems enable real-time tracking of IoT device activity and network traffic, allowing security teams to detect and respond to incidents swiftly. Continuous monitoring ensures that any deviation from normal activity is identified promptly, helping prevent potential breaches before they escalate. This capability is crucial in an IoT ecosystem where device behavior can vary widely and new threats can emerge at any moment.
  2. Granular Visibility: With SIEM systems, organizations gain a detailed view of their IoT network. This includes monitoring data flows between devices, interactions with backend servers, and communications with external networks. Such visibility ensures that any irregularities, such as unexpected data transmissions or unauthorized access attempts, are flagged immediately for further investigation.

Comprehensive Log Collection and Analysis

  1. Log Aggregation from Diverse Sources: SIEM solutions collect logs from multiple sources across the IoT network, including device event logs, network traffic data, application activity, and user access records. This aggregation allows for a holistic view of the network, making it easier to detect coordinated attacks or patterns that might otherwise go unnoticed.
  2. Anomaly Detection Through Log Analysis: By analyzing logs, SIEM systems can recognize deviations from established baselines and identify unusual behaviour indicative of security incidents. For example, a sudden spike in data transfer from a specific device or an influx of failed login attempts could point to a compromised device or a brute-force attack. Advanced SIEM platforms often use machine learning algorithms to enhance anomaly detection, learning from historical data to better differentiate between benign and suspicious activity.
  3. Behavioral Insights: Logs provide invaluable behavioural insights that can help organizations understand typical device operations and spot deviations. These insights enable security teams to identify potentially malicious behaviour, such as IoT devices attempting to connect to unauthorized endpoints or being used as entry points for lateral movement within a network.
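The baseline-deviation idea above can be sketched with a toy log analysis that flags any device exceeding an assumed failed-login baseline; the event records, device names, and threshold are illustrative:

```python
from collections import Counter

# Hypothetical parsed auth-log entries: (device_id, event)
events = [
    ("cam-01", "login_failed"), ("cam-01", "login_failed"),
    ("cam-01", "login_failed"), ("cam-01", "login_failed"),
    ("cam-01", "login_failed"), ("cam-01", "login_failed"),
    ("therm-02", "login_ok"), ("therm-02", "login_failed"),
]

FAILED_LOGIN_BASELINE = 5  # assumed per-window baseline; tune per deployment

def flag_bruteforce(log):
    """Count failed logins per device and flag those above the baseline."""
    fails = Counter(dev for dev, ev in log if ev == "login_failed")
    return [dev for dev, n in fails.items() if n > FAILED_LOGIN_BASELINE]

print(flag_bruteforce(events))  # ['cam-01']
```

Production SIEM platforms generalize this pattern: baselines are learned per device and time window (often with machine learning), and the flagged deviations feed the alerting pipeline described below.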

Alert Mechanisms and Incident Response

  1. Automated Alerts for Faster Response Times: A key feature of SIEM systems is the implementation of automated alert mechanisms. These alerts notify administrators in real time when potential security breaches or abnormal activities are detected. Alerts can be configured based on various criteria, such as access attempts from unrecognized IP addresses, unusual data transfers, or unauthorized changes in device configurations.
  2. Customizable Alert Thresholds: Organizations can tailor SIEM alert settings to align with their unique risk profiles and operational needs. Customizable thresholds help filter out noise and focus on high-priority alerts, ensuring that security teams can respond effectively to critical incidents without being overwhelmed by false positives.
  3. Facilitating a Coordinated Incident Response: With centralized data and real-time alerting, SIEM systems provide the tools needed to streamline the incident response process. Security teams can investigate alerts quickly using the contextual data provided by SIEM logs, enabling them to trace the source of a breach, assess its scope, and take corrective action. This coordinated approach minimizes the potential damage and downtime associated with security incidents.

Benefits of Implementing SIEM in IoT Security

  1. Enhanced Threat Detection: The combination of continuous monitoring, log analysis, and alert mechanisms enables SIEM systems to detect threats that might bypass traditional security measures. This is especially important in IoT environments where conventional antivirus solutions may not be feasible due to limited device processing power.
  2. Compliance and Reporting: Many industries are subject to regulations that require organizations to maintain comprehensive logs and audit trails. SIEM systems support compliance by automating the collection and storage of logs, providing clear evidence of security measures, and generating reports needed for regulatory adherence. Compliance reporting features help organizations demonstrate that they are meeting industry standards for data security and privacy. Thus, SIEM systems can enable organisations to generate reports that can be presented to both internal and external security auditors to prove that they are complying with regulatory requirements.
  3. Scalability for Expanding IoT Networks: As IoT networks grow, SIEM systems can scale to accommodate increasing data volumes and new device types. This scalability ensures that organizations can continue to monitor their expanding IoT ecosystem without sacrificing visibility or responsiveness.
  4. Proactive Threat Hunting: In addition to automated monitoring, SIEM systems empower security teams to conduct proactive threat hunting. Analysts can use the system's search and query capabilities to explore logs and uncover potential threats that might not have triggered automatic alerts, allowing for preemptive mitigation measures.
  5. Automated attack detection and response: SIEM systems make it possible to detect and respond to cybersecurity attacks automatically, reducing the damage that cyberattacks can cause. The event correlation engine that analyses the massive amounts of logs generated by IoT devices and other cybersecurity tools (e.g., intrusion detection systems, intrusion prevention systems, antimalware applications, firewalls, and honeypots) can be augmented or replaced by AI or machine learning models, improving the speed and accuracy of attack detection and response.

SIEM systems are an integral part of IoT security, providing a powerful combination of logging, real-time monitoring, and automated alerts to help organizations detect and respond to threats efficiently. By aggregating data from a wide range of sources, analyzing logs for anomalies, and providing comprehensive alerts, SIEM solutions enhance an organization's ability to maintain secure operations in an increasingly connected world. Implementing a high-quality SIEM system ensures that businesses are not only reactive but also proactive in their IoT security efforts, positioning them to handle present and future challenges with confidence.

IoT security methodology: Identifying and Preventing IoT Cyber Threats

Navigating the unpredictable landscape of digital threats is challenging, but effective risk management in an IoT ecosystem is achievable. Businesses of all sizes must integrate robust security protocols into their operations, focusing on enhancing threat detection and response. Dedicated IT administrators or specialized security teams (e.g., a security operations centre) should take charge of securing networks, including all connected IoT devices. In order to design and implement robust cybersecurity tools and policies to secure IoT networks and systems, cybersecurity analysts or teams should conduct comprehensive network and software risk assessments, implement robust defensive measures, and leverage SIEM solutions and other security monitoring tools. Some of these strategies have been discussed in [43].

1. Conduct Comprehensive Network and Software Risk Assessments

Effective cyber threat intelligence revolves around finding and addressing vulnerabilities within a cybersecurity framework. This process should be continuous, consisting of stages such as planning, data collection, analysis, and reporting. The resulting report should be evaluated and adapted to include any new findings before being incorporated into strategic decisions.

Risk assessments can be broken down into three main types:

  • Strategic Assessment: Provides executives with insights into long-term challenges and timely warnings. This type of assessment informs decision-makers about the intentions and capabilities of cybercriminals in the current IoT landscape.
  • Tactical Assessment: Offers real-time analysis of events, activities, and reports, supporting daily operations and customer needs. This approach often involves data from sensors and smart meters in industrial IoT systems.
  • Operational Assessment: Tracks potential incidents based on related activities and reports, enabling proactive strategies for managing future incidents and maintaining predictive maintenance.

2. Implement Robust Defensive Measures

A comprehensive cybersecurity policy is essential for protecting your IoT ecosystem. This policy should incorporate a range of strategies to minimize risks. Common defensive practices include:

  • Deploying effective antivirus and anti-malware software
  • Enabling two-factor (2FA) or multi-factor authentication (MFA)
  • Keeping all software updated to patch known vulnerabilities
  • Utilizing attack surface management tools
  • Implementing network segmentation to limit the spread of threats
  • Adopting a zero-trust security model
  • Providing continuous cybersecurity training and awareness programs for employees and endpoint users

3. Leverage SIEM Solutions

Security Information and Event Management (SIEM) systems are crucial for real-time cybersecurity management. These solutions enhance security by integrating threat intelligence with incident response, making them an invaluable tool for analyzing security operations within an IoT ecosystem.

SIEM platforms gather event data from applications, devices, and other systems within the IoT infrastructure and consolidate this data into a clear, actionable format. The system issues customizable alerts based on different threat levels. Key benefits of using SIEM solutions include:

  • Detecting vulnerabilities
  • Identifying potential insider threats
  • Aggregating and visualizing data for improved oversight
  • Ensuring compliance with regulations
  • Managing and analyzing logs effectively
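The aggregation-and-alerting idea behind SIEM platforms can be illustrated with a short sketch. All event fields, device names, and the alert threshold below are hypothetical, chosen only for this example; real SIEM products apply far richer correlation rules across many log sources.

```python
from collections import Counter

# Hypothetical event stream; field names and devices are illustrative only.
events = [
    {"device": "cam-01", "type": "login_failed"},
    {"device": "cam-01", "type": "login_failed"},
    {"device": "cam-01", "type": "login_failed"},
    {"device": "therm-02", "type": "login_ok"},
    {"device": "cam-01", "type": "login_failed"},
]

FAILED_LOGIN_THRESHOLD = 3  # alert level chosen arbitrarily for the sketch

def correlate(events):
    """Aggregate failed-login events per device and alert above a threshold."""
    failures = Counter(e["device"] for e in events if e["type"] == "login_failed")
    return [dev for dev, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD]

print(correlate(events))  # cam-01 exceeds the threshold and triggers an alert
```

In a real deployment the events would stream in from device logs and network sensors, and the alerts would feed an incident-response workflow rather than a print statement.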

Strengthening IoT Security: Key Protection Strategies

To effectively defend against IoT malware, a comprehensive, multi-layered approach that integrates advanced technology and robust security practices is essential. Here are some expert-recommended best practices discussed in [44]:

  • Implement Network Segmentation: A highly effective way to contain IoT malware and traffic-based attacks (e.g., DDoS attacks) is through network segmentation. By placing IoT devices on separate network segments or VLANs, organizations can prevent malware from spreading and safeguard critical infrastructure. Segmentation also ensures that IoT devices are not turned into botnets and used to conduct DDoS attacks on network gateways and servers in the IT infrastructure of the organisation or other organisations. Think of it as setting up digital containment zones: an infected IoT device cannot compromise your entire network, and compromised IoT devices cannot be used to launch attacks on the rest of the network and its systems.
  • Ensure Timely Firmware Updates and Patch Management: Many IoT attacks target known vulnerabilities that manufacturers have already patched. Late installation of security updates and patches gives attackers an opportunity to exploit newly discovered vulnerabilities that have already been fixed in the latest updates by device manufacturers. Establishing a disciplined update and patch management protocol is essential to close these security loopholes. Users should treat IoT devices in the same way they treat their computers and smartphones. That is, they should regularly update their devices as the first line of defence against new threats.
  • Strengthen Authentication and Access Controls: Weak or default passwords are a common entry point for IoT malware. It is essential to ensure that effective access control mechanisms are implemented to limit access to IoT networks, devices, servers, and applications only to authorised devices and users. Using strong, unique passwords for each device and enabling two-factor authentication can significantly lower the risk of unauthorized access.
  • Deploy Network Monitoring and Anomaly Detection: Advanced network monitoring tools that detect irregular traffic or unusual behaviour from IoT devices are vital for early threat identification. Machine learning-based systems can help flag potential malware before it spreads. The advantage of machine learning-based network monitoring and anomaly detection tools is that, unlike signature-based tools, they can detect new forms of attacks.
  • Maintain a Comprehensive Device Inventory: An up-to-date inventory of all IoT devices on the network is crucial for security management. It should include details such as device types, firmware versions, and known vulnerabilities. Every device that connects to IoT networks should be identifiable so that it can be effectively monitored and secured, ensuring the security of the network as a whole. Device visibility is essential because we cannot protect what we do not know exists and cannot see. A complete device inventory forms the backbone of any effective IoT security plan.
  • Conduct Vendor Security Assessments: Some of the vulnerabilities in IoT devices are introduced by the various stakeholders in the IoT device development cycle, from the IoT hardware manufacturer to the firmware and software developers. Before introducing new IoT devices, organizations should thoroughly evaluate vendors and their products, assessing their security measures, update policies, and track records for addressing vulnerabilities.
  • Promote Employee Education and Awareness: Human error is a leading cause of security incidents. Regular training on IoT security best practices can help employees recognize risks and understand their role in maintaining a secure environment. Employee training also ensures that IoT security policies are followed during the deployment and operation of IoT networks and systems.
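To illustrate the anomaly-detection idea on a small scale, the sketch below flags traffic samples that deviate strongly from a learned baseline. It is a simple statistical stand-in for the machine learning-based systems mentioned above, and all traffic figures are invented for the example.

```python
import statistics

# Hypothetical per-minute traffic volumes (KB) observed from one IoT device
# during normal operation; these form the learned baseline.
baseline = [12, 14, 13, 15, 12, 13, 14, 13, 12, 14]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(sample, k=3.0):
    """Flag a sample more than k standard deviations away from the baseline mean."""
    return abs(sample - mean) > k * stdev

print(is_anomalous(13))   # False: an ordinary reading
print(is_anomalous(480))  # True: e.g. a device suddenly flooding traffic
```

A threshold rule like this catches gross deviations; the appeal of machine learning-based tools is that they can model richer notions of "normal" behaviour across many features at once.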

— MISSING PAGE —

Blockchain

Key Concepts of Blockchain

In this chapter, we will explore how blockchain technology, originally developed for digital currencies, can be applied in various fields. While we will primarily use examples related to financial transaction processing, it’s important to understand that blockchain's potential is not limited to this area. This technology offers a flexible framework for implementing decentralized solutions to securely store, share, and protect data across multiple domains.

The term 'blockchain' has come to mean different things to different people. For developers, it's a set of tools and encryption techniques that make it possible to store data securely across a network of computers. In business and finance, it's seen as the technology behind digital currencies and a way to keep track of transactions without needing a central authority. For tech enthusiasts, blockchain is driving the future of the Internet. Others view it as a powerful tool that could reshape society and the economy, moving us toward a world with less centralized control.

At its core, blockchain is a new type of data structure that merges cryptography with distributed computing. The basics of the technology come from Satoshi Nakamoto, who combined these elements to create a system in which a network of computers works together to maintain a shared, secure database. In essence, blockchain technology can be described as a secure, distributed database.

Blockchain technology demonstrates that people anywhere in the world can trust each other and conduct business directly within large networks, without a central authority managing everything. This trust isn’t based on big institutions but on technology: protocols, cryptography, and computer code. This shift makes it much easier for people and organizations to work together, opening up new possibilities for global collaboration without relying on traditional central institutions.

What is blockchain in simple terms?

A blockchain is a method of storing data. Data is stored in blocks that are linked to the previous block.

Each block contains:

  • a list of transactions;
  • a unique ID for all the data in the block called a hash;
  • a hash of the previous block's data.

Data in a block usually consists of transactions; each block can contain hundreds of them. For example, when person A sends 100 EUR to person B, the transaction is described by three fields: sender identification, receiver identification, and amount.

A hash generated from a transaction record is a unique combination of letters and numbers, and it is unique to every block on the blockchain. When the data in the block changes, the hash also changes. Applying a hash to transaction data therefore prevents unnoticed changes to a record: the hash of the modified record will not equal the previously stored value. For example, if we generate a hash for the record “PersonA, PersonB, 100”, the result is a unique value that changes if even one symbol of the original record is altered.
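This hashing behaviour can be demonstrated with Python's standard hashlib module; SHA-256 is used here as a representative hash function (it is also the one Bitcoin relies on):

```python
import hashlib

def block_hash(record: str) -> str:
    """SHA-256 digest of a transaction record, as a hex string."""
    return hashlib.sha256(record.encode()).hexdigest()

h1 = block_hash("PersonA,PersonB,100")
h2 = block_hash("PersonA,PersonB,101")  # one symbol changed
print(h1 == h2)  # False: a one-symbol change yields a completely different hash
```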

Each block also contains the hash of the previous block, hence forming a chain structure.

As a result, if a transaction in any block changes, the hash of that block changes. When the hash of a block changes, the next block will show a mismatch with the previous-block hash it recorded. This gives blockchain the property of being tamper resistant, as it becomes very easy to identify when data in a block has changed. Blockchain has one more property that makes it secure: a blockchain is not stored on one computer or server, as is usually the case with a database. Instead, it is stored on a large network of computers called a peer-to-peer network.
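A minimal sketch of this chain structure and its tamper resistance (the block layout is simplified for illustration, not the format of any real blockchain):

```python
import hashlib
import json

def sha256(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

def make_block(transactions, prev_hash):
    """Build a block whose hash covers its transactions and the previous hash."""
    body = json.dumps({"tx": transactions, "prev": prev_hash}, sort_keys=True)
    return {"tx": transactions, "prev": prev_hash, "hash": sha256(body)}

def is_valid(chain):
    """Recompute every hash and check each link to the previous block."""
    for i, block in enumerate(chain):
        body = json.dumps({"tx": block["tx"], "prev": block["prev"]}, sort_keys=True)
        if block["hash"] != sha256(body):
            return False  # stored hash no longer matches the block's data
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # link to the previous block is broken
    return True

genesis = make_block([("PersonA", "PersonB", 100)], "0" * 64)
second = make_block([("PersonB", "PersonC", 50)], genesis["hash"])
chain = [genesis, second]
print(is_valid(chain))  # True

genesis["tx"][0] = ("PersonA", "PersonB", 999)  # tamper with a transaction
print(is_valid(chain))  # False: the recomputed hash exposes the change
```

Changing one transaction invalidates the stored hash of its block, and every later block's previous-hash link would also have to be recomputed, which is exactly what makes tampering easy to detect.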

A peer-to-peer network is one in which every computer plays both the client and the server role. In most cases, such networks have no centralized server; this role is shared across the network nodes. This structure allows the network to remain operational with any number and any combination of available nodes.

Every time a new block of transactions has to be added to the network, all members or nodes of the network must check and verify that all transactions in the block are valid. If all nodes in the network agree that the transactions in the block are correct, the new block is added to every node's blockchain.

This process is called consensus. Hence, any attacker who tries to tamper with the data on the blockchain must tamper with the data on most of the computers in the peer-to-peer network.
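A toy illustration of majority agreement: each node reports the hash of the latest block it holds, and the network accepts the version held by the majority. The four-character hashes are placeholders, and real consensus protocols (Proof of Work, Proof of Stake, and others) are far more involved than a simple vote.

```python
from collections import Counter

# Hypothetical tip hashes reported by five peer nodes; one node was tampered with.
reported_tips = ["ab12", "ab12", "ab12", "ff00", "ab12"]

def consensus_tip(tips):
    """Accept the chain tip reported by a strict majority of nodes, if any."""
    tip, votes = Counter(tips).most_common(1)[0]
    return tip if votes > len(tips) / 2 else None

print(consensus_tip(reported_tips))  # 'ab12': the tampered node is outvoted
```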

Blockchain Network Structures and Technologies

Transactions

Blockchain technology uses two main types of cryptographic keys to secure transactions and data: public keys and private keys. These keys work together to protect the integrity of the blockchain, enabling secure exchanges of digital records and protecting user identities. Consider the example of a mailbox: the public key is your email address, which everyone knows and can use to send you messages. The private key, on the other hand, is like the password to that mailbox; only you own it, and only you can read the messages inside.

A public key is a cryptographic code that is openly shared and used by others to interact with your blockchain account. It's generated from your private key using a specific mathematical process. Public keys are used to verify digital signatures and to encrypt data that only the private key can decrypt. This ensures that messages or transactions are intended for the correct recipient.

A private key is a secret cryptographic code that grants access to your blockchain records. It must be kept confidential because anyone with access to the private key can control the records associated with the corresponding public key. This key is used for authorizing transactions on the blockchain. When it is necessary to transfer information (make a transaction), you use your private key to create a digital signature that proves you are the owner of those transactions.

Public and private keys work together to secure blockchain operations:

  • Encryption and Decryption: When data is encrypted using a public key, only the corresponding private key can decrypt it. This mechanism ensures that even if the data is intercepted, it cannot be read without the private key.
  • Digital Signatures: When a transaction is signed with a private key, the signature can be verified by others using the public key. This verification process confirms that the transaction is authentic and has not been tampered with.
  • Secure Transactions: Blockchain transactions rely on the interplay between public and private keys. The public key directs the transaction to the correct recipient, while the private key authorizes the movement of transactions.
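The sign-and-verify interplay can be sketched with textbook RSA. The primes below are tiny, so this is purely didactic and offers no real security; production blockchains use elliptic-curve schemes such as ECDSA or Ed25519 rather than RSA.

```python
import hashlib

# Textbook RSA with tiny primes -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent (part of the public key)
d = pow(e, -1, phi)       # private exponent (the private key), Python 3.8+

def sign(message: str) -> int:
    """Only the private-key holder can produce this signature."""
    digest = int(hashlib.sha256(message.encode()).hexdigest(), 16) % n
    return pow(digest, d, n)

def verify(message: str, signature: int) -> bool:
    """Anyone holding the public key (e, n) can check the signature."""
    digest = int(hashlib.sha256(message.encode()).hexdigest(), 16) % n
    return pow(signature, e, n) == digest

tx = "PersonA,PersonB,100"
sig = sign(tx)
print(verify(tx, sig))  # True: the signature matches the message
```

The asymmetry is the point: signing requires the secret exponent d, while verification needs only the public pair (e, n), so any node can confirm a transaction's authorship without learning the private key.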

Categories of blockchain

There are three categories of blockchain:

Public blockchains: anyone can access the database, store a copy, and make changes subject to consensus. Bitcoin is a classic public blockchain. The key characteristics of a public blockchain are that it is completely decentralized, the network is open to any new participant, and all participants, having equal rights, can be involved in validating the blocks and can access the data contained in the blocks.

Public blockchains process transactions more slowly because they are decentralized: every node must agree on every transaction. This requires time-consuming consensus methods such as Proof of Work, which prioritize security over speed.

Private blockchains (in some literature called managed blockchains) are closed networks accessible only to authorized or selected verified users. They are often owned by companies or organizations, which use them to manage sensitive data and internal information.

A private blockchain is very similar to an existing database in terms of access restrictions but is implemented with blockchain technology; as a result, such networks are not aligned with the principle of decentralization.

Since a private blockchain is accessible only to certain people, there is no requirement for mining (validating) blocks; as a result, such networks are faster than other types because they do not need mining, consensus mechanisms, and similar overhead.

Hybrid or consortium blockchains are permissioned blockchains, but in comparison to private blockchains, control is exercised by a group of organizations rather than a single coordinator. Such blockchains have more restrictions than public ones but are less restrictive than private ones; for this reason, they are also known as hybrid blockchains. New nodes are accepted based on consensus within the consortium. Blocks are validated according to predefined rules defined by the consortium. Access rights can be public or limited to certain nodes, and user rights may differ from user to user. Hybrid blockchains are partly decentralized.

Blockchain type selection

When choosing the right type of blockchain for a project, it's important to think about how it will be used, who will use it, and how it needs to perform. There are three main types of blockchains, each suited for different situations:

Private Blockchain:

  • When to Use: If the blockchain is only for use within a single organization by a specific group of people, a private blockchain is the best option.
  • Advantages: It gives the organization more control over who can join and see the data. It’s good for internal processes like keeping track of company records or managing internal operations.
  • Performance: Since only a few trusted users are involved, the system can run faster and more efficiently because it doesn't need complex methods to agree on things.
  • Examples: Hyperledger Fabric, Corda.

Consortium Blockchain:

  • When to Use: If the blockchain will be shared by a group of companies or organizations working together, a consortium blockchain is the right choice.
  • Advantages: It allows several organizations to work together while keeping control of who can access the blockchain. It’s great for industries where businesses need to collaborate and share data securely.
  • Performance: Since only trusted groups are involved, it works faster and more efficiently than a public blockchain.
  • Examples: R3, Quorum.

Public Blockchain:

  • When to Use: If the goal is to create a completely open and decentralized system that anyone can join, such as for cryptocurrencies, a public blockchain is the best fit.
  • Advantages: It allows anyone to participate and offers complete transparency. This is perfect for things like digital currencies where trust needs to be spread across everyone using it.
  • Performance: Public blockchains can be slower and use more energy because they require complex processes to make sure everyone agrees. However, they are highly secure and trustworthy.
  • Examples: Bitcoin, Ethereum.

To summarize: if in your project the blockchain is only for internal use, go with a private blockchain; if it is for a group of related businesses, choose a consortium blockchain; and if it needs to be open to everyone, a public blockchain is the way to go.

Second Generation Applications

While first-generation blockchain applications, such as Bitcoin, primarily focused on decentralized digital currencies, second-generation blockchain applications introduced more sophisticated functionalities. These advancements allowed for broader use cases beyond simple peer-to-peer transactions, laying the groundwork for smart contracts, decentralized applications (dApps), and improved scalability. Second-generation blockchains are often characterized by their enhanced programmability, consensus mechanisms, and adaptability to various industries.

Key Features of Second Generation Blockchain Applications

Smart Contracts

One of the innovations of second-generation blockchain applications is the introduction of smart contracts. Initially pioneered by Ethereum, smart contracts are self-executing agreements where the terms of the contract are written directly into code. Once predetermined conditions are met, the contract is automatically executed. This eliminates the need for intermediaries and significantly reduces transaction costs and delays.

Smart contracts have diverse applications, including financial agreements, supply chain automation, real estate, insurance, and beyond. They have enabled decentralized finance (DeFi) platforms to flourish by providing services like lending, borrowing, trading, and liquidity provision in a trustless, decentralized manner.
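As a toy model of the "self-executing agreement" idea, the Python class below releases funds automatically once its delivery condition is met. It only simulates the logic; real smart contracts are written in languages such as Solidity and are executed by every node of the blockchain, not by a single program.

```python
class EscrowContract:
    """Toy simulation of a self-executing escrow agreement."""

    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.paid = False

    def confirm_delivery(self) -> None:
        """External input: the agreed delivery condition becomes true."""
        self.delivered = True
        self._execute()  # the contract enforces itself once the condition holds

    def _execute(self) -> None:
        # The encoded rule: release payment exactly once, and only after delivery.
        if self.delivered and not self.paid:
            self.paid = True

contract = EscrowContract("PersonA", "PersonB", 100)
contract.confirm_delivery()
print(contract.paid)  # True: payment released automatically, no intermediary
```

The terms (who pays whom, and under what condition) are fixed in code at creation time, which is precisely what removes the need for an intermediary to enforce them.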

Decentralized Applications (dApps)

Second-generation blockchains also serve as platforms for decentralized applications, or dApps, which are applications that run on a blockchain instead of centralized servers. Ethereum, again, was the first platform to popularize the use of dApps by providing a robust infrastructure for developers to build decentralized applications with the Ethereum Virtual Machine (EVM).

dApps are transparent, autonomous, and can operate without a central authority. Their decentralized nature means that they are less vulnerable to censorship and hacking, as they run on a distributed network of nodes rather than a single point of failure. This has led to the creation of various decentralized services, including decentralized exchanges (DEXs), prediction markets, gaming platforms, and more.

Programmability and Turing-Completeness

Unlike Bitcoin, which is specifically designed for financial transactions, second-generation blockchains like Ethereum introduced Turing-completeness. This means the blockchain can process any computational logic and execute any program, given enough resources. This allows developers to create complex and sophisticated blockchain-based applications that can address a wide range of problems.

Other platforms that focus on programmability include EOS, Tezos, Tron, and Solana, all of which allow for the deployment of smart contracts and dApps. These platforms differ from first-generation blockchains by being application-oriented, not just transaction-oriented.

Interoperability

One of the challenges addressed by second-generation blockchains is the need for interoperability between different blockchain networks. Many blockchain applications work in silos, but with the growth of DeFi and dApps, there has been a demand for different blockchain systems to communicate with each other. Interoperability solutions aim to enable blockchains to transfer data, tokens, and assets between them seamlessly.

Projects like Polkadot and Cosmos have focused on creating interoperable blockchain ecosystems. These networks use relay chains and hubs to connect different blockchains, facilitating cross-chain transactions and enabling various blockchain networks to work together. Interoperability helps improve liquidity, expands market reach, and enhances the overall utility of blockchain applications.

Decentralized Finance (DeFi)

One of the most transformative developments of second-generation blockchain applications is Decentralized Finance (DeFi). DeFi refers to a collection of financial services and platforms built on blockchain technology that aim to recreate traditional financial systems such as banks, exchanges, and lending platforms in a decentralized and permissionless way.

DeFi applications leverage smart contracts to create financial services like decentralized lending and borrowing platforms (e.g., Aave, Compound), decentralized exchanges (DEXs) (e.g., Uniswap, Sushiswap), and yield farming platforms. These services allow users to borrow, lend, trade, and earn interest on their digital assets without relying on centralized entities. The global DeFi market has exploded in recent years, with billions of dollars locked in DeFi protocols, transforming how people access and manage financial services.

Governance and Decentralized Autonomous Organizations (DAOs)

Second-generation blockchain applications have introduced new models for decentralized governance, most notably in the form of Decentralized Autonomous Organizations (DAOs). DAOs are blockchain-based entities governed by a set of rules encoded in smart contracts. Token holders typically have voting rights and can collectively make decisions about the organization's direction, including funding, development, and protocol changes.

DAOs aim to provide a transparent, decentralized model of governance, eliminating the need for traditional hierarchical structures. Many DeFi projects and blockchain ecosystems have adopted the DAO model for decision-making processes. For instance, MakerDAO is a popular DAO that governs the Maker Protocol, which allows users to generate the Dai stablecoin.

Examples of Second Generation Blockchain Platforms

Ethereum

Ethereum is the most notable second-generation blockchain platform. It is designed to go beyond cryptocurrency by providing a general-purpose framework for building decentralized applications. Ethereum's ability to execute smart contracts and support decentralized applications has made it the go-to platform for innovators in DeFi, NFTs, and beyond.

EOS

EOS is another second-generation blockchain platform known for its high scalability, faster transaction speeds, and user-friendly development tools. EOS aims to address the scalability issues faced by Ethereum by offering higher throughput and lower transaction fees, making it a popular choice for developers building high-performance dApps.

Cardano

Cardano is a second-generation blockchain platform that focuses on providing a secure and scalable infrastructure for decentralized applications and smart contracts. It uses a unique Proof of Stake (PoS) consensus mechanism called Ouroboros, which is designed to be more energy-efficient than Ethereum's original Proof of Work. Cardano’s research-based development approach emphasizes formal verification to ensure the security and correctness of its blockchain protocols.

Polkadot

Polkadot is a platform designed to enable different blockchains to work together. It introduces the concept of “parachains,” which are parallel chains that can interoperate with each other. Polkadot’s interoperability aims to solve the fragmentation problem by connecting various blockchains, enabling them to exchange information and assets seamlessly.

Solana

Solana is known for its high-performance blockchain, capable of handling thousands of transactions per second. It uses a novel consensus mechanism called Proof of History (PoH), which enables fast block confirmation times, making Solana suitable for high-frequency trading, gaming, and other high-demand dApps.

Expanded Application of Blockchain

Green IoT

Green IoT (G-IoT) is the adoption of energy-efficient procedures (hardware, software, communication, or management) and waste-reduction methods (energy harvesting and recycling of e-waste) to conserve resources and reduce the waste, including pollutants like carbon dioxide, produced by the IoT ecosystem across the design, manufacturing, deployment, and operation of IoT systems, from the IoT devices to the IoT cloud computing data centres. Green IoT is an emerging field within the IoT ecosystem aimed at raising awareness of the sustainability problems that may result from the massive deployment of IoT applications in the various sectors of society (health care, agriculture, manufacturing, intelligent transport systems, smart cities, supply chains, smart homes, and smart energy systems) and at exploring ways to address those challenges. These challenges include the increase in energy consumption, which increases the IoT industry's carbon footprint, and the amount of e-waste created by discarding electronic components of IoT devices, especially IoT batteries, as they need to be replaced after a few years.

Although energy-efficient strategies have been developed to minimise the energy consumption of IoT devices, the energy consumption of billions or trillions of IoT devices will be enormous. The amount of traffic generated by IoT devices is increasing exponentially, and it is predicted that by 2024, IoT traffic will constitute about 45% of the total Internet traffic [Alsharif2023]. A rapid increase in the amount of traffic generated by billions to trillions of IoT devices and transported through the Internet to cloud computing platforms will significantly increase the energy consumption of the Internet network infrastructures, especially with the dense deployment of 5G base stations and IoT wireless access points to service IoT devices. Also, a huge amount of energy is consumed by data centres to process or analyse the massive amount of data collected using IoT devices.

Much attention is often focused on the energy consumed by IoT devices, networks, and computing platforms. However, less attention is given to the energy consumed by manufacturing and transporting IoT devices and other ICT systems used in the IoT ecosystem. The carbon footprint of the IoT industry can be traced from mining the minerals required to manufacture IoT devices, the manufacturing process, and the supply chains involved. To realise the green IoT goal, energy efficiency and sustainable practices should be designed to ensure that the mining, manufacturing and supply chains are environmentally friendly or sustainable.

The design and implementation of energy-efficient strategies may significantly reduce the energy consumption of IoT systems. However, the rapid increase in the use of IoT to address problems and increase efficiency and productivity in other sectors of the economy will result in a significant net increase in the energy consumed by these systems. Another approach to enforcing green IoT is using renewable energy sources to continuously recharge IoT batteries, reducing both the maintenance cost of replacing IoT batteries and the amount of e-waste created by the IoT industry.

Another green IoT strategy is to reuse and recycle IoT components and resources. This will significantly reduce the amount of waste produced by the IoT industry and optimise the use of natural resources in manufacturing IoT devices. Hence, reusing and recycling IoT components and resources is a green IoT strategy that increases the sustainability of the IoT industry.

An effective green IoT strategy should span the entire IoT product lifecycle from the design to production (manufacturing) to the deployment, operations and maintenance, and recycling. The primary goal in each stage is to reduce energy consumption, adopt sustainable resources (e.g., harvesting energy from sustainable energy sources, using sustainable materials) usage, minimise e-waste and other pollutants, and adopt recycling of resource or waste. Therefore, a shift toward Green IoT (GIoT) emphasises the need to adopt energy-efficient practices and processes prioritising resource conservation, waste reduction, and environmental sustainability[45].

Green IoT strategies can be grouped into the following categories: green IoT design, green IoT manufacturing, green IoT applications, green IoT operation, and green IoT disposal [46].

Green IoT design: Designing IoT hardware, software, management systems, and policies with the requirement of minimising the energy consumption, carbon footprint, and environmental impact of IoT systems. One of the design goals should be to implement energy-efficient strategies to reduce energy consumption and to develop strategies to minimise the amount of e-waste produced by IoT systems and infrastructures. Green IoT design techniques include green hardware, green communication and networking infrastructure, green software, green architecture, energy-efficient security mechanisms, and energy harvesting.

Green IoT Operations: Deploying, operating, and managing IoT systems in such a way as to minimise energy consumption and waste. Strategies include switching off idle networking and computing nodes, applying radio resource optimisation mechanisms (e.g., control of the transmission power and the modulation), energy-efficient routing mechanisms, and software energy optimisation mechanisms (improving software code to be energy-efficient and using software optimisation algorithms to minimise energy consumption).
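A back-of-the-envelope duty-cycling estimate illustrates why switching devices into sleep states matters so much in green IoT operations. The current figures below are invented for the example, not taken from any datasheet.

```python
# Hypothetical current draws for a battery-powered IoT node (illustrative only).
ACTIVE_mA = 120.0   # radio transmitting / CPU active
SLEEP_mA = 0.01     # deep-sleep mode
HOURS_PER_DAY = 24

def daily_charge_mah(duty_cycle: float) -> float:
    """Average daily charge draw (mAh) for a given fraction of time spent active."""
    avg_ma = duty_cycle * ACTIVE_mA + (1 - duty_cycle) * SLEEP_mA
    return avg_ma * HOURS_PER_DAY

always_on = daily_charge_mah(1.0)    # device never sleeps
one_percent = daily_charge_mah(0.01) # active 1% of the time
print(always_on, one_percent)  # 2880.0 vs roughly 29 mAh/day
```

Under these assumed figures, a 1% duty cycle cuts the daily charge draw by roughly two orders of magnitude, which translates directly into longer battery life, fewer battery replacements, and less e-waste.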

Green IoT applications or use cases: Using IoT applications to reduce energy consumption (or the carbon footprint) and to conserve resources to ensure sustainability in other industries, for example, using IoT to reduce energy consumption, water consumption, and the use of chemicals (fertilisers, herbicides, fungicides, insecticides, etc.) in the agricultural industry. IoT can reduce energy consumption, carbon footprint, waste production, and the over-utilisation of resources in the various sectors of the economy, including manufacturing, energy production, mining, health care, and transportation. Therefore, the massive deployment of IoT in these sectors to address efficiency and productivity challenges should be done in such a way as to also address sustainability issues.

Green IoT waste disposal and management: Reducing the waste created from deploying and operating IoT systems. Renewable energy sources should be used to recharge IoT batteries to reduce the amount of IoT battery waste generated and dumped in landfills. Recycling IoT components and resources should be adopted and promoted to reduce the amount of e-waste generated by the IoT industries and dumped in landfills, which may increase significantly with the large-scale adoption and deployment of IoT systems in the various sectors of the economy.

Green IoT manufacturing: Energy-efficient manufacturing infrastructure for IoT hardware. With the expectation of connecting hundreds of billions or trillions of IoT devices to satisfy the demand for IoT to improve various sectors or industries in the evolving tech-driven economy, the carbon footprint of the factories manufacturing IoT devices will be enormous. The manufactured IoT systems themselves should also be energy efficient.

Green IoT Design

Green IoT design is an IoT design paradigm based on a holistic IoT design framework that focuses on maintaining a balanced trade-off between functional requirements, Quality of Service (QoS), interoperability, cost, security, and sustainability within the IoT ecosystem. It emphasises the need to prioritise energy efficiency and the reduction of waste in the IoT ecosystem, from manufacturing IoT devices to deploying and operating IoT systems.

The emergence of modern technologies such as Fifth Generation (5G) mobile networks, blockchain, Artificial Intelligence (AI), and fog/cloud computing are unlocking new IoT use cases in various industries and sectors of the modern technology-driven economy or society. As a result, the number of IoT devices connected to the Internet and the volume of traffic generated from IoT infrastructures will increase significantly, increasing the energy demand in the IoT ecosystem. The result is an increase in the carbon footprint and e-waste (especially from battery-powered IoT devices) from IoT-related services or the IoT ecosystem.

An effective green IoT strategy should span the entire IoT product lifecycle, from design through production (manufacturing), deployment, operation and maintenance, to recycling. The primary goal at each stage is to reduce energy consumption, adopt sustainable resources (e.g., harvesting energy from renewable sources, using sustainable materials), minimise e-waste and other pollutants, and recycle resources and waste. Therefore, a shift toward Green IoT (GIoT) emphasises the need to adopt energy-efficient practices and processes prioritising resource conservation, waste reduction, and environmental sustainability [47].

Green IoT design is a design framework consisting of design, production, implementation, deployment, and operation choices that reduce energy consumption and waste in the IoT ecosystem. These are energy-efficient strategies devised to reduce the carbon footprint of manufacturing, deploying, and operating IoT systems (IoT sensor devices, networking nodes, data centres, and computing devices), as well as strategies to reduce the waste produced by IoT infrastructures. They may involve hardware, software, management, or policy decisions. A green IoT design framework should address the following design considerations: developing and deploying energy-efficient mechanisms, choosing energy sources, and providing mechanisms to ensure environmental and resource sustainability.

Energy-efficient design

It involves the design and deployment of energy-saving mechanisms to reduce the energy consumption of IoT devices. These mechanisms include the following:

  1. Green computing: Energy-efficient strategies designed to minimise energy consumption (or maximise energy efficiency) and decrease the carbon footprint of computing devices and processes in IoT infrastructures, from the devices at the IoT layer to the servers at the fog/cloud computing layers.
  2. Green communication and networking: Selecting energy-efficient technologies, products, and practices designed to minimise energy consumption (or maximise energy efficiency) and decrease the carbon footprint of networking and communication nodes and processes in IoT infrastructures, from the IoT access nodes, through the Internet core network, to the cloud data centres.
  3. Green security: Design and implementation of energy-efficient security algorithms to minimise energy consumption or to maximise energy efficiency in IoT infrastructures.
  4. Green architectures: Designing and organising IoT and other ICT architectures within the IoT infrastructure in such a way as to minimise energy consumption or maximise energy efficiency.
  5. Green hardware design: Design of energy-efficient hardware chips and devices (computing and networking nodes) to minimise energy consumption, maximise energy efficiency, and decrease the carbon footprint of computing and networking hardware in IoT infrastructures. A significant amount of energy can be saved by designing energy-efficient chips and hardware devices. With the increased use of AI and blockchain in IoT applications, energy efficiency at the hardware level becomes essential.
  6. Green software design: Optimising software algorithms and programs to minimise energy consumption, maximise energy efficiency, and decrease the carbon footprint of software running on IoT infrastructure.

The above energy-efficient or sustainable computing, security, networking, hardware, and software design strategies can significantly reduce the energy demand of the large-scale IoT infrastructures deployed throughout the world. Although the rapid growth of the IoT industry may offset some of the energy saved by applying these strategies, they still offer a significant gain for the environment.

Design choices for energy sources

The type of energy source required to power IoT infrastructures varies from the IoT cyber-physical infrastructure to the core infrastructures. Electrical and electronic devices in the IoT infrastructure can be powered with energy from:

  1. main power: Powering electrical and electronic systems within the IoT infrastructure using electricity from the main power supply. It is suitable for powering energy-hungry devices like networking nodes and servers, but not for massive numbers of IoT devices, especially when the devices are supposed to be mobile.
  2. energy harvesting: To reduce dependence on fossil fuels and other environmentally unsustainable energy sources, renewable energy sources are used to power electrical and electronic systems within the IoT infrastructure. The kind of renewable energy source depends on the energy demand of the networking and computing nodes. Small energy harvesters that produce small amounts of energy are used to power small IoT devices, while larger energy harvesters that produce larger amounts of energy are used to supply power-hungry computing and networking nodes.
  3. energy storage: The energy storage systems used to store energy in IoT infrastructures are battery energy storage systems (BESS) and supercapacitors. Most IoT devices are powered by small batteries with a limited amount of energy. Due to the intermittent nature of renewable energy sources, large energy storage systems are often used to store the extra energy that is harvested. That is, if the energy harvested exceeds the load demand of the computing and networking systems to be powered within the IoT infrastructure, the extra energy is stored in an energy storage system. The stored energy is later used to supply the load (IoT infrastructure) when the renewable energy source is no longer able to produce sufficient energy to meet the load demand.
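The store-surplus/draw-deficit behaviour described in item 3 can be sketched as a small simulation. This is a minimal illustration, not a real battery model: the harvest profile, load, and capacity figures are illustrative assumptions, and losses in power electronics are ignored.

```python
# Sketch of the surplus-storage logic described above: harvested energy
# above the load demand charges the storage; deficits are drawn from it.
# All figures (harvest profile, load, capacity) are illustrative assumptions.

def simulate_storage(harvest_wh, load_wh, capacity_wh, initial_wh=0.0):
    """Track the stored energy over time steps (all values in Wh)."""
    stored = initial_wh
    trace = []
    for harvested in harvest_wh:
        surplus = harvested - load_wh
        if surplus >= 0:
            stored = min(capacity_wh, stored + surplus)   # store extra energy
        else:
            stored = max(0.0, stored + surplus)           # draw from storage
        trace.append(round(stored, 2))
    return trace

# Sunny hours harvest 5 Wh each, night hours harvest 0 Wh; constant 2 Wh load.
print(simulate_storage([5, 5, 5, 0, 0, 0], load_wh=2.0, capacity_wh=8.0))
# → [3.0, 6.0, 8.0, 6.0, 4.0, 2.0]
```

Note how the storage saturates at its capacity during the third sunny hour and then carries the load through the harvest-free hours.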

Environmental sustainability mechanisms

IoT systems should be designed, implemented, and operated in such a way as to conserve natural resources and reduce the waste and pollutants generated by the IoT industry. Energy-efficient design and the use of renewable energy sources are themselves sustainability mechanisms, as both reduce the carbon footprint of IoT infrastructures. Other environmental sustainability strategies are:

  1. Use of biodegradable materials to fabricate some components of IoT devices.
  2. Reuse of IoT components.
  3. Recycling of some of the waste generated by the IoT industry, especially e-waste (electronic parts and batteries).

Green IoT energy-efficient design and mechanisms

As IoT is adopted to address problems in the various sectors of society and the economy, the energy demand of IoT is increasing rapidly, following an almost exponential trend. As the number of IoT devices increases, the amount of traffic they create increases, raising the energy demand of the core networks that transport the IoT traffic and of the data centres that analyse the massive amounts of data the devices collect. The large-scale adoption and deployment of IoT infrastructures and services across the various sectors of the economy will significantly increase the energy demand, from the IoT cyber-physical infrastructure (sensor and actuator devices), through the transport network infrastructure, to the cloud computing data centre infrastructure. Therefore, one of the design goals of green IoT is to develop effective strategies to reduce energy consumption. These strategies should be deployed across the IoT architecture stack; that is, energy-saving strategies should be implemented across all the IoT layers, including:

  • The perception or “things” layer: Consists of IoT sensors that collect data and send it to computing platforms for analysis, and actuators that manipulate physical systems based on feedback from the data analytics platforms.
  • The network or transport layer: Consists of the network (access and Internet core) infrastructure that transports the data collected by the sensors to fog or cloud computing platforms, and the feedback or commands from those platforms to the actuators that control cyber-physical systems at the perception or things layer.
  • The application layer: For processing (analysing) and storing the data collected by the IoT sensor devices and transported through the transport layer to the data centres. The results of the computations can be made available to users through applications or sent back to the things layer to manipulate actuators.
  • The energy and sustainability management layer: An abstract layer that spans the three layers above, as energy-efficiency and sustainability management is implemented across all of them.

At each layer, various energy-efficient strategies are implemented to reduce energy consumption. A large proportion of the energy is used for computation and communication at the various layers, so a significant amount of energy can be saved by deploying energy-efficient computing mechanisms (both hardware and software), low-power communication and networking protocols, and energy-efficient architectures. Energy efficiency should be one of the main design, manufacturing, deployment, and standardisation goals for green IoT systems. The energy-saving mechanisms may vary from one layer to another, but they can be classified into the following categories:

  • Green hardware
  • Green communication and networking
  • Green architectures
  • Green software
  • Green security
  • Green policies

Green IoT hardware

A realistic approach to significantly reducing the energy consumption of IoT systems and infrastructures is to improve the energy efficiency of the hardware, because a large proportion of the energy is used to power electrical and electronic hardware such as computing nodes, networking nodes, cooling and air-conditioning systems, power electronics, security systems, and lighting. Recently, much attention has been paid to improving the energy efficiency of hardware in ICT infrastructures, especially in the IoT industry. The energy-saving mechanisms in IoT infrastructures include:

  • Reducing the size of hardware devices
  • Using energy-efficient materials
  • Energy-efficient hardware design
  • Turning off idle devices
  • Energy-efficient manufacturing

To achieve the green IoT vision, it is essential to deploy energy-efficient hardware across the entire IoT infrastructure (from the perception layer to the cloud) throughout the IoT industry. Green IoT hardware is not limited to energy-efficient hardware design and hardware-based energy-saving mechanisms; it also includes sustainable hardware approaches such as

  • Using disposable and recyclable materials to manufacture IoT hardware
  • Incorporating energy harvesting systems into IoT systems or infrastructure

Reducing the size of hardware devices

There has been a significant reduction in the size of electronic hardware, from the era of the vacuum tube to modern semiconductor chips. In the early days of electronics, computers occupied entire floors of buildings, radio communication systems were large systems integrated into cabinets, and the smallest electronic device of the time was a two-way radio that was often carried on the back [48]. As the sizes of electronic devices decreased, their energy demand also dropped drastically.

Over the past few decades, the sizes of computing and communication devices have decreased significantly, reducing the power required to operate them. Despite the significant progress made by the semiconductor industry to decrease the size of semiconductor chips while improving their performance, there is still a persistent drive to keep decreasing the sizes of semiconductor chips to decrease their cost, reduce energy consumption, and conserve the resources required to manufacture these chips.

Gordon Moore, one of the co-founders of Intel, observed that “the number of transistors and resistors on a chip doubles every 24 months”. This observation was adopted by the computer industry as the well-known Moore's law and became a performance metric in the semiconductor (computer chip) industry. As more transistors were packed into a single small chip, the sizes of computing and network equipment decreased significantly, which also translated into a significant decrease in power consumption. Although advanced chip manufacturing has decreased transistor gate lengths significantly, current leakage has also increased, resulting in an increase in the power consumption and heat dissipation of chips. Thus, doubling the number of transistors on a chip could double the amount of power consumed by the chip [49].
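The doubling every 24 months quoted above compounds quickly, which a one-line function makes concrete. The starting point used below (the 2,300 transistors of Intel's 4004 from 1971) is chosen only for illustration and is not taken from this text.

```python
# Moore's observation as quoted above: transistor count doubles roughly
# every 24 months. The starting point (2,300 transistors, Intel 4004, 1971)
# is an illustrative assumption, not a figure from this chapter.

def transistors(years_elapsed, start=2300, doubling_period_years=2):
    """Projected transistor count after a given number of years."""
    return start * 2 ** (years_elapsed / doubling_period_years)

print(f"{transistors(20):,.0f}")  # 20 years = 10 doublings: 2,355,200
```

Ten doublings in twenty years turn a few thousand transistors into a few million, which matches the qualitative trend the paragraph describes.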

Some energy-hungry IoT devices require batteries with higher energy capacity. The energy capacity of a battery is correlated with its size: batteries with higher energy capacities tend to be larger and heavier, placing a limit on the extent to which the size of the device can be decreased. Alternatively, a battery with a relatively small energy capacity can be paired with an energy harvesting module that continuously recharges it with energy harvested from the environment. Adding an energy harvesting module may increase the size of the IoT device, but it extends the operational life (lifetime) of the device. It should be noted that the energy harvested by such modules is very small and that the power electronics components themselves also consume energy.
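The battery-versus-harvesting trade-off above can be quantified with a back-of-the-envelope lifetime estimate. This is a sketch under idealised assumptions (constant average draw, no battery self-discharge or conversion losses), and the numeric values are illustrative.

```python
# Back-of-the-envelope battery lifetime for the trade-off described above.
# A harvesting module that offsets part of the average current draw extends
# the lifetime. All numbers are illustrative assumptions; conversion losses
# and self-discharge are ignored.

def lifetime_hours(capacity_mah, avg_draw_ma, harvest_ma=0.0):
    """Hours until an ideal battery is empty, given the net current draw."""
    net_draw = avg_draw_ma - harvest_ma
    if net_draw <= 0:
        return float("inf")  # harvester covers the load indefinitely
    return capacity_mah / net_draw

print(lifetime_hours(1000, 2.0))       # battery only: 500.0 hours
print(lifetime_hours(1000, 2.0, 1.5))  # with harvesting: 2000.0 hours
```

Even a harvester that covers only part of the draw (1.5 mA against a 2 mA load here) multiplies the lifetime, which is why small harvesting modules are attached to small batteries despite their modest output.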

Another approach to further decreasing the size of IoT devices, and possibly their energy consumption, is to integrate the entire electronics of an IoT device, computer, or network node into a single Integrated Circuit (IC) called a System on a Chip (SoC) [50]. The components of the device or node that are often integrated into a SoC include a Central Processing Unit (CPU), input and output ports, memory, analogue input and output modules, and the power supply unit. A SoC can efficiently perform specific functions such as signal processing, wireless communication, executing security algorithms, image processing, and artificial intelligence. The primary reason for integrating the entire electronics of a system into a chip is to reduce the energy consumption, size, and cost of the system as a whole. That is, a system that was originally made of multiple chips is integrated into a single chip that is smaller, may be cheaper, and consumes less energy. External components such as power sources (batteries or energy harvesters), antennas, and other analogue electronic components can also be integrated into a SoC to further reduce size, energy consumption, and cost.

Using energy-efficient materials

Energy-efficient hardware design

At the IoT perception layer, some of the energy-efficient mechanisms include:

  1. Energy-efficient sensors (green sensors): Designing IoT sensors to consume as little energy as possible. When selecting the sensors to be used during the design of IoT devices, energy consumption and sustainability should be among the design criteria considered.
  2. Energy-efficient radio modules (green radio modules): Radio modules are the major consumers of energy in IoT devices, and designing them to consume minimal amounts of energy significantly decreases the energy consumed by IoT devices. When choosing the IoT device to be used for an IoT application, the energy consumption of the radio module should be taken into consideration.
  3. Low-power microcontrollers and microprocessors (green MCUs and ICs): The energy consumption of the microcontroller or microprocessor is very important, as these devices are often powered by batteries with limited energy capacity. In selecting IoT devices for an IoT application, both the performance and the energy consumption of the devices should be considered, rather than sacrificing one for the other. Some of the design strategies that have been developed to improve the energy efficiency of the microcontrollers or microprocessors of IoT devices include:
    • Duty cycling: Switching off the microcontroller or microprocessor when the device is idle and switching it on only when processing is needed.
    • Using low-power microcontrollers or microprocessors: Choosing very low-power microcontrollers or microprocessors that have limited processing power but consume a relatively small amount of energy.
    • Using energy-efficient CMOS ICs to manufacture MCUs and CPUs: Manufacturing the components of IoT devices using energy-efficient CMOS ICs can significantly reduce their energy consumption.
    • Hardware acceleration and SoC design: Using Application-Specific Integrated Circuits (ASICs) to implement hardwired functionalities in an energy-efficient way (e.g., DSP systems, System-in-Package (SiP), System-on-Chip (SoC)), resulting in highly compact designs (combining sensors, MCU, batteries, and energy harvesters into a single chip).

As tens of billions to trillions of IoT devices are deployed in various sectors of society and the economy (e.g., intelligent transport systems, smart healthcare, smart manufacturing, smart homes, smart cities, smart agriculture, and smart energy), the amount of traffic generated by IoT devices and transported through the local network and the Internet to fog or cloud computing platforms is growing rapidly. The amount of computing required to analyse the massive amounts of data generated has also increased significantly. This increase in traffic and processing requirements in turn increases the energy consumption of the hardware deployed in the networking and data centre infrastructures handling the IoT traffic and data. Some of the hardware-based energy-saving strategies that can be leveraged to reduce the energy consumption of networking and computing nodes in IoT-based infrastructures (some of which were discussed in [51]) include:
  1. Custom systems-on-chip: A design approach that integrates some or all system components into a single chip, which reduces the size of the system compared to designing the various components separately. Although the size, weight, and energy consumption of SoC devices may be relatively low compared to devices built from separate chips, their performance may also be lower. For example, a Raspberry Pi containing a Broadcom SoC may consume less than 5 W, but its processing power is less than that of desktop computer processors. SoCs are used in mobile phones to provide acceptable computing and networking performance while minimising energy consumption to extend battery life. Thus, the SoC design approach enables a significant reduction in device size and energy consumption without necessarily sacrificing too much performance.
  2. Dynamic frequency scaling: The processor, microprocessor, or microcontroller can be forced into a low-power mode by reducing its clock frequency or voltage. The power consumption of the peripheral components of the device can also be reduced by dynamically powering down peripherals that are idle (not used at all), so that they consume power only when necessary. Dynamic frequency or voltage scaling can be implemented in software, which then monitors and adjusts the power, clock frequency, or voltage of the processor. Frequency and voltage scaling can be implemented on computing and networking nodes from the IoT perception layer, through the networking or transport layer, to the fog/cloud computing layers. Frequency and voltage scaling has been implemented in some Intel processors in the form of P-states and C-states: the P-states provide a mechanism to scale the frequency and voltage at which the processor runs to reduce its power consumption, and the C-states are the states of the CPU when it has reduced or turned off some of its selected functions [52].
  3. Low-energy displays: For applications that require the display of information, increasing the energy efficiency of the display can decrease the energy consumption of the device.
  4. Hardware data processing (e.g., AI hardware): Rather than using the CPU for all types of computing tasks, hardware acceleration is employed to shift specific data operations or computing tasks onto dedicated hardware. Hardware acceleration refers to the process by which an application offloads certain computing tasks onto specialised hardware components (e.g., GPUs, DSPs, ASICs) within a system to achieve greater efficiency than is possible with software running solely on a general-purpose CPU [53]. Tasks such as visualisation, packet processing, AI processing, cryptography, error correction, and signal processing can be offloaded onto specialised hardware, freeing up the CPU to perform other tasks. Such specialised hardware often offers higher performance and lower energy consumption than CPUs. For example, running AI-based tasks on GPUs is more efficient than running them on a CPU, which is why GPUs are preferred for such workloads; AI-specific hardware has also been introduced, especially for neural network tasks. Thus, IoT hardware designers should always examine carefully whether there are tasks that could be offloaded to specialised hardware to free up the microcontroller or processor, significantly improving performance and energy efficiency.
  5. Cloud computing (remote processing): Cloud computing is a cost-effective and scalable computing paradigm that enables on-demand remote access to computing resources such as software, infrastructure, and platforms over the Internet. By adopting cloud-based services (software-as-a-service, infrastructure-as-a-service, platform-as-a-service), companies and organisations do not need to invest in hardware infrastructure to host their services, significantly reducing the energy demand of IT services. An interesting strategy that has significantly increased the performance and energy efficiency of IT infrastructures and services is virtualisation. Virtualisation refers to the hardware or software methods that enable the partitioning of a physical machine into multiple instances that run concurrently and share the underlying physical resources and devices. It involves the use of a Virtual Machine Monitor (VMM), also called a hypervisor, to manage the Virtual Machines (VMs) and enable them to share the underlying physical hardware. The sharing of hardware resources by VMs hosting multiple services (data analytics, high-performance computing, security, etc.) significantly reduces the energy demand of data centres. Several energy-efficient strategies (e.g., switching off idle servers, energy-efficient task scheduling, and other optimisation methods) have been developed and implemented in data centres. The exponential increase in the number of deployed IoT devices and the massive amounts of data they generate and send to fog computing nodes or cloud computing data centres will likely increase the energy consumption of data centres significantly, requiring green cloud computing strategies.
  6. Photonic computing: In an attempt to increase processing performance and significantly decrease energy consumption, researchers and experts in the electronics and computer industries are seeking ways to use optical devices for data processing, data storage, and data communication. Optical or photonic computing offers high speed, high bandwidth, and low energy consumption, benefits that can be exploited to meet the need for high-performance computing, high-speed communication, and low energy consumption. It can therefore be considered a promising technology for the computing and networking nodes in the IoT networking/transport and fog/cloud computing layers. The main components of a photonic or optical computing system are optical processing units (for data processing), optical connectors (for optical data transfer), and optical storage units (for optical data storage). In optical or photonic computing, light waves (photons) produced by lasers or incoherent sources are exploited as the primary means of carrying out numerical calculations, reasoning, artificial intelligence, data processing, data storage, and data communication, unlike in traditional computers, where these functions are performed using electrons [54]. A major challenge in optical or photonic computing systems is the inefficiency and performance bottleneck introduced when converting electrical signals to optical signals and vice versa, as there is still a need to interface them with existing digital computing and communication systems.
  7. Improving the energy efficiency of mobile radio networks: The adoption of Low-Power Wide Area (LPWA) cellular technologies (e.g., NB-IoT, LTE-M) has enabled the deployment of IoT networking services over existing mobile networks [55]. More than 50% of the energy consumed by a cellular base station is consumed by its power amplifiers, so improving the efficiency of the power amplifiers of wireless access network nodes (e.g., 4G/5G/6G base stations) can yield significant savings. Another strategy to reduce the energy demand of cellular mobile base stations is to centralise or shift some of the baseband processing to the cloud or to a pool of baseband units, the so-called Cloud Radio Access Network (C-RAN).
  8. Turning off idle networking or computing nodes: The most popular energy-efficient management strategy is to switch off idle devices or components. This approach can be applied from the IoT perception layer to the fog/cloud computing layer.
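The effect of dynamic frequency and voltage scaling (item 2 above) follows from the standard CMOS dynamic-power relation P = C · V² · f: because voltage enters squared, lowering voltage and frequency together cuts dynamic power superlinearly. The capacitance, voltage, and frequency values below are illustrative assumptions.

```python
# Dynamic power of a CMOS circuit: P = C_eff * V^2 * f. Lowering voltage
# and frequency together (as a P-state transition does) reduces dynamic
# power superlinearly. All numbers are illustrative assumptions.

def dynamic_power(c_eff_farads, voltage_v, freq_hz):
    """Dynamic (switching) power in watts."""
    return c_eff_farads * voltage_v ** 2 * freq_hz

full = dynamic_power(1e-9, 1.2, 1e9)        # full-speed operating point
scaled = dynamic_power(1e-9, 0.9, 0.5e9)    # DVFS low-power operating point
print(f"full: {full:.3f} W, scaled: {scaled:.3f} W")
# scaled/full = (0.9/1.2)**2 * 0.5 ≈ 0.28, i.e. roughly a 72% dynamic-power saving
```

Halving the frequency alone would halve the power; combining it with a voltage drop from 1.2 V to 0.9 V cuts it by almost three quarters, which is why P-states scale both together.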

Green computing

The increasing proliferation of IoT devices in almost every sector and industry of developing and developed economies has resulted in an increase in the amount of data collected from the environment, increasing the demand for processing and computing. IoT devices, like traditional devices, require high performance, QoS, and long battery life, which can be achieved primarily by developing strategies that improve both computing performance and energy consumption. Green or sustainable computing is the practice of developing strategies to maximise energy efficiency (minimise energy consumption) and to minimise the environmental impact of the design and use of computer chips, systems, and software, spanning the supply chain from the extraction of the raw materials needed to make computers to how systems are recycled [56].

Green computing strategies can be implemented in software or hardware. Some of the hardware-based green computing strategies have been discussed above in the section on green IoT hardware; the software strategies will be discussed in the section on green IoT software below. A major green computing strategy that improves both computing performance and energy efficiency is hardware acceleration. Hardware accelerators such as GPUs and Data Processing Units (DPUs) are major green computing drivers because they provide high-performance and energy-efficient computing for AI, networking, cybersecurity, gaming, and High Performance Computing (HPC) tasks. It is estimated that about 19 terawatt-hours of electricity a year could be saved if all AI, HPC, and networking computing tasks were offloaded to GPU and DPU accelerators. With the increasing use of sophisticated data analytics and AI tools to process the massive amounts of data generated by IoT devices, green computing strategies such as hardware acceleration will be essential [57].

Green computing is not only about devising strategies to reduce energy consumption. It also includes leveraging high-performance computing resources to tackle climate-related challenges, for example, the use of GPUs and DPUs to run climate models (e.g., prediction of climate and weather patterns) and to develop other green technologies (e.g., energy-efficient fertiliser production, development of battery technologies). A combination of IoT and green computing technologies is providing powerful tools for scientists, policymakers, and companies to tackle complex climate-related problems.

Green IoT Communication and Networking infrastructure

The data gathered or generated by IoT devices is often sent to processing nodes (edge nodes, fog computing nodes, or cloud computing data centres) that are located at some distance from the devices. As the data generated by the IoT devices increases, the traffic to be transported across the network infrastructure increases, requiring upgrades to the infrastructure to handle the growing traffic and resulting in a corresponding increase in energy demand. Apart from computing, communication is the largest energy consumer in IoT infrastructures; in an IoT device, most of the energy is consumed by the wireless communication module. Some green IoT communication and networking mechanisms include:

  1. Low-power networking and communication technologies
  2. Energy-efficient data transmission
  3. Network level offloading of computation
  4. Energy-efficient communication
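Energy-efficient data transmission (item 2 above) comes down to minimising radio-on time: the energy to send a payload is roughly transmit power times airtime, E = P_tx · bits / bitrate. The sketch below uses illustrative figures (a 60 mW transmitter at 250 kbit/s, loosely in the range of short-range IoT radios) to show why aggregating or compressing readings before sending saves energy.

```python
# Energy cost of transmitting a payload: E = P_tx * (bits / bitrate).
# Reducing the payload (aggregation, compression) shortens the radio-on
# time proportionally. The power and bitrate figures are illustrative
# assumptions, and protocol overhead is ignored.

def tx_energy_joules(payload_bits, bitrate_bps, tx_power_w):
    """Radio energy (J) to transmit a payload at a given bitrate and power."""
    airtime_s = payload_bits / bitrate_bps
    return tx_power_w * airtime_s

raw = tx_energy_joules(10 * 1024 * 8, 250_000, 0.06)        # 10 KiB raw readings
compressed = tx_energy_joules(1 * 1024 * 8, 250_000, 0.06)  # 1 KiB aggregate
print(f"raw: {raw * 1000:.2f} mJ, compressed: {compressed * 1000:.2f} mJ")
```

Under these assumptions a tenfold payload reduction yields a tenfold energy saving per transmission, which is why local preprocessing on the device often pays for itself despite its own computation cost.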

Green IoT architectures

Green IoT Software

Green IoT security

Advanced Green Manufacturing

The development of advanced design and manufacturing processes to produce energy-efficient chips is one of the strategies currently being used to reduce energy consumption and achieve the green computing and communication goals. Given the rapid adoption of smartphones and IoT systems, producing energy-efficient chips is very important. An example that illustrates how advanced manufacturing may significantly reduce the energy consumption of computing and communication devices is the A-series chips used in Apple's iPhones. The power consumption of the 7-nm A12 chip is 50% less than that of its 10-nm A11 predecessor. Also, the 5-nm A14 chip is 30% more power efficient than the 7-nm A13 chip, and the 4-nm A16 is 20% more power efficient than the 5-nm A15 [58].

A similar trend can be observed in the PC industry, although there is no guarantee that more advanced chip manufacturing processes will keep improving the performance and energy efficiency of chips.

(discuss chips in 4G/5G base stations)

Green IoT policies

Design consideration for energy sources for IoT devices

Scalability

Minimum maintenance

Mobility

The energy requirements

Devices that require a continuous supply of power cannot be powered solely by a battery.

Flexibility

Efficiency

The need for backup energy sources

Minimum cost

Sustainability

Green and environmentally friendly

Energy sources for IoT

The electrical and electronic devices in an IoT infrastructure require electrical energy to operate. The energy requirements of a device depend on its size, computing or processing requirements, traffic load, and the other mechanical and electrical loads that need to be handled, especially in IoT applications where feedback commands from fog/cloud computing platforms are used to control a physical process or system through actuators. The main power sources for IoT devices are:

  • main power
  • energy storage systems
  • energy harvesting systems

Main power

In IoT applications where the hardware devices do not need to be mobile and are energy-hungry (consume a significant amount of energy), they can be reliably powered using main power sources. The main power from the grid is in the form of AC power and must be converted to DC power and scaled down to meet the power requirements of the sensing, actuating, computing, and networking nodes. The hardware devices at the networking or transport layer, and those at the application layer (fog/cloud computing nodes), are often power-hungry and are supplied with energy from the grid.

A drawback of using the main power to supply an IoT infrastructure with many devices is the complexity of connecting the devices to the power source using cables. In the case of hundreds or thousands of devices, supplying them from the main power source is impractical. Furthermore, if the energy from the main source is generated using fossil fuels, then the carbon footprint of the IoT infrastructure increases as its energy demand increases.

Energy storage systems

Energy storage systems are systems that are used to store energy so that it can be consumed later. In IoT infrastructures, some sensors, actuators, computing and networking nodes and other electrical systems are powered using energy storage systems. The energy is stored in forms that can readily be converted into the electrical energy required to power the IoT devices, computing and networking nodes and other electrical systems in the IoT infrastructure. In some scenarios, electrical energy from a main power supply or local renewable energy plants (or energy harvesting systems) is converted to storable energy forms and stored in energy storage systems to be used when the source is not able to generate enough energy to meet the needs of the electrical systems in the IoT infrastructure. Energy storage systems can be categorised by the form of energy (mechanical, electrical, chemical, or thermal) that is stored and subsequently converted into electrical energy. The various categories of energy storage systems include:

  1. Electrostatic energy storage systems
  2. Magnetic energy storage system
  3. Electrochemical energy storage systems
  4. Chemical energy storage systems: The electrical energy generated is converted to chemical energy and stored in the form of chemical fuels that can easily be converted back into electrical energy. The energy can be stored in chemical forms such as hydrogen for a long time and then used when necessary. In this case, energy is harvested from renewable energy sources such as solar or wind when conditions are favourable, for example during spring or summer, and used during winter when conditions are less favourable for renewable energy generation.
  5. Mechanical energy storage systems: The electrical energy produced is converted into mechanical energy (e.g., potential or kinetic energy), which is stored in such a way that it can easily be converted back to electrical energy for consumption. Examples of mechanical energy storage systems include: pumped hydro energy storage systems, gravity energy storage systems, compressed air energy storage systems, and flywheel energy storage systems. Mechanical energy storage systems are very large and complex and may be used as an energy storage option for fixed IoT infrastructures like base station sites or data centres, provided that there is space for them and that the geography of the area is suitable. They are not suitable as an energy storage option for small IoT systems that are constrained by size and weight.
  6. Electrothermal energy storage systems: The electrical energy generated is converted to thermal energy, which is stored and used for heating or cooling purposes in large-scale infrastructure (e.g., base stations, core network infrastructure or fog/cloud data centres). The thermal energy can also be stored in such a way that it can be converted back into electrical energy for consumption.
  7. Hybrid energy storage system

Most IoT devices are powered using a small energy storage system (e.g., a battery or supercapacitor) with very limited energy capacity. The energy storage system is charged to its full capacity when the device is deployed. When all the energy stored in the energy storage system is completely consumed or drained, the device shuts down. The time from when the device is deployed to when all the energy stored in its energy storage system is consumed is called the lifetime of the device. The capacity of the energy storage system is chosen in such a way as to satisfy the energy consumption demand of the device and ensure a long lifetime. In a massive deployment of thousands or hundreds of thousands of IoT devices, frequent replacement or recharging of batteries or supercapacitors can be very tedious and costly and may also degrade the quality of service.

The use of an energy storage system is recommended mainly for IoT devices that require a very small amount of power (in the order of micro- or milliwatts) to operate and spend most of their time in sleep mode to save energy. It is desirable that the lifetime of a low-power IoT device powered by a small battery be at least a decade. The energy capacity of an energy storage system is constrained by its size and weight. That is, increasing the capacity of an energy storage system increases its size or weight, but it is desirable to keep the size and weight of IoT devices as small as possible, especially in IoT applications where mobility is very important.
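The lifetime of a duty-cycled device can be estimated from the battery capacity and the average current over a wake/sleep cycle. The sketch below illustrates this calculation; all parameter values (capacity, currents, reporting interval) are illustrative assumptions, not figures from the text, and real lifetimes are further reduced by battery self-discharge and ageing.

```python
# Rough battery-lifetime estimate for a duty-cycled IoT node.
# All numeric figures are illustrative assumptions.
def lifetime_years(capacity_mah: float,
                   active_ma: float, active_s: float,
                   sleep_ua: float, period_s: float) -> float:
    """Lifetime assuming one active burst of `active_s` seconds every `period_s` seconds."""
    sleep_s = period_s - active_s
    # Average current over one wake/sleep cycle, in mA.
    avg_ma = (active_ma * active_s + (sleep_ua / 1000.0) * sleep_s) / period_s
    hours = capacity_mah / avg_ma
    return hours / (24 * 365)

# A 2400 mAh cell, 20 mA for 2 s per 10-minute reporting interval, 5 uA sleep current.
print(f"Estimated lifetime: {lifetime_years(2400, 20, 2, 5, 600):.1f} years")
```

The example shows why aggressive duty-cycling matters: the sleep current, although three orders of magnitude smaller than the active current, still accounts for a non-negligible share of the average draw.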

The computing and networking nodes at the edge/fog/cloud layer of the IoT architecture are energy-hungry devices that are not often powered solely by energy storage systems. They are usually powered by a main power source from an electricity grid or from renewable energy sources (e.g., wind, solar, pumped hydropower). A backup energy storage system is often installed so that when the main power source fails (especially where energy is generated from renewable sources, which are intermittent in nature), the energy storage system supplies the computing or networking node until the main source is restored.
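Sizing such a backup system amounts to dividing the energy needed to ride through an outage by the usable fraction of the battery. The sketch below illustrates this; the load, outage duration, depth-of-discharge and conversion-efficiency figures are illustrative assumptions, not values from the text.

```python
# Sketch: sizing a backup battery for an edge/fog node to ride through an outage.
# All numeric figures are illustrative assumptions.
def backup_kwh(load_kw: float, outage_h: float,
               depth_of_discharge: float = 0.8,   # usable fraction of capacity
               converter_eff: float = 0.95) -> float:
    """Required nominal battery capacity in kWh for a given load and outage."""
    return load_kw * outage_h / (depth_of_discharge * converter_eff)

# A 2 kW fog computing node that must survive a 4-hour outage:
print(f"Required capacity: {backup_kwh(2.0, 4.0):.1f} kWh")
```

Note that the nominal capacity must exceed the raw load-times-duration product, because only part of the battery can be cycled without shortening its life and some energy is lost in conversion.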

  1. Batteries
    • Lithium-ion batteries
    • Lead acid batteries
    • Alkaline batteries
    • 3D-printed Zinc batteries
    • Solid-state thin film batteries
  2. Supercapacitors
  3. Superconducting magnetic energy storage
  4. Hybrid energy storage system

Energy storage systems for edge/fog/cloud layer devices (access points, base stations, fog computing nodes, cloud data centers)

  1. Battery energy storage systems
  2. Hydrogen energy storage systems
  3. Thermal energy storage systems
  4. Supercapacitors
  5. Superconducting magnetic energy storage
  6. Pumped hydro energy storage
  7. Hybrid energy storage systems

  • Electrical energy storage: supercapacitors, superconducting magnetic energy storage.
  • Mechanical energy storage: flywheels, pumped hydro storage, compressed air energy storage (CAES).
  • Chemical energy storage: conventional batteries, flow batteries, hydrogen energy storage, gas storage, biomass, and cryogenic energy storage (liquid air energy storage).
  • Thermal energy storage: aquiferous cold energy storage, cryogenic energy storage, and high-temperature storage such as water tanks, phase-change materials, and concrete thermal storage.

Energy harvesting systems

In order to deal with the limitations of energy storage systems, such as limited lifetime (the time from when an IoT device is deployed to when all the energy stored in its energy storage system is depleted), maintenance complexity, and limited scalability, energy harvesting systems are incorporated into IoT systems to harvest energy from the environment. The energy can be harvested from the ambient environment (energy sources naturally present in the immediate environment of the device, e.g., solar, wind, thermal, or radio frequency energy sources) or from external sources (e.g., mechanical systems or the human body) and then converted into electrical energy to power IoT devices or stored in an energy storage system for later use.

Energy harvesting is thus the process of capturing energy from the ambient environment or from external energy sources and converting it from an unusable form into useful electrical energy, which is then used to power IoT devices or stored for later use.

Energy harvesting from ambient energy sources
Energy can be harvested from ambient sources (environmental energy sources) such as solar and photovoltaic, radio frequency (RF), flow (wind and hydro), and thermal energy sources. Ambient energy harvesting is the process of capturing energy from the immediate environment of the device and converting it into electrical energy to power IoT devices. The ambient energy harvesting systems that can be used to power IoT devices, access points, fog nodes or cloud data centres include:

  • Solar and photovoltaic energy harvesting: Capturing natural light (in outdoor deployments) or artificial light (in indoor deployments) and converting it into electrical energy to power IoT devices.
  • Radio frequency (RF) energy harvesting: Capturing RF energy from the environment and converting it into electrical energy to power IoT devices.
  • Flow energy harvesting: Converting the energy generated from the flow of air (e.g., wind energy harvesting) or water (e.g., hydro energy harvesting) into electrical energy to power IoT or other IT infrastructures.
  • Thermal energy harvesting: Capturing the energy generated from temperature differences and converting it into electrical energy to power IoT devices and other IoT infrastructure.
  • Acoustic noise energy harvesting: Capturing the energy of the pressure waves produced by a vibrating source and converting it into electrical energy to power IoT devices.
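For the most common case, solar harvesting, a back-of-the-envelope yield estimate is panel area times irradiance times conversion efficiency times equivalent peak-sun hours. The sketch below illustrates this; the panel size, irradiance, efficiency and sun-hours figures are illustrative assumptions, not values from the text.

```python
# Back-of-the-envelope daily energy yield for a small PV energy harvester.
# All parameter values are illustrative assumptions.
def daily_yield_mwh(panel_cm2: float,
                    irradiance_w_m2: float = 1000.0,  # standard peak-sun irradiance
                    peak_sun_hours: float = 4.0,      # equivalent full-sun hours/day
                    efficiency: float = 0.18) -> float:
    """Approximate daily electrical yield in mWh."""
    area_m2 = panel_cm2 / 1e4
    return area_m2 * irradiance_w_m2 * efficiency * peak_sun_hours * 1000.0

# A 50 cm^2 panel mounted on a sensor node:
print(f"Daily yield: {daily_yield_mwh(50):.0f} mWh/day")
```

Comparing such a yield against a node's average daily consumption (a few mWh for a well duty-cycled device) indicates whether a deployment can run perpetually from harvested energy, with the battery acting only as a buffer for dark periods.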

Harvesting energy from external sources

  1. Energy harvesting from mechanical sources
    • Vibration energy harvesting: harvesting the energy created by vibrations (e.g., due to car movements, operations of machines etc.) and converting it into useful electrical energy, which can be used to power IoT devices or stored in the battery for later use.
    • Pressure energy harvesting: Harvesting the energy from pressure sources and converting it into useful electrical energy.
    • Stress-strain energy harvesting: Harvesting energy from mechanical vibrations by exploiting the property of some materials (e.g., piezoelectric materials) that, when they are subject to mechanical strain, produce an electrical charge that is proportional to the stress applied to it.
  2. Energy harvesting from human body sources

Human body energy harvesting is the process of harvesting energy from the human body and converting it to electrical energy, which is used to power wearable IoT devices, especially those designed for smart health applications. The source of energy could be the vibrations or deformations created by human activity (mechanical energy), human body temperature gradients (thermal energy), or human physiology (chemical energy).

  • Human activity energy harvesting: Capturing the biomechanical energy resulting from human activities (walking, cycling, running, and other forms of exercises) and then converting it into useful electrical energy that can be used to power the IoT devices or stored for later use.
  • Human physiological energy harvesting: Capturing the biochemical energy resulting from human physiological processes and then converting it into electrical energy that can be used to power IoT devices, especially medical implantable IoT devices.

Energy harvesting for IoT systems

Energy harvesting systems

Energy harvesting for edge class systems

Energy harvesting for fog/cloud class systems

Green IoT design trade-offs
Figure 58: Key design constraints for green IoT.

Green IoT Applications

Smart grids

Smart Agriculture

Smart manufacturing

Smart home

Intelligent transport systems

Smart cities


[1] VDI/VDE 2206 “Entwicklung mechatronischer und cyber-physischer Systeme”
[2] M. G. S. Wicaksono, E. Suryani, and R. A. Hendrawan. Increasing productivity of rice plants based on iot (internet of things) to realize smart agriculture using system thinking approach. Procedia Computer Science, 197:607–616, 2021.
[3] N. Silvis-Cividjian. Teaching internet of things (iot) literacy: A systems engineering approach. In Proceedings of the 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET), Montreal, QC, Canada, 2019. IEEE.
[4] M. G. S. Wicaksono, E. Suryani, and R. A. Hendrawan. Increasing productivity of rice plants based on iot (internet of things) to realize smart agriculture using system thinking approach. Procedia Computer Science, 197:607–616, 2021.
[5] OMG SysML v. 1.6 [https://sysml.org/]
[6] ISO/IEC/IEEE 29119-1:2022 Software and systems engineering — Software testing Part 1: General concepts
[7] Jain, A., Mittal, S., Bhagat, A., Sharma, D.K. (2023). Big Data Analytics and Security Over the Cloud: Characteristics, Analytics, Integration and Security. In: Srivastava, G., Ghosh, U., Lin, J.CW. (eds) Security and Risk Analysis for Intelligent Edge Computing. Advances in Information Security, vol 103. Springer, Cham. https://doi.org/10.1007/978-3-031-28150-1_2
[9] Dickey, D. A.; Fuller, W. A. (1979). “Distribution of the Estimators for Autoregressive Time Series with a Unit Root”. Journal of the American Statistical Association. 74 (366): 427–431. doi:10.1080/01621459.1979.10482531. JSTOR 2286348.
[10] Blair, R. Clifford; Higgins, James J. (1980). “A Comparison of the Power of Wilcoxon's Rank-Sum Statistic to That of Student's t Statistic Under Various Nonnormal Distributions”. Journal of Educational Statistics. 5 (4): 309–335. doi:10.2307/1164905. JSTOR 1164905.
[11] Everitt, B. S. (August 12, 2002). The Cambridge Dictionary of Statistics (2 ed.). Cambridge University Press. ISBN 978-0521810999.
[12] Upton, Graham; Cook, Ian (21 August 2008). Oxford Dictionary of Statistics. Oxford University Press. ISBN 978-0-19-954145-4.
[13] Stigler, Stephen M (1997). “Regression toward the mean, historically considered”. Statistical Methods in Medical Research. 6 (2): 103-114. doi:10.1191/096228097676361431. PMID 9261910
[14] josephsalmon.eu/enseignement/TELECOM/MDI720/datasets/Galton.txt - Cited on 03.08.2024.
[16] Understanding K-means Clustering in Machine Learning | by Education Ecosystem (LEDU) | Towards Data Science https://towardsdatascience.com/understanding-k-means-clustering-in-machine-learning-6a6e67336aa1 – Cited 07.08.2024.
[17] Robert L. Thorndike (December 1953). “Who Belongs in the Family?”. Psychometrika. 18 (4): 267–276. doi:10.1007/BF02289263. S2CID 120467216.
[18] Peter J. Rousseeuw (1987). “Silhouettes: a Graphical Aid to the Interpretation and Validation of Cluster Analysis”. Computational and Applied Mathematics. 20: 53–65. doi:10.1016/0377-0427(87)90125-7.
[19] Hyndman, Rob J; Athanasopoulos, George. 8.9 Seasonal ARIMA models. oTexts. Retrieved 19 May 2015.
[20] Box, George E. P. (2015). Time Series Analysis: Forecasting and Control. WILEY. ISBN 978-1-118-67502-1.
[21] IsolationForest example — scikit-learn 1.5.2 documentation
[22] Gold, Omer; Sharir, Micha (2018). “Dynamic Time Warping and Geometric Edit Distance: Breaking the Quadratic Barrier”. ACM Transactions on Algorithms. 14 (4). doi:10.1145/3230734. S2CID 52070903.
[23] Romain Tavenard, Johann Faouzi, Gilles Vandewiele, Felix Divo, Guillaume Androz, Chester Holtz, Marie Payne, Roman Yurchak, Marc Rußwurm, Kushal Kolar, & Eli Woods (2020). TSlearn, A Machine Learning Toolkit for Time Series Data. Journal of Machine Learning Research, 21(118), 1-6.
[25] Abi Tyas Tunggal, What is Cybersecurity Risk? A Thorough Definition, https://www.upguard.com/blog/cybersecurity-risk, 2024
[26] Rapid 7, Vulnerabilities, Exploits, and Threats, https://www.rapid7.com/fundamentals/vulnerabilities-exploits-threats/
[31] O. Garcia-Morchon, S. Kumar, S. Keoh, R. Hummen, R. Struik, Security Considerations in the IP-based Internet of Things draft-garcia-core-security-06, Internet Engineering Task Force (IETF), https://tools.ietf.org, 2013 accessed on 28/02/2020.
[32] O. Garcia-Morchon, S. Kumar, S. Keoh, R. Hummen, R. Struik, Security Considerations in the IP-based Internet of Things draft-garcia-core-security-06, Internet Engineering Task Force (IETF), https://tools.ietf.org, 2013 accessed on 28/02/2020.
[33] A. Rayes, S. Salam, Internet of Things-From Hype to Reality, Springer Nature, 2017.
[34] A. Rayes, S. Salam, Internet of Things-From Hype to Reality, Springer Nature, 2017.
[35] A. Rayes, S. Salam, Internet of Things-From Hype to Reality, Springer Nature, 2017.
[36] S. Yan-Qiang, W. Xiao-dong, Handbook of Research on Developments and Trends in Wireless Sensor Networks: From Principle to Practice, DOI: 10.4018/978-1-61520-701-5.ch015, IGI Global Knowledge Disseminator, https://www.igi-global.com/chapter/jamming-attacks-countermeasures-wireless-sensor/41122, 2010, access date: 7/03/2020
[37] Bruno Rossi, Top 10 IoT Vulnerabilities and How to Mitigate Them, https://sternumiot.com/iot-blog/top-10-iot-vulnerabilities-and-how-to-mitigate-them/
[38] Bruno Rossi, Top 10 IoT Vulnerabilities and How to Mitigate Them, https://sternumiot.com/iot-blog/top-10-iot-vulnerabilities-and-how-to-mitigate-them/
[39] Anna Chung and Asher Davila, Risks in IoT Supply Chain, https://unit42.paloaltonetworks.com/iot-supply-chain/
[40] Abi Tyas Tunggal, What is an Attack Vector? 16 Critical Examples, https://www.upguard.com/blog/attack-vector, 2024
[41] Lauren Ballejos, How to Secure IoT Devices, 2024
[42] Duplocloud, Defending Against IoT Threats: A Comprehensive Guide to IoT Malware Protection, https://duplocloud.com/blog/defending-against-iot-threats-a-comprehensive-guide-to-iot-malware-protection/
[43] Kyle Chin, What is the Internet of Things (IoT)? Definition and Critical Risks, https://www.upguard.com/blog/internet-of-things-iot, 2024
[44] Duplocloud, Defending Against IoT Threats: A Comprehensive Guide to IoT Malware Protection, https://duplocloud.com/blog/defending-against-iot-threats-a-comprehensive-guide-to-iot-malware-protection/
[45] Corey Glickman, “Green IoT: The shift to practical sustainability.” ETCIO.com (cio.economictimes.indiatimes.com, July 2023, Accessed on Aug. 24, 2023
[46] Thilakarathne, Navod Neranjan and Kagita, Mohan Krishna and Priyashan, WD Madhuka “Green internet of things: The next generation energy efficient internet of things.”Applied Information Processing Systems: Proceedings of ICCET 2021, pp. 391-402, 2022, Springer
[47] Corey Glickman, “Green IoT: The shift to practical sustainability.” ETCIO.com (cio.economictimes.indiatimes.com, July 2023, Accessed on Aug. 24, 2023
[48] Electronic Components, “Using modern technology to reduce power consumption”, June 2021, accessed on August 2023, https://www.arrow.com/en/research-and-events/articles/using-modern-technology-to-reduce-power-consumption
[49] Partner Perspectives, “Moore's Law Is Dead. Where Is Energy Saving Heading in the Electronic Information Industry?”, https://www.lightreading.com/moores-law-is-dead-where-is-energy-saving-heading-in-electronic-information-industry/a/d-id/781014, 2022, accessed on Sept. 7, 2023
[50] Anysilicon, “What is a System on Chip (SoC)?”, https://anysilicon.com/what-is-a-system-on-chip-soc/, accessed on: Sept 7, 2023
[51] Electronic Components, “Using modern technology to reduce power consumption”, June 2021, Accessed on Sept. 18, 2023
[52] Microsoft, “P-states and C-States”, https://learn.microsoft.com/en-us/previous-versions/windows/desktop/xperf/p-states-and-c-states, accessed on Oct. 2, 2023
[53] Heavy AI, “Hardware acceleration”, https://www.heavy.ai/technical-glossary/hardware-acceleration, accessed on Oct. 2, 2023
[54] Molly Loe, “Optical computers: everything you need to know”, TechHQ, May 2023, accessed on Oct. 4, 2023
[55] e.g., 2G/3G/4G/5G
[56] Rick Merritt “What is Green Computing?” NVIDIA, https://blogs.nvidia.com/blog/2022/10/12/what-is-green-computing/, 2022, accessed on Oct. 4, 2023
[57] Rick Merritt “What is Green Computing?” NVIDIA, https://blogs.nvidia.com/blog/2022/10/12/what-is-green-computing/, 2022, accessed on Oct. 4, 2023
[58] Partner Perspectives, “Moore's Law Is Dead. Where Is Energy Saving Heading in the Electronic Information Industry?”, https://www.lightreading.com/moores-law-is-dead-where-is-energy-saving-heading-in-electronic-information-industry/a/d-id/781014, 2022, accessed on Sept. 7, 2023