BOOK

Table of Contents

Authors

IOT-OPEN.EU Reloaded Consortium partners proudly present the Advanced IoT Systems book. The complete list of contributors is presented below.

Riga Technical University
  • Agris Nikitenko, Ph. D., Eng.
  • Karlis Berkolds, M. sc., Eng.
  • Aleksejs Jurenoks, Ph.D., Eng.
Silesian University of Technology
  • Piotr Czekalski, Ph. D., Eng.
  • Krzysztof Tokarz, Ph. D., Eng.
  • Godlove Suila Kuaban, Ph. D., Eng.
Tallinn University of Technology
  • Raivo Sell, Ph. D., ING-PAED IGIP
External Contributors
  • DCB Distribution & Consulting Becker, Friedhelm Becker
Technical editing
  • Marta Nikitenko, RTU publishing.
Graphic Design and Images
  • Blanka Czekalska, M. sc., Eng., Arch.
  • Piotr Czekalski, Ph. D., Eng.

Preface

This book is intended to provide readers with a comprehensive knowledge of IoT systems design on a conceptual level. It covers IoT design methodologies, IoT system architectures, IoT data-related aspects, cybersecurity in IoT systems, blockchain in IoT and green IoT. Almost every top-level chapter of the book constitutes a separate study module related to a selected aspect of IoT system design, so this book can be treated as a set of separate guides as well as a solid and complete workbook for an entire master's-level IoT course.

The primary target group of readers is master's students and industrial system designers such as CTOs. This book constitutes a comprehensive manual for IoT technology; however, it is neither a complete encyclopedia nor an exhaustive survey of the market. The reason is simple: IoT is such a rapidly changing technology that new devices, ideas and implementations appear daily. Once familiar with this book's contents, the reader will understand IoT systems design methodologies, tools and challenges. This book also covers topics of particular industrial interest, including but not limited to data analytics, cybersecurity and an introduction to blockchains, which illustrates the diversity of the IoT world and technology landscape.

Even though most of the content stays at a conceptual level, the authors assume that readers have at least a general knowledge of the technical details of IoT and embedded systems, including the main components at the electronics level, networking, data processing and security.
For this reason, students starting their journey with IoT systems should first study the “Blue Book”, which covers all technical aspects of IoT needed as background knowledge for this high-level approach towards IoT systems design.

Playing with real or virtual hardware and software is always fun, so keep going!

 

Project Information

This book was implemented under the wings of the following projects:

  • Cooperation Partnerships in Higher Education, 2022, IOT-OPEN.EU Reloaded: Education-based strengthening of the European universities, companies and labour force in the global IoT market, project number: 2022-1-PL01-KA220-HED-000085090,
  • Horizon 2020 Research Innovation and Staff Exchange Programme (RISE) under the Marie Skłodowska-Curie Action, Programme H2020-EU.1.3.3. - Stimulating innovation by means of cross-fertilisation of knowledge, Grant Agreement No 871163: Reliable Electronics for Tomorrow’s Active Systems.

Erasmus+ Disclaimer
This project has been funded with support from the European Commission.
This publication reflects the views only of the authors, and the Commission cannot be held responsible for any use that may be made of the information contained therein.

Copyright Notice
This content was created by the IOT-OPEN.EU Reloaded Consortium 2022-2025.
The content is copyrighted and distributed under the Creative Commons CC BY-NC licence, free for non-commercial use.

CC BY-NC

In case of commercial use, please get in touch with an IOT-OPEN.EU Reloaded Consortium representative.

Introduction

In recent years (2024-2025), we have experienced rapid growth in the Internet of Things (IoT) domain, as expressed by the number of scientific publications, the market volume, and other indicators suggesting that IoT is here to stay. IoT is one of the top priorities in Horizon Europe's Research and Innovation strategic plan, which, among its thematic areas, recognises IoT as one of the most important under the Technology thematic group [European Commission, Directorate-General for Research and Innovation, Synopsis report – Looking into the R&I future priorities 2025-2027, Publications Office of the European Union, 2023, https://data.europa.eu/doi/10.2777/93927]. Knowing the importance of IoT technologies, how can one contribute to the domain by developing, using and designing IoT systems for different applications? This book, “The Green Book,” which continues the previous one, “Introduction to the IoT”, provides the background needed for design methods, IoT data analysis, cybersecurity essentials, and other vital topics.

The book is organised into the following chapters:

IoT Design Methodologies

IoT systems are networked cyber-physical systems (CPS) comprising components from three main domains: hardware, primarily electromechanical devices; software, mostly microcontroller-specific process control software; and communication infrastructure. To develop an IoT solution, all aspects of these three domains must be designed in close synergy. At the component level, the main building block of an IoT system is a node. A node is usually a microcontroller-based device dedicated to performing a specific task. The most common task is to take measurements from the environment, but a node can also act as an actuator or a user interface. In addition, IoT nodes can provide all kinds of supportive functions, like logging, timekeeping, storage, etc. However, the main purpose remains one of three core functions: sensing the environment, actuating, or interfacing with humans through user interfaces. Today, CPSs are created by expanding mechatronic systems with additional inputs and outputs and coupling them to the IoT. In principle, an IoT system is similar to classical smart systems, e.g., robots or mechatronic systems. These systems can be decomposed into three interconnected domains: process control by software, mechanical movements, and sensing of physical parameters from the system's environment. The figure below (figure 2) demonstrates how these domains are interconnected to act as a smart system.

Smart System Components and Interactions
Figure 2: Smart System Components and Interactions

An IoT system has a similar purpose to general smart systems. Still, the main difference is that an IoT system is a distributed solution of smart functions using internet infrastructure. Similar functionality is decomposed into smaller devices that together act as a single functioning system rather than one complex device. Nevertheless, when all small nodes are interconnected and can exchange messages with each other regardless of their location, we get a powerful system capable of performing automation tasks in vast application domains. The following figure represents the IoT system architecture, its distributed nature, and its communication function.

Smart IoT System Components and Interactions
Figure 3: Smart IoT System Components and Interactions
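
The sensing, actuating, and communicating roles of a node described above can be sketched as a minimal control loop. This is an illustrative sketch only, assuming a thermostat-like node; the sensor, actuator, and publishing functions below are hypothetical placeholders, not an API introduced in this book:

```python
import time

def read_temperature():
    """Hypothetical sensor read; a real node would query hardware."""
    return 21.5  # degrees Celsius

def set_fan(on):
    """Hypothetical actuator; a real node would drive a relay or a PWM pin."""
    print("fan", "on" if on else "off")

def publish(topic, value):
    """Hypothetical uplink; a real node would use MQTT, CoAP, etc."""
    print(f"{topic} = {value}")

def node_loop(cycles=3, threshold=25.0):
    """One IoT node cycle: sense, decide, actuate, and report."""
    for _ in range(cycles):
        temperature = read_temperature()          # sensing the environment
        set_fan(temperature > threshold)          # actuating on the environment
        publish("room/temperature", temperature)  # communicating upstream
        time.sleep(0.01)  # a real node would sleep far longer to save energy

node_loop()
```

A real node would replace the placeholders with hardware drivers and a network stack and would spend most of its time in a low-power sleep state between cycles.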

Even though an IoT system has a different component architecture from a regular mechatronic system, the development methodologies can be easily adapted from the domains of mechatronic systems and software system design. IoT systems have their specifics, but at the conceptual level, they are like any other smart software-intensive system. Thus, the methodologies are not IoT-specific but combinations and adaptations from related domains.

Product development process

The product development process is a well-established domain with many different concepts. Over time, as the software share of technical systems has grown, more and more software development methodologies have been integrated into the physical product development process. At the component level, IoT systems resemble cyber-physical systems, combining characteristics and features of mechatronic, software and network systems. Thus, existing product design methodologies are a logical choice to apply to the IoT system design process. Regardless of the product's nature, the general product design process involves several iterations through the design stages.

The classical product design process starts with requirements analysis, followed by conceptual design. When a design candidate is selected, the detail design stage develops domain-specific solutions (mechanical, electrical, software, etc.). The next stage integrates the domain-specific design results into one product and validates the solution. In addition, the product design process must deal with manufacturing preparation, maintenance and utilisation planning. Figure 4 illustrates the general process for most technical system designs, regardless of the application field. However, depending on the system specifics, several other relevant stages and procedures may also be required.

General Product Design Stages
Figure 4: General Product Design Stages

V-model

IoT systems are a combination of mechatronic and distributed software systems. Therefore, design methodologies from these domains are most relevant for IoT systems. For example, the well-known V-model (figure 5) has long been used for the software development process but has also been adapted to the mechatronic design process. The Association of German Engineers has issued the guideline VDI 2206 - Design methodology for mechatronic systems (Entwicklungsmethodik für mechatronische Systeme) [1]. This guideline adopts the V-model as a macro-cycle process. The V-model is in line with the general product design stages but emphasises verification and validation throughout the whole development process. The stages are executed sequentially in a V shape, hence the name. The actual design process runs through several V-shaped macro-cycles, and every cycle increases the product's maturity. For example, the output of the first iteration can be an early proof-of-concept prototype, while the output of the last iteration is a ready-to-deploy system. How many iterations are needed depends on the complexity of the final product. The figure below presents the IoT system design adapted to the V-model. The only difference from mechatronic systems is the domain-specific design stage. However, every general stage has several internal procedures and IoT-specific sub-design stages which must be addressed.

V-model for IoT systems
Figure 5: V-model for IoT systems

New product development starts with customer input or other motivation, e.g., a business case, which must be carefully analysed and specified in a structured way. Requirements are not always clearly defined, and effort invested in proper requirements engineering pays off by saving significant cost in later design stages. It is not good practice to start designing a new system or solution when requirements are not adequately defined. At the same time, rarely is all information available initially, and requirements may be refined or even changed during the design process. Nevertheless, well-defined and analysed requirement specifications simplify the later design process and reduce the risk of expensive change handling at later stages. The initial requirements are articulated from the stakeholders' perspective, focusing on their needs and desires rather than the system itself. In the subsequent step, these requirements are translated into a system-oriented perspective. The specification resulting from the requirements elicitation process provides a detailed description of the system to be developed.

The second design stage is system architecture and design, which is dedicated to developing concepts for the whole system. Concept development and evaluation are decomposed into several sub-steps and procedures: developing alternative concept candidates, assessing them, and selecting the best concept for further development. Once the concept solution is selected and validated against the requirements, the final solution candidate can be frozen, and development enters the detailed design stage. In the detailed design stage, domain-specific development occurs, including hardware, software, network structure, etc. Integration and validation follow once the domain-specific solutions reach the specified maturity. The final step before the first prototype solution is complete system testing and, again, verifying and validating against the system requirements.

The whole process may be repeated as often as necessary, depending on the final system's required maturity level. If only a proof of concept is needed, one iteration might be enough, which is frequently the case for educational projects. However, for real customer systems, many V-cycle iterations are usually performed. Once the design process is completed, the system enters the production stage, and the focus then shifts to system/user support and maintenance. However, as in modern software-intensive systems, constant development, bug fixes, upgrades, and new feature development are standard practice.

Challenges

When designing an IoT system, there are common design challenges, as in any other system engineering project, but also a few IoT-specific aspects. The engineering team must deal with difficulties similar to those of mechatronic and software system design. Some vital elements to address when designing and deploying a new IoT system include:

  • New IoT systems often require organisational and working culture changes. Changing workers' mindsets to collaborate with new IoT systems can be a critical issue that is frequently underestimated during design.
  • Due to their complexity and dependence on several existing systems, IoT projects tend to take much longer to implement than anticipated.
  • IoT systems are multi-domain solutions and thus require engineering skills from very different fields, some of which might not be available, such as microcontroller programming, sensors, data communication, cybersecurity, etc.
  • Interconnectivity issues can be critical: IoT system components must be able to communicate with each other, but mismatches among the many protocols, network architectures, and even electrical connectors can cause failures.
  • Data security is often underestimated. IoT systems are not standalone but, in most cases, interconnected through the public internet. Implementing cybersecurity is very challenging because the overall system security is defined by its weakest segment.
  • Scalability and dealing with legacy equipment. IoT systems often retrofit old heavy machinery in industry, combining old and new technologies. This can be more challenging than expected and, in some cases, extremely costly if all interfacing issues must be eliminated.

IoT System Design Principles

It is expected that billions or trillions of IoT devices will be deployed in the various sectors of the society or economy (e.g., intelligent transport systems, smart health care, smart manufacturing, smart homes, smart cities, smart agriculture, and smart energy) to deliver better customer experience, provide more value to the market, and to solve significant problems such as climate change, national security, and public safety. Integrating massive numbers of IoT nodes, networking nodes, and computing devices or applications into the existing infrastructures in various industries will increase their complexity. It is, therefore, essential to follow some design principles to ensure that IoT systems designed to solve problems or create unique value in the various industries are adequately designed to fulfil their intended functions and are easier to operate, maintain, and scale.

IoT system design has its own set of challenges as IoT systems often contain multiple components or elements (e.g., sensors and actuators, cyber-physical devices, networking nodes, computing nodes) interacting with one another to collect data, manipulate physical systems, transport data packets, and analyse the collected data to deliver better customer experience, create value, or solve a specific problem. Below are some practical IoT system design principles that should be considered when designing IoT systems.

Conduct proper research

Before designing an IoT solution, it is essential to understand the customers' problems or challenges. The designer must think from the customers' perspective and design a research study to understand the customers' problems and the existing solutions they use. Then, the designer must find out how IoT solutions can address those challenges. Only after understanding the actual problem that the customers are facing, and how IoT solutions could address it, should IoT system designers engage in developing a solution.

An IoT system may be designed not only to solve a problem or pain that potential customers are feeling but also to create unique value. Innovative IoT solutions can create exceptional value that makes their potential customers productive and competitive. IoT system designers must understand the unique value that their system or solution will offer to potential customers to improve their productivity, competitive advantage, or user experience. It is, therefore, necessary to conduct proper research before engaging in the project.

The research process could include defining research questions, defining the market segment, sending out questionnaires to potential customers, conducting interviews with relevant stakeholders in the target market, talking with sales representatives of potential customers, and attending industry conferences. The research findings should be well documented and analysed by all the stakeholders and the design team before the IoT project is launched so that the designers can cater to the customers' needs during the design process.

Focus on the values, needs or problems of users

The features to be included in the IoT solution should align with users' needs and problems and the value they can derive from the products to improve their productivity, competitive advantage or experience. The users are sometimes unaware of the value of IoT solutions or how they could address some of their problems, making them reluctant to adopt IoT solutions. Another barrier preventing users from adopting IoT solutions is uncertainties regarding cost, usability, returns on investments, and security concerns. Thus, the design team is responsible for addressing those user concerns when designing IoT solutions.

It is essential to answer the following questions:

  • What value will be delivered to the users by the IoT solution to be designed?
  • What are some of the barriers that will prevent users from adopting the IoT solution to be designed?
  • How will the IoT solution be designed to address the users' needs, problems and challenges?
  • How will the IoT solution be designed to deal with the user adoption barriers?
  • Which features are to be added to the IoT solution to be designed, and will they address the problems of users and deliver the value that will improve their productivity, competitive advantage, and quality of experience?

Addressing the above questions carefully during the research and technical design stages is essential. Thus, when designing IoT systems, focusing on the users' values, needs, and problems is crucial.

Adopt a system-based design approach

The Internet of Things (IoT) is still in its early stages. We still have the opportunity to ensure that IoT systems are scalable, energy efficient, cheap, and secure by design while providing acceptable QoS. Another design requirement for IoT systems is interoperability. A holistic system-based approach is required to attain all these design goals and the goals of other stakeholders (network operators, service providers, regulators, and end users). There is a need for the development of formal methods and tools for the design, operation, and maintenance of IoT systems, networks, and applications in such a way as to satisfy the goals of the various stakeholders with minimal unintended consequences.

An IoT system often consists of multiple elements, such as the cyber-physical systems (sensor and actuator devices) deployed to collect data from the environment and to manipulate physical systems, communication systems deployed to transport data within the IoT infrastructure, and computing systems deployed to process the massive amounts of data collected by the sensors and send feedback to actuators to automate physical processes or to human operators to make decisions (or take actions). One of the elements of the IoT infrastructure is the cyber security system, which should interact with other systems within the IoT infrastructure to deliver the required service. An IoT system is sometimes designed to interact with others to provide a specific value or solve a particular problem. It is, therefore, essential to adopt a system-based approach when designing IoT systems to ensure that the interaction between the various IoT elements and other existing systems of the organisation or users delivers the expected value or addresses the problems they are designed to solve. Systems thinking, design thinking, and systems engineering methods and tools can be leveraged to develop formal tools for designing IoT systems.

Incorporate security measures

Users are concerned about possible security weaknesses that could appear in their infrastructure after integrating IoT solutions. IoT system designers should incorporate security mechanisms into their solutions to address the users' security concerns. Sometimes, IoT system designers are preoccupied with implementing features that are required to address customers' problems or deliver the expected value to customers. They may ignore the implementation of features that address customers' security concerns. Some IoT device manufacturers and service providers are often preoccupied with minimising manufacturing and deployment costs and the “time-to-market” such that security concerns are ignored or considered later.

Securing an IoT infrastructure's data, hardware, and software assets is essential and should be considered when designing IoT infrastructures. IoT system designers should treat a robust cyber security system as a subsystem within the IoT system to be designed and consider how the cyber security system will interact with other subsystems to deliver a secure IoT solution to the users. The IoT cyber security system consists of multiple elements that work together to provide an effective security solution to protect the data and other IT assets within an IoT infrastructure. Some of the cyber security features that should be considered when designing IoT solutions include:

  • Cryptography and encryption
  • Access control
  • Attack detection and prevention
  • Honeypots
  • Runtime monitoring
  • Firewalls
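
As a concrete illustration of the lightweight end of this list, the sketch below uses only the Python standard library to authenticate a sensor payload with an HMAC so a receiving gateway can detect tampering. The shared key and the payload format are hypothetical, and real deployments would add secure key provisioning and replay protection:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"hypothetical-per-device-key"  # provisioned securely in practice

def sign_payload(payload: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the receiver can verify integrity."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": payload, "tag": tag}

def verify_payload(message: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    body = json.dumps(message["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_payload({"sensor": "t1", "value": 21.5})
assert verify_payload(msg)        # untouched message verifies
msg["body"]["value"] = 99.9       # simulate tampering in transit
assert not verify_payload(msg)    # tampering is detected
```

HMAC only provides integrity and authenticity; confidentiality would additionally require encryption, and constrained devices often delegate heavier cryptography to a gateway.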

A significant security weakness in IoT infrastructures is often at the IoT device level. Because these devices are powered by batteries with limited energy capacity, their computing and communication capabilities are minimal, making it difficult to implement reliable yet sophisticated security mechanisms. As a result, it is easy to compromise these devices to disrupt IoT services and sometimes turn them into an army of botnets to conduct massive and sophisticated distributed denial of service attacks on the IoT infrastructure as a whole and the Internet. Maintaining a rational trade-off between performance, energy consumption, and security is essential.

The IoT security threats to be considered during IoT system design are not only those from external attackers but also those from internal attackers. The threats could be within, and there should be a mechanism to deal with internal threats. The internal threats could be from disgruntled employees (users) and reckless or careless ones who may perform operations that may breach or compromise some of the IT assets within the IoT infrastructures. Therefore, the IoT system designer must understand every possible error that may occur when operating IoT systems and then take care of them when designing the IoT solution and ensure that the users are aware of such errors and well-equipped to handle them.

The security aspects to be considered when designing IoT systems are not only cyber security aspects but also the physical security aspects. The physical security of the IoT infrastructure should be considered when designing and deploying them. Some adequate measures should be designed to address threats to the physical security of IoT devices.

Incorporate green and environmental sustainability measures

Energy and environmental sustainability are among the essential constraints to consider when designing and deploying IoT infrastructures. Since IoT devices are designed to be small, light, and powered by small batteries with limited energy capacity, energy efficiency is a primary design criterion when developing IoT devices. To reduce the energy consumption of IoT devices to a minimum, low-power communication and networking technologies, low-power computing hardware and software, and low-power security mechanisms are incorporated into IoT devices. As the amount of data collected by IoT devices from the environment increases, the traffic transported through the networking infrastructure to edge/fog/cloud computing nodes or data centres grows, and with it the energy consumed for data communication and computing. The increase in energy consumed by IoT infrastructures increases the carbon emissions of the IoT industry, which rise sharply with the rapid, large-scale adoption of IoT in the various sectors of the economy.

In addition to energy efficiency, it is essential to minimise the amount of waste the IoT industry creates. IoT devices are powered by batteries with very limited energy capacity, resulting in a very short lifetime for IoT devices (the lifetime of an IoT device is the time it takes to deplete all the energy stored in its battery, requiring a recharge or a change of battery). If IoT batteries are replaced within a very short time (less than a decade), then with the deployment of tens of billions or trillions of IoT devices globally, there will be a problem of how to dispose of or recycle the IoT batteries. There is already an environmental problem in managing the massive amount of batteries and e-waste the electronics industry generates. The problem will worsen if environmental sustainability is not considered as one of the design criteria when designing IoT devices. Some of the green and environmental sustainability strategies that should be considered when designing IoT devices include:

  • Green IoT hardware: Designing energy-efficient IoT hardware and incorporating hardware-based energy-saving mechanisms in IoT devices (e.g., shutting down idle devices).
  • Green IoT communication infrastructure: Designing energy-efficient networking and communication infrastructure and adopting low-power networking and communication technologies for IoT networks.
  • Green IoT architectures: Adopting energy-efficient networking, communication, and computing architectures. For example, edge/fog computing-based architectures can be adopted, where lightweight processing is shifted from the cloud data centres (often located far away from the IoT devices) to energy-efficient edge/fog computing nodes (closer to the IoT nodes). This kind of architecture improves performance (decreasing packet delays and packet losses). It also increases energy efficiency, as it decreases the energy consumed in transporting IoT packets through core networks to cloud data centres and reduces the computing demand on the cloud data centres, reducing their energy demand. The edge/fog nodes are often energy-efficient (low-power) computing devices like the Raspberry Pi.
  • Green IoT software: Designing energy-efficient software and algorithms for processing IoT data and IoT security mechanisms.
  • Green energy sources for IoT systems: Energy harvesters are incorporated into IoT devices to harvest energy from the environment to charge the energy storage systems (battery or capacitor/supercapacitor/ultracapacitor), which supplies the IoT device when the renewable sources are not able to generate a sufficient amount of energy to power the IoT devices directly. Using renewable energy sources also increases the lifetime of the IoT devices, decreasing the maintenance cost of changing the IoT batteries or capacitors/supercapacitors/ultracapacitors and minimising the amount of waste generated from the IoT industry.
  • Green IoT policies: Policymakers should also develop green IoT regulations and standards to be followed when designing green and sustainable IoT solutions.
  • Green IoT education: An education strategy should raise public awareness of the need for green and sustainable IoT solutions so that IoT users, developers, and service providers consider environmental sustainability when making their choices.
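
A back-of-the-envelope calculation illustrates why the hardware-level energy-saving mechanisms above, such as shutting down idle devices, dominate battery lifetime. The current draws and battery capacity below are illustrative assumptions, not measured values for any particular device:

```python
def battery_lifetime_days(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Estimate device lifetime from the duty-cycle-weighted average current.

    duty_cycle is the fraction of time the device is awake (0..1).
    """
    average_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    hours = capacity_mah / average_ma
    return hours / 24

# Assumed figures: a 2400 mAh cell, 20 mA when awake, 0.01 mA in deep sleep.
always_on = battery_lifetime_days(2400, 20, 0.01, duty_cycle=1.0)
duty_cycled = battery_lifetime_days(2400, 20, 0.01, duty_cycle=0.001)

print(f"always on:   {always_on:.0f} days")     # about 5 days
print(f"duty-cycled: {duty_cycled:.0f} days")   # several years
```

Under these assumptions, waking up only 0.1% of the time stretches the lifetime from days to years, which is why aggressive sleep scheduling is a standard green IoT technique.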

The IoT application context should be considered

When designing IoT solutions, it is essential to consider the physical, social, and environmental context in which the device will be used. The features and specifications of IoT devices depend on the context of the application. IoT systems intended for smart agriculture, smart cities, smart health care, smart homes, intelligent transport systems, the Internet of Military Things (MIoT, also called the Battlespace Internet of Things, BIoT), or smart energy should take into consideration the physical or social realities that may impact the integration of IoT systems into a given sector to fulfil a defined goal or purpose. For example, IoT devices designed for agricultural, disaster/emergency response, or battlefield purposes should operate sustainably in harsh conditions that differ from those faced by IoT devices designed for smart homes or medical and health care purposes.

To consider the application context, it is recommended to treat the entire IoT use case as a system of which the IoT system being designed is a part. In this way, the interaction between the IoT system being designed and other existing systems in the sector (e.g., cities, homes, factories, transportation infrastructure, health care infrastructures, etc.) is modelled using systems engineering or system dynamics modelling tools to ensure that the larger system of which the designed IoT system is a part functions as a whole. Integrating IoT systems into an organisation's existing infrastructure may create new problems that did not exist before or may not benefit the organisation. Hence, it is essential to consider the application context and apply a system-based approach when designing IoT systems or solutions.

Effective data management strategies

IoT devices collect massive amounts of data from their environments, which should be carefully managed to ensure data privacy and prevent the abusive use of personal data. Incorporating IoT devices into critical infrastructure such as energy, water, transportation, and health care infrastructure poses a national security risk for most countries, reinforcing the case for effective data management. The collected IoT data should be adequately protected during processing, transmission, and storage in compliance with data security regulations and standards.

Data ownership issues, the kind of data that should be collected, and what the IoT service provider is permitted to do with the data should be considered when designing IoT solutions. The designers should ensure they comply with existing regulations or standards on data collection, management, and processing. Hence, the designers should ensure that the data of users is effectively managed by answering the following questions:

  • What type of data should be collected?
  • Who owns the data?
  • Where is the data stored?
  • What do the IoT service providers intend to do with the data?
  • What information is expected from the data, and how will it be used?
  • What mechanisms are designed to protect the data during processing, transmission and storage?
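The checklist above can also be captured as a machine-readable policy record that ships with a deployment, so that completeness is easy to audit. The following Python sketch is purely illustrative; the field names and values are hypothetical, not a standard schema.

```python
# Hypothetical sketch: the data-management questions above expressed as a
# policy record. Field names and values are illustrative, not a standard.
data_policy = {
    "data_collected": ["temperature", "humidity"],   # what data is collected
    "data_owner": "device_owner",                    # who owns the data
    "storage_location": "eu-regional-cloud",         # where it is stored
    "provider_usage": ["service_improvement"],       # what the provider may do
    "derived_information": "hourly climate trends",  # expected insight and use
    "protection": {                                  # safeguards in each state
        "in_transit": "TLS 1.3",
        "at_rest": "AES-256",
        "in_processing": "access-controlled",
    },
}

def policy_is_complete(policy):
    """Check that every question from the checklist has an answer."""
    required = {"data_collected", "data_owner", "storage_location",
                "provider_usage", "derived_information", "protection"}
    return required <= policy.keys()

print(policy_is_complete(data_policy))  # True: all six questions answered
```

Such a record can be reviewed alongside the design documents and checked automatically before each release.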

Ensure scalability and flexibility

The IoT market is growing steadily, so IoT systems must be designed to scale up quickly as demand for IoT services increases. When developing IoT systems, it is essential to anticipate future growth and provide the flexibility to expand the infrastructure and add resources as service demand rises. Scalability and flexibility can be ensured by implementing a modular, adaptable architecture. Also, the hardware, software, computing, networking, energy, and security choices should be made so that the designed IoT system can handle both current demand and future growth in data volume, traffic, and computing load.

Interoperability and compatibility are significant barriers to scalability and flexibility in IoT system design. To ensure scalability, IoT systems should integrate and interoperate seamlessly with the organisation's existing infrastructure and that of its partners. Hardware and software choices should ensure interoperability and compatibility so that the IoT infrastructure can be scaled up more easily. In short, “plan carefully, choose wisely, and design intelligently for a successful IoT system” should be the driving philosophy in IoT systems design [2].

Design intuitive, user-friendly, and simple user interfaces

The user interface of an IoT system should be intuitive, user-friendly, and simple enough for users to operate the system with minimal difficulty. To compete with other IoT products on the market, a system should be simple and relatively easy to operate. Users are often reluctant to adopt complex products that are difficult to use, manage, or maintain, and they quickly abandon such products, whereas they readily adopt simple products that are easy to use, operate, and maintain. It is therefore essential to follow IoT design thinking principles that facilitate intuitive, user-friendly, and simple user interfaces, and designers should prioritise simplicity and clarity to improve the user experience.

Develop effective testing and quality assurance plans/methodologies

Testing and quality assurance are essential phases in the IoT system development life cycle. They enable the development of IoT systems that meet and satisfy customers' needs, provide satisfactory performance, and are compatible and interoperable with existing IoT systems and the other IT infrastructure of organisations. Comprehensive testing and quality assurance plans developed during the IoT system design phase allow stress tests and audits to verify that the design goals (performance, security, sustainability, interoperability, cost, etc.) and national (or regional) regulatory rules or standards are fulfilled.

Effective performance test plans can ensure that the designed IoT system withstands high stress while still providing users with acceptable service and experience. Security tests and audits enable IoT system designers and developers to identify potential vulnerabilities and threats and to ensure compliance with security regulations and standards. Effective testing and quality assurance plans also verify the compatibility and interoperability of the designed IoT system with other IoT systems (devices and networks), which is essential for seamless integration and for delivering the desired quality of service and experience to users. Therefore, by implementing robust testing procedures, IoT system designers can ensure that the system they are designing meets the highest standards of quality and reliability [3], satisfying users' needs and performance expectations.

Ensure low-cost deployment, operation, and maintenance

An effective deployment, operation, and maintenance plan is essential to ensure that the IoT systems being designed are cost-effective and affordable, providing users with reasonable returns on their investments. Every stage of the IoT system development cycle should be carefully planned to minimise design, manufacturing, deployment, operation, and maintenance costs. Deployment, operation, and maintenance procedures should be carefully documented so that the IoT infrastructure can be deployed, operated, and maintained with minimal intervention and human resources.

In IoT applications where thousands, tens of thousands, or millions of IoT devices are deployed and spread across a wide geographical area, deployment, operation, and maintenance procedures are tedious and costly. Effective deployment, operation, and maintenance plans and tools are essential to ensure acceptable performance (reducing downtime and improving QoS and QoE). Monitoring and preventive maintenance plans to prevent failures or breakdowns, and reactive maintenance plans to restore the system after breakdowns and reduce downtime, should be carefully designed and documented. Scalability plans should also be created to enable cost-effective expansion of the IoT system to handle more users and satisfy customers' expectations.

It is essential to develop training and support plans to ensure that users are well trained and supported to use and manage the designed IoT system effectively. Reducing the need for human intervention is essential to keep costs low; deployment, operation, and maintenance tasks should be automated, especially for large-scale IoT infrastructures. Automation reduces deployment, operation, maintenance, security monitoring, and response costs. Ideally, IoT devices should operate for years or even decades without requiring maintenance or replacement of parts. Therefore, IoT system designers should ensure that deployment, operation, and maintenance costs are as low as possible.

Develop working prototypes before mass production

In the early stages of the IoT system development life cycle, it is necessary to develop a working, well-tested prototype that satisfies the users' needs before mass production or deployment of the IoT system. Developing such a prototype helps resolve many functional, performance, security, deployment, maintenance, and sales issues, increasing the chances of success, long-term adoption, and sustainability for the IoT product or project.

When a working prototype is created, several iterations may be required to improve the product to satisfy the organisation's or users' needs. The prototype should meet the required design goals (functionalities, performance, security, scalability, interoperability, and sustainability goals) before the system can be mass-produced or deployed. Therefore, getting the product or solution right is essential through the rapid and iterative development of a complete working prototype that satisfies every technical and user design goal.

Consider feedback from user-created use cases or requirements

The various use cases in which the IoT system is deployed should provide user feedback that can be used to improve the product or solution. Users may expect or require features absent from the developed system, and IoT designers should be able to improve their designs to cater to those needs. Users may also use the designed system in ways the designers did not anticipate, so designers should have a mechanism to follow up with users and learn the various ways and contexts in which the system is being used. Therefore, ideas from user feedback should be used to improve the design and adapt the system to satisfy the needs of its users.

IoT System Design Goals

IoT (Internet of Things) systems represent a convergence of hardware, software, and networking technologies to create seamless, intelligent solutions for various applications. To achieve their full potential, IoT systems must be designed with clear and comprehensive goals that ensure robustness, user-friendliness, scalability, and security. Here’s a detailed exploration of the primary design goals for IoT systems (figure 6):

IoT System Design Goals
Figure 6: IoT System Design Goals

User Satisfaction

User satisfaction is the cornerstone of IoT design, ensuring systems deliver intuitive, accessible, and valuable experiences. Achieving high user satisfaction requires the following:

1. Ease of Use: Interfaces and interactions should be simple and require minimal learning. Intuitive designs reduce user frustration and increase adoption rates. Tools like user testing, usability studies, and iterative feedback loops are critical in refining systems to align with user expectations.
Example: A smart thermostat with a user-friendly mobile app allows users to control home temperatures effortlessly, even remotely.

2. Reliability: Consistent performance is key to building trust. IoT devices must operate seamlessly without frequent failures, downtime, or lag. High reliability enhances user confidence and system usability.

3. Customisation and Personalisation: IoT systems should cater to individual user preferences. Features like custom schedules, modes, or settings enable personalisation, enhancing the perceived value of the system.
Example: Smart lighting systems allow users to adjust brightness and colour based on mood or activity.

4. Accessibility: Designs must accommodate diverse user abilities. Accessibility features, such as voice commands or compatibility with assistive technologies, ensure inclusivity.

Security by Design

Security is a non-negotiable aspect of IoT systems, as they often handle sensitive data and are susceptible to cyber threats. Security measures should be integrated into the design phase to ensure:

  • End-to-End Encryption: All data transmissions between devices and servers should be encrypted to protect against interception and unauthorised access.
  • Authentication and Authorisation: Strong user authentication (e.g., multi-factor authentication) ensures only authorised access to devices and data.
  • Secure Firmware Updates: IoT devices should support verified and secure updates to patch vulnerabilities and enhance functionality without risking security breaches.
  • Threat Modeling: Conducting threat assessments during the design process helps proactively identify and mitigate potential vulnerabilities.
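As an illustration of the secure-update idea above, the following Python sketch verifies a firmware image against a tag distributed alongside it before accepting the update. Real products normally use asymmetric signatures (e.g., Ed25519) so the device stores no signing secret; a keyed hash (HMAC) is used here only to keep the example dependency-free, and the key and image bytes are hypothetical.

```python
import hashlib
import hmac

# Assumption: a shared key provisioned at manufacture. In practice a public
# signing key would be embedded instead, so the device holds no secret
# capable of producing valid tags.
UPDATE_KEY = b"shared-provisioning-key"

def sign_firmware(image: bytes) -> str:
    """Vendor side: produce a tag distributed alongside the image."""
    return hmac.new(UPDATE_KEY, image, hashlib.sha256).hexdigest()

def verify_firmware(image: bytes, tag: str) -> bool:
    """Device side: accept the update only if the tag matches."""
    expected = hmac.new(UPDATE_KEY, image, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

firmware = b"\x7fELF...v2.1.0"          # stand-in for a real image
tag = sign_firmware(firmware)
print(verify_firmware(firmware, tag))          # True: untouched image
print(verify_firmware(firmware + b"X", tag))   # False: tampered image is rejected
```

The constant-time comparison matters: naive string comparison can leak how many leading bytes of a forged tag are correct.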

Efficient Data Management and Privacy

IoT systems generate immense volumes of data, making efficient management and strict privacy protection paramount.

1. Data Minimisation: Collect only the data necessary for functionality, reducing privacy risks and simplifying data storage and processing.

2. Data Anonymisation: Implement anonymisation techniques to protect user identities while enabling data analysis. Example: Anonymising health data from wearables to comply with regulations like GDPR.

3. Secure Storage: Encryption and access controls should be used to protect stored data on devices, local servers, or in the cloud.

4. Transparency: Clearly communicate to users how their data will be collected, used, and shared. Transparency fosters trust and compliance with legal standards.
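A minimal sketch of the anonymisation point: replacing user identifiers with salted hashes yields stable pseudonyms, so records can still be linked for analysis without exposing identities. The salt value below is hypothetical; note that, strictly speaking, this is pseudonymisation under GDPR, since re-identification remains possible for whoever holds the salt.

```python
import hashlib

# Assumption: a deployment-specific secret salt, kept separate from the data.
SALT = b"deployment-specific-secret-salt"

def pseudonymise(user_id: str) -> str:
    """Map a user identifier to a stable, non-reversible pseudonym."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

# The same user always maps to the same pseudonym, so longitudinal
# analysis of the wearable data still works.
record = {"user": pseudonymise("alice@example.com"), "heart_rate": 72}
print(record["user"] == pseudonymise("alice@example.com"))  # True
```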

Green and Sustainable Design

With growing environmental concerns, sustainability is a critical consideration in IoT system design:

1. Energy Efficiency: Optimise devices to consume minimal energy, extending battery life and reducing electricity usage. Employ low-power communication protocols like Zigbee or LoRaWAN.

2. Sustainable Materials: Use recyclable, biodegradable, or eco-friendly materials to reduce the environmental footprint.

3. Lifecycle Management: Design systems with end-of-life considerations, including recycling or safe disposal of components.

4. Adaptive Energy Use: Employ strategies like sleep modes for devices to conserve energy when idle.

Cost-Effectiveness

IoT solutions should balance affordability with quality to promote widespread adoption.

1. Affordable Components: Use reliable, cost-efficient hardware to reduce production costs without sacrificing performance.

2. Optimised Manufacturing: Streamline manufacturing processes through modular designs or economies of scale.

3. Low Maintenance Costs: Design self-maintaining systems or those requiring minimal intervention to reduce long-term costs.

Scalability and Flexibility

IoT systems must accommodate future growth and evolving user needs.

1. Modular Architecture: Design systems with modular components that can be upgraded or expanded without overhauling the entire solution.

2. Interoperable Standards: Use open standards and protocols to ensure compatibility with devices from different manufacturers.

3. Dynamic Resource Management: Implement mechanisms to allocate resources dynamically based on demand, ensuring optimal performance as the system grows.

Reliable Connectivity

Seamless connectivity is fundamental for IoT systems to operate effectively.

1. Network Resilience: Incorporate failover mechanisms to maintain operations during network disruptions.

2. Low-Latency Communication: Real-time data transfer is critical for applications like autonomous vehicles—technologies like 5G and Wi-Fi 6 address these needs.

3. Edge Computing Integration: Process data locally to reduce reliance on central servers, improving reliability and responsiveness.

4. Protocol Optimisation: Use IoT-specific protocols like MQTT and CoAP, tailored for low-power and constrained environments.
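MQTT itself requires a broker, so as a dependency-free illustration the sketch below implements the protocol's topic-filter matching rules ('+' matches exactly one level, '#' matches all remaining levels), which is how a broker decides which subscribers receive a published message. It simplifies the full specification (for instance, it does not validate that '#' appears only as the last level).

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Simplified MQTT topic-filter matching ('+' and '#' wildcards)."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                      # multi-level wildcard: rest matches
            return True
        if i >= len(t_parts):             # filter is longer than the topic
            return False
        if f != "+" and f != t_parts[i]:  # '+' matches exactly one level
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("home/+/temperature", "home/kitchen/temperature"))  # True
print(topic_matches("home/#", "home/kitchen/humidity"))                 # True
print(topic_matches("home/+/temperature", "home/kitchen/humidity"))     # False
```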

Energy Efficiency

Energy efficiency enhances device longevity and reduces operational costs.

1. Low-Power Hardware: Select components optimised for minimal energy consumption, such as microcontrollers with sleep modes.

2. Adaptive Power Management: Adjust energy usage based on real-time activity levels.

3. Energy Harvesting: Incorporate technologies that harness energy from ambient sources, such as solar or kinetic energy, to extend device life.

Interoperability

Interoperability ensures seamless communication and collaboration across diverse devices and platforms.

1. Standardised Protocols: Enable communication across systems using common protocols like MQTT, HTTP/HTTPS, and CoAP.

2. Open APIs and SDKs: Facilitate integration by providing developers with tools for building complementary services.

3. Middleware Solutions: Employ middleware to aggregate and harmonise data from different devices, ensuring compatibility and ease of management.

IoT design goals are the foundation for developing resilient, efficient, and user-centred solutions. IoT systems can address current challenges by prioritising security, scalability, sustainability, and interoperability while remaining adaptable to future advancements. This comprehensive approach ensures that IoT solutions meet user expectations and align with broader societal and environmental objectives.

IoT System Design Challenges

The Internet of Things (IoT) transforms industries, lifestyles, and economies by enabling interconnected devices to collect, share, and act on data. However, its rapid expansion is accompanied by significant technical, economic, and societal challenges. Below, we delve deeper into these issues, exploring their nuances and potential mitigation strategies (figure 7).

IoT System Design Challenges
Figure 7: IoT System Design Challenges

Device Hardware Limitations

IoT devices often rely on compact, energy-constrained hardware, powered by energy storage such as batteries or capacitors. These energy storage systems have limited capacities, and once depleted, the devices shut down unless recharged or replaced. Managing the energy needs of hundreds or thousands of such devices in an IoT ecosystem becomes a significant logistical and financial burden.

Design Constraints and Strategies

1. Minimising Energy Consumption:
IoT device design prioritises energy efficiency to prolong operational lifetimes and reduce maintenance costs. Common strategies include:

  • Low-power computing devices: Utilising microcontrollers with optimised performance-to-power ratios.
  • Low-power communication protocols: Leveraging protocols such as ZigBee, LoRaWAN, or BLE for energy-efficient data transfer.
  • Energy-efficient security mechanisms: Implementing lightweight cryptographic techniques to balance security needs with energy limitations.

2. Energy Management:
Mechanisms such as sleep modes or duty cycling are integrated to deactivate idle components, thereby conserving energy. However, this often compromises quality of service (QoS). Striking a balance between energy savings and performance remains a design challenge.

3. Energy Harvesting:
Incorporating energy harvesting systems (e.g., solar, thermal, or kinetic energy) can supplement energy needs, reducing reliance on batteries. Yet, these systems face limitations, including intermittent energy availability and integration challenges due to size and weight constraints.
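The trade-off behind duty cycling can be made concrete with a back-of-the-envelope calculation. The figures below are illustrative assumptions only: a node drawing 20 mA when active and 5 µA asleep, waking for 2 s every 10 minutes, powered by a 2000 mAh battery.

```python
# Illustrative duty-cycling estimate; all figures are assumed, not measured.
ACTIVE_MA, SLEEP_MA = 20.0, 0.005   # current draw awake vs asleep (mA)
ACTIVE_S, PERIOD_S = 2.0, 600.0     # 2 s awake every 10 minutes
BATTERY_MAH = 2000.0

duty = ACTIVE_S / PERIOD_S                        # fraction of time awake
avg_ma = ACTIVE_MA * duty + SLEEP_MA * (1 - duty)
life_hours = BATTERY_MAH / avg_ma

# prints: average draw 0.072 mA, lifetime ~ 3.2 years
print(f"average draw {avg_ma:.3f} mA, lifetime ~ {life_hours / 24 / 365:.1f} years")
```

Without duty cycling the same node would draw 20 mA continuously and drain the battery in roughly four days, which is why sleep scheduling dominates low-power IoT design despite its QoS cost.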

Connectivity Issues

Data is the backbone of IoT systems, making robust connectivity essential. IoT devices primarily rely on wireless networks to communicate, which introduces complexities in ensuring reliability, speed, and cost-efficiency.

Challenges in Connectivity

1. Network Performance Trade-offs:
Energy-efficient protocols (e.g., BLE, Zigbee, LoRaWAN, and Sigfox) often compromise throughput, latency, and reliability, leading to packet delays, losses, or collisions. Balancing energy efficiency and network performance is a core challenge.

2. Scalability in Dense Deployments:
In urban areas, where wireless networks overlap, interference and bandwidth limitations degrade performance. This is especially critical for real-time IoT applications like healthcare monitoring or autonomous systems.

3. Cost of Connectivity:
Small and medium-sized businesses often struggle with the high costs of maintaining IoT networks. Reducing operational expenses without compromising connectivity quality is a priority.

Solutions to Connectivity Challenges

  • Adoption of advanced networking technologies such as 5G and edge computing to enhance speed and reduce latency.
  • Employing hybrid connectivity solutions that combine wireless and wired networks for reliability.
  • Optimising network design to ensure cost-effective, scalable, and robust connectivity.

Energy and Sustainability Issues

With billions of IoT devices deployed globally, the systems' energy demands and environmental impact have become significant concerns.

Energy and Environmental Challenges

1. Massive Energy Demand:
IoT devices, networks, and data centres collectively require substantial energy, increasing their carbon footprint.

2. Sustainability Concerns:

  • IoT devices' production, operation, and disposal contribute to electronic waste.
  • Data transmission and processing in cloud systems further exacerbate energy consumption.

Mitigation Strategies

  • Energy-Efficient Design: Prioritising low-power technologies and algorithms.
  • Energy Harvesting Integration: Leveraging renewable energy sources to power devices.
  • Circular Economy Practices: Promoting reuse, recycling, and environmentally friendly manufacturing processes.

Interoperability and Scalability Issues

The diversity of hardware, software, and communication protocols in IoT ecosystems creates significant interoperability challenges, especially when integrating devices from multiple vendors.

Challenges

  • Lack of standardised protocols leads to fragmented ecosystems, making device integration complex and costly.
  • Scalability issues arise when expanding networks, particularly when handling increased data traffic and device management.

Solutions

  • Adoption of open standards such as 6LoWPAN and MQTT to ensure compatibility.
  • Utilising middleware solutions to facilitate communication between heterogeneous devices.
  • Implementing modular designs that simplify network expansion.

Regulation, Standardisation, and Governance

The absence of universal IoT standards impedes collaboration and innovation while increasing security vulnerabilities.

Regulatory Challenges

  • Ensuring data privacy, security, and ethical use of IoT systems.
  • Developing governance frameworks that accommodate diverse stakeholders, including manufacturers, service providers, and users.

Steps Forward

  • Collaborative efforts by organisations like IETF and ISO to develop global standards.
  • National and international regulations to enforce compliance, protect consumer rights, and foster interoperability.

IoT Security Issues

IoT systems are prone to cyber threats due to their distributed nature and resource-constrained devices.

Security Concerns

  • Inadequate security mechanisms in low-cost devices expose them to attacks like data breaches, botnets, and device hijacking.
  • The interconnected nature of IoT systems amplifies risks, as a single compromised device can jeopardise the entire network.
  • Integrating strong security mechanisms in IoT is challenging due to hardware constraints.
  • Some manufacturers ship devices without adequate security mechanisms, leaving them vulnerable to cyberattacks.

Mitigation Strategies

  • Implementing strong encryption and authentication protocols.
  • Regular firmware updates and vulnerability assessments.
  • Educating stakeholders about secure practices.

Data Ownership and Management Issues

The debate over data ownership is complex, involving technical, legal, and ethical dimensions.

Key Challenges

  • Defining ownership among stakeholders (e.g., users, providers, and third parties).
  • Ensuring data privacy, integrity, and availability across its lifecycle.

Proposed Solutions

  • Developing clear data governance frameworks to outline policies and responsibilities.
  • Leveraging blockchain technology for transparent and secure data management.

Cost Issues

High design, deployment, and maintenance costs can discourage IoT adoption, particularly among smaller organisations.

Balancing Cost and Quality

  • Cheaper devices often compromise quality and security, increasing long-term expenses.
  • Strategies to lower costs without sacrificing essential features include economies of scale, open-source solutions, and government subsidies.

User Acceptance and Adoption

The success of IoT systems depends on their perceived value and ease of use.

Challenges in Adoption

  • Stakeholders may resist due to cost, complexity, and privacy concerns.
  • Lack of education and awareness about IoT benefits.

Solutions

  • Conducting user training and providing transparent information.
  • Highlighting ROI and long-term benefits to stakeholders.

The potential of IoT to revolutionise industries and improve quality of life is immense. However, its growth depends on addressing hardware, connectivity, security, sustainability, and adoption challenges. By focusing on innovative solutions, robust governance, and stakeholder collaboration, the IoT ecosystem can overcome these hurdles and achieve its transformative potential.

IoT System Design Methodologies

The need for system-based IoT design methods

The Internet of Things (IoT) is still in its formative phase, presenting a critical window of opportunity to design and implement IoT systems that are scalable, cost-effective, energy-efficient, and secure. These systems must be developed to deliver acceptable Quality of Service (QoS) while meeting essential requirements such as interoperability and seamless integration across different devices and platforms.

Achieving these ambitious design objectives requires a comprehensive, system-based approach that considers the diverse priorities of various stakeholders, including network operators, service providers, regulatory bodies, and end users. Each group brings its requirements and constraints, and balancing these is essential to ensure the system's overall success.

To support this, there is a significant need for the development of robust formal methods, advanced tools, and systematic methodologies aimed at designing, operating, and maintaining IoT systems, networks, and applications. Such tools and methods should be capable of guiding the process to align with stakeholder goals while minimising potential unintended consequences. This approach will help create resilient and adaptive IoT ecosystems that meet current demands and are prepared for future technological advancements and challenges.

System thinking, design thinking, and systems engineering methodologies provide powerful frameworks for developing formal tools for designing and deploying complex IoT systems. These interdisciplinary approaches enable a comprehensive understanding of how interconnected components interact within a larger ecosystem, allowing for the creation of more resilient, efficient, and effective IoT solutions.

A practical example of leveraging these methodologies can be found in the work referenced in [4], where system dynamics tools were applied to design IoT systems for smart agriculture. Researchers constructed causal loop diagrams in this study to map and analyse the intricate interplay between multiple factors impacting rice farming productivity. By visually representing the causal relationships within the agricultural system, they identified key drivers and dependencies that influence outcomes. This insight allowed them to propose an IoT-based smart farming solution to optimise productivity through data-driven decision-making informed by these interdependencies.

The value of system dynamics and systems engineering tools extends beyond smart agriculture. These methods can simplify the design and analysis of complex IoT systems, networks, and applications across various sectors. They offer a structured way to break down the complexity of interconnected systems, ensuring that the resulting IoT solutions are not only cost-effective and reliable but also secure and energy-efficient. This approach ensures that the needs of diverse stakeholders, including developers, network operators, regulatory bodies, and end-users, are met effectively.

Moreover, system dynamics tools have proven beneficial in educational contexts, particularly for teaching IoT courses. Educators can help students grasp the complexity of IoT systems and concepts more intuitively by adopting a system-centric approach. This holistic teaching method supports learners in understanding how various components and processes interact within an IoT ecosystem, thereby fostering a deeper comprehension of the subject matter and preparing them for real-world IoT challenges, as demonstrated in the findings of [5].

While numerous IoT-based systems are being individually developed and tested by practitioners and researchers, these efforts often fall short of addressing the practical reality that IoT systems must ultimately interact with each other and human users. This interconnectedness underscores the need for a holistic, system-centric design methodology to manage IoT systems' complexity and interdependencies effectively. The design of these systems should move beyond isolated functionalities to consider the broader ecosystem in which they operate, including human interaction, cross-system communication, and scalability.

Several studies have ventured into leveraging methods and tools to design IoT systems—for example, research referenced in [6] utilised causal loop diagrams to study the intricate interactions between systems and stakeholders, identifying key feedback loops influencing productivity. This approach provided actionable insights and recommendations on improving efficiency and performance within specific applications, such as smart agriculture. Using causal loop diagrams in such studies highlights the importance of visualising and understanding complex IoT ecosystems' relationships and feedback mechanisms.

However, it is crucial to incorporate both qualitative and quantitative system dynamics tools to advance IoT systems' design and operational robustness. While causal loop diagrams are practical for modelling qualitative interactions and identifying feedback structures, quantitative methods are needed to simulate and analyse the dynamic behaviour of IoT systems under various conditions. Integrating both approaches makes it possible to model the structure and the real-time, data-driven interactions among different IoT components.

This highlights the urgent need to develop a comprehensive, multi-faceted framework that blends system thinking, design thinking, and systems engineering tools. Such an integrated approach would support the end-to-end design, operation, and maintenance of IoT systems, networks, and applications. The goal would be to create systems that align with the objectives of various stakeholders, including developers, service providers, network operators, regulators, and end-users while minimising unintended consequences such as system inefficiencies, vulnerabilities, or user dissatisfaction.

System thinking enables a broad, interconnected view that helps identify and understand the relationships and dependencies across components. Design thinking ensures that solutions are user-centric, addressing real needs through iterative prototyping and feedback. Systems engineering brings discipline and structure, employing established methodologies and tools to optimise system performance and reliability.

IoT systems can be designed to be technically proficient, adaptable, scalable, and aligned with stakeholder needs by developing a framework that synergises these approaches. This will foster sustainable, resilient IoT ecosystems capable of evolving alongside technological advancements and societal demands, paving the way for a future where IoT seamlessly integrates into everyday life, supporting everything from smart cities to connected healthcare with minimal risk and maximal benefit.

Integrating systems thinking, design thinking, and engineering methodologies into developing IoT systems can significantly enhance their design and implementation. These approaches facilitate the creation of robust, scalable, and efficient IoT solutions tailored to modern applications' complex requirements while addressing the stakeholders' needs.

Linear Thinking in IoT Design Methodologies

Linear thinking is crucial in designing and implementing IoT systems, offering a structured, step-by-step approach to problem-solving and development. In IoT, where multiple components must work seamlessly together, a logical and sequential methodology helps ensure clarity, efficiency, and precision.

Characteristics of Linear Thinking in IoT Design

  1. Sequential Development Process: IoT systems are designed through a series of well-defined stages, such as requirement analysis, device selection, network design, and application integration.
  2. Cause-and-Effect Focus: Every design decision in an IoT system impacts subsequent steps, such as how sensor data influences processing or how network protocols affect data flow.
  3. Rule-Based Implementation: Adherence to industry standards and best practices, like using MQTT for messaging or ensuring data security through encryption protocols, is central to linear IoT design.
  4. Predictability: A linear approach ensures predictable outcomes, such as reliable communication between IoT devices and backend systems.

Applications of Linear Thinking in IoT Design Methodologies

Linear thinking in IoT is applied throughout the design lifecycle, helping teams address specific challenges methodically and systematically.

Structured System Development

In IoT design, linear thinking enables the structured development of systems by organising tasks into sequential phases (figure 8):

Linear Thinking in IoT Design Methodologies - Structured System Development Flow
Figure 8: Linear Thinking in IoT Design Methodologies - Structured System Development Flow
  1. Defining Objectives: Identify the purpose of the IoT solution, such as monitoring energy usage or automating logistics.
  2. Selecting Hardware: Choose sensors, actuators, and devices that align with the objectives.
  3. Designing Network Architecture: Establish connectivity protocols and infrastructure for seamless data transfer.
  4. Developing Applications: Implement data analysis, visualisation, and device control software.
  5. Testing and Deployment: Validate system functionality before deployment and monitor post-deployment performance.
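The strictly sequential character of these phases can be sketched as a gated pipeline, where a later phase never starts before all earlier phases have completed. The phase functions and names below are illustrative placeholders, not part of any standard API:

```python
# A minimal sketch of a linear (sequential) IoT design pipeline.
# Each phase is a function returning True on success; a later phase
# never starts before all earlier phases have completed.

def define_objectives():
    # e.g., "monitor energy usage in building A"
    return True

def select_hardware():
    # e.g., choose temperature sensors and a gateway
    return True

def design_network():
    # e.g., pick MQTT over Wi-Fi, plan gateway placement
    return True

def develop_applications():
    # e.g., dashboards and device-control software
    return True

def test_and_deploy():
    # e.g., validate end-to-end data flow before rollout
    return True

PHASES = [
    ("Defining Objectives", define_objectives),
    ("Selecting Hardware", select_hardware),
    ("Designing Network Architecture", design_network),
    ("Developing Applications", develop_applications),
    ("Testing and Deployment", test_and_deploy),
]

def run_pipeline(phases):
    """Run phases strictly in order; stop at the first failure."""
    completed = []
    for name, phase in phases:
        if not phase():
            return completed, name  # the phase that blocked progress
        completed.append(name)
    return completed, None

completed, blocked = run_pipeline(PHASES)
```

The key design property of linear thinking is visible in `run_pipeline`: a failed phase blocks everything downstream, which makes progress predictable but also inflexible, as discussed later in this section.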

Troubleshooting and Optimisation

Linear methodologies simplify troubleshooting in IoT systems. For example, diagnosing connectivity issues can follow a logical sequence (figure 9):

Linear Thinking in IoT Design Methodologies - Troubleshooting and Optimisation Flow
Figure 9: Linear Thinking in IoT Design Methodologies - Troubleshooting and Optimisation Flow
  1. Check the device functionality.
  2. Verify network configurations.
  3. Analyse communication protocols.
  4. Inspect backend systems and applications.

Integration of IoT Systems

Linear thinking aids in integrating IoT systems with other technologies. For example, a smart home IoT solution might involve sequential integration of sensors, cloud platforms, and mobile applications to ensure a seamless user experience.

Benefits of Linear Thinking in IoT Design

  1. Clarity and Simplicity: Linear thinking provides a clear framework for IoT design, breaking down complex projects into manageable tasks. This clarity is essential when dealing with multidisciplinary teams working on diverse system components.
  2. Efficiency in Development: By following a sequential methodology, teams can avoid redundancies and focus on delivering functional IoT systems on schedule. For instance, designing the network architecture before implementing security protocols ensures that resources are allocated effectively.
  3. Dependability and Predictability: IoT systems designed using linear thinking are reliable and predictable. Following a clear progression from hardware setup to application development ensures that all components work harmoniously.

Limitations of Linear Thinking in IoT Design

Despite its advantages, linear thinking may not address all aspects of IoT design effectively:

  1. Complexity Management: IoT systems often involve interconnected components where feedback loops and dynamic interactions make linear methodologies insufficient.
  2. Inflexibility: Linear thinking may struggle to adapt to evolving requirements or unforeseen changes during development.
  3. Limited Innovation: Focusing solely on predefined steps can hinder creative problem-solving, which is often needed in IoT for innovative use cases.

Complementing Linear Thinking with Non-Linear Approaches

To address these challenges, linear thinking in IoT design can be combined with non-linear approaches like:

  1. Systems Thinking: To understand the interdependencies between IoT components.
  2. Agile Methodologies: To iterate rapidly and adapt to changes.
  3. Design Thinking: To foster user-centric innovations.

Linear thinking provides a strong foundation for IoT design methodologies by ensuring clarity, efficiency, and dependability. It is particularly effective in addressing well-defined problems and structured tasks. However, it should be complemented with flexible, iterative approaches to meet IoT systems' complexity and dynamic nature. This balanced methodology enables organisations to design IoT solutions that are reliable, functional, innovative, and adaptable to future needs.

Design Thinking in IoT Design Methodologies

Design Thinking, a human-centred and innovative methodology, plays a transformative role in developing Internet of Things (IoT) solutions. By focusing on empathy, creativity, and collaboration, Design Thinking allows designers to craft IoT systems that deeply resonate with users, address real-world challenges, and deliver tangible value. This iterative and non-linear approach ensures that solutions remain user-focused while adapting to evolving needs and complexities. Below, we explore the application of Design Thinking to IoT design, breaking down its phases and highlighting its importance. The process is presented in a diagram (figure 10), and each step is described below.

Phases of Design Thinking in IoT Design
Figure 10: Phases of Design Thinking in IoT Design

Phases of Design Thinking in IoT Design

Empathise: Understanding Users in IoT Contexts

The foundation of Design Thinking lies in understanding the users: those who will interact with and benefit from IoT solutions. This phase involves:

  1. Observing User Behavior: Studying how users engage with their environment, existing devices, and technologies.
  2. Conducting Interviews and Surveys: Gathering qualitative insights to uncover user needs, motivations, and pain points.
  3. Analysing Context-Specific Challenges: For IoT, this could mean understanding how users interact with connected devices in smart homes, healthcare, or industrial settings.
  4. Building Empathy Maps: Visual tools to document user behaviours, emotions, and thought processes.

Example: In designing a smart thermostat, empathising involves understanding how users perceive temperature comfort, their schedules, and preferences for energy savings.

Define: Framing IoT Challenges with User-Centricity

With insights from the empathise phase, designers synthesise the data to articulate the problem clearly. This phase involves:

  1. Creating User Personas: Defining archetypes of users to focus on their specific needs.
  2. Drafting Problem Statements: These statements reflect the user's perspective, such as: “How might we design an IoT device that ensures seamless and secure remote control for elderly users unfamiliar with technology?”
  3. Scoping the IoT Problem: Aligning user needs with technical and business constraints to frame achievable goals.

Example: Defining the problem for a wearable health tracker could focus on addressing user concerns about data privacy and ease of use.

Ideate: Generating Creative IoT Solutions

The ideation phase encourages brainstorming innovative solutions for the defined problem. Activities include:

  1. Brainstorming Sessions: Generating a wide range of ideas without judgment.
  2. Mind Mapping: Connecting concepts like device features, usability, and scalability.
  3. Scenario Planning: Envisioning how IoT devices will function in different user contexts.
  4. Leveraging Multidisciplinary Teams: Collaboration between designers, engineers, and data scientists fosters diverse perspectives.

Example: For a smart irrigation system, ideation might explore options like soil-moisture sensors, weather-based predictions, and AI-powered water usage optimisation.

Prototype: Building Tangible IoT Concepts

In this phase, designers create prototypes to bring ideas to life. For IoT, this could involve:

  1. Developing Low-Fidelity Prototypes: Sketches, mock-ups, or digital wireframes to demonstrate the user interface or functionality.
  2. Building Hardware Models: Using components like Arduino or Raspberry Pi to test device interactions and connectivity.
  3. Simulating IoT Scenarios: Creating controlled environments to test data flow and device responses.

Example: A smart refrigerator prototype might include a basic app interface to demonstrate how users can view inventory and set the temperature remotely.

Test: Validating IoT Prototypes with Users

The testing phase ensures IoT solutions align with user expectations and functional requirements. This involves:

  1. User Feedback: Observing how real users interact with the prototype and collecting qualitative and quantitative feedback.
  2. Iterative Refinement: Using feedback to refine design elements, such as device form factors, UI/UX, or data processing logic.
  3. Performance Testing: Evaluating factors like connectivity, latency, and reliability in real-world conditions.

Example: Testing a smart door lock might involve scenarios where users remotely unlock doors via a mobile app, identifying issues like connectivity lag or interface confusion.

Iterative Nature of Design Thinking in IoT

Design Thinking is inherently iterative, requiring designers to revisit previous phases as new insights emerge. This flexibility is crucial for IoT systems, where user needs, technological advancements, and environmental factors can evolve rapidly.

Example Iterations

  1. Returning to Ideation: Incorporating user feedback to explore alternative solutions.
  2. Refining Prototypes: Addressing hardware compatibility or improving battery life based on test results.

Benefits of Design Thinking in IoT Design

  1. User-Centric Solutions: Ensures IoT systems are intuitive, accessible, and aligned with real user needs.
  2. Enhanced Innovation: Encourages creative problem-solving to develop unique, competitive IoT solutions.
  3. Flexibility: Adapts to changing requirements, making it ideal for dynamic IoT environments.
  4. Improved Adoption Rates: User-focused designs are more likely to gain acceptance and trust.
  5. Cross-Functional Collaboration: Facilitates teamwork across disciplines, leveraging diverse expertise.

Challenges of Applying Design Thinking to IoT

  1. Complexity in Empathy: Understanding user interactions with IoT systems often involves multiple stakeholders and diverse use cases.
  2. Technical Constraints: Translating user needs into viable IoT solutions requires balancing feasibility, cost, and scalability.
  3. Data Privacy and Security: Designing user-centric IoT solutions must address data protection and compliance concerns.

Design Thinking is an invaluable methodology for IoT design. It enables teams to create solutions that prioritise users while addressing technical and business challenges. Its iterative and collaborative nature ensures that IoT systems remain adaptable, innovative, and effective. By integrating empathy, creativity, and feedback into the design process, Design Thinking helps organisations deliver IoT solutions that resonate deeply with users and stand out in a competitive landscape.

Systems Thinking in IoT Design Methodologies

Systems Thinking is a holistic approach to analysing and solving complex problems by understanding a system's relationships, interactions, and interdependencies. In the context of Internet of Things (IoT) design, Systems Thinking becomes crucial because IoT systems are inherently complex, comprising interconnected devices, networks, data flows, and user interactions. By adopting Systems Thinking, IoT designers can address the challenges of scalability, interoperability, and sustainability while ensuring that solutions align with user needs and broader organisational goals.

What is Systems Thinking?

Systems Thinking views an IoT system as an integrated whole rather than isolated components. It emphasises:

  1. Interconnections: Understanding how different devices, networks, and software interact.
  2. Feedback Loops: Identifying how system outputs affect inputs, creating dynamic behaviours.
  3. Emergent Properties: Recognising that the whole system often exhibits behaviours and capabilities that individual components cannot achieve alone.
  4. Context Awareness: Considering the system's environment, including social, economic, and technological factors.

For IoT, Systems Thinking ensures that solutions are robust, scalable, and adaptable to changing environments.

Key Principles of Systems Thinking in IoT Design

Fundamental principles of systems thinking in IoT design are presented in figure 11 and discussed below:

Key Principles of Systems Thinking in IoT Design
Figure 11: Key Principles of Systems Thinking in IoT Design

Holistic Perspective

  • Focus on the entire IoT ecosystem, including hardware, software, networks, users, and external systems.
  • Example: In smart city solutions, consider how traffic sensors interact with public transportation systems, environmental data, and citizen behaviour.

Understanding Interdependencies

  • Map the relationships between IoT devices, cloud services, and edge computing systems.
  • Example: A smart home ecosystem includes interdependencies between thermostats, lighting systems, and security cameras. A failure in one device might cascade across the system.

Feedback Loops and Adaptability

  • Incorporate mechanisms to gather feedback from users and devices to adapt the system dynamically.
  • Example: A smart irrigation system uses feedback from soil moisture sensors to optimise water usage based on weather patterns.

Focus on Context and Environment

  • Analyse how external factors, such as regulatory changes, technological advancements, and user behaviour, impact the IoT system.
  • Example: An industrial IoT system must account for varying factory conditions, such as temperature, humidity, and power fluctuations.

Emergent Behaviour Analysis

  • Anticipate how new patterns and behaviours might emerge when components interact.
  • Example: In connected healthcare, data from wearable devices might reveal trends in patient health that were not visible through isolated monitoring.

Steps to Apply Systems Thinking in IoT Design Methodologies

Figure 12 presents a workflow for the systems thinking approach for IoT design methodologies. Details are discussed below.

Systems Thinking in IoT Design Methodologies
Figure 12: Systems Thinking in IoT Design Methodologies

Define the System's Purpose and Boundaries

  • Clearly articulate the IoT system's goals and scope.
  • Identify system boundaries to determine what lies within the system (devices, users, data flows) and outside (external regulations, competing systems).

Example: For a smart factory, the purpose might be to optimise production efficiency, and the boundaries might include connected machinery, inventory systems, and supply chain interactions.

Identify Components and Stakeholders

  • Catalog the IoT system's physical and digital components (e.g., sensors, actuators, cloud platforms, edge devices).
  • Identify all stakeholders, including users, developers, IT administrators, and external partners.

Example: In an IoT-based energy management system, stakeholders might include utility companies, building managers, and end-users monitoring their energy consumption.

Map Interconnections and Data Flows

  • Use tools such as system diagrams, flowcharts, or digital twins to visualise how components interact.
  • Analyse the data flow between devices, gateways, cloud systems, and end-users.

Example: A connected vehicle system requires mapping interactions between GPS devices, onboard diagnostics, traffic data servers, and driver interfaces.

Analyse Feedback Loops

  • Identify positive and negative feedback loops to understand system dynamics.
  • Design for self-correcting mechanisms that prevent system instability.

Example: In a smart thermostat, a feedback loop might ensure that when the temperature exceeds a set point, cooling systems are activated, and adjustments are logged for future optimisation.
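A feedback loop of this kind can be sketched as a simple closed-loop controller with hysteresis. The set point, deadband, and temperature dynamics below are illustrative values, not taken from any real product:

```python
# Minimal sketch of a thermostat feedback loop with hysteresis.
# When temperature rises above set_point + band, cooling switches on;
# when it falls below set_point - band, cooling switches off.
# Each state change is logged, mirroring "adjustments are logged
# for future optimisation" in the example above.

def thermostat_step(temperature, cooling_on, set_point=22.0, band=0.5, log=None):
    """One control step; returns the new cooling state."""
    if temperature > set_point + band and not cooling_on:
        cooling_on = True
        if log is not None:
            log.append((temperature, "cooling ON"))
    elif temperature < set_point - band and cooling_on:
        cooling_on = False
        if log is not None:
            log.append((temperature, "cooling OFF"))
    return cooling_on

# Simulate: cooling lowers the temperature; otherwise the room warms up.
log = []
temperature, cooling = 21.0, False
for _ in range(20):
    cooling = thermostat_step(temperature, cooling, log=log)
    temperature += -0.8 if cooling else +0.4
```

The hysteresis band is what makes this a stable negative feedback loop: without it, the system would switch the cooling on and off at every step around the set point, an instability that the later discussion of feedback loops and self-correcting mechanisms is meant to prevent.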

Consider Scalability and Interoperability

  • Design systems that can scale to accommodate more devices or users without performance degradation.
  • Ensure interoperability with existing standards and technologies to avoid vendor lock-in.

Example: A smart city IoT platform must handle a growing number of sensors, from traffic cameras to air quality monitors, while integrating with diverse protocols like MQTT and CoAP.

Address Security and Privacy Holistically

  • Treat security and privacy as systemic properties rather than add-ons.
  • Evaluate vulnerabilities across the IoT ecosystem, including devices, networks, and cloud platforms.

Example: In healthcare IoT, secure patient data transmission requires end-to-end encryption, secure APIs, and robust access control mechanisms.

Monitor and Iterate

  • Continuously monitor system performance and user feedback to identify areas for improvement.
  • Use iterative design to adapt to changing needs and technologies.

Example: A smart logistics platform might adjust its route optimisation algorithms based on real-time traffic patterns and delivery delays.

Benefits of Systems Thinking in IoT Design

  1. Enhanced Resilience: By understanding interdependencies, designers can create systems that withstand failures and adapt to changing conditions.
  2. Scalability: Systems Thinking helps design IoT architectures that can grow seamlessly with increased demand.
  3. Improved Efficiency: Holistic optimisation ensures that resources like bandwidth, power, and computational capacity are used effectively.
  4. Innovation: By analysing emergent behaviours, Systems Thinking can uncover novel opportunities for functionality and value.
  5. Sustainability: Considering environmental and social impacts ensures IoT solutions align with broader sustainability goals.

Challenges of Systems Thinking in IoT Design

  1. Complexity Management: Mapping all interactions and interdependencies can be time-consuming and resource-intensive.
  2. Balancing Focus: Maintaining a high-level perspective while addressing detailed technical issues can be challenging.
  3. Dynamic Environments: IoT systems often operate in rapidly changing contexts, requiring frequent reassessment and adaptation.
  4. Stakeholder Alignment: It can be challenging to ensure that all stakeholders understand and agree on the system's purpose and design.

Systems Thinking is an indispensable methodology for IoT design, offering a comprehensive framework to tackle the inherent complexity of interconnected systems. Systems Thinking enables designers to create robust, scalable, and user-focused IoT solutions by focusing on interdependencies, feedback loops, and the broader context. Its emphasis on holistic analysis and adaptability ensures IoT systems meet current needs and evolve gracefully with emerging challenges and opportunities.

System Dynamics Modelling for IoT Systems

System dynamics is a practical application of Systems Thinking, originally developed at MIT in the 1950s. It provides a framework for understanding and modelling the complex behaviour of systems by emphasising the interconnections, feedback loops, and time delays inherent in such systems. Practitioners and researchers in system dynamics employ various modelling and simulation tools to explore the implications of hypothesised causal relationships and understand system dynamics over time. A sample closed-loop system dynamics modelling methodology is presented in figure 13.

Closed-loop System Dynamics Modelling Methodology
Figure 13: Closed-loop System Dynamics Modelling Methodology

A closed-loop systems thinking methodology can be applied to overcome the limitations of open-loop or linear thinking approaches. Linear thinking typically involves problem identification, information gathering, evaluating alternative solutions, selecting the best option, and implementing the policy. However, this approach often generates unintended consequences because it operates in silos, addressing isolated issues without considering the broader goals or interactions within the system.

IoT systems are often designed to interact with other information systems, cyber-physical systems in industries, critical infrastructures (energy, water distribution, heating, health care, and transportation systems), and people (management systems). The interaction between IoT systems and other existing systems may create unintended consequences that must be considered at the design stage. There are also interactions between the various components of the IoT system that need to be considered. These interactions need to be modelled, and their impact evaluated and factored into the design of IoT systems and strategies devised to deal with possible unintended consequences that may arise.

System dynamics provides a modelling framework for analysing the complex interactions between IoT systems. IoT systems consist of multiple interconnected components (such as sensor networks, data processing units, communication infrastructures, management systems, and stakeholders like policymakers and users) that work together to achieve the diverse goals of the stakeholders as shown in Figure 13. Each IoT system comprises numerous interdependent parts interacting to perform their intended functions, and any modification in one part can affect the overall system performance. The effectiveness of IoT systems relies on the seamless interaction of all constituent components. However, these interactions, including stakeholder involvement, may lead to unintended consequences. Therefore, a system-centric approach is critical for designing and operating IoT systems to meet design objectives and address the expectations of all stakeholders.

The stakeholders involved may have conflicting priorities. For example, the main goal of system users might be to optimise operational efficiency, while the aim of technology developers could be to maximise data integration capabilities, and policymakers may focus on ensuring privacy, security, and environmental sustainability. Using the Systems Thinking framework, these stakeholders can apply tools such as causal loop diagrams to map the interconnections, feedback loops, and relationships (including nonlinear and causal dependencies) within the IoT ecosystem. Additionally, stock-and-flow models can be employed to simulate resource utilisation (e.g., data processing capacity or energy consumption) and to monitor accumulations such as system load or greenhouse gas emissions in IoT-supported applications. These models enable the creation of predictive frameworks that management teams or policymakers can leverage to design interventions, ensuring that the goals of diverse stakeholders are met effectively and sustainably.
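As a toy illustration of the causal loop diagrams mentioned above, a CLD can be represented as a signed directed graph, where each link carries a polarity (+1 if the effect moves in the same direction as the cause, -1 if opposite); the polarity of a feedback loop is then the product of its link polarities, with +1 indicating a reinforcing loop and -1 a balancing one. The variable names below are illustrative:

```python
# Toy causal-loop representation: links carry polarity +1 or -1.
# A closed loop is reinforcing if the product of its link polarities
# is +1, and balancing if it is -1.

links = {
    ("system load", "latency"): +1,        # more load -> more latency
    ("latency", "user activity"): -1,      # more latency -> fewer requests
    ("user activity", "system load"): +1,  # more activity -> more load
}

def loop_polarity(cycle, links):
    """cycle is an ordered list of variables forming a closed loop."""
    polarity = 1
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        polarity *= links[(a, b)]
    return polarity

p = loop_polarity(["system load", "latency", "user activity"], links)
```

Here the loop works out to a polarity of -1, a balancing loop: rising latency suppresses user activity, which in turn relieves system load, the kind of self-correcting dynamic that stock-and-flow models then quantify.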

System Dynamics Modelling Framework

The system dynamics modelling process involves several key steps (figure 14):

System Dynamics Modelling Process
Figure 14: System Dynamics Modelling Process
  1. Development of a Reference Model: Establishing a baseline representation of the system to understand its current structure and behaviour.
  2. Causal Loop Diagrams (CLDs): Creating diagrams that capture the structure of the complex system, identify causal relationships, and highlight feedback loops.
  3. Stocks and Flows: Representing the accumulation of resources (stocks) and their changes over time (flows) within the system.
  4. Mathematical Modelling: Developing equations to describe the relationships between system components quantitatively.
  5. Dimensional Analysis: Ensuring consistency in the units and scales of all variables and parameters used in the model.
  6. Computer Simulations: Using computational tools to simulate system behaviour under different scenarios and over time.
  7. Sensitivity Analysis: Assessing how changes in key parameters or assumptions impact system outcomes.
  8. Policy and Design Testing: Simulating various policies, design, or optimisation changes to evaluate their potential effectiveness and identify unintended consequences.
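As a minimal illustration of steps 3 to 7, the sketch below simulates a single stock (a packet queue in a gateway buffer) with an inflow (the arrival rate) and an outflow (the service rate), then varies the arrival rate as a crude sensitivity analysis. All rates and step counts are hypothetical:

```python
# Stock-and-flow sketch: queue_level is the stock; arrivals and
# departures are the flows. Simple Euler integration over discrete
# time steps.

def simulate_queue(arrival_rate, service_rate, steps=100, dt=1.0):
    """Return the queue level (packets) over time."""
    queue_level = 0.0
    history = [queue_level]
    for _ in range(steps):
        inflow = arrival_rate
        # The outflow cannot exceed what is available in the buffer.
        outflow = min(service_rate, queue_level / dt + inflow)
        queue_level += (inflow - outflow) * dt
        queue_level = max(queue_level, 0.0)
        history.append(queue_level)
    return history

# Sensitivity analysis: vary the arrival rate around a fixed
# service rate of 50 packets per time step.
underloaded = simulate_queue(arrival_rate=40.0, service_rate=50.0)
saturated = simulate_queue(arrival_rate=50.0, service_rate=50.0)
overloaded = simulate_queue(arrival_rate=60.0, service_rate=50.0)
```

Even this toy model exhibits the qualitative behaviour a full system dynamics study would quantify: below the service rate the stock stays empty, at the service rate the system sits on a knife edge, and above it the queue accumulates without bound, which is exactly the kind of trajectory plotted in the "graphs over time" discussed below.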

Core Assumptions of System Dynamics

System dynamics is based on the premise that a system's underlying structure determines its observed behaviour or trends. This behaviour emerges from the interaction of key elements, including physics, information availability, and decision-making rules.

The following structural elements are considered in modelling IoT systems:

1. Accumulations:

  • Packet Queues: Data packets accumulating in network buffers.
  • Battery Energy Systems: Energy content changes during charging and discharging cycles.
  • Information Spread: The “population” of users influenced by fake news or disinformation over time.
  • Stock Changes: Changes in stock levels in IoT-controlled production or industrial systems, e.g., the liquid level in a tank managed by an IoT-controlled industrial system.

2. Causal Structures:
Identifying cause-and-effect relationships between components in the system.

3. Delays:
Recognising that the effects of actions or interventions often manifest after a time lag, which may impact decision-making.

4. Perceptions:
Correct or biased views of cause-and-effect relationships influence how problems are approached.

5. Pressures:
External or internal pressures resulting from perceptions of system challenges or opportunities.

6. Affects, Emotions, and Irrationalities:
Accounting for human factors that drive behaviours and decisions, often deviating from purely rational models.

7. Policies:
Rules and protocols that govern decisions, such as energy management policies or data prioritisation schemes.

8. Incentives:
Motivations that drive individual or system-level actions, such as minimising energy use or optimising throughput.

Defining Dynamics in IoT Systems

The system's dynamics are represented through graphs over time, capturing the variation of key variables and performance metrics as the system evolves. These graphs help to visualise the following:

  • How specific interventions or policies influence the system.
  • The emergence of feedback loops and time delays.
  • Variations in performance metrics such as latency, throughput, or energy consumption.

By leveraging simulation results, we aim to plot and analyse these variations, providing actionable insights into how IoT systems behave under different conditions.

Why System Dynamics for IoT Systems?

System dynamics modelling offers a comprehensive approach to understanding the complexities of IoT systems, particularly when dealing with interactions between diverse components, feedback loops, and time-dependent behaviour. This methodology is especially relevant for IoT systems, where challenges such as data congestion, resource constraints, and dynamic user behaviour can significantly impact system performance. This is also very important in IoT systems that monitor and control industrial processes or critical systems.

By integrating system dynamics with IoT-specific considerations, we can:

  • Predict unintended consequences of policy changes.
  • Enhance system resilience through robust design.
  • Optimise performance metrics such as energy efficiency, data flow, and service reliability.
  • Improve the monitoring and control of industrial systems and critical infrastructures.

Industrial IoT Specific Design Considerations

IoT is a key technology enabler for Industry 4.0 and is increasingly being implemented in manufacturing. This subset of IoT, known as Industrial IoT (IIoT), integrates IoT functionality into industrial settings. While new production systems often come with IoT capabilities by default, many manufacturing companies still rely on legacy equipment that can be upgraded using IoT solutions. Upgrading existing machinery is especially important, as manufacturing equipment is typically designed to last for decades, making frequent replacements impractical. Consequently, IIoT is essential for modernising older machinery to meet today's data-driven production demands, enhance efficiency, reduce downtime, minimise production waste, and lower the overall carbon footprint.

Recently, a new industrial paradigm called Industry 5.0 has emerged. Industry 5.0 builds on the principles of Industry 4.0, with a stronger emphasis on human well-being, resilience, and sustainability. In this context, IoT plays a vital role in achieving these objectives.

Main features of IIoT

Although the general concepts and architecture of Industrial IoT (IIoT) are similar to typical IoT, the industrial sub-domain has specific features and requirements for designing IoT solutions for industry. Industrial applications can be divided into various fields, such as manufacturing and production, energy and utilities, transportation and logistics, agriculture and farming, construction and building, and automotive. Each field has specific needs but shares common critical factors crucial for implementing IoT systems. The most common ones are listed below.

  • Industry Standards: All aspects of IoT systems, such as hardware, software, interfaces, and data formats, must adhere to industry standards and protocols.
  • Reliability and Robustness: In industrial environments, hardware components must withstand harsher conditions and can be costly to maintain or repair. The installation and management of IoT hardware components and software upgradability must be considered from the conceptual design stage.
  • Enhanced Security and Safety: As IoT devices connect to real production machinery, cybersecurity and general safety play a significant role. Unauthorised access to heavy machinery can lead to substantial financial losses or even fatal injuries.
  • Scalability and Interoperability: Once the system is implemented, it is common for new equipment or production lines to be added over time. The IoT system must be designed so that new production resources can be easily integrated without restarting the conceptual design. Additionally, production often involves legacy equipment, and IoT can facilitate the integration of modern and legacy systems.
  • Data Protection and Privacy: Data is one of the most valuable assets in modern industry. If the workforce is included in the IoT system monitoring domain, special attention must be given to data protection and privacy concerns.
  • Cost Considerations: IoT systems are intended to make industrial processes more efficient and safe. Balancing the costs of development, installation, and maintenance with the system's added value is often a critical design consideration for industrial IoT.

These aspects must be addressed early in the IoT system design process. Designing IIoT systems requires careful consideration of several critical factors to ensure the successful deployment and operation of IoT solutions in industrial environments. In addition to the listed factors, many industry domain-specific requirements may rule over general industrial requirements. A well-designed IIoT system can enhance productivity, optimise resource usage, and improve safety, ultimately providing significant value to industrial operations. By focusing on these key features during the design process, industries can fully harness the potential of IIoT to drive innovation and remain competitive in an increasingly connected world.

System Modelling

Model-based Systems Engineering (MBSE) is a systems engineering approach that prioritises using models throughout the system development lifecycle. Unlike traditional document-based methods, MBSE focuses on developing and using various models to depict different facets of a system, including its requirements, behaviour, structure, and interactions.

Systems Modelling Language

The systems modelling language (SysML)[7] is a general-purpose modelling language for systems engineering applications. It supports the specification, analysis, design, verification and validation of a broad range of systems and systems-of-systems. SysML plays a crucial role in the MBSE methodology. SysML provides nine diagram types to represent different aspects of a system. These diagram types (figure 15) help modellers visualise and communicate various perspectives of a system's structure, behaviour, and requirements.

Diagrams in SysML
Figure 15: Diagrams in SysML

Requirements

Product development, including IoT systems development, commences with the proper engineering of requirements and the definition of use cases. The customer establishes requirements; here, the term “customer” encompasses a broad spectrum. In most instances, the customer is an individual or organisation commissioning the IoT system. However, it could also be an internal customer, such as a different department within the same organisation or entity. In the latter case, the customer and the developer are the same. Nonetheless, this scenario is the exception rather than the rule. The importance of conducting a thorough requirement engineering process remains constant across all cases.

The customer often inadequately defines requirements, and many parameters or functions remain unclear. In such cases, the requirement engineering stage assumes pivotal importance, as poorly defined system requirements can lead to numerous changes in subsequent design phases, resulting in an overall inefficient design process. In the worst-case scenario, this may culminate in significant resource wastage and necessitate restarting system development in the middle of a project. Such occurrences are not only costly but also time-consuming. While avoiding changes during the design and development process is impossible, proper change management procedures and resource allocation can significantly mitigate their impact on the overall design process.

This section uses an industrial IoT system as a case study to present examples of SysML diagrams. The case study revolves around a wood and furniture production company with multiple factories across the country. Each factory specialises in various stages of the production chain, yet all factories are interconnected. The first factory processes raw wood and prepares building elements for the subsequent two. The second factory crafts furniture from the prepared wood elements, while the third factory assembles customised products by combining building elements and production leftovers. Some of these factories utilise modern, automated machinery, while others employ classical mechanical machines with limited automation.

The company seeks an IoT solution to ensure continuous production flow, minimise waste, and implement predictive maintenance measures to reduce downtime. In the following examples, we utilise this case study, presenting fragments as examples without covering the entire system through diagrams.

Let's consider a fragment of customer input regarding functional requirements for the system:

  • The system must provide real-time machine status (ok, err, waiting for service) for every machine requiring periodic maintenance (totalling 54 machines across three plants).
  • The system must measure critical machine parameters linked to the most frequent failures.
  • The system must enable authorised operators to manually change the machine status to “requires maintenance”.
  • [functional requirements continue]

Furthermore, the non-functional requirements include:

  • The developed system must use the existing wireless or wired internal network; no new cables or wireless networks should be installed.
  • Installed devices and sensors must not obstruct or interfere with production units or production processes.
  • The cost per unit must not exceed 50 €.
  • [non-functional requirements continue]

Based on fragments of the requirement list like the ones above, we can construct a hierarchical requirement diagram (req, figure 16) with additional optional parameters to precisely specify all individual requirements. Not all individual requirements need to be defined at the same level. If insufficient information is available at the current stage, requirements can be further refined in subsequent design iterations.

 Requirement Diagram of the IoT System
Figure 16: Requirement Diagram of the IoT System
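
Tooling aside, such a requirement hierarchy can be prototyped in plain code to experiment with traceability before committing to a modelling tool. The sketch below is a minimal illustration in Python; the requirement IDs and texts are invented for this example and are not prescribed by the case study.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """A single requirement with optional child (derived) requirements."""
    req_id: str
    text: str
    children: list["Requirement"] = field(default_factory=list)

    def find(self, req_id: str) -> "Requirement | None":
        """Depth-first search for a requirement by its identifier."""
        if self.req_id == req_id:
            return self
        for child in self.children:
            hit = child.find(req_id)
            if hit:
                return hit
        return None

# Hypothetical IDs loosely mirroring the functional requirements above.
root = Requirement("R1", "Monitoring of production machines")
root.children.append(
    Requirement("R1.1", "Provide real-time machine status for all 54 machines"))
root.children.append(
    Requirement("R1.2", "Measure critical machine parameters linked to frequent failures"))

assert root.find("R1.2") is not None
```

Such a structure only captures the hierarchy; relationships like «satisfy» or «verify» would need additional links, which is exactly what dedicated SysML tools provide.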

Use case diagrams (uc) at the requirement engineering stage allow for the visualisation of higher-level services and identification of the main external actors interacting with the system services or use cases. They can subsequently be decomposed into lower-level subsystems; still, at the requirement design stage, they facilitate a common understanding of the IoT system under development by different stakeholders, including management, software engineers, hardware engineers, customers, and others.

The following use case diagram describes the high-level context of the IoT system (figure 17).

Use Case Diagram of the High-level Context of the IoT System
Figure 17: Use Case Diagram of the High-level Context of the IoT System

System Architecture

System architecture defines the system's physical and logical structure and the interconnections between subsystems and components. For example, block definition diagrams (bdd) can capture the system's hierarchical decomposition down to subsystem and even component level. The figure below shows a simple decomposition example of one IoT sensing node. It is essential to understand that blocks are one of the main elements of SysML and, in general, can represent either a definition or an instance; this is a fundamental concept of system design and the pattern used in system modelling. A block is labelled with the stereotype notation «block» followed by its name, and it may also contain several additional compartments, such as parts, references, values, constraints, and operations. In this example, the operations and values compartments are shown. Relationships between blocks describe the nature of and requirements for a block's external connections. The most common relationships are associations, generalisations, and dependencies, each drawn with a specific arrowhead. In the following example (figure 18), a composite association relationship (filled diamond arrowhead) is used to represent the structural decomposition of the subsystem.

Block Definition Diagram of Sensing Node
Figure 18: Block Definition Diagram of Sensing Node

One can define component interactions and flows with the internal block diagram (ibd). Cross-domain components and flows can appear in a single diagram, which is especially useful in the conceptual design stage. The ibd is closely related to the bdd and describes the usages of its blocks. The interconnections between parts of blocks can differ greatly in nature: in one diagram, you can define flows of energy, matter, and data, as well as services required or provided by connections. The following example (figure 19) shows, in a simplified way, how data flows from the sensor to the website user interface.

 Internal Block Diagram of Data Flow Between Nodes
Figure 19: Internal Block Diagram of Data Flow Between Nodes

System Behaviour

The system behaviour of an IoT system defines the implementation of system services and functionality. The combination of hardware, software, and interconnections enables the offering of the required services and functionality and establishes the system's behaviour. It comprises cyber-physical system activities, actions, state changes, and algorithms. For example, we can define a system sensing node software general algorithm with an activity diagram (act), as presented in the figure 20.

Activity Diagram of Sensing Node's Software Algorithm
Figure 20: Activity Diagram of Sensing Node's Software Algorithm
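
The control flow captured in such an activity diagram can also be prototyped as a simple read-classify-report loop. The following is a behavioural sketch only, not firmware: `read_sensor`, `service_requested`, and `send_status` are hypothetical stand-ins for hardware- and protocol-specific calls, and the vibration threshold is an assumed example value.

```python
OK, ERROR, WAITING_FOR_SERVICE = "ok", "err", "waiting for service"
VIBRATION_LIMIT = 7.5  # assumed threshold for this illustration

def classify(vibration: float, service_flag: bool) -> str:
    """Map one measurement cycle onto the three machine statuses."""
    if service_flag:
        return WAITING_FOR_SERVICE
    return ERROR if vibration > VIBRATION_LIMIT else OK

def node_cycle(read_sensor, service_requested, send_status) -> str:
    """One iteration of the sensing node's main loop."""
    status = classify(read_sensor(), service_requested())
    send_status(status)
    return status

# Simulated cycle: vibration above the limit, no manual service request.
sent = []
result = node_cycle(lambda: 9.1, lambda: False, sent.append)
assert result == "err" and sent == ["err"]
```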

Requirement verification and validation

Property prognosis and assurance are conducted throughout the development process. Expected system properties are forecast early on using models. Property validation, which accompanies development, continuously examines the pursued solution through investigations with virtual or physical prototypes, or a combination of both. Property validation includes verification and validation: verification is the confirmation by objective proof that a specified requirement is fulfilled, while validation proves that the user can use the work result for the specified application [8].

SysML enables the tracking and connecting of requirements with different elements and procedures of the model. For example, the SysML requirement diagram (figure 21) captures requirements hierarchies and the derivation, satisfaction, verification, and refinement relationships. The relationships provide the capability to relate requirements to one another and to system design models and test cases.

Requirement Validation and Verification
Figure 21: Requirement Validation and Verification

SysML is a comprehensive graphical modelling language designed to visualise a system's structure, behaviour, requirements, and parametrics, enabling effective communication of this information to others. It defines nine types of diagrams, each with a unique role in conveying specific aspects of system design.

IoT Architectures

Due to the rapid development of communication technologies and novel data transmission carriers and protocols, IoT systems have emerged from the world of wireless sensor networks. These networks have already shown flexibility and resilience in different application domains, including healthcare, manufacturing, and domestic services. While IoT applications shift toward more data-intensive workloads, their technical solutions and architectures remain essential to providing valid and trustworthy data for complex and reliable decisions.

IoT Reference Architectures

This chapter focuses on the architectural design of IoT networks and systems. It leverages the well-known four-layered IoT reference architecture shown in figure 22 to discuss the methodologies and tools for the design of IoT networks and systems. An IoT reference architecture is a strategic blueprint detailing the key components and their interactions within an IoT ecosystem. It offers a robust framework for designing, developing, and deploying effective IoT solutions, ensuring a cohesive and scalable system architecture. The IoT reference architecture outlines the foundational layers and components required for the seamless operation of IoT systems. Each layer is critical in ensuring efficient data collection, transmission, processing, and utilisation in an IoT ecosystem.

4 Layered IoT Architecture Model
Figure 22: 4 Layered IoT Architecture Model

Perception Layer: The Data Collection and Interaction Layer

The perception layer forms the foundation of the IoT ecosystem by interacting directly with the physical world. It comprises various IoT-enabled devices, sensors, and actuators that gather data or influence the environment. Recent advances in hardware and low-power computing also bring data processing capabilities to this layer, including simple AI tasks.

Components

  1. Sensors: Devices that detect and measure parameters such as temperature, humidity, pressure, light, motion, and sound. Examples include temperature sensors, proximity sensors, and accelerometers.
  2. Actuators: Devices that execute actions in response to commands, such as motors, relays, and smart locks.
  3. IoT Devices: Smart gadgets, such as cameras, wearable devices, and smart home appliances, capable of both sensing and acting.

Functionality

  • Collects raw data from the environment.
  • Interfaces with actuators to enact physical changes or respond to user commands.

This layer serves as the IoT system's “eyes and hands,” enabling it to sense and influence its surroundings.

Transport Layer: The Communication Backbone

The transport layer, also called the network layer, facilitates connectivity between IoT devices and the broader system. It ensures that data captured at the perception layer is reliably transmitted to data processing units. This layer supports various communication models, including device-to-device and device-to-cloud communication.

Components

  1. Communication Protocols: These include MQTT, CoAP, HTTP, and WebSocket, tailored to support lightweight and efficient IoT communication.
  2. Networking Infrastructure: Gateways, routers, modems, and switches that route and manage traffic between devices and systems.
  3. Connectivity Technologies:
  • Short-range: Wi-Fi, Bluetooth, Zigbee, NFC.
  • Long-range: Cellular (4G/5G), LoRaWAN, Sigfox.
  • Satellite for remote or global coverage.

Functionality

  • Ensures secure and seamless data transmission.
  • Handles device discovery, authentication, and network management.
  • Bridges the gap between localised IoT systems and centralised data platforms like cloud servers.

This layer is the “nervous system” of the IoT architecture, enabling the flow of information across the ecosystem.

Data Processing Layer: The Intelligence Hub

The data processing layer is responsible for aggregating, filtering, analysing, and deriving actionable insights from the data collected by IoT devices. Depending on the application's requirements, this layer can operate at the edge (closer to the devices), in the fog, or in the cloud.

Components

  1. Edge Computing Devices: Localised processing units that enable near-real-time data analysis, reducing latency and bandwidth usage.
  2. Fog Computing Devices: Components located between the Edge and Cloud, fog computing devices provide distributed computing services that allow advanced data operations on a limited scale and ensure a more flexible approach to IoT data security and processing. They also optimise data transmission through aggregation and preprocessing for the Cloud Platforms.
  3. Cloud Platforms: centralised systems for large-scale data storage, advanced analytics, and extensive AI tasks such as machine learning model training.
  4. Data Pipelines: Tools for data ingestion, transformation, and integration with enterprise systems. Examples include Apache Kafka and AWS IoT Core.
  5. AI and Analytics Engines: Algorithms and tools for predictive analytics, anomaly detection, and decision-making.

Functionality

  • Cleanses and normalises raw data for processing.
  • Performs analytics to extract patterns, trends, and actionable insights.
  • Supports automated decision-making and triggers responses in real time.

This layer acts as the “brain” of the IoT system, transforming raw data into meaningful intelligence.
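
The cleanse-then-analyse step described above can be sketched in a few lines. This is a minimal illustration, assuming readings arrive as raw floats with dropouts encoded as `None` and negative sentinel values; the z-score threshold is an arbitrary example, not a recommended setting.

```python
from statistics import mean, stdev

def cleanse(raw):
    """Drop missing values and obvious sentinel errors (negative readings)."""
    return [r for r in raw if r is not None and r >= 0]

def anomalies(values, z_limit=3.0):
    """Flag values whose z-score exceeds the limit."""
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs(v - mu) / sigma > z_limit]

raw = [21.1, 21.3, None, 21.2, -1.0, 55.0, 21.0, 21.2]
clean = cleanse(raw)
print(anomalies(clean, z_limit=2.0))
```

In production, such logic would typically run on an edge or fog node so that only cleaned data and flagged anomalies travel onwards to the cloud.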

Application Layer

The application layer, also known as the user interaction and value creation layer, transforms processed data into end-user functionalities and value-driven solutions. It consists of software applications, services, and user interfaces that allow users to interact with and benefit from the IoT system.

Components

  1. Applications: Solutions tailored to specific use cases, such as smart home automation, industrial IoT monitoring, and healthcare diagnostics.
  2. Visualisation Tools: Dashboards and reporting tools that present data insights intuitively.
  3. APIs and Integration Services: Enable connectivity with third-party applications and systems.

Functionality

  1. Provides user interfaces for monitoring, control, and configuration.
  2. Supports real-time decision-making and alerts for critical events.
  3. Drives advanced use cases such as predictive maintenance, automated workflows, and AI-driven decision support.

This layer represents the “face” of the IoT system, delivering tangible benefits and user-centric solutions.

Key Insights and Integration of Layers

  1. Seamless Integration: The layers are interdependent and must work harmoniously. For instance, data collected by the perception layer is meaningless without the processing layer's intelligence or the application layer's usability.
  2. Scalability and Flexibility: IoT systems must be designed to scale with increasing devices, data volumes, and user demands. Each layer should support modular expansion.
  3. Security Across Layers: Robust security measures, such as encryption, authentication, and intrusion detection, must be integrated at every layer to protect data and devices from threats.

Organisations can build resilient and efficient IoT ecosystems tailored to their specific needs by leveraging a well-structured IoT reference architecture. This layered approach ensures that every component, from sensors to user applications, contributes to a cohesive and value-driven system. The discussion on IoT architectures presented in the remaining parts of this chapter is based on the IoT reference architecture presented above.

Components of IoT Network Architectures

An IoT network architecture comprises several layers: edge-class IoT devices such as sensors and actuators; access points enabling devices to connect to the Internet and its services; fog-class devices performing preliminary data processing such as aggregation and conversion; the core Internet network; and, finally, a set of cloud services for data storage and advanced data processing. A sample model is presented in figure 23.

 IoT Network Architecture Components
Figure 23: IoT Network Architecture Components

IoT nodes

IoT nodes are the fundamental building blocks of an IoT system, enabling the capture, processing, and transmission of data across connected devices. These nodes often operate in energy-constrained environments and are connected to an access point, which links them to the Internet, using low-power communication technologies (LPCT). These technologies enable cost-effective, reliable connectivity while adhering to the limitations of battery-operated or energy-harvesting devices. They encompass wireless access technologies at the physical layer for establishing connectivity and application layer communication protocols for managing data exchange over IP networks.

Wireless Access Technologies

Wireless access technologies are pivotal in connecting IoT devices to a network. They can be categorised into short- and long-range technologies and divided into licensed and unlicensed options. The selection of a specific technology depends on application requirements such as range, power consumption, scalability, and cost.

Short-Range Technologies

Short-range technologies are ideal for IoT applications in localised settings, such as smart homes, industrial automation, and personal devices. Examples include:

  • Bluetooth/Bluetooth Low Energy (BLE): Widely used for wearables and short-range communication with mobile devices.
  • ZigBee: Suitable for low-power mesh networks in home automation and smart lighting.
  • Z-Wave: Popular for smart home devices due to low power consumption and ease of integration.
  • IEEE 802.15.4: A foundation for standards like ZigBee and 6LoWPAN.
  • Near Field Communication (NFC): Designed for very short-range communication, commonly used in payment systems and secure data transfer.

Long-Range Technologies

Long-range communication is critical for IoT applications spanning large areas, such as agriculture, utilities, and logistics. Examples include:

  • LoRaWAN: A low-power wide-area network ideal for rural and remote IoT deployments.
  • Sigfox: An ultra-narrowband technology suited for simple and low-data IoT applications.
  • NB-IoT: A cellular-based LPWAN technology optimised for deep indoor coverage and long battery life.
  • LTE-M (Cat-M1): Supports higher bandwidth IoT applications while maintaining energy efficiency.

Licensed vs. Unlicensed Technologies

  • Licensed Technologies: Operate over spectrum owned by cellular operators, offering greater reliability and guaranteed QoS, but often at higher cost.
  • Unlicensed Technologies: Use publicly available spectrum (e.g., LoRaWAN, ZigBee) and are cost-effective over time. However, operators must build and maintain their infrastructure, incurring upfront capital expenditures.

Low Power Wide Area Networks (LPWAN)

LPWAN technologies are transformative for IoT because they provide long-range connectivity with ultra-low power consumption. These technologies are particularly suited for large-scale deployments where devices must operate autonomously for extended periods (up to a decade) without frequent maintenance or battery replacement.

Key Benefits of LPWAN Technologies

  • Wide-Area Coverage: Reliable communication over distances of several kilometres, even in challenging environments.
  • Ultra-Low Power Operation: Prolonged battery life for IoT devices, minimising maintenance.
  • Low-Cost Connectivity: Reduces both CAPEX and OPEX, making IoT deployments more economical.
  • Scalability: Supports the connection of thousands or millions of devices in a single network.
  • Acceptable Quality of Service (QoS): Sufficient for most IoT use cases, including environmental monitoring, asset tracking, and smart agriculture.
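
The decade-scale battery life claimed above can be sanity-checked with a back-of-the-envelope average-current calculation. All figures below (sleep and transmit currents, daily airtime, cell capacity) are assumed example values for illustration, not data for any specific radio, and the model ignores battery self-discharge and sensing current.

```python
def battery_life_years(capacity_mah: float, sleep_ua: float,
                       tx_ma: float, tx_seconds_per_day: float) -> float:
    """Estimate lifetime from the duty-cycled average current draw."""
    seconds_per_day = 86_400.0
    sleep_fraction = (seconds_per_day - tx_seconds_per_day) / seconds_per_day
    # Average current in mA: mostly sleeping, briefly transmitting.
    avg_ma = (sleep_ua / 1000.0) * sleep_fraction \
             + tx_ma * (tx_seconds_per_day / seconds_per_day)
    hours = capacity_mah / avg_ma
    return hours / (24 * 365)

# Example: 2600 mAh cell, 10 uA sleep, 120 mA transmit, 10 s airtime per day.
years = battery_life_years(2600, 10, 120, 10)
print(round(years, 1))
```

Even with these rough assumptions, the result lands in the ten-year range, which shows why keeping airtime and sleep current low dominates LPWAN node design.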

Popular LPWAN Protocols

  • LoRaWAN: Leverages chirp spread spectrum for long-distance, low-power communication.
  • Sigfox: Uses ultra-narrowband technology for low data rate applications.
  • NB-IoT and LTE-M: Cellular-based LPWAN technologies offering enhanced indoor coverage and higher data rates.

While LPWAN protocols excel at transmitting text data, multimedia applications (e.g., images and audio) may require data compression techniques to balance bandwidth and energy efficiency. For instance, in smart agriculture, images from field cameras or audio from livestock monitoring systems might need to be compressed before transmission.
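
As a simple illustration of such preprocessing, a repetitive text/JSON payload can often be shrunk substantially with standard-library compression before it is handed to the radio stack; whether this pays off in practice depends on payload entropy and the radio's maximum frame size. The payload below is invented for the example.

```python
import json
import zlib

# A batch of repetitive sensor readings, as might be buffered before uplink.
readings = [{"t": 1_700_000_000 + i * 60, "temp_c": 21.5, "hum_pct": 48}
            for i in range(20)]
raw = json.dumps(readings).encode("utf-8")
packed = zlib.compress(raw, level=9)

print(len(raw), len(packed))
assert zlib.decompress(packed) == raw  # lossless round trip
assert len(packed) < len(raw)
```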

Application Layer Communication Protocols

Application layer protocols manage data exchange between IoT devices and platforms, ensuring efficient and reliable communication even in resource-constrained environments. These protocols address the limitations of traditional HTTP, offering lightweight and optimised alternatives.

Key Application Layer Protocols

1. Constrained Application Protocol (CoAP):

  • A lightweight, UDP-based protocol designed for resource-constrained devices.
  • Standardised by the IETF (RFC 7252) and suitable for low-power and lossy networks.
  • Employs a request-response model, enabling efficient communication between devices and servers.

2. MQTT (Message Queuing Telemetry Transport):

  • A TCP-based publish-subscribe protocol ideal for IoT systems requiring real-time data exchange.
  • Utilises a central message broker to distribute packets between publishers and subscribers.
  • MQTT-SN (Sensor Network): A variant optimised for UDP, reducing overhead for constrained networks.
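
The broker-mediated publish-subscribe pattern at the heart of MQTT can be illustrated in a few lines of plain Python. This is an in-process stand-in for the routing idea only; a real deployment would use an MQTT client library against a broker such as Mosquitto, and the topic names here are invented for the example.

```python
from collections import defaultdict

class TinyBroker:
    """In-process sketch of an MQTT-style broker (exact topic match only)."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback for messages published on the topic."""
        self._subs[topic].append(callback)

    def publish(self, topic, payload):
        """Deliver the payload to every subscriber of the topic."""
        for cb in self._subs[topic]:
            cb(topic, payload)

broker = TinyBroker()
received = []
broker.subscribe("plant1/machine42/status", lambda t, p: received.append(p))
broker.publish("plant1/machine42/status", "err")
assert received == ["err"]
```

Real MQTT adds features this sketch omits, such as wildcard topic filters, quality-of-service levels, and retained messages.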

3. Advanced Message Queuing Protocol (AMQP):

  • A flexible protocol designed for high-performance messaging, often used in industrial IoT systems.
  • Provides robust support for message reliability and transactional operations.

4. Lightweight M2M (LWM2M):
Specifically tailored for IoT device management, enabling firmware updates, configuration, and resource monitoring.

5. UltraLight 2.0:
A minimalistic protocol designed for low-power IoT applications, focusing on reducing message size and complexity.

IoT nodes rely on advanced wireless access technologies and application layer protocols to establish seamless connectivity, optimise energy efficiency, and support diverse use cases. The selection of these technologies should align with the application's specific requirements, ensuring a balance between performance, scalability, and cost. With the rise of LPWAN and lightweight communication protocols, IoT systems are increasingly capable of supporting massive, energy-efficient deployments in various domains, from smart cities to industrial automation.

The IoT Gateway node

The Internet of Things (IoT) Gateway is a pivotal component in IoT ecosystems, serving as the interface between IoT devices—such as sensors, actuators, and edge nodes—and the broader network infrastructure, including cloud platforms and external data analytics systems. The gateway facilitates seamless data transmission, device management, and integration, enabling efficient communication within the IoT network. By bridging IoT nodes that cannot directly communicate with each other or the Internet, IoT gateways are vital in ensuring interoperability and scalability across diverse devices and protocols.

Core Functions of IoT Gateway nodes

IoT gateways serve multiple essential functions that enhance the overall effectiveness of IoT deployments:

  • Protocol Translation: Many IoT devices use diverse communication protocols, such as ZigBee, LoRaWAN, WiFi, or Bluetooth. The gateway standardises this data into formats compatible with the broader network, ensuring interoperability.
  • Data Aggregation: Gateways collect data from multiple devices, combining and preprocessing it to reduce bandwidth consumption and streamline cloud integration.
  • Edge Computing: By performing local computations, such as filtering, analytics, or decision-making, gateways reduce latency and alleviate the workload on cloud infrastructure.
  • Security Management: Gateways act as a security checkpoint, encrypting data and ensuring secure communication between devices and the cloud.
  • Device Management: They facilitate remote monitoring, configuration, and firmware updates for connected devices, enabling efficient maintenance.
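
The data aggregation function above can be as simple as batching per-device readings and forwarding one summary instead of every sample. The sketch below assumes numeric readings keyed by device ID; the uplink function is a hypothetical placeholder for whatever cloud transport the gateway uses.

```python
from collections import defaultdict
from statistics import mean

class AggregatingGateway:
    """Buffers readings per device and forwards min/mean/max summaries."""
    def __init__(self, uplink):
        self.uplink = uplink            # callable taking the summary dict
        self.buffer = defaultdict(list)

    def on_reading(self, device_id, value):
        """Record one raw reading from a device."""
        self.buffer[device_id].append(value)

    def flush(self):
        """Summarise all buffered readings, clear the buffer, and uplink."""
        summary = {dev: {"min": min(v), "mean": mean(v),
                         "max": max(v), "n": len(v)}
                   for dev, v in self.buffer.items()}
        self.buffer.clear()
        self.uplink(summary)
        return summary

sent = []
gw = AggregatingGateway(sent.append)
for v in (20.9, 21.4, 21.1):
    gw.on_reading("m-07", v)
summary = gw.flush()
assert summary["m-07"]["n"] == 3 and sent[0] is summary
```

Three raw samples thus become one uplink message, which is where the bandwidth savings of gateway-side aggregation come from.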

Hardware Solutions for IoT Gateway nodes

IoT gateways often rely on resource-constrained, cost-effective computing devices that provide sufficient processing power while maintaining energy efficiency. Examples include:

  • Raspberry Pi: A versatile and affordable option for IoT gateway implementations, capable of running lightweight operating systems and software for data aggregation, preprocessing, and communication.
  • Orange Pi: Similar to Raspberry Pi, it offers flexibility and affordability and is suitable for edge computing tasks and IoT connectivity.
  • NVIDIA Jetson Nano Developer Kit: This is a more powerful solution for applications requiring edge AI and machine learning. It enables advanced analytics and real-time decision-making at the gateway level.
  • BeagleBone Black: Known for its robustness, it is often used in industrial IoT applications.

These devices can run lightweight algorithms to perform local data processing, real-time analytics, and storage, minimising the dependency on cloud resources. Additionally, they can support multiple protocols, making them highly adaptable to various IoT deployment scenarios.

The Role of Edge Computing in IoT Gateway Nodes

IoT gateways equipped with edge computing capabilities significantly enhance the performance and efficiency of IoT networks:

  • Reduced Latency: Local processing enables real-time decision-making, which is critical for time-sensitive applications such as healthcare or industrial automation.
  • Bandwidth Optimisation: Gateways reduce the overall network load by filtering and aggregating data before transmission to the cloud.
  • Enhanced Security: Localised data processing limits the exposure of sensitive information to external threats.
  • Autonomous Operation: In environments with intermittent connectivity, gateways with edge computing can function autonomously, ensuring uninterrupted operations.

Smart IoT Solutions with Gateway Nodes

IoT gateways pave the way for scalable, adaptable, and energy-efficient IoT deployments. They act as enablers for diverse applications, including:

  • Smart Agriculture: Gateways using LoRaWAN or Sigfox provide connectivity to remote sensors, monitoring soil moisture, weather conditions, and livestock health.
  • Smart Cities: WiFi-enabled gateways support high-speed communication for smart lighting, traffic management, and public safety systems.
  • Healthcare IoT: Gateways integrated with BLE or WiFi connect wearable devices to centralised systems for real-time patient monitoring and diagnostics.
  • Industrial IoT (IIoT): Gateways facilitate predictive maintenance and process optimisation by connecting sensors in manufacturing or logistics environments.

IoT gateways are indispensable for creating seamless, secure, and efficient IoT networks. By bridging diverse devices, translating protocols, and enabling edge computing, these gateways ensure the scalability and functionality of IoT solutions across industries. Their integration with modern wireless technologies and edge devices makes them a cornerstone for the growing adoption of IoT in real-world applications.

Fog and Edge Computing Nodes

In the rapidly expanding Internet of Things (IoT) landscape, fog and edge computing nodes play a critical role in bridging the gap between IoT devices and centralised cloud computing infrastructure. These nodes decentralise data processing, bringing computational resources closer to the source of data generation, enhancing responsiveness, reducing latency, and alleviating the load on cloud data centres. While “fog computing” and “edge computing” are often used interchangeably, they have distinct scopes. Fog computing is a broader architecture integrating processing at intermediate layers, such as gateways or local servers. In contrast, edge computing focuses on computations directly at or near the device level. These approaches offer a synergistic framework for efficient, real-time, and scalable IoT systems.

Key Characteristics of Fog and Edge Computing

1. Decentralised Processing:
Fog and edge nodes process data locally or in close proximity to IoT devices, minimising the need for constant communication with cloud servers.

2. Layered Architecture:

  • Edge Computing: Processing occurs at or near the data source, such as within sensors, cameras, or IoT-enabled machinery.
  • Fog Computing: Adds an intermediary layer where routers, gateways, or local servers perform more advanced tasks, such as data aggregation, filtering, and lightweight analytics.

3. Real-Time Capability:
Localised processing enables low-latency responses, which is essential for critical applications like autonomous vehicles, healthcare systems, and industrial automation.

Advantages of Fog and Edge Computing

1. Reduced Latency
Traditional cloud computing involves data transmission over long distances, leading to delays. Fog and edge nodes address this issue by processing data closer to the source, ensuring faster response times critical for real-time applications such as:

  • Industrial Automation: Real-time anomaly detection and predictive maintenance.
  • Autonomous Vehicles: Rapid decision-making for navigation and safety.
  • Healthcare Monitoring: Immediate alerts for abnormal patient data from wearable devices.

2. Bandwidth Optimisation
By preprocessing data locally, fog and edge nodes minimise the volume of raw data sent to the cloud, reducing bandwidth consumption and associated costs. For instance:

  • In smart agriculture, edge devices filter environmental data, sending only essential metrics to the cloud for long-term analysis.
  • In smart cities, local fog nodes manage traffic data, sending summarised insights to centralised systems.
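
The edge-side filtering described above can be sketched as a dead-band filter: a reading is forwarded to the cloud only when it differs enough from the last value sent. The 0.5-degree dead-band below is an arbitrary illustration, not a recommended setting.

```python
class DeadbandFilter:
    """Forwards a reading only when it moves beyond the dead-band."""
    def __init__(self, deadband: float):
        self.deadband = deadband
        self.last_sent = None

    def should_send(self, value: float) -> bool:
        """True if the value should be uplinked; updates the reference value."""
        if self.last_sent is None or abs(value - self.last_sent) >= self.deadband:
            self.last_sent = value
            return True
        return False

f = DeadbandFilter(deadband=0.5)
stream = [21.0, 21.1, 21.2, 21.6, 21.7, 23.0]
sent = [v for v in stream if f.should_send(v)]
print(sent)
```

Here six raw samples collapse to three uplinked values, trading a bounded loss of precision for a roughly halved transmission volume.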

3. Enhanced Scalability
Decentralising computational tasks allows IoT networks to scale efficiently without overwhelming cloud infrastructure. Fog computing enables a hierarchical distribution of workloads, supporting vast IoT deployments in industries like energy, transportation, and logistics.

4. Improved Security and Privacy
Localised data processing reduces exposure to cyber threats during data transmission. Additionally, sensitive data can remain within predefined geographical boundaries to comply with regulations such as GDPR (General Data Protection Regulation).

5. Resilience in Intermittent Connectivity
In scenarios with unreliable continuous cloud access, fog and edge nodes ensure autonomous operations by performing critical tasks locally.

Use Cases for Fog and Edge Computing

1. Industrial IoT (IIoT):

  • Real-time monitoring and control of manufacturing equipment.
  • Predictive maintenance to prevent costly downtime.

2. Smart Cities:

  • Traffic management using local sensors and cameras to optimise flow and reduce congestion.
  • Distributed energy management for power grids.

3. Healthcare:

  • Continuous monitoring of patients with wearable devices.
  • Localised data analysis for faster diagnosis and intervention.

4. Autonomous Systems:

  • Drones for delivery and surveillance.
  • Vehicles with edge-enabled sensors for real-time navigation and obstacle avoidance.

5. Agriculture:

  • Precision farming using environmental sensors.
  • Crop health monitoring with drone-mounted edge devices.

Fog Computing and Artificial Intelligence (AI)

Integrating artificial intelligence (AI) with fog computing enhances the capabilities of IoT systems by enabling real-time analytics and decision-making at the edge.

AI-Enabled Fog Nodes:

  • Perform localised data analysis using lightweight AI models.
  • Support inferencing tasks like object detection at the edge to avoid latency from cloud-based AI processing.

Distributed AI Processing:

  • Fog nodes handle intermediate tasks like preprocessing and feature extraction, while cloud servers perform more computationally intensive AI training.
  • This hierarchical distribution ensures efficient utilisation of resources across the network.
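This fog/cloud split can be illustrated with a small sketch: the fog node reduces a window of raw readings to a handful of summary features (a simple stand-in for feature extraction), so only the compact features travel upstream for training or further analysis. The function and field names below are illustrative, not taken from any particular framework.

```python
from statistics import mean, stdev

def extract_features(window):
    """Fog-side feature extraction: reduce a window of raw sensor
    readings to a compact feature vector, so only a few numbers
    (not every sample) are transmitted to the cloud."""
    return {
        "mean": mean(window),
        "std": stdev(window) if len(window) > 1 else 0.0,
        "min": min(window),
        "max": max(window),
        "n": len(window),
    }

# Raw samples stay local; only the summary is sent upstream.
raw = [20.1, 20.4, 19.9, 21.0, 20.6]
features = extract_features(raw)
print(features["n"])  # 5 raw samples reduced to 5 features
```

The same idea scales to richer features (FFT bins, event counts); the point is that the fog node ships a fixed-size summary regardless of the raw sampling rate.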

Examples

  • Smart Retail: AI-enabled fog nodes analyse customer behaviour in-store, providing personalised recommendations without cloud dependency.
  • Energy Management: Predictive analytics is performed locally to optimise energy distribution in real-time.

Technologies Enabling Fog and Edge Computing

1. Hardware Solutions:

  • Raspberry Pi: Affordable, energy-efficient computing for edge processing.
  • NVIDIA Jetson Nano: Edge AI for applications requiring advanced analytics.
  • Edge Servers: High-performance devices for fog computing in industrial environments.

2. Software Frameworks:

  • Kubernetes at the Edge: Manages containerised applications across fog and edge nodes.
  • OpenFog Consortium Standards: Ensures interoperability and scalability.

3. Networking Protocols:

  • MQTT and CoAP: Lightweight communication protocols optimised for edge environments.
  • 5G Networks: Enhances connectivity for mobile fog and edge nodes, supporting high-speed, low-latency communication.
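Part of what makes MQTT lightweight is its hierarchical topic scheme, in which subscribers use the `+` (single-level) and `#` (multi-level) wildcards to select message streams. The sketch below shows simplified topic-filter matching; it ignores `$`-prefixed system topics and other edge cases covered by the full specification.

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """Simplified MQTT topic-filter matching:
    '+' matches exactly one topic level; '#' (as the last
    pattern level) matches any number of remaining levels."""
    p_levels = pattern.split("/")
    t_levels = topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":
            return True  # multi-level wildcard swallows the rest
        if i >= len(t_levels):
            return False
        if p != "+" and p != t_levels[i]:
            return False
    return len(p_levels) == len(t_levels)

print(topic_matches("factory/+/temperature", "factory/line1/temperature"))  # True
print(topic_matches("factory/#", "factory/line1/pressure"))                 # True
print(topic_matches("factory/+", "factory/line1/pressure"))                 # False
```

A broker uses exactly this kind of matching to decide which subscribers receive a published message, which keeps edge nodes decoupled from one another.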

Future Trends in Fog and Edge Computing

1. Integration with 5G: The rollout of 5G networks will further enhance fog and edge computing by providing high-speed, low-latency communication, supporting advanced use cases like AR/VR and autonomous systems.

2. Edge AI Innovations: Continued development of efficient AI models for edge devices will expand their capabilities, enabling predictive maintenance, fraud detection, and environmental monitoring applications.

3. Decentralised Architectures: Blockchain technology may be integrated with fog and edge nodes to ensure secure, tamper-proof data processing and storage.

4. Green Computing Initiatives: Energy-efficient hardware and renewable energy integration will drive sustainable fog and edge solutions.

Fog and edge computing represent transformative advancements in IoT system architecture, addressing the limitations of traditional cloud-centric models. By bringing computational resources closer to data sources, these approaches enable real-time analytics, reduce bandwidth requirements, and improve system reliability. As IoT deployments continue to grow in complexity and scale, the adoption of fog and edge computing will be instrumental in achieving responsive, secure, and efficient solutions across industries. With advancements in AI, 5G, and edge hardware, the future of fog and edge computing promises even greater integration and innovation.

Internet core networks

Internet core networks are the backbone of the Internet of Things (IoT), enabling seamless connectivity and data exchange between billions of devices and cloud computing platforms. These networks are integral to the operation of IoT systems, ensuring the reliable transmission of vast amounts of data generated by interconnected sensors, actuators, and devices, collectively called IoT nodes.

IoT nodes capture and generate significant data volumes that need to be processed to extract actionable insights. This data journey involves two key communication paths:

  • Uplink: Data flows from IoT nodes to the cloud for processing and analysis.
  • Downlink: Processed data, insights, control commands, or feedback are transmitted back to IoT nodes for execution.
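The two paths above can be sketched as a toy message envelope plus a router that separates node-to-cloud traffic from cloud-to-node traffic. The `Message` class and field names are illustrative only, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class Message:
    direction: str  # "uplink" (node -> cloud) or "downlink" (cloud -> node)
    node_id: str
    payload: dict

def route(msg: Message) -> str:
    """Toy router: uplink messages go to the analytics pipeline,
    downlink messages are returned to the addressed node."""
    if msg.direction == "uplink":
        return "analytics"
    if msg.direction == "downlink":
        return f"node:{msg.node_id}"
    raise ValueError("unknown direction")

print(route(Message("uplink", "n42", {"temp": 21.5})))    # analytics
print(route(Message("downlink", "n42", {"cmd": "off"})))  # node:n42
```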

This bidirectional communication underpins critical IoT applications, such as smart cities, industrial automation, healthcare systems, and autonomous vehicles. These applications rely on low-latency and high-throughput networks to support real-time responsiveness and data-driven decision-making, making the role of core networks indispensable.

Challenges in Handling IoT Traffic over Core Networks
While internet core networks provide essential connectivity for IoT systems, the exponential growth in IoT devices introduces unique challenges that must be addressed to ensure reliable, secure, and efficient operations.

1. Security Vulnerabilities
Transiting vast amounts of IoT data over core networks exposes the ecosystem to heightened cyber-attack risks. Common threats include:

  • Data Interception: Unauthorised entities accessing sensitive information during transmission.
  • Distributed Denial-of-Service (DDoS) Attacks: Disrupting network services by overwhelming them with malicious traffic.
  • Unauthorised Access: Exploiting weak authentication to control IoT devices.

To mitigate these risks, robust security measures are essential:

  • End-to-end Encryption: Ensures data confidentiality during transmission.
  • Secure Authentication Protocols: Protect against unauthorised access.
  • Continuous Network Monitoring: Identifies and neutralises threats in real-time.

Without comprehensive security frameworks, IoT systems are vulnerable to breaches, data theft, and operational disruptions, which could compromise safety and reliability.

2. Maintaining Quality of Service (QoS)
The massive volume of IoT traffic places immense pressure on core networks, potentially leading to:

  • Congestion: Overloaded network pathways.
  • Latency Issues: Delays in data transmission and processing.

Even minor QoS degradation can result in severe consequences for applications such as autonomous vehicles, industrial automation, and telemedicine, including operational failures or safety hazards.

Solutions for QoS Optimisation:

  • Traffic Prioritisation Mechanisms: Assign higher priority to time-sensitive data.
  • Dynamic Network Optimisation: Use intelligent routing to reduce bottlenecks.
  • Adaptive Bandwidth Allocation: Scale resources based on traffic demands.
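Traffic prioritisation can be illustrated with a toy scheduler built on a priority heap: time-sensitive packets are dequeued before bulk telemetry, regardless of arrival order. This is a minimal sketch of the idea, not a model of any particular router's queuing discipline.

```python
import heapq
import itertools

class PriorityScheduler:
    """Toy traffic prioritisation: lower number = higher priority.
    A monotonic counter breaks ties so packets of equal priority
    leave in FIFO order."""
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()

    def enqueue(self, priority: int, packet: str):
        heapq.heappush(self._queue, (priority, next(self._counter), packet))

    def dequeue(self) -> str:
        return heapq.heappop(self._queue)[2]

sched = PriorityScheduler()
sched.enqueue(2, "telemetry-batch-1")
sched.enqueue(0, "collision-alarm")  # time-sensitive
sched.enqueue(1, "firmware-ack")
print(sched.dequeue())  # collision-alarm
```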

By ensuring consistent QoS, core networks can meet the stringent demands of real-time IoT applications.

3. Energy Consumption
The continuous transmission and processing of IoT data across core networks require substantial energy resources, contributing to:

  • High Operational Costs: Increasing expenditure for network providers.
  • Environmental Impact: Elevated carbon emissions from energy-intensive processes.

Strategies for Sustainable Energy Management:

  • Energy-Efficient Network Equipment: Reduce power consumption without compromising performance.
  • Optimised Data Routing: Minimise transmission distance and energy usage.
  • Edge Computing Integration: Process data closer to its source, reducing the load on core networks and conserving energy.

Adopting these strategies helps balance operational demands with environmental responsibility, paving the way for greener IoT infrastructures.

4. Network Management Complexity
The dynamic and large-scale nature of IoT traffic introduces significant challenges in network administration, such as:

  • Coordinating Diverse Data Flows: Managing the simultaneous transmission of varied IoT data.
  • Load Balancing: Distributing network traffic to prevent overloads.
  • Scaling Resources: Adapting to the growth of IoT devices and applications.

Traditional network management approaches often fall short of addressing these complexities. Advanced solutions include:

1. Software-Defined Networking (SDN):

  • Centralised Control: Decouples network control from hardware, enabling flexible and automated management.
  • Dynamic Configuration: Adapts routing paths to optimise traffic flow.

2. Network Function Virtualisation (NFV):

  • Virtualised Network Functions: Replace hardware-based functions with software, allowing rapid scaling and efficient resource utilisation.
  • Cost Reduction: Decreases reliance on expensive, dedicated hardware.

Together, SDN and NFV enhance agility, scalability, and resilience, making them indispensable tools for managing complex IoT ecosystems.

The Future of Core Networks in IoT

The rapid expansion of IoT networks demands continuous innovation in core network technologies. Future advancements are likely to focus on:

1. 5G and Beyond

  • Low Latency: Essential for real-time applications such as autonomous vehicles and industrial automation.
  • High Bandwidth: Supports massive IoT deployments with diverse traffic profiles.

2. AI-Driven Network Management

  • Predictive Analytics: AI can anticipate traffic patterns and optimise routing proactively.
  • Self-Healing Networks: AI-enabled systems can detect and resolve issues autonomously, reducing downtime.

3. Blockchain for Secure IoT Communication

  • Tamper-proof Transactions: Blockchain ensures the integrity of data during transmission.
  • Decentralised Security: Reduces reliance on centralised servers, mitigating single points of failure.

4. Green Networking Initiatives

  • Renewable Energy Integration: Powering network nodes with solar or wind energy.
  • Energy-Aware Protocols: Dynamically adjust network operations to conserve energy.

Internet core networks are the lifeline of IoT ecosystems, enabling seamless data transmission and real-time responsiveness across diverse applications. However, the rapid growth of IoT introduces challenges, including security vulnerabilities, QoS maintenance, energy consumption, and network management complexities.

Core networks can meet the evolving demands of IoT systems by adopting advanced technologies such as SDN, NFV, edge computing, and AI-driven management and implementing robust security measures and energy-efficient practices. These innovations will ensure a sustainable, secure, and efficient future for IoT, driving transformative advancements across industries in an increasingly connected world.

Cloud computing data centres

IoT devices are typically constrained by limited computational power and memory, so they rely heavily on cloud data centres for advanced analytics and data storage. IoT cloud computing represents the intersection of cloud technology and the rapidly expanding Internet of Things (IoT) domain, offering a robust framework for processing and managing the massive data streams of IoT devices.

Cloud computing has transformed IT operations, providing unparalleled advantages in cost-effectiveness, scalability, and flexibility. When combined with IoT, these benefits are amplified, enabling seamless access to a broad array of computing resources—ranging from software to infrastructure and platforms—delivered remotely over the Internet. This integration allows IoT devices to connect to cloud environments from virtually any location, enabling real-time data processing, efficient resource management, and dynamic scalability.

By leveraging cloud computing, organisations can minimise the complexities and financial burdens of maintaining on-premises IT infrastructure. This capability accelerates the deployment of IoT solutions and reduces costs, empowering businesses to focus on innovation and growth rather than infrastructure management.

Key Benefits of IoT Cloud Computing

1. Cost Reduction and Resource Optimisation
One of the primary advantages of IoT cloud computing is the significant cost savings it offers by eliminating the need for extensive physical infrastructure. Traditionally, organisations had to invest heavily in on-premises data centres, incurring substantial costs related to hardware procurement, maintenance, security, and periodic upgrades.

Cloud computing shifts these responsibilities to service providers, who manage the infrastructure on behalf of users. This model reduces capital expenditure and operational costs, freeing up financial and human resources. For small and medium-sized enterprises (SMEs), this shift is particularly transformative, granting access to cutting-edge computing resources that were previously unaffordable.

Additionally, the pay-as-you-go model of cloud services ensures that organisations only pay for the resources they use, enabling efficient cost management and scaling.

2. Enhanced Security and Data Management
Cloud computing enhances data security by leveraging the expertise of leading service providers, who implement advanced measures to protect data and applications from cyber threats. Key security features include:

  • End-to-End Encryption: Protects data during transmission and storage.
  • Regular Updates and Patches: Ensure systems are safeguarded against emerging vulnerabilities.
  • Robust Authentication Mechanisms: Prevent unauthorised access.

By outsourcing security to cloud providers, organisations can achieve a level of protection that would be costly and complex to maintain independently.

Furthermore, cloud platforms offer scalable and flexible storage solutions to accommodate the dynamic data volumes generated by IoT devices. Automated maintenance and updates ensure consistent performance and reduce the risk of downtime or data loss.

3. Accelerating IoT Application Development
IoT cloud computing provides developers with a robust ecosystem of tools, frameworks, and services that streamline application development. This environment allows for:

  • Rapid Prototyping and Deployment: Developers can quickly create, test, and launch IoT applications.
  • Infrastructure-Free Development: Eliminates the need to manage physical servers, enabling developers to focus on functionality and innovation.
  • Enhanced Collaboration: Cloud platforms support real-time collaboration, allowing teams to work together from different locations.

These advantages lead to faster rollout times for IoT applications and foster continuous innovation.

4. Support for IoT-Specific Cloud Platforms
The rise of IoT has driven the development of cloud platforms tailored to the unique demands of IoT systems. Popular platforms such as Microsoft Azure IoT Suite, Amazon AWS IoT, and DeviceHive offer comprehensive services, including:

  • Device Management: Streamlining the onboarding, configuration, and monitoring of IoT devices.
  • Real-Time Data Processing: Analysing data as it is generated for actionable insights.
  • Advanced Analytics: Supporting predictive analytics, machine learning, and AI-driven decision-making.
  • Application Hosting: Providing a reliable environment for deploying IoT solutions.

These platforms enable businesses to implement IoT solutions quickly and cost-effectively, eliminating the need for extensive in-house infrastructure while maintaining flexibility and scalability.

Strategic Advantages of IoT Cloud Integration

The integration of IoT and cloud computing extends beyond cost efficiency and operational convenience, offering strategic benefits that drive business transformation:

1. Real-Time Insights:
Cloud-based analytics enable organisations to process and act on IoT data in real-time, improving decision-making and responsiveness. For example, in industrial automation, real-time data can predict equipment failures and trigger preventive actions, minimising downtime and costs.

2. Enhanced Operational Efficiency:
Cloud-based IoT platforms optimise workflows by automating repetitive tasks, streamlining processes, and improving resource allocation. For instance, smart city systems use cloud analytics to manage traffic flow, reduce energy consumption, and respond to emergencies more effectively.

3. Scalability for Growing IoT Ecosystems:
Cloud platforms are inherently scalable, allowing businesses to expand their IoT deployments without the need for additional physical infrastructure. This scalability supports long-term growth and adapts to fluctuating demands.

4. Innovation Enablement:
Cloud computing reduces the burden of infrastructure management, freeing up resources for innovation. It enables businesses to explore new IoT use cases and develop next-generation applications.

The Future of IoT Cloud Computing

As IoT continues to expand, the role of cloud computing will grow increasingly pivotal in supporting its evolution. Emerging trends and technologies shaping the future of IoT cloud computing include:

  • Edge and Fog Computing Integration: Combining edge computing with cloud infrastructure to process data closer to its source, reducing latency and bandwidth usage.
  • AI-Driven IoT Analytics: Leveraging artificial intelligence to extract deeper insights from IoT data and enable predictive and prescriptive analytics.
  • Serverless Architectures: Facilitating cost-effective, on-demand resource utilisation for IoT applications.
  • Blockchain for IoT Security: Ensuring data integrity and secure transactions across IoT networks.

IoT cloud computing is a cornerstone of the modern IoT ecosystem, providing the scalability, flexibility, and efficiency needed to manage the massive data volumes generated by connected devices. By reducing costs, enhancing security, and accelerating application development, cloud computing empowers organisations to harness the full potential of IoT.

As the integration of these technologies continues to advance, IoT cloud computing will remain a driving force behind innovation and global connectivity, enabling a smarter, more interconnected future.

IoT Software Applications

IoT devices are naturally network-enabled and communication-oriented. For this reason, software development on any component of the IoT ecosystem requires a specific approach driven by communication requirements, energy efficiency, and other aspects of IoT network architecture.
The value of IoT lies not just in the devices themselves but in the software applications that leverage the data generated by these devices to provide actionable insights and drive automation. These software applications are at the heart of IoT solutions and can be designed for various purposes. Let's explore the different aspects of IoT Software Applications in detail.

1. Monitoring

Monitoring is one of the most common IoT application categories. In this use case, IoT devices (such as sensors, cameras, or smart meters) continuously collect data about the environment, processes, or systems they are designed to observe.
The software application's role is to:

  • Collect and aggregate data: The software interfaces with the devices to retrieve real-time data, such as temperature, humidity, energy consumption, or security status.
  • Analyse the data: Visualisation tools and dashboards allow users to view trends and patterns in real time, making it easy to monitor critical metrics.
  • Alert and notify: When the system detects anomalies or values that exceed predefined thresholds, the software can send alerts or notifications to stakeholders, such as technicians or facility managers.

For example, in industrial applications, IoT sensors might monitor equipment for signs of wear and tear, allowing a company to detect potential failures before they cause disruptions. In healthcare, IoT devices can continuously monitor patient vitals and send updates to doctors or hospitals for immediate action.
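The collect-analyse-alert loop can be sketched as a minimal threshold checker: each metric is compared against a (low, high) limit pair and violations become alert messages. The metric names and limits below are illustrative.

```python
def check_thresholds(reading: dict, limits: dict) -> list:
    """Compare one sensor reading against per-metric (low, high)
    limits and return alert strings for any violations."""
    alerts = []
    for metric, (low, high) in limits.items():
        value = reading.get(metric)
        if value is None:
            continue  # metric not present in this reading
        if value < low:
            alerts.append(f"{metric} too low: {value}")
        elif value > high:
            alerts.append(f"{metric} too high: {value}")
    return alerts

limits = {"temperature": (15.0, 28.0), "humidity": (30.0, 60.0)}
print(check_thresholds({"temperature": 31.2, "humidity": 45.0}, limits))
# ['temperature too high: 31.2']
```

In a real deployment the returned alerts would be pushed to a notification channel (e-mail, SMS, dashboard) rather than printed.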

2. Control

Control-oriented IoT applications allow users to interact with and manage devices or systems remotely. This can include turning devices on or off, adjusting settings, or configuring them to operate in specific modes. Control applications offer the following capabilities:

  • Remote Device Management: Users can remotely access devices (such as smart thermostats, lights, or machinery) to change configurations, reset them, or check their operational status.
  • Automation and Scheduling: IoT devices can be controlled based on automated rules or schedules. For example, an IoT-enabled irrigation system can be set to water crops at specific times of the day based on weather conditions or soil moisture levels.
  • Access Control: In security systems, IoT devices such as smart locks or cameras can be controlled to allow or deny access to a specific location. Users can lock/unlock doors remotely or view live feeds to ensure security.

For example, IoT applications might control lighting, heating, and even security systems in a smart home from a central interface like a smartphone app.
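A central control interface of this kind can be sketched as a small device registry through which devices are switched and queried remotely. The class and device names are illustrative; a production system would add authentication and persistence.

```python
class DeviceController:
    """Toy central control interface: a single registry through
    which registered devices can be switched and queried."""
    def __init__(self):
        self._devices = {}

    def register(self, name: str):
        self._devices[name] = "off"  # devices start switched off

    def set_state(self, name: str, state: str):
        if name not in self._devices:
            raise KeyError(f"unknown device: {name}")
        self._devices[name] = state

    def status(self, name: str) -> str:
        return self._devices[name]

home = DeviceController()
home.register("hall-light")
home.register("thermostat")
home.set_state("hall-light", "on")
print(home.status("hall-light"))  # on
print(home.status("thermostat"))  # off
```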

3. Automation

Automation is one of the most transformative aspects of IoT applications. By automating processes based on real-time data, IoT can eliminate the need for manual intervention and optimise systems for greater efficiency. Key functions of IoT automation applications include:

  • Smart Decision-Making: Automation is driven by data insights. For instance, an IoT-enabled HVAC system can automatically adjust the temperature based on the number of people in a room or the outside weather.
  • Process Optimisation: IoT sensors may monitor machine performance in manufacturing and trigger automated actions, such as switching production lines or adjusting settings for energy savings. This ensures optimal performance without requiring human oversight.
  • Predictive Automation: Leveraging advanced analytics and machine learning, IoT systems can predict future trends or events, triggering automatic actions. For example, a smart fridge might reorder items when it detects that supplies are running low or based on usage patterns.

In agriculture, IoT-enabled irrigation systems can automatically adjust water flow based on soil moisture readings, ensuring that crops receive optimal care without human input.

4. Data-Driven Insights

One of the most significant advantages of IoT applications is their ability to extract valuable insights from the vast amounts of data generated by devices. These insights can inform business decisions, optimise operations, and improve outcomes across various sectors. Key capabilities of data-driven IoT applications include:

  • Data Analytics: IoT applications often incorporate advanced analytics tools that process and analyse data to generate insights. This can include historical trend analysis, predictive analytics, and anomaly detection.
  • Reporting: The data collected can be presented in comprehensive reports, giving users a detailed view of system performance or activity. This is especially useful for management or decision-makers who rely on actionable insights to make informed choices.
  • Machine Learning and AI: Many IoT systems incorporate machine learning algorithms that allow the system to learn from the data over time, improving its ability to predict future events or optimise performance automatically.

IoT data can track vehicle performance, predict maintenance needs, and enhance fuel efficiency in the automotive industry. Similarly, in the energy sector, IoT applications help to analyse consumption patterns and make adjustments that improve energy efficiency and reduce costs.
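A minimal form of the anomaly detection mentioned above is a z-score filter: readings that deviate from the sample mean by more than a set number of standard deviations are flagged. The threshold and data below are illustrative.

```python
from statistics import mean, stdev

def find_anomalies(values, z_limit=3.0):
    """Flag readings whose z-score exceeds z_limit - a minimal
    stand-in for the anomaly detection used in IoT analytics."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing can be anomalous
    return [v for v in values if abs(v - mu) / sigma > z_limit]

readings = [20.0, 20.2, 19.9, 20.1, 20.0, 20.3, 35.0]  # one spike
print(find_anomalies(readings, z_limit=2.0))  # [35.0]
```

Production pipelines would use rolling windows and robust statistics (median, MAD) instead of a global mean, but the detection principle is the same.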

5. Security and Privacy

IoT applications also play a critical role in securing IoT devices and the data they generate. As the number of connected devices increases, ensuring the privacy and security of sensitive information is essential. IoT security applications focus on:

  • Device Authentication: Ensuring that devices accessing the network are authorised and cannot be tampered with.
  • Data Encryption: Securing data both in transit and at rest to prevent unauthorised access or breaches.
  • Real-time Monitoring: Constantly monitoring the health and security of IoT devices and systems to detect and respond to potential threats.

For example, in a smart home, an IoT security system could monitor unauthorised access attempts and alert homeowners while enabling remote surveillance.

6. Integration with Other Systems

Many IoT applications are not standalone but integrate with other systems or platforms to enhance functionality. These integrations span various sectors, including enterprise resource planning (ERP), customer relationship management (CRM), and cloud platforms. Some common integrations include:

  • ERP Systems: In manufacturing, IoT data can feed into an ERP system, automatically updating inventory levels, tracking production progress, and informing supply chain decisions.
  • Cloud Computing: Many IoT applications rely on cloud infrastructure to store and analyse large datasets, providing scalability and reducing the need for on-premise hardware.
  • Third-Party Services: IoT applications often integrate with third-party platforms, enabling additional capabilities such as weather forecasting, supply chain logistics, or data analytics.

For example, in smart cities, IoT applications integrate with traffic management systems, environmental sensors, and city services, enabling more efficient and responsive urban management.

The true value of IoT applications lies in their ability to convert raw data from connected devices into actionable insights, drive automation, and improve decision-making. Whether for monitoring, control, or automation, IoT applications are revolutionising industries by improving efficiency, reducing costs, and enhancing user experiences. As IoT technology evolves, the potential for even more advanced, intelligent, and integrated applications will only grow, further embedding IoT into our daily lives and business operations.

IoT Network Security Systems

Nowadays, virtually every IoT system processes sensitive data, directly or indirectly, and many of these systems are mission-critical.
As the number of IoT devices grows, the need for robust security measures becomes even more critical. Protecting the sensitive data collected by these devices from unauthorised access, tampering, or misuse is paramount to ensuring the integrity and privacy of users and organisations. Thus, network security systems should be considered when designing IoT networks and systems so that they are secure by design.

Security in IoT Networks:
Security within IoT networks is a multifaceted concern, as IoT devices often operate in decentralised and dynamic environments. These devices communicate through wireless networks, making them vulnerable to various cyberattacks. Given that IoT systems are frequently connected to the cloud or other external networks, vulnerabilities in one device can expose the entire network to risks. Hence, strong security protocols are essential for data protection in these networks.

Key Security Measures

  • Encryption: Encryption is one of the most fundamental techniques to protect data transmitted across IoT networks. It ensures that even if malicious actors intercept data, it remains unreadable without the appropriate decryption key. Both data at rest (stored data) and data in transit (data being transmitted) can be encrypted. IoT devices often use advanced encryption standards (AES), Transport Layer Security (TLS), or Secure Socket Layer (SSL) protocols to safeguard the communication between devices and the cloud or other endpoints. This makes it difficult for attackers to gain meaningful access to sensitive data.
  • Authentication: Authentication verifies the identity of both the devices and the users interacting with the IoT network. With IoT systems often comprising many different types of devices, each with varying levels of capabilities, ensuring that only legitimate devices can join the network is critical. Authentication mechanisms can include device certificates, biometrics, and multi-factor authentication (MFA) for users. Device authentication ensures that only authorised devices can communicate within the network, reducing the risk of a rogue or compromised device gaining access to sensitive information.
  • Authorisation: Once authenticated, the authorisation process dictates what actions a device or user can perform within the network. Authorisation systems define roles and permissions, ensuring that devices only have access to data and resources necessary for their function. For example, a smart thermostat may be authorised to adjust temperature settings but not to access user data stored in the cloud. This limits the potential impact of a compromised device by preventing it from performing unauthorised actions that could lead to data breaches or system failures.
  • Data Integrity: Ensuring data integrity involves preventing unauthorised data alteration. Integrity measures like hash functions or digital signatures verify that the data sent from one device to another has not been tampered with. This is essential in IoT networks where real-time data is constantly being exchanged, as any modification in this data can result in inaccurate readings, malicious activities, or faulty system behaviour.
  • Intrusion Detection and Prevention Systems (IDPS): IoT networks are prone to cyberattacks, such as denial-of-service (DoS) attacks, malware, or unauthorised access attempts. Intrusion detection systems (IDS) and intrusion prevention systems (IPS) are critical in identifying and blocking suspicious activities in real-time. These systems monitor the network for unusual behaviour patterns or unauthorised actions and respond promptly to mitigate potential threats before they can cause harm.
  • Firmware and Software Updates: Keeping devices' firmware and software up to date is essential to IoT security. Security vulnerabilities can be discovered in IoT devices over time. If these devices are not regularly updated with patches or new software versions, they can become easy targets for attackers. Many IoT devices now include features allowing remote updates, ensuring the system remains protected against newly discovered threats.
  • Secure Network Architecture: The design of the IoT network itself plays a crucial role in security. Segmentation of the network can limit the scope of damage if a device is compromised. By creating isolated segments, IoT networks can minimise the impact of a breach, preventing attackers from moving laterally across the entire system. In addition, virtual private networks (VPNs) and private communication channels can enhance security further, protecting communication between devices and their control centres.
  • Physical Security: Physical security is also an essential aspect of IoT device protection besides cyber threats. Devices located in publicly accessible places or vulnerable environments can be tampered with or stolen, leading to a loss of control or data misuse. Protecting IoT devices physically through tamper-resistant hardware, secure storage solutions, and proper disposal methods ensures that attackers cannot quickly gain unauthorised access by physically compromising a device.

Challenges in IoT Security:
While these security measures are critical, implementing them in IoT networks presents several challenges. Many IoT devices have limited computational power and storage, making complex encryption or authentication mechanisms difficult to implement. Additionally, the sheer volume of IoT devices increases the attack surface, making it harder to monitor and respond to every threat. Moreover, the rapid pace of IoT innovation and the frequent introduction of new devices and technologies can lead to inconsistent security practices across the industry, leaving gaps that attackers can exploit.
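The data integrity measure described above (hash functions and message authentication) can be sketched with Python's standard hmac module. The shared key and payload here are illustrative; real deployments provision per-device keys through a secure channel.

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # placeholder; never hard-code keys in production

def sign(payload: bytes) -> str:
    """Attach an HMAC-SHA256 tag so the receiver can verify that
    the payload was not altered in transit (data integrity)."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    # compare_digest avoids timing side channels during comparison
    return hmac.compare_digest(sign(payload), tag)

msg = b'{"sensor":"t1","value":21.5}'
tag = sign(msg)
print(verify(msg, tag))                              # True
print(verify(b'{"sensor":"t1","value":99.9}', tag))  # False: tampered
```

Unlike a plain hash, the keyed HMAC also authenticates the sender, since an attacker without the key cannot forge a valid tag.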

Securing IoT networks requires a comprehensive, multi-layered approach that addresses various security aspects. By implementing measures like encryption, authentication, authorisation, and regular software updates, organisations can significantly reduce the risk of data breaches and unauthorised access to IoT systems. While IoT security presents significant challenges, these challenges can be mitigated with careful planning, robust protocols, and a proactive security strategy.

IoT Networks

An IoT (Internet of Things) network comprises interconnected IoT nodes, including sensors, actuators, and fog nodes. Each IoT node typically includes several key components: a power supply system, a processing unit (such as microprocessors, microcontrollers, or specialised hardware like digital signal processors), communication units (including radio, Ethernet, or optical interfaces), and additional electronic elements (e.g., sensors, actuators, and cooling mechanisms). These components work in unison to enable the node to collect, process, and transmit data effectively, supporting various IoT applications.

The architecture of a typical IoT network is structured into four main layers: the perception layer, the fog layer, the Internet core network (transport layer), and the cloud data centre. This multi-layered structure allows for scalability, efficiency, and optimised data processing.

  • IoT Perception Layer: This foundational layer consists of IoT devices, such as sensors and actuators, responsible for collecting data from their surrounding environment. These devices can range from simple temperature and humidity sensors in smart homes to complex monitoring systems in industrial settings. Depending on their configuration, these devices may perform preliminary data processing to filter or compress data before transmission. For example, motion sensors in a security system might only transmit data when movement is detected, thereby conserving energy and bandwidth. This layer consists primarily of a network of IoT nodes connected directly to each other or to an access point via low-power wireless communication technologies, depending on the topology chosen for the given deployment scenario.
  • Fog Computing Layer: The fog computing layer acts as an intermediary between the IoT devices in the perception layer and the cloud. It provides localised, lightweight processing capabilities that help reduce latency and bandwidth usage. The fog layer can handle real-time data analysis, decision-making, and local storage tasks by processing data closer to the source. This is particularly useful in applications requiring immediate responses, such as autonomous vehicles, healthcare monitoring, and smart manufacturing systems. Fog computing enhances the network's overall performance and reduces the burden on centralised cloud resources.
  • Transport Layer (Internet Core Network): This layer transmits data between the perception and fog layers and the cloud data centre. It is the backbone of IoT communication, leveraging various networking technologies such as wireless networks (e.g., Wi-Fi, LTE, 5G), wired connections (e.g., Ethernet), and even optical networks for high-speed data transfer. The transport layer ensures reliable and secure data flow, using protocols that safeguard data integrity and reduce transmission errors. This layer's efficiency directly impacts the overall responsiveness and performance of the IoT network.
  • Cloud Data Center layer: The cloud data centre layer represents the centralised processing hub where advanced data analytics, complex computation, and long-term data storage occur. It can handle vast amounts of data from IoT devices across the network. The cloud layer employs powerful data analytics tools, machine learning algorithms, and big data technologies to extract insights and generate actionable outcomes. For instance, data collected from smart grids can be analysed to optimise energy distribution, while data from medical sensors can support remote patient monitoring and predictive healthcare interventions. The processed information is returned to users or devices to facilitate informed decision-making or automated physical responses (control of physical systems).
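
The preliminary filtering described above for the perception and fog layers can be sketched in a few lines of Python. This is a minimal illustration only; the threshold and the temperature samples are made-up assumptions, not values from any particular device.

```python
# Sketch of perception/fog-layer preprocessing: a node reports a reading
# only when it differs from the last transmitted value by more than a
# threshold, conserving energy and uplink bandwidth.

def filter_readings(readings, threshold=0.5):
    """Return only the readings worth transmitting upstream."""
    transmitted = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            transmitted.append(value)
            last_sent = value
    return transmitted

samples = [20.0, 20.1, 20.2, 21.0, 21.1, 25.0]
print(filter_readings(samples))  # [20.0, 21.0, 25.0]
```

Only three of the six samples cross the reporting threshold, so the uplink carries half the traffic while preserving every significant change.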

In an IoT network, the seamless integration of these layers enables efficient data collection, processing, and transmission. This layered approach supports diverse applications, from smart homes with automated climate control and security systems to large-scale industrial automation, smart cities, and agricultural monitoring. The robust structure of IoT networks allows for scalable solutions that can adapt to the needs of various industries, enhancing productivity, efficiency, and quality of life.

IoT Network Topologies

An IoT network topology is the structural arrangement of devices (nodes) in an IoT network, shaping how devices communicate and how data flows between them. The choice of topology significantly impacts the network’s performance, reliability, scalability, and cost. Below is an expanded discussion of fundamental IoT network topologies, their attributes, advantages, challenges, and use cases.

1. Star Topology

Star Topology
Figure 24: Star Topology

In a star topology (figure 24), all devices are connected directly to a central hub or gateway, which serves as the network’s communication and coordination point. Nodes within the gateway’s radio coverage can communicate with it directly; a node outside that coverage range is cut off from the network.
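
The coverage constraint can be illustrated with a minimal Python sketch. The gateway position, node coordinates, and the 100-metre range below are hypothetical values chosen for illustration.

```python
import math

# Minimal sketch of a star topology: a node joins the network only if it
# lies within the gateway's radio range; out-of-range nodes are cut off.

GATEWAY = (0.0, 0.0)
RANGE_M = 100.0

nodes = {"n1": (30.0, 40.0), "n2": (60.0, 80.0), "n3": (90.0, 90.0)}

def connected(position):
    """A node is connected iff its distance to the gateway is within range."""
    return math.dist(position, GATEWAY) <= RANGE_M

reachable = {name for name, pos in nodes.items() if connected(pos)}
print(reachable)  # n3 is ~127 m from the gateway, so it is cut off
```

The same check also shows the single point of failure: every node's connectivity depends on the one gateway.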

Advantages

  • Simplicity: Straightforward design makes implementation and maintenance easier.
  • Failure Isolation: If a device fails, it does not affect other devices in the network.
  • Ease of Management: Centralised communication simplifies monitoring and troubleshooting.
  • Low Latency: Direct communication with the hub reduces delays in data transmission.

Disadvantages

  • Single Point of Failure: The entire network is disrupted if the central hub fails.
  • Scalability Limits: The central hub can become a bottleneck as the number of devices increases.
  • Distance Constraints: Communication is limited by the maximum range between devices and the hub.

Use Cases

  • Home Automation: Smart lighting, thermostats, and security cameras communicating with a central hub.
  • Agricultural Monitoring: Sensors reporting soil and weather conditions to a centralised gateway.

2. Tree Topology

 Tree Topology
Figure 25: Tree Topology

Tree topology (figure 25) organises devices hierarchically, with a root node at the top and subsequent devices forming branches at multiple levels. It is a structured extension of the star topology. In this type of topology, some nodes operate as relays for others. If one of the relays fails (crashes or experiences poor link quality), all the descendant nodes that depend on it will be disconnected from the network.
A special case of the tree-of-trees topology, available among others in Bluetooth, is called a scatternet.
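
The dependency on relay nodes can be sketched as follows; the tree structure and the failed relay are made-up examples.

```python
# Sketch of relay failure in a tree topology: when a relay fails, every
# node in its subtree loses its only path to the root.

TREE = {"root": ["r1", "r2"], "r1": ["a", "b"], "r2": ["c"],
        "a": [], "b": [], "c": []}

def reachable_from_root(tree, failed):
    """Collect nodes still reachable from the root, skipping a failed relay."""
    reached, stack = set(), ["root"]
    while stack:
        node = stack.pop()
        if node == failed or node in reached:
            continue
        reached.add(node)
        stack.extend(tree.get(node, []))
    return reached

print(reachable_from_root(TREE, failed="r1"))  # 'a' and 'b' are cut off
```

With relay `r1` down, its descendants `a` and `b` disappear from the reachable set, while the `r2` branch is unaffected.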

Advantages

  • Scalability: Devices can be added at any level of the hierarchy, making it suitable for large networks.
  • Organised Data Flow: Hierarchical design facilitates efficient routing and data aggregation.
  • Distributed Processing: Intermediate nodes can process data locally, reducing load on the root node.

Disadvantages

  • Higher-level Dependency: Failure at higher levels can disconnect entire branches of the network.
  • Complex Setup: Requires careful planning and configuration to optimise performance.
  • Maintenance Challenges: Troubleshooting issues in large tree networks can be time-consuming.

Use Cases

  • Smart Cities: Streetlights and traffic systems are organised hierarchically.
  • Industrial IoT: Layered monitoring systems for production lines or warehouses.

3. Mesh Topology

Mesh Topology
Figure 26: Mesh Topology

In a mesh topology (figure 26), each device is interconnected with one or more devices, creating multiple communication paths. Mesh networks can be partial (some nodes connected) or full (all nodes interconnected). It extends the tree topology by adding redundant paths. Each node in the network has at least two neighbours to which a packet can be transmitted. Therefore, if some nodes fail, multi-hop traffic can be rerouted and the flow is not interrupted.
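
The self-healing behaviour can be illustrated with a short breadth-first-search sketch over a hypothetical partial mesh; the adjacency list below is an invented example, not a real deployment.

```python
from collections import deque

# Sketch of mesh self-healing: with redundant links, traffic is rerouted
# around a failed node instead of being dropped.

MESH = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def shortest_path(graph, src, dst, failed=frozenset()):
    """Breadth-first search that ignores failed nodes; returns a path or None."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited and nxt not in failed:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path(MESH, "A", "D"))                # ['A', 'B', 'D']
print(shortest_path(MESH, "A", "D", failed={"B"}))  # ['A', 'C', 'D']
```

When relay `B` fails, the same query finds the redundant route through `C`; only when all redundant paths are removed does delivery fail.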

Advantages

  • High Reliability: Multiple paths ensure communication continues even if some nodes fail.
  • Self-healing: Dynamic rerouting of data enhances robustness and fault tolerance.
  • Scalability: New devices can be added without significant reconfiguration.
  • Load balancing: The network can implement load balancing easily due to multiple routing paths.
  • Optimal Coverage: Mesh topology can extend communication over large areas.

Disadvantages

  • High Complexity: Implementation and management are challenging, especially in full mesh networks.
  • Advanced Network Stack: Software and hardware implementation of the network stack is more complex due to the need to implement routing mechanisms even for simple IoT nodes.
  • Energy-intensive: Relay nodes must remain active to forward traffic, so devices usually require more power for constant communication and data forwarding.
  • Higher Costs: Increased hardware requirements for maintaining multiple connections.

Use Cases

  • Smart Grids: Power distribution systems with redundancy.
  • Disaster Recovery: Emergency communication networks in affected areas.
  • Industrial IoT: Critical systems requiring fail-safe communication.

4. Linear Topologies

Linear Topology
Figure 27: Linear Topology

Linear topology (figure 27) sequentially connects devices, linking each node to its immediate neighbours. A variation is the linear topology with redundancy, in which each node also connects to the node two positions away in each direction. This setup provides backup routing capabilities in case one of the nodes fails. In linear topologies, all nodes, except for the last one, must be capable of functioning as data relays.
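
The difference between a plain chain and a chain with redundancy can be sketched as follows; the chain length and the failed node are arbitrary illustrative choices.

```python
# Sketch of a linear chain: in the plain form a single relay failure cuts
# off all downstream nodes; with redundancy (an extra link two hops
# ahead), a single failed node can be bypassed.

def reachable(chain_len, failed, redundancy):
    """Nodes reachable from node 0, hopping 1 (or also 2) positions ahead."""
    hops = (1, 2) if redundancy else (1,)
    reached, frontier = {0}, [0]
    while frontier:
        node = frontier.pop()
        for hop in hops:
            nxt = node + hop
            if nxt < chain_len and nxt != failed and nxt not in reached:
                reached.add(nxt)
                frontier.append(nxt)
    return reached

print(reachable(5, failed=2, redundancy=False))  # {0, 1}: downstream lost
print(reachable(5, failed=2, redundancy=True))   # node 2 is bypassed
```

With redundancy, node 1 can hand traffic directly to node 3, so only the failed node itself drops out of the network.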

Advantages

  • Simplicity: Straightforward and cost-effective to set up.
  • Geographic Suitability: Ideal for applications aligned linearly, such as pipelines or conveyor belts.
  • Ease of Expansion: New devices can be added to the ends without disrupting the network.

Disadvantages

  • Single Point of Failure: Failure of any device or connection affects all downstream nodes.
  • Latency: Data travels through intermediate nodes, increasing transmission times.
  • Limited Scalability: Long networks can experience signal degradation.

Use Cases

  • Infrastructure Monitoring: Pipeline integrity, railway tracks, or highways.
  • Agriculture: Sequential monitoring of irrigation systems or crop fields.

5. Bus Topology

Bus Topology
Figure 28: Bus Topology

In a bus topology (figure 28), all devices share a common communication backbone, and data is broadcast across the bus.

Advantages

  • Cost-effectiveness: Minimal cabling requirements reduce deployment costs.
  • Easy Implementation: Straightforward setup and operation.
  • Low Data Collision: Suitable for small networks with limited activity.

Disadvantages

  • Backbone Dependency: Failure of the main communication bus disrupts the network.
  • Performance Limitations: Adding more devices increases collision risk and reduces efficiency.
  • Troubleshooting Challenges: Identifying and resolving faults in the backbone can be difficult.

Use Cases

  • Temporary Monitoring Systems: Event monitoring or short-term projects.
  • Small IoT Deployments: Basic automation in homes or small businesses.

6. Ring Topology

Ring Topology
Figure 29: Ring Topology

Ring topology (figure 29) arranges devices in a closed loop, where data travels around the ring in one or both directions.

Advantages

  • Consistent Performance: Equal access to the network ensures reliable data transmission.
  • Fault Tolerance: Bidirectional communication prevents disruption in case of a single failure.
  • Predictable Data Flow: Ensures orderly and systematic communication.

Disadvantages

  • Failure Sensitivity: A single point of failure can disrupt unidirectional rings.
  • Latency: Larger rings result in longer transmission times.
  • Inflexibility: Adding or removing nodes requires reconfiguration.

Use Cases

  • Industrial Automation: Networks in factories or assembly lines.
  • Sensor Arrays: Environmental monitoring in circular layouts like greenhouses.

7. Hybrid Topology

Hybrid Topology
Figure 30: Hybrid Topology

Hybrid topology (figure 30) combines elements of multiple topologies to create a customised network that leverages their strengths and minimises weaknesses.

Advantages

  • Flexibility: Adaptable to a wide range of applications and environments.
  • Scalability: Supports growth by integrating different topologies as needed.
  • Resilience: Combines the reliability of mesh or tree structures with the simplicity of star or bus designs.

Disadvantages

  • Complexity: Design and configuration are challenging due to heterogeneous components.
  • High Costs: Increased hardware and implementation expenses.
  • Integration Issues: Ensuring smooth communication between different topologies can be difficult.

Use Cases

  • Smart Cities: Integrating smart homes, traffic systems, and utility monitoring into a unified network.
  • Industrial IoT: Complex systems requiring multiple topology types for optimal performance.

Choosing the proper IoT network topology requires carefully evaluating the application’s needs, including reliability, scalability, cost, and energy efficiency. Often, IoT deployments use a combination of topologies to optimise performance across diverse requirements. Understanding each topology’s strengths and limitations is essential for designing effective IoT networks.

IoT Network Design Consideration and Challenges

Designing an Internet of Things (IoT) network requires tackling an intricate mix of technical, operational, and economic factors, which stem from the diverse requirements and constraints of IoT applications. It is essential to consider these factors when designing IoT networks. Below is a brief discussion of these factors and challenges, also listed in figure 31.

IoT Network Design Consideration and Challenges
Figure 31: IoT Network Design Consideration and Challenges

Hardware Limitations

IoT devices are typically constrained by size, cost, and power limitations. These limitations present several design challenges:

  • Processing Power: Most IoT devices use low-power microcontrollers with limited computational capabilities. These devices struggle with resource-intensive tasks, requiring reliance on edge or cloud computing for complex data processing.
  • Memory Constraints: Limited memory affects the ability to store data locally or run advanced algorithms. Devices often rely on real-time data transmission to compensate, which can strain network resources.
  • Environmental Durability: Devices deployed in outdoor or industrial environments must endure extreme conditions like temperature fluctuations, dust, moisture, or physical impact, necessitating rugged and resilient designs.
  • Cost-efficiency vs. Capability: Budget constraints for mass production often limit the use of high-performance materials or components, pushing manufacturers to balance functionality and affordability.

Range

IoT networks vary significantly in terms of communication range, which influences their architecture and cost:

  • Short-range Communication: Technologies like Zigbee, BLE, and Wi-Fi are suitable for localised applications like smart homes but less effective for large-scale deployments without additional infrastructure.
  • Long-range Communication: LoRaWAN, Sigfox, and NB-IoT provide extensive coverage for smart cities or agricultural monitoring, but they often have lower data rates, making them unsuitable for high-bandwidth applications.
  • Obstacles and Signal Loss: Signals may degrade due to physical barriers, interference, or weather conditions, requiring strategic placement of gateways and nodes to maintain reliable coverage.
  • Multi-hop Networks: Mesh networks help extend the range by using intermediate nodes but introduce complexity in routing and potential latency issues.
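
One way to see why range and frequency trade off against each other is the free-space path loss (FSPL) formula, FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44. The sketch below compares a sub-GHz link and a 2.4 GHz link over the same distance; the distances and frequencies are illustrative, and real deployments also suffer losses from obstacles and interference.

```python
import math

# Back-of-the-envelope free-space path loss, illustrating why sub-GHz
# long-range links (e.g. around 868 MHz) reach farther than 2.4 GHz
# links at the same transmit power and receiver sensitivity.

def fspl_db(distance_km, freq_mhz):
    """FSPL (dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return (20 * math.log10(distance_km)
            + 20 * math.log10(freq_mhz) + 32.44)

loss_868 = fspl_db(5.0, 868.0)    # ~105 dB over 5 km at 868 MHz
loss_2400 = fspl_db(5.0, 2400.0)  # ~114 dB over 5 km at 2.4 GHz
print(round(loss_868, 1), round(loss_2400, 1))
```

The roughly 9 dB difference at the same distance is part of why long-range IoT technologies favour sub-GHz bands, accepting lower data rates in exchange.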

Bandwidth

Efficient bandwidth management is critical to ensure the smooth operation of IoT networks:

  • Diverse Application Demands: Some applications, such as video surveillance, require high bandwidth, while others, like temperature sensors, need minimal data transfer. This variability complicates resource allocation.
  • Spectrum Limitations: IoT networks often rely on shared, unlicensed spectrum, which can become congested, particularly in dense urban environments.
  • Scalability: As the number of devices in a network grows, ensuring consistent performance becomes increasingly complex, necessitating advanced traffic management techniques.
  • Optimisation Strategies: Technologies like edge computing, data compression, and prioritisation protocols help reduce bandwidth consumption and ensure critical data is transmitted first.
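
The prioritisation strategy mentioned above can be sketched with a simple priority queue; the priority levels and message payloads are made-up examples of how a gateway might order its transmit queue.

```python
import heapq

# Sketch of traffic prioritisation at a gateway: the transmit queue is
# drained in priority order (lower number = more urgent), so critical
# alarms go out before bulk telemetry.

queue = []
heapq.heappush(queue, (2, "telemetry: temp=21.3"))
heapq.heappush(queue, (0, "alarm: smoke detected"))
heapq.heappush(queue, (1, "status: battery=80%"))

order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(order)  # the alarm is transmitted first, regardless of arrival order
```

Even though the alarm arrived second, it leaves the queue first; under congestion, low-priority telemetry is the traffic that waits.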

Energy Consumption and Battery Life

Energy efficiency is vital for IoT devices, especially those deployed in remote locations:

  • Power Constraints: Devices are often battery-powered, and replacing batteries frequently is impractical in large-scale or inaccessible deployments.
  • Energy-efficient Protocols: Protocols like Zigbee, Z-Wave, and LoRa are designed for low-power operation but come with data rate or latency trade-offs.
  • Energy Harvesting: Emerging technologies such as solar panels, kinetic energy systems, or thermoelectric generators aim to extend device lifespans but are still cost-prohibitive for widespread use.
  • Smart Sleep Modes: Devices can conserve energy by entering low-power states when not actively transmitting data. However, this approach may affect responsiveness in latency-sensitive applications.
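
The impact of sleep modes can be made concrete with a duty-cycle estimate of average current draw. All the currents, timings, and the battery capacity below are illustrative assumptions, not measurements of any specific device.

```python
# Worked example of the sleep-mode trade-off: a node that wakes briefly
# to transmit and sleeps the rest of the time has an average current far
# below its active current, which dominates battery life.

def battery_life_days(capacity_mah, active_ma, sleep_ma, active_s, period_s):
    """Estimate lifetime from the duty-cycled average current draw."""
    duty = active_s / period_s
    avg_ma = active_ma * duty + sleep_ma * (1 - duty)
    return capacity_mah / avg_ma / 24  # hours -> days

# 2000 mAh cell; 20 mA active for 1 s every 10 minutes; 5 uA asleep
print(round(battery_life_days(2000, 20.0, 0.005, 1.0, 600.0)))  # ~2174 days
```

Under these assumptions the node lasts roughly six years, whereas the same node kept permanently active at 20 mA would drain the cell in about four days; this is why longer sleep intervals are traded against responsiveness.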

Quality of Service (QoS)

Delivering consistent performance in IoT networks is challenging due to the following factors:

  • Intermittent Connectivity: Devices in remote or mobile scenarios may experience connectivity disruptions, impacting real-time applications.
  • Data Collisions: Shared communication channels, particularly in wireless systems like Wi-Fi, can suffer from packet collisions, leading to retransmissions and delays.
  • Interference: Overlapping frequencies with other devices (e.g., Wi-Fi routers or microwave ovens) can degrade signal quality.
  • Reliability and Maintenance: IoT devices often operate in hard-to-access locations, necessitating designs prioritising minimal maintenance and high reliability. Predictive maintenance and robust hardware design can mitigate failure risks.

Security

Security remains one of the most critical and challenging aspects of IoT network design:

  • Device Vulnerabilities: Many IoT devices lack the computational power for robust encryption, making them susceptible to attacks.
  • Network-wide Threats: Breaches at any node can compromise the entire network, as seen in botnet attacks like Mirai.
  • Data Protection: IoT networks handle sensitive information such as personal health data or industrial process details, requiring stringent data security measures.
  • Scalability of Security Solutions: Implementing secure authentication, firmware updates, and key management for thousands or millions of devices is a significant logistical challenge.

Flexibility

IoT networks need to be adaptable to meet evolving application requirements:

  • Orchestration: Dynamic management of devices and their data flows, enabled by technologies like Software-Defined Networking (SDN), improves network efficiency and adaptability.
  • Programmability: Support for remote updates and over-the-air (OTA) firmware upgrades ensures the network can incorporate new functionalities without requiring hardware replacements.
  • Modularity: Modular designs enable easy expansion or integration of new devices and technologies, reducing future upgrade costs.

Cost

Balancing performance and affordability is a persistent challenge in IoT network design:

  • Device Costs: Manufacturers must keep hardware costs low without sacrificing essential features.
  • Infrastructure Investments: Deploying gateways, repeaters, or base stations for network coverage increases initial setup costs.
  • Operational Costs: Power consumption, connectivity subscriptions, and periodic maintenance contribute to long-term expenses.
  • Scalability: While economies of scale can lower per-device costs, initial deployments often face high upfront costs, deterring smaller organisations.

Interoperability

Ensuring seamless interaction between diverse devices and platforms is essential for IoT success:

  • Protocol Diversity: Ensuring device compatibility is complex given the many communication standards in use (e.g., Zigbee, Z-Wave, MQTT).
  • Vendor Lock-in: Proprietary solutions may restrict the integration of third-party devices, limiting network flexibility.
  • Standardised APIs: Developing and adopting universal APIs and communication frameworks facilitates interoperability and enhances ecosystem collaboration.

User Interface Requirements

The usability of IoT systems directly impacts user adoption and satisfaction:

  • Ease of Use: Intuitive interfaces are essential for non-technical users to configure and monitor devices.
  • Customisation Options: Advanced users require customisable dashboards and control mechanisms to meet specific application needs.
  • Cross-platform Accessibility: Interfaces must function seamlessly across smartphones, tablets, and computers.

Standardisation

A lack of unified standards hinders IoT scalability and integration:

  • Fragmented Ecosystem: The coexistence of multiple, often incompatible standards complicates device interoperability.
  • Regulatory Variations: Differences in regional regulations, such as spectrum allocation, further complicate standardisation.
  • Continuous Evolution: Rapid technological advancements necessitate frequent updates to standards, leading to inconsistencies during transition periods.

In addressing these considerations, IoT network designers must adopt a holistic approach that balances technical requirements, user needs, and cost constraints while embracing innovation and collaboration to build scalable, reliable, and secure systems.

IoT Communication and Networking Technologies

The backbone of the Internet of Things (IoT) lies in its communication and networking technologies, which enable the seamless interconnection of devices and facilitate data exchange across networks. These technologies are fundamental to the functioning of IoT systems and are tailored to meet various needs, including scalability, energy efficiency, cost, and performance. They can be broadly categorised into network access technologies, networking technologies, and high-level communication protocols. A sample protocol stack for IoT communication networks is presented in figure 32.

Many IoT protocols span the network communication stack and implement more than one layer (e.g., BLE); still, the figure is simplified to show each protocol's primary layer.
Sample IoT Communication Network Stack
Figure 32: Sample IoT Communication Network Stack

The IoT Network Access Technologies

IoT network access technologies serve as the backbone of the Internet of Things (IoT) ecosystem by providing the essential means to connect devices to a network and enable seamless data communication. These technologies ensure that devices, sensors, and actuators can transmit and receive data efficiently, allowing the coordination and functionality required for IoT applications. The choice of technology depends on the specific requirements of the IoT application, which may vary significantly based on factors such as range, power consumption, data rate, cost, network density, and environmental constraints.

For example, IoT applications in smart homes and wearable technology prioritise low power consumption and short-range connectivity. In contrast, industrial IoT, smart agriculture, and smart cities often require long-range communication with low power usage to connect devices spread across large areas. Understanding the strengths and limitations of each access technology is critical to optimising network performance, reliability, and cost-effectiveness. IoT access technologies can be broadly categorised into short-range and long-range communication technologies, each tailored to address specific use cases in IoT deployments:

Short-Range Technologies

Short-range technologies are designed for communication over short distances, typically from a few centimetres to a few hundred metres. They are often used in localised IoT applications like smart homes, wearable devices, and industrial automation.

Examples include technologies like Radio Frequency Identification (RFID), which is widely used for inventory tracking; Near Field Communication (NFC), which powers secure contactless payments; and Bluetooth Low Energy (BLE), which supports low-power connections in consumer electronics and medical devices. Short-range communication technologies are typically characterised by low latency, making them ideal for applications requiring frequent and real-time communication between devices.

1. Radio Frequency Identification (RFID)

Description
Radio Frequency Identification (RFID) technology leverages electromagnetic fields to wirelessly identify, track, and communicate with objects. The system typically consists of two main components: RFID tags, which contain stored data, and RFID readers, which capture and process this data. The tags can be attached to physical objects, enabling them to transmit information when brought into proximity with an RFID reader.

RFID tags are further classified into two types:

1. Passive RFID Tags

  • These tags do not have an internal power source and rely on the electromagnetic energy emitted by the reader to activate and transmit data.
  • They are cost-effective, lightweight, and widely used in retail inventory management and supply chain tracking applications.
  • Passive tags have a limited read range, typically a few centimetres to a few metres.

2. Active RFID Tags

  • These tags are equipped with an onboard battery, enabling them to transmit signals over longer distances, often up to several hundred metres.
  • They are ideal for applications requiring extended range or continuous tracking, such as asset management in extensive facilities or vehicle monitoring.

RFID systems operate across various frequency ranges, including:

  • Low Frequency (LF): 125–134 kHz, suitable for short-range applications like animal tracking.
  • High Frequency (HF): 13.56 MHz, commonly used for contactless payment systems and library management.
  • Ultra-High Frequency (UHF): 860–960 MHz, enabling faster read speeds and longer ranges, ideal for logistics and inventory management.

Applications

RFID technology is widely employed in various sectors, including:

  • Retail: For inventory tracking and anti-theft systems.
  • Healthcare: To manage medical equipment and patient identification.
  • Transportation: For toll collection and fleet management.
  • Logistics: To streamline supply chain operations by automating tracking and reducing manual errors.

RFID's ability to wirelessly and efficiently capture real-time data has made it an indispensable tool in IoT applications, bridging the gap between physical objects and digital systems.

Advantages

  • Passive RFID tags are battery-free, inexpensive, and durable.
  • Ideal for inventory management, logistics, and asset tracking.
  • High-speed identification even in bulk item scenarios.

Limitations

  • Limited operational range (a few centimetres to a few meters).
  • Performance can be impacted by interference from metals or liquids.

2. Near Field Communication (NFC)

Near-field communication (NFC) is a specialised subset of Radio Frequency Identification (RFID) technology that enables wireless communication between devices over a very short range, typically 10 centimetres or less. Operating at a frequency of 13.56 MHz, NFC facilitates secure, fast, and convenient data exchange by bringing two NFC-enabled devices close together. Unlike standard RFID systems, NFC allows bidirectional communication, meaning both devices can send and receive data. This feature makes NFC more versatile, enabling it to support a broader range of applications beyond simple identification and tracking.

Key Characteristics of NFC

  • Short Range: NFC's limited communication range enhances security by reducing the likelihood of unauthorised data interception.
  • Ease of Use: NFC interactions require minimal setup and are typically initiated by tapping or bringing devices close together.
  • Low Power Consumption: NFC is energy-efficient and can operate in passive mode, where one device (e.g., an NFC card) does not require its own power source and is powered by the electromagnetic field generated by the active device (e.g., a smartphone or reader).

Modes of Operation

NFC supports three primary modes of operation:

  • Peer-to-Peer Mode: This mode allows two NFC-enabled devices, such as smartphones, to exchange data directly. It is commonly used for file sharing or contact information exchange.
  • Read/Write Mode: This mode allows an NFC-enabled device to read data from or write data to an NFC tag, such as scanning product information in retail or retrieving digital content from a poster.
  • Card Emulation Mode: This mode enables an NFC device to act as a contactless card, which is commonly used in payment systems, access control, or public transportation.

Applications

NFC is widely adopted in various domains due to its security, simplicity, and versatility:

  • Contactless Payments: Used in services like Apple Pay, Google Pay, and Samsung Pay, enabling secure, tap-to-pay transactions.
  • Access Control: For secure entry to buildings, offices, or vehicles using NFC-enabled cards or smartphones.
  • Public Transportation: Simplifies ticketing and fare collection with NFC-based cards or mobile apps.
  • Retail and Marketing: Enhances customer engagement by enabling interactions with NFC-enabled posters, smart shelves, or product labels.
  • Healthcare: Facilitates patient identification, medical equipment tracking, and secure data sharing between devices.
  • IoT Integration: NFC is increasingly used to quickly configure and pair IoT devices, such as smart home gadgets or wearables.

NFC's combination of security, ease of use, and broad application potential makes it a cornerstone technology in the modern IoT ecosystem. It seamlessly connects devices and services for enhanced user experiences.

Advantages

  • Highly secure due to proximity requirements.
  • Simple to use and ideal for contactless payments, secure access, and peer-to-peer sharing applications.

Limitations

  • Extremely short range limits broader IoT applications.
  • Less efficient for high-speed or high-volume data transfer.

3. Bluetooth Low Energy (BLE)

Bluetooth Low Energy (BLE) is an advanced iteration of Bluetooth technology designed to meet low-power IoT application demands. It operates in the globally available 2.4 GHz Industrial, Scientific, and Medical (ISM) frequency band and is engineered to balance power efficiency, performance, and cost. BLE is ideal for devices requiring long battery life and intermittent data transmission, such as wearables, sensors, and smart home gadgets.

Key Features of BLE

  • Low Power Consumption: BLE uses significantly less energy than classic Bluetooth by employing optimised communication protocols and a sleep-mode mechanism, where the device remains inactive until data transmission is needed.
  • Efficient Data Exchange: BLE is designed for low-data-rate applications, utilising smaller data packets and streamlined connection setups to reduce overhead and improve efficiency.
  • Wide Compatibility: BLE is widely supported by modern smartphones, tablets, and computing devices, enabling seamless communication across various IoT ecosystems.
  • Range: BLE offers a communication range of up to 100 metres (depending on environmental factors), which makes it suitable for short- to medium-range applications.
  • Secure Communication: BLE supports advanced encryption and authentication mechanisms, ensuring secure data transfer between devices.
  • Adaptive Frequency Hopping (AFH): BLE uses AFH to avoid interference in crowded 2.4 GHz bands, improving reliability in environments with multiple wireless technologies.

Advantages of BLE

  • Extended Battery Life: Devices can run for months or even years on small batteries, making BLE ideal for IoT applications with constrained power sources.
  • Cost-Effectiveness: BLE modules are affordable and easily integrated into IoT devices.
  • Flexibility: BLE supports many IoT use cases, from simple sensor networks to interactive user device applications.

Limitations of BLE

  • Limited Bandwidth: BLE is optimised for small data transfers, which may not be suitable for high-bandwidth applications like streaming audio or video.
  • Shorter Range than Some LPWANs: While BLE offers moderate range, it falls short compared to long-range IoT technologies like LoRa or SigFox.
  • Interference: Operating in the 2.4 GHz band can lead to interference in environments with overlapping WiFi, classic Bluetooth, or other wireless signals.

Applications of BLE

  • Wearable Devices: BLE is widely used in fitness trackers, smartwatches, and medical wearables due to its low power needs and compatibility with smartphones.
  • Smart Home: Enables communication between smart home devices like lights, locks, and thermostats.
  • Beacons: BLE-based beacons are used for proximity-based services, including indoor navigation, retail promotions, and asset tracking.
  • Healthcare: Facilitates wireless connectivity in medical devices for monitoring vital signs, transmitting data to healthcare providers, and ensuring patient mobility.
  • Industrial IoT: Used in predictive maintenance and environmental monitoring through BLE-enabled factory sensors.
  • Gaming and AR/VR: Supports controllers and peripherals for augmented reality (AR), virtual reality (VR), and gaming systems.

BLE is a key enabler of the IoT revolution, bridging devices with varying resource constraints and providing robust, energy-efficient connectivity. Its versatility makes it a popular choice for applications requiring cost-effective, low-power wireless communication, making it integral to the growth of interconnected smart systems.

4. Zigbee

Description

Zigbee is a wireless communication protocol designed specifically for low-power, low-data-rate applications, making it a popular choice for Internet of Things (IoT) networks. It operates primarily in the 2.4 GHz ISM band but can also use 868 MHz (Europe) and 915 MHz (US) bands, offering global versatility. Zigbee is well-suited for applications requiring short-range communication and mesh networking, such as smart homes, industrial automation, and healthcare monitoring systems.

Key Features of Zigbee

1. Low Power Consumption: Zigbee is optimised for battery-powered devices that need to run for extended periods (typically several years) without frequent battery replacements or recharges. It achieves this by keeping active transmission periods short and spending most of the time in low-power sleep states, making it ideal for sensor networks and other energy-constrained IoT applications.

2. Mesh Networking

  • One of Zigbee's standout features is its mesh networking capability, which allows devices to relay messages to one another. In a Zigbee mesh network, devices can act as routers, which means data can be transmitted across longer distances by hopping through intermediate devices. This increases the network's range and reliability compared to simple point-to-point communication.
  • Mesh networking also adds redundancy, enhancing the network's resilience: because multiple paths are available for data to travel, the network can dynamically reroute traffic around failed nodes or interference.
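The routing benefit described above can be illustrated with a toy simulation. Note this is not Zigbee's actual routing protocol (which uses AODV-style route discovery); it is only a breadth-first search over a hypothetical four-node network showing how multi-hop relaying extends reach and how traffic reroutes when a router fails.

```python
# Toy mesh-routing illustration: messages hop through intermediate
# routers, and if one router fails, an alternative path is found.
from collections import deque

def find_route(links, src, dst, failed=frozenset()):
    """Breadth-first search for a multi-hop route, skipping failed nodes."""
    queue, visited = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in visited and nxt not in failed:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None                               # destination unreachable

# Hypothetical network: coordinator C, routers R1/R2, end device E.
links = {"C": ["R1", "R2"], "R1": ["C", "R2", "E"],
         "R2": ["C", "R1", "E"], "E": ["R1", "R2"]}

print(find_route(links, "C", "E"))                  # route via R1
print(find_route(links, "C", "E", failed={"R1"}))   # reroutes via R2
```

The second call shows the redundancy argument in miniature: losing R1 does not partition the network, because a second path through R2 exists.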

3. Short-Range Communication

  • Zigbee is designed for short-range communication, typically in the range of 10-100 meters in an open environment. However, the actual range can be extended in a mesh configuration using additional devices as repeaters. This short-range capability makes Zigbee ideal for applications where devices are in close proximity to one another, such as home automation or industrial control systems.

4. Low Data Rates

  • Zigbee supports low data rates, typically 20 kbps to 250 kbps, which is sufficient for applications that transmit small amounts of data at infrequent intervals. For instance, it works well in applications like smart lighting, environmental monitoring, and security systems, where exchanging data does not require high bandwidth.

5. Security

  • Zigbee provides robust security features, including AES-128 encryption for data confidentiality, message integrity, and authentication. This is important in IoT applications where secure communication is crucial, such as healthcare, home automation, and industrial systems.

6. Scalability

  • Zigbee networks can support large numbers of devices. The mesh networking model allows Zigbee networks to scale efficiently, as additional devices can be added without disrupting the overall network performance. Zigbee can support networks with up to 65,000 devices, making it suitable for small-scale and large-scale IoT deployments.

Zigbee Network Topologies

Zigbee supports multiple network topologies, each suited for different application requirements:

  • Star Topology: In a star topology, devices communicate directly with a central coordinator. This is a simpler topology where the central coordinator is the hub that manages the communication of all connected devices. It is often used in small-scale deployments where simplicity is key.
  • Mesh Topology: In a mesh topology, devices (known as routers) can communicate with each other, forwarding data to other devices if necessary. The coordinator manages the network, while routers extend the range and redundancy. This topology is ideal for larger deployments where robustness and reliability are essential, such as in industrial or smart home applications.
  • Cluster Tree Topology: A combination of the star and mesh topologies, this structure features a central coordinator, and child devices communicate with their parent device (router). This topology is commonly used in large networks requiring hierarchical organisation.

Applications of Zigbee

Zigbee is used in various IoT applications, especially those that require low power, short-range communication, and mesh networking. Some of the key applications include:

  • Smart Homes: Zigbee is commonly used in smart home devices such as smart lighting, smart locks, thermostats, motion sensors, and security systems. Its low power consumption and mesh networking capabilities are ideal for creating scalable and reliable home automation solutions.
  • Industrial IoT (IIoT): Zigbee is used in industrial environments for asset tracking, monitoring equipment, environmental sensing, and process automation. It enables efficient communication among various sensors and control devices, ensuring smooth operations in factories and warehouses.
  • Healthcare and Medical Monitoring: Zigbee can be used in healthcare applications such as patient monitoring systems, wearable health devices, and remote patient management. Its low energy usage ensures that devices like wearable sensors can operate for extended periods without frequent battery changes.
  • Smart Energy Management: Zigbee is widely used in smart meters for energy consumption monitoring, building energy management systems, and smart grid applications. Its ability to communicate with multiple devices in a mesh network is beneficial for monitoring and managing energy usage efficiently.
  • Agriculture and Environmental Monitoring: Zigbee is used in precision agriculture to monitor soil moisture, weather conditions, and crop health. The mesh network capability enables long-range coverage over large agricultural fields, where sensor data must be routed across vast distances.

Advantages of Zigbee

  • Low Power Consumption: Zigbee's energy efficiency makes it ideal for IoT applications requiring long battery life, such as sensor networks or devices that must operate continuously without recharging.
  • Scalability and Range: The mesh networking model allows Zigbee networks to scale easily, supporting thousands of devices and extending communication range over large areas by utilising intermediate routers.
  • Security: Zigbee provides strong security features, including encryption and authentication, to ensure safe and private communication, which is crucial in many IoT applications.
  • Interoperability: Zigbee is an open standard, meaning that devices from different manufacturers can work together, creating a flexible ecosystem for IoT applications.

Limitations of Zigbee

  • Limited Data Rate: The low data rate (20-250 kbps) makes Zigbee unsuitable for high-bandwidth applications, such as video streaming or large file transfers.
  • Limited Range: While Zigbee supports mesh networking to extend range, its direct communication range is limited to around 10-100 meters, which may not be sufficient for large outdoor deployments without additional devices to extend the coverage.
  • Congestion in High-Density Networks: In environments with a large number of Zigbee devices, such as crowded smart home networks, communication congestion can occur, affecting performance.

Zigbee is a versatile and energy-efficient IoT networking technology that is well-suited for a wide range of low-power, short-range applications. Its mesh networking capabilities, low power consumption, and scalability make it an excellent choice for smart homes, industrial IoT, healthcare, and energy management systems. While it may not be ideal for high-bandwidth applications, it excels in use cases where small amounts of data must be transmitted over a reliable and resilient network of devices.

Long-Range Technologies

Long-range communication technologies are designed to connect devices over large distances, often spanning several kilometres. These technologies are critical for IoT deployments in rural areas, industrial environments, and outdoor applications like smart agriculture, smart cities, and environmental monitoring. Long-range technologies prioritise energy efficiency and scalability, often sacrificing data rates to ensure consistent performance in low-power and resource-constrained environments.

Notable examples include Low-Power Wide-Area Networks (LPWAN) technologies like LoRa and SigFox, which enable long-range communication with minimal power consumption. Cellular IoT technologies such as Narrowband IoT (NB-IoT) and LTE-M leverage existing mobile networks to provide reliable and scalable connectivity for IoT devices. Additionally, satellite IoT solutions extend coverage to remote and maritime areas, enabling global IoT connectivity.

Low Power Wide Area Networks (LPWAN)

LPWAN technologies are a class of wireless communication protocols engineered to meet the unique demands of IoT applications requiring long-range connectivity, low power consumption, and support for massive deployments. These technologies are particularly suited for scenarios where devices operate on limited power sources, such as batteries, for extended periods—sometimes years—while transmitting small amounts of data over long distances. LPWANs have become a cornerstone of outdoor IoT deployments, enabling connectivity in areas where traditional networking solutions like WiFi or cellular networks would be inefficient or too costly. They are commonly used in applications ranging from environmental monitoring to smart agriculture and industrial IoT.

Key Characteristics of LPWAN

  • Low Power Consumption: LPWAN technologies are designed for energy efficiency, allowing devices to function on minimal power for prolonged periods. This is achieved through efficient data encoding and duty cycling.
  • Extended Range: LPWAN systems can communicate over distances ranging from several kilometres in urban areas to over 10-15 kilometres in rural or open environments. This range depends on the specific technology and environmental factors.
  • Low Data Rate: LPWANs are optimised for transmitting small payloads, typically a few bytes to kilobytes. This makes them ideal for IoT applications requiring periodic updates, such as sensor readings or status reports.
  • Cost Efficiency: LPWAN solutions minimise operational and deployment costs through lightweight infrastructure and simple device designs. Many LPWAN networks, such as LoRa and SigFox, operate in unlicensed frequency bands, reducing spectrum costs.
  • Massive Device Support: LPWAN networks can handle thousands to millions of connected devices per gateway, making them ideal for large-scale IoT deployments such as smart cities or industrial monitoring.
  • Varied Spectrum Usage: LPWAN technologies operate in both unlicensed (e.g., ISM bands) and licensed (e.g., cellular) spectrum, providing flexibility in deployment and regulatory compliance.

Advantages of LPWAN

  • Prolonged Device Lifespan: Suitable for battery-powered devices operating for years without frequent maintenance.
  • Wide Coverage: Facilitates connectivity in remote or hard-to-reach areas, such as rural farms or underground infrastructure.
  • Cost-Effective Infrastructure: Enables low-cost IoT deployments compared to traditional cellular solutions.

Challenges of LPWAN

  • Limited Data Throughput: LPWAN is unsuitable for high-bandwidth applications like video streaming or real-time communication.
  • Network Latency: Increased latency in some LPWAN solutions may not suit time-sensitive applications.
  • Fragmentation: The variety of LPWAN standards can create compatibility and interoperability challenges.

Applications of LPWAN

  • Smart Agriculture: LPWAN enables remote monitoring of soil conditions, crop health, and weather patterns, helping farmers optimise resource use and improve yield.
  • Environmental Monitoring: Used for tracking air quality, water levels, and wildlife movement in remote areas.
  • Smart Cities: Facilitates IoT solutions such as smart street lighting, waste management, and parking systems.
  • Industrial IoT: Monitors factory equipment performance and environmental conditions, reducing downtime and enhancing productivity.
  • Utilities: Powers smart meters for gas, electricity, and water, enabling efficient resource management and billing.
  • Asset Tracking: Ensures real-time location monitoring of goods, vehicles, or livestock over vast areas.

LPWAN technologies have revolutionised IoT by addressing the challenges of long-range communication and energy efficiency. They continue to drive innovation in industries requiring scalable, low-cost connectivity across diverse and remote environments.

1. LoRa (Long Range)

Description

LoRa (Long Range) is a leading networking technology used for long-range, low-power, and low-data-rate IoT (Internet of Things) applications. It is part of the LPWAN (Low Power Wide Area Network) family, specifically designed to meet the unique needs of IoT systems by offering long-range communication capabilities while maintaining energy efficiency. LoRa technology is best known for its ability to support IoT devices deployed across vast areas, including rural and remote locations. It is ideal for many use cases, from smart cities to agriculture and environmental monitoring.

LoRa uses a Chirp Spread Spectrum (CSS) modulation technique, which is central to its ability to provide long-range communication while keeping power consumption low. Chirp Spread Spectrum spreads the signal over a wide frequency band, making it more resilient to interference, improving the signal-to-noise ratio, and allowing extended-range communications. This feature enables LoRa to perform well in various environments, even where traditional wireless communication technologies like WiFi or Bluetooth would struggle.

LoRa operates in unlicensed frequency bands (typically 868 MHz in Europe, 915 MHz in North America, and 433 MHz in Asia). IoT devices using LoRa can communicate without paying spectrum licenses, reducing deployment costs.

Key Features of LoRa Technology

  • Long-Range Communication: One of LoRa's most significant advantages is its ability to communicate over long distances. In ideal conditions, LoRa devices can transmit data over distances up to 10-15 kilometres (6-9 miles) in rural areas and 2-5 kilometres (1-3 miles) in urban environments. This long-range capability enables the deployment of IoT applications in areas that would otherwise be inaccessible using short-range wireless technologies like WiFi or Bluetooth.
  • Low Power Consumption: LoRa is optimised for low-power operation, making it ideal for IoT devices that run on small batteries for extended periods (often years). Devices can be configured to send data infrequently, and the technology is designed to support low-duty cycles, meaning devices use minimal power when idle. This energy efficiency makes LoRa well-suited for applications such as remote sensors, agricultural monitoring, and asset tracking, where battery life is critical.
  • Low Data Rate: LoRa is designed for low-data-rate applications, typically in the range of 0.3 kbps to 27 kbps, depending on the environmental conditions and the device's configuration. While it is unsuitable for high-bandwidth applications like video streaming or real-time voice communication, it is perfect for transmitting small amounts of data, such as sensor readings, device status updates, or location information.
  • Scalability: LoRa networks are highly scalable, meaning many IoT devices can be added to a network without overwhelming the infrastructure. LoRa uses a star topology, where end devices communicate with gateways (base stations), which relay the data to a central server or cloud platform. This star network structure allows for easy network expansion by adding more gateways to increase coverage and support more devices.
  • Resilience to Interference: LoRa's Chirp Spread Spectrum modulation technique helps improve resilience to interference. This is particularly important in urban environments with high RF (radio frequency) interference from other wireless devices. The wide frequency range used by LoRa allows it to operate effectively in noisy environments, making it a reliable choice for IoT deployments in challenging conditions.
  • Geolocation Capabilities: LoRa can provide geolocation services without needing GPS, which is especially useful for asset tracking and fleet management applications. A device's position can be estimated by multilateration, using the signal strength or time of arrival measured at multiple gateways, even in areas where GPS signals are weak or unavailable; typical accuracy is in the tens to hundreds of metres.
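The range/data-rate trade-off in the features above is governed by the spreading factor, and its cost can be quantified as time-on-air. The sketch below implements the airtime formula from Semtech's SX127x datasheet; the parameters (SF7, 125 kHz bandwidth, coding rate 4/5, 8-symbol preamble) are common LoRaWAN defaults.

```python
# LoRa time-on-air per the Semtech SX127x datasheet formula. Higher
# spreading factors lengthen each symbol (2^SF / BW), trading data rate
# for range and receiver sensitivity.
import math

def lora_airtime_ms(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                    preamble_syms=8, explicit_header=True, crc=True,
                    low_dr_optimize=False):
    t_sym = (2 ** sf) / bw_hz * 1000          # symbol duration in ms
    ih = 0 if explicit_header else 1          # implicit-header flag
    de = 1 if low_dr_optimize else 0          # low-data-rate optimisation
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    payload_syms = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_syms + 4.25) * t_sym
    return t_preamble + payload_syms * t_sym

# A 12-byte sensor reading: fast at SF7, dramatically slower at SF12
print(f"SF7 : {lora_airtime_ms(12, sf=7):.1f} ms")
print(f"SF12: {lora_airtime_ms(12, sf=12, low_dr_optimize=True):.0f} ms")
```

The same 12-byte message takes roughly 41 ms at SF7 but over a second at SF12, which is why duty-cycle regulations bite hardest on long-range, high-SF links.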

LoRaWAN – The Network Protocol

LoRaWAN (LoRa Wide Area Network) is the protocol that operates on top of LoRa and enables communication between devices and a central server or cloud platform. While LoRa defines the physical layer and the radio communication, LoRaWAN adds the necessary protocols for routing, addressing, and managing communication within a LoRa network.

LoRaWAN supports private networks (where a single organisation manages the infrastructure) and public networks (where multiple users share a common infrastructure). The LoRaWAN protocol defines several key features:

  • End-to-End Security: LoRaWAN incorporates strong security features, including data encryption at both the device and network levels, ensuring that data transmitted over the network is secure from interception or tampering.
  • Adaptive Data Rate: LoRaWAN includes an adaptive data rate mechanism, which allows devices to adjust their transmission rate based on network conditions, further optimising power consumption and network capacity.
  • Network Layer Management: LoRaWAN provides mechanisms for managing devices, gateways, and data transmission, ensuring the network can efficiently handle many devices.
  • Class A, B, and C Devices: LoRaWAN defines three device classes to accommodate different communication needs:
    • Class A: The most power-efficient class; devices open two short receive windows only after sending an uplink, so downlink messages must wait until the device next transmits.
    • Class B: In addition to Class A behaviour, devices open scheduled receive windows (ping slots) synchronised by network beacons, allowing downlink messages at predictable times.
    • Class C: Devices keep their receiver open continuously except while transmitting, offering the lowest downlink latency at the highest power cost (used in applications requiring frequent two-way communication).

Advantages

  • Long Range: LoRa provides exceptional communication range, enabling IoT networks to cover large areas, such as farms, cities, and industrial sites, with fewer gateways and less infrastructure.
  • Energy Efficiency: LoRa is one of the most energy-efficient communication technologies available, which is ideal for remote, battery-powered devices that need to operate for years without frequent recharging.
  • Cost-Effectiveness: LoRa operates on unlicensed frequency bands, reducing infrastructure costs. The long-range and low-power features also reduce the need for expensive infrastructure and frequent maintenance.
  • Easy Deployment: LoRa networks are easy to deploy and scale. New gateways can be added to expand coverage, and devices can communicate over long distances, minimising the need for multiple network layers.
  • Scalable and Flexible: LoRa can support many devices across large areas, making it suitable for many IoT applications, from small-scale deployments to large-scale industrial networks.

Use Cases

  • Smart Cities: LoRa is used in smart city applications to monitor and manage infrastructure such as street lighting, waste management, and traffic systems. It helps reduce energy consumption and optimise resources.
  • Agriculture: LoRa enables precision agriculture, where IoT sensors deployed on farms can monitor soil moisture, temperature, and other environmental factors. This data is transmitted over long distances to central systems for analysis, helping farmers make better decisions.
  • Asset Tracking: LoRa is widely used to track goods, assets, and livestock over long distances. It allows for real-time monitoring and can be used for supply chain management, fleet management, and logistics applications.
  • Environmental Monitoring: LoRa is used in environmental monitoring systems that track air quality, water levels, and pollution in remote or hard-to-reach areas, providing valuable data for sustainable practices.
  • Industrial IoT: LoRa supports IoT applications in manufacturing, oil and gas, and energy management, where sensors monitor equipment conditions, track assets, and optimise operations in vast industrial sites.

Limitations

  • Low Data Rate: LoRa is suitable for low-data-rate applications, so it cannot support applications that require high bandwidth, such as video streaming or large data transfers.
  • Limited Communication Frequency: Regulations in some regions restrict the duty cycle of LoRa devices (e.g., a 1% duty cycle in the European 868 MHz band), limiting how much airtime a device may use per hour or day and therefore how often it can transmit.
  • Interference: While LoRa's chirp spread spectrum technology helps mitigate interference, it still operates in unlicensed spectrum, meaning it could face interference from other devices in crowded environments.

LoRa technology offers a powerful solution for long-range, low-power IoT applications. It can support large-scale networks over vast geographic areas. Its simplicity, energy efficiency, and scalability make it ideal for various industry applications. By combining long-range communication with minimal power consumption, LoRa is driving the growth of the IoT ecosystem, particularly in areas where other wireless communication technologies fall short.

2. SigFox

Description

SigFox is a proprietary Low-Power Wide-Area Network (LPWAN) solution designed specifically for ultra-narrowband communication in the Internet of Things (IoT). It is a unique, highly energy-efficient technology that enables long-range connectivity for many IoT devices. SigFox operates in unlicensed radio frequency bands (typically 868 MHz in Europe, 915 MHz in North America, and 433 MHz in some parts of Asia) and utilises ultra-narrowband (UNB) communication to transmit small packets of data over long distances.

The key feature of SigFox is its ultra-narrowband technology, which significantly reduces the spectrum used by each signal. Unlike traditional wireless communication technologies, which use broader bandwidths for communication, SigFox's UNB communication minimises the energy and spectrum requirements, making it particularly well-suited for IoT devices that transmit small amounts of data over long distances without consuming much power. As a result, SigFox can provide reliable coverage across large areas, with an effective range of up to 50 kilometres in rural environments and 10-15 kilometres in urban areas.

SigFox's design is based on simplicity and efficiency, which are reflected in how it handles data. Each SigFox message can carry a payload of up to 12 bytes and is transmitted in short bursts. These small message sizes are ideal for many IoT applications where devices only need to send simple, periodic updates (e.g., sensor readings or status updates). SigFox operates on a star topology, where devices communicate directly with base stations (or “anchors”) that relay the data to the SigFox cloud platform for processing and integration with other systems.
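The two constraints just described, 12-byte payloads and short daily bursts, define SigFox's entire capacity envelope. The arithmetic sketch below makes that envelope concrete, using the commonly cited regional limit of about 140 uplink messages per day.

```python
# SigFox capacity envelope: at most 12 bytes per message and roughly
# 140 uplink messages per day gives a total daily budget under 2 KB.
MAX_PAYLOAD_BYTES = 12
MAX_UPLINKS_PER_DAY = 140

daily_budget = MAX_PAYLOAD_BYTES * MAX_UPLINKS_PER_DAY   # bytes per day
min_interval_min = 24 * 60 / MAX_UPLINKS_PER_DAY         # minutes between messages

print(f"Daily uplink budget: {daily_budget} bytes "
      f"(one 12-byte message roughly every {min_interval_min:.1f} min at most)")
```

A budget of well under 2 KB per day is ample for a periodic sensor reading or status flag, and entirely inadequate for anything bulkier, which is exactly the positioning the rest of this section describes.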

Advantages of SigFox

  • Low Power Consumption: One of SigFox's major strengths is its low energy usage, which enables devices to run on small batteries for years. This makes it ideal for battery-operated, remote IoT devices in applications like smart metering, asset tracking, and environmental monitoring.
  • Long Range: SigFox provides long-range communication, allowing devices to transmit data over great distances without needing a cellular infrastructure or expensive equipment. This makes it especially useful for rural areas or hard-to-reach locations where traditional wireless networks may struggle.
  • Scalable Infrastructure: SigFox operates through a global network of base stations, which means IoT devices can connect to the network without needing local infrastructure. This results in cost-effective deployment and the potential for global scalability in regions where SigFox coverage is available.
  • Low Cost: SigFox's simplicity and minimal bandwidth requirements translate into lower operational costs for IoT deployments. Its straightforward infrastructure and small data packet sizes reduce device costs and data plan expenses compared to other solutions like cellular networks.
  • Reliable Connectivity: SigFox's robust communication protocol is resistant to interference and can handle communication in challenging environments such as remote or rural areas with limited infrastructure.

Limitations of SigFox

  • Limited Data Rate: SigFox is designed for low data rate applications, with a maximum payload size of 12 bytes per message. This makes it unsuitable for applications requiring high data throughput, such as video streaming or large file transfers.
  • Limited Message Frequency: Devices on the SigFox network are restricted to sending a limited number of messages per day (typically 140 uplink messages, plus up to 4 downlink messages), which may not be suitable for use cases requiring frequent communication or real-time updates.
  • Geographic Coverage: While SigFox has a growing global presence, its coverage is still limited compared to more widely deployed technologies like cellular networks or WiFi. This could pose challenges in regions where SigFox base stations have not yet been deployed.
  • Network Dependence: SigFox operates as a centralised network with proprietary infrastructure, meaning devices rely on SigFox's base stations for communication. This limits the flexibility and autonomy of decentralised solutions or networks with more widespread infrastructure options.

Use Cases

SigFox is particularly suited for IoT applications that require low-bandwidth, long-range connectivity with minimal power consumption. Some everyday use cases include:

  • Smart Metering: Collecting utility data (e.g., electricity, water, or gas consumption) from remote locations with low-power devices.
  • Asset Tracking: Tracking the location of vehicles, equipment, or goods across vast areas, especially in industries such as logistics and supply chain management.
  • Environmental Monitoring: Deploying sensors in remote areas to monitor environmental parameters such as air quality, soil moisture, or temperature.
  • Smart Agriculture: Enabling farmers to monitor crops, livestock, and machinery in rural or agricultural environments without complex infrastructure.

SigFox is a highly efficient and cost-effective LPWAN technology for long-range, low-power, and low-data-rate IoT applications. Its strengths lie in its simplicity, scalability, and suitability for applications requiring infrequent, small data transmissions over large distances. However, its limited data rate and message frequency constraints may not be suitable for high-bandwidth or real-time communication requirements.

3. Narrowband IoT (NB-IoT)

Description

NB-IoT (Narrowband IoT) is a cellular-based, low-power wide-area network (LPWAN) technology designed specifically for IoT (Internet of Things) applications. It is optimised to provide wide-area coverage, low power consumption, and support for many connected devices. Unlike traditional cellular networks, NB-IoT is designed to meet the unique needs of IoT devices, offering extended battery life, cost-effective communication, and reliable coverage in challenging environments.

Developed as part of the 3GPP (3rd Generation Partnership Project) standards, NB-IoT is a low-bandwidth technology that uses narrow channels within existing cellular networks to deliver robust IoT connectivity. It operates primarily in licensed spectrum bands, leveraging the infrastructure deployed by mobile network operators, making it a cost-effective solution for global IoT connectivity.

NB-IoT operates in a narrowband, typically using a 200 kHz channel, which is significantly smaller than the bandwidth used by other cellular technologies like LTE. This narrow channel is optimised for low data-rate transmissions and is designed to efficiently handle small, infrequent data bursts. The technology uses existing cellular infrastructure but requires a modified version of the standard LTE (Long-Term Evolution) framework. NB-IoT can be deployed in standalone mode (on a dedicated carrier, independent of other cellular technologies), in-band mode (using resource blocks within an existing LTE carrier), or guard-band mode (using the otherwise unused guard bands at the edges of an LTE carrier).

Devices using NB-IoT typically send small packets of data with low frequency, making the technology well-suited for applications where devices don't need continuous communication but must report data periodically. NB-IoT also supports power-saving mechanisms that allow devices to sleep for extended periods between transmissions. This is ideal for IoT devices in remote locations or situations requiring long battery life.
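The battery-life claims that follow from this sleep-dominated duty cycle can be checked with simple averaging. The sketch below estimates lifetime for a duty-cycled NB-IoT sensor that transmits briefly each day and otherwise sleeps in Power Saving Mode (PSM); all current and timing figures are illustrative assumptions, not specifications of any particular NB-IoT module.

```python
# Illustrative battery-life estimate for a duty-cycled NB-IoT sensor:
# average the active (transmit) and sleep currents over a day, then
# divide the battery capacity by that average draw.
def battery_life_years(capacity_mah, active_ma, active_s_per_day, sleep_ua):
    seconds_per_day = 24 * 3600
    sleep_s = seconds_per_day - active_s_per_day
    avg_ma = (active_ma * active_s_per_day
              + (sleep_ua / 1000) * sleep_s) / seconds_per_day
    hours = capacity_mah / avg_ma
    return hours / 24 / 365

# Assumed figures: 2400 mAh cell, 120 mA while active for 20 s/day,
# 5 uA sleep current in PSM.
print(f"Estimated lifetime: {battery_life_years(2400, 120, 20, 5):.1f} years")
```

Under these assumptions the device lasts roughly eight years on one cell, which shows how the "up to 10 years" figures quoted for NB-IoT arise: lifetime is dominated by the sleep current, not the brief transmissions.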

Key Features of NB-IoT

  • Low Power Consumption: One of the primary benefits of NB-IoT is its low-power operation, which makes it suitable for battery-powered IoT devices that must operate for years without requiring frequent recharging or battery replacement. This low power consumption is achieved through mechanisms such as extended idle modes.
  • Wide Coverage: NB-IoT operates on existing cellular networks and can provide extensive coverage, including deep indoor and underground areas where traditional cellular signals might struggle to reach. This makes it particularly useful for applications in remote or challenging environments, such as smart metering, asset tracking, and industrial automation.
  • Large Device Capacity: NB-IoT is designed to support many devices in a small area. It can handle up to 50,000 devices per cell, making it ideal for use cases that involve dense deployments of IoT devices, such as smart city applications, environmental monitoring, and industrial IoT.
  • Reliable Communication: NB-IoT offers a high level of reliability, with features such as enhanced coverage, low latency, and robust error correction. These capabilities are crucial for mission-critical applications such as smart grids, healthcare, and asset management, where reliable data transmission is essential.
  • Secure Communication: Since NB-IoT operates over licensed cellular bands, security is built into the cellular infrastructure, ensuring secure communication between devices and the network. It benefits from the encryption, authentication, and integrity checks inherent in cellular technology, providing robust protection against cyber threats.
  • Cost-Effective: NB-IoT is cost-effective for both device manufacturers and service providers. Using existing cellular infrastructure reduces deployment costs, while the technology's low-power nature means IoT devices can operate with minimal energy consumption, reducing operational costs over time.
  • Low Latency: NB-IoT typically has low latency, allowing real-time or near-real-time data exchange. This is important for use cases like remote monitoring or real-time tracking, where fast communication is essential.
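
The effect of these sleep-heavy duty cycles on battery life can be sketched with a simple back-of-the-envelope estimate. All current and timing figures below are illustrative assumptions, not NB-IoT specification values:

```python
# Rough battery-life estimate for a duty-cycled NB-IoT sensor.
# All figures are illustrative assumptions, not specification values.

def battery_life_years(battery_mah: float,
                       sleep_ua: float,          # deep-sleep current in microamps
                       active_ma: float,         # current while the radio is active, in mA
                       active_s_per_day: float) -> float:
    """Average daily charge consumption converted into a lifetime in years."""
    seconds_per_day = 24 * 3600
    sleep_s = seconds_per_day - active_s_per_day
    # Charge drawn per day, in mAh
    mah_per_day = (sleep_ua / 1000 * sleep_s + active_ma * active_s_per_day) / 3600
    return battery_mah / mah_per_day / 365

# A 2400 mAh cell, 3 uA sleep current, 100 mA during a total
# of 30 s of radio activity per day:
life = battery_life_years(2400, sleep_ua=3, active_ma=100, active_s_per_day=30)
print(f"{life:.1f} years")   # roughly 7 years under these assumptions
```

The estimate shows why extended idle modes dominate the outcome: the device spends over 99.9% of the day asleep, so the sleep current, not the transmit current, largely determines the lifetime.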

Advantages of NB-IoT

  • Extended Coverage: NB-IoT's ability to penetrate deep indoor spaces and rural areas allows it to be deployed in challenging environments where other cellular technologies, like 3G or 4G, may have difficulty reaching. This feature makes NB-IoT ideal for water metering, waste management, and underground asset tracking applications.
  • High Device Density: NB-IoT is highly scalable and can handle thousands of devices per base station. This benefits urban environments or applications like smart cities, where large-scale deployments of connected devices are necessary.
  • Low Cost: NB-IoT's low operational costs benefit both device manufacturers and network operators. The simplified hardware requirements of NB-IoT devices, coupled with the existing infrastructure of cellular networks, contribute to lower deployment and maintenance costs.
  • Improved Battery Life: NB-IoT's low data rates and efficient power management ensure that devices can operate on a single battery charge for extended periods (up to 10 years or more). This is particularly advantageous for remote sensing devices or applications where battery replacement is difficult or costly.
  • Scalability and Flexibility: NB-IoT can scale from small deployments to massive IoT networks without significant infrastructure changes. NB-IoT networks can support large-scale rollouts with minimal effort, whether for a few hundred devices or tens of thousands.
  • Global Coverage: As NB-IoT operates on licensed cellular bands, it offers the potential for global coverage, allowing devices to work seamlessly across different countries and regions without worrying about local network operators or unlicensed spectrum availability.

Limitations of NB-IoT

  • Low Data Rates: NB-IoT is designed for low data-rate applications, with a maximum theoretical data rate of around 250 kbps. This makes it unsuitable for high-bandwidth applications like video streaming or large data transfers. NB-IoT is best suited for applications that require small, infrequent data packets.
  • Higher Latency for Large Payloads: While NB-IoT has low latency for small packets of data, latency increases when transmitting larger amounts of data. This can be a limitation for use cases where higher data rates and lower latency are essential.
  • Requires Cellular Network Support: Network operators must provide the necessary infrastructure since NB-IoT operates over cellular networks. Devices cannot be connected in areas without NB-IoT coverage unless operators expand their coverage.
  • Limited Device Mobility: NB-IoT is optimised for stationary or low-mobility applications. While it can support mobility, such as tracking devices, it is not designed for high-speed, mobile applications like vehicle telematics or real-time GPS tracking.

Use Cases of NB-IoT

  • Smart Metering: NB-IoT is ideal for smart water, gas, and electricity metering systems, where devices are distributed over vast geographical areas and must transmit periodic readings.
  • Asset Tracking: NB-IoT can be used to track valuable assets, such as containers, vehicles, or shipments, across large areas with minimal power consumption and reliable coverage.
  • Smart Cities: In smart city applications, NB-IoT can monitor and control infrastructure such as street lighting, waste management, parking systems, and environmental sensors.
  • Agriculture: NB-IoT is well-suited for precision agriculture applications, such as soil moisture monitoring, livestock tracking, and crop management, where long-range connectivity and low-power operation are critical.
  • Healthcare: NB-IoT can be used in healthcare applications for remote patient monitoring, medical asset tracking, and telemedicine services, providing low-latency, reliable connectivity for devices that require frequent data exchange.
  • Industrial IoT (IIoT): NB-IoT is also used in industrial applications, including predictive maintenance, machine monitoring, and supply chain management, where reliable communication and low power consumption are essential.

NB-IoT represents a key advancement in IoT networking technologies, offering long-range coverage, low power consumption, high device density, and cost-effective connectivity. Its ability to operate on existing cellular networks and deliver reliable communication for low-data-rate applications makes it ideal for various IoT use cases, particularly in remote monitoring, asset tracking, and smart cities. Although it is unsuitable for high-bandwidth applications, its extensive coverage, scalability, and security make it a vital technology for IoT ecosystems across the globe.

4. LTE-M (Long-Term Evolution for Machines)

Description

LTE-M, or Long Term Evolution for Machines, is a cellular-based networking technology designed explicitly for the Internet of Things (IoT). It is part of the broader LTE (Long-Term Evolution) family, the backbone of most modern mobile communication systems. LTE-M, however, has been optimised for low-power, wide-area (LPWA) IoT applications, offering a balance between low power consumption and relatively higher data rates compared to other IoT technologies like NB-IoT (Narrowband IoT). LTE-M is primarily used for machine-to-machine (M2M) communications, where devices such as sensors, meters, trackers, and industrial equipment must connect to the network to transmit small or moderate amounts of data. LTE-M operates within the licensed spectrum and is built to leverage the existing LTE infrastructure. It is a natural choice for mobile network operators looking to extend their coverage to IoT devices with relatively higher mobility and more substantial data throughput needs.

LTE-M operates in licensed spectrum, leveraging the existing cellular infrastructure that supports 4G LTE technologies. It can be deployed as a standalone solution or alongside other IoT technologies, such as NB-IoT, to provide different coverage and data rate options for various IoT use cases. The architecture of LTE-M is similar to that of standard LTE, but it is optimised for lower power consumption and low-data applications. LTE-M utilises FDD (Frequency Division Duplex) for data communication, allowing simultaneous two-way communication and providing a more efficient link for IoT devices. LTE-M devices are typically connected for long periods, sending data in bursts or based on scheduled events (e.g., temperature readings and location updates). This allows LTE-M devices to stay in sleep modes and only transmit data periodically, conserving energy and maximising battery life.

Key Features of LTE-M

  • Low Power Consumption: One of LTE-M's core features is its low-power operation, which is ideal for battery-powered IoT devices requiring extended battery life. LTE-M supports Power Saving Mode (PSM) and Extended Discontinuous Reception (eDRX), which help reduce power consumption by allowing devices to sleep for extended periods and only wake up to transmit or receive data.
  • High Mobility Support: LTE-M offers better mobility support than NB-IoT, making it suitable for use cases that require moving devices, such as vehicle telematics, fleet management, and asset tracking. LTE-M devices can maintain connectivity while driving across network cells, enabling continuous communication for applications involving mobile or nomadic IoT devices.
  • Higher Data Rates: LTE-M supports higher data rates than NB-IoT, allowing for more substantial data throughput. This is ideal for IoT applications that require more than basic sensor data transmission. LTE-M typically provides speeds up to 1 Mbps (downlink) and 375 kbps (uplink), making it suitable for applications like video streaming from cameras, real-time data transfer, and remote diagnostics in industrial machines.
  • Global Coverage: LTE-M uses existing LTE networks, which are already widespread. This makes it possible for LTE-M devices to connect to the network in any region where LTE infrastructure is available. This enables global IoT connectivity without requiring an entirely new network deployment.
  • Low Latency: LTE-M typically offers low-latency communication, which is essential for real-time or near-real-time applications. The latency in LTE-M can be as low as 50–100 ms, making it suitable for use cases that require quick responses, such as healthcare monitoring, smart cities, and industrial control systems.
  • Security: LTE-M benefits from the strong security features built into LTE networks, including encryption, authentication, and integrity checks. These security features are crucial for protecting sensitive IoT data and ensuring that devices are securely connected to the network, which is essential for industrial and healthcare applications.
  • Scalability: LTE-M supports massive IoT device deployments. Like other cellular IoT technologies, it can handle thousands of devices per base station, which is critical for large-scale IoT applications like smart cities, connected fleets, and remote monitoring.
  • Voice Support (VoLTE): Unlike other IoT technologies, LTE-M supports Voice over LTE (VoLTE), enabling voice services for IoT devices. This feature is helpful for remote worker communication, security systems with voice capability, and telemedicine devices requiring two-way voice communication.
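
Using the data rates quoted above, a rough estimate of how long a payload occupies the LTE-M link can be computed; the 20% protocol-overhead factor below is an assumption, not a specification figure:

```python
# Time to move a payload at LTE-M-class data rates (figures from the text:
# ~1 Mbps downlink, ~375 kbps uplink; the overhead factor is an assumption).

def transfer_time_s(payload_bytes: int, rate_bps: float, overhead: float = 1.2) -> float:
    """Seconds needed to send payload_bytes at rate_bps, inflated by protocol overhead."""
    return payload_bytes * 8 * overhead / rate_bps

# A 10 kB telemetry burst over the 375 kbps uplink:
t = transfer_time_s(10_000, 375_000)
print(f"{t:.3f} s")   # 0.256 s
```

Even a moderately sized burst completes in a fraction of a second, which is why LTE-M can serve use cases such as remote diagnostics that would be impractical over NB-IoT's narrower channel.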

Advantages of LTE-M

  • Higher Data Throughput than NB-IoT: LTE-M supports higher data rates than NB-IoT. This benefits applications requiring moderate bandwidth, such as real-time remote monitoring, telemetry, and fleet management.
  • Broad Global Availability: LTE-M uses the LTE infrastructure already widely deployed worldwide. This means LTE-M devices can use global coverage without additional deployments or infrastructure investments, reducing time-to-market and operational costs.
  • Flexible Application Range: LTE-M is versatile and can be used in many IoT applications, from low-bandwidth use cases such as smart meters and environmental monitoring to more data-intensive applications like connected health devices and industrial automation.
  • Low Cost: As LTE-M devices can operate on existing LTE networks, there is no need for specialised infrastructure or frequency spectrum licensing. This helps keep costs low for both network providers and device manufacturers.
  • Battery Longevity: LTE-M devices are designed to support long battery life, often ranging from 5 to 10 years, depending on usage. Power-saving features such as PSM and eDRX ensure that devices only consume power when necessary, making LTE-M ideal for long-lasting deployments in remote areas.
  • Ideal for Mobile IoT: LTE-M's ability to support mobility makes it a perfect fit for applications involving moving devices, such as asset tracking, fleet management, and vehicle telematics.

Limitations of LTE-M

  • Higher Power Consumption than NB-IoT: While LTE-M is more power-efficient than traditional cellular technologies like 3G or 4G, it still consumes more power than NB-IoT. This may make NB-IoT a better choice for applications that demand ultra-low power consumption for extended battery life, such as remote sensors or smart agriculture applications.
  • Moderate Coverage in Deep Indoor or Underground Areas: While LTE-M has much better coverage than traditional cellular systems, it may still have limited penetration in some deep indoor or underground environments when compared to other LPWAN technologies like LoRa or NB-IoT, which are better optimised for long-range communication in rural or obstructed environments.
  • Higher Cost Compared to Other LPWAN Solutions: LTE-M offers better coverage and data rates than technologies like LoRa or SigFox. However, the overall costs associated with deploying and operating an LTE-M network might be higher due to cellular infrastructure and spectrum licensing fees.
  • Limited Data Rates for Very High Bandwidth Applications: Although LTE-M supports higher data rates than NB-IoT, it still has limitations regarding high-bandwidth applications like video streaming or large-scale data transmission. More traditional cellular technologies like 4G LTE or 5G would be better suited in such cases.

Use Cases of LTE-M

  • Smart Cities: LTE-M can support various smart city applications, including smart lighting, waste management, parking management, and environmental monitoring. Its ability to handle moderate data rates and support high device densities makes it ideal for urban IoT deployments.
  • Connected Health: LTE-M can be used in telemedicine and remote health monitoring applications, providing connectivity for devices like wearables, patient monitoring systems, and medical equipment. The technology's support for mobility and moderate data rates suits these use cases well.
  • Fleet Management and Asset Tracking: LTE-M's mobility support makes it an excellent choice for fleet management and asset-tracking applications. Devices can be installed on vehicles or valuable assets to transmit location data, performance metrics, and environmental conditions in real time.
  • Industrial IoT (IIoT): LTE-M is highly applicable in smart manufacturing, predictive maintenance, and remote monitoring of industrial assets. It can provide real-time data from equipment and machinery to detect failures early, monitor performance, and optimise operations.
  • Smart Metering: LTE-M is also used for smart metering applications in utilities, including water, gas, and electricity meters. Devices can send data periodically for billing and consumption analysis, reducing the need for manual readings.
  • Supply Chain Management: LTE-M enables real-time tracking of goods, inventory, and shipments, improving supply chain visibility and operational efficiency. It provides reliable connectivity for tracking devices used in logistics and transportation.

LTE-M is a versatile, scalable, and efficient IoT networking technology that balances low power consumption with higher data rates, global coverage, and excellent mobility support. It is well-suited for many IoT applications, particularly those involving mobile devices or moderate data throughput needs.

5. Haystack

Description

Haystack is an open-source, low-power, wide-area network (LPWAN) technology designed to provide long-range, scalable communication solutions for the Internet of Things (IoT). It aims to address the challenges of IoT deployments that require long-range communication while maintaining energy efficiency, ease of integration, and cost-effectiveness. While not as widely known as LoRa or SigFox, Haystack offers a robust solution for IoT networks that must scale over large areas, particularly in industrial and infrastructure monitoring applications.

Haystack is designed to operate in unlicensed radio spectrum bands (such as 868 MHz and 915 MHz), enabling connectivity over large areas while lowering the cost of deployment, since there is no need to pay for spectrum licenses. It uses a combination of technologies and protocols to ensure efficient communication in environments with low power consumption and long-range needs.

Haystack devices communicate through LPWAN gateways and use data aggregation and mesh networking strategies to extend their reach and enable scalable IoT deployments. These devices typically operate in a star or mesh network topology, where they communicate directly with the gateway or hop from one device to another to get data to a central gateway.

Key Features of Haystack Technology

  • Long-Range Communication: Like other LPWAN technologies, Haystack supports communication over long distances, typically up to several kilometres in urban environments and up to 15-20 kilometres in rural areas, depending on the environment. This makes it an ideal solution for large-scale IoT deployments such as citywide infrastructure monitoring, agriculture, and industrial applications.
  • Low Power Consumption: One of Haystack's key advantages is its low power consumption, which is essential for IoT devices that must operate for extended periods without requiring frequent battery replacements or recharging. Devices using Haystack technology can be optimised for low-duty cycles, meaning they transmit data only when necessary, conserving power between transmissions.
  • Scalable Networks: Haystack is designed to scale efficiently, supporting the addition of large numbers of IoT devices to a network. Multiple gateways and devices enhance the network's range and capacity, creating a flexible and scalable infrastructure. Haystack's network can be expanded by adding more nodes, enabling it to handle thousands of devices across vast geographic areas.
  • Data Aggregation: Haystack employs data aggregation techniques to reduce the amount of traffic sent over the network. This is particularly useful in applications where devices monitor sensors or collect environmental data over time. Rather than transmitting every reading, Haystack can aggregate data and transmit it in batch form, which helps to optimise the network's capacity and reduces power usage.
  • Security: Haystack incorporates various security protocols to ensure the safe transmission of data over the network. These security measures include end-to-end encryption and authentication, protecting the data from unauthorised access and ensuring that only legitimate devices can communicate on the network.
  • Open-Source and Interoperability: One of Haystack's standout features is its open-source nature, which makes it an attractive choice for organisations and developers who want a flexible and cost-effective LPWAN solution. Being open-source also encourages interoperability, as different devices and platforms can work together seamlessly, contributing to the growth of a larger ecosystem.
  • Mesh Networking: Haystack can operate in mesh network configurations, which means that devices can forward data to each other, creating a more flexible and resilient network infrastructure. This is particularly useful in environments where direct communication with a central gateway might not be feasible due to distance or obstructions. Mesh networking allows data to be passed from one device to another, ensuring the information reaches its destination even in challenging environments.
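
The data-aggregation strategy described above can be sketched as a small buffer-and-flush class. This is a generic illustration, not Haystack's actual API; the `radio_send` callback is a hypothetical stand-in for the real uplink:

```python
# Sketch of the data-aggregation idea: buffer readings locally and transmit
# them as one batch instead of one packet per sample. Generic illustration
# only; 'radio_send' is a hypothetical stand-in for the actual uplink.

import json

class AggregatingSensor:
    def __init__(self, batch_size, radio_send):
        self.batch_size = batch_size
        self.radio_send = radio_send
        self.buffer = []

    def record(self, reading):
        """Buffer a reading; transmit automatically once a full batch accumulates."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Send all buffered readings as a single batched payload."""
        if self.buffer:
            self.radio_send(json.dumps(self.buffer).encode())
            self.buffer = []

sent = []
sensor = AggregatingSensor(batch_size=4, radio_send=sent.append)
for t in range(10):
    sensor.record({"t": t, "temp": 20 + t * 0.1})
sensor.flush()                      # push the final partial batch
print(len(sent), "uplink packets")  # 3 packets instead of 10
```

Batching ten readings into three transmissions cuts per-packet radio and protocol overhead, which is exactly where the capacity and power savings come from.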

Haystack vs. Other LPWAN Technologies

  • LoRa: While Haystack and LoRa are part of the LPWAN family and offer similar long-range capabilities, LoRa is more widely adopted, and its ecosystem is better established. LoRa also benefits from a large community, more extensive commercial deployments, and a well-developed protocol, LoRaWAN. However, Haystack stands out with its open-source nature, which may appeal to users looking for customisable or low-cost solutions.
  • SigFox: SigFox is another well-known LPWAN technology, but it operates on a proprietary basis and relies on a centralised network. Haystack, on the other hand, offers more flexibility by supporting private deployments and open standards. Additionally, while SigFox excels in ultra-narrowband communication, Haystack is designed to be more scalable and versatile for different types of IoT applications.
  • NB-IoT: NB-IoT (Narrowband IoT) is a cellular-based LPWAN technology backed by major cellular operators. It provides highly reliable coverage but at a higher cost due to the need for cellular infrastructure. Haystack, as an open-source technology, provides an alternative that avoids the costs associated with cellular networks and can be deployed independently.

Applications of Haystack Technology

  • Smart Cities: Haystack can be used for smart city applications, including street lighting, waste management, environmental monitoring, and infrastructure management. Its long-range and low-power features are well-suited for IoT devices that must be deployed in large numbers across urban areas.
  • Agriculture and Precision Farming: In agriculture, Haystack is used to monitor soil moisture, temperature, crop health, and environmental conditions. The long-range capabilities allow sensors to be placed over large farming areas, helping farmers optimise irrigation, pesticide use, and harvest planning.
  • Industrial IoT (IIoT): Haystack can support industrial applications such as remote asset management, predictive maintenance, and condition monitoring. By deploying sensors on equipment and machinery, industries can track performance and detect failures before they occur, reducing downtime and maintenance costs.
  • Supply Chain and Logistics: In logistics, Haystack can be used to track assets, manage inventory, and monitor environmental conditions during transportation. Businesses can improve asset visibility and efficiency by integrating Haystack into logistics networks.
  • Environmental Monitoring: Haystack is ideal for environmental monitoring in areas where infrastructure is sparse or hard to reach, such as remote regions. It can monitor air quality, water levels, pollution, and other critical environmental data in real time, providing valuable insights for climate change mitigation and disaster management.
  • Healthcare: Haystack's long-range and low-power features also suit healthcare applications such as patient monitoring, medical equipment tracking, and emergency alert systems. It can facilitate communication between wearable health devices, hospitals, and medical staff, ensuring timely responses in critical situations.

Challenges and Limitations of Haystack

  • Limited Ecosystem: Although Haystack's open-source nature provides significant flexibility, it is still a relatively new and niche technology compared to LoRa and SigFox. As a result, there are fewer commercial offerings and third-party integrations, which limits the number of devices and gateways that support Haystack out of the box.
  • Regulatory and Spectrum Availability: Like many LPWAN technologies, Haystack operates in unlicensed frequency bands. However, these frequencies are subject to regional regulations, and interference from other devices operating on the same bands may affect the network's reliability, particularly in congested environments.
  • Lower Adoption: Due to its relatively low adoption and smaller developer community, Haystack may face challenges in gaining traction compared to more widely used LPWAN technologies like LoRa and SigFox. The availability of commercial support and a mature ecosystem can influence the choice of technology for large-scale deployments.

Haystack represents a promising LPWAN solution for IoT deployments, particularly for those seeking a flexible, cost-effective, and open-source alternative to more established technologies. It excels in long-range communication, low power consumption, and scalability, making it suitable for various IoT applications, especially in industrial, agriculture, and smart city domains. However, its adoption is still growing, and its ecosystem is not as developed as other LPWAN technologies, meaning it may not yet be the first choice for every IoT deployment.

The IoT Networking Technologies

Networking technologies establish the foundation for communication between IoT devices and systems, ensuring efficient routing, addressing, and connectivity. The networking technologies for IoT are largely based on IPv6 (Internet Protocol version 6), the latest version of the Internet Protocol (IP), designed to address the limitations of its predecessor, IPv4. IPv6 introduces a vastly larger address space and enhanced features tailored to modern networking needs, making it a cornerstone for the Internet of Things (IoT). With the exponential growth of IoT devices, IPv6 plays a critical role in enabling seamless communication, scalability, and efficient management.

Key Features of IPv6

  • Expanded Address Space: IPv6 uses 128-bit addresses compared to the 32-bit addresses in IPv4. This results in an astronomical number of possible addresses (approximately 340 undecillion), ensuring every IoT device can have a unique IP address, even in massive deployments.
  • Simplified Address Configuration: IPv6 supports stateless address autoconfiguration (SLAAC), allowing devices to automatically configure their addresses without needing a DHCP server. This is highly advantageous for IoT environments, where devices are deployed in large numbers and may need to function autonomously.
  • Efficient Packet Handling: The IPv6 header is simpler and more efficient than the IPv4 header, reducing processing overhead. This is crucial for IoT devices with limited computational resources.
  • Improved Mobility Support: IPv6 is designed with native support for mobility, enabling seamless communication for IoT devices that change locations, such as connected vehicles or mobile healthcare devices.
  • Integrated Security Features: IPv6 includes native support for IPsec (Internet Protocol Security) for encryption and authentication, helping to secure communication between IoT devices and networks.
  • Multicasting: IPv6 supports multicasting, which allows devices to send a single message to multiple recipients simultaneously. This is particularly useful in IoT applications like sensor data distribution or firmware updates.
  • Elimination of NAT (Network Address Translation): With its vast address space, IPv6 eliminates the need for NAT, enabling end-to-end connectivity. This simplifies communication and reduces latency, which is critical for real-time IoT applications.
  • Enhanced Quality of Service (QoS): IPv6 includes flow labelling for identifying and prioritising data packets, ensuring better performance for time-sensitive IoT applications like video surveillance or telemedicine.
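
The SLAAC mechanism mentioned above can be illustrated with the modified EUI-64 method from RFC 4291, which builds a 64-bit interface identifier from a device's MAC address and appends it to the network prefix. (Modern stacks often prefer privacy or stable-opaque identifiers instead, so treat this as one classic variant of the mechanism.)

```python
# SLAAC illustration: derive the modified EUI-64 interface identifier from a
# MAC address and combine it with a /64 prefix (RFC 4291 method). The prefix
# and MAC below are documentation/example values.

import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                        # flip the universal/local bit
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])   # insert ff:fe in the middle
    net = ipaddress.IPv6Network(prefix)
    iid = int.from_bytes(eui64, "big")
    return ipaddress.IPv6Address(int(net.network_address) | iid)

addr = slaac_address("2001:db8:1:2::/64", "00:1a:2b:3c:4d:5e")
print(addr)   # 2001:db8:1:2:21a:2bff:fe3c:4d5e
```

Because the identifier is derived locally from information the device already has, no DHCP server is needed, which is what makes SLAAC attractive for large autonomous IoT deployments.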

Benefits and Applications of IPv6 in IoT

  • Scalability: The massive address space provided by IPv6 is a foundational requirement for IoT ecosystems, where billions of devices need unique identifiers. It supports the expansion of smart cities, industrial IoT, and connected healthcare systems.
  • Direct Device-to-Device Communication: By eliminating NAT, IPv6 enables direct communication between IoT devices. This simplifies network architecture and reduces latency in relaying data through intermediate devices.
  • Efficient Multicast Communication: IPv6's multicast capabilities benefit IoT scenarios like smart grids, environmental monitoring, and industrial automation, enabling efficient data dissemination to multiple devices.
  • Mobility and Portability: IPv6's support for mobility is critical for IoT devices that operate in dynamic environments, such as autonomous vehicles, drones, and wearable health monitors.
  • Security and Privacy: The integration of IPsec ensures secure communication, which is vital for protecting sensitive data in IoT applications like smart homes, financial transactions, and healthcare monitoring.

IPv6 Technologies for IoT Networking

Several protocols and technologies built on IPv6 are specifically tailored for IoT applications:

1. 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Networks)

A lightweight adaptation of IPv6 for resource-constrained devices, 6LoWPAN allows IoT devices to operate efficiently over low-power, low-data-rate wireless networks.

Features

  • Compresses IPv6 headers to fit within the small frame size of IoT networks.
  • Supports mesh and star topologies.

Use Cases: Smart homes, industrial IoT, and environmental monitoring.

2. RPL (Routing Protocol for Low-Power and Lossy Networks)

A routing protocol designed for IPv6 networks with constrained devices and lossy communication links.

Features

  • Supports hierarchical routing for efficient data aggregation.
  • Optimised for networks with varying link qualities, such as wireless sensor networks.

Use Cases: Smart cities, precision agriculture, and remote monitoring systems.

3. ND (Neighbor Discovery Protocol)

An IPv6 protocol used for device discovery and address resolution in IoT networks.

Features

  • Enables devices to discover each other without manual configuration.
  • Facilitates seamless communication in dynamic IoT environments.

Use Cases: Connected vehicles, healthcare devices, and smart appliances.

4. CoAP (Constrained Application Protocol)

Although not exclusively an IPv6 technology, CoAP operates over IPv6 to provide lightweight RESTful communication for constrained IoT devices.

Features

  • Designed for low-power, low-bandwidth networks.
  • Integrates seamlessly with IPv6 for secure and efficient communication.

Use Cases: Smart lighting, HVAC systems, and energy management.

Challenges of IPv6 in IoT

  • Adoption Barriers: Despite its advantages, IPv6 adoption is still ongoing. Many legacy systems and networks continue to rely on IPv4, requiring dual-stack solutions that support both protocols.
  • Complexity: While IPv6 simplifies certain aspects of networking, its implementation in large-scale IoT deployments can be complex, requiring expertise in configuration and management.
  • Device Constraints: Some IoT devices, especially older or ultra-low-cost ones, may lack the hardware or software support needed for full IPv6 functionality.
  • Interoperability: Ensuring seamless communication between IPv6-enabled IoT devices and IPv4-based systems can be challenging, necessitating translation mechanisms like NAT64 or proxy servers.

Real-World Applications of IPv6 in IoT

  • Smart Cities: IPv6 supports the massive scale of connected devices in smart cities, from streetlights to traffic management systems and public safety sensors.
  • Industrial IoT (IIoT): Industrial environments benefit from IPv6's ability to connect thousands of sensors, actuators, and controllers, enabling real-time monitoring and automation.
  • Connected Healthcare: IPv6 facilitates secure and scalable networks for wearable devices, remote monitoring systems, and smart medical equipment.
  • Smart Energy Management: IPv6 enables efficient communication between smart meters, grid controllers, and energy consumption devices.
  • Environmental Monitoring: Applications like weather stations, pollution monitoring, and wildlife tracking use IPv6 to manage large-scale sensor networks.

IPv6 is a transformative IoT technology that addresses scalability, security, and efficiency challenges in connected ecosystems. Its vast address space, robust features, and compatibility with advanced IoT protocols make it an essential enabler for the IoT revolution. By leveraging IPv6, organisations can build scalable, secure, and future-proof IoT networks that cater to diverse applications across industries.

The IoT High-Level Communication Technologies

High-level communication protocols define how IoT devices communicate with each other or cloud services.

1. MQTT (Message Queue Telemetry Transport)

MQTT is a lightweight, publish-subscribe messaging protocol ideal for constrained devices. IoT nodes connect to a central broker, which relays all communication among them. Devices play the role of publisher, subscriber, or both. MQTT uses topics to uniquely address the exchanged data: subscribers “subscribe” to selected topics, and the broker is responsible for ensuring the proper distribution of the messages. Subscribers can use wildcards to subscribe to many topics at once. The communication model is effectively N:N, so one publisher can send messages to many subscribers, and a subscriber can receive messages from multiple publishers. The broker can retain messages and offers a “last will” feature to notify subscribers when it detects a broken connection. The regular implementation of MQTT uses TCP to connect nodes to the broker.
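
The wildcard subscriptions mentioned above can be illustrated with a small, self-contained matcher. This is a simplification of the rules in the MQTT specification: `+` matches exactly one topic level, `#` matches any number of trailing levels; edge cases such as topics beginning with `$` are ignored here.

```python
# Simplified MQTT topic-filter matching: '+' matches exactly one level,
# '#' matches any number of trailing levels. Spec edge cases (e.g. topics
# starting with '$') are deliberately ignored in this sketch.

def topic_matches(filter_: str, topic: str) -> bool:
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                        # matches everything from here on
            return True
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:    # '+' matches any single level
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("home/+/temperature", "home/kitchen/temperature"))  # True
print(topic_matches("home/#", "home/kitchen/humidity"))                 # True
print(topic_matches("home/+/temperature", "home/kitchen/humidity"))     # False
```

The broker applies such matching for every incoming message, which is how a single published message fans out to all subscribers whose filters cover its topic.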

Advantages

  • Low bandwidth consumption and reliable communication.
  • Suited for environments with intermittent connectivity.

Disadvantages

  • Regular MQTT implementations use TCP-based connections, which are less energy efficient than UDP.
  • Relies on a central broker whose failure brings down the whole solution.

2. AMQP (Advanced Message Queuing Protocol)

AMQP is designed for robust message delivery in enterprise-grade IoT systems. It uses mechanisms similar to MQTT, with a central server, also called a broker, implementing so-called "exchanges" with queues. AMQP is flexible, offering various exchange models that ensure the correct flow of messages from Publishers to Consumers. Although AMQP runs over TCP, it adds its own acknowledgement mechanism to ensure delivery over unreliable networks. The 0-9-1 version of the protocol defines a predefined set of exchanges: Direct Exchange, Fanout Exchange, Topic Exchange and Headers Exchange, but users can define other models. The service's address uses a URI scheme, similar to CoAP.
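The exchange-and-queue routing described above can be sketched with a toy in-memory model (illustrative only, not a real AMQP client; `MiniExchange` is a name invented for this sketch, covering just the direct and fanout exchange types):

```python
from collections import defaultdict

class MiniExchange:
    """Toy model of AMQP 0-9-1 routing for direct and fanout exchanges."""
    def __init__(self, exchange_type: str):
        self.type = exchange_type           # "direct" or "fanout"
        self.bindings = defaultdict(list)   # routing key -> bound queue names
        self.queues = defaultdict(list)     # queue name -> delivered messages

    def bind(self, queue: str, routing_key: str = "") -> None:
        self.bindings[routing_key].append(queue)

    def publish(self, body: str, routing_key: str = "") -> None:
        if self.type == "fanout":           # fanout ignores the routing key
            targets = {q for qs in self.bindings.values() for q in qs}
        else:                               # direct: exact routing-key match
            targets = set(self.bindings[routing_key])
        for q in targets:
            self.queues[q].append(body)

ex = MiniExchange("direct")
ex.bind("alerts_queue", "alert")
ex.bind("logs_queue", "log")
ex.publish("overheating!", routing_key="alert")
print(ex.queues["alerts_queue"])  # ['overheating!']
print(ex.queues["logs_queue"])    # []
```

A topic exchange would additionally match routing keys against binding patterns, much like MQTT topic wildcards.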

Advantages

  • Ensures secure and reliable message transmission.
  • Supports message queuing and advanced messaging features.
  • Very flexible on the broker's exchange messaging configuration.
  • Message payloads of up to 2 GB are supported.

Disadvantages

  • Starting with protocol version 1.0, all messaging models are freely definable and lack a common standard, which causes incompatibility between systems.
  • Bootstrapping development with AMQP is much more time-consuming than MQTT and CoAP.
  • Relies on a central broker whose failure brings down the whole solution.

3. CoAP (Constrained Application Protocol)

CoAP is a RESTful protocol for resource-constrained IoT devices. In CoAP, every node provides a service virtually available to any connecting client, so the messaging model is 1:1 but distributed among devices. Unlike MQTT and AMQP, CoAP has no central broker: each IoT node can create a service endpoint. CoAP is similar to HTTP but much simpler regarding resources and implementation. CoAP uses UDP and URIs to address endpoints. A URI can contain the IP address or service name, a path, and a port. The specification foresees scenarios with delayed replies to the request message for slow ("lazy") devices. Because of the underlying UDP protocol, communication is stateless, but each request-response pair is identified with a token. CoAP's specification has a "discovery" mechanism so that IoT devices can present their endpoints to the other devices connected to the network in an automated way. CoAP messages can be proxied and cached.
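To show how compact CoAP is compared with HTTP, the sketch below encodes the fixed 4-byte message header defined in RFC 7252 (version, type, token length, code, message ID), followed by the token; `coap_header` is a name invented for this sketch:

```python
import struct

def coap_header(msg_type: int, code: int, message_id: int, token: bytes = b"") -> bytes:
    """Encode the fixed 4-byte CoAP header (RFC 7252) plus the token.
    msg_type: 0=CON, 1=NON, 2=ACK, 3=RST; the protocol version is always 1."""
    tkl = len(token)
    if tkl > 8:
        raise ValueError("token may be at most 8 bytes")
    byte0 = (1 << 6) | (msg_type << 4) | tkl  # Ver(2 bits) | Type(2) | TKL(4)
    return struct.pack("!BBH", byte0, code, message_id) + token

# Confirmable GET request (code 0.01, i.e. 1) with a 1-byte token:
frame = coap_header(msg_type=0, code=1, message_id=0x1234, token=b"\xab")
print(frame.hex())  # "41011234ab" -- 5 bytes for a complete request header
```

An equivalent HTTP GET request line alone would already take tens of bytes, which is why CoAP suits constrained, low-bandwidth links.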

Advantages

  • Lightweight and efficient communication using UDP.
  • Optimised for low-power IoT environments.
  • Communication is distributed among multiple devices.
  • There is a standardisation for discovery protocol.
  • Efficient and compact stack implementation.

Disadvantages

  • IoT networks behind NAT struggle to expose their endpoints beyond the gateway, since there is no central broker that could be placed in public address space, as is the case with the AMQP and MQTT protocols.

4. Lightweight Machine-to-Machine (LWM2M)

Lightweight Machine-to-Machine (LWM2M) is a communication protocol for managing IoT devices with constrained resources. Developed by the Open Mobile Alliance (OMA), it offers an efficient, interoperable framework for device management and data exchange between IoT devices and management platforms. LWM2M is particularly suited for devices with limited computational power, memory, or energy resources, such as battery-powered sensors or actuators.

Key Features:

1. Resource Efficiency:

  • Optimised for constrained devices using low-bandwidth networks.
  • Operates over CoAP (Constrained Application Protocol), which uses UDP for lightweight communication.

2. Interoperability:

  • Promotes standardised interactions between IoT devices and cloud platforms, ensuring vendor compatibility.

3. Security:

  • Provides robust security through DTLS (Datagram Transport Layer Security), ensuring encryption and authentication.

4. Device Management:

  • Includes functionalities such as firmware updates, remote diagnostics, and configuration management.
  • Structured around a client-server model where IoT devices act as LWM2M clients and management systems act as LWM2M servers.

5. Data Models:

  • It relies on a well-defined object hierarchy for managing device resources, making it highly organised and scalable.

Advantages

  • Minimal resource consumption.
  • Support for lifecycle management of IoT devices.
  • Enhanced security for constrained environments.
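LWM2M's object hierarchy addresses every resource with a numeric path of the form /ObjectID/InstanceID/ResourceID; for example, /3/0/1 points into the standard Device object. A minimal parser for such paths might look like this (an illustrative sketch; `parse_lwm2m_path` is a name invented here, and the path is assumed well-formed):

```python
def parse_lwm2m_path(path: str) -> dict:
    """Split an LWM2M path /object[/instance[/resource]] into its numeric IDs."""
    ids = [int(p) for p in path.strip("/").split("/")]
    if not 1 <= len(ids) <= 3:
        raise ValueError("expected 1 to 3 path segments")
    return dict(zip(("object_id", "instance_id", "resource_id"), ids))

# Object 3 is the standard OMA Device object; instance 0, resource 1:
print(parse_lwm2m_path("/3/0/1"))
# {'object_id': 3, 'instance_id': 0, 'resource_id': 1}
```

Shorter paths address whole objects or instances, which is how an LWM2M server reads, writes, or observes groups of resources at once.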

5. UltraLight 2.0

UltraLight 2.0 is a lightweight text-based protocol designed to enable minimal complexity communication between IoT devices and platforms. It is part of the FIWARE ecosystem, a popular open-source platform for smart applications, and is widely used in IoT deployments where simplicity and low overhead are critical.

Key Features

1. Minimalism:

  • UltraLight 2.0 is a simple, human-readable protocol optimised for devices with limited processing power and memory.
  • The protocol uses straightforward text strings to encode messages, avoiding the complexity of binary protocols.

2. Low Bandwidth Usage:

  • It minimises data payload size by design, making it well-suited for low-bandwidth or intermittent networks.

3. Compatibility with FIWARE:

  • Specifically tailored to work seamlessly with the FIWARE ecosystem, enabling integration with its context brokers (e.g., Orion Context Broker) for IoT data management.

4. Ease of Implementation:

  • Simple structure and encoding allow developers to implement UltraLight 2.0 without requiring extensive protocol expertise.

5. Stateless Communication:

  • It operates over HTTP or HTTPS using stateless interactions, making it lightweight and scalable.

Advantages

  • Very low resource requirements, suitable for constrained devices.
  • Easy to understand and implement for developers.
  • Supports rapid prototyping in FIWARE-enabled IoT ecosystems.
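As an illustration of how simple the text encoding is, the sketch below parses an UltraLight 2.0 measurement payload, in which key|value pairs are separated by '|' and measurement groups by '#' (a minimal sketch assuming a well-formed payload; `parse_ultralight` is a name invented here):

```python
def parse_ultralight(payload: str) -> list:
    """Parse an UltraLight 2.0 measurement payload into a list of dicts.
    'key|value' pairs are separated by '|'; measurement groups by '#'."""
    groups = []
    for group in payload.split("#"):
        fields = group.split("|")
        groups.append({fields[i]: fields[i + 1] for i in range(0, len(fields) - 1, 2)})
    return groups

# Temperature and humidity in one group, luminosity in a second one:
print(parse_ultralight("t|25|h|45#l|810"))
# [{'t': '25', 'h': '45'}, {'l': '810'}]
```

Because the payload is a plain string, even the smallest microcontroller can produce it without a serialisation library.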

IoT Network Design Methodologies

Designing a network for the Internet of Things (IoT) requires a strategic approach integrating scalability, security, efficiency, and interoperability. IoT network design methodologies revolve around creating robust, flexible, and efficient networks supporting diverse devices, applications, and services. These methodologies emphasise handling large volumes of data, ensuring real-time communication, and maintaining high levels of security and reliability. This section explores the principles, methodologies, challenges, and best practices for designing IoT networks.

Key Principles of IoT Network Design

Below is a list of principles regarding IoT Network Design. Those principles vary from application to application but, in general, include (figure 33):

Key Principles of IoT Network Design
Figure 33: Key Principles of IoT Network Design
  • Scalability: IoT networks must accommodate the addition of millions of devices without degrading performance. This includes planning for future expansion regarding devices, data traffic, and services.
  • Interoperability: IoT systems often comprise devices from various vendors using different communication protocols. Designing for interoperability ensures seamless communication and data exchange.
  • Low Latency: Real-time applications like autonomous vehicles or healthcare monitoring require minimal latency to ensure timely actions and responses.
  • Energy Efficiency: Many IoT devices operate on battery power. Networks must minimise energy consumption to prolong device lifespans.
  • Security and Privacy: IoT networks must protect sensitive data from unauthorised access, breaches, and malicious attacks through encryption, secure protocols, and access controls.
  • Reliability: Networks should offer high uptime and ensure consistent performance, even during peak usage or failures.
  • Cost-Effectiveness: The design should balance performance with budget constraints, ensuring efficient resource utilisation.

IoT Network Design Methodologies

A short review of the IoT Network Design Methodologies is presented in figure 34 and described below.

IoT Network Design Methodologies
Figure 34: IoT Network Design Methodologies

1. Hierarchical Design
A hierarchical approach organises the IoT network into distinct layers, typically categorised as:

  • Perception Layer (Device Layer): Includes sensors, actuators, and devices that collect data.
  • Network Layer: Responsible for data transmission between devices and processing units via communication protocols.
  • Application Layer: Handles data processing, storage, and service delivery to end users.

Advantages

  • Simplifies management.
  • Optimises resource allocation at each layer.
  • Enhances scalability and modularity.

2. Edge-Centric Design
It focuses on processing data closer to where it is generated, at the network edge. Edge devices like gateways and edge servers handle computation, storage, and analysis.

Advantages

  • Reduces latency for time-sensitive applications.
  • Decreases data transmission costs by minimising reliance on cloud services.
  • Enhances privacy by processing sensitive data locally.

3. Mesh Networking
It employs a decentralised design where devices connect directly to each other in a peer-to-peer manner. Mesh networks are often used in smart homes, industrial IoT, and smart cities.

Advantages

  • High reliability due to redundant paths.
  • Simplifies network expansion.
  • Reduces single points of failure.

4. Centralised Design
It involves a hub-and-spoke model in which devices connect to a central controller, gateway, or server for data processing and management.

Advantages

  • Simplifies monitoring and control.
  • Suitable for small-scale IoT deployments.
  • Centralises security measures.

5. Cloud-Based Design
Data from IoT devices is transmitted to a centralised cloud platform for processing, storage, and management. Cloud providers also offer analytics, machine learning, and application integration services.

Advantages

  • Unlimited scalability and computing power.
  • Simplifies data analysis and application deployment.
  • Offers built-in security and redundancy.

6. Hybrid Design
It combines edge and cloud computing to leverage their benefits. Critical, low-latency tasks are processed at the edge, while large-scale analytics and storage are handled in the cloud.

Advantages

  • Balances latency and scalability.
  • Optimises resource utilisation.
  • Enhances flexibility for diverse applications.

Steps in IoT Network Design

Standard design workflow for IoT Networks includes the following steps (figure 35):

Steps in IoT Network Design
Figure 35: Steps in IoT Network Design

1. Requirement Analysis:
Identify the purpose of the IoT system, including device types, communication needs, expected data volumes, and performance requirements.

2. Topology Selection:
Choose the most suitable topology (e.g., star, mesh, tree, hybrid) based on the use case, device distribution, and scalability needs.

3. Protocol and Communication Technology:
Select protocols and technologies for connectivity:

  • Short-range: Bluetooth, Zigbee, Wi-Fi.
  • Long-range: LoRaWAN, NB-IoT, LTE-M.
  • Wired: Ethernet, Powerline communication.
  • Hybrid: Combining short-range and long-range technologies.

4. Bandwidth and Capacity Planning
Ensure the network can handle peak data loads without performance degradation.

5. Security Architecture:

  • Integrate encryption, authentication, and access control mechanisms.
  • Implement intrusion detection and prevention systems (IDPS).

6. Energy Management
Design for energy efficiency using low-power communication protocols and scheduling device wake-up times.

7. Testing and Optimisation

  • Conduct rigorous performance, reliability, and security testing under real-world conditions.
  • Optimise the design based on feedback and test results.

Challenges in IoT Network Design

IoT network design is a demanding process, and once started, it should address several challenges, including (figure 36) those presented and discussed below.

Challenges in IoT Network Design
Figure 36: Challenges in IoT Network Design

1. Device Diversity:
Supporting multiple device types, protocols, and standards is complex and may lead to compatibility issues.

2. Scalability:
Managing millions of devices and their data streams requires robust and scalable solutions.

3. Security Threats:
IoT networks are vulnerable to attacks such as DDoS, data breaches, and device hijacking. Integrating security systems into IoT networks is challenging due to hardware and networking resource constraints.

4. Latency Sensitivity:
Real-time applications demand ultra-low latency, which can be challenging in distributed environments.

5. Resource Constraints:
Balancing performance and energy efficiency for resource-constrained devices is a persistent challenge.

6. Regulatory Compliance
IoT networks must adhere to regional and industry-specific data privacy and security regulations.

Best Practices for IoT Network Design

Due to the complexity of the design process and the variety of approaches and options, a set of best practices has emerged as the IoT market has matured through many large- and small-scale real-life use cases. Each application has its specific requirements, but some common best practices exist, as presented in figure 37 and discussed below.

Best Practices for IoT Network Design
Figure 37: Best Practices for IoT Network Design

1. Use Standardised Protocols:
Ensure compatibility and interoperability by adopting widely accepted standards like MQTT, CoAP, and IPv6.

2. Implement Redundancy:
Incorporate failover mechanisms and redundant pathways to enhance reliability.

3. Prioritise Security:
Encrypt data, use secure boot processes, and enforce least privilege access policies.

4. Adopt Modular Architecture
Design the network using modular components to simplify maintenance and scalability.

5. Monitor and Manage:
Deploy monitoring tools to track performance, detect anomalies, and optimise resource utilisation.

6. Optimise for Energy Efficiency:
Use low-power wireless technologies and energy-efficient hardware.

IoT technologies are closely related to the development of general ICT technologies. At the moment, the significant factors driving the development of IoT networks are discussed below and briefly presented in figure 38.

Emerging Trends in IoT Network Design
Figure 38: Emerging Trends in IoT Network Design

1. 5G/6G Networks: Future IoT networks will leverage 5G/6G technologies to achieve ultra-low latency, massive connectivity, and enhanced reliability.

2. AI-Driven Network Management: Artificial intelligence (AI) and machine learning (ML) are used to optimise IoT network performance and predict potential failures.

3. Blockchain for Security: Blockchain technology is increasingly used to secure IoT networks by providing immutable, decentralised record-keeping.

4. Digital Twins: Digital twins enable real-time simulation and optimisation of IoT networks, improving design and operation.

5. Fog Computing: Extending the capabilities of edge computing, fog computing processes data closer to devices, enhancing speed and efficiency.

IoT network design methodologies are critical for creating robust, scalable, and secure ecosystems that can handle the diverse demands of IoT applications. By adhering to structured methodologies and staying informed about emerging trends, organisations can build IoT networks that are efficient, reliable, and prepared for future challenges.

IoT Network Design Tools

The design of a robust IoT (Internet of Things) network is fundamental to the success of any IoT project. A well-architected network ensures reliable communication between IoT devices, minimises latency, optimises power consumption, and enables efficient data transfer. However, building an IoT network is complex, requiring the integration of various technologies, protocols, and platforms. IoT network design tools assist in modelling, simulating, and managing the networks interconnecting the myriad IoT devices. This section explores the types of IoT network design tools, their features, and their use cases. A short list of tools is presented in figure 39.

IoT Network Design Tools
Figure 39: IoT Network Design Tools

Categories of IoT Network Design Tools

IoT network design tools can be classified into the following categories:

  1. Network Simulation Tools
  2. Network Protocol Design Tools
  3. IoT Connectivity and Communication Tools
  4. IoT Network Topology Design Tools
  5. Performance and Load Testing Tools
  6. Security Testing and Validation Tools
  7. End-to-End IoT Network Platforms

Network Simulation Tools

Before deployment, network simulation tools allow developers to create and test IoT networks virtually. These tools simulate the behaviour of devices, communication protocols, and network conditions, allowing for better planning, optimisation, and troubleshooting.

Common Tools
a. Cisco Packet Tracer

  • Features: Network simulator and visual tool for IoT networks.
  • Use Case: It is widely used for learning and testing IoT network designs. It allows the simulation of network protocols like TCP/IP, HTTP, and MQTT.
  • Key Benefits: Low cost, easy-to-use interface, and the ability to simulate IoT device configurations.

b. OMNeT++

  • Features: Open-source, modular simulation framework for simulating IoT and wireless networks.
  • Use Case: Primarily used for academic research, OMNeT++ allows the simulation of large-scale IoT networks, including modelling communication protocols like Zigbee, LoRa, and NB-IoT.
  • Key Benefits: Flexibility in modelling network conditions, protocol analysis, and support for various IoT scenarios.

c. NS3 (Network Simulator 3)

  • Features: A discrete-event network simulator supporting IoT protocols, 5G, and Wi-Fi simulations.
  • Use Case: Ideal for testing network performance, including IoT communication methods such as LoRaWAN, Zigbee, and NB-IoT.
  • Key Benefits: High-level simulation capabilities, scalability, and integration with real-world traffic patterns.

d. Castalia

  • Features: A simulation environment for wireless sensor networks, including IoT devices.
  • Use Case: Often used in academic research to simulate low-power IoT networks and energy consumption.
  • Key Benefits: Focus on energy-efficient devices, low-power sensor networks, and resource-constrained environments.

Network Protocol Design Tools

IoT networks require robust communication protocols to enable devices to exchange data efficiently. Network protocol design tools help define and optimise these protocols, ensuring they meet the specific needs of IoT environments.

Common Tools

a. Wireshark

  • Features: A popular network protocol analyser that supports many IoT protocols like MQTT, CoAP, and HTTP.
  • Use Case: Wireshark is used to capture and analyse packets in the network to diagnose issues with IoT protocol communication.
  • Key Benefits: Real-time packet inspection, detailed protocol analysis, and customisable filters.

b. Mininet

  • Features: A network emulator that creates custom virtual network topologies for testing network protocols.
  • Use Case: Used to test the interaction of IoT protocols and evaluate their scalability.
  • Key Benefits: High flexibility in designing and emulating IoT network topologies and protocols.

c. MQTT.fx

  • Features: This tool for MQTT protocol testing provides a client interface for monitoring and interacting with MQTT brokers.
  • Use Case: Used for testing communication between IoT devices using the MQTT protocol.
  • Key Benefits: Allows testing and troubleshooting of MQTT-based communication, including message payload inspection.

IoT Connectivity and Communication Tools

Connectivity is at the heart of any IoT network. These tools are designed to help manage and optimise the communication between IoT devices and their associated infrastructure (gateways, clouds, etc.).

Common Tools

a. LoRaWAN Network Server (LNS)

  • Features: A tool for managing LoRaWAN (Long Range Wide Area Network) devices commonly used for low-power, long-range IoT communication.
  • Use Case: It is widely used in applications like smart agriculture and remote monitoring where long-range connectivity is critical.
  • Key Benefits: Efficient management of LoRaWAN devices, network traffic monitoring, and data encryption.

b. Zigbee2MQTT

  • Features: Connects Zigbee devices to an MQTT broker, providing a standardised way of communicating with Zigbee IoT devices.
  • Use Case: Commonly used for home automation applications like smart lighting and thermostats.
  • Key Benefits: It enables seamless communication between Zigbee and MQTT systems and supports a wide range of Zigbee devices.

c. NB-IoT (Narrowband IoT) Design Tools

  • Features: Tools designed to simulate and optimise narrowband IoT networks that use cellular connectivity.
  • Use Case: Ideal for smart city applications, asset tracking, and industrial IoT solutions where low bandwidth and energy efficiency are critical.
  • Key Benefits: Enables the design and optimisation of networks with low power and high device density.

IoT Network Topology Design Tools

Designing an efficient network topology is critical in IoT systems. These tools help create the architecture of an IoT network, determine how devices communicate with each other, and ensure data flows efficiently.

Common Tools

a. UVexplorer

UVexplorer is a network discovery and visualisation tool that simplifies the mapping and monitoring of network devices. For more details, see [9].

Features Useful for IoT Networks

1. Network Discovery:

  • UVexplorer uses SNMP, ICMP, WMI, and other protocols to discover network devices.
  • In an IoT network, it can identify connected devices such as sensors, gateways, and IoT hubs.

2. Topology Mapping:

  • Provides visual topology maps that show the relationships between IoT devices and other network components.
  • Helps design IoT networks by identifying potential bottlenecks and areas with redundant or insufficient connectivity.

3. Device Inventory:

  • Generates an inventory of all devices in the IoT network with detailed information about each device.
  • Enables asset tracking for large IoT deployments, ensuring all devices are accounted for.

4. Troubleshooting:

Quickly identifies issues like unreachable devices, misconfigurations, or overloaded connections, which are critical in IoT networks where uptime is essential.

Possible use in IoT Network Design

  • Pre-Deployment: Helps plan the physical and logical layout of IoT devices by visualising the network.
  • Post-Deployment: Validates the network design by ensuring all devices are correctly configured and connected.
  • Scalability: Assists in scaling IoT networks by providing insights into device distribution and potential expansion areas.

b. Lucidchart

  • Features: A web-based diagramming tool for designing IoT network topologies.
  • Use Case: Ideal for creating detailed network topology diagrams representing device connections, data flow, and communication protocols.
  • Key Benefits: Intuitive drag-and-drop interface, real-time collaboration, and extensive template library.

c. ManageEngine OpManager

ManageEngine OpManager is a comprehensive network management tool designed to monitor, manage, and maintain the health of IT and IoT infrastructure.

Features Useful for IoT Networks

1. Real-Time Monitoring:

  • It can continuously monitor the health and performance of IoT devices, including sensors, controllers, and gateways.
  • Tracks metrics such as uptime, latency, and device status.

2. Alerting and Notifications:

  • Sends real-time alerts for device downtime, threshold breaches, or abnormal behaviour.
  • Essential for proactive IoT network management to minimise downtime.

3. Performance Management:

  • Provides detailed insights into the performance of devices and links in the IoT network.
  • Helps identify underperforming devices or overloaded network segments.

4. Custom Dashboards:

  • Allows the creation of dashboards tailored to specific IoT use cases, displaying critical metrics for the entire network.

5. Integration with IoT Protocols:

Performance and Load Testing Tools

IoT networks need to be able to handle high device densities and traffic loads without compromising performance. These tools allow for testing the performance of IoT networks under varying conditions.

Common Tools

a. iPerf

  • Features: Network testing tool that measures bandwidth and performance between two devices.
  • Use Case: Used for testing network throughput and latency in IoT systems.
  • Key Benefits: Measures critical network metrics and helps to optimise network conditions.

b. JMeter

  • Features: Open-source performance testing tool that supports IoT network stress testing.
  • Use Case: Used to test IoT networks' scalability and load-handling capabilities, including simulated device traffic.
  • Key Benefits: Detailed reporting, scalability, and extensibility.

c. LoadRunner

  • Features: A performance testing tool that can simulate the load from thousands of IoT devices.
  • Use Case: Employed to understand how IoT networks perform under heavy loads and ensure optimal configuration before full deployment.
  • Key Benefits: Scalable testing, detailed performance metrics, and compatibility with IoT protocols.

Security Testing and Validation Tools

Security is a significant concern in IoT networks. These tools help to identify vulnerabilities and ensure that IoT systems are secure against cyber threats.

Common Tools

a. Wireshark (as mentioned above)

  • Use Case: Analyses network traffic for vulnerabilities, including IoT-specific communication protocols like MQTT, CoAP, and Zigbee.
  • Key Benefits: Helps identify potential security gaps in IoT network communication.

b. Nessus

  • Features: A vulnerability scanning tool that checks for known security issues.
  • Use Case: Used to perform security audits on IoT devices and networks, identifying vulnerabilities before deployment.
  • Key Benefits: Comprehensive vulnerability scanning, frequent updates, and user-friendly reporting.

c. Kali Linux

  • Features: A security-focused operating system with a suite of penetration testing tools.
  • Use Case: Employed to test IoT network security, including identifying insecure communication channels or exposed devices.
  • Key Benefits: A comprehensive suite of tools for ethical hacking and security validation.

End-to-End IoT Network Platforms

End-to-end IoT network platforms provide a complete solution for managing IoT networks, from device connectivity to cloud-based data analytics and security.

Mathematical Modeling as a Tool for Designing IoT Networks

Designing efficient, reliable, and scalable IoT networks requires addressing challenges such as resource optimisation, communication reliability, scalability, energy efficiency, and security. Mathematical modelling is a powerful tool for tackling these challenges by providing a structured framework for analysing, simulating, and optimising IoT systems.

Key Applications of Mathematical Modeling in IoT Network Design

1. Network Topology Design
Mathematical models help design network topologies by optimising the placement of devices and gateways. Graph theory often represents IoT networks, where devices are nodes and communication links are edges. Models analyse the trade-offs between cost, latency, and coverage, enabling the design of efficient topologies.

  • Example: Finding the optimal placement of base stations in a smart city to maximise coverage while minimising deployment costs.
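For instance, if candidate links between a gateway and sensor clusters are modelled as a weighted graph, a minimum spanning tree gives the cheapest set of links that still connects every node. A small sketch with Kruskal's algorithm, using hypothetical nodes and link costs:

```python
def kruskal_mst(nodes, edges):
    """Kruskal's minimum spanning tree.
    edges: iterable of (cost, u, v); returns (total_cost, chosen_links)."""
    parent = {n: n for n in nodes}

    def find(x):  # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for cost, u, v in sorted(edges):      # consider the cheapest links first
        ru, rv = find(u), find(v)
        if ru != rv:                      # keep the link only if it joins two components
            parent[ru] = rv
            total += cost
            chosen.append((u, v))
    return total, chosen

# Hypothetical deployment: one gateway (GW) and three sensor clusters.
links = [(1, "GW", "A"), (4, "GW", "B"), (2, "A", "B"), (5, "B", "C"), (3, "A", "C")]
cost, tree = kruskal_mst(["GW", "A", "B", "C"], links)
print(cost, tree)  # 6 [('GW', 'A'), ('A', 'B'), ('A', 'C')]
```

The total cost of 6 is the minimum needed to connect all four nodes; any other connecting subset of links costs more.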

2. Resource Allocation and Optimisation
IoT networks have limited resources like bandwidth, energy, and computational power. Optimisation techniques, such as linear programming (LP), integer programming, and heuristic methods, are used to allocate these resources effectively.

  • Example: Energy-aware scheduling models optimise the energy consumption of sensor nodes to extend network lifetime.

3. Communication and Data Flow Management
Mathematical models ensure reliable data transmission in IoT networks by addressing packet loss, latency, and congestion issues. Queueing theory is often applied to model data traffic, while game theory can optimise device decision-making.

  • Example: Modeling multi-hop communication to minimise delays in industrial IoT applications.

4. Scalability Analysis
IoT networks often grow as more devices are added. Mathematical models help predict the network's performance under scaling scenarios and determine the maximum capacity before degradation occurs.

  • Example: Using queuing models to analyse the impact of increasing device density on data throughput.
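As a concrete instance of such a queueing analysis, the classical M/M/1 model gives closed-form steady-state metrics: utilisation rho = lambda/mu, mean number in the system L = rho/(1-rho), and mean time in the system W = 1/(mu-lambda). A simplified sketch with hypothetical rates (`mm1_metrics` is a name invented here):

```python
def mm1_metrics(arrival_rate: float, service_rate: float):
    """Steady-state metrics of an M/M/1 queue (requires arrival < service rate)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: utilisation would reach 1")
    rho = arrival_rate / service_rate               # utilisation
    mean_in_system = rho / (1 - rho)                # avg. queued + in-service messages
    mean_delay = 1 / (service_rate - arrival_rate)  # avg. time in system (seconds)
    return rho, mean_in_system, mean_delay

# A gateway serving 100 msg/s, offered 80 msg/s by the sensor field:
rho, L, W = mm1_metrics(80, 100)
print(rho, round(L, 2), W)  # 0.8 4.0 0.05
```

Doubling device density toward 100 msg/s drives rho toward 1, and both L and W grow without bound, which is exactly the degradation point such models help locate.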

5. Security and Privacy Modelling
Ensuring data security and privacy is critical in IoT networks. Cryptographic algorithms and intrusion detection systems are often modelled using probability theory and stochastic processes to evaluate their effectiveness.

  • Example: Markov models for intrusion detection systems to predict potential security breaches.
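A minimal version of such a model is a two-state chain (normal/compromised) with hypothetical per-step transition probabilities; iterating the chain converges to the steady-state fraction of time a device is expected to be compromised (`compromised_fraction` is a name invented for this sketch):

```python
def compromised_fraction(p_attack: float, p_recover: float, steps: int = 5000) -> float:
    """Steady-state probability of the 'compromised' state in a two-state
    Markov chain: normal -> compromised with p_attack, back with p_recover."""
    p_normal, p_comp = 1.0, 0.0                # start in the normal state
    for _ in range(steps):                     # repeatedly apply the transition matrix
        p_normal, p_comp = (
            p_normal * (1 - p_attack) + p_comp * p_recover,
            p_normal * p_attack + p_comp * (1 - p_recover),
        )
    return p_comp

# Hypothetical rates: 1% chance of compromise, 30% chance of recovery per step.
frac = compromised_fraction(0.01, 0.30)
print(round(frac, 4))  # 0.0323, matching the closed form 0.01 / (0.01 + 0.30)
```

The closed-form result p_attack / (p_attack + p_recover) shows directly how improving recovery (e.g., faster patching) lowers long-run exposure.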

6. Energy Efficiency
IoT devices, especially in wireless sensor networks, often rely on battery power. Mathematical models optimise energy usage through sleep-wake cycles, energy harvesting, and efficient communication protocols.

  • Example: Optimisation models to balance energy consumption between data collection and transmission in a remote monitoring system.
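The trade-off in the example above can be quantified with simple duty-cycle arithmetic: the average current draw is duty_cycle * I_active + (1 - duty_cycle) * I_sleep, and lifetime is battery capacity divided by that average (all figures below are hypothetical; `battery_lifetime_hours` is a name invented for this sketch):

```python
def battery_lifetime_hours(capacity_mah: float, active_ma: float,
                           sleep_ma: float, duty_cycle: float) -> float:
    """Estimate battery lifetime of a duty-cycled sensor node in hours."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma

# Hypothetical node: 2400 mAh battery, 20 mA while sensing/transmitting,
# 0.01 mA in deep sleep, awake 1% of the time.
hours = battery_lifetime_hours(2400, 20, 0.01, 0.01)
print(round(hours))  # about 11434 hours, i.e. roughly 1.3 years
```

The same node kept always-on would last only 2400 / 20 = 120 hours, which is why sleep-wake scheduling dominates energy-aware IoT design.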

Mathematical Techniques Commonly Used in IoT Design

1. Optimisation Techniques

  • Linear Programming (LP)
  • Integer Programming (IP)
  • Nonlinear Programming (NLP)
  • Multi-objective Optimisation

2. Stochastic Processes and Probability Models

  • Markov Chains
  • Diffusion approximation
  • Poisson Processes

3. Graph Theory

  • Minimum Spanning Tree for optimal connectivity
  • Shortest Path algorithms for routing
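As a sketch of the routing case above, Dijkstra's shortest-path algorithm over a small hypothetical mesh of sensor nodes (node names and link costs are invented for illustration):

```python
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Shortest-path costs from source; graph maps node -> {neighbour: cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry, already improved
        for neigh, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(heap, (nd, neigh))
    return dist

# Hypothetical mesh with per-hop link costs (e.g., latency or energy):
mesh = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}
print(dijkstra(mesh, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

Note that the cheapest route from A to D goes through B and C (cost 4) rather than over the direct-looking B-D link (cost 6), which is the kind of result routing protocols in mesh networks rely on.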

4. Game Theory

  • Nash Equilibrium for resource allocation
  • Cooperative strategies in device-to-device communication.

5. Queueing Theory

  • Traffic modelling
  • Latency and throughput analysis

Advantages of Mathematical Modelling in IoT Networks

  • Predictive Insights: Models provide foresight into network behaviour under various conditions, enabling proactive design adjustments.
  • Efficiency: Optimising resource allocation reduces costs and improves performance.
  • Scalability: Models guide the design of networks that can handle growth without significant redesign.
  • Customisation: Models can be tailored to specific applications, such as smart homes, healthcare, or industrial automation.
  • Reliability: Robust models ensure that networks maintain performance despite uncertainties or failures.

Challenges and Future Directions

  • Complexity: Modelling real-world IoT networks is challenging due to their heterogeneous and dynamic nature.
  • Computational Overheads: Solving complex models may require high computational resources, making real-time application difficult.
  • Integration with AI: Combining mathematical models with machine learning techniques can enhance predictive and adaptive capabilities.

Future research may focus on hybrid approaches, integrating mathematical models with simulation and AI to address the evolving complexity of IoT ecosystems. Mathematical modelling will remain a cornerstone in designing robust, efficient, and future-ready IoT networks.

System Dynamics Modelling as a Tool for Designing Secure and Efficient IoT Systems, Applications, and Networks

The Internet of Things (IoT) is a transformative technological paradigm still in its early stages of development. As IoT adoption continues to grow, there is an opportunity to design systems that are scalable, energy-efficient, cost-effective, interoperable, and secure by design while maintaining an acceptable level of Quality of Service (QoS). Achieving these objectives requires a holistic, system-centric approach that balances stakeholders' diverse and sometimes conflicting goals, including network operators, service providers, regulators, and end users.

The Need for Systems Thinking and System Dynamics in IoT

IoT systems are inherently complex, involving the interaction of heterogeneous devices, communication protocols, networks, applications, and stakeholders. Traditional design approaches, which often focus on isolated components, fail to address the interdependencies and dynamic behaviours that characterise these systems. Systems Thinking and System Dynamics (SD) provide a structured framework for analysing and addressing this complexity.

Key Benefits of Systems Thinking in IoT

  1. Holistic Understanding: Enables designers to view the IoT ecosystem as interconnected, capturing the interdependencies between devices, networks, users, and the environment.
  2. Identification of Feedback Loops: Helps in understanding how actions taken in one part of the system may influence others, sometimes with unintended consequences.
  3. Stakeholder Goal Alignment: Balances the needs of different stakeholders by identifying trade-offs and synergies.
  4. Improved Decision-Making: Facilitates the exploration of alternative scenarios, enabling informed choices during the design, operation, and maintenance phases.

Application of System Dynamics in IoT Design

System Dynamics (SD), as an extension of Systems Thinking, uses modelling and simulation tools to analyse the structure and behaviour of complex systems over time. By employing both qualitative and quantitative methods, SD helps in the design and operation of IoT systems with the following objectives:

1. Modeling Interactions:
SD tools like causal loop diagrams (CLDs) and stock-and-flow diagrams are instrumental in visualising the interactions between IoT devices, networks, and environmental factors. For instance:

  • CLDs can map the relationships between energy consumption, device uptime, and security mechanisms.
  • Stock-and-flow models can represent data accumulation, energy usage, and latency in IoT networks.
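As an illustration of the stock-and-flow idea, the following minimal sketch treats a sensor node's battery charge as a stock with a solar-charging inflow and a consumption outflow; all rates and capacities are hypothetical values chosen for demonstration.

```python
# Stock-and-flow sketch: a sensor node's battery charge as a stock (values hypothetical).
# Inflow: solar charging; outflow: consumption by sensing and transmission.

def simulate_battery(hours: int, charge_mah: float = 2000.0, capacity: float = 2000.0,
                     solar_in: float = 50.0, drain: float = 80.0) -> list:
    """Euler-step the battery stock over time; rates are in mAh per hour."""
    history = [charge_mah]
    for _ in range(hours):
        charge_mah += solar_in - drain                    # net flow into the stock
        charge_mah = max(0.0, min(charge_mah, capacity))  # physical limits of the stock
        history.append(charge_mah)
    return history

if __name__ == "__main__":
    trace = simulate_battery(hours=10)
    print(trace[-1])  # battery level after 10 hours at a net -30 mAh/h
```

The same pattern extends naturally to data accumulation or latency stocks by changing what the flows represent.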

2. Scenario Analysis: SD allows the simulation of various operational scenarios, such as introducing new devices, changes in traffic patterns, or security breaches, to predict system behaviour and identify potential vulnerabilities.

3. Optimisation of Resource Utilisation:
By modelling IoT networks, SD can identify inefficiencies in energy consumption, bandwidth allocation, and computational resource usage, guiding improvements in cost and energy efficiency.

4. Designing Secure IoT Systems:
Security in IoT is a critical challenge due to the heterogeneity of devices and networks. SD can:

  • Model the impact of potential attacks on system performance.
  • Simulate the effects of different security measures, such as encryption or anomaly detection, on latency and energy consumption.
  • Evaluate trade-offs between security and other performance metrics.

5. Feedback-Driven Improvement: SD models incorporate feedback loops, which are crucial for designing systems capable of self-adaptation. For example:

  • Positive feedback loops can represent the propagation of security breaches in IoT networks.
  • Negative feedback loops can simulate the activation of mitigation mechanisms, such as automated device isolation.
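These two loops can be sketched together in a toy simulation: a reinforcing (positive) loop spreads a breach through a device population, while a balancing (negative) loop models automated isolation. The rates and population size below are assumptions for illustration only.

```python
# Feedback-loop sketch: breach propagation in a device population (hypothetical rates).
# Reinforcing (positive) loop: compromised devices infect susceptible neighbours.
# Balancing (negative) loop: automated isolation removes compromised devices.

def simulate_breach(steps: int, total: int = 1000, infected: float = 1.0,
                    infect_rate: float = 0.4, isolate_rate: float = 0.2) -> list:
    history = [infected]
    for _ in range(steps):
        susceptible = total - infected
        new_infections = infect_rate * infected * susceptible / total  # positive loop
        isolated = isolate_rate * infected                             # negative loop
        infected = max(0.0, infected + new_infections - isolated)
        history.append(infected)
    return history

if __name__ == "__main__":
    with_mitigation = simulate_breach(steps=50)
    no_mitigation = simulate_breach(steps=50, isolate_rate=0.0)
    print(round(with_mitigation[-1]), round(no_mitigation[-1]))
```

Running both variants shows how the balancing loop caps the outbreak well below the full population, while without it almost every device is eventually compromised.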

Case Studies and Applications in IoT Security and Efficiency

1. Smart Agriculture (e.g., Rice Farming):
As demonstrated in a study cited in [10], SD was used to develop causal loop diagrams to understand the interactions between environmental factors, IoT-enabled sensors, and farming outcomes. By identifying key leverage points, the researchers proposed IoT-based solutions to enhance rice productivity while minimising resource use.

2. Energy Management in Smart Grids:
IoT systems in smart grids involve dynamic interactions between energy generation, storage, and consumption. SD has been applied to:

  • Model energy flows and predict usage patterns.
  • Optimise the integration of renewable energy sources.
  • Enhance grid resilience against cyberattacks.

3. Healthcare IoT:
In IoT-enabled healthcare systems, SD tools have been used to analyse:

  • Patient monitoring device interactions.
  • The trade-offs between data privacy, real-time monitoring, and system scalability.
  • Feedback loops in health outcomes and device reliability.

4. IoT Security Simulation:
SD models simulate the effects of cyberattacks, such as Distributed Denial of Service (DDoS), to evaluate the resilience of IoT networks. These simulations help design proactive strategies, such as anomaly detection algorithms and dynamic resource allocation.

Comprehensive Framework for IoT Design
A comprehensive framework is needed to address IoT systems' growing complexity and evolving requirements. This framework should integrate:

  1. Systems Thinking: This is used to conceptualise IoT systems as interconnected ecosystems.
  2. System Dynamics: For modelling and simulating dynamic interactions and behaviours.
  3. Design Thinking: For user-centric innovation, focusing on ease of use, scalability, and adaptability.
  4. Systems Engineering: For formalising processes in the design, implementation, and maintenance of IoT systems, ensuring alignment with stakeholder goals.
  5. Quantitative and Qualitative Approaches: Combining causal loop diagrams (qualitative) and stock-and-flow models (quantitative) to capture IoT systems' structural and behavioural aspects.

The application of Systems Thinking and System Dynamics in IoT security and efficiency offers a powerful approach to navigating the complexities of modern IoT ecosystems. By focusing on feedback loops, stakeholder goals, and holistic modelling, these methodologies provide the tools to design IoT systems that are not only secure and reliable but also scalable, interoperable, and energy-efficient. Future research should emphasise the development of integrated frameworks that combine qualitative insights with quantitative rigour, paving the way for robust IoT solutions that address current and emerging challenges.

IoT System Architectures

IoT vs. Wireless Sensor Networks (WSNs)

People often equate IoT systems with WSN systems (figure 40), a view that is close but inaccurate. WSNs have several distinctive features among other systems:

  1. Wireless sensing: Sensors that observe (sense) their surroundings and transmit data to a data concentration hub; these sensor devices are sometimes called Nodes or Motes.
  2. Self-configuration: Typically, WSNs can self-configure due to their open architecture; hopping communication is based on Node-to-Node links with dynamic pathfinding, so they are capable of infrastructure-less deployment.
  3. Limited resources: Power consumption and size limit the available computing power and memory. Therefore, WSNs typically provide value through a synergy of multiple simple measurements instead of a single, more complex one, such as a hyperspectral camera or MIMO radar.
 Typical WSN Architecture
Figure 40: Typical WSN Architecture

WSN systems, depending on their application and technical solutions, might be split into several groups:

  1. Terrestrial WSNs: Enable the use of large numbers of nodes in unstructured (random) or structured (pre-planned) deployments. In both cases, solar energy might be used as an additional power source besides limited batteries and energy-saving (low-duty-cycle) use policies.
  2. Underground WSNs: Usually structured deployment underground with limited communication distances. Expensive deployment and maintenance. Typical application – civil construction.
  3. Underwater WSNs: Nodes are limited in communication distances and bandwidths. Data is collected by manned or unmanned surface water vehicles. Wave energy might be used to recharge batteries.
  4. Mobile WSNs: In addition to the functions mentioned above, Mobile WSNs are capable of self-propelling to relocate or interact with their environment.
  5. Multimedia WSNs: Low-cost sensors that sense and pre-process noise, sound, images, etc. They require higher-bandwidth communications and higher battery capacities.

Typical Network Topologies of WSNs

Depending on the application and particular functionality, WSN systems employ one of the following typical topologies:

Star network (single point to multi-point, figure 41):

  • The central node manages the network.
  • Usually, only the central node has the right to initiate messages, so it can control the power consumption of the network.
  • Easy to manage and power-efficient.
  • Every node has to be within the transmission range of the central node.
 Star Network
Figure 41: Star Network

Mesh network (figure 42):

  • Allows messages from Node to Node within the range
  • Multi-hop communications are allowed
  • Enables high redundancy and ad-hoc solutions
  • Requires higher power consumption of the Nodes
 Mesh Network
Figure 42: Mesh Network

Hybrid Star (figure 43):

  • Combines the benefits of high redundancy and multi-hop communication while keeping power consumption at minimum levels;
  • Usually applies restrictions defining which Nodes are and are not allowed to forward messages;
  • Multi-hop Nodes are usually mains-powered (plugged in).
 Hybrid Star
Figure 43: Hybrid Star

Difference Between WSN and IoT Systems

Due to developments in infrastructure and communications technologies, IoT has grown far beyond the simple interconnected devices of WSNs. While an IoT system might include a WSN as its part, the IoT system's functionality and application goals shift more towards decision-making and deeper data analysis. Because of growing processing power, the availability of global wireless infrastructure, and the synergies between them, IoT can solve complex tasks and support complex decisions.

WSN vs. IoT challenges: Since the beginning, WSNs have been challenged by the availability of reliable data transport and power consumption. IoT faces different challenges:

  • Hybrid computation capabilities – HPC, CPU, GPU, GRID, Mobile devices, Different multi-core architectures for AI;
  • Data security and management – Who is responsible for what in a global system?
  • Data source trust and reliability – bitwise security and traceability of sources;
  • Mobile AI capacities for complex decisions in real-time
  • Interaction with smart environments – smart appliances, smart cities, smart vehicles: what are the measures of connectivity, availability, trustworthiness, etc.?

IoT System Architectures

IoT is a network of physical things or devices that might include sensors or simple data processing units, complex actuators, and significant hybrid computing power. Today, IoT systems have transitioned from being perceived as sensor networks to smart-networked systems capable of solving complex tasks in mass production, public safety, logistics, medicine and other domains, requiring a broader understanding and acceptance of current technological advancements, including advanced AI data processing.

Since the very beginning of sensor networks, one of the main challenges has been data transport and data processing, where significant efforts have been put by the ICT community towards service-based system architectures. However, the current trend already provides considerable computing power, even for small mobile devices. Therefore, the concepts of future IoT already shifted towards more innovative and more accessible IoT devices, and data processing has become possible closer to the Fog and Edge.

Cloud Computing

Cloud-based computing (figure 44) is a relatively well-known and widely employed paradigm in which IoT devices interact with remotely shared resources such as data storage, processing, and mining, i.e. services unavailable to them locally because of constrained hardware resources (CPU, ROM, RAM) or energy-consumption limits. Although the cloud computing paradigm can handle vast amounts of data from IoT clusters, transferring extensive data to and from cloud computers presents a challenge due to limited bandwidth [11]. Consequently, there is a need to process data near the data sources, employing the increasing number of smart devices with substantial processing power and the rising number of service providers available for IoT systems.

 Cloud IoT System Architecture
Figure 44: Cloud IoT System Architecture
Fog Computing

Fog computing (figure 45) addresses the bottlenecks of cloud computing regarding data transport while providing the needed services to IoT systems. Fog computing is a trend that aims to process data near its source: it pushes applications, services, data, computing power, and decision-making away from centralised nodes to the logical extremes of a network. Fog computing significantly decreases the data volume that must be moved between end devices and the cloud and enables data analytics and knowledge generation closer to the data source. Furthermore, the dense geographic distribution of fog helps attain better-localised accuracy for many applications than cloud processing of the data [12].
The recent development of energy-efficient hardware with AI acceleration falls into the fog class of devices, putting fog computing at the centre of interest in IoT application development and opening new horizons. Fog computing is also more energy-efficient than transferring raw data to the cloud and back, which matters at the current scale of IoT deployments. Fog computing usually has a positive impact on IoT security as well, e.g. by sending pre-processed and depersonalised data to the cloud and by providing distributed computing capabilities that are more attack-resistant.

 Fog IoT System Architecture
Figure 45: Fog IoT System Architecture
Edge Computing

Recent developments in hardware, power efficiency, and a better understanding of the nature of IoT data, including privacy and security, have led to solutions where data is processed and pre-processed right at its source, in the Edge class of devices. Edge data processing on end-node IoT devices is crucial in systems where privacy is essential and sensitive data must not be sent over the network (e.g. biometric data in raw form). Moreover, distributed data processing can be more energy-efficient in some scenarios, e.g. where extensive, power-consuming processing can be performed while green energy is available (figure 46).

 Edge IoT System Architecture
Figure 46: Edge IoT System Architecture

While Cloud, Fog, and Edge systems might seem the same to the end user from a functionality perspective, they are very different and provide different performance, scalability, and computing capabilities, which are emphasised in the following comparison, presented in figure 47.

 Differences between Cloud and Edge IoT Systems
Figure 47: Differences between Cloud and Edge IoT Systems
Cognitive IoT Systems

According to [13], Cognitive IoT, besides a proper combination of hardware, sensors and data transport, comprises cognitive computing, which consists of the following main components:

  • understanding – in the case of IoT, it means systems' capability to process a significant amount of structured and unstructured data, extract the meaning of the data – produce a model that binds data to reality,
  • reasoning – involves decision-making according to the understood model and acquired data,
  • learning – creating new knowledge from the existing, sensed data and elaborated models.

Usually, cognitive IoT systems, or C-IoT, are expected to add more resilience to the solution. Resilience is a complex term explained differently in different contexts; however, there are standard features of all resilient systems. As part of their resilience, C-IoT should be capable of self-failure detection and self-healing that minimises the loss of, or gracefully degrades, the system's overall performance; a non-resilient system, in contrast, fails or degrades in a step-wise manner. In case of security issues, the system should be able to change its security keys and encryption algorithms and take other measures to cope with detected threats. Self-optimisation abilities are often considered part of the C-IoT feature list, providing more robust solutions. Recent developments in Fog and Edge class devices and efficient software leverage cognitive IoT systems to a new level.

All IoT System Architectures presented before, from cloud to cognitive systems, focus on adding value to IoT devices, system users, and related systems on demand. Since market and technology acceptance of mobile devices is still growing, and the amount of produced data from those devices is growing exponentially, mobility as a phenomenon is one of the main driving forces of the technological advancements of the near future.

IoT Data Analysis

IoT systems are built to provide better insights into different processes and systems to make better decisions. The insights are provided by measuring the statuses of the systems or process elements represented by data. Unfortunately, the bits and bytes become useless without adequately interpreting the data content. Therefore, providing a means for understanding data is an essential property of a modern IoT system. Today, IoT systems produce a vast amount of data, which is very hard to use manually. Thanks to modern hardware and software developments, it is possible to develop fully or semi-automated systems for data analysis and interpretation, which may go further into decision-making and acting according to the decisions.

As various resources have stated, IoT, in most cases, complies with the so-called 5Vs of Big Data, where matching just one of them is enough to constitute a Big Data problem. As explained by Jain et al. [14], Big Data might be of different forms, volumes and structures, and in general the 5Vs, i.e. Volume, Variety, Veracity, Velocity and Value, might be interpreted as follows:

Volume

This characteristic is the most obvious and refers to the size of the data. In most practical applications of IoT systems, large volumes of data are reached through intensive production and collection of sensor data. It usually rapidly populates existing operational systems and requires dedicated IoT data collection systems to be upgraded or developed from scratch (which is more advisable).

Variety

Jain explained that big data is highly heterogeneous regarding source, kind, and nature. Having different systems, processes, sensors, and other data sources, variety is usually a distinctive feature of practical IoT systems. For instance, a system of intelligent office buildings would need data from a building management system, appliances and independent sensors, and external sources like weather stations or forecasts from appropriate external weather forecast APIs (Application programming interfaces). Additionally, the given system might require historical data from other sources, like XML documents, CSV files or other sources, diversifying the sources even more.

Veracity

Unfortunately, the volume or diversity of data does not bring value by itself; the data needs to be reliable and clean. In other words, data has to be of good quality; otherwise, the analysis might not bring additional value to the system's owner or might even compromise the decision-making process. The quality of data is represented by Veracity. In IoT applications, it is easy to lose data quality due to malfunctioning sensors that miss readings or produce false data. Since hardware is an essential part of IoT, the data must be preprocessed in most cases.

Velocity

Data velocity characterises the data bound to the time and its importance during a specific period or at a particular time instant. A good example might be any real-time system like an industrial process control system, where reactions or decisions must be made during a fixed period, requiring data at particular time instants. In this case, data has a flow nature of a specific density.

Value

Since IoT systems and their data analysis subsystems are built to add value to their owners, the development and ownership costs should not exceed the returned value. A system whose costs outweigh its benefits is of low or no value.

Dealing with big data requires specific hardware and software infrastructure. While there is a certain number of typical solutions and many more customised ones, some of the most popular are explained here:

Relational DB-based systems

Those systems are based on well-known relational data models and appropriate database management systems like MS SQL Server, Oracle Server, MySQL, etc. There are some advantageous features of those systems, for instance:

  • Advantages of SQL (Structured Querying Language): enabling easy data manipulation while maintaining a relatively good expressiveness of the data model.
  • A well-designed set of software tools and interfaces enabling integration with many different systems.
  • A lot of built-in data processing routines (stored procedures) provide higher development productivity.
  • Enables asynchronous reactions to events by triggering internal events.
  • Data reading might be scaled out using multiple entities, while writing might be scaled up using more productive servers.

Unfortunately, scaling out data writing (figure 48) is not always possible and usually comes at a high cost for software products.

 Relational DBMS Scaling Options
Figure 48: Relational DBMS Scaling Options
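As a minimal illustration of the relational approach, the sketch below stores sensor readings in SQLite (standing in for MS SQL Server, Oracle or MySQL) and lets SQL perform the aggregation close to the data; the table and column names are hypothetical.

```python
# Relational-store sketch: sensor readings in SQLite (stand-in for a full RDBMS).
import sqlite3

def average_per_sensor(rows):
    """Load (sensor_id, ts, temperature) tuples and aggregate them with SQL."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE readings (sensor_id TEXT, ts INTEGER, temperature REAL)")
    conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
    result = dict(conn.execute(
        "SELECT sensor_id, AVG(temperature) FROM readings GROUP BY sensor_id"))
    conn.close()
    return result

if __name__ == "__main__":
    rows = [("s1", 1, 21.5), ("s1", 2, 22.0), ("s2", 1, 19.8)]
    print(average_per_sensor(rows))
```

This illustrates the SQL advantage named above: a declarative query replaces hand-written aggregation code while the data model stays expressive.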

Complex Event Processing (CEP) systems

CEP systems are very application-tailored, enabling significant productivity at a reasonable cost. High productivity is usually needed for processing data streams, such as voice or video. Maintaining a limited time window for data processing is possible, which is relevant for systems close to real-time (figure 49). Some of the most common drawbacks to be considered are:

  • It might be scaled up only by introducing higher productivity hardware, which is limited by the application-specific design. To some extent, the design might be more flexible if microservices and containerisation are applied.
  • Due to the factors mentioned above and the complexity, the maintenance costs are usually higher than a universal design.
 CEP Systems
Figure 49: CEP Systems
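A core CEP building block is the bounded processing window mentioned above. The following sketch (an assumed example, not a full CEP engine) keeps a fixed-size window over a reading stream and flags values that deviate from the recent mean:

```python
# CEP-style sketch: a fixed-size sliding window over a sensor stream (illustrative).
from collections import deque

def detect_spikes(stream, window: int = 5, threshold: float = 3.0):
    """Flag readings exceeding the mean of the last `window` readings by `threshold`."""
    buf = deque(maxlen=window)   # bounded buffer = the processing time window
    alerts = []
    for i, value in enumerate(stream):
        if len(buf) == window and value - sum(buf) / window > threshold:
            alerts.append((i, value))
        buf.append(value)
    return alerts

if __name__ == "__main__":
    readings = [20.0, 20.1, 19.9, 20.2, 20.0, 27.5, 20.1]
    print(detect_spikes(readings))  # → [(5, 27.5)]
```

Because the window is bounded, memory and per-event work stay constant, which is what makes near-real-time operation feasible.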

NoSQL systems

As the name suggests, the main characteristic is higher flexibility of data models, which overcomes the limitations of highly structured relational data models (figure 50). NoSQL systems are usually distributed, and distribution is the primary tool for enabling this flexibility. In IoT systems, software typically ages faster than hardware, which requires maintaining many versions of communication protocols and data formats to ensure backward compatibility. Another reason is the variety of hardware suppliers, where some protocols or data formats are specific to a given vendor. NoSQL also provides a means for scaling out and up, enabling high future tolerance and resilience. A typical approach uses a key-value or key-document model, where a unique key indexes incoming data blocks or documents (JSON, for instance). Other designs might extend the SQL data models with others, such as object models, graph models, or the mentioned key-value models, providing highly purpose-driven and, therefore, productive designs. However, the complexity of the design raises problems of data integrity as well as the complexity of maintenance.

 NoSQL Systems
Figure 50: NoSQL Systems
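The key-document idea can be sketched with an in-process dictionary standing in for a distributed store; note how two devices write documents of different shapes under the same store without any schema migration. The keys and fields below are hypothetical.

```python
# Key-document sketch: schema-flexible storage in the style of NoSQL stores.
# A plain dictionary stands in for a distributed key-value store.
import json

store = {}

def put(key: str, document: dict) -> None:
    store[key] = json.dumps(document)   # documents are self-describing JSON

def get(key: str) -> dict:
    return json.loads(store[key])

# Heterogeneous devices can write differently shaped documents side by side:
put("dev-001/reading/1", {"t": 21.5, "unit": "C"})
put("dev-002/reading/1", {"rgb": [12, 40, 33], "fw": "2.1"})
print(get("dev-001/reading/1")["t"])    # → 21.5
```

The flexibility comes at the cost noted above: nothing in the store enforces integrity across document shapes, so that responsibility moves into application code.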

In-memory data grids

This is probably the most productive type of system, providing high flexibility, productivity and scalability. Because these systems are designed to operate in servers' RAM, in-memory data grids are the best choice for data preprocessing in IoT systems due to their high productivity and ability to scale dynamically depending on actual workloads. They provide all the benefits of the CEP and relational systems, adding scale-out functionality for data writing. There are only two major drawbacks: limited RAM and high development costs. Some examples of available solutions:

  • Hazelcast [15] (Uses in-memory NoSQL DB)
  • JBOSS Infinispan [16] (Apache-based key-value store)
  • IBM eXtreme Scale [17]
  • Gigaspace XAP Elastic caching edition [18] (Transactions-based microservices)
  • Oracle Coherence [19]
  • Terracotta enterprise suite [20]
  • Pivotal Gemfire [21]

This chapter is devoted to the main groups of algorithms for numerical data analysis and interpretation, covering both mathematical foundations and application specifics in the context of IoT. The chapter is split into the following subchapters:

Data Products Development

In the previous chapter, some essential properties of Big Data systems were discussed, along with how and why IoT systems relate to Big Data problems. In any IoT implementation, data processing is the system's heart, and it ultimately transforms into a data product. While it is still mainly a software subsystem, its development differs significantly from that of a regular software product. The difference is expressed through the roles involved and the lifecycle itself. It is often wrongly assumed that the main contributor is the data scientist responsible for developing a particular data processing or forecasting algorithm. That is partly valid, except that other roles are equally vital to success. The team playing these roles might be as small as three or as large as 20 people, depending on the scale of the project. The leading roles are explained below.

Business user

Business users have good knowledge of the application domain and, in most cases, benefit significantly from the developed data product. They know how to transform data into a business value in the organisation. Typically, they take positions like Production manager, Business/market analyst, and Domain expert.

Project sponsor

The project sponsor defines the business problem and triggers the birth of the project. He defines the project's scope and volume and secures the necessary provisions. While he sets project priorities, he does not usually have deep knowledge or skills in the technology, algorithms, or methods used.

Project manager

As in most software projects, the project manager is responsible for meeting project requirements and specifications within the given time frame and available provisions. He selects the needed talents, chooses development methods and tools, and selects goals for the development team members. Usually, he reports to the project sponsor and ensures that information flows within the team.

Business information analyst

He possesses deep knowledge in the given business domain, supported by his skills and experience. Therefore, he is a valuable asset for the team in understanding the data's content, origin, and possible meaning. He defines the key performance indicators (KPI) and metrics to assess the project's success level. He selects information and data sources to prepare information and data dashboards for the organisation's decision-makers.

Database administrator

He is responsible for configuring the development environment and Database (one, many, or a complex distributed system). In most cases, the configuration must meet specific performance requirements, which must be maintained. He ensures secure access to the data for the team members. During the project, he backs up data, restores it if needed, updates configuration, and provides other support.

Data engineer

Data engineers usually have deep technical knowledge of data manipulation methods and techniques. During the project, the data engineer tunes data manipulation procedures, SQL queries, and memory management and develops specific stored or server-side procedures. He is responsible for extracting particular data chunks for the Sandbox environment and for formatting and tuning them according to the data scientists' needs.

Data scientist

Develops or selects data processing models needed to meet the project specifications. Develops, tests and implements data processing methods and algorithms; develops decision-making support methods and their implementations for some projects. Provides needed research capacities for selecting and developing the data processing methods and models.

As can be noticed, the Data Scientist undoubtedly plays a vital role, but only in cooperation with the other roles. Depending on competencies and capacities, roles might overlap, or a single team member could cover several roles. Once the team is built, the development process can start. As with any other product development, data product development follows a specific life cycle of phases. Depending on particular project needs, there might be variations, but in most cases data product development follows the well-known waterfall pattern. The phases are explained in figure 51:

 Data Product Life Cycle
Figure 51: Data Product Life Cycle
Discovery

The project team learns about the problem domain, the problem itself, its structure, and possible data sources and defines the initial hypothesis. The phase involves interviewing the stakeholders and other potentially related parties to reach as broad an insight as necessary. During this phase, the problem is framed: the analytical problem is defined, together with success indicators for potential solutions, business goals and scope. To understand business needs, the project sponsor is involved in the process from the very beginning. The identified data sources might include external systems or APIs, sensors of different types, static data sources, official statistics and other vital sources. One of the primary outcomes of the phase is the Initial Hypothesis (IH), which concisely represents the team's vision of the problem and a potential solution at the same time. For instance, “Introduction of deep learning models for sensor time series forecasting provides at least 25% better performance than the statistical methods currently used.” Whatever the IH is, it is a much better starting point than defining the hypothesis during project implementation in later phases.

Data preparation

The phase focuses on creating a sandbox system by extracting, transforming and loading data into it (ETL: Extract, Transform, Load). This is usually the longest phase and can take up to 50% of the total time allocated to the project. Unfortunately, most teams tend to underestimate this time consumption, which costs the project manager and analysts dearly and leads to losing trust in the project's success. Data scientists, given their unique role and authority in the team, tend to “skip” this phase and go directly to phase 3 or 4, which is costly because of incorrect or insufficient data for solving the problem.

  1. Data analysis sandbox - The client's operational data, logs (windows), raw streams, etc., are copied. There is a potential for natural conflict: data scientists want everything, while the IT “service” provides a minimum. The needs must, therefore, be explained through thorough arguments. The sandbox can be 5 – 10 times larger than the original dataset!
  2. Carrying out ETLs - The data is retrieved, transformed and loaded back into the sandbox system. Sometimes, simple data filtering excludes outliers and cleans the data. Due to the volume of data, there may be a need for parallelisation of data transfers, which leads to the need for appropriate software and hardware infrastructure. In addition, various web services and interfaces are used to obtain context.
  3. Exploring the content of the data - The main task is to get to know the content of the extracted data. A data catalogue or vocabulary is created (small projects can skip this step). Data research allows for identifying data gaps and technology flaws, as well as teams' own and extraneous data (for determining responsibilities and limitations).
  4. Data conditioning - Slicing and combining are the most common actions in this step. The compatibility of data subsets with each other after the performed manipulations is checked to exclude systematic errors, i.e. errors that occur as a result of incorrect manipulation (formatting of data, filling in voids, etc.). During this step, the team ensures the time, metadata, and content match.
  5. Reporting and visualising - This step uses general visualisation techniques, providing a high-level overview – value distributions, histograms, correlations, etc. explaining the data content. It is necessary to check whether the data represent the problem sphere, how the value distributions “behave” throughout the dataset, and whether the details are sufficient to solve the problem.
Model planning

The main task of the phase is to select model candidates for data clustering, classification or other needs consistent with the Initial Hypothesis from Phase 1.

  1. Exploring data and selecting variables - The aim is to discover and understand variables' interrelationships through visualisations. The identified stakeholders are an excellent source of relevant insights about internal data relationships – even if they do not know the reasons! These steps allow the selection of key factors instead of checking all against all.
  2. Selection of methods or models - During this step, the team creates a list of methods that match the data and the problem. A typical approach is making many trim model prototypes using ready-made tools and prototyping packages, such as R, SPSS, Excel, Python, and other specific tools. Tools typical of the phase might include but are not limited to R or Python, SQL and OLAP, Matlab, SPSS, and Excel (for simpler models).
Model development

During this phase, the initially selected trim models are implemented at full scale on the gathered data. The main question is whether the data is enough to solve the problem. There are several steps to be performed:

  1. Data preparation - Specific subsets of data are created, such as training, testing, and validation. The data is adjusted to the selected initial data formatting and structuring methods.
  2. Model development - Conceptually, this step is usually very complex, but it is relatively short in terms of time.
  3. Model testing - The models shall be operated and tuned using the selected tools and training datasets to optimise the models and ensure their resilience to incoming data variations. All decisions must be documented! This is important because all other team roles require detailed decision-making reasoning, especially during communication and operationalisation.
  4. Key points to be answered during the phase are:
    • Is the model accurate enough?
    • Are the results obtained meaningful in relation to the objectives set?
    • Do the models make unacceptable mistakes?
    • Is the data enough?

In some areas, false positives are more dangerous than false negatives. For example, targeting systems may inadvertently target “their own”.

Communication

During this phase, the results must be compared against the established quality criteria and presented to those involved in the project. It is important not to show any drafts outside the group of data scientists! The methods used are too complex for most of those involved, which leads to incorrect conclusions and unnecessary communication back to the team. The team is often biased against accepting results that falsify its hypotheses, taking them too personally. However, the data led the team to the conclusions, not the team itself! In any case, it must be verified that the results are statistically reliable; if not, they are not presented. It is also essential to present all side results obtained, as they almost always provide additional value to the business. The general conclusions need to be complemented by sufficiently broad insights into the interpretation of the results, which users of the results and decision-makers require.

Operationalisation

The results presented are first integrated into a pilot project before full-scale implementation; the widespread roll-out follows the pilot's tests in the production environment. During this phase, some performance gaps may require replacing, for instance, Python or R code with compiled code. Expectations for each of the roles during this phase:

  • Business user: Identifiable benefits of the model for the business.
  • Project sponsor: return on investment (ROI) and impact on the business as a whole – how to highlight it outside the organisation / other business.
  • Project manager: completing the project within the expected deadlines with the intended resources.
  • Business Information Analyst: add-ons to existing reports and dashboards.
  • Data scientist: Convenient maintenance of models after preparation of detailed documentation of all developments and explanation of the work performed by the team.

Data Preparation for Data Analysis

Introduction

In most cases, data must be prepared before analysing it or applying some processing methods. There might be different reasons for this, such as missing values, sensor malfunctioning, different time scales, different units, the specific format needed for a given method or algorithm, and many more. Therefore, data preparation is as necessary as the analysis itself. While data preparation is usually particular to a given problem, some standard general cases and preprocessing tasks are beneficial. Data preprocessing also depends on the data's nature – preprocessing is usually very different for data where the time dimension is essential (time series) than for data without internal causal dependencies among entries, like a log of discrete cases for classification. It must be emphasised that whatever data preprocessing is done needs to be carefully noted and the reasoning behind it explained so that others can understand the results acquired during the analysis.

"Static data"

Some of the methods explained here might also be applied to time series, but this must be done with full awareness of the possible implications. Usually, the data should be formatted as a table consisting of rows representing data entries or events and fields representing features of the entry. For instance, a row might represent a room climate data entry, where the fields or factors represent air temperature, humidity level, CO2 level and other vital measurements. For the sake of simplicity, in this chapter it is assumed that data is formatted as a table.

Filling the missing data

One of the most common situations is missing sensor measurements, which communication channel issues, IoT node malfunctioning or other reasons might cause. Since most data analysis methods require complete entries, it is necessary to ensure that all data fields are present before applying the analysis methods. Usually, there are some common approaches to deal with the missing values:

  • Random selection – the method, as suggested by the name, randomly selects one of the possible values of the data field. If the field value is categorical, representing a limited set of possible values, for instance, a set of colours or operation modes, one value from the list is randomly selected. In the case of a continuous value, a random value from the specified interval is selected. Thanks to its simplicity, the method is well suited for filling gaps in data in cases where the fraction of missing values is insignificant. In case of a significant fraction of missing values, the method should not be applied due to its implications for the data analysis.
  • Informed selection – the method, in essence, does the same as random selection, except that additional information on the value distribution of the field (factor) is used. In other words, the most common value might be selected for discrete factors, while in the case of continuous values, an average value might be chosen according to the distribution characteristics. There might be more complex situations which cannot be described by a Gaussian distribution. In those cases, the data analyst must decide on a particular selection mechanism representing the distribution's specifics.
  • Value marking – this approach might be applied in cases where there is a chance that missing data is a consequence of some critical process; for instance, whenever the engine's temperature reaches a critical value, the pressure sensor stops functioning due to overheating. Analysts might know about the issue or not; in any case, it is essential to mark those situations to find possible causalities in the data. If the factor is categorical, a dedicated new category, like “empty”, might be introduced. In the case of continuous values, a dedicated “impossible” value might be assigned, such as the maximum integer value, the minimum integer value, zero, and others.
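As a minimal illustration of the three approaches, consider the following Python sketch. All names and data here are hypothetical (the function `fill_missing`, the marker value), assuming readings arrive as a list with `None` for gaps:

```python
import random
import statistics

def fill_missing(values, strategy="informed", marker=None):
    """Fill None gaps in a list of numeric readings.

    strategy: "random"   - draw a value between the observed min and max,
              "informed" - use the mean of the observed values,
              "mark"     - substitute a dedicated "impossible" marker value.
    """
    observed = [v for v in values if v is not None]
    filled = []
    for v in values:
        if v is not None:
            filled.append(v)
        elif strategy == "random":
            filled.append(random.uniform(min(observed), max(observed)))
        elif strategy == "informed":
            filled.append(statistics.mean(observed))
        else:  # "mark": flag the gap so later analysis can spot it
            filled.append(marker)
    return filled

readings = [21.5, 21.7, None, 21.9, None, 22.1]
print(fill_missing(readings, "informed"))    # gaps replaced by the mean
print(fill_missing(readings, "mark", -999))  # gaps flagged with -999
```

Note that the informed variant here uses only the mean; for a non-Gaussian distribution a different statistic (for instance, the mode for discrete factors) would be more appropriate.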

Scaling

Scaling is a frequently used method for continuous numerical factors. The main reason is that different factors are observed on different value intervals. This is essential for methods like clustering, where a multi-dimensional Euclidean distance is used: in the case of different scales, one of the dimensions might overwhelm the others just because of the higher order of its numerical values. Usually, scaling is performed by applying a linear transformation of the data with set min and max values, which mark the desired value interval. In most software packages, like Python Pandas [22], scaling is implemented as a simple-to-use function. However, it might also be done manually if needed:

 Scaling
Figure 52: Scaling

where:
Vold – the old measurement
Vnew – the new – scaled measurement
mmin – minimum value of the measurement interval
mmax – maximum value of the measured interval
Imin – minimum value of the desired interval
Imax – maximum value of the desired interval
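The linear transformation above can be sketched in a few lines of Python (the function name `rescale` is illustrative; the measurement interval is taken from the data itself):

```python
def rescale(values, i_min=0.0, i_max=1.0):
    """Linearly map measurements onto the desired interval [i_min, i_max]:
    V_new = I_min + (V_old - m_min) / (m_max - m_min) * (I_max - I_min)."""
    m_min, m_max = min(values), max(values)
    span = m_max - m_min
    return [i_min + (v - m_min) / span * (i_max - i_min) for v in values]

print(rescale([10, 15, 20]))         # -> [0.0, 0.5, 1.0]
print(rescale([10, 15, 20], -1, 1))  # -> [-1.0, 0.0, 1.0]
```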

Normalisation

Normalisation is effective when the data distribution is unknown or known to be non-Gaussian (not following the bell curve of the Gaussian distribution). It is beneficial for data with varying scales, especially when using algorithms that do not assume any specific data distribution, such as k-nearest neighbours and artificial neural networks. Normalisation does not change the scale of the values but transforms their distribution so that it resembles a Gaussian distribution. This technique is mainly used in machine learning and is performed with appropriate software packages due to the complexity of the calculations compared to scaling.

Adding dimensions

Sometimes, it is necessary to emphasise a particular phenomenon in the data. For instance, it might be very helpful to emphasise the changes in the factor value, i.e., values more distant from 0 should become even larger, while those closer to 0 should not grow. In this case, applying an exponent function to the factor values – squaring or raising to a power of 4 – is a simple technique. If negative values are present, odd powers might be used to preserve the sign. A variation of the technique is summing up different factor values before or after applying the exponent; in this case, a group of similar values representing the same phenomenon emphasises it. Any other function can be used to represent the specifics of the problem.
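A minimal sketch of this technique, using an odd power so that the sign of negative values is preserved (the function name `emphasise` is illustrative):

```python
def emphasise(values, power=3):
    """Raise values to an odd power: deviations far from 0 grow,
    values inside (-1, 1) shrink, and the sign is preserved."""
    return [v ** power for v in values]

print(emphasise([-2.0, -0.5, 0.5, 2.0]))  # -> [-8.0, -0.125, 0.125, 8.0]
```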

Time series

Time series usually represent the dynamics of some process, and therefore, the order of the data entries has to be preserved. This means that in most cases, all of the mentioned methods might be used as long as the data order remains the same. A time series is simply a set of data - usually events, arranged by a time marker. Typically, time series are arranged in the order in which events occur/are recorded. Several significant consequences follow from this simple fact:

  • The sequence of events must be followed for any data manipulation.
  • The arrangement of events in time is the order of data arrival and reflects a particular process and its development in time.
  • The sequence of events reflects the causal relations of this process, which we try to discover through data analysis.
Time Series Analysis Questions

Therefore, there are several questions that data analysis typically tries to answer:

  • Is the process stationary, i.e. do its statistical properties remain constant over time?
  • If the process is dynamic, is there a direction of development?
  • Is the process chaotic or regular?
  • Is there periodicity in the dynamics of the process?
  • Are there any regularities between the individual changes of the parameters characterising the process – correlation?
  • Does the dynamics of the process depend on changes in parameters of the external environment that we can influence, i.e. is the process adaptive?
Some definitions

Autocorrelation - A process is autocorrelated if the similarity of the values of a given observation is a function of the time between observations. In other words, the difference between the values of the observations depends on the interval between the observations. This does not mean that the process values are identical but that their differences are similar. The process can equally well be decaying or growing in the mean value or amplitude of the measurements, but the difference between subsequent measurements is always the same (or close).

Seasonality - The process is seasonal if the deviation from the average value is repeated periodically. This does not mean the values must match perfectly, but there must be a general tendency to deviate from the average value regularly. A perfect example is a sinusoid.

Stationarity - A process is stationary if its statistical properties do not change over time. Generally, the mean and variance over a period serve as good measures. In practice, a certain tolerance interval is used to tell whether a process is stationary since ideal cases (no noise) do not tend to occur in practice. For example, temperature measurements over several years are stationary and seasonal. It is not autocorrelated because temperatures are still relatively variable across days. Numerically, stationarity is evaluated with the so-called Dickey-Fuller test [23], which uses a linear regression model to measure change over time at a given time step. The model's t-test [24] indicates how statistically strong the hypothesis of process stationarity is.

Time series modelling

In many cases, it is necessary to emphasise the main pattern of the time series while removing the “noise”. In general, there are two main techniques – decimation and smoothing. Both are widely used but need to be treated carefully.

Moving average (sliding average)

The essence of the method is to obtain an average value within a particular time window, M, thereby giving inertia to the incoming signal and reducing noise's impact on the overall analysis result. Different effects might be obtained depending on the size of the time window M.

 Moving Average
Figure 53: Moving Average

where:
SMAt - the new smoothed value at time instant t
Xi – ith measurement at a time instant i
M – time window
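The formula above can be sketched directly in Python (a simple illustration; the first M-1 outputs here average over the shorter window available so far):

```python
def moving_average(x, m):
    """Simple moving average: SMA_t is the mean of the last m measurements."""
    out = []
    for t in range(len(x)):
        window = x[max(0, t - m + 1): t + 1]  # up to m most recent values
        out.append(sum(window) / len(window))
    return out

signal = [20, 20, 40, 20, 20, 20]
print(moving_average(signal, 3))  # the spike at index 2 is flattened
```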

The image in figure 54 demonstrates the effects of a time window size of 10 and 100 measurements – an incoming signal from a freezer's thermometer.

  • At first, it must be emphasised that the moving average adds a slight lag in the incoming data, i.e., the rise and fall of the values are slightly behind the original values.
  • In the case of M = 10, the overall shape of the time series is preserved while noise is removed.
  • In the case of M = 100, the time series shape is transformed into a new function, which does not represent the main feature of the original measurements. For instance, the rises are replaced by falls and vice versa, while the data spike melts with the coming rise and forms one more significant rise of the signal. So, the result annihilates the initial features of the signal.
 Moving Average
Figure 54: Moving Average
Exponential moving average

The exponential moving average is widely used in noise filtering, for example, in analysing changes in stock markets. Its main idea is that each measurement's weight (influence) decreases exponentially with its age. Thus, the evaluation relies more on recent measurements and considers older ones less.

 Exponential Moving Average
Figure 55: Exponential Moving Average

where:
EMAt - the new smoothed value at time instant t
Xi – ith measurement at a time instant i
Alpha - smoothing factor between 0 and 1, which reflects the weight of the last - the most recent measurement.
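A common recursive form of the exponential moving average, EMA_t = alpha * X_t + (1 - alpha) * EMA_{t-1}, can be sketched as follows (seeding with the first measurement is one conventional choice):

```python
def exponential_moving_average(x, alpha):
    """Recursive EMA: alpha in (0, 1] weighs the most recent measurement."""
    ema = [float(x[0])]  # seed with the first measurement
    for measurement in x[1:]:
        ema.append(alpha * measurement + (1 - alpha) * ema[-1])
    return ema

print(exponential_moving_average([20, 20, 40, 20], 0.5))
# -> [20.0, 20.0, 30.0, 25.0]
```

With a larger alpha, the output follows the raw signal more closely; with a smaller alpha, it smooths more but lags more.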

As seen in figure 56, the exponential moving average, for different weighting factor values, preserves the shape of the initial signal. It has minimal lag while removing the noise, which makes it a handy smoothing technique.

 Exponential Moving Average
Figure 56: Exponential Moving Average
Decimation

Decimation is a technique of excluding some entries from the initial time series to reduce overwhelming or redundant data. As the name suggests, usually every tenth entry is excluded, reducing the data by 10%. It is a simple method that significantly benefits cases of over-measured processes with slow dynamics. With preserved time stamps, the data still allows the application of general time-series analysis techniques like forecasting.
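A minimal sketch of this idea, keeping (timestamp, value) pairs so the time axis is preserved (the function name `decimate` and the data are illustrative):

```python
def decimate(series, n=10):
    """Drop every n-th entry (1-based), keeping timestamps attached."""
    return [entry for i, entry in enumerate(series, start=1) if i % n != 0]

# (timestamp, value) pairs; every 10th entry is removed -> 10% reduction
data = [(t, 20.0 + t * 0.1) for t in range(100)]
reduced = decimate(data, 10)
print(len(data), len(reduced))  # -> 100 90
```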

Regression Models

Introduction

While AI and especially Deep Learning techniques have advanced tremendously, the fundamental data analysis methods still provide a good and, in most cases, efficient way of solving many data analysis problems. Linear regression is one of those methods that provide at least a good starting point to have an informative and insightful understanding of the data. Linear regression models are relatively simple and do not require significant computing power in most cases, which makes them widely applied in different contexts. The term regression towards a mean value of a population was widely promoted by Francis Galton, who introduced the term “correlation” in modern statistics[25] [26] [27].

Linear regression model

Linear regression is an algorithm that computes the linear relationship between the dependent variable and one or more independent features by fitting a linear equation to observed data. In essence, linear regression allows building a linear function – a model that approximates a set of numerical data in a way that minimises the squared error between the model prediction and the actual data. Data consists of at least one independent variable (usually denoted by x) and the function or dependent variable (usually denoted by y). If there is just one independent variable, it is known as Simple Linear Regression, while in the case of more than one independent variable, it is called Multiple Linear Regression. In the same way, in the case of a single dependent variable, it is called Univariate Linear Regression, while in the case of many dependent variables, it is known as Multivariate Linear Regression. For illustration purposes, figure 57 below shows a simple data set that was used by F. Galton while studying the relationships between parents' and their children's heights. The data set might be found here: [28]

 Galton's Data Set
Figure 57: Galton's Data Set

If the fathers' heights are X and their children's heights are Y, the linear regression algorithm looks for a linear function that, in the ideal case, will fit all the children's heights to their fathers' heights. The function would look like the following equation:

 Linear Model
Figure 58: Linear Model

where:

  • yi – ith child height
  • xi – ith father height
  • β0 and β1 – y axis crossing and slope coefficients of the linear function, correspondingly

Unfortunately, in the context of the given example, finding such a function that fits all x-y pairs at once is not possible, since the x and y values differ from pair to pair. However, it is possible to find a linear function that, for all x-y pairs, minimises the distance between the given y and the y' produced by the function or model. In this case, y' is an estimated or forecasted y value, while the distance between each y-y' pair is called an error. Since the error might be positive or negative, a squared error is used to estimate it. This means that the model might be described by the following equation:

 Linear Model with Estimated Coefficients
Figure 59: Linear Model with Estimated Coefficients

where

  • y'i – ith child height estimated by the model
  • xi – ith father height
  • β'0 and β'1 – y axis crossing and slope coefficient estimates of the linear function, correspondingly, which minimise the error term:
 Model Error
Figure 60: Model Error

The estimated beta values might be calculated as follows:

 Coefficient Values
Figure 61: Coefficient Values

where:

  • Cor(X, Y) – Correlation between X and Y (capital letters mean vectors of individual x and y corresponding values)
  • σx and σy – standard deviations of vectors X and Y
  • µx and µy – mean values of the vectors X and Y
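The coefficient estimates above can be computed directly; a minimal sketch using the standard library, with hypothetical father/child heights (not Galton's actual data):

```python
import statistics

def fit_line(x, y):
    """Estimate beta1 = Cor(X, Y) * sigma_y / sigma_x and
    beta0 = mu_y - beta1 * mu_x, as in the equations above."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx, sy = statistics.stdev(x), statistics.stdev(y)
    n = len(x)
    # Sample correlation computed explicitly from the deviations
    cor = sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)
    beta1 = cor * sy / sx
    beta0 = my - beta1 * mx
    return beta0, beta1

fathers = [165, 170, 175, 180, 185]   # cm, hypothetical
children = [168, 171, 174, 182, 183]  # cm, hypothetical
b0, b1 = fit_line(fathers, children)
print(round(b0, 2), round(b1, 2))
```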

Most modern data processing packages possess dedicated functions for building linear regression models with few lines of code. The result is illustrated in the figure 62:

 Galton's Data Set with Linear Model
Figure 62: Galton's Data Set with Linear Model

Errors and their meaning

As discussed previously, an error in the context of the linear regression model represents the distance between the observed dependent variable values and the estimates provided by the model, which the following equation represents:

 Error Definition
Figure 63: Error Definition

where,

  • y'i – ith child height estimated by the model
  • yi – ith child height true value
  • ei - error of the model's ith output

Since an error for a given yi might be positive or negative and the model itself minimises the overall error, one might expect the errors to be normally distributed around the model, with a mean value of 0 and a sum close or equal to 0. Examples of the error for a few randomly selected data points are depicted in red in the following figure 64:

 Galton's Data Set with the Linear Model and its Errors
Figure 64: Galton's Data Set with the Linear Model and its Errors

Unfortunately, these facts alone do not always provide enough information about the modelled process. In most cases, due to some dynamic features of the process, the distribution of the errors is as important as the model itself. For instance, a motor shaft wears out over time, and the fluctuations steadily increase from the centre of the rotation. To estimate the overall wear of the shaft, it is enough to have just a maximum amplitude measurement; however, it is not enough to understand the dynamics of the wearing process. Another important aspect is the order of magnitude of the errors compared to the measurements, which, in the case of small quantities, might be impossible to notice even if the model is illustrated. The following figure 65 might illustrate such a situation:

 Error Distribution Example
Figure 65: Error Distribution Example

In figure 65, both small error magnitudes and progression dynamics are illustrated. Another example, of a cyclic error distribution, is provided in the following figure 66:

 Error Distribution Example
Figure 66: Error Distribution Example

From this discussion, a few essential notes have to be taken:

  • Error distributions (around 0) should be treated as carefully as the models themselves;
  • In most cases, the error distribution is difficult to notice even if the errors are illustrated;
  • It is essential to look into the distribution to ensure that there are no regularities.

If any regularities are noticed, whether a simple variance increase or cyclic nature, they point to something the model does not consider. It might point to a lack of data, i.e., other factors that influence the modelled process, but they are not part of the model, which is therefore exposed through the nature of the error distribution. It also might point to an oversimplified look at the problem, and more complex models should be considered. In any of the mentioned cases, a deeper analysis should be considered. In a more general way, the linear model might be described with the following equation:

 General Notation of a Linear Model
Figure 67: General Notation of a Linear Model

Here, the error is considered to be normally distributed around 0, with its standard deviation sigma and variance sigma squared. Variance provides at least a numerical insight into the error distribution; therefore, it should be considered an indicator for further analysis. Unfortunately, the true value of sigma is not known; thus, its estimated value should be used:

 Sigma Estimate
Figure 68: Sigma Estimate

Here, the variance estimated value's expected value equals the true variance value:

 Variance Estimate
Figure 69: Variance Estimate

Multiple linear regression

In many practical problems, the target variable Y might depend on more than one independent variable X – for instance, wine quality, which depends on its level of serenity, amount of sugars, acidity and other factors. Applying a linear regression model in this case might not seem straightforward, but it is still a linear model of the following form:

 Multiple Linear Model
Figure 70: Multiple Linear Model

During the application of the linear regression model, the error term to be minimised is described by the following equation:

 Multiple Linear Model Error Estimate
Figure 71: Multiple Linear Model Error Estimate

Unfortunately, due to the number of factors (dimensions), the results of multiple linear regression cannot be visualised in the same way as those of a simple linear regression. Therefore, numerical analysis and interpretation of the model should be done. In many situations, numerical analysis is complicated and requires a semantic interpretation of the data and the model. To support it, visualisations reflecting the relation between the dependent variable and each independent variable are produced, resulting in multiple graphs. Otherwise, the quality of the model is hardly assessable, or even unassessable.

Piecewise linear models

Piecewise linear models, as the name suggests, allow splitting the overall data sample into pieces and building a separate model for every piece, thus achieving better prediction for the data sample. The formal representation of the model is as follows:

 Piecewise Linear Model
Figure 72: Piecewise Linear Model

As might be noticed, the individual models are still linear and individually simple. However, the main difficulty is to set the threshold values b that split the sample into pieces. To illustrate the problem better, one might consider the following artificial data sample (figure 73):

 Complex Data Example
Figure 73: Complex Data Example

Intuition suggests splitting the sample into two pieces with the boundary b around 0 and fitting a linear model for each of the pieces separately (figure 74):

 Piecewise Linear Model with 2 Splits
Figure 74: Piecewise Linear Model with 2 Splits

Since we do not know the exact best split, it might seem logical to play with different numbers of splits at different positions. For instance, a random number of splits might generate the following result (figure 75):

 Piecewise Linear Model with Many Splits
Figure 75: Piecewise Linear Model with Many Splits

It is evident from the figure above that some of the individual linear models do not reflect the overall trends, i.e. the slope steepness and direction (positive or negative) seem to be incorrect. However, it is also apparent that those individual models might fit their limited sample splits better. This simple example shows how confusing the selection of the number of splits and their boundaries might be. Unfortunately, there is no simple answer, and a possible solution might be one of the following:

  • Using contextual information, the model developer might select a particular number of splits and boundaries based on the context.
  • Some additional methods might be used to find the best split automatically. In this case, software packages usually have tools for this. For Python developers, a very handy package mlinsights [29] provides a set of such tools, including regression trees and other methods.
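As a minimal sketch of the first option (a manually chosen boundary), the data can be split at b and a separate least-squares line fitted to each piece. All names and the sample below are illustrative:

```python
def fit_segment(x, y):
    """Least-squares line for one piece: returns (beta0, beta1)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - beta1 * mx, beta1

def piecewise_fit(x, y, boundary=0.0):
    """Split the sample at a chosen boundary b and fit each piece."""
    left = [(a, b) for a, b in zip(x, y) if a < boundary]
    right = [(a, b) for a, b in zip(x, y) if a >= boundary]
    return fit_segment(*zip(*left)), fit_segment(*zip(*right))

# Hypothetical sample: slope -1 below 0, slope +2 above 0
xs = [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]
ys = [3.0, 2.0, 1.0, 2.0, 4.0, 6.0]
print(piecewise_fit(xs, ys, 0.0))  # one (beta0, beta1) pair per piece
```

For the second option, packages such as mlinsights [29] search for the best boundaries automatically instead of relying on a manual choice.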

Clustering Models

Introduction

Clustering is a methodology that belongs to the class of unsupervised machine learning. It allows for finding regularities in data when the group or class identifier or marker is absent. To do this, the data structure is used as a tool to find the regularities. Because of this powerful feature, clustering is often used as part of data analysis workflow prior to classification or other data analysis steps to find natural regularities or groups that may exist in data.

This provides very insightful information about the data's internal organisation, possible groups, their number and distribution, and other internal regularities that might help us better understand the data content. To explain clustering better, one might consider grouping customers by income estimates. It is natural to assume some threshold values of 1KEUR per month, 10KEUR per month, etc. However:

  • Do the groups reflect a natural distribution of customers by their behaviour?
  • For instance, does a customer with 10KEUR behave differently from the one with 11KEUR per month?

It is evident that, most probably, customers' behaviour depends on factors like occupation, age, total household income, and others. While the need for considering other factors is obvious, grouping is not – how exactly different factors interact to decide which group a given customer belongs to. That is where clustering exposes its strength – revealing natural internal structures of the data (customers in the provided example).

In this context, a cluster refers to a collection of data points aggregated together because of certain similarities [30]. Within this chapter, two different approaches to clustering are discussed:

  • Cluster centroid-based, where the main idea is to find an imaginary centroid point representing the “centre of mass” of the cluster or, in other words, the centroid represents a “typical” member of the cluster that, in most cases, is an imaginary point.
  • Cluster density-based, where the density of points around the given one determines the membership of a given point to the cluster. In other words, the main feature of the cluster is its density.

In both cases, a distance measure estimates the distance among points or objects and the density of points around a given one. Therefore, all factors used should be numerical, assuming a Euclidean space.

Data preprocessing before clustering

Before starting clustering, several necessary steps have to be performed:

  • Check if the used data is metric: In clustering, the primary measure is (in most cases) Euclidean distance, which requires numeric data. While it is possible to encode some arbitrary data using numerical values, the encoding must maintain the semantics of numbers, i.e. 1 < 2 < 3. Good examples of natural metric data are temperature, exam assessments, and the like; bad examples are gender and colour.
  • Select the proper scale: For the same reasons as the distance measure, the values of each dimension should be on the same scale. For instance, customers' monthly incomes in euros and their credit ratios are typically at different scales – the incomes in thousands, while ratios between 0 and 1. If scales are not adjusted, the income dimension will dominate distance estimation among points, deforming the overall clustering results. A universal scale is usually applied to all dimensions to avoid this trap. For instance:
    • Unity interval: A minimal factor value is subtracted from the given point value and divided by the interval value, giving the result 0 to 1.
    • Z-scale: The factor's average value is subtracted from the original value of the given point and then divided by the factor's standard deviation, which provides results distributed around 0 with a standard deviation of 1.
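Both scales can be sketched in a few lines of Python; the income/ratio figures below are hypothetical, and the function names are illustrative:

```python
import statistics

def unity_scale(values):
    """Map each value onto [0, 1] within its own factor's range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def z_scale(values):
    """Centre on the mean and divide by the standard deviation."""
    mu, sigma = statistics.mean(values), statistics.stdev(values)
    return [(v - mu) / sigma for v in values]

incomes = [1000.0, 2000.0, 3000.0]  # euros per month, hypothetical
ratios = [0.2, 0.5, 0.8]            # credit ratios, already in [0, 1]
print(unity_scale(incomes))  # -> [0.0, 0.5, 1.0], now comparable to ratios
print(z_scale(incomes))      # distributed around 0
```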

Summary about clustering

  • There are many other clustering methods besides the discussed ones; however, all of them, including the discussed ones, require prior knowledge of the problem domain.
  • All clustering methods require setting some parameters that drive the algorithms. In most cases, setting the values might not be intuitive and may require careful fine-tuning.
  • Proper data coding in clustering may provide a significant value even in complex application domains, including medicine, customer behaviour analysis, and finetuning of other data analysis algorithms.
  • In data analysis, clustering is one of the first methods used to acquire the internal structure of the data before applying more informed methods.

To illustrate the mentioned algorithm groups, the following algorithms are discussed in detail:

  • K-Means - a widely used algorithm that uses distance as the main estimate to group objects;
  • DBSCAN - a good example of a density-based algorithm widely used in signal processing.

K-Means

The first method discussed here is one of the most commonly used – K-means. K-means clustering splits the initial set of points (objects) into groups using a distance measure: the distance from each point to the group's centre, the centroid, which serves as the group's prototype. The result of the clustering is N points grouped into K clusters, where each point is assigned a cluster index, meaning that the point is closer to that cluster's centroid than to the centroid of any other cluster. The distance measure is Euclidean distance, which requires scaled or normalised data to avoid the dominance of a single dimension over others. The algorithm steps are represented schematically in the following figure 76:

 K-means Steps
Figure 76: K-means Steps

In the figure:

  • STEP 1: Initial data set where points do not belong to any of the clusters.
  • STEP 2: Cluster initial centres are selected randomly.
  • STEP 3: For each point, the closest cluster centre is determined; it becomes the point's marker.
  • STEP 4: Cluster mark is assigned to each point.
  • STEP 5: The initial cluster centre is refined to minimise the average distance to it from each cluster point. As a result, cluster centres might no longer be physical points; instead, they become imaginary.
  • STEP 6: Cluster marks of the points are updated.

Steps 4-6 are repeated until the cluster centres stop changing or the changes are insignificant. The distance is measured using Euclidean distance:

  Euclidian Distance
Figure 77: Euclidian Distance

where:

  • Data points - points {xi}, i = 1, … , N in multi-dimensional Euclidean space, i.e. each point is a vector.
  • K – number of clusters set by the user.
  • rnk – an indicator variable with values {0,1} – indicates if data point xn belongs to cluster k.
  • mk – centroid of kth cluster.
  • D – Squared Sum of all distances di to their assigned cluster centroids.
  • The goal is to find such values of the variables rnk and mk that minimise D.
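Under these definitions, a minimal K-means run might look as follows (a sketch using scikit-learn on two synthetic, well-separated point groups):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two synthetic, well-separated groups of points (an assumption for illustration)
rng = np.random.default_rng(0)
points = np.vstack([rng.normal([0, 0], 0.5, size=(50, 2)),
                    rng.normal([5, 5], 0.5, size=(50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
labels = kmeans.labels_               # cluster index assigned to each point (r_nk)
centroids = kmeans.cluster_centers_   # cluster centroids (m_k)
D = kmeans.inertia_                   # sum of squared distances to assigned centroids
```

Note that K is passed in explicitly as `n_clusters`, reflecting the fact that the algorithm cannot select it on its own.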

Example of initial data and assigned cluster marks with cluster centres after running the K-means algorithm (figure 78):

  K-means Example with Two Clusters
Figure 78: K-means Example with Two Clusters

Unfortunately, the K-means algorithm does not possess automatic mechanisms to select the number of clusters K, i.e., the user must set it. Example of setting different numbers of cluster centres (figure 79):

 K-means Example with Three Clusters
Figure 79: K-means Example with Three Clusters
Elbow method

In K-means clustering, a practical method – the Elbow method is used to select a particular number of clusters. The elbow method is based on finding the point at which adding more clusters does not significantly improve the model's performance. As explained, K-means clustering optimises the sum of squared errors (SSE) or squared distances between each point and its corresponding cluster centroid. Since the optimal number of clusters (NC) is not known initially, it is wise to increase the NCs iteratively. The SSE decreases as the number of clusters increases because the distances to the cluster centres also decrease. However, there is a point where the improvement in SSE diminishes significantly. This point is referred to as the “elbow” [31].

Steps of the method:

  1. Plot SSE against the number of clusters:
    • Computing the SSE for different values of NC, typically starting from NC=2 up to a reasonable maximum value (e.g., 10 or 20).
    • Plotting the SSE values on the y-axis and the number of clusters NC on the x-axis.
  2. Observe the plot:
    • As the number of clusters NC increases, the SSE will decrease because clusters become more specialised.
    • Initially, adding clusters will result in a significant drop in SSE.
    • After a certain point, the reduction in SSE will slow down, not showing a significant drop in the SSE.
  3. The “elbow” point:
    • The point on the curve where the rate of decrease sharply levels off forms the “elbow.”
    • This is where adding more clusters beyond this point doesn't significantly reduce SSE, indicating that the clusters are likely well-formed.
  4. Select optimal NC:
    • The value of NC at the elbow point is often considered the optimal number of clusters because it balances the trade-off between model complexity and performance.

Since the method requires iteratively running the K-means algorithm, which might be resource-demanding, a selection of data might be employed to determine the NC first and then used to run the K-means on the whole dataset.
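The SSE-versus-NC computation behind the elbow chart can be sketched as follows (three synthetic clusters are assumed; `inertia_` is scikit-learn's name for the SSE):

```python
import numpy as np
from sklearn.cluster import KMeans

# Three synthetic clusters (an assumption for illustration)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.4, size=(40, 2)) for c in ([0, 0], [4, 0], [2, 4])])

# SSE for NC = 2 .. 8; plotting these values against NC reveals the "elbow"
sse = {nc: KMeans(n_clusters=nc, n_init=10, random_state=0).fit(X).inertia_
       for nc in range(2, 9)}
```

With data like this, the drop from NC=2 to NC=3 is expected to be large, while further clusters bring only marginal improvement, so the elbow would suggest NC=3.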

Limitations:

  • The elbow point is not always obvious; in some cases, the curve may not show a distinct “elbow.”
  • The elbow method is heuristic and might not always lead to the perfect number of clusters, especially if the data structure is complex.
  • Other methods, like the Silhouette score, can complement the elbow method to help determine the optimal NC.
 Elbow Example on Two Synthetic Data Sets
Figure 80: Elbow Example on Two Synthetic Data Sets

The figure above (figure 80) demonstrates more and less obvious “elbows”, where users could select the number of clusters equal to 3 or 4.

Silhouette Score

The Silhouette Score is a metric used to evaluate the quality of a clustering result. It measures how similar an object (point) is to its own cluster (cohesion) compared to other clusters (separation). The score ranges from −1 to +1, where higher values indicate better-defined clusters [32].

The Silhouette score considers two main factors for each data point:

  • Cohesion (a(i)) - The cohesion measure for the ith point is the average distance between the point and all other points in the same cluster. It measures the point's proximity to other points in its cluster. A low a(i) value indicates that the point is tightly grouped with other points in the same cluster.
  • Separation (b(i)) – The separation measure for the ith point estimates the average distance between the point and points in the nearest neighbouring cluster - the cluster that is not its own but is closest to it. A large value for b(i) indicates that the point is far away from the closest other cluster, meaning it is well-separated.

The silhouette score for a point i is then calculated as:

 Silhouette Score
Figure 81: Silhouette Score

where:

  • s(i) is the silhouette score for point i.
  • a(i) is the average distance from point i to all other points in the same cluster.
  • b(i) is the average distance from point i to all points in the nearest other cluster.
  • s(i) ≈ +1 indicates that the point i is well clustered.
  • s(i) around 0 indicates that the point lies close to the boundary between clusters.
  • s(i) ≈ -1 indicates that the point i was most probably assigned to the wrong cluster.

Steps of the method:

  1. Plot silhouette score (SC) against the number of clusters:
    • Computing the SC for different values of NC, typically starting from NC=2 up to a reasonable maximum value (e.g., 10 or 20).
    • Plotting the SC values on the y-axis and the number of clusters NC on the x-axis.
  2. Observe the plot:
    • As the number of clusters NC increases, the SC shows different score values, which may or may not gradually decrease, as in the case of the “elbow” method.
    • The main goal is to observe the maximum SC value and the corresponding NC value.
  3. Select optimal NC:
    • The value of NC at the maximum SC value is often considered the optimal number of clusters because it balances the trade-off between model complexity and performance.

Limitations:

  • It may not perform well if the data does not have a clear structure or if the clusters are of very different densities or sizes.
  • The Silhouette score might not always match intuitive or domain-specific clustering insights.

An example is provided in the following figure 82:

 Silhouette Example on a Synthetic Data Set
Figure 82: Silhouette Example on a Synthetic Data Set

The user should look for the highest score, which in this case is for the 3-cluster option.
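The same selection can be reproduced programmatically (a sketch using scikit-learn's `silhouette_score` on synthetic three-cluster data):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic data with three compact clusters (an assumption for illustration)
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.3, size=(40, 2)) for c in ([0, 0], [4, 0], [2, 4])])

scores = {}
for nc in range(2, 7):
    labels = KMeans(n_clusters=nc, n_init=10, random_state=0).fit_predict(X)
    scores[nc] = silhouette_score(X, labels)   # mean s(i) over all points

best_nc = max(scores, key=scores.get)          # NC with the highest silhouette score
```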

DBSCAN

DBSCAN (Density-Based Spatial Clustering of Applications with Noise) employs density measures to mark points in high-density regions and those in low-density regions – the noise. Because of this natural behaviour, the algorithm is particularly useful in signal processing and similar application domains [33].

One of the essential concepts is the neighbourhood of a point p, which is the set of points reachable within the user-defined distance eps (epsilon):

  N(p) = { q ∈ D | distance(p,q) ≤ eps }

where:

  • p – the point of interest.
  • N(p) – neighbourhood of the point p.
  • q – any other point.
  • distance(p,q) – Euclidian distance between points q and p.
  • eps – epsilon – user-defined distance constant.
  • D – the initial set of points available for the algorithm.

The algorithm treats different points differently depending on density and neighbouring points distribution around the point – its neighbourhood:

  1. Core Points:
    • A point is a core point if it has at least MinPts neighbours within a distance eps, where MinPts and eps are user-defined parameters, i.e. |N(p)| ≥ MinPts.
  2. Directly Density-Reachable points:
    • A point is directly density-reachable from a core point if it lies within the distance eps of the core point.
  3. Border Points:
    • A border point is not a core point but within the eps distance of a core point. Border points are part of a cluster but do not have enough neighbours to be core points.
  4. Noise points:
    • Points that are not core and are not reachable from any core point are considered noise or outliers.
  DBSCAN Concepts
Figure 83: DBSCAN Concepts
  • The main steps in DBSCAN:
    1. Select a point from the data set – it might be the first in the data set or any randomly selected one.
    2. If it is a core point, form a cluster by grouping it with all directly density-reachable points.
    3. Move to the next unvisited point and return to step 1.
    4. Once all points are visited, border points are added to the nearest cluster, and points not reachable from any core point are marked as noise.
  • Advantages:
    • Can detect clusters of arbitrary shape.
    • Naturally identifies outliers or noise.
    • Unlike K-means, DBSCAN does not require specifying the number of clusters upfront.
  • Disadvantages:
    • Results depend heavily on the choice of eps and MinPts.
    • It struggles with clusters of varying densities since eps is fixed.

DBSCAN is excellent for discovering clusters in data with noise, especially when clusters are not circular or spherical.
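A minimal DBSCAN run can be sketched as follows (scikit-learn; the two dense synthetic blobs plus two far-away points are an assumption):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
# Two dense blobs and two isolated points that should end up as noise
X = np.vstack([rng.normal([0, 0], 0.2, size=(30, 2)),
               rng.normal([5, 5], 0.2, size=(30, 2)),
               [[20.0, 20.0], [-20.0, 15.0]]])

# eps and min_samples correspond to the eps and MinPts parameters described above
db = DBSCAN(eps=1.0, min_samples=5).fit(X)
labels = db.labels_                      # cluster index per point; -1 marks noise
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
n_noise = int((labels == -1).sum())
```

Unlike the K-means sketch earlier, no cluster count is passed in; the number of clusters follows from eps and MinPts.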

Some application examples (figures 84 and 85):

  DBSCAN Example
Figure 84: DBSCAN Example: Eps = 1.0, 13 clusters and 96 noise points
  DBSCAN Example
Figure 85: DBSCAN Example: Eps = 1.5, 3 clusters and 8 noise points

A typical application in signal processing (figure 86):

  DBSCAN Example
Figure 86: DBSCAN Example: Eps = 0.2, 3 Clusters and 84 Noise Points
Selecting eps and MinPts values

Usually, MinPts is selected using some prior knowledge of the data and its internal structure. Once it is set, the following steps might be applied to select eps:

  • Calculate the average distance between every point and its k-nearest neighbours, where k = MinPts.
  • The average distances are sorted and depicted on a chart, where x – is the index of the sorted average distance, y – is the distance value.
  • The optimal eps value is when y increases rapidly, as shown in the following picture (figure 87) on artificial sample data.
  Selecting MinPts
Figure 87: Selecting MinPts

The red horizontal line shows a possible eps value, around 0.045.
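The k-distance computation behind such a chart can be sketched as follows (scikit-learn's `NearestNeighbors`; the data and MinPts value are assumptions):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
X = np.vstack([rng.normal([0, 0], 0.2, size=(40, 2)),
               rng.normal([3, 3], 0.2, size=(40, 2))])

min_pts = 5
# Distance from every point to its k-th nearest neighbour (k = MinPts);
# note that the query point itself is returned as the first neighbour
nn = NearestNeighbors(n_neighbors=min_pts).fit(X)
distances, _ = nn.kneighbors(X)
k_dist = np.sort(distances[:, -1])   # sorted k-distances: the y-values of the chart

# A candidate eps lies where the sorted curve starts rising sharply;
# a high quantile is a crude stand-in for reading the chart by eye
eps_candidate = float(np.quantile(k_dist, 0.95))
```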

Decision Tree-based Classification Models

Introduction

Classification assigns a class mark to a given object, indicating that the object belongs to the selected class or group. In contrast to clustering, the classes must pre-exist. In many cases, clustering might be a prior step to classification. Classification might be understood slightly differently in different contexts; however, in the context of this book, it describes a process of assigning marks of pre-existing classes to objects depending on their features.

Classification is used in almost all domains of modern data analysis, including medicine, signal processing, pattern recognition, different types of diagnostics and other more specific applications.

Interpretation of the model output

The classification process consists of two steps: first, an existing data sample is used to train the classification model, and then, in the second step, the model is used to classify unseen objects, thereby predicting to which class the object belongs. As with any other prediction, in classification, the model output is described by the error rate, i.e., true prediction vs. wrong prediction. Usually, objects that belong to a given class are called – positive examples, while those that do not belong are called – negative examples.

Depending on a particular output, several cases might be identified:

  • True positive (TP) – the object belongs to the class and is classified as a class member.

Example: A SPAM message is classified as SPAM, or a patient classified as being in a particular condition is, in fact, experiencing this condition.

  • False positive (FP) – the object that does not belong to the class is classified as a class member.

Example: A harmless message is classified as SPAM, or a patient who is not experiencing a certain condition is classified as being in this condition.

  • True negative (TN) – the object that is classified as not being a member of the class, in fact, is not a member.

Example: A harmless message is classified as harmless, or a patient not experiencing a certain condition is classified as not experiencing.

  • False negative (FN) – the object that belongs to the class is classified as not belonging to it.

Example: A SPAM message is classified as harmless, or a patient experiencing a certain condition is classified as not experiencing.

While training the model and counting the number of training samples falling into the mentioned cases, it is possible to describe its accuracy mathematically. Here are the most commonly used statistics:

  • Sensitivity = TP/(TP+FN)
  • Specificity = TN/(FP+TN)
  • Positive predictive value = TP/(TP+FP)
  • Negative predictive value = TN/(TN+FN)
  • Accuracy = (TP+TN)/(TP+FP+TN+FN)
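These statistics are simple ratios of the four counts; a sketch with hypothetical confusion-matrix counts for a SPAM classifier:

```python
# Hypothetical counts for a SPAM classifier (illustrative numbers)
TP, FP, TN, FN = 40, 5, 50, 5

sensitivity = TP / (TP + FN)                 # share of SPAM actually caught
specificity = TN / (FP + TN)                 # share of harmless mail passed through
ppv = TP / (TP + FP)                         # positive predictive value
npv = TN / (TN + FN)                         # negative predictive value
accuracy = (TP + TN) / (TP + FP + TN + FN)   # overall share of correct decisions
```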

Training the models

The classification model is trained using the initial sample data, which is split into training and testing subsamples. Usually, the training is done using the following steps:

  1. The sample is split into training and testing subsamples.
  2. Training subsample is used to train the model.
  3. Test subsample is used to acquire accuracy statistics as described earlier.
  4. Steps 1 – 3 are repeated several times (usually at least 10 – 25) to acquire average model statistics.

The average statistics are used to describe the model.

The model's results on the test subsample depend on different factors—noise in the data, the proportion of classes represented in the data (how evenly classes are distributed), and others beyond the developer's reach. However, by manipulating the sample split, it is possible to provide more data for training and thereby expect better training results—seeing more examples might lead to a better grasp of the class features. However, seeing too much might lead to a loss of generality and, consequently, reduced accuracy on test subsamples or previously unseen examples. Therefore, it is necessary to maintain a good balance between testing and training subsamples, usually 70% for training and 30% for testing, or 60% for training and 40% for testing. In real applications, if the initial data sample is large enough, a third subsample is used – a validation set used only once to acquire final statistics and not provided to developers. It is usually a small but representative subsample, 1-5% of the initial data sample.

Unfortunately, in many practical cases the data sample is not large enough. Therefore, several testing techniques are used to ensure the reliability of statistics while respecting the scarcity of data. This family of methods is called cross-validation; it reuses the training and testing data subsets and saves data by not requiring a separate validation set.

Random sample

 Random Sample
Figure 88: Random Sample

In the random sample case (figure 88), most of the data is used for training, and only a few randomly selected samples are used to test the model. The procedure is repeated many times to estimate the model's average accuracy. The random selection has to be made without replacement. If the selection is made with replacement, the method is called bootstrapping, which is widely used and generally gives more optimistic estimates.

K-folds

 K-folds
Figure 89: K-folds

This approach splits the training set into smaller sets called splits (in the figure 89 above, there are three splits). Then, for each split, the following steps are performed:

  • The model is trained using k-1 folds; in the figure above (figure 89), every split (row) is divided into k folds, where, split by split, the i-th fold is used for testing while the remaining k-1 folds are used for training.
  • The model's accuracy is assessed iteratively using the remaining fold for each split.

The overall performance for the k-fold cross-validation is the average performance of the individual performances computed for each split. It requires extra computing but respects data scarcity, which is why it is used in practical applications.
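A k-fold cross-validation of this kind is available out of the box; a sketch using scikit-learn's `cross_val_score` (the Iris data set and the decision-tree model are assumptions for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: each fold is used once for testing,
# while the remaining k-1 folds are used for training
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
mean_accuracy = scores.mean()   # the averaged statistic that describes the model
```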

One out

 One Out
Figure 90: One Out

This approach splits the training set into smaller sets called splits in the same way as the previous methods described here (in the figure 90 above, there are three splits). Then, for each split, the following steps are performed:

  • The model is trained using n-1 samples, and only one sample is used for testing the model's performance.
  • The overall performance for the one-out cross-validation is the average of the individual performances computed for each split.

This method requires many iterations, since each testing set contains only a single sample.

Within the following sub-chapters, two very widely used algorithm groups are discussed:

  • Decision Trees - a fundamental set of methods and their variants are discussed.
  • Random Forests - one of the best out-of-the-box methods widely used by data analysts.

Decision Trees

Decision trees are the most commonly used base technique in classification. To describe the idea of decision trees, a simple data set might be considered, as presented in figure 91:

 Classification Problem Example
Figure 91: Classification Problem Example

In this dataset, xn indicates the n-th observation; each column refers to a particular factor, while the last column, “Call for technical assistance,” refers to the class variable with values Yes or No, respectively.

To build a decision tree for the given problem of calling the technical assistance, one might consider constructing a tree where each path from the root to tree leaves represents a separate example xn with a complete set of factors and their values corresponding to the given example. This solution would provide the necessary outcome – all examples will be classified correctly. However, there are two significant problems:

  • The developed model is the same table encoded into a tree data structure, which might require the same amount of memory or even more, since the model literally memorises all the examples.
  • Generalisation – the essential feature of classification models, i.e. the ability to correctly classify unseen examples – is lost.

Referring to Occam's razor principle [34], the most desirable model is the most compact one, i.e., using only the factors necessary to make a valid decision. This means that one needs to select the most relevant factor, and then the next most relevant factor, until the decision is made without a doubt.

 Factor Selection Example
Figure 92: Factor Selection Example

In the figure 92 above, on its left, the factor “The engine is running” is considered, which has two potential outputs: Yes and No. For the outcome Yes, the target class variable has an equal number of positive (Yes) and negative (No) class values, which does not help much in deciding since it is still 50/50. The same is true for output No. So, checking if the engine works does not bring the decision closer.

The figure 92 on its right considers a different factor with similar potential outputs: “There are small children in the car.” For the output No, all the examples have the same class variable value—No, which makes it ideal for deciding since there is no variability in the output variable. A slightly less confident situation is for the output Yes, which produces examples with six positive class values and one negative. While there is a little variability, it is much less than for the previously considered factor.

In this simple example, it is obvious that checking if children are in the car is more effective than checking the engine status. However, a formal estimate is needed to assess the potential effectiveness of a given factor. Ross Quinlan, in 1986, proposed the ID3 algorithm [35], which employs an entropy measure:

  E(D) = − Σc∈C p(c) · log2 p(c)

where:

E(D) - Entropy for a given data set D.

C - Total number of values c of the class variable in the given data set D.

p(c) - The proportion of examples with class value c to the total number of examples in D.

E(D) = 0 when only one class value is represented (the most desirable case), and E(D) reaches its maximum (1 in the two-class case) when class values are evenly distributed in D (the least desirable case).

To select a particular factor, it is necessary to estimate how much uncertainty is removed from the data set after applying a specific factor (test). Quinlan proposed using information gain:

  IG(D,A) = E(D) − Σt∈T p(t) · E(t)

where:

IG(D,A) - Information gain of the dataset D, when factor A is applied to split it into subsets.

E(D) - Entropy for a given data set D.

T - Subsets of D, created by applying factor A.

p(t) - The proportion of examples in subset t to the total number of examples in D.

E(t) - Entropy of subset t.

The attribute with the most significant information gain is selected to split the data set into subsets. Then, each subset is divided into subsets in the same way. The procedure continues until each of the subsets has zero entropy or no factors remain to test. The approach, in its essence, is a greedy search algorithm with one hypothesis, which is refined in each iteration. It uses statistics from the entire data set, which makes it relatively immune to missing values, contradictions or errors. Since the algorithm seeks the best-fitting decision tree, it might run into a local minimum trap, where generalisation is lost. To avoid possible local minimum solutions, it is necessary to simplify or generalise the decision tree. There are two common approaches:

  • Methods that monitor the hypothesis development and stop it when overfitting risks are significant. In this case, an accuracy change rate might be used, i.e. after every factor addition, the classification accuracy is measured. If the changes are small enough, it indicates that further model development does not bring significant improvements and can be stopped. For this reason, if the data set is large enough, a separate “pruning” dataset is used.
  • Methods that allow overfitting and then pruning the tree to simplify or generalise the decision tree. In this case, the decision tree is transformed into a set of IF-THEN rules, where each rule represents a path from the decision tree root to the leaves. Iteratively, every rule is generalised by excluding conditionals from the rule's premise and classification accuracy is checked. If the changes are acceptably small, then the conditional is excluded permanently. If the data set is large enough, a separate “pruning” dataset is used.
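The entropy and information-gain estimates above can be sketched in plain Python; the numbers below reproduce the children-in-the-car example (7 negative examples with factor value No, and 6 positive plus 1 negative with factor value Yes), while the helper names are illustrative:

```python
import math
from collections import Counter

def entropy(labels):
    """E(D) = -sum over class values c of p(c) * log2(p(c))."""
    n = len(labels)
    return -sum((k / n) * math.log2(k / n) for k in Counter(labels).values())

def information_gain(labels, factor):
    """IG(D, A) = E(D) - sum over subsets t (split by factor A) of p(t) * E(t)."""
    n = len(labels)
    subsets = {}
    for value, cls in zip(factor, labels):
        subsets.setdefault(value, []).append(cls)
    return entropy(labels) - sum(len(s) / n * entropy(s) for s in subsets.values())

# Class variable ("Call for technical assistance") and the factor
# "There are small children in the car" for 14 examples
cls    = ["No"] * 7 + ["Yes"] * 6 + ["No"]
factor = ["No"] * 7 + ["Yes"] * 7

ig_children = information_gain(cls, factor)   # roughly 0.69 bits of uncertainty removed
```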

However, knowing the best factor to split the data set is not always helpful due to the costs related to the factor value estimation. For instance, in the medical domain, the most effective diagnostic methods might be the most expensive and, therefore, not always the most appropriate. Over time, different alternatives to information gain have been developed to respect expenses that are related to factor value estimation:

Alternative 1:

Alternative 2:

Currently, many other alternatives to the known ID3 family are used: ILA [36], RULES 6 [37], CN2 [38], CART [39].

The alternatives mentioned here do not use entropy-based estimates, which reduces the computational complexity of the algorithms.
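Entropy-driven tree induction of this kind is readily available in practice; a sketch with scikit-learn's `DecisionTreeClassifier` (its entropy criterion mirrors the information-gain splitting discussed above; the Iris data set is an assumption for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# criterion="entropy" selects splits by information gain, as in the ID3 family;
# max_depth acts as a simple pre-pruning parameter against overfitting
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3,
                              random_state=0).fit(X, y)
rules = export_text(tree)   # the tree printed as readable IF-THEN style splits
```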

Random Forests

Random forests [40] are among the best out-of-the-box methods highly valued by developers and data scientists. For a better understanding of the process, an imaginary weather forecast problem might be considered, represented by the following true decision tree (figure 93):

 Weather Forecast Example
Figure 93: Weather Forecast Example

Now, one might consider several forecast agents – friends or neighbours – where each provides their own forecast depending on the factor values. Some forecasts will be higher than the actual value, and some will be lower. However, since they all use some experience-based knowledge, the forecasts collected will be distributed around the exact value. The Random Forest (RF) method uses hundreds of such forecast agents – decision trees – and then applies majority voting (figure 94).

 Weather Forecast Voting Example
Figure 94: Weather Forecast Voting Example

Some advantages:

  • RF uses more knowledge than a single decision tree.
  • Furthermore, the more diverse the initial information sources used, the more diverse the models will be and the more robust the final estimate.
  • This is true because a single data source might suffer from data anomalies reflected in model anomalies.

RF features:

  • Each tree in the forest uses a randomly selected subset of factors.
  • Each tree has a randomly sampled subset of training data.
  • However, each tree is trained like usual.
  • This increases the independence of data anomalies.
  • When a decision is made, it is a simple majority vote (or, for regression, an average) over the whole forest.

Each tree in the forest is grown as follows:

  • If the number of cases in the training set is N, a sample of N cases at random is taken - but with replacement, from the original data. Some samples will be represented more than once.
  • This sample will be the training set for growing the tree.
  • If there are M input factors, a number m ≪ M (m is significantly smaller than M) is specified such that at each node, m factors are selected randomly out of the M, and the best split on these m is used to split the node.
  • The value of m is held constant while the forest grows.
  • Each tree is grown to the largest extent possible. There is no pruning.
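This growing procedure is what scikit-learn's `RandomForestClassifier` implements; a minimal sketch (the Iris data set is an assumption for illustration, and `max_features` plays the role of m):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 100 trees, each grown on a bootstrap sample of the training set;
# at every node only max_features (= m) randomly chosen factors are considered
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                                random_state=0).fit(X_tr, y_tr)
test_accuracy = forest.score(X_te, y_te)
```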

Additional considerations

Correlation Between Trees in the Forest: The correlation between any two trees in a Random Forest refers to the similarity in their predictions across the same dataset. When trees are highly correlated, they will likely make similar mistakes on the same inputs. In other words, if many trees make similar errors, the model's aggregated predictions will not effectively reduce the bias and variance, and the overall error rate of the forest will increase. The Random Forest method addresses this by introducing randomness in two main ways:

  • Bootstrap Sampling: Each tree is trained on a different bootstrapped sample (random sampling with replacement) of the training data, which helps to reduce the correlation between the trees.
  • Feature Randomness: A random subset of features is selected for each split within a tree. This subset size is denoted by m, the number of features considered at each split. By reducing m, fewer features are considered at each split, leading to more diversity among the trees and, consequently, lower correlation. Decreasing the correlation among trees increases the effectiveness of the ensemble because it reduces the variance of the overall model error, as the trees are less likely to make the same mistakes.

Strength of Each Individual Tree: The strength of an individual tree refers to its classification accuracy on new data, i.e., its ability to perform as a strong classifier. In Random Forest terminology, a tree is strong if it has a low error rate. If each tree can classify well independently, the aggregated predictions of the forest will be more accurate.

Each tree's strength depends on various factors, including its depth and the features it uses for splitting. However, there is a trade-off between correlation and strength. For example, reducing m (the number of features considered at each split) increases the diversity among the trees, lowering correlation. Still, it may also reduce the strength of each tree, as it may limit its access to highly predictive features.

Despite this trade-off, Random Forests balance these dynamics by optimising m to minimise the ensemble error. Generally, a moderate reduction in m lowers correlation without significantly compromising the strength of each tree, thus leading to an overall decrease in the forest's error rate.

Implications for the Forest Error Rate: The forest error rate in a Random Forest model is influenced by the correlation among the trees and the strength of each tree. Specifically:

  • Increasing correlation among trees typically increases the error rate, as it reduces the ensemble's ability to correct individual trees' errors.
  • Increasing the strength of each tree (i.e., reducing its error rate) generally decreases the forest error rate, as each tree becomes a more reliable classifier.

Consequently, an ideal Random Forest model balances between individually strong and sufficiently diverse trees, typically achieved by tuning the m parameter.

For further reading on practical implementations, it is highly recommended to look at the SciKit-learn package of the Python community [41].

Introduction to Time Series Analysis

As discussed in the data preparation chapter, time series usually represent the dynamics of some process. Therefore, the order of the data entries has to be preserved. As emphasised, a time series is simply a set of data—usually events—arranged by a time marker. Typically, time series are placed in the order in which events occur/are recorded.

In the context of IoT systems, there might be several reasons why time series analysis is needed. The most widespread ones are the following:

  • Process dynamics forecasting for higher-performing decision support systems. An IoT system, coupled with appropriate cloud computing or other computing infrastructure, can provide not only a rich insight into the process dynamics but also a reliable forecast using regression algorithms like the ones discussed in the regressions section or more advanced like autoregressive integrated moving average (ARIMA) and seasonal ARIMA (SARIMA) [42] [43].
  • Anomaly detection is a highly valued feature of IoT systems. In its essence, anomaly detection is a set of methods enabling the recognition of unwanted or abnormal behaviour of the system over a specific time period. Anomalies might be expressed in data differently:
    • A certain event in time: for instance, a measurement jumps over a defined threshold value. This is the simplest type of anomaly, and most control systems cope with it by setting appropriate threshold values and alerting mechanisms.
    • Change of a data fragment shape: This might happen to technical systems, where a typical response to control inputs has changed to some shape that is not anticipated or planned. A simple example is an engine's response to turning it on and reaching typical rpm values. Due to overloads, worn-out mechanics, or other reasons, the response might take too long, signalling that the device has to be repaired.
    • Event density: Many technical systems' behaviour is seasonal–cyclic. Changes in the periods and their absolute values, or their response shapes within the period, are excellent predictors of current or future malfunctioning. So, recognition of typical period shapes and response shapes in time is of high value for predictive maintenance, process control, and other applications of IoT systems.
    • Event value distribution: In most measuring systems, measurements, due to the imperfection of sensors or systems, are distributed around some actual value, providing an estimate of the true value with some variance. Due to mechanical wear, the variance might increase, or the value distribution might change over time, which is a good indicator and predictor of malfunctioning or possible system failures.

Due to its diversity, various algorithms might be used in anomaly detection, including those covered in previous chapters. For instance, clustering for typical response clusters, regression for normal future states estimation and measuring the distance between forecast and actual measurements, and classification to classify normal or abnormal states. An excellent example of using classification-based methods for anomaly detection is Isolation forests [44].
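The regression-based idea mentioned above (forecast the normal state, then measure the distance between forecast and actual measurements) can be sketched in a few lines of plain Python. The moving-average forecaster, window size, and threshold below are illustrative choices, not a reference implementation:

```python
import statistics

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag points that deviate strongly from a moving-average forecast.

    Each point is forecast as the mean of the previous `window` values; it is
    anomalous when its residual exceeds `threshold` times the standard
    deviation of the residuals collected so far on normal data.
    """
    residuals, anomalies = [], []
    for i in range(window, len(series)):
        forecast = sum(series[i - window:i]) / window
        residual = abs(series[i] - forecast)
        if len(residuals) >= window and residual > threshold * statistics.pstdev(residuals):
            anomalies.append(i)  # anomalous points are kept out of the baseline
        else:
            residuals.append(residual)
    return anomalies

# A steady hypothetical temperature log with one injected spike at index 12.
data = [20.0, 20.1, 19.9, 20.0, 20.2, 19.8, 20.1, 20.0,
        19.9, 20.1, 20.0, 20.2, 35.0]
print(detect_anomalies(data))  # -> [12]
```

Real deployments would typically replace the moving average with an ARIMA-style model, or use a dedicated method such as Isolation Forest from SciKit-learn.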

  • Understanding of system dynamics, where the system owner is interested in having insightful information on the system functioning to make good decisions on its control or further development. Typical applications are system monitoring, the production of dashboards, different industrial research, and the study of system prototypes.

While most of the methods covered here might be employed in time series analysis, this chapter outlines anomaly detection and classification cases through an industrial cooling system example.

A cooling system case

A given industrial cooling system has to maintain a specific temperature mode of around -18 °C. Due to the technology specifics, it goes through a defrost cycle every few hours to avoid ice deposits, which would lead to inefficiency and potential malfunction. However, a relatively short power supply interruption was noticed at some point, which needs to be recognised in the future so that it can be reported appropriately. The logged data series is depicted in the following figure 95:

 Cooling System
Figure 95: Cooling System

It is easy to notice that there are two standard behaviour patterns, defrost (the small spikes) and temperature maintenance (the data between spikes), and one anomaly: the high spike.

One possible alternative for building a classification model is to use K-nearest neighbours (KNN). Whenever a new data fragment is collected, it is compared to the closest stored samples, and a majority principle is applied to determine its class. In this example, three behaviour patterns are recognised; therefore, a sample collection must be composed for each pattern. This might be done by hand since, in this case, the time series is relatively short.

Examples of the collected patterns (defrost on the left and temperature maintenance on the right) are presented in figure 96:

 Example Patterns
Figure 96: Example Patterns

Unfortunately, in this example, only one anomaly is present (figure 97):

 Anomaly Pattern
Figure 97: Anomaly Pattern

A data augmentation technique might be applied to overcome data scarcity, where several other samples are produced from the given data sample. This is done by applying Gaussian noise and randomly changing the sample's length (note that the original anomaly sample itself is not used for the model). Altogether, the collection of initial data might be represented by the following figure 98:
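A minimal sketch of such augmentation in plain Python follows; the noise level, trim range, function name, and example fragment are illustrative assumptions, not the exact procedure used to produce the figures:

```python
import random

def augment(sample, n_copies=5, noise_sd=0.2, max_trim=3, seed=0):
    """Produce noisy, length-varied copies of a single time-series sample.

    Each copy receives i.i.d. Gaussian noise on every point and is randomly
    shortened by up to `max_trim` points from either end.
    """
    rng = random.Random(seed)
    copies = []
    for _ in range(n_copies):
        start = rng.randint(0, max_trim)
        stop = len(sample) - rng.randint(0, max_trim)
        copies.append([x + rng.gauss(0.0, noise_sd) for x in sample[start:stop]])
    return copies

# Hypothetical anomaly fragment: a power-interruption spike in a -18 degree log.
anomaly = [-18.0, -17.8, -5.0, -4.6, -5.1, -17.9, -18.1, -18.0]
synthetic = augment(anomaly)
```

The synthetic copies vary in length and values while preserving the overall spike shape, which is exactly the property the figure below illustrates.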

 Data Collection
Figure 98: Data Collection

One might notice that:

  • Samples of different patterns are different in length.
  • Samples of the same pattern are of different lengths.
  • The phenomena of interest (spikes) are located at different positions within the samples and differ slightly in shape.

The abovementioned issues expose the problem of calculating distances from one example to another since comparing data points position by position will produce misleading distance values. To avoid this, a Dynamic Time Warping (DTW) metric has to be employed [45]. For practical implementations in Python, it is highly recommended to visit the TSlearn library documentation [46].

Once the distance metric is selected and the initial dataset is produced, the KNN classifier might be implemented. Given a “query” data sequence, its closest samples can be determined using DTW. As an example, a simple query is depicted in the following figure 99:

 Single Query
Figure 99: Single Query

For practical implementation, the TSlearn package is used. In the following example, 10 randomly selected data sequences are produced from the initial data set. While the data set is the same, none of the selected data sequences have been “seen” by the model due to the randomness. The results are shown in the following figure 100:

 Multiple Test Queries
Figure 100: Multiple Test Queries

As might be noticed, the query (black) samples are somewhat different from the ones found to be “closest” by the KNN. Nevertheless, owing to the advantages of DTW, the classification is performed perfectly. The same idea might be extended to unknown anomalies by setting a similarity threshold for DTW, used to classify known anomalies as shown here, or even applied to simple forecasting.
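For readers who want to see the mechanics without a library, the DTW-plus-KNN idea used above can be sketched in plain Python. This is a toy 1-nearest-neighbour classifier on made-up "defrost" and "maintenance" fragments; the TSlearn package should be preferred in practice:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two sequences, O(len(a)*len(b))."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] holds the DTW distance between a[:i] and b[:j].
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

def classify_1nn(query, labelled_samples):
    """Return the label of the sample closest to `query` under DTW."""
    return min(labelled_samples, key=lambda item: dtw_distance(query, item[1]))[0]

# Toy labelled fragments of different lengths, loosely imitating the figures.
samples = [
    ("defrost", [-18, -18, -12, -11, -18, -18]),
    ("defrost", [-18, -12, -11, -12, -18]),
    ("maintenance", [-18, -18, -18, -18]),
    ("maintenance", [-18, -17.9, -18.1, -18, -18]),
]
print(classify_1nn([-18, -18, -11, -18], samples))  # -> defrost
```

Because DTW warps the time axis, the query spike is matched to the defrost samples even though the sequences differ in length and spike position.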

Hints for Further Readings on AI

This chapter has covered some of the most widely used data analysis methods applicable to sensor data analysis, which might be typical for IoT systems. However, it only scratches the surface of the exciting world of data analytics and AI. The authors suggest the following online resources, besides the well-known online learning platforms, for diving into this world.

Useful Python libraries
  • SciKit learn library for general data analysis and fundamental AI algorithms [47]: a handy Python library complemented by detailed documentation and example code snippets.
  • Time series library TSlearn [48]: provides very insightful comments and documentation on different algorithms and approaches widely used in time series analysis.
  • Pytorch [49] and Keras [50]: community pages for those who seek deep learning resources and more complex models in comparison to those that were covered in this chapter.
  • Scipy [51]: a comprehensive library for scientific computing and statistical models in Python.
Useful tools
  • Orange [52]: visual programming tool for data analysis and visualisation.
  • Weka [53]: a ready-to-use data analysis and visualisation tool.

IoT Security

IoT systems and services are widely adopted in various industries, such as health care, agriculture, smart manufacturing, smart energy systems, intelligent transport systems, logistics (supply chain management), smart homes, smart cities, and security and safety. The primary goal of incorporating IoT into existing systems in various industries is to improve productivity and efficiency. Despite the enormous advantages of integrating IoT into existing systems in multiple sectors, including critical infrastructure, there are concerns about the security vulnerabilities of IoT systems. Businesses are increasingly anxious about the possible risks IoT systems introduce into their infrastructure and how to mitigate them.

One of the weaknesses of IoT devices is that they can easily be compromised. This is because some manufacturers of IoT devices fail to incorporate security mechanisms into them, resulting in security vulnerabilities that can easily be exploited. Some manufacturers and developers often focus on device usability and adding features that satisfy the users' needs while paying little or no attention to security measures. Another reason that IoT device manufacturers and developers pay little or no attention to security is that they are often focused on getting the device to the market as soon as possible. Also, some IoT users focus mainly on the price of the devices and ignore security requirements, incentivising manufacturers to minimise the cost of the devices while trading off their security.

Also, IoT hardware constraints make it challenging to implement reliable security mechanisms, making devices vulnerable to cyber-attacks. Since IoT devices are powered by batteries with limited energy capacities, they possess low-power computing and communication systems, making it hard to implement sufficient security mechanisms. Using power-hungry computing and communication systems that would permit the incorporation of reliable security mechanisms will significantly reduce the device's lifetime (the time from when the device is deployed to when the energy stored in its battery is completely drained). As a result, manufacturers and developers tend to trade off the security of the device against its reliability and lifetime.

A successful malicious attack on an IoT system could result in data theft, loss of data privacy, and damage to other critical systems connected to the IoT systems. IoT systems are increasingly being targeted due to the relative ease with which they can be compromised. Also, they are increasingly being incorporated into critical infrastructure such as energy, water, transportation, health care, education, communication, security, and military infrastructures, making them attractive targets, especially during conventional, hybrid, and cyber warfare. In this case, the attackers' goal is not only to compromise IoT systems but to exploit the vulnerabilities of the IoT device to compromise or damage critical infrastructures. Some examples of attacks that have been orchestrated by exploiting vulnerabilities of IoT devices include:

  • The Mirai Botnet attack: An IoT botnet (a network of IoT devices, each of which runs bots) was used to conduct a massive Distributed Denial of Service (DDoS) attack against the internet's domain name system (DNS) provider Dyn in October 2016. The traffic from the IoT botnet, which included devices such as cameras and DVR players, was coordinated to bombard Dyn's DNS servers until they became overwhelmed and collapsed under the strain. The assault, sustained for several hours, disrupted the services of websites such as Twitter, the Guardian, Netflix, Reddit, CNN and many others in Europe and the US.
  • The Stuxnet attack: It is one of the most well-known IoT attacks. It was designed to target the Iranian uranium enrichment plant in Natanz, Iran. The attack compromised the Siemens Step7 software running on a Windows operating system, providing the malicious software (a worm) access to the industrial programmable logic controllers. The attack damaged several uranium centrifuges, demonstrating the extent to which IoT-based attacks could damage energy systems and critical infrastructure.
  • The Jeep Hack: This test attack was conducted by researchers in July 2015 on a Jeep SUV. They successfully took control of the vehicle by exploiting a firmware update vulnerability. They demonstrated that the attack could control the vehicle's speed and steer it off the road. Therefore, as more IoT sensors are added to cars, there is a serious risk that they can be exploited to cause a massive attack on cars, which could result in huge accidents. This kind of vulnerability can be exploited for terror attacks or targeted killings.
  • Cold in Finland: Cybercriminals conducted an IoT-based attack on heating systems in the Finnish city of Lappeenranta by turning off the heating system. They also conducted a DDoS attack on the heating infrastructure, forcing the heating controllers to reboot the system repeatedly and preventing the heating system from ever turning on. This is a severe attack, given the cold temperatures in Finland during the winter season. A similar attack may be conducted against air conditioning systems in a hot environment, which may cause serious problems for inhabitants. Thus, IoT systems may be leveraged to conduct attacks on critical civilian infrastructures to disrupt the proper functioning of society.
  • The Verkada hack: This attack was conducted against a cloud-based video surveillance service provider, Verkada. The attackers successfully compromised the privacy of their customers (including factories, hospitals, schools, and prisons) by gaining access to live feeds from about 150,000 cameras. This shows the risk that a successful compromise of an IoT cloud/fog computing service provider poses to its customers, especially customers that provide critical services for society.

The attacks mentioned above are just a few examples of how cybercriminals may exploit the vulnerabilities of IoT devices to compromise and disrupt services in other sectors, especially the disruption of critical infrastructure. These examples demonstrate the urgent need to incorporate security mechanisms into IoT infrastructures, especially those integrated with essential infrastructures. The above attack examples also indicate that the threat posed by IoT is real and can seriously disrupt the functioning of society and result in substantial financial and material losses. It may even result in the loss of several lives. Thus, if serious attention is not given to IoT security, IoT will soon be an Internet of Threats rather than an Internet of Things.

Therefore, IoT security involves design and operational strategies to protect IoT devices and other systems against cyber attacks. It includes the various techniques and systems developed to ensure the confidentiality of IoT data, the integrity of IoT data, and the availability of IoT data and systems. These strategies and systems are designed to prevent IoT-based attacks and ensure IoT infrastructures' security. In this chapter, we will discuss IoT security concepts, IoT security challenges, and techniques that can be deployed to secure IoT data and systems from being compromised by attackers and used for malicious purposes.

Cybersecurity Concepts

IoT designers and engineers need to understand cybersecurity concepts. This will help them understand the various attacks that can be conducted against IoT devices and how to implement security mechanisms to protect them against cyber attacks. This section discusses some cybersecurity concepts required to understand IoT security.

What is cybersecurity

Cybersecurity refers to the technologies, strategies, and practices designed to prevent cyberattacks and mitigate the risk posed by cyberattacks on information systems and other cyber-physical systems. It is sometimes called information technology security, which involves developing and implementing technologies, protocols, and policies to protect information systems against data theft, illegal manipulation, and service interruption. The main goal of cybersecurity systems is to protect the hardware and software systems, networks, and data of individuals and organisations against cybersecurity attacks that may breach these systems' confidentiality, integrity, and availability.

After understanding cybersecurity, it is also essential to understand what a cyberattack is. A cyberattack can be considered any deliberate compromise of an information system's confidentiality, integrity, or availability. That is, unauthorised access to a network, computer system or digital device with a malicious intention to steal, expose, alter, disable, or destroy data, applications or other assets. A successful cyberattack can cause a lot of damage to its victims, ranging from loss of data to financial losses. An organisation whose systems have been compromised by a successful cyberattack could lose its reputation and be forced to pay for damages incurred by customers.

The question is why we should be worried about cybersecurity attacks, especially in the context of IoT. The widespread adoption of IoT to improve business processes and personal well-being has exponentially increased the options available to cybercriminals for conducting cybersecurity attacks, increasing cybersecurity-related risks for businesses and individuals. This underscores the need for IoT engineers, IT engineers, and other non-IT employees to understand cybersecurity concepts.

The confidentiality, integrity and availability (CIA) triad

The CIA triad is a conceptual framework that combines three cybersecurity concepts (confidentiality, integrity, and availability) to provide a simple and complete checklist for implementing, evaluating, and improving cybersecurity systems. They form a set of requirements that a well-designed cybersecurity system must satisfy to ensure information systems' confidentiality, integrity, and availability. It provides a powerful approach to identify vulnerabilities and threats in information systems and then implement appropriate technologies and policies to protect the information systems from being compromised. It provides a high-level framework that guides organisations and cybersecurity experts when designing, implementing, evaluating, and auditing information systems. In the following paragraphs, we briefly discuss the elements of the CIA triad (figure 101).

CIA Triad
Figure 101: CIA Triad

Confidentiality

It involves the technologies and strategies to ensure that sensitive data is kept private and inaccessible to unauthorised individuals. That is, sensitive data should be viewed only by authorised individuals within the organisation and kept private from unauthorised individuals. Some of the data collected by IoT sensors is very sensitive, and it must be kept private and should not be viewed by unauthorised individuals with malicious intentions. Data confidentiality involves a set of technologies, protocols, and policies designed and implemented to protect data against unintentional, unlawful, or unauthorised access, disclosure, or theft. To ensure data confidentiality, it is essential to answer the following questions:

  • Who should be able to view the data or have access to the data?
  • Are there laws, regulations, or contracts that require the data to be confidential?
  • Are there specific conditions under which the data may be used or disclosed?
  • How sensitive is the data, and what consequences may be faced if unauthorised individuals access the data?
  • How valuable can the data be to unauthorised individuals (e.g., cybercriminals) if they can access it?

To ensure the confidentiality of the data stored in computer systems and transported through computer and telecommunication networks, some security guidelines should be followed:

  • Encrypt sensitive data during storage in computer systems and transportation through computer and telecommunication networks. Encryption renders the data unreadable or unintelligible to unauthorised persons, and only those who possess the appropriate keys can decrypt and access the data. The keys are kept confidential, so unauthorised individuals cannot access the data unless the keys or the encryption scheme are compromised.
  • Proper data access management is needed to ensure that only authorised individuals with the proper privileges can access the data. Users should always authenticate themselves using strong passwords, and multi-factor (e.g., two-factor) authentication should be used where possible. Also, users' access rights or privileges should be regularly reviewed, and unnecessary rights or privileges should be revoked.
  • The physical location of hardware systems and paper documents should be secured appropriately. Just as it is essential to control remote access to digital systems, access to the physical location where the hardware and other critical assets are stored should also be thoroughly controlled. Even paper documents should be properly sorted and stored in secure locations, and access must be controlled.
  • Any data, hardware devices, and paper documents no longer needed should be securely disposed of immediately.
  • Care must be taken to ensure data privacy or confidentiality is not compromised, especially for sensitive data. One of the best ways to avoid the risk of handling sensitive data is not to collect it in the first place: if it is possible to do without it, it should not be collected.
  • Sensitive data should be used only when necessary; otherwise, it should not be used to preserve its confidentiality.
  • Appropriate security systems should be implemented to ensure data confidentiality. Some of these measures include access control systems (e.g., firewalls), threat management systems, and attack detection and prevention systems.
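As an illustration of the access-management guideline above, passwords should never be stored in plain text; instead, a salted, deliberately slow key-derivation function from Python's standard library can be used. This is a minimal sketch: the iteration count and other parameters are illustrative and should follow current best-practice guidance in production.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a salted hash; store (salt, iterations, digest), never the password."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, expected_digest):
    """Recompute the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected_digest)

salt, iters, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, iters, digest))  # True
print(verify_password("wrong guess", salt, iters, digest))                   # False
```

Because the salt is random per user and the derivation is slow, a stolen database of digests cannot easily be reversed into passwords by brute force.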

Integrity

Integrity in cybersecurity involves technologies and strategies designed to ensure that data is not modified or deleted during storage or transportation by unauthorised persons. It is essential to maintain the integrity of the data to ensure that it is consistent, accurate, and reliable. In the context of IoT, integrity is the assurance that the data collected by the IoT sensors is not illegally altered during transportation, processing, and storage, making it incomplete, inaccurate, inconsistent, and unreliable. The data can only be modified or changed by those authorised to access it. The collected data must be kept complete, accurate, consistent and safe throughout its entire lifecycle in the following ways [54]:

  • To ensure it is complete, the data must be maintained in full form with no data elements filtered, truncated or lost.
  • The accuracy of the data is preserved by ensuring that the data is not altered or aggregated either by human error or malicious attacks in such a way that affects the results of further processing and analysis of the data.
  • The consistency of the data should be maintained by ensuring that the data is unchanged regardless of how often it's accessed and no matter how long it's stored.
  • Data safety should be ensured by guaranteeing it is securely maintained and accessed only by authorised applications and individuals. Data security methods such as authentication, authorisation, encryption, backups, etc., can ensure that unauthorised applications or individuals do not alter or destroy the data.

The IoT system designers, manufacturers, developers, and operators should ensure that the data collected is not lost, leaked, or corrupted during transportation, processing, or storage. As the data collected by IoT sensors is growing and lots of companies depend on the results from the processing of IoT data for decision-making, it is vital to ensure the integrity of the data. It must be assured that the IoT data collected is complete, accurate, consistent and secure throughout its lifecycle, as compromised data is of little or no interest to organisations and users. Also, data losses due to human error and cyberattacks are undesirable for organisations and users. Physical and logical factors can influence the integrity of the data.

  • Physical integrity: It includes the various ways the integrity of the data can be compromised during transportation, storage and retrieval. During the transportation of data, some parts of the data could be lost due to packet losses occurring at the network equipment or packet errors caused by a disturbance in the transmission media. Also, data could be lost due to physical damage to the storage or computing devices. The integrity of the data could be compromised due to the following reasons:
    • Hardware failures and faults.
    • Design failures and negligence.
    • Natural failures resulting from the deterioration of the hardware device (e.g., corrosion).
    • Power failures and outages.
    • Natural disasters.
    • Environmentally induced failures resulting from extreme environmental conditions, like high temperatures.
    • Cyberattacks designed to cause hardware or power failures (e.g., energy depletion attacks).

The physical integrity of data could be enforced by:

  • Implementing redundancy in data storage systems to ensure that failure of a storage memory will not result in data losses.
  • Implementing a battery-protected write cache.
  • Deploying storage systems with advanced error-correcting memory devices.
  • Implementing clustered and distributed file systems.
  • Implementing error-detection algorithms to detect any changes in the data during transportation.
  • Deploying backups that are located in different physical locations.
  • Implementing network protection mechanisms to ensure the data is not corrupted or lost during transportation.
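The error-detection point above can be illustrated with a message authentication code: the sender attaches an HMAC tag to each reading, and the receiver recomputes it to detect tampering in transit. This is a minimal sketch using Python's standard library; the key handling shown is purely illustrative, as real systems provision and rotate keys securely.

```python
import hashlib
import hmac

SECRET_KEY = b"demo-shared-secret"  # illustrative only; provision keys securely

def tag(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag that travels alongside the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag(message), received_tag)

reading = b'{"sensor": "t1", "temp": -18.2}'
t = tag(reading)
print(verify(reading, t))                            # True: data intact
print(verify(b'{"sensor": "t1", "temp": 5.0}', t))   # False: alteration detected
```

Unlike a plain checksum, an HMAC also protects against deliberate modification, since an attacker without the key cannot forge a valid tag.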

IoT system designers, manufacturers, and developers can adopt various technologies and policies to ensure the integrity of the data from the IoT devices, through the communication networks, to the fog/cloud data centres.

  • Logical integrity: Even with no hardware issues, there can still be unintended or malicious alterations in the data or data losses during transportation, storage, and retrieval that could alter its integrity. Software design flaws, bugs, poor network configurations, human error, and cyberattacks can compromise logical integrity. Some of the data integrity risks include:
    • Data may be deleted, wrongly entered, and illegally altered in the storage system.
    • Data may be damaged, lost, or illegally altered during transportation.
    • Data may be stolen, damaged, or illegally altered by a malicious hacker after a successful cyberattack.
    • Poor network and infrastructure configuration may cause data to be stolen, damaged, lost, or illegally altered.

Enforcing data integrity is a complex task that requires carefully integrating cybersecurity tools, policies, regulations, and people. Some of the ways that data integrity can be enforced include but are not limited to the following strategies:

  • Access to the data should be strictly controlled using effective authentication and authorisation tools to ensure that unauthorised persons do not manipulate it.
  • Logs of users' actions should be created and carefully audited to keep track of their changes.
  • Data should be encrypted during transportation and storage to ensure that it is not altered or damaged during transportation or storage.
  • Data protection mechanisms should be used to prevent data losses. For example, data should be backed up regularly, and error detection and correction communication algorithms should be used.
  • When accessing data to process or analyse it, necessary steps should be taken to ensure that it is not corrupted, lost, or damaged, primarily when it is accessed by third parties for analysis.
  • The employees and other stakeholders should be trained to handle the data so that its integrity is not lost, altered, or damaged.

Availability

The computing, communication, and data storage and retrieval systems should be accessible whenever needed. Availability in the context of cybersecurity is the ability of authorised users or applications to have reliable access to information systems whenever necessary. It is one of the elements of the CIA triad that constitutes the requirements for designing secure and reliable information and communication systems such as IoT. Given that IoT nodes are being integrated into critical infrastructure and other existing infrastructure of companies and individuals, longer downtimes are not tolerated, making availability a crucial requirement. Availability disruption could result from any of the following causes:

  • Hardware failures, which may result from natural deterioration.
  • Software failures that may result from software design flaws or bugs.
  • Cyberattacks, e.g., DoS/DDoS, energy depletion attack in the case of an IoT node.
  • Power failures, which may result from power outages or, in the case of IoT nodes, from depletion of the energy stored in the battery.
  • Data damage, corruption, or losses during transportation or storage and retrieval that prevent authorised users and applications from accessing the data when needed.
  • Bandwidth bottlenecks and link failures in the communication network that interfere with data transfer to users and applications that need them.
  • Downtimes resulting from failure, misbehaviour, or malfunctioning of the cybersecurity systems themselves.
  • Damage to the computing, communication and storage infrastructure resulting from natural disasters, theft, vandalism, political unrest, or conflict.

Some of the ways to ensure the availability of information systems and data include the following:

  • Creating data backups and storing the backup systems in different geographical locations.
  • Ensuring effective operation and maintenance processes.
  • Ensuring effective and efficient energy sources and energy storage systems.
  • Energy consumption should be minimised to increase the lifetime of IoT nodes.
  • Software design flaws and bugs should be resolved quickly to minimise downtimes.
  • The physical storage locations of hardware infrastructure should be carefully secured.
  • Effective authentication and authorisation mechanisms should ensure that authorised users can access the systems when needed.
  • Cybersecurity systems should be carefully implemented and configured to minimise performance degradation and downtimes resulting from malfunctioning.
  • Ensuring the networking systems are correctly configured with appropriate security mechanisms and networking failures are quickly resolved.

Some commonly used cybersecurity terms

To understand advanced cybersecurity concepts and technologies, it is crucial to have a good understanding of some basic cybersecurity concepts. Below, some cybersecurity concepts are presented.

Cybersecurity risk: It is the probability of being exposed to a cybersecurity attack or that any of the cybersecurity requirements of confidentiality, integrity, or availability is violated, which may result in data theft, leakage, damage or corruption. It may also result in service disruption or downtime that may cause the company to lose revenue and damage infrastructure. An organisation that falls victim to a successful cyber-attack may lose its reputation and be compelled to pay damages to its customers or to pay a fine to regulatory agencies. Thus, a cybersecurity risk is the potential losses that an organisation or individuals may experience as a result of successful cyberattacks or failures of the information systems that may result in loss of data, customers, revenues, and resources (assets and financial losses).

Threat: It is an action performed to violate any of the cybersecurity requirements, which may result in data theft, leakage, damage, corruption, or losses. The action may either disclose the data to unauthorised individuals or alter the data illegally. It may equally result in the disruption of services due to system downtime, system unavailability, or data unavailability. Threats may include, among others, device infections with viruses or malware, ransomware attacks, denial of service, phishing attacks, social engineering attacks, password attacks, SQL injection, data breaches, man-in-the-middle attacks, energy depletion attacks (in the case of IoT devices), or many other attack vectors. Cybersecurity threats could result from threat actors such as nation states, cybercriminals, hacktivists, disgruntled employees, terrorists, and spies, as well as from design errors, misconfiguration of systems, software flaws or bugs, errors from authorised users, and natural disasters [55].

Cybersecurity vulnerability: It is a weakness, flaw, or error in an information system or a cybersecurity system that cybercriminals could exploit to compromise the security of an information system. There are many cybersecurity vulnerabilities, and new ones are still being discovered, but the most common include SQL injection, buffer overflows, cross-site scripting, security misconfiguration [56], weak authentication and authorisation mechanisms, and unencrypted data in transit or storage. Security vulnerabilities can be identified using vulnerability scanners and by performing penetration testing. When a vulnerability is detected, the necessary steps should be taken to eliminate it or mitigate its risk.

Cybersecurity exploit: A cybersecurity exploit is a means by which cybercriminals take advantage of cybersecurity vulnerabilities to conduct cyberattacks that compromise the confidentiality, integrity, and availability of information systems. An exploit may involve the use of advanced techniques (e.g., commands, scripting, or programming) and software tools (proprietary or open-source) to identify and abuse vulnerabilities in order to steal data, disrupt services, damage or corrupt data, or hijack data or systems in exchange for money.

Attack vector: It is a path or means by which attackers may compromise the security of an information system, such as a computing, communication, or data storage and retrieval system. Some of the common attack vectors include:

  • Phishing attacks.
  • Email attachments.
  • Credential theft using various social engineering techniques.
  • Account takeover to steal or damage data and other resources and to conduct further attacks.
  • Cryptoanalysis of encrypted data.
  • Man-in-the-middle attacks.
  • Cross-site scripting.
  • SQL injection.
  • Insider threats.
  • Vulnerability exploits (e.g., vulnerabilities in unpatched software, servers, and operating systems).
  • Browser-based attacks, application compromise.
  • Brute-force attacks to compromise passwords.
  • Using malware to take over devices, gain unauthorised access, and damage data or information systems.
  • Exploiting the presence of open ports.

The various approaches to eliminate attack vectors to reduce the chances of a successful attack include the following [57]:

  • Encryption of data during transportation, storage, and retrieval.
  • Designing effective security policies and training and compelling employees and stakeholders to apply them.
  • Patching security vulnerabilities by regularly updating the software and hardware and checking the various system configurations to identify any vulnerabilities.
  • Implementing secure network access mechanisms.
  • Performing regular security audits to identify and eliminate threats and vulnerabilities before cybercriminals exploit them.
  • Deploying threat (intrusion) detection and prevention systems.

Attack surface: An attack surface is the set of locations or possible attack vectors that cybercriminals can target or use to compromise the confidentiality, integrity, and availability of data and information systems. Organisations and individuals should always strive to minimise their attack surfaces; the smaller the attack surface, the smaller the likelihood that their data or information systems will be compromised. They must therefore constantly monitor their attack surfaces to detect and block attacks as soon as possible and minimise the potential risk of a successful attack. Some common attack surfaces are poorly secured devices (e.g., computers, mobile phones, hard drives, and IoT devices), weak passwords, a lack of email security, open ports, and unpatched software, all of which offer an open backdoor for attackers to target and exploit users and organisations. Another common attack surface is weak web-based protocols, which hackers can exploit to steal data through man-in-the-middle (MITM) attacks. There are two categories of attack surface [58]:

  • Digital attack surface: This kind of attack surface consists of all the software and hardware systems found within an organisation's infrastructure, including applications, code, ports, servers, websites, and sensor devices (IoT devices). With the deployment of tens to hundreds of millions of IoT devices, the attack surfaces created by IoT infrastructure, from the sensor layer through the networking infrastructure to the fog/cloud computing infrastructure, are vast.
  • Physical attack surface: This kind of attack surface consists of all endpoint devices that an attacker can gain physical access to, such as desktop computers, hard drives, laptops, mobile phones, Universal Serial Bus (USB) drives, and IoT devices (in the case of IoT systems). Some physical attack surfaces include carelessly discarded hardware containing user data and login credentials, user passwords written on pieces of paper, and unauthorised access to the physical location where sensitive assets are stored.

Effective attack surface management provides the following advantages to organisations and individuals:

  • Identifying vulnerabilities and eliminating them.
  • Mitigating the risk posed by cybersecurity threats.
  • Identifying new attack surfaces created as infrastructure expands and new services are adopted.
  • Managing access to critical resources and data effectively, minimising the chances of a security breach.
  • Minimising the possibility of successful cybersecurity attacks.

As IT infrastructures grow and are connected to external IT systems over the internet, they become more complex, harder to secure, and more frequently targeted by cybercriminals. Some of the ways to minimise attack surfaces to reduce the risk of cyberattacks include:

  • Implementing zero-trust policies that ensure only authorised users and applications can access information resources (computing devices, sensor devices, networks, servers, databases, etc.), eliminating or reducing the chances of unauthorised access.
  • Reducing unnecessary complexity by turning off or removing unused hardware devices and software from the IT infrastructure, shrinking the attack surfaces that cybercriminals can exploit.
  • Performing regular security audits and scanning the entire network and IT systems to identify and resolve vulnerabilities (both hardware and software) before cybercriminals can exploit them.
  • Segmenting the network into smaller networks using firewalls and micro-segmentation strategies to add more barriers, restrict the spread of attacks, and reduce attack surfaces.
  • Training employees regularly so that they adopt security best practices and respect security policies designed to enhance the security of data and information systems.

Encryption: Encryption is the process of scrambling data into a secret code (ciphertext) that can be transformed back into the original data (decrypted) only with a unique key held by authorised users or applications. It ensures that the confidentiality and integrity of the data are not compromised; that is, it prevents the data from being stolen or illegally altered by cybercriminals. Encryption is often used to protect data during transportation, storage, and processing/analysis. The encryption process uses a mathematical cryptographic algorithm (encryption algorithm) to scramble data (plaintext) into a ciphertext that can only be unscrambled back into the plaintext using a corresponding decryption algorithm and an appropriate unique key. Cryptographic keys should be long enough that cybercriminals cannot easily guess them through a brute-force attack or cryptanalysis. The goals of implementing encryption algorithms in information systems are:

  • To ensure the confidentiality of data, preventing unauthorised users from having access to the data and ensuring that the data is kept secret.
  • To ensure the integrity of the data by ensuring that it is not altered, damaged, or corrupted during storage or transportation.
  • To authenticate the users by verifying the origin of the data to ensure that the users are who they say they are.
  • To ensure non-repudiation by ensuring that a data sender cannot deny that they are the origin of the data.
  • It also enables organisations to comply with regulators' security requirements, which require that sensitive data be adequately protected from theft, corruption, and illegal alteration.

Cryptographic algorithms can be categorised into two main types as follows:

  • Symmetric encryption: In this type of encryption, the same key is used for encryption and decryption; hence, it is sometimes called private key or shared key encryption. The encryption key is sent through a secured channel so that the receiver can use it to decrypt the data. The main advantage of this type of encryption scheme is that creating the cipher is relatively inexpensive, making encryption and decryption computationally cheap and fast. A significant disadvantage is that the key could be compromised while being transferred from the sender to the receiver: if a third party obtains the key, that person or application could use it to decrypt the data, compromising the confidentiality and integrity of the data. Some common examples of symmetric encryption algorithms are the Data Encryption Standard (DES), Triple DES (3DES), the Advanced Encryption Standard (AES), and Twofish.
  • Asymmetric encryption: In this type of encryption, two different keys (a private and a public key) are used to encrypt and decrypt the data; hence, it is sometimes called a public key encryption scheme. The public key is shared with the communicating parties (senders) so that it can be used to encrypt the data, but only the receiver with the corresponding private key can decrypt the data. Asymmetric cryptographic algorithms are relatively secure but comparatively expensive for generating a cipher and computationally costly for decrypting the ciphertext back into the original plaintext. Some examples of public key encryption algorithms include RSA (Rivest-Shamir-Adleman) and Elliptic Curve Cryptography (ECC).
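As a purely pedagogical illustration of the symmetric idea above, the sketch below XORs a message with a shared random key of the same length (a one-time-pad-style toy, not a production cipher; a real system would use AES or a similar algorithm via a vetted library). All names and the sample message are illustrative.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"sensor reading: 21.5 C"
key = secrets.token_bytes(len(plaintext))   # shared secret, same length as the message

ciphertext = xor_bytes(plaintext, key)      # encryption
recovered = xor_bytes(ciphertext, key)      # decryption uses the SAME key

assert recovered == plaintext
```

The single shared key is exactly what makes the scheme symmetric; distributing that key securely is the weakness the paragraph above describes.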

Although encryption is very valuable for securing data during transportation, processing, and storage, it still has disadvantages. Some of the drawbacks of encryption are:

  • Cybercriminals can use it to hijack the data of individuals and organisations, encrypting it and demanding that a ransom be paid before the victims can regain access to their data, the so-called ransomware attack.
  • Effective management of encryption keys to ensure that they cannot be compromised is challenging, making it possible for cybercriminals to access the keys and use them to compromise the confidentiality and integrity of the data.
  • There is a growing anxiety that when quantum computing technologies mature, they will be able to break advanced encryption schemes that we now depend on to protect our data.

Authentication: Authentication is an access control mechanism that makes it possible to verify that a user, device, or application is who they claim to be. The authentication credentials (e.g., username and password) are matched against a database of authorised users or an authentication server to verify identities and ensure that the requester has access rights to the device, server, application or database. Using only a username or ID and a password for authentication is called single-factor authentication. Recently, organisations, especially those dealing with sensitive data (e.g., banks), have required their users and applications to provide multiple factors for authentication (rather than only an ID and password), resulting in what is now known as multi-factor authentication; in the case of two factors, it is known as two-factor authentication. Using human features such as fingerprint scans, facial or retina scans, and voice recognition is known as biometric authentication [59]. Authentication ensures the confidentiality and integrity of data and information systems by allowing only authenticated users, applications, and processes to access valuable and sensitive resources (e.g., computers, wireless networks, wireless access points, databases, websites, and other network-based applications and services).

Authorisation: Just like authentication, authorisation is another process often used to protect data and information systems from being abused or misused by cybercriminals and unintended (or intended) actions of authorised users. Authorisation is the process of determining the access rights of users and applications to ensure they have the right to perform the action they are trying to perform. Unlike authentication, which verifies the users' identities and then grants them access to the systems, authorisation determines the permissions they have to perform specific actions. One example of authorisation is the Access Control List (ACL), which allows or denies users and applications access to particular information system resources and to perform specific actions. General users may be allowed to perform some actions but may be refused permission to perform others. In contrast, super users or system administrators can perform almost every action in the system. Also, some users are authorised to access some data and are denied access to more sensitive data; thus, in database systems, general users may be permitted to access less sensitive data, and the administrator is permitted access to more sensitive data.
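The ACL idea above can be sketched as a deny-by-default lookup table mapping roles to permitted actions per resource. The roles, resources, and actions below are hypothetical:

```python
# Role-based ACL: which actions each role may perform on each resource.
ACL = {
    "admin": {"sensor_data": {"read", "write", "delete"},
              "firmware": {"read", "update"}},
    "viewer": {"sensor_data": {"read"}},
}

def is_authorised(role: str, resource: str, action: str) -> bool:
    """Return True only if the role's ACL grants the action (deny by default)."""
    return action in ACL.get(role, {}).get(resource, set())

assert is_authorised("admin", "firmware", "update")
assert not is_authorised("viewer", "sensor_data", "delete")
assert not is_authorised("guest", "sensor_data", "read")   # unknown role: denied
```

Note that authorisation runs after authentication: the role is only trusted because the user's identity was already verified.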

Access control: It consists of the various mechanisms designed and implemented to grant authorised users access to information system resources and to control the actions that they are allowed to perform (e.g., view, modify, update, install, delete). It can also control physical access to an organisation's critical resources. It ensures that the confidentiality and integrity of data and information systems are not compromised. Thus, physical access control restricts physical access to critical resources, while logical access control restricts access to information systems (networks, computing nodes, servers, files, and databases). Access to locations where critical assets (servers, network equipment, files) are stored is restricted using electronic access control systems that rely on keys, access card readers, personal identification number (PIN) pads, and auditing and reports to track employee access to these locations. Access to information systems (networks, computing nodes, servers, files, and databases) is restricted using authentication and authorisation mechanisms that evaluate the required user login credentials, which can include passwords, PINs, biometric scans, security tokens or other authentication factors [60].

Non-repudiation: It is a way to ensure that the sender of data cannot deny having sent it and that the receiver cannot deny having received it. It also ensures that an entity that signs a document cannot repudiate its signature. It is a concept adopted from the legal field and has become one of the five pillars of information assurance, alongside confidentiality, integrity, availability, and authentication. It ensures the authenticity and integrity of the message: it provides the sender's identity to the receiver and assures the sender that the message was delivered without being altered along the way. In this way, the sender and receiver cannot deny that they sent, received, or processed the data. Signatures can be used to ensure non-repudiation as long as they are unique to each entity.

Accountability: Accountability requires organisations to take all the necessary steps to prevent cyberattacks and mitigate the risk of a possible attack. If an attack occurs, the organisation must take responsibility for the damages and engage relevant stakeholders to handle the consequences and prevent future attacks. It must also accept responsibility for dealing with security challenges and fallouts from security breaches.

IoT Hardware and Cybersecurity

A typical IoT architecture includes a physical layer consisting of IoT sensors and actuators, which may be connected in a star, linear, mesh, or tree network topology. The IoT devices can process the data collected by the IoT sensors at the physical layer or send it to the fog/cloud computing layers for analysis through IoT access and Internet core networks. The fog/cloud computing nodes perform lightweight or advanced analytics on the data, and the result may be sent to users for decision-making or to IoT actuators to perform a specific task or control a given system or process. This implies that an IoT infrastructure may comprise IoT devices, wireless access points, gateways, fog computing nodes, internet routers and switches, telecommunication transmission equipment, cellular base stations, servers, databases, cloud computing nodes, mobile applications, and web applications. All these hardware devices and applications constitute attack surfaces that cybercriminals can target to compromise IoT systems.

In implementing IoT security, it is vital to consider the kind of hardware found in IoT systems, from the IoT device level through the IoT networks, fog computing nodes, and internet core networks to the cloud. Securing traditional internet and cloud-based infrastructure is complex but less challenging because massive computing and communication resources can be deployed to run the cybersecurity algorithms and applications that eliminate vulnerabilities and detect and prevent cyberattacks, ensuring the confidentiality, integrity, and availability of data and information systems. In IoT devices, by contrast, computing and communication resources are very limited because of the limited energy available to power the device. Hence, energy-hungry and computationally expensive cybersecurity algorithms and applications cannot be used to secure IoT nodes. This hardware limitation makes IoT devices vulnerable to cyberattacks and easy to compromise.

IoT hardware vulnerabilities

IoT devices are vulnerable to certain types of security attacks due to the nature of IoT hardware. Some of these vulnerabilities or weaknesses resulting from IoT hardware limitations include:

  • The confidentiality and integrity of sensitive data collected by sensor devices can easily be compromised due to the lack of appropriate cryptographic algorithms or the use of weak ones. The limited computing resources of IoT devices make it difficult to implement strong cryptographic algorithms that are hard for cybercriminals to break: IoT devices use microcontrollers for computing, which cannot handle strong but computationally expensive cryptographic algorithms. This makes IoT devices vulnerable to man-in-the-middle attacks, in which cybercriminals capture wireless IoT traffic and analyse it to access the data if it is not encrypted or if the encryption scheme is weak.
  • Device manufacturers introduce some of the vulnerabilities of IoT devices. They are often focused on minimising the cost of the devices and the time to market, paying little or no attention to the security requirements or needs of the customers, partly because customers are often concerned mainly with the devices' prices, ease of use and functionality. As a result, manufacturers sometimes ship devices with default passwords, with no encryption algorithms implemented, and sometimes without any authentication mechanisms. This makes the devices vulnerable to attacks.
  • In some IoT deployments, the IoT devices share the same communication channels, making them vulnerable to packet collision attacks, in which compromised IoT devices are used to create packet collisions on the channels, forcing the devices to deplete their stored energy rapidly and eventually shutting them down.
  • Since the communication between the IoT devices and between the IoT devices and the access point or gateway is through wireless radio communication channels, the IoT devices are vulnerable to jamming attacks designed to force them to deplete their stored energy rapidly.
  • IoT devices are also vulnerable to flooding attacks designed to flood IoT devices with benign or useless packets, so they will spend more energy processing these useless packets, rapidly depleting their stored energy and eventually shutting down the device.
  • Since IoT devices are relatively easy to infect with malware, they are vulnerable to a kind of malware attack in which the attacker infects the device with malware that forces the device to perform more computations, rapidly depleting the energy stored in the device and eventually shutting it down.
  • Another type of IoT hardware vulnerability is route poisoning, in which the attacker creates routing loops, turns some devices into sinkholes, or increases routing paths to force the devices to spend more energy and eventually deplete their energy, reducing the lifetime of some devices in the network.
  • IoT devices can easily be infected and turned into botnets, which can then be used to conduct sophisticated large-scale attacks, such as distributed denial of service attacks, which can paralyse IT assets (servers and gateways).
  • Another IoT hardware vulnerability is the lack of visibility. Many IoT devices are deployed without appropriate identification numbers (IP addresses), creating blind spots because the devices are not visible to security monitoring tools and can be exploited. Also, the fact that various devices may have different protocols makes monitoring all the devices within the network challenging, making them weak points.
  • An inefficient firmware verification mechanism allows tampering with or reverse-engineering the firmware, making the device vulnerable to attacks. Attackers may illegally update the device's firmware or tamper with it so that they can easily capture it and use it for further attacks.
  • Due to poor device management strategies, some organisations or individuals sometimes fail to attend to some devices to ensure that they are well secured (failing to install necessary updates and patch security holes), leaving them vulnerable to attacks from cybercriminals.
  • Some hardware security vulnerabilities are hard to eliminate, such as side-channel attacks, reverse engineering of the hardware, malware infection, and data extraction, which could be exploited and result in a data breach.
  • IoT devices are vulnerable to physical attacks. A criminal can destroy or vandalise them and even access them manually.

IoT hardware attacks

IoT hardware attacks are the various ways that security weaknesses resulting from limitations in IoT hardware can be exploited to compromise the security of IoT data and systems. An attacker may install malware on IoT devices, manipulate their functionality, or exploit their weaknesses to gain access to steal or damage data, degrade the quality of service, or disrupt the services. An attacker could also compromise IoT devices to use them for a more sophisticated large-scale attack on ICT infrastructures and critical systems. The scale and frequency of IoT attacks are increasing due to the growth of IoT attack surfaces, the ease with which IoT devices can be compromised, and the integration of IoT devices into existing systems and critical infrastructure. Some of the common IoT hardware attacks include:

  • Unauthorised access: Some IoT device manufacturers use weak or no security mechanisms to minimise manufacturing costs and reduce the time to market to meet the increase in market demand. They sometimes do not provide mechanisms for the updates necessary to patch security holes. Some create backdoors for remote servicing, which malicious hackers can exploit, while others use default or no passwords, making it easier for attackers to access and exploit the device to escalate their attacks.
  • Emulation of fake IoT devices: A third party that knows the communication protocol could develop software to emulate standard functionalities between IoT devices and then get the leverage to share false information.
  • Identity Theft: An attacker could steal the identification of legitimate devices and then perform malicious actions within the network without being identified.
  • Injection of fake information: An attacker can inject fake or misleading information to disrupt the intended functionalities. For example, in a food supply chain, a third party could inject false information about the ethylene sensor and make the system think that the transported commodity is already rotten. Therefore, mechanisms must be implemented to protect the system from fake information injection.
  • Firmware-based attacks: When a new security threat is discovered, a firmware update is required to obtain an updated version that addresses the threat. The firmware, security configuration and other device features can be cloned, and the attacker can also upgrade the firmware of a device with malicious software [61][62].
  • Eavesdropping and man-in-the-middle attacks: Data exchange should be performed securely, making data interception by a third party impossible. Traditional data encryption schemes cannot be implemented in IoT devices, requiring lightweight encryption, which is not straightforward and is sometimes ignored by manufacturers. Transmitting unencrypted IoT data, including security data, makes IoT networks susceptible to eavesdropping and man-in-the-middle attacks.
  • Energy depletion attacks: In this kind of attack, an attacker tries to significantly increase the energy consumption of a battery-powered IoT device, drain the device's battery, and eventually shut down the device. Examples of such attacks include Denial of Sleep (DoS), flooding, carousel, and stretch attacks [63].
  • Vampire attacks: A class of energy depletion attacks carried out at the routing layer, in which an attacker crafts or forwards packets along unnecessarily long or looping routes so that the devices relaying them waste energy and drain their batteries. Carousel and stretch attacks are typical examples [64].
  • Routing attacks: An attacker may manipulate the routing information of the devices to create routing loops, selectively forward packets or intend to use longer routes to increase energy consumption. Some routing attacks include sinkholes, selective forwarding, wormholes, and Sybil attacks [65].
  • Jamming attacks: A denial-of-service attack on a shared wireless communication channel in which an attacker prevents legitimate users from using the shared channel [66]. It targets the physical or data link layer of the IoT wireless network.
  • Brute-force attacks: This kind of attack is aimed at obtaining the login credentials of the device to gain unauthorised access to it. For devices with default passwords, commonly used passwords (e.g., admin), or weak passwords, attackers can recover these credentials and use them to gain illegal access to IoT devices.
  • DoS/DDoS attacks: Because adequate security mechanisms are not implemented to harden the security of IoT devices, they can easily be compromised. Many IoT devices can then constitute an army of botnets used to conduct DDoS attacks that saturate the buffers and other resources in access points, fog nodes and cloud platforms.
  • Packet collision attacks: This attack is typical in IoT applications where the devices share the wireless communication channel. An attacker can capture some of the devices and then use them to create packet collisions in the communication channel to disrupt communication and force the devices to consume more energy by retransmitting packets multiple times, increasing the time the devices stay awake to communicate (or decreasing their sleep time). This kind of attack is a type of energy depletion attack.
  • Physical attack on the device: An IoT device may be physically manipulated or damaged to extract vital information. This is an essential aspect of IoT-based agriculture, as the IoT infrastructure in the fields can be vandalised.

IoT hardware security

It is tough to eliminate IoT hardware vulnerabilities due to the hardware resource constraints of IoT devices. Some of the measures for securing IoT devices and mitigating the risk posed by IoT security vulnerabilities include the following:

  • Implementing lightweight encryption schemes on IoT devices: The data stored in the IoT devices (e.g., device authentication data and other sensitive data) should be encrypted to ensure its confidentiality and integrity are not compromised. The IoT data should be encrypted before being transmitted through any transmission medium. Since traditional cryptographic algorithms are computationally expensive and require strong and energy-hungry computing systems, it is preferable to implement lightweight cryptographic algorithms that require relatively less energy.
  • Implementing robust authentication mechanisms on IoT devices: Robust authentication mechanisms should be implemented to restrict access to IoT devices and to ensure that all IoT devices that connect to access points and servers are authenticated. This ensures that access to critical resources like access points, gateways, and servers can be controlled to ensure the authenticity of the communication. It is also important to avoid purchasing devices with hardcoded passwords, change default passwords, and create strong passwords for devices.
  • Configuring firewalls to protect devices from traffic-based attacks: The network's perimeter can be protected by implementing firewalls that reject malicious traffic at the network's edge. That is, it allows only traffic from legitimate sources and blocks traffic from sources deemed malicious. It can also be used to segment the network so that the IoT network can be isolated from other networks and attacks on IoT networks can not be spread to different networks. Software firewalls can be configured on individual devices to restrict traffic from unauthorised sources from reaching the devices.
  • Ensure that the software and hardware components are not compromised: The software and hardware components used in IoT devices should be well tested to ensure there are no security vulnerabilities that malicious attackers may exploit to compromise the security of the data and the devices. Security measures should be included at every stage of the device lifecycle to ensure that well-known vulnerabilities are resolved and that security strategies are in place to keep the device and its data from being compromised.
  • Implement dedicated security hardware to improve device security: Dedicated hardware components are designed specifically to perform security-related functions (e.g., secure communications, energy-efficient cryptographic functions, and key management) to ensure the devices' real-time security. Some dedicated hardware components can facilitate the implementation of a secure boot process and authentication operations. Another advantage of using dedicated IoT security hardware is that some are designed to strike a balance between the IoT hardware constraint, energy consumption, and security.
  • Always verify the validity and trustworthiness of the software and firmware of IoT devices: Reliable mechanisms should be implemented to verify the validity and trustworthiness of the software and firmware of IoT devices. This way, it is possible to check whether the software or the device's operating system has been tampered with or manipulated in a way that leaves the device vulnerable to attacks.
  • Regular security checks and updates: Mechanisms should be implemented to check whether the device has been tampered with. The firmware and software of the device should also be updated regularly to patch any security holes.
  • Regular security audits should be performed: The IoT network should be regularly audited using vulnerability scanning and security auditing tools to ensure that IoT vulnerabilities (including hardware vulnerabilities) and threats can be detected and resolved before criminals can exploit them.
  • Enforcement of security policies: Sound security policies should be designed and enforced to ensure that IoT devices and data are not easily compromised. For example, the principle of security by design should be applied when developing and implementing IoT hardware and software. All IoT devices must be identified, monitored continuously, and regularly audited so that known vulnerabilities can be resolved promptly. Attacks against IoT devices should likewise be detected and blocked promptly.
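
The firmware-validation measure listed above can be illustrated with a short, self-contained sketch. It assumes the vendor publishes a SHA-256 digest of each firmware release over an authenticated channel; the function name `verify_firmware` is illustrative, not part of any particular SDK.

```python
import hashlib
import hmac

def verify_firmware(image: bytes, expected_sha256_hex: str) -> bool:
    """Check a firmware image against a vendor-published SHA-256 digest.

    hmac.compare_digest performs a constant-time comparison, avoiding
    timing side channels when the check runs on the device itself.
    """
    actual = hashlib.sha256(image).hexdigest()
    return hmac.compare_digest(actual, expected_sha256_hex)

# Example: a tampered image fails the check.
good = b"firmware v1.2.3"
published = hashlib.sha256(good).hexdigest()
assert verify_firmware(good, published)
assert not verify_firmware(good + b"\x00", published)
```

Note that a bare hash only protects against accidental corruption unless the reference digest itself is delivered over an authenticated channel; production secure-boot chains therefore use digital signatures (e.g., Ed25519) so that the reference value cannot be forged.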

IoT Cybersecurity Challenges

The security of computer systems and networks has garnered significant attention in recent years, driven by malicious attackers' ongoing exploitation of these systems, which leads to service disruptions. The increasing prevalence of known and unknown vulnerabilities has made designing and implementing effective security mechanisms increasingly complex and challenging. This section discusses the challenges and complexities of IoT cybersecurity systems.

An in-depth description of the cybersecurity challenges is presented below; they are summarised briefly in Figure 102.

Challenges in Cybersecurity
Figure 102: Challenges in Cybersecurity

Complexities in Security Implementation

Implementing robust security in IoT ecosystems is a multifaceted challenge that involves satisfying critical security requirements, such as confidentiality, integrity, availability, authenticity, accountability, and non-repudiation. While these principles may appear straightforward, the technologies and methods needed to achieve them are often complex. Ensuring confidentiality, for example, may involve advanced encryption algorithms, secure key management, and secure data transmission protocols. Similarly, maintaining data integrity requires comprehensive hashing mechanisms and digital signatures to detect unauthorised changes.
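
As a small illustration of the integrity mechanisms mentioned above, the following sketch uses an HMAC (a keyed hash from Python's standard library) to detect unauthorised changes to a sensor reading. The shared key and the message format are hypothetical assumptions; in a real deployment the key would be provisioned through a secure key-management process.

```python
import hashlib
import hmac
import os

key = os.urandom(32)  # shared secret; securely provisioned in practice

def tag(message: bytes) -> bytes:
    """Compute an integrity tag (HMAC-SHA256) over a message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    """Constant-time check that the message matches its tag."""
    return hmac.compare_digest(tag(message), mac)

reading = b'{"sensor": "t1", "celsius": 21.5}'
mac = tag(reading)
assert verify(reading, mac)
# Any modification in transit invalidates the tag:
assert not verify(b'{"sensor": "t1", "celsius": 99.9}', mac)
```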

Availability is another essential aspect that demands resilient infrastructure to protect against Distributed Denial-of-Service (DDoS) attacks and ensure continuous access to IoT services. The authenticity requirement involves using public key infrastructures (PKI) and digital certificates, which introduce key distribution and lifecycle management challenges.

Achieving accountability and non-repudiation involves detailed auditing mechanisms, secure logging, and tamper-proof records to verify user actions and device interactions. These systems must operate seamlessly within constrained IoT environments with limited processing power, memory, or energy resources. Implementing these mechanisms thus demands technical expertise and the ability to reason through subtle trade-offs between security, performance, and resource constraints. The complexity is compounded by the diversity of IoT devices and communication protocols and the potential for vulnerabilities arising from integrating these devices into broader networks.

Inability to Exhaust All Possible Attacks

When developing security mechanisms or algorithms, it is essential to anticipate and account for potential attacks that may target the system's vulnerabilities. However, fully predicting and addressing every conceivable attack is often not feasible. This is because malicious attackers constantly innovate, usually approaching security problems from entirely new perspectives. By doing so, they can identify and exploit weaknesses in the security mechanisms that were not initially apparent or considered during development. This dynamic nature of attack strategies means that no security mechanism, no matter how well-designed, can be wholly immune to every potential threat. As a result, the development process must involve defensive strategies, ongoing adaptability, and the ability to respond to novel attack vectors that may emerge quickly. The continuous evolution of attack techniques, combined with the complexity of modern systems, makes it nearly impossible to guarantee absolute protection against all threats.

The Problem of Where to Implement Security Mechanisms

Once security mechanisms are designed, a crucial challenge arises in determining the most effective locations for their deployment to ensure optimal security. This issue is multifaceted and involves both physical and logical considerations.

Physically, it is essential to decide at which points in the network security mechanisms should be positioned to provide the highest level of protection. For instance, should security features such as firewalls and intrusion detection systems be placed at the perimeter, or should they be implemented at multiple points within the network to monitor and defend against internal threats? Deciding where to position these mechanisms requires careful consideration of network traffic flow, the sensitivity of different network segments, and the potential risks of various entry points.
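
To make the segmentation idea concrete, the following is a minimal, hypothetical allowlist filter: it encodes the policy "the IoT subnet may reach only the MQTT broker" and rejects everything else. The subnets, broker address, and port are invented for illustration; real deployments would express such a policy in firewall rules (e.g., nftables) rather than application code.

```python
import ipaddress

# Hypothetical segmentation policy: the IoT subnet may talk to the
# MQTT broker over TLS, and is otherwise isolated.
ALLOWED = {
    ("10.0.20.0/24", "10.0.30.5", 8883),  # IoT devices -> MQTT broker
}

def permit(src: str, dst: str, dport: int) -> bool:
    """Return True only if the flow matches an allowlist entry."""
    src_ip = ipaddress.ip_address(src)
    return any(
        src_ip in ipaddress.ip_network(net) and dst == d and dport == p
        for net, d, p in ALLOWED
    )

assert permit("10.0.20.17", "10.0.30.5", 8883)        # broker traffic passes
assert not permit("10.0.20.17", "10.0.40.2", 22)      # office subnet blocked
```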

Logically, the placement of security mechanisms also needs to be considered within the system's architecture. For example, within the TCP/IP model, security features could be implemented at different layers, such as the application layer, transport layer, or network layer, depending on the nature of the threat and the type of protection needed. Each layer offers different opportunities and challenges for securing data, ensuring privacy, and preventing unauthorised access. The choice of layer for deploying security mechanisms affects how they interact with other protocols and systems, potentially influencing the overall performance and efficiency of the network.

In both physical and logical terms, selecting the proper placement for security mechanisms requires a comprehensive understanding of the system's architecture, potential attack vectors, and performance requirements. Poor placement can leave critical areas vulnerable or lead to inefficient resource use, while optimal placement enhances the system's overall defence and response capabilities. Thus, strategic deployment is essential to achieving robust and scalable security for modern networks.

The Problem of Trust Management

Security mechanisms are not limited to implementing a specific algorithm or protocol; they often require a robust system of trust management that ensures the participants involved can securely access and exchange information. A fundamental aspect of this is the need for participants to possess secret information—such as encryption keys, passwords, or certificates—that is crucial to the functioning of the security system. This introduces various challenges regarding how such sensitive information is generated, distributed, and protected from unauthorised access.

For instance, cryptographic keys must be created and distributed carefully to prevent interception or theft. Secure key exchange protocols must be employed, and mechanisms for storing keys securely—such as hardware security modules or secure enclaves—must be in place. Additionally, the management of trust between parties is often based on keeping these secrets confidential. If any party loses control over their secret information or if it is exposed, the entire security framework may be compromised.

Beyond the management of secrets, trust management also relies on communication protocols whose behaviour can complicate the development and reliability of security mechanisms. Many security mechanisms depend on the assumption that specific communication properties will hold, such as predictable latency, order of message delivery, or the integrity of data transmission. However, in real-world networks, factors like varying network conditions, congestion, and protocol design can introduce unpredictable delays or alter the sequence in which messages are delivered. For example, if a security system depends on setting time-sensitive limits for message delivery—such as in time-based authentication or transaction protocols—any communication protocol or network that causes delays or variability in transit times may render these time limits ineffective. This unpredictability can undermine the security mechanism's ability to detect fraud, prevent replay attacks, or ensure timely authentication.
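
The time-limit problem described above can be sketched as a freshness check combined with nonce tracking for replay protection. The 30-second acceptance window and the in-memory nonce set are illustrative assumptions; note how a network that delays delivery beyond the window causes a legitimate message to be rejected, which is exactly the failure mode discussed above.

```python
import time

WINDOW = 30  # seconds of tolerated clock skew and transit delay
seen_nonces = set()

def accept(message_time, nonce, now=None):
    """Accept a message only if it is fresh and its nonce is unseen."""
    now = time.time() if now is None else now
    if abs(now - message_time) > WINDOW:
        return False          # too old, or too far in the future
    if nonce in seen_nonces:
        return False          # replayed message
    seen_nonces.add(nonce)
    return True

t0 = 1_000_000.0
assert accept(t0, "n1", now=t0 + 5)        # fresh message accepted
assert not accept(t0, "n1", now=t0 + 6)    # replay rejected
assert not accept(t0, "n2", now=t0 + 120)  # delayed past the window
```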

Moreover, trust management issues also extend to the trustworthiness of third-party services or intermediaries, such as certificate authorities in public key infrastructures or cloud service providers. If the trust assumptions about these intermediaries fail, it can lead to a cascade of vulnerabilities in the broader security system. Thus, a well-designed security mechanism must account for the secure handling of secret information, the potential pitfalls introduced by variable communication conditions and the complexities of establishing reliable trust relationships in a decentralised or distributed environment.

Continuous Development of New Attack Methods

Computer and network security can be viewed as an ongoing battle of wits, where attackers constantly seek to identify and exploit vulnerabilities. In contrast, security designers or administrators work tirelessly to close those gaps. One of the inherent challenges in this battle is the asymmetry of the situation: the attacker only needs to discover and exploit a single weakness to compromise a system, while the security designer must anticipate and mitigate every potential vulnerability to achieve what is considered “perfect” security.

This stark contrast creates a significant advantage for attackers, as they can focus on finding just one entry point, one flaw, or one overlooked detail in the system's defences. Moreover, once a vulnerability is identified, it can often be exploited rapidly, sometimes even by individuals with minimal technical expertise, thanks to the availability of tools or exploits developed by more sophisticated attackers. This constant risk of discovery means that the security landscape is always in a state of flux, with new attack methods emerging regularly.

On the other hand, the designer or administrator faces the monumental task of identifying every potential weakness in the system and understanding how each vulnerability could be exploited in novel ways. As technology evolves and new systems, protocols, and applications are developed, new attack vectors emerge, making it difficult for security measures to remain static. Attackers continuously innovate, leveraging new technologies, techniques, and social engineering strategies, further complicating the defence task. They may adapt to environmental changes, bypassing traditional security mechanisms or exploiting new weaknesses introduced by system updates or third-party components.

This dynamic forces security professionals to stay one step ahead, often engaging in continuous research and development to identify new threat vectors and implement countermeasures. It also underscores the impossibility of achieving perfect security. Even the most well-designed systems can be vulnerable to the next wave of attacks, and the responsibility to defend against these evolving threats is never-ending. Thus, developing new attack methods ensures that the landscape of computer and network security remains a complex, fast-paced arena in which defenders must constantly evolve their strategies to keep up with increasingly sophisticated threats.

Security is Often Ignored or Poorly Implemented During Design

One of the critical challenges in modern system development is that security is frequently treated as an afterthought rather than being integrated into the design process from the outset. Security considerations are often only discussed after the system's core functionality and architecture have been designed, developed, and even deployed. This reactive approach, where security is bolted on as an additional layer at the end of the development cycle, leaves systems vulnerable to exploitation by malicious actors who quickly discover and exploit flaws that were not initially considered.

The tendency to overlook security during the early stages of design often stems from a focus on meeting functionality requirements, deadlines, or budget constraints. When security is not a primary consideration from the start, it is easy for developers to overlook potential vulnerabilities or fail to implement adequate protective measures. As a result, the system may have critical weaknesses that are difficult to identify or fix later on. Security patches or adjustments, when made, can become cumbersome and disruptive, requiring substantial changes to the architecture or design of the system, which can be time-consuming and expensive.

Moreover, systems not designed with security in mind are often more prone to hidden vulnerabilities. For example, they may have poorly designed access controls, insufficient data validation, inadequate encryption, or weak authentication methods. These issues can remain undetected until an attacker discovers a way to exploit them, potentially leading to severe data integrity, confidentiality, or availability breaches. Once a security hole is identified, patching it in a system not built with security in mind can be challenging. It may require reworking substantial portions of the underlying architecture or logic, which may not have been anticipated during the initial design phase.

The lack of security-focused design also affects the system's scalability and long-term reliability. As new features are added or updates are made, vulnerabilities can emerge if security isn't continuously integrated into each step of the development process. This results in a system that may work perfectly under normal conditions but is fragile or easily compromised when exposed to malicious threats.

To address this, security must be treated as a fundamental aspect of system design, incorporated from the beginning of the development lifecycle. It should not be a separate consideration but rather an integral part of the architecture, just as essential as functionality, performance, and user experience. By prioritising security during the design phase, developers can proactively anticipate potential threats, reduce the risk of vulnerabilities, and build robust and resilient systems for future security challenges.

Difficulties in Striking a Balance Between Security and Customer Satisfaction

One of the ongoing challenges in information system design is finding the right balance between robust security and customer satisfaction. Many users, and even some security administrators, perceive strong security measures as an obstacle to a system's smooth, efficient, and user-friendly operation or the seamless use of information. The primary concern is that stringent security protocols can complicate system access, slow down processes, and interfere with the user experience, leading to frustration or dissatisfaction.

For example, implementing strong authentication methods, such as multi-factor authentication (MFA), can significantly enhance security but may also create additional steps for users, increasing friction during login or access. While this extra layer of protection helps mitigate security risks, it may be perceived as cumbersome or unnecessary by end-users who prioritise convenience and speed. Similarly, enforcing strict data encryption or secure communication protocols can slow down system performance, which, while necessary for protecting sensitive information, may result in delays or decreased efficiency in routine operations.
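
As an example of the MFA mechanisms discussed above, one-time passwords can be generated with nothing more than a shared key and an HMAC, following the HOTP (RFC 4226) and TOTP (RFC 6238) standards. The sketch below uses only the Python standard library; the asserts check the published RFC 4226 test vectors.

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226, dynamic truncation)."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, at=None, step: int = 30) -> str:
    """Time-based variant (RFC 6238): the counter is the current 30 s slot."""
    t = time.time() if at is None else at
    return hotp(key, int(t) // step)

# RFC 4226 test vectors for the key "12345678901234567890":
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

The user-visible friction described above comes from the verification side: the server must recompute the code for the current (and usually the adjacent) time slot, so a device with a drifting clock forces the user to retry.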

Furthermore, security mechanisms often introduce complexities that make the system more difficult for users to navigate. For instance, complex password policies, regular password changes, or strict access control rules can lead to confusion or errors, especially for non-technical users. The more stringent the security requirements, the more likely users may struggle to comply or bypass security measures in favour of convenience. In some cases, this can create a dangerous false sense of security or undermine the protections the security measures are designed to enforce.

Moreover, certain security features may conflict with specific functionalities that users require for their tasks, making them difficult or impossible to implement in specific systems; for example, ensuring that data remains secure during transmission often involves limiting access to specific ports or protocols, which could impact the ability to use certain third-party services or applications. Similarly, achieving perfect data privacy may necessitate restricting the sharing of information, which can limit collaboration or slow down the exchange of essential data.

The challenge lies in finding a compromise where security mechanisms are robust enough to protect against malicious threats but are also sufficiently flexible to avoid hindering user workflows, system functionality, and overall satisfaction. Striking this balance requires careful consideration of the needs of both users and security administrators and constant reassessment as technologies and threats evolve. To achieve this, designers must work to develop security solutions that are both effective and as seamless as possible, protecting without significantly disrupting the user experience. Practical user training and clear communication about the importance of security can also help mitigate dissatisfaction by fostering an understanding of why these measures are necessary. Ultimately, the goal should be creating an information system that delivers a secure environment and a positive, user-centric experience.

Users Often Take Security for Granted

A common issue in cybersecurity is that users and system managers often take security for granted, not fully appreciating its value until a security breach or failure occurs. This tendency arises from a natural human inclination to assume that systems are secure unless proven otherwise. Users are less likely to prioritise security when everything functions smoothly, viewing it as an invisible or abstract concept that doesn't immediately impact their day-to-day experience. This attitude can lead to a lack of awareness of the risks they face or of the importance of investing in strong security measures to prevent them.

Many users, especially those looking for cost-effective solutions, are primarily concerned with acquiring devices or services that fulfil their functional needs—a smartphone, a laptop, or an online service. Security often takes a backseat to factors like price, convenience, and performance. In pursuing low-cost options, users may ignore or undervalue security features, opting for devices or platforms that lack robust protections, such as outdated software, weak encryption, or limited user controls. While these devices or services may meet the immediate functional demands, they may also come with hidden security vulnerabilities that expose users to cyber threats, such as data breaches, identity theft, or malware infections.

Additionally, system managers or administrators may sometimes adopt a similar mindset, focusing on operational efficiency, functionality, and cost management while overlooking the importance of implementing comprehensive security measures. Security features may be treated as supplementary or burdens, delaying or limiting their integration into the system. This results in weak points in the system that are only recognised when an attack happens, and by then, the damage may already be significant.

This lack of proactive attention to security is further compounded by the false sense of safety that can arise when systems appear to be running smoothly. Without experiencing a breach, many users may underestimate the importance of security measures, considering them unnecessary or excessive. However, the absence of visible threats can be deceiving, as many security breaches happen subtly without immediate signs of compromise. Cyber threats are often sophisticated and stealthy, evolving in ways that make it difficult for the average user to identify vulnerabilities before it's too late.

To counteract this complacency, it's essential to foster a deeper understanding of the value of cybersecurity among users and system managers. Security should be presented as an ongoing investment in protecting personal and organisational assets rather than something that can be taken for granted. Education and awareness campaigns can play a crucial role in helping users recognise that robust security measures protect against visible threats and provide long-term peace of mind. By prioritising security at every stage of device and system use—whether in design, purchasing decisions, or regular maintenance—users and system managers can build a more resilient, secure environment less vulnerable to emerging cyber risks.

Security monitoring challenges in IoT infrastructures

One of the key components of maintaining strong security is continuous monitoring, yet in today's fast-paced, often overloaded environment, this is a complex and resource-intensive task. Security is not a one-time effort or a set-it-and-forget-it process; it requires regular, and sometimes even constant, oversight to identify and respond to emerging threats. However, the demand for quick results and the drive to meet immediate business objectives often lead to neglect of long-term security monitoring efforts. In addition, many security teams are stretched thin with multiple responsibilities, making it challenging to prioritise and maintain the vigilance necessary for effective cybersecurity.

This challenge is particularly evident in the context of the Internet of Things (IoT), where security monitoring becomes even more complex. The IoT ecosystem consists of a vast and ever-growing number of connected devices, many deployed across different environments and serving particular niche purposes. One of the main difficulties in monitoring IoT devices is that some are often hidden or not directly visible to traditional security monitoring tools. For example, some IoT devices may be deployed in remote locations, embedded in larger systems, or integrated into complex networks, making it difficult for security teams to maintain a comprehensive view of all the devices in their infrastructure. These “invisible” devices are prime targets for attackers, as they can easily be overlooked during routine security assessments.

The simplicity of many IoT devices further exacerbates the monitoring challenge. These devices are often designed to be lightweight, inexpensive, and easy to use, which means they may lack advanced security features such as built-in encryption, authentication, or even the ability to alert administrators to suspicious activities. While their simplicity makes these devices attractive from a consumer standpoint—offering ease of use and low cost—it also makes them more vulnerable to attacks. Without sophisticated monitoring capabilities or secure configurations, attackers can exploit these devices to infiltrate a network, launch DDoS attacks, or compromise sensitive data.

Moreover, many IoT devices are deployed without proper oversight or follow-up, as organisations may prioritise functionality over security during procurement. This “set-and-forget” mentality means that once IoT devices are installed, they are often left unchecked for long periods, creating a window of opportunity for attackers to exploit any weaknesses. Additionally, many IoT devices may not receive regular firmware updates, leaving them vulnerable to known exploits that could have been patched if monitored and maintained.

The rapidly evolving landscape of IoT, combined with the sheer number of devices, makes it almost impossible for security teams to stay on top of every potential threat in real time. To address this challenge, organisations must adopt more robust, continuous monitoring strategies to detect anomalies across various devices, including IoT. This may involve leveraging advanced technologies such as machine learning and AI-based monitoring systems that automatically detect suspicious behaviour without constant human intervention. Additionally, IoT devices should be integrated into a broader, cohesive security framework that includes regular updates, vulnerability assessments, and comprehensive risk management practices to ensure these devices are secure and potential security gaps are identified and addressed on time.
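
A crude stand-in for the anomaly-detection approach mentioned above is a simple statistical baseline kept per device. The packet counts and the 3-sigma threshold below are illustrative assumptions, not a substitute for a real ML-based monitor, but they show the core idea: flag behaviour that deviates sharply from a device's own history.

```python
import statistics

def is_anomalous(history, value, k=3.0):
    """Flag a reading more than k standard deviations from the
    device's recent baseline (a crude stand-in for ML-based
    monitoring of per-device behaviour)."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > k * stdev

# Hypothetical per-minute outbound packet counts for one sensor.
baseline = [98, 102, 101, 97, 100, 103, 99, 100]
assert not is_anomalous(baseline, 105)   # normal jitter
assert is_anomalous(baseline, 600)       # possible DDoS participation
```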

Ultimately, as IoT grows in scale and complexity, security teams must be more proactive in implementing monitoring solutions that provide visibility and protection across all network layers. This requires advanced technological tools and a cultural shift toward security as a continuous, ongoing process rather than something that can be handled in short bursts or only when a breach occurs.

The Procedures Used to Provide Particular Services Are Often Counterintuitive

Security mechanisms are typically designed to protect systems from various threats. Still, the procedures to implement these mechanisms are often counterintuitive or not immediately apparent to users or those implementing them. In many cases, security features are complex and intricate, requiring multiple layers of protection, detailed configurations, and extensive testing. When a user or system administrator is presented with a security requirement—such as ensuring data confidentiality, integrity, or availability—it is often unclear whether such elaborate and sometimes cumbersome measures are necessary. At first glance, the measures may appear excessive or overly complicated for the task, leading some to question their utility or necessity.

The need for these complex security mechanisms becomes evident only when the various aspects of a potential threat are thoroughly examined. For example, a seemingly simple requirement, such as ensuring the secure transfer of sensitive data, may involve a series of interconnected security protocols, such as encryption, authentication, access control, and non-repudiation, often hidden from the end user. Each of these mechanisms serves a critical role in protecting the data from potential threats—such as man-in-the-middle attacks, unauthorised access, or data tampering—but this level of sophistication is not always apparent. The complexity is driven by the diverse and evolving nature of modern cyber threats, which often require multi-layered defences to be effective.

The necessity for such intricate security procedures often becomes more evident when a more in-depth understanding of the potential threats and vulnerabilities is gained. For instance, an attacker may exploit seemingly minor flaws in a system, such as weak passwords, outdated software, or unpatched security holes. These weaknesses may not be immediately apparent or seem too trivial to warrant significant attention. However, once a security audit is conducted and the full scope of potential risks is considered—ranging from insider threats to advanced persistent threats (APTs)—it becomes apparent that a more robust security approach is required to safeguard against these risks.

Moreover, the procedures designed to mitigate these threats often involve trade-offs in terms of usability and performance. For example, enforcing stringent authentication methods may slow down access times or require users to remember complex passwords, which may seem inconvenient or unnecessary unless the potential for unauthorised access is fully understood. Similarly, implementing encryption or firewalls may add processing overhead or introduce network delays, which might seem like a burden unless it is clear that these measures are essential for defending against data breaches or cyberattacks.

Security mechanisms are often complex and counterintuitive because they must account for many potential threats and adversaries, some of which may not be immediately apparent. The process of securing a system involves considering not only current risks but also future threats that may emerge as technology evolves. As such, security measures must be designed to be adaptable and resilient in the face of new and unexpected challenges. The complexity of these measures reflects the dynamic and ever-evolving nature of the cybersecurity landscape, where seemingly simple tasks often require sophisticated, multifaceted solutions to provide the necessary level of protection.

The Complexity of Cybersecurity Threats from the Emerging Field of Artificial Intelligence (AI)

As Artificial Intelligence (AI) continues to evolve and integrate into various sectors, the cybersecurity landscape is becoming increasingly complex. AI, with its advanced capabilities in machine learning, data processing, and automation, presents a double-edged sword. While it can significantly enhance security systems by improving threat detection and response times, it also opens up new avenues for sophisticated cyberattacks. The growing use of AI by malicious actors introduces a new dimension to cybersecurity threats, making traditional defence strategies less effective and increasing the difficulty of safeguarding sensitive data and systems.

One of AI's primary challenges in cybersecurity is its ability to automate and accelerate the identification and exploitation of vulnerabilities. AI-driven attacks can adapt and evolve in real-time, bypassing traditional detection systems that rely on predefined rules or patterns. For example, AI systems can use machine learning algorithms to continuously learn from the behaviour of the system they are attacking, refining their methods to evade security measures, such as firewalls or intrusion detection systems (IDS). This makes detecting AI-based attacks much harder because they can mimic normal system behaviour or use techniques previously unseen by human analysts.

Furthermore, AI's ability to process and analyse vast amounts of data makes it an ideal tool for cybercriminals to mine for weaknesses. With AI-powered tools, attackers can sift through large datasets, looking for patterns or anomalies that could indicate a vulnerability. These tools can then use that information to craft highly targeted attacks, such as spear-phishing campaigns, that are more convincing and difficult to detect. Additionally, AI can automate social engineering attacks by personalising and optimising messages based on available user data, making them more effective at deceiving individuals into divulging sensitive information or granting unauthorised access.

Another layer of complexity arises from the potential misuse of AI in creating deepfakes or synthetic media, which can be used to manipulate individuals or organisations. Deepfakes, powered by AI, can generate realistic videos, audio recordings, or images that impersonate people in positions of authority, spreading misinformation or causing reputational damage. In cybersecurity, such techniques can be employed to manipulate employees into granting access to secure systems or to convince stakeholders to make financial transactions based on false information. The ability of AI to produce high-quality, convincing fake content complicates the detection of fraud and deception, making it harder for individuals and security systems to discern legitimate communication from malicious ones.

Moreover, AI's influence in the cyber world is not limited to the attackers; it also has significant implications for the defenders. While AI can help improve security measures by automating the analysis of threats, predicting attack vectors, and enhancing decision-making, it also presents challenges for security professionals who must stay ahead of increasingly sophisticated AI-driven attacks. Security systems that rely on traditional, signature-based detection methods may struggle to keep pace with AI-driven threats' dynamic and adaptive nature. AI systems in cybersecurity must be continually updated and refined to combat new and evolving attack techniques effectively.

The use of AI in cybersecurity also raises concerns about vulnerabilities within AI systems themselves. AI algorithms, especially those based on machine learning, are not immune to exploitation. For instance, attackers can manipulate the training data used to teach AI systems, introducing biases or weaknesses that can later be exploited (a "data poisoning" attack). A related threat is the "adversarial attack," in which small, carefully crafted changes to input data cause an AI model to make incorrect predictions or classifications. Adversarial attacks pose a significant risk, particularly in systems relying on AI for decision-making, such as autonomous vehicles or critical infrastructure systems.
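The adversarial-attack idea can be illustrated with a toy sketch in Python. The linear "classifier" below and all its numbers are invented for illustration; real adversarial attacks target far larger models, but the mechanism is the same: nudge each input feature slightly in the direction that lowers the detection score.

```python
# Toy illustration of an adversarial (evasion) attack on a linear
# classifier: score(x) = w . x, input flagged "malicious" if score > 0.
# Weights, inputs, and epsilon are invented values for illustration.

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def adversarial_perturbation(w, x, epsilon):
    # Shift each feature by epsilon against the sign of its weight,
    # the direction that most quickly lowers the score (FGSM-style).
    return [xi - epsilon * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.9, -0.4, 0.7]   # classifier weights (toy values)
x = [0.5, 0.2, 0.3]    # input correctly flagged as malicious

x_adv = adversarial_perturbation(w, x, epsilon=0.4)

print(score(w, x) > 0)      # True: original input is detected
print(score(w, x_adv) > 0)  # False: slightly perturbed input evades detection
```

Although each feature moved by at most 0.4, the perturbed input crosses the decision boundary and evades detection, which is exactly why adversarial robustness matters for AI-based security systems.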

As AI continues to advance, it is clear that cybersecurity strategies will need to adapt and evolve in tandem. The complexity of AI-driven threats requires a more dynamic and multifaceted approach to defence, combining traditional security measures with AI-powered tools to detect, prevent, and respond to threats in real time. Additionally, as AI technology becomes more accessible, organisations must invest in training and resources to ensure that their cybersecurity teams can effectively navigate the complexities AI introduces in attack and defence scenarios. The convergence of AI and cybersecurity is a rapidly evolving field, and staying ahead of emerging threats will require constant vigilance, innovation, and collaboration across industries and sectors.

The Difficulty in Maintaining a Reasonable Trade-off Between Security, QoS, Cost, and Energy Consumption

One of the key challenges in modern systems design, particularly in areas like network architecture, cloud computing, and IoT, is balancing the competing demands of security, Quality of Service (QoS), cost, and energy consumption. Each of these factors plays a critical role in a system's performance and functionality, but prioritising one often comes at the expense of others. Achieving an optimal trade-off among these elements is complex and requires careful consideration of how each factor influences the overall system.

Security is a critical component in ensuring the protection of sensitive data, system integrity, and user privacy. Strong security measures—such as encryption, authentication, and access control—are essential for safeguarding systems from cyberattacks, data breaches, and unauthorised access. However, implementing high-level security mechanisms often increases system complexity and processing overhead. For example, encryption can introduce delays in data transmission, while advanced authentication methods (e.g., multi-factor authentication) can slow down access times. This can negatively impact the Quality of Service (QoS), which refers to the performance characteristics of a system, such as its responsiveness, reliability, and availability. In environments where low latency and high throughput are essential, such as real-time applications or high-performance computing, security measures that introduce delays or bottlenecks can degrade QoS.

Cost is another critical consideration, as organisations must manage the upfront and ongoing expenses associated with system development, implementation, and maintenance. Security mechanisms often involve significant costs regarding the resources required to design and deploy them and the ongoing monitoring and updates needed to keep systems secure. Similarly, ensuring high QoS may require investments in premium infrastructure, high-bandwidth networks, and redundant systems to guarantee reliability and minimise downtime. Balancing these costs with budget constraints can be difficult, mainly when investing in top-tier security or infrastructure, which can result in higher operational expenses.

Finally, energy consumption is an increasingly important factor, particularly in the context of sustainable computing and green technology initiatives. Systems requiring constant security monitoring, high-level encryption, and redundant infrastructures consume more energy, increasing operational costs and contributing to environmental concerns. Managing power usage is particularly challenging in energy-constrained environments, such as IoT devices or mobile applications. Energy-efficient security measures may not be as robust or require trade-offs regarding the level of protection they provide.

Striking a reasonable balance among these four factors requires careful optimisation and decision-making. In some cases, prioritising security can reduce system performance (QoS) or increase energy consumption, while focusing on minimising energy usage might result in security vulnerabilities. Similarly, trying to cut costs by opting for cheaper, less secure solutions can lead to higher long-term expenses if a security breach occurs.

Organisations must take a holistic approach to achieve an effective balance, considering the system's specific requirements, potential risks, and resource constraints. For example, in critical infrastructure or financial systems, security may need to take precedence over cost or energy consumption, as the consequences of a breach would be too significant to ignore. In contrast, consumer-facing applications may emphasise maintaining QoS and minimising energy consumption while adopting security measures that are adequate for the threat landscape but not as resource-intensive.

Advanced technologies like machine learning and AI can help dynamically adjust trade-offs based on real-time conditions. For example, AI-powered systems can adjust security measures based on the sensitivity of the transmitted data or the system's load, optimising security and performance. Similarly, energy-efficient algorithms and hardware can minimise power usage without sacrificing too much security or QoS.
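A dynamic trade-off policy of this kind can be sketched very simply. The profile names, thresholds, and inputs below are hypothetical placeholders, not a real API: the point is only that the chosen security level can be a function of data sensitivity and the device's energy budget rather than a fixed setting.

```python
# Hypothetical sketch: pick a security profile from data sensitivity and
# remaining battery, a crude stand-in for the adaptive, AI-driven
# policies described above. Names and thresholds are invented.

def select_security_profile(sensitivity: float, battery_level: float) -> str:
    """Both arguments are normalised to [0, 1]."""
    if sensitivity > 0.8:
        # Highly sensitive data is always strongly protected,
        # even at the cost of energy and latency.
        return "aes256-mutual-tls"
    if battery_level < 0.2:
        # Energy-constrained device: fall back to a lighter suite.
        return "aes128-psk"
    return "aes128-tls"

print(select_security_profile(0.9, 0.1))  # -> aes256-mutual-tls
print(select_security_profile(0.3, 0.1))  # -> aes128-psk
print(select_security_profile(0.3, 0.9))  # -> aes128-tls
```

Even this trivial policy shows the trade-off explicitly: the first branch sacrifices energy for security, the second sacrifices some protection to keep a depleted device alive.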

Achieving a reasonable trade-off between security, QoS, cost, and energy consumption requires a careful, context-specific approach, ongoing monitoring, and the ability to adjust strategies as system requirements and external conditions evolve.

Neglecting to Invest in Cybersecurity

Failing to allocate adequate resources to cybersecurity is a critical mistake made by many organisations, particularly smaller businesses and startups. The consequences of neglecting cybersecurity investments can be far-reaching, with potential damages affecting the organisation's immediate operations and long-term viability. In today's increasingly digital world, where sensitive data and critical infrastructure are interconnected through complex networks, cybersecurity is no longer a luxury or a secondary concern—it is an essential element of any business strategy. Ignoring or underestimating the importance of cybersecurity exposes an organisation to a wide range of threats, ranging from data breaches to ransomware attacks, each of which can result in significant financial losses, reputational damage, and legal ramifications.

One of the most immediate risks of neglecting cybersecurity is the increased vulnerability to cyberattacks. Hackers and cybercriminals continuously evolve their techniques, using sophisticated methods to exploit weaknesses in systems, networks, and applications. Organisations create a fertile ground for these attacks without adequate investment in cybersecurity measures, such as firewalls, encryption, intrusion detection systems (IDS), and multi-factor authentication (MFA). Once a system is compromised, the damage can be extensive: sensitive customer data may be stolen, intellectual property could be leaked, and systems may be crippled, leading to prolonged downtime and operational disruptions.

Beyond the immediate damage, neglecting cybersecurity can also negatively impact an organisation's reputation. In today's hyper-connected world, news of a data breach or cyberattack spreads quickly, potentially causing customers and partners to lose trust in the organisation. Consumers are increasingly concerned about the privacy and security of their personal information, and a single breach can lead to a loss of customer confidence that may take years to rebuild. Moreover, businesses that fail to protect their customers' data may also face significant legal and regulatory consequences. Laws such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) impose strict requirements on data protection, and failure to comply with these regulations due to inadequate cybersecurity measures can result in heavy fines, lawsuits, and other legal penalties.

Another key consequence of neglecting cybersecurity is the potential for operational disruptions. Cyberattacks can cause significant downtime, rendering critical business systems inoperable and halting normal operations. For example, a ransomware attack can lock organisations out of their systems, demanding a ransom payment for the decryption key. During this period, employees may be unable to access important files, emails, or customer data, and business processes may come to a standstill. This operational downtime disrupts the workflow and results in lost productivity and revenue, with some companies facing weeks or even months of recovery time.

Additionally, the cost of dealing with the aftermath of a cyberattack can be overwhelming. Organisations not investing in proactive cybersecurity measures often spend significantly more on recovery after an incident. These costs can include legal fees, public relations campaigns to mitigate reputational damage, and the implementation of new security measures to prevent future breaches. In many cases, these costs far exceed the initial investment that would have been required to establish a robust cybersecurity program.

Neglecting cybersecurity also risks an organisation missing out on potential opportunities. As businesses increasingly rely on digital technologies, clients, partners, and investors place growing emphasis on the security of an organisation's systems. Organisations that cannot demonstrate strong cybersecurity practices may be excluded from partnerships, denied contracts, or even lose investment opportunities. For example, many companies today require their suppliers and partners to meet specific cybersecurity standards before entering into business agreements. Failing to meet these standards can limit growth potential and damage business relationships.

Furthermore, cybersecurity requires ongoing attention and adaptation as technology evolves and the digital threat landscape becomes more complex. A one-time investment in security tools and protocols is no longer sufficient to protect systems. Cybercriminals constantly adapt their tactics, developing new attacks and finding innovative ways to bypass traditional defences. Therefore, cybersecurity is an ongoing effort that requires regular updates, continuous monitoring, and employee training to stay ahead of the latest threats. Neglecting to allocate resources for regular security audits, patch management, and staff education leaves an organisation vulnerable to these evolving threats.

In conclusion, neglecting to invest in cybersecurity is risky and potentially catastrophic for any organisation. The consequences of a cyberattack can be severe, ranging from financial losses and operational downtime to reputational harm and legal penalties. Organisations can protect their data, systems, and reputations from the growing threat of cybercrime by prioritising cybersecurity and investing in the right tools, processes, and expertise. Cybersecurity is not just a technical necessity but a critical business strategy that can safeguard an organisation's future and foster trust with customers, partners, and investors.

Vulnerabilities in IoT Systems

To secure IoT systems and protect data confidentiality, privacy, and integrity, it is important to understand the various vulnerabilities, or security weaknesses, of IoT systems that cybercriminals can exploit. Most IoT security vulnerabilities are found at the physical layer of the IoT reference architecture, which consists of the IoT devices. As discussed in the previous sections, IoT devices have limited computing and communication resources, making it difficult to implement strong security protocols and algorithms that satisfy the confidentiality, integrity, availability, accountability, and non-repudiation requirements of IoT data and systems. Hence, the security measures designed and implemented to secure IoT data and systems are often insufficient, leaving IoT systems vulnerable to several types of cybersecurity attacks and easier to compromise.

As IoT devices are integrated into business systems, personal devices, household systems, and critical infrastructure, they become attractive targets for cybercriminals and are exposed to constant attacks. Cybercriminals continually search for security weaknesses (vulnerabilities) in IoT devices that they can exploit to steal or damage data, disrupt the quality of service, or coordinate the devices into large-scale attacks such as DoS/DDoS attacks, or any attack aimed at compromising other systems, especially critical infrastructures.

Some Common IoT Vulnerabilities

Given the severe risk posed by security weaknesses in IoT systems to IoT services and other services in society, including the possibility of causing the loss of human lives or disrupting society, it is crucial to identify and address IoT security vulnerabilities before cybercriminals can exploit them. The proliferation of diverse IoT devices across various sectors in society with very little or no standardisation and regulation has increased IoT vulnerabilities and attack surfaces that cybercriminals can leverage to compromise the data collected using IoT devices and to compromise existing systems. Some of the IoT security vulnerabilities include the following (figure 103):

Some Common IoT Vulnerabilities
Figure 103: Some Common IoT Vulnerabilities
  • Embedded passwords on the IoT devices: To facilitate remote technical support, configuration during deployment, and troubleshooting during the operation and maintenance of IoT networks with many devices, manufacturers often embed hardcoded credentials in their devices. Because these credentials are identical across devices and rarely changed, they make it easy for cybercriminals to access and exploit IoT devices for malicious purposes.
  • Lack of authentication: Some IoT manufacturers ship devices without any authentication mechanism, making the devices vulnerable to unauthorised access by malicious attackers and violating the confidentiality, privacy, and integrity of IoT data. Attackers may also take over such devices and use them for malicious purposes. Thus, devices without any form of authentication are easy targets that can serve as an attack surface for advanced attacks on IoT systems and other critical resources.
  • Weak passwords: To make their devices easy to use, manufacturers ship devices with weak default security, such as hardcoded passwords that users cannot change, default usernames and passwords, or overly simple login procedures. Since these default credentials are weak and rarely changed, attackers routinely exploit them to gain access to the device, compromising the confidentiality and integrity of the data, and can then use the devices for further attacks.
  • Backdoors: Many IoT manufacturers create hidden access mechanisms called backdoors (e.g., built-in user-id/password pairs or open ports) to let them support the devices. Attackers often discover these backdoors and exploit them to launch attacks (e.g., botnets and other malware attacks).
  • Failure to install security patches and updates: Some IoT manufacturers do not provide a simple and effective way to install security patches and updates, making it difficult for IoT service providers to resolve security vulnerabilities before cybercriminals exploit them. Unlike traditional computer systems, which have mechanisms for the continuous installation of security updates and notification of security-relevant changes, IoT devices are simple and lack these features, making them vulnerable to cyberattacks, including unauthorised software and firmware updates. Some IoT manufacturers never release patches or updates for the software on their devices, and attackers exploit the unpatched flaws. Even when patches and updates are released, users often have difficulty installing them on the device, so most of the vulnerabilities in these devices are never patched.
  • Poorly protected network services: The wireless communication channel between the IoT device and the access point or gateway is a significant attack surface. Unencrypted communication channels are one of the network vulnerabilities resulting from unprotected network services. Because of energy, cost, and processing power constraints, many IoT manufacturers do not implement cryptographic mechanisms to secure communication, making it easier for attackers to launch man-in-the-middle attacks on IoT networks. Without protection of the communication between the IoT devices and the servers, confidential data, including authentication credentials, can be compromised and used to launch further attacks, such as DoS/DDoS attacks. There are also unnecessary services, such as unprotected ports, that cybercriminals can exploit; failure to disable unused ports, or to protect used ports with a firewall, leaves them vulnerable to cybersecurity attacks.
  • Internet exposure: Some IoT devices are connected directly to the internet without firewalls or any form of security mechanism and are likely to be attacked.
  • Unprotected interfaces: Some vulnerabilities in IoT systems are introduced by poorly secured or unprotected interfaces (e.g., web, backend APIs, cloud, and fog interfaces), which make IoT devices and other resources vulnerable to cyberattacks. Weak (and sometimes absent) authentication/authorisation and cryptographic mechanisms leave communication through these interfaces open to attack, as there is no access control over essential resources, no accountability, and no protection of data and systems from compromise.
  • Use of outdated components: Sometimes, IoT device manufacturers cannot resolve hardware or software security vulnerabilities discovered in IoT devices, forcing IoT service providers to keep using the devices without any security improvements to address the known vulnerabilities. These outdated devices with well-known security vulnerabilities become easy targets for cybercriminals to exploit, compromising and damaging IoT systems and resources.
  • Supply chain vulnerabilities: The IoT supply chain consists of manufacturers (of semiconductor chips, hardware parts, IoT devices, and software), distributors, vendors, service providers, and users. Vulnerabilities may be introduced into IoT devices at any stage of the supply chain: a piece of compromised software or hardware may be manipulated or installed to introduce security weaknesses that make IoT devices vulnerable to attack or easy to compromise. The objective of supply chain attacks may be cyberespionage (data theft or compromise) or exploiting the devices to launch sophisticated cyberattacks. Poorly designed third-party software (such as libraries, drivers, or kernels) or hardware components installed on the devices, or included in other applications or firmware, may introduce vulnerabilities that are eventually exploited to compromise the devices or use them for further attacks on infrastructures. One source of supply chain vulnerabilities is the use of third-party software and hardware components without adequately checking for, and resolving, security vulnerabilities before incorporating the components into IoT products. In some instances, IoT developers copy code from online sources into their programs simply to get the desired functionality of the device running. Another form of supply chain vulnerability is the implementation of little or no security on IoT devices by device manufacturers, or by developers when deploying the device, making them vulnerable to attacks. A significant challenge of supply chain attacks is that users are unaware of these weaknesses and of how many devices in their infrastructure, from different manufacturers, possess such vulnerabilities.
  • Outdated firmware: After IoT devices are deployed, some IoT service providers do not update the firmware or software running on the devices for a long time. Some do not update at all, leaving them with vulnerabilities that may be exploited.
  • Poor device management strategies and policies: Some IoT devices are deployed without unique identifiers that would enable tracking, monitoring, and management. As a result, some IoT nodes sit on the infrastructure without being adequately monitored and managed, so vulnerabilities cannot be identified and resolved. If the cybersecurity department is unaware of some IoT nodes' presence, it cannot protect them, leaving them vulnerable to attacks. Some IT administrators neglect IoT nodes, not giving them the same security attention as traditional computing and networking nodes and omitting them from the inventory of assets to be protected; thus, the devices are rarely updated and maintained to ensure that they cannot be compromised or exploited.
  • Poor security key management protocols: If the cryptographic keys are compromised, the IoT devices become vulnerable to man-in-the-middle attacks and other attacks that could disrupt the IoT service or compromise the IoT data.
  • Poor physical hardening of the IoT devices: IoT nodes are often deployed in outdoor or remote environments, making them physically accessible to criminals who could compromise them. A criminal could physically damage the device, extract information from it, or manipulate it so that it cannot perform its normal functions. For example, an attacker may copy the data stored in the device's memory and may even replace some components with compromised ones, giving them remote access to the devices.
  • Data management vulnerabilities: For large-scale IoT deployments with thousands, tens of thousands, or hundreds of thousands of IoT nodes, the sheer volume of IoT data collected is so huge that traditional data management systems may be unable to handle it securely. That is, the confidentiality and integrity of the data may be compromised due to data storage, processing, and retrieval vulnerabilities in data management systems, which worsen as the IoT deployment scales.
  • Lack of standardisation: Although there are many efforts to ensure proper standardisation in the IoT ecosystem, standardisation and interoperability issues remain. Designing an integrated security system to protect IoT devices from different manufacturers with diverse vulnerabilities is challenging. The diversity of IoT devices from various manufacturers makes integrating them into existing security frameworks difficult, resulting in weak IoT security, or in security being taken for granted, leaving the devices vulnerable to attacks.
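Several of the credential-related weaknesses above (embedded, default, and weak passwords) can be caught by a simple audit before deployment. The Python sketch below is purely illustrative: the credential list and the device inventory are invented, and a real audit tool would probe live devices rather than a static record.

```python
# Hypothetical pre-deployment audit: flag devices still configured with
# well-known default credentials. The credential list and inventory are
# invented for illustration; a real tool would query the devices.

KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
}

def audit_devices(inventory):
    """Return the IDs of devices whose credentials match a known default."""
    return [
        dev["id"]
        for dev in inventory
        if (dev["username"], dev["password"]) in KNOWN_DEFAULTS
    ]

inventory = [
    {"id": "cam-01", "username": "admin", "password": "admin"},
    {"id": "thermo-07", "username": "ops", "password": "Vq7#kd92!x"},
    {"id": "lock-03", "username": "root", "password": "root"},
]

print(audit_devices(inventory))  # -> ['cam-01', 'lock-03']
```

Attackers effectively run the same check from the outside, which is why botnets such as those built from IoT devices succeed with only a short list of factory credentials.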

Security Strategies to Mitigate IoT Vulnerabilities

Although IoT vulnerabilities cannot all be eliminated, there are best practices that can be adopted to ensure that IoT vulnerabilities are not easily exploited to compromise IoT data and systems. Some of the security measures and techniques that can be adopted to harden IoT security and mitigate the risk of an IoT attack resulting from the exploitation of any of these vulnerabilities include the following (figure 104):

Security Strategies to Mitigate IoT Vulnerabilities
Figure 104: Security Strategies to Mitigate IoT Vulnerabilities
  • Adoption of security-by-design principles: At every stage of the IoT system lifecycle, from design, manufacturing, deployment, operation, and maintenance to decommissioning and disposal, security control measures should be considered and incorporated to ensure that IoT data is not compromised and that devices are not exploited to conduct sophisticated attacks. In this way, every stakeholder in the IoT device supply chain is aware of the various vulnerabilities and implements appropriate measures to resolve them and ensure that they cannot be exploited to compromise the IoT devices or data. Security by design requires close collaboration between IoT designers, engineers, and cybersecurity experts to ensure that security is among the key design criteria. Before IoT devices are released to the market and deployed, a rigorous security assessment (e.g., penetration testing or vulnerability scanning) should be conducted to identify potential vulnerabilities in IoT hardware or software components and communication protocols. Any vulnerabilities found should be resolved as quickly as possible.
  • Design and enforcement of strong password policies: Devices with hardcoded or embedded passwords should not be deployed in IoT infrastructures, and rather than hardcoding passwords on IoT devices, manufacturers should be required to provide the option for users to create user names and passwords for their devices. Default user names and passwords on IoT devices, access points and gateways should be changed. The passwords should be strong enough, and simple and overused passwords should be avoided. Using new, unique, and complex passwords that follow strong password policies is essential. Effective password management policies should be implemented, making it easy to quickly and securely update and reset passwords.
  • Mandatory authentication: Every IoT device should be required to authenticate before joining the network, and those without authentication mechanisms should be rejected. This implies that every IoT device must be identifiable and can only be admitted into the network after proper authentication. If possible, multifactor (e.g., two-factor) authentication should be implemented. These measures will ensure that only authorised users and IoT devices can access IoT resources, reducing the risk of a security breach.
  • Implementing effective network security mechanisms: IoT network services and protocols should be adequately protected. Port forwarding should be disabled, and ports that are not needed should be closed. Authentication should be required to access IoT networks. Also, network security tools such as firewalls, intrusion detection systems, and intrusion prevention systems should be used to inspect the traffic coming from various sources, and malicious traffic sources should be blocked. Secure network protocols such as TLS/SSL and cryptographic protocols should be used to secure the communication channels. Network segmentation techniques should also be employed to isolate IoT networks from the rest of the infrastructure and to isolate the various IoT networks (especially those integrated with critical assets) to contain potential attacks on isolated segments and to mitigate the risk of compromising critical assets.
  • Regular update of software and firmware: Regular installation of software and firmware updates ensures that the latest security patches are applied to fix security holes or gaps, reducing the chances that existing software security vulnerabilities can be exploited. Manufacturers should make the process of installing software and firmware updates or patches as simple or straightforward as possible. Ideally, it should be automatic or require just a single click without complex installation procedures.
  • Avoid prioritising ease of use over security: Plug-and-play devices require few or no additional settings or configurations, which introduces vulnerabilities because insecure defaults can easily be exploited. Avoid plug-and-play devices and other systems that are easier to deploy and use but also easier to compromise.
  • Securing the APIs: The APIs that facilitate the communication between the IoT devices, data collection points, and user interfaces should be appropriately secured through strong authentication (e.g., OAuth for secure authentication), encryption (e.g., HTTPS to ensure that the data is encrypted), and access control mechanisms (e.g., validating every input to prevent injection attacks) [67]. Implementing these API security techniques prevents unauthorised devices and users from accessing the IoT devices and compromising the IoT systems or data.
  • Validating firmware using a secure boot mechanism: Secure boot ensures that the device runs only authorised firmware, protecting it against malicious software and firmware tampering. The device verifies the digital signature of the firmware during the boot process, preventing the execution of unauthorised or modified firmware and ensuring the device's integrity. Manufacturers should therefore incorporate mechanisms to verify the authenticity of the firmware at startup and to update the device securely, ensuring the security of the devices throughout their lifecycle [68].
  • Use of secure key management systems: Cryptographic keys should be appropriately managed. Where an asymmetric encryption scheme secures communication with servers in IoT infrastructures, a PKI and digital certificate infrastructure should be used to ensure the secure management of the keys and to maintain trust.
  • Mitigate risk from outdated components: Vulnerable devices should be updated, replaced, or removed from the network. Deploying an effective monitoring system to ensure tighter monitoring and controls to spot and resolve vulnerabilities quickly can achieve this.
  • Implement and enforce zero-trust policies: This means that all devices and users inside and outside the IoT network/infrastructure must be verified, authorised, and evaluated continuously to ensure that they are not a threat or could introduce some vulnerabilities. Over time, users or devices may be compromised and become a threat to critical resources. Thus, automated zero-trust policies are crucial and must be enforced.
  • Leverage machine learning tools: Use machine learning tools to automate security tasks such as vulnerability and attack detection and mitigation. AI tools are a practical approach to detecting vulnerabilities and attacks in IoT networks and are particularly useful in very large ones; they have already been added to security systems such as SIEM platforms to detect vulnerabilities, threats, and attacks.
  • Training of staff: Continuous training of IoT designers, developers, and engineers on best security practices will ensure that they do not design, manufacture, or deploy devices with vulnerabilities that may result from an error or carelessness in the design, manufacturing, and deployment process.
  • Continuous education of consumers: Many manufacturers neglect security features because users focus more on desired functionality, ease of use, and cost, and rarely pay serious attention to security. Users sometimes misuse the devices and fail to install updates and patches. Continued education of users could therefore be beneficial.
  • Physical protection of the devices: Appropriate measures should be taken to ensure that the device is not physically compromised, and if such an event should occur, it should be easily detected. Appropriate measures should be taken to ensure that data is not compromised and the device is not exploited for further attacks.
  • Implement cyber supply chain best practices: To reduce supply chain vulnerabilities, follow secure software development lifecycle methods, conduct a thorough review of code from internal and external sources, avoid using counterfeit hardware and software from very untrusted sources, and review the design and development processes for software and hardware from third parties. Also, check the processes for addressing vendor vulnerabilities [69].
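The firmware-validation step described above can be sketched in a few lines. This is a deliberately simplified stand-in: real secure boot verifies an asymmetric signature anchored in a hardware root of trust, whereas this sketch uses an HMAC from the Python standard library, and the key and firmware bytes are invented for illustration.

```python
# Simplified sketch of the firmware-validation idea behind secure boot.
# Real secure boot uses asymmetric signatures and a hardware root of
# trust; here an HMAC over the image stands in for the signature.

import hashlib
import hmac

DEVICE_KEY = b"example-device-key"   # would live in secure hardware

def sign_firmware(image: bytes) -> bytes:
    """Vendor side: produce a tag shipped alongside the firmware image."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def verify_firmware(image: bytes, tag: bytes) -> bool:
    """Boot-time check: refuse to run firmware whose tag does not match."""
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

firmware = b"\x7fELF...official firmware image..."
tag = sign_firmware(firmware)

print(verify_firmware(firmware, tag))            # True: authentic image boots
print(verify_firmware(firmware + b"\x00", tag))  # False: tampered image rejected
```

Note the use of a constant-time comparison (`hmac.compare_digest`) rather than `==`, which avoids leaking information through timing differences during verification.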

IoT Attack Vectors

In this section, we discuss the concept of IoT attack vectors, attack surfaces, and threat vectors to clarify the difference between these cybersecurity terms, which are often used interchangeably. We discuss some IoT attack vectors that should be considered when designing cybersecurity strategies for IoT networks and systems. We also discuss some strategies that can be used to eliminate or mitigate the risk posed by IoT attack vectors.

IoT attack vector, attack surface, and threat vector

IoT attack vectors are the various methods that cybercriminals can use to access IoT devices to launch cyberattacks on the IoT infrastructure, on other information system infrastructure of an organisation, or on the Internet as a whole. They provide a means for cybercriminals to exploit security vulnerabilities to compromise the confidentiality, integrity, and availability of sensitive data. It is essential to minimise the attack vectors to reduce the risk of a security breach, which can cost an organisation a great deal of money and damage its reputation.

The number of attack vectors keeps growing as cybercriminals develop numerous simple and sophisticated methods to exploit unresolved security vulnerabilities and zero-day vulnerabilities in computer systems and networks. Consequently, there is no single solution to mitigate the risk posed by the growing number of attack vectors in classical computer systems and networks. As the number of IoT devices connected to the Internet increases, the number of IoT-related attack vectors also increases, requiring a holistic cybersecurity strategy that handles both the traditional attack vectors (e.g., malware, viruses, email attachments, web pages, pop-ups, instant messages, text messages, social engineering, credential theft, vulnerability exploits, and insufficient protection against insider threats) and those that target IoT systems specifically (e.g., exploitation of IoT-based vulnerabilities such as weak or missing passwords, lack of firmware and software updates, and unencrypted communications).

To defend IoT networks and systems, it is crucial to understand the various ways a cybercriminal can gain unauthorised access to them. The term threat vector is often used interchangeably with attack vector. An IoT threat vector is a potential way or method that cybercriminals can use to compromise the confidentiality, integrity, or availability of IoT data and systems. As IoT networks grow and are integrated with other IT and cyber-physical systems, the complexity of managing them and the number of threat or attack vectors increase. It is therefore very challenging to eliminate all threat or attack vectors, but IoT-based cybersecurity systems are designed to eliminate them whenever possible.

An IoT attack surface is the complete set of attack vectors that cybercriminals can use to manipulate an IoT network or system to compromise data confidentiality, integrity, or availability; it is the combination of all IoT attack vectors available to cybercriminals. The more IoT attack vectors an organisation exposes by deploying IoT systems, the larger its cybersecurity attack surface, and vice versa. Therefore, organisations must minimise the number of attack vectors to minimise the attack surface.

Some IoT attack vectors

To eliminate IoT attack vectors, it is essential to understand the nature of some of them and their sources and then develop comprehensive security strategies to deal with them. This section will discuss IoT attack vectors from the perception layer to the application layer. Some of the IoT attack vectors or ways in which cybercriminals can gain illegal access to IoT networks and systems (to compromise data security or launch further attacks) include the following:

  • Compromised user or device credentials: Password compromise is one of the most common ways cybercriminals can gain unauthorised access to IoT systems. This is partly because some IoT device manufacturers ship devices with hardcoded passwords and sometimes with default passwords that are rarely changed. This gives cybercriminals easy access to IoT devices, which they use to conduct sophisticated attacks such as DDoS attacks. Password credentials to log in to IoT mobile and web applications can also be compromised by cybercriminals through data leaks, phishing scams, malware, and brute-force attacks.
  • Weak cryptographic algorithms: Implementing strong cryptographic algorithms in IoT devices is very challenging due to hardware constraints. This makes it easy for cybercriminals to access IoT data transported over wireless communication channels, and the confidentiality of sensitive data stored on IoT devices can also easily be compromised. Hence, weak cryptographic algorithms (or the absence of encryption altogether) make it attractive for cybercriminals to try to access IoT data through man-in-the-middle attacks.
  • Open communication ports: Cybercriminals can exploit unsecured and unnecessarily open ports (virtual entry points into a device that associate network traffic with a given application or process) to gain access to the device. Every unnecessarily open or unsecured port is a threat vector that cybercriminals can exploit to attack IoT devices, servers, and applications.
  • Misconfigurations: Poorly configured IoT devices, network devices, servers, computing nodes, and applications can serve as weak points that cybercriminals can exploit to attack the IoT network and systems. Thus, exploiting vulnerabilities created by misconfiguration is one way attackers can gain unauthorised access to IoT networks and systems.
  • Firmware vulnerabilities: Since IoT firmware and software are not regularly updated to patch security holes and to protect IoT devices from newly discovered security vulnerabilities, cybercriminals can exploit unresolved firmware and software vulnerabilities to gain unauthorised access to IoT devices and data. Thus, exploiting firmware and software vulnerabilities is one of the ways cybercriminals can easily compromise the security of IoT networks and systems.
  • Zero-day vulnerabilities: New security vulnerabilities (flaws in hardware or software) are discovered constantly. If a vulnerability exists for which the developer has not yet released a security patch, or the user has not installed the available update, attackers are likely to exploit it to gain unauthorised access to IoT networks and systems. A zero-day attack exploits a previously unknown vulnerability or software flaw before a security patch is released; exploiting unresolved known vulnerabilities is the closely related attack vector that cybercriminals use once a flaw becomes public but remains unpatched.
  • Cross-site scripting (XSS): This browser-based attack vector injects malicious code into a browser-based application designed for users to access IoT services. For many IoT applications, end-users access IoT services hosted on cloud computing platforms through web and mobile applications using their browsers. Cybercriminals can inject malicious code into IoT web applications, redirect users to fake websites, and trick the browser into executing malicious code that downloads malware, infecting the user's device and stealing information. Hence, since IoT services are provided to users through web-based applications, this attack vector is likely to be targeted by cybercriminals.
  • SQL injection: Much IoT data is stored in structured databases and then accessed through web and mobile applications by users and other applications. Data stored in structured databases is often managed using SQL (Structured Query Language), a programming language used to administer or interact with the database to store, access, and manipulate data. In an SQL injection attack, an attacker leverages known vulnerabilities to inject malicious SQL statements into an application, tricking the server into allowing the attacker to illegally extract, alter, or delete information. IoT applications in which sensor data is collected and stored in structured databases are likely targets for this type of attack vector.
  • Distributed Denial of Service (DDoS) attacks: This attack vector involves using bots to infect IoT devices and create a botnet (a network of bots) that can be controlled to overwhelm IoT gateways, services, data centres, and web applications with a massive amount of traffic or requests. The aim is to make these services crash, depriving users of access to them. The attacker takes over many IoT devices, creates a botnet, and redirects traffic from the compromised devices towards the target to disrupt IoT services.
  • Session hijacking: Cybercriminals can gain unauthorised access to sensitive IoT data through session hijacking. When IoT users log in to access IoT services, they are provided with a session key or cookie so that they do not need to log in again. This cookie can be hijacked by an attacker, who uses it to gain access to sensitive IoT information [70].
  • Malware infection: This attack vector involves using malicious software (malware) designed to take control of an IoT network or system. Malware may corrupt and steal data and can also be used to carry out malicious attacks on multiple IoT devices and other systems. Some examples of malware that can be used to target IoT networks and systems include ransomware (malware that can encrypt valuable IoT data or data of IoT users to deprive legitimate access to the data until a ransom is paid) and trojan (malware that can be used to create a backdoor that gives attackers unauthorised access to IoT networks and systems).
  • Phishing: This attack vector may be targeted at employees of IoT organisations or at users to compromise their login credentials. It involves social engineering strategies in which the target is contacted by email, telephone, or text message by someone posing as a legitimate colleague or institution to trick them into providing sensitive data, credentials, or personally identifiable information (PII). It is one of the most commonly used attack vectors for gaining unauthorised access to sensitive information and is also the starting point for many other attacks, such as ransomware (which often starts with phishing campaigns against its targets) and spyware (malware that can share sensitive IoT data with attackers).
  • Brute-force attack: This is another attack vector aimed at compromising the authentication credentials and encryption keys to gain unauthorised access to IoT data. It could be done using a trial-and-error method to guess the password or encryption key to gain unauthorised access to IoT networks, systems, and data. If the password and the encryption key are not strong enough, the attacker can illegally gain access to IoT devices. Using default passwords and weak encryption schemes in IoT devices makes them susceptible to these attacks.
  • Physical attacks: This type of attack vector involves the adversary's physical access to the IoT device. Suppose an attacker can physically access deployed IoT devices. In that case, it is possible to steal sensitive data, compromise the devices, and later use them to conduct attacks on IoT networks and other systems.
  • Insider attack: It is also essential to consider the fact that legitimate users or employees could decide to leak sensitive IoT data to external entities, compromising the confidentiality of the data. An insider may also delete sensitive data intentionally or unintentionally. This attack vector should be considered when designing a cybersecurity strategy for IoT networks and systems.
  • Exploitation of supply chain vulnerability: This kind of attack vector involves the exploitation of vulnerabilities present in third-party hardware and software systems. Attacks could target vulnerabilities that the hardware or software system supplier may not have discovered. Therefore, vulnerabilities present in third-party products may become entry points for attackers to gain unauthorised access to IoT networks and systems.
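To make the SQL injection vector above concrete, the following Python sketch (using the standard sqlite3 module; the table, device names, and malicious input are illustrative) contrasts an injectable query built by string concatenation with a parameterized query that neutralises the same input:

```python
import sqlite3

# A toy sensor database held in memory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device TEXT, value REAL)")
conn.execute("INSERT INTO readings VALUES ('sensor-1', 21.5)")
conn.execute("INSERT INTO readings VALUES ('sensor-2', 19.0)")

# Malicious input crafted to break out of the intended filter.
user_input = "sensor-1' OR '1'='1"

# UNSAFE: string concatenation lets the input rewrite the query,
# so the OR clause matches every row in the table.
unsafe = conn.execute(
    "SELECT * FROM readings WHERE device = '" + user_input + "'"
).fetchall()
print(len(unsafe))  # 2 -- the injection leaked all rows

# SAFE: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT * FROM readings WHERE device = ?", (user_input,)
).fetchall()
print(len(safe))    # 0 -- no device has that literal name
```

Parameterization is the standard defence: the database driver transmits the input as data, so it can never change the structure of the SQL statement.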

The attack vectors discussed above can be grouped into two categories: passive and active. Passive attack vector exploits allow attackers to gain unauthorised access to IoT networks and systems without intruding on or interfering with their operation; examples include phishing and other social engineering-based attack vectors. Active attack vector exploits, on the other hand, interfere with the operation of the IoT network and system; examples include DDoS attacks, brute-force attacks, and malware attacks.

Strategies to defend against well-known IoT attack vector exploits

To address common attack vectors, it is vital to understand the nature of the attack vector exploits, including passive and active ones. Most attack vector exploits share some common characteristics, which include the following:

  • The attackers first identify targets that they intend to go after.
  • The attackers use social engineering strategies, malware, phishing, and vulnerability scanning tools to scan the targeted victim's IoT network and other information systems to identify vulnerabilities they intend to exploit.
  • The attackers identify a set of attack vectors that they intend to exploit and then search for the tools required to carry out the attack vector exploits.
  • Attackers gain unauthorised access to IoT systems, steal sensitive data, install malware, and sometimes escalate the attack by using compromised devices to carry out further attacks to compromise other system resources.
  • The attackers try to cover their tracks to remain undetected while they steal valuable data or abuse computing and communication resources.

Identifying and deploying practical security tools and policies to deal with IoT attack vectors is essential. These security tools and policies should be designed to eliminate or reduce the risk from IoT attack vectors from the IoT perception layer to the application layer. Some of the strategies that can be used to defend IoT networks and systems against well-known IoT attack vectors include the following:

  • Create secure authentication policies: Replace default passwords with strong passwords. Encourage the use of password managers to ensure that login credentials are strong and resilient to brute-force attacks.
  • Implementation of strong energy-efficient cryptographic schemes: The IoT data stored in IoT devices, computing devices, network devices, and databases should be encrypted or transformed to a format that is unintelligible to unauthorised entities. Data should be encrypted before being transported over communication networks.
  • Secure communication ports: All communication ports should be secured, and unused ports should be closed to prevent exploitation.
  • Identify and resolve vulnerabilities: Use security monitoring tools to identify and fix vulnerabilities as quickly as possible to ensure that they are not exploited to compromise the security of the IoT network and systems. Also, install or apply security updates as soon as they are released to quickly patch security vulnerabilities that attackers may target.
  • Enforce the principle of least privilege: Grant only the necessary permissions to firmware components and processes. Users should likewise be given only the required privileges at the networking and application layers, and when a user no longer needs certain privileges, those privileges should be revoked.
  • All IoT devices in the network should be identifiable: To avoid unwanted access, every device should have a distinct identity to ensure that it can be effectively monitored and must authenticate before it can access IoT networks and systems.
  • Adoption of secure software development methods: The code should be well-tested and reviewed to ensure that security vulnerabilities can be identified and resolved. We should also ensure that the libraries used to implement the device firmware are secured and well-tested. When programming IoT devices, copying already-written code from the Internet should be minimised to ensure it does not introduce security vulnerabilities.
  • Continual monitoring of IoT devices: Maintain an up-to-date inventory of all connected devices and monitor the activities within IoT devices and other systems. Automated tools should be used to discover all connected devices and continuously scan them to identify and address vulnerabilities.
  • Regular security update and patching: Although managing and installing security updates and patching security gaps for thousands of devices can be challenging, Remote Management and Monitoring (RMM) tools can perform regular security updates and patching. This will ensure that IoT device firmware and software are always up to date.
  • Decommission unused IoT devices: Unused IoT devices should be removed from the IoT network. A device that is not in use may not be regularly updated or adequately secured, which poses a risk to the IoT network and systems. Thus, any unused IoT device, and any other hardware or software system that is no longer in use, should be removed from the IoT network.
  • Implement centralised management for IoT devices: Managing IoT devices, network traffic and data flow from a single point facilitates the detection of malicious events and swiftly addresses them. It also promotes the implementation of integrated cybersecurity systems that enforce the implementation of security controls throughout the network.
  • Isolate IoT devices from critical system resources and data: By isolating IoT devices from essential system and data resources, we ensure that even if the IoT network is compromised, the attacker cannot move laterally across the network to compromise critical system resources and networks. Segmenting the network and isolating the IoT devices from some of the organisation's networks gives the organisation more visibility and control of the network.
  • Use updated antimalware software: Ensure that antimalware software is up to date to guarantee that it can protect against the latest malware.
  • Deploy attack detection and response tools: Deploy automated attack detection and response tools that can detect and stop cyberattacks as soon as they are launched. AI and machine learning tools should be leveraged to design automated attack prevention, detection and response tools for IoT.
  • Regular and effective employee training: Employees should be well-trained to handle cybersecurity tools and detect social engineering and phishing attacks designed to trick them into leaking sensitive information.
  • Ensuring supply chain security: Third-party hardware and software tools should be well-secured to prevent the introduction of security vulnerabilities that attackers can exploit. Also, ensure that third-party software is regularly updated on time.
  • Zero-trust security approach: Apply the Zero Trust (ZT) security framework to ensure that all users, whether in or outside the organisation's network, are authenticated, authorised, and continuously validated for security configuration and posture before being granted or keeping access to IoT networks, systems, applications and data.
  • System-based security approach: The IoT security landscape is very complex and is constantly changing, requiring the integration of security tools, security policies, people, and diverse types of information and cyber-physical systems. The best way to manage the complex and dynamic interaction of complex components that constitute the IoT infrastructure is to use a system-based approach. Concepts from the growing fields of systems thinking, systems dynamics, and software engineering can be borrowed to model and design robust and secure cybersecurity systems for IoT networks and systems.
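As a concrete illustration of the "secure authentication policies" item above, the following Python sketch (standard library only; the iteration count follows common PBKDF2 guidance but should be tuned to the deployment, and the passwords shown are illustrative) stores a salted, deliberately slow password hash instead of the password itself, making brute-force and credential-theft attacks far more expensive:

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash; store (salt, digest), never the password."""
    salt = secrets.token_bytes(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("Str0ng-&-Unique!")
print(verify_password("Str0ng-&-Unique!", salt, digest))  # True
print(verify_password("admin", salt, digest))             # False
```

The per-user salt defeats precomputed (rainbow-table) attacks, and the high iteration count makes each brute-force guess costly, which is exactly what default or hardcoded IoT credentials lack.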

IoT Security Technologies

In the previous sections of this chapter, we discussed the various IoT vulnerabilities, cybersecurity attacks, and attack vectors and the various best practices to address these vulnerabilities, threats, and attack vectors. This section presents the various IoT security technologies and a general methodology for securing IoT networks and systems.

Security Technologies for Various IoT Layers

Various cybersecurity tools are deployed to design a robust and comprehensive cybersecurity system. No single cybersecurity tool can handle security issues at all the layers of the IoT reference architecture; instead, appropriate security tools can be implemented at the various layers, from the IoT perception or device layer to the application layer. IoT security can therefore be divided into the following categories (figure 105):

Security Technologies for Various IoT Layers
Figure 105: Security Technologies for Various IoT Layers
  • IoT device security,
  • IoT network security,
  • IoT fog/cloud security,
  • IoT application security.

The hardware constraints of IoT devices make it hard to deploy traditional end-node security tools like firewalls and antimalware software to secure them. It is also challenging to update and patch these devices in the way we update and install security patches on traditional end nodes. Although many efforts are being made to adapt conventional security technologies to IoT devices, there is a growing need for security technologies that address the specific security requirements of all IoT nodes at lower energy and communication cost. Some of the technologies designed to secure IoT devices include:

Lightweight Energy-efficient Encryption Algorithms

It is critical to implement lightweight cryptographic encryption algorithms, designed for efficient performance on devices with limited processing power and energy constraints, to enhance the security of data transmitted by IoT devices. Algorithms such as the Advanced Encryption Standard (AES) and other optimised, energy-efficient cryptographic schemes protect data integrity and confidentiality. (The older Data Encryption Standard (DES) is now considered insecure and should be avoided in new designs.)

Importance of Lightweight Encryption Algorithms for IoT

  1. Efficiency and Suitability: Unlike traditional computing systems, many IoT devices operate with constrained computational resources, limited memory, and reduced battery capacity. Therefore, lightweight cryptographic algorithms are essential because they provide robust encryption without overburdening device capabilities. AES-128, for example, balances security and efficiency on constrained hardware, and dedicated lightweight ciphers have been developed for even smaller devices. Such choices ensure IoT devices can encrypt data effectively without significant energy drain or processing delays.
  2. Securing Data in Transit: Encryption algorithms protect data as it is transmitted from IoT devices to central servers, cloud platforms, or other networked endpoints. By encoding the data, these algorithms prevent unauthorised interception or tampering during transmission, ensuring that sensitive information—such as health metrics, industrial sensor readings, or home security footage—remains confidential and intact.

Data Protection During Storage and Transmission

  1. Encryption of Data at Rest: Encryption algorithms extend their utility beyond data transmission and are vital for securing data at rest. Data stored in device memory, cloud databases, or on-premise servers must be encrypted to mitigate the risk of data breaches. This is especially critical for IoT applications in healthcare, finance, and smart cities, where breaches could lead to significant privacy violations or operational disruptions.
  2. Securing Communication Channels: For data in transit, encryption protocols ensure that communication channels are secure. This can include using Transport Layer Security (TLS) in combination with lightweight encryption algorithms to create a secure communication pathway. By encrypting the data packets before transmission and decrypting them at the receiving end, IoT systems can prevent man-in-the-middle (MitM) attacks and other types of eavesdropping.
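As a minimal illustration of securing the communication channel, the sketch below uses Python's standard ssl module to configure a TLS client context that enforces certificate validation, hostname checking, and a modern protocol floor. The commented connection code at the end assumes a hypothetical host and is not executed here:

```python
import ssl

# A client-side TLS context: certificates are validated against the
# system trust store, hostnames are checked, and legacy protocol
# versions (SSLv3, TLS 1.0/1.1) are refused.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# A device or gateway would then wrap its socket before sending data:
#   import socket
#   with socket.create_connection((host, 443)) as sock:
#       with context.wrap_socket(sock, server_hostname=host) as tls:
#           tls.sendall(payload)
```

Sending data only through such a wrapped socket is what prevents the man-in-the-middle and eavesdropping attacks described above.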

Firmware Integrity Verification

  1. Ensuring Authentic Firmware Updates: Maintaining the integrity of IoT device firmware is essential for preventing the deployment of malicious updates that could compromise device functionality or provide attackers with unauthorised access. Cryptographic digital signatures play a vital role in this process. Before an IoT device accepts and installs firmware updates, the device verifies the cryptographic signature attached to the update.
  2. Process of Verification: Digital signatures utilise public key cryptography to ensure authenticity. When a firmware update is created, it is signed with a private key held by the manufacturer or trusted source. The IoT device, which holds the corresponding public key, verifies the signature upon receiving the update. If the signature matches, the device confirms that the update has not been tampered with and originates from an authentic source. If the signature fails, the device rejects the update to prevent the installation of potentially harmful software.
  3. Protection Against Unauthorised Modifications: This verification process ensures that firmware updates remain secure from unauthorised modifications, safeguarding devices from potential exploitation. Attackers often attempt to inject malicious code through spoofed or altered firmware. IoT ecosystems can defend against these risks by requiring cryptographic signature verification and maintaining trust in device operation.
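The verification flow described above can be sketched as follows. One caveat: real firmware signing uses public-key algorithms (such as RSA or Ed25519), where the device holds only the public key; to keep this Python example self-contained with the standard library, an HMAC tag computed with a shared key stands in for the digital signature, and the key and firmware bytes are illustrative:

```python
import hashlib
import hmac

VENDOR_KEY = b"demo-shared-key"  # stand-in for the vendor's signing key

def sign_firmware(image: bytes, key: bytes) -> bytes:
    """Vendor side: attach an authentication tag to the firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_and_install(image: bytes, tag: bytes, key: bytes) -> str:
    """Device side: install the update only if the tag checks out."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        return "rejected: signature mismatch"
    return "installed"

firmware = b"\x7fELF...firmware v2.1..."
tag = sign_firmware(firmware, VENDOR_KEY)

print(verify_and_install(firmware, tag, VENDOR_KEY))            # installed
print(verify_and_install(firmware + b"\x00", tag, VENDOR_KEY))  # rejected
```

The logic mirrors the public-key case exactly: any single-byte change to the image invalidates the tag, so tampered or spoofed updates are rejected before they can run.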

Enhanced Security Through Layered Cryptographic Solutions

  1. Combining Encryption with Other Security Measures: While encryption is a powerful tool, comprehensive IoT security involves a layered approach that integrates encryption with other security protocols. This can include network segmentation, multifactor authentication (MFA), and intrusion detection systems (IDS). Combining encryption with these practices helps create a robust defence strategy that protects data and infrastructure from various attack vectors.
  2. Future-Proofing with Emerging Cryptographic Techniques: As IoT technology evolves, so too do the methods employed by cybercriminals. To stay ahead, organisations should look into adopting emerging cryptographic techniques like elliptic curve cryptography (ECC), which offers strong security with lower computational overhead than traditional algorithms. Such advancements ensure that IoT systems remain secure, even as processing power and attack sophistication increase.

Implementing lightweight cryptographic algorithms, such as AES-128, is fundamental for ensuring that data transmitted by IoT devices is secure. These algorithms safeguard data during storage and communication and play a critical role in verifying the integrity of firmware updates. By utilising cryptographic digital signatures, IoT systems can confirm that updates are authentic and unaltered, reinforcing the trustworthiness of the entire IoT ecosystem. For comprehensive security, integrating these cryptographic practices with other proactive measures ensures resilience against a range of cyber threats.

Secure Firmware Verification and Update Mechanisms

IoT devices' security and reliability depend heavily on their firmware, the foundational software layer that controls the hardware's functions. Because IoT devices are typically connected to the internet 24/7, they are exposed to a wide range of cybersecurity threats. Regular and secure firmware updates are critical to patch vulnerabilities, enhance functionality, and defend against new attack vectors. Without secure mechanisms for firmware verification and updates, IoT devices can become entry points for attackers to compromise network security, disrupt services, or steal sensitive data.

Common Firmware-Based Security Risks in IoT Devices

  1. Weak or No Encryption: Many IoT devices have firmware that lacks sufficient encryption protocols. This oversight leaves the device vulnerable to eavesdropping and unauthorised access by malicious actors who can intercept unencrypted data and use it to compromise the device or network. Implementing robust encryption standards ensures that data communicated between the device and servers remains secure.
  2. Weak Authentication Measures: IoT firmware often includes hardcoded or weak credentials, which attackers can easily exploit. Such vulnerabilities provide an entry point for unauthorised users to gain control over the device. To mitigate this risk, firmware should be designed to support strong, configurable authentication methods that require users to implement unique, complex credentials.
  3. Absence of Secure Update Mechanisms: The lack of secure update procedures poses significant risks. Firmware that cannot be securely updated or patched leaves devices exposed to known vulnerabilities, allowing attackers to exploit these weaknesses to launch cyberattacks. Secure update mechanisms that involve digital signatures and integrity checks should be incorporated to ensure only authentic and authorised updates are applied.
  4. Risk of Tampering and Alteration: IoT devices without secure boot and update procedures are highly susceptible to tampering. Attackers can modify or replace firmware with malicious code, enabling them to control the device or create persistent backdoors. Implementing secure boot processes ensures that the device only loads firmware that has been verified and authenticated, preventing unauthorised code from executing during start-up.
  5. Threats from Poor Development Practices: Insufficient security measures during the firmware development phase can result in built-in vulnerabilities that attackers can exploit. Poor coding practices or the introduction of security flaws by malicious insiders increase the risk of compromised firmware. Ensuring robust security protocols during development, such as code reviews, automated security testing, and secure development lifecycles, minimises these risks.

Best Practices for Secure Firmware Verification and Updates

  1. Secure Boot Processes: A secure boot process protects IoT devices from running unauthorised or malicious firmware during start-up. This process involves cryptographic verification, where the manufacturer digitally signs the device's firmware. The device's hardware checks this signature before loading the firmware, ensuring that only firmware verified by the manufacturer is allowed to run. This step prevents tampering, unauthorised modifications, and malware injection attacks.
  2. Digital Signatures for Verification: Digital signatures provide an additional security layer by authenticating the source and integrity of firmware updates. Public-key cryptography ensures that the firmware is not altered in transit and comes from a trusted source. Any update that fails the signature verification is rejected, safeguarding the device from potentially harmful code.
  3. Secure Over-the-Air (OTA) Update Mechanisms: An over-the-air (OTA) update is a method used to remotely update the software or firmware of an IoT device without the need for physical intervention. OTA updates allow manufacturers and network administrators to efficiently distribute patches, feature enhancements, security fixes, and bug resolutions to IoT devices connected over a network. This remote update capability is crucial for maintaining device performance, addressing emerging vulnerabilities, and ensuring that devices operate with the latest security protocols, while reducing the downtime and logistical challenges of manual updates. To be secure, OTA updates should include encrypted data transmission, authentication protocols to verify the source of the update, and integrity checks to confirm that the update has not been tampered with during transit. Proper implementation of OTA mechanisms enhances the functionality and security of IoT devices and strengthens the overall resilience of the IoT ecosystem.
  4. Integrity Checks and Fail-Safe Mechanisms: Incorporating integrity checks during the update process helps ensure that firmware has not been altered or corrupted. Devices should be equipped with rollback mechanisms that revert to a known safe state if an update fails validation or disrupts functionality. This ensures continuous operation and protects against accidental or malicious firmware corruption.
  5. Regular Security Audits and Patch Management: Firmware should be regularly audited for vulnerabilities, even post-deployment. Manufacturers should maintain a proactive approach to identifying potential weaknesses and releasing patches promptly. IoT devices should support automated patch management to streamline the distribution and application of updates while ensuring that each update passes security checks before installation.
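
The verify-before-boot idea in steps 1–4 can be sketched as follows. This is a simplified illustration, not a production design: real devices use public-key signatures (RSA or ECDSA) as described above, whereas here an HMAC with a hypothetical device-provisioned key stands in for the signature check, and the rollback path is reduced to a return value.

```python
import hashlib
import hmac

# Illustrative key; a real device would hold a manufacturer public key
# in tamper-resistant storage, not a shared secret.
DEVICE_KEY = b"factory-provisioned-secret"

def sign_firmware(image: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Manufacturer side: produce an integrity tag over the firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_and_boot(image: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> str:
    """Device side: boot only if the tag matches; otherwise fall back to
    the known-good image (the fail-safe mechanism of step 4)."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    if hmac.compare_digest(expected, tag):
        return "booting verified image"
    return "verification failed: rolling back to last known-good image"

firmware = b"\x7fELF...app-v2"   # stand-in firmware blob
good_tag = sign_firmware(firmware)
print(verify_and_boot(firmware, good_tag))            # accepted
print(verify_and_boot(firmware + b"\x00", good_tag))  # tampered -> rollback
```

Note the use of `hmac.compare_digest`, which compares tags in constant time; a naive `==` comparison could leak timing information to an attacker probing the boot check.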

The Role of Standards and Regulations

Adhering to industry standards and regulations, such as those outlined by the Internet Engineering Task Force (IETF) and the National Institute of Standards and Technology (NIST), can bolster the security of IoT firmware. These guidelines provide best practices for secure development, encryption protocols, and authentication mechanisms. Compliance with these standards helps establish user trust and aligns with global cybersecurity expectations.

Manufacturers and businesses deploying IoT devices should ensure that their firmware update processes and verification mechanisms comply with relevant security standards. This protects devices from cyberattacks, demonstrates a commitment to security, and can provide competitive advantages in industries where data protection is paramount.

Secure firmware verification and update mechanisms are indispensable for maintaining the security and integrity of IoT devices. Implementing a secure boot process that loads and executes only trusted, digitally signed firmware is essential to prevent unauthorised or tampered firmware from running. This measure protects IoT devices from malware injection attacks during start-up. Additionally, secure over-the-air (OTA) update mechanisms should be established to enable the safe delivery of patches and security updates to IoT devices, safeguarding against man-in-the-middle attacks and unauthorised modifications during the update process [71]. These strategies, combined with rigorous development practices and compliance with industry standards, create a robust security framework that supports the safe operation of IoT ecosystems.

Blockchain-based firmware updates

Regular firmware updates for IoT devices are essential to maintaining security and functionality; however, ensuring these updates' authenticity, integrity, and compatibility poses significant challenges. Leveraging blockchain technology can enhance the security and reliability of the entire update process—from generation and signing to distribution, verification, and installation. This approach greatly reduces the risk of malicious tampering, unauthorised modifications, or errors that could compromise devices or networks.

Blockchain technology facilitates transparent collaboration among multiple stakeholders, allowing them to contribute to and review firmware code while maintaining a clear, traceable record of versions and code changes. Digital signatures and cryptographic hashes can be employed to confirm the source's identity and the integrity of the updated content. Additionally, blockchain consensus mechanisms and smart contracts provide a robust framework for verifying and executing updates and recording and auditing the results. This ensures a comprehensive and secure process for firmware updates, safeguarding both devices and connected networks.
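
The traceable record of firmware versions described above can be illustrated with a minimal hash chain. This is a sketch of the underlying idea only, with illustrative field names; a real deployment would add digital signatures, consensus among stakeholders, and smart-contract logic on an actual blockchain platform.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Canonical SHA-256 hash of a release record."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_release(chain: list, version: str, firmware: bytes) -> None:
    """Record a firmware release, linking it to the previous record."""
    chain.append({
        "version": version,
        "firmware_sha256": hashlib.sha256(firmware).hexdigest(),
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    })

def chain_is_valid(chain: list) -> bool:
    """Any tampering with a past release breaks the hash links."""
    return all(cur["prev_hash"] == block_hash(prev)
               for prev, cur in zip(chain, chain[1:]))

ledger: list = []
append_release(ledger, "1.0.0", b"fw-image-1")
append_release(ledger, "1.1.0", b"fw-image-2")
print(chain_is_valid(ledger))             # True
ledger[0]["firmware_sha256"] = "f" * 64   # tamper with recorded history
print(chain_is_valid(ledger))             # False
```

A device can thus verify that a downloaded image matches the `firmware_sha256` recorded in an auditable ledger, rather than trusting the update server alone.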

Antimalware tools for IoT security

Cybercriminals are creating increasingly sophisticated malware to target the specific vulnerabilities of IoT devices. These attacks can vary in severity, from harmless pranks, such as altering the temperature on a smart thermostat, to more serious threats, like taking control of security cameras or compromising industrial control systems. IoT malware differs significantly from traditional computer viruses. These malicious programs are typically engineered to function on devices with limited processing power and memory, making detection and removal more difficult. Additionally, they can quickly propagate through networks of connected devices, forming extensive botnets capable of carrying out powerful distributed denial-of-service (DDoS) attacks.

The variety of IoT malware showcases the ingenuity of cybercriminals, who are continually devising new methods to exploit these devices—often outpacing manufacturers' ability to release timely patches for vulnerabilities [72]. It is advisable to implement comprehensive security technologies to safeguard IoT devices from malware-based threats. Deploying robust antimalware solutions, including antivirus, antispyware, anti-ransomware, and anti-trojan software, can significantly enhance the protection of IoT devices. These security measures help detect, prevent, and neutralise malicious programs before they can compromise device functionality or data integrity. Given the unique vulnerabilities and limited processing power of many IoT devices, it is crucial to choose lightweight, efficient security solutions tailored to their specific needs. Integrating these antimalware tools with real-time threat monitoring and automatic updates can further bolster the defence against rapidly evolving cyber threats.

Effective authentication management technologies such as password management systems and multifactor authentication should be adopted to ensure robust access control mechanisms for IoT data privacy and confidentiality.

Secure Credential Management: Avoid using default or hardcoded credentials in firmware, as attackers can quickly discover them and gain unauthorised access. Instead, strong authentication mechanisms, such as multifactor authentication, should be implemented to enhance security. Encourage users to change default passwords during the initial setup of the IoT device to prevent potential attacks based on known credentials.

Leveraging SNMP Monitoring for IoT Device Security

The Simple Network Management Protocol (SNMP) is essential in maintaining the security and operational integrity of IoT devices within a network. This widely adopted protocol is designed to collect data from and manage network-connected devices, ensuring they remain protected against unauthorised access and other security threats. However, to harness SNMP's capabilities effectively, organisations should utilise robust monitoring and management tools tailored for comprehensive oversight.

The Importance of SNMP Monitoring and Management: SNMP is a communication protocol that facilitates the exchange of management information between network devices and monitoring systems. It allows network administrators to oversee a range of connected devices, such as routers, switches, IoT sensors, and other hardware. The information collected through SNMP can be invaluable for identifying potential security risks, detecting performance bottlenecks, and preemptively addressing issues before they escalate.

Key Features and Capabilities of SNMP Monitoring Solutions

Centralised Monitoring Platform: SNMP monitoring solutions provide a unified platform for administrators to keep track of all network-connected devices. This centralised approach simplifies managing diverse IoT devices, enabling administrators to monitor real-time device traffic, access points, and overall activity. Such comprehensive visibility ensures that any potential security breach or abnormal behaviour can quickly be addressed.

  • Traffic and Activity Analysis: Network traffic can be analysed with SNMP tools to detect unusual patterns indicating malicious activity or unauthorised access attempts. Administrators can identify spikes in data flow, unexpected communication with external servers, or other anomalies that suggest the presence of malware or a cyberattack.
  • Hardware Performance Monitoring: Beyond security, SNMP solutions help monitor the health and performance of network devices. This includes tracking critical metrics such as CPU usage, memory load, and device uptime. By continuously assessing these parameters, administrators can detect signs of hardware failure or performance degradation, allowing for timely maintenance and minimising the risk of downtime.
  • Custom Alerts and Notifications: One of the standout features of advanced SNMP management tools is the ability to create customised alerts. Administrators can set thresholds for various performance and security indicators, such as bandwidth usage or login attempts. When these thresholds are breached, the system sends alerts, empowering the team to respond swiftly to potential issues. Customisable notifications ensure that threats are not overlooked and teams remain proactive in addressing vulnerabilities.
  • Device Discovery and Classification: High-quality SNMP management solutions, such as NinjaOne, offer automated device discovery capabilities. This means that new devices added to the network are immediately identified and logged. The system can then classify these devices based on authentication credentials, device type, and other criteria. This feature is handy for maintaining an accurate inventory of all network assets and ensuring that unknown or rogue devices are promptly flagged for review.
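
The threshold-based alerting described above can be sketched in a few lines. This is an illustration of the alerting logic only: a real deployment would poll the metric values from devices over SNMP (for example, with a library such as pysnmp), whereas here the polled readings and the threshold values are simulated.

```python
# Illustrative thresholds; in practice these map to SNMP OIDs polled
# from each device and are tuned per deployment.
THRESHOLDS = {"cpu_percent": 90, "failed_logins": 5}

def check_device(name: str, metrics: dict) -> list:
    """Return an alert string for every metric that breaches its threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = metrics.get(metric, 0)
        if value > limit:
            alerts.append(f"ALERT {name}: {metric}={value} exceeds {limit}")
    return alerts

readings = {
    "camera-01": {"cpu_percent": 97, "failed_logins": 2},
    "sensor-17": {"cpu_percent": 40, "failed_logins": 12},
}
for device, metrics in readings.items():
    for alert in check_device(device, metrics):
        print(alert)
```

In a monitoring platform, these alert strings would feed the notification system (email, ticketing, SIEM forwarding) rather than being printed.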

Enhancing IoT Security with SNMP: By integrating SNMP monitoring tools into the broader security strategy, organisations can bolster their defence mechanisms and strengthen their IoT ecosystem's resilience. Regular audits and real-time oversight provided by SNMP solutions enable better compliance with security protocols and help maintain the integrity of sensitive data transmitted through IoT devices. Additionally, integrating SNMP data with other cybersecurity tools, such as Security Information and Event Management (SIEM) systems, can provide deeper insights and enhance incident response capabilities.

Best Practices for Implementing SNMP Solutions

  • Configure Access Controls: Ensure SNMP access is restricted to trusted administrators. To safeguard the data being transmitted, it is recommended to use SNMPv3, which supports encryption and enhanced security features.
  • Regularly Update SNMP Software: Keep your SNMP management tools updated to protect against known vulnerabilities and ensure compatibility with the latest device firmware.
  • Utilise Multi-Layered Security: Combine SNMP monitoring with other security measures such as firewalls, intrusion detection systems (IDS), and endpoint protection solutions to create a multi-layered defence strategy.
  • Conduct Training and Awareness: Equip IT teams with the knowledge and training to leverage SNMP tools effectively. Understanding how to interpret SNMP data and respond to alerts is critical for maintaining network security.

Therefore, SNMP monitoring and management are vital for organisations looking to safeguard their IoT infrastructure. By implementing advanced SNMP solutions, businesses can achieve better visibility, proactive threat detection, and comprehensive control over their network, thus enhancing overall security and operational efficiency.

Network Security for IoT: Implementing Robust Encryption Protocols

Communication security between IoT devices and backend servers is fundamental to a strong network security framework. As IoT ecosystems grow in complexity and scale, protecting the integrity, confidentiality, and authenticity of data transmissions becomes increasingly critical. One of the most effective strategies for securing these interactions is implementing robust encryption protocols, such as Transport Layer Security (TLS).

The Importance of Robust Encryption in IoT Security: IoT devices often transmit sensitive data, from personal user information to industrial control signals. If intercepted or tampered with, this data can have severe consequences, including breaches, unauthorised access, and disruption of essential services. Encryption protocols act as a protective barrier, ensuring that data remains confidential and unaltered between devices and servers. Organisations can minimise the risks associated with data interception by encrypting data in transit and providing secure communication.

How TLS Enhances IoT Security

Transport Layer Security (TLS) is a widely recognised encryption protocol designed to secure data transmitted over networks. TLS establishes an encrypted connection between IoT devices and backend servers, protecting data from eavesdropping and tampering. Here's how TLS helps fortify network security in IoT ecosystems:

  • Data Encryption: TLS uses cryptographic algorithms to encrypt data before it is transmitted. This ensures that even if malicious actors intercept the communication, they cannot decipher the content without the appropriate decryption key. Encrypted data appears as a random, unreadable sequence, making it highly resistant to unauthorised access.
  • Authentication: TLS supports authentication mechanisms that verify the identities of communicating parties. This prevents man-in-the-middle (MitM) attacks, where attackers could impersonate a device or server to intercept and alter data. Mutual authentication, which can involve device and server certificates, strengthens trust within the network by confirming that data is only exchanged between verified parties.
  • Data Integrity: TLS protocols incorporate hashing functions that maintain data integrity during transmission. These functions generate a unique checksum or hash value for each data packet. Upon reaching the destination, the hash value is compared to ensure that the data has not been tampered with in transit. If discrepancies are detected, the transmission is flagged as compromised.

Implementing TLS in IoT Networks

Implementing TLS across an IoT network involves several best practices and considerations:

  • Use TLS 1.2 or Higher: It is crucial to use the latest versions of TLS (preferably TLS 1.2 or TLS 1.3) to take advantage of enhanced security features and avoid vulnerabilities in older versions. TLS 1.3, for instance, simplifies the handshake process and removes outdated algorithms, resulting in stronger security and faster connection establishment.
  • Certificate Management: The implementation of TLS relies on digital certificates issued by trusted Certificate Authorities (CAs). Proper certificate management is essential to maintain secure communications. Organisations should automate the certificate renewal process to prevent disruptions caused by expired certificates. Additionally, IoT devices must be capable of securely storing and managing certificates to protect against theft or misuse.
  • Device Compatibility and Resource Constraints: Many IoT devices are constrained by limited processing power, memory, and battery life, so it's crucial to optimise the implementation of TLS to avoid performance issues. Lightweight versions of TLS and hardware acceleration for cryptographic operations can be employed to strike a balance between security and device functionality.
  • Regular Security Updates and Patch Management: To keep TLS secure and effective, organisations must stay vigilant about applying security patches and updates. Cybercriminals are constantly developing new techniques to exploit vulnerabilities, so keeping devices and backend servers updated ensures that the encryption mechanisms remain resilient against emerging threats.
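
The recommendations above (TLS 1.2 or higher, certificate verification against trusted CAs) can be expressed concretely with Python's standard `ssl` module. This is a client-side sketch; hostnames and the socket-wrapping step are shown only in comments because no real backend is assumed.

```python
import ssl

# Build a client context with sensible defaults: the system CA store is
# loaded, certificate verification and hostname checking are enabled.
context = ssl.create_default_context()

# Refuse protocol versions older than TLS 1.2, per the guidance above.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Verification defaults are what defeat man-in-the-middle impersonation:
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# A device would then wrap its socket before talking to the backend:
#   with socket.create_connection((host, 443)) as sock:
#       with context.wrap_socket(sock, server_hostname=host) as tls:
#           tls.sendall(payload)
```

On constrained devices, the same configuration principle applies to embedded TLS stacks (e.g., mbedTLS or wolfSSL), even though the API differs.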

Complementary Security Measures

While TLS is a powerful tool for securing data in transit, it should be part of a comprehensive security strategy that includes:

  • End-to-end encryption (E2EE): Implement E2EE to secure data from when it leaves the source until it reaches the destination. This further prevents data exposure in intermediate points of the network.
  • Strong Access Controls: Implement strict access controls and multifactor authentication (MFA) for administrative roles to limit access to encryption keys and certificates.
  • Secure Configuration Practices: Ensure that all IoT devices are configured securely to prevent vulnerabilities that could undermine TLS encryption, such as weak default passwords or open ports.

Robust encryption protocols like TLS are essential for safeguarding the communication channels between IoT devices and backend servers. By encrypting data, authenticating parties, and ensuring data integrity, TLS minimises the risk of unauthorised access and data breaches. However, effective TLS implementation should complement continuous monitoring, updates, and a layered security approach to maximise protection in an increasingly interconnected world.

SIEM Systems Technologies for Integrated IoT Security

Logging and Monitoring for Comprehensive Threat Management

Security Information and Event Management (SIEM) systems play a vital role in protecting IoT ecosystems by combining logging, monitoring, and advanced data analysis to safeguard devices and networks. These technologies provide a unified platform for collecting and analysing security data, essential for maintaining a secure environment in an increasingly interconnected landscape. Below, we explain how logging and monitoring capabilities contribute to comprehensive IoT security and why they are indispensable for modern organisations.

Real-Time Monitoring and Live Tracking

  1. Continuous Monitoring for Rapid Response: SIEM systems enable real-time tracking of IoT device activity and network traffic, allowing security teams to swiftly detect and respond to incidents. Continuous monitoring ensures that any deviation from regular activity is identified promptly, helping prevent potential breaches before they escalate. This capability is crucial in an IoT ecosystem where device behaviour can vary widely, and new threats can emerge anytime.
  2. Granular Visibility: SIEM systems give organisations a detailed view of their IoT network. This includes monitoring data flows between devices, interactions with backend servers, and communications with external networks. Such visibility ensures that any irregularities, such as unexpected data transmissions or unauthorised access attempts, are flagged immediately for further investigation.

Comprehensive Log Collection and Analysis

  1. Log Aggregation from Diverse Sources: SIEM solutions collect logs from multiple sources across the IoT network, including device event logs, network traffic data, application activity, and user access records. This aggregation provides a holistic network view, making it easier to detect coordinated attacks or patterns that might otherwise go unnoticed.
  2. Anomaly Detection Through Log Analysis: SIEM systems can recognise deviations from established baselines and identify unusual behaviour indicative of security incidents by analysing logs. For example, a sudden spike in data transfer from a specific device or an influx of failed login attempts could point to a compromised device or a brute-force attack. Advanced SIEM platforms often use machine learning algorithms to enhance anomaly detection, learning from historical data to better differentiate between benign and suspicious activity.
  3. Behavioral Insights: Logs provide invaluable behavioural insights to help organisations understand typical device operations and spot deviations. These insights enable security teams to identify potentially malicious behaviour, such as IoT devices attempting to connect to unauthorised endpoints or being used as entry points for lateral movement within a network.
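
The baseline-driven anomaly detection described above can be sketched with a simple statistical rule. This is an illustration only: the 3-sigma threshold and the failed-login metric are assumptions for the example, whereas production SIEM platforms combine many signals and often use machine-learning models.

```python
import statistics

def is_anomalous(history: list, latest: int, sigmas: float = 3.0) -> bool:
    """Flag a reading that deviates from the historical baseline by more
    than `sigmas` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard a flat baseline
    return abs(latest - mean) > sigmas * stdev

# Hourly failed-login counts for one device: a normal baseline.
failed_logins = [2, 1, 3, 2, 2, 1, 3, 2]
print(is_anomalous(failed_logins, 2))   # False: within baseline
print(is_anomalous(failed_logins, 40))  # True: likely brute-force attempt
```

A SIEM would maintain such baselines per device and per metric, and route a `True` result into the alerting pipeline described below.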

Alert Mechanisms and Incident Response

  1. Automated Alerts for Faster Response Times: A key feature of SIEM systems is the implementation of automated alert mechanisms. These alerts notify administrators in real-time when potential security breaches or abnormal activities are detected. Alerts can be configured based on various criteria, such as access attempts from unrecognised IP addresses, unusual data transfers, or unauthorised changes in device configurations.
  2. Customisable Alert Thresholds: Organisations can tailor SIEM alert settings to align with their unique risk profiles and operational needs. Customisable thresholds help filter out noise and focus on high-priority alerts, ensuring that security teams can respond effectively to critical incidents without being overwhelmed by false positives.
  3. Facilitating a Coordinated Incident Response: With centralised data and real-time alerting, SIEM systems provide the tools needed to streamline the incident response process. Security teams can investigate alerts quickly using the contextual data provided by SIEM logs, enabling them to trace the source of a breach, assess its scope, and take corrective action. This coordinated approach minimises the potential damage and downtime associated with security incidents.

Benefits of Implementing SIEM in IoT Security

  1. Enhanced Threat Detection: Continuous monitoring, log analysis, and alert mechanisms enable SIEM systems to detect threats that might bypass traditional security measures. This is especially important in IoT environments where conventional antivirus solutions may not be feasible due to limited device processing power.
  2. Compliance and Reporting: Many industries are subject to regulations that require organisations to maintain comprehensive logs and audit trails. SIEM systems support compliance by automating the collection and storage of logs, providing clear evidence of security measures, and generating the reports needed for regulatory adherence. These reports can be presented to internal and external security auditors to demonstrate that industry standards for data security and privacy are being met.
  3. Scalability for Expanding IoT Networks: As IoT networks grow, SIEM systems can scale to accommodate increasing data volumes and new device types. This scalability ensures that organisations can continue to monitor their expanding IoT ecosystem without sacrificing visibility or responsiveness.
  4. Proactive Threat Hunting: Besides automated monitoring, SIEM systems empower security teams to conduct proactive threat hunting. Analysts can use the system's search and query capabilities to explore logs and uncover potential threats that might not have triggered automatic alerts, allowing for preemptive mitigation measures.
  5. Automated attack detection and response: SIEM systems make it possible to detect and respond to cybersecurity attacks automatically, reducing the damage that cyberattacks can cause. The event correlation engine that analyses the massive amounts of logs generated by IoT devices and other cybersecurity tools (e.g., intrusion detection systems, intrusion prevention systems, antimalware applications, firewalls, and honeypots) can be replaced by AI or machine learning models, facilitating the speed and accuracy of attack detection and response.

SIEM systems are integral to IoT security, providing a powerful combination of logging, real-time monitoring, and automated alerts to help organisations detect and respond to threats efficiently. By aggregating data from a wide range of sources, analysing logs for anomalies, and providing comprehensive alerts, SIEM solutions enhance an organisation's ability to maintain secure operations in an increasingly connected world. Implementing a high-quality SIEM system ensures that businesses are not merely reactive but proactive in their IoT security efforts, positioning them to handle present and future challenges confidently.

IoT security methodology: Identifying and Preventing IoT Cyber Threats

Navigating the unpredictable landscape of digital threats is challenging, but effective risk management in an IoT ecosystem is achievable. Businesses of all sizes must integrate robust security protocols into their operations, focusing on enhancing threat detection and response. Dedicated IT administrators or specialised security teams (e.g., security operations centres) should secure networks, including all IoT devices. To design and implement robust cybersecurity tools and policies for IoT networks and systems, cybersecurity analysts or teams should conduct comprehensive network and software risk assessments, implement robust defensive measures, and leverage SIEM solutions and other security monitoring tools. Some of these strategies have been discussed in [73].

Conduct Comprehensive Network and Software Risk Assessments. Practical cyber threat intelligence revolves around finding and addressing vulnerabilities within a cybersecurity framework. This process should be continuous and consist of planning, data collection, analysis, and reporting. The resulting report should be evaluated and adapted to include new findings before being incorporated into strategic decisions.

Risk assessments can be broken down into three main types:

  • Strategic Assessment: This type of assessment provides executives with insights into long-term challenges and timely warnings. It informs decision-makers about cybercriminals' intentions and capabilities in the current IoT landscape.
  • Tactical Assessment: This approach offers real-time analysis of events, activities, and reports, supporting daily operations and customer needs. It often involves data from sensors and smart meters in industrial IoT systems.
  • Operational Assessment: Tracks potential incidents based on related activities and reports, enabling proactive strategies for managing future incidents and maintaining predictive maintenance.

Implement Robust Defensive Measures. A comprehensive cybersecurity policy is essential for protecting your IoT ecosystem. This policy should incorporate a range of strategies to minimise risks. Standard defensive practices include:

  • Deploying effective antivirus and antimalware software.
  • Enabling two-factor (2FA) or multifactor authentication (MFA).
  • Keeping all software updated to patch known vulnerabilities.
  • Utilising attack surface management tools.
  • Implementing network segmentation to limit the spread of threats.
  • Adopting a zero-trust security model.
  • Providing continuous cybersecurity training and awareness programs for employees and endpoint users.

Leverage SIEM Solutions. Security Information and Event Management (SIEM) systems are crucial for real-time cybersecurity management. These solutions enhance security by integrating threat intelligence with incident response, making them an invaluable tool for analysing security operations within an IoT ecosystem.

SIEM platforms gather event data from applications, devices, and other systems within the IoT infrastructure and consolidate this data into a clear, actionable format. The system issues customisable alerts based on different threat levels. Key benefits of using SIEM solutions include:

  • Detecting vulnerabilities.
  • Identifying potential insider threats.
  • Aggregating and visualising data for improved oversight.
  • Ensuring compliance with regulations.
  • Managing and analysing logs effectively.

Strengthening IoT Security: Key Protection Strategies

To effectively defend against IoT malware, a comprehensive, multi-layered approach that integrates advanced technology and robust security practices is essential. Here are some expert-recommended best practices discussed in [74]:

  • Implement Network Segmentation: A highly effective way to contain IoT malware and traffic-based attacks (e.g., DDoS attacks) is through network segmentation. By placing IoT devices on separate network segments or VLANs, organisations can prevent malware from spreading and safeguard critical infrastructure. Segmentation also prevents compromised IoT devices from being recruited into botnets and used to launch DDoS attacks on network gateways and servers, whether in the organisation's own IT infrastructure or elsewhere. Think of it as setting up digital containment zones: an infected IoT device cannot compromise the entire network, and compromised devices cannot be used to attack the rest of the network and its systems.
  • Ensure Timely Firmware Updates and Patch Management: Many IoT attacks target known vulnerabilities that manufacturers have already patched. Late installation of security updates and patches allows attackers to exploit newly discovered vulnerabilities that have already been fixed in the latest updates by device manufacturers. Establishing a disciplined update and patch management protocol is essential to close these security loopholes. Users should treat IoT devices in the same way they treat their computers and smartphones, updating them regularly as the first line of defence against new threats.
  • Strengthen Authentication and Access Controls: Weak or default passwords are a common entry point for IoT malware. Implementing effective access control mechanisms to limit access to IoT networks, devices, servers, and applications only to authorised devices and users is essential. Using strong, unique passwords for each device and enabling two-factor authentication can significantly lower the risk of unauthorised access.
  • Deploy Network Monitoring and Anomaly Detection: Advanced network monitoring tools that detect irregular traffic or unusual behaviour from IoT devices are vital for early threat identification. Machine learning-based systems can help flag potential malware before it spreads. The advantage of machine learning-based Network Monitoring and Anomaly Detection tools is that they can detect new attacks, unlike signature-based tools.
  • Maintain a Comprehensive Device Inventory: An up-to-date inventory of all IoT devices on the network is crucial for security management. This should include device types, firmware versions, and known vulnerabilities. Every device connecting to the IoT network should be identifiable and effectively monitored and secured. Device visibility is essential because we cannot protect what we do not know exists; a complete device inventory therefore forms the backbone of any effective IoT security plan.
  • Conduct Vendor Security Assessments: Some vulnerabilities in IoT devices are introduced by the various stakeholders in the IoT device development cycle, from the hardware manufacturer to the firmware and software developers. Before introducing new IoT devices, organisations should therefore thoroughly evaluate vendors and their products, assessing their security measures, update policies, and track record in addressing vulnerabilities.
  • Promote Employee Education and Awareness: Human error is a leading cause of security incidents. Regular training on IoT security best practices can help employees recognise risks and understand their role in maintaining a secure environment. Employee training also ensures that IoT security policies are followed during the deployment and operation of IoT networks and systems.

IoT Data Storage Security

The proliferation of the Internet of Things (IoT) has revolutionised industries by enabling data collection, transmission, and analysis from billions of interconnected devices. However, this rapid adoption has also introduced significant security challenges, particularly concerning the storage and management of IoT data in databases. IoT database security protects sensitive data collected from IoT devices, ensuring its integrity, availability, and confidentiality.

This detailed overview explores the unique challenges of IoT database security, common threats, best practices, and emerging trends in securing databases for IoT ecosystems.

The typical protection stack is presented in Figure 106. It involves protection and management mechanisms at several levels.

IoT Data Storage Security Stack
Figure 106: IoT Data Storage Security Stack

Network Security:
Network security in IoT databases protects the data flow between IoT devices and their associated databases from unauthorised access and cyberattacks. This involves securing communication protocols with encryption standards such as TLS, implementing firewalls to filter traffic, and utilising virtual private networks (VPNs) for remote access. Network segmentation can isolate IoT databases from other parts of the system, reducing the risk of lateral movement during a breach. Real-time monitoring and intrusion detection systems (IDS) ensure anomalies in traffic are promptly identified and mitigated.
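
As an illustrative sketch (not tied to any particular broker or database product), the TLS hardening mentioned above can be expressed with Python's standard `ssl` module, which builds a client context that refuses legacy protocol versions and unverified certificates; the hostname in the comment is a placeholder:

```python
import ssl

def make_hardened_tls_context() -> ssl.SSLContext:
    """Client-side TLS context for device-to-database connections."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS 1.0/1.1
    ctx.check_hostname = True                     # verify the server's identity
    ctx.verify_mode = ssl.CERT_REQUIRED           # reject unverified certificates
    return ctx

# The context would then wrap the socket towards the database endpoint,
# e.g. ctx.wrap_socket(sock, server_hostname="db.example.org").
ctx = make_hardened_tls_context()
```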

Access Management:
Access management for IoT databases ensures that only authorised users, devices, and applications can access stored data. This is critical in preventing unauthorised manipulation or theft of sensitive information. Multi-factor authentication (MFA), role-based access control (RBAC), and device-specific tokens are commonly employed to regulate access. Additionally, periodic audits of access logs can reveal patterns indicative of suspicious activities, enabling proactive security measures.
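
A minimal sketch of the RBAC idea, with hypothetical role and operation names, might look as follows; real systems typically delegate this to the database's own privilege mechanism:

```python
# Minimal role-based access control (RBAC) sketch: each role maps to the
# database operations it may perform, and access is denied by default.
ROLE_PERMISSIONS = {
    "sensor":  {"insert"},                       # devices may only append readings
    "analyst": {"select"},                       # analysts may only read
    "admin":   {"select", "insert", "update", "delete"},
}

def is_allowed(role: str, operation: str) -> bool:
    """Grant access only when the role explicitly lists the operation."""
    return operation in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("sensor", "insert")
assert not is_allowed("sensor", "delete")        # least privilege in action
```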

Threat Management:
Threat management in IoT databases focuses on detecting, mitigating, and preventing risks such as malware, ransomware, or insider threats that could compromise data integrity and availability. Organisations can use advanced threat detection tools powered by machine learning to identify unusual patterns in database queries or access attempts. Automated threat response mechanisms, such as isolating compromised database nodes, further enhance protection. Regular vulnerability assessments and patch management ensure the database remains resilient against emerging threats.
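
The baseline-deviation idea behind such anomaly detection can be sketched with a simple statistical rule; the three-sigma threshold and the query-rate figures below are illustrative assumptions, and production tools use far richer machine-learning models:

```python
import statistics

# Flag a database client whose current query rate deviates from its recent
# baseline by more than `threshold` standard deviations.
def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:                       # perfectly flat baseline
        return current != mean
    return abs(current - mean) / stdev > threshold

baseline = [100, 110, 95, 105, 98, 102]  # queries per minute under normal load
assert not is_anomalous(baseline, 108)   # within normal variation
assert is_anomalous(baseline, 900)       # burst typical of data exfiltration
```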

Data Protection:
Data protection in IoT databases ensures that sensitive information remains secure throughout its lifecycle—collection, storage, processing, and deletion. Encryption techniques like AES safeguard data at rest, while TLS protects data in transit. Secure backup strategies and redundancy mechanisms help mitigate the impact of data loss or corruption. Compliance with data protection regulations, such as GDPR or CCPA, ensures that personally identifiable information (PII) from IoT devices is handled responsibly. Data masking and anonymisation techniques are often employed to enhance privacy and limit exposure in case of a breach.
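
As one hedged example of the data-masking techniques mentioned above, a keyed hash (HMAC-SHA-256 from Python's standard library) can pseudonymise a PII field so that records remain linkable for analytics while the original identifier is never stored; the key value is a placeholder:

```python
import hashlib
import hmac

# Pseudonymise a PII field with a keyed hash: equal inputs map to equal
# tokens, so records stay joinable for analytics, while the original
# identifier cannot be recovered from the stored token without the key.
MASKING_KEY = b"replace-with-a-managed-secret"   # placeholder; keep outside the DB

def mask_pii(value: str) -> str:
    return hmac.new(MASKING_KEY, value.encode(), hashlib.sha256).hexdigest()

token = mask_pii("patient-12345")
assert token == mask_pii("patient-12345")        # deterministic
assert token != mask_pii("patient-12346")        # distinct identifiers differ
```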

Importance of IoT Database Security

IoT devices generate vast amounts of data, often in real-time, encompassing sensitive information such as personal identifiers, health records, location data, and industrial metrics. Ensuring the security of databases storing this data is critical for several reasons:

  1. Data Privacy: IoT databases often contain personally identifiable information (PII), which makes them subject to privacy regulations such as GDPR, HIPAA, and CCPA.
  2. Operational Continuity: Compromised databases can disrupt IoT-dependent operations, such as industrial automation or smart city infrastructure.
  3. Threat Mitigation: Protecting IoT databases minimises risks associated with data breaches, device manipulation, and unauthorised access.
  4. Compliance Requirements: Many industries mandate strict data security standards for IoT deployments, requiring robust database security measures.

Unique Challenges in IoT Database Security

IoT database security presents distinct challenges due to the scale, diversity, and dynamic nature of IoT systems:

  1. Volume and Velocity of Data: IoT devices generate vast amounts of data at high velocity, requiring databases that can handle continuous read/write operations without compromising security. Managing security for such high-throughput environments can be complex.
  2. Diverse Data Types: IoT ecosystems often include structured, semi-structured, and unstructured data (e.g., sensor readings, video feeds, logs). Securing these varied data types requires adaptable security measures.
  3. Distributed Nature of IoT: IoT databases are often deployed in distributed environments, including cloud, edge, and hybrid setups. Ensuring consistent security across multiple locations and architectures is challenging.
  4. Device-Database Interaction: IoT devices frequently interact directly with databases via APIs, posing risks if these interfaces are not secured. Compromised devices can become entry points for attackers targeting the database.
  5. Resource Constraints: Many IoT devices have limited computational power, making it difficult to implement strong security measures at the device level. The security burden thus shifts to the database.
  6. Real-Time Data Processing: Security measures must not compromise the real-time processing and analytics capabilities essential for many IoT applications.

Common Threats to IoT Databases

IoT databases face various security threats, many of which exploit the vulnerabilities inherent in IoT systems:

  1. Unauthorised Access: Weak authentication mechanisms in IoT devices or database systems can allow attackers to gain unauthorised access to sensitive data.
  2. Data Breaches: Unsecured IoT databases are prime targets for data exfiltration, potentially exposing PII, financial data, or proprietary information.
  3. Injection Attacks: APIs and applications interacting with IoT databases are vulnerable to SQL or NoSQL injection attacks, which can manipulate or extract data.
  4. DDoS Attacks: Distributed Denial of Service (DDoS) attacks can overwhelm IoT databases, causing outages or degraded performance.
  5. Man-in-the-Middle (MITM) Attacks: If data is transmitted between IoT devices and databases without encryption, attackers can intercept and manipulate it.
  6. Malware and Ransomware: IoT databases can be infected with malware or ransomware, leading to data loss, corruption, or unauthorised encryption.
  7. Insider Threats: Privileged insiders with access to IoT databases can misuse their access, leading to data leaks or intentional sabotage.

Best Practices for Securing IoT Databases

Implementing robust security measures for IoT databases involves a multi-layered approach to protect against various threats. Key best practices include:

  1. Data Encryption: Encrypt data at rest and in transit to prevent unauthorised access. Use strong encryption algorithms (e.g., AES-256) and implement secure key management practices.
  2. Authentication and Authorisation: Enforce strong, multi-factor authentication (MFA) for database access. Implement role-based access control (RBAC) to ensure users and devices have only the necessary permissions.
  3. API Security: Secure APIs connecting IoT devices to databases using HTTPS, authentication tokens, and rate-limiting mechanisms. Regularly test APIs for vulnerabilities, such as injection attacks or improper input validation.
  4. Database Hardening: Remove unused services and features in database systems to reduce the attack surface. Change default credentials and ports to mitigate brute-force attacks.
  5. Monitoring and Logging: Enable detailed logging of database access and operations to detect and respond to suspicious activity. Use Security Information and Event Management (SIEM) tools to correlate logs and identify potential threats.
  6. Regular Updates and Patching: Keep database software and related infrastructure up to date to protect against known vulnerabilities.
  7. Secure Device-Database Communication: Use secure communication protocols (e.g., MQTT over TLS) for data exchange between IoT devices and databases. Authenticate devices before allowing them to transmit data.
  8. Segmentation and Isolation: Segment IoT networks to limit database access to authorised devices and applications. Use virtual private clouds (VPCs) or private subnets for database deployment.
  9. Backup and Disaster Recovery: Regularly back up IoT database contents and test disaster recovery plans. Store backups in secure locations, separate from the primary database.
  10. Compliance Adherence: Align database security measures with industry-specific regulations and standards, such as ISO/IEC 27001, GDPR, or HIPAA.
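
Practices 3 and 7 above both hinge on authenticating devices before accepting their data. A minimal sketch, assuming a hypothetical per-device shared-secret registry, uses HMAC signatures with constant-time comparison:

```python
import hashlib
import hmac

# Each device holds a shared secret and signs its payload; the database API
# recomputes the HMAC and compares in constant time before accepting a write.
DEVICE_SECRETS = {"sensor-42": b"per-device-secret"}     # hypothetical registry

def sign(device_id: str, payload: bytes) -> str:
    return hmac.new(DEVICE_SECRETS[device_id], payload, hashlib.sha256).hexdigest()

def verify(device_id: str, payload: bytes, signature: str) -> bool:
    secret = DEVICE_SECRETS.get(device_id)
    if secret is None:
        return False                                     # unknown devices rejected
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)      # constant-time comparison

msg = b'{"temp": 21.5}'
assert verify("sensor-42", msg, sign("sensor-42", msg))
assert not verify("sensor-42", b'{"temp": 99.9}', sign("sensor-42", msg))
```

In practice, the shared secret would be provisioned at manufacturing time and rotated periodically rather than hard-coded.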

Emerging Trends in IoT Database Security

As IoT ecosystems grow and evolve, new approaches and technologies are emerging to address database security challenges:

  1. Zero Trust Architecture: Adopting a zero-trust model ensures that all access to IoT databases is verified and validated, reducing the risk of unauthorised access.
  2. AI-Driven Security: Artificial intelligence and machine learning are increasingly used to analyse IoT database activity, detect anomalies, and predict potential threats.
  3. Edge Computing Security: Securing databases closer to IoT devices at the edge minimises latency while protecting data in decentralised environments.
  4. Blockchain for Data Integrity: Blockchain technology is being explored to secure IoT data and ensure tamper-proof records in IoT databases.
  5. Post-Quantum Cryptography: As quantum computing advances, IoT database security is adopting encryption algorithms that are resistant to quantum attacks.

IoT database security is critical to ensuring IoT ecosystems' safe and efficient operation. Organisations can protect sensitive IoT data and maintain users' trust by addressing unique challenges, understanding common threats, and implementing best practices. As IoT adoption expands, proactive security strategies and emerging technologies will be essential in safeguarding IoT databases against evolving threats.

Blockchain

This chapter delves into blockchain technology. While often associated with cryptocurrency, blockchain is a flexible framework for securely storing, sharing, and protecting data across diverse domains. The chapter explores blockchain applications beyond financial transactions, widening readers' view of the technology and potential markets.

For developers, blockchain offers tools and encryption techniques for secure, distributed data storage. In business and finance, it enables decentralized transaction tracking without central authorities. Tech enthusiasts see it as a driver of the Internet's future, while others view it as a transformative tool for decentralizing control in society and the economy.

At its core, blockchain is a secure, distributed database powered by cryptography and distributed computing. Originating from Satoshi Nakamoto's innovative design, it enables global networks of computers to maintain a shared, tamper-resistant ledger. By fostering trust through technology rather than institutions, blockchain facilitates direct, secure collaboration, paving the way for new forms of global cooperation without reliance on traditional central entities.

The following subchapter introduces the concepts and applications of blockchains:

Key Concepts of Blockchain

This chapter will explore how blockchain technology can be applied in various fields. While we will primarily use examples related to financial transaction processing, it's essential to understand that blockchain's potential is not limited to this area. This technology offers a flexible framework for implementing decentralised solutions to securely store, share, and protect data across multiple domains.

The term 'blockchain' has come to mean different things to different people. For developers, it's a set of tools and encryption techniques that make it possible to store data securely across a network of computers. In business and finance, it's seen as the technology behind digital currencies and a way to keep track of transactions without needing a central authority. For tech enthusiasts, blockchain is driving the future of the Internet. Others view it as a powerful tool that could reshape society and the economy, moving us toward a world with less centralised control.

At its core, blockchain is a new type of data structure that merges cryptography with distributed computing. Satoshi Nakamoto developed this technology by combining these elements to create a system where a network of computers works together to maintain a shared, secure database. In essence, blockchain technology can be described as a secure, distributed database.

Blockchain technology demonstrates that people anywhere in the world can trust each other and conduct business directly within large networks without needing a central authority to manage everything. This trust isn't based on big institutions but on technology—protocols, cryptography, and computer code. This shift makes it much easier for people and organisations to work together, opening up new possibilities for global collaboration without relying on traditional central institutions.

What is blockchain in simple terms?

A blockchain is a method of storing data. Data is stored in blocks, each of which is linked to the previous block.

Each block contains:

  • a list of transactions,
  • a unique ID for all the data in the block called a hash,
  • a hash of the previous block's data.

The data in a block usually consists of transactions, and each block can contain hundreds of them. For example, “person A sends 100 EUR to person B” is a transaction described by three variables: sender identification, receiver identification, and amount.

A hash generated from a transaction record is a unique combination of letters and numbers, unique to every block on the blockchain. When the data in the block changes, the hash also changes. Applying a hash to transaction data therefore rules out undetected changes to a record: the hash of the modified record will no longer equal the stored value. (For example, if we generate a hash for the record “PersonA, PersonB, 100,” the result is a unique value that changes if even one symbol of the original record changes.) Each block also contains the previous block's hash, forming a chain structure.

As a result, if a transaction in any block changes, that block's hash changes. The next block then shows a mismatch with the previous-block hash it has recorded. This makes blockchain tamper-resistant: it becomes very easy to identify when data in a block has changed. Blockchain has one more property that makes it secure. A blockchain is not stored on one computer or server, as is usually the case with a database. Instead, it is stored in a large network of computers called a peer-to-peer network.
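
The hash-chain mechanism just described can be demonstrated in a few lines of Python using SHA-256; the block layout here is a deliberate simplification of real blockchain headers:

```python
import hashlib

def block_hash(prev_hash: str, transactions: list[str]) -> str:
    """Hash the previous block's hash together with this block's transactions."""
    payload = prev_hash + "|" + "|".join(transactions)
    return hashlib.sha256(payload.encode()).hexdigest()

b1_tx = ["PersonA,PersonB,100"]
b1 = block_hash("0" * 64, b1_tx)         # genesis block: no real predecessor
b2_tx = ["PersonB,PersonC,40"]
b2 = block_hash(b1, b2_tx)               # block 2 records block 1's hash

# Tampering with block 1 changes its hash, so block 2's stored link breaks.
tampered_b1 = block_hash("0" * 64, ["PersonA,PersonB,999"])
assert tampered_b1 != b1
assert block_hash(tampered_b1, b2_tx) != b2
```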

A peer-to-peer network is one in which every computer plays both server and client roles. Such networks usually have no centralised server; that role is shared across the nodes. This structure allows the network to remain operational with any number and combination of available nodes.

Every time a new block of transactions is added to the network, all network members or nodes must verify whether all transactions in the block are valid. If all nodes in the network agree that the transactions in the block are correct, the new block will be added to every node's blockchain.

This process is called consensus. Hence, any attacker who tries to tamper with the data on the blockchain must tamper with the data in most of the computers in the peer-to-peer network.
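
A toy illustration of the consensus idea: a block is accepted only when a majority of the nodes judge it valid. Real consensus protocols (Proof of Work, Proof of Stake) are far more elaborate than this majority count:

```python
def reach_consensus(votes: list[bool]) -> bool:
    """Accept a block only when more than half of the nodes vote 'valid'."""
    return sum(votes) > len(votes) / 2

assert reach_consensus([True, True, True, False])        # 3 of 4 agree: accepted
assert not reach_consensus([True, False, False, False])  # 1 of 4 agree: rejected
```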

Blockchain Network Structures and Technologies

Transactions

Blockchain technology uses two main types of cryptographic keys to secure transactions and data: public keys and private keys. These keys work together to protect the integrity of the blockchain, enabling secure exchanges of digital records and protecting user identities. Consider the example of a mailbox: the public key is like your email address, which everyone knows and can use to send you messages; the private key is like the password to that mailbox, which only you own and which lets only you read the messages inside.

A public key is a cryptographic code that others share and use to interact with your blockchain account. It's generated from your private key using a specific mathematical process. Public keys are used to verify digital signatures and to encrypt data that only the private key can decrypt. This ensures that messages or transactions are intended for the correct recipient.

A private key is a secret cryptographic code that grants access to your blockchain records. It must be kept confidential because anyone accessing the private key can control the records associated with the corresponding public key. This key is used to authorise transactions on the blockchain. When it is necessary to transfer information (make a transaction), you use your private key to create a digital signature that proves you are the owner of those transactions.

Public and private keys work together to secure blockchain operations:

  • Encryption and Decryption: Only the corresponding private key can decrypt data when it is encrypted using a public key. This mechanism ensures that even if the data is intercepted, it cannot be read without the private key.
  • Digital Signatures: When a transaction is signed with a private key, the signature can be verified by others using the public key. This verification process confirms that the transaction is authentic and has not been tampered with.
  • Secure Transactions: Blockchain transactions rely on the interplay between public and private keys. The public key directs the transaction to the correct recipient, while the private key authorises the movement of transactions.
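
To make the sign-and-verify interplay concrete, the following toy uses “textbook RSA” with deliberately tiny, insecure parameters; real blockchains rely on elliptic-curve signatures (e.g., ECDSA) implemented in vetted cryptographic libraries:

```python
import hashlib

# Toy "textbook RSA" purely to show how a private key signs a transaction
# and the matching public key verifies it. Never use such small numbers.
p, q = 61, 53
n = p * q        # public modulus (3233)
e = 17           # public exponent, shared with everyone
d = 2753         # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

def digest(message: bytes) -> int:
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(digest(message), d, n)      # only the private-key holder can do this

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest(message)   # anyone with (n, e) can check

tx = b"PersonA,PersonB,100"
sig = sign(tx)
assert verify(tx, sig)                           # authentic and untampered
assert not verify(b"PersonA,PersonB,999", sig)   # altered transaction fails
```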

Categories of blockchain

There are three categories of blockchain:

Public blockchains: anyone can access the database, store a copy, and make changes subject to consensus. Bitcoin is a classic public blockchain. The key characteristic of public blockchains is that they are entirely decentralised. The network is open to any new participants, and all participants, having equal rights, can be involved in validating blocks and accessing the data they contain.

Public blockchains process transactions more slowly because they are decentralised: every node must agree on each transaction. This requires time-consuming consensus methods, such as Proof of Work, that prioritise security over speed.

Private blockchains (sometimes referred to as managed blockchains) are closed networks accessible only to authorised or select verified users. They are often owned by companies or organisations which use them to manage sensitive data and internal information.

A private blockchain is very similar to an existing database regarding access restrictions but is implemented with blockchain technology. As a result, such networks do not follow the principle of decentralisation.

Since a private blockchain is accessible only to certain people, there is no need for mining (validating) blocks. As a result, such networks are faster than the other types because they avoid the overhead of mining and network-wide consensus.

Hybrid or consortium blockchains are permission-based blockchains, but in contrast to private blockchains, control is exercised by a group of organisations rather than a single coordinator. Such blockchains are more restrictive than public ones but less restrictive than private ones; for this reason, they are also known as hybrid blockchains. New nodes are accepted based on consensus within the consortium, and blocks are validated according to rules predefined by the consortium. Access rights can be public or limited to certain nodes, and user rights may differ from user to user. Hybrid blockchains are thus partly decentralised.

Blockchain type selection

When choosing the right type of blockchain for a project, it's important to consider how it will be used, who will use it, and how it needs to perform. There are three main types of blockchains, each suited for different situations:

Private Blockchain:

  • When to Use: A private blockchain is the best option if the blockchain is to be used only within a single organisation by a specific group of people.
  • Advantages: It gives the organisation more control over who can join and see the data. It's suitable for internal processes like keeping track of company records or managing internal operations.
  • Performance: Since only a few trusted users are involved, the system can run faster and more efficiently because it doesn't need complex methods to agree on things.
  • Examples: Hyperledger Fabric, Corda.

Consortium Blockchain:

  • When to Use: A consortium blockchain is the right choice if the blockchain will be shared by a group of companies or organisations working together.
  • Advantages: It allows several organisations to work together while controlling who can access the blockchain. This is great for industries where businesses need to collaborate and share data securely.
  • Performance: Since only trusted groups are involved, it works faster and more efficiently than a public blockchain.
  • Examples: R3, Quorum.

Public Blockchain:

  • When to Use: A public blockchain is the best fit if the goal is to create a completely open and decentralised system that anyone can join, such as for cryptocurrencies.
  • Advantages: It allows anyone to participate and offers complete transparency. This is perfect for digital currencies, where trust needs to be spread across everyone using them.
  • Performance: Public blockchains can be slower and use more energy because they require complex processes to ensure everyone agrees. However, they are highly secure and trustworthy.
  • Examples: Bitcoin, Ethereum.

To summarise – If, in your project, the blockchain is only for internal use, go with a private blockchain. Choose a consortium blockchain if it's for a group of related businesses. And if it needs to be open to everyone, a public blockchain is the way to go.

Second Generation Applications

While first-generation blockchain applications, such as Bitcoin, primarily focused on decentralised digital currencies, second-generation blockchain applications introduced more sophisticated functionalities. These advancements allowed for broader use cases beyond simple peer-to-peer transactions, laying the groundwork for smart contracts, decentralised applications (dApps), and improved scalability. Enhanced programmability, consensus mechanisms, and adaptability to various industries often characterise second-generation blockchains.

Key Features of Second-Generation Blockchain Applications

Smart Contracts

One of the innovations of second-generation blockchain applications is the introduction of smart contracts. Initially pioneered by Ethereum, smart contracts are self-executing agreements where the terms of the contract are written directly into code. Once predetermined conditions are met, the contract is automatically executed. This eliminates the need for intermediaries and significantly reduces transaction costs and delays.

Smart contracts have diverse applications, including financial agreements, supply chain automation, real estate, insurance, and beyond. They have enabled decentralised finance (DeFi) platforms to flourish by providing services like lending, borrowing, trading, and liquidity provision in a trustless, decentralised manner.
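
Smart contracts are normally written in dedicated languages such as Solidity; purely to illustrate the “self-executing agreement” idea described above, the following Python toy releases a payment automatically once its encoded condition is met:

```python
# Toy escrow "contract": the payout rule is fixed in code at creation time and
# executes automatically once the delivery condition is reported as met.
class Escrow:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.paid_to = None

    def confirm_delivery(self) -> None:
        self.delivered = True
        self._execute()                  # self-executing: no intermediary decides

    def _execute(self) -> None:
        if self.delivered and self.paid_to is None:
            self.paid_to = self.seller   # terms enforced exactly as coded

deal = Escrow("alice", "bob", 100)
assert deal.paid_to is None              # nothing happens before the condition
deal.confirm_delivery()
assert deal.paid_to == "bob"             # payment released automatically
```

On a real chain, the contract state would live on the ledger and the condition would be triggered by a transaction or an oracle rather than a method call.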

Decentralised Applications (dApps)

Second-generation blockchains also serve as platforms for decentralised applications, or dApps, which are applications that run on a blockchain instead of centralised servers. Ethereum, again, was the first platform to popularise the use of dApps by providing a robust infrastructure for developers to build decentralised applications with the Ethereum Virtual Machine (EVM).

dApps are transparent, autonomous, and can operate without a central authority. Their decentralised nature means they are less vulnerable to censorship and hacking, as they run on a distributed network of nodes rather than a single point of failure. This has led to the creation of various decentralised services, including decentralised exchanges (DEXs), prediction markets, gaming platforms, and more.

Programmability and Turing-Completeness

Unlike Bitcoin, which was specifically designed for financial transactions, second-generation blockchains like Ethereum introduced Turing-completeness. This means the blockchain can process any computational logic and execute any program, given enough resources. This allows developers to create complex and sophisticated blockchain-based applications that can address various problems.

Other platforms that focus on programmability include EOS, Tezos, Tron, and Solana. All of these allow for the deployment of smart contracts and dApps. These platforms differ from first-generation blockchains by being application-oriented rather than transaction-oriented.

Interoperability

One of the challenges addressed by second-generation blockchains is the need for interoperability between different blockchain networks. Many blockchain applications work in silos, but with the growth of DeFi and dApps, there has been a demand for different blockchain systems to communicate with each other. Interoperability solutions aim to enable blockchains to transfer data, tokens, and assets between them seamlessly.

Projects like Polkadot and Cosmos have focused on creating interoperable blockchain ecosystems. These networks use relay chains and hubs to connect different blockchains, facilitating cross-chain transactions and enabling various blockchain networks to work together. Interoperability helps improve liquidity, expands market reach, and enhances the overall utility of blockchain applications.

Decentralised Finance (DeFi)

One of the most transformative developments of second-generation blockchain applications is Decentralised Finance (DeFi). DeFi refers to a collection of financial services and platforms built on blockchain technology that aims to recreate traditional financial systems such as banks, exchanges, and lending platforms in a decentralised and permissionless way.

DeFi applications leverage smart contracts to create financial services like decentralised lending and borrowing platforms (e.g., Aave, Compound), decentralised exchanges (DEXs) (e.g., Uniswap, Sushiswap), and yield farming platforms. These services allow users to borrow, lend, trade, and earn interest on digital assets without relying on centralised entities. The global DeFi market has exploded in recent years, with billions of dollars locked in DeFi protocols, transforming how people access and manage financial services.

Governance and Decentralised Autonomous Organizations (DAOs)

Second-generation blockchain applications have introduced new models for decentralised governance, most notably in the form of Decentralised Autonomous Organizations (DAOs). DAOs are blockchain-based entities governed by a set of rules encoded in smart contracts. Token holders typically have voting rights and can collectively decide the organisation's direction, including funding, development, and protocol changes.

DAOs aim to provide a transparent, decentralised governance model, eliminating the need for traditional hierarchical structures. Many DeFi projects and blockchain ecosystems have adopted the DAO model for decision-making processes. For instance, MakerDAO is a popular DAO that governs the Maker Protocol, which allows users to generate the Dai stablecoin.
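
The token-weighted voting at the heart of many DAOs can be sketched as follows; the balances and the simple-majority rule are illustrative assumptions, as real DAOs encode quorums and thresholds in smart contracts:

```python
# Token-weighted voting: voting power is proportional to holdings, and a
# proposal passes when 'yes' stake exceeds half of the stake that voted.
TOKEN_BALANCES = {"alice": 400, "bob": 250, "carol": 350}    # hypothetical holders

def proposal_passes(votes: dict[str, bool]) -> bool:
    yes = sum(TOKEN_BALANCES[voter] for voter, choice in votes.items() if choice)
    total = sum(TOKEN_BALANCES[voter] for voter in votes)
    return yes > total / 2

assert proposal_passes({"alice": True, "bob": False, "carol": True})       # 750 of 1000
assert not proposal_passes({"alice": False, "bob": True, "carol": False})  # 250 of 1000
```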

Examples of Second-Generation Blockchain Platforms

Ethereum

Ethereum is the most notable second-generation blockchain platform. It is designed to go beyond cryptocurrency by providing a general-purpose framework for building decentralised applications. Ethereum's ability to execute smart contracts and support decentralised applications has made it the go-to platform for innovators in DeFi, NFTs, and beyond.

EOS

EOS is another second-generation blockchain platform known for its high scalability, faster transaction speeds, and user-friendly development tools. EOS aims to address the scalability issues faced by Ethereum by offering higher throughput and lower transaction fees, making it a popular choice for developers building high-performance dApps.

Cardano

Cardano is a second-generation blockchain platform that provides a secure and scalable infrastructure for decentralised applications and smart contracts. It uses a unique Proof of Stake (PoS) consensus mechanism called Ouroboros, designed to be more energy-efficient than Ethereum's original Proof of Work. Cardano's research-based development approach emphasises formal verification to ensure the security and correctness of its blockchain protocols.

Polkadot

Polkadot is a platform designed to enable different blockchains to work together. It introduces the concept of “parachains,” which are parallel chains that can interoperate with each other. Polkadot's interoperability aims to solve the fragmentation problem by connecting various blockchains, enabling them to exchange information and assets seamlessly.

Solana

Solana is known for its high-performance blockchain, which is capable of handling thousands of transactions per second. It uses a novel consensus mechanism called Proof of History (PoH), which enables fast block confirmation times. This makes Solana suitable for high-frequency trading, gaming, and other high-demand dApps.

Expanded Application of Blockchain

Blockchain technology has evolved far beyond its origins in cryptocurrency, finding applications across various industries. Here are some expanded applications of blockchain:

  1. Supply Chain Management: Blockchain enables real-time tracking of goods, ensuring transparency and verifying the authenticity of products. It can reduce counterfeit goods by securely recording the origin and movement of items. Smart contracts automate compliance with contractual terms in logistics and procurement, increasing trustworthiness and traceability and opening these processes to software automation for higher efficiency.
  2. Healthcare: Provides a secure system for storing and sharing patient data among healthcare providers. Traceability ensures the authenticity of pharmaceuticals by tracking their production and distribution. Additionally, blockchain technology enhances data integrity in trials, preventing tampering and providing accurate results.
  3. Government and Public Sector: Blockchain provides secure and transparent digital voting to prevent fraud and increase voter confidence. It also offers a safe and decentralised way to manage personal identities, reducing identity theft risks. The same might be applied to electronic documents and their processing workflows, making document-management actions traceable and transparent.
  4. Media and Entertainment: This ensures fair compensation for creators by tracking and enforcing intellectual property rights for digital content users. It also facilitates the introduction of micropayments for content consumption (e.g., per article or song). Since content creation and alteration are traceable, fake news recognition and author identification have potential value for general society.
  5. Agriculture and Environment: Blockchain can track the origin and processing steps of food, following its journey from farm to consumer and ensuring quality and safety standards. For farmers and food-processing companies, it provides transparency in transactions, securing fair compensation for farmers and fair prices for consumers. Because transactions are traceable, waste production and recycling operations can also be tracked, verifying environmentally friendly practices in production and distribution; this might additionally support the collection of environmental-impact taxes.

Green IoT

Green IoT (G-IoT) is the adoption of energy-efficient procedures (in hardware, software, communication, or management) and waste-reduction methods (energy harvesting and recycling of e-waste) to conserve resources and reduce the waste, including pollutants such as carbon dioxide, produced by the IoT ecosystem across the design, manufacturing, deployment, and operation of IoT systems, from IoT devices to cloud computing data centres. Green IoT is an emerging field within the IoT ecosystem aimed at raising awareness of the sustainability problems that may result from the massive deployment of IoT applications in various sectors of society (healthcare, agriculture, manufacturing, intelligent transport systems, smart cities, supply chains, smart homes, and smart energy systems) and at exploring ways to address those challenges. These challenges include the increase in energy consumption, which raises the IoT industry's carbon footprint, and the growing amount of e-waste created by discarding electronic components of IoT devices, especially IoT batteries, which need to be replaced after a few years.

Although energy-efficient strategies have been developed to minimise the energy consumption of individual IoT devices, the combined energy consumption of billions or trillions of IoT devices will be enormous. The amount of traffic generated by IoT devices is increasing exponentially, and it is predicted that by 2024, IoT traffic will constitute about 45% of total internet traffic. A rapid increase in the amount of traffic generated by billions to trillions of IoT devices and transported through the internet to cloud computing platforms will significantly increase the energy consumption of internet network infrastructures, especially with the dense deployment of 5G base stations and IoT wireless access points serving IoT devices. Data centres also consume tremendous amounts of energy to process and analyse the massive amounts of data collected by IoT devices.

Much attention is often focused on the energy consumed by IoT devices, networks, and computing platforms. However, less attention is given to the energy consumed by manufacturing and transporting IoT devices and other ICT systems used in the IoT ecosystem. The carbon footprint of the IoT industry can be traced from mining the minerals required to manufacture IoT devices, the manufacturing process, and the supply chains involved. To realise the green IoT goal, energy efficiency and sustainable practices should be designed to ensure that the mining, manufacturing and supply chains are environmentally friendly or sustainable.

The design and implementation of energy-efficient strategies may significantly reduce the energy consumption of IoT systems. However, the rapid increase in the use of IoT to address problems and to improve efficiency and productivity in other sectors of the economy will still result in a significant net increase in the energy these systems consume. Another green IoT approach is to use renewable energy sources to continuously recharge IoT batteries, reducing both the maintenance cost of replacing batteries and the amount of e-waste created by the IoT industry.

Another green IoT strategy is to reuse and recycle IoT components and resources. This significantly reduces the amount of waste produced by the IoT industry and optimises the use of the natural resources needed to manufacture IoT devices, increasing the sustainability of the industry.

An effective green IoT strategy should span the entire IoT product lifecycle, from design through production (manufacturing), deployment, operation and maintenance, to recycling. The primary goal at each stage is to reduce energy consumption, adopt sustainable resource usage (e.g., harvesting energy from renewable sources and using sustainable materials), minimise e-waste and other pollutants, and recycle resources and waste. Therefore, a shift toward Green IoT (G-IoT) emphasises the need to adopt energy-efficient practices and processes prioritising resource conservation, waste reduction, and environmental sustainability [75].

Green IoT strategies can be grouped into the following categories: green IoT design, green IoT manufacturing, green IoT applications, green IoT operation, and green IoT disposal [76].

Green IoT design: Designing IoT hardware, software, management systems, and policies with the minimisation of energy consumption, carbon footprint and environmental impact as explicit requirements. The design goals should include implementing energy-efficient strategies to reduce energy consumption and developing strategies to minimise the amount of e-waste produced by IoT systems and infrastructures. Green IoT design techniques include green hardware, green communication and networking infrastructure, green software, green architecture, energy-efficient security mechanisms, and energy harvesting.

Green IoT Operations: Deploying, operating, and managing IoT systems so as to minimise energy consumption and waste. Such strategies include switching off idle networking and computing nodes, applying radio resource optimisation mechanisms (e.g., controlling the transmission power and the modulation), energy-efficient routing mechanisms, and software energy optimisation (improving software code to be energy-efficient and using optimisation algorithms to minimise energy consumption).

Green IoT applications or use cases: Using IoT applications to reduce energy consumption (or the carbon footprint) and to conserve resources in other industries, for example, using IoT to reduce energy consumption, water consumption, and the use of chemicals (fertilisers, herbicides, fungicides, insecticides, etc.) in agriculture. IoT can reduce energy consumption, carbon footprint, waste production, and the over-utilisation of resources in various sectors of the economy, including manufacturing, energy production, mining, health care, and transportation. Therefore, the massive deployment of IoT in these sectors to address efficiency and productivity challenges should be done in a way that also addresses sustainability issues.

Green IoT waste disposal and management: Reducing the waste created from deploying and operating IoT systems. Renewable energy sources should be used to recharge IoT batteries to reduce the amount of IoT battery waste generated and dumped in landfills. Recycling IoT components and resources should be adopted and promoted to reduce the amount of e-waste generated by the IoT industries and dumped in landfills, which may increase significantly with the large-scale adoption and deployment of IoT systems in the various sectors of the economy.

Green IoT manufacturing: Energy-efficient manufacturing infrastructure for IoT hardware. With hundreds of billions or even trillions of IoT devices expected to be connected to satisfy demand across the various sectors of the evolving tech-driven economy, the carbon footprint of the factories manufacturing IoT devices will be enormous. The manufactured IoT systems themselves should also be energy efficient.

Green IoT Design

Green IoT design is a paradigm based on a holistic IoT design framework that focuses on maintaining a balanced trade-off between the functional requirements, Quality of Service (QoS), interoperability, cost, security, and sustainability within the IoT ecosystem. It emphasises the need to prioritise energy efficiency and the reduction of waste produced by manufacturing IoT devices and by deploying and operating IoT systems.

The emergence of modern technologies such as Fifth Generation (5G) mobile networks, blockchain, Artificial Intelligence (AI), and fog/cloud computing are unlocking new IoT use cases in various industries and sectors of the modern technology-driven economy or society. As a result, the number of IoT devices connected to the internet and the volume of traffic generated from IoT infrastructures will increase significantly, increasing the energy demand in the IoT ecosystem. The result is an increase in the carbon footprint and e-waste (especially from battery-powered IoT devices) from IoT-related services or the IoT ecosystem.

Green IoT design is a framework of design, production, implementation, deployment, and operation choices intended to reduce the energy consumption and waste of the IoT ecosystem. It comprises energy-efficient strategies that reduce the carbon footprint of manufacturing, deploying, and operating IoT systems (sensor devices, networking nodes, and data centres or computing devices), as well as strategies that reduce the waste produced by IoT infrastructures; these may involve hardware, software, management or policy decisions. A green IoT design framework should cover the following considerations: developing and deploying energy-efficient mechanisms, choosing appropriate energy sources, and adopting mechanisms that ensure environmental and resource sustainability.

Energy-efficient design

Energy-efficient design involves designing and deploying energy-saving mechanisms to reduce the energy consumption of IoT systems. These mechanisms include the following:

  1. Green computing: Energy-efficient strategies designed to minimise energy consumption or maximise energy efficiency, decreasing the carbon footprint of computing devices and processes in IoT infrastructures (from the devices at the perception layer to the servers at the fog and cloud computing layers).
  2. Green communication and networking: Selecting energy-efficient technologies, products, and practices designed to minimise energy consumption or maximise energy efficiency, decreasing the carbon footprint of networking and communication nodes and processes in IoT infrastructures (from the IoT access nodes through the internet core network to the cloud data centres).
  3. Green security: Design and implement energy-efficient security algorithms to minimise energy consumption or maximise energy efficiency in IoT infrastructures.
  4. Green architectures: Designing and organising IoT and other ICT architectures within the IoT infrastructure to minimise energy consumption or maximise energy efficiency.
  5. Green hardware design: Design energy-efficient hardware chips and devices (computing and networking nodes) to minimise energy consumption or maximise energy efficiency and decrease the carbon footprint from computing and networking hardware nodes in IoT infrastructures. Energy-efficient chips and hardware devices can save a lot of energy. With the increased use of AI and blockchain in IoT applications, energy efficiency at the hardware level becomes essential.
  6. Green software design: Optimising software algorithms and programs to minimise energy consumption, maximise energy efficiency, and decrease the carbon footprint from software programs running on IoT infrastructure.

The above energy-efficient or sustainable computing, security, networking, hardware, and software design strategies can significantly reduce the energy demand of large-scale IoT infrastructures deployed throughout the world. Although significant amounts of energy can be saved by applying these strategies, the rapid growth of the IoT industry may offset these gains; even so, they offer a significant benefit to the environment.

Design choices for energy sources

The type of energy sources required to power IoT infrastructures varies from the IoT cyber-physical infrastructure to the core infrastructures. Electrical and electronic devices in the IoT infrastructure can be powered with energy from:

  1. Mains power: Powering electrical and electronic systems within the IoT infrastructure with electricity from the mains supply. This method is suitable for energy-hungry devices like networking nodes and servers but not for a massive number of IoT devices, especially when the devices are mobile.
  2. Energy harvesting: Powering electrical and electronic systems within the IoT infrastructure from renewable energy sources to reduce dependence on fossil fuels and other environmentally unsustainable energy sources. The kind of renewable energy source depends on the energy demand of the networking and computing nodes: small IoT devices can be supplied by energy harvesters scaled down to produce small amounts of energy, while larger energy harvesters producing more significant amounts of energy supply power-hungry computing and networking nodes.
  3. Energy storage: The energy storage systems used in IoT infrastructure are battery energy storage systems (BESS) and super-capacitors. Most IoT devices are powered by small batteries with limited energy capacity. Due to the intermittent nature of renewable energy sources, larger energy storage systems are frequently used to store surplus harvested energy: if the energy harvested exceeds the load demand of the computing and networking systems within the IoT infrastructure, the extra energy is stored, and it is later used to supply the load when the renewable source can no longer produce sufficient energy to meet demand.
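The harvest/store/discharge cycle described above can be sketched as a simple hourly energy-balance simulation. All figures below (harvest profile, load, battery capacity) are illustrative assumptions, not measurements from any real deployment:

```python
# Toy energy-balance simulation for an energy-harvesting IoT node with
# battery storage. Surplus harvest charges the battery; deficits drain it.

BATTERY_CAPACITY_WH = 5.0   # assumed storage capacity (Wh)
LOAD_W = 0.2                # assumed constant load (W)

# Assumed hourly harvest in watts over one day (zero at night, peak at noon).
harvest_w = [0.0] * 6 + [0.1, 0.3, 0.5, 0.6, 0.6, 0.5, 0.3, 0.1] + [0.0] * 10

battery_wh = 2.5  # start half full
for harvest in harvest_w:
    battery_wh += harvest - LOAD_W              # surplus charges, deficit drains
    battery_wh = min(max(battery_wh, 0.0), BATTERY_CAPACITY_WH)

print(f"battery at end of day: {battery_wh:.2f} Wh")
```

With these numbers the day's harvest (3.0 Wh) falls short of the day's load (4.8 Wh), so the battery ends the day lower than it started; a real sizing exercise would repeat this balance over measured harvest and load profiles to choose the battery and harvester dimensions.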

Environmental sustainability mechanisms

IoT systems should be designed, implemented, and operated so as to conserve natural resources and reduce the waste and pollutants generated by the IoT industry. Energy-efficient design and the use of renewable energy sources are themselves sustainability mechanisms, as both reduce the carbon footprint of IoT infrastructures. Other environmental sustainability strategies are:

  1. Use of biodegradable materials to fabricate some components of IoT devices.
  2. Reuse of IoT components.
  3. Recycling some of the waste generated, especially e-waste (electronic parts and batteries) from the IoT industry.

Green IoT Energy-Efficient Design and Mechanisms

As IoT is adopted to address problems in the various sectors of society and the economy, the energy demand of IoT is increasing rapidly, almost following an exponential trend. As the number of IoT devices increases, the amount of traffic they create increases, raising the energy demand of the core networks that transport IoT traffic and of the data centres that analyse the massive amounts of data the devices collect. The large-scale adoption and deployment of IoT infrastructure and services across the economy will significantly increase the energy demand, from the IoT cyber-physical infrastructure (sensor and actuator devices) through the transport network infrastructure to the cloud computing data centre infrastructure. Therefore, one of the design goals of green IoT is to develop effective strategies to reduce energy consumption. These strategies should be deployed across the IoT architecture stack; that is, energy-saving strategies should be implemented across all the IoT layers, including:

  • The perception or “things” layer: This layer consists of IoT sensors that collect data and send it to computing platforms for analysis and actuators that manipulate physical systems based on feedback from data analytic platforms.
  • The network or transport layer: Consists of the network (access and internet core) infrastructure used to transport the data collected by the sensors to fog or cloud computing platforms, and to carry feedback or commands from those platforms back to the actuators that control cyber-physical systems at the perception or things layer.
  • The application layer: This layer processes (analyses) and stores the data collected by the IoT sensor devices, which is transported to the data centres through the transport layer. The computation results can be made available to users through applications or sent back to the things layer to manipulate actuators.
  • The energy and sustainability management layer: An abstract layer that spans the three layers above, as energy efficiency and sustainability management are implemented across all of them.

At each layer, various energy-efficient strategies are implemented to reduce energy consumption. Much energy is used for computation and communication at the various layers; a significant amount can be saved by deploying energy-efficient computing mechanisms (hardware and software), low-power communication and networking protocols, and energy-efficient architectures. Energy efficiency should be one of the main goals of green IoT design, manufacturing, deployment, and standardisation. The energy-saving mechanisms may vary from one layer to another, but they can be classified into the following categories (figure 107):

  • Green hardware.
  • Green communication and networking.
  • Green architectures.
  • Green software.
  • Green security.
  • Green policies.
Figure 107: Green IoT Energy-Efficient Design and Mechanisms

Green IoT Hardware

A realistic approach to significantly reducing the energy consumption of IoT systems and infrastructures is to dramatically improve the energy efficiency of hardware, because a large proportion of energy is used to power electrical and electronic hardware such as computing nodes, networking nodes, cooling and air-conditioning systems, power electronics, security systems, and lighting. Recently, much attention has been paid to improving the energy efficiency of hardware in ICT infrastructures, especially in the IoT industry. Energy-saving mechanisms in IoT infrastructures include:

  • Reducing the size of the hardware device.
  • Using energy-efficient materials.
  • Energy-efficient hardware design.
  • Turning off idle devices.
  • Energy-efficient manufacturing.

To achieve the green IoT vision, deploying energy-efficient hardware in the entire IoT infrastructure (from the perception layer to the cloud) throughout the IoT industry is essential. Green IoT hardware is not limited to energy-efficient hardware design and hardware-based energy-saving mechanisms in the IoT infrastructure but also includes sustainable hardware approaches such as:

  • Using biodegradable and recyclable materials to manufacture IoT hardware.
  • Incorporating energy harvesting systems into IoT systems or infrastructure.

Reducing the size of hardware devices

There has been a significant reduction in the size of electronic hardware from the era of the vacuum tube to modern-day semiconductor chips. In the early days of electronics, computers occupied entire floors of buildings, radio communication systems were large installations built into cabinets, and the smallest electronic device of the time was a two-way radio often carried on the back [78]. As the size of electronic devices decreased, their energy demand also dropped drastically.

Over the past few decades, the sizes of computing and communication devices have decreased significantly, reducing the power required to operate them. Despite the significant progress made by the semiconductor industry to decrease the size of semiconductor chips while improving their performance, there is still a persistent drive to keep lowering the sizes of semiconductor chips to decrease their cost, reduce energy consumption, and conserve the resources required to manufacture them.

One of the co-founders of Intel, Gordon Moore, observed that the number of transistors on a chip doubles roughly every 24 months; the computer industry adopted this observation as the well-known Moore's law, and it became a performance metric in the semiconductor industry. As more transistors were packed into a single small chip, the sizes of computing and network equipment decreased significantly, translating into a significant decrease in power consumption. Although advanced chip manufacturing has dramatically reduced transistor gate length, current leakage has increased, increasing chip power consumption and heat dissipation. Thus, doubling the number of transistors on a chip could double the power it consumes [79].

Some energy-hungry IoT devices require batteries with higher energy capacity. A battery's energy capacity is correlated with its size: batteries with higher capacities tend to be larger and heavier, limiting the extent to which the device's size can be decreased. Alternatively, a battery of relatively small capacity can be paired with an energy harvesting module that continuously recharges it with energy harvested from the environment. Adding an energy harvesting module may increase the size of the IoT device, but it improves the device's operational lifetime. It should be noted that the energy harvested by such modules is minimal and that the power electronics components themselves consume some energy.
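The trade-off above can be illustrated with a back-of-the-envelope lifetime estimate: a node's operating time is roughly its usable battery capacity divided by the net drain, so even a small harvester can extend lifetime considerably. All figures below are hypothetical assumptions, not data for any particular device:

```python
# Rough battery-lifetime estimate for a sensor node, with and without
# a small energy harvester. All numbers are illustrative assumptions.

CAPACITY_MAH = 2400   # assumed usable battery capacity (mAh)
AVG_LOAD_MA = 0.5     # assumed average current draw of the node (mA)
HARVEST_MA = 0.3      # assumed average current supplied by a tiny harvester (mA)

def lifetime_days(capacity_mah: float, net_drain_ma: float) -> float:
    """Operating hours are capacity divided by net drain; convert to days."""
    return capacity_mah / net_drain_ma / 24

no_harvest = lifetime_days(CAPACITY_MAH, AVG_LOAD_MA)
with_harvest = lifetime_days(CAPACITY_MAH, AVG_LOAD_MA - HARVEST_MA)
print(f"without harvesting: {no_harvest:.0f} days")
print(f"with harvesting:    {with_harvest:.0f} days")
```

Under these assumptions the harvester more than doubles the time between battery replacements, which is the maintenance and e-waste benefit the text describes.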

Another approach to further decreasing the size of IoT devices, and possibly their energy consumption, is to integrate the entire electronics of an IoT device, computer or network node into a single Integrated Circuit (IC) called a System on a Chip (SoC) [80]. The components often integrated into a SoC include a Central Processing Unit (CPU), input and output ports, memory, analogue input and output modules, and the power supply unit. A SoC can efficiently perform specific functions such as signal processing, wireless communication, executing security algorithms, image processing, and artificial intelligence. The primary reason for integrating the entire electronics of a system into a chip is to reduce the energy consumption, size, and cost of the system as a whole: a system initially made of multiple chips is integrated into a single chip that is smaller, may be cheaper, and consumes less energy. External components such as power sources (batteries or energy harvesters), antennas and other analogue electronics can also be integrated into a SoC to reduce size, energy consumption, and cost.

Using Energy-Efficient Materials and Sensors

Energy-efficient IoT systems start with the careful selection of materials and sensors. Modern IoT devices increasingly utilise low-power electronic components and sensors designed to minimise energy consumption without compromising performance. For instance:

  • Energy-efficient sensors: These include ultra-low-power sensors capable of capturing environmental data (e.g., temperature, humidity, motion) with minimal energy input. Advances in MEMS (Micro-Electromechanical Systems) technology and energy-harvesting sensors that draw power from ambient sources (solar, kinetic, or thermal) are pivotal for sustainable IoT designs.
  • Materials engineering: Devices are constructed with materials that reduce energy losses, such as low-resistance conductors or heat-dissipating materials, which enhance the efficiency of both computing and communication components.

Energy-efficient hardware design

At the IoT perception layer, some of the energy-efficient mechanisms include:

  1. Energy-efficient sensors (Green sensors): IoT sensors should be designed to consume as little energy as possible. When selecting the sensors to be used in the design of IoT devices, energy consumption and sustainability should be among the design criteria considered.
  2. Energy-efficient radio modules (Green radio modules): Radio modules are the major energy consumers in IoT devices, so designing them to consume minimal energy significantly decreases the device's overall energy consumption. When choosing an IoT device for an application, the radio module's energy consumption should be considered.
  3. Low-power microcontrollers and microprocessors (Green MCUs and ICs): The energy consumption of the microcontroller or microprocessor is significant because these devices are often powered by batteries with limited energy capacity. When selecting IoT devices for an application, both the performance and the energy consumption of the devices should be considered, rather than sacrificing one for the other. Some of the design strategies developed to improve the energy efficiency of the microcontroller or microprocessor of IoT devices are:
    • Duty cycling: Switching off the microcontroller or microprocessor when the device is idle and then switching it on only when it is needed for processing.
    • Using low-power microcontrollers or microprocessors: Choosing very low-power microcontrollers or microprocessors that offer modest processing power but consume relatively little energy.
    • Using energy-efficient CMOS ICs to manufacture MCUs or CPUs: Manufacturing the components of IoT devices using energy-efficient CMOS ICs can significantly reduce the energy consumption of IoT devices.
    • Hardware acceleration and SoC design: Using application-specific integrated circuits (ASICs) to implement hardwired functionalities in an energy-efficient way (e.g., DSP systems, System-in-Package (SiP), System-on-Chip (SoC)), resulting in highly compact designs (combining sensors, MCU, batteries, and energy harvesters into a single package).
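Duty cycling, listed above, can be quantified with a simple average-current calculation: the MCU sleeps almost all the time and wakes briefly to sample and transmit. The currents and timings below are illustrative assumptions in the range of typical low-power microcontrollers, not figures for any specific part:

```python
# Average current of a duty-cycled MCU that sleeps most of the time and
# wakes briefly to sample and transmit. All values are assumptions.

SLEEP_UA = 2.0     # assumed sleep-mode current (µA)
ACTIVE_MA = 8.0    # assumed active current while sampling/transmitting (mA)
ACTIVE_MS = 50     # assumed active burst per wake-up (ms)
PERIOD_S = 60      # assumed wake-up period: once per minute (s)

# Fraction of time spent active, then the time-weighted average current.
active_fraction = (ACTIVE_MS / 1000) / PERIOD_S
avg_ma = ACTIVE_MA * active_fraction + (SLEEP_UA / 1000) * (1 - active_fraction)
print(f"average current: {avg_ma * 1000:.2f} µA")
```

Even though the active current here is thousands of times the sleep current, the average stays in the microamp range because the bursts are short; shortening the burst or lengthening the wake-up period reduces the average almost proportionally.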

As tens of billions to trillions of IoT devices are deployed in various sectors of society and the economy (e.g., intelligent transport systems, smart health care, smart manufacturing, smart homes, smart cities, smart agriculture, and smart energy), the amount of traffic generated by IoT devices and transported through local networks and the internet to fog or cloud computing platforms is multiplying. The computing or processing required to analyse the massive amounts of data generated has also increased significantly. This increase in traffic and processing requirements raises the energy consumption of hardware deployed in the networking and data centre infrastructures handling IoT traffic and data. Some of the hardware-based energy-saving strategies that can be leveraged to reduce the energy consumption of networking and computing nodes in IoT-based infrastructures (some of which were discussed in [81]) include:

  1. Custom systems-on-chip: A design approach that integrates some or all system components into a single chip, reducing the size of the system compared with designing the various components separately. Although SoC devices' size, weight and energy consumption may be lower than those of systems built from separate chips, their performance may also be lower. For example, a Raspberry Pi built around a Broadcom SoC may consume less than 5 W, though its processing power is below that of desktop processors. SoCs are used in mobile phones to ensure acceptable computing and networking performance while minimising energy consumption to extend battery life. Thus, the SoC design approach can significantly reduce a device's size and energy consumption without necessarily sacrificing its performance.
  2. Dynamic frequency scaling: The processor, microprocessor, or microcontroller can be forced into a low-power mode by reducing its clock frequency or voltage. The power consumption of the device's peripheral components can also be reduced by dynamically powering down peripherals that are idle, so that they consume power only when necessary. Dynamic frequency or voltage scaling can be implemented in software, which monitors and adjusts the processor's power and clock frequency or voltage. Frequency and voltage scaling can be implemented on computing and networking nodes from the IoT perception layer through the networking or transport layer to the fog/cloud computing layers. Frequency or voltage scaling is a feature implemented in some Intel processors in the form of P-states and C-states: the P-states provide a mechanism to scale the frequency and voltage at which the processor runs to reduce its power consumption, while the C-states are the states of the CPU when it has reduced or turned off some of its selected functions [82].
  3. Low-energy displays: For applications that require information to be displayed, increasing the energy efficiency of the display could decrease the device's energy consumption.
  4. Hardware data processing (e.g., AI hardware): Rather than using the CPU for all computing or processing tasks, hardware acceleration is employed to shift specific data operations or computing tasks onto dedicated hardware. Hardware acceleration refers to the process by which an application offloads specific computing tasks onto specialised hardware components (e.g., GPUs, DSPs, ASICs) within a system to achieve greater efficiency than is possible with software running solely on a general-purpose CPU [83]. Visualisation, packet processing, AI processing, cryptography, error correction, and signal processing can be offloaded onto specialised hardware, freeing up the CPU for other tasks. Such dedicated hardware often offers higher performance and lower energy consumption than CPUs; for example, running AI-based tasks on GPUs is more efficient than running them on a CPU, which is why GPUs are preferred for such workloads, and AI-specific hardware has been introduced, especially for neural-network-based tasks. Thus, IoT hardware designers should examine carefully whether tasks could be offloaded to specialised hardware to free up the microcontroller or processor, significantly improving performance and energy efficiency.
  5. Cloud computing (remote processing): Cloud computing is a cost-effective and scalable computing paradigm that enables on-demand remote access to resources such as software, infrastructure, and platforms over the internet. By adopting cloud-based services (software-as-a-service, infrastructure-as-a-service, platform-as-a-service), companies and organisations do not need to invest in hardware infrastructure to host their services, significantly reducing the energy demand of IT services. A strategy that has significantly increased the performance and energy efficiency of IT infrastructure and services is virtualisation: hardware or software methods that enable the partitioning of a physical machine into multiple instances that run concurrently and share the underlying physical resources and devices. It involves a Virtual Machine Monitor (VMM), also called a hypervisor, which manages the Virtual Machines (VMs) and enables them to share the underlying hardware. The sharing of hardware resources by VMs hosting multiple services (data analytics, high-performance computing, security, etc.) significantly reduces the energy demand of data centres. Data centres have also developed and implemented several other energy-efficient strategies (e.g., switching off idle servers, energy-efficient task scheduling, and other optimisation methods). The exponential increase in the number of deployed IoT devices and the massive amounts of data they generate and send to fog computing nodes or cloud data centres will likely increase the energy consumption of data centres significantly, requiring green cloud computing strategies.
  6. Photonic computing: In an attempt to increase processing performance while significantly decreasing energy consumption, researchers and experts in the electronics and computer industries are seeking ways to use optical devices for data processing, storage, and communication. Optical or photonic computing offers high speed, high bandwidth, and low energy consumption, benefits that can be exploited to meet the need for high-performance computing and high-speed communication in the IoT networking/transport and fog/cloud computing layers. The main components of photonic computing systems are optical processing units (for data processing), optical connectors (for data transfer), and optical storage units (for data storage). In photonic computing, light waves (photons) produced by lasers or incoherent sources are exploited as the primary means of carrying out numerical calculations, reasoning, artificial intelligence, data processing, data storage and data communication, unlike in traditional computers, where these functions are performed using electrons [84]. A significant challenge in photonic computing systems is the inefficiency and performance bottleneck introduced when converting electrical signals to optical and optical signals to electrical, as photonic systems still need to interface with existing digital computing and communication systems.
  7. Improving the energy efficiency of mobile radio networks: The adoption of Low-Power Wide Area (LPWA) cellular technologies (e.g., NB-IoT, LTE-M) has enabled the deployment of IoT networking services over existing mobile networks [85]. Power amplifiers account for more than 50% of the energy consumption of cellular base stations, so energy consumption can be reduced by improving the efficiency of the power amplifiers in wireless access network nodes (e.g., 4G/5G/6G base stations). Another strategy to reduce the energy demand of cellular base stations is to centralise or shift some of the baseband processing to the cloud or a pool of baseband units, the so-called Cloud Radio Access Network (C-RAN).
  8. Turning off idle networking or computing nodes: The most popular energy-efficient management strategy is to switch off idle devices or components. This approach can be applied from the IoT perception layer to the fog/cloud computing layer.

Green Computing

The increasing proliferation of IoT devices in almost every sector and industry, in developing and developed economies alike, has increased the amount of data collected from the environment and, with it, the demand for processing and computing. IoT and traditional devices require high performance, QoS, and longer battery life, which can be achieved primarily by developing strategies that improve computing performance while reducing energy consumption. Green or sustainable computing is the practice of developing strategies to maximise energy efficiency (minimise energy consumption) and minimise the environmental impact of the design and use of computer chips, systems, and software, spanning the supply chain from the extraction of the raw materials needed to make computers to how systems are recycled [86].

Green computing strategies can be implemented in software or hardware. Some of the hardware-based green computing strategies have been discussed above in the section on Green IoT hardware. The software strategies will be addressed in the Green IoT software section below. Hardware acceleration is a primary green computing strategy that improves performance and energy efficiency. Hardware accelerators such as GPUs and Data Processing Units (DPUs) are major green computing drivers because they provide high-performance and energy-efficient computing for AI, networking, cybersecurity, gaming, and High-Performance Computing (HPC) services or tasks. It is estimated that about 19 terawatt-hours of electricity a year could be saved if all AI, HPC and networking computing tasks could be offloaded to GPUs and DPU accelerators. With the increasing use of sophisticated data analytics and AI tools to process the massive amounts of data generated by IoT devices, green computing strategies such as hardware acceleration will be essential [87].

The roots of green software go back to the beginning of the computer era, when code efficiency and compactness were paramount. For example, assembler and C/C++ code is far more efficient in terms of performance and memory than modern high-level programming languages such as Python or Java. Green software also emphasises proper software-based energy management, such as asynchronous routines, the use of interrupts, and sleep modes.
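
The impact of sleep modes can be illustrated with a small back-of-the-envelope sketch. All figures below (active and sleep currents, duty cycle, battery capacity) are illustrative assumptions, not measurements of any particular device:

```python
def average_current_ma(active_ma, sleep_ma, duty_cycle):
    """Average current draw of a node that is active for a fraction
    `duty_cycle` of the time and sleeps for the rest."""
    return active_ma * duty_cycle + sleep_ma * (1.0 - duty_cycle)

def battery_life_hours(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Idealised lifetime: battery capacity divided by average draw."""
    return capacity_mah / average_current_ma(active_ma, sleep_ma, duty_cycle)

# Assumed figures: 20 mA when active, 5 uA in deep sleep, awake 1% of the time.
life_h = battery_life_hours(2400.0, 20.0, 0.005, 0.01)
print(f"{life_h / 24:.0f} days")
```

With these assumed figures the node lasts well over a year on a single 2400 mAh cell; keeping the CPU and radio awake continuously (duty cycle 1.0) would drain the same cell in about five days.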

Recent developments in AI models and edge and fog computing enable the use of lightweight AI models in the fog and edge class of devices commonly powered by green energy sources.

Green computing is not only about devising strategies to reduce energy consumption. It also includes leveraging high-performance computing resources to tackle climate-related challenges. For example, GPUs and DPUs are used to run climate models (e.g., predict climate and weather patterns) and develop other green technologies (e.g., energy-efficient fertiliser production, development of battery technologies, etc.). Combining IoT and green computing technologies provides powerful tools for scientists, policymakers, and companies to tackle complex climate-related problems.

Green IoT Communication and Networking Infrastructure

Communication infrastructure is a significant energy consumer in IoT systems as device-generated data increases exponentially. Strategies to enhance energy efficiency include:

a. Low-power networking and communication technologies:
Adopting communication protocols designed for low-bandwidth, low-power operation, such as Zigbee, LoRaWAN, Sigfox, and BLE (Bluetooth Low Energy).
Energy-efficient adaptations of 5G technologies through techniques like massive MIMO (Multiple Input, Multiple Output) and dynamic spectrum sharing.

b. Energy-efficient data transmission:
Data aggregation and compression reduce the transmitted data volume, conserving network bandwidth and lowering energy usage.
Scheduling transmissions during periods of low network usage minimises power surges and optimises resource utilisation.
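
A minimal sketch of the aggregation-plus-compression idea, using Python's standard `json` and `zlib` modules; the sensor payloads are made up for illustration:

```python
import json
import zlib

# Hypothetical minute-by-minute temperature readings from one sensor.
readings = [{"sensor": "t1", "t": round(21.5 + 0.01 * i, 2)} for i in range(60)]

# Naive approach: transmit each reading as its own JSON message.
per_message_bytes = sum(len(json.dumps(r).encode()) for r in readings)

# Greener approach: aggregate an hour of readings, then compress the batch.
batch = json.dumps(readings).encode()
compressed = zlib.compress(batch)

print(per_message_bytes, len(batch), len(compressed))
```

Because the field names and sensor ID repeat in every reading, the compressed batch is a fraction of the size of sixty individual messages, and the radio, usually the dominant energy consumer, is switched on once instead of sixty times.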

c. Network-level offloading of computation: Devices conserve battery power by shifting intensive computational tasks from resource-constrained IoT devices to more capable edge or fog nodes.
Edge computing reduces data transfer requirements and latency, leading to energy savings at device and infrastructure levels.

d. Energy-efficient communication techniques:
Algorithms that adaptively control transmission power based on signal strength and environmental conditions ensure optimal energy use.
Implementing sleep and wake cycles for IoT devices, where communication modules remain dormant when not in use, significantly reduces energy consumption.
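
A sketch of adaptive transmission power control as described above. The control rule and all dBm figures are simplified assumptions, not the behaviour of any real radio:

```python
def choose_tx_power(rssi_dbm, current_dbm, target_dbm=-90.0,
                    margin_db=5.0, min_dbm=-20.0, max_dbm=14.0):
    """Pick the next transmit power from the last reported RSSI.
    Keeps the link `margin_db` above `target_dbm` while shedding
    any excess power (all numbers illustrative)."""
    excess = rssi_dbm - (target_dbm + margin_db)
    return max(min_dbm, min(max_dbm, current_dbm - excess))

# Strong link: back the power off; weak link: raise it up to the cap.
print(choose_tx_power(-60.0, 14.0))   # strong signal -> reduce power
print(choose_tx_power(-95.0, 0.0))    # weak signal -> increase power
```

Run periodically, such a loop spends only the power the current channel conditions require, instead of always transmitting at the worst-case level.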

Green IoT Architectures

Energy-efficient IoT systems are built around architectural frameworks that integrate energy optimisation across all layers of the IoT ecosystem, including device, network, and application levels. Key strategies include:

  • Hierarchical architectures: Layering devices into hierarchical systems (e.g., sensors, gateways, cloud) allows for task and resource distribution, improving energy efficiency.
  • Decentralised processing: Leveraging edge and fog computing reduces dependency on energy-intensive cloud operations, curbing overall system power consumption.

Green IoT Software

Optimised software plays a critical role in reducing the energy footprint of IoT systems. Besides the computing considerations presented in the Green Computing section above, the following approaches are effective:

  • Energy-aware algorithms: Algorithms that minimise computational complexity reduce CPU cycles and energy usage.
  • Dynamic software updates: Incremental updates allow IoT devices to maintain optimal functionality without requiring frequent firmware changes, saving energy over time.
  • AI-based optimisation: Machine learning algorithms predict and adapt energy consumption patterns based on usage, ensuring operational efficiency.

Green IoT security

Energy-efficient security measures are vital to ensure sustainable IoT systems:

  • Lightweight encryption algorithms: Designed to provide robust security without the high computational cost of traditional encryption methods.
  • Efficient authentication protocols: Multi-factor authentication mechanisms that minimise data exchange reduce energy costs associated with security processes.
  • Distributed security frameworks: IoT systems can maintain robust protection with reduced energy expenditure by decentralising security enforcement to edge nodes.

Advanced Green Manufacturing

Developing advanced design and manufacturing processes that produce energy-efficient chips is one of the strategies currently used to reach green computing and communication goals. Given the rapid adoption of smartphones and IoT systems, producing energy-efficient chips is very important. An example of how advanced manufacturing can significantly reduce energy consumption in computing and communication devices is the A-series chips used in Apple's iPhones. The power consumption of the 7-nm A12 chip is 50% less than that of its 10-nm A11 predecessor. Likewise, the 5-nm A14 chip is 30% more power efficient than the 7-nm A13 chip, and the 4-nm A16 is 20% more power efficient than the 5-nm A15 [88].

A similar trend can be observed in the PC industry, although there is no guarantee that more advanced chip manufacturing processes will continue to improve chip performance and energy efficiency. Designing energy-efficient chips for 5G/6G base stations is crucial to meet the growing demands of high-speed communication while minimising energy consumption and environmental impact. These chips are engineered with advanced semiconductor technologies to reduce power consumption and improve energy efficiency. They integrate specialised hardware accelerators for signal processing and AI-driven resource management to optimise network performance dynamically. Power-saving techniques like dynamic voltage and frequency scaling (DVFS) are also employed to adapt energy usage based on real-time load.
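
The benefit of DVFS can be illustrated with the classic CMOS dynamic-power model, P ≈ αCV²f. The capacitance, voltage, and frequency figures below are assumptions chosen only to show the scaling:

```python
def dynamic_power(c_eff_farads, v_volts, f_hertz, activity=1.0):
    """Classic CMOS dynamic-power model: P = a * C * V^2 * f."""
    return activity * c_eff_farads * v_volts**2 * f_hertz

# Assumed chip: 1 nF effective switched capacitance.
full = dynamic_power(1e-9, 1.0, 2.0e9)      # 1.0 V at 2.0 GHz -> 2.0 W
scaled = dynamic_power(1e-9, 0.8, 1.5e9)    # DVFS point: 0.8 V at 1.5 GHz
print(f"saving: {1 - scaled / full:.0%}")   # prints "saving: 52%"
```

Because voltage enters quadratically, lowering the frequency by 25% while dropping the voltage by 20% more than halves the dynamic power, which is why DVFS is so effective under partial load.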

Green IoT Policies

Regulatory frameworks and corporate policies play a foundational role in driving energy-efficient IoT adoption:

  • Global standards: Policies encouraging compliance with energy-efficient standards (e.g., Energy Star, IEEE standards for energy-efficient networking) foster widespread adoption of sustainable practices.
  • Incentives for energy-efficient designs: Governments and industry bodies can offer subsidies, tax benefits, and grants to encourage the development of energy-efficient IoT systems.
  • E-waste management regulations: Effective recycling and disposal policies ensure that IoT components are responsibly managed, reducing their environmental impact.

Energy-efficient IoT systems demand an integrated approach, combining advanced hardware, optimised software, sustainable manufacturing, and policy support to meet the goals of green computing and communication. As the IoT ecosystem expands, these strategies are essential to balance innovation with environmental sustainability.

Design Consideration for Energy Sources for IoT Devices

Choosing an appropriate energy source for IoT systems is critical to ensuring reliability, efficiency, and sustainability. These considerations are guided by the diverse requirements of IoT devices and their deployment scenarios. Below, we expand on key design aspects (figure 108):

Design Consideration for Energy Sources
Figure 108: Design Consideration for Energy Sources

1. Scalability

IoT deployments often involve a large number of devices operating in diverse environments. The energy solution must:

  • Be scalable such that it can be manufactured on a large scale at a reasonable cost.
  • Be capable of serving small, low-power devices and large, energy-intensive systems like gateways or servers.
  • Offer modular or adaptable energy storage solutions, allowing upgrades to accommodate future device additions or higher power demands.

2. Minimum Maintenance

IoT devices are often deployed in remote or hard-to-access locations where frequent maintenance is impractical. Energy sources must:

  • Require minimal or no regular maintenance, reducing operational costs.
  • Be reliable for long-term usage, particularly in battery-powered devices, where recharging or replacement can be challenging.
  • Leverage self-sustaining solutions such as energy harvesting from solar, thermal, or mechanical sources to extend operational lifespans and reduce the frequency of replacing the node's energy storage system, maintenance frequency and cost.

3. Mobility

For IoT applications requiring mobile devices, such as wearables, drones, or vehicle-mounted sensors, energy sources must:

  • Be lightweight and compact to avoid hindering mobility.
  • Ensure sufficient energy storage capacity to power devices during extended operation without recharging.
  • Be rugged and resilient to withstand movement, vibration, or other dynamic conditions.

4. Energy Requirements

The energy consumption of IoT devices varies widely, depending on their purpose and workload. Key considerations include:

  • Low-power devices: Sensors and simple IoT nodes require minimal power, making batteries or energy harvesting sufficient.
  • Energy-hungry devices: Edge computing nodes or gateways with high processing and networking requirements need more robust and continuous power sources.
  • Devices requiring a constant power supply (e.g., critical infrastructure sensors or medical devices) cannot rely solely on batteries because of their limited capacity.
  • Hybrid systems combining batteries with solar or other ambient energy sources are often ideal.
  • Hybrid energy storage systems can also supply stored energy to IoT devices.

5. Flexibility

IoT systems are deployed in diverse environments, from urban areas to remote, off-grid locations. Flexible energy solutions should:

  • Adapt to environmental conditions (e.g., solar panels for sunny regions or RF harvesting in urban areas).
  • Support hybrid energy systems that combine multiple energy sources to enhance reliability and efficiency.
  • Allow for easy integration into both existing and new IoT infrastructures.

6. Efficiency

Efficient energy usage is vital to maximize device lifespans and reduce energy waste. Considerations include:

  • Energy-efficient components: Use low-power processors and communication protocols to minimize energy demands.
  • Energy storage systems: Batteries and capacitors must offer high energy density, low leakage, and efficient charge-discharge cycles.
  • Optimize power management to match the device's active and idle states, reducing unnecessary consumption.

7. The Need for Backup Energy Sources

IoT devices must remain operational during power outages or periods when primary energy sources are unavailable. Backup considerations include:

  • Incorporating energy storage systems like batteries or supercapacitors to provide temporary power.
  • Designing hybrid systems with renewable energy sources (solar, wind) as backups for grid-dependent devices.
  • Ensuring seamless transitions between primary and backup power sources to avoid service interruptions.

8. Minimum Cost

Cost-effectiveness is critical for large-scale IoT deployments. Energy source design must:

  • Balance initial costs (e.g., solar panels or advanced batteries) with long-term savings from reduced maintenance and energy efficiency.
  • Use cost-efficient materials and manufacturing techniques for batteries and energy harvesting systems.
  • Optimize deployment and maintenance strategies to minimize labour and operational expenses.

9. Sustainability

Sustainable energy solutions are essential to reducing the environmental footprint of IoT systems. Considerations include:

  • Using renewable energy sources like solar, wind, or hydro to power IoT devices.
  • Deploying energy harvesting systems to reuse ambient energy and reduce reliance on non-renewable sources.
  • Designing systems with recyclable or biodegradable materials to minimize waste.

10. Green and Environmentally Friendly

To align with green IoT principles, energy sources should:

  • Minimize carbon emissions during production and operation.
  • Avoid toxic materials (e.g., certain battery chemicals) that can harm the environment if not disposed of properly.
  • Support eco-friendly practices, such as leveraging clean energy or reducing e-waste through longer-lasting components.

Designing energy sources for IoT systems requires a holistic approach that balances power needs, cost, efficiency, and sustainability. By addressing these considerations, developers can create reliable, scalable, and environmentally responsible IoT systems, paving the way for innovative and sustainable IoT solutions.

Energy Sources for IoT

The electrical and electronic devices in IoT infrastructure require electrical energy to operate. The energy requirements of the device depend on its size, computing or processing requirements, traffic load, and other mechanical and electrical loads that need to be handled, especially in IoT applications where the feedback commands from fog/cloud computing platforms are used to control a physical process or system through actuators. The main power sources for IoT devices are (figure 109):

  • mains (grid) power,
  • energy storage systems,
  • energy harvesting systems.
Energy Sources for IoT
Figure 109: Energy Sources for IoT

Grid power

In IoT applications where hardware devices do not need to be mobile and are energy-hungry (consume significant energy), they can be reliably powered from the grid. Mains power is AC and must be converted to DC and scaled down to meet the power requirements of sensing, actuating, computing, and networking nodes. The hardware devices at the networking or transport layer and those at the application layer (fog/cloud computing nodes) are often power-hungry and supplied with grid energy.

A drawback of using grid power to supply an IoT infrastructure with many devices is the complexity of connecting each device to the power source with cables. In the case of hundreds or thousands of devices, supplying them from the mains is impractical. Moreover, if the grid energy is generated from fossil fuels, the carbon footprint of the IoT infrastructure increases as its energy demand grows.

Energy storage systems

Energy storage systems store energy so that it can be consumed later. In IoT infrastructures, some sensors, actuators, computing and networking nodes, and other electrical systems are powered by energy storage systems. The energy is stored in forms that can readily be converted into the electrical energy required to power these devices. In some scenarios, electrical energy from a mains supply or local renewable energy plants (or energy harvesting systems) is converted into a storable form and kept in an energy storage system, to be used when the source cannot generate enough energy to meet the needs of the electrical systems in the IoT infrastructure. Energy storage systems can be categorised by the form of energy (mechanical, electrical, chemical, or thermal) that is stored and subsequently converted into electrical energy.

Categories of Energy Storage Systems

  1. Electrostatic energy storage systems: They use capacitors to store energy as an electric field. They are suitable for high-speed energy release but limited in storage capacity.
  2. Magnetic energy storage system: This includes superconducting magnetic energy storage (SMES) systems, which store energy as a magnetic field in superconducting materials. These systems provide high efficiency and rapid discharge but require advanced cooling systems to maintain superconductivity.
  3. Electrochemical energy storage systems: Store energy through reversible chemical reactions in batteries. Common types include lithium-ion, lead-acid, alkaline, solid-state thin-film, and 3D-printed zinc batteries. These batteries suit many applications, from tiny IoT sensors to more extensive infrastructures like data centres.
  4. Chemical energy storage systems: The electrical energy generated is converted into chemical fuels, such as hydrogen, that can be stored for a long time and easily converted back into electrical energy when needed. In this case, energy is harvested from renewable sources such as solar or wind when conditions are favourable, as in spring or summer, and used during winter when conditions do not favour renewable generation.
  5. Mechanical energy storage systems: The electrical energy produced is converted into mechanical energy (e.g., potential or kinetic energy) and stored so that it can easily be converted back into electrical energy for consumption. Examples include pumped hydro, gravity, compressed air, and flywheel energy storage systems. Mechanical energy storage systems are large and complex. They may be an energy storage option for fixed IoT infrastructures like base station sites or data centres, provided there is space for them and the area's geography is suitable, but not for small IoT systems constrained by size and weight.
  6. Electrothermal energy storage system: The electrical energy generated is converted to thermal energy, which is stored and used for heating, cooling, or conversion purposes for large-scale infrastructure (e.g., base stations, core network infrastructure, or fog/cloud data centres). The thermal energy can be stored and converted into electrical energy for consumption.
  7. Hybrid energy storage system: This system combines multiple storage technologies (e.g., batteries with supercapacitors) to balance capacity, discharge rate, and longevity. It offers flexibility and performance optimisation for diverse IoT applications.

Most IoT devices are powered by a small energy storage system (e.g., a battery or supercapacitor) with minimal energy capacity. The battery or supercapacitor is charged to full capacity when the device is deployed, and the device shuts down once the stored energy is completely drained. The device's lifetime is therefore the time from deployment until the stored energy is consumed. The capacity of the energy storage is chosen to satisfy the device's energy consumption and ensure a long lifetime. In a massive deployment of thousands or hundreds of thousands of IoT devices, frequent replacement or recharging of batteries or supercapacitors can be tedious and costly and may also degrade the quality of service.
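
As a rough illustration of this lifetime calculation, the following sketch divides battery capacity by an assumed daily consumption; all figures (sleep current, transmit current and duration, message rate) are illustrative:

```python
def lifetime_days(capacity_mah, sleep_ua, tx_ma, tx_seconds, tx_per_day):
    """Lifetime of a battery-powered node that sleeps between short
    transmissions (all figures illustrative)."""
    daily_mah = (sleep_ua / 1000.0) * 24.0 \
        + tx_ma * (tx_seconds * tx_per_day) / 3600.0
    return capacity_mah / daily_mah

# 2400 mAh cell, 5 uA sleep, 40 mA for 2 s per message, 24 messages a day.
print(f"{lifetime_days(2400.0, 5.0, 40.0, 2.0, 24):.0f} days")
```

With these assumed figures the node reaches roughly a decade of operation, the target lifetime mentioned below; doubling the message rate roughly halves the transmit share of the budget and noticeably shortens the lifetime.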

An energy storage system is recommended mainly for IoT devices that require a tiny amount of power (in the order of micro- or milliwatts) to operate and spend most of their time in sleep mode to save energy. The lifetime of a low-power IoT device powered by a small battery should ideally be at least a decade. An energy storage system's capacity is constrained by its size and weight: increasing the capacity increases the size or weight. Yet the size and weight of IoT devices should be kept as small as possible, especially in IoT applications where mobility is critical.

The computing and networking nodes at the edge/fog/cloud layer of the IoT architecture are energy-hungry devices not often powered solely by energy storage systems. They are often powered by a main power source, such as an electricity grid or renewable energy sources (e.g., wind, solar, pumped hydro-power). A backup energy storage system is often installed to store energy so that when the main power source fails (especially in the case where energy is generated from renewable energy sources as they are intermittent in nature), the energy storage system will supply the computing or networking node until the main source is restored.

Energy Storage in IoT Devices

Small IoT Devices

Most small IoT devices rely on compact energy storage systems such as batteries or supercapacitors. These devices are typically constrained by:

  • Size and Weight: Energy storage capacity must be balanced with the need for compact designs.
  • Energy Demand: Devices are optimised for low power consumption (in the range of microwatts to milliwatts) and often operate in sleep mode to conserve energy.
  • Lifetime: The energy storage system's capacity determines the device's operational lifetime, which is designed to minimise frequent replacements or recharging.

The most common energy storage systems used in small IoT devices include:

  • Batteries: Lithium-ion and solid-state thin-film batteries are standard in IoT devices due to their energy density and compact size.
  • Supercapacitors: Provide rapid charging and discharging capabilities suitable for devices requiring quick energy bursts.
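
The usable energy of a supercapacitor between its charged voltage and the device's cutoff voltage follows E = C/2 · (V₁² − V₂²). A small sketch, with illustrative component values:

```python
def supercap_energy_joules(capacitance_f, v_full, v_cutoff=0.0):
    """Usable energy between two voltages: E = C/2 * (V1^2 - V2^2)."""
    return 0.5 * capacitance_f * (v_full**2 - v_cutoff**2)

# Assumed 10 F capacitor charged to 5.0 V; device browns out at 2.0 V.
usable = supercap_energy_joules(10.0, 5.0, 2.0)
print(usable)  # 105.0 J
```

At an assumed 5 mW average draw, 105 J corresponds to roughly 21,000 seconds, i.e., close to six hours of operation, which is why supercapacitors suit burst loads and short-term buffering rather than decade-long autonomy.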

Large IoT Infrastructure

IoT infrastructure at the edge, fog, and cloud layers (e.g., base stations, access points, fog nodes, and data centres) require more robust and large-scale energy storage solutions. These include:

  • Battery Energy Storage Systems: Provide reliable backup power.
  • Hydrogen Energy Storage Systems: Store renewable energy in chemical form for long-term use.
  • Thermal Energy Storage Systems: Store energy as heat, often used for cooling or reconverted to electricity.
  • Mechanical Storage Solutions: Pumped hydro or flywheel systems can store vast amounts of energy for large-scale operations.
  • Hybrid energy storage: A combination of two or more energy storage systems, e.g., supercapacitor and battery.

Such systems often serve as backup power sources to ensure uninterrupted operation during grid outages or renewable energy intermittency.

Examples of Energy Storage Systems for IoT

Electrical Energy Storage Systems

  • Supercapacitors: For high-speed energy release in sensors or actuators.
  • Superconducting Magnetic Energy Storage: Suitable for critical applications requiring rapid energy discharge.

Mechanical Energy Storage Systems

  • Pumped Hydro: For large-scale energy backup in base stations or data centres.
  • Flywheel Storage: Ideal for facilities needing rapid energy delivery.

Chemical Storage

  • Flow Batteries: Provide scalability for varying energy demands.
  • Hydrogen Storage: Stores renewable energy over long durations.

Thermal Storage

  • Cryogenic Energy Storage: Stores energy in liquefied air, suitable for cooling-intensive applications.
  • Phase-Change Materials: Efficiently store and release thermal energy.

Challenges and Considerations

  • Energy Efficiency vs. Size: Increasing energy capacity often results in larger, heavier systems, which may conflict with the need for compact designs.
  • Cost: Advanced energy storage systems, such as hydrogen or SMES, can be costly.
  • Environmental Impact: Sustainable energy storage solutions are critical to minimising the ecological footprint of IoT systems.
  • Reliability: Ensuring consistent performance over long periods, especially in critical IoT applications.

Energy storage systems are pivotal in enabling reliable, efficient, and sustainable IoT operations. These technologies, from small-scale batteries in sensors to large-scale mechanical systems in data centres, ensure that IoT infrastructures can function even without a direct power supply. IoT designers can meet the growing demands of connected ecosystems while addressing environmental and operational challenges by leveraging diverse storage options and optimising for specific use cases.

Energy Harvesting Systems

To deal with the limitations of energy storage systems, such as limited lifetime (the time from when an IoT device is deployed until all the energy stored in its energy storage system is depleted), maintenance complexity, and scalability, energy harvesting systems are incorporated into IoT systems to harvest energy from the environment. Energy can be harvested from the ambient environment (sources naturally present in the immediate surroundings of the device, e.g., solar, wind, thermal, or radio-frequency sources) or from external sources (e.g., mechanical systems or the human body) and then converted into electrical energy to power IoT devices or stored in an energy storage system for later use.

Energy Harvesting from Ambient Energy Sources

Ambient energy harvesting is the process of capturing energy from the immediate environment of the device and converting it into electrical energy to power IoT devices. Ambient sources include solar and photovoltaic, Radio Frequency (RF), flow (wind and hydro), and thermal energy sources. Each source has unique characteristics that make it suitable for specific IoT applications, providing tailored solutions to power devices based on their requirements. Ambient energy harvesting systems that can power IoT devices, access points, fog nodes, or cloud data centres include:

1. Solar and Photovoltaic Energy Harvesting

Source: Solar energy is derived from natural sunlight, while artificial light sources can be harnessed indoors. Solar panels or photovoltaic cells are the primary tools for capturing this energy.

Process: Photovoltaic (PV) cells, composed of semiconductor materials, absorb photons from light. This absorption excites electrons, generating an electric current that powers IoT devices or charges energy storage systems.

Applications:

  • Outdoor IoT devices: Environmental sensors, agricultural IoT systems, and smart city deployments (e.g., solar-powered streetlights or traffic systems).
  • Indoor IoT systems: Energy-efficient smart home devices like automated blinds or temperature controllers.

Advantages:

  • Solar energy is abundant, renewable, and widely available.
  • Photovoltaic cells can be scaled to suit various device sizes and energy needs.

Challenges:

  • Performance depends on light availability, weather conditions, and shading.
  • Energy storage systems (e.g., batteries) are required for use during periods of darkness or cloudy weather.
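
A first-order estimate of photovoltaic output is P = G · A · η, irradiance times cell area times conversion efficiency. The figures below are illustrative assumptions:

```python
def pv_power_watts(irradiance_w_m2, area_m2, efficiency):
    """First-order PV output: P = G * A * eta."""
    return irradiance_w_m2 * area_m2 * efficiency

# Assumed 10 cm x 10 cm cell (0.01 m^2) at 20% efficiency.
outdoor = pv_power_watts(1000.0, 0.01, 0.20)  # bright sun: about 2 W
indoor = pv_power_watts(5.0, 0.01, 0.20)      # office lighting: about 10 mW
print(outdoor, indoor)
```

The same cell delivers a couple of hundred times less power under typical office lighting than in bright sun, which is why indoor harvesters target only very low-power devices and usually buffer the energy in a battery or supercapacitor.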

2. Radio Frequency (RF) Energy Harvesting

Source: RF energy is emitted by various wireless communication systems such as Wi-Fi routers, mobile networks, and television transmitters.

Process: RF energy is captured using specialised antennas and rectified to produce usable electrical power. Depending on the application, these systems can operate over various frequencies.

Applications: Low-power IoT devices: Wearable sensors, asset trackers, and remote controllers in urban and indoor environments where RF signals are prevalent.

Advantages:

  • Utilises an omnipresent energy source in populated areas.
  • Offers a continuous power supply in environments with dense RF activity.

Challenges:

  • Energy output is relatively low and insufficient for high-power devices.
  • Proximity to RF sources and signal strength significantly impact efficiency.
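
The low energy output mentioned above can be quantified with the free-space Friis equation, which gives an upper bound on the power an RF harvester's antenna can capture before rectifier losses. The transmitter power, antenna gains, frequency, and distance below are assumptions:

```python
import math

def friis_received_power_w(p_tx_w, g_tx, g_rx, freq_hz, distance_m):
    """Free-space received power (Friis):
    Pr = Pt * Gt * Gr * (lambda / (4 * pi * d))^2."""
    wavelength = 3e8 / freq_hz
    return p_tx_w * g_tx * g_rx * (wavelength / (4 * math.pi * distance_m))**2

# Assumed: 1 W transmitter at 915 MHz, unity-gain antennas, 10 m away.
pr = friis_received_power_w(1.0, 1.0, 1.0, 915e6, 10.0)
print(f"{pr * 1e6:.1f} microwatts")  # about 6.8 microwatts
```

Even a one-watt transmitter only ten metres away yields just a few microwatts at the antenna, consistent with the challenge that RF harvesting suits only very low-power devices close to strong sources.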

3. Flow Energy Harvesting

Source: Energy from the movement of air (wind) or water (hydro) is captured and converted into electrical energy.

Process:

  • Wind energy: Micro wind turbines or harvesters capture the kinetic energy of moving air.
  • Hydro energy: Small-scale hydroelectric systems capture water flow in rivers, streams, or pipelines.

Applications: Remote IoT devices in areas with consistent air or water flow, such as wind-powered weather stations or hydro-powered sensors in smart water management systems.

Advantages:

  • Renewable and highly scalable for large and small IoT deployments.
  • Provides a sustainable energy source in specific geographic locations.

Challenges:

  • Requires consistent flow availability and favourable conditions for effective energy generation.
  • Infrastructure needs can be costly and space-intensive.
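
The power available to a wind micro-harvester can be estimated from the standard turbine relation P = Cp · ½ρAv³. The rotor area, wind speed, and power coefficient below are illustrative assumptions:

```python
def wind_power_watts(air_density, rotor_area_m2, wind_speed_ms, cp=0.35):
    """Power captured by a turbine: P = Cp * 1/2 * rho * A * v^3.
    Cp is the power coefficient (Betz limit ~0.59; 0.35 is assumed here
    as a typical small-turbine figure)."""
    return cp * 0.5 * air_density * rotor_area_m2 * wind_speed_ms**3

# Assumed micro turbine: 0.05 m^2 rotor in a 5 m/s breeze at sea level.
print(f"{wind_power_watts(1.225, 0.05, 5.0):.2f} W")  # 1.34 W
```

Because the power grows with the cube of the wind speed, halving the wind speed cuts the available power by a factor of eight, so consistent flow availability dominates the design, as noted above.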

4. Thermal Energy Harvesting

Source: Temperature differences or heat dissipation from industrial processes, human bodies, or natural sources.

Process: Thermoelectric generators (TEGs) use the Seebeck effect, where a voltage is generated due to a temperature gradient across a material, to convert heat into electrical energy.

Applications:

  • Industrial IoT systems: Waste heat recovery from factories or power plants.
  • Smart home devices: Heat-based systems for energy-efficient homes.
  • Wearables: Powering smartwatches or fitness trackers using body heat.

Advantages:

  • Utilises existing waste energy, improving overall energy efficiency.
  • Ideal for applications with constant heat sources.

Challenges:

  • Limited conversion efficiency.
  • Reliance on stable and sufficient temperature gradients.

5. Acoustic Noise Energy Harvesting

Source: Pressure waves from sound or vibrations caused by machines, vehicles, or environmental noise.

Process: Piezoelectric or acoustic materials capture sound vibrations and convert them into electrical energy.

Applications:

  • Urban IoT devices in noisy environments.
  • Sensors in factories or other high-decibel areas.

Advantages:

  • Exploits previously untapped sound energy.
  • Can be deployed in areas with persistent noise.

Challenges:

  • Low energy output.
  • Efficiency depends on sound frequency and intensity.

Energy Harvesting from Mechanical Sources

Mechanical energy sources, such as vibrations and pressure changes, are prevalent in dynamic environments like transportation and industrial settings.

1. Vibration Energy Harvesting

Source: Vibrations generated by machinery, vehicles, or natural phenomena.

Process: Devices with piezoelectric or electromagnetic materials capture vibrational energy and convert it to electrical energy.

Applications:

  • Monitoring industrial machinery health.
  • Powering IoT sensors on vehicles or railways.

Advantages:

  • Utilises existing mechanical energy.
  • Ideal for environments with continuous movement.

Challenges: Dependent on vibration consistency and intensity.

2. Pressure and Stress-Strain Energy Harvesting

Source: Pressure variations or mechanical stress on materials.

Process: Piezoelectric materials produce electrical charges when subjected to stress or strain.
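
The conversion above can be sketched with the direct piezoelectric relation Q = d33·F. The d33 coefficient below is in the typical range for PZT ceramics, while the force and element capacitance are assumed example values:

```python
def piezo_pulse_energy(d33_c_per_n, force_n, capacitance_f):
    """Energy per compression pulse stored on the element's capacitance."""
    charge = d33_c_per_n * force_n             # direct effect: Q = d33 * F
    voltage = charge / capacitance_f           # V = Q / C
    return 0.5 * capacitance_f * voltage ** 2  # E = 1/2 * C * V^2

# Example: a footstep-like 50 N pulse on a PZT element
# (d33 ~ 400 pC/N, C = 20 nF are assumed illustrative values)
e_j = piezo_pulse_energy(d33_c_per_n=400e-12, force_n=50.0, capacitance_f=20e-9)
print(f"Energy per pulse: {e_j * 1e6:.3f} uJ")
```

The sub-microjoule result per pulse shows why such harvesters target compact, intermittently active devices.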

Applications:

  • Medical sensors in wearable devices.
  • IoT devices in hydraulic or pneumatic systems.

Advantages: Effective for compact devices.

Challenges: Limited applications outside specific industries.

Energy Harvesting from Human Body Sources

The human body is a valuable energy source, especially for wearable and implantable IoT devices.

1. Human Activity Energy Harvesting

Source: Biomechanical movements like walking, running, or cycling.

Process: Kinetic systems convert movement into electrical energy, which can power wearables or charge onboard batteries.

Applications:

  • Smart fitness trackers.
  • IoT-enabled medical monitoring devices.

Advantages: Eliminates external charging needs.

Challenges: Energy generation depends on user activity levels.

2. Human Physiological Energy Harvesting

Source: Body heat, biochemical reactions, or other physiological processes.

Process:

  • Thermal: Converts body heat into power using thermoelectric generators.
  • Chemical: Biofuel cells harness energy from biochemical reactions.

Applications:

  • Implantable medical devices like pacemakers.
  • Continuous health monitoring systems.

Advantages:

  • Supports self-sustaining devices.
  • Minimises maintenance for medical applications.

Challenges: Requires advanced materials for efficient energy conversion.

Hybrid Energy Harvesting Systems

Hybrid systems combine multiple energy sources to ensure reliability and maximise efficiency. They are instrumental in scenarios where environmental conditions vary unpredictably.

Advantages:

  • Reliable energy supply from complementary sources.
  • Improved energy generation and storage flexibility.

Challenges:

  • Complex integration of different energy harvesting mechanisms.
  • Higher costs and design challenges for seamless operation.

Energy harvesting from ambient sources is a transformative approach to powering IoT devices sustainably. These systems provide self-sufficient, low-maintenance energy solutions by leveraging solar, RF, thermal, acoustic, and mechanical sources. Innovations in hybrid energy systems and advanced materials are expected to enhance the efficiency and applicability of energy harvesting technologies, paving the way for widespread adoption in IoT infrastructures across industries.

Green IoT Design Trade-offs

Balancing various design criteria is critical to achieving optimal performance while minimising environmental impact in designing and implementing IoT devices and infrastructures. The concept of Green IoT (G-IoT) emphasises designing IoT systems that are energy-efficient, sustainable, and environmentally friendly, addressing the growing concern about the ecological footprint of IoT technologies. However, achieving these goals often involves trade-offs between competing priorities such as energy consumption, performance, security, cost, and sustainability (figure 110).

Green IoT Design Trade-offs
Figure 110: Green IoT Design Trade-offs

Energy Efficiency

One of the primary goals of IoT design is minimising energy consumption, as many IoT devices rely on limited-capacity batteries. Energy-efficient hardware components, software optimisations, and low-power communication protocols are widely adopted to prolong device operating lifetimes. For example:

  • Energy-Efficient Hardware: Microcontrollers and sensors optimised for low power draw.
  • Energy-Efficient Software: Algorithms designed to reduce computational overhead.
  • Low-Power Communication Protocols: Technologies like Bluetooth Low Energy (BLE) and LoRa minimise power requirements for data transmission.

These measures reduce energy demand and extend battery life. However, the benefit of energy savings often comes at the cost of reduced performance:

  • Processing Speed: Energy-efficient hardware may have slower processing capabilities.
  • Network Bandwidth: Low-power communication protocols typically support lower data rates, which may not suffice for high-bandwidth applications.
  • Packet Loss and Latency: Optimisations to save power may increase transmission delays or packet loss, affecting Quality of Service (QoS).
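
The energy/lifetime trade-off is easiest to see in a duty-cycle model of a battery-powered node. This is a hedged back-of-the-envelope sketch; the currents, wake time, and battery capacity are assumed example values:

```python
def battery_life_days(capacity_mah, active_ma, sleep_ma, active_s, period_s):
    """Estimated lifetime for a node that wakes once every `period_s` seconds."""
    duty = active_s / period_s
    avg_ma = active_ma * duty + sleep_ma * (1 - duty)  # duty-cycle-weighted current
    return capacity_mah / avg_ma / 24                  # hours -> days

# Example: 1000 mAh cell, 20 mA while active, 5 uA in deep sleep
aggressive = battery_life_days(1000, 20.0, 0.005, active_s=1, period_s=60)
# Same node sampling every 10 s: better responsiveness, much shorter life
responsive = battery_life_days(1000, 20.0, 0.005, active_s=1, period_s=10)
print(f"1/60 duty cycle: {aggressive:.0f} days, 1/10 duty cycle: {responsive:.0f} days")
```

Shortening the reporting period improves data freshness but cuts the estimated lifetime several-fold, which is exactly the QoS-versus-energy tension described above.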

Security Trade-offs

Security is another critical consideration that often conflicts with energy efficiency in IoT design. Traditional robust security algorithms, such as those used in standard computing systems, are computationally intensive and consume significant energy. Applying such algorithms directly to IoT devices would rapidly deplete their batteries.

  • Energy-Hungry Security Protocols: Encryption methods like AES-256 or RSA require substantial processing power, which can shorten the device's operational lifetime.
  • Efforts for Energy-Efficient Security: Research and development are focused on creating lightweight cryptographic algorithms and authentication mechanisms tailored for resource-constrained IoT devices.

However, prioritising energy efficiency may compromise the level of security, leaving devices vulnerable to attacks such as data breaches, eavesdropping, or denial of service (DoS).
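
A rough model of why heavyweight cryptography strains IoT batteries multiplies cycles per encrypted byte by the MCU's active power. The cycle counts below are illustrative assumptions, not benchmarks of AES-256, RSA, or any specific lightweight cipher:

```python
def crypto_energy_uj(cycles_per_byte, payload_bytes, cpu_hz, active_mw):
    """Energy to encrypt one payload on an MCU running at `cpu_hz`."""
    seconds = cycles_per_byte * payload_bytes / cpu_hz
    return active_mw * seconds * 1e3  # mW * s -> uJ

# Example: 64-byte payload on a 16 MHz MCU drawing 30 mW while active
heavy = crypto_energy_uj(cycles_per_byte=1500, payload_bytes=64,
                         cpu_hz=16e6, active_mw=30)  # assumed "heavy" cipher cost
light = crypto_energy_uj(cycles_per_byte=100, payload_bytes=64,
                         cpu_hz=16e6, active_mw=30)  # assumed lightweight cipher cost
print(f"Heavy: {heavy:.1f} uJ per message, lightweight: {light:.1f} uJ")
```

Under these assumptions the lightweight scheme costs an order of magnitude less energy per message, which is the motivation behind lightweight cryptography research for constrained devices.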

Cost Considerations

Cost is another key factor influencing IoT design. Manufacturers often strive to keep production costs low to ensure the affordability of devices, especially for mass-market applications. This focus on cost reduction may lead to the following:

  • Sacrifices in Security: Inexpensive devices may lack robust security features, increasing the risk of vulnerabilities.
  • Trade-offs in Performance and QoS: Lower-cost components may provide suboptimal computing or communication capabilities.

While minimising cost is essential for market viability, it can compromise other critical aspects, such as reliability, durability, or security, leading to potential issues over the device's lifecycle.

Green IoT (G-IoT): A Holistic Approach

Green IoT aims to address the environmental and sustainability challenges associated with IoT systems. It focuses on:

  • Minimising Energy Consumption: Through energy-efficient designs and renewable energy sources.
  • Reducing E-Waste: Promoting the use of recyclable materials and modular designs to extend device lifecycles.
  • Sustainable Applications of IoT: Leveraging IoT solutions to enhance resource efficiency in agriculture, transportation, and energy industries.

Examples include precision farming, smart grids, and waste management systems. However, Green IoT design must also balance other key requirements:

  • Quality of Service (QoS): Ensuring energy and cost optimisations do not compromise performance.
  • Security: Developing secure yet lightweight protocols to protect data and device integrity.
  • Cost-Effectiveness: Striking a balance between affordability and sustainability without compromising essential functionalities.

Design Challenges and Trade-off Management

Achieving the goals of Green IoT requires careful consideration of trade-offs:

  • Energy vs. Performance: Designers must balance low-power operation with adequate processing and communication capabilities.
  • Security vs. Energy and Cost: Integrating security features without excessive energy consumption or cost inflation is a significant challenge.
  • Sustainability vs. Cost: Sustainable practices, such as using eco-friendly materials or designing for recyclability, may increase initial production costs.

To navigate these trade-offs, designers can adopt strategies such as:

  • Adaptive Systems: IoT devices that dynamically adjust energy use and processing power based on current requirements.
  • Edge Computing: Shifting computational tasks to edge devices to reduce the energy demand on individual IoT nodes.
  • Standardisation: Developing universal standards for energy-efficient, secure, and sustainable IoT designs.
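
An adaptive system of the kind listed above can be as simple as a sampling-interval policy that backs off while readings are stable and reacts quickly when they change. A minimal sketch, with assumed thresholds and bounds:

```python
def next_interval_s(current_s, last_value, new_value,
                    change_threshold=0.5, min_s=5, max_s=300):
    """Return the next sampling interval based on how much the reading moved."""
    if abs(new_value - last_value) > change_threshold:
        return max(min_s, current_s // 2)  # activity detected: sample faster
    return min(max_s, current_s * 2)       # stable: save energy, sample slower

interval = 60
for last, new in [(20.0, 20.1), (20.1, 20.2), (20.2, 23.0)]:
    interval = next_interval_s(interval, last, new)
    print(f"reading moved {abs(new - last):.1f} -> next interval {interval}s")
```

Two stable readings double the interval twice (less energy); the sudden jump halves it again, trading energy for responsiveness only when the data warrants it.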

Green IoT represents a transformative approach to designing IoT systems that align with environmental and sustainability goals. By addressing energy efficiency, e-waste reduction, and sustainable resource management, Green IoT can contribute to a more sustainable future. However, realising these benefits requires a balanced approach that weighs the trade-offs between QoS, security, energy efficiency, and cost, ensuring that IoT systems are both functional and eco-friendly.

Green IoT Applications

Green IoT applications leverage energy-efficient and sustainable technologies to address critical challenges in various domains. By optimising resources, reducing energy consumption, and integrating renewable energy sources, these applications contribute to environmental sustainability while enhancing efficiency and performance. The list of selected Green IoT Applications and their features are discussed below (figure 111).

Green IoT Applications
Figure 111: Green IoT Applications

Smart Grids

A smart grid is an energy distribution network integrating IoT technologies to monitor, manage, and optimise real-time electricity flow. Key features include:

  1. Energy monitoring and management: IoT-enabled sensors and devices monitor electricity usage patterns, helping utilities optimise energy distribution and reduce wastage.
  2. Demand-side management: Smart meters and IoT devices enable dynamic pricing and real-time feedback, encouraging consumers to use energy efficiently during off-peak hours.
  3. Integration of renewable energy: IoT systems facilitate the seamless integration of renewable sources like solar and wind into the grid by managing variability and storage.
  4. Fault detection and repair: IoT-based predictive maintenance systems detect and address network faults before they cause outages, saving energy and improving reliability.

Smart Agriculture

IoT applications in agriculture, often called precision agriculture, improve resource utilisation and environmental sustainability. Examples include:

  1. Soil and crop monitoring: IoT sensors measure soil moisture, nutrient levels, and crop health, enabling precise irrigation and fertilisation.
  2. Smart irrigation systems: Automated irrigation systems use IoT data to water crops only when needed, reducing water and energy consumption.
  3. Livestock monitoring: IoT-enabled collars and tags track the health and location of livestock, minimising resource use and ensuring timely intervention when needed.
  4. Climate monitoring: Weather stations equipped with IoT devices help farmers predict environmental conditions and optimise planting and harvesting schedules.
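
A smart irrigation controller like the one described above often reduces to a simple decision rule combining soil moisture with a rain forecast. The function name and threshold values here are assumed for illustration:

```python
def should_irrigate(soil_moisture_pct, rain_forecast_mm,
                    threshold_pct=30.0, rain_skip_mm=5.0):
    """Irrigate only if soil is dry AND no significant rain is expected."""
    return soil_moisture_pct < threshold_pct and rain_forecast_mm < rain_skip_mm

print(should_irrigate(22.0, 0.0))   # dry, no rain -> irrigate
print(should_irrigate(22.0, 12.0))  # dry, but rain coming -> skip, save water
print(should_irrigate(45.0, 0.0))   # soil moist enough -> skip
```

Skipping irrigation ahead of forecast rain is where most of the water and energy savings come from in such systems.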

Smart Manufacturing

Also known as Industry 4.0, smart manufacturing integrates IoT technologies to enhance efficiency and sustainability in production processes:

  1. Energy-efficient production: IoT sensors monitor machinery and processes, identifying areas where energy savings can be achieved.
  2. Predictive maintenance: By analysing sensor data, IoT systems predict equipment failures and schedule maintenance to avoid energy-wasting breakdowns.
  3. Resource optimisation: IoT-enabled systems track raw materials and energy use, reducing waste and ensuring sustainable resource management.
  4. Flexible manufacturing: IoT facilitates dynamic adjustment of production lines based on real-time demand, minimising overproduction and associated energy costs.

Smart Home

Smart homes utilise IoT technologies to improve energy efficiency, comfort, and security:

  • Energy-efficient appliances: IoT-enabled devices, such as smart thermostats, refrigerators, and lighting systems, optimise energy usage based on occupancy and usage patterns.
  • Home automation: Systems like Alexa, Google Home, or Zigbee-based networks manage energy usage by turning off lights, appliances, and HVAC systems when not in use.
  • Solar energy integration: Smart inverters and IoT monitoring systems enable homeowners to optimise the use of solar panels and battery storage.
  • Energy monitoring: Real-time data from IoT sensors helps homeowners track and reduce energy consumption, lowering utility bills and carbon footprints.

Intelligent Transport Systems

IoT applications in transport focus on creating efficient, sustainable, and intelligent mobility systems:

  1. Traffic management: IoT-enabled sensors and cameras monitor traffic in real-time, optimising traffic light systems and reducing congestion.
  2. Smart public transport: IoT systems provide real-time information on bus and train schedules, encouraging the use of public transport and reducing emissions.
  3. Fleet management: IoT technologies in logistics optimise routes, monitor vehicle health, and ensure efficient fuel use.
  4. Electric vehicle (EV) ecosystems: IoT-enabled charging stations provide real-time availability updates and optimise energy use by balancing grid demand.

Smart Cities

Smart cities integrate IoT solutions across various urban systems to improve sustainability and quality of life:

  • Smart waste management: IoT sensors in bins monitor fill levels and optimise waste collection routes, reducing fuel usage and emissions.
  • Energy-efficient buildings: IoT-enabled systems in commercial and residential buildings optimise energy use for lighting, heating, and cooling.
  • Air quality monitoring: IoT devices track pollutants, providing actionable data to improve urban air quality.
  • Public safety and infrastructure monitoring: IoT sensors monitor infrastructure health (e.g., bridges, roads) and enhance public safety through real-time alerts.
  • Water management: IoT technologies optimise water distribution, detect leaks, and monitor quality, ensuring sustainable usage.

Internet of Food (IoF)

IoF integrates many of the applications mentioned above. It enables tracking of food production and helps ensure food quality and proper nutrition:

  • Delivery chain tracking: Ensures that food origin, processing, and delivery channels match the records. This technology frequently uses RFID tags, QR codes, and blockchain.
  • Nutrition information: Enables consumers to carefully select the diet best suited to their needs.
  • Expiry-date tracking: Optimises food consumption on global and local scales, lowering the amount of wasted food.
  • Quality monitoring: Relates to delivery chain tracking and provides a mechanism to track counterfeit food and to simplify handling incidents such as mass intoxication.

Green IoT applications represent a vital step toward achieving a sustainable future. By enabling smarter resource use and reducing energy consumption across diverse domains, they address environmental concerns while improving functionality and efficiency.


[88] Partner Perspectives, “Moore's Law Is Dead. Where Is Energy Saving Heading in the Electronic Information Industry?”, https://www.lightreading.com/moores-law-is-dead-where-is-energy-saving-heading-in-electronic-information-industry/a/d-id/781014, 2022, accessed on Sept. 7, 2023