IOT-OPEN.EU Reloaded Consortium partners proudly present the Advanced IoT Systems book. The complete list of contributors is presented below.
This book is intended to provide readers with comprehensive knowledge of IoT systems design at a conceptual level. It covers IoT design methodologies, IoT system architectures, IoT data-related aspects, cybersecurity in IoT systems, blockchain in IoT, and green IoT.
Almost every top-level chapter of the book constitutes a separate study module related to a selected aspect of IoT system design, so this book can be treated as a set of separate guides as well as a solid and complete workbook for an entire master's-level IoT course.
The primary target groups of readers are master's students and industrial system designers such as CTOs. This book constitutes a comprehensive manual for IoT technology; however, it is neither a complete encyclopedia nor an exhaustive survey of the market. The reason is simple: IoT is such a rapidly changing technology that new devices, ideas and implementations appear daily. Once familiar with this book's contents, the reader will understand IoT systems design methodologies, tools and challenges. The book also covers topics of particular industrial interest, including but not limited to data analytics, cybersecurity and an introduction to blockchains, which illustrate the diversity of the IoT world and technology landscape.
Even though most of the content is presented at a conceptual level, the authors assume that readers have at least general knowledge of the technical details of IoT and embedded systems, including the main components at the electronics level, networking, data processing and security.
For this reason, students starting their journey with IoT systems should first read the “Blue Book”, which covers all the technical aspects of IoT needed as background knowledge for this high-level approach to IoT systems design.
Playing with real or virtual hardware and software is always fun, so keep going!
This book was developed under the umbrella of the following project:
Erasmus+ Disclaimer
This project has been funded with support from the European Commission.
This publication reflects the views only of the authors, and the Commission cannot be held responsible for any use that may be made of the information contained therein.
Copyright Notice
This content was created by the IOT-OPEN.EU Reloaded Consortium 2022-2025.
The content is copyrighted and distributed under the CC BY-NC Creative Commons licence, free for non-commercial use.
For commercial use, please get in touch with an IOT-OPEN.EU Reloaded Consortium representative.
In recent years (2024-2025), we have experienced rapid growth in the Internet of Things (IoT) domain, as expressed by the number of scientific publications, the market volume, and other indicators suggesting that IoT is here to stay. IoT is one of the top priorities in Horizon Europe’s Research and Innovation strategic plan, which, among different thematic areas, recognises IoT as one of the most important under the Technology thematic group [European Commission, Directorate-General for Research and Innovation, Synopsis report – Looking into the R&I future priorities 2025-2027, Publications Office of the European Union, 2023, https://data.europa.eu/doi/10.2777/93927]. Given the importance of IoT technologies, how can one contribute to the domain by developing, using and designing IoT systems for different applications? This book, “The Green Book”, which continues the previous one, “Introduction to the IoT”, provides the background needed for design methods, IoT data analysis, cybersecurity essentials, and other vital topics.
The book is organised into the following chapters:
IoT systems are networked cyber-physical systems (CPS) and include components from three main domains: hardware, primarily electromechanical devices; software, mostly microcontroller-specific process control software; and communication infrastructure. To develop an IoT solution, all aspects of these three domains must be designed in close synergy. At the component level, the main building block of an IoT system is a node. The node is usually a microcontroller-based device dedicated to performing a specific task. The most common task is to perform measurements of the environment, but a node can also act as an actuator or a user interface. In addition, IoT nodes can provide all kinds of supportive functions, like logging, timekeeping, storage, etc. However, the core functionality still falls into three categories: sensing the environment, actuating, and interfacing with humans, i.e., user interfaces. Today, CPS are created by expanding mechatronic systems with additional inputs and outputs and coupling them to the IoT. In principle, an IoT system is similar to classical smart systems, e.g., robots or mechatronic systems. These systems can be decomposed into three interconnected domains: process control by software, mechanical movements, and sensing of physical parameters from the system's environment. The figure below (2) demonstrates how these domains are interconnected to act as a smart system.
The IoT system has a similar purpose to general smart systems. Still, the main difference is that an IoT system is a distributed solution of smart functions built on internet infrastructure. Similar functionality is decomposed into smaller devices, each acting as a single-function device rather than a complex system. Nevertheless, when all the small nodes are interconnected and can exchange messages with each other regardless of their location, we get a powerful system dedicated to performing automation tasks in vast application domains. The following figure represents the IoT system architecture, its distributed nature, and its communication function.
Even if the IoT system has a different component architecture from a regular mechatronic system, the development methodologies can be easily adapted from the domains of mechatronic systems and software system design. IoT systems have their specifics, but at the conceptual level, they are like any other smart software-intensive system. Thus, the methodologies are not IoT-specific but combinations and adaptations from related domains.
The product development process is a well-established domain with many different concepts. Over time, as the share of software in technical systems has increased, more and more software development methodologies have been integrated into the physical product development process. IoT systems are similar at the component level to cyber-physical systems, combining the characteristics and features of mechatronic, software and network systems. Thus, existing product design methodologies are also logical choices to apply to the IoT system design process. Regardless of the product's nature, the general product design process involves several iterations through the design stages.
The classical product design process starts with requirements analysis, followed by conceptual design. When the design candidate is selected, the detail design stage develops domain-specific solutions, e.g., mechanical, electrical, and software. The next stage is integrating the domain-specific design results into one product and validating the solution. In addition, the product design process must deal with manufacturing preparation, maintenance and utilisation planning. Figure 4 illustrates the general process for most technical system designs, regardless of the application field. However, depending on the system specifics, several other relevant stages and procedures might be essential to pass.
IoT systems are a combination of mechatronic and distributed software systems. Therefore, design methodologies from these domains are the most relevant for IoT systems. For example, the well-known V-model (figure 5) has long been used for the software development process but has also been adapted to the mechatronic design process. The Association of German Engineers has issued the guideline VDI 2206 - Design methodology for mechatronic systems (Entwicklungsmethodik für mechatronische Systeme) [1]. This guideline adopts the V-model as a macro-cycle process. The V-model is in line with the general product design stages but emphasises verification and validation throughout the whole development process. The processes are executed sequentially, in a V shape, hence the name. The actual design process runs through several V-shaped macro-cycles, and every cycle increases the product's maturity. For example, the output of the first iteration can be just an early proof-of-concept prototype, while the output of the last iteration is a ready-to-deploy system. The number of iterations needed depends on the complexity of the final product. The figure below presents the IoT system design adapted to the V-model. The only difference from mechatronic systems is the domain-specific design stage. However, every general stage has several internal procedures and IoT-specific sub-design stages that must be addressed.
New product development starts with customer input or other motivation, e.g., a business case, which must be carefully analysed and specified in a structured way. Requirements are not always clearly defined, and effort put into proper requirements engineering pays off by saving significant work in later design stages. It is not good practice to start designing a new system or solution when requirements are not adequately defined. At the same time, rarely is all information available initially, and requirements may be refined or even changed during the design process. Nevertheless, well-defined and analysed requirement specifications simplify the later design process and reduce the risk of expensive change handling at later stages. The initial requirements are articulated from the stakeholders' perspective, focusing on their needs and desires rather than the system itself. In the subsequent step, these requirements are translated into a system-oriented perspective. The specification resulting from the requirements elicitation process provides a detailed description of the system to be developed.
The second design stage is system architecture and design, which is dedicated to developing concepts for the whole system. Concept development and evaluation are decomposed into several sub-steps and procedures, for example, developing different concept candidates, assessing them, and selecting the best concept solution for further development. Once the concept solution is selected and validated against the requirements, the final solution candidates can be frozen, and the development enters the detailed design stage. In the detailed design stage, domain-specific development occurs, including hardware, software, network structure, etc. Integration and validation follow once the domain-specific solutions have reached the specified maturity. The final step before the first prototype solution is complete system testing and, again, verification and validation against the system requirements.
The whole process may be repeated as often as necessary, depending on the final system's maturity level. If only a proof of concept is needed, one iteration might be enough, which is frequently the case for educational projects. However, for real customer systems, many V-cycle iterations are usually performed. Once the design process is completed, the system enters the production stage, and the focus then shifts to system/user support and maintenance. However, as in modern software-intensive systems, constant development, bug fixes, upgrades, and new feature development are standard practice.
When designing an IoT system, there are common design challenges, as in any other system engineering project, but also a few IoT-specific aspects. The engineering team must deal with difficulties similar to those of mechatronic and software system design. Some vital elements to address when designing and deploying a new IoT system are:
The following chapters contain more details:
It is expected that billions or trillions of IoT devices will be deployed in various sectors of society and the economy (e.g., intelligent transport systems, smart health care, smart manufacturing, smart homes, smart cities, smart agriculture, and smart energy) to deliver a better customer experience, provide more value to the market, and solve significant problems such as climate change, national security, and public safety. Integrating massive numbers of IoT nodes, networking nodes, and computing devices or applications into the existing infrastructures of various industries will increase their complexity. It is, therefore, essential to follow some design principles to ensure that IoT systems designed to solve problems or create unique value in the various industries are adequately designed to fulfil their intended functions and are easier to operate, maintain, and scale.
IoT system design has its own set of challenges as IoT systems often contain multiple components or elements (e.g., sensors and actuators, cyber-physical devices, networking nodes, computing nodes) interacting with one another to collect data, manipulate physical systems, transport data packets, and analyse the collected data to deliver better customer experience, create value, or solve a specific problem. Below are some practical IoT system design principles that should be considered when designing IoT systems.
Before designing an IoT solution, it is essential to understand the customers' problems or challenges. The designer must think from the perspective of the customers and then design a research study to understand their problems and the existing solutions they have. Then, the designer must find out how IoT solutions can address those challenges. Only after understanding the actual problem the customers are facing, and how IoT solutions could address it, should IoT system designers engage in developing a solution.
An IoT system may be designed not only to solve a problem or pain point that potential customers are experiencing but also to create unique value. Innovative IoT solutions could create exceptional value that makes their potential customers productive and competitive. IoT system designers must understand the unique value that their system or solution is going to offer to their potential customers to improve their productivity, competitive advantage, or user experience. It is, therefore, necessary to conduct proper research before engaging in the project.
The research process could include defining research questions, defining the market segment, sending out questionnaires to potential customers, conducting interviews with relevant stakeholders in the target market, talking with sales representatives of potential customers, and attending industry conferences. The research findings should be well documented and analysed by all the stakeholders and the design team before the IoT project is launched so that the designers can cater to the customers' needs during the design process.
The features to be included in the IoT solution should align with users' needs and problems and the value they can derive from the products to improve their productivity, competitive advantage or experience. The users are sometimes unaware of the value of IoT solutions or how they could address some of their problems, making them reluctant to adopt IoT solutions. Another barrier preventing users from adopting IoT solutions is uncertainties regarding cost, usability, returns on investments, and security concerns. Thus, the design team is responsible for addressing those user concerns when designing IoT solutions.
It is essential to answer the following questions:
Addressing the above questions carefully during the research and technical design stages is essential. Thus, when designing IoT systems, focusing on the users' values, needs, and problems is crucial.
The Internet of Things (IoT) is still in its early stages. We still have the opportunity to ensure that IoT systems are scalable, energy efficient, cheap, and secure by design while providing acceptable QoS. Another design requirement for IoT systems is interoperability. A holistic system-based approach is required to attain all these design goals and the goals of other stakeholders (network operators, service providers, regulators, and end users). There is a need for the development of formal methods and tools for the design, operation, and maintenance of IoT systems, networks, and applications in such a way as to satisfy the goals of the various stakeholders with minimal unintended consequences.
An IoT system often consists of multiple elements, such as the cyber-physical systems (sensor and actuator devices) deployed to collect data from the environment and to manipulate physical systems, the communication systems deployed to transport data within the IoT infrastructure, and the computing systems deployed to process the massive amounts of data collected by the sensors and send feedback to actuators to automate physical processes, or to human operators to make decisions (or take actions). One of the elements of the IoT infrastructure is the cybersecurity system, which should interact with the other systems within the IoT infrastructure to deliver the required service. An IoT system is sometimes designed to interact with others to provide a specific value or solve a particular problem. It is, therefore, essential to adopt a system-based approach when designing IoT systems to ensure that the interaction between the various IoT elements and the other existing systems of the organisation or users delivers the expected value or addresses the problems they are designed to solve. System thinking, design thinking, and systems engineering methods and tools can be leveraged to develop formal tools for designing IoT systems.
Users are concerned about possible security weaknesses that could appear in their infrastructure after integrating IoT solutions. IoT system designers should incorporate security mechanisms into their solutions to address the users' security concerns. Sometimes, IoT system designers are preoccupied with implementing features that are required to address customers' problems or deliver the expected value to customers. They may ignore the implementation of features that address customers' security concerns. Some IoT device manufacturers and service providers are often preoccupied with minimising manufacturing and deployment costs and the “time-to-market” such that security concerns are ignored or considered later.
Securing an IoT infrastructure's data, hardware, and software assets is essential and should be considered when designing IoT infrastructures. IoT system designers should treat a robust cybersecurity system as a subsystem within the IoT system being designed and consider how it will interact with the other subsystems to deliver a secure IoT solution to the users. The IoT cybersecurity system consists of multiple elements that work together to provide an effective security solution to protect the data and other IT assets within an IoT infrastructure. Some of the cybersecurity features that should be considered when designing IoT solutions include:
A significant security weakness in IoT infrastructures is often at the IoT device level. Because the batteries that power these devices have a limited energy capacity, their computing and communication capabilities are minimal, making it difficult to implement reliable but sophisticated security mechanisms. As a result, it is easy to compromise these devices to disrupt IoT services and sometimes to turn them into an army of botnets that conduct massive, sophisticated distributed denial-of-service attacks on the IoT infrastructure as a whole and on the Internet. Maintaining a rational trade-off between performance, energy consumption, and security is essential.
The IoT security threats to be considered during IoT system design are not only those from external attackers but also those from internal ones, and there should be a mechanism to deal with internal threats. Internal threats could come from disgruntled employees (users) as well as reckless or careless ones who may perform operations that breach or compromise some of the IT assets within the IoT infrastructure. Therefore, the IoT system designer must understand every possible error that may occur when operating IoT systems, take care of them when designing the IoT solution, and ensure that the users are aware of such errors and well-equipped to handle them.
The security aspects to be considered when designing IoT systems include not only cybersecurity but also physical security. The physical security of IoT infrastructures should be considered when designing and deploying them, and adequate measures should be designed to address threats to the physical security of IoT devices.
Energy and environmental sustainability are among the essential constraints to consider when designing and deploying IoT infrastructures. Since IoT devices are designed to be small, light, and powered by small batteries with limited energy capacity, energy efficiency is a primary design criterion when developing IoT devices. To reduce the energy consumption of IoT devices to a minimum, low-power communication and networking technologies, low-power computing hardware and software, and low-power security mechanisms are incorporated into IoT devices. As the amount of data collected by IoT devices from the environment increases, so does the traffic transported through the networking infrastructure to edge/fog/cloud computing nodes or data centres, and with it the energy consumed for data communication and computing. The increase in energy consumed by IoT infrastructures increases the carbon emissions of the IoT industry, which grow sharply with the rapid, large-scale adoption of IoT in the various sectors of the economy.
In addition to energy efficiency, it is essential to minimise the amount of waste the IoT industry creates. IoT devices are powered by batteries with very limited energy capacity, resulting in a very short lifetime for IoT devices (the lifetime of an IoT device is the time to deplete all the energy stored in its battery, requiring a recharge or a battery replacement). If IoT batteries have to be replaced within a very short time (less than a decade), then with the deployment of tens of billions or trillions of IoT devices globally, there will be a problem of how to dispose of or recycle the IoT batteries. There is already an environmental problem in managing the massive amount of batteries and e-waste the electronics industry generates. The problem will worsen if environmental sustainability is not considered as one of the design criteria when designing IoT devices. Some of the green and environmental sustainability strategies that should be considered when designing IoT devices include:
When designing IoT solutions, it is essential to consider the physical, social, and environmental context in which the device will be used. The features and specifications of IoT devices depend on the context of the application. IoT systems intended for smart agriculture, smart cities, smart health care, smart homes, intelligent transport systems, the Internet of military things (Military Internet of Things (MIoT) or Battlespace Internet of Things (BIoT)), or smart energy should take into consideration the physical or social realities that may impact the integration of IoT systems into a given sector to fulfil a defined goal or purpose. For example, IoT devices designed for agricultural, disaster/emergency response, or battlefield purposes should operate sustainably in harsh conditions that differ from those faced by IoT devices designed for smart homes or medical and health care purposes.
To consider the application context, it is recommended to treat the entire IoT use case as a system of which the IoT system being designed is part. In this way, the interaction between the IoT system being designed and the other existing systems in the sector (e.g., cities, homes, factories, transportation infrastructure, health care infrastructures, etc.) is modelled using systems engineering or system dynamics modelling tools to ensure that the system of which the IoT system being designed is part functions as a whole. Integrating IoT systems into an organisation's existing infrastructure may create new problems that did not previously exist or may not benefit the organisation. Hence, it is essential to consider the application context and apply a system-based approach when designing IoT systems or solutions.
IoT devices collect massive amounts of data from the environment, which should be carefully managed to ensure data privacy and prevent the abusive use of personal data. Incorporating IoT devices into critical infrastructure such as energy, water, transportation, and health care poses a national security risk for most countries, reinforcing the case for effective data management. The collected IoT data should be adequately protected during processing, transmission, and storage in compliance with data security regulations and standards.
Data ownership issues, the kind of data that should be collected, and what the IoT service provider is permitted to do with the data should be considered when designing IoT solutions. The designers should ensure they comply with existing regulations or standards on data collection, management, and processing. Hence, the designers should ensure that the data of users is effectively managed by answering the following questions:
The IoT market is growing steadily, requiring IoT systems to be designed with the possibility of quickly scaling them up with increasing demand for IoT services. When developing IoT systems, it is essential to anticipate future growth and expansion and then provide the flexibility to expand the infrastructure to add more resources to meet the increase in service demand. Scalability and flexibility can be ensured by implementing a modular and flexible architecture that can be adapted to satisfy the growing demand. Also, the hardware, software, computing, networking, energy, and security choices should be made in such a way as to ensure that the designed IoT systems can handle current demand and future growth in data volume, traffic, and computing demand as demand for IoT services increases.
Interoperability and compatibility are significant barriers to ensuring scalability and flexibility when designing IoT systems. To ensure scalability, the IoT systems should be designed to integrate and interoperate seamlessly with the existing infrastructure of the organisation and those of other partners. The hardware and software design choices should be made in such a way as to ensure interoperability and compatibility so that it will be easier to scale up the IoT infrastructure. That is, “plan carefully, choose wisely, and design intelligently for a successful IoT system” should be the driving philosophy in IoT systems design [2].
The user interface of an IoT system should be intuitive, user-friendly, and simple enough for users to operate the system with minimal difficulty. To compete with other IoT products on the market, the system being designed should be simple and relatively easy to operate. Users are often reluctant to adopt complex products that are difficult to use, manage, or maintain, and they quickly drop such products, whereas they readily adopt simple products that are easy to use, operate, and maintain. It is essential to follow IoT design thinking principles that facilitate the design of IoT systems with intuitive, user-friendly, and simple user interfaces, and an IoT designer should prioritise simplicity and clarity to improve the users' experience.
Testing and quality assurance are essential phases in the IoT system development life cycle. Testing and quality assurance enable the development of IoT systems that meet and satisfy the customers' needs, provide satisfactory performance, and are compatible and interoperable with existing IoT systems and other IT infrastructures of organisations. Comprehensive testing and quality assurance inspection plans developed during the IoT system design phase ensure that stress tests and audits can be carried out to ensure that the design goals (performance, security, sustainability, interoperability, cost, etc.) and national (or regional) regulatory rules or standards are fulfilled.
Effective performance test plans can ensure that the designed IoT system can withstand high stress and still provide users with acceptable service and experience. Security tests and audits enable IoT system designers and developers to identify potential vulnerabilities and threats and to ensure compliance with security regulations and standards. Effective testing and quality assurance plans can also ensure the compatibility and interoperability of the designed IoT system with other IoT systems (devices and networks), which is essential for seamless integration to deliver the desired quality of service and experience to the users. Therefore, by implementing robust testing procedures, IoT system designers can ensure that the IoT system they are designing meets the highest standards of quality and reliability [3], satisfying the needs of their users and their performance expectations.
An effective deployment, operation and maintenance plan is essential to ensure that the IoT systems being designed are cost-effective and affordable, providing the users with reasonable returns on their investments. Every stage of the IoT system development cycle should be carefully planned to minimise the design, manufacturing, deployment, operation, and maintenance costs. It is recommended to document the deployment, operation, and maintenance procedures carefully so that the IoT systems or infrastructure can easily be deployed, operated, and maintained, requiring minimal intervention and human resources.
In IoT applications where thousands, tens of thousands, or millions of IoT devices are deployed and spread across a wide geographical area, deployment, operation, and maintenance procedures are tedious and costly. Effective deployment, operation, and maintenance plans and tools are essential to ensure acceptable performance (reducing downtime and improving the QoS or QoE). Monitoring and preventive maintenance plans to prevent failures or breakdowns and reactive maintenance plans to restore the system after breakdowns to reduce downtime should be carefully designed and documented. Expansion or scalability plans should be created to enable cost-effective expansion and extension of the IoT system to handle more users or to satisfy customers' expectations.
It is essential to develop training and support plans to ensure that the users are well trained and supported to effectively use and manage the designed IoT system to satisfy their needs. Reducing the need for human intervention is essential to keep costs low. Deployment, operation, and maintenance tasks should be automated, especially for large-scale IoT infrastructures. Automation reduces deployment, operation, maintenance, security monitoring, and response costs. IoT devices should be able to operate for decades without needing maintenance or replacement of parts. Therefore, IoT system designers should ensure that the deployment, operation, and maintenance costs are as low as possible.
In the early stage of the IoT system development life cycle, developing a working prototype that is well-tested and satisfies the users' needs may be necessary. A well-tested and working prototype is required before mass production or deployment of the IoT system. Developing a working prototype before mass production or deployments helps resolve many functional, performance, security, deployment, maintenance, and sales issues, increasing the chances of success and long-term adoption and sustainability for the IoT product or project.
When a working prototype is created, several iterations may be required to improve the product to satisfy the organisation's or users' needs. The prototype should meet the required design goals (functionalities, performance, security, scalability, interoperability, and sustainability goals) before the system can be mass-produced or deployed. Therefore, getting the product or solution right is essential through the rapid and iterative development of a complete working prototype that satisfies every technical and user design goal.
The various use case applications where the IoT system is deployed should provide user feedback that can be used to improve the product or solution. Users may expect or require features absent from the developed system or solution, and IoT designers should be able to improve their designs to cater to those needs or requirements. The users may also use the designed system in ways that the designers did not expect, so the designers should have a mechanism to follow up with the users and learn the various ways and contexts in which the systems are being used. Therefore, the ideas from user feedback should be used to improve the design and adapt the system to satisfy the needs of its users.
IoT (Internet of Things) systems represent a convergence of hardware, software, and networking technologies to create seamless, intelligent solutions for various applications. To achieve their full potential, IoT systems must be designed with clear and comprehensive goals that ensure robustness, user-friendliness, scalability, and security. Here’s a detailed exploration of the primary design goals for IoT systems (figure 6):
User satisfaction is the cornerstone of IoT design, ensuring systems deliver intuitive, accessible, and valuable experiences. Achieving high user satisfaction requires the following:
1. Ease of Use: Interfaces and interactions should be simple and require minimal learning. Intuitive designs reduce user frustration and increase adoption rates. Tools like user testing, usability studies, and iterative feedback loops are critical in refining systems to align with user expectations.
Example: A smart thermostat with a user-friendly mobile app allows users to control home temperatures effortlessly, even remotely.
2. Reliability: Consistent performance is key to building trust. IoT devices must operate seamlessly without frequent failures, downtime, or lag. High reliability enhances user confidence and system usability.
3. Customisation and Personalisation: IoT systems should cater to individual user preferences. Features like custom schedules, modes, or settings enable personalisation, enhancing the perceived value of the system.
Example: Smart lighting systems allow users to adjust brightness and colour based on mood or activity.
4. Accessibility: Designs must accommodate diverse user abilities. Accessibility features, such as voice commands or compatibility with assistive technologies, ensure inclusivity.
Security is a non-negotiable aspect of IoT systems, as they often handle sensitive data and are susceptible to cyber threats. Security measures should be integrated into the design phase to ensure:
IoT systems generate immense volumes of data, making efficient management and strict privacy protection paramount.
1. Data Minimisation: Collect only the data necessary for functionality, reducing privacy risks and simplifying data storage and processing.
2. Data Anonymisation: Implement anonymisation techniques to protect user identities while enabling data analysis (a minimal sketch follows this list). Example: Anonymising health data from wearables to comply with regulations like GDPR.
3. Secure Storage: Encryption and access controls should be used to protect stored data on devices, local servers, or in the cloud.
4. Transparency: Clearly communicate to users how their data will be collected, used, and shared. Transparency fosters trust and compliance with legal standards.
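To make the anonymisation idea concrete, below is a minimal Python sketch of one common pseudonymisation technique: replacing direct identifiers with salted hashes. The record layout and field names are invented for the example, and real GDPR compliance involves far more than this single step.

```python
import hashlib
import os

# Per-deployment secret salt; without it, the tokens cannot be linked
# back to the original identifiers by a simple dictionary attack.
SALT = os.urandom(16)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

record = {"user": "alice@example.com", "heart_rate": 72}  # illustrative record
record["user"] = pseudonymise(record["user"])
print(record)  # e.g. {'user': '9c1d...', 'heart_rate': 72}
```

Because the same salt yields the same token for the same identifier, analysts can still correlate readings per (pseudonymous) user without ever seeing the raw identity.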
With growing environmental concerns, sustainability is a critical consideration in IoT system design:
1. Energy Efficiency: Optimise devices to consume minimal energy, extending battery life and reducing electricity usage. Employ low-power communication protocols like Zigbee or LoRaWAN.
2. Sustainable Materials: Use recyclable, biodegradable, or eco-friendly materials to reduce the environmental footprint.
3. Lifecycle Management: Design systems with end-of-life considerations, including recycling or safe disposal of components.
4. Adaptive Energy Use: Employ strategies like sleep modes for devices to conserve energy when idle.
IoT solutions should balance affordability with quality to promote widespread adoption.
1. Affordable Components: Use reliable, cost-efficient hardware to reduce production costs without sacrificing performance.
2. Optimised Manufacturing: Streamline manufacturing processes through modular designs or economies of scale.
3. Low Maintenance Costs: Design self-maintaining systems or those requiring minimal intervention to reduce long-term costs.
IoT systems must accommodate future growth and evolving user needs.
1. Modular Architecture: Design systems with modular components that can be upgraded or expanded without overhauling the entire solution (see the sketch after this list).
2. Interoperable Standards: Use open standards and protocols to ensure compatibility with devices from different manufacturers.
3. Dynamic Resource Management: Implement mechanisms to allocate resources dynamically based on demand, ensuring optimal performance as the system grows.
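As a rough illustration of the modular-architecture principle, the Python sketch below hides sensor drivers behind a small abstract interface so that new device types can be added without changing the rest of the system. All class and function names are hypothetical.

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """A stable module boundary: new sensor types plug in here
    without changes elsewhere in the system."""
    @abstractmethod
    def read(self) -> float:
        ...

class TemperatureSensor(Sensor):
    def read(self) -> float:
        return 21.5  # placeholder for a real hardware driver call

def poll(sensors: list[Sensor]) -> list[float]:
    # The polling logic depends only on the interface, not on any
    # concrete device, which is what makes the design expandable.
    return [sensor.read() for sensor in sensors]

print(poll([TemperatureSensor()]))  # [21.5]
```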
Seamless connectivity is fundamental for IoT systems to operate effectively.
1. Network Resilience: Incorporate failover mechanisms to maintain operations during network disruptions.
2. Low-Latency Communication: Real-time data transfer is critical for applications like autonomous vehicles; technologies like 5G and Wi-Fi 6 address these needs.
3. Edge Computing Integration: Process data locally to reduce reliance on central servers, improving reliability and responsiveness.
4. Protocol Optimisation: Use IoT-specific protocols like MQTT and CoAP, tailored for low-power and constrained environments.
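As a minimal illustration of an IoT-specific protocol in use, the sketch below publishes one sensor reading over MQTT using the paho-mqtt client library (1.x-style API). The broker address, topic, and payload fields are hypothetical.

```python
# pip install paho-mqtt
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()                       # paho-mqtt 1.x constructor
client.connect("broker.example.com", 1883)   # hypothetical broker address

payload = json.dumps({"device": "sensor-42", "temp_c": 21.5})
# QoS 1 asks the broker to confirm at-least-once delivery, a common
# compromise between reliability and radio time on constrained links.
client.publish("home/livingroom/temperature", payload, qos=1)
client.disconnect()
```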
Energy efficiency enhances device longevity and reduces operational costs.
1. Low-Power Hardware: Select components optimised for minimal energy consumption, such as microcontrollers with sleep modes.
2. Adaptive Power Management: Adjust energy usage based on real-time activity levels.
3. Energy Harvesting: Incorporate technologies that harness energy from ambient sources, such as solar or kinetic energy, to extend device life.
Interoperability ensures seamless communication and collaboration across diverse devices and platforms.
1. Standardised Protocols: Enable communication across systems using common protocols like MQTT, HTTP/HTTPS, and CoAP.
2. Open APIs and SDKs: Facilitate integration by providing developers with tools for building complementary services.
3. Middleware Solutions: Employ middleware to aggregate and harmonise data from different devices, ensuring compatibility and ease of management.
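In practice, such a middleware layer often boils down to per-vendor adapters that map device-specific payloads onto one internal schema. The sketch below illustrates the idea; the vendor names, payload formats, and units are invented for the example.

```python
def normalise(vendor: str, raw: dict) -> dict:
    """Map a vendor-specific payload onto one internal schema."""
    if vendor == "acme":      # hypothetical vendor reporting {'t': 21.5} in Celsius
        return {"temp_c": raw["t"]}
    if vendor == "globex":    # hypothetical vendor reporting Fahrenheit
        return {"temp_c": (raw["temperature_f"] - 32) * 5 / 9}
    raise ValueError(f"unknown vendor: {vendor}")

readings = [("acme", {"t": 21.5}), ("globex", {"temperature_f": 70.7})]
print([normalise(vendor, raw) for vendor, raw in readings])
```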
IoT design goals are the foundation for developing resilient, efficient, and user-centred solutions. IoT systems can address current challenges by prioritising security, scalability, sustainability, and interoperability while remaining adaptable to future advancements. This comprehensive approach ensures that IoT solutions meet user expectations and align with broader societal and environmental objectives.
The Internet of Things (IoT) transforms industries, lifestyles, and economies by enabling interconnected devices to collect, share, and act on data. However, its rapid expansion is accompanied by significant technical, economic, and societal challenges. Below, we delve deeper into these issues, exploring their nuances and potential mitigation strategies (figure 7).
IoT devices often rely on compact, energy-constrained power sources, such as batteries or capacitors, to function. These energy storage systems have limited capacities, and once depleted, the devices shut down unless recharged or replaced. Managing the energy needs of hundreds or thousands of such devices in an IoT ecosystem becomes a significant logistical and financial burden.
Design Constraints and Strategies
1. Minimising Energy Consumption:
IoT device design prioritises energy efficiency to prolong operational lifetimes and reduce maintenance costs. Common strategies include:
2. Energy Management:
Mechanisms such as sleep modes or duty cycling are integrated to deactivate idle components, thereby conserving energy (see the sketch after this list). However, this often compromises quality of service (QoS). Striking a balance between energy savings and performance remains a design challenge.
3. Energy Harvesting:
Incorporating energy harvesting systems (e.g., solar, thermal, or kinetic energy) can supplement energy needs, reducing reliance on batteries. Yet, these systems face limitations, including intermittent energy availability and integration challenges due to size and weight constraints.
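To make the duty-cycling idea from point 2 concrete, here is a minimal sketch of a sense-transmit-sleep loop. On a real constrained node the idle phase would be a hardware deep-sleep call that powers down the radio and CPU; here an ordinary sleep stands in, and the sensor value and timing budget are placeholders.

```python
import time

SAMPLE_PERIOD_S = 5.0   # wake interval (shortened for the demonstration)
ACTIVE_BUDGET_S = 0.5   # time spent sensing and transmitting per cycle

def read_sensor() -> float:
    return 21.5         # placeholder for a real driver call

for _ in range(3):                    # three duty cycles for the demo
    value = read_sensor()             # short active phase
    print(f"sample: {value}")         # stand-in for a radio transmission
    # On real hardware this would be a deep-sleep call; the node is
    # effectively off for most of the period, which dominates savings.
    time.sleep(SAMPLE_PERIOD_S - ACTIVE_BUDGET_S)
```

The ratio of active time to total period (here 0.5 s out of every 5 s) is the duty cycle, and lowering it is the single most effective lever for battery life, at the cost of slower reaction times: the QoS trade-off noted above.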
Data is the backbone of IoT systems, making robust connectivity essential. IoT devices primarily rely on wireless networks to communicate, which introduces complexities in ensuring reliability, speed, and cost-efficiency.
Challenges in Connectivity
1. Network Performance Trade-offs:
Energy-efficient protocols (e.g., BLE, Zigbee, LoRaWAN, and Sigfox) often compromise throughput, latency, and reliability, leading to packet delays, losses, or collisions. Balancing energy efficiency and network performance is a core challenge.
2. Scalability in Dense Deployments:
In urban areas, where wireless networks overlap, interference and bandwidth limitations degrade performance. This is especially critical for real-time IoT applications like healthcare monitoring or autonomous systems.
3. Cost of Connectivity:
Small and medium-sized businesses often struggle with the high costs of maintaining IoT networks. Reducing operational expenses without compromising connectivity quality is a priority.
Solutions to Connectivity Challenges
With billions of IoT devices deployed globally, the systems' energy demands and environmental impact have become significant concerns.
Energy and Environmental Challenges
1. Massive Energy Demand:
IoT devices, networks, and data centres collectively require substantial energy, increasing their carbon footprint.
2. Sustainability Concerns:
Mitigation Strategies
The diversity of hardware, software, and communication protocols in IoT ecosystems creates significant interoperability challenges, especially when integrating devices from multiple vendors.
Challenges
Solutions
The absence of universal IoT standards impedes collaboration and innovation while increasing security vulnerabilities.
Regulatory Challenges
Steps Forward
IoT systems are prone to cyber threats due to their distributed nature and resource-constrained devices.
Security Concerns
Mitigation Strategies
The debate over data ownership is complex, involving technical, legal, and ethical dimensions.
Key Challenges
Proposed Solutions
High design, deployment, and maintenance costs can discourage IoT adoption, particularly among smaller organisations.
Balancing Cost and Quality
The success of IoT systems depends on their perceived value and ease of use.
Challenges in Adoption
Solutions
The potential of IoT to revolutionise industries and improve quality of life is immense. However, its growth depends on addressing hardware, connectivity, security, sustainability, and adoption challenges. By focusing on innovative solutions, robust governance, and stakeholder collaboration, the IoT ecosystem can overcome these hurdles and achieve its transformative potential.
The Internet of Things (IoT) is still in its formative phase, presenting a critical window of opportunity to design and implement IoT systems that are scalable, cost-effective, energy-efficient, and secure. These systems must be developed to deliver acceptable Quality of Service (QoS) while meeting essential requirements such as interoperability and seamless integration across different devices and platforms.
Achieving these ambitious design objectives requires a comprehensive, system-based approach that considers the diverse priorities of various stakeholders, including network operators, service providers, regulatory bodies, and end users. Each group brings its requirements and constraints, and balancing these is essential to ensure the system's overall success.
To support this, there is a significant need for the development of robust formal methods, advanced tools, and systematic methodologies aimed at designing, operating, and maintaining IoT systems, networks, and applications. Such tools and methods should be capable of guiding the process to align with stakeholder goals while minimising potential unintended consequences. This approach will help create resilient and adaptive IoT ecosystems that meet current demands and are prepared for future technological advancements and challenges.
System thinking, design thinking, and systems engineering methodologies provide powerful frameworks for developing formal tools for designing and deploying complex IoT systems. These interdisciplinary approaches enable a comprehensive understanding of how interconnected components interact within a larger ecosystem, allowing for the creation of more resilient, efficient, and effective IoT solutions.
A practical example of leveraging these methodologies can be found in the work referenced in [4], where system dynamics tools were applied to design IoT systems for smart agriculture. Researchers constructed causal loop diagrams in this study to map and analyse the intricate interplay between multiple factors impacting rice farming productivity. By visually representing the causal relationships within the agricultural system, they identified key drivers and dependencies that influence outcomes. This insight allowed them to propose an IoT-based smart farming solution to optimise productivity through data-driven decision-making informed by these interdependencies.
The value of system dynamics and systems engineering tools extends beyond smart agriculture. These methods can simplify the design and analysis of complex IoT systems, networks, and applications across various sectors. They offer a structured way to break down the complexity of interconnected systems, ensuring that the resulting IoT solutions are not only cost-effective and reliable but also secure and energy-efficient. This approach ensures that the needs of diverse stakeholders, including developers, network operators, regulatory bodies, and end-users, are met effectively.
Moreover, system dynamics tools have proven beneficial in educational contexts, particularly for teaching IoT courses. Educators can help students grasp the complexity of IoT systems and concepts more intuitively by adopting a system-centric approach. This holistic teaching method supports learners in understanding how various components and processes interact within an IoT ecosystem, thereby fostering a deeper comprehension of the subject matter and preparing them for real-world IoT challenges, as demonstrated in the findings of [5].
While numerous IoT-based systems are being individually developed and tested by practitioners and researchers, these efforts often fall short of addressing the practical reality that IoT systems must ultimately interact with each other and human users. This interconnectedness underscores the need for a holistic, system-centric design methodology to manage IoT systems' complexity and interdependencies effectively. The design of these systems should move beyond isolated functionalities to consider the broader ecosystem in which they operate, including human interaction, cross-system communication, and scalability.
Several studies have ventured into leveraging methods and tools to design IoT systems—for example, research referenced in [6] utilised causal loop diagrams to study the intricate interactions between systems and stakeholders, identifying key feedback loops influencing productivity. This approach provided actionable insights and recommendations on improving efficiency and performance within specific applications, such as smart agriculture. Using causal loop diagrams in such studies highlights the importance of visualising and understanding complex IoT ecosystems' relationships and feedback mechanisms.
However, it is crucial to incorporate both qualitative and quantitative system dynamics tools to advance IoT systems' design and operational robustness. While causal loop diagrams are practical for modelling qualitative interactions and identifying feedback structures, quantitative methods are needed to simulate and analyse the dynamic behaviour of IoT systems under various conditions. Integrating both approaches makes it possible to model the structure and the real-time, data-driven interactions among different IoT components.
This highlights the urgent need to develop a comprehensive, multi-faceted framework that blends system thinking, design thinking, and systems engineering tools. Such an integrated approach would support the end-to-end design, operation, and maintenance of IoT systems, networks, and applications. The goal would be to create systems that align with the objectives of various stakeholders, including developers, service providers, network operators, regulators, and end-users while minimising unintended consequences such as system inefficiencies, vulnerabilities, or user dissatisfaction.
System thinking enables a broad, interconnected view that helps identify and understand the relationships and dependencies across components. Design thinking ensures that solutions are user-centric, addressing real needs through iterative prototyping and feedback. Systems engineering brings discipline and structure, employing established methodologies and tools to optimise system performance and reliability.
IoT systems can be designed to be technically proficient, adaptable, scalable, and aligned with stakeholder needs by developing a framework that synergises these approaches. This will foster sustainable, resilient IoT ecosystems capable of evolving alongside technological advancements and societal demands, paving the way for a future where IoT seamlessly integrates into everyday life, supporting everything from smart cities to connected healthcare with minimal risk and maximal benefit.
Integrating systems thinking, design thinking, and engineering methodologies into developing IoT systems can significantly enhance their design and implementation. These approaches facilitate the creation of robust, scalable, and efficient IoT solutions tailored to modern applications' complex requirements while addressing the stakeholders' needs.
Linear thinking is crucial in designing and implementing IoT systems, offering a structured, step-by-step approach to problem-solving and development. In IoT, where multiple components must work seamlessly together, a logical and sequential methodology helps ensure clarity, efficiency, and precision.
Characteristics of Linear Thinking in IoT Design
Applications of Linear Thinking in IoT Design Methodologies
Linear thinking in IoT is applied throughout the design lifecycle, helping teams address specific challenges methodically and systematically.
Structured System Development
In IoT design, linear thinking enables the structured development of systems by organising tasks into sequential phases (figure 8):
Troubleshooting and Optimisation
Linear methodologies simplify troubleshooting in IoT systems. For example, diagnosing connectivity issues can follow a logical sequence (figure 9):
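Such a linear diagnostic sequence can even be scripted. The sketch below checks connectivity layer by layer and stops at the first failure, which is precisely the step-by-step logic that linear thinking prescribes; the broker hostname is hypothetical.

```python
import socket

BROKER = "broker.example.com"  # hypothetical broker hostname

def check_dns(host: str) -> bool:
    try:
        socket.gethostbyname(host)
        return True
    except OSError:
        return False

def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

steps = [
    ("DNS resolution", lambda: check_dns(BROKER)),
    ("TCP reachability on MQTT port 1883", lambda: check_tcp(BROKER, 1883)),
]
for name, probe in steps:
    ok = probe()
    print(f"{name}: {'OK' if ok else 'FAILED'}")
    if not ok:
        break  # linear logic: stop at the first failing layer
```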
Linear thinking aids in integrating IoT systems with other technologies. For example, a smart home IoT solution might involve sequential integration of sensors, cloud platforms, and mobile applications to ensure a seamless user experience.
Benefits of Linear Thinking in IoT Design
Limitations of Linear Thinking in IoT Design
Despite its advantages, linear thinking may not address all aspects of IoT design effectively:
Complementing Linear Thinking with Non-Linear Approaches
To address these challenges, linear thinking in IoT design can be combined with non-linear approaches like:
Linear thinking provides a strong foundation for IoT design methodologies by ensuring clarity, efficiency, and dependability. It is particularly effective in addressing well-defined problems and structured tasks. However, it should be complemented with flexible, iterative approaches to meet IoT systems' complexity and dynamic nature. This balanced methodology enables organisations to design IoT solutions that are reliable, functional, innovative, and adaptable to future needs.
Design Thinking, a human-centred and innovative methodology, plays a transformative role in developing Internet of Things (IoT) solutions. By focusing on empathy, creativity, and collaboration, Design Thinking allows designers to craft IoT systems that deeply resonate with users, address real-world challenges, and deliver tangible value. This iterative and non-linear approach ensures that solutions remain user-focused while adapting to evolving needs and complexities. Below, we explore the application of Design Thinking to IoT design, breaking down its phases and highlighting its importance. The process is presented in a diagram (figure 10), and each step is described below.
Phases of Design Thinking in IoT Design
Empathise: Understanding Users in IoT Contexts
The foundation of Design Thinking lies in understanding the users: those who will interact with and benefit from IoT solutions. This phase involves:
Example: In designing a smart thermostat, empathising involves understanding how users perceive temperature comfort, their schedules, and preferences for energy savings.
Define: Framing IoT Challenges with User-Centricity
With insights from the empathise phase, designers synthesise the data to articulate the problem clearly. This phase involves:
Example: Defining the problem for a wearable health tracker could focus on addressing user concerns about data privacy and ease of use.
Ideate: Generating Creative IoT Solutions
The ideation phase encourages brainstorming innovative solutions for the defined problem. Activities include:
Example: For a smart irrigation system, ideation might explore options like soil-moisture sensors, weather-based predictions, and AI-powered water usage optimisation.
Prototype: Building Tangible IoT Concepts
In this phase, designers create prototypes to bring ideas to life. For IoT, this could involve:
Developing Low-Fidelity Prototypes: Sketches, mock-ups, or digital wireframes to demonstrate the user interface or functionality.
Building Hardware Models: Using components like Arduino or Raspberry Pi to test device interactions and connectivity.
Simulating IoT Scenarios: Creating controlled environments to test data flow and device responses.
Example: A smart refrigerator prototype might include a basic app interface to demonstrate how users can view inventory and set the temperature remotely.
Test: Validating IoT Prototypes with Users
The testing phase ensures IoT solutions align with user expectations and functional requirements. This involves:
Example: Testing a smart door lock might involve scenarios where users remotely unlock doors via a mobile app, identifying issues like connectivity lag or interface confusion.
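Scenario tests like this can be captured as automated test cases early in the design. The sketch below exercises a toy lock model with Python's unittest; the DoorLock class and its token check are stand-ins for a real device API.

```python
import unittest

class DoorLock:
    """Toy model of a connected lock, used only to illustrate scenario tests."""
    def __init__(self):
        self.locked = True

    def unlock(self, token: str) -> bool:
        if token == "valid-token":  # stand-in for real authentication
            self.locked = False
            return True
        return False

class RemoteUnlockScenario(unittest.TestCase):
    def test_valid_token_unlocks(self):
        lock = DoorLock()
        self.assertTrue(lock.unlock("valid-token"))
        self.assertFalse(lock.locked)

    def test_invalid_token_keeps_locked(self):
        lock = DoorLock()
        self.assertFalse(lock.unlock("wrong-token"))
        self.assertTrue(lock.locked)

if __name__ == "__main__":
    unittest.main()
```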
Iterative Nature of Design Thinking in IoT
Design Thinking is inherently iterative, requiring designers to revisit previous phases as new insights emerge. This flexibility is crucial for IoT systems, where user needs, technological advancements, and environmental factors can evolve rapidly.
Example Iterations
Benefits of Design Thinking in IoT Design
Challenges of Applying Design Thinking to IoT
Design Thinking is an invaluable methodology for IoT design. It enables teams to create solutions that prioritise users while addressing technical and business challenges. Its iterative and collaborative nature ensures that IoT systems remain adaptable, innovative, and effective. By integrating empathy, creativity, and feedback into the design process, Design Thinking helps organisations deliver IoT solutions that resonate deeply with users and stand out in a competitive landscape.
Systems Thinking is a holistic approach to analysing and solving complex problems by understanding a system's relationships, interactions, and interdependencies. In the context of Internet of Things (IoT) design, Systems Thinking becomes crucial because IoT systems are inherently complex, comprising interconnected devices, networks, data flows, and user interactions. By adopting Systems Thinking, IoT designers can address the challenges of scalability, interoperability, and sustainability while ensuring that solutions align with user needs and broader organisational goals.
What is Systems Thinking?
Systems Thinking views an IoT system as an integrated whole rather than isolated components. It emphasises:
For IoT, Systems Thinking ensures that solutions are robust, scalable, and adaptable to changing environments.
Key Principles of Systems Thinking in IoT Design
Fundamental principles of Systems Thinking in IoT design are presented in figure 11 and discussed below:
Holistic Perspective
Understanding Interdependencies
Feedback Loops and Adaptability
Focus on Context and Environment
Emergent Behaviour Analysis
Steps to Apply Systems Thinking in IoT Design Methodologies
Figure 12 presents a workflow for the systems thinking approach for IoT design methodologies. Details are discussed below.
Define the System's Purpose and Boundaries
Example: For a smart factory, the purpose might be to optimise production efficiency, and the boundaries might include connected machinery, inventory systems, and supply chain interactions.
Identify Components and Stakeholders
Example: In an IoT-based energy management system, stakeholders might include utility companies, building managers, and end-users monitoring their energy consumption.
Map Interconnections and Data Flows
Example: A connected vehicle system requires mapping interactions between GPS devices, onboard diagnostics, traffic data servers, and driver interfaces.
Analyse Feedback Loops
Example: In a smart thermostat, a feedback loop might ensure that when the temperature exceeds a set point, cooling systems are activated, and adjustments are logged for future optimisation.
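The thermostat example can be made concrete with a few lines of code. The sketch below (set point and readings are illustrative assumptions) implements the feedback loop with a small hysteresis band and logs every adjustment for future optimisation:

```python
# Feedback-loop sketch for a smart thermostat: cooling switches on above the
# set point, off below it, with a dead band to prevent rapid on/off cycling.

SETPOINT = 22.0      # degrees Celsius
HYSTERESIS = 0.5     # dead band around the set point

def control_step(temperature: float, cooling_on: bool, log: list) -> bool:
    """One iteration of the loop; returns the new cooling state."""
    if temperature > SETPOINT + HYSTERESIS and not cooling_on:
        cooling_on = True
        log.append(("cooling_on", temperature))
    elif temperature < SETPOINT - HYSTERESIS and cooling_on:
        cooling_on = False
        log.append(("cooling_off", temperature))
    return cooling_on

log: list = []
cooling = False
for temp in [21.8, 22.4, 22.7, 22.1, 21.3]:   # simulated sensor readings
    cooling = control_step(temp, cooling, log)
print(log)   # the logged adjustments feed future optimisation
```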
Consider Scalability and Interoperability
Example: A smart city IoT platform must handle a growing number of sensors, from traffic cameras to air quality monitors, while integrating with diverse protocols like MQTT and CoAP.
Address Security and Privacy Holistically
Example: In healthcare IoT, secure patient data transmission requires end-to-end encryption, secure APIs, and robust access control mechanisms.
Monitor and Iterate
Example: A smart logistics platform might adjust its route optimisation algorithms based on real-time traffic patterns and delivery delays.
Benefits of Systems Thinking in IoT Design
Challenges of Systems Thinking in IoT Design
Stakeholder Alignment: It can be challenging to ensure that all stakeholders understand and agree on the system's purpose and design.
Systems Thinking is an indispensable methodology for IoT design, offering a comprehensive framework to tackle the inherent complexity of interconnected systems. Systems Thinking enables designers to create robust, scalable, and user-focused IoT solutions by focusing on interdependencies, feedback loops, and the broader context. Its emphasis on holistic analysis and adaptability ensures IoT systems meet current needs and evolve gracefully with emerging challenges and opportunities.
System dynamics is a practical application of Systems Thinking, originally developed at MIT in the 1950s. It provides a framework for understanding and modelling the complex behaviour of systems by emphasising the interconnections, feedback loops, and time delays inherent in such systems. Practitioners and researchers in system dynamics employ various modelling and simulation tools to explore the implications of hypothesised causal relationships and understand system dynamics over time. A sample closed-loop system dynamics modelling methodology is presented in figure 13.
Closed-loop systems thinking can be applied to overcome the limitations of open-loop or linear thinking approaches. Linear thinking typically involves problem identification, information gathering, evaluating alternative solutions, selecting the best option, and implementing the policy. However, this approach often generates unintended consequences because it operates in silos, addressing isolated issues without considering the broader goals or interactions within the system.
IoT systems are often designed to interact with other information systems, cyber-physical systems in industries, critical infrastructures (energy, water distribution, heating, health care, and transportation systems), and people (management systems). The interaction between IoT systems and other existing systems may create unintended consequences that must be considered at the design stage. There are also interactions between the various components of the IoT system that need to be considered. These interactions need to be modelled, and their impact evaluated and factored into the design of IoT systems and strategies devised to deal with possible unintended consequences that may arise.
System dynamics provides a modelling framework for analysing the complex interactions between IoT systems. IoT systems consist of multiple interconnected components (such as sensor networks, data processing units, communication infrastructures, management systems, and stakeholders like policymakers and users) that work together to achieve the diverse goals of the stakeholders, as shown in figure 13. Each IoT system comprises numerous interdependent parts interacting to perform their intended functions, and any modification in one part can affect the overall system performance. The effectiveness of IoT systems relies on the seamless interaction of all constituent components. However, these interactions, including stakeholder involvement, may lead to unintended consequences. Therefore, a system-centric approach is critical for designing and operating IoT systems to meet design objectives and address the expectations of all stakeholders.
The stakeholders involved may have conflicting priorities. For example, the main goal of system users might be to optimise operational efficiency, while the aim of technology developers could be to maximise data integration capabilities, and policymakers may focus on ensuring privacy, security, and environmental sustainability. Using the Systems Thinking framework, these stakeholders can apply tools such as causal loop diagrams to map the interconnections, feedback loops, and relationships (including nonlinear and causal dependencies) within the IoT ecosystem. Additionally, stock-and-flow models can be employed to simulate resource utilisation (e.g., data processing capacity or energy consumption) and to monitor accumulations such as system load or greenhouse gas emissions in IoT-supported applications. These models enable the creation of predictive frameworks that management teams or policymakers can leverage to design interventions, ensuring that the goals of diverse stakeholders are met effectively and sustainably.
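A stock-and-flow model of the kind mentioned above can be simulated in a few lines. The sketch below (rates are assumed for illustration) treats the backlog of unprocessed IoT data as a stock, with the sensing rate as inflow and processing capacity as outflow; the resulting time series shows whether load accumulates:

```python
# Minimal stock-and-flow simulation: stock = unprocessed message backlog.

dt = 1.0               # time step (hours)
backlog = 0.0          # stock: unprocessed messages
inflow_rate = 120.0    # messages generated per hour (assumed)
capacity = 100.0       # messages processed per hour (assumed)

history = []
for _ in range(24):    # simulate one day
    available = backlog / dt + inflow_rate      # what could be processed now
    outflow = min(capacity, available)          # capacity-limited outflow
    backlog += (inflow_rate - outflow) * dt     # the stock integrates net flow
    history.append(backlog)

print(f"Backlog after 24 h: {history[-1]:.0f} messages")  # grows by 20 per hour here
```

Plotting `history` over time yields exactly the kind of behaviour-over-time graph that system dynamics practitioners use to evaluate interventions, such as adding processing capacity.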
System Dynamics Modeling Framework
The system dynamics modelling process involves several key steps (figure 14):
Core Assumptions of System Dynamics
System dynamics is based on the premise that a system's underlying structure determines its observed behaviour or trends. This behaviour emerges from the interaction of key elements, including the system's physics, the availability of information, and decision-making rules.
The following structural elements are considered in modelling IoT systems:
1. Accumulations: Stocks (e.g., buffered data, stored energy, or pending work) that build up and drain over time, giving the system its memory.
2. Causal Structures:
Identifying cause-and-effect relationships between components in the system.
3. Delays:
Recognising that the effects of actions or interventions often manifest after a time lag, which may impact decision-making.
4. Perceptions:
Correct or biased views of cause-and-effect relationships influence how problems are approached.
5. Pressures:
External or internal pressures resulting from perceptions of system challenges or opportunities.
6. Affects, Emotions, and Irrationalities:
Accounting for human factors that drive behaviours and decisions, often deviating from purely rational models.
7. Policies:
Rules and protocols that govern decisions, such as energy management policies or data prioritisation schemes.
8. Incentives:
Motivations that drive individual or system-level actions, such as minimising energy use or optimising throughput.
Defining Dynamics in IoT Systems
The system's dynamics are represented through graphs over time, capturing the variation of key variables and performance metrics as the system evolves. These graphs help to visualise the following:
By leveraging simulation results, we aim to plot and analyse these variations, providing actionable insights into how IoT systems behave under different conditions.
Why System Dynamics for IoT Systems?
System dynamics modelling offers a comprehensive approach to understanding the complexities of IoT systems, particularly when dealing with interactions between diverse components, feedback loops, and time-dependent behaviour. This methodology is especially relevant for IoT systems, where challenges such as data congestion, resource constraints, and dynamic user behaviour can significantly impact system performance. It is also very important for IoT systems that monitor and control industrial processes or critical systems.
By integrating system dynamics with IoT-specific considerations, we can:
IoT is a key technology enabler for Industry 4.0 and is increasingly being implemented in manufacturing. This subset of IoT, known as Industrial IoT (IIoT), integrates IoT functionality into industrial settings. While new production systems often come with IoT capabilities by default, many manufacturing companies still rely on legacy equipment that can be upgraded using IoT solutions. Upgrading existing machinery is especially important, as manufacturing equipment is typically designed to last for decades, making frequent replacements impractical. Consequently, IIoT is essential for modernising older machinery to meet today's data-driven production demands, enhance efficiency, reduce downtime, minimise production waste, and lower the overall carbon footprint.
Recently, a new industrial paradigm called Industry 5.0 has emerged. Industry 5.0 builds on the principles of Industry 4.0, with a stronger emphasis on human well-being, resilience, and sustainability. In this context, IoT plays a vital role in achieving these objectives.
Although the general concepts and architecture of Industrial IoT (IIoT) are similar to typical IoT, the industrial sub-domain has specific features and requirements for designing IoT solutions for industry. Industrial applications can be divided into various fields, such as manufacturing and production, energy and utilities, transportation and logistics, agriculture and farming, construction and building, and automotive. Each field has specific needs but shares common critical factors crucial for implementing IoT systems. The most common ones are listed below.
These aspects must be addressed early in the IoT system design process. Designing IIoT systems requires careful consideration of several critical factors to ensure the successful deployment and operation of IoT solutions in industrial environments. In addition to the listed factors, many industry domain-specific requirements may rule over general industrial requirements. A well-designed IIoT system can enhance productivity, optimise resource usage, and improve safety, ultimately providing significant value to industrial operations. By focusing on these key features during the design process, industries can fully harness the potential of IIoT to drive innovation and remain competitive in an increasingly connected world.
Model-based Systems Engineering (MBSE) is a systems engineering approach that prioritises using models throughout the system development lifecycle. Unlike traditional document-based methods, MBSE focuses on developing and using various models to depict different facets of a system, including its requirements, behaviour, structure, and interactions.
The systems modelling language (SysML)[7] is a general-purpose modelling language for systems engineering applications. It supports the specification, analysis, design, verification and validation of a broad range of systems and systems-of-systems. SysML plays a crucial role in the MBSE methodology. SysML provides nine diagram types to represent different aspects of a system. These diagram types (figure 15) help modellers visualise and communicate various perspectives of a system's structure, behaviour, and requirements.
Product development, including IoT systems development, commences with the proper engineering of requirements and the definition of use cases. The customer establishes requirements; here, the term “customer” encompasses a broad spectrum. In most instances, the customer is an individual or organisation commissioning the IoT system. However, it could also be an internal customer, such as a different department within the same organisation or entity. In the latter case, the customer and the developer are the same. Nonetheless, this scenario is the exception rather than the rule. The importance of conducting a thorough requirement engineering process remains constant across all cases.
The customer often inadequately defines requirements, and many parameters or functions remain unclear. In such cases, the requirement engineering stage assumes pivotal importance, as poorly defined system requirements can lead to numerous changes in subsequent design phases, resulting in an overall inefficient design process. In the worst-case scenario, this may culminate in significant resource wastage and necessitate restarting system development mid-project. Such occurrences are not only costly but also time-consuming. While avoiding changes during the design and development process is impossible, proper change management procedures and resource allocation can significantly mitigate the impact on the overall design process.
This section uses an industrial IoT system as a case study to present examples of SysML diagrams. The context of this case study revolves around a wood and furniture production company with multiple factories across the country. Each factory specialises in various stages of the production chain, yet all factories are interconnected. The first factory processes raw wood and prepares building elements for the subsequent two. The second factory crafts furniture from the prepared wood elements, while the third factory assembles customised products by combining building elements and production leftovers. Some of these factories utilise modern, automated machinery, while others employ classical mechanical machines with limited automation.
The company seeks an IoT solution to ensure continuous production flow, minimise waste, and implement predictive maintenance measures to reduce downtime. In the following examples, we utilise this case study, presenting fragments as examples without covering the entire system through diagrams.
Let's consider a fragment of customer input regarding functional requirements for the system:
Furthermore, the non-functional requirements include:
Based on fragments of the requirement list like the ones above, we can construct a hierarchical requirement diagram (req, figure 16) with additional optional parameters to precisely specify all individual requirements. Not all individual requirements need to be defined at the same level. If insufficient information is available at the current stage, requirements can be further refined in subsequent design iterations.
Use case diagrams (uc) at the requirement engineering stage allow for the visualisation of higher-level services and identification of the main external actors interacting with the system services or use cases. They can subsequently be decomposed into lower-level subsystems; at the requirement engineering stage, however, they primarily facilitate a common understanding of the IoT system under development by different stakeholders, including management, software engineers, hardware engineers, customers, and others.
The following use case diagrams describe the high-level context of the IoT system (figure 17).
System architecture defines the system's physical and logical structure and the interconnections between subsystems and components. For example, block definition diagrams (bdd) can determine the system's hierarchical decomposition into subsystems and even down to the component level. The figure below shows a simple decomposition example of one IoT sensing node. It is essential to understand that blocks are one of the main elements of SysML and, in general, can represent either a definition or an instance. This is the fundamental concept of system design and the pattern used in system modelling. Blocks are named with the stereotype notation «block» and the block's name. A block may also contain several additional compartments (parts, references, values, constraints, operations, etc.); in this example, operations and values are demonstrated. Relationships between blocks describe the nature of and requirements for the block's external connections. The most common relationships are associations, generalisations and dependencies. All of these have specific arrowheads that denote the particular relationship. In the following example (figure 18), a composite association relationship (dark diamond arrowhead) is used to represent the structural decomposition of the subsystem.
One can define component interactions and flows with the internal block diagram (ibd). Cross-domain components and flows can be combined in a single diagram, which is especially useful in the conceptual design stage. The ibd is closely related to the bdd and describes the usages of the blocks. The interconnections between parts of blocks can be very different by nature; in one diagram, you can define flows of energy, matter, and data, as well as services required or provided by connections. The following example (figure 19) shows, in a simplified way, how data flows from the sensor to the website user interfaces.
The system behaviour of an IoT system defines the implementation of system services and functionality. The combination of hardware, software, and interconnections enables the offering of the required services and functionality and establishes the system's behaviour. It comprises cyber-physical system activities, actions, state changes, and algorithms. For example, we can define a system sensing node software general algorithm with an activity diagram (act), as presented in the figure 20.
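As a rough textual counterpart to such an activity diagram, the sketch below (function names, threshold, and sleep interval are assumptions) shows a typical sensing-node loop: read a value, compare it against the last transmitted one, transmit only significant changes, then sleep to conserve energy:

```python
# Sensing-node algorithm sketch: duty-cycled read/filter/transmit loop.

import random
import time

THRESHOLD = 0.5      # minimum change worth transmitting
SLEEP_SECONDS = 1    # shortened for the demo; a real node might sleep minutes

def read_sensor() -> float:
    """Stand-in for a real sensor driver call."""
    return 20.0 + random.uniform(-1.0, 1.0)

def transmit(value: float) -> None:
    """Stand-in for the node's radio transmission routine."""
    print(f"TX: {value:.2f}")

last_sent = None
for _ in range(5):   # a deployed node would loop indefinitely
    value = read_sensor()
    if last_sent is None or abs(value - last_sent) >= THRESHOLD:
        transmit(value)          # send only significant changes
        last_sent = value
    time.sleep(SLEEP_SECONDS)    # duty cycling saves energy
```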
Property prognosis and assurance are conducted throughout the entire development process. Expected system properties are forecast early on using models. Property validation, which accompanies development, continuously examines the pursued solution through investigations with virtual or physical prototypes or a combination of both. Property validation includes verification and validation: verification is the confirmation by objective proof that a specified requirement is fulfilled, while validation proves that the user can use the work result for the specified application [8].
SysML enables the tracking and connecting of requirements with different elements and procedures of the model. For example, the SysML requirement diagram (figure 21) captures requirements hierarchies and the derivation, satisfaction, verification, and refinement relationships. The relationships provide the capability to relate requirements to one another and to system design models and test cases.
SysML is a comprehensive graphical modelling language designed to visualise a system's structure, behaviour, requirements, and parametrics, enabling effective communication of this information to others. It defines nine types of diagrams, each with a unique role in conveying specific aspects of system design.
Due to the rapid development of communication technologies and novel data transmission carriers and protocols, IoT systems have emerged from the world of wireless sensor networks. These networks have already shown flexibility and resilience in different application domains, including healthcare, manufacturing, and domestic services. While IoT applications shift toward more data-intensive workloads, their technical solutions and architectures remain essential to providing valid and trustworthy data for complex and reliable decisions.
More information is presented in the following chapters:
This chapter focuses on the architectural design of IoT networks and systems. It leverages the well-known four-layered IoT reference architecture shown in figure 22 to discuss the methodologies and tools for the design of IoT networks and systems. An IoT reference architecture is a strategic blueprint detailing the key components and their interactions within an IoT ecosystem. It offers a robust framework for designing, developing, and deploying effective IoT solutions, ensuring a cohesive and scalable system architecture. The IoT reference architecture outlines the foundational layers and components required for the seamless operation of IoT systems. Each layer is critical in ensuring efficient data collection, transmission, processing, and utilisation in an IoT ecosystem.
The perception layer forms the foundation of the IoT ecosystem by interacting directly with the physical world. It comprises various IoT-enabled devices, sensors, and actuators that gather data or influence the environment. Recent advances in hardware and low-power computing also bring data processing capabilities to this layer, including simple AI tasks.
Components
Functionality
This layer serves as the IoT system's “eyes and hands,” enabling it to sense and influence its surroundings.
The transport layer, also called the network layer, facilitates connectivity between IoT devices and the broader system. It ensures that data captured at the perception layer is reliably transmitted to data processing units. This layer provides various communication models, including device-to-device and device-to-cloud communication.
Components
Functionality
This layer is the “nervous system” of the IoT architecture, enabling the flow of information across the ecosystem.
The data processing layer is responsible for aggregating, filtering, analysing, and deriving actionable insights from the data collected by IoT devices. Depending on the application's requirements, this layer can operate at the edge (closer to the devices), in the fog, or in the cloud.
Components
Functionality
This layer acts as the “brain” of the IoT system, transforming raw data into meaningful intelligence.
The application layer is also known as the user interaction and value creation layer. It transforms processed data into end-user functionalities and value-driven solutions. It consists of software applications, services, and user interfaces that allow users to interact with and benefit from the IoT system.
Components
Functionality
This layer represents the “face” of the IoT system, delivering tangible benefits and user-centric solutions.
Key Insights and Integration of Layers
Organisations can build resilient and efficient IoT ecosystems tailored to their specific needs by leveraging a well-structured IoT reference architecture. This layered approach ensures that every component, from sensors to user applications, contributes to a cohesive and value-driven system. The discussion on IoT architectures presented in the remaining parts of this chapter is based on the IoT reference architecture presented above.
IoT network architecture is composed of a variety of layers, including edge-class IoT devices such as sensors and actuators, access points enabling devices to connect to the Internet and services, fog-class devices performing preliminary data processing such as aggregation and conversion, the core Internet network, and finally a set of cloud services for data storage and advanced data processing. A sample model is presented in figure 23.
IoT nodes are the fundamental building blocks of an IoT system, enabling the capture, processing, and transmission of data across connected devices. These nodes often operate in energy-constrained environments and are connected to an access point, which links them to the Internet, using low-power communication technologies (LPCT). These technologies enable cost-effective, reliable connectivity while adhering to the limitations of battery-operated or energy-harvesting devices. They encompass wireless access technologies at the physical layer for establishing connectivity and application layer communication protocols for managing data exchange over IP networks.
Wireless Access Technologies
Wireless access technologies are pivotal in connecting IoT devices to a network. They can be categorised into short- and long-range technologies and divided into licensed and unlicensed options. The selection of a specific technology depends on application requirements such as range, power consumption, scalability, and cost.
Short-Range Technologies
Short-range technologies are ideal for IoT applications in localised settings, such as smart homes, industrial automation, and personal devices. Examples include:
Long-Range Technologies
Long-range communication is critical for IoT applications spanning large areas, such as agriculture, utilities, and logistics. Examples include:
Licensed vs. Unlicensed Technologies
Low Power Wide Area Networks (LPWAN)
LPWAN technologies are transformative for IoT because they provide long-range connectivity with ultra-low power consumption. These technologies are particularly suited for large-scale deployments where devices must operate autonomously for extended periods (up to a decade) without frequent maintenance or battery replacement.
Key Benefits of LPWAN Technologies
Popular LPWAN Protocols
While LPWAN protocols excel at transmitting text data, multimedia applications (e.g., images and audio) may require data compression techniques to balance bandwidth and energy efficiency. For instance, in smart agriculture, images from field cameras or audio from livestock monitoring systems might need to be compressed before transmission.
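A simple way to see the effect of compression is with the standard library's zlib; real deployments would use image- or audio-specific codecs, but the principle is the same. The payload below is an illustrative, repetitive telemetry string:

```python
# Compressing a payload before handing it to the LPWAN radio stack.

import zlib

payload = ("soil_moisture=0.31;" * 40).encode("utf-8")
compressed = zlib.compress(payload, level=9)

print(f"raw: {len(payload)} bytes, compressed: {len(compressed)} bytes")
# Only `compressed` would be transmitted, saving airtime and energy on the
# severely constrained uplink.
```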
Application Layer Communication Protocols
Application layer protocols manage data exchange between IoT devices and platforms, ensuring efficient and reliable communication even in resource-constrained environments. These protocols address the limitations of traditional HTTP, offering lightweight and optimised alternatives.
Key Application Layer Protocols
1. Constrained Application Protocol (CoAP): A lightweight, RESTful protocol running over UDP, designed for constrained nodes and networks.
2. MQTT (Message Queuing Telemetry Transport): A lightweight publish/subscribe messaging protocol, well suited to constrained devices and unreliable links (see the sketch after this list).
3. Advanced Message Queuing Protocol (AMQP): A feature-rich, reliable message-queuing protocol typically used for enterprise-grade integration.
4. Lightweight M2M (LWM2M):
Specifically tailored for IoT device management, enabling firmware updates, configuration, and resource monitoring.
5. UltraLight 2.0:
A minimalistic protocol designed for low-power IoT applications, focusing on reducing message size and complexity.
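As an example of how lightweight these protocols are in practice, the sketch below publishes one telemetry reading over MQTT using the widely used paho-mqtt Python library; the broker address and topic hierarchy are placeholders, and any reachable broker (e.g., a local Mosquitto instance) would do:

```python
# Publishing a single MQTT telemetry message with paho-mqtt.

import json
import paho.mqtt.publish as publish

reading = {"device_id": "node-42", "temperature_c": 21.7}

publish.single(
    topic="factory/line1/telemetry",    # hypothetical topic hierarchy
    payload=json.dumps(reading),
    hostname="broker.example.local",    # assumed broker address
    port=1883,
    qos=1,                              # at-least-once delivery
)
```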
IoT nodes rely on advanced wireless access technologies and application layer protocols to establish seamless connectivity, optimise energy efficiency, and support diverse use cases. The selection of these technologies should align with the application's specific requirements, ensuring a balance between performance, scalability, and cost. With the rise of LPWAN and lightweight communication protocols, IoT systems are increasingly capable of supporting massive, energy-efficient deployments in various domains, from smart cities to industrial automation.
The Internet of Things (IoT) Gateway is a pivotal component in IoT ecosystems, serving as the interface between IoT devices—such as sensors, actuators, and edge nodes—and the broader network infrastructure, including cloud platforms and external data analytics systems. The gateway facilitates seamless data transmission, device management, and integration, enabling efficient communication within the IoT network. By bridging IoT nodes that cannot directly communicate with each other or the Internet, IoT gateways are vital in ensuring interoperability and scalability across diverse devices and protocols.
IoT gateways serve multiple essential functions that enhance the overall effectiveness of IoT deployments:
Hardware Solutions for IoT Gateway nodes
IoT gateways often rely on resource-constrained, cost-effective computing devices that provide sufficient processing power while maintaining energy efficiency. Examples include:
These devices can run lightweight algorithms to perform local data processing, real-time analytics, and storage, minimising the dependency on cloud resources. Additionally, they can support multiple protocols, making them highly adaptable to various IoT deployment scenarios.
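One such lightweight gateway task is protocol translation. The sketch below (the frame format and all names are assumptions) parses a compact frame arriving over a local, non-IP sensor link and re-encodes it as JSON ready for a cloud API:

```python
# Gateway protocol-translation sketch: compact sensor frame -> cloud JSON.

import json

def translate_frame(raw_frame: bytes) -> str:
    """Convert an 'id;metric;value' frame into a JSON document."""
    device_id, metric, value = raw_frame.decode("ascii").split(";")
    return json.dumps({
        "device": device_id,
        "metric": metric,
        "value": float(value),
    })

# Example: a frame received over a local radio or serial link.
frame = b"node-7;temperature;21.4"
print(translate_frame(frame))   # ready to forward to the cloud endpoint
```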
The Role of Edge Computing in IoT Gateway Nodes
IoT gateways equipped with edge computing capabilities significantly enhance the performance and efficiency of IoT networks:
Smart IoT Solutions with Gateway Nodes
IoT gateways pave the way for scalable, adaptable, and energy-efficient IoT deployments. They act as enablers for diverse applications, including:
IoT gateways are indispensable for creating seamless, secure, and efficient IoT networks. By bridging diverse devices, translating protocols, and enabling edge computing, these gateways ensure the scalability and functionality of IoT solutions across industries. Their integration with modern wireless technologies and edge devices makes them a cornerstone for the growing adoption of IoT in real-world applications.
In the rapidly expanding Internet of Things (IoT) landscape, fog and edge computing nodes play a critical role in bridging the gap between IoT devices and centralised cloud computing infrastructure. These nodes decentralise data processing, bringing computational resources closer to the source of data generation, enhancing responsiveness, reducing latency, and alleviating the load on cloud data centres. While “fog computing” and “edge computing” are often used interchangeably, they have distinct scopes. Fog computing is a broader architecture integrating processing at intermediate layers, such as gateways or local servers. In contrast, edge computing focuses on computations directly at or near the device level. These approaches offer a synergistic framework for efficient, real-time, and scalable IoT systems.
Key Characteristics of Fog and Edge Computing
1. Decentralised Processing:
Fog and edge nodes process data locally or in close proximity to IoT devices, minimising the need for constant communication with cloud servers.
2. Layered Architecture:
Advantages of Fog and Edge Computing
1. Reduced Latency
Traditional cloud computing involves data transmission over long distances, leading to delays. Fog and edge nodes address this issue by processing data closer to the source, ensuring faster response times critical for real-time applications such as:
2. Bandwidth Optimization
By preprocessing data locally, fog and edge nodes minimise the volume of raw data sent to the cloud, reducing bandwidth consumption and associated costs. For instance, an edge node may aggregate a window of raw sensor samples into a compact summary before uploading, as sketched below.
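A minimal aggregation sketch (the sample values are illustrative):

```python
# Local preprocessing on a fog/edge node: upload a summary, not raw samples.

from statistics import mean

raw_samples = [21.2, 21.4, 21.3, 25.9, 21.5, 21.4]   # one reporting window

summary = {
    "count": len(raw_samples),
    "mean": round(mean(raw_samples), 2),
    "min": min(raw_samples),
    "max": max(raw_samples),
}
print(summary)   # only this compact summary is sent upstream
```

Here, six raw samples collapse into a single message; across millions of devices, this kind of reduction translates directly into bandwidth and cost savings.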
3. Enhanced Scalability
Decentralising computational tasks allows IoT networks to scale efficiently without overwhelming cloud infrastructure. Fog computing enables a hierarchical distribution of workloads, supporting vast IoT deployments in industries like energy, transportation, and logistics.
4. Improved Security and Privacy
Localised data processing reduces exposure to cyber threats during data transmission. Additionally, sensitive data can remain within predefined geographical boundaries to comply with regulations such as GDPR (General Data Protection Regulation).
5. Resilience in Intermittent Connectivity
In scenarios where continuous cloud access is unreliable, fog and edge nodes ensure autonomous operation by performing critical tasks locally.
Use Cases for Fog and Edge Computing
1. Industrial IoT (IIoT):
2. Smart Cities:
3. Healthcare:
4. Autonomous Systems:
5. Agriculture:
Fog Computing and Artificial Intelligence (AI)
Integrating artificial intelligence (AI) with fog computing enhances the capabilities of IoT systems by enabling real-time analytics and decision-making at the edge.
AI-Enabled Fog Nodes:
Distributed AI Processing:
Examples
Technologies Enabling Fog and Edge Computing
1. Hardware Solutions:
2. Software Frameworks:
3. Networking Protocols:
Future Trends in Fog and Edge Computing
1. Integration with 5G: The rollout of 5G networks will further enhance fog and edge computing by providing high-speed, low-latency communication, supporting advanced use cases like AR/VR and autonomous systems.
2. Edge AI Innovations: Continued development of efficient AI models for edge devices will expand their capabilities, enabling predictive maintenance, fraud detection, and environmental monitoring applications.
3. Decentralised Architectures: Blockchain technology may be integrated with fog and edge nodes to ensure secure, tamper-proof data processing and storage.
4. Green Computing Initiatives: Energy-efficient hardware and renewable energy integration will drive sustainable fog and edge solutions.
Fog and edge computing represent transformative advancements in IoT system architecture, addressing the limitations of traditional cloud-centric models. By bringing computational resources closer to data sources, these approaches enable real-time analytics, reduce bandwidth requirements, and improve system reliability. As IoT deployments continue to grow in complexity and scale, the adoption of fog and edge computing will be instrumental in achieving responsive, secure, and efficient solutions across industries. With advancements in AI, 5G, and edge hardware, the future of fog and edge computing promises even greater integration and innovation.
Internet core networks are the backbone of the Internet of Things (IoT), enabling seamless connectivity and data exchange between billions of devices and cloud computing platforms. These networks are integral to the operation of IoT systems, ensuring the reliable transmission of vast amounts of data generated by interconnected sensors, actuators, and devices, collectively called IoT nodes.
IoT nodes capture and generate significant data volumes that need to be processed to extract actionable insights. This data journey involves two key communication paths:
This bidirectional communication underpins critical IoT applications, such as smart cities, industrial automation, healthcare systems, and autonomous vehicles. These applications rely on low-latency and high-throughput networks to support real-time responsiveness and data-driven decision-making, making the role of core networks indispensable.
Challenges in Handling IoT Traffic over Core Networks
While internet core networks provide essential connectivity for IoT systems, the exponential growth in IoT devices introduces unique challenges that must be addressed to ensure reliable, secure, and efficient operations.
1. Security Vulnerabilities
Transmitting vast amounts of IoT data over core networks exposes the ecosystem to heightened cyber-attack risks. Common threats include:
To mitigate these risks, robust security measures are essential:
Without comprehensive security frameworks, IoT systems are vulnerable to breaches, data theft, and operational disruptions, which could compromise safety and reliability.
2. Maintaining Quality of Service (QoS)
The massive volume of IoT traffic places immense pressure on core networks, potentially leading to:
Even minor QoS degradation can result in severe consequences for applications such as autonomous vehicles, industrial automation, and telemedicine, including operational failures or safety hazards.
Solutions for QoS Optimisation:
By ensuring consistent QoS, core networks can meet the stringent demands of real-time IoT applications.
3. Energy Consumption
The continuous transmission and processing of IoT data across core networks require substantial energy resources, contributing to:
Strategies for Sustainable Energy Management:
Adopting these strategies helps balance operational demands with environmental responsibility, paving the way for greener IoT infrastructures.
4. Network Management Complexity
The dynamic and large-scale nature of IoT traffic introduces significant challenges in network administration, such as:
Traditional network management approaches often fall short of addressing these complexities. Advanced solutions include:
1. Software-Defined Networking (SDN):
2. Network Function Virtualisation (NFV):
Together, SDN and NFV enhance agility, scalability, and resilience, making them indispensable tools for managing complex IoT ecosystems.
The Future of Core Networks in IoT
The rapid expansion of IoT networks demands continuous innovation in core network technologies. Future advancements are likely to focus on:
1. 5G and Beyond
2. AI-Driven Network Management
3. Blockchain for Secure IoT Communication
4. Green Networking Initiatives
Internet core networks are the lifeline of IoT ecosystems, enabling seamless data transmission and real-time responsiveness across diverse applications. However, the rapid growth of IoT introduces challenges, including security vulnerabilities, QoS maintenance, energy consumption, and network management complexities.
Core networks can meet the evolving demands of IoT systems by adopting advanced technologies such as SDN, NFV, edge computing, and AI-driven management and implementing robust security measures and energy-efficient practices. These innovations will ensure a sustainable, secure, and efficient future for IoT, driving transformative advancements across industries in an increasingly connected world.
IoT devices are typically constrained by limited computational power and memory, so they rely heavily on cloud data centres for advanced analytics and data storage. IoT cloud computing represents the intersection of cloud technology and the rapidly expanding Internet of Things (IoT) domain, offering a robust framework for processing and managing the massive data streams of IoT devices.
Cloud computing has transformed IT operations, providing unparalleled advantages in cost-effectiveness, scalability, and flexibility. When combined with IoT, these benefits are amplified, enabling seamless access to a broad array of computing resources—ranging from software to infrastructure and platforms—delivered remotely over the Internet. This integration allows IoT devices to connect to cloud environments from virtually any location, enabling real-time data processing, efficient resource management, and dynamic scalability.
By leveraging cloud computing, organisations can minimise the complexities and financial burdens of maintaining on-premises IT infrastructure. This capability accelerates the deployment of IoT solutions and reduces costs, empowering businesses to focus on innovation and growth rather than infrastructure management.
Key Benefits of IoT Cloud Computing
1. Cost Reduction and Resource Optimisation
One of the primary advantages of IoT cloud computing is the significant cost savings it offers by eliminating the need for extensive physical infrastructure. Traditionally, organisations had to invest heavily in on-premises data centres, incurring substantial costs related to hardware procurement, maintenance, security, and periodic upgrades.
Cloud computing shifts these responsibilities to service providers, who manage the infrastructure on behalf of users. This model reduces capital expenditure and operational costs, freeing up financial and human resources. For small and medium-sized enterprises (SMEs), this shift is particularly transformative, granting access to cutting-edge computing resources that were previously unaffordable.
Additionally, the pay-as-you-go model of cloud services ensures that organisations only pay for the resources they use, enabling efficient cost management and scaling.
2. Enhanced Security and Data Management
Cloud computing enhances data security by leveraging the expertise of leading service providers, who implement advanced measures to protect data and applications from cyber threats. Key security features include:
End-to-End Encryption: Protects data during transmission and storage.
Regular Updates and Patches: Ensures systems are safeguarded against emerging vulnerabilities.
Robust Authentication Mechanisms: Prevents unauthorised access.
By outsourcing security to cloud providers, organisations can achieve a level of protection that would be costly and complex to maintain independently.
Furthermore, cloud platforms offer scalable and flexible storage solutions to accommodate the dynamic data volumes generated by IoT devices. Automated maintenance and updates ensure consistent performance and reduce the risk of downtime or data loss.
3. Accelerating IoT Application Development
IoT cloud computing provides developers with a robust ecosystem of tools, frameworks, and services that streamline application development. This environment allows for:
These advantages lead to faster rollout times for IoT applications and foster continuous innovation.
4. Support for IoT-Specific Cloud Platforms
The rise of IoT has driven the development of cloud platforms tailored to the unique demands of IoT systems. Popular platforms such as Microsoft Azure IoT Suite, Amazon AWS IoT, and DeviceHive offer comprehensive services, including:
These platforms enable businesses to implement IoT solutions quickly and cost-effectively, eliminating the need for extensive in-house infrastructure while maintaining flexibility and scalability.
Strategic Advantages of IoT Cloud Integration
The integration of IoT and cloud computing extends beyond cost efficiency and operational convenience, offering strategic benefits that drive business transformation:
1. Real-Time Insights:
Cloud-based analytics enable organisations to process and act on IoT data in real-time, improving decision-making and responsiveness. For example, in industrial automation, real-time data can predict equipment failures and trigger preventive actions, minimising downtime and costs.
2. Enhanced Operational Efficiency:
Cloud-based IoT platforms optimise workflows by automating repetitive tasks, streamlining processes, and improving resource allocation. For instance, smart city systems use cloud analytics to manage traffic flow, reduce energy consumption, and respond to emergencies more effectively.
3. Scalability for Growing IoT Ecosystems:
Cloud platforms are inherently scalable, allowing businesses to expand their IoT deployments without the need for additional physical infrastructure. This scalability supports long-term growth and adapts to fluctuating demands.
4. Innovation Enablement:
Cloud computing reduces the burden of infrastructure management, freeing up resources for innovation. It enables businesses to explore new IoT use cases and develop next-generation applications.
The Future of IoT Cloud Computing
As IoT continues to expand, the role of cloud computing will grow increasingly pivotal in supporting its evolution. Emerging trends and technologies shaping the future of IoT cloud computing include:
IoT cloud computing is a cornerstone of the modern IoT ecosystem, providing the scalability, flexibility, and efficiency needed to manage the massive data volumes generated by connected devices. By reducing costs, enhancing security, and accelerating application development, cloud computing empowers organisations to harness the full potential of IoT.
As the integration of these technologies continues to advance, IoT cloud computing will remain a driving force behind innovation and global connectivity, enabling a smarter and more interconnected future.
IoT devices are naturally network-enabled and communication-oriented. For this reason, software development on any component of the IoT ecosystem requires a specific approach driven by communication requirements, energy efficiency, and other aspects of IoT network architecture.
The value of IoT lies not just in the devices themselves but in the software applications that leverage the data generated by these devices to provide actionable insights and drive automation. These software applications are at the heart of IoT solutions and can be designed for various purposes. Let's explore the different aspects of IoT Software Applications in detail.
1. Monitoring
Monitoring is one of the most common IoT application categories. In this use case, IoT devices (such as sensors, cameras, or smart meters) continuously collect data about the environment, processes, or systems they are designed to observe.
The software application's role is to collect and aggregate data: it interfaces with the devices to retrieve real-time readings such as temperature, humidity, energy consumption, or security status.
For example, in industrial applications, IoT sensors might monitor equipment for signs of wear and tear, allowing a company to detect potential failures before they cause disruptions. In healthcare, IoT devices can continuously monitor patient vitals and send updates to doctors or hospitals for immediate action.
2. Control
Control-oriented IoT applications allow users to interact with and manage devices or systems remotely. This can include turning devices on or off, adjusting settings, or configuring them to operate in specific modes. Control applications offer the following capabilities:
For example, IoT applications might control lighting, heating, and even security systems in a smart home from a central interface like a smartphone app.
3. Automation
Automation is one of the most transformative aspects of IoT applications. By automating processes based on real-time data, IoT can eliminate the need for manual intervention and optimise systems for greater efficiency. Key functions of IoT automation applications include:
In agriculture, IoT-enabled irrigation systems can automatically adjust water flow based on soil moisture readings, ensuring that crops receive optimal care without human input.
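The irrigation rule can be expressed as a simple threshold controller. In the sketch below, the thresholds are illustrative assumptions; a production system would calibrate them per crop and soil type:

```python
# Automation sketch: soil moisture drives the irrigation valve, no human input.

DRY_THRESHOLD = 0.25     # open the valve below this volumetric water content
WET_THRESHOLD = 0.40     # close it once the soil is wet enough

def irrigation_decision(soil_moisture: float, valve_open: bool) -> bool:
    """Return the new valve state for the latest sensor reading."""
    if soil_moisture < DRY_THRESHOLD:
        return True              # too dry: irrigate
    if soil_moisture > WET_THRESHOLD:
        return False             # target reached: stop
    return valve_open            # in between: keep the current state

valve = False
for reading in [0.31, 0.24, 0.28, 0.42]:     # simulated sensor readings
    valve = irrigation_decision(reading, valve)
    print(f"moisture={reading:.2f} -> valve {'open' if valve else 'closed'}")
```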
4. Data-Driven Insights
One of the most significant advantages of IoT applications is their ability to extract valuable insights from the vast amounts of data generated by devices. These insights can inform business decisions, optimise operations, and improve outcomes across various sectors. Key capabilities of data-driven IoT applications include:
IoT data can track vehicle performance, predict maintenance needs, and enhance fuel efficiency in the automotive industry. Similarly, in the energy sector, IoT applications help to analyse consumption patterns and make adjustments that improve energy efficiency and reduce costs.
5. Security and Privacy
IoT applications also play a critical role in securing IoT devices and the data they generate. As the number of connected devices increases, ensuring the privacy and security of sensitive information is essential. IoT security applications focus on:
Data Encryption: Securing data both in transit and at rest to prevent unauthorised access or breaches.
For example, in a smart home, an IoT security system could monitor unauthorised access attempts and alert homeowners while enabling remote surveillance.
6. Integration with Other Systems
Many IoT applications are not standalone but integrate with other systems or platforms to enhance functionality. These integrations span various sectors, including enterprise resource planning (ERP), customer relationship management (CRM), and cloud platforms. Some common integrations include:
For example, in smart cities, IoT applications integrate with traffic management systems, environmental sensors, and city services, enabling more efficient and responsive urban management.
The true value of IoT applications lies in their ability to convert raw data from connected devices into actionable insights, drive automation, and improve decision-making. Whether for monitoring, control, or automation, IoT applications are revolutionising industries by improving efficiency, reducing costs, and enhancing user experiences. As IoT technology evolves, the potential for even more advanced, intelligent, and integrated applications will only grow, further embedding IoT into our daily lives and business operations.
Nowadays, virtually every IoT system processes sensitive data directly or indirectly. Many of those systems are mission-critical ones.
As the number of IoT devices grows, the need for robust security measures becomes even more critical. Protecting the sensitive data collected by these devices from unauthorised access, tampering, or misuse is paramount to ensuring the integrity and privacy of users and organisations. Thus, network security should be considered when designing IoT networks and systems to ensure they are secure by design.
Security in IoT Networks:
Security within IoT networks is a multifaceted concern, as IoT devices often operate in decentralised and dynamic environments. These devices communicate through wireless networks, making them vulnerable to various cyberattacks. Given that IoT systems are frequently connected to the cloud or other external networks, vulnerabilities in one device can expose the entire network to risks. Hence, strong security protocols are essential for data protection in these networks.
Key Security Measures
Securing IoT networks requires a comprehensive, multi-layered approach that addresses various security aspects. By implementing measures like encryption, authentication, authorisation, and regular software updates, organisations can significantly reduce the risk of data breaches and unauthorised access to IoT systems. While IoT security presents significant challenges, these challenges can be mitigated with careful planning, robust protocols, and a proactive security strategy.
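Two of these measures, transport encryption and client authentication, can be illustrated with an MQTT connection secured by TLS. The sketch below uses the paho-mqtt library; the broker address and credentials are placeholders, and on paho-mqtt 2.x the Client constructor additionally expects a callback API version argument:

```python
# TLS-encrypted, authenticated MQTT connection sketch with paho-mqtt.

import ssl
import paho.mqtt.client as mqtt

client = mqtt.Client()                        # paho-mqtt >= 2.0: pass a CallbackAPIVersion
client.username_pw_set("device-42", password="not-a-real-secret")
client.tls_set(cert_reqs=ssl.CERT_REQUIRED)   # verify the broker certificate
client.connect("broker.example.local", 8883)  # 8883: conventional MQTT-over-TLS port
client.publish("secure/telemetry", payload="21.7", qos=1)
client.disconnect()
```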
An IoT (Internet of Things) network comprises interconnected IoT nodes, including sensors, actuators, and fog nodes. Each IoT node typically includes several key components: a power supply system, a processing unit (such as microprocessors, microcontrollers, or specialised hardware like digital signal processors), communication units (including radio, Ethernet, or optical interfaces), and additional electronic elements (e.g., sensors, actuators, and cooling mechanisms). These components work in unison to enable the node to collect, process, and transmit data effectively, supporting various IoT applications.
The architecture of a typical IoT network is structured into four main layers: the perception layer, the fog layer, the Internet core network (transport layer), and the cloud data centre. This multi-layered structure allows for scalability, efficiency, and optimised data processing.
In an IoT network, the seamless integration of these layers enables efficient data collection, processing, and transmission. This layered approach supports diverse applications, from smart homes with automated climate control and security systems to large-scale industrial automation, smart cities, and agricultural monitoring. The robust structure of IoT networks allows for scalable solutions that can adapt to the needs of various industries, enhancing productivity, efficiency, and quality of life.
Details on networks are presented in the following chapters:
IoT networks are structured networks in which nodes are organised according to a defined arrangement, or topology. An IoT network topology is the structural layout of devices (nodes) in an IoT network, shaping how devices communicate and how data flows between them. The choice of topology significantly impacts the network’s performance, reliability, scalability, and cost. Below is an expanded discussion of fundamental IoT network topologies, their attributes, advantages, challenges, and use cases.
1. Star Topology
In a star topology (figure 24), all devices are connected directly to a central hub or gateway, which serves as the network’s communication and coordination point. Nodes within the gateway's radio propagation range can communicate with it directly; a node outside that coverage range is cut off from the network.
Advantages
Disadvantages
Use Cases
2. Tree Topology
Tree topology (figure 25) organises devices hierarchically, with a root node at the top and subsequent devices forming branches at multiple levels. It is a structured extension of the star topology. In this type of topology, some nodes operate as relays for others. If one of the relays fails (crashes or experiences poor link quality), all the descendant nodes that depend on it will be disconnected from the network.
A particular case of tree topology, a tree of trees (available, among other technologies, in Bluetooth), is called a scatternet.
Advantages
Disadvantages
Use Cases
3. Mesh Topology
In a mesh topology (figure 26), each device is interconnected with one or more other devices, creating multiple communication paths. Mesh networks can be partial (only some nodes interconnected) or full (all nodes interconnected). A mesh extends the tree topology by adding redundant paths: each node has at least two neighbours to which a packet can be transmitted, so if some nodes fail, multi-hop traffic flow is not interrupted.
Advantages
Disadvantages
Use Cases
4. Linear Topologies
Linear topology (figure 27) connects devices sequentially, linking each node to its immediate neighbours. A variation is a linear topology with redundancy, which allows each node to connect to its two adjacent neighbours, both in front and behind, providing backup routing capabilities in case one of the nodes fails. In linear topologies, all nodes except the last must be capable of functioning as data relays.
Advantages
Disadvantages
Use Cases
5. Bus Topology
In a bus topology (figure 28), all devices share a common communication backbone, and data is broadcast across the bus.
Advantages
Disadvantages
Use Cases
6. Ring Topology
Ring topology (figure 29) arranges devices in a closed loop, where data travels around the ring in one or both directions.
Advantages
Disadvantages
Use Cases
7. Hybrid Topology
Hybrid topology (figure 30) combines elements of multiple topologies to create a customised network that leverages their strengths and minimises weaknesses.
Advantages
Disadvantages
Use Cases
Choosing the proper IoT network topology requires carefully evaluating the application’s needs, including reliability, scalability, cost, and energy efficiency. Often, IoT deployments use a combination of topologies to optimise performance across diverse requirements. Understanding each topology’s strengths and limitations is essential for designing effective IoT networks.
Designing an Internet of Things (IoT) network requires tackling an intricate mix of technical, operational, and economic factors. These challenges stem from the diverse requirements and constraints of IoT applications, and it is essential to consider them when designing IoT networks. Below is a brief discussion of these factors and challenges, which are also listed in figure 31.
Hardware Limitations
IoT devices are typically constrained by size, cost, and power limitations. These limitations present several design challenges:
Range
IoT networks vary significantly in terms of communication range, which influences their architecture and cost:
Bandwidth
Efficient bandwidth management is critical to ensure the smooth operation of IoT networks:
Energy Consumption and Battery Life
Energy efficiency is vital for IoT devices, especially those deployed in remote locations:
Quality of Service (QoS)
Delivering consistent performance in IoT networks is challenging due to the following factors:
Security
Security remains one of the most critical and challenging aspects of IoT network design:
Flexibility
IoT networks need to be adaptable to meet evolving application requirements:
Cost
Balancing performance and affordability is a persistent challenge in IoT network design:
Interoperability
Ensuring seamless interaction between diverse devices and platforms is essential for IoT success:
User Interface Requirements
The usability of IoT systems directly impacts user adoption and satisfaction:
Standardisation
A lack of unified standards hinders IoT scalability and integration:
In addressing these considerations, IoT network designers must adopt a holistic approach that balances technical requirements, user needs, and cost constraints while embracing innovation and collaboration to build scalable, reliable, and secure systems.
The backbone of the Internet of Things (IoT) lies in its communication and networking technologies, which enable the seamless interconnection of devices and facilitate data exchange across networks. These technologies are fundamental to the functioning of IoT systems and are tailored to meet various needs, including scalability, energy efficiency, cost, and performance. They can be broadly categorised into network access technologies, networking technologies, and high-level communication protocols. A sample protocol stack for IoT communication networks is presented in figure 32.
IoT network access technologies serve as the backbone of the Internet of Things (IoT) ecosystem by providing the essential means to connect devices to a network and enable seamless data communication. These technologies ensure that devices, sensors, and actuators can transmit and receive data efficiently, allowing the coordination and functionality required for IoT applications. The choice of technology depends on the specific requirements of the IoT application, which may vary significantly based on factors such as range, power consumption, data rate, cost, network density, and environmental constraints.
For example, IoT applications in smart homes and wearable technology prioritise low power consumption and short-range connectivity. In contrast, industrial IoT, smart agriculture, and smart cities often require long-range communication with low power usage to connect devices spread across large areas. Understanding the strengths and limitations of each access technology is critical to optimising network performance, reliability, and cost-effectiveness. IoT access technologies can be broadly categorised into short-range and long-range communication technologies, each tailored to address specific use cases in IoT deployments:
Short-range technologies are designed for close proximity communication, typically ranging from a few centimetres to a few hundred meters. They are often used in localised IoT applications like smart homes, wearable devices, and industrial automation.
Examples include technologies like Radio Frequency Identification (RFID), which is widely used for inventory tracking; Near Field Communication (NFC), which powers secure contactless payments; and Bluetooth Low Energy (BLE), which supports low-power connections in consumer electronics and medical devices. Short-range communication technologies are typically characterised by low latency, making them ideal for applications requiring frequent and real-time communication between devices.
1. Radio Frequency Identification (RFID)
Description
Radio Frequency Identification (RFID) technology leverages electromagnetic fields to wirelessly identify, track, and communicate with objects. The system typically consists of two main components: RFID tags, which contain stored data, and RFID readers, which capture and process this data. The tags can be attached to physical objects, enabling them to transmit information when brought into proximity with an RFID reader.
RFID tags are further classified into two types:
1. Passive RFID Tags
2. Active RFID Tags
RFID systems operate across various frequency ranges, including:
Applications
RFID technology is widely employed in various sectors, including:
RFID's ability to wirelessly and efficiently capture real-time data has made it an indispensable tool in IoT applications, bridging the gap between physical objects and digital systems.
Advantages
Limitations
2. Near Field Communication (NFC)
Near-field communication (NFC) is a specialised subset of Radio Frequency Identification (RFID) technology that enables wireless communication between devices over a very short range, typically 10 centimetres or less. Operating at a frequency of 13.56 MHz, NFC facilitates secure, fast, and convenient data exchange by bringing two NFC-enabled devices close together. Unlike standard RFID systems, NFC allows bidirectional communication, meaning both devices can send and receive data. This feature makes NFC more versatile, enabling it to support a broader range of applications beyond simple identification and tracking.
Key Characteristics of NFC
Modes of Operation
NFC supports three primary modes of operation:
Applications
NFC is widely adopted in various domains due to its security, simplicity, and versatility:
NFC's combination of security, ease of use, and broad application potential makes it a cornerstone technology in the modern IoT ecosystem. It seamlessly connects devices and services for enhanced user experiences.
Advantages
Limitations
3. Bluetooth Low Energy (BLE)
Bluetooth Low Energy (BLE) is an advanced iteration of Bluetooth technology designed to meet low-power IoT application demands. It operates in the globally available 2.4 GHz Industrial, Scientific, and Medical (ISM) frequency band and is engineered to balance power efficiency, performance, and cost. BLE is ideal for devices requiring long battery life and intermittent data transmission, such as wearables, sensors, and smart home gadgets.
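As a quick illustration of how accessible BLE is from application code, the short Python sketch below (using the third-party bleak library; a host with a working BLE adapter is assumed) scans for nearby advertising devices:

import asyncio
from bleak import BleakScanner

async def main():
    # Scan for advertising BLE devices for five seconds.
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        print(d.address, d.name)

asyncio.run(main())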
Key Features of BLE
Advantages of BLE
Limitations of BLE
Applications of BLE
BLE is a key enabler of the IoT revolution, bridging devices with varying resource constraints and providing robust, energy-efficient connectivity. Its versatility makes it a popular choice for applications requiring cost-effective, low-power wireless communication, making it integral to the growth of interconnected smart systems.
4. Zigbee
Description
Zigbee is a wireless communication protocol designed specifically for low-power, low-data-rate applications, making it a popular choice for Internet of Things (IoT) networks. It operates primarily in the 2.4 GHz ISM band but can also use 868 MHz (Europe) and 915 MHz (US) bands, offering global versatility. Zigbee is well-suited for applications requiring short-range communication and mesh networking, such as smart homes, industrial automation, and healthcare monitoring systems.
Key Features of Zigbee
1. Low Power Consumption: Zigbee is optimised for battery-powered devices that need to run for extended periods (typically several years) without frequent battery replacements or recharges. It achieves this through low power consumption during active and idle states, making it ideal for sensor networks and other energy-constrained IoT applications.
2. Mesh Networking
3. Short-Range Communication
4. Low Data Rates
5. Security
6. Scalability
Zigbee Network Topologies
Zigbee supports multiple network topologies, each suited for different application requirements:
Applications of Zigbee
Zigbee is used in various IoT applications, especially those that require low power, short-range communication, and mesh networking. Some of the key applications include:
Advantages of Zigbee
Limitations of Zigbee
Zigbee is a versatile and energy-efficient IoT networking technology that is well-suited for a wide range of low-power, short-range applications. Its mesh networking capabilities, low power consumption, and scalability make it an excellent choice for smart homes, industrial IoT, healthcare, and energy management systems. While it may not be ideal for high-bandwidth applications, it excels in use cases where small amounts of data must be transmitted over a reliable and resilient network of devices.
Long-range communication technologies are designed to connect devices over large distances, often spanning several kilometres. These technologies are critical for IoT deployments in rural areas, industrial environments, and outdoor applications like smart agriculture, smart cities, and environmental monitoring. Long-range technologies prioritise energy efficiency and scalability, often sacrificing data rates to ensure consistent performance in low-power and resource-constrained environments.
Notable examples include Low-Power Wide-Area Networks (LPWAN) technologies like LoRa and SigFox, which enable long-range communication with minimal power consumption. Cellular IoT technologies such as Narrowband IoT (NB-IoT) and LTE-M leverage existing mobile networks to provide reliable and scalable connectivity for IoT devices. Additionally, satellite IoT solutions extend coverage to remote and maritime areas, enabling global IoT connectivity.
Low Power Wide Area Networks (LPWAN)
LPWAN technologies are a class of wireless communication protocols engineered to meet the unique demands of IoT applications requiring long-range connectivity, low power consumption, and support for massive deployments. These technologies are particularly suited for scenarios where devices operate on limited power sources, such as batteries, for extended periods—sometimes years—while transmitting small amounts of data over long distances. LPWANs have become a cornerstone of outdoor IoT deployments, enabling connectivity in areas where traditional networking solutions like WiFi or cellular networks would be inefficient or too costly. They are commonly used in applications ranging from environmental monitoring to smart agriculture and industrial IoT.
Key Characteristics of LPWAN
Advantages of LPWAN
Challenges of LPWAN
Applications of LPWAN
LPWAN technologies have revolutionised IoT by addressing the challenges of long-range communication and energy efficiency. They continue to drive innovation in industries requiring scalable, low-cost connectivity across diverse and remote environments.
1. LoRa (Long Range)
Description
LoRa (Long Range) is a leading networking technology used for long-range, low-power, and low-data-rate IoT (Internet of Things) applications. It is part of the LPWAN (Low Power Wide Area Network) family, specifically designed to meet the unique needs of IoT systems by offering long-range communication capabilities while maintaining energy efficiency. LoRa technology is best known for its ability to support IoT devices deployed across vast areas, including rural and remote locations. It is ideal for many use cases, from smart cities to agriculture and environmental monitoring.
LoRa uses a Chirp Spread Spectrum (CSS) modulation technique, which is central to its ability to provide long-range communication while keeping power consumption low. Chirp Spread Spectrum spreads the signal over a wide frequency band, making it more resilient to interference, improving the signal-to-noise ratio, and allowing extended-range communications. This feature enables LoRa to perform well in various environments, even where traditional wireless communication technologies like WiFi or Bluetooth would struggle.
LoRa operates in unlicensed frequency bands (typically 868 MHz in Europe, 915 MHz in North America, and 433 MHz in Asia). IoT devices using LoRa can communicate without paying spectrum licenses, reducing deployment costs.
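The trade-off between range and throughput can be made concrete with the nominal LoRa bit-rate formula Rb = SF * (BW / 2^SF) * CR, where SF is the spreading factor, BW the bandwidth, and CR the coding rate. The minimal Python sketch below evaluates it for a 125 kHz channel with a 4/5 coding rate (typical European settings, used here as illustrative assumptions); higher spreading factors extend range but shrink the bit rate from roughly 5.5 kbit/s at SF7 to below 300 bit/s at SF12:

# Nominal LoRa bit rate: Rb = SF * (BW / 2**SF) * CR.
def lora_bit_rate(sf, bw_hz=125_000, cr=4 / 5):
    return sf * (bw_hz / 2**sf) * cr

for sf in range(7, 13):        # SF7 (fastest) .. SF12 (longest range)
    print(f"SF{sf}: {lora_bit_rate(sf):6.0f} bit/s")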
Key Features of LoRa Technology
LoRaWAN – The Network Protocol
LoRaWAN (LoRa Wide Area Network) is the protocol that operates on top of LoRa and enables communication between devices and a central server or cloud platform. While LoRa defines the physical layer and the radio communication, LoRaWAN adds the necessary protocols for routing, addressing, and managing communication within a LoRa network.
LoRaWAN supports private networks (where a single organisation manages the infrastructure) and public networks (where multiple users share a common infrastructure). The LoRaWAN protocol defines several key features:
Advantages
Use Cases
Limitations
LoRa technology offers a powerful solution for long-range, low-power IoT applications. It can support large-scale networks over vast geographic areas. Its simplicity, energy efficiency, and scalability make it ideal for various industry applications. By combining long-range communication with minimal power consumption, LoRa is driving the growth of the IoT ecosystem, particularly in areas where other wireless communication technologies fall short.
2. SigFox
Description
SigFox is a proprietary Low-Power Wide-Area Network (LPWAN) solution designed specifically for ultra-narrowband communication in the Internet of Things (IoT). It is a unique, highly energy-efficient technology that enables long-range connectivity for many IoT devices. SigFox operates in unlicensed radio frequency bands (typically 868 MHz in Europe, 915 MHz in North America, and 433 MHz in some parts of Asia) and utilises ultra-narrowband (UNB) communication to transmit small packets of data over long distances.
The key feature of SigFox is its ultra-narrowband technology, which significantly reduces the spectrum used by each signal. Unlike traditional wireless communication technologies, which use broader bandwidths for communication, SigFox's UNB communication minimises the energy and spectrum requirements, making it particularly well-suited for IoT devices that transmit small amounts of data over long distances without consuming much power. As a result, SigFox can provide reliable coverage across large areas, with an effective range of up to 50 kilometres in rural environments and 10-15 kilometres in urban areas.
SigFox's design is based on simplicity and efficiency, which are reflected in how it handles data. Each SigFox message can carry a payload of up to 12 bytes and is transmitted in short bursts. These small message sizes are ideal for many IoT applications where devices only need to send simple, periodic updates (e.g., sensor readings or status updates). SigFox operates on a star topology, where devices communicate directly with base stations (or “anchors”) that relay the data to the SigFox cloud platform for processing and integration with other systems.
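The practical consequences of these constraints are easy to quantify. Assuming the commonly cited European limit of about 140 uplink messages per day (a consequence of duty-cycle regulations) and the 12-byte payload, a rough daily uplink budget can be computed as follows:

# Commonly cited European SigFox limits (assumptions for this estimate):
MESSAGES_PER_DAY = 140     # duty-cycle-limited uplink messages
PAYLOAD_BYTES = 12         # maximum payload per message

daily_bytes = MESSAGES_PER_DAY * PAYLOAD_BYTES
print(f"Daily uplink budget: {daily_bytes} bytes")   # -> 1680 bytes
# Enough for periodic sensor readings; far too little for
# firmware updates or any form of streaming.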
Advantages of SigFox
Limitations of SigFox
Use Cases
SigFox is particularly suited for IoT applications that require low-bandwidth, long-range connectivity with minimal power consumption. Some everyday use cases include:
SigFox is a highly efficient and cost-effective LPWAN technology for long-range, low-power, and low-data-rate IoT applications. Its strengths lie in its simplicity, scalability, and suitability for applications requiring infrequent, small data transmissions over large distances. However, its limited data rate and message frequency constraints may not be suitable for high-bandwidth or real-time communication requirements.
3. Narrowband IoT (NB-IoT)
Description
NB-IoT (Narrowband IoT) is a cellular-based, low-power wide-area network (LPWAN) technology designed specifically for IoT (Internet of Things) applications. It is optimised to provide wide-area coverage, low power consumption, and support for many connected devices. Unlike traditional cellular networks, NB-IoT is designed to meet the unique needs of IoT devices, offering extended battery life, cost-effective communication, and reliable coverage in challenging environments.
Developed as part of the 3GPP (3rd Generation Partnership Project) standards, NB-IoT is a low-bandwidth technology that uses narrow channels within existing cellular networks to deliver robust IoT connectivity. It operates primarily in licensed spectrum bands, leveraging the infrastructure deployed by mobile network operators, making it a cost-effective solution for global IoT connectivity.
NB-IoT operates in a narrowband, typically using a 200 kHz channel, which is significantly smaller than the bandwidth used by other cellular technologies like LTE. This narrow channel is optimised for low data-rate transmissions and is designed to efficiently handle small, infrequent data bursts. The technology uses existing cellular infrastructure but requires a modified version of the standard LTE (Long-Term Evolution) framework. NB-IoT can be deployed in standalone mode (where it is deployed independently of other cellular technologies) or in in-band mode (where it uses unused resources within existing LTE networks).
Devices using NB-IoT typically send small packets of data with low frequency, making the technology well-suited for applications where devices don't need continuous communication but must report data periodically. NB-IoT also supports power-saving mechanisms that allow devices to sleep for extended periods between transmissions. This is ideal for IoT devices in remote locations or situations requiring long battery life.
Key Features of NB-IoT
Advantages of NB-IoT
Limitations of NB-IoT
Use Cases of NB-IoT
NB-IoT represents a key advancement in IoT networking technologies, offering long-range coverage, low power consumption, high device density, and cost-effective connectivity. Its ability to operate on existing cellular networks and deliver reliable communication for low-data-rate applications makes it ideal for various IoT use cases, particularly in remote monitoring, asset tracking, and smart cities. Although it is unsuitable for high-bandwidth applications, its extensive coverage, scalability, and security make it a vital technology for IoT ecosystems across the globe.
4. LTE-M (Long-Term Evolution for Machines)
Description
LTE-M, or Long Term Evolution for Machines, is a cellular-based networking technology designed explicitly for the Internet of Things (IoT). It is part of the broader LTE (Long-Term Evolution) family, the backbone of most modern mobile communication systems. LTE-M, however, has been optimised for low-power, wide-area (LPWA) IoT applications, offering a balance between low power consumption and relatively higher data rates compared to other IoT technologies like NB-IoT (Narrowband IoT). LTE-M is primarily used for machine-to-machine (M2M) communications, where devices such as sensors, meters, trackers, and industrial equipment must connect to the network to transmit small or moderate amounts of data. LTE-M operates within the licensed spectrum and is built to leverage the existing LTE infrastructure. It is a natural choice for mobile network operators looking to extend their coverage to IoT devices with relatively higher mobility and more substantial data throughput needs.
LTE-M operates in licensed spectrum, leveraging the existing cellular infrastructure that supports 4G LTE technologies. It can be deployed as a standalone solution or alongside other IoT technologies, such as NB-IoT, to provide different coverage and data rate options for various IoT use cases. The architecture of LTE-M is similar to that of standard LTE, but it is optimised for lower power consumption and low-data applications. LTE-M utilises FDD (Frequency Division Duplex) for data communication, allowing simultaneous two-way communication and providing a more efficient link for IoT devices. LTE-M devices are typically connected for long periods, sending data in bursts or based on scheduled events (e.g., temperature readings and location updates). This allows LTE-M devices to stay in sleep modes and only transmit data periodically, conserving energy and maximising battery life.
Key Features of LTE-M
Advantages of LTE-M
Limitations of LTE-M
Use Cases of LTE-M
LTE-M is a versatile, scalable, and efficient IoT networking technology that balances low power consumption with higher data rates, global coverage, and excellent mobility support. It is well-suited for many IoT applications, particularly those involving mobile devices or moderate data throughput needs.
5. Haystack
Description
Haystack is an open-source, low-power, wide-area network (LPWAN) technology designed to provide long-range, scalable communication solutions for the Internet of Things (IoT). It aims to address the challenges of IoT deployments that require long-range communication while maintaining energy efficiency, ease of integration, and cost-effectiveness. While not as widely known as LoRa or SigFox, Haystack offers a robust solution for IoT networks that must scale over large areas, particularly in industrial and infrastructure monitoring applications.
Haystack is designed to enable connectivity over large areas using unlicensed radio spectrum bands (like 868 MHz, 915 MHz, etc.), which lowers the cost of deployment since there is no need to pay for spectrum licenses. It uses a combination of technologies and protocols to ensure efficient communication in environments with low power consumption and long-range needs.
Haystack devices communicate through LPWAN gateways and use data aggregation and mesh networking strategies to extend their reach and enable scalable IoT deployments. These devices typically operate in a star or mesh network topology, where they communicate directly with the gateway or hop from one device to another to get data to a central gateway.
Key Features of Haystack Technology
Haystack vs. Other LPWAN Technologies
Applications of Haystack Technology
Challenges and Limitations of Haystack
Haystack represents a promising LPWAN solution for IoT deployments, particularly for those seeking a flexible, cost-effective, and open-source alternative to more established technologies. It excels in long-range communication, low power consumption, and scalability, making it suitable for various IoT applications, especially in industrial, agriculture, and smart city domains. However, its adoption is still growing, and its ecosystem is not as developed as other LPWAN technologies, meaning it may not yet be the first choice for every IoT deployment.
Networking technologies establish the foundation for communication between IoT devices and systems, ensuring efficient routing, addressing, and connectivity. IoT networking technologies are based on IPv6 (Internet Protocol version 6), the latest version of the Internet Protocol (IP), designed to address the limitations of its predecessor, IPv4. IPv6 introduces a vastly larger address space and enhanced features tailored to modern networking needs, making it a cornerstone for the Internet of Things (IoT). With the exponential growth of IoT devices, IPv6 plays a critical role in enabling seamless communication, scalability, and efficient management.
Key Features of IPv6
Benefits and Applications of IPv6 in IoT
IPv6 Technologies for IoT Networking
Several protocols and technologies built on IPv6 are specifically tailored for IoT applications:
1. 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Networks)
A lightweight adaptation of IPv6 for resource-constrained devices, 6LoWPAN allows IoT devices to operate efficiently over low-power, low-data-rate wireless networks.
Features
Use Cases: Smart homes, industrial IoT, and environmental monitoring.
2. RPL (Routing Protocol for Low-Power and Lossy Networks)
A routing protocol designed for IPv6 networks with constrained devices and lossy communication links.
Features
Use Cases: Smart cities, precision agriculture, and remote monitoring systems.
3. ND (Neighbor Discovery Protocol)
An IPv6 protocol used for device discovery and address resolution in IoT networks.
Features
Use Cases: Connected vehicles, healthcare devices, and smart appliances.
4. CoAP (Constrained Application Protocol)
Although not exclusively an IPv6 technology, CoAP operates over IPv6 to provide lightweight RESTful communication for constrained IoT devices.
Features
Use Cases: Smart lighting, HVAC systems, and energy management.
Challenges of IPv6 in IoT
Real-World Applications of IPv6 in IoT
IPv6 is a transformative IoT technology that addresses scalability, security, and efficiency challenges in connected ecosystems. Its vast address space, robust features, and compatibility with advanced IoT protocols make it an essential enabler for the IoT revolution. By leveraging IPv6, organisations can build scalable, secure, and future-proof IoT networks that cater to diverse applications across industries.
High-level communication protocols define how IoT devices communicate with each other or cloud services.
1. MQTT (Message Queue Telemetry Transport)
MQTT is a lightweight, publish-subscribe messaging protocol ideal for constrained devices. It uses a central broker to exchange communication among IoT devices, and IoT nodes connect to it. Devices play the role of either a publisher or a subscriber, or both. MQTT uses topics to uniquely address the exchanged data. Subscribers “subscribe” to selected topics, and the broker is responsible for ensuring the proper distribution of the messages. Subscribers can use wildcards to cover many topics with a single subscription. The communication model is virtually N:N, so one publisher can send messages to many subscribers, and a subscriber can receive messages from multiple publishers. The broker can retain messages and has a “last will” feature to notify subscribers when it detects a broken connection. Regular implementations of MQTT use TCP to connect nodes to the broker.
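To make the publish-subscribe model concrete, here is a minimal sketch in Python using the paho-mqtt client library (1.x API); the broker address and topic names are illustrative assumptions. The node subscribes with a single-level wildcard and publishes one retained reading:

import paho.mqtt.client as mqtt

# Called by the network loop for every message on a subscribed topic.
def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()                     # paho-mqtt 1.x constructor
client.on_message = on_message
client.connect("localhost", 1883)          # broker address is an assumption
client.subscribe("home/+/temperature")     # '+' is a single-level wildcard
client.publish("home/kitchen/temperature", "21.5", retain=True)
client.loop_forever()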
Advantages
Disadvantages
2. AMQP (Advanced Message Queuing Protocol)
AMQP is designed to deliver robust messaging in enterprise-grade IoT systems. It uses mechanisms similar to MQTT, with a central server, also called a broker, implementing so-called “exchanges” with queues. AMQP is flexible, with various exchange models that ensure the correct flow from publishers to consumers. Although it runs over TCP, AMQP adds its own acknowledgement mechanism to ensure delivery over unreliable networks. A predefined set of exchanges is given in version 0-9-1 of the protocol: Direct Exchange, Fanout Exchange, Topic Exchange and Headers Exchange, but users can define other models. Services are addressed with a URI scheme, similar to CoAP.
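The exchange-and-queue model can be illustrated with a minimal Python sketch using the pika client against a RabbitMQ broker; the exchange, queue, and routing-key names are illustrative assumptions. A topic exchange routes each published reading to every queue whose binding pattern matches the routing key:

import pika

# Connect to a local RabbitMQ broker (address is an assumption).
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# A topic exchange routes messages by pattern-matching routing keys.
channel.exchange_declare(exchange="telemetry", exchange_type="topic")
channel.queue_declare(queue="temperature")
channel.queue_bind(queue="temperature", exchange="telemetry",
                   routing_key="*.temperature")

# Publish one reading; delivery_mode=2 asks the broker to persist it.
channel.basic_publish(exchange="telemetry",
                      routing_key="greenhouse1.temperature",
                      body="21.5",
                      properties=pika.BasicProperties(delivery_mode=2))
connection.close()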
Advantages
Disadvantages
3. CoAP (Constrained Application Protocol)
CoAP is a RESTful protocol for resource-constrained IoT devices. In CoAP, every node provides a service virtually available to any connecting client, so the messaging model is 1:1 but distributed among devices. Unlike MQTT and AMQP, CoAP has no central broker; each IoT node can create a service endpoint. CoAP is similar to HTTP but much more straightforward regarding resources and implementation. CoAP uses UDP and URIs to address endpoints. A URI can contain the IP address or service name, a port, and a path. The implementation foresees scenarios with delayed replies to the request message for sleepy (rarely awake) devices. Because of the underlying UDP protocol, communication is stateless, but each request-response pair is identified with a token. CoAP's specification includes a “discovery” mechanism so that IoT devices can present their endpoints to other devices connected to the network in an automated way. CoAP messages can be proxied and cached.
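The request-response model translates into very little code. The sketch below uses the Python aiocoap library; the node address and resource path are illustrative assumptions:

import asyncio
from aiocoap import Context, Message, GET

async def main():
    # Create a client context; aiocoap handles the UDP transport.
    protocol = await Context.create_client_context()
    # GET a (hypothetical) temperature resource on a nearby node.
    request = Message(code=GET, uri="coap://192.168.1.50/sensors/temperature")
    response = await protocol.request(request).response
    print(f"{response.code}: {response.payload.decode()}")

asyncio.run(main())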
Advantages
Disadvantages
4. Lightweight Machine-to-Machine (LWM2M)
Lightweight Machine-to-Machine (LWM2M) is a communication protocol for managing IoT devices with constrained resources. Developed by the Open Mobile Alliance (OMA), it offers an efficient, interoperable framework for device management and data exchange between IoT devices and management platforms. LWM2M is particularly suited for devices with limited computational power, memory, or energy resources, such as battery-powered sensors or actuators.
Key Features:
1. Resource Efficiency:
2. Interoperability:
3. Security:
4. Device Management:
5. Data Models:
Advantages
5. UltraLight 2.0
UltraLight 2.0 is a lightweight, text-based payload protocol for resource-constrained devices, best known from the FIWARE ecosystem, where an IoT Agent translates its compact messages into the platform's context data model.
Key Features
1. Minimalism:
2. Low Bandwidth Usage:
3. Compatibility with FIWARE:
4. Ease of Implementation:
5. Stateless Communication:
Advantages
Designing a network for the Internet of Things (IoT) requires a strategic approach integrating scalability, security, efficiency, and interoperability. IoT network design methodologies revolve around creating robust, flexible, and efficient networks supporting diverse devices, applications, and services. These methodologies emphasise handling large volumes of data, ensuring real-time communication, and maintaining high levels of security and reliability. This section explores the principles, methodologies, challenges, and best practices for designing IoT networks.
Below is a list of principles regarding IoT Network Design. Those principles vary from application to application but, in general, include (figure 33):
A short review of the IoT Network Design Methodologies is presented in figure 34 and described below.
1. Hierarchical Design
A hierarchical approach organises the IoT network into distinct layers, typically categorised as:
Advantages
2. Edge-Centric Design
It focuses on processing data closer to where it is generated, at the network edge. Edge devices like gateways and edge servers handle computation, storage, and analysis.
Advantages
3. Mesh Networking
It employs a decentralised design where devices connect directly to each other in a peer-to-peer manner. Mesh networks are often used in smart homes, industrial IoT, and smart cities.
Advantages
4. Centralised Design
It involves a hub-and-spoke model in which devices connect to a central controller, gateway, or server for data processing and management.
Advantages
5. Cloud-Based Design
Data from IoT devices is transmitted to a centralised cloud platform for processing, storage, and management. Cloud providers also offer analytics, machine learning, and application integration services.
Advantages
6. Hybrid Design
It combines edge and cloud computing to leverage their benefits. Critical, low-latency tasks are processed at the edge, while large-scale analytics and storage are handled in the cloud.
Advantages
Standard design workflow for IoT Networks includes the following steps (figure 35):
1. Requirement Analysis:
Identify the purpose of the IoT system, including device types, communication needs, expected data volumes, and performance requirements.
2. Topology Selection:
Choose the most suitable topology (e.g., star, mesh, tree, hybrid) based on the use case, device distribution, and scalability needs.
3. Protocol and Communication Technology:
Select protocols and technologies for connectivity:
4. Bandwidth and Capacity Planning
Ensure the network can handle peak data loads without performance degradation (a rough capacity check is sketched after this list).
5. Security Architecture:
6. Energy Management
Design for energy efficiency using low-power communication protocols and scheduling device wake-up times.
7. Testing and Optimisation
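Returning to step 4, a rough pre-deployment capacity check can be sketched in a few lines of Python; all figures are illustrative assumptions:

# Back-of-the-envelope uplink load estimate (all figures assumed).
devices = 5_000              # nodes served by one gateway
payload_bytes = 50           # application payload per message
overhead_bytes = 40          # headers, acknowledgements, retries
interval_s = 60              # one message per device per minute

aggregate_bps = devices * (payload_bytes + overhead_bytes) * 8 / interval_s
print(f"Average uplink load: {aggregate_bps / 1000:.0f} kbit/s")
# Compare against the uplink capacity of the chosen access technology
# and leave headroom (e.g. 50%) for bursts and retransmissions.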
IoT network design is a demanding process, and once started, it should address several challenges, including those presented in figure 36 and discussed below.
1. Device Diversity:
Supporting multiple device types, protocols, and standards is complex and may lead to compatibility issues.
2. Scalability:
Managing millions of devices and their data streams requires robust and scalable solutions.
3. Security Threats:
IoT networks are vulnerable to attacks such as DDoS, data breaches, and device hijacking. Integrating security systems into IoT networks is challenging due to hardware and networking resource constraints.
4. Latency Sensitivity:
Real-time applications demand ultra-low latency, which can be challenging in distributed environments.
5. Resource Constraints:
Balancing performance and energy efficiency for resource-constrained devices is a persistent challenge.
6. Regulatory Compliance
IoT networks must adhere to regional and industry-specific data privacy and security regulations.
Due to the complexity of the design process and the variety of approaches and options, and because the IoT market has by now matured through many large and small-scale real-life use cases, some best practices have emerged. Each application has its specific requirements, but the standard best practices presented in figure 37 and discussed below apply widely.
1. Use Standardised Protocols:
Ensure compatibility and interoperability by adopting widely accepted standards like MQTT, CoAP, and IPv6.
2. Implement Redundancy:
Incorporate failover mechanisms and redundant pathways to enhance reliability.
3. Prioritise Security:
Encrypt data, use secure boot processes, and enforce least privilege access policies.
4. Adopt Modular Architecture
Design the network using modular components to simplify maintenance and scalability.
5. Monitor and Manage:
Deploy monitoring tools to track performance, detect anomalies, and optimise resource utilisation.
6. Optimise for Energy Efficiency:
Use low-power wireless technologies and energy-efficient hardware.
IoT technologies are closely related to the development of general ICT technologies. The significant factors currently driving the development of IoT networks are discussed below and briefly presented in figure 38.
1. 5G/6G Networks: Future IoT networks will leverage 5G/6G technologies to achieve ultra-low latency, massive connectivity, and enhanced reliability.
2. AI-Driven Network Management: Artificial intelligence (AI) and machine learning (ML) are used to optimise IoT network performance and predict potential failures.
3. Blockchain for Security: Blockchain technology is increasingly used to secure IoT networks by providing immutable, decentralised record-keeping.
4. Digital Twins: Digital twins enable real-time simulation and optimisation of IoT networks, improving design and operation.
5. Fog Computing: Extending the capabilities of edge computing, fog computing processes data closer to devices, enhancing speed and efficiency.
IoT network design methodologies are critical for creating robust, scalable, and secure ecosystems that can handle the diverse demands of IoT applications. By adhering to structured methodologies and staying informed about emerging trends, organisations can build IoT networks that are efficient, reliable, and prepared for future challenges.
The design of a robust IoT (Internet of Things) network is fundamental to the success of any IoT project. A well-architected network ensures reliable communication between IoT devices, minimises latency, optimises power consumption, and enables efficient data transfer. However, building an IoT network is complex, requiring the integration of various technologies, protocols, and platforms. IoT network design tools assist in modelling, simulating, and managing the networks interconnecting the myriad IoT devices. This section explores the types of IoT network design tools, their features, and their use cases. A short list of tools is presented in figure 39.
IoT network design tools can be classified into the following categories:
Before deployment, network simulation tools allow developers to create and test IoT networks virtually. These tools simulate the behaviour of devices, communication protocols, and network conditions, allowing for better planning, optimisation, and troubleshooting.
Common Tools
a. Cisco Packet Tracer
b. OMNeT++
c. NS3 (Network Simulator 3)
d. Castalia
IoT networks require robust communication protocols to enable devices to exchange data efficiently. Network protocol design tools help define and optimise these protocols, ensuring they meet the specific needs of IoT environments.
Common Tools
a. Wireshark
b. Mininet
Features: A network emulator that creates custom virtual network topologies for testing network protocols.
Use Case: Used to test the interaction of IoT protocols and evaluate their scalability.
Key Benefits: High flexibility in designing and emulating IoT network topologies and protocols.
c. MQTT.fx
Connectivity is at the heart of any IoT network. These tools are designed to help manage and optimise the communication between IoT devices and their associated infrastructure (gateways, clouds, etc.).
Common Tools
a. LoRaWAN Network Server (LNS)
b. Zigbee2MQTT
c. NB-IoT (Narrowband IoT) Design Tools
Designing an efficient network topology is critical in IoT systems. These tools help create the architecture of an IoT network, determine how devices communicate with each other, and ensure data flows efficiently.
Common Tools
a. UVexplorer
UVexplorer is a network discovery and visualisation tool that simplifies the mapping and monitoring of network devices. For more details, see [9].
Features Useful for IoT Networks
1. Network Discovery:
2. Topology Mapping:
3. Device Inventory:
4. Troubleshooting:
Quickly identifies issues like unreachable devices, misconfigurations, or overloaded connections, which are critical in IoT networks where uptime is essential.
Possible use in IoT Network Design
b. Lucidchart
c. ManageEngine OpManager
ManageEngine OpManager is a comprehensive network management tool designed to monitor, manage, and maintain the health of IT and IoT infrastructure.
Features Useful for IoT Networks
1. Real-Time Monitoring:
2. Alerting and Notifications:
3. Performance Management:
IoT networks need to be able to handle high device densities and traffic loads without compromising performance. These tools allow for testing the performance of IoT networks under varying conditions.
Common Tools
a. iPerf
b. JMeter
c. LoadRunner
Security is a significant concern in IoT networks. These tools help to identify vulnerabilities and ensure that IoT systems are secure against cyber threats.
Common Tools
a. Wireshark (as mentioned above)
b. Nessus
c. Kali Linux
End-to-end IoT network platforms provide a complete solution for managing IoT networks, from device connectivity to cloud-based data analytics and security.
Designing efficient, reliable, and scalable IoT networks requires addressing challenges such as resource optimisation, communication reliability, scalability, energy efficiency, and security. Mathematical modelling is a powerful tool for tackling these challenges by providing a structured framework for analysing, simulating, and optimising IoT systems.
Key Applications of Mathematical Modeling in IoT Network Design
1. Network Topology Design
Mathematical models help design network topologies by optimising the placement of devices and gateways. Graph theory often represents IoT networks, where devices are nodes and communication links are edges. Models analyse the trade-offs between cost, latency, and coverage, enabling the design of efficient topologies.
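As a minimal illustration, a deployment can be expressed as a weighted graph and analysed with standard algorithms. The Python sketch below (using the networkx library, with invented node names and link costs) extracts a minimum-cost connected backbone and the cheapest route from a leaf node to the gateway:

import networkx as nx

# Toy deployment: edge weights model link cost (e.g. energy per packet).
G = nx.Graph()
G.add_weighted_edges_from([
    ("gateway", "n1", 1.0), ("gateway", "n2", 2.5),
    ("n1", "n2", 1.2), ("n1", "n3", 2.0), ("n2", "n3", 0.8),
])

# Minimum spanning tree: connects every node at minimal total link cost.
mst = nx.minimum_spanning_tree(G)
print(sorted(mst.edges(data="weight")))

# Cheapest route from a leaf node back to the gateway.
print(nx.shortest_path(G, "n3", "gateway", weight="weight"))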
2. Resource Allocation and Optimisation
IoT networks have limited resources like bandwidth, energy, and computational power. Optimisation techniques such as linear programming (LP), integer programming, and heuristic methods are used to allocate these resources effectively.
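A toy linear programme illustrates the idea. The Python sketch below (using scipy; all coefficients are invented for illustration) maximises the weighted reporting rates of two sensor classes under shared bandwidth and energy budgets:

from scipy.optimize import linprog

# Maximise the value of reporting rates x1, x2 (messages/hour) of two
# sensor classes under shared budgets; linprog minimises, so negate.
c = [-3.0, -1.0]                 # value per message (assumed weights)
A_ub = [[2.0, 1.0],              # bandwidth units used per message
        [1.0, 4.0]]              # energy units used per message
b_ub = [100.0, 120.0]            # bandwidth / energy available per hour
bounds = [(0, 40), (0, 40)]      # per-class rate limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)           # optimal rates and the total value

With these numbers the solver saturates the more valuable sensor class first (x1 = 40) and fills the remaining budget with the second (x2 = 20).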
3. Communication and Data Flow Management
Mathematical models ensure reliable data transmission in IoT networks by addressing packet loss, latency, and congestion issues. Queueing theory is often applied to model data traffic, while game theory can optimise device decision-making.
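For instance, modelling a gateway as an M/M/1 queue gives quick first-order estimates of utilisation, backlog, and delay; the arrival and service rates below are illustrative assumptions:

# M/M/1 model of a gateway: Poisson arrivals (lam) served at rate mu,
# both in messages per second (values are assumptions).
lam, mu = 80.0, 100.0

rho = lam / mu                   # utilisation; must stay below 1
L = rho / (1 - rho)              # mean number of messages in the system
W = 1 / (mu - lam)               # mean time a message spends in the system

print(f"utilisation {rho:.0%}, backlog {L:.1f} msgs, delay {W * 1000:.0f} ms")

With these rates the gateway runs at 80% utilisation with a mean delay of 50 ms; pushing the arrival rate towards the service rate makes both figures grow without bound, which is exactly the kind of scaling limit such models expose.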
4. Scalability Analysis
IoT networks often grow as more devices are added. Mathematical models help predict the network's performance under scaling scenarios and determine the maximum capacity before degradation occurs.
5. Security and Privacy Modelling
Ensuring data security and privacy is critical in IoT networks. Cryptographic algorithms and intrusion detection systems are often modelled using probability theory and stochastic processes to evaluate their effectiveness.
6. Energy Efficiency
IoT devices, especially in wireless sensor networks, often rely on battery power. Mathematical models optimise energy usage through sleep-wake cycles, energy harvesting, and efficient communication protocols.
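A simple duty-cycle model already yields useful battery-life estimates. In the Python sketch below, all currents, timings, and the battery capacity are illustrative assumptions:

# Duty-cycled sensor node; all currents and timings are assumptions.
i_active_ma, t_active_s = 40.0, 2.0     # radio on, measuring and sending
i_sleep_ma, period_s = 0.005, 600.0     # deep sleep, one report per 10 min

i_avg_ma = (i_active_ma * t_active_s +
            i_sleep_ma * (period_s - t_active_s)) / period_s
battery_mah = 2400.0                    # roughly two AA cells
print(f"average current {i_avg_ma:.3f} mA, "
      f"lifetime {battery_mah / i_avg_ma / 24:.0f} days")

Here the active bursts dominate the average current, so halving the reporting rate nearly doubles the lifetime, a typical lever such models expose.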
Mathematical Techniques Commonly Used in IoT Design
1. Optimisation Techniques
2. Stochastic Processes and Probability Models
3. Graph Theory
4. Game Theory
5. Queueing Theory
Advantages of Mathematical Modelling in IoT Networks
Challenges and Future Directions
Future research may focus on hybrid approaches, integrating mathematical models with simulation and AI to address the evolving complexity of IoT ecosystems. Mathematical modelling will remain a cornerstone in designing robust, efficient, and future-ready IoT networks.
The Internet of Things (IoT) is a transformative technological paradigm still in its early stages of development. As IoT adoption continues to grow, there is an opportunity to design systems that are scalable, energy-efficient, cost-effective, interoperable, and secure by design while maintaining an acceptable level of Quality of Service (QoS). Achieving these objectives requires a holistic, system-centric approach that balances stakeholders' diverse and sometimes conflicting goals, including network operators, service providers, regulators, and end users.
The Need for Systems Thinking and System Dynamics in IoT
IoT systems are inherently complex, involving the interaction of heterogeneous devices, communication protocols, networks, applications, and stakeholders. Traditional design approaches, which often focus on isolated components, fail to address the interdependencies and dynamic behaviours that characterise these systems. Systems Thinking and System Dynamics (SD) provide a structured framework for analysing and addressing this complexity.
Key Benefits of Systems Thinking in IoT
Application of System Dynamics in IoT Design
System Dynamics (SD), as an extension of Systems Thinking, uses modelling and simulation tools to analyse the structure and behaviour of complex systems over time. By employing both qualitative and quantitative methods, SD helps in the design and operation of IoT systems with the following objectives:
1. Modeling Interactions:
SD tools like causal loop diagrams (CLDs) and stock-and-flow diagrams are instrumental in visualising the interactions between IoT devices, networks, and environmental factors. For instance:
2. Scenario Analysis: SD allows the simulation of various operational scenarios, such as introducing new devices, changes in traffic patterns, or security breaches, to predict system behaviour and identify potential vulnerabilities.
3. Optimisation of Resource Utilisation:
By modelling IoT networks, SD can identify inefficiencies in energy consumption, bandwidth allocation, and computational resource usage, guiding cost and energy efficiency improvements.
4. Designing Secure IoT Systems:
Security in IoT is a critical challenge due to the heterogeneity of devices and networks. SD can:
5. Feedback-Driven Improvement: SD models incorporate feedback loops, which are crucial for designing systems capable of self-adaptation. For example:
Case Studies and Applications in IoT Security and Efficiency
1. Smart Agriculture (e.g., Rice Farming):
As demonstrated in a study cited in [10], SD was used to develop causal loop diagrams to understand the interactions between environmental factors, IoT-enabled sensors, and farming outcomes. By identifying key leverage points, the researchers proposed IoT-based solutions to enhance rice productivity while minimising resource use.
2. Energy Management in Smart Grids:
IoT systems in smart grids involve dynamic interactions between energy generation, storage, and consumption. SD has been applied to:
3. Healthcare IoT:
In IoT-enabled healthcare systems, SD tools have been used to analyse:
4. IoT Security Simulation:
SD models simulate the effects of cyberattacks, such as Distributed Denial of Service (DDoS), to evaluate the resilience of IoT networks. These simulations help design proactive strategies, such as anomaly detection algorithms and dynamic resource allocation.
Comprehensive Framework for IoT Design
A comprehensive framework is needed to address IoT systems' growing complexity and evolving requirements. This framework should integrate:
The application of Systems Thinking and System Dynamics in IoT security and efficiency offers a powerful approach to navigating the complexities of modern IoT ecosystems. By focusing on feedback loops, stakeholder goals, and holistic modelling, these methodologies provide the tools to design IoT systems that are not only secure and reliable but also scalable, interoperable, and energy-efficient. Future research should emphasise the development of integrated frameworks that combine qualitative insights with quantitative rigour, paving the way for robust IoT solutions that address current and emerging challenges.
People often think of IoT systems as wireless sensor network (WSN) systems (figure 40), which is usually a close but inaccurate view. There are several features that distinguish WSNs from other systems:
WSN systems, depending on their application and technical solutions, might be split into several groups:
Depending on the application and particular functionality, WSN systems employ one of the following typical topologies:
Star network (single point to multi-point, figure 41):
Mesh network (figure 42):
Hybrid Star (figure 43):
Due to developments in infrastructure and communications technologies, IoT has grown far beyond simple interconnected devices as it is with WSN. While the IoT system might include WSN as its part, the IoT system functionality and application goal shift more towards decision-making and deeper data analysis. Because of its growing processing power, the availability of global wireless infrastructure, and synergies, IoT can solve complex tasks and support complex decisions.
WSN vs. IoT challenges: Since the beginning, WSNs have been challenged by the availability of reliable data transport and power consumption. IoT has different challenges:
IoT is a network of physical things or devices that might include sensors or simple data processing units, complex actuators, and significant hybrid computing power. Today, IoT systems have transitioned from being perceived as sensor networks to smart-networked systems capable of solving complex tasks in mass production, public safety, logistics, medicine and other domains, requiring a broader understanding and acceptance of current technological advancements, including advanced AI data processing.
Since the very beginning of sensor networks, one of the main challenges has been data transport and data processing, and significant efforts have been put by the ICT community towards service-based system architectures. However, the current trend already provides considerable computing power, even for small mobile devices. Therefore, the concepts of future IoT have already shifted towards smarter and more accessible IoT devices, and data processing has become possible closer to the Fog and Edge.
Cloud-based computing (figure 44) is a relatively well-known and widely employed paradigm in which IoT devices interact with remotely shared resources such as data storage, processing, and mining, services unavailable to them locally because of constrained hardware resources (CPU, ROM, RAM) or energy consumption limits. Although the cloud computing paradigm can handle vast amounts of data from IoT clusters, transferring extensive data to and from cloud computers presents a challenge due to limited bandwidth [11]. Consequently, there is a need to process data near its sources, employing the increasing number of smart devices with substantial processing power and the rising number of service providers available for IoT systems.
Fog computing (figure 45) addresses the bottlenecks of cloud computing regarding data transport while providing the needed services to IoT systems.
Fog computing is a trend that aims to process data near the source. It pushes applications, services, data, computing power, and decision-making away from the centralised nodes to the logical extremes of a network. Fog computing significantly decreases the data volume that must be moved between end devices and the cloud.
Fog computing enables data analytics and knowledge generation closer to the data source. Furthermore, the dense geographic distribution of fog helps to attain a better-localised accuracy for many applications than the cloud processing of the data [12].
Recently developed energy-efficient hardware with AI acceleration enters the fog class of devices, putting fog computing at the centre of interest for IoT application development and opening new horizons for it. Fog computing is more energy efficient than raw data transfer to the cloud and back, and at the current scale of IoT deployments, this efficiency matters for the planet. Fog computing usually also has a positive impact on IoT security, e.g., by sending pre-processed and depersonalised data to the cloud and by providing distributed computing capabilities that are more attack-resistant.
Recent developments in hardware, power efficiency, and a better understanding of the nature of IoT data, including privacy and security, have led to solutions where data is processed and pre-processed right at its source in Edge-class devices. Edge data processing on end-node IoT devices is crucial in systems where privacy is essential and sensitive data is not to be sent over the network (e.g. biometric data in raw form). Moreover, distributed data processing can be considered more energy efficient in some scenarios where, e.g., extensive, power-consuming processing can be performed during green energy availability (figure 46).
While Cloud, Fog, and Edge systems might seem the same to the end user from a functionality perspective, they are very different and provide different performance, scalability, and computing capabilities, which are emphasised in the following comparison, presented in figure 47.
According to [13], Cognitive IoT, besides a proper combination of hardware, sensors and data transport, comprises cognitive computing, which consists of the following main components:
Usually, cognitive IoT systems, or C-IoT, are expected to add more resilience to the solution. Resilience is a complex term explained differently in different contexts; however, there are features common to all resilient systems. As part of their resilience, C-IoT systems should be capable of self-failure detection and self-healing that minimises performance loss or lets the system degrade gracefully; a non-resilient system, in contrast, fails or degrades in a step-wise manner. In case of security issues, the system should be able to change its security keys and encryption algorithms and take other measures to cope with the detected threats. Self-optimisation abilities are often considered part of the C-IoT feature list to provide more robust solutions. Recent developments in Fog- and Edge-class devices and efficient software leverage cognitive IoT systems to a new level.
All IoT System Architectures presented before, from cloud to cognitive systems, focus on adding value to IoT devices, system users, and related systems on demand. Since market and technology acceptance of mobile devices is still growing, and the amount of produced data from those devices is growing exponentially, mobility as a phenomenon is one of the main driving forces of the technological advancements of the near future.
IoT systems are built to provide better insights into different processes and systems to make better decisions. The insights are provided by measuring the statuses of the systems or process elements represented by data. Unfortunately, the bits and bytes become useless without adequately interpreting the data content. Therefore, providing a means for understanding data is an essential property of a modern IoT system. Today, IoT systems produce a vast amount of data, which is very hard to use manually. Thanks to modern hardware and software developments, it is possible to develop fully or semi-automated systems for data analysis and interpretation, which may go further into decision-making and acting according to the decisions.
As various resources have stated, IoT data, in most cases, complies with the so-called 5Vs of Big Data, where matching even one of them is enough to classify a task as a Big Data problem. As explained by Jain et al. [14], Big Data might be of different forms, volumes and structures, and in general the 5Vs, i.e. Volume, Variety, Veracity, Velocity and Value, might be interpreted as follows:
This characteristic is the most obvious and refers to the size of the data. In most practical applications of IoT systems, large volumes of data are reached through intensive production and collection of sensor data. It usually rapidly populates existing operational systems and requires dedicated IoT data collection systems to be upgraded or developed from scratch (which is more advisable).
Jain explained that big data is highly heterogeneous regarding source, kind, and nature. Having different systems, processes, sensors, and other data sources, variety is usually a distinctive feature of practical IoT systems. For instance, a system of intelligent office buildings would need data from a building management system, appliances and independent sensors, and external sources like weather stations or forecasts from appropriate external weather forecast APIs (Application programming interfaces). Additionally, the given system might require historical data from other sources, like XML documents, CSV files or other sources, diversifying the sources even more.
Unfortunately, volume or diversity alone does not bring value; the data also needs to be reliable and clean. In other words, data has to be of good quality; otherwise, the analysis might not bring additional value to the system's owner or might even compromise the decision-making process. The quality of data is represented by Veracity. In IoT applications, it is easy to lose data quality due to malfunctioning sensors producing missing or false data. Since hardware is an essential part of IoT, the data must be preprocessed in most cases.
Data velocity characterises the data bound to the time and its importance during a specific period or at a particular time instant. A good example might be any real-time system like an industrial process control system, where reactions or decisions must be made during a fixed period, requiring data at particular time instants. In this case, data has a flow nature of a specific density.
Since IoT systems and their data analysis subsystems are built to add value to their owners, the development and ownership costs should not exceed the returned value. A system that cannot be applied in practice is of low or no value.
Dealing with big data requires specific hardware and software infrastructure. While there is a certain number of typical solutions and many more customised ones, some of the most popular are explained here:
Those systems are based on well-known relational data models and appropriate database management systems like MS SQL Server, Oracle Server, MySQL, etc. There are some advantageous features of those systems, for instance:
Unfortunately, scaling out data writing (figure 48) is not always possible and, where software products support it, usually comes at a high cost.
CEP systems are very application-tailored, enabling significant productivity at a reasonable cost. High productivity is usually needed for processing data streams, such as voice or video. Maintaining a limited time window for data processing is possible, which is relevant for systems close to real-time (figure 49). Some of the most common drawbacks to be considered are:
As the name suggests, the main characteristic is higher flexibility in data models, which overcomes the limitations of highly structured relational data models (figure 50). NoSQL systems are usually distributed, where the distribution is the primary tool to enable supreme flexibility. In IoT systems, software typically ages faster than hardware, which requires the maintenance of many versions of communication protocols and data formats to ensure backward compatibility. Another reason is the variety of hardware suppliers, where some protocols or data formats are specific to the given vendor. NoSQL also provides a means for scaling out and up, enabling high future tolerance and resilience. A typical approach uses a key-value or key-document model, where a unique key indexes incoming data blocks or documents (JSON, for instance). Some other designs might extend the SQL data models with others: object models, graph models, or the mentioned key-value models, providing highly purpose-driven and, therefore, productive designs. However, the complexity of the design raises problems of data integrity as well as the complexity of maintenance.
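The key-document idea fits in a few lines. In the Python sketch below, a plain dictionary stands in for the document store, purely to illustrate how a unique key indexes self-describing JSON documents:

import json

# Each reading is a self-describing JSON document indexed by a unique
# key (device id + timestamp); a dict stands in for the document store.
store = {}

reading = {"device": "th-042", "ts": "2024-11-05T10:15:00Z",
           "temperature": 21.5, "humidity": 48, "fw": "2.1.0"}
store[f"{reading['device']}:{reading['ts']}"] = json.dumps(reading)

# Documents with different schemas (older firmware, other vendors)
# can coexist in the same store without migrations.
print(store)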
This is probably the most productive type of system, providing high flexibility, productivity and scalability. Because these systems are designed to operate in server RAM, in-memory data grids are the best choice for data preprocessing in IoT systems due to their high productivity and ability to scale dynamically depending on actual workloads. They provide all the benefits of the CEP and relational systems, adding scale-out functionality for data writing. There are only two major drawbacks: limited RAM and high development costs. Examples of available solutions include in-memory data grids such as Hazelcast and Apache Ignite.
This chapter is devoted to the main groups of algorithms for numerical data analysis and interpretation, covering both mathematical foundations and application specifics in the context of IoT. The chapter is split into the following subchapters:
In the previous chapter, some essential properties of Big Data systems were discussed, as well as how and why IoT systems relate to Big Data problems. In any IoT implementation, data processing is the heart of the system, and it ultimately takes the shape of a data product. While it is still mainly a software subsystem, its development differs significantly from that of a regular software product. The difference is expressed through the roles involved and the lifecycle itself. It is often wrongly assumed that the main contributor is the data scientist responsible for developing a particular data processing or forecasting algorithm. This is partly true, but other roles are equally vital to success. The team playing these roles might be as small as three or as large as 20 people, depending on the scale of the project. The leading roles are explained below.
Business users have good knowledge of the application domain and, in most cases, benefit significantly from the developed data product. They know how to transform data into a business value in the organisation. Typically, they take positions like Production manager, Business/market analyst, and Domain expert.
He is the one who defines the business problem and triggers the birth of the project. He sets the project's scope and volume and secures the necessary provisions. While he defines project priorities, he does not need deep knowledge or skills in the technology, algorithms, or methods used.
As in most software projects, the project manager is responsible for meeting project requirements and specifications within the given time frame and available provisions. He selects the needed talents, chooses development methods and tools, and selects goals for the development team members. Usually, he reports to the project sponsor and ensures that information flows within the team.
The business intelligence analyst possesses deep knowledge of the given business domain, supported by his skills and experience. Therefore, he is a valuable asset for the team in understanding the data's content, origin, and possible meaning. He defines the key performance indicators (KPI) and metrics to assess the project's success level. He selects information and data sources to prepare information and data dashboards for the organisation's decision-makers.
The database administrator is responsible for configuring the development environment and the database (one, many, or a complex distributed system). In most cases, the configuration must meet specific performance requirements, which must be maintained. He ensures secure access to the data for the team members. During the project, he backs up data, restores it if needed, updates configuration, and provides other support.
Data engineers usually have deep technical knowledge of data manipulation methods and techniques. During the project, the data engineer tunes data manipulation procedures, SQL queries, and memory management, and develops specific stored or server-side procedures. He is responsible for extracting particular data chunks for the sandbox environment and formatting and tuning them according to the data scientists' needs.
The data scientist develops or selects the data processing models needed to meet the project specifications. He develops, tests and implements data processing methods and algorithms and, for some projects, develops decision-making support methods and their implementations. He also provides the research capacity needed for selecting and developing the data processing methods and models.
As might be noticed, there is no doubt that the data scientist plays a vital role, but only in cooperation with the other roles. Depending on competencies and capacities, roles might overlap, or a single team member could cover several roles. Once the team is built, the development process can start. As with any other product development, data product development follows a specific life cycle of phases. Depending on particular project needs, there might be variations, but in most cases data product development follows the well-known waterfall pattern. The phases are explained in figure 51:
The project team learns about the problem domain, the problem itself, its structure, and possible data sources and defines the initial hypothesis. The phase involves interviewing the stakeholders and other potentially related parties to reach as broad an insight as necessary. During this phase, the problem is framed – the analytical problem is defined, together with success indicators for potential solutions, business goals and scope. To understand business needs, the project sponsor is involved in the process from the very beginning. The identified data sources might include external systems or APIs, sensors of different types, static data sources, official statistics and other vital sources. One of the primary outcomes of the phase is the Initial Hypothesis (IH), which concisely represents the team's vision of the problem and potential solution simultaneously. For instance, “Introduction of deep learning models for sensor time series forecast provides at least 25% better performance over statistical methods used at the moment.” Whatever the IH is, it is a much better starting point than defining the hypothesis during the project implementation in later phases.
The phase focuses on creating a sandbox system by extracting, transforming and loading data into it (ETL – Extract, Transform, Load). This is usually the most prolonged phase and can take up to 50% of the total time allocated to the project. Unfortunately, most teams tend to underestimate this time consumption, which costs the project manager and analysts dearly and leads to losing trust in the project's success. Data scientists, given a unique role and authority in the team, tend to “skip” this phase and go directly to phase 3 or 4, which is costly because of incorrect or insufficient data to solve the problem.
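As a minimal illustration of the ETL work done in this phase, the sketch below extracts raw CSV sensor readings, applies a simple transformation, and loads the result into a SQLite sandbox; the file, table and column names are assumptions for the example:

```python
# A minimal ETL sketch for populating an analytics sandbox.
import pandas as pd
import sqlite3

# Extract: read the raw export (illustrative file and columns).
raw = pd.read_csv("sensor_export.csv", parse_dates=["timestamp"])

# Transform: harmonise units and drop obviously broken rows.
raw["temperature_c"] = (raw["temperature_f"] - 32) * 5 / 9
clean = raw.dropna(subset=["timestamp", "temperature_c"])

# Load: write into the sandbox database for the analysts.
with sqlite3.connect("sandbox.db") as con:
    clean.to_sql("readings", con, if_exists="replace", index=False)
```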
The main task of the phase is to select model candidates for data clustering, classification or other needs consistent with the Initial Hypothesis from Phase 1.
During this phase, the initially selected candidate models are implemented on a full scale using the gathered data. The main question is whether the data is sufficient to solve the problem. There are several steps to be performed:
In some areas, false positives are more dangerous than false negatives. For example, targeting systems may inadvertently target “their own”.
During this phase, the results must be compared against the established quality criteria and presented to those involved in the project. It is important not to show any drafts outside the group of data scientists – the methods used are too complex for most of those involved, which leads to incorrect conclusions and unnecessary pressure on the team. Usually, a team is biased towards not accepting results that falsify its hypotheses, taking it too personally. However, the data led the team to the conclusions, not the team itself! In any case, it must be verified that the results are statistically reliable; if not, the results are not presented. It is also essential to present all side results obtained, as they almost always provide additional value to the business. The general conclusions need to be complemented by sufficiently broad insights into the interpretation of the results, which is necessary for users of the results and decision-makers.
The presented results are first integrated into a pilot project before full-scale implementation; the widespread roll-out follows the pilot's tests in the production environment. During this phase, some performance gaps may require replacing, for instance, Python or R code with compiled code. Expectations for each of the roles during this phase:
In most cases, data must be prepared before analysing it or applying processing methods. There might be different reasons for this, such as missing values, sensor malfunctions, different time scales, different units, a specific format needed for a given method or algorithm, and many more. Therefore, data preparation is as necessary as the analysis itself. While data preparation is usually particular to a given problem, some general cases and preprocessing tasks are common and beneficial. Data preprocessing also depends on the data's nature – preprocessing is usually very different for data where the time dimension is essential (time series) than for data without it, such as a log of discrete cases for classification, where there are no internal causal dependencies among entries. It must be emphasised that whatever data preprocessing is done needs to be carefully noted and the reasoning behind it explained so that others can understand the results acquired during the analysis.
Some of the methods explained here might also be applied to time series but must be done with full awareness of possible implications. Usually, the data should be formatted as a table consisting of rows representing data entries or events and fields representing features of the event entry. For instance, a row might represent a room climate data entry, where fields or factors represent air temperature, humidity level, CO2 level and other vital measurements. For the sake of simplicity in this chapter, it is assumed that data is formatted as a table.
One of the most common situations is missing sensor measurements, which might be caused by communication channel issues, IoT node malfunctions or other reasons. Since most data analysis methods require complete entries, it is necessary to ensure that all data fields are present before applying the analysis methods. Usually, there are some common approaches to deal with the missing values:
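For instance, with pandas the most common options look like the following sketch (column names and values are illustrative; which option is appropriate depends on the problem and must be documented):

```python
# Common ways to handle missing sensor values with pandas.
import numpy as np
import pandas as pd

df = pd.DataFrame({"temp": [21.1, np.nan, 21.4, 21.3],
                   "co2":  [410, 415, np.nan, 420]})

dropped      = df.dropna()                      # discard incomplete entries
mean_filled  = df.fillna(df.mean())             # impute with the column mean
interpolated = df.interpolate(method="linear")  # fill from neighbours (time series)
```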
Scaling is a frequently used method for continuous numerical factors. The main reason is that different factors are observed over different value intervals. It is essential for methods like clustering, where a multi-dimensional Euclidean distance is used and where, in the case of different scales, one of the dimensions might overwhelm the others just because of the higher order of its numerical values. Usually, scaling is performed by applying a linear transformation of the data with set min and max values, which mark the desired value interval. In most software packages, like Python Pandas [22], scaling is implemented as a simple-to-use function. However, it might also be done manually if needed:
$$V_{new} = I_{min} + \frac{(V_{old} - m_{min})\,(I_{max} - I_{min})}{m_{max} - m_{min}}$$

where:
$V_{old}$ – the old measurement
$V_{new}$ – the new, scaled measurement
$m_{min}$ – minimum value of the measured interval
$m_{max}$ – maximum value of the measured interval
$I_{min}$ – minimum value of the desired interval
$I_{max}$ – maximum value of the desired interval
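A small sketch of the same transformation, done both by the formula above and with scikit-learn's MinMaxScaler for comparison (the sample values are illustrative):

```python
# Min-max scaling to a desired interval [i_min, i_max].
import numpy as np
from sklearn.preprocessing import MinMaxScaler

v = np.array([[3.0], [7.0], [10.0], [18.0]])
i_min, i_max = 0.0, 1.0

# By the formula above.
manual = i_min + (v - v.min()) * (i_max - i_min) / (v.max() - v.min())

# With the library function (identical result).
library = MinMaxScaler(feature_range=(i_min, i_max)).fit_transform(v)
```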
Normalisation is effective when the data distribution is unknown or known to be non-Gaussian (not following the bell curve of the Gaussian distribution). It is beneficial for data with varying scales, especially when using algorithms that do not assume any specific data distribution, such as k-nearest neighbours and artificial neural networks. Normalisation does not change the scale of the values but reshapes their distribution towards a Gaussian distribution. This technique is mainly used in machine learning and is performed with appropriate software packages due to the complexity of the calculations compared to scaling.
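One possible sketch of such a distribution-reshaping normalisation uses scikit-learn's PowerTransformer as an example of an appropriate software package; the synthetic exponential sample stands in for skewed sensor data:

```python
# Reshaping a skewed sample toward a Gaussian with PowerTransformer.
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(0)
skewed = rng.exponential(scale=2.0, size=(500, 1))   # clearly non-Gaussian

pt = PowerTransformer(method="yeo-johnson")
gaussian_like = pt.fit_transform(skewed)             # approx. zero mean, unit variance
```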
Sometimes, it is necessary to emphasise a particular phenomenon in the data. For instance, it might be very helpful to amplify changes in a factor value, i.e., values more distant from 0 should become even larger, while those closer to 0 should not be raised. In this case, applying a power function to the factor values – squaring or raising to the power of 4 – is a simple technique. If negative values are present, odd powers might be used to preserve the sign. A variation of the technique is summing up different factor values before or after applying the power; in this case, a group of similar values representing the same phenomenon emphasises it. Any other function reflecting the specifics of the problem can be used as well.
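A two-line illustration of the technique with NumPy; note how the odd power preserves the sign of negative values:

```python
# Emphasising large deviations with powers.
import numpy as np

x = np.array([-2.0, -0.5, 0.1, 0.5, 2.0])
emphasised_even = x ** 2   # sign lost, large deviations boosted
emphasised_odd  = x ** 3   # sign preserved, large deviations boosted
```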
Time series usually represent the dynamics of some process, and therefore, the order of the data entries has to be preserved. This means that in most cases, all of the mentioned methods might be used as long as the data order remains the same. A time series is simply a set of data - usually events, arranged by a time marker. Typically, time series are arranged in the order in which events occur/are recorded. Several significant consequences follow from this simple fact:
Therefore, there are several questions that data analysis typically tries to answer:
Autocorrelation - A process is autocorrelated if the similarity of the values of a given observation is a function of the time between observations. In other words, the difference between the values of the observations depends on the interval between the observations. This does not mean that the process values are identical but that their differences are similar. The process can equally well be decaying or growing in the mean value or amplitude of the measurements, but the difference between subsequent measurements is always the same (or close).
Seasonality - The process is seasonal if the deviation from the average value is repeated periodically. This does not mean the values must match perfectly, but there must be a general tendency to deviate from the average value regularly. A perfect example is a sinusoid.
Stationarity - A process is stationary if its statistical properties do not change over time. Generally, the mean and variance over a period serve as good measures. In practice, a certain tolerance interval is used to tell whether a process is stationary, since ideal (noise-free) cases do not tend to occur in practice. For example, temperature measurements over several years are stationary and seasonal; they are not autocorrelated because temperatures still vary considerably from day to day. Numerically, stationarity is evaluated with the so-called Dickey-Fuller test [23], which uses a linear regression model to measure change over time at a given time step. The model's t-test [24] indicates how statistically strong the hypothesis of process stationarity is.
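The test is available, for example, in the statsmodels package; a minimal sketch on a synthetic stationary series:

```python
# (Augmented) Dickey-Fuller stationarity test with statsmodels.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
series = rng.normal(0, 1, 500)      # stationary white noise

stat, pvalue, *_ = adfuller(series)
# A small p-value (e.g. < 0.05) supports rejecting the unit-root
# hypothesis, i.e. the process looks stationary.
print(stat, pvalue)
```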
In many cases, it is necessary to emphasise the main pattern of the time series while removing the “noise”. In general, there are two main techniques – decimation and smoothing. Both are widely used but need to be treated carefully.
The essence of the method is to obtain an average value within a particular time window, M, thereby giving inertia to the incoming signal and reducing noise's impact on the overall analysis result. Different effects might be obtained depending on the size of the time window M.
$$SMA_t = \frac{1}{M}\sum_{i=t-M+1}^{t} X_i$$

where:
$SMA_t$ – the new smoothed value at time instant t
$X_i$ – the i-th measurement at time instant i
$M$ – the time window size
The image in figure 54 demonstrates the effects of time window sizes of 10 and 100 measurements on an incoming signal from a freezer's thermometer.
The exponential moving average is widely used in noise filtering, for example, in analysing changes in stock markets. Its main idea is that each measurement's weight (influence) decreases exponentially as time increases. Thus, the evaluation takes more recent measurements and less considers older ones.
$$EMA_t = \alpha X_t + (1 - \alpha)\,EMA_{t-1}$$

where:
$EMA_t$ – the new smoothed value at time instant t
$X_t$ – the measurement at time instant t
$\alpha$ – the smoothing factor between 0 and 1, which reflects the weight of the last, most recent measurement
As seen in figure 56, the exponential moving average preserves the shape of the initial signal for different weighting factor values. It has minimal lag while removing the noise, which makes it a handy smoothing technique.
Decimation is a technique of excluding some entries from the initial time series to reduce overwhelming or redundant data. As the name suggests, usually every tenth entry is excluded, reducing the data by 10%. It is a simple method that brings significant benefits for over-measured processes with slow dynamics. With preserved time stamps, the data still allows the application of general time-series analysis techniques like forecasting.
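The sketch below shows SMA, EMA and decimation side by side with pandas, matching the formulas above (the noisy sine signal is synthetic):

```python
# Smoothing and decimation of a synthetic noisy signal with pandas.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
x = pd.Series(np.sin(np.linspace(0, 10, 1000)) + rng.normal(0, 0.2, 1000))

sma = x.rolling(window=10).mean()     # simple moving average, M = 10
ema = x.ewm(alpha=0.1).mean()         # exponential moving average

decimated = x.drop(x.index[9::10])    # drop every tenth entry (~10% reduction)
```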
While AI and especially Deep Learning techniques have advanced tremendously, fundamental data analysis methods still provide a good and, in most cases, efficient way of solving many data analysis problems. Linear regression is one of those methods, providing at least a good starting point for an informative and insightful understanding of the data. Linear regression models are relatively simple and in most cases do not require significant computing power, which makes them widely applied in different contexts. The term regression, towards a mean value of a population, was widely promoted by Francis Galton, who introduced the term “correlation” into modern statistics [25] [26] [27].
Linear regression is an algorithm that computes the linear relationship between a dependent variable and one or more independent features by fitting a linear equation to the observed data. In essence, linear regression builds a linear function – a model that approximates a set of numerical data in a way that minimises the squared error between the model prediction and the actual data. The data consists of at least one independent variable (usually denoted by x) and the function or dependent variable (usually denoted by y). If there is just one independent variable, it is known as Simple Linear Regression, while in the case of more than one independent variable, it is called Multiple Linear Regression. Likewise, in the case of a single dependent variable, it is called Univariate Linear Regression, while in the case of many dependent variables, it is known as Multivariate Linear Regression. For illustration purposes, figure 57 below shows a simple data set used by F. Galton while studying the relationship between parents' and their children's heights. The data set might be found here: [28]
If the fathers' heights are Y and their children's heights are X, the linear regression algorithm looks for a linear function that, in the ideal case, would map all the children's heights to their fathers' heights. The function would look like the following equation:
$$y = \beta_0 + \beta_1 x$$

where:
$y$ – the dependent variable (the father's height in this example)
$x$ – the independent variable (the child's height)
$\beta_0$, $\beta_1$ – the intercept and the slope of the linear function
Unfortunately, in the context of the given example, finding such a function for all x-y pairs at once is not possible, since x and y values differ from pair to pair. However, it is possible to find a linear function that minimises, over all x-y pairs, the distance between the given y and the y' produced by the function or model. In this case, y' is an estimated or forecasted y value, and the distance between each y-y' pair is called an error. Since the error might be positive or negative, a squared error is used to estimate it. The model might therefore be described by the following equation:
$$y_i = \beta_0 + \beta_1 x_i + e_i, \qquad \min_{\beta_0,\,\beta_1} \sum_{i=1}^{n} \left(y_i - y'_i\right)^2$$

where
$y'_i = \beta_0 + \beta_1 x_i$ – the model estimate for the i-th pair
$e_i$ – the error of the i-th estimate
The estimated beta values might be calculated as follows:
$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1\,\bar{x}$$

where:
$\bar{x}$, $\bar{y}$ – the mean values of the independent and dependent variables, respectively
Most modern data processing packages provide dedicated functions for building linear regression models with a few lines of code. The result is illustrated in figure 62:
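As an example of such a few-lines fit, here is a sketch with scikit-learn's LinearRegression; the synthetic x/y values merely stand in for the height data:

```python
# Fitting a simple linear regression in a few lines with scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
x = rng.normal(170, 7, 200).reshape(-1, 1)          # e.g. children's heights
y = 40 + 0.75 * x.ravel() + rng.normal(0, 3, 200)   # noisy linear relation

model = LinearRegression().fit(x, y)
beta0, beta1 = model.intercept_, model.coef_[0]     # estimated coefficients
y_hat = model.predict(x)                            # fitted values
```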
As discussed previously, an error in the context of the linear regression model represents the distance between the observed dependent variable values and the estimates provided by the model, which the following equation might represent:
$$e_i = y_i - y'_i$$

where,
$y_i$ – the observed value and $y'_i$ – the value estimated by the model
Since an error for a given $y_i$ might be positive or negative and the model itself minimises the overall error, one might expect the error to be normally distributed around the model, with a mean value of 0 and a sum close or equal to 0. Examples of the error for a few randomly selected data points are depicted in red in figure 64:
Unfortunately, knowing these facts alone does not always provide enough information about the modelled process. In most cases, due to dynamic features of the process, the distribution of the errors is as important as the model itself. For instance, a motor shaft wears out over time, and the fluctuations steadily increase from the centre of the rotation. To estimate the overall wear of the shaft, it is enough to have just a maximum amplitude measurement; however, it is not enough to understand the dynamics of the wearing process. Another important aspect is the order of magnitude of the errors compared to the measurements, which, in the case of small quantities, might be impossible to notice even when the model is illustrated. Figure 65 illustrates such a situation:
In figure 65, both small error quantities and their progression dynamics are illustrated. Another example, of a cyclic error distribution, is provided in figure 66:
From this discussion, a few essential notes have to be taken:
If any regularities are noticed, whether a simple variance increase or cyclic nature, they point to something the model does not consider. It might point to a lack of data, i.e., other factors that influence the modelled process, but they are not part of the model, which is therefore exposed through the nature of the error distribution. It also might point to an oversimplified look at the problem, and more complex models should be considered. In any of the mentioned cases, a deeper analysis should be considered. In a more general way, the linear model might be described with the following equation:
$$y = \beta_0 + \beta_1 x + \epsilon, \qquad \epsilon \sim N(0, \sigma^2)$$

Here, the error is considered to be normally distributed around 0, with standard deviation $\sigma$ and variance $\sigma^2$. The variance provides at least a numerical insight into the error distribution; therefore, it should be considered an indicator for further analysis. Unfortunately, the true value of $\sigma$ is not known; thus, its estimated value should be used:

$$\hat{\sigma}^2 = \frac{1}{n-2}\sum_{i=1}^{n} e_i^2$$

Here, the expected value of the variance estimate equals the true variance value:

$$E\left[\hat{\sigma}^2\right] = \sigma^2$$
In many practical problems, the target variable Y might depend on more than one independent variable X – for instance, wine quality, which depends on sugar content, acidity and other factors. Applying a linear regression model in this case might not seem straightforward, but it is still a linear model of the following form:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + \epsilon$$
During the application of the linear regression model, the error term to be minimised is described by the following equation:

$$SSE = \sum_{i=1}^{n}\left(y_i - y'_i\right)^2 = \sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^{k}\beta_j x_{ij}\right)^2$$
Unfortunately, due to the number of factors (dimensions), the results of multiple linear regression cannot be visualised in the same way as those of a simple linear regression. Therefore, numerical analysis and interpretation of the model should be done. In many situations, numerical analysis is complicated and requires a semantic interpretation of the data and model. To support it, visualisations reflecting the relation between the dependent variable and each independent variable result in multiple graphs. Otherwise, the quality of the model is hardly assessable.
Piecewise linear models, as the name suggests, allow splitting the overall data sample into pieces and building a separate model for every piece, thus achieving a better fit for the data sample. The formal representation of the model is as follows:

$$y = \begin{cases} \beta_0^{(1)} + \beta_1^{(1)} x + \epsilon, & x \le b_1 \\ \beta_0^{(2)} + \beta_1^{(2)} x + \epsilon, & b_1 < x \le b_2 \\ \dots \\ \beta_0^{(k)} + \beta_1^{(k)} x + \epsilon, & b_{k-1} < x \end{cases}$$
As might be noticed, the individual models are still linear and individually simple. However, the main difficulty is to set the threshold values b that split the sample into pieces. To illustrate the problem better, one might consider the following artificial data sample (figure 73):
Intuition suggests splitting the sample into two pieces and, with the boundary b around 0, fitting a linear model for each of the pieces separately (figure 74):
Since we do not know the exact best split, it might seem logical to play with different numbers of splits at different positions. For instance, a random number of splits might generate the following result (figure 75):
It is evident from the figure above that some of the individual linear models do not reflect the overall trends, i.e., the slope steepness and direction (positive or negative) seem incorrect. However, it is also apparent that those individual models might fit the given limited sample split better. This simple example shows how confusing the selection of the number of splits and their boundaries can be. Unfortunately, there is no simple answer, and a possible solution might be one of the following:
Clustering is a methodology that belongs to the class of unsupervised machine learning. It allows finding regularities in data when a group or class identifier or marker is absent. To do this, the structure of the data itself is used as a tool to find the regularities. Because of this powerful feature, clustering is often used as part of a data analysis workflow prior to classification or other data analysis steps to find natural regularities or groups that may exist in the data.
This provides very insightful information about the data's internal organisation, possible groups, their number and distribution, and other internal regularities that might help to better understand the data content. To explain clustering better, one might consider grouping customers by income estimate. It is natural to assume some threshold values of 1 KEUR per month, 10 KEUR per month, etc. However:
It is evident that, most probably, customers' behaviour depends on factors like occupation, age, total household income, and others. While the need for considering other factors is obvious, grouping is not – how exactly different factors interact to decide which group a given customer belongs to. That is where clustering exposes its strength – revealing natural internal structures of the data (customers in the provided example).
In this context, a cluster refers to a collection of data points aggregated together because of certain similarities [30]. Within this chapter, two different approaches to clustering are discussed:
In both cases, a distance measure estimates the distance among points or objects and the density of points around a given one. Therefore, all factors used should be numerical, assuming a Euclidean space.
Before starting clustering, several necessary steps have to be performed:
The first method discussed here is one of the most commonly used – K-means. K-means clustering splits the initial set of points (objects) into groups using a distance measure that represents the distance from a given point to the group's centre – the group's prototype, called a centroid. The result of the clustering is N points grouped into K clusters, where each point is assigned a cluster index, meaning that its distance to that cluster's centroid is smaller than its distance to any other cluster's centroid. The distance measure employs Euclidean distance, which requires scaled or normalised data to avoid the dominance of a single dimension over others. The algorithm steps are schematically represented in figure 76:
In the figure:
Steps 4-6 are repeated until cluster positions stop changing or the changes become insignificant. The distance is measured using Euclidean distance:
$$d(p, q) = \sqrt{\sum_{j=1}^{n}\left(p_j - q_j\right)^2}$$

where:
$p$, $q$ – two points in the n-dimensional feature space, and $p_j$, $q_j$ – their coordinates in dimension j
Example of initial data and assigned cluster marks with cluster centres after running the K-means algorithm (figure 78):
Unfortunately, the K-means algorithm does not possess an automatic mechanism for selecting the number of clusters K, i.e., the user must set it. An example of setting different numbers of cluster centres is shown in figure 79:
In K-means clustering, a practical method – the Elbow method is used to select a particular number of clusters. The elbow method is based on finding the point at which adding more clusters does not significantly improve the model's performance. As explained, K-means clustering optimises the sum of squared errors (SSE) or squared distances between each point and its corresponding cluster centroid. Since the optimal number of clusters (NC) is not known initially, it is wise to increase the NCs iteratively. The SSE decreases as the number of clusters increases because the distances to the cluster centres also decrease. However, there is a point where the improvement in SSE diminishes significantly. This point is referred to as the “elbow” [31].
Steps of the method:
Since the method requires iteratively running the K-means algorithm, which might be resource-demanding, a subset of the data might be employed to determine the NC first, after which K-means is run on the whole dataset.
Limitations:
The figure above (figure 80) demonstrates more and less obvious “elbows”, where users could select the number of clusters equal to 3 or 4.
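A sketch of the procedure with scikit-learn, where the SSE is exposed as the inertia_ attribute (the blob data is synthetic):

```python
# Elbow method sketch: SSE (inertia_) for a range of cluster counts.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

sse = {}
for k in range(1, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    sse[k] = km.inertia_   # sum of squared distances to the centroids

# Plot k vs. sse[k] and look for the "elbow" where improvement flattens.
```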
The Silhouette Score is a metric used to evaluate the quality of a clustering result. It measures how similar an object (point) is to its own cluster (cohesion) compared to other clusters (separation). The score ranges from −1 to +1, where higher values indicate better-defined clusters [32].
The Silhouette score considers two main factors for each data point:
The silhouette score for a point i is then calculated as:
$$s(i) = \frac{b(i) - a(i)}{\max\{a(i),\, b(i)\}}$$

where:
$a(i)$ – the mean distance from point i to all other points in its own cluster (cohesion)
$b(i)$ – the mean distance from point i to the points of the nearest neighbouring cluster (separation)
Steps of the method:
Limitations:
An example is provided in the following figure 82:
The user should look for the highest score, which in this case is for the 3-cluster option.
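A minimal sketch of the selection loop with scikit-learn (synthetic data; the printed scores peak at the true number of blobs):

```python
# Selecting the number of clusters by silhouette score.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

for k in range(2, 7):   # the silhouette score needs k >= 2
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, silhouette_score(X, labels))
```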
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) employs density measures to mark points in high-density regions and those in low-density regions – the noise [33]. Because of this natural behaviour of the algorithm, it is particularly useful in signal processing and similar application domains.
One of the essential concepts is the point's p neighbourhood, which is the set of points reachable within the user-defined distance eps (epsilon):
$$N_{eps}(p) = \{\, q \in D \mid dist(p, q) \le eps \,\}$$

where:
$D$ – the data set and $dist(p, q)$ – the distance between points p and q
The algorithm treats different points differently depending on density and neighbouring points distribution around the point – its neighbourhood:
DBSCAN is excellent for discovering clusters in data with noise, especially when clusters are not circular or spherical.
Some application examples (figures 84 and 85):
A typical application in signal processing (figure 86):
Usually, MinPts is selected using some prior knowledge of the data and its internal structure. Once it is set, the following steps might be applied:
The red horizontal line shows a possible eps value, around 0.045.
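A sketch of both the k-distance inspection and the clustering itself with scikit-learn; the two-moons data and the chosen eps are illustrative:

```python
# DBSCAN with a k-distance plot for choosing eps.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons
from sklearn.neighbors import NearestNeighbors

X, _ = make_moons(n_samples=400, noise=0.06, random_state=0)
min_pts = 5

# Distance of each point to (approximately) its MinPts-th neighbour, sorted;
# plot k_dist and pick eps near the "knee" of the curve.
nn = NearestNeighbors(n_neighbors=min_pts).fit(X)
dist, _ = nn.kneighbors(X)
k_dist = np.sort(dist[:, -1])

labels = DBSCAN(eps=0.1, min_samples=min_pts).fit_predict(X)
# Labels of -1 mark the noise points.
```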
Classification assigns a class mark to a given object, indicating that the object belongs to the selected class or group. In contrast to clustering, classes must pre-exist. In many cases, clustering might be a prior step to classification. Classification might be understood slightly differently in different contexts; however, in the context of this book, it describes the process of assigning marks of pre-existing classes to objects depending on their features.
Classification is used in almost all domains of modern data analysis, including medicine, signal processing, pattern recognition, different types of diagnostics and other more specific applications.
The classification process consists of two steps: first, an existing data sample is used to train the classification model, and then, in the second step, the model is used to classify unseen objects, thereby predicting to which class the object belongs. As with any other prediction, in classification, the model output is described by the error rate, i.e., true prediction vs. wrong prediction. Usually, objects that belong to a given class are called – positive examples, while those that do not belong are called – negative examples.
Depending on a particular output, several cases might be identified:
True positive (TP). Example: A SPAM message is classified as SPAM, or a patient classified as being in a particular condition is, in fact, experiencing this condition.
False positive (FP). Example: A harmless message is classified as SPAM, or a patient who is not experiencing a certain condition is classified as being in this condition.
True negative (TN). Example: A harmless message is classified as harmless, or a patient not experiencing a certain condition is classified as not experiencing it.
False negative (FN). Example: A SPAM message is classified as harmless, or a patient experiencing a certain condition is classified as not experiencing it.
While training the model and counting the number of training samples falling into the mentioned cases, it is possible to describe its accuracy mathematically. Here are the most commonly used statistics:
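For reference, the statistics most commonly derived from the four counts above are standardly defined as follows (standard definitions, not specific to this book):

$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}, \qquad Precision = \frac{TP}{TP + FP}$$

$$Recall = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}$$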
The classification model is trained using the initial sample data, which is split into training and testing subsamples. Usually, the training is done using the following steps:
The average statistics are used to describe the model.
The model's results on the test subsample depend on different factors – noise in the data, the proportion of classes represented in the data (how evenly classes are distributed), and others that are beyond the developer's reach. However, by manipulating the sample split, it is possible to provide more data for training and thereby expect better training results – seeing more examples might lead to a better grasp of the class features. However, seeing too much might lead to a loss of generality and, consequently, reduced accuracy on test subsamples or previously unseen examples. Therefore, it is necessary to maintain a good balance between training and testing subsamples, usually 70% for training and 30% for testing, or 60% for training and 40% for testing. In real applications, if the initial data sample is large enough, a third subsample is used – a validation set, used only once to acquire the final statistics and not provided to the developers. It usually holds a small but representative subsample of 1-5% of the initial data sample.
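A typical 70/30 split as described above, sketched with scikit-learn (the data is synthetic; stratify keeps the class proportions even in both subsamples):

```python
# A stratified 70/30 train/test split with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
```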
Unfortunately, in many practical cases the data sample is not large enough. Therefore, several testing techniques are used to ensure the reliability of the statistics while respecting the scarcity of data. The family of methods is called cross-validation; it reuses the training and testing data subsets and thereby saves data by not requiring a separate validation set.
In the random sampling case (figure 88), most of the data is used for training, and only a few randomly selected samples are used to test the model. The procedure is repeated many times to ensure the model's average accuracy. Random selection has to be made without replacement. If replacement is used, the method is called bootstrapping, which is widely applied and is generally more optimistic.
This approach splits the training set into smaller sets called splits (in the figure 89 above, there are three splits). Then, for each split, the following steps are performed:
The overall performance for the k-fold cross-validation is the average performance of the individual performances computed for each split. It requires extra computing but respects data scarcity, which is why it is used in practical applications.
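A minimal k-fold sketch with scikit-learn, reporting the average performance over the splits (the classifier choice and data are illustrative):

```python
# k-fold cross-validation; the reported score is the mean over the folds.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=3)
print(scores.mean())   # average accuracy over the three splits
```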
This approach splits the training set into smaller sets called splits in the same way as previous methods described here (in the figure 90 above; there are three splits). Then, for each split, the following steps are performed:
This method requires many iterations due to the limitations of the testing set.
Within the following sub-chapters, two very widely used algorithm groups are discussed:
Decision trees are the most commonly used base technique in classification. To describe the idea of decision trees, a simple data set might be considered, as presented in figure 91:
In this dataset, xn indicates the n-th observation; each column refers to a particular factor, while the last column, “Call for technical assistance,” refers to the class variable with values Yes or No, respectively.
To build a decision tree for the given problem of calling the technical assistance, one might consider constructing a tree where each path from the root to tree leaves represents a separate example xn with a complete set of factors and their values corresponding to the given example. This solution would provide the necessary outcome – all examples will be classified correctly. However, there are two significant problems:
Referring to Occam's razor principle [34], the most desirable model is the most compact one, i.e., one using only the factors necessary to make a valid decision. This means that one needs to select the most relevant factor, and then the next most relevant one, until the decision is made without a doubt.
In the figure 92 above, on its left, the factor “The engine is running” is considered, which has two potential outputs: Yes and No. For the outcome Yes, the target class variable has an equal number of positive (Yes) and negative (No) class values, which does not help much in deciding since it is still 50/50. The same is true for output No. So, checking if the engine works does not bring the decision closer.
The figure 92 on its right considers a different factor with similar potential outputs: “There are small children in the car.” For the output No, all the examples have the same class variable value—No, which makes it ideal for deciding since there is no variability in the output variable. A slightly less confident situation is for the output Yes, which produces examples with six positive class values and one negative. While there is a little variability, it is much less than for the previously considered factor.
In this simple example, it is obvious that checking whether children are in the car is more effective than checking the engine status. However, a formal estimate is needed to assess the potential effectiveness of a given factor. In 1986, Ross Quinlan proposed the ID3 algorithm [35], which employs an entropy measure:
$$E(D) = -\sum_{c=1}^{C} p(c)\,\log_2 p(c)$$

where:
E(D) - Entropy for a given data set D.
C - Total number of values c of the class variable in the given data set D.
p(c) - The proportion of examples with class value c to the total number of examples in D.
E(D) = 0 when only one class value is represented (the most desirable case), and E(D) = 1 when class values are evenly distributed in D (the least desirable case).
To select a particular factor, it is necessary to estimate how much uncertainty is removed from the data set after applying the given factor (test). Quinlan proposed using information gain:
$$IG(D, A) = E(D) - \sum_{t \in T} p(t)\,E(t)$$

where:
IG(D,A) - Information gain of the dataset D, when factor A is applied to split it into subsets.
E(D) - Entropy for a given data set D.
T - Subsets of D, created by applying factor A.
p(t) - The proportion of examples in subset t to the total number of examples in D.
E(t) - Entropy of subset t.
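As an illustrative sketch, both measures can be computed directly from the formulas above; the label counts below are illustrative, loosely mimicking the car example rather than reproducing the original data set:

```python
# Entropy and information gain computed from the formulas above.
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, subsets):
    n = len(labels)
    weighted = sum(len(t) / n * entropy(t) for t in subsets)
    return entropy(labels) - weighted

labels = ["Yes"] * 7 + ["No"] * 7          # balanced class variable, E(D) = 1
# Split induced by a factor: one branch with 6 Yes / 1 No,
# the other with 6 No / 1 Yes (illustrative counts).
split = [["Yes"] * 6 + ["No"], ["No"] * 6 + ["Yes"]]
print(information_gain(labels, split))     # roughly 0.41 bits gained
```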
The attribute with the most significant information gain is selected to split the data set into subsets. Then, each subset is divided into subsets in the same way. The procedure continues until each of the subsets has zero entropy or no factors remain to test. The approach, in its essence, is a greedy search algorithm with one hypothesis, which is refined by each iteration. It uses statistics from the entire data set, which makes it relatively immune to missing values, contradictions or errors. Since the algorithm seeks the best-fitting decision tree, it might run into a local-minimum trap, where generalisation is lost. To avoid such solutions, it is necessary to simplify or generalise the decision tree. There are two common approaches:
However, knowing the best factor to split the data set is not always helpful due to the costs related to the factor value estimation. For instance, in the medical domain, the most effective diagnostic methods might be the most expensive and, therefore, not always the most appropriate. Over time, different alternatives to information gain have been developed to respect expenses that are related to factor value estimation:
Alternative 1:
Alternative 2:
Currently, many other alternatives to the known ID3 family are used: ILA [36], RULES 6 [37], CN2 [38], CART [39].
The alternatives mentioned here do not use entropy-based estimates, which reduces the computational complexity of the algorithms.
Random forests [40] are among the best out-of-the-box methods highly valued by developers and data scientists. For a better understanding of the process, an imaginary weather forecast problem might be considered, represented by the following true decision tree (figure 93):
Now, one might consider several forecast agents – friends or neighbours – where each provides their forecast depending on the factor values. Some forecasts will be higher than the actual value, and some will be lower. However, since they all use some experience-based knowledge, the collected forecasts will be distributed around the exact value. The Random Forest (RF) method uses hundreds of such forecast agents – decision trees – and then applies majority voting (figure 94).
Some advantages:
RF features:
Each tree in the forest is grown as follows:
Correlation Between Trees in the Forest: The correlation between any two trees in a Random Forest refers to the similarity in their predictions across the same dataset. When trees are highly correlated, they will likely make similar mistakes on the same inputs. In other words, if many trees make similar errors, the model's aggregated predictions will not effectively reduce the bias and variance, and the overall error rate of the forest will increase. The Random Forest method addresses this by introducing randomness in two main ways:
Strength of Each Individual Tree: The strength of an individual tree refers to its classification accuracy on new data, i.e., its ability to perform as a strong classifier. In Random Forest terminology, a tree is strong if it has a low error rate. If each tree can classify well independently, the aggregated predictions of the forest will be more accurate.
Each tree's strength depends on various factors, including its depth and the features it uses for splitting. However, there is a trade-off between correlation and strength. For example, reducing m (the number of features considered at each split) increases the diversity among the trees, lowering correlation. Still, it may also reduce the strength of each tree, as it may limit its access to highly predictive features.
Despite this trade-off, Random Forests balance these dynamics by optimising m to minimise the ensemble error. Generally, a moderate reduction in m lowers correlation without significantly compromising the strength of each tree, thus leading to an overall decrease in the forest's error rate.
Implications for the Forest Error Rate: The forest error rate in a Random Forest model is influenced by the correlation among the trees and the strength of each tree. Specifically:
Consequently, an ideal Random Forest model balances between individually strong and sufficiently diverse trees, typically achieved by tuning the m parameter.
For further reading on practical implementations, it is highly recommended to look at the scikit-learn package of the Python community [41].
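A minimal out-of-the-box sketch with scikit-learn; n_estimators sets the number of trees and max_features corresponds to the m parameter discussed above (the data is synthetic):

```python
# Random Forest with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

rf = RandomForestClassifier(n_estimators=300,     # number of trees
                            max_features="sqrt",  # the "m" features per split
                            oob_score=True,       # out-of-bag error estimate
                            random_state=0)
rf.fit(X, y)
print(rf.oob_score_)   # out-of-bag estimate of generalisation accuracy
```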
As discussed in the data preparation chapter, time series usually represent the dynamics of some process. Therefore, the order of the data entries has to be preserved. As emphasised, a time series is simply a set of data—usually events—arranged by a time marker. Typically, time series are placed in the order in which events occur/are recorded.
In the context of IoT systems, there might be several reasons why time series analysis is needed. The most common ones are the following:
Due to its diversity, various algorithms might be used in anomaly detection, including those covered in previous chapters: for instance, clustering for finding typical response clusters, regression for estimating normal future states and measuring the distance between forecast and actual measurements, and classification for labelling normal or abnormal states. An excellent example of using classification-based methods for anomaly detection is Isolation forests [44].
While most of the methods covered here might be employed in time series analysis, this chapter outlines anomaly detection and classification cases through an industrial cooling system example.
A given industrial cooling system has to maintain a specific temperature mode of around -18 °C. Due to the specifics of the technology, it goes through a defrost cycle every few hours to avoid ice deposits, which lead to inefficiency and potential malfunction. However, at some point a relatively short power supply interruption was noticed, which needs to be recognised in the future to be reported appropriately. The logged data series is depicted in figure 95:
It is easy to notice that there are two standard behaviour patterns – defrost (small spikes) and temperature maintenance (data between spikes) – and one anomaly: the high spike.
One possible alternative for building a classification model is to use K-nearest neighbours (KNN). Whenever a new data fragment is collected, it is compared to the closest known ones, and a majority principle determines its class. In this example, three behaviour patterns are recognised; therefore, a sample collection must be composed for each pattern. It might be done by hand since, in this case, the time series is relatively short.
Examples of the collected patterns (defrost on the left and temperature maintenance on the right) are present in figure 96:
Unfortunately, in this example, only one anomaly is present (figure 97):
A data augmentation technique might be applied to overcome data scarcity, where several other samples are produced from the given data sample. This is done by applying Gaussian noise and randomly changing the sample's length (for example, the original anomaly sample is not used for the model). Altogether, the collection of initial data might be represented by the following figure 98:
One might notice that:
The issues mentioned above expose the problem of calculating distances from one example to another, since comparing data points one-to-one will produce misleading distance values. To avoid it, a Dynamic Time Warping (DTW) metric has to be employed [45]. For practical implementations in Python, it is highly recommended to visit the tslearn library documentation [46].
Once the distance metric is selected and the initial dataset is produced, the KNN might be implemented. The closest ones can be determined using DTW by providing the “query” data sequence. As an example, a simple query is depicted in the following figure 99:
For the practical implementation, the tslearn package is used. In the following example, 10 randomly selected data sequences are produced from the initial data set. While the data set is the same, none of the selected data sequences are “seen” by the model due to the randomness. Figure 100 shows the results:
As might be noticed, the query (black) samples are somewhat different from the ones found “closest” by the KNN. However, because of the DTW advantages, the classification is done perfectly. The same idea demonstrated here might be used for unknown anomalies by setting a similarity threshold for DTW, for classifying known anomalies as shown here, or even for simple forecasting.
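A condensed sketch of the same idea with tslearn's KNeighborsTimeSeriesClassifier and the DTW metric; the two synthetic pattern classes merely stand in for the defrost and maintenance samples:

```python
# KNN over time series with a DTW metric, sketched with tslearn.
import numpy as np
from tslearn.neighbors import KNeighborsTimeSeriesClassifier

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 60)

# Two synthetic pattern classes standing in for "defrost" (0) and
# "temperature maintenance" (1); five noisy examples of each.
defrost = [np.sin(2 * np.pi * t) + rng.normal(0, 0.1, 60) for _ in range(5)]
steady = [rng.normal(0, 0.1, 60) for _ in range(5)]

X_train = np.stack(defrost + steady)
y_train = np.array([0] * 5 + [1] * 5)

knn = KNeighborsTimeSeriesClassifier(n_neighbors=3, metric="dtw")
knn.fit(X_train, y_train)

query = np.sin(2 * np.pi * t).reshape(1, -1)  # unseen "defrost"-like query
print(knn.predict(query))                     # expected: [0]
```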
This chapter has covered some of the most widely used data analysis methods applicable in sensor data analysis, which might be typical for IoT systems. However, it is only the surface of the exciting world of data analytics and AI. The authors suggest the following online resources besides the well-known online learning platforms to dive into this world.
IoT systems and services are widely adopted in various industries, such as health care, agriculture, smart manufacturing, smart energy systems, intelligent transport systems, logistics (supply chain management), smart homes, smart cities, and security and safety. The primary goal of incorporating IoT into existing systems in various industries is to improve productivity and efficiency. Despite the enormous advantages of integrating IoT into existing systems in multiple sectors, including critical infrastructure, there are concerns about the security vulnerabilities of IoT systems. Businesses are increasingly anxious about the possible risks IoT systems introduce into their infrastructure and how to mitigate them.
One of the weaknesses of IoT devices is that they can easily be compromised. This is because some manufacturers of IoT devices fail to incorporate security mechanisms into the devices, resulting in security vulnerabilities that can easily be exploited. Some manufacturers and developers often focus on device usability and on adding features that satisfy the users' needs while paying little or no attention to security measures. Another reason IoT device manufacturers and developers pay little or no attention to security is that they are often focused on getting the device to the market as soon as possible. Also, some IoT users focus mainly on the price of the devices and ignore security requirements, incentivising manufacturers to minimise the cost of the devices while trading off their security.
Also, IoT hardware constraints make it challenging to implement reliable security mechanisms, making them vulnerable to cyber-attacks. Since batteries with limited energy capacities power IoT devices, they possess low-power computing and communication systems, making it hard to implement sufficient security mechanisms. Using power-hungry computing and communication systems that would permit the incorporation of reliable security mechanisms will significantly reduce the device's lifetime (the time from when the device is deployed to when the energy stored in its battery is completely drained). As a result, manufacturers and developers tend to trade off the security of the device with the reliability and lifetime of the device.
A successful malicious attack on an IoT system could result in data theft, loss of data privacy, and damage to other critical systems connected to the IoT systems. IoT systems are increasingly being targeted due to the relative ease with which they can be compromised. Also, they are increasingly being incorporated into critical infrastructure such as energy, water, transportation, health care, education, communication, security, and military infrastructures, making them attractive targets, especially during conventional, hybrid, and cyber warfare. In this case, the attackers' goal is not only to compromise IoT systems but to exploit the vulnerabilities of the IoT device to compromise or damage critical infrastructures. Some examples of attacks that have been orchestrated by exploiting vulnerabilities of IoT devices include:
The attacks mentioned above are just a few examples of how cybercriminals may exploit the vulnerabilities of IoT devices to compromise and disrupt services in other sectors, especially critical infrastructure. These examples demonstrate the urgent need to incorporate security mechanisms into IoT infrastructures, especially those integrated with essential infrastructures. The above attack examples also indicate that the threat posed by IoT is real, can seriously disrupt the functioning of society, and can result in substantial financial and material losses. It may even result in the loss of lives. Thus, if serious attention is not given to IoT security, IoT will soon be an Internet of Threats rather than an Internet of Things.
Therefore, IoT security involves design and operational strategies to protect IoT devices and other systems against cyber attacks. It includes the various techniques and systems developed to ensure the confidentiality of IoT data, the integrity of IoT data, and the availability of IoT data and systems. These strategies and systems are designed to prevent IoT-based attacks and ensure IoT infrastructures' security. In this chapter, we will discuss IoT security concepts, IoT security challenges, and techniques that can be deployed to secure IoT data and systems from being compromised by attackers and used for malicious purposes.
The following chapters discuss details on cybersecurity in IoT systems:
IoT designers and engineers need to understand cybersecurity concepts. This will help them understand the various attacks that can be conducted against IoT devices and how to implement security mechanisms to protect them against cyber attacks. This section discusses some cybersecurity concepts required to understand IoT security.
Cybersecurity refers to the technologies, strategies, and practices designed to prevent cyberattacks and mitigate the risk posed by cyberattacks on information systems and other cyber-physical systems. It is sometimes called information technology security, which involves developing and implementing technologies, protocols, and policies to protect information systems against data theft, illegal manipulation, and service interruption. The main goal of cybersecurity systems is to protect the hardware and software systems, networks, and data of individuals and organisations against cybersecurity attacks that may breach these systems' confidentiality, integrity, and availability.
After understanding cybersecurity, it is also essential to understand what a cyberattack is. A cyberattack can be considered any deliberate compromise of an information system's confidentiality, integrity, or availability. That is unauthorised access to a network, computer system or digital device with a malicious intention to steal, expose, alter, disable, or destroy data, applications or other assets. A successful cyberattack can cause a lot of damage to its victims, ranging from loss of data to financial losses. An organisation whose systems have been compromised by a successful cyber attack could lose its reputation and be forced to pay for damages incurred by customers due to a successful cybersecurity attack.
The question is: why should we be worried about cybersecurity attacks, especially in the context of IoT? The widespread adoption of IoT to improve business processes and personal well-being has exponentially increased the options available to cybercriminals to conduct cybersecurity attacks, increasing cybersecurity-related risks for businesses and individuals. This underscores the need for IoT engineers, IT engineers, and other non-IT employees to understand cybersecurity concepts.
The CIA triad is a conceptual framework that combines three cybersecurity concepts – confidentiality, integrity, and availability – to provide a simple and complete checklist for implementing, evaluating, and improving cybersecurity systems. They form a set of requirements that a well-designed cybersecurity system must satisfy to ensure information systems' confidentiality, integrity, and availability. It provides a powerful approach to identify vulnerabilities and threats in information systems and then implement appropriate technologies and policies to protect the information systems from being compromised. It provides a high-level framework that guides organisations and cybersecurity experts when designing, implementing, evaluating, and auditing information systems. In the following paragraphs, we briefly discuss the elements of the CIA triad (figure 101).
Confidentiality
It involves the technologies and strategies to ensure that sensitive data is kept private and inaccessible to unauthorised individuals. That is, sensitive data should be viewed only by authorised individuals within the organisation and kept private from unauthorised individuals. Some of the data collected by IoT sensors is very sensitive, and it must be kept private and should not be viewed by unauthorised individuals with malicious intentions. Data confidentiality involves a set of technologies, protocols, and policies designed and implemented to protect data against unintentional, unlawful, or unauthorised access, disclosure, or theft. To ensure data confidentiality, it is essential to answer the following questions:
To ensure the confidentiality of the data stored in computer systems and transported through computer and telecommunication networks, some security guidelines should be followed:
Integrity
Integrity in cybersecurity involves technologies and strategies designed to ensure that data is not modified or deleted during storage or transportation by unauthorised persons. It is essential to maintain the integrity of the data to ensure that it is consistent, accurate, and reliable. In the context of IoT, integrity is the assurance that the data collected by the IoT sensors is not illegally altered during transportation, processing, and storage, making it incomplete, inaccurate, inconsistent, and unreliable. The data can only be modified or changed by those authorised to access it. The collected data must be kept complete, accurate, consistent and safe throughout its entire lifecycle in the following ways [54]:
The IoT system designers, manufacturers, developers, and operators should ensure that the data collected is not lost, leaked, or corrupted during transportation, processing, or storage. As the data collected by IoT sensors is growing and lots of companies depend on the results from the processing of IoT data for decision-making, it is vital to ensure the integrity of the data. It must be assured that the IoT data collected is complete, accurate, consistent and secure throughout its lifecycle, as compromised data is of little or no interest to organisations and users. Also, data losses due to human error and cyberattacks are undesirable for organisations and users. Physical and logical factors can influence the integrity of the data.
The physical integrity of data could be enforced by:
IoT system designers, manufacturers, and developers can adopt various technologies and policies to ensure the integrity of the hardware from the IoT devices and communication to fog/cloud data centres.
Enforcing data integrity is a complex task that requires carefully integrating cybersecurity tools, policies, regulations, and people. Some of the ways that data integrity can be enforced include but are not limited to the following strategies:
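One widely used technical building block behind several such strategies is a keyed hash (HMAC) computed over each payload and verified on receipt; a minimal sketch follows, with the key and payload purely illustrative:

```python
# Payload integrity via HMAC with a shared secret.
import hmac
import hashlib

SECRET = b"shared-device-key"   # provisioned to both device and backend

def sign(payload: bytes) -> str:
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(payload), tag)

msg = b'{"deviceId":"th-042","temperature":21.4}'
tag = sign(msg)
assert verify(msg, tag)          # any tampering with msg breaks verification
```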
Availability
The computing, communication, and data storage and retrieval systems should be accessible anytime and when needed. Availability in the context of cybersecurity is the ability of authorised users or applications to have reliable access to the information systems when necessary at any time. It is one of the elements of the CIA triad that constitutes the requirement for designing secure and reliable information and communication systems such as IoT. Given that IoT nodes are being integrated into critical infrastructure and other existing infrastructure of companies and individuals, longer downtimes are not tolerated, making availability a crucial requirement. Availability disruption could result from any of the following causes:
Some of the ways to ensure the availability of information systems and data include the following:
To understand advanced cybersecurity concepts and technologies, it is crucial first to have a good grasp of some basic cybersecurity concepts, which are presented below.
Cybersecurity risk: It is the probability of being exposed to a cybersecurity attack, or of any of the cybersecurity requirements of confidentiality, integrity, or availability being violated, which may result in data theft, leakage, damage or corruption. It may also result in service disruption or downtime that causes the company to lose revenue and damages infrastructure. An organisation that falls victim to a successful cyberattack may lose its reputation and be compelled to pay damages to its customers or a fine to regulatory agencies. Thus, a cybersecurity risk is the potential loss that an organisation or individual may experience as a result of successful cyberattacks or failures of information systems, including the loss of data, customers, revenue, and resources (assets and financial losses).
Threat: A threat is an action performed to violate any of the cybersecurity requirements, which may result in data theft, leakage, damage, corruption, or loss. The action may either disclose the data to unauthorised individuals or alter the data illegally. It may equally result in the disruption of services due to system downtime, system unavailability, or data unavailability. Threats include, among others, device infections with viruses or malware, ransomware attacks, denial of service, phishing attacks, social engineering attacks, password attacks, SQL injection, data breaches, man-in-the-middle attacks, energy depletion attacks (in the case of IoT devices), and many other attack vectors. Cybersecurity threats can originate from threat actors such as nation states, cybercriminals, hacktivists, disgruntled employees, terrorists, and spies, as well as from design errors, misconfiguration of systems, software flaws or bugs, errors by authorised users, and natural disasters [55].
Cybersecurity vulnerability: It is a weakness, flaw, or error in an information system or a cybersecurity system that cybercriminals could exploit to compromise the security of the information system. Many cybersecurity vulnerabilities exist, and new ones are discovered continually; the most common include SQL injection, buffer overflows, cross-site scripting, security misconfiguration [56], weak authentication and authorisation mechanisms, and unencrypted data during transportation or storage. Security vulnerabilities can be identified using vulnerability scanners and by performing penetration testing. When a vulnerability is detected, the necessary steps should be taken to eliminate it or mitigate its risk.
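To make one of these vulnerabilities concrete, the sketch below contrasts an injectable SQL query with a parameterised one. It is a minimal, self-contained Python example using the standard-library sqlite3 module; the table contents and the injection payload are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # a classic injection payload

# Vulnerable pattern: string concatenation lets the payload rewrite the query,
# so every row is returned even though no user has this name.
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'").fetchall()
print(rows)  # [('admin',)] - the WHERE clause was subverted

# Safe pattern: the ? placeholder treats the input strictly as data.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] - the payload matches no user
```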
Cybersecurity exploit: A cybersecurity exploit is a means by which cybercriminals take advantage of cybersecurity vulnerabilities to conduct cyberattacks that compromise the confidentiality, integrity, or availability of information systems. An exploit may involve the use of advanced techniques (e.g., commands, scripting, or programming) and software tools (proprietary or open-source) to identify and exploit vulnerabilities in order to steal data, disrupt services, damage or corrupt data, or hijack data or systems in exchange for money.
Attack vector: An attack vector is a path or method by which attackers may compromise the security of an information system, such as a computing, communication, or data storage and retrieval system. Some of the common attack vectors include:
The various approaches to eliminate attack vectors to reduce the chances of a successful attack include the following [57]:
Attack surface: An attack surface is the set of points (possible attack vectors) that cybercriminals can target or use to compromise the confidentiality, integrity, and availability of data and information systems. Organisations and individuals should always strive to minimise their attack surfaces; the smaller the attack surface, the smaller the likelihood that their data or information systems will be compromised. They must therefore constantly monitor their attack surfaces to detect and block attacks as soon as possible and minimise the potential risk of a successful attack. Some of the common attack surfaces are poorly secured devices (e.g., computers, mobile phones, hard drives, and IoT devices), weak passwords, a lack of email security, open ports, and a failure to patch software, all of which offer an open backdoor for attackers to target and exploit users and organisations. Another common attack surface is weak web-based protocols, which hackers can exploit to steal data through man-in-the-middle (MITM) attacks. There are two categories of attack surface, which include [58]:
Practical attack surface management provides the following advantages to organisations and individuals:
As IT infrastructures increase and are connected to external IT systems over the internet, they become more complex, hard to secure, and frequently targeted by cybercriminals. Some of the ways to minimise attack surfaces to reduce the risk of cyberattacks include:
Encryption: Encryption is the process of scrambling data into a secret code (encrypted data) so that it can be transformed back into the original data (decrypted) only with a unique key held by authorised users or applications. It ensures that the confidentiality and integrity of the data are not compromised; that is, it prevents the data from being stolen or illegally altered by cybercriminals. Encryption is often used to protect data during transportation, storage, and processing/analysis. The process involves the use of a mathematical cryptographic algorithm (encryption algorithm) to scramble the data (plaintext) into a ciphertext that can only be unscrambled back into the plaintext using a corresponding cryptographic algorithm (decryption algorithm) and the appropriate unique key (a minimal usage sketch follows this passage). The cryptographic keys should be long enough that cybercriminals cannot easily guess them through a brute-force attack or cryptanalysis. The goals of implementing encryption algorithms in information systems are:
Cryptographic algorithms can be categorised into two main types as follows:
Although encryption is very valuable for securing data during transportation, processing, and storage, it still has disadvantages. Some of the drawbacks of encryption are:
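As the usage sketch promised above, the following minimal Python example demonstrates the symmetric (secret-key) category: the same key encrypts and decrypts, and the ciphertext carries a built-in integrity tag. It assumes the third-party cryptography package (`pip install cryptography`); the sensor payload is a hypothetical placeholder.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()        # the unique key; must be stored and shared securely
cipher = Fernet(key)

plaintext = b"humidity=0.43;node=greenhouse-2"
token = cipher.encrypt(plaintext)  # ciphertext with a built-in integrity tag
print(token)                       # unreadable without the key
print(cipher.decrypt(token))      # b'humidity=0.43;node=greenhouse-2'

# Decrypting with the wrong key, or a tampered token, raises InvalidToken,
# so confidentiality and integrity are checked together.
```

Note that the scheme is only as strong as the key management around it, which is precisely where the drawbacks mentioned above tend to appear in practice.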
Authentication: Authentication is an access control mechanism that makes it possible to verify that a user, device, or application is who or what it claims to be. The authentication credentials (e.g., username and password) are matched against a database of authorised users or a dedicated authentication server to verify identities and ensure the requester has access rights to the device, server, application or database. Using only a username or ID and a password for authentication is called single-factor authentication. Increasingly, organisations, especially those dealing with sensitive data (e.g., banks), require their users and applications to provide multiple factors for authentication (rather than only an ID and password), resulting in what is known as multi-factor authentication; with two factors, it is known as two-factor authentication. Using human features such as fingerprint scans, facial or retina scans, and voice recognition is known as biometric authentication [59]. Authentication preserves the confidentiality and integrity of data and information systems by allowing only authenticated users, applications, and processes to access valuable and sensitive resources (e.g., computers, wireless networks, wireless access points, databases, websites, and other network-based applications and services).
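A server performing such checks should never store passwords in plain text. The sketch below, a minimal standard-library Python example, derives a slow, salted hash at enrolment and verifies a login attempt against it; the iteration count and example passwords are illustrative assumptions.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # deliberately slow to blunt brute-force attacks

def enroll(password):
    """Store a random salt and a PBKDF2 hash instead of the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, stored_digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)

salt, stored = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("password123", salt, stored))                   # False
```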
Authorisation: Like authentication, authorisation is another process often used to protect data and information systems from being abused or misused by cybercriminals or by the unintended (or intended) actions of authorised users. Authorisation is the process of determining the access rights of users and applications to ensure they are permitted to perform the action they are attempting. Unlike authentication, which verifies users' identities and then grants them access to the system, authorisation determines the permissions they have to perform specific actions. One example of an authorisation mechanism is the Access Control List (ACL), which allows or denies users and applications access to particular information system resources and to perform specific actions. General users may be allowed to perform some actions but refused permission to perform others, whereas super users or system administrators can perform almost every action in the system. Similarly, some users are authorised to access certain data and denied access to more sensitive data; in database systems, for instance, general users may be permitted to access less sensitive data while the administrator is permitted access to more sensitive data.
Access control: It consists of the various mechanisms designed and implemented to grant authorised users access to information system resources and to control the actions they are allowed to perform (e.g., view, modify, update, install, delete). It can also control physical access to an organisation's critical resources. It ensures that the confidentiality and integrity of data and information systems are not compromised. Physical access control restricts physical access to critical resources, while logical access control restricts access to information systems (networks, computing nodes, servers, files, and databases). Access to locations where critical assets (servers, network equipment, files) are stored is restricted using electronic access control systems that rely on keys, access card readers, personal identification number (PIN) pads, and auditing and reporting to track employee access to these locations. Access to information systems is restricted using authentication and authorisation mechanisms that evaluate the required user login credentials, which can include passwords, PINs, biometric scans, security tokens or other authentication factors [60].
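The following minimal sketch shows how an ACL-style logical access control check can sit behind an authentication step. The roles, actions, and mapping are hypothetical placeholders, not a prescribed scheme:

```python
# Hypothetical in-memory ACL: each role maps to the set of actions it may perform.
ACL = {
    "viewer":   {"view"},
    "operator": {"view", "update"},
    "admin":    {"view", "update", "install", "delete"},
}

def is_authorised(role, action):
    """Return True only if the role's ACL entry permits the requested action."""
    return action in ACL.get(role, set())

# Even a successfully authenticated user is still constrained by the ACL.
print(is_authorised("viewer", "view"))    # True
print(is_authorised("viewer", "delete"))  # False - authenticated but not authorised
```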
Non-repudiation: It is a way to ensure that the sender of data cannot deny having sent it and that the receiver cannot deny having received it. It also ensures that an entity that signs a document cannot repudiate its signature. It is a concept adopted from the legal field and has become one of the five pillars of information assurance, alongside confidentiality, integrity, availability, and authentication. It guarantees the authenticity and integrity of the message: it proves the sender's identity to the receiver and assures the sender that the message was delivered without being altered along the way. In this way, neither the sender nor the receiver can deny having sent, received, or processed the data. Digital signatures can be used to ensure non-repudiation as long as they are unique to each entity.
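Digital signatures are the standard technical building block here. The sketch below, assuming the third-party cryptography package, signs a message with a private key that only the sender holds, so a valid signature is evidence the sender cannot later disown; the message content is purely illustrative.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()  # kept secret by the sender
public_key = private_key.public_key()               # shared with everyone

message = b"transfer 100 EUR to account 42"
signature = private_key.sign(message)  # only the private-key holder can produce this

try:
    public_key.verify(signature, message)  # any party can check it
    print("Valid signature: the sender cannot deny having signed this message")
except InvalidSignature:
    print("Invalid signature or altered message")
```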
Accountability: Accountability requires organisations to take all the necessary steps to prevent cyberattacks and mitigate the risk of a possible attack. If an attack occurs, the organisation must take responsibility for the damages and engage relevant stakeholders to handle the consequences and prevent future attacks. It must also accept responsibility for dealing with security challenges and fallouts from security breaches.
A typical IoT architecture starts with the physical layer, comprising IoT sensors and actuators that may be connected in a star, linear, mesh, or tree network topology. IoT devices can process the data collected by the sensors at the physical layer or send it to the fog/cloud computing layers for analysis through IoT access networks and the Internet core network. The fog/cloud computing nodes perform lightweight or advanced analytics on the data, and the results may be sent to users for decision-making or to IoT actuators to perform a specific task or control a given system or process. This implies that an IoT infrastructure may include IoT devices, wireless access points, gateways, fog computing nodes, internet routers and switches, telecommunication transmission equipment, cellular base stations, servers, databases, cloud computing nodes, mobile applications, and web applications. All of these hardware devices and applications constitute attack surfaces that cybercriminals can target to compromise IoT systems.
In implementing IoT security, it is vital to consider the kind of hardware found in IoT systems, from the IoT device level through the IoT networks, fog computing nodes, and internet core networks to the cloud. Securing traditional internet and cloud-based infrastructure is complex but comparatively less challenging, because massive computing and communication resources can be deployed to run the cybersecurity algorithms and applications that eliminate vulnerabilities and detect and prevent cyberattacks, ensuring the confidentiality, integrity, and availability of data and information systems. In IoT devices, by contrast, computing and communication resources are very limited because of the limited energy available to power the device. Hence, energy-hungry and computationally expensive cybersecurity algorithms and applications cannot be used to secure IoT nodes. This hardware limitation makes IoT devices vulnerable to cyberattacks and easy to compromise.
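One common response to these constraints is to favour ciphers that run efficiently in software on microcontrollers lacking AES hardware acceleration, such as ChaCha20-Poly1305. The sketch below, assuming the third-party cryptography package on the gateway side, shows authenticated encryption of a sensor payload; the key-provisioning comment and payload are illustrative assumptions, not a prescribed deployment.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()  # 256-bit key, e.g., provisioned at manufacture
aead = ChaCha20Poly1305(key)

nonce = os.urandom(12)                 # must never repeat for the same key
reading = b"temp=21.7;battery=82"
ciphertext = aead.encrypt(nonce, reading, b"node-17")  # b"node-17": authenticated metadata

# The receiver needs the same key, the nonce, and the metadata to decrypt;
# any tampering with the ciphertext or metadata makes decryption fail.
print(aead.decrypt(nonce, ciphertext, b"node-17"))     # b'temp=21.7;battery=82'
```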
IoT devices are vulnerable to certain types of security attacks due to the nature of IoT hardware. Some of these vulnerabilities or weaknesses resulting from IoT hardware limitations include:
IoT hardware attacks are the various ways that security weaknesses resulting from limitations in IoT hardware can be exploited to compromise the security of IoT data and systems. An attacker may install malware on IoT devices, manipulate their functionality, or exploit their weaknesses to gain access in order to steal or damage data, degrade the quality of service, or disrupt services. An attacker could also compromise IoT devices in order to use them in a more sophisticated, large-scale attack on ICT infrastructures and critical systems. The scale and frequency of IoT attacks are increasing due to the growth in IoT attack surfaces, the ease with which IoT devices can be compromised, and the integration of IoT devices into existing systems and critical infrastructure. Some of the common IoT hardware attacks include:
It is tough to eliminate IoT hardware vulnerabilities due to the hardware resource constraint of IoT devices. Some of the measures for securing IoT devices and mitigating the risk posed by IoT security vulnerabilities include the following:
The security of computer systems and networks has garnered significant attention in recent years, driven by malicious attackers' ongoing exploitation of these systems, which leads to service disruptions. The increasing prevalence of known and unknown vulnerabilities has made designing and implementing effective security mechanisms increasingly complex and challenging. This section discusses the challenges and complexities of IoT cybersecurity systems.
An in-depth description of the cybersecurity challenges is presented below, and they are summarised briefly in diagram 102.
Complexities in Security Implementation
Implementing robust security in IoT ecosystems is a multifaceted challenge that involves satisfying critical security requirements, such as confidentiality, integrity, availability, authenticity, accountability, and non-repudiation. While these principles may appear straightforward, the technologies and methods needed to achieve them are often complex. Ensuring confidentiality, for example, may involve advanced encryption algorithms, secure key management, and secure data transmission protocols. Similarly, maintaining data integrity requires comprehensive hashing mechanisms and digital signatures to detect unauthorised changes.
Availability is another essential aspect that demands resilient infrastructure to protect against Distributed Denial-of-Service (DDoS) attacks and ensure continuous access to IoT services. The authenticity requirement involves using public key infrastructures (PKI) and digital certificates, which introduce key distribution and lifecycle management challenges.
Achieving accountability and non-repudiation involves detailed auditing mechanisms, secure logging, and tamper-proof records to verify user actions and device interactions. These systems must operate seamlessly within constrained IoT environments with limited processing power, memory, or energy resources. Implementing these mechanisms thus demands technical expertise and the ability to reason through subtle trade-offs between security, performance, and resource constraints. The complexity is compounded by the diversity of IoT devices and communication protocols and the potential for vulnerabilities arising from integrating these devices into broader networks.
Inability to Exhaust All Possible Attacks
When developing security mechanisms or algorithms, it is essential to anticipate and account for potential attacks that may target the system's vulnerabilities. However, fully predicting and addressing every conceivable attack is often not feasible. This is because malicious attackers constantly innovate, usually approaching security problems from entirely new perspectives. By doing so, they can identify and exploit weaknesses in the security mechanisms that were not initially apparent or considered during development. This dynamic nature of attack strategies means that security features can never be wholly immune to every potential threat, no matter how well-designed. As a result, the development process must involve defensive strategies, ongoing adaptability, and the ability to respond to novel attack vectors that may emerge quickly. The continuous evolution of attack techniques, combined with the complexity of modern systems, makes it nearly impossible to guarantee absolute protection against all threats.
The problem of Where to Implement the Security Mechanism
Once security mechanisms are designed, a crucial challenge arises in determining the most effective locations for their deployment to ensure optimal security. This issue is multifaceted and involves both physical and logical considerations.
Physically, it is essential to decide at which points in the network security mechanisms should be positioned to provide the highest level of protection. For instance, should security features such as firewalls and intrusion detection systems be placed at the perimeter, or should they be implemented at multiple points within the network to monitor and defend against internal threats? Deciding where to position these mechanisms requires careful consideration of network traffic flow, the sensitivity of different network segments, and the potential risks of various entry points.
Logically, the placement of security mechanisms also needs to be considered within the system's architecture. For example, within the TCP/IP model, security features could be implemented at different layers, such as the application layer, transport layer, or network layer, depending on the nature of the threat and the type of protection needed. Each layer offers different opportunities and challenges for securing data, ensuring privacy, and preventing unauthorised access. The choice of layer for deploying security mechanisms affects how they interact with other protocols and systems, potentially influencing the overall performance and efficiency of the network.
In both physical and logical terms, selecting the proper placement for security mechanisms requires a comprehensive understanding of the system's architecture, potential attack vectors, and performance requirements. Poor placement can leave critical areas vulnerable or lead to inefficient resource use, while optimal placement enhances the system's overall defence and response capabilities. Thus, strategic deployment is essential to achieving robust and scalable security for modern networks.
The problem of Trust Management
Security mechanisms are not limited to implementing a specific algorithm or protocol; they often require a robust system of trust management that ensures the participants involved can securely access and exchange information. A fundamental aspect of this is the need for participants to possess secret information—such as encryption keys, passwords, or certificates—that is crucial to the functioning of the security system. This introduces various challenges regarding how such sensitive information is generated, distributed, and protected from unauthorised access.
For instance, cryptographic keys must be created and distributed carefully to prevent interception or theft. Secure key exchange protocols must be employed, and mechanisms for storing keys securely—such as hardware security modules or secure enclaves—must be in place. Additionally, the management of trust between parties is often based on keeping these secrets confidential. If any party loses control over their secret information or if it is exposed, the entire security framework may be compromised.
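As a concrete illustration of a key exchange, the following Python sketch (assuming the third-party cryptography package) uses X25519 Diffie-Hellman so that two parties derive the same session key while only ever transmitting public values. It is a minimal, unauthenticated example; a real deployment must also authenticate the public keys (e.g., via certificates) to prevent man-in-the-middle attacks.

```python
# pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a key pair and shares only the public half.
alice_private = x25519.X25519PrivateKey.generate()
bob_private = x25519.X25519PrivateKey.generate()

# Each side combines its own private key with the peer's public key...
alice_shared = alice_private.exchange(bob_private.public_key())
bob_shared = bob_private.exchange(alice_private.public_key())
assert alice_shared == bob_shared  # ...and both arrive at the same secret

# Derive a proper session key from the raw shared secret before using it.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"iot-session-v1").derive(alice_shared)
print(len(session_key), "byte session key established")
```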
Beyond the management of secrets, trust management also relies on communication protocols whose behaviour can complicate the development and reliability of security mechanisms. Many security mechanisms depend on the assumption that specific communication properties will hold, such as predictable latency, order of message delivery, or the integrity of data transmission. However, in real-world networks, factors like varying network conditions, congestion, and protocol design can introduce unpredictable delays or alter the sequence in which messages are delivered. For example, if a security system depends on setting time-sensitive limits for message delivery—such as in time-based authentication or transaction protocols—any communication protocol or network that causes delays or variability in transit times may render these time limits ineffective. This unpredictability can undermine the security mechanism's ability to detect fraud, prevent replay attacks, or ensure timely authentication.
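Time-based one-time passwords (TOTP) illustrate this dependence on timing assumptions: client and server compute the same short-lived code from a shared secret and the current time, so clock skew or delivery delays beyond the time step break authentication. Below is a minimal standard-library Python sketch of the RFC 6238 computation; the base32 secret is a well-known documentation example, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)   # current time window
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Client and server agree only if their clocks fall in the same 30-second window.
print(totp("JBSWY3DPEHPK3PXP"))
```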
Moreover, trust management issues also extend to the trustworthiness of third-party services or intermediaries, such as certificate authorities in public key infrastructures or cloud service providers. If the trust assumptions about these intermediaries fail, it can lead to a cascade of vulnerabilities in the broader security system. Thus, a well-designed security mechanism must account for the secure handling of secret information, the potential pitfalls introduced by variable communication conditions and the complexities of establishing reliable trust relationships in a decentralised or distributed environment.
Continuous Development of New Attack Methods
Computer and network security can be viewed as an ongoing battle of wits, where attackers constantly seek to identify and exploit vulnerabilities. In contrast, security designers or administrators work tirelessly to close those gaps. One of the inherent challenges in this battle is the asymmetry of the situation: the attacker only needs to discover and exploit a single weakness to compromise a system, while the security designer must anticipate and mitigate every potential vulnerability to achieve what is considered “perfect” security.
This stark contrast creates a significant advantage for attackers, as they can focus on finding just one entry point, one flaw, or one overlooked detail in the system's defences. Moreover, once a vulnerability is identified, it can often be exploited rapidly, sometimes even by individuals with minimal technical expertise, thanks to the availability of tools or exploits developed by more sophisticated attackers. This constant risk of discovery means that the security landscape is always in a state of flux, with new attack methods emerging regularly.
On the other hand, the designer or administrator faces the monumental task of identifying every potential weakness in the system and understanding how each vulnerability could be exploited in novel ways. As technology evolves and new systems, protocols, and applications are developed, new attack vectors emerge, making it difficult for security measures to remain static. Attackers continuously innovate, leveraging new technologies, techniques, and social engineering strategies, further complicating the defence task. They may adapt to environmental changes, bypassing traditional security mechanisms or exploiting new weaknesses introduced by system updates or third-party components.
This dynamic forces security professionals to stay one step ahead, often engaging in continuous research and development to identify new threat vectors and implement countermeasures. It also underscores the impossibility of achieving perfect security. Even the most well-designed systems can be vulnerable to the next wave of attacks, and the responsibility to defend against these evolving threats is never-ending. Thus, developing new attack methods ensures that the landscape of computer and network security remains a complex, fast-paced arena in which defenders must constantly evolve their strategies to keep up with increasingly sophisticated threats.
Security is Often Ignored or Poorly Implemented During Design
One of the critical challenges in modern system development is that security is frequently treated as an afterthought rather than being integrated into the design process from the outset. Security considerations are often only discussed after the system's core functionality and architecture have been designed, developed, and even deployed. This reactive approach, where security is bolted on as an additional layer at the end of the development cycle, leaves systems vulnerable to exploitation by malicious actors who quickly discover and exploit flaws that were not initially considered.
The tendency to overlook security during the early stages of design often stems from a focus on meeting functionality requirements, deadlines, or budget constraints. When security is not a primary consideration from the start, it is easy for developers to overlook potential vulnerabilities or fail to implement adequate protective measures. As a result, the system may have critical weaknesses that are difficult to identify or fix later on. Security patches or adjustments, when made, can become cumbersome and disruptive, requiring substantial changes to the architecture or design of the system, which can be time-consuming and expensive.
Moreover, systems not designed with security are often more prone to hidden vulnerabilities. For example, they may have poorly designed access controls, insufficient data validation, inadequate encryption, or weak authentication methods. These issues can remain undetected until an attacker discovers a way to exploit them, potentially leading to severe data integrity, confidentiality, or availability breaches. Once a security hole is identified, patching it in a system not built with security in mind can be challenging. It may require reworking substantial portions of the underlying architecture or logic, which may not have been anticipated during the initial design phase.
The lack of security-focused design also affects the system's scalability and long-term reliability. As new features are added or updates are made, vulnerabilities can emerge if security isn't continuously integrated into each step of the development process. This results in a system that may work perfectly under normal conditions but is fragile or easily compromised when exposed to malicious threats.
To address this, security must be treated as a fundamental aspect of system design, incorporated from the beginning of the development lifecycle. It should not be a separate consideration but rather an integral part of the architecture, just as essential as functionality, performance, and user experience. By prioritising security during the design phase, developers can proactively anticipate potential threats, reduce the risk of vulnerabilities, and build robust and resilient systems for future security challenges.
Difficulties in Striking a Balance Between Security and Customer Satisfaction
One of the ongoing challenges in information system design is finding the right balance between robust security and customer satisfaction. Many users, and even some security administrators, perceive strong security measures as an obstacle to a system's smooth, efficient, and user-friendly operation or the seamless use of information. The primary concern is that stringent security protocols can complicate system access, slow down processes, and interfere with the user experience, leading to frustration or dissatisfaction.
For example, implementing strong authentication methods, such as multi-factor authentication (MFA), can significantly enhance security but may also create additional steps for users, increasing friction during login or access. While this extra layer of protection helps mitigate security risks, it may be perceived as cumbersome or unnecessary by end-users who prioritise convenience and speed. Similarly, enforcing strict data encryption or secure communication protocols can slow down system performance, which, while necessary for protecting sensitive information, may result in delays or decreased efficiency in routine operations.
Furthermore, security mechanisms often introduce complexities that make the system more difficult for users to navigate. For instance, complex password policies, regular password changes, or strict access control rules can lead to confusion or errors, especially for non-technical users. The more stringent the security requirements, the more likely users may struggle to comply or bypass security measures in favour of convenience. In some cases, this can create a dangerous false sense of security or undermine the protections the security measures are designed to enforce.
Moreover, certain security features may conflict with specific functionalities that users require for their tasks, making them difficult or impossible to implement in specific systems; for example, ensuring that data remains secure during transmission often involves limiting access to specific ports or protocols, which could impact the ability to use certain third-party services or applications. Similarly, achieving perfect data privacy may necessitate restricting the sharing of information, which can limit collaboration or slow down the exchange of essential data.
The challenge lies in finding a compromise where security mechanisms are robust enough to protect against malicious threats but are also sufficiently flexible to avoid hindering user workflows, system functionality, and overall satisfaction. Striking this balance requires careful consideration of the needs of both users and security administrators and constant reassessment as technologies and threats evolve. To achieve this, designers must work to develop security solutions that are both effective and as seamless as possible, protecting without significantly disrupting the user experience. Practical user training and clear communication about the importance of security can also help mitigate dissatisfaction by fostering an understanding of why these measures are necessary. Ultimately, the goal should be creating an information system that delivers a secure environment and a positive, user-centric experience.
Users Often Take Security for Granted
A common issue in cybersecurity is that users and system managers often take security for granted, not fully appreciating its value until a security breach or failure occurs. This tendency arises from a natural human inclination to assume that systems are secure unless proven otherwise. Users are less likely to prioritise security when everything functions smoothly, viewing it as an invisible or abstract concept that doesn't immediately impact their day-to-day experience. This attitude can lead to a lack of awareness about their potential risks or the importance of investing in strong security measures to prevent those risks.
Many users, especially those looking for cost-effective solutions, are primarily concerned with acquiring devices or services that fulfil their functional needs—a smartphone, a laptop, or an online service. Security often takes a backseat to factors like price, convenience, and performance. In pursuing low-cost options, users may ignore or undervalue security features, opting for devices or platforms that lack robust protections, such as outdated software, weak encryption, or limited user controls. While these devices or services may meet the immediate functional demands, they may also come with hidden security vulnerabilities that expose users to cyber threats, such as data breaches, identity theft, or malware infections.
Additionally, system managers or administrators may sometimes adopt a similar mindset, focusing on operational efficiency, functionality, and cost management while overlooking the importance of implementing comprehensive security measures. Security features may be treated as supplementary or burdens, delaying or limiting their integration into the system. This results in weak points in the system that are only recognised when an attack happens, and by then, the damage may already be significant.
This lack of proactive attention to security is further compounded by the false sense of safety that can arise when systems appear to be running smoothly. Without experiencing a breach, many users may underestimate the importance of security measures, considering them unnecessary or excessive. However, the absence of visible threats can be deceiving, as many security breaches happen subtly without immediate signs of compromise. Cyber threats are often sophisticated and stealthy, evolving in ways that make it difficult for the average user to identify vulnerabilities before it's too late.
To counteract this complacency, it's essential to foster a deeper understanding of the value of cybersecurity among users and system managers. Security should be presented as an ongoing investment in protecting personal and organisational assets rather than something that can be taken for granted. Education and awareness campaigns can play a crucial role in helping users recognise that robust security measures protect against visible threats and provide long-term peace of mind. By prioritising security at every stage of device and system use—whether in design, purchasing decisions, or regular maintenance—users and system managers can build a more resilient, secure environment less vulnerable to emerging cyber risks.
Security monitoring challenges in IoT infrastructures
One of the key components of maintaining strong security is continuous monitoring, yet in today's fast-paced, often overloaded environment this is a complex and resource-intensive task. Security is not a one-time effort or a set-it-and-forget-it process; it requires regular, and sometimes even constant, oversight to identify and respond to emerging threats. However, the demand for quick results and the drive to meet immediate business objectives often lead to the neglect of long-term security monitoring efforts. In addition, many security teams are stretched thin across multiple responsibilities, making it challenging to prioritise and maintain the vigilance necessary for effective cybersecurity.
This challenge is particularly evident in the context of the Internet of Things (IoT), where security monitoring becomes even more complex. The IoT ecosystem consists of a vast and ever-growing number of connected devices, many deployed across different environments and serving particular niche purposes. One of the main difficulties in monitoring IoT devices is that some are hidden or not directly visible to traditional security monitoring tools. For example, IoT devices may be deployed in remote locations, embedded in larger systems, or integrated into complex networks, making it difficult for security teams to maintain a comprehensive view of all the devices in their infrastructure. These “invisible” devices are prime targets for attackers, as they can easily be overlooked during routine security assessments.
The simplicity of many IoT devices further exacerbates the monitoring challenge. These devices are often designed to be lightweight, inexpensive, and easy to use, which means they may lack advanced security features such as built-in encryption, authentication, or even the ability to alert administrators to suspicious activities. While their simplicity makes them attractive from a consumer standpoint—offering ease of use and low cost—it also makes them more vulnerable to attacks. Without sophisticated monitoring capabilities or secure configurations, attackers can exploit these devices to infiltrate a network, launch DDoS attacks, or compromise sensitive data.
Moreover, many IoT devices are deployed without proper oversight or follow-up, as organisations may prioritise functionality over security during procurement. This “set-and-forget” mentality means that once IoT devices are installed, they are often left unchecked for long periods, creating a window of opportunity for attackers to exploit any weaknesses. Additionally, many IoT devices may not receive regular firmware updates, leaving them vulnerable to known exploits that could have been patched if monitored and maintained.
The rapidly evolving landscape of IoT, combined with the sheer number of devices, makes it almost impossible for security teams to stay on top of every potential threat in real time. To address this challenge, organisations must adopt more robust, continuous monitoring strategies to detect anomalies across various devices, including IoT. This may involve leveraging advanced technologies such as machine learning and AI-based monitoring systems that automatically detect suspicious behaviour without constant human intervention. Additionally, IoT devices should be integrated into a broader, cohesive security framework that includes regular updates, vulnerability assessments, and comprehensive risk management practices to ensure these devices are secure and potential security gaps are identified and addressed on time.
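As a toy illustration of the anomaly detection mentioned above, the sketch below flags a device whose traffic deviates sharply from its own recent history. It is a deliberately simple statistical stand-in (a z-score threshold) for the ML-based monitoring systems described; the traffic figures and threshold are invented for the example.

```python
import statistics

def is_anomalous(history, value, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from recent history."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return stdev > 0 and abs(value - mean) / stdev > threshold

# Packets per minute recently observed from one IoT device.
traffic = [120, 118, 125, 119, 122, 121, 117, 123]

print(is_anomalous(traffic, 124))  # False - within normal behaviour
print(is_anomalous(traffic, 480))  # True - possible compromise, e.g. DDoS participation
```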
Ultimately, as IoT grows in scale and complexity, security teams must be more proactive in implementing monitoring solutions that provide visibility and protection across all network layers. This requires advanced technological tools and a cultural shift toward security as a continuous, ongoing process rather than something that can be handled in short bursts or only when a breach occurs.
The Procedures Used to Provide Particular Services Are Often Counterintuitive
Security mechanisms are typically designed to protect systems from various threats. Still, the procedures to implement these mechanisms are often counterintuitive or not immediately apparent to users or those implementing them. In many cases, security features are complex and intricate, requiring multiple layers of protection, detailed configurations, and extensive testing. When a user or system administrator is presented with a security requirement—such as ensuring data confidentiality, integrity, or availability—it is often unclear whether such elaborate and sometimes cumbersome measures are necessary. At first glance, the measures may appear excessive or overly complicated for the task, leading some to question their utility or necessity.
The need for these complex security mechanisms becomes evident only when the various aspects of a potential threat are thoroughly examined. For example, a seemingly simple requirement, such as ensuring the secure transfer of sensitive data, may involve a series of interconnected security protocols, such as encryption, authentication, access control, and non-repudiation, often hidden from the end user. Each of these mechanisms serves a critical role in protecting the data from potential threats—such as man-in-the-middle attacks, unauthorised access, or data tampering—but this level of sophistication is not always apparent. The complexity is driven by the diverse and evolving nature of modern cyber threats, which often require multi-layered defences to be effective.
The necessity for such intricate security procedures often becomes more evident when a more in-depth understanding of the potential threats and vulnerabilities is gained. For instance, an attacker may exploit seemingly minor flaws in a system, such as weak passwords, outdated software, or unpatched security holes. These weaknesses may not be immediately apparent or seem too trivial to warrant significant attention. However, once a security audit is conducted and the full scope of potential risks is considered—ranging from insider threats to advanced persistent threats (APTs)—it becomes apparent that a more robust security approach is required to safeguard against these risks.
Moreover, the procedures designed to mitigate these threats often involve trade-offs in terms of usability and performance. For example, enforcing stringent authentication methods may slow down access times or require users to remember complex passwords, which may seem inconvenient or unnecessary unless the potential for unauthorised access is fully understood. Similarly, implementing encryption or firewalls may add processing overhead or introduce network delays, which might seem like a burden unless it is clear that these measures are essential for defending against data breaches or cyberattacks.
Security mechanisms are often complex and counterintuitive because they must account for many potential threats and adversaries, some of which may not be immediately apparent. The process of securing a system involves considering not only current risks but also future threats that may emerge as technology evolves. As such, security measures must be designed to be adaptable and resilient in the face of new and unexpected challenges. The complexity of these measures reflects the dynamic and ever-evolving nature of the cybersecurity landscape, where seemingly simple tasks often require sophisticated, multifaceted solutions to provide the necessary level of protection.
The Complexity of Cybersecurity Threats from the Emerging Field of Artificial Intelligence (AI)
As Artificial Intelligence (AI) continues to evolve and integrate into various sectors, the cybersecurity landscape is becoming increasingly complex. AI, with its advanced capabilities in machine learning, data processing, and automation, presents a double-edged sword. While it can significantly enhance security systems by improving threat detection and response times, it also opens up new avenues for sophisticated cyberattacks. The growing use of AI by malicious actors introduces a new dimension to cybersecurity threats, making traditional defence strategies less effective and increasing the difficulty of safeguarding sensitive data and systems.
One of the primary challenges AI poses to cybersecurity is its ability to automate and accelerate the identification and exploitation of vulnerabilities. AI-driven attacks can adapt and evolve in real time, bypassing traditional detection systems that rely on predefined rules or patterns. For example, AI systems can use machine learning algorithms to continuously learn from the behaviour of the system they are attacking, refining their methods to evade security measures such as firewalls or intrusion detection systems (IDS). This makes AI-based attacks much harder to detect, because they can mimic normal system behaviour or use techniques previously unseen by human analysts.
Furthermore, AI's ability to process and analyse vast amounts of data makes it an ideal tool for cybercriminals to mine for weaknesses. With AI-powered tools, attackers can sift through large datasets, looking for patterns or anomalies that could indicate a vulnerability. These tools can then use that information to craft highly targeted attacks, such as spear-phishing campaigns, that are more convincing and difficult to detect. Additionally, AI can automate social engineering attacks by personalising and optimising messages based on available user data, making them more effective at deceiving individuals into divulging sensitive information or granting unauthorised access.
Another layer of complexity arises from the potential misuse of AI in creating deepfakes or synthetic media, which can be used to manipulate individuals or organisations. Deepfakes, powered by AI, can generate realistic videos, audio recordings, or images that impersonate people in positions of authority, spreading misinformation or causing reputational damage. In cybersecurity, such techniques can be employed to manipulate employees into granting access to secure systems or to convince stakeholders to make financial transactions based on false information. The ability of AI to produce high-quality, convincing fake content complicates the detection of fraud and deception, making it harder for individuals and security systems to discern legitimate communication from malicious ones.
Moreover, AI's influence in the cyber world is not limited to the attackers; it also has significant implications for the defenders. While AI can help improve security measures by automating the analysis of threats, predicting attack vectors, and enhancing decision-making, it also presents challenges for security professionals who must stay ahead of increasingly sophisticated AI-driven attacks. Security systems that rely on traditional, signature-based detection methods may struggle to keep pace with AI-driven threats' dynamic and adaptive nature. AI systems in cybersecurity must be continually updated and refined to combat new and evolving attack techniques effectively.
The use of AI in cybersecurity also raises concerns about vulnerabilities within AI systems. AI algorithms, especially those based on machine learning, are not immune to exploitation. For instance, attackers can manipulate the training data used to teach AI systems, introducing biases or weaknesses that can be exploited. This is known as an “adversarial attack,” where small changes to input data can cause an AI model to make incorrect predictions or classifications. Adversarial attacks pose a significant risk, particularly in systems relying on AI for decision-making, such as autonomous vehicles or critical infrastructure systems.
As AI continues to advance, it is clear that cybersecurity strategies will need to adapt and evolve in tandem. The complexity of AI-driven threats requires a more dynamic and multifaceted approach to defence, combining traditional security measures with AI-powered tools to detect, prevent, and respond to threats in real time. Additionally, as AI technology becomes more accessible, organisations must invest in training and resources to ensure that their cybersecurity teams can effectively navigate the complexities AI introduces in attack and defence scenarios. The convergence of AI and cybersecurity is a rapidly evolving field, and staying ahead of emerging threats will require constant vigilance, innovation, and collaboration across industries and sectors.
The Difficulty in Maintaining a Reasonable Trade-off Between Security, QoS, Cost, and Energy Consumption
One of the key challenges in modern systems design, particularly in areas like network architecture, cloud computing, and IoT, is balancing the competing demands of security, Quality of Service (QoS), cost, and energy consumption. Each of these factors plays a critical role in a system's performance and functionality, but prioritising one often comes at the expense of others. Achieving an optimal trade-off among these elements is complex and requires careful consideration of how each factor influences the overall system.
Security is a critical component in ensuring the protection of sensitive data, system integrity, and user privacy. Strong security measures—such as encryption, authentication, and access control—are essential for safeguarding systems from cyberattacks, data breaches, and unauthorised access. However, implementing high-level security mechanisms often increases system complexity and processing overhead. For example, encryption can introduce delays in data transmission, while advanced authentication methods (e.g., multi-factor authentication) can slow down access times. This can negatively impact QoS, i.e., the performance characteristics of a system such as its responsiveness, reliability, and availability. In environments where low latency and high throughput are essential, such as real-time applications or high-performance computing, security measures that introduce delays or bottlenecks can degrade QoS.
Cost is another critical consideration, as organisations must manage the upfront and ongoing expenses associated with system development, implementation, and maintenance. Security mechanisms often involve significant costs regarding the resources required to design and deploy them and the ongoing monitoring and updates needed to keep systems secure. Similarly, ensuring high QoS may require investments in premium infrastructure, high-bandwidth networks, and redundant systems to guarantee reliability and minimise downtime. Balancing these costs with budget constraints can be difficult, mainly when investing in top-tier security or infrastructure, which can result in higher operational expenses.
Finally, energy consumption is an increasingly important factor, particularly in the context of sustainable computing and green technology initiatives. Systems requiring constant security monitoring, high-level encryption, and redundant infrastructure consume more energy, increasing operational costs and contributing to environmental concerns. Managing power usage is particularly challenging in energy-constrained environments, such as IoT devices or mobile applications. Energy-efficient security measures may be less robust or may require trade-offs in the level of protection they provide.
Striking a reasonable balance among these four factors requires careful optimisation and decision-making. In some cases, prioritising security can reduce system performance (QoS) or increase energy consumption, while focusing on minimising energy usage might result in security vulnerabilities. Similarly, trying to cut costs by opting for cheaper, less secure solutions can lead to higher long-term expenses if a security breach occurs.
Organisations must take a holistic approach to achieve an effective balance, considering the system's specific requirements, potential risks, and resource constraints. For example, in critical infrastructure or financial systems, security may need to take precedence over cost or energy consumption, as the consequences of a breach would be too significant to ignore. In contrast, consumer-facing applications may emphasise maintaining QoS and minimising energy consumption while adopting security measures that are adequate for the threat landscape but not as resource-intensive.
Advanced technologies like machine learning and AI can help dynamically adjust trade-offs based on real-time conditions. For example, AI-powered systems can adjust security measures based on the sensitivity of the transmitted data or the system's load, optimising security and performance. Similarly, energy-efficient algorithms and hardware can minimise power usage without sacrificing too much security or QoS.
Achieving a reasonable trade-off between security, QoS, cost, and energy consumption requires a careful, context-specific approach, ongoing monitoring, and the ability to adjust strategies as system requirements and external conditions evolve.
Neglecting to Invest in Cybersecurity
Failing to allocate adequate resources to cybersecurity is a critical mistake made by many organisations, especially smaller businesses and startups. The consequences of neglecting cybersecurity investments can be far-reaching, with potential damage affecting both the organisation's immediate operations and its long-term viability. In today's increasingly digital world, where sensitive data and critical infrastructure are interconnected through complex networks, cybersecurity is no longer a luxury or a secondary concern—it is an essential element of any business strategy. Ignoring or underestimating the importance of cybersecurity exposes an organisation to a wide range of threats, from data breaches to ransomware attacks, each of which can result in significant financial losses, reputational damage, and legal ramifications.
One of the most immediate risks of neglecting cybersecurity is increased vulnerability to cyberattacks. Hackers and cybercriminals continuously evolve their techniques, using sophisticated methods to exploit weaknesses in systems, networks, and applications. Without adequate investment in cybersecurity measures such as firewalls, encryption, intrusion detection systems (IDS), and multi-factor authentication (MFA), organisations create fertile ground for these attacks. Once a system is compromised, the damage can be extensive: sensitive customer data may be stolen, intellectual property could be leaked, and systems may be crippled, leading to prolonged downtime and operational disruptions.
Beyond the immediate damage, neglecting cybersecurity can also negatively impact an organisation's reputation. In today's hyper-connected world, news of a data breach or cyberattack spreads quickly, potentially causing customers and partners to lose trust in the organisation. Consumers are increasingly concerned about the privacy and security of their personal information, and a single breach can lead to a loss of customer confidence that may take years to rebuild. Moreover, businesses that fail to protect their customers' data may also face significant legal and regulatory consequences. Laws such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) impose strict requirements on data protection, and failure to comply with these regulations due to inadequate cybersecurity measures can result in heavy fines, lawsuits, and other legal penalties.
Another key consequence of neglecting cybersecurity is the potential for operational disruptions. Cyberattacks can cause significant downtime, rendering critical business systems inoperable and halting normal operations. For example, a ransomware attack can lock organisations out of their systems, demanding a ransom payment for the decryption key. During this period, employees may be unable to access important files, emails, or customer data, and business processes may come to a standstill. This operational downtime disrupts the workflow and results in lost productivity and revenue, with some companies facing weeks or even months of recovery time.
Additionally, the cost of dealing with the aftermath of a cyberattack can be overwhelming. Organisations not investing in proactive cybersecurity measures often spend significantly more on recovery after an incident. These costs can include legal fees, public relations campaigns to mitigate reputational damage, and the implementation of new security measures to prevent future breaches. In many cases, these costs far exceed the initial investment that would have been required to establish a robust cybersecurity program.
Neglecting cybersecurity also risks an organisation missing out on potential opportunities. As businesses increasingly rely on digital technologies, clients, partners, and investors place growing emphasis on the security of an organisation's systems. Organisations that cannot demonstrate strong cybersecurity practices may be excluded from partnerships, denied contracts, or lose out on investment opportunities. For example, many companies today require their suppliers and partners to meet specific cybersecurity standards before entering into business agreements. Failing to meet these standards can limit growth potential and damage business relationships.
Furthermore, cybersecurity requires ongoing attention and adaptation as technology evolves and the digital threat landscape becomes more complex. A one-time investment in security tools and protocols is no longer sufficient to protect systems. Cybercriminals constantly adapt their tactics, developing new attacks and finding innovative ways to bypass traditional defences. Therefore, cybersecurity is an ongoing effort that requires regular updates, continuous monitoring, and employee training to stay ahead of the latest threats. Neglecting to allocate resources for regular security audits, patch management, and staff education leaves an organisation vulnerable to these evolving threats.
In conclusion, neglecting to invest in cybersecurity is risky and potentially catastrophic for any organisation. The consequences of a cyberattack can be severe, ranging from financial losses and operational downtime to reputational harm and legal penalties. Organisations can protect their data, systems, and reputations from the growing threat of cybercrime by prioritising cybersecurity and investing in the right tools, processes, and expertise. Cybersecurity is not just a technical necessity but a critical business strategy that can safeguard an organisation's future and foster trust with customers, partners, and investors.
To secure IoT systems and preserve data confidentiality, privacy, and integrity, it is important to understand the various vulnerabilities or security weaknesses of IoT systems that cybercriminals can exploit. Most of the security vulnerabilities of IoT are found at the physical layer of the IoT reference architecture, which consists of the IoT devices. As discussed in the previous sections, IoT devices have limited computing and communication resources, making it difficult to implement strong security protocols and algorithms that can ensure that the confidentiality, integrity, availability, accountability, and nonrepudiation security requirements of IoT data and systems are satisfied. Hence, the security measures designed and implemented to secure IoT data and systems are often insufficient, making IoT systems vulnerable to several types of cybersecurity attacks and easier to compromise.
As IoT devices are being integrated into existing systems of businesses, personal devices, household systems, and critical infrastructure, they are becoming attractive targets for cybercriminals, making them vulnerable to constant attacks. Cybercriminals are often searching for security weaknesses (vulnerabilities) in IoT devices that they can exploit in order to steal or damage data, disrupt the quality of service, or coordinate the devices to conduct large-scale attacks such as DoS/DDoS attacks or any attack to compromise other systems, especially critical infrastructures.
Given the severe risk posed by security weaknesses in IoT systems to IoT services and other services in society, including the possibility of causing the loss of human lives or disrupting society, it is crucial to identify and address IoT security vulnerabilities before cybercriminals can exploit them. The proliferation of diverse IoT devices across various sectors in society with very little or no standardisation and regulation has increased IoT vulnerabilities and attack surfaces that cybercriminals can leverage to compromise the data collected using IoT devices and to compromise existing systems. Some of the IoT security vulnerabilities include the following (figure 103):
Although IoT vulnerabilities cannot all be eliminated, there are best practices that can be adopted to ensure that IoT vulnerabilities are not easily exploited to compromise IoT data and systems. Some of the security measures and techniques that can be adopted to harden IoT security and mitigate the risk of an IoT attack resulting from the exploitation of any of the IoT vulnerabilities include the following (figure 104):
In this section, we discuss the concept of IoT attack vectors, attack surfaces, and threat vectors to clarify the difference between these cybersecurity terms, which are often used interchangeably. We discuss some IoT attack vectors that should be considered when designing cybersecurity strategies for IoT networks and systems. We also discuss some strategies that can be used to eliminate or mitigate the risk posed by IoT attack vectors.
IoT attack vectors are the various methods that cybercriminals can use to access IoT devices to launch cyberattacks on the IoT infrastructure or other information system infrastructure of an organisation or the Internet as a whole. They provide a means for cybercriminals to exploit security vulnerabilities and compromise the confidentiality, integrity, and availability of sensitive data. It is essential to minimise the attack vectors to reduce the risk of a security breach, which may cost an organisation a great deal of money and damage its reputation.
The number of attack vectors keeps growing as cybercriminals develop numerous simple and sophisticated methods to exploit unresolved security vulnerabilities and zero-day vulnerabilities in computer systems and networks. Thus, there is no single solution that mitigates the risk posed by the growing number of attack vectors in classical computer systems and networks. As the number of IoT devices connected to the Internet increases, the number of IoT-related attack vectors also increases, requiring the development of a holistic cybersecurity strategy that handles both the traditional attack vectors (e.g., malware, viruses, email attachments, web pages, pop-ups, instant messages, text messages, social engineering, credential theft, vulnerability exploits, and insufficient protection against insider threats) and those designed to target IoT systems (e.g., exploitation of IoT-specific vulnerabilities such as weak or no passwords, lack of firmware and software updates, and unencrypted communications).
To defend IoT networks and systems, it is crucial to understand the various ways a cybercriminal can gain unauthorised access to IoT networks and systems. The term threat vector is often used interchangeably with attack vector. An IoT threat vector is any potential path or method that cybercriminals can use to compromise the confidentiality, integrity, or availability of IoT data and systems. As IoT networks grow and are integrated with other IT and cyber-physical systems, the complexity of managing them and the number of threat or attack vectors increase. It is therefore very challenging to eliminate all threat or attack vectors, but IoT-based cybersecurity systems are designed to eliminate them whenever possible.
An IoT attack surface is the totality of the attack vectors that cybercriminals can use to manipulate an IoT network or system to compromise data confidentiality, integrity, or availability. It combines all IoT attack vectors available to cybercriminals to compromise IoT data and systems. This implies that the more IoT attack vectors an organisation has due to deploying IoT systems, the larger its cybersecurity attack surface, and vice versa. Therefore, organisations must minimise the number of attack vectors to minimise the attack surface.
To eliminate IoT attack vectors, it is essential to understand the nature of some of them and their sources and then develop comprehensive security strategies to deal with them. This section will discuss IoT attack vectors from the perception layer to the application layer. Some of the IoT attack vectors or ways in which cybercriminals can gain illegal access to IoT networks and systems (to compromise data security or launch further attacks) include the following:
The attack vectors discussed above can be grouped into two categories: passive and active. Passive attack vector exploits allow attackers to gain unauthorised access to IoT networks and systems without intruding or interfering with their operation. Examples of these attack vectors include phishing and other social engineering-based attack vectors. On the other hand, active attack vector exploits interfere with the operation of the IoT network and system. Examples of this category of attack vector include DDoS attacks, brute-force attacks, malware attacks, etc.
To address common attack vectors, it is vital to understand the nature of the attack vector exploits, including passive and active ones. Most attack vector exploits share some common characteristics, which include the following:
Identifying and deploying practical security tools and policies to deal with IoT attack vectors is essential. These security tools and policies should be designed to eliminate or reduce the risk from IoT attack vectors from the IoT perception layer to the application layer. Some of the strategies that can be designed to defend IoT networks and systems against well-known IoT attack vectors include the following:
In the previous sections of this chapter, we discussed the various IoT vulnerabilities, cybersecurity attacks, and attack vectors and the various best practices to address these vulnerabilities, threats, and attack vectors. This section presents the various IoT security technologies and a general methodology for securing IoT networks and systems.
Various cybersecurity tools are deployed to design a robust and comprehensive cybersecurity system. No single cybersecurity tool can handle security issues at all the layers of the IoT reference architecture. Therefore, appropriate security tools should be implemented at the various layers, from the IoT perception or device layer to the application layer. Hence, IoT security can be grouped into the following categories (figure 105):
The hardware constraints of IoT devices make it hard to deploy traditional end-node security tools like firewalls and antimalware software to secure them. It is also challenging to update and patch these devices in the way we update and install security patches on traditional end nodes. Nevertheless, many efforts are being made to adapt conventional security technologies to secure IoT devices. At the same time, there is a growing need for security technologies that can address the specific security requirements of all IoT nodes at a lower energy and communication cost. Some of the technologies designed to secure IoT devices include:
It is critical to implement lightweight cryptographic encryption algorithms designed for efficient performance on devices with limited processing power and energy constraints to enhance the security of data transmitted by IoT devices. Algorithms such as the Advanced Encryption Standard (AES) and other optimised, energy-efficient cryptographic schemes protect data integrity and confidentiality; legacy ciphers such as the Data Encryption Standard (DES) are now considered insecure and should be avoided in new designs.
Importance of Lightweight Encryption Algorithms for IoT
Data Protection During Storage and Transmission
Firmware Integrity Verification
Enhanced Security Through Layered Cryptographic Solutions
Implementing lightweight cryptographic algorithms, such as AES, is fundamental for ensuring that data transmitted by IoT devices is secure. These algorithms safeguard data during storage and communication and play a critical role in verifying the integrity of firmware updates. By utilising cryptographic digital signatures, IoT systems can confirm that updates are authentic and unaltered, reinforcing the trustworthiness of the entire IoT ecosystem. For comprehensive security, integrating these cryptographic practices with other proactive measures ensures resilience against a range of cyber threats.
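To make this concrete, the following is a minimal sketch of authenticated encryption of a sensor payload with AES-128 in GCM mode. It assumes the open-source Python cryptography package; the key handling, device identifier, and payload are illustrative only, since in practice keys would be provisioned securely, for example in a secure element.

```python
# A minimal sketch of authenticated encryption of a sensor payload with
# AES-128-GCM (assumes the third-party cryptography package; key handling
# and identifiers are illustrative).
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # provision securely in practice
aesgcm = AESGCM(key)

reading = b'{"sensor": "temp-01", "value": 21.7}'
nonce = os.urandom(12)                      # must never repeat under one key

# GCM provides confidentiality and integrity in a single pass; the device ID
# is bound to the ciphertext as associated data without being encrypted.
ciphertext = aesgcm.encrypt(nonce, reading, b"device-42")
assert aesgcm.decrypt(nonce, ciphertext, b"device-42") == reading
```

AES-GCM is attractive for constrained devices because one primitive delivers both confidentiality and tamper detection, keeping code size and energy cost low.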
IoT devices' security and reliability depend heavily on their firmware, the foundational software layer that controls the hardware's functions. Because IoT devices are typically connected to the internet 24/7, they are exposed to a wide range of cybersecurity threats. Regular and secure firmware updates are critical to patch vulnerabilities, enhance functionality, and defend against new attack vectors. Without secure mechanisms for firmware verification and updates, IoT devices can become entry points for attackers to compromise network security, disrupt services, or steal sensitive data.
Common Firmware-Based Security Risks in IoT Devices
Best Practices for Secure Firmware Verification and Updates
The Role of Standards and Regulations
Adhering to industry standards and regulations, such as those outlined by the Internet Engineering Task Force (IETF) and the National Institute of Standards and Technology (NIST), can bolster the security of IoT firmware. These guidelines provide best practices for secure development, encryption protocols, and authentication mechanisms. Compliance with these standards helps establish user trust and aligns with global cybersecurity expectations.
Manufacturers and businesses deploying IoT devices should ensure that their firmware update processes and verification mechanisms comply with relevant security standards. This protects devices from cyberattacks, demonstrates a commitment to security, and can provide competitive advantages in industries where data protection is paramount.
Secure firmware verification and update mechanisms are indispensable for maintaining the security and integrity of IoT devices. Implementing a secure boot process that loads and executes only trusted, digitally signed firmware is essential to prevent unauthorised or tampered firmware from running. This measure protects IoT devices from malware injection attacks during start-up. Additionally, secure over-the-air (OTA) update mechanisms should be established to enable the safe delivery of patches and security updates to IoT devices, safeguarding against man-in-the-middle attacks and unauthorised modifications during the update process [71]. These strategies, combined with rigorous development practices and compliance with industry standards, create a robust security framework that supports the safe operation of IoT ecosystems.
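To make the signed-update idea concrete, the following minimal sketch signs a firmware image and verifies it before installation, assuming the open-source Python cryptography package. The in-line key generation is for illustration only; in a real deployment the vendor's private key would live in an HSM and only the public key would be baked into the device's boot ROM.

```python
# A minimal sketch of signed-firmware verification (illustrative only).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: sign the firmware image with the vendor's private key.
vendor_key = Ed25519PrivateKey.generate()          # in practice kept in an HSM
firmware = b"\x7fELF...firmware image bytes..."    # placeholder image
signature = vendor_key.sign(firmware)

# Device side: only the public key is stored in the bootloader / ROM.
trusted_public_key = vendor_key.public_key()

def verify_before_install(image: bytes, sig: bytes) -> bool:
    """Accept an update only if its signature verifies against the trusted key."""
    try:
        trusted_public_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

assert verify_before_install(firmware, signature)             # genuine update
assert not verify_before_install(firmware + b"!", signature)  # tampered image
```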
Blockchain-based firmware updates
Regular firmware updates for IoT devices are essential to maintaining security and functionality; however, ensuring the authenticity, integrity, and compatibility of these updates poses significant challenges. Leveraging blockchain technology can enhance the security and reliability of the entire update process, from generation and signing to distribution, verification, and installation. This approach greatly reduces the risk of malicious tampering, unauthorised modifications, or errors that could compromise devices or networks.
Blockchain technology facilitates transparent collaboration among multiple stakeholders, allowing them to contribute to and review firmware code while maintaining a clear, traceable record of versions and code changes. Digital signatures and cryptographic hashes can be employed to confirm the source's identity and the integrity of the updated content. Additionally, blockchain consensus mechanisms and smart contracts provide a robust framework for verifying and executing updates and recording and auditing the results. This ensures a comprehensive and secure process for firmware updates, safeguarding both devices and connected networks.
Cybercriminals are creating increasingly sophisticated malware to target the specific vulnerabilities of IoT devices. These attacks can vary in severity, from harmless pranks, such as altering the temperature on a smart thermostat, to more serious threats, like taking control of security cameras or compromising industrial control systems. IoT malware differs significantly from traditional computer viruses. These malicious programs are typically engineered to function on devices with limited processing power and memory, making detection and removal more difficult. Additionally, they can quickly propagate through networks of connected devices, forming extensive botnets capable of carrying out powerful distributed denial-of-service (DDoS) attacks.
The variety of IoT malware showcases the ingenuity of cybercriminals, who are continually devising new methods to exploit these devices, often outpacing manufacturers' ability to release timely patches for vulnerabilities [72]. It is advisable to implement comprehensive security technologies to safeguard IoT devices from malware-based threats. Deploying robust antimalware solutions, including antivirus, antispyware, anti-ransomware, and anti-trojan software, can significantly enhance the protection of IoT devices. These security measures help detect, prevent, and neutralise malicious programs before they can compromise device functionality or data integrity. Given the unique vulnerabilities and limited processing power of many IoT devices, choosing lightweight, efficient security solutions tailored to their specific needs is crucial. Integrating these antimalware tools with real-time threat monitoring and automatic updates can further bolster the defence against rapidly evolving cyber threats.
Effective authentication management technologies such as password management systems and multifactor authentication should be adopted to ensure robust access control mechanisms for IoT data privacy and confidentiality.
Secure Credential Management: Avoid using default or hardcoded credentials in firmware, as attackers can quickly discover them and gain unauthorised access. Instead, strong authentication mechanisms, such as multifactor authentication, should be implemented to enhance security. Encourage users to change default passwords during the initial setup of the IoT device to prevent potential attacks based on known credentials.
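The following minimal sketch shows one way to honour this advice: storing a user-chosen password as a salted PBKDF2 hash rather than shipping a hardcoded default. The iteration count and strings are illustrative assumptions; only the Python standard library is used.

```python
# A minimal sketch of salted password hashing for device credentials
# (standard library only; iteration count is illustrative).
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256 with a high iteration count slows brute forcing.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    # compare_digest avoids timing side channels.
    return hmac.compare_digest(hash_password(password, salt), stored)

# Forced password change at first boot: store salt + hash, never the password.
salt = os.urandom(16)
stored = hash_password("chosen-at-first-setup", salt)

assert verify_password("chosen-at-first-setup", salt, stored)
assert not verify_password("admin", salt, stored)   # default creds rejected
```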
The Simple Network Management Protocol (SNMP) is essential in maintaining the security and operational integrity of IoT devices within a network. This widely adopted protocol is designed to collect data from and manage network-connected devices, ensuring they remain protected against unauthorised access and other security threats. However, organisations should utilise robust monitoring and management tools tailored for comprehensive oversight to harness SNMP's capabilities effectively.
The Importance of SNMP Monitoring and Management: SNMP is a communication protocol that facilitates the exchange of management information between network devices and monitoring systems. It allows network administrators to oversee a range of connected devices, such as routers, switches, IoT sensors, and other hardware. The information collected through SNMP can be invaluable for identifying potential security risks, detecting performance bottlenecks, and preemptively addressing issues before they escalate.
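As a concrete illustration, the following minimal sketch polls a device's system description over SNMP. It assumes the third-party pysnmp package; the address and community string are placeholders, and in production SNMPv3 with authentication and encryption should be preferred over SNMPv2c community strings.

```python
# A minimal sketch of an SNMP poll (assumes the third-party pysnmp package).
from pysnmp.hlapi import (CommunityData, ContextData, ObjectIdentity,
                          ObjectType, SnmpEngine, UdpTransportTarget, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),        # SNMPv2c; prefer SNMPv3 + auth
        UdpTransportTarget(("192.0.2.10", 161)),   # placeholder gateway address
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication:
    print("SNMP query failed:", error_indication)
elif error_status:
    print("SNMP error:", error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```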
Key Features and Capabilities of SNMP Monitoring Solutions
Centralised Monitoring Platform: SNMP monitoring solutions provide a unified platform for administrators to keep track of all network-connected devices. This centralised approach simplifies managing diverse IoT devices, enabling administrators to monitor real-time device traffic, access points, and overall activity. Such comprehensive visibility ensures that any potential security breach or abnormal behaviour can quickly be addressed.
Enhancing IoT Security with SNMP: By integrating SNMP monitoring tools into the broader security strategy, organisations can bolster their defence mechanisms and strengthen their IoT ecosystem's resilience. Regular audits and real-time oversight provided by SNMP solutions enable better compliance with security protocols and help maintain the integrity of sensitive data transmitted through IoT devices. Additionally, integrating SNMP data with other cybersecurity tools, such as Security Information and Event Management (SIEM) systems, can provide deeper insights and enhance incident response capabilities.
Best Practices for Implementing SNMP Solutions
Therefore, SNMP monitoring and management are vital for organisations looking to safeguard their IoT infrastructure. By implementing advanced SNMP solutions, businesses can achieve better visibility, proactive threat detection, and comprehensive control over their network, thus enhancing overall security and operational efficiency.
Communication security between IoT devices and backend servers is fundamental to a strong network security framework. As IoT ecosystems grow in complexity and scale, protecting the integrity, confidentiality, and authenticity of data transmissions becomes increasingly critical. One of the most effective strategies for securing these interactions is implementing robust encryption protocols, such as Transport Layer Security (TLS).
The Importance of Robust Encryption in IoT Security: IoT devices often transmit sensitive data, from personal user information to industrial control signals. If intercepted or tampered with, this data can have severe consequences, including breaches, unauthorised access, and disruption of essential services. Encryption protocols act as a protective barrier, ensuring that data remains confidential and unaltered between devices and servers. Organisations can minimise the risks associated with data interception by encrypting data in transit and providing secure communication.
How TLS Enhances IoT Security
Transport Layer Security (TLS) is a widely recognised encryption protocol designed to secure data transmitted over networks. TLS establishes an encrypted connection between IoT devices and backend servers, protecting data from eavesdropping and tampering. Here's how TLS helps fortify network security in IoT ecosystems:
Implementing TLS in IoT Networks
Implementing TLS across an IoT network involves several best practices and considerations:
Complementary Security Measures
While TLS is a powerful tool for securing data in transit, it should be part of a comprehensive security strategy that includes:
Robust encryption protocols like TLS are essential for safeguarding the communication channels between IoT devices and backend servers. By encrypting data, authenticating parties, and ensuring data integrity, TLS minimises the risk of unauthorised access and data breaches. However, effective TLS implementation should complement continuous monitoring, updates, and a layered security approach to maximise protection in an increasingly interconnected world.
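The following minimal sketch shows a device-side TLS connection that trusts only the operator's certificate authority, requires at least TLS 1.2, and presents a client certificate for mutual TLS. The host name, port, and file paths are placeholders; only the Python standard library is used.

```python
# A minimal sketch of a device-side mutual-TLS connection (standard library).
import socket
import ssl

# Trust only the operator's CA, and require at least TLS 1.2.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="backend-ca.pem")   # placeholder path
context.minimum_version = ssl.TLSVersion.TLSv1_2
# Present the device's own certificate so the backend can authenticate it too.
context.load_cert_chain(certfile="device-cert.pem", keyfile="device-key.pem")

with socket.create_connection(("backend.example.com", 8883)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="backend.example.com") as tls:
        print("negotiated:", tls.version())                     # e.g. 'TLSv1.3'
        tls.sendall(b'{"sensor": "temp-01", "value": 21.7}')    # encrypted in transit
```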
Logging and Monitoring for Comprehensive Threat Management
Security Information and Event Management (SIEM) systems play a vital role in protecting IoT ecosystems by combining logging, monitoring, and advanced data analysis to safeguard devices and networks. These technologies provide a unified platform for collecting and analysing security data, essential for maintaining a secure environment in an increasingly interconnected landscape. Below, we explain how logging and monitoring capabilities contribute to comprehensive IoT security and why they are indispensable for modern organisations.
Real-Time Monitoring and Live Tracking
Comprehensive Log Collection and Analysis
Alert Mechanisms and Incident Response
Benefits of Implementing SIEM in IoT Security
SIEM systems are integral to IoT security, providing a powerful combination of logging, real-time monitoring, and automated alerts to help organisations detect and respond to threats efficiently. By aggregating data from a wide range of sources, analysing logs for anomalies, and providing comprehensive alerts, SIEM solutions enhance an organisation's ability to maintain secure operations in an increasingly connected world. Implementing a high-quality SIEM system ensures that businesses are not merely reactive but proactive in their IoT security efforts, positioning them to handle present and future challenges confidently.
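To illustrate the kind of correlation rule a SIEM applies, the following minimal sketch flags a source that produces several failed logins within a short window. The log format and thresholds are hypothetical; real SIEM platforms apply far richer rule sets over aggregated data.

```python
# A minimal sketch of a SIEM-style correlation rule: flag a source that logs
# several failed logins inside a short window (log format is hypothetical).
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 5
WINDOW = timedelta(minutes=1)

# (timestamp, source, outcome) tuples as parsed from aggregated device logs.
events = [(datetime(2025, 1, 1, 12, 0, s), "203.0.113.9", "login_failed")
          for s in range(6)]

recent = defaultdict(list)
for ts, src, outcome in events:
    if outcome != "login_failed":
        continue
    # Keep only failures inside the sliding window, then add the new one.
    recent[src] = [t for t in recent[src] if ts - t <= WINDOW] + [ts]
    if len(recent[src]) >= THRESHOLD:
        print(f"ALERT: possible brute-force attempt from {src} at {ts}")
```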
Navigating the unpredictable landscape of digital threats is challenging, but effective risk management in an IoT ecosystem is achievable. Businesses of all sizes must integrate robust security protocols into their operations, focusing on enhancing threat detection and response. Dedicated IT administrators or specialised security teams (e.g., security operation centres) should secure networks, including all IoT devices. To design and implement robust cybersecurity tools and policies to secure IoT networks and systems, cybersecurity analysts or teams should conduct comprehensive network and software risk assessments, implement robust defensive measures, and leverage SIEM solutions and other security monitoring tools. Some of these strategies have been discussed in [73].
Conduct Comprehensive Network and Software Risk Assessments. Practical cyber threat intelligence revolves around finding and addressing vulnerabilities within a cybersecurity framework. This process should be continuous and consist of planning, data collection, analysis, and reporting. The resulting report should be evaluated and adapted to include new findings before being incorporated into strategic decisions.
Risk assessments can be broken down into three main types:
Implement Robust Defensive Measures. A comprehensive cybersecurity policy is essential for protecting your IoT ecosystem. This policy should incorporate a range of strategies to minimise risks. Standard defensive practices include:
Leverage SIEM Solutions. Security Information and Event Management (SIEM) systems are crucial for real-time cybersecurity management. These solutions enhance security by integrating threat intelligence with incident response, making them an invaluable tool for analysing security operations within an IoT ecosystem.
SIEM platforms gather event data from applications, devices, and other systems within the IoT infrastructure and consolidate this data into a clear, actionable format. The system issues customisable alerts based on different threat levels. Key benefits of using SIEM solutions include:
To effectively defend against IoT malware, a comprehensive, multi-layered approach that integrates advanced technology and robust security practices is essential. Here are some expert-recommended best practices discussed in [74]:
The proliferation of the Internet of Things (IoT) has revolutionised industries by enabling data collection, transmission, and analysis from billions of interconnected devices. However, this rapid adoption has also introduced significant security challenges, particularly concerning the storage and management of IoT data in databases. IoT database security protects sensitive data collected from IoT devices, ensuring its integrity, availability, and confidentiality.
This detailed overview explores the unique challenges of IoT database security, common threats, best practices, and emerging trends in securing databases for IoT ecosystems.
The typical protection stack is presented in figure 106. It involves protection and management mechanisms on a variety of levels.
Network Security:
Network security in IoT databases protects the data flow between IoT devices and their associated databases from unauthorised access and cyberattacks. This involves securing communication protocols with encryption standards such as TLS, implementing firewalls to filter traffic, and utilising virtual private networks (VPNs) for remote access. Network segmentation can isolate IoT databases from other parts of the system, reducing the risk of lateral movement during a breach. Real-time monitoring and intrusion detection systems (IDS) ensure anomalies in traffic are promptly identified and mitigated.
Access Management:
Access management for IoT databases ensures that only authorised users, devices, and applications can access stored data. This is critical in preventing unauthorised manipulation or theft of sensitive information. Multi-factor authentication (MFA), role-based access control (RBAC), and device-specific tokens are commonly employed to regulate access. Additionally, periodic audits of access logs can reveal patterns indicative of suspicious activities, enabling proactive security measures.
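As a simple illustration of role-based access control, the following minimal sketch gates a database read behind a role-to-permission map. The roles, permissions, and data shown are illustrative; a production system would back this with authenticated identities and audited decisions.

```python
# A minimal sketch of role-based access control in front of an IoT database
# (roles, permissions, and data are illustrative).
ROLE_PERMISSIONS = {
    "admin":   {"read", "write", "delete"},
    "analyst": {"read"},
    "device":  {"write"},   # sensors may append readings but not read them back
}

def authorize(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

def read_measurements(role: str) -> list[float]:
    if not authorize(role, "read"):
        raise PermissionError(f"role '{role}' may not read measurements")
    return [21.7, 21.9]     # stand-in for a real database query

print(read_measurements("analyst"))    # allowed
# read_measurements("device")          # would raise PermissionError
```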
Threat Management:
Threat management in IoT databases focuses on detecting, mitigating, and preventing risks such as malware, ransomware, or insider threats that could compromise data integrity and availability. Organisations can use advanced threat detection tools powered by machine learning to identify unusual patterns in database queries or access attempts. Automated threat response mechanisms, such as isolating compromised database nodes, further enhance protection. Regular vulnerability assessments and patch management ensure the database remains resilient against emerging threats.
Data Protection:
Data protection in IoT databases ensures that sensitive information remains secure throughout its lifecycle—collection, storage, processing, and deletion. Encryption techniques like AES safeguard data at rest, while TLS protects data in transit. Secure backup strategies and redundancy mechanisms help mitigate the impact of data loss or corruption. Compliance with data protection regulations, such as GDPR or CCPA, ensures that personally identifiable information (PII) from IoT devices is handled responsibly. Data masking and anonymisation techniques are often employed to enhance privacy and limit exposure in case of a breach.
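One data-protection technique mentioned above is pseudonymisation. The following minimal sketch replaces an identifier with a keyed hash before storage, so records remain linkable for analytics but cannot be re-identified without the key. The field names and key handling are illustrative; only the Python standard library is used.

```python
# A minimal sketch of pseudonymising an identifier with a keyed hash before
# storage (standard library only; field names and key handling illustrative).
import hashlib
import hmac

PSEUDONYM_KEY = b"rotate-me-and-keep-in-a-vault"   # placeholder secret

def pseudonymise(value: str) -> str:
    # HMAC-SHA256 gives a stable pseudonym that cannot be reversed, or even
    # brute-forced from known identifiers, without the key.
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "P-1024", "heart_rate": 72}
stored = {**record, "patient_id": pseudonymise(record["patient_id"])}
print(stored)   # measurement kept, identifier replaced by a stable pseudonym
```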
IoT devices generate vast amounts of data, often in real-time, encompassing sensitive information such as personal identifiers, health records, location data, and industrial metrics. Ensuring the security of databases storing this data is critical for several reasons:
IoT database security presents distinct challenges due to the scale, diversity, and dynamic nature of IoT systems:
IoT databases face various security threats, many of which exploit the vulnerabilities inherent in IoT systems:
Implementing robust security measures for IoT databases involves a multi-layered approach to protect against various threats. Key best practices include:
As IoT ecosystems grow and evolve, new approaches and technologies are emerging to address database security challenges:
IoT database security is critical to ensuring IoT ecosystems' safe and efficient operation. Organisations can protect sensitive IoT data and maintain users' trust by addressing unique challenges, understanding common threats, and implementing best practices. As IoT adoption expands, proactive security strategies and emerging technologies will be essential in safeguarding IoT databases against evolving threats.
This chapter delves into blockchain technology. While often associated with cryptocurrency, blockchain is a flexible framework for securely storing, sharing, and protecting data across diverse domains. The chapter explores blockchain applications beyond financial transactions, widening readers' view of the technology and potential markets.
For developers, blockchain offers tools and encryption techniques for secure, distributed data storage. In business and finance, it enables decentralized transaction tracking without central authorities. Tech enthusiasts see it as a driver of the Internet's future, while others view it as a transformative tool for decentralizing control in society and the economy.
At its core, blockchain is a secure, distributed database powered by cryptography and distributed computing. Originating from Satoshi Nakamoto's innovative design, it enables global networks of computers to maintain a shared, tamper-resistant ledger. By fostering trust through technology rather than institutions, blockchain facilitates direct, secure collaboration, paving the way for new forms of global cooperation without reliance on traditional central entities.
The following subchapter introduces the concepts and applications of blockchains:
This chapter will explore how blockchain technology can be applied in various fields. While we will primarily use examples related to financial transaction processing, it's essential to understand that blockchain's potential is not limited to this area. This technology offers a flexible framework for implementing decentralised solutions to securely store, share, and protect data across multiple domains.
The term 'blockchain' has come to mean different things to different people. For developers, it's a set of tools and encryption techniques that make it possible to store data securely across a network of computers. In business and finance, it's seen as the technology behind digital currencies and a way to keep track of transactions without needing a central authority. For tech enthusiasts, blockchain is driving the future of the Internet. Others view it as a powerful tool that could reshape society and the economy, moving us toward a world with less centralised control.
At its core, blockchain is a new type of data structure that merges cryptography with distributed computing. Satoshi Nakamoto developed this technology by combining these elements to create a system where a network of computers works together to maintain a shared, secure database. In essence, blockchain technology can be described as a secure, distributed database.
Blockchain technology demonstrates that people anywhere in the world can trust each other and conduct business directly within large networks without needing a central authority to manage everything. This trust isn't based on big institutions but on technology—protocols, cryptography, and computer code. This shift makes it much easier for people and organisations to work together, opening up new possibilities for global collaboration without relying on traditional central institutions.
What is blockchain in simple terms?
A blockchain is a method of storing data. Data is stored in blocks, each linked to the previous block.
Each block contains:
Data in a block usually consists of transactions; each block can contain hundreds of them. For example, person A sends 100 EUR to person B; this transaction comprises three fields: sender identification, receiver identification, and amount.
A hash generated from a transaction record is a unique combination of letters and numbers. It is always unique to every block on the blockchain. When the data in the block changes, the hash will also change. Hashing the transaction data therefore makes undetected changes to a record impossible, as the hash of the modified record will no longer equal the previously stored value. (For example, if we generate a hash for the record “PersonA, PersonB, 100”, the result will be a unique value that changes if even one symbol of the original record is changed.) Each block also contains the previous block's hash, forming a chain structure.
As a result, if a transaction in any block changes, the block's hash will change. When the hash of a block changes, the previous-hash recorded in the next block no longer matches, and the mismatch is immediately visible. This gives blockchain the property of being tamper-resistant, as it becomes very easy to identify when data in a block has changed. Blockchain has one more property that makes it secure. A blockchain is not stored on one computer or server, as is usually the case with a database. Instead, it is stored in a large network of computers called a peer-to-peer network.
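The following minimal sketch demonstrates this chaining idea with SHA-256: each block stores the previous block's hash, so editing any transaction invalidates every later block. The block layout is deliberately simplified (real blockchains add timestamps, Merkle trees, and consensus data); only the Python standard library is used.

```python
# A minimal sketch of hash chaining and tamper detection (standard library).
import hashlib
import json

def block_hash(block: dict) -> str:
    # Canonical JSON serialisation so the same block always hashes the same.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64                                       # genesis predecessor
for tx in [("PersonA", "PersonB", 100), ("PersonB", "PersonC", 40)]:
    block = {"tx": tx, "prev_hash": prev}
    chain.append(block)
    prev = block_hash(block)

def chain_is_valid(chain: list[dict]) -> bool:
    expected = "0" * 64
    for block in chain:
        if block["prev_hash"] != expected:            # broken link detected
            return False
        expected = block_hash(block)
    return True

assert chain_is_valid(chain)
chain[0]["tx"] = ("PersonA", "PersonB", 1_000_000)    # tamper with block 0
assert not chain_is_valid(chain)                      # mismatch is detected
```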
A peer-to-peer network is one in which all computers play both server and client roles. Such networks usually do not have a centralised server; this role is shared across the network nodes. This structure allows the network to remain operational with any number and combination of available nodes.
Every time a new block of transactions is added to the network, all network members or nodes must verify whether all transactions in the block are valid. If all nodes in the network agree that the transactions in the block are correct, the new block will be added to every node's blockchain.
This process is called consensus. Hence, any attacker who tries to tamper with the data on the blockchain must tamper with the data in most of the computers in the peer-to-peer network.
Transactions
Blockchain technology uses two main types of cryptographic keys to secure transactions and data: public keys and private keys. These keys work together to protect the integrity of the blockchain, enabling secure exchanges of digital records and protecting user identities. Consider the example of a mailbox. The public key is like your email address, which everyone knows and can use to send you messages. The private key, on the other hand, is like the password to that mailbox. Only you own it, and only you can read the messages inside.
A public key is a cryptographic code that others share and use to interact with your blockchain account. It's generated from your private key using a specific mathematical process. Public keys are used to verify digital signatures and to encrypt data that only the private key can decrypt. This ensures that messages or transactions are intended for the correct recipient.
A private key is a secret cryptographic code that grants access to your blockchain records. It must be kept confidential because anyone accessing the private key can control the records associated with the corresponding public key. This key is used to authorise transactions on the blockchain. When it is necessary to transfer information (make a transaction), you use your private key to create a digital signature that proves you are the owner of those transactions.
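The following minimal sketch shows this signing flow with Ed25519 keys, assuming the open-source Python cryptography package. The transaction encoding is illustrative; real blockchains define their own serialisation and address formats.

```python
# A minimal sketch of signing and verifying a transaction
# (assumes the third-party cryptography package; encoding is illustrative).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # kept secret by the sender
public_key = private_key.public_key()        # shared with the whole network

transaction = b"PersonA,PersonB,100"
signature = private_key.sign(transaction)    # proves the sender authorised it

try:
    public_key.verify(signature, transaction)        # any node can check this
    print("signature valid: transaction accepted")
except InvalidSignature:
    print("signature invalid: transaction rejected")
```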
Public and private keys work together to secure blockchain operations:
Categories of blockchain
There are three categories of blockchain:
In public blockchains, anyone can access the database, store a copy, and make changes subject to consensus. Bitcoin is a classic public blockchain. The key characteristic of public blockchains is that they are entirely decentralised. The network is open to any new participants. All participants, having equal rights, can be involved in validating the blocks and accessing the data contained in the blocks.
Public blockchains process transactions more slowly because they are decentralised: every node must agree on each transaction. This requires time-consuming consensus methods like Proof of Work, prioritising security over speed.
Private blockchains (sometimes referred to as managed blockchains) are closed networks accessible only to authorised or select verified users. They are often owned by companies or organisations which use them to manage sensitive data and internal information.
Private blockchain is very similar to existing databases regarding access restrictions but is implemented with blockchain technology. As a result, such networks are not aligned with the principle of decentralisation.
Since a private blockchain is accessible only to certain people, there is no requirement for mining (validating) blocks. As a result, such networks are faster than other types because they avoid the overhead of mining, consensus, and so on.
Hybrid or consortium blockchains are permission-based blockchains, but in comparison to private blockchains, control is provided by a group of organisations rather than one coordinator. Such blockchains have more restrictions than public ones but are less restrictive than private ones. For this reason, they are also known as hybrid blockchains. New nodes are accepted based on a consensus with the consortium. Blocks are validated according to predefined rules defined by the consortium. Access rights can be public or limited to certain nodes. User rights might differ from user to user. Hybrid blockchains are partly decentralised.
Blockchain type selection
When choosing the right type of blockchain for a project, it's important to consider how it will be used, who will use it, and how it needs to perform. There are three main types of blockchains, each suited for different situations:
Private Blockchain:
When to Use: A private blockchain is the best option if the blockchain is to be used only within a single organisation by a specific group of people.
Advantages: It gives the organisation more control over who can join and see the data. It's suitable for internal processes like keeping track of company records or managing internal operations.
Performance: Since only a few trusted users are involved, the system can run faster and more efficiently because it doesn't need complex methods to agree on things.
Examples: Hyperledger Fabric, Corda.
Consortium Blockchain:
When to Use: A consortium blockchain is the right choice if the blockchain will be shared by a group of companies or organisations working together.
Advantages: It allows several organisations to work together while controlling who can access the blockchain. This is great for industries where businesses need to collaborate and share data securely.
Performance: Since only trusted groups are involved, it works faster and more efficiently than a public blockchain.
Examples: R3, Quorum.
Public Blockchain:
When to Use: A public blockchain is the best fit if the goal is to create a completely open and decentralised system that anyone can join, such as for cryptocurrencies.
Advantages: It allows anyone to participate and offers complete transparency. This is perfect for digital currencies, where trust needs to be spread across everyone using them.
Performance: Public blockchains can be slower and use more energy because they require complex processes to ensure everyone agrees. However, they are highly secure and trustworthy.
Examples: Bitcoin, Ethereum.
To summarise: if, in your project, the blockchain is only for internal use, go with a private blockchain. Choose a consortium blockchain if it's for a group of related businesses. And if it needs to be open to everyone, a public blockchain is the way to go.
While first-generation blockchain applications, such as Bitcoin, primarily focused on decentralised digital currencies, second-generation blockchain applications introduced more sophisticated functionalities. These advancements allowed for broader use cases beyond simple peer-to-peer transactions, laying the groundwork for smart contracts, decentralised applications (dApps), and improved scalability. Enhanced programmability, consensus mechanisms, and adaptability to various industries often characterise second-generation blockchains.
Key Features of Second-Generation Blockchain Applications
Smart Contracts
One of the innovations of second-generation blockchain applications is the introduction of smart contracts. Initially pioneered by Ethereum, smart contracts are self-executing agreements where the terms of the contract are written directly into code. Once predetermined conditions are met, the contract is automatically executed. This eliminates the need for intermediaries and significantly reduces transaction costs and delays.
Smart contracts have diverse applications, including financial agreements, supply chain automation, real estate, insurance, and beyond. They have enabled decentralised finance (DeFi) platforms to flourish by providing services like lending, borrowing, trading, and liquidity provision in a trustless, decentralised manner.
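To give a feel for how applications interact with smart contracts, the following minimal sketch performs a read-only call against a deployed contract, assuming the third-party web3 package (web3.py). The RPC URL, contract address, and ABI fragment are placeholders, not a real deployment.

```python
# A minimal sketch of a read-only smart-contract call (assumes the third-party
# web3 package; RPC URL, address, and ABI fragment are placeholders).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))    # placeholder node

erc20_fragment = [{                       # one view function from the ERC-20 ABI
    "name": "balanceOf", "type": "function", "stateMutability": "view",
    "inputs":  [{"name": "owner", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

token = w3.eth.contract(
    address="0x0000000000000000000000000000000000000001",  # placeholder address
    abi=erc20_fragment,
)

# A .call() executes contract code on a node without sending a transaction,
# so it costs no gas and changes no state.
balance = token.functions.balanceOf(
    "0x0000000000000000000000000000000000000002").call()
print("token balance:", balance)
```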
Decentralised Applications (dApps)
Second-generation blockchains also serve as platforms for decentralised applications, or dApps, which are applications that run on a blockchain instead of centralised servers. Ethereum, again, was the first platform to popularise the use of dApps by providing a robust infrastructure for developers to build decentralised applications with the Ethereum Virtual Machine (EVM).
dApps are transparent, autonomous, and can operate without a central authority. Their decentralised nature means they are less vulnerable to censorship and hacking, as they run on a distributed network of nodes rather than a single point of failure. This has led to the creation of various decentralised services, including decentralised exchanges (DEXs), prediction markets, gaming platforms, and more.
Programmability and Turing-Completeness
Unlike Bitcoin, which was specifically designed for financial transactions, second-generation blockchains like Ethereum introduced Turing-completeness. This means the blockchain can process any computational logic and execute any program, given enough resources. This allows developers to create complex and sophisticated blockchain-based applications that can address various problems.
Other platforms that focus on programmability include EOS, Tezos, Tron, and Solana. All of these allow for the deployment of smart contracts and dApps. These platforms differ from first-generation blockchains by being application-oriented rather than transaction-oriented.
Interoperability
One of the challenges addressed by second-generation blockchains is the need for interoperability between different blockchain networks. Many blockchain applications work in silos, but with the growth of DeFi and dApps, there has been a demand for different blockchain systems to communicate with each other. Interoperability solutions aim to enable blockchains to transfer data, tokens, and assets between them seamlessly.
Projects like Polkadot and Cosmos have focused on creating interoperable blockchain ecosystems. These networks use relay chains and hubs to connect different blockchains, facilitating cross-chain transactions and enabling various blockchain networks to work together. Interoperability helps improve liquidity, expands market reach, and enhances the overall utility of blockchain applications.
Decentralised Finance (DeFi)
One of the most transformative developments of second-generation blockchain applications is Decentralised Finance (DeFi). DeFi refers to a collection of financial services and platforms built on blockchain technology that aims to recreate traditional financial systems such as banks, exchanges, and lending platforms in a decentralised and permissionless way.
DeFi applications leverage smart contracts to create financial services like decentralised lending and borrowing platforms (e.g., Aave, Compound), decentralised exchanges (DEXs) (e.g., Uniswap, Sushiswap), and yield farming platforms. These services allow users to borrow, lend, trade, and earn interest on digital assets without relying on centralised entities. The global DeFi market has exploded in recent years, with billions of dollars locked in DeFi protocols, transforming how people access and manage financial services.
Governance and Decentralised Autonomous Organisations (DAOs)
Second-generation blockchain applications have introduced new models for decentralised governance, most notably in the form of Decentralised Autonomous Organisations (DAOs). DAOs are blockchain-based entities governed by a set of rules encoded in smart contracts. Token holders typically have voting rights and can collectively decide the organisation's direction, including funding, development, and protocol changes.
DAOs aim to provide a transparent, decentralised governance model, eliminating the need for traditional hierarchical structures. Many DeFi projects and blockchain ecosystems have adopted the DAO model for decision-making processes. For instance, MakerDAO is a popular DAO that governs the Maker Protocol, which allows users to generate the Dai stablecoin.
Examples of Second-Generation Blockchain Platforms
Ethereum
Ethereum is the most notable second-generation blockchain platform. It is designed to go beyond cryptocurrency by providing a general-purpose framework for building decentralised applications. Ethereum's ability to execute smart contracts and support decentralised applications has made it the go-to platform for innovators in DeFi, NFTs, and beyond.
EOS
EOS is another second-generation blockchain platform known for its high scalability, faster transaction speeds, and user-friendly development tools. EOS aims to address the scalability issues faced by Ethereum by offering higher throughput and lower transaction fees, making it a popular choice for developers building high-performance dApps.
Cardano
Cardano is a second-generation blockchain platform that provides a secure and scalable infrastructure for decentralised applications and smart contracts. It uses a unique Proof of Stake (PoS) consensus mechanism called Ouroboros, designed to be more energy-efficient than Ethereum's original Proof of Work. Cardano's research-based development approach emphasises formal verification to ensure the security and correctness of its blockchain protocols.
Polkadot
Polkadot is a platform designed to enable different blockchains to work together. It introduces the concept of “parachains,” which are parallel chains that can interoperate with each other. Polkadot's interoperability aims to solve the fragmentation problem by connecting various blockchains, enabling them to exchange information and assets seamlessly.
Solana
Solana is known for its high-performance blockchain, which is capable of handling thousands of transactions per second. It uses a novel consensus mechanism called Proof of History (PoH), which enables fast block confirmation times. This makes Solana suitable for high-frequency trading, gaming, and other high-demand dApps.
Blockchain technology has evolved far beyond its origins in cryptocurrency, finding applications across various industries. Here are some expanded applications of blockchain:
Green IoT (G-IoT) is the adoption of energy-efficient procedures (hardware, software, communication, or management) and waste-reduction methods (energy harvesting and recycling of e-waste) to conserve resources and reduce the waste, including pollutants like carbon dioxide, produced by the IoT ecosystem across the design, manufacturing, deployment, and operation of IoT systems, from the IoT devices to IoT cloud computing data centres. Green IoT is an emerging field within the IoT ecosystem that aims to raise awareness of the sustainability problems that may result from the massive deployment of IoT applications in the various sectors of society (health care, agriculture, manufacturing, intelligent transport systems, smart cities, supply chains, smart homes, and smart energy systems) and to explore ways to address those challenges. These challenges include the increase in energy consumption, which increases the IoT industry's carbon footprint, and the e-waste created by discarding electronic components of IoT devices, especially IoT batteries, as they need to be replaced after a few years.
Although energy-efficient strategies have been developed to minimise the energy consumption of IoT devices, the energy consumption of billions or trillions of IoT devices will be enormous. The amount of traffic generated by IoT devices is increasing exponentially, and it is predicted that by 2024, IoT traffic will constitute about 45% of the total internet traffic. A rapid increase in the amount of traffic generated by billions to trillions of IoT devices and transported through the internet to cloud computing platforms will significantly increase the energy consumption of the internet network infrastructures, especially with the dense deployment of 5G base stations and IoT wireless access points to service IoT devices. Also, data centres consume tremendous energy to process or analyse the massive amount of data collected using IoT devices.
Much attention is often focused on the energy consumed by IoT devices, networks, and computing platforms. However, less attention is given to the energy consumed by manufacturing and transporting IoT devices and other ICT systems used in the IoT ecosystem. The carbon footprint of the IoT industry can be traced from mining the minerals required to manufacture IoT devices, the manufacturing process, and the supply chains involved. To realise the green IoT goal, energy efficiency and sustainable practices should be designed to ensure that the mining, manufacturing and supply chains are environmentally friendly or sustainable.
The design and implementation of energy-efficient strategies may significantly reduce the energy consumption of IoT systems. However, the rapid increase in the use of IoT to address problems and increase efficiency and productivity in other sectors of the economy will result in a significant net increase in the energy consumed by these systems. Another approach to enforcing green IoT is using renewable energy sources to continuously recharge IoT batteries, reducing both the maintenance cost of replacing IoT batteries and the amount of e-waste created by the IoT industry.
Another green IoT strategy is to reuse and recycle IoT components and resources. This will significantly reduce the amount of waste produced by the IoT industry and optimise the use of natural resources in manufacturing IoT devices. Hence, reusing and recycling IoT components and resources is a green IoT strategy that increases the sustainability of the IoT industry.
An effective green IoT strategy should span the entire IoT product lifecycle, from design and production (manufacturing) to deployment, operation and maintenance, and recycling. The primary goal at each stage is to reduce energy consumption, adopt sustainable resource usage (e.g., harvesting energy from renewable sources, using sustainable materials), minimise e-waste and other pollutants, and recycle resources and waste. Therefore, a shift toward Green IoT (G-IoT) emphasises the need to adopt energy-efficient practices and processes that prioritise resource conservation, waste reduction, and environmental sustainability [75].
Green IoT strategies can be grouped into the following categories: green IoT design, green IoT manufacturing, green IoT applications, green IoT operation, and green IoT disposal [76].
Green IoT design: Designing IoT hardware, software, management systems, and policies with the requirement of minimising the energy consumption, carbon footprint, and environmental impact of IoT systems in mind. One of the design goals should be to implement energy-efficient strategies to reduce energy consumption and to develop strategies to minimise the amount of e-waste produced by IoT systems and infrastructures. Green IoT design techniques include green hardware, green communication and networking infrastructure, green software, green architecture, energy-efficient security mechanisms, and energy harvesting.
Green IoT operations: Deploying, operating, and managing IoT systems in such a way as to minimise energy consumption and waste. Strategies include switching off idle networking and computing nodes, applying radio resource optimisation mechanisms (e.g., controlling the transmission power and the modulation), using energy-efficient routing mechanisms, and applying software energy optimisation mechanisms (improving software code to be energy-efficient and using software optimisation algorithms to minimise energy consumption).
Green IoT applications or use cases: Using IoT applications to reduce energy consumption (or the carbon footprint) and to conserve resources to ensure sustainability in other industries, for example, using IoT to reduce energy consumption, water consumption, and the use of chemicals (fertilisers, herbicides, fungicides, insecticides, etc.) in the agricultural industry. IoT can reduce energy consumption, carbon footprint, waste production, and the over-utilisation of resources in the various sectors of the economy, including manufacturing, energy production, mining, health care, and transportation. Therefore, the massive deployment of IoT in these sectors to address efficiency and productivity challenges should be carried out in a way that also addresses sustainability issues.
Green IoT waste disposal and management: Reducing the waste created from deploying and operating IoT systems. Renewable energy sources should be used to recharge IoT batteries to reduce the amount of IoT battery waste generated and dumped in landfills. Recycling IoT components and resources should be adopted and promoted to reduce the amount of e-waste generated by the IoT industries and dumped in landfills, which may increase significantly with the large-scale adoption and deployment of IoT systems in the various sectors of the economy.
Green IoT manufacturing: Energy-efficient manufacturing infrastructure for IoT hardware. With the expectation of connecting hundreds of billions or trillions of IoT devices to satisfy the demand for IoT to improve various sectors or industries in the evolving tech-driven economy, the carbon footprint of the factories manufacturing IoT devices will be enormous. In addition, the manufactured IoT systems should themselves be energy efficient.
Details for each topic are presented in the following chapters:
Green IoT design is a paradigm based on a holistic IoT design framework that focuses on maintaining a balanced trade-off between the functional requirements, Quality of Service (QoS), interoperability, cost, security, and sustainability within the IoT ecosystem. It emphasises the need to prioritise energy efficiency and waste reduction across manufacturing IoT devices, deploying IoT systems, and operating IoT systems.
The emergence of modern technologies such as Fifth Generation (5G) mobile networks, blockchain, Artificial Intelligence (AI), and fog/cloud computing are unlocking new IoT use cases in various industries and sectors of the modern technology-driven economy or society. As a result, the number of IoT devices connected to the internet and the volume of traffic generated from IoT infrastructures will increase significantly, increasing the energy demand in the IoT ecosystem. The result is an increase in the carbon footprint and e-waste (especially from battery-powered IoT devices) from IoT-related services or the IoT ecosystem.
As noted earlier, an effective green IoT strategy should span the entire IoT product lifecycle, from design and manufacturing through deployment, operation, and maintenance to recycling, prioritising resource conservation, waste reduction, and environmental sustainability at every stage [77].
Green IoT design is a framework of design, production, implementation, deployment, and operation choices that reduce the energy consumption of, and the waste produced by, the IoT ecosystem. It comprises energy-efficient strategies that reduce the carbon footprint of manufacturing, deploying, and operating IoT systems (sensor devices, networking nodes, data centres and other computing devices), as well as strategies that reduce the waste generated by IoT infrastructures. These may involve hardware, software, management, or policy decisions. A green IoT design framework should cover the following design considerations: developing and deploying energy-efficient mechanisms, choosing appropriate energy sources, and adopting mechanisms that ensure environmental and resource sustainability.
Energy-efficient design
It involves designing and deploying energy-saving mechanisms to reduce the energy consumption of IoT devices. These mechanisms include the following:
The above energy-efficient computing, security, networking, hardware, and software design strategies can significantly reduce the energy demand of large-scale IoT infrastructures deployed throughout the world. Although the rapid growth of the IoT industry may offset part of these savings, the strategies still offer a significant gain for the environment.
Design choices for energy sources
The type of energy sources required to power IoT infrastructures varies from the IoT cyber-physical infrastructure to the core infrastructures. Electrical and electronic devices in the IoT infrastructure can be powered with energy from:
Environmental sustainability mechanisms
IoT systems should be designed, implemented, and operated in such a way as to ensure the conservation of natural resources and reduce the waste or pollutants that are generated by the IoT industry. Energy-efficient design and use of renewable energy sources are sustainability mechanisms. Deploying energy-efficient mechanisms and using renewable energy reduces the carbon footprint of IoT infrastructures. Other environmental sustainability strategies are:
As IoT is adopted to address problems in the various sectors of society and the economy, its energy demand is growing rapidly, almost exponentially. As the number of IoT devices increases, so does the traffic they create, raising the energy demand of the core networks that transport IoT traffic and of the data centres that analyse the massive amounts of data the devices collect. The large-scale adoption and deployment of IoT infrastructure and services will therefore significantly increase the energy demand of everything from the cyber-physical infrastructure (sensor and actuator devices) through the transport network to the cloud computing data centres. One of the design goals of green IoT is thus to develop effective strategies to reduce energy consumption. These strategies should be deployed across the whole IoT architecture stack; that is, energy-saving strategies should be implemented at all IoT layers, including:
At each layer, various energy-efficient strategies are implemented to reduce energy consumption. Much of the energy is used for computation and communication at the various layers, so a significant amount can be saved by deploying energy-efficient computing mechanisms (hardware and software), low-power communication and networking protocols, and energy-efficient architectures. Energy efficiency should be one of the main goals of green IoT design, manufacturing, deployment, and standardisation. The energy-saving mechanisms may vary from one layer to another, but they can be classified into the following categories (figure 107):
A realistic approach to significantly reducing the energy consumption of IoT systems is to dramatically improve the energy efficiency of the hardware, because a large proportion of the energy is used to power electrical and electronic hardware such as computing nodes, networking nodes, cooling (and air conditioning) systems, power electronics, security systems, and lighting. Recently, much attention has been paid to improving the energy efficiency of hardware in ICT infrastructures, especially in the IoT industry. The hardware energy-saving mechanisms in IoT infrastructures include:
To achieve the green IoT vision, deploying energy-efficient hardware in the entire IoT infrastructure (from the perception layer to the cloud) throughout the IoT industry is essential. Green IoT hardware is not limited to energy-efficient hardware design and hardware-based energy-saving mechanisms in the IoT infrastructure but also includes sustainable hardware approaches such as:
Reducing the size of hardware devices
There has been a dramatic reduction in the size of electronic hardware from the era of the vacuum tube to modern semiconductor chips. In the early days of electronics, computers occupied entire floors of buildings, radio communication systems were large cabinet-sized installations, and the smallest electronic device of the time was a two-way radio carried on the back [78]. As the size of electronic devices decreased, their energy demand also dropped drastically.
Over the past few decades, the sizes of computing and communication devices have decreased significantly, reducing the power required to operate them. Despite the significant progress made by the semiconductor industry to decrease the size of semiconductor chips while improving their performance, there is still a persistent drive to keep lowering the sizes of semiconductor chips to decrease their cost, reduce energy consumption, and conserve the resources required to manufacture them.
One of the co-founders of Intel, Gordon Moore, observed that the number of transistors on a chip doubles roughly every 24 months; the computer industry adopted this observation as the well-known Moore's law, and it became a performance benchmark for the semiconductor industry. As more transistors were packed into a single small chip, the size of computing and network equipment decreased significantly, translating into a significant decrease in power consumption. However, although advanced manufacturing processes have dramatically reduced transistor gate lengths, leakage current has increased, raising chip power consumption and heat dissipation; in the worst case, doubling the number of transistors on a chip could double its power consumption [79].
Some energy-hungry IoT devices require batteries with higher energy capacity. A battery's energy capacity is correlated with its size: higher-capacity batteries tend to be larger and heavier, limiting how far the device's size can be reduced. Alternatively, a relatively small battery can be paired with an energy harvesting module that continuously recharges it with energy harvested from the environment. Adding a harvesting module may increase the size of the IoT device, but it extends the device's operational lifetime. Note, however, that the energy harvested by such modules is small and that the associated power electronics themselves consume energy.
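To make this trade-off concrete, here is a minimal energy-budget sketch in Python; the battery capacity, load current, harvesting income, and converter efficiency are all illustrative assumptions, not measurements of any particular device.

```python
# Illustrative energy budget for a battery-powered IoT node with a small
# energy-harvesting module. All figures are assumptions for this example.

BATTERY_CAPACITY_MAH = 1000.0  # assumed battery capacity (mAh)
AVG_LOAD_CURRENT_MA = 0.15     # assumed average device draw (sleep + bursts)
HARVEST_INCOME_MA = 0.05       # assumed average harvesting income (as current)
CONVERTER_EFFICIENCY = 0.80    # assumed power-electronics efficiency

# Without harvesting, the battery alone carries the whole load.
lifetime_no_harvest_h = BATTERY_CAPACITY_MAH / AVG_LOAD_CURRENT_MA

# With harvesting, the module offsets part of the load, but converter
# losses reduce the usable harvested energy.
net_drain_ma = AVG_LOAD_CURRENT_MA - HARVEST_INCOME_MA * CONVERTER_EFFICIENCY
lifetime_harvest_h = BATTERY_CAPACITY_MAH / net_drain_ma

print(f"Lifetime without harvesting: {lifetime_no_harvest_h / 24:.0f} days")
print(f"Lifetime with harvesting:    {lifetime_harvest_h / 24:.0f} days")
```

Even a modest harvesting income extends the lifetime noticeably, which is the effect described above; if the harvested income exceeded the average load, the device could in principle run indefinitely.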
Another approach to further decreasing the size of IoT devices, and potentially their energy consumption, is to integrate the entire electronics of an IoT device, computer, or network node into a single Integrated Circuit (IC) called a System on a Chip (SoC) [80]. The components typically integrated into a SoC include a Central Processing Unit (CPU), input and output ports, memory, analogue input and output modules, and the power supply unit. A SoC can efficiently perform specific functions such as signal processing, wireless communication, security algorithms, image processing, and artificial intelligence. The primary reason for integrating a system's entire electronics into a chip is to reduce the energy consumption, size, and cost of the system as a whole: a system that was initially built from multiple chips is integrated into a single chip that is smaller, may be cheaper, and consumes less energy. External components such as power sources (batteries or energy harvesters), antennas, and other analogue electronics can also be integrated into a SoC to further reduce size, energy consumption, and cost.
Using Energy-Efficient Materials and Sensors
Energy-efficient IoT systems start with the careful selection of materials and sensors. Modern IoT devices increasingly utilise low-power electronic components and sensors designed to minimise energy consumption without compromising performance. For instance:
Energy-efficient hardware design
At the IoT perception layer, some of the energy-efficient mechanisms include:
As tens of billions to trillions of IoT devices are deployed across the sectors of society and the economy (e.g., intelligent transport systems, smart health care, smart manufacturing, smart homes, smart cities, smart agriculture, and smart energy), the traffic generated by IoT devices and transported through local networks and the Internet to fog or cloud computing platforms is multiplying, and the processing required to analyse the resulting data has grown accordingly. This increase in traffic and processing also increases the energy consumption of the hardware deployed in the networking and data centre infrastructures that handle IoT traffic and data. Hardware-based energy-saving strategies for networking and computing nodes in IoT-based infrastructures (some of which were discussed in [81]) include:
The increasing proliferation of IoT devices in almost every sector of developing and developed economies has increased the amount of data collected from the environment and, with it, the demand for processing. IoT and traditional devices alike require high performance, QoS, and long battery life, which can be achieved primarily by developing strategies that improve computing performance and energy consumption. Green (or sustainable) computing is the practice of maximising energy efficiency and minimising the environmental impact of the design and use of computer chips, systems, and software, spanning the supply chain from the extraction of the raw materials needed to make computers to how systems are recycled [86].
Green computing strategies can be implemented in software or hardware. Some of the hardware-based green computing strategies have been discussed above in the section on Green IoT hardware. The software strategies will be addressed in the Green IoT software section below. Hardware acceleration is a primary green computing strategy that improves performance and energy efficiency. Hardware accelerators such as GPUs and Data Processing Units (DPUs) are major green computing drivers because they provide high-performance and energy-efficient computing for AI, networking, cybersecurity, gaming, and High-Performance Computing (HPC) services or tasks. It is estimated that about 19 terawatt-hours of electricity a year could be saved if all AI, HPC and networking computing tasks could be offloaded to GPUs and DPU accelerators. With the increasing use of sophisticated data analytics and AI tools to process the massive amounts of data generated by IoT devices, green computing strategies such as hardware acceleration will be essential [87].
Green software practices go back to the beginning of the computer era, when code efficiency and compactness were paramount: assembler and C/C++ code can be far more efficient in performance and memory than modern high-level languages such as Python or Java. Green software also emphasises proper software-based energy management, such as asynchronous routines, the use of interrupts instead of polling, and sleep modes.
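As a rough illustration of why interrupt-driven code and sleep modes matter, the sketch below compares the average current of a busy-polling design with a duty-cycled, event-driven one; every current and timing figure is an assumption chosen only to show the order of magnitude.

```python
# Average current of busy polling vs. an interrupt-driven, duty-cycled
# design. All current and timing figures are illustrative assumptions.

ACTIVE_MA = 10.0       # assumed MCU current while awake
SLEEP_UA = 5.0         # assumed deep-sleep current
EVENT_PERIOD_S = 60.0  # one event (e.g., a sensor reading) per minute
HANDLE_TIME_S = 0.05   # assumed time needed to handle each event

# Busy polling: the CPU stays awake the whole time waiting for events.
polling_avg_ma = ACTIVE_MA

# Interrupt-driven: sleep between events, wake only to handle them.
duty_cycle = HANDLE_TIME_S / EVENT_PERIOD_S
event_avg_ma = ACTIVE_MA * duty_cycle + (SLEEP_UA / 1000.0) * (1 - duty_cycle)

print(f"Busy polling:     {polling_avg_ma:.3f} mA average")
print(f"Interrupt-driven: {event_avg_ma:.3f} mA average")
print(f"Reduction:        {polling_avg_ma / event_avg_ma:.0f}x")
```

Under these assumptions the event-driven design draws hundreds of times less average current, which is why sleep modes and interrupts are first-class green-software techniques.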
Recent developments in AI models and in edge and fog computing enable lightweight AI models to run on fog- and edge-class devices, which are commonly powered by green energy sources.
Green computing is not only about devising strategies to reduce energy consumption. It also includes leveraging high-performance computing resources to tackle climate-related challenges. For example, GPUs and DPUs are used to run climate models (e.g., predict climate and weather patterns) and develop other green technologies (e.g., energy-efficient fertiliser production, development of battery technologies, etc.). Combining IoT and green computing technologies provides powerful tools for scientists, policymakers, and companies to tackle complex climate-related problems.
Communication infrastructure is a significant energy consumer in IoT systems as device-generated data increases exponentially. Strategies to enhance energy efficiency include:
a. Low-power networking and communication technologies:
Communication protocols designed for low-bandwidth, low-power operation, such as Zigbee, LoRaWAN, Sigfox, and BLE (Bluetooth Low Energy).
Energy-efficient adaptations of 5G technologies through techniques like massive MIMO (Multiple Input, Multiple Output) and dynamic spectrum sharing.
b. Energy-efficient data transmission:
Data aggregation and compression reduce the transmitted data volume, conserving network bandwidth and lowering energy usage (see the numerical sketch after this list).
Scheduling transmissions during periods of low network usage minimises power surges and optimises resource utilisation.
c. Network-level offloading of computation:
Devices conserve battery power by shifting intensive computational tasks from resource-constrained IoT devices to more capable edge or fog nodes.
Edge computing reduces data transfer requirements and latency, leading to energy savings at device and infrastructure levels.
d. Energy-efficient communication techniques:
Algorithms that adaptively control transmission power based on signal strength and environmental conditions ensure optimal energy use.
Implementing sleep and wake cycles for IoT devices, where communication modules remain dormant when not in use, significantly reduces energy consumption.
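The back-of-envelope sketch below illustrates the data-aggregation point (b): because every packet carries a fixed protocol overhead, batching several readings into one packet amortises that overhead and reduces the transmitted byte count. The per-byte radio energy, payload size, and header overhead are assumptions chosen only to show the shape of the saving.

```python
# Radio energy for one hour of sensor readings, with and without
# aggregation. All sizes and energy costs are illustrative assumptions.

READINGS_PER_HOUR = 60
PAYLOAD_BYTES = 12        # assumed bytes per sensor reading
HEADER_BYTES = 28         # assumed fixed per-packet protocol overhead
ENERGY_PER_BYTE_UJ = 2.0  # assumed radio energy per transmitted byte (uJ)

def hourly_tx_energy_uj(readings_per_packet: int) -> float:
    """Energy to transmit one hour of readings grouped into packets."""
    packets = READINGS_PER_HOUR / readings_per_packet
    bytes_per_packet = HEADER_BYTES + readings_per_packet * PAYLOAD_BYTES
    return packets * bytes_per_packet * ENERGY_PER_BYTE_UJ

no_aggregation = hourly_tx_energy_uj(1)   # one packet per reading
aggregated = hourly_tx_energy_uj(10)      # ten readings per packet

print(f"No aggregation: {no_aggregation:.0f} uJ/hour")
print(f"Aggregated x10: {aggregated:.0f} uJ/hour "
      f"({100 * (1 - aggregated / no_aggregation):.0f}% saved)")
```

The same reasoning applies to compression: fewer bytes on the air means less radio-on time and therefore less energy.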
Energy-efficient IoT systems are built around architectural frameworks that integrate energy optimisation across all layers of the IoT ecosystem, including device, network, and application levels. Key strategies include:
Optimised software plays a critical role in reducing the energy footprint of IoT systems. Besides computing considerations presented in the chapter above, the following approaches are efficient:
Energy-efficient security measures are vital to ensure sustainable IoT systems:
Developing advanced design and manufacturing processes that produce energy-efficient chips is one of the strategies currently used to reduce energy consumption and achieve green computing and communication goals. Given the rapid adoption of smartphones and IoT systems, producing energy-efficient chips is very important. An example of how advanced manufacturing can significantly reduce energy consumption in computing and communication devices is the A-series chips used in Apple's iPhones: the power consumption of the 7 nm A12 chip is 50% less than that of its 10 nm A11 predecessor, the 5 nm A14 is 30% more power efficient than the 7 nm A13, and the 4 nm A16 is 20% more power efficient than the 5 nm A15 [88].
A similar trend can be observed in the PC industry, although there is no guarantee that more advanced chip manufacturing processes will continue to improve chip performance and energy efficiency. Designing energy-efficient chips for 5G/6G base stations is crucial to meet the growing demands of high-speed communication while minimising energy consumption and environmental impact. These chips are engineered with advanced semiconductor technologies to reduce power consumption and improve energy efficiency. They integrate specialised hardware accelerators for signal processing and AI-driven resource management to optimise network performance dynamically. Power-saving techniques like dynamic voltage and frequency scaling (DVFS) are also employed to adapt energy usage based on real-time load.
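The first-order CMOS model usually invoked when reasoning about DVFS is that dynamic power scales as P ≈ C·V²·f, so lowering voltage and frequency together yields a super-linear power reduction. The sketch below evaluates this model with purely illustrative capacitance, voltage, and frequency values.

```python
# First-order CMOS dynamic-power model behind DVFS:
#   P_dynamic ≈ C * V^2 * f
# All parameter values below are illustrative assumptions.

def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    """Dynamic power of a CMOS circuit under the C*V^2*f model."""
    return c_farads * v_volts**2 * f_hz

C = 1e-9  # assumed effective switched capacitance (1 nF)

full_speed = dynamic_power(C, v_volts=1.1, f_hz=2.0e9)
scaled = dynamic_power(C, v_volts=0.9, f_hz=1.2e9)  # DVFS at light load

print(f"Full speed:  {full_speed:.2f} W")
print(f"DVFS-scaled: {scaled:.2f} W "
      f"({100 * (1 - scaled / full_speed):.0f}% less power)")
```

Because voltage enters the model quadratically, even a modest voltage reduction, made possible by running at a lower frequency, saves disproportionately more power than frequency scaling alone.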
Regulatory frameworks and corporate policies play a foundational role in driving energy-efficient IoT adoption:
Energy-efficient IoT systems demand an integrated approach, combining advanced hardware, optimised software, sustainable manufacturing, and policy support to meet the goals of green computing and communication. As the IoT ecosystem expands, these strategies are essential to balance innovation with environmental sustainability.
Choosing an appropriate energy source for IoT systems is critical to ensuring reliability, efficiency, and sustainability. These considerations are guided by the diverse requirements of IoT devices and their deployment scenarios. Below, we expand on key design aspects (figure 108):
1. Scalability
IoT deployments often involve a large number of devices operating in diverse environments. The energy solution must:
2. Minimum Maintenance
IoT devices are often deployed in remote or hard-to-access locations where frequent maintenance is impractical. Energy sources must:
3. Mobility
For IoT applications requiring mobile devices, such as wearables, drones, or vehicle-mounted sensors, energy sources must:
4. Energy Requirements
The energy consumption of IoT devices varies widely, depending on their purpose and workload. Key considerations include:
5. Flexibility
IoT systems are deployed in diverse environments, from urban areas to remote, off-grid locations. Flexible energy solutions should:
6. Efficiency
Efficient energy usage is vital to maximise device lifespans and reduce energy waste. Considerations include:
7. The Need for Backup Energy Sources
IoT devices must remain operational during power outages or periods when primary energy sources are unavailable. Backup considerations include:
8. Minimum Cost
Cost-effectiveness is critical for large-scale IoT deployments. Energy source design must:
9. Sustainability
Sustainable energy solutions are essential to reducing the environmental footprint of IoT systems. Considerations include:
10. Green and Environmentally Friendly
To align with green IoT principles, energy sources should:
Designing energy sources for IoT systems requires a holistic approach that balances power needs, cost, efficiency, and sustainability. By addressing these considerations, developers can create reliable, scalable, and environmentally responsible IoT systems, paving the way for innovative and sustainable IoT solutions.
The electrical and electronic devices in IoT infrastructure require electrical energy to operate. The energy requirements of the device depend on its size, computing or processing requirements, traffic load, and other mechanical and electrical loads that need to be handled, especially in IoT applications where the feedback commands from fog/cloud computing platforms are used to control a physical process or system through actuators. The main power sources for IoT devices are (figure 109):
In IoT applications where the hardware devices do not need to be mobile and are energy-hungry (consume significant energy), they can be reliably powered from the grid. Mains power from the grid is AC and must be converted to DC and scaled down to meet the power requirements of sensing, actuating, computing, and networking nodes. The hardware at the networking or transport layer and at the application layer (fog/cloud computing nodes) is often power-hungry and is typically supplied with grid energy.
A drawback of mains power for an IoT infrastructure with many devices is the complexity of connecting every device to the power source with cables; with hundreds or thousands of devices, this is impractical. Moreover, if the grid energy is generated from fossil fuels, the carbon footprint of the IoT infrastructure grows with its energy demand.
Energy storage systems store energy so that it can be consumed later. In IoT infrastructures, some sensors, actuators, computing and networking nodes, and other electrical systems are powered by energy storage systems. The energy is stored in forms that can readily be converted into the electrical energy required to power these devices. In some scenarios, electrical energy from a mains supply or from local renewable energy plants (or energy harvesting systems) is converted into a storable form and kept in storage to be used when the source cannot meet the demand of the electrical systems in the IoT infrastructure. Energy storage systems can be categorised by the form of energy stored (mechanical, electrical, chemical, or thermal) before it is converted back into electrical energy.
Most IoT devices are powered by a small energy storage system (e.g., a battery or supercapacitor) with minimal energy capacity. The storage is charged to full capacity when the device is deployed, and the device shuts down once the stored energy is completely drained. The device's lifetime is the time from deployment until all the stored energy is consumed, so the storage capacity is chosen to satisfy the device's energy demand and ensure a long lifetime. In a massive deployment of thousands or hundreds of thousands of IoT devices, frequently replacing or recharging batteries or supercapacitors is tedious and costly and may also degrade the quality of service.
An energy storage system is recommended mainly for IoT devices that require a tiny amount of power (of the order of micro- or milliwatts) and spend most of their time in sleep mode to save energy. The lifetime of a low-power IoT device powered by a small battery is desired to be at least a decade. A storage system's energy capacity is constrained by its size and weight: increasing the capacity increases the size or weight, yet IoT devices should be kept as small and light as possible, especially in applications where mobility is critical.
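A quick worked example shows how demanding a decade-long lifetime is; the battery capacity below is an assumed figure for a small lithium cell.

```python
# What average current does a 10-year lifetime allow on a small battery?
# The capacity figure is an illustrative assumption.

CAPACITY_MAH = 2400.0  # assumed small lithium-cell capacity (mAh)
TARGET_YEARS = 10.0

hours = TARGET_YEARS * 365 * 24
allowed_avg_ua = CAPACITY_MAH / hours * 1000.0  # convert mA to uA
print(f"Average draw must stay below {allowed_avg_ua:.1f} uA")
```

Under these assumptions the average draw must stay below roughly 27 µA, which is achievable only if the device spends almost all of its time in deep sleep.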
The computing and networking nodes at the edge/fog/cloud layer of the IoT architecture are energy-hungry devices that are not normally powered solely by energy storage systems. They are usually powered by a main source such as the electricity grid or renewable energy (e.g., wind, solar, pumped hydropower). A backup energy storage system is often installed so that when the main power source fails (particularly with intermittent renewable sources), the storage system supplies the computing or networking node until the main source is restored.
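As a rough illustration of how such a backup is sized, the sketch below computes the storage capacity needed to bridge an outage; the node load, outage window, depth of discharge, and bus voltage are all assumed figures.

```python
# Rough backup-battery sizing for a fog/edge node. All figures are assumed.

NODE_LOAD_W = 120.0       # assumed average node power draw (W)
OUTAGE_HOURS = 4.0        # assumed outage window to bridge (h)
DEPTH_OF_DISCHARGE = 0.8  # assumed usable fraction of battery capacity
BUS_VOLTAGE_V = 48.0      # assumed DC bus voltage

required_wh = NODE_LOAD_W * OUTAGE_HOURS / DEPTH_OF_DISCHARGE
required_ah = required_wh / BUS_VOLTAGE_V
print(f"Backup capacity: {required_wh:.0f} Wh (~{required_ah:.1f} Ah at 48 V)")
```

The depth-of-discharge factor reflects that batteries are normally not drained completely, which preserves their cycle life.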
Small IoT Devices
Most small IoT devices rely on compact energy storage systems such as batteries or supercapacitors. These devices are typically constrained by:
The most common energy storage systems used in small IoT devices include:
Large IoT Infrastructure
IoT infrastructure at the edge, fog, and cloud layers (e.g., base stations, access points, fog nodes, and data centres) require more robust and large-scale energy storage solutions. These include:
Such systems often serve as backup power sources to ensure uninterrupted operation during grid outages or renewable energy intermittency.
Electrical Energy Storage Systems
Mechanical Energy Storage Systems
Chemical Storage
Thermal Storage
Energy storage systems are pivotal in enabling reliable, efficient, and sustainable IoT operations. These technologies, from small-scale batteries in sensors to large-scale mechanical systems in data centres, ensure that IoT infrastructures can function even without a direct power supply. IoT designers can meet the growing demands of connected ecosystems while addressing environmental and operational challenges by leveraging diverse storage options and optimising for specific use cases.
To deal with the limitations of energy storage systems, such as limited lifetime (the time from when an IoT device is deployed to when all its stored energy is depleted), maintenance complexity, and scalability, energy harvesting systems are incorporated into IoT systems to harvest energy from the environment. Energy can be harvested from the ambient environment (sources naturally present around the device, e.g., solar, wind, thermal, or radiofrequency sources) or from external sources (e.g., mechanical systems or the human body) and then converted into electrical energy to power IoT devices or stored for later use.
Ambient energy harvesting is the process of capturing energy from the device's immediate environment and converting it into electrical energy to power IoT devices. Ambient sources include solar and photovoltaic, Radio Frequency (RF), flow (wind and hydro), and thermal energy. Each source has unique characteristics that make it suitable for specific IoT applications, providing tailored power solutions based on device requirements. Ambient energy harvesting systems that can power IoT devices, access points, fog nodes, or cloud data centres include:
1. Solar and Photovoltaic Energy Harvesting
Source: Solar energy is derived from natural sunlight, while artificial light sources can be harnessed indoors. Solar panels or photovoltaic cells are the primary tools for capturing this energy.
Process: Photovoltaic (PV) cells, composed of semiconductor materials, absorb photons from light. This absorption excites electrons, generating an electric current that powers IoT devices or charges energy storage systems (a rough sizing sketch follows this block).
Applications:
Advantages:
Challenges:
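As a rough sizing sketch for the photovoltaic process described above, harvested power can be approximated as irradiance × panel area × cell efficiency. The figures below are illustrative assumptions; the indoor case shows why lighting conditions are a key challenge for PV harvesting.

```python
# Approximate PV output: P = irradiance * area * efficiency.
# All figures are illustrative assumptions.

PANEL_AREA_M2 = 0.01  # assumed 10 cm x 10 cm panel
EFFICIENCY = 0.18     # assumed PV cell efficiency

full_sun_w = 1000.0 * PANEL_AREA_M2 * EFFICIENCY  # ~full-sun irradiance
indoor_w = 5.0 * PANEL_AREA_M2 * EFFICIENCY       # assumed indoor lighting

print(f"Full sun: {full_sun_w:.2f} W")
print(f"Indoors:  {indoor_w * 1000:.1f} mW")
```

The roughly two-orders-of-magnitude gap between outdoor and indoor output is why indoor PV-powered devices must be extremely low power or rely on energy storage.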
2. Radio Frequency (RF) Energy Harvesting
Source: RF energy is emitted by various wireless communication systems such as Wi-Fi routers, mobile networks, and television transmitters.
Process: RF energy is captured using specialised antennas and rectified to produce usable electrical power. Depending on the application, these systems can operate over various frequencies.
Applications: Low-power IoT devices such as wearable sensors, asset trackers, and remote controllers in urban and indoor environments where RF signals are prevalent.
Advantages:
Challenges:
3. Flow Energy Harvesting
Source: Energy from the movement of air (wind) or water (hydro) is captured and converted into electrical energy.
Process:
Applications: Remote IoT devices in areas with consistent air or water flow, such as wind-powered weather stations or hydro-powered sensors in smart water management systems.
Advantages:
Challenges:
4. Thermal Energy Harvesting
Source: Temperature differences or heat dissipation from industrial processes, human bodies, or natural sources.
Process: Thermoelectric generators (TEGs) use the Seebeck effect, where a voltage is generated by a temperature gradient across a material, to convert heat into electrical energy (a worked example follows this block).
Applications:
Advantages:
Challenges:
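A minimal worked example of the Seebeck conversion described above, using an assumed module-level Seebeck coefficient and internal resistance (real TEG modules vary widely):

```python
# TEG output under the Seebeck effect:
#   V_open = S * dT, matched-load power P = V_open^2 / (4 * R_internal).
# All parameter values are illustrative assumptions.

S_V_PER_K = 0.05      # assumed module-level Seebeck coefficient (50 mV/K)
DELTA_T_K = 10.0      # assumed temperature difference across the module (K)
R_INTERNAL_OHM = 5.0  # assumed module internal resistance (ohm)

v_open = S_V_PER_K * DELTA_T_K
p_matched_w = v_open**2 / (4 * R_INTERNAL_OHM)

print(f"Open-circuit voltage: {v_open:.2f} V")
print(f"Matched-load power:   {p_matched_w * 1000:.1f} mW")
```

Because the output power grows with the square of V = S·ΔT, TEG harvesters need a sustained, sizeable temperature gradient to deliver useful power.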
5. Acoustic Noise Energy Harvesting
Source: Pressure waves from sound or vibrations caused by machines, vehicles, or environmental noise.
Process: Piezoelectric or acoustic materials capture sound vibrations and convert them into electrical energy.
Applications:
Advantages:
Challenges:
Mechanical energy sources, such as vibrations and pressure changes, are prevalent in dynamic environments like transportation and industrial settings.
1. Vibration Energy Harvesting
Source: Vibrations generated by machinery, vehicles, or natural phenomena.
Process: Devices with piezoelectric or electromagnetic materials capture vibrational energy and convert it to electrical energy.
Applications:
Advantages:
Challenges: Dependent on vibration consistency and intensity.
2. Pressure and Stress-Strain Energy Harvesting
Source: Pressure variations or mechanical stress on materials.
Process: Piezoelectric materials produce electrical charges when subjected to stress or strain.
Applications:
Advantages: Effective for compact devices.
Challenges: Limited applications outside specific industries.
The human body is a valuable energy source, especially for wearable and implantable IoT devices.
1. Human Activity Energy Harvesting
Source: Biomechanical movements like walking, running, or cycling.
Process: Kinetic systems convert movement into electrical energy, which can power wearables or charge onboard batteries.
Applications:
Advantages: Eliminates external charging needs.
Challenges: Energy generation depends on user activity levels.
2. Human Physiological Energy Harvesting
Source: Body heat, biochemical reactions, or other physiological processes.
Process:
Applications:
Advantages:
Challenges: Requires advanced materials for efficient energy conversion.
Hybrid systems combine multiple energy sources to ensure reliability and maximise efficiency. They are instrumental in scenarios where environmental conditions vary unpredictably.
Advantages:
Challenges:
Energy harvesting from ambient sources is a transformative approach to powering IoT devices sustainably. These systems provide self-sufficient, low-maintenance energy solutions by leveraging solar, RF, thermal, acoustic, and mechanical sources. Innovations in hybrid energy systems and advanced materials are expected to enhance the efficiency and applicability of energy harvesting technologies, paving the way for widespread adoption in IoT infrastructures across industries.
Balancing various design criteria is critical to achieving optimal performance while minimising environmental impact in designing and implementing IoT devices and infrastructures. The concept of Green IoT (G-IoT) emphasises designing IoT systems that are energy-efficient, sustainable, and environmentally friendly, addressing the growing concern about the ecological footprint of IoT technologies. However, achieving these goals often involves trade-offs between competing priorities such as energy consumption, performance, security, cost, and sustainability (figure 110).
One of the primary design goals of IoT is minimising energy consumption, as many IoT devices rely on limited-capacity batteries. Energy-efficient hardware components, software optimisations, and low-power communication protocols are widely adopted to prolong device operating lifetimes. For example:
These measures reduce energy demand and extend battery life. However, the benefit of energy savings often comes at the cost of reduced performance:
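A toy model makes this energy-versus-performance trade-off tangible: stretching the reporting interval of a duty-cycled sensor extends its battery life dramatically, but the worst-case staleness of its data grows with the interval. All figures below are assumptions.

```python
# Lifetime vs. data staleness for a duty-cycled sensor. Figures assumed.

ACTIVE_MA = 15.0       # assumed current while sampling/transmitting (mA)
SLEEP_UA = 3.0         # assumed deep-sleep current (uA)
ACTIVE_S = 0.2         # assumed awake time per report (s)
CAPACITY_MAH = 1000.0  # assumed battery capacity (mAh)

for interval_s in (10, 60, 600):
    duty = ACTIVE_S / interval_s
    avg_ma = ACTIVE_MA * duty + (SLEEP_UA / 1000.0) * (1 - duty)
    years = CAPACITY_MAH / avg_ma / 24 / 365
    print(f"report every {interval_s:>3} s -> lifetime {years:4.1f} y, "
          f"worst-case staleness {interval_s} s")
```

Which point on this curve is acceptable depends entirely on the application: a fire alarm cannot tolerate ten-minute staleness, while a soil-moisture sensor often can.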
Security is another critical consideration that often conflicts with energy efficiency in IoT design. Traditional robust security algorithms, such as those used in standard computing systems, are computationally intensive and consume significant energy. Applying such algorithms directly to IoT devices would rapidly deplete their batteries.
However, prioritising energy efficiency may compromise the level of security, leaving devices vulnerable to attacks such as data breaches, eavesdropping, or denial of service (DoS).
Cost is another key factor influencing IoT design. Manufacturers often strive to keep production costs low to ensure the affordability of devices, especially for mass-market applications. This focus on cost reduction may lead to the following:
While minimising cost is essential for market viability, it can compromise other critical aspects, such as reliability, durability, or security, leading to potential issues over the device's lifecycle.
Green IoT aims to address the environmental and sustainability challenges associated with IoT systems. It focuses on:
Examples include precision farming, smart grids, and waste management systems. However, Green IoT design must also balance other key requirements:
Achieving the goals of Green IoT requires careful consideration of trade-offs:
To navigate these trade-offs, designers can adopt strategies such as:
Green IoT represents a transformative approach to designing IoT systems that align with environmental and sustainability goals. By addressing energy efficiency, e-waste reduction, and sustainable resource management, Green IoT can contribute to a more sustainable future. However, realising these benefits requires a balanced approach considering the trade-offs between QoS, security, energy efficiency, and cost, ensuring that IoT systems are functional and eco-friendly.
Green IoT applications leverage energy-efficient and sustainable technologies to address critical challenges in various domains. By optimising resources, reducing energy consumption, and integrating renewable energy sources, these applications contribute to environmental sustainability while enhancing efficiency and performance. The list of selected Green IoT Applications and their features are discussed below (figure 111).
A smart grid is an energy distribution network integrating IoT technologies to monitor, manage, and optimise real-time electricity flow. Key features include:
IoT applications in agriculture, often called precision agriculture, improve resource utilisation and environmental sustainability. Examples include:
Also known as Industry 4.0, smart manufacturing integrates IoT technologies to enhance efficiency and sustainability in production processes:
Smart homes utilise IoT technologies to improve energy efficiency, comfort, and security:
IoT applications in transport focus on creating efficient, sustainable, and intelligent mobility systems:
Smart cities integrate IoT solutions across various urban systems to improve sustainability and quality of life:
The Internet of Food (IoF) integrates many of the applications mentioned above. It enables tracking across food manufacturing and helps ensure food quality and proper nutrition:
Green IoT applications represent a vital step toward achieving a sustainable future. By enabling more innovative resource use and reducing energy consumption across diverse domains, they address environmental concerns while improving functionality and efficiency.