
Green IoT Energy-Efficient Design and Mechanisms

As IoT is adopted to address problems across the various sectors of society and the economy, its energy demand is growing rapidly, following an almost exponential trend. As the number of IoT devices grows, so does the traffic they generate, which in turn raises the energy demand of the core networks that transport IoT traffic and of the data centres that analyse the massive amounts of data the devices collect. Large-scale adoption and deployment of IoT infrastructure and services will therefore significantly increase energy demand across the whole chain, from the cyber-physical infrastructure (sensor and actuator devices) through the transport network to the cloud computing data centres. Consequently, one of the design goals of green IoT is to develop effective strategies to reduce energy consumption, and these strategies should be deployed across the IoT architecture stack. That is, energy-saving strategies should be implemented at every IoT layer, including:

At each layer, various energy-efficient strategies are implemented to reduce energy consumption. A large share of the energy is spent on computation and communication at these layers, so significant savings can be achieved by deploying energy-efficient computing mechanisms (hardware and software), low-power communication and networking protocols, and energy-efficient architectures. Energy efficiency should be a primary goal throughout the design, manufacturing, deployment, and standardisation of green IoT systems. The energy-saving mechanisms vary from one layer to another, but they can be classified into the following categories (Figure 1):

Figure 1: Green IoT Energy-Efficient Design and Mechanisms

Green IoT hardware

A realistic approach to significantly reducing the energy consumption of IoT systems and infrastructures is to dramatically improve the energy efficiency of the hardware, because a large proportion of the energy is used to power electrical and electronic hardware such as computing nodes, networking nodes, cooling and air-conditioning systems, power electronics, security systems, and lighting. Recently, much attention has been paid to improving the energy efficiency of hardware in ICT infrastructures, especially in the IoT industry. Hardware-based energy-saving mechanisms in IoT infrastructures include:

To achieve the green IoT vision, it is essential to deploy energy-efficient hardware across the entire IoT infrastructure (from the perception layer to the cloud) and throughout the IoT industry. Green IoT hardware is not limited to energy-efficient hardware design and hardware-based energy-saving mechanisms; it also includes sustainable hardware approaches such as:

Reducing the size of hardware devices

There has been a dramatic reduction in the size of electronic hardware from the era of the vacuum tube to modern semiconductor chips. In the early days of electronics, computers occupied entire floors of buildings, radio communication systems were large cabinet-sized installations, and the smallest electronic device of the time was a two-way radio carried on the back [1]. As the size of electronic devices decreased, their energy demand also dropped drastically.

Over the past few decades, the sizes of computing and communication devices have decreased significantly, reducing the power required to operate them. Despite the significant progress made by the semiconductor industry to decrease the size of semiconductor chips while improving their performance, there is still a persistent drive to keep lowering the sizes of semiconductor chips to decrease their cost, reduce energy consumption, and conserve the resources required to manufacture them.

Gordon Moore, one of the co-founders of Intel, observed that “the number of transistors and resistors on a chip doubles every 24 months”; the computer industry adopted this observation as the well-known Moore's law, and it became a performance benchmark in the semiconductor and computer chip industry. As more transistors were packed into a single small chip, the sizes of computing and network equipment decreased significantly, translating into a significant decrease in power consumption. However, although advanced chip manufacturing has dramatically reduced transistor gate lengths, leakage current has increased, raising chip power consumption and heat dissipation. Thus, doubling the number of transistors on a chip could double the power the chip consumes [2].

Some energy-hungry IoT devices require batteries with higher energy capacity. A battery's energy capacity is correlated with its size: higher-capacity batteries tend to be larger and heavier, limiting how far the device's size can be reduced. Alternatively, a relatively small battery can be paired with an energy harvesting module that continuously recharges it with energy harvested from the environment. Adding an energy harvesting module may increase the size of the IoT device, but it extends the device's operational lifetime. It should be noted that the energy harvested by such modules is typically very small and that the associated power electronics themselves consume some energy.
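
To make this trade-off concrete, the short sketch below estimates the operational lifetime of a duty-cycled, battery-powered node with and without a small harvesting module. All of the numbers (battery capacity, current draws, harvested current) are illustrative assumptions rather than measurements of any particular device.

```python
# Rough battery-lifetime estimate for a duty-cycled IoT node.
# All values are illustrative assumptions, not vendor figures.

BATTERY_MAH = 1000.0   # assumed battery capacity (mAh)
ACTIVE_MA = 20.0       # assumed current while sensing/transmitting (mA)
SLEEP_MA = 0.01        # assumed deep-sleep current (mA)
DUTY_CYCLE = 0.01      # fraction of time the node is active (1%)
HARVEST_MA = 0.05      # assumed average current supplied by a small harvester (mA)

def lifetime_days(capacity_mah: float, avg_draw_ma: float) -> float:
    """Lifetime in days for a given average current draw."""
    if avg_draw_ma <= 0:
        return float("inf")  # harvester covers the average load entirely
    return capacity_mah / avg_draw_ma / 24.0

# Average draw is the time-weighted mix of active and sleep currents.
avg_draw = ACTIVE_MA * DUTY_CYCLE + SLEEP_MA * (1 - DUTY_CYCLE)

print(f"Average draw: {avg_draw:.3f} mA")
print(f"Lifetime without harvesting: {lifetime_days(BATTERY_MAH, avg_draw):.0f} days")
print(f"Lifetime with harvesting:    {lifetime_days(BATTERY_MAH, avg_draw - HARVEST_MA):.0f} days")
```

With these assumed values the harvester supplies only a fraction of the average load, yet it still noticeably extends the estimated lifetime, which is consistent with the caveat above that harvested energy is useful but small.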

Another approach to further decreasing the size of IoT devices, and possibly their energy consumption, is to integrate the entire electronics of an IoT device, computer, or network node into a single Integrated Circuit (IC) called a System on a Chip (SoC) [3]. The components typically integrated into an SoC include a Central Processing Unit (CPU), input and output ports, memory, analogue input and output modules, and the power supply unit. An SoC can efficiently perform specific functions such as signal processing, wireless communication, security algorithms, image processing, and artificial intelligence. The primary reason for integrating a system's entire electronics into one chip is to reduce the energy consumption, size, and cost of the system as a whole: a system that was initially built from multiple chips is integrated into a single chip that is smaller, may be cheaper, and consumes less energy. External components such as power sources (batteries or energy harvesters), antennas, and other analogue electronics can also be integrated into an SoC to further reduce size, energy consumption, and cost.

Using Energy-Efficient Materials and Sensors

Energy-efficient IoT systems start with the careful selection of materials and sensors. Modern IoT devices increasingly utilise low-power electronic components and sensors designed to minimise energy consumption without compromising performance. For instance:

Energy-efficient hardware design

At the IoT perception layer, some of the energy-efficient mechanisms include:

  1. Energy-efficient sensors (Green sensors): IoT sensors should be designed to consume as little energy as possible. When selecting sensors for an IoT device design, energy consumption and sustainability should be among the design criteria.
  2. Energy-efficient radio modules (Green radio modules): Radio modules are among the major energy consumers in IoT devices, and designing them to consume minimal energy significantly decreases the device's overall energy consumption. When choosing an IoT device for an application, the radio module's energy consumption should be considered.
  3. Low-power microcontrollers and microprocessors (Green MCUs and ICs): The energy consumption of the microcontroller or microprocessor matters because these devices are often powered by batteries with limited capacity. When selecting IoT devices for an application, performance and energy consumption should be weighed together rather than sacrificing one for the other. Design strategies developed to improve the energy efficiency of IoT microcontrollers and microprocessors include:
    • Duty cycling: Switching off the microcontroller or microprocessor when the device is idle and switching it on only when processing is needed (a minimal duty-cycling sketch is shown after this list).
    • Using low-power microcontrollers or microprocessors: Choosing very low-power parts that offer modest processing power but consume relatively little energy.
    • Using energy-efficient CMOS ICs to manufacture MCUs or CPUs: Building the components of IoT devices from energy-efficient CMOS processes can significantly reduce the devices' energy consumption.
    • Hardware acceleration and SoC design: Using application-specific integrated circuits (ASICs) to implement hardwired functionalities in an energy-efficient way (e.g., DSP systems, System-in-Package (SiP), System-on-Chip (SoC)), resulting in highly compact designs that combine sensors, MCU, batteries, and energy harvesters into a single chip or package.
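
As a concrete illustration of the duty-cycling strategy listed above, the following minimal sketch shows the typical wake–sample–transmit–sleep loop of a sensor node. The read_sensor() and radio_send() functions are hypothetical stand-ins for real drivers, and the timing values are assumptions chosen so the example runs quickly on a desktop; on a real microcontroller the sleep would be a hardware low-power mode with a wake-up timer.

```python
# Minimal duty-cycling loop for a sensor node (host-side simulation).
# read_sensor() and radio_send() are hypothetical stand-ins for hardware
# drivers; on a real MCU the sleep would be a deep-sleep mode, not time.sleep().

import random
import time

def read_sensor() -> float:
    """Pretend to sample a temperature sensor (hypothetical driver)."""
    return 20.0 + random.uniform(-0.5, 0.5)

def radio_send(payload: str) -> None:
    """Pretend to transmit a reading over the radio (hypothetical driver)."""
    print(f"TX: {payload}")

def duty_cycle_loop(cycles: int = 3, interval_s: float = 2.0) -> None:
    """Wake, sample, transmit, then sleep for the rest of the interval.

    interval_s is kept short here for demonstration; a deployed node might
    report once a minute or less often.
    """
    for _ in range(cycles):
        start = time.monotonic()
        reading = read_sensor()              # do the minimum work while awake
        radio_send(f"temp={reading:.2f}")
        busy = time.monotonic() - start
        time.sleep(max(0.0, interval_s - busy))  # MCU would deep-sleep here

if __name__ == "__main__":
    duty_cycle_loop()
```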

As tens of billions (and eventually perhaps trillions) of IoT devices are deployed across sectors of society and the economy (e.g., intelligent transport systems, smart healthcare, smart manufacturing, smart homes, smart cities, smart agriculture, and smart energy), the amount of traffic generated by IoT devices and transported through local networks and the Internet to fog or cloud computing platforms is multiplying. The computing required to analyse the massive amounts of data generated has also increased significantly. This growth in traffic and processing requirements increases the energy consumption of the hardware deployed in the networking and data centre infrastructures that handle IoT traffic and data. Hardware-based energy-saving strategies that can be leveraged to reduce the energy consumption of networking and computing nodes in IoT-based infrastructures (some of which were discussed in [4]) include:

  1. Custom systems-on-chip: A design approach that integrates some or all system components into a single chip, reducing the system's size compared with designing the various components as separate chips. Although an SoC's size, weight, and energy consumption are typically lower than those of a design built from separate chips, its peak performance may also be lower. For example, a Raspberry Pi built around a Broadcom SoC may consume less than 5 W, and its processing power is below that of desktop-class processors. SoCs are used in mobile phones to provide acceptable computing and networking performance while minimising energy consumption to extend battery life. For most IoT workloads, the SoC approach therefore significantly reduces a device's size and energy consumption without unacceptably sacrificing performance.
  2. Dynamic frequency scaling: The processor, microprocessor, or microcontroller can be placed in a low-power mode by reducing its clock frequency or supply voltage. The power consumption of a device's peripheral components can also be reduced dynamically by powering down peripherals that are idle, so that they consume power only when necessary. Dynamic frequency or voltage scaling can be implemented in software that monitors the processor and adjusts its clock frequency or voltage accordingly (a simple governor sketch is shown after this list). Frequency and voltage scaling can be applied to computing and networking nodes from the IoT perception layer through the networking or transport layer to the fog/cloud computing layers. Frequency and voltage scaling is implemented in some Intel processors in the form of P-states and C-states: P-states provide a mechanism to scale the frequency and voltage at which the processor runs to reduce its power consumption, while C-states are the states in which the CPU has reduced or turned off selected functions [5].
  3. Low-energy displays: For applications that require information to be displayed, increasing the energy efficiency of the display could decrease the device's energy consumption.
  4. Hardware data processing (e.g., AI hardware): Rather than using the CPU for all computing tasks, hardware acceleration shifts specific data operations or computing tasks onto dedicated hardware. Hardware acceleration refers to the process by which an application offloads specific computing tasks onto specialised hardware components (e.g., GPUs, DSPs, ASICs) within a system to achieve greater efficiency than is possible with software running solely on a general-purpose CPU [6]. Visualisation, packet processing, AI processing, cryptography, error correction, and signal processing can be offloaded onto specialised hardware, freeing the CPU to perform other tasks. Such specialised hardware often offers higher performance and lower energy consumption than CPUs; for example, running AI workloads on GPUs is more efficient than running them on a CPU, which is why GPUs are preferred for these tasks, and AI-specific hardware has been introduced especially for neural-network workloads. IoT hardware designers should therefore examine carefully whether tasks can be offloaded to specialised hardware to free up the microcontroller or processor, significantly improving performance and energy efficiency.
  5. Cloud computing (remote processing): Cloud computing is a cost-effective and scalable computing paradigm that enables on-demand remote access to resources such as software, infrastructure, and platforms over the Internet. By adopting cloud-based services (software-as-a-service, infrastructure-as-a-service, platform-as-a-service), companies and organisations do not need to invest in hardware infrastructure to host their services, which can significantly reduce the energy demand of IT services. A strategy that has significantly increased the performance and energy efficiency of IT infrastructure and services is virtualisation: the hardware or software methods that partition a physical machine into multiple instances that run concurrently and share the underlying physical resources and devices. It relies on a Virtual Machine Monitor (VMM), also called a hypervisor, to manage the Virtual Machines (VMs) and enable them to share the underlying hardware. The sharing of hardware resources by VMs hosting multiple services (data analytics, high-performance computing, security, etc.) significantly reduces the energy demand of data centres. Data centres have also developed and implemented several other energy-efficient strategies (e.g., switching off idle servers, energy-efficient task scheduling, and other optimisation methods). Nevertheless, the exponential increase in deployed IoT devices and the massive amounts of data they send to fog nodes or cloud data centres will likely increase data centre energy consumption significantly, requiring green cloud computing strategies.
  6. Photonic computing: In an attempt to increase processing performance while significantly decreasing energy consumption, researchers in the electronics and computer industries are exploring optical devices for data processing, data storage, and data communication. Optical or photonic computing offers high speed, high bandwidth, and low energy consumption, and is therefore a promising technology for high-performance computing and high-speed communication in the IoT networking/transport and fog/cloud computing layers. The main components of a photonic computing system are optical processing units (for data processing), optical interconnects (for data transfer), and optical storage units (for data storage). In photonic computing, light waves (photons) produced by lasers or incoherent sources are exploited as the primary means of carrying out numerical calculation, reasoning, artificial intelligence, data processing, data storage, and data communication, unlike traditional computers, where these functions are performed using electrons [7]. A significant challenge in photonic computing systems is the inefficiency and performance bottleneck introduced when converting electrical signals to optical signals and back, since photonic systems still need to interface with existing digital computing and communication systems.
  7. Improving the energy efficiency of mobile radio networks: The adoption of Low-Power Wide-Area (LPWA) cellular technologies (e.g., NB-IoT, LTE-M) has enabled the deployment of IoT networking services over existing mobile networks [8]. Power amplifiers account for more than 50% of the energy consumed by cellular base stations, so energy consumption can be reduced by improving the efficiency of the power amplifiers in wireless access network nodes (e.g., 4G/5G/6G base stations). Another strategy to reduce the energy demand of cellular base stations is to centralise or shift some of the baseband processing to the cloud or to a pool of baseband units, the so-called Cloud Radio Access Network (C-RAN).
  8. Turning off idle networking or computing nodes: The most popular energy-efficient management strategy is to switch off idle devices or components. This approach can be applied from the IoT perception layer to the fog/cloud computing layer.
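
As an illustration of the dynamic frequency and voltage scaling strategy in item 2 above, the sketch below shows the core of a simple software governor that selects an operating point from recent utilisation, using the common approximation that dynamic CMOS power scales roughly with V²·f. The operating points and thresholds are illustrative assumptions, not the P-state tables of any real processor.

```python
# Toy DVFS governor: choose a frequency/voltage operating point from
# recent CPU utilisation. The operating points and thresholds below are
# illustrative assumptions, not real P-state tables.

# (frequency in MHz, core voltage in V) - assumed operating points
OPERATING_POINTS = [
    (200, 0.90),
    (600, 1.00),
    (1200, 1.10),
    (1800, 1.25),
]

def relative_dynamic_power(freq_mhz: float, volts: float) -> float:
    """Dynamic CMOS power scales roughly with V^2 * f (capacitance folded in)."""
    return volts ** 2 * freq_mhz

def pick_operating_point(utilisation: float):
    """Pick the slowest point whose speed covers the measured demand.

    utilisation is the fraction of the highest frequency that the recent
    workload actually needed (0.0 - 1.0).
    """
    needed = utilisation * OPERATING_POINTS[-1][0]
    for freq, volts in OPERATING_POINTS:
        if freq >= needed:
            return freq, volts
    return OPERATING_POINTS[-1]

if __name__ == "__main__":
    p_max = relative_dynamic_power(*OPERATING_POINTS[-1])
    for util in (0.05, 0.30, 0.70, 0.95):
        f, v = pick_operating_point(util)
        p = relative_dynamic_power(f, v)
        print(f"util={util:.0%} -> {f} MHz @ {v:.2f} V, "
              f"~{p / p_max:.0%} of peak dynamic power")
```

Real governors in operating systems use richer inputs such as scheduler load and thermal headroom, but the energy-saving principle is the same: run no faster, and at no higher voltage, than the workload requires.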

Green computing

The increasing proliferation of IoT devices in almost every sector and industry of developing and developed economies has increased the amount of data collected from the environment, and with it the demand for processing and computing. IoT and traditional devices require high performance, good QoS, and long battery life, which can be achieved primarily by developing strategies that improve computing performance while reducing energy consumption. Green or sustainable computing is the practice of maximising energy efficiency and minimising environmental impact in the design and use of computer chips, systems, and software, spanning the supply chain from the extraction of the raw materials needed to make computers to how systems are recycled [9].

Green computing strategies can be implemented in software or hardware. Some of the hardware-based strategies were discussed above in the section on green IoT hardware; software strategies are addressed in the Green IoT Software section below. Hardware acceleration is a primary green computing strategy because it improves both computing performance and energy efficiency. Hardware accelerators such as GPUs and Data Processing Units (DPUs) are major green computing drivers because they provide high-performance, energy-efficient computing for AI, networking, cybersecurity, gaming, and High-Performance Computing (HPC) workloads. It has been estimated that about 19 terawatt-hours of electricity a year could be saved if all AI, HPC, and networking tasks were offloaded to GPU and DPU accelerators. With the increasing use of sophisticated data analytics and AI tools to process the massive amounts of data generated by IoT devices, green computing strategies such as hardware acceleration will be essential [10].

Green computing is not only about devising strategies to reduce energy consumption. It also includes leveraging high-performance computing resources to tackle climate-related challenges. For example, GPUs and DPUs are used to run climate models (e.g., predict climate and weather patterns) and develop other green technologies (e.g., energy-efficient fertiliser production, development of battery technologies, etc.). Combining IoT and green computing technologies provides powerful tools for scientists, policymakers, and companies to tackle complex climate-related problems.

Green IoT Communication and Networking infrastructure

Communication infrastructure is a significant energy consumer in IoT systems as device-generated data increases exponentially. Strategies to enhance energy efficiency include:

a. Low-power networking and communication technologies:
Communication protocols designed for low-bandwidth, low-power operation, such as Zigbee, LoRaWAN, Sigfox, and BLE (Bluetooth Low Energy).
Energy-efficient adaptations of 5G technologies through techniques such as massive MIMO (Multiple Input, Multiple Output) and dynamic spectrum sharing.

b. Energy-efficient data transmission:
Data aggregation and compression reduce the transmitted data volume, conserving network bandwidth and lowering energy usage (a small sketch follows below).
Scheduling transmissions during periods of low network usage minimises power surges and optimises resource utilisation.
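
As a small illustration of point (b), the sketch below compares the payload size of sending every reading individually, sending one aggregated summary per batch, and sending the whole batch compressed. The JSON payload format and batch size are assumptions for illustration; fewer and smaller transmissions generally translate into less radio-on time and therefore lower energy use.

```python
# Compare three ways of reporting the same readings:
#  1. one message per raw reading,
#  2. one aggregated summary per batch,
#  3. one compressed batch.
# The payload format and batch size are illustrative assumptions.

import json
import random
import zlib

random.seed(42)
readings = [round(20.0 + random.uniform(-1.0, 1.0), 2) for _ in range(60)]

# 1. Raw: every reading sent as its own JSON message.
raw_bytes = sum(len(json.dumps({"temp": r}).encode()) for r in readings)

# 2. Aggregated: one summary message per batch.
summary = {
    "count": len(readings),
    "min": min(readings),
    "max": max(readings),
    "mean": round(sum(readings) / len(readings), 2),
}
aggregated_bytes = len(json.dumps(summary).encode())

# 3. Compressed: the whole batch in one zlib-compressed payload.
compressed_bytes = len(zlib.compress(json.dumps(readings).encode()))

print(f"raw, per-reading messages: {raw_bytes} bytes")
print(f"aggregated summary:        {aggregated_bytes} bytes")
print(f"compressed batch:          {compressed_bytes} bytes")
```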

c. Network-level offloading of computation:
Shifting intensive computational tasks from resource-constrained IoT devices to more capable edge or fog nodes conserves device battery power.
Edge computing reduces data transfer requirements and latency, leading to energy savings at both the device and infrastructure levels.

d. Energy-efficient communication techniques:
Algorithms that adaptively control transmission power based on signal strength and environmental conditions ensure that no more energy is used than necessary (a simple controller sketch follows below).
Implementing sleep and wake cycles for IoT devices, where communication modules remain dormant when not in use, significantly reduces energy consumption.
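
The sketch below illustrates the adaptive transmission-power idea in point (d): a simple controller steps the transmit power up when the reported link quality (RSSI) falls below a target and steps it down when there is ample margin. The power levels, target RSSI, and channel model are assumed values, not those of any specific radio.

```python
# Toy adaptive transmit-power controller: keep the link just above a
# target RSSI instead of always transmitting at full power.
# Power levels, target and channel model are illustrative assumptions.

import random

TX_LEVELS_DBM = [-8, -4, 0, 4, 8]   # assumed selectable output powers
TARGET_RSSI_DBM = -85               # assumed "good enough" link margin

def channel_rssi(tx_dbm: float) -> float:
    """Fake channel: fixed path loss plus random fading (assumption)."""
    path_loss_db = 90
    fading_db = random.uniform(-5, 5)
    return tx_dbm - path_loss_db + fading_db

def adapt(level_index: int, rssi: float) -> int:
    """Step the power index up when the link is weak, down when strong."""
    if rssi < TARGET_RSSI_DBM and level_index < len(TX_LEVELS_DBM) - 1:
        return level_index + 1
    if rssi > TARGET_RSSI_DBM + 6 and level_index > 0:
        return level_index - 1      # 6 dB hysteresis before stepping down
    return level_index

if __name__ == "__main__":
    idx = len(TX_LEVELS_DBM) - 1    # start at full power
    for packet in range(10):
        rssi = channel_rssi(TX_LEVELS_DBM[idx])
        print(f"pkt {packet}: tx={TX_LEVELS_DBM[idx]:+d} dBm, rssi={rssi:.1f} dBm")
        idx = adapt(idx, rssi)
```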

Green IoT architectures

Energy-efficient IoT systems are built around architectural frameworks that integrate energy optimisation across all layers of the IoT ecosystem, including device, network, and application levels. Key strategies include:

Green IoT Software

Optimised software plays a critical role in reducing the energy footprint of IoT systems:

Green IoT security

Energy-efficient security measures are vital to ensure sustainable IoT systems:

Advanced Green Manufacturing

Developing advanced design and manufacturing processes to produce energy-efficient chips is one of the strategies currently used to reduce energy consumption and achieve green computing and communication goals. Given the rapid adoption of smartphones and IoT systems, producing energy-efficient chips is very important. An example of how advanced manufacturing can significantly reduce energy consumption in computing and communication devices is the A-series chips used in Apple's iPhones: the 7-nm A12 chip consumes 50% less power than its 10-nm A11 predecessor, the 5-nm A14 chip is 30% more power efficient than the 7-nm A13, and the 4-nm A16 is 20% more power efficient than the 5-nm A15 [11].

A similar trend can be observed in the PC industry, although there is no guarantee that more advanced chip manufacturing processes will continue to improve chip performance and energy efficiency. Designing energy-efficient chips for 5G/6G base stations is crucial to meet the growing demands of high-speed communication while minimising energy consumption and environmental impact. These chips are engineered with advanced semiconductor technologies to reduce power consumption and improve energy efficiency. They integrate specialised hardware accelerators for signal processing and AI-driven resource management to optimise network performance dynamically. Power-saving techniques like dynamic voltage and frequency scaling (DVFS) are also employed to adapt energy usage based on real-time load.

Green IoT policies

Regulatory frameworks and corporate policies play a foundational role in driving energy-efficient IoT adoption:

Energy-efficient IoT systems demand an integrated approach, combining advanced hardware, optimised software, sustainable manufacturing, and policy support to meet the goals of green computing and communication. As the IoT ecosystem expands, these strategies are essential to balance innovation with environmental sustainability.


[1] Electronic Components, “Using modern technology to reduce power consumption”, June 2021, https://www.arrow.com/en/research-and-events/articles/using-modern-technology-to-reduce-power-consumption, accessed August 2023.
[2] Partner Perspectives, “Moore's Law Is Dead. Where Is Energy Saving Heading in the Electronic Information Industry?”, Light Reading, 2022, https://www.lightreading.com/moores-law-is-dead-where-is-energy-saving-heading-in-electronic-information-industry/a/d-id/781014, accessed Sept. 7, 2023.
[3] AnySilicon, “What is a System on Chip (SoC)?”, https://anysilicon.com/what-is-a-system-on-chip-soc/, accessed Sept. 7, 2023.
[4] Electronic Components, “Using modern technology to reduce power consumption”, June 2021, https://www.arrow.com/en/research-and-events/articles/using-modern-technology-to-reduce-power-consumption, accessed Sept. 18, 2023.
[5] Microsoft, “P-states and C-states”, https://learn.microsoft.com/en-us/previous-versions/windows/desktop/xperf/p-states-and-c-states, accessed Oct. 2, 2023.
[6] HEAVY.AI, “Hardware acceleration”, https://www.heavy.ai/technical-glossary/hardware-acceleration, accessed Oct. 2, 2023.
[7] Molly Loe, “Optical computers: everything you need to know”, TechHQ, May 2023, accessed Oct. 4, 2023.
[8] e.g., 2G/3G/4G/5G.
[9] Rick Merritt, “What Is Green Computing?”, NVIDIA Blog, 2022, https://blogs.nvidia.com/blog/2022/10/12/what-is-green-computing/, accessed Oct. 4, 2023.
[10] Rick Merritt, “What Is Green Computing?”, NVIDIA Blog, 2022, https://blogs.nvidia.com/blog/2022/10/12/what-is-green-computing/, accessed Oct. 4, 2023.
[11] Partner Perspectives, “Moore's Law Is Dead. Where Is Energy Saving Heading in the Electronic Information Industry?”, Light Reading, 2022, https://www.lightreading.com/moores-law-is-dead-where-is-energy-saving-heading-in-electronic-information-industry/a/d-id/781014, accessed Sept. 7, 2023.