IOT-OPEN.EU Reloaded Consortium partners proudly present the 2nd edition of the Introduction to the IoT book. The complete list of contributors appears below.
ITT Group
Riga Technical University
Silesian University of Technology
Western Norway University of Applied Sciences
This book is intended to provide comprehensive information about low-level programming in the assembler language for a wide range of computer systems, including:
The book also contains an introductory chapter on general computer architectures and their components.
While low-level assembler programming was once considered outdated, it is now making a comeback, driven by the need for efficient and compact code that saves energy and resources, thus promoting a green approach to programming. It is also essential for cutting-edge algorithms that require high performance on constrained resources. Assembler is the base for all other programming languages and hardware platforms, and so is needed when developing new hardware, compilers and tools. It is particularly crucial in the context of the announced switch from chip manufacturing in Asia towards EU-based technologies.
We (the Authors) assume that readers of this content possess some general knowledge of IT technologies: they understand what an embedded system is, know the essential components of computer systems such as RAM, CPU, DMA, I/O, and interrupts, and are familiar with the general concept of software development in a programming language, ideally (but not necessarily) C/C++.
This content was implemented under the following project:
Consortium Partners
Erasmus+ Disclaimer
This project has been funded with support from the European Commission.
This publication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
Copyright Notice
This content was created by the MultiASM Consortium 2023–2026.
The content is copyrighted and distributed under CC BY-NC Creative Commons Licence and is free for non-commercial use.
In case of commercial use, please get in touch with a MultiASM Consortium representative.
The old, somewhat abandoned art of assembler programming is coming to light again: “machine code to machine code and assembler to the assembler.” In the context of relocating electronics production, particularly developing new CPUs in Europe, engineers and software developers need to familiarise themselves with this niche yet omnipresent technology: every piece of code you write, compile, and execute ends up, directly or indirectly, as machine code.
Besides the development of new products and the related need for compilers and tools, the assembler language is essential for generating compact, rapid implementations of algorithms: it gives software developers a powerful tool of absolute control over the hardware, mainly the CPU.
Assembler programming applies to selected, specific groups of tasks and algorithms.
Using pure assembler to implement, e.g., a user interface is possible but makes little sense.
Nowadays, assembler programming is commonly integrated with high-level languages, and it is a part of the application's code responsible for rapid and efficient data processing without higher-level language overheads. This applies even to applications that do not run directly on the hardware but rather use virtual environments and frameworks (either interpreted or hybrid) such as Java, .NET languages, and Python.
It is a rule of thumb that the simpler and more constrained the device, the closer the developer is to the hardware. An excellent example of this rule is development for an ESP32 chip: its two cores can easily handle Python apps, but in its energy-saving modes, when only the ultra-low-power coprocessor is running, the only programming language available is assembler. It is compact enough to run in very constrained environments, drawing microamperes of current and fitting in a few dozen bytes.
This book is divided into four main chapters:
The following chapters present the contents of the coursebook:
Assembler programming is a very low-level technique for writing software. It uses the hardware directly, relying on the architecture of the processor and the computer, with the possibility of influencing every single element and every single bit in the hundreds of registers which control the behaviour of the computer and all its units. That gives the programming engineer much more control over the hardware, but also requires great care while writing programs: modifying a single bit can completely change the computer's behaviour. This is why we begin our book on assembler with a chapter about the architecture of computers. It is essential to understand how the computer is designed, how it operates, executes programs, and performs calculations. It is worth mentioning that this is important not only for programming in assembler: knowledge of computer and processor architecture is useful for everyone who writes software in any language. With this knowledge, programming engineers can write applications that are more efficient in terms of memory use, execution time and energy consumption.
You may notice that in this book we often use the term processor for elements that sometimes go by other names. Historically, the processor or microprocessor is the element that represents the central processing unit (CPU) only and also requires memory and peripherals to form a fully functional computer. If an integrated circuit contains all the mentioned elements, it is called a one-chip computer or microcontroller. A more advanced microcontroller for use in embedded systems is called an embedded processor. An embedded processor which contains other modules, for example a wireless networking radio module, is called a System on Chip (SoC). From the perspective of an assembler programmer, differentiating between these elements is not so important, and in current literature all such elements are often simply called processors. That is why we decided to use the common name “processor”, although the other names may also appear in the text.
The computers we use every day are all designed around the same general idea: the cooperation of three base elements, the processor, memory and peripheral devices. Their names represent their functions in the system. The memory stores data and program code, the processor manipulates the data by executing programs, and peripherals maintain contact with the user, the environment and other systems. To exchange information, these elements are connected by interconnections called buses. A generic block diagram of an exemplary computer is shown in Fig. 2.
The processor is often called “the brain” of the computer. Although it does not think, it is the element which controls all other units of the computer; every hardware part is controlled, more or less directly, by the main processor. Even if a device has its own processor, for example a keyboard, it works under the control of the main one. The processor handles events. We can say that synchronous events are those the processor handles periodically. As long as it has power, the processor never stops, even when you see nothing special happening on the screen. On a PC running an operating system without a graphical user interface, for example plain Linux or a command box in Windows, even if you see only „C:\>”, the processor is working. In this situation it executes the main loop of the system, waiting for asynchronous events. Such an asynchronous event occurs when the user presses a key or moves the mouse, when the sound card finishes playing a sound, or when the hard disk completes a data transfer. The processor handles all of these actions by executing programs, or if you prefer, procedures.
The processor is characterised by its main parameters, including its frequency of operation and its class. We will explain other features further in this chapter.
Frequency is a very important parameter which tells the user how many instructions can be executed per unit of time. To obtain the real number of instructions per second, it must be combined with the average number of clock pulses required to execute an instruction. Older processors needed several, or even a dozen, clock pulses per instruction. Modern machines, thanks to parallel execution, can achieve the impressive result of a few instructions per single clock cycle.
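The relationship between clock frequency, cycles per instruction and throughput can be worked through numerically. The sketch below uses invented, illustrative figures, not data from any particular datasheet:

```python
def instructions_per_second(clock_hz: float, avg_cpi: float) -> float:
    """Effective instruction throughput = clock frequency / average cycles per instruction."""
    return clock_hz / avg_cpi

# A legacy CPU: 12 MHz clock, 12 clock cycles per instruction -> 1 million instructions/s
legacy = instructions_per_second(12_000_000, 12)

# A modern superscalar core: 3 GHz clock, 0.5 CPI (two instructions per cycle)
modern = instructions_per_second(3_000_000_000, 0.5)

print(legacy / 1e6)   # 1.0   (MIPS)
print(modern / 1e6)   # 6000.0 (MIPS)
```

Note how halving the CPI has the same effect on throughput as doubling the clock frequency.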
The class of the processor is the number of bits in its data word: the size of the data arguments the processor can process in a single arithmetic, logic or other operation. The still-popular 8-bit machines have a data length of 8 bits, while the most sophisticated can use 32- or 64-bit arguments. The class of the processor determines the size of its internal data registers, the size of instruction arguments and the number of lines of the data bus.
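The word width limits the range a single operation can handle: results that do not fit simply wrap around. A minimal sketch of this for an 8-bit class machine, emulated by masking:

```python
WORD = 0xFF                        # mask for an 8-bit data word

def add8(a: int, b: int) -> int:
    """8-bit addition: the result wraps around modulo 256."""
    return (a + b) & WORD

print(add8(200, 100))   # 44, because 300 does not fit in 8 bits
print(add8(1, 2))       # 3
```

Wider values must be composed from several such operations, which is why the class strongly affects performance on large data.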
Memory is the element of the computer that stores data and programs. It is visible to the processor as a sequence of data words, where every word has its own address. Addressing allows the processor to access simple and complex variables and to read instructions for execution. Although it is intuitive that the size of a memory word should correspond to the class of the processor, this is not always true. For example, in PC computers, independently of the processor class, the memory is always organised as a sequence of bytes, and its size is given in megabytes or gigabytes.
The size of the memory installed in a computer does not have to correspond to the size of the address space, i.e., the maximal size of the memory addressable by the processor. In modern machines matching it would be impossible or hardly achievable: for the x64 architecture, the theoretical address space is 2^64 bytes (16 exabytes), and even the address space supported in hardware by processors, 2^48 bytes, equals 256 terabytes. At the opposite extreme, in constrained devices the physical memory can be bigger than what the processor supports. To enable access to more memory than the processor's address space allows, or to support flexible placement of programs in a big address space, a paging mechanism is used. It is a hardware support unit that maps the addresses used by the processor onto the physical memory installed in the computer.
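The paging idea can be sketched as a lookup from page number to physical frame number. The page size and the page-table contents below are invented purely for illustration; real hardware units are far more elaborate:

```python
PAGE_SIZE = 4096                     # 4 KiB pages, a common choice

# Toy page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 9, 2: 1}

def translate(virtual_addr: int) -> int:
    """Map a virtual address to a physical one via the page table."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]         # an unmapped page would raise here (a "page fault")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # page 1, offset 0x234 -> frame 9 -> 0x9234
```

The offset within the page passes through unchanged; only the page number is remapped.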
Peripherals are also called input-output (I/O) devices. A variety of units belong to this group: timers, communication ports, general-purpose inputs and outputs, displays, network controllers, video and audio modules, data storage controllers and many others. The main function of peripheral modules is to exchange information between the computer and the user, collect information from the environment, send and receive data to and from elements of the computer not connected directly to the processor, and exchange data with other computers and systems. Some of them connect the processor to functional units of the same computer. For example, the hard disk drive is not connected directly to the processor; it uses a specialised controller, a peripheral module operating as an interface between the centre of the computer and the hard drive. Another example is the display: it uses a graphics card which plays the role of a peripheral interface between the computer and the monitor.
The processor, memory and peripherals exchange information using interconnections called buses. Although you can find a variety of bus kinds and names in the literature and on the internet, at the very low level there are three buses which connect the processor, memory and peripherals.
The address bus delivers the address generated by the processor to memory or peripherals. This address specifies exactly one memory cell or peripheral register that the processor wants to access. The address bus is used not only to address the data which the processor wants to transfer to or from memory or a peripheral; it also addresses the instruction which the processor fetches and later executes, as instructions are also stored in memory. The address bus is unidirectional: the address is generated by the processor and delivered to the other units.
The number of lines in the address bus is fixed for a given processor and determines the size of the address space the processor can access. For example, if the address bus of some processor has 16 lines, it can generate up to 2^16 = 65536 different addresses.
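The relation between the number of address lines and the address space is simply a power of two, which is easy to tabulate:

```python
def address_space(lines: int) -> int:
    """Number of distinct addresses an n-line address bus can generate."""
    return 2 ** lines

print(address_space(16))   # 65536       (64 KiB)
print(address_space(20))   # 1048576     (1 MiB, the classic 8086)
print(address_space(32))   # 4294967296  (4 GiB)
```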
The data bus is used to exchange data between the processor and memory or peripherals. The processor can read data from memory or peripherals, or write data to these units, having first sent their address on the address bus. As data can be read or written, the data bus is bidirectional.
The number of bits of the data bus usually corresponds to the class of the processor. It means that an 8-bit class processor has 8 lines of the data bus.
The control bus is formed by lines mainly used for synchronisation between the elements of the computer. In a minimal implementation, it includes the read and write lines. An active read signal (#RD) informs the other elements that the processor wants to read data from the unit; in such a situation the addressed element, e.g. memory, puts the data from the addressed cell on the data bus. An active write signal (#WR) informs the element that the data present on the data bus should be stored at the specified address. The control bus can also include other signals specific to the system, e.g. interrupt signals, DMA control lines, clock pulses, signals distinguishing memory from peripheral access, signals activating chosen modules and others.
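The interplay of the three buses in a read or write cycle can be sketched as a toy model. The #RD/#WR names follow the text; the 256-byte memory array and the function shape are invented for illustration:

```python
# Toy model of a processor-memory transaction: address, data and control lines.
memory = [0] * 256                      # a tiny 256-byte memory

def bus_cycle(addr: int, rd: bool = False, wr: bool = False, data: int = 0):
    """One machine cycle. With #RD active, the memory drives the data bus;
    with #WR active, the memory latches the value from the data bus."""
    if rd:
        return memory[addr]             # memory puts the addressed cell on the data bus
    if wr:
        memory[addr] = data & 0xFF      # memory stores the data bus value
    return None

bus_cycle(0x10, wr=True, data=0xAB)     # write cycle
print(hex(bus_cycle(0x10, rd=True)))    # read cycle -> 0xab
```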
The classical architecture of computers uses a single address bus and a single data bus to connect the processor, memory and peripherals. This architecture is called von Neumann or Princeton, and we show it in Fig. 3. Additionally, in this architecture the memory contains both the code of programs and the data the programs use. It suffers from the drawback that an instruction of the program and the data to be processed cannot be accessed at the same time, which is called the von Neumann bottleneck. The answer to this issue is the Harvard architecture, where program memory is separated from data memory and the two are connected to the processor with two pairs of address and data buses. Of course, the processor must support this type of architecture. We show the Harvard architecture in Fig. 4.
Harvard architecture is very often used in one-chip computers. It does not suffer from the von Neumann bottleneck and additionally makes it possible to use different lengths for the data word and the instruction word. For example, the AVR 8-bit class family of microcontrollers has a 16-bit program word, and PIC microcontrollers, also 8-bit class, have 13- or 14-bit instruction words. In modern microcontrollers, the program is usually stored in internal reprogrammable flash memory and the data in internal static RAM. All interconnections, including the address and data buses, are implemented internally, making the implementation of the Harvard architecture easier than in a computer based on a microprocessor. In several mature microcontrollers, the program and data memories are separated but connected to the processor unit with a single set of buses. This is named mixed architecture. It benefits from an enlarged address space but still suffers from the von Neumann bottleneck. This approach can be found in the 8051 family of microcontrollers. We present the schematic diagram of the mixed architecture in Fig. 5.
It is not only the whole computer that can have different architectures; this also applies to processors. There are two main internal architectures of processors: CISC and RISC. CISC means Complex Instruction Set Computer, while RISC stands for Reduced Instruction Set Computer. The naming can be a little confusing because it suggests that the instruction set itself is complex or reduced, yet we can find CISC and RISC processors with similar numbers of implemented instructions. The difference lies rather in the complexity of the instructions, not only of the instruction set.
Complex instructions mean that a typical single instruction of a CISC processor does more during its execution than a typical RISC instruction. It also uses more sophisticated addressing. This means that implementing some algorithm takes fewer CISC instructions than RISC instructions. Of course, this comes at the price of a more sophisticated construction of the CISC processor, which influences the average execution time of a single instruction. As a result, the overall execution time can be similar, but in CISC more effort is made by the processor, while in RISC more work is done by the compiler (or the assembler programmer).
Additionally, there are differences in the general-purpose registers. In CISC the number of registers is smaller than in RISC, and they are specialised: not all operations can be done with any register. For example, in some CISC processors arithmetic calculations can be done only using a special register called the accumulator, while addressing a table element or using a pointer requires a special index (or base) register. In RISC, almost all registers can be used for any purpose, such as the mentioned calculations or addressing. In CISC processors, instructions usually have two arguments, in the form
operation arg1, arg2 ; Example: arg1 = arg1 + arg2
In such a situation, arg1 is one of the sources and also the destination, the place for the result; the original arg1 value is destroyed. In many RISC processors, three-argument instructions are present:
operation arg1, arg2, arg3 ; Example: arg3 = arg1 + arg2
In this approach, two arguments are the sources and the third one is the destination; the original arguments are preserved and can be used for further calculations. Table 1 summarises the differences between CISC and RISC processors.
| Feature | CISC | RISC |
|---|---|---|
| Instructions | Complex | Simple |
| Registers | Specialised | Universal |
| Number of registers | Smaller | Larger |
| Calculations | With accumulator | With any register |
| Addressing modes | Complex | Simple |
| Non-destroying instructions | No | Yes |
| Examples of processors | 8086, 8051 | AVR, ARM |
From our perspective, the processor is the electronic integrated circuit that controls the other elements of the computer. Its main ability is to execute instructions. When we go into the details of the instruction set, you will see that some instructions perform calculations or process data, but some do not. This suggests that the processor is composed of two main units: one responsible for instruction execution and a second that performs data processing. The first is called the control unit or instruction processor; the second is named the execution unit or data processor. We can see them in Fig. 6.
The function of the control unit, also known as the instruction processor, is to fetch, decode and execute instructions. It also generates signals to the execution unit if the instruction being executed requires it. It is a synchronous and sequential unit. Synchronous means that it changes state in step with the clock signal. Sequential means that the next state depends on the states of the inputs and the current internal state. As inputs we can consider not only physical signals from other units of the computer but also the code of the instruction. To ensure that the computer behaves the same every time it is powered on, the control unit is set to a known state at the beginning of operation by the RESET signal. A typical control unit contains some essential elements:
Elements of the control unit are shown in Fig. 7.
The control unit executes instructions in a few steps:
In detail, the process looks as follows:
The control unit works according to the cycles of the clock signal generator, known as main clock cycles. With every clock cycle, some internal operations are performed. One such operation is reading or writing memory, which sometimes requires more than a single clock cycle. A single memory access is known as a machine cycle. As instruction execution sometimes requires more than one memory access and other actions, the execution of a whole instruction is named an instruction cycle. Summarising: one instruction execution requires one instruction cycle, consisting of several machine cycles, each composed of a few main clock cycles. Modern advanced processors are designed in such a way that they are able to execute a single instruction (sometimes even more than one) every clock cycle. This requires a more complex design of the control unit, many execution units and other advanced techniques which make it possible to process more than one instruction at a time.
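The fetch-decode-execute steps described above can be sketched as a minimal software model. The three-instruction machine below, with its opcodes, one accumulator and shared program/data memory, is entirely invented for illustration:

```python
# A toy von Neumann machine: one accumulator; instructions and data share one memory.
LOAD, ADD, HALT = 0x01, 0x02, 0xFF    # invented opcodes

def run(memory: list[int]) -> int:
    acc, ip = 0, 0                    # accumulator and instruction pointer, cleared by RESET
    while True:
        opcode = memory[ip]           # fetch
        if opcode == HALT:            # decode + execute
            return acc
        operand_addr = memory[ip + 1]
        if opcode == LOAD:
            acc = memory[operand_addr]
        elif opcode == ADD:
            acc += memory[operand_addr]
        ip += 2                       # advance to the next instruction

# Program: LOAD [6]; ADD [7]; HALT  -- the data lives at addresses 6 and 7
program = [LOAD, 6, ADD, 7, HALT, 0, 40, 2]
print(run(program))   # 42
```

Each loop iteration corresponds to one instruction cycle; the two memory reads per instruction (opcode and operand address) correspond to machine cycles.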
The control unit also accepts input signals from peripherals enabling the interrupts and direct memory access mechanisms. For proper return from the interrupt subroutine, the control unit uses a special register called stack pointer. Interrupts and direct memory access mechanisms will be explained in detail in further chapters.
The execution unit, also known as the data processor, performs the data-processing part of instructions. Typically, it is composed of a few essential elements:
The arithmetic logic unit (ALU) is the element that performs logical and arithmetical calculations. It uses data coming from registers, the accumulator or memory. Data coming from memory for arithmetic and logic instructions is stored in a temporary register. The result of a calculation is stored back in the accumulator, another register or memory. In some legacy CISC processors, the only possible place for storing the result is the accumulator. Besides the result, the ALU also returns some additional information about the calculation: it modifies the bits in the flag register, which comprises flags that indicate the results of arithmetic and logical operations. For example, if the result of an addition operation is too large to be stored in the resulting argument, the carry flag is set to inform about this situation.
Typically, the flags register includes:
The flags are used as conditions for decision-making instructions (like if statements in high-level languages). The flags register can also implement some control flags to enable or disable processor functionalities. An example of such a flag is the Interrupt Enable flag of the 8086 microprocessor.
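How typical flags are derived from an 8-bit addition can be sketched as follows. The Zero/Carry/Sign names follow common conventions; this is a generic model, not any specific processor:

```python
def add8_with_flags(a: int, b: int) -> tuple[int, dict]:
    """8-bit addition returning the result and the Zero, Carry and Sign flags."""
    total = (a & 0xFF) + (b & 0xFF)
    result = total & 0xFF
    flags = {
        "Z": result == 0,            # Zero: the 8-bit result is zero
        "C": total > 0xFF,           # Carry: the true sum did not fit in 8 bits
        "S": bool(result & 0x80),    # Sign: most significant bit of the result
    }
    return result, flags

print(add8_with_flags(0xFF, 0x01))  # (0, {'Z': True, 'C': True, 'S': False})
```

A conditional instruction such as "jump if zero" simply tests the Z bit set by the previous operation.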
As we mentioned in the chapter about CISC and RISC processors, the former have specialised registers, including the accumulator. A typical CISC execution unit is shown in Fig. 8.
A typical RISC execution unit does not have a specialised accumulator register. It implements a set of registers, as shown in Fig. 9.
As we already know, the processor executes instructions which process data. We can consider two streams flowing through the processor: a stream of instructions, which passes through the control unit, and a stream of data, processed by the execution unit. In 1966, Michael Flynn proposed a taxonomy to classify different processor architectures. Flynn's classification is based on the number of concurrent instruction (or control) streams and data streams available in the architecture. The taxonomy proposed by Flynn is presented in Table 2.
| | Data streams: Single | Data streams: Multiple |
|---|---|---|
| Instruction streams: Single | SISD | SIMD |
| Instruction streams: Multiple | MISD | MIMD |
A Single Instruction Single Data processor is a classical processor with a single control unit and a single execution unit. It can fetch a single instruction in one cycle and perform a single calculation. Mature PC computers based on 8086, 80286 or 80386 processors, and some modern small-scale microcontrollers like the AVR, are examples of this architecture.
Single Instruction Multiple Data is an architecture in which one instruction stream can perform calculations on multiple data streams. Good examples of such an architecture are all the vector instruction sets (also called SIMD instructions) like MMX, SSE, AVX, and 3DNow! in x64 Intel and AMD processors. Modern ARM processors also implement SIMD instructions which perform vectorised operations.
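The SISD/SIMD contrast can be sketched in Python: a scalar loop issues one addition per element, while a vector "instruction" conceptually applies the operation to all lanes at once. The vector form is only emulated here with a single list operation; a real CPU would use hardware such as an AVX or NEON vector add:

```python
def add_sisd(a: list[int], b: list[int]) -> list[int]:
    """SISD style: one addition per loop iteration."""
    out = []
    for x, y in zip(a, b):
        out.append(x + y)
    return out

def add_simd(a: list[int], b: list[int]) -> list[int]:
    """SIMD style: conceptually a single instruction over all lanes (emulated)."""
    return [x + y for x, y in zip(a, b)]

print(add_simd([1, 2, 3, 4], [10, 20, 30, 40]))   # [11, 22, 33, 44]
```

Both produce the same result; the difference is how many instructions the control unit must issue to get it.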
Multiple Instruction Multiple Data is an architecture in which many control units operate on multiple streams of data. These are all architectures with more than one processor (multi-processor architectures). MIMD architectures include multi-core processors like modern x64, ARM and even ESP32 SoCs. This also includes supercomputers and distributed systems, using common shared memory or even a distributed but shared memory.
Multiple Instruction Single Data. At first glance, such machines seem illogical, but they are used where certainty of correct calculation is crucial and required for the security of system operation. This approach can be found in applications like space shuttle computers.
Single Instruction Multiple Threads was originally defined as a subset of SIMD. In modern designs, Nvidia uses this execution model in their G80 architecture [1], where multiple independent threads execute concurrently using a single instruction.
The computer cannot work without memory: the processor fetches instructions from memory, and data is also stored there. In this chapter, we will discuss memory types, technologies and their properties. The overall view of computer memory can be represented as the memory hierarchy triangle shown in Fig. 10, where the available size shrinks going up, but the access time shortens significantly. It is visible that the fastest memory in the computer is the set of internal registers; next come the cache memory, operational memory (RAM) and disk drive; and the slowest, but largest, is the memory available via the computer network, usually the Internet.
In Table 3 you can find a comparison of different memory types with their estimated sizes and access times.
| Memory type | Average size | Access time |
|---|---|---|
| Registers | few kilobytes | <1 ns |
| Cache | few megabytes | <10 ns |
| RAM | few gigabytes | <100 ns |
| Disk | few terabytes | <100 µs |
| Network | unlimited | few seconds, but can be unpredictable |
Before we go into the details of memory, let us introduce the term address space. The processor can access the instructions of a program and the data to process. The question is: how many instructions and data elements can it reach? The possible number of elements (usually bytes) represents the address space available for the program and for data. We know from the chapter about von Neumann and Harvard architectures that these spaces can overlap or be separate.
The size of the program address space is determined by the number of bits of the Instruction Pointer register. In many 8-bit microprocessors, the size of the IP is 16 bits. It means that the size of the address space, expressed as the number of different addresses the processor can reach, is equal to 2^16 = 65536 (64k). This is too small for bigger machines like PC computers, so there the IP has 64 bits (although the real number of bits used is limited to 48).
The size of the possible data address space is determined by the addressing modes and the size of the index registers used for indirect addressing. In 8-bit microprocessors, it is sometimes possible to join two 8-bit registers to achieve an address space of the same 64k size as for the program. In PC computers, the program and data address spaces overlap, so the address space for data is the same as for programs.
In this kind of memory, the instructions of programs are stored. In many computers (like PCs), the same physical memory is used to store the program and the data. This is not the case in most microcontrollers, where the program is stored in non-volatile memory, usually flash EEPROM. Even if the memory for programs and data is shared, the code area is often write-protected to prevent viruses and other malware from interfering with the original, safe software.
In data memory, all kinds of data can be stored: numbers, tables, pictures, audio samples and everything else that is processed by the software. Usually, the data memory is implemented as RAM, with the possibility of reading and writing. SRAM is a volatile memory, so its content is lost if the computer is powered off.
There are different technologies for creating memory elements. The technology determines if memory is volatile or non-volatile, the physical size (area) of a single bit, the power consumption of the memory array, and how fast we can access the data. According to the data retention, we can divide memories into two groups:
Non-volatile memories are implemented as:
Volatile memories have two main technologies:
Non-volatile memories are used to store data which should be preserved while the power is off. Their main application is to store the code of the program, constant data tables, and parameters of the computer (e.g. network credentials). The hard disk drive (or SSD) is also non-volatile memory, although not directly accessible by the processor via the address and data buses.
ROM - Read Only Memory is the kind of memory that is programmed by the chip manufacturer during production. This memory has fixed content which cannot be changed in any way. It was popular as program memory in microcontrollers years ago because of the low price of a single chip, but due to the need to replace the whole chip in case of an error in the software, and the lack of any possibility of a software update, it was replaced with technologies which allow reprogramming the memory content. Currently, ROM is sometimes used to store the serial number of a chip.
PROM - Programmable Read Only Memory is the kind of memory that can be programmed by the user but cannot be reprogrammed afterwards. It is sometimes called OTP - One Time Programmable. Some microcontrollers implement such a memory, but due to the inability to update the software, it is a choice only for simple devices with a short product lifetime. OTP technology is used in some protection mechanisms that prevent unauthorised software changes or device reconfiguration.
EPROM - Erasable Programmable Read Only Memory is memory which can be erased and programmed many times. The data is erased by shining intense ultraviolet light through a special glass window into the memory chip for a few minutes. After erasing, the chip can be programmed again. This kind of memory was very popular years ago, but due to the complicated erase procedure, requiring a UV light source, it was replaced with EEPROM.
EEPROM - Electrically Erasable Programmable Read Only Memory is a type of non-volatile memory that can be erased and reprogrammed many times using electrical signals only. It replaced EPROM memories due to its ease of use and the possibility of reprogramming without removing the chip from the device. It is designed as an array of MOSFET transistors with so-called floating gates which can be charged or discharged and keep a stable state for a long time (at least 10 years). In EEPROM memory, every byte can be individually reprogrammed.
Flash EEPROM is a variant of EEPROM memory. While in EEPROM a single byte can be erased and reprogrammed, in the flash version erasing is only possible in larger blocks. This reduces the area occupied by the memory, making it possible to create denser memories with larger capacities. Flash memories can be realised as part of a larger chip, making it possible to design microcontrollers with built-in reprogrammable memory for the firmware. This enables firmware updates without any additional tools or removing the processor from the operating device. It also enables remote firmware updates.
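The block-erase behaviour of flash can be modelled in a few lines: programming can only clear bits, and setting them back requires erasing a whole page. The page and array sizes below are invented toy values; real flash uses much larger pages:

```python
PAGE_SIZE = 16                         # toy page size; real flash uses e.g. 4 KiB
flash = [0xFF] * 64                    # erased flash reads as all ones

def erase_page(page: int):
    """Flash erase works on whole pages only, restoring all bits to 1."""
    start = page * PAGE_SIZE
    flash[start:start + PAGE_SIZE] = [0xFF] * PAGE_SIZE

def program_byte(addr: int, value: int):
    """Programming can only clear bits (1 -> 0), never set them back."""
    flash[addr] &= value

program_byte(5, 0x12)
print(hex(flash[5]))      # 0x12
program_byte(5, 0xFF)     # writing 0xFF cannot restore cleared bits...
print(hex(flash[5]))      # still 0x12
erase_page(0)             # ...only a full page erase can
print(hex(flash[5]))      # 0xff
```

This is why flash firmware updates are organised as erase-then-program cycles over whole pages.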
FRAM - Ferroelectric Random Access Memory is a type of memory in which information is stored by changing the polarisation of a ferroelectric material. Its main advantage is that the power efficiency, access time, and density are comparable to DRAM, but without the need for refreshing and with data retention while the power is off. The main drawback is the price, which limits the popularity of FRAM applications. Currently, due to their high reliability, FRAMs are used mainly in specialised applications like data loggers, medical equipment, and automotive “black boxes”.
Volatile memories are used for temporary data storage while the computer system is running; their content is lost after the power is turned off. They store the data being processed by the program. Their main applications are operational memory (known as RAM), cache memory, and data buffers.
SRAM - Static Random Access Memory is the memory used as operational memory in smaller computer systems and as cache memory in bigger machines. Its main benefit is a very high operational speed. The access time can be as short as a few nanoseconds, making it the first choice when speed is required: in cache memory or as processor registers. The drawbacks are area and power consumption. Every bit of static memory is implemented as a flip-flop composed of six transistors. Due to this construction, the flip-flop always consumes some power. That is why the capacity of SRAM memory is usually limited to a few hundred kilobytes, up to a few megabytes.
DRAM - Dynamic Random Access Memory is the memory used as operational memory in bigger computer systems. Its main benefit is very high density. The information is stored as a charge in a capacitance formed under a MOSFET transistor. Because every bit is implemented with a single transistor, it is possible to fit a few gigabytes in a small area. Another benefit is the reduced power required to store the data compared with static memory. But there are also drawbacks. DRAM is slower than SRAM: the addressing method and the complex internal construction prolong the access time. Additionally, because the capacitors which store the data discharge in a short time, DRAM requires refreshing, typically 16 times per second. Refreshing is done automatically by the memory chip but additionally slows down access.
Peripherals, or peripheral devices, also known as Input-Output devices, enable the computer to stay in contact with the external environment or expand the computer's functionality. Peripheral devices enhance the computer's capability by making it possible to enter information into a computer for storage or processing and to deliver the processed data to a user, another computer, or a device controlled by the computer. Internal peripherals are connected directly to the address, data, and control buses of the computer. External peripherals can be connected to the computer via USB or a similar connection.
There is a variety of peripherals which can be connected to the computer. The most commonly used are:
From the assembler programmer's perspective, a peripheral device is represented as a set of registers available in the I/O address space. Peripheral registers are used to control the device's behaviour, including its mode of operation, parameters, configuration, transmission speed, etc. Registers are also data exchange points where the processor can store data to be transmitted to the user or an external computer, or read data coming from the user or another system.
The size of the I/O address space is usually smaller than the size of the program or data address space. The method of accessing peripherals depends on the design of the processor. We can find two methods of I/O addressing implementation: separate or memory-mapped I/O.
Separate I/O address space is accessed independently of the program or data memory. In the processor, it is implemented with separate control bus lines to read or write I/O devices. Separate control lines usually mean that the processor also implements different instructions to access memory and I/O devices. It also means that a peripheral and a byte in memory can share the same address; only the type of instruction used distinguishes the final destination. Separate I/O address space is shown schematically in Fig. 11. Reading data from memory is activated with the #MEMRD signal, writing with the #MEMWR signal. If the processor needs to read or write a register of a peripheral device, it uses the #IORD or #IOWR lines respectively.
In this approach, the processor doesn't implement separate control signals to access the peripherals. It uses a #RD signal to read data from, and a #WR signal to write data to, a common address space. It also doesn't have different instructions to access registers in the I/O address space: every such register is visible like any other memory location. It means that it is solely the responsibility of the software to distinguish I/O registers from data in memory. Memory-mapped I/O address space is shown schematically in Fig. 12.
As we already mentioned, instructions are executed by the processor in a few steps. In the literature, you can find descriptions with three, four, or five stages of instruction execution; everything depends on the level of detail considered. The three-stage description distinguishes fetch, decode and execute steps. The four-stage model adds a data read step between decode and execute. The five-stage version adds a final step for writing the result back, and sometimes reverses the order of the data read and execute steps.
It is worth remembering that even a simple fetch step can be divided into a set of smaller actions which must be performed by the processor. The real execution of instructions depends on the processor's architecture, implementation and complexity. Considering the five-stage model we can describe the general model of instruction execution:
From the perspective of the processor, instructions are binary codes, unambiguously determining the activities that the processor is to perform. Instructions can be encoded using a fixed or variable number of bits.
A fixed number of bits makes the construction of the instruction decoder simpler, because the choice of a specific behaviour or function of the execution unit is encoded with bits which are always at the same position in the instruction. On the other hand, if the designer plans to expand the instruction set with new instructions in the future, some spare bits in the instruction word must be reserved for future use. This makes the program code larger than strictly required. Fixed-length instructions are often implemented in RISC machines. For example, in the ARM architecture instructions have 32 bits, while in AVR instructions are encoded using 16 bits.
A variable number of bits makes the instruction decoder more complex. Based on the content of the first part of the instruction (usually a byte), it must be able to decide what the length of the whole instruction is. In such an approach, instructions can be as short as one byte, or much longer. An example of a processor with variable instruction length is the 8086 and all further processors from the x86 and x64 families. Here the instructions, including all possible constant arguments, can be up to 15 bytes long.
Modern processors have a very complex design and include many units responsible mainly for shortening the execution time of the software.
As was described in the previous chapter, executing a single instruction requires many actions which must be performed by the processor. We could see that each step, or even substep, can be performed by a separate logical unit. This feature has been used by designers of modern processors to create a processor in which instructions are executed in a pipeline. A pipeline is a collection of logical units that execute many instructions at the same time, each of them at a different stage of execution. If the instructions arrive in a continuous stream, the pipeline allows the program to execute faster than on a processor without pipelining. Note that the pipeline does not reduce the execution time of a single instruction; it increases the throughput of the instruction stream.
A simple pipeline is implemented in AVR microcontrollers. It has two stages, which means that while one instruction is executed another one is fetched as shown in Fig 13.
The early 8086 processor executed an instruction in four steps. This allowed for the implementation of the 4-stage pipeline shown in Fig. 14.
Modern processors implement longer pipelines. For example, the Pentium III used a 10-stage pipeline, the Pentium 4 a 20-stage one, and the Pentium 4 Prescott even a 31-stage pipeline. Does a longer pipeline mean faster program execution? Everything has benefits and drawbacks. The undoubted benefit of a longer pipeline is more instructions executed at the same time, which gives higher instruction throughput. But a problem appears when branch instructions come. When a conditional jump appears in the instruction stream, the processor must choose which way the instruction stream should go: should the jump be taken or not? The answer is usually based on the result of a preceding instruction and is known only when the branch instruction is close to the end of the pipeline. In this situation, in modern processors, the branch prediction unit guesses what to do with the branch. If it misses, the pipeline content is invalidated and the pipeline starts operation from the beginning. This causes stalls in program execution, and the longer the pipeline, the more instructions must be invalidated. That is why Intel decided to return to shorter pipelines; in modern microarchitectures, the length of the pipeline varies between 12 and 20 stages.
A superscalar processor increases the speed of program execution because it can execute more than one instruction during a clock cycle. This is realised by simultaneously dispatching instructions to different execution units of the processor. The superscalar processor doesn't implement two or more fully independent pipelines; rather, decoded instructions are sent for further processing to a chosen execution unit, as shown in Fig. 15.
In the x86 family, the first processor with two execution paths was the Pentium, with its U and V pipelines. Modern x64 processors like the i7 implement six execution units. Not all execution units have the same functionality: in the i7 processor, every execution unit has different capabilities, as presented in Table 4.
Execution unit | Functionality |
---|---|
0 | Integer calculations, Floating point multiplication, SSE multiplication, divide |
1 | Integer calculations, Floating point addition, SSE addition |
2 | Address generation, load |
3 | Address generation, store |
4 | Data store |
5 | Integer calculations, Branch, SSE addition |
As mentioned, the pipeline can suffer invalidation if a conditional branch is not properly predicted. The branch prediction unit is used to guess the outcome of conditional branch instructions. It helps to reduce delays in program execution by predicting the path the program will take. Prediction is based on historical data and program execution patterns. There are many methods of predicting branches. In general, the processor implements a buffer with the addresses of the last few branch instructions, together with a history record for every branch. Based on this history, the branch prediction unit can guess whether the branch will be taken.
Hyper-Threading Technology is Intel's approach to simultaneous multithreading, which allows the operating system to execute more than one thread on a single physical core. For each physical core, the operating system sees two logical processor cores and shares the load between them when possible. Hyper-Threading uses the superscalar architecture to increase the number of instructions that operate in parallel in the pipeline on separate data. With Hyper-Threading, one physical core appears to the operating system as two separate processors. The logical processors share the execution resources, including the execution engine, caches, and system bus interface. Only the elements that store the architectural state of the processor, including the registers essential for code execution, are duplicated.
An addressing mode is the way in which the argument of an instruction is specified. The addressing mode defines a rule for interpreting the address field of the instruction before the operand is reached. Addressing modes are used both in instructions which operate on data and in instructions which change the program flow.
Instructions which access data must specify where the data is placed. The data is an argument of the instruction, sometimes called an operand. An operand can be one of the following: register, immediate, direct memory, or indirect memory. As at this point in the book the reader doesn't know any real assembler instructions, we will use a hypothetical instruction copy that copies the data from the source operand to the destination operand. The order of the operands will be similar to high-level languages, where the left operand is the destination and the right operand is the source. Copying data from a to b will be done with an instruction as in the following example:
copy b, a
A register operand is used where the data the processor wants to access is stored, or is intended to be stored, in a register. If we assume that a and b are registers named R0 and R1, the instruction for copying data from R0 to R1 will look as in the following example and as shown in Fig. 16.
copy R1, R0
An immediate operand is a constant or the result of a constant expression. The assembler encodes immediate values into the instruction at assembly time. There can be only one operand of this type in an instruction, and it always appears at the source position in the operand list. Immediate operands are used to initialise a register or variable, or as constants for comparison. Because an immediate operand is encoded in the instruction, it is placed in code memory, not in data memory, and can't be modified during software execution. The instruction which initialises register R1 with the constant (immediate) value of 5 looks like this:
copy R1, 5
A direct memory operand specifies the data at a given address. The address can be given in numerical form or as the name of a previously defined variable. It is equivalent to a static variable definition in high-level languages. If we assume that var represents the address of a variable, the instruction which copies data from the variable to R1 can look like this:
copy R1, var
An indirect memory operand is accessed by specifying the name of a register whose value is the address of the memory location to access. We can compare indirect addressing to a pointer in high-level languages, where the variable does not store the value but points to the memory location where the value is stored. Indirect addressing can also be used to access elements of a table in a loop, where we use an index value which changes on every loop iteration rather than a single address. Different assemblers have different notations for indirect addressing: some use parentheses, some square brackets, and others the @ symbol. Even different assembler programs for the same processor can differ. In the following example, we assume the use of square brackets. The instruction which copies the data from the memory location addressed by the content of the R0 register to the R1 register would look like this:
copy R1, [R0]
Variations of indirect addressing. The indirect addressing mode can have many variations, where the final address doesn't have to be the content of a single register but rather the sum of a constant value and one or more registers. Some variants implement automatic incrementation (similar to the “++” operator) or decrementation (“--”) of the index register before or after instruction execution to make processing tables faster. For example, accessing elements of a table whose base address is named table, where register R0 holds the index of the byte we want to copy from the table to R1, could look like this:
copy R1, table[R0]
Addressing mode with pre-decrementation (decrementing before instruction execution) could look like this:
copy R1, table[--R0]
Addressing mode with post-incrementation (incrementing after instruction execution) could look like this:
copy R1, table[R0++]
The operand of a jump, branch, or function call instruction addresses the destination of the program flow. The result of these instructions is a change of the Instruction Pointer content. While jump instructions should be avoided in structured or object-oriented high-level languages, they are rather common in assembler programming. Our examples will use a hypothetical jump instruction with a single operand: the destination address.
Direct addressing of the destination is similar to direct data addressing. It specifies the destination address as a constant value, usually represented by a name. In assembler, we define the names of addresses in code as labels. In the following example, the code will jump to the label named destin:
jump destin
Indirect addressing of the destination uses the content of the register as the address where the program will jump. In the following example, the processor will jump to the destination address which is stored in R0:
jump [R0]
In all previous examples, the addresses were specified as values which represent an absolute memory location. The resulting address (even when calculated as the sum of some values) was the memory location counted from the beginning of the memory, address “0”. It is presented in Fig. 23.
Absolute addressing is simple and doesn't require any additional calculations by the processor. It is often used in embedded systems, where the software is installed and configured by the designer and the location of programs does not change. Absolute addressing is very hard to use in general-purpose operating systems like Linux or Windows, where the user can start a variety of different programs whose placement in memory differs every time they are loaded and executed. Much more useful is relative addressing, where operands are specified as the difference between the target memory location and some known value which can be easily modified and accessed. Often the operands are specified relative to the Instruction Pointer, which allows the program to be loaded at any address in the address space: the distance between the currently executed instruction and the location of the data it wants to access is always the same. This is the default addressing mode in the Windows operating system on x64 machines. It is illustrated in Fig. 24.
Relative addressing is also implemented in many jump, branch or loop instructions.
The processor can work with different types of data. These include integers of different sizes, floating point numbers, texts, structures and even single bits. All these data are stored in the memory as a single byte or multiple bytes.
Integer data types can be 8, 16, 32 or 64 bits long. If the encoded number is unsigned, it is stored in plain binary representation; if the value is signed, two's complement representation is used. In two's complement representation, the most significant bit (MSB) represents the sign of the number: zero means a non-negative number, one represents a negative value. Table 5 shows the integer data types with their ranges.
Number of bits | Minimum value (hexadecimal) | Maximum value (hexadecimal) | Minimum value (decimal) | Maximum value (decimal) |
---|---|---|---|---|
8 | 0x00 | 0xFF | 0 | 255 |
8 signed | 0x80 | 0x7F | -128 | 127 |
16 | 0x0000 | 0xFFFF | 0 | 65 535 |
16 signed | 0x8000 | 0x7FFF | -32 768 | 32 767 |
32 | 0x0000 0000 | 0xFFFF FFFF | 0 | 4 294 967 295 |
32 signed | 0x8000 0000 | 0x7FFF FFFF | -2 147 483 648 | 2 147 483 647 |
64 | 0x0000 0000 0000 0000 | 0xFFFF FFFF FFFF FFFF | 0 | 18 446 744 073 709 551 615 |
64 signed | 0x8000 0000 0000 0000 | 0x7FFF FFFF FFFF FFFF | -9 223 372 036 854 775 808 | 9 223 372 036 854 775 807 |
Integer calculations do not always cover all the mathematical requirements of an algorithm. To represent real numbers, floating point encoding is used. A floating point number is a representation of a value A which is composed of three fields:
There are two main formats of real numbers, called floating point values. A single precision number is encoded in 32 bits; a double precision floating point number is encoded with 64 bits. They are presented in Fig. 25.
Table 6 shows the number of bits for the exponent and mantissa of single and double precision floating point numbers. It also presents the minimum and maximum values which can be stored using these formats (they are absolute values, and can be positive or negative depending on the sign bit).
The most common representation for real numbers on computers is standardised in the IEEE Standard 754 document. Two modifications are applied which make calculations easier for computers.
A biased exponent means that a bias value is added to the real exponent value. This results in all stored exponents being non-negative, which makes it easier to compare numbers. The normalised mantissa is adjusted to have only a single “1” bit to the left of the binary point, which requires an appropriate exponent adjustment.
Texts are represented as a series of characters. In modern operating systems, texts are encoded using Unicode (often in a two-byte form such as UTF-16), which is capable of encoding not only the 26 basic letters but also the language-specific characters of many different languages. In simpler computers, such as embedded systems, 8-bit ASCII codes are often used: every byte of the text representation in memory contains the ASCII code of a single character. It is quite common in assembler programs to use the zero value (NULL) as the end character of the string, similar to the C/C++ null-terminated string convention.
Data encoded in memory must be compatible with the processor. Memory chips are usually organised as a sequence of bytes, which means that every byte can be individually addressed. For processors of a class higher than 8-bit, the issue of byte order within larger data types appears. There are two possibilities:
These two methods for a 32-bit class processor are shown in Fig. 26.
An interrupt is a request to the processor to temporarily suspend the currently executing code in order to handle the event that caused the interrupt. If the request is accepted, the processor saves its state and executes a function named an interrupt handler or interrupt service routine (ISR). Interrupts are usually signalled by peripheral devices when they have some data to process. Often, peripheral devices do not send an interrupt signal directly to the processor; instead, an interrupt controller in the system collects requests from the various peripheral devices. The interrupt controller prioritises the peripherals to ensure that the more important requests are handled first.
From a hardware perspective, an interrupt can be signalled with the signal state or change.
The interrupt signal comes asynchronously, which means that it can arrive during the execution of an instruction. Usually, the processor finishes this instruction and then calls the interrupt handler. To be able to handle interrupts, the processor must implement a mechanism for storing the address of the next instruction to be executed in the interrupted code. Some implementations use the stack, while others use a special register to store the return address. The latter approach requires software support if interrupts can be nested (if an interrupt can be accepted while already in another ISR).
After finishing the interrupt subroutine, the processor uses the return address to hand program control back to the interrupted code.
Fig. 27 shows how an interrupt works with stack use. The processor executes the program; when an interrupt comes, it saves the return address on the stack and then jumps to the interrupt handler. With the return instruction, the processor returns to the program, taking the address of the next instruction to execute from the stack.
To properly handle interrupts, the processor must recognise the source of the interrupt. Different code should be executed when the interrupt is signalled by a network controller than when the source of the interrupt is a timer. The information about the interrupt source is provided to the processor by the interrupt controller or directly by the peripheral. We can distinguish three main methods of calling the proper ISR for an incoming interrupt.
Interrupts can be enabled or disabled. Disabling interrupts is often used in time-critical code to ensure the shortest possible execution time. Interrupts which can be disabled are named maskable interrupts. They can be disabled with a corresponding flag in a control register. In microcontrollers, there are separate bits for different interrupts.
An interrupt which cannot be disabled is named a non-maskable interrupt. Such interrupts are implemented for critical situations:
In microprocessors, there is a separate non-maskable interrupt input, NMI.
In some processors, it is possible to signal the interrupt by executing special instructions. They are named software interrupts and can be used to test interrupt handlers. In some operating systems (DOS, Linux) software interrupts are used to implement the mechanism of calling system functions.
Another group of interrupts signalled by the processor itself are internal interrupts. They aren't signalled with a special instruction but rather arise in specific situations during normal program execution.
Direct memory access (DMA) is a mechanism for fast data transfer between peripherals and memory. In some implementations, it is also possible to transfer data between two peripherals or from memory to memory. DMA operates without the involvement of the processor; no software is executed during the DMA transfer. It must be supported by the processor and peripheral hardware, and there must be a DMA controller present in the system. The controller plays the main role in transferring the data.
The DMA controller is a specialised unit which controls the data transfer process. It implements several channels, each containing an address register used to address the memory location and a counter specifying how many cycles should be performed. The address register and counter must be programmed by the processor; this is usually done in the system startup procedure. The system with an inactive DMA controller is presented in Fig. 28.
The data transfer process is done in several steps. Let us consider the situation when a peripheral has some data to be transferred.
Everything is done without any action by the processor: no program is fetched and executed. Because everything is done by hardware, the transfer can be done in one memory access cycle, much faster than by the processor. Data transfer by the processor is significantly slower because each cycle requires at least four instructions of program execution and two data transfers: one from the peripheral and another to the memory. The system with an active DMA controller is presented in Fig. 29.
DMA transfer can be done in one of several modes:
Programming in Assembler plays a crucial role in IoT and embedded systems. These systems consist of small, simple devices with built-in microprocessors of significant capability. To effectively utilize the existing hardware resources, it is necessary to have simple, inexpensive, and direct access to the hardware from the software level. One way to achieve this is through programming in Assembler. It is possible to use assembler code directly in many environments, e.g., for Arduino, Raspberry Pi, etc.
Assembler allows writing code that is directly translated into processor instructions. As a result, programs are extremely fast and efficient. In embedded systems, where resources are limited, every optimization matters. Programming in Assembler allows for the maximum utilization of available hardware resources. The ability to directly control processor registers, input/output ports, and other hardware components enables the creation of more advanced and precise applications.
Many ready-made libraries utilize low-level assembler functions. Unfortunately, it is often necessary to write assembler code independently. Programming in Assembler requires a deep understanding of processor architecture. Understanding how the processor works at the instruction level improves system design and optimization.
The first embedded systems appeared in the 1960s. These were specialized computers designed for specific tasks. The hardware was large and expensive. In the 1970s, a breakthrough occurred with the invention of microprocessors such as the Intel 4004, which enabled the creation of smaller, cheaper, and more versatile embedded systems.
In the 1990s, the development of processors went in two directions:
Microcontrollers are characterized by the presence of all elements (processor, memory, and peripherals) on a single chip. This allowed for the creation of more complex and functional devices at lower cost. Depending on the family, new integrated elements appeared: in ARM-based chips, integrated components can include graphics units, positioning modules (GPS), and communication modules (WiFi, LTE/5G, Bluetooth). These elements are not integrated into AVR microcontrollers.
Let's take a look at the mobile device market. Mobile phones, tablets, and other devices are built on the ARM processor architecture. Take, for example, the Snapdragon SoC (System-on-Chip) designed by Qualcomm: this chip integrates a CPU based on the ARM architecture. Such a chip may also have an additional graphics processing unit (GPU) and even a digital signal processor (DSP) for faster signal processing. Similarly, Apple A18 processors are based on the ARM architecture. The main difference between all these mobile devices is the ARM version on which the processor is designed.
ARM stands for Advanced RISC Machine.
Today, ARM has multiple processor series, including Cortex-M, Cortex-R, Cortex-A, and Cortex-X, as well as others. Cortex-X series processors are made for performance, to be used in smartphones and laptops; the series currently supports up to 14 cores, but this value changes over time. The Cortex-A series is made for devices designed to execute complex computational tasks; these processors are built to provide power-efficient workloads, increasing battery life, and, similarly to the Cortex-X series, they may also have up to 14 cores. The Cortex-R series is made for real-time operation, where reaction to events is crucial; these are made specifically for time-sensitive and safety-critical environments. The architecture itself is very similar to Cortex-A processors, with some exceptions that will not be discussed here. The Cortex-M series is for microcontrollers, where low power consumption combined with sufficient computational power is required. Many wearable, IoT, and embedded devices contain ARM microcontrollers.
Mobile devices tend to use Cortex-A series processors; to learn their architecture, we will use the Raspberry Pi. For microcontrollers, we will use a Cortex-M series device: we chose an ST-manufactured microcontroller, the STM32H743VIT6, to work with the assembler programming language. The microcontroller was chosen for a reason: its processor core is the ARMv7-based Cortex-M7, one of the most powerful processor cores among microcontrollers.
Now you may realize that there is more than one assembly language. The most widely used assembly language families are ARM, MIPS, and x86. New architectures will come and replace the old ones, because they may offer reduced power consumption or a smaller silicon die size. Another push for new CPU architectures is malware that uses architectural features against users to steal their data. Attacks such as “Spectre” and “Meltdown” are based on vulnerabilities in modern CPU designs, where the exploited vulnerabilities originate from features of the CPU: speculative execution and branch prediction. This means that in the future there will be new architectures, or newer versions of existing ones, and alongside them new malware.
All assembly language types use similar mnemonics for arithmetic operations (some may require additional suffixes to select options for the instruction). ARM assembly language has specific suffixes that make instructions execute conditionally, suffixes that identify the data type, and one specific suffix (S) that makes the instruction update the processor's status register. An instruction without any suffix executes unconditionally and does not update any status bits in the status register.
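As a brief illustration of these suffixes, consider the following sketch in classic ARM (A32) syntax. Note that on Cortex-M devices, which execute Thumb-2 code, conditional execution is instead expressed with IT blocks, a detail discussed later in the book.

```armasm
        CMP   r0, r1        @ compare r0 with r1; updates the status flags
        ADDEQ r2, r2, #1    @ EQ condition suffix: executes only if r0 == r1
        ADDS  r3, r3, r4    @ S suffix: the addition updates the status flags
        LDRB  r5, [r6]      @ B data-type suffix: loads a single byte
        ADD   r7, r7, #1    @ no suffix: always executes, flags untouched
```

The same mnemonic (ADD) thus appears in unconditional, conditional, and flag-setting variants, which is the pattern the text above describes.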
The evolution of x86 processors began with the Intel 8086 microprocessor, introduced in 1978. It is worth noting that the first IBM machines, the IBM PC and IBM XT, used the cheaper 8088 version with an 8-bit data bus. The 8088 was software-compatible with the 8086, but its 8-bit external data bus allowed for a reduction in the cost of the whole computer. IBM later used the 8086 in some models of its PS/2 line of personal computers. The successor of the 8086, the 80286, was used in the IBM AT, the extended version of the XT, and in the IBM PS/2 286; the AT became the standard architecture for a variety of personal computers designed by many vendors. This started the history of personal computers based on the x86 family of processors. In this section, we will briefly describe the most important features of, and differences between, models of x86 processors.
The 8086 is a 16-bit processor, which means it uses 16-bit registers. With the use of segmentation, operating in so-called real addressing mode, and a 20-bit address bus, it can access up to 1 MB of memory. The clock frequency of this model ranges from 5 to 10 MHz. It implements a three-stage, loosely pipelined instruction pipeline which allows up to three instructions to be processed at the same time. In parallel with the processor, Intel designed a whole set of supporting integrated circuits, which made it possible to build a complete computer. One of these chips is the 8087, a math coprocessor known today as the Floating Point Unit.
This model includes additional hardware units and is designed to reduce the number of integrated circuits required to build a computer. Its clock generator operates at 6 to 20 MHz. The 80186 implements a few additional instructions and can be considered a faster version of the 8086.
The 80286 has an extended, 24-bit address bus and can theoretically use 16 MB of memory. In this model, Intel tried to introduce protected memory management with a theoretical address space of 1 GB. Due to problems with compatibility and with the efficiency of executing software written for the 8086, the extended features were used only by niche operating systems, with IBM OS/2 being the most recognised.
The 80386 is the first 32-bit processor designed by Intel for personal computers. It implements protected and virtual memory management, which was successfully used in operating systems including Microsoft Windows. The architectural elements of the processor (registers, buses) are extended to 32 bits, which allows addressing up to 4 GB of memory. The 80386 extends the instruction pipeline by three stages, allowing up to six instructions to be processed simultaneously. Although no internal cache was implemented, this chip supports connecting external cache memory. Intel uses the name IA-32 for this architecture and instruction set. The original name, 80386, was later replaced with i386.
The i486 is an improved version of the i386 processor. Intel combined in one chip the main CPU, the FPU (except in the i486SX version), and 8 or 16 kB of internal cache memory with its controller. The cache is 4-way set-associative and common for instructions and data. It is a tightly pipelined processor which implements a five-stage instruction pipeline where every stage operates in one clock cycle. The clock frequency ranges from 16 to 100 MHz.
Pentium is the successor of the i486 processor. It is still a 32-bit processor, but it implements a dual integer pipeline, which makes it the first superscalar processor in the x86 family. It can operate with a clock frequency ranging from 60 to 200 MHz. The improved microarchitecture also includes separate caches for instructions and data, and a branch prediction unit, which helps to reduce the cost of pipeline invalidation on conditional jump instructions. The cache memory is 8 kB for instructions and 8 kB for data, both 2-way set-associative.
Pentium MMX (MultiMedia Extension) is the first Intel processor to implement SIMD instructions. It uses the FPU's physical registers for 64-bit MMX vector operations. The clock speed is 120 to 233 MHz. Intel also improved the cache memory compared to the Pentium: both the instruction and data caches are twice as big (16 kB each), and they are 4-way set-associative.
Pentium Pro implements a new architecture (P6) with many innovative units, organised around a 14-stage pipeline, which enhances overall performance. The advanced instruction decoder generates micro-operations, RISC-like translations of the x86 instructions. It can produce up to two micro-operations representing simple x86 instructions and up to six micro-operations from the microcode sequencer, which stores microcode for complex x86 instructions. Micro-operations are stored in a buffer, called the reorder buffer or instruction pool, and are assigned by the reservation station to a chosen execution unit. In the Pentium Pro, there are six execution units. An important feature of the P6 architecture is that instructions can be executed out of order, with flexible use of the physical registers known as register renaming. All these techniques allow more than one instruction to be executed per clock cycle. Additionally, the Pentium Pro implements a new, extended paging unit, which allows addressing 64 GB of memory. The instruction and data L1 caches are 8 kB each. The physical chip also includes the L2 cache, assembled as a separate silicon die in the same package. This made the processor too expensive for the consumer market, so it was mainly used in servers and supercomputers.
Pentium II is a processor based on the experience Intel gathered while developing the Pentium Pro and the MMX extension to the Pentium. It combines the P6 architecture with SIMD instructions, operating at up to 450 MHz. The L1 cache size is increased to 32 kB (16 kB data + 16 kB instructions). Intel decided to move the L2 cache out of the processor's package and assemble it as separate chips on a common PCB. As a result, the Pentium II takes the form of a PCB module, not an integrated circuit like previous models. Although it offered slightly lower performance, this approach made it much cheaper than the Pentium Pro, and the implementation of multimedia instructions made it more attractive for the consumer computer market.
Pentium III is very similar to the Pentium II. The main enhancement is the addition of the Streaming SIMD Extensions (SSE) instruction set, which accelerates SIMD floating-point calculations. Improvements in the production process also made it possible to increase the clock frequency to a range of 400 MHz to 1.4 GHz.
Pentium 4 is the last 32-bit processor developed by Intel; some late models also implement a 64-bit extension. It is based on the NetBurst architecture, which was developed as an improvement to the P6 architecture. An important modification is moving the instruction cache from the input to the output of the instruction decoder. As a result, this cache, named the trace cache, stores micro-operations instead of instructions. To increase market impact, Intel decided to enlarge the number of pipeline stages, using the term “hyperpipelining” to describe the strategy of creating a very deep pipeline. A deep pipeline allows higher clock speeds, and Intel used this to build its marketing strategy. The initial Pentium 4 pipeline is significantly deeper than its predecessors', with 20 stages; the Pentium 4 Prescott even has a 31-stage pipeline. The operating frequency ranges from 1.3 to 3.8 GHz. Intel also implemented Hyper-Threading technology in the Pentium 4 HT version, enabling two virtual (logical) cores in one physical processor which share the workload between them when possible. With the Pentium 4, Intel returned to a single-chip package for both the processor core and the L2 cache. The Pentium 4 extends the instruction set with SSE2 instructions, and the Pentium 4 Prescott with SSE3. The NetBurst architecture suffered from high heat emission, which caused problems with heat dissipation and cooling.
Opteron is the first processor to support the 64-bit instruction set architecture known initially as AMD64 and, more generally, as x86-64. Its versions include the initial single-core and later multi-core processors. The AMD Opteron implements the x86, MMX, SSE, and SSE2 instructions known from Intel processors, as well as AMD's multimedia extension known as 3DNow!.
Pentium D is a multicore 64-bit processor based on the NetBurst architecture known from the Pentium 4. Each chip contains two processor cores.
After facing heat dissipation problems with processors based on the NetBurst microarchitecture, Intel designed the Core microarchitecture, derived from P6. One of its first implementations is the Pentium Dual-Core. After some time, Intel changed the name of this processor line back to Pentium to avoid confusion with the Core and Core 2 processors. There is a vast range of Core processor models with different cache sizes, numbers of cores, and performance levels. From the perspective of this book, we can think of them as modern, advanced, and efficient 64-bit processors implementing all the instructions we consider. Additional information can be found in many internet sources; one of them is the Intel website[2], and another commonly used source is Wikipedia[3].
All Intel Core processors are based on the Core microarchitecture. Intel uses different naming schemas for these processors. Initially, the names represented the number of physical processor cores in one chip: Core Duo has two physical cores, while Core Quad has four.
An improved version of the Core microarchitecture. The naming schema is similar to that of the Core processors.
Intel changed the naming again. Since then, the name no longer strictly reflects the number of cores inside the chip; rather, it represents the overall performance of the processor.
The newest naming (introduced in 2023, at the time of writing this book) drops the “i” letter and introduces “Ultra” versions of high-performance processors. There are Core 3 to Core 9 processors, as well as Core Ultra 3 to Core Ultra 9.
Model | Year | Class | Address bus | Max memory | Clock freq. | Architecture |
---|---|---|---|---|---|---|
8086 | 1978 | 16-bit | 20 bits | 1 MB | 5 - 10 MHz | x86-16 |
80186 | 1982 | 16-bit | 20 bits | 1 MB | 6 - 20 MHz | x86-16 |
80286 | 1982 | 16-bit | 24 bits | 16 MB | 4 - 25 MHz | x86-16 |
80386 | 1985 | 32-bit | 32 bits | 4 GB | 12.5 - 40 MHz | IA-32 |
i486 | 1989 | 32-bit | 32 bits | 4 GB | 16 - 100 MHz | IA-32 |
Pentium | 1993 | 32-bit | 32 bits | 4 GB | 60 - 200 MHz | IA-32 |
Pentium MMX | 1996 | 32-bit | 32 bits | 4 GB | 120 - 233 MHz | IA-32 |
Pentium Pro | 1995 | 32-bit | 36 bits | 64 GB | 150 - 200 MHz | IA-32 |
Pentium II | 1997 | 32-bit | 36 bits | 64 GB | 233 - 450 MHz | IA-32 |
Pentium III | 1999 | 32-bit | 36 bits | 64 GB | 400 MHz - 1.4 GHz | IA-32 |
Pentium 4 | 2000 | 32-bit | 36 bits | 64 GB | 1.3 GHz - 3.8 GHz | IA-32 |
AMD Opteron | 2003 | 64-bit | 40 bits | 1 TB | 1.4 GHz - 3.5 GHz | x86-64 |
Pentium 4 Prescott | 2004 | 64-bit* | 36 bits | 64 GB | 1.3 GHz - 3.8 GHz | IA-32, Intel64* |
Pentium D | 2005 | 64-bit | | | 2.66 GHz - 3.73 GHz | Intel64 |
Pentium Dual-Core | 2007 | 64-bit | | | 1.3 GHz - 3.4 GHz | Intel64 |
Core 2 | 2006 | 64-bit | 36 bits | 64 GB | 1.06 GHz - 3.5 GHz | Intel64 |
* in some models
The x86 architecture was created in the late 1970s. The technology available at that time didn't allow for advanced integrated circuits containing millions of transistors on one silicon die. The 8086, the first processor of the whole x86 family, required additional supporting elements to operate. This led to compromises between efficiency, computational capabilities, silicon size, and cost, and it is the reason why Intel invented some specific architectural mechanisms. One of them is segmentation, which extends the address space from the 64 kB typically available to 16-bit processors to 1 MB. In this chapter, we present some specific features of the x86 and x64 architectures.
The 8086 can address memory in so-called real mode only. In this mode, the address is calculated from two 16-bit elements: a segment and an offset. The 8086 implements four special registers to store the segment part of the address: CS, DS, ES, and SS. During program execution, all addresses are calculated relative to one of these registers. A program is divided into three main segments. The code segment contains processor instructions and their immediate operands; instruction addresses are calculated relative to the CS register. The data segment, related to the DS register, contains the data allocated by the program. The stack segment contains the program stack and is related to the SS register. If needed, it is possible to use an extra segment related to the ES register; by default, it is used by string instructions.
Although the 8086 processor has only four segment registers, there can be many segments defined in the program. The limitation is that the processor can access only four of them at the same time, as presented in Fig 30. To access other segments, it must change the content of the segment register.
The address, which consists of two elements, the segment and the offset, is named a logical address. Both numbers forming a logical address are 16-bit values. So, how is a 20-bit address calculated from two 16-bit values? It is done in the following way. The segment part, always taken from the chosen segment register, is shifted four bit positions to the left; the four bits on the right side are filled with zeros, forming a 20-bit value. The offset value is then added to the result of the shift. The result of this calculation is named the linear address, as presented in Fig 31. In the 8086 processor, the linear address equals the physical address, which is provided via the address bus to the memory of the computer.
The segment in the memory is called a physical segment. The maximum size of a single physical segment can't exceed 64 kB (65536 B), and it can start at an address evenly divisible by 16 only. In the program, we define logical segments which can be smaller than 64 kB, can overlap, or even start at the same address.
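The calculation described above can be sketched in a few lines of Python (an illustration, not part of the book's toolchain). The function name is our own; the arithmetic follows the shift-and-add rule, and the final mask models the 20-bit address bus of the 8086, on which addresses wrap around at 1 MB.

```python
def linear_address(segment: int, offset: int) -> int:
    """Real-mode address: segment shifted left by 4 bits, plus the offset.

    The & 0xFFFFF models the 8086's 20-bit address bus, which wraps at 1 MB.
    """
    return ((segment << 4) + offset) & 0xFFFFF

# 0x1234:0x0010 -> 0x12340 + 0x0010 = 0x12350
print(hex(linear_address(0x1234, 0x0010)))   # 0x12350

# Overlapping segments: different logical addresses, same linear address
print(hex(linear_address(0x0000, 0x0010)))   # 0x10
print(hex(linear_address(0x0001, 0x0000)))   # 0x10

# The extreme case 0xFFFF:0xFFFF wraps around on the 8086
print(hex(linear_address(0xFFFF, 0xFFFF)))   # 0xffef
```

The overlap example also shows why logical segments may start at the same linear address: many different segment:offset pairs map to one physical location.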
With the introduction of processors capable of addressing larger memory spaces, the real addressing mode was replaced with addressing through descriptors. We will briefly describe this mechanism, taking the 80386 as an example. The 80386 is a 32-bit machine built according to the IA-32 architecture; using 32 bits, it is possible to address 4 GB of linear address space. Segmentation in 32-bit processors can be used to implement protection mechanisms, which prevent a process from accessing segments created by other processes and ensure that processes can operate simultaneously without interfering with each other. Every segment is described by its descriptor. The descriptor holds important information about the segment, including its starting address, limit (size), and attributes. As many segments can be defined at the same time, descriptors are stored in memory, in descriptor tables. IA-32 processors contain two additional segment registers, FS and GS, which can be used to access two extra data segments (Fig 32), but all segment registers are still 16 bits in size.
The segment register in protected mode holds a segment selector, which is an index into a table of descriptors. The table is stored in memory and is created and managed by the operating system. Each segment register has an additional part that is hidden from direct access. To speed up operation, the descriptor is loaded from the table into this hidden part automatically each time the segment register is modified, as schematically presented in Fig 33.
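The 16-bit selector itself has a fixed layout defined by the IA-32 architecture: bits 3 to 15 hold the descriptor-table index, bit 2 is the table indicator (0 selects the global descriptor table, 1 the local one), and bits 0 to 1 hold the requested privilege level. A small sketch in Python (the function name is ours) extracts these fields:

```python
def decode_selector(selector: int):
    """Split a 16-bit IA-32 segment selector into its architectural fields."""
    rpl = selector & 0b11          # bits 0-1: requested privilege level
    ti = (selector >> 2) & 0b1     # bit 2: table indicator (0 = GDT, 1 = LDT)
    index = selector >> 3          # bits 3-15: index into the descriptor table
    return index, ti, rpl

# Example: selector 0x002B -> GDT entry 5, requested at privilege level 3
print(decode_selector(0x002B))    # (5, 0, 3)
```

Multiplying the index by the descriptor size (8 bytes) and adding it to the table base address, which the processor keeps in a dedicated register, yields the memory location of the descriptor to be loaded into the hidden part.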