How a computer processor works, and what kind of data the CPU can work with

2. Semiconductor technology evolves constantly, so the principles on which processors are built, the number of elements they contain, and the way those elements interact keep changing. CPUs built on the same basic structural principles are said to share an architecture, and those principles themselves are called the processor architecture (or microarchitecture).

Even within one architecture, however, processors can differ considerably from one another: in system bus frequency, manufacturing process, the structure and size of internal memory, and so on.

3. Never judge a microprocessor by clock frequency alone, measured in megahertz or gigahertz. A chip with a lower clock speed can sometimes be more productive. Just as important are indicators such as the number of cycles needed to execute an instruction and the number of instructions the processor can execute simultaneously.

Evaluation of processor capabilities (characteristics)

In everyday terms, when evaluating a processor's capabilities, pay attention to the following indicators (as a rule, they are listed on the device's packaging or in a store's price list or catalog):

  • Number of cores. Multi-core CPUs contain 2, 4, or more computing cores on one chip (in one package). Increasing the core count is one of the most effective ways to raise processor power, but bear in mind that programs without multi-core support (typically older software) will not run faster on a multi-core processor, because they cannot use more than one core;
  • Cache size. Cache is very fast memory inside the processor, used as a buffer to compensate for delays when working with the slower RAM. As a rule, the more cache, the better;
  • Number of threads, which determines the throughput of the system. The thread count often differs from the core count: for example, a quad-core Intel Core i7 runs 8 threads and outperforms many six-core processors;
  • Clock frequency, which shows how many cycles per unit of time the processor performs. Other things being equal, the higher the frequency, the more operations the processor completes and the more productive it is;
  • The speed of the bus connecting the CPU to the system controller on the motherboard;
  • Process technology: the smaller it is, the less energy the processor consumes and, consequently, the less it heats up.
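Some of these characteristics can be queried programmatically. A minimal sketch using only the Python standard library (cache size and bus speed are not exposed this way and would need platform-specific tools):

```python
import os
import platform

# Logical CPU count: physical cores times threads per core (with
# Hyper-Threading, a 4-core CPU typically reports 8 logical CPUs).
logical_cpus = os.cpu_count()

print(f"Logical CPUs: {logical_cpus}")
print(f"Architecture: {platform.machine()}")
```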

A personal computer is a complex, multifaceted thing, but in every system unit we find the center of all operations and processes: the microprocessor. What does a computer processor consist of, and what is it for?

Many people are surprised to learn what the microprocessor of a personal computer is made of: essentially, ordinary rock.

Yes, that's right. The processor is built on silicon, the same element found in sand and granite.

Hoff's processor

The first microprocessor for a personal computer was invented almost half a century ago, in 1971, by Marcian Edward Hoff and his team of engineers at Intel.

Hoff's first processor ran at just 740 kHz.

The characteristics of today's processors are, of course, not comparable with that figure: current chips are several thousand times more powerful than their ancestor. But before comparing numbers, it is better to get acquainted with the tasks a processor solves.

Many people believe that processors can "think". There is not a grain of truth in this. Any processor, however powerful, consists of many transistors: a kind of switch that performs one single function, passing a signal on or blocking it. The choice depends on the signal voltage.

Looked at another way, the microprocessor consists of registers: cells in which information is processed.

To connect the chip with the rest of the computer's devices, a special high-speed path called the "bus" is used. Tiny electrical signals travel along it at enormous speed. That, in outline, is how the processor of a computer or laptop operates.

The structure of the microprocessor

How is a computer processor arranged? In any microprocessor, 3 components can be distinguished:

  1. The processor core, where the actual work on zeros and ones takes place;
  2. Cache memory, a small store of information right inside the processor;
  3. The coprocessor, a specialized unit where the most complex operations, such as work with multimedia, take place.

In simplified form, the layout of a computer processor is as follows.

One of the main indicators of a microprocessor is its clock frequency: how many cycles the chip performs per second. The overall power of a processor depends on the combination of the indicators listed above.

It is worth noting that rocket launches and satellites were once controlled by microprocessors with clock frequencies a thousand times lower than those of today's chips. Meanwhile, a modern transistor can measure around 22 nm, with individual layers only about 1 nm thick; for reference, 1 nm is about 5 atoms across.

Now you know how a computer processor works and what progress has been made by the engineers at computer manufacturing firms.

It is very hard to surprise the modern electronics consumer. We are accustomed to a smartphone claiming its rightful place in our pocket, a laptop in the bag, a "smart" watch on the wrist obediently counting steps, and earphones pampering our ears with active noise cancellation.

Funny as it is, we are used to carrying not one but two, three, or more computers at once. After all, any device with a CPU deserves that name, no matter what the particular device looks like. Its work is handled by a miniature chip that has come through a turbulent and rapid path of development.

Why bring up the topic of processors? It's simple: over the past ten years there has been a real revolution in the world of mobile devices.

Only 10 years separate these devices. Yet the Nokia N95 seemed like a spaceship to us then, while today we look at ARKit with a certain distrust.

But everything could have turned out differently: a battered Pentium IV might have remained the ordinary buyer's ultimate dream.

We have tried to avoid complicated technical terms, to explain how the processor works, and to find out which architecture is the future.

1. How it all started

The first processors looked nothing like what you see when you open the lid of your PC's system unit.

In the 1940s, instead of microcircuits, computers used electromechanical relays supplemented with vacuum tubes. The tubes acted as diodes whose state could be controlled by lowering or raising the voltage in the circuit.

A single gigantic computer needed hundreds, sometimes thousands, of such tubes and relays. Yet even so, you could not run a simple editor like NotePad or TextEdit from the standard Windows or macOS set on such a machine; the computer simply would not have enough power.

2. The advent of transistors

The first field-effect transistors were described as early as 1928. But the world changed only with the appearance of the so-called bipolar transistor, invented in 1947.

In the late 1940s, experimental physicist Walter Brattain and theorist John Bardeen developed the first point-contact transistor. In 1950 it was followed by the first junction transistor, and in 1954 the well-known manufacturer Texas Instruments announced a silicon transistor.

But the real revolution came in 1959, when Jean Hoerni developed the first planar (flat) silicon transistor, which became the basis for monolithic integrated circuits.

Yes, this is getting a bit intricate, so let's dig a little deeper and deal with the theory.

3. How a transistor works

The job of an electronic component such as the transistor is to control current. Simply put, this cunning little switch controls the flow of electricity.

The main advantage of a transistor over an ordinary switch is that it does not require a person: it can control current on its own, and it switches far faster than you could flip a circuit by hand.

From a school computer science course, you probably remember that a computer "understands" us through combinations of just two states, "on" and "off". To the machine, these are "0" and "1".

The task of the computer is to represent the electric current in the form of numbers.

Where this job of switching states was once done by clumsy, bulky, and inefficient electrical relays, the transistor has now taken over the routine.

From the early 1960s, transistors began to be made from silicon, which made processors not only more compact but also significantly more reliable.

But first, let's deal with the diode

Silicon (Si, "silicium" in the periodic table) is a semiconductor: it conducts current better than a dielectric, but worse than a metal.

Like it or not, to understand how processors work and how they developed further, we have to dive into the structure of a single silicon atom. Don't worry: this will be short and very clear.

The task of the transistor is to amplify a weak signal due to an additional power source.

A silicon atom has four outer electrons, which form bonds (covalent bonds, to be precise) with four neighboring atoms, producing a crystal lattice. While most of the electrons are locked in bonds, a small fraction can move through the lattice. It is because of this partial mobility of electrons that silicon is classified as a semiconductor.

But such weak electron movement would not make the transistor practical, so scientists increased performance by doping: adding atoms of elements with a suitable electron arrangement to silicon's crystal lattice.

A 5-valent phosphorus impurity yields n-type silicon. The extra electron speeds up charge movement, increasing the current flow.

For p-type doping, boron, with three outer electrons, plays that role. The missing electron creates holes in the crystal lattice (they act as positive charges), and because neighboring electrons can hop into these holes, silicon's conductivity increases significantly.

Suppose we take a silicon wafer and dope one part with a p-type impurity and the other with an n-type impurity. We get a diode: the basic element of the transistor.

Electrons in the n-part now tend to migrate toward the holes in the p-part. As a result, the n-side near the junction acquires a slight positive charge and the p-side a slight negative one. The electric field that forms, the potential barrier, prevents further movement of electrons.

If you connect a power source so that "-" touches the p-side of the wafer and "+" touches the n-side, no current can flow: the holes are attracted to the negative contact and the electrons to the positive one, the carriers are pulled away from the junction, and the depletion layer widens. This is a reverse-biased diode.

But if you connect a supply of sufficient voltage the other way around, "+" to the p-side and "-" to the n-side, the electrons on the n-side are repelled by the negative pole and pushed toward the p-side, occupying holes in the p-region.

The electrons are then attracted to the positive pole of the source and keep moving through the p-holes. This is called a forward-biased diode.
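The behavior just described can be caricatured in a few lines of code. This is only a toy model, assuming an idealized silicon barrier of about 0.7 V, not real device physics:

```python
def diode_current(v_anode, v_cathode, threshold=0.7):
    """Idealized diode: conducts only when forward-biased past the barrier.

    threshold approximates the potential barrier at the p-n junction
    (roughly 0.7 V for silicon); units here are purely illustrative.
    """
    bias = v_anode - v_cathode
    if bias > threshold:
        return bias - threshold  # toy model: current grows with excess voltage
    return 0.0                   # reverse-biased or below the barrier: no current

# Forward bias: "+" on the p-side (anode), current flows.
assert diode_current(5.0, 0.0) > 0
# Reverse bias: "+" on the n-side, no current.
assert diode_current(0.0, 5.0) == 0.0
```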

diode + diode = transistor

By itself, a transistor can be thought of as two diodes joined back to back. The p-region (the one with the holes) becomes common to both and is called the "base".

An N-P-N transistor has two n-regions with extra electrons, the "emitter" and the "collector", and one weak region with holes, the p-region, called the "base".

If you connect a power supply (call it V1) across the transistor's n-regions, in either polarity, one of the diodes will be reverse-biased and the transistor will stay closed.

But as soon as we connect another source (call it V2), with the "+" contact on the central p-region (the base) and the "-" contact on an n-region (the emitter), some electrons flow through the newly formed circuit (V2), while the rest are attracted to the positive n-region. As a result, electrons flow into the collector region, and the weak current is amplified.
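As a sketch of the amplification idea: a small base current controls a much larger collector current, up to the limit the external circuit allows. The gain value and saturation limit below are illustrative assumptions, not figures from the article:

```python
def npn_collector_current(i_base, beta=100, i_c_max=0.1):
    """Toy NPN model: the base current is amplified by a gain factor
    beta, capped by a saturation limit set by the external circuit.
    All values are illustrative (currents in amperes)."""
    if i_base <= 0:
        return 0.0                     # no base current: transistor is closed
    return min(beta * i_base, i_c_max)  # amplified, but limited by the circuit

# A 50 microamp base current yields roughly a 5 milliamp collector current.
assert abs(npn_collector_current(50e-6) - 5e-3) < 1e-9
```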

Exhale!

4. So how does a computer actually work?

And now the most important thing.

Depending on the applied voltage, a transistor is either open or closed. If the voltage is insufficient to overcome the potential barrier (the same one at the p-n junction), the transistor stays closed: the "off" state, or "0" in the language of the binary system.

With enough voltage, the transistor opens, and we get "on", or "1" in binary.

This state, 0 or 1, is called a "bit" in the computer industry.

So we get the key property of the very switch that opened the way to computers for mankind!
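In code, the switch behaves like a simple threshold function (the 0.7 V figure is an assumed barrier voltage, used here purely for illustration):

```python
def transistor_output(v_in, v_threshold=0.7):
    """Map an input voltage to a logical bit, as a transistor switch does:
    below the barrier voltage the switch stays off ("0"), above it on ("1")."""
    return 1 if v_in > v_threshold else 0

assert transistor_output(0.2) == 0  # insufficient voltage: closed, "0"
assert transistor_output(3.3) == 1  # enough voltage: open, "1"
```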

The first electronic digital computer, ENIAC, or more simply the first computer, used about 18 thousand vacuum tubes. It occupied an area comparable to a tennis court and weighed 30 tons.

To understand how the processor works, two more key points need to be grasped.

Point 1. We have established what a bit is. But by itself it can express only two states of something: "yes" or "no". So that the computer could understand us better, a combination of 8 bits was devised, called a byte.

A byte can encode a number from 0 to 255. Using these 256 values, combinations of zeros and ones, you can encode anything.
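A quick check of this arithmetic in Python:

```python
# A byte is 8 bits; each extra bit doubles the number of distinct values.
values_per_byte = 2 ** 8
assert values_per_byte == 256  # the numbers 0 through 255

# Every number in that range has an 8-bit pattern:
assert format(0, "08b") == "00000000"
assert format(255, "08b") == "11111111"
assert format(65, "08b") == "01000001"  # 65 is the ASCII code for "A"
```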

Point 2. Numbers and letters without any logic would get us nowhere. That is why the concept of logical operators appeared.

By connecting just two transistors in a particular way, you can implement basic logical operations such as AND and OR. Whether current flows depends on the voltage at each transistor and on how the two are connected, giving different combinations of zeros and ones.
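A sketch of the idea in Python, modeling AND as two switches in series and OR as two switches in parallel (a simplification: real gate circuits use a few more transistors):

```python
def and_gate(a, b):
    """Two transistor switches in series: current flows only if both are on."""
    return 1 if (a == 1 and b == 1) else 0

def or_gate(a, b):
    """Two transistor switches in parallel: current flows if either is on."""
    return 1 if (a == 1 or b == 1) else 0

# Truth tables for all four input combinations:
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
assert [and_gate(a, b) for a, b in inputs] == [0, 0, 0, 1]
assert [or_gate(a, b) for a, b in inputs] == [0, 1, 1, 1]
```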

Through the efforts of programmers, binary values of zeros and ones are translated into decimal so that we can understand what exactly the computer "says". And to enter commands, our usual actions, such as typing letters on a keyboard, are represented as binary chains of commands.

Simply put, imagine a correspondence table, say ASCII, in which each letter matches a combination of 0s and 1s. You press a key on the keyboard, and at that moment the transistors in the processor switch, thanks to the program, so that the letter printed on that key appears on the screen.
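For example, Python's built-in ord and chr expose exactly this ASCII correspondence between letters and bit patterns:

```python
# ord() gives the ASCII/Unicode code of a character; chr() is the reverse.
code = ord("A")
assert code == 65
assert format(code, "08b") == "01000001"  # the bit pattern behind the letter "A"
assert chr(0b01000001) == "A"             # decoding the bits back to the letter
```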

This is a rather primitive explanation of how the processor and the computer work, but it is this understanding that allows us to move on.

5. And the transistor race began

After the British radio engineer Geoffrey Dummer proposed in 1952 placing the simplest electronic components in a monolithic semiconductor crystal, the computer industry leapt forward.

From the integrated circuits Dummer proposed, engineers quickly moved to transistor-based microchips. Several such chips, in turn, made up a CPU.

Of course, those processors bore little resemblance to modern ones. Moreover, until 1964 all processors shared one problem: each required an individual approach, its own programming language.

  • 1964. IBM System/360, a line of software-compatible computers with a universal instruction set: programs written for one processor model could run on another.
  • Early 1970s. The first microprocessors appear, starting with Intel's single-chip Intel 4004: 10 µm process technology, 2,300 transistors, 740 kHz.
  • 1972-1974. Intel 8008 and Intel 4040: 3,500 transistors at 500 kHz for the 8008, and 3,000 transistors at 740 kHz for the 4040.
  • 1974. Intel 8080: 6 µm process technology and 6,000 transistors, clocked at about 2 MHz. This processor was used in the Altair-8800 computer. A Soviet copy of the Intel 8080, the KR580VM80A, was developed by the Kiev Research Institute of Microdevices. 8 bits.
  • 1976. Intel 8085: 3 µm process technology and 6,500 transistors. Clock frequency up to 6 MHz. 8 bits.
  • 1976. Zilog Z80: 3 µm process technology and 8,500 transistors. Clock frequency up to 8 MHz. 8 bits.
  • 1978. Intel 8086: 3 µm process technology and 29,000 transistors. Clock frequency of 5-10 MHz, and the x86 instruction set still in use today. 16 bits.
  • 1979. Motorola 68000: 3.5 µm process technology and roughly 68,000 transistors. This processor was later used in the Apple Lisa computer.
  • 1982. Intel 80186: 3 µm process technology and about 55,000 transistors. Clock frequency up to 25 MHz in later versions. 16 bits.
  • 1982. Intel 80286: 1.5 µm process technology and 134,000 transistors. Clock frequency up to 12.5 MHz. 16 bits.
  • 1985. Intel 80386: 1.5 µm process technology and 275,000 transistors. Clock frequency up to 33 MHz in later versions. 32 bits.

It would seem that the list could be continued indefinitely, but then Intel engineers faced a serious problem.

6. Moore's Law, or how chipmakers carry on

The trouble surfaced in the late 1980s. Back in 1965, Gordon Moore, later one of Intel's founders, had formulated the so-called "Moore's Law". It reads:

Every 24 months, the number of transistors on an integrated circuit chip doubles.

It is hard to call this a law in the strict sense; it is more accurate to call it an empirical observation. Comparing the pace of technological development, Moore concluded that such a trend might hold.
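The observation is easy to express as a formula. A small sketch (the comparison with the i486 is approximate and only illustrates that the trend is rough, not exact):

```python
def transistors_after(years, start_count, doubling_period_years=2):
    """Project a transistor count under Moore's observation:
    the count doubles once every doubling_period_years."""
    return start_count * 2 ** (years // doubling_period_years)

# Starting from the Intel 4004's roughly 2,300 transistors in 1971,
# ten doublings (20 years) predict about 2.36 million transistors,
# in the same ballpark as the i486's 1.2 million: a trend, not a law.
assert transistors_after(20, 2300) == 2300 * 1024
```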

But already during the development of the fourth generation of its processors, the i486, Intel's engineers ran into a performance ceiling: they could no longer fit more transistors into the same area. Technology at the time did not allow it.

The solution found was to use a number of additional elements:

  • cache memory;
  • a pipeline;
  • a built-in coprocessor;
  • a clock multiplier.

Part of the computational load fell on these four units. Cache memory, on the one hand, complicated the design of the processor; on the other, it made the processor much more powerful.

The Intel i486 processor already consisted of 1.2 million transistors, and the maximum frequency of its operation reached 50 MHz.

In 1995, AMD joined in and released the Am5x86, the fastest i486-compatible processor of its time, on a 32-bit architecture. It was manufactured on a 350-nanometer process, its transistor count reached 1.6 million, and the clock frequency rose to 133 MHz.

But chipmakers did not dare to keep chasing transistor counts within the increasingly unwieldy CISC (Complex Instruction Set Computing) architecture. Instead, the American engineer David Patterson proposed optimizing processors by keeping only the most necessary computational instructions.

So processor manufacturers switched to the RISC (Reduced Instruction Set Computing) platform. But even this was not enough.

In 1991, the 64-bit MIPS R4000 was released, running at 100 MHz. Three years later came the R8000, and two years after that the R10000, with clock speeds up to 195 MHz. In parallel, the market for SPARC processors developed, an architecture notable for its lack of multiplication and division instructions.

Instead of fighting over transistor counts, chip manufacturers began to rethink their architectures. Dropping "unnecessary" instructions, executing instructions in a single cycle, providing general-purpose registers, and pipelining made it possible to raise the clock frequency and power of processors quickly without inflating the number of transistors.

Here are just a few of the architectures that appeared between 1980 and 1995:

  • SPARC;
  • ARM;
  • PowerPC;
  • Intel P5;
  • AMD K5;
  • Intel P6.

They were based on the RISC platform, in some cases combined with partial use of CISC. But advancing technology once again pushed chipmakers to keep scaling up their processors.

In August 1999, the AMD K7 Athlon entered the market, manufactured on a 250 nm process with 22 million transistors. Later the bar rose to 38 million transistors, then to 250 million.

Process technology improved and clock frequencies grew. But, as physics tells us, there is a limit to everything.

7. The end of the transistor competition is near

In 2007, Gordon Moore made a very blunt statement:

Moore's Law will soon cease to apply, because it is impossible to keep adding transistors indefinitely. The reason is the atomic nature of matter.

It is plain to the naked eye that the two leading chipmakers, AMD and Intel, have clearly slowed the pace of processor development over the past few years. The precision of the process technology has reached just a few nanometers, but it is no longer possible to pack in ever more transistors.

And while semiconductor manufacturers promise to launch multilayer transistors, drawing a parallel with 3D NAND memory, the walled fortress of x86 now faces a serious competitor that appeared 30 years ago.

8. What awaits "regular" processors

Moore's Law has been considered invalid since 2016, as Intel, the largest processor manufacturer, officially acknowledged. Doubling computing power every two years is no longer possible for chipmakers.

Processor manufacturers are now left with a handful of options, none of them easy.

The first option is quantum computers. There have already been attempts to build a computer that uses particles to represent information. Several such quantum devices exist in the world, but they can cope only with algorithms of low complexity.

Moreover, mass production of such devices in the coming decades is out of the question: expensive, inefficient, and slow!

Yes, quantum computers consume far less power than their modern counterparts, but they will remain slower until developers and component manufacturers move fully to the new technology.

The second option is processors with multiple layers of transistors. Both Intel and AMD have given this technology serious thought: instead of one layer of transistors, use several. In the coming years we may well see processors in which the number of transistor layers matters as much as the core count and clock frequency.

This solution has a right to life, and it would let the monopolists milk the consumer for another couple of decades, but in the end the technology will hit a ceiling again.

Today, aware of the rapid development of the ARM architecture, Intel quietly announced the Ice Lake family of chips. These processors will be manufactured on a 10 nm process and become the basis for laptops, tablets, and mobile devices. But that will happen in 2019.

9. ARM is the future

So, the x86 architecture appeared in 1978 and belongs to the CISC type of platform, meaning it carries instructions for all occasions. Versatility is x86's main strength.

But, at the same time, that versatility played a cruel joke on these processors. x86 has several key disadvantages:

  • complex and, frankly, tangled instructions;
  • high power consumption and heat output.

High performance came at the cost of energy efficiency. Moreover, only two companies currently work on the x86 architecture, and they can safely be called monopolists: Intel and AMD. Only they can produce x86 processors, which means only they steer the technology's development.

Meanwhile, several companies are involved in developing ARM (originally Acorn RISC Machine). Back in 1985, its developers chose the RISC platform as the basis for the architecture's further development.

Unlike CISC, RISC means designing a processor with the minimum necessary number of instructions, but maximum optimization. RISC processors are much smaller than CISC ones, more power-efficient, and simpler.

Moreover, ARM was originally conceived as a competitor to x86: the developers set out to build an architecture more efficient than x86.

As early as the 1940s, engineers understood that shrinking computers, and above all their processors, was a priority task. But 80 years ago, hardly anyone could have imagined a full-fledged computer smaller than a matchbox.

The ARM architecture was once backed by Apple, which launched its Newton handhelds on processors of the ARM6 family.

Desktop computer sales are falling rapidly, while mobile devices sell in the billions every year. Besides performance, the user choosing an electronic gadget is often interested in a couple of other criteria:

  • mobility;
  • autonomy.

The x86 architecture is strong on performance, but take away the active cooling and even a powerful x86 processor will look pathetic next to an ARM design.

10. Why ARM is the undisputed leader

You will hardly be surprised that your smartphone, whether a basic Android or Apple's 2016 flagship, is dozens of times more powerful than the full-fledged computers of the late 1990s.

But how much more powerful is the same iPhone?

Comparing two different architectures head-to-head is a very difficult thing; measurements can only be approximate. Still, you can get a sense of the enormous advantage that smartphone processors built on the ARM architecture provide.

A universal helper here is the synthetic Geekbench benchmark. The utility is available on desktop computers as well as on Android and iOS.

Mid-range and entry-level laptops clearly lag behind the iPhone 7 in performance. The top segment is a little more complicated, but in 2017 Apple released the iPhone X on the new A11 Bionic chip.

The ARM architecture there is already familiar to you, but the Geekbench figures nearly doubled. Laptops from the "upper echelon" got nervous.

And it's only been one year.

ARM is developing by leaps and bounds. While Intel and AMD show 5-10% performance gains year after year, smartphone manufacturers manage to increase processor power two to two-and-a-half times over the same period.

For skeptical users who trawl the top lines of Geekbench, a reminder: in mobile technology, it is size that matters first of all.

Place a tower PC with a powerful 18-core processor that "rips the ARM architecture to shreds" on the table, then put your iPhone next to it. Feel the difference?

11. In lieu of a conclusion

It is impossible to cover the 80-year history of computers in one article. But having read this far, you can understand how the main element of any computer, the processor, is organized, and what to expect from the market in the coming years.

Of course, Intel and AMD will keep working to increase the number of transistors on a single chip and to promote the idea of multilayer elements.

But do you, as a customer, need such power?

I doubt you are dissatisfied with the performance of an iPad Pro or a flagship iPhone X, or with the multicooker in your kitchen, or the picture of a 65-inch 4K TV. Yet all these devices run on processors with the ARM architecture.

Microsoft has already officially announced that it is looking at ARM with interest: the company added support for the architecture back in Windows 8.1 and is now actively working in tandem with the leading ARM chipmaker Qualcomm.

Google has also taken a look at ARM: the Chrome OS operating system supports the architecture, and several Linux distributions compatible with it have appeared as well. And this is just the beginning.

Just try for a moment to imagine how pleasant it will be to combine an energy-efficient ARM processor with a graphene battery. This architecture is what will make it possible to build ergonomic mobile gadgets that can dictate the future.



You are reading these lines on a smartphone, tablet, or computer. Every one of these devices is built around a microprocessor: the "heart" of any computing device. There are many types of microprocessors, but they all perform the same tasks. Today we will talk about how the processor works and what it does. At first glance this all seems obvious, but many users would be glad to deepen their knowledge of the key component that makes a computer work. We will learn how technology based on simple digital logic lets your computer not only solve mathematical problems but also serve as an entertainment center. How do just two digits, one and zero, turn into colorful games and movies? Many people have asked themselves this question and will be glad to get an answer. After all, even the recently introduced AMD Jaguar processor, on which the latest game consoles are based, rests on the same old logic.

In English-language literature, the microprocessor is often called a CPU (central processing unit). The name reflects the fact that a modern processor is a single chip. The first microprocessor in the history of mankind was created by Intel back in 1971.

The role of Intel in the history of the microprocessor industry


We are talking about the Intel 4004. It was not powerful: it could only add and subtract, and it processed just four bits of information at a time (that is, it was 4-bit). But for its time its appearance was a major event: an entire processor in one chip. Before the Intel 4004, computers were built from whole sets of chips or from discrete components (transistors). The 4004 microprocessor formed the basis of one of the first portable calculators.

The first microprocessor for home computers was the Intel 8080, introduced in 1974. All the computing power of an 8-bit computer fit in one chip. But the announcement of the Intel 8088 mattered even more: it appeared in 1979 and from 1981 was used in the first mass-produced IBM PC personal computers.

Processors then grew steadily in capability. Anyone even slightly familiar with the history of the microprocessor industry remembers that the 8088 was succeeded by the 80286, then the 80386, then the 80486, and then several generations of Pentiums: Pentium, Pentium II, Pentium III, and Pentium 4. All these Intel processors are based on the basic design of the 8088 and are backward compatible: a Pentium 4 could run any piece of code written for the 8088, just about five thousand times faster. Not so many years have passed since then, yet several more generations of microprocessors have come and gone.


Since 2004, Intel has been offering multi-core processors. The number of transistors in them has grown by millions. But even today's processors obey the same general rules that governed the early chips. The table traces the history of Intel microprocessors through 2004. A few clarifications about what its columns mean:
  • Name. The processor model
  • Date. The year the processor was first introduced. Many processors were introduced several times, each time at a higher clock speed, so a later revision of a chip could be re-announced years after the first version reached the market
  • Transistors. The number of transistors on the chip. You can see that this figure has grown steadily
  • Microns. One micron is one millionth of a meter. This value is the width of the thinnest wire on the chip. For comparison, a human hair is about 100 microns thick
  • Clock speed. The maximum clock rate of the processor
  • Data width. The "bitness" of the processor's arithmetic logic unit (ALU). An 8-bit ALU can add, subtract, multiply, and perform other operations on two 8-bit numbers, while a 32-bit ALU works with 32-bit numbers. To add two 32-bit numbers, an 8-bit ALU has to execute four instructions; a 32-bit ALU does it in one. In many (but not all) cases the width of the external data bus matches the ALU's bitness. The 8088 had a 16-bit ALU but an 8-bit bus, while late Pentiums had a 64-bit bus even though the ALU was still 32-bit
  • MIPS (millions of instructions per second). A rough measure of processor performance. Modern microprocessors do so many different things that this metric has lost much of its original meaning and is mainly useful for comparing the raw power of several processors (as in this table)

There is a direct relationship between clock speed, transistor count, and the number of operations a processor performs per second. For example, the 8088 ran at 5 MHz but managed only about 0.33 million instructions per second: executing one instruction took roughly 15 clock cycles. By 2004, processors could execute two instructions per clock cycle. This improvement came from increasing the number of transistors on the chip.

A chip is also called an integrated circuit (or simply a microchip). It is usually a small, thin piece of silicon onto which the transistors are etched. A chip two and a half centimeters on a side can contain tens of millions of transistors, while the simplest processors may be squares only a few millimeters on a side, which is enough for a few thousand transistors.

Microprocessor logic


To understand how a microprocessor works, you should study the logic it is based on and get acquainted with assembly language, the native language of the microprocessor.

The microprocessor is capable of executing a specific set of machine instructions (commands). Operating with these instructions, the processor performs three main tasks:

  • Using its arithmetic logic unit (ALU), the processor performs mathematical operations: addition, subtraction, multiplication and division. Modern microprocessors fully support floating-point operations (using a dedicated floating-point unit)
  • The microprocessor can move data from one memory location to another
  • The microprocessor can make decisions and, based on them, "jump," that is, switch to executing a different set of instructions

The microprocessor contains:

  • An address bus. This bus may be 8, 16 or 32 bits wide; it sends an address to memory
  • A data bus, 8, 16, 32 or 64 bits wide. This bus sends data to memory or receives data from it. When people talk about a processor's "bitness," they usually mean the width of the data bus
  • RD (read) and WR (write) lines, which tell memory whether the processor wants to read or write
  • A clock line, which feeds the clock pulses that sequence the processor
  • A reset line, which zeroes the program counter and restarts execution

Since this subject is quite complex, we will assume that both buses, address and data, are only 8 bits wide. Let's briefly look at the components of this relatively simple microprocessor:

  • Registers A, B and C are latches used for intermediate data storage
  • The address latch is similar to registers A, B and C
  • The program counter is a latch that can increment its value by one in a single step (when it receives the appropriate command) and can reset its value to zero (likewise on command)
  • The ALU (arithmetic logic unit) can add, subtract, multiply and divide 8-bit numbers, or can act as a simple adder
  • The test register is a special latch that stores the results of comparisons performed by the ALU. The ALU typically compares two numbers and determines whether they are equal or whether one is greater than the other. The test register also stores the carry bit from the adder's last operation. It holds these values in flip-flops, and the instruction decoder can later use them to make decisions
  • The six boxes labeled "3-State" on the diagram are tri-state buffers. Many output sources can be attached to a single wire, but a tri-state buffer lets only one of them (at a time) drive the wire with a "0" or a "1"; the rest are effectively disconnected from it
  • The instruction register and the instruction decoder keep all of the above components under control

The diagram does not show the instruction decoder's control lines, which can be expressed as the following "orders":

  • "Register A, latch the value currently on the data bus"
  • "Register B, latch the value currently on the data bus"
  • "Register C, latch the value currently coming from the ALU"
  • "Program counter, latch the value currently on the data bus"
  • "Address register, latch the value currently on the data bus"
  • "Instruction register, latch the value currently on the data bus"
  • "Program counter, increment [by one]"
  • "Program counter, reset to zero"
  • "Activate one of the six tri-state buffers" (six separate control lines)
  • "Tell the ALU what operation to perform"
  • "Test register, latch the test bits from the ALU"
  • "Activate the RD (read) line"
  • "Activate the WR (write) line"

The instruction decoder receives its input bits from the test register, the clock line, and the instruction register. Simplifying its job as much as possible, it is this module that "tells" the processor what needs to be done at any given moment.

Microprocessor memory


Familiarity with computer memory and its hierarchy will help you better understand the contents of this section.

Earlier we mentioned the address and data buses, as well as the RD (read) and WR (write) lines. These buses and lines connect to memory: random-access memory (RAM) and read-only memory (ROM). In our example, the microprocessor's buses are 8 bits wide. This means it can address 256 bytes (two to the eighth power), and at any one time it can read from or write to memory 8 bits of data. Let's assume this simple microprocessor has 128 bytes of ROM (starting at address 0) and 128 bytes of RAM (starting at address 128).

A ROM chip contains a preprogrammed, permanent set of bytes. The address bus tells the ROM which byte to place on the data bus; when the read line (RD) changes state, the ROM chip presents the selected byte to the data bus. That is, from ROM, data can only be read.

From RAM, the processor can not only read data but also write it; a signal on either the read (RD) or the write (WR) line indicates which operation is being performed. Unfortunately, RAM is volatile: when the power goes off, it loses everything stored in it. That is why a computer also needs a non-volatile read-only memory.

In fact, a computer can theoretically get by without RAM at all: many microcontrollers place the necessary data bytes directly on the processor chip. But it cannot do without ROM. In personal computers, this ROM is called the BIOS (Basic Input/Output System). At startup, the microprocessor begins by executing the instructions it finds in the BIOS.

The BIOS code tests the computer's hardware, then accesses the hard drive and selects the boot sector. The boot sector is a separate small program that the BIOS first reads from disk and then places in RAM. The microprocessor then begins executing the boot-sector instructions from RAM. The boot-sector program tells the microprocessor what further data (destined for later execution by the processor) to move from the hard drive into RAM. This is how the processor loads the operating system.

Microprocessor instructions


Even a very simple microprocessor can process a fairly large set of instructions. The instruction set is a kind of template; each instruction loaded into the instruction register has its own meaning. Sequences of bits are hard for people to remember, so each instruction is written as a short word representing a specific command. These words make up the processor's assembly language, and an assembler translates them into the binary code the processor understands.

Here is the list of assembly-language mnemonics for the simple hypothetical processor we are using as our example:

  • LOADA mem — Load register A from a memory address
  • LOADB mem — Load register B from a memory address
  • CONB con — Load a constant value into register B
  • SAVEB mem — Save the value of register B to a memory address
  • SAVEC mem — Save the value of register C to a memory address
  • ADD — Add the values of registers A and B; store the result in register C
  • SUB — Subtract the value of register B from the value of register A; store the result in register C
  • MUL — Multiply the values of registers A and B; store the result in register C
  • DIV — Divide the value of register A by the value of register B; store the result in register C
  • COM — Compare the values of registers A and B; store the result in the test register
  • JUMP addr — Jump to the specified address
  • JEQ addr — Jump to the specified address if the compared values were equal
  • JNEQ addr — Jump to the specified address if the compared values were not equal
  • JG addr — Jump to the specified address if the first value was greater
  • JGE addr — Jump to the specified address if the first value was greater than or equal
  • JL addr — Jump to the specified address if the first value was less
  • JLE addr — Jump to the specified address if the first value was less than or equal
  • STOP — Stop execution

The mnemonics are short English words for a reason: assembly language (like many other programming languages) is based on English, the everyday language of the people who created digital technology.

How the microprocessor works: calculating a factorial


Let's watch the microprocessor run a simple program that calculates the factorial of the number 5. First, let's solve the problem "on paper":

factorial of 5 = 5! = 5 * 4 * 3 * 2 * 1 = 120

In the C programming language, the code that performs this calculation looks like this:

a = 1;
f = 1;
while (a <= 5)
{
    f = f * a;
    a = a + 1;
}

When this program completes, the variable f will contain the value of the factorial of five.

A C compiler translates this code into assembly-language instructions. In the processor we are considering, RAM starts at address 128 and ROM (which holds the program) starts at address 0. In this processor's assembly language, the program looks like this:

// Assume a is at address 128
// Assume f is at address 129
0  CONB 1      // a = 1;
1  SAVEB 128
2  CONB 1      // f = 1;
3  SAVEB 129
4  LOADA 128   // if a > 5 then jump to 17
5  CONB 5
6  COM
7  JG 17
8  LOADA 129   // f = f * a;
9  LOADB 128
10 MUL
11 SAVEC 129
12 LOADA 128   // a = a + 1;
13 CONB 1
14 ADD
15 SAVEC 128
16 JUMP 4      // loop back to the "if"
17 STOP

Now the question arises: what do all these instructions look like in ROM? Each instruction must be represented as a binary number. To keep the material simple, let's assume each assembly-language instruction of our processor is assigned a unique number:

  • LOADA - 1
  • LOADB - 2
  • CONB - 3
  • SAVEB - 4
  • SAVEC - 5
  • ADD - 6
  • SUB - 7
  • MUL - 8
  • DIV - 9
  • COM - 10
  • JUMP - 11
  • JEQ - 12
  • JNEQ - 13
  • JG - 14
  • JGE - 15
  • JL - 16
  • JLE - 17
  • STOP - 18

// Assume a is at address 128
// Assume f is at address 129
Addr  Machine instruction/value
0     3     // CONB 1
1     1
2     4     // SAVEB 128
3     128
4     3     // CONB 1
5     1
6     4     // SAVEB 129
7     129
8     1     // LOADA 128
9     128
10    3     // CONB 5
11    5
12    10    // COM
13    14    // JG 17
14    31
15    1     // LOADA 129
16    129
17    2     // LOADB 128
18    128
19    8     // MUL
20    5     // SAVEC 129
21    129
22    1     // LOADA 128
23    128
24    3     // CONB 1
25    1
26    6     // ADD
27    5     // SAVEC 128
28    128
29    11    // JUMP 4
30    8
31    18    // STOP

As you can see, seven lines of C code became 18 lines of assembly, which occupy 32 bytes of ROM. (Note that jump operands hold the ROM address of their target: JG 17 is encoded as 14 followed by 31, the ROM address where STOP lives.)

Decoding


A conversation about decoding has to start with a terminological note. Alas, not all computer terms have unambiguous equivalents in Russian; terminology was often translated spontaneously, so the same English term may be rendered in several ways. That is what happened with the most important component of microprocessor logic, the instruction decoder: specialists call it both an instruction decoder and a command decoder, and neither variant is any more or less "correct" than the other.

The instruction decoder translates each machine code into the set of signals that drive the various components of the microprocessor. Simplifying its role, it is the part that coordinates the "software" with the "hardware."

Let's trace the decoder's work using the ADD instruction as an example:

  • During the first clock cycle, the instruction is loaded. At this stage the instruction decoder must: activate the tri-state buffer for the program counter; activate the RD (read) line; and activate the data-in tri-state buffer so the value passes into the instruction register
  • During the second clock cycle, the ADD instruction is decoded: the ALU is told to perform the addition, and the result is latched into register C
  • During the third clock cycle, the program counter is incremented (in principle this can overlap with what happened during the second cycle)

Each instruction can be broken down into a sequence of operations that manipulate the components of the microprocessor in a specific order. Program instructions thus cause entirely physical changes, for example, flipping the state of a latch. Some instructions need two or three clock cycles to complete; others may need five or six.

Microprocessors: performance and trends


The number of transistors in a processor has a major effect on its performance. As shown earlier, the 8088 needed about 15 clock cycles to execute a typical instruction, and about 80 cycles to perform a single 16-bit multiplication, because of how its ALU's multiplier was built. The more transistors, and therefore the more powerful the multiplier, the more the processor gets done in a single cycle.

Many transistors also make pipelining possible. In a pipelined architecture, the execution of successive instructions partially overlaps. An instruction may still take five cycles to execute, but if five instructions are in flight simultaneously (each at a different stage of completion), then on average one instruction completes every clock cycle.

Many modern processors have more than one instruction decoder, each with its own pipeline. This allows more than one instruction to complete per clock cycle, and implementing it requires an enormous number of transistors.

64-bit processors


Although 64-bit processors only became widespread a few years ago, they have existed for quite a while: since 1992. Both Intel and AMD now offer such processors. A 64-bit processor is one that has a 64-bit arithmetic logic unit (ALU), 64-bit registers, and 64-bit buses.

The main reason processors moved to 64 bits is the expanded address space. 32-bit processors can address only two or four gigabytes of RAM. Those figures once seemed gigantic, but today such memory sizes surprise no one. A few years ago a typical computer had 256 or 512 megabytes of RAM, and back then the 4 GB limit only troubled servers and machines running large databases.

It quickly turned out, though, that even ordinary users sometimes run short of two or even four gigabytes of RAM. That annoying limitation does not apply to 64-bit processors: their address space looks practically boundless these days, two to the sixty-fourth power bytes, billions of gigabytes. RAM that large is not expected in the foreseeable future.

The 64-bit address bus, along with the wide, fast data buses of the matching motherboards, also lets 64-bit computers speed up input and output when talking to devices such as the hard drive and the video card. These capabilities substantially raise the performance of modern computers.

But not every user will feel the benefits of 64-bit architecture. It matters most to those who edit video and photos or work with very large images, and it is appreciated by connoisseurs of computer games. Users who merely chat on social networks, browse the web, and edit text files are unlikely to notice any advantage from these processors.

Sourced from computer.howstuffworks.com

There is plenty of information about processors on the Internet: you can find piles of articles about how they work, mostly mentioning registers, cycles, interrupts, and so on. But for someone unfamiliar with all these terms and concepts, it is hard to dive into the subject "on the fly." It is better to start small, with a basic understanding of how the processor is physically built and what main parts it consists of.

So, what is inside a microprocessor if we take it apart?

Number 1 marks the metal cover (heat spreader) of the microprocessor, which conducts heat away and protects what lies beneath it (that is, the processor itself) from mechanical damage.

Number 2 is the die itself, the most important and most expensive-to-manufacture part of the microprocessor. All computation (the processor's main job) happens in this die; the more complex and advanced it is, the more powerful, and the more expensive, the processor. The die is made of silicon. The manufacturing process is in fact extremely complex, with dozens of steps; see this video for details:

Number 3 is a special textolite (fiberglass laminate) substrate to which all the other parts of the processor are attached. It also serves as a contact pad: on its underside there are a large number of gold "dots," the contacts (slightly visible in the figure). The substrate provides the connection to the die, since it is impossible to act on the die directly.

The cover (1) is attached to the substrate (3) with a heat-resistant adhesive sealant. There is no air gap between the die (2) and the cover; its place is taken by thermal interface material, which, once set, forms a "bridge" between the die and the cover that ensures very good heat transfer.

The die is attached to the substrate with solder and sealant, and the substrate's contacts are wired to the contacts of the die. This photo (at 170x magnification) clearly shows how the die's contacts are connected to the substrate's contacts with extremely thin wires:

In general, processor construction differs a great deal between manufacturers and even between models from the same manufacturer. But the principle remains the same: there is always a contact substrate, a die (or several dies in one package), and a metal cover for heat dissipation.

For example, here is the contact pad of an Intel Pentium 4 processor (shown upside down):

The shape of the contacts and their layout depend on the processor and the motherboard (the sockets must match). For example, in the figure just above, the processor's contacts have no "pins," because the pins sit directly in the motherboard socket.

In the other arrangement, the contact "pins" stick out of the substrate itself. This is typical mainly of AMD processors:

As mentioned above, the construction of different processor models from the same manufacturer can vary; a vivid example is the quad-core Intel Core 2 Quad, which is essentially two dual-core dies from the Core 2 Duo line combined in one package:

Important! The number of dies inside a processor and the number of processor cores are not the same thing.

Modern Intel processor models often carry two dies (chips) at once. The second die is the processor's graphics core, which plays the role of a video card built into the processor: if the system has no discrete video card, the graphics core takes over its job, and quite capably (in some processor models the graphics core is powerful enough to run modern games at medium graphics settings).

And that, in short, is how the central processing unit is built.
