What is Intel Corporation: the company and the background of its microprocessor technologies

Until the late 90s, Intel paid little attention to marketing and brand promotion. Making the best processors in the world was considered sufficient. But at some point, aggressively advertised competitors such as Apple, IBM and AMD began to seriously trouble the leader of the computer market. This annoyed Intel's management, and they decided to take a chance. In 1989, a serious problem arose with sales of 386 processors: many 286 users did not understand why they should spend money on a more powerful processor. So the Red X project was created: a magazine spread showing "286" in sans-serif type on a white background, crossed out with a bold red X, with the Intel logo in the corner. It was a crazy move. Marketing experts called it corporate suicide and "devouring your own child." But the risk paid off. Intel's marketers realized that boring advertising in specialized publications for industrial customers does not work; they had to appeal to the end consumer.

Background of microprocessor technologies

The late 1960s were the heyday of hardwired digital integrated circuits in information technology. It became possible to build relatively compact calculating machines, automation and control systems.

But devices built on such integrated circuits were not universal: each task required its own dedicated circuit. Every attempt by engineers to create multipurpose machines led to a significant increase in size and excessive circuit complexity.

A turning point toward new technology was brewing, and Intel was the first to make the breakthrough.

Founders of Intel


Photo: Intel Free Press

Intel was founded by Robert Noyce and Gordon Moore. A little later, Andy Grove joined them.

Noyce grew up in the family of a Congregational church minister, which did not stop him from graduating from the Massachusetts Institute of Technology and becoming an integrated circuit engineer. He married the most beautiful graduate of the university, with whom he raised four children.

Gordon Moore, a sheriff's son, received his Ph.D. in chemistry and physics from the California Institute of Technology. In 1965 he formulated the famous "Moore's Law." In 1950 he met a girl named Betty, who became his wife and gave him two sons.

Andy Grove, a native of Hungary, was born into a Jewish family; as a result of constant persecution, he emigrated to the United States in 1956 to live with his uncle. He received his PhD in chemical engineering from the University of California. He is the author of the business maxim "Only the paranoid survive."

Although Intel was created by Robert Noyce and Gordon Moore, Grove, initially hired as a top manager, is also considered a founder of the enterprise.

Start

Eight talented engineers, later to be known as the Treacherous Eight, founded Fairchild Semiconductor in 1957 to design and manufacture silicon transistors. Not quite grasping the commercial games of Silicon Valley, the Treacherous Eight fell under the control of Fairchild Camera & Instrument, which began to use Fairchild Semiconductor as a cash cow. Salaries fell, and the best developers began to leave.

This was also due to restrictions on the freedom of the Treacherous Eight, who worked hard but, in the parent company's opinion, were not organized. The most freedom-loving employees tried to protest, in vain. In retaliation, Bob Widlar drove to work with a goat that grazed the lawn in front of the office and fouled it.

Foundation of the company

Robert Noyce and Gordon Moore quit and founded their own company in 1968. A firm with no track record in Silicon Valley would normally have no chance of getting investment: no one wants to deal with a nobody. But with their reputation as serious microelectronics developers, they did not have to look for an investor for long. It was enough for Noyce to write a one-page business plan for the investor to allocate $2.5 million the same day.

Initially, the company was to be named after the founders' initials, N. M. Electronics, but the name evoked old-fashioned provincial tool companies. Then, imitating Hewlett-Packard, they tried the phrase Moore-Noyce, but spoken aloud it sounded like "more noise." They settled on Integrated Electronics, but it felt impersonal. Then it occurred to someone to shorten both words and combine them into one legendary name: Intel.

Entering the market

The Intel startup began with the development of random-access memory chips, which required huge outlays on equipment. They had to economize. Constantly on the run looking for additional investors, Noyce drew a salary of only $30,000 a year, a third of what he had earned at Fairchild Semiconductor.

However, just 18 months later, Intel introduced its first chip, the 3101 SRAM, and a few months after that the 1101, based on MOS technology. This rapid and unpredictable growth worried competitors. The transition to MOS technology was a major leap forward.

But Intel's golden hour came after the Japanese company Busicom approached them. The Japanese asked to combine 12 modules into one. In effect, this was the process of creating a computer on a single chip, the prototype of the modern processor, and it gave Intel its forward momentum.


Marketing policy

For a long time, Intel was unknown to the end customer; the average user was indifferent to the brand of the processor installed in the computer. Since the mid-90s, facing a real commercial threat from AMD, Intel has been investing millions of dollars in ingredient branding. Now every computer carries the company logo, and on television, in magazines and on websites Intel advertising drills into the layman's mind the idea of buying computers only with Intel processors. It worked.

Financial growth

Headquarters in Santa Clara
Photo: Coolcaesar

For a quarter of a century, Intel has invariably held the palm among manufacturers of processors and motherboards. The team of 12 engineers it employed in 1968 has grown to 150,000, and the initial $2.5 million of borrowed capital has turned into a company book value of $170.85 billion.

Sales revenue in recent years has fluctuated between $53 and 56 billion a year, with net income of $9-13 billion. Intel produces about 80% of the world's processors, and roughly the same share of graphics chips.

Intel's marketing policy and regular release of innovative products have made competitors' attempts to approach its sales levels practically futile. The well-known AMD, for example, produces only 10% of processors, which prompts it to file regular antitrust complaints against Intel.

Intel in Russia

Intel officially entered Russia in 1991. In a little over a quarter of a century, Intel has opened three research and development (R&D) centers in Russia: in Nizhny Novgorod, Novosibirsk and Moscow. In addition, Intel works with universities to improve the research skills of teachers and students. At MIPT, a department of microprocessor technologies was opened with Intel's assistance.

Intel these days

Of the company's founders, only 88-year-old Gordon Moore, who is not directly involved in managing the company, is still alive. Intel is led by CEO Brian Krzanich and President Renée James.

In 2017, Intel remains the global leader in microprocessor devices. Interestingly, when Robert Noyce sold the first shares of Intel in 1971, he could hardly have imagined that by the 90s every dollar invested by a shareholder would return $270,000.

The following divisions are in operation:

  • Intel Client Computing Group
  • Data Center Group
  • Internet of Things Group
  • Non-volatile Memory Solutions Group
  • Programmable Solutions Group

On March 23, 2017, Intel announced the appointment of two new members to the corporation's board of directors: Omar Ishrak, CEO of medical equipment manufacturer Medtronic, and Greg Smith, CFO and executive vice president for corporate development and strategy at Boeing.


With the arrival of Omar Ishrak and Greg Smith, the number of members on Intel's board of directors grew to 13, including chairman Bryant.

Performance indicators

2018: Revenue grew 13% to $70.85 billion

Mergers and acquisitions

Intel's history is filled with numerous acquisitions.

Development Centers

In Russia

In Europe

Intel Exascale Computing Research Center: Intel, the French Atomic Energy Commission, the French national agency for high-performance systems and Versailles Saint-Quentin-en-Yvelines University agreed to establish the Exascale Computing Research Center in Paris. It will develop high-performance systems that operate thousands of times faster than today's most powerful supercomputers.

Germany: Intel's German research centers are located in Braunschweig, Munich, Saarbrücken and Ulm. The Braunschweig center researches future generations of microprocessors and computer platforms. It also studies high-performance systems with tens to hundreds of computing cores, system-on-a-chip solutions for mobile internet devices, and new computer memory architectures. One of its key areas is the development of emulation systems that reduce the time to market for new processors.

The Open Research Laboratory in Munich opened in March 2009. Both internal and open research are carried out there to help create new business models. Saarbrücken is home to the Intel VCI, the Intel Visual Computing Institute. Founded in May 2009, it is the largest project in Europe organized jointly with a university, Saarland University in Saarbrücken. It conducts both fundamental and applied research aimed at developing new means of human-computer interaction. Ulm develops software tools for mobile devices, suites for debugging applications for embedded solutions, and applications for multi-core systems.

Ireland: Intel's research activity in Ireland concerns the search for and development of new ways of manufacturing chips. Research focuses mainly on nanotechnology and ways to further extend Moore's law: new memory structures, technologies for self-assembly of nanoparticles, uses for nanotubes, new silicon chip designs, and so on.

Ireland is home to a joint institute founded by Intel Labs Europe and the National University of Ireland. Its goal is to develop new models and methods for introducing ICT. The center is supported by a consortium of key market players, non-profit organizations and academic communities, including Microsoft, SAP and Ernst & Young. Another center, the TRIL Centre in Dublin, focuses on improving the quality of life and social interaction of the elderly and maintaining the independence of those who suffer from memory disorders; an investment of about $30 million is planned for it over three years. Another Intel lab, based in Shannon and founded in 2000, develops technologies for blade servers and highly integrated compact embedded systems.

Israel: The research center in Haifa was founded in 1974, becoming Intel's first planning and development center outside the United States. Today Intel has four centers in the country: a development center in Haifa with branches in Jerusalem and Yakum, and a center in Petah Tikva. Most of Intel's engineers in Israel work on computer processors, wireless communication technologies, software and entertainment technology. Haifa is currently developing new multi-core processor architectures that could fit into thinner and lighter devices. Israel also develops LAN controllers and firmware; Jerusalem develops Intel vPro components, while Petah Tikva works on WiMAX solutions.

Poland: Located in Gdansk, this Intel center is the largest in the European region. The laboratory opened in October 1999 after the acquisition of Olicom Poland. Its research group is divided into five teams developing software for the Intel Digital Enterprise Group and Mobility Group.

Saudi Arabia: The Intel research center is located in Dhahran. Local experts develop hardware and software tools that enable oil and gas companies to build specialized software for field exploration. The laboratory is equipped with a computer system based on Itanium 2 and Xeon processors.

Spain: The research center in Barcelona, opened in 2002, develops microprocessor architectures and tools for writing software for future processors.

Turkey: Founded in 2006, the research center in Istanbul is one of eleven Intel innovation centers in the world. Its areas of work are digital technologies in healthcare, mobile systems and the digital home. It also develops technologies for industry and education.

United Kingdom: Built near Heathrow Airport, FasterLAB develops solutions for the financial industry, works on high-performance computing and virtualization technologies, and develops standards.

UAE: In the United Arab Emirates, Intel centers are open in Dubai and Abu Dhabi. The Abu Dhabi Applied Research Center tests and optimizes products based on Intel solutions for the oil and gas industry; these products help companies find new deposits and bring finished products to market.

Today we will talk about the history of one company, without which most of the computers in the world would not work now. We are talking, of course, about Intel.

Intel was born in the minds of Robert Noyce and Gordon Moore while they were at Fairchild Semiconductor in the 1960s. The company in those years was a leading manufacturer of analog integrated circuits, but not everything went smoothly: new management came in and began to restrict the freedom of the company's scientists and employees. So in 1968, Noyce and Moore left Fairchild Semiconductor and founded their own company, one that would influence the whole world.

Robert Noyce (left) and Gordon Moore

After leaving Fairchild Semiconductor, Moore and Noyce began working on a business plan for the future company. It was first named Noyce and Moore Electronics, abbreviated NM Electronics. Moore was not entirely sure of the name, and in the next version, Integrated Electronics, proposed by Moore, Noyce saw the abbreviation INTegrated ELectronics (INTEL); it was under this name that Robert Noyce and Gordon Moore registered the company on July 16, 1968. After registration, it turned out that a company with a similar name, Intelco, already existed, and Intel had to pay $15,000 to use its name freely. With a $2.5 million loan from financiers, Intel hired its first employee, Andrew Grove, and began its journey into the world of electronics.

As history shows, the company immediately chose the right production vector: memory chips. It was in the production of RAM that Intel earned its first solid money. With good capital, the company began to experiment with new products, and in 1971 it released its first commercial microprocessor, the Intel 4004. "A new era of integrated electronics," Intel declared to the whole world. It was a full-fledged 4-bit microprocessor containing everything needed to work. It had been developed to order for a Japanese company, specifically for its calculators, and under the contract the manufacturing rights to the processor were to be transferred to the Japanese. It was at this time that Intel began to understand what prospects the microprocessor would open up in the future. Fortunately for Gordon Moore and Robert Noyce, luck intervened: the Japanese company was experiencing serious financial problems and agreed to a new contract with Intel. Under its terms, the American company undertook to supply its microprocessors to Japan at half the originally agreed price, but all development rights remained with Intel.

Gradually, the company's microprocessors began to appear not only in traffic lights and calculators but also in the first personal computers. All this led to the 8080 processor, which became the industry standard of its time. It was installed even in the hugely popular Altair 8800 computer, and only three years later the company would introduce the first 16-bit processor, the 8086.

Intel grew very quickly. In 1968, the company had only 12 employees; by 1980 there were as many as 15,000! Naturally, such growth required rather careful management, and Noyce and Moore understood this very well. They were exactly the kind of people who couldn't stand bureaucracy; they had had enough of it back at Fairchild Semiconductor. At first the founders held weekly lunches with employees, and as the company grew, Intel's management always remained open to its staff. Every employee, to some extent, took part in decisions on particular issues. This right approach to management and to the product line led to Intel's income reaching a full $1 billion in 1983.

Starting in the 80s, Intel pulled itself together and closed various minor lines of development in order to focus fully on microprocessors. Next came the golden days of 286-, then 386-, and finally 486-based computers equipped with Intel processors. But even after all these successes, Intel remained a company unknown to the wider public. Yes, it was talked about in IT circles, but which ordinary user cares what kind of processor sits in their computer?

Apparently, it was important for Intel that everyone on the planet know the company, and it managed to turn a brand nobody had heard of at the very beginning of the 90s into one of the most famous by the beginning of the 21st century. By some ratings, Intel is among the ten most famous brands. The reason is that starting in the 90s Intel ran an ingredient-branding campaign that has since entered many marketing textbooks, spending hundreds of millions of dollars on it. Its essence was that advertising for ordinary personal computers constantly mentioned that they ran on an Intel processor (this advertising, of course, was also paid for by Intel). In addition, Intel made very active use of television advertising, driving into the mass consciousness that one absolutely must check that a computer runs on an Intel processor.

In October 1992, Intel announced that its fifth-generation processors, previously codenamed P5, would be called Pentium, not 586, as many had assumed. This was because many manufacturers were actively mastering the production of "clones" (and not only clones) of 486 processors. Intel had intended to register "586" as a trademark so that no one else could sell processors under that name, but there was a problem: it turned out that numbers cannot be registered as trademarks (to Intel's great regret), so the new processors were named Pentium. On March 22, 1993, the new microprocessor was presented, and a few months later the first computers based on it appeared. The processor literally conquered the world: it was in all computers, and people everywhere demanded a computer with a Pentium processor.

In the late 1990s, Intel faced some of the toughest competition in its history. At that time, AMD was producing excellent processors that, moreover, cost significantly less than Intel's. But in 2006, Intel took a big bite out of the pie. For a long time, Apple Macintosh computers had been supplied with Motorola processors, and later with IBM PowerPC chips; from 2006, all Macs ran on Intel processors. By the time Apple made the switch to Intel architecture in 2006-2007, Intel had a whole line of processors for different device segments. Celerons, Pentiums, Xeons: each was designed for its own needs, Xeons for professional machines and servers, Celerons for very budget systems.

In that same 2006, Intel released the no less famous Core 2 Duo processor; we all know it, and many people still have computers with a Core 2 Duo heart at their dachas. In mid-2009, Intel restructured its processor line, creating the Core i family, which includes the well-known Core i7, i5 and i3. At the moment, about 85% of modern computers and laptops run on Intel Core i family processors; the rest run on Pentiums and Celerons, or on processors from the competitor.

Now, in addition to processors, the company also produces solid-state drives, motherboards and server components. It continues to experiment with modern technologies and even recently set a record for the number of simultaneously controlled drones: Intel specialists launched a hundred quadcopters with LED elements into the air as a light-and-music show, the drones forming colored figures in the sky to accompany an orchestra playing Beethoven. Let's hope that Intel, as it has for the past 50 years, will keep defining the future of computer technology and successfully developing the computer industry, and with it all of humanity.

Understanding Intel and its three founders is possible only when you understand Silicon Valley and its origins. And for that, you need to dig into the history of Shockley's transistor company, the Treacherous Eight and Fairchild Semiconductor. Without understanding them, Intel will remain to you what it is to most people: a mystery.

The invention of the computer did not mean a revolution began immediately. The first computers, built on large, expensive, frequently failing vacuum tubes, were costly monsters that only corporations, research universities and the military could maintain. The advent of the transistor, and later of technologies to etch millions of transistors onto a tiny microchip, meant that the processing power of many thousands of ENIACs could be concentrated in a missile nose cone, in a laptop, and in handheld devices.

In 1947, Bell Labs engineers John Bardeen and Walter Brattain invented the transistor, which was introduced to the general public in 1948. A few months later, William Shockley, another Bell employee, developed a model of the bipolar transistor. The transistor, essentially a solid-state electronic switch, replaced the bulky vacuum tube, and the move from tubes to transistors started a miniaturization trend that continues to this day. The transistor was one of the most important discoveries of the 20th century.

In 1956, Nobel laureate in physics William Shockley formed the Shockley Semiconductor Laboratory to work on four-layer diodes. He failed to attract his former colleagues from Bell Labs; instead, he hired a group of what he considered the best young electronics professionals fresh out of American universities. In September 1957, after a conflict with Shockley, who had decided to stop research on silicon semiconductors, eight key employees decided to leave and start their own business. They are now forever known as the Treacherous Eight, an epithet Shockley gave them when they left. The eight were Robert Noyce, Gordon Moore, Jay Last, Jean Hoerni, Victor Grinich, Eugene Kleiner, Sheldon Roberts and Julius Blank.

After leaving, they decided to create their own company, but there was nowhere to get investment. Having called some 30 firms, they came across Sherman Fairchild, owner of Fairchild Camera and Instrument. He happily invested a million and a half dollars in the new company, almost twice what the eight founders had initially considered necessary. A so-called premium deal was struck: if the company succeeded, he could buy it out in full for three million. Fairchild Camera and Instrument exercised this right as early as 1958. The subsidiary was named Fairchild Semiconductor.

In January 1959, Robert Noyce, one of Fairchild's eight founders, invented the silicon integrated circuit. Jack Kilby at Texas Instruments had invented a germanium integrated circuit six months earlier, in the summer of 1958, but Noyce's version proved more suitable for mass production, and it is the one used in modern chips. In 1959, Kilby and Noyce independently filed patents for the integrated circuit, and both were granted, with Noyce receiving his patent first.

In the 1960s, Fairchild became one of the leading manufacturers of operational amplifiers and other analog integrated circuits. At the same time, however, the new management of Fairchild Camera and Instrument began to restrict Fairchild Semiconductor's freedom of action, which led to conflicts. One by one, members of the eight and other experienced employees began to quit and start their own companies across Silicon Valley.

The first name Noyce and Moore chose was NM Electronics, N and M being the first letters of their last names. But it wasn't very impressive. After a large number of less successful proposals, such as Electronic Solid State Computer Technology Corporation, they came to a final decision: the company would be called Integrated Electronics Corporation. In itself this was also not very impressive, but it had one merit: the company could be called Intel for short. It sounded good; the name was energetic and eloquent.

The scientists set themselves a very specific goal: to create practical and affordable semiconductor memory. Nothing of the kind had been done before, given that silicon-based memory cost at least a hundred times more than the conventional magnetic-core memory of the time: solid-state memory cost about a dollar per bit, magnetic-core memory about a cent per bit. As Robert Noyce put it: "We only had to do one thing - reduce the cost by a hundred times and thereby conquer the market. That's what we basically did."

In 1970, Intel released a 1 Kb memory chip, far exceeding the capacity of existing chips (1 Kb is 1024 bits, and one byte is 8 bits, so the chip stored only 128 bytes, negligible by modern standards). The resulting chip, the 1103 dynamic random access memory (DRAM), became the world's best-selling semiconductor device by the end of the following year. By then, Intel had grown from a handful of enthusiasts into a company of more than a hundred employees.
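
A quick back-of-the-envelope check of those numbers (a minimal illustrative sketch in Python; the per-bit prices are the rough figures quoted above, not exact historical data):

    # Capacity of the 1 Kb chip in bytes.
    chip_bits = 1024                 # 1 Kb = 1024 bits
    bits_per_byte = 8
    print(chip_bits // bits_per_byte)   # 128 bytes

    # The cost gap Noyce described (approximate figures from the text).
    silicon_cost_per_bit = 1.00      # ~$1 per bit for early solid-state memory
    core_cost_per_bit = 0.01         # ~1 cent per bit for magnetic-core memory
    print(silicon_cost_per_bit / core_cost_per_bit)   # 100.0, the factor to close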

Around this time, the Japanese company Busicom approached Intel with a request to develop a set of chips for a family of high-performance programmable calculators. The original calculator design called for at least 12 chips of various types. Intel engineer Ted Hoff rejected that concept and instead proposed a single-chip logic device that fetched its application instructions from semiconductor memory. This CPU ran under the control of a program, which made it possible to adapt the chip's functions to whatever task arrived. The chip was universal in nature; its use was not limited to a calculator. Hardwired logic modules, by contrast, had only one purpose and a strictly defined set of commands controlling their functions.

There was one problem with the chip: all rights to it belonged exclusively to Busicom. Ted Hoff and the other developers realized that the design had virtually unlimited uses and insisted that Intel buy back the rights to the chip they had created. Intel offered to return the $60,000 Busicom had paid for the license in exchange for the right to dispose of the design. Busicom, being in a difficult financial situation, agreed.

On November 15, 1971, the first 4-bit "micro-computer set," the 4004, appeared (the term microprocessor came much later). The chip contained 2,300 transistors, cost $200, and was comparable in capability to the first computer, ENIAC, built in 1946, which used 18,000 vacuum tubes and occupied 85 cubic meters.

The microprocessor performed 60 thousand operations per second, ran at 108 kHz, and was manufactured on a 10-micron (10,000 nm) process. Data was transferred in 4-bit chunks per clock, and the maximum addressable memory was 640 bytes. The 4004 was used to control traffic lights, in blood analyzers, and even in NASA's Pioneer 10 space probe.

In April 1972, Intel released the 8008 processor, which ran at 200 kHz.

The next processor model, the 8080, was announced in April 1974.

This processor already contained 6,000 transistors and could address 64 KB of memory. The first personal computer (not yet a PC), the Altair 8800, was built on it. That computer ran the CP/M operating system, and Microsoft developed its BASIC language interpreter for it. It was the first mass-produced computer for which thousands of programs were written.

Over time, the 8080 became so famous that it began to be copied.

In late 1975, several former Intel engineers involved in the development of the 8080 processor formed Zilog. In July 1976, this company released the Z-80 processor, which was a much improved version of the 8080.

This processor was not pin-compatible with the 8080 but combined many features, such as a memory interface and DRAM refresh circuitry, that made it possible to build cheaper and simpler computers. The Z-80 also extended the 8080's instruction set with new instructions and internal registers while remaining a superset of it, so software written for almost any version of the 8080 could run on the Z-80.

Initially, the Z-80 ran at 2.5 MHz (later versions ran at up to 10 MHz); it contained 8,500 transistors and could address 64 KB of memory.

Radio Shack chose the Z-80 for its TRS-80 Model 1 personal computer. The Z-80 soon became the standard processor for systems running the CP/M operating system and the most common software of the day.

Intel did not stop there, and in March 1976 released the 8085 processor, which contained 6500 transistors, operated at a frequency of 5 MHz and was manufactured using 3-micron technology (3000 nanometers).

Despite being released a few months before the Z-80, it never matched the latter's popularity and was used mainly as a control chip for various computerized devices.

In the same year, MOS Technologies released the 6502 processor, which was completely different from Intel processors.

It was developed by a group of former Motorola engineers, the same group that had worked on the 6800 processor, which would eventually evolve into the 68000 family. The first version of the 8080 cost $300, while the 8-bit 6502 cost only about twenty-five dollars. That price suited Steve Wozniak, who built the 6502 into the new Apple I and Apple II models. The 6502 was also used in systems from Commodore and other manufacturers.

This processor and its successors worked successfully in gaming systems, including the Nintendo Entertainment System. Motorola continued work on the 68000 series, which was subsequently used in Apple Macintosh computers; the second generation of Macs used the PowerPC, the 68000's successor. Today, Macs have switched back to PC architecture and use the same processors, system logic chips and other components as PCs.

In June 1978, Intel introduced the 8086 processor, whose instruction set became known as x86.

That same instruction set is still supported in every modern microprocessor, from the AMD Ryzen Threadripper 1950X to the Intel Core i9-7920X. The 8086 was fully 16-bit, in both its internal registers and its data bus. It contained 29,000 transistors and ran at 5 MHz. Thanks to its 20-bit address bus, it could address 1 MB of memory. The 8086 was not designed to be backward compatible with the 8080 at the hardware level, but the strong similarity of their instruction sets made it easy to port earlier software. This feature later played an important role in the rapid migration of CP/M (8080) system programs to the PC.
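
The 1 MB figure follows directly from the width of the address bus; here is the arithmetic as a small sketch (nothing Intel-specific is assumed):

    # An N-bit address bus can select 2**N distinct byte addresses.
    address_bits = 20
    addressable_bytes = 2 ** address_bits       # 1,048,576
    print(addressable_bytes // 1024)            # 1024 KB
    print(addressable_bytes // (1024 * 1024))   # 1 MB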

Despite the 8086's high efficiency, its price was still too high by the standards of the time and, more importantly, it required expensive support chips for its 16-bit data bus. To reduce costs, in 1979 Intel released the 8088, a simplified version of the 8086.

The 8088 used the same internal core and 16-bit registers as the 8086 and could address 1 MB of memory, but unlike its predecessor it used an external 8-bit data bus. This allowed it to reuse support chips developed for the earlier 8-bit 8085 and thereby significantly reduce the cost of motherboards and computers. That is why IBM chose the stripped-down 8088, rather than the 8086, for its first PC, a decision with far-reaching consequences for the entire computer industry.

The 8088 was fully software-compatible with the 8086, allowing the use of 16-bit software. The 8085 and 8080 used a very similar instruction set, so programs written for those earlier processors could easily be converted for the 8088. This, in turn, allowed a wide variety of programs to be developed for the IBM PC, which was the key to its future success. Having come this far, Intel was then effectively obliged to maintain 8086/8088 backward compatibility in most of the processors it released.

Intel began developing a new microprocessor immediately after releasing the 8086/8088. Those processors required a large number of support chips, so the company decided to develop a microprocessor that contained all the required modules itself. The new processor integrated many components previously available as separate chips, dramatically reducing the chip count in a computer and hence its cost. The internal instruction set was also expanded.

In the second half of 1982, Intel released the 80186 embedded processor, which, in addition to the improved 8086 core, also contained additional modules that replaced some of the support chips.

Also in 1982, the 80188 was released, which is a variant of the 80186 microprocessor with an 8-bit external data bus.

Released on February 1, 1982, the 16-bit x86-compatible 80286 microprocessor was an improvement on the 8086 processor with 3 to 6 times the performance.

This qualitatively new microprocessor was then used in the landmark IBM PC-AT computer.

The 286 was developed in parallel with the 80186/80188 but lacked some of the modules present in the 80186. The Intel 80286 was produced in exactly the same package as the 80186, an LCC, as well as in 68-pin PGA packages.

In those years, processor backward compatibility was still maintained, which did not prevent the introduction of various innovations and additional features. One major change was the transition from the 16-bit internal architecture of the 286 and earlier processors to the 32-bit internal architecture of the 386 and later IA-32 processors. This architecture was introduced in 1985, but it took another 10 years for operating systems such as Windows 95 (partially 32-bit) and Windows NT (requiring 32-bit drivers) to appear, and roughly another 10 years after that for Windows XP, which was 32-bit both at the driver level and in all its components. In all, it took 16 years to adopt 32-bit computing, quite a long time for the computer industry.

The 80386 appeared in 1985. It contained 275,000 transistors and performed over 5 million operations per second.

Compaq's DESKPRO 386 computer was the first PC based on the new microprocessor.

The next in the x86 family was the 486, which appeared in 1989.

Meanwhile, the US Department of Defense was not happy with the prospect of being left with a single chip supplier. As suppliers became fewer and fewer (remember the zoo of manufacturers in the early nineties), AMD's importance as an alternative manufacturer grew. Under a 1982 agreement, AMD held licenses to produce the 8086, 80186 and 80286, but Intel categorically refused to transfer the newly developed 80386 to AMD, breaking the deal. What followed was a long and high-profile lawsuit, the first in the two companies' history. It ended only in 1991 with AMD's victory; for its stance, Intel paid the plaintiff a billion dollars.

Still, the relationship was spoiled and the old trust was gone. Moreover, AMD took the path of reverse engineering, continuing to release Am386 processors, which differed in hardware but matched Intel's microcode exactly, and then the Am486. This time Intel went to court. Again the process dragged on, with success swinging first to one side, then to the other. But on December 30, 1994, the court ruled that Intel's microcode remained Intel's property and that other companies could not simply use it against the owner's wishes. So things changed from 1995 on: Intel Pentium and AMD K5 processors could both run any x86 application, but architecturally they were fundamentally different. It turns out that real competition between Intel and AMD began only a quarter of a century after the companies were founded.

However, to ensure compatibility, technological cross-pollination never went away: modern Intel processors use many of AMD's patents and, conversely, AMD neatly adopts Intel-designed instruction sets.

In 1993, Intel introduced the first Pentium processor, five times faster than the 486 family. It contained 3.1 million transistors and performed up to 90 million operations per second, about 1,500 times faster than the 4004.

When the next generation of processors appeared, those who had counted on the Sexium name were disappointed.

The P6 family processor, called the Pentium Pro, was born in 1995.

Revisiting the P6 architecture, Intel introduced the Pentium II processor in May 1997.

It contained 7.5 million transistors and, unlike traditional processors, was packaged in a cartridge, which made it possible to place the L2 cache directly in the processor module and significantly increase performance. In April 1998, the Pentium II family was expanded with the low-cost Celeron for home PCs and the professional Pentium II Xeon for servers and workstations. Also in 1998, Intel for the first time integrated the L2 cache (running at full core frequency) directly into the die, again significantly increasing performance.

While the Pentium processor was rapidly gaining market dominance, AMD acquired NexGen, which was working on the Nx686 processor. The merger resulted in the AMD K6 processor.

This processor was both hardware- and software-compatible with the Pentium: it fit a Socket 7 socket and executed the same programs. AMD continued to develop faster versions of the K6 and won a significant share of the mid-range PC market.

The first processor in the senior desktop line to contain built-in L2 cache running at full core frequency was the Pentium III based on the Coppermine core, introduced in late 1999; it was, in essence, a Pentium II with SSE instructions.

In 1999, AMD introduced the Athlon processor, which allowed it to compete with Intel in the high-speed desktop PC market almost on equal terms.


The processor turned out to be very successful, and Intel found in it a worthy rival in the field of high-performance systems. Today the Athlon's success is beyond doubt, but at the time of its market entry there were concerns: unlike its predecessor the K6, which was compatible with the Intel processor at both the software and hardware levels, the Athlon was compatible only at the software level; it required a specific system logic chipset and a special socket.

The new AMD processors were produced on a 250 nm process with 22 million transistors. They had a new integer unit (ALU). The EV6 system bus transferred data on both edges of the clock signal, yielding an effective frequency of 200 MHz from a physical clock of 100 MHz. The level-1 cache was 128 KB (64 KB for instructions and 64 KB for data), and the level-2 cache reached 512 KB.
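
The effective-frequency arithmetic is simple; the sketch below also estimates peak bandwidth, assuming (our assumption, not stated above) a 64-bit, i.e. 8-byte-wide, EV6 bus:

    # Double data rate: data moves on both the rising and falling clock edge.
    physical_clock_mhz = 100
    transfers_per_cycle = 2
    effective_mt_s = physical_clock_mhz * transfers_per_cycle   # 200 MT/s

    bus_width_bytes = 8                      # 64-bit bus width (assumption)
    print(effective_mt_s * bus_width_bytes)  # 1600 MB/s peak bandwidth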

The year 2000 brought new products from both companies to the market. On March 6, 2000, AMD released the world's first 1 GHz processor, a member of the increasingly popular Athlon family based on the Orion core. AMD also introduced the Athlon Thunderbird and the Duron. The Duron was essentially identical to the Athlon, differing only in a smaller L2 cache; it was a cheaper version designed primarily to compete with the inexpensive Celerons. Thunderbird, in turn, used on-die cache memory, which increased its performance. At the end of the year, Intel introduced the new Pentium 4 processor.

In 2001, Intel released a new version of the Pentium 4 with an operating frequency of 2 GHz, the first processor to reach that frequency. AMD, meanwhile, introduced the Athlon XP, based on the Palomino core, as well as the Athlon MP, designed specifically for multiprocessor server systems. Throughout 2001, both AMD and Intel continued to improve the performance of chips in development and to refine their existing processors.

In 2002, Intel introduced a Pentium 4 that was the first to reach an operating frequency of 3.06 GHz, and subsequent processors also supported Hyper-Threading technology. Simultaneous execution of two threads gives Hyper-Threading processors a 25-40% performance boost over conventional Pentium 4s. This inspired programmers to write multi-threaded programs and set the stage for the multi-core processors soon to come.
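
The multi-threaded style this encouraged is easy to sketch. Below is a minimal, hypothetical Python example that splits one task across two threads; it illustrates the programming model only (in CPython, the GIL means CPU-bound threads rarely see such gains, so treat the 25-40% figure above as a hardware-level claim):

    import threading

    def worker(name, items):
        # Each thread processes its own half of the data independently.
        print(f"{name}: sum = {sum(items)}")

    data = list(range(1_000_000))
    mid = len(data) // 2

    t1 = threading.Thread(target=worker, args=("thread-1", data[:mid]))
    t2 = threading.Thread(target=worker, args=("thread-2", data[mid:]))
    t1.start(); t2.start()   # the OS may schedule these on two hardware threads
    t1.join(); t2.join()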

In 2003, AMD released the first 64-bit Athlon 64 processor (codenamed ClawHammer, or K8).

Unlike the server-oriented 64-bit Itanium and Itanium 2, which were optimized for software written for the new 64-bit architecture and rather slow with traditional 32-bit programs, the Athlon 64 was a 64-bit extension of the x86 family. Some time later, Intel introduced its own set of 64-bit extensions, which it called EM64T, or IA-32e. They were almost identical to AMD's extensions, meaning the two were compatible at the software level. To this day, some operating systems refer to these extensions as AMD64, although Intel prefers its own branding in marketing documents.

That same year, Intel released its first processor with an L3 cache, the Pentium 4 Extreme Edition: a 2 MB cache was built in, the transistor count rose significantly, and performance rose with it. The Pentium M chip for portable computers also appeared. It was conceived as an integral part of the new Centrino platform, which was meant, first, to reduce power consumption and extend battery life and, second, to allow more compact and lightweight cases.

Making 64-bit computing a reality required 64-bit operating systems and drivers. In April 2005, Microsoft began distributing a trial version of Windows XP Professional x64 Edition, which supported the AMD64 and EM64T instruction extensions.

Without slowing down, AMD in 2005 released the world's first dual-core x86 desktop processors, the Athlon 64 X2.

At that time, very few applications could use two cores at the same time, but in specialized software, the performance gain was quite impressive.

In November 2004, Intel was forced to cancel the 4 GHz Pentium 4 model due to heat dissipation problems.

On May 25, 2005, the Intel Pentium D processors were demonstrated for the first time. There is not much to say about them, except perhaps their heat dissipation of 130 watts.

In 2007, AMD introduced the world's first 4-core server processor in which all four cores were fabricated on a single die rather than "glued" together from two, as with its rivals' parts. The most complex engineering problems were solved, both in development and in production.

A year earlier, in 2006, Intel had renamed the Pentium brand to Core and released the dual-core Core 2 Duo chip.

Unlike the NetBurst processors (Pentium 4 and Pentium D), the Core 2 architecture focused not on raising the clock speed but on improving other parameters: cache, efficiency and the number of cores. Its power dissipation was significantly lower than that of the desktop Pentium line: with a TDP of 65 W, the Core 2 had the lowest power dissipation of any desktop microprocessor then commercially available, compared with Prescott (Intel) cores at 130 W and San Diego (AMD) cores at 89 W.

The first desktop quad-core processor was the Intel Core 2 Extreme QX6700 clocked at 2.67 GHz with 8 MB L2 cache.

In 2007, the 45 nm Penryn microarchitecture was released, using high-k metal-gate transistors and lead-free packaging. The technology was used in the Intel Core 2 Duo processor family. Support for SSE4 instructions was added, and the maximum L2 cache for dual-core processors grew from 4 MB to 6 MB.

In 2008, the next-generation architecture, Nehalem, was released. Its processors had an integrated memory controller supporting 2 or 3 channels of DDR3 SDRAM or 4 channels of FB-DIMM. The FSB was replaced by the new QPI bus, and the L2 cache was reduced to 256 KB per core.

Soon, Intel moved the Nehalem architecture to a new 32nm process technology. This line of processors was named Westmere.

The first model of the new microarchitecture was Clarkdale, which had two cores and an integrated graphics core manufactured on a 45 nm process.

AMD tried to keep up with Intel, releasing a new generation of x86 microprocessor architecture, the Phenom (K10), in 2007.

Four processor cores were combined on one chip. In addition to L1 and L2 caches, the K10 models finally received 2 MB of L3. The level-1 data and instruction caches were 64 KB each, and the level-2 cache was 512 KB. There was also promised support for a DDR3 memory controller. The K10 used two 64-bit memory controllers, each core had a 128-bit floating-point unit, and the new processors communicated over the HyperTransport 3.0 interface.

In 2009, the long-running conflict between Intel and AMD over patent and antitrust law came to an end. For almost a decade, Intel had used a range of dishonest practices that hampered fair competition in the semiconductor market: it pressured partners to refuse AMD processors, using kickbacks to customers, large discounts and exclusivity agreements. As a result, Intel paid AMD $1.25 billion and committed to following a defined set of business rules for the next 5 years.

By 2011, the era of Athlons and intense competition in the processor market had settled into something of a lull, but it did not last long: in January, Intel introduced its new Sandy Bridge architecture, the ideological successor to the first-generation Core and a milestone that let the blue giant take the lead in the market. AMD fans waited quite a while for the Reds' response: only in October did the long-awaited Bulldozer appear, marking the return of the AMD FX brand, associated with the company's breakthrough processors at the beginning of the century.


The new AMD architecture took on a lot: confronting Intel's best (and later legendary) solutions cost the Sunnyvale chipmaker dearly. The Reds' traditionally inflated marketing, with its loud statements and incredible promises, crossed every boundary: Bulldozer was billed as a real revolution, and the architecture was predicted to put up a worthy fight against the competitor's new products. What did FX bring to win the market?

A bet on multi-threading and uncompromising multi-core. In 2011, AMD FX was proudly called "the most multi-core desktop processor on the market," and that was no exaggeration: the architecture was built on as many as eight cores (albeit logical ones), each running one thread. At the time of its announcement, the new FX was an innovative, bold, forward-looking solution against the competitor's four cores. But alas, AMD has always bet on just one direction, and with Bulldozer it was not the one the mass consumer was counting on.

The raw throughput of the new AMD chips was very high, and FX easily posted impressive results in synthetic benchmarks. Unfortunately, the same could not be said of gaming loads: games of the day favored 1-2 cores and lacked proper parallelization across cores, so Bulldozer creaked under loads where Sandy Bridge didn't even notice difficulty. Add the series' two Achilles heels - dependence on fast memory and a rudimentary northbridge, plus only one FPU for every two cores - and the result was deplorable. AMD FX was called a hot, sluggish alternative to the fast, powerful blue processors, with only relative cheapness and compatibility with older motherboards in its favor. At first glance it was a complete failure, but AMD has never shied away from fixing its mistakes, and Vishera became exactly that: a reboot of the Bulldozer architecture that entered the market at the end of 2012.

The updated Bulldozer was named Piledriver. The architecture gained instructions, put on muscle in single-threaded loads, and optimized the operation of its many cores, improving multi-threaded performance. However, its competitor by then was the notorious Ivy Bridge, which was only swelling the ranks of Intel admirers. AMD decided to follow its proven strategy: attract budget users with overall savings on components and the chance to get more for less money (without encroaching on the segment above).

But the funniest thing about this, by most accounts, least successful architecture in AMD's arsenal is that AMD FX sales can hardly be called a failure, or even mediocre: according to Newegg data for 2016, the AMD FX-6300 was the second most popular processor (behind only the i7 6700K), and the notorious leader of the budget red segment, the FX-8350, made the top five best-sellers, slightly behind the i7 4790K. Meanwhile, even the relatively cheap i5, long cited as an example of marketing success and "people's" status, fell far behind the time-tested oldies based on Piledriver.

Finally, a rather funny fact, which a few years ago was dismissed as an excuse from AMD fans: the confrontation between the FX-8350 and the i5 2500K, which began when Bulldozer launched. For a long time, the red processor was believed to lag significantly behind the 2500K beloved by enthusiasts, but in the latest tests of 2017, paired with the most powerful GPU, the FX-8350 turns out to be faster in almost all gaming tests. "The wait paid off!" would be the appropriate comment.

Meanwhile, Intel continues to conquer the market.

In 2011, processors based on the Sandy Bridge architecture were announced and, a little later, released for the new LGA 1155 socket introduced the same year. This was the second generation of modern Intel processors, a complete refresh of the line that paved the company's way to commercial success, because nothing matched it in per-core power and overclocking. You may remember the i5 2500K, a legendary processor: with an appropriate cooling tower it overclocked to almost 5 GHz, and even today, in 2017, it can deliver acceptable performance in modern games in a system with one, possibly two, video cards. On hwbot.org, the Russian overclocker SAV pushed it to 6014.1 MHz. It was a 4-core processor with 6 MB of level-3 cache and a base frequency of just 3.3 GHz - nothing special, but thanks to solder under the heat spreader, this generation overclocked very well and did not overheat. The i7 2600K and 2700K were also absolute successes: 4-core processors with Hyper-Threading, giving them as many as 8 threads. They overclocked a little less, but offered higher performance and, accordingly, heat output. They were bought for fast, efficient video editing and for broadcasting on the Internet. Interestingly, the 2600K, like the i5 2500K, is still used today not only by gamers but also by streamers. We can say this generation became a national treasure: everyone wanted Intel's processors, which affected their price, and not in the consumer's favor.

In 2012, Intel released the 3rd generation of processors, called Ivy Bridge, which looks strange: only a year had passed - could they really have invented something fundamentally new that would give a noticeable performance boost? In any case, the new generation used the same LGA 1155 socket, and its processors were not much ahead of the previous ones. This, of course, was due to the absence of competition in the top segment: AMD could hardly be said to be breathing down Intel's neck, so Intel could afford to release processors only slightly more powerful than its own, having become a de facto monopolist. But another catch crept in, this time in the thermal interface under the lid: instead of solder, Intel used its own paste, popularly dubbed "chewing gum," to save money and boost income. The topic simply blew up the net. It was no longer possible to overclock these processors to the limit: they ran on average 10 degrees hotter than their predecessors as frequencies approached 4-4.2 GHz. Particularly extreme enthusiasts even opened the processor lid to replace the thermal paste with something more efficient; not everyone managed it without chipping the die or damaging the contacts, but the method proved effective. Still, some processors of this generation can be singled out as successful.

You may have noticed that I did not mention the i3 when talking about the second generation; processors of that class were simply not very popular. Everyone always wanted an i5, and whoever had the money took an i7, of course.

In the 3rd generation, which we will talk about now, the situation did not change dramatically.
The successes of this generation were the i5 3340 and i5 3570K. They did not differ much in performance - it all came down to frequency, and the cache was the same 6 MB. The 3340 could not be overclocked, so the 3570K was more desirable, but both delivered good performance in games. Among i7s on socket 1155 there was only one K-series model, the 3770K, with 8 MB of cache and 3.5-3.9 GHz; in boost it was usually overclocked to 4.2-4.5 GHz. Interestingly, back in 2011 a new socket, LGA 2011, had been released, for which two super-processors came out: the i7 3820 (4 cores, 8 threads, 10 MB of L3 cache) and the i7 3930K (6 cores, 12 threads, as much as 12 MB of L3 cache). What monsters they were - such a chip cost a thousand bucks and was the dream of many schoolkids at the time, though for games it was of course overkill, better suited to professional tasks.

Haswell came out in 2013. Yes, yes: another year, another generation, traditionally a little more powerful than the last, because AMD had failed again. It is known as the hottest generation. Even so, its i5s were pretty successful; in my view this is because Sandy Bridge owners rushed to swap their supposedly outdated processors for the new "revolution" from Intel that was then setting the Internet ablaze. The processors overclocked even worse than the previous generation, which is why many still dislike it. Performance was slightly higher than before (by about 15 percent - not much, but a monopoly does its job), and the overclocking ceiling was a convenient way for Intel to give the user less "free" performance.

All i5s, traditionally, came without Hyper-Threading. They ran at 3 to 3.9 GHz in boost; you could take any model with the "K" index, as that guaranteed good performance, albeit with modest overclocking. At first there was only one i7, the 4770K: 4 cores, 8 threads, 3.5-3.9 GHz, a workhorse, but it runs very hot without good cooling. I won't say delidding it was universal, but people who replaced the paste under the lid say the result is much better: it can reach about 5 GHz on water if you're lucky, as has been true of every processor since Sandy Bridge. And that's not all: this generation also had the Xeon E3-1231V3, which was in effect the same i7 4770, only without integrated graphics or overclocking. Its appeal was that it dropped into an ordinary socket 1150 motherboard and cost much less than the i7. A little later, the i7 4790K came out with an improved thermal interface - still not the solder of old, but the processor overclocks better than the 4770; there was even talk of overclocking to 4.7 GHz on air, with good cooling of course.

There are also "Monsters" of this generation (Haswell-E): i7-5960X Extreme Edition, i7-5930K and 5820K, server solutions adapted for the desktop market. These were the most stuffed processors at that time. They are based on the new 2011 v3 socket and cost a lot of money, but their performance is exceptional, which is not surprising, because the older processor in the line has as many as 16 threads and 20 MB of cache. Pick up the jaw and move on.

In 2015, Skylake came out on socket 1151. Its performance was almost the same, but this generation differed from all previous ones: first, a thinner heat spreader for improved heat transfer to the cooling system; second, support for DDR4 memory and software support for DirectX 12, OpenGL 4.4 and OpenCL 2.0, promising better performance in modern games using these APIs. It also turned out that even processors without the K index could be overclocked via the base clock, but this loophole was quickly closed. Whether the workaround still functions, we do not know.
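
The base-clock trick rests on simple arithmetic: a core's frequency is the base clock (BCLK) multiplied by a multiplier, so raising the BCLK overclocks even a chip whose multiplier is locked. A minimal sketch with made-up numbers (not Skylake specifications):

    # Core frequency = base clock (BCLK) x multiplier.
    multiplier = 36             # fixed on a hypothetical locked (non-K) CPU
    stock_bclk_mhz = 100.0
    raised_bclk_mhz = 120.0     # hypothetical BCLK overclock

    print(stock_bclk_mhz * multiplier)    # 3600.0 MHz at stock
    print(raised_bclk_mhz * multiplier)   # 4320.0 MHz overclocked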

There were few processors here; Intel again refined its business model. Why release 6 processors if only 3-4 of the entire line are popular? So it released 4 mid-range and 2 expensive models. Personally, from my observations, people most often take the i5 6500 or 6600K: still 4 cores, with 6 MB of cache and Turbo Boost.

In 2016, Intel introduced the fifth generation of processors, Broadwell-E. The Core i7-6950X was the world's first 10-core desktop processor. Its price at launch was $1,723, and to many this move by Intel seemed very strange.

On March 2, 2017, the processors of the senior AMD Ryzen 7 line went on sale in 3 models: 1800X, 1700X and 1700. As you already know, the official presentation of Ryzen took place on February 22 of that year, at which Lisa Su stated that the engineers had exceeded the forecast 40% improvement: in fact, Ryzen is 52% ahead of Excavator, and given that more than six months have passed since sales began, with BIOS updates raising performance and fixing minor bugs in the Zen architecture, that figure has arguably grown to 60%. Today the senior Ryzen is the fastest eight-core processor in the world. And here another guess was confirmed, concerning Intel's ten-core part: it was, in effect, Intel's real and only answer to Ryzen. Intel stole the victory from AMD in advance - whatever you release, the fastest processor will still be ours. So at the presentation Lisa Su could not call Ryzen the absolute champion, only the best of the eight-cores. Such is Intel's subtle trolling.

Now AMD and Intel are introducing new flagship processors: AMD the Ryzen Threadripper, Intel the Core i9. The 18-core, 36-thread flagship Intel Core i9-7980XE costs about two thousand dollars. The 16-core, 32-thread Intel Core i9-7960X costs $1,700, while the comparable 16-core, 32-thread AMD Ryzen Threadripper 1950X costs about a thousand. Draw your own reasonable conclusions, gentlemen.
