Computer


Computing Basics

The first computers were used primarily for numerical calculations. However, as any information can be numerically encoded, people soon realized that computers are capable of general-purpose information processing. Their capacity to handle large amounts of data has extended the range and accuracy of weather forecasting. Their speed has allowed them to make decisions about routing telephone connections through a network and to control mechanical systems such as automobiles, nuclear reactors, and robotic surgical tools. They are also cheap enough to be embedded in everyday appliances and to make clothes dryers and rice cookers “smart.” Computers have allowed us to pose and answer questions that could not be pursued before. These questions might be about DNA sequences in genes, patterns of activity in a consumer market, or all the uses of a word in texts that have been stored in a database. Increasingly, computers can also learn and adapt as they operate.


Computers also have limitations, some of which are theoretical. For example, there are undecidable propositions whose truth cannot be determined within a given set of rules, such as the logical structure of a computer. Because no universal algorithmic method can exist to identify such propositions, a computer asked to obtain the truth of such a proposition will (unless forcibly interrupted) continue indefinitely—a condition known as the “halting problem.” (See Turing machine.) Other limitations reflect current technology. Human minds are skilled at recognizing spatial patterns—easily distinguishing among human faces, for instance—but this is a difficult task for computers, which must process information sequentially, rather than grasping details overall at a glance. Another problematic area for computers involves natural language interactions. Because so much common knowledge and contextual information is assumed in ordinary human communication, researchers have yet to solve the problem of providing relevant information to general-purpose natural language programs.


Analog computers

Analog computers use continuous physical magnitudes to represent quantitative information. At first they represented quantities with mechanical components (see differential analyzer and integrator), but after World War II voltages were used; by the 1960s digital computers had largely replaced them. Nonetheless, analog computers, and some hybrid digital-analog systems, continued in use through the 1960s in tasks such as aircraft and spaceflight simulation.


One advantage of analog computation is that it may be relatively simple to design and build an analog computer to solve a single problem. Another advantage is that analog computers can frequently represent and solve a problem in “real time”; that is, the computation proceeds at the same rate as the system being modeled by it. Their main disadvantages are that analog representations are limited in precision—typically a few decimal places but fewer in complex mechanisms—and general-purpose devices are expensive and not easily programmed.

Digital computers

In contrast to analog computers, digital computers represent information in discrete form, generally as sequences of 0s and 1s (binary digits, or bits). The modern era of digital computers began in the late 1930s and early 1940s in the United States, Britain, and Germany. The first devices used switches operated by electromagnets (relays). Their programs were stored on punched paper tape or cards, and they had limited internal data storage. For historical developments, see the section Invention of the modern computer.
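
As a purely illustrative aside (not part of the original article), the short Python sketch below shows how an ordinary number and a letter can each be written as a sequence of bits; any information a computer handles is ultimately reduced to such patterns.

```python
# Illustrative sketch: writing ordinary values as bits (binary digits).
# The particular values chosen here are arbitrary.

number = 205
print(format(number, "08b"))          # prints '11001101' (205 as eight bits)

character = "A"
print(format(ord(character), "08b"))  # prints '01000001' (the character code 65 as bits)
```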


Mainframe computer

During the 1950s and ’60s, Unisys (maker of the UNIVAC computer), International Business Machines Corporation (IBM), and other companies made large, expensive computers of increasing power. They were used by major corporations and government research laboratories, typically as the sole computer in the organization. In 1959 the IBM 1401 computer rented for $8,000 per month (early IBM machines were almost always leased rather than sold), and in 1964 the largest IBM S/360 computer cost several million dollars.

These computers came to be called mainframes, though the term did not become common until smaller computers were built. Mainframe computers were characterized by having (for their time) large storage capabilities, fast components, and powerful computational abilities. They were highly reliable, and, because they frequently served vital needs in an organization, they were sometimes designed with redundant components that let them survive partial failures. Because they were complex systems, they were operated by a staff of systems programmers, who alone had access to the computer. Other users submitted “batch jobs” to be run one at a time on the mainframe.


Such systems remain important today, though they are no longer the sole, or even primary, central computing resource of an organization, which will typically have hundreds or thousands of personal computers (PCs). Mainframes now provide high-capacity data storage for Internet servers, or, through time-sharing techniques, they allow hundreds or thousands of users to run programs simultaneously. Because of their current roles, these computers are now called servers rather than mainframes.

Supercomputer

The most powerful computers of the day have typically been called supercomputers. They have historically been very expensive and their use limited to high-priority computations for government-sponsored research, such as nuclear simulations and weather modeling. Today many of the computational techniques of early supercomputers are in common use in PCs. On the other hand, the design of costly, special-purpose processors for supercomputers has been supplanted by the use of large arrays of commodity processors (from several dozen to over 8,000) operating in parallel over a high-speed communications network.


Minicomputer

Although minicomputers date to the early 1950s, the term was introduced in the mid-1960s. Relatively small and inexpensive, minicomputers were typically used in a single department of an organization and often dedicated to one task or shared by a small group. Minicomputers generally had limited computational power, but they had excellent compatibility with various laboratory and industrial devices for collecting and inputting data.

One of the most important manufacturers of minicomputers was Digital Equipment Corporation (DEC) with its Programmed Data Processor (PDP). In 1960 DEC’s PDP-1 sold for $120,000. Five years later its PDP-8 cost $18,000 and became the first widely used minicomputer, with more than 50,000 sold. The DEC PDP-11, introduced in 1970, came in a variety of models, small and cheap enough to control a single manufacturing process and large enough for shared use in university computer centres; more than 650,000 were sold. However, the microcomputer overtook this market in the 1980s.


Microcomputer

A microcomputer is a small computer built around a microprocessor integrated circuit, or chip. Whereas the early minicomputers replaced vacuum tubes with discrete transistors, microcomputers (and later minicomputers as well) used microprocessors that integrated thousands or millions of transistors on a single chip. In 1971 the Intel Corporation produced the first microprocessor, the Intel 4004, which was powerful enough to function as a computer although it was produced for use in a Japanese-made calculator. In 1975 the first personal computer, the Altair, used a successor chip, the Intel 8080 microprocessor. Like minicomputers, early microcomputers had relatively limited storage and data-handling capabilities, but these have grown as storage technology has improved alongside processing power.

Personal computer and peripherals: inkjet printer, laser printer, computer internal layout, hard drive, and mouse components. (Encyclopædia Britannica, Inc.)

In the 1980s it was common to distinguish between microprocessor-based scientific workstations and personal computers. The former used the most powerful microprocessors available and had high-performance colour graphics capabilities costing thousands of dollars. They were used by scientists for computation and data visualization and by engineers for computer-aided engineering. Today the distinction between workstation and PC has virtually vanished, with PCs having the power and display capability of workstations.


Embedded processors

Another class of computer is the embedded processor. These are small computers that use simple microprocessors to control electrical and mechanical functions. They generally do not have to do elaborate computations or be extremely fast, nor do they have to have great “input-output” capability, and so they can be inexpensive. Embedded processors help to control aircraft and industrial automation, and they are common in automobiles and in both large and small household appliances. One particular type, the digital signal processor (DSP), has become as prevalent as the microprocessor. DSPs are used in wireless telephones, digital telephone and cable modems, and some stereo equipment.

Computer hardware

The physical elements of a computer, its hardware, are generally divided into the central processing unit (CPU), main memory (or random-access memory, RAM), and peripherals. The last class encompasses all sorts of input and output (I/O) devices: keyboard, display monitor, printer, disk drives, network connections, scanners, and more.


The CPU and RAM are integrated circuits (ICs)—small silicon wafers, or chips, that contain thousands or millions of transistors that function as electrical switches. In 1965 Gordon Moore, one of the founders of Intel, stated what has become known as Moore’s law: the number of transistors on a chip doubles about every 18 months. Moore suggested that financial constraints would soon cause his law to break down, but it has been remarkably accurate for far longer than he first envisioned. It now appears that technical constraints may finally invalidate Moore’s law, since sometime between 2010 and 2020 transistors would have to consist of only a few atoms each, at which point the laws of quantum physics imply that they would cease to function reliably.
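
As a rough, purely illustrative calculation (the 1971 starting figure of about 2,300 transistors for the Intel 4004 is an assumption used only for the arithmetic), the Python sketch below shows what doubling every 18 months implies over three decades; it is a pure projection of the doubling rule, not actual chip data.

```python
# Illustrative projection of Moore's law: doubling every 18 months.
# The starting point (about 2,300 transistors in 1971) is assumed for illustration;
# this is the doubling rule taken literally, not a record of real chips.

transistors = 2300
year = 1971.0
while year < 2001.0:
    year += 1.5        # one 18-month period
    transistors *= 2   # one doubling per period

print(f"Projected transistors per chip by {year:.0f}: {transistors:,}")
# Twenty doublings turn about 2,300 into roughly 2.4 billion, which is why
# exponential growth of this kind cannot continue indefinitely.
```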

Moore's law. Gordon E. Moore observed that the number of transistors on a computer chip was doubling about every 18–24 months. As shown in the logarithmic graph of the number of transistors on Intel's processors at the time of their introduction, his “law” was being obeyed. (Encyclopædia Britannica, Inc.)

Central processing unit

The CPU provides the circuits that implement the computer’s instruction set—its machine language. It is composed of an arithmetic-logic unit (ALU) and control circuits. The ALU carries out basic arithmetic and logic operations, and the control section determines the sequence of operations, including branch instructions that transfer control from one part of a program to another. Although the main memory was once considered part of the CPU, today it is regarded as separate. The boundaries shift, however, and CPU chips now also contain some high-speed cache memory where data and instructions are temporarily stored for fast access.


The ALU has circuits that add, subtract, multiply, and divide two arithmetic values, as well as circuits for logic operations such as AND and OR (where a 1 is interpreted as true and a 0 as false, so that, for instance, 1 AND 0 = 0; see Boolean algebra). The ALU has several to more than a hundred registers that temporarily hold results of its computations for further arithmetic operations or for transfer to main memory.
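
The logic operations just described can be mimicked directly in most programming languages. The small Python fragment below is only an illustration; it treats 1 as true and 0 as false exactly as in the text, and then applies the same operations to a whole 4-bit word at once, as an ALU does.

```python
# Illustrative sketch of the ALU's basic logic operations.
# Single bits first, with 1 read as "true" and 0 as "false":
a, b = 1, 0
print(a & b)   # AND: 1 AND 0 = 0
print(a | b)   # OR:  1 OR  0 = 1

# A real ALU applies such operations to whole words (32 or 64 bits) in one step;
# here a 4-bit word keeps the example short:
x = 0b1100
y = 0b1010
print(format(x & y, "04b"))  # prints '1000'
print(format(x | y, "04b"))  # prints '1110'
```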

The circuits in the CPU control section provide branch instructions, which make elementary decisions about what instruction to execute next. For example, a branch instruction might be “If the result of the last ALU operation is negative, jump to location A in the program; otherwise, continue with the following instruction.” Such instructions allow “if-then-else” decisions in a program and execution of a sequence of instructions, such as a “while-loop” that repeatedly does some set of instructions while some condition is met. A related instruction is the subroutine call, which transfers execution to a subprogram and then, after the subprogram finishes, returns to the main program where it left off.

In a stored-program computer, programs and data in memory are indistinguishable. Both are bit patterns—strings of 0s and 1s—that may be interpreted either as data or as program instructions, and both are fetched from memory by the CPU. The CPU has a program counter that holds the memory address (location) of the next instruction to be executed. The basic operation of the CPU is the “fetch-decode-execute” cycle:

  • Fetch the instruction from the address held in the program counter, and store it in a register.

  • Decode the instruction. Parts of it specify the operation to be done, and parts specify the data on which it is to operate. These may be in CPU registers or in memory locations. If it is a branch instruction, part of it will contain the memory address of the next instruction to execute once the branch condition is satisfied.

  • Fetch the operands, if any.

  • Execute the operation if it is an ALU operation.

  • Store the result (in a register or in memory), if there is one.

  • Update the program counter to hold the next instruction location, which is either the next memory location or the address specified by a branch instruction.

At the end of these steps the cycle is ready to repeat, and it continues until a special halt instruction stops execution.
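
The cycle just described can be caricatured in a few lines of Python. The toy instruction set, the sample program, and the data values below are invented purely for illustration; they do not correspond to any real machine language.

```python
# A toy stored-program machine, sketched only to illustrate the
# fetch-decode-execute cycle. The instruction set is invented for this example.

memory = [
    ("LOAD", 9),         # 0: copy the value at address 9 into the accumulator
    ("ADD", 10),         # 1: add the value at address 10
    ("STORE", 11),       # 2: store the accumulator at address 11
    ("JUMP_IF_NEG", 5),  # 3: branch to address 5 if the result was negative
    ("HALT", None),      # 4: stop
    ("HALT", None),      # 5: branch target (also stops)
    None, None, None,    # 6-8: unused
    7,                   # 9: data
    -3,                  # 10: data
    0,                   # 11: the result will be stored here
]

accumulator = 0
program_counter = 0

while True:
    instruction = memory[program_counter]   # fetch the instruction
    operation, address = instruction        # decode it
    program_counter += 1                    # default: the next memory location

    if operation == "LOAD":                 # execute the operation ...
        accumulator = memory[address]
    elif operation == "ADD":
        accumulator += memory[address]
    elif operation == "STORE":
        memory[address] = accumulator       # ... and store the result
    elif operation == "JUMP_IF_NEG":
        if accumulator < 0:
            program_counter = address       # branch taken: update the counter
    elif operation == "HALT":
        break                               # the special halt instruction

print(memory[11])   # prints 4, i.e. 7 + (-3), left at address 11 by STORE
```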

Steps of this cycle and all internal CPU operations are regulated by a clock that oscillates at a high frequency (now typically measured in gigahertz, or billions of cycles per second). Another factor that affects performance is the “word” size—the number of bits that are fetched at once from memory and on which CPU instructions operate. Digital words now consist of 32 or 64 bits, though sizes from 8 to 128 bits are seen.

Processing instructions one at a time, or serially, often creates a bottleneck because many program instructions may be ready and waiting for execution. Since the early 1980s, CPU design has followed a style originally called reduced-instruction-set computing (RISC). This design minimizes the transfer of data between memory and CPU (all ALU operations are done only on data in CPU registers) and calls for simple instructions that can execute very quickly. As the number of transistors on a chip has grown, the RISC design requires a relatively small portion of the CPU chip to be devoted to the basic instruction set. The remainder of the chip can then be used to speed CPU operations by providing circuits that let several instructions execute simultaneously, or in parallel.

There are two major kinds of instruction-level parallelism (ILP) in the CPU, both first used in early supercomputers. One is the pipeline, which allows the fetch-decode-execute cycle to have several instructions under way at once. While one instruction is being executed, another can obtain its operands, a third can be decoded, and a fourth can be fetched from memory. If each of these operations requires the same time, a new instruction can enter the pipeline at each phase and (for example) five instructions can be completed in the time that it would take to complete one without a pipeline. The other sort of ILP is to have multiple execution units in the CPU—duplicate arithmetic circuits, in particular, as well as specialized circuits for graphics instructions or for floating-point calculations (arithmetic operations involving noninteger numbers, such as 3.27). With this “superscalar” design, several instructions can execute at once.
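
A rough, purely illustrative calculation shows why pipelining pays off. The five stages and the one-cycle-per-stage timing below are idealized assumptions that ignore branches and stalls.

```python
# Illustrative arithmetic only: an ideal five-stage pipeline vs. no pipeline.
# Assumes every stage takes one clock cycle and ignores branches and stalls.

stages = 5
instructions = 100

cycles_without_pipeline = instructions * stages        # 500 cycles
cycles_with_pipeline = stages + (instructions - 1)     # 104 cycles: after the pipeline
                                                       # fills, one instruction
                                                       # completes per cycle
print(cycles_without_pipeline / cycles_with_pipeline)  # about 4.8, approaching 5x
```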

Both forms of ILP face complications. A branch instruction might render preloaded instructions in the pipeline useless if they entered it before the branch jumped to a new part of the program. Also, superscalar execution must determine whether an arithmetic operation depends on the result of another operation, since they cannot be executed simultaneously. CPUs now have additional circuits to predict whether a branch will be taken and to analyze instructional dependencies. These have become highly sophisticated and can frequently rearrange instructions to execute more of them in parallel.

Main memory

The earliest forms of computer main memory were mercury delay lines, which were tubes of mercury that stored data as ultrasonic waves, and cathode-ray tubes, which stored data as charges on the tubes’ screens. The magnetic drum, invented about 1948, used an iron oxide coating on a rotating drum to store data and programs as magnetic patterns.


In a binary computer any bistable device (something that can be placed in either of two states) can represent the two possible bit values of 0 and 1 and can thus serve as computer memory.

Magnetic-core memory, the first relatively cheap RAM device, appeared in 1952. It was composed of tiny, doughnut-shaped ferrite magnets threaded on the intersection points of a two-dimensional wire grid. These wires carried currents to change the direction of each core’s magnetization, while a third wire threaded through the doughnut detected its magnetic orientation.

The first integrated circuit (IC) memory chip appeared in 1971. IC memory stores a bit in a transistor-capacitor combination. The capacitor holds a charge to represent a 1 and no charge for a 0; the transistor switches it between these two states. Because a capacitor charge gradually decays, IC memory is dynamic RAM (DRAM), which must have its stored values refreshed periodically (every 20 milliseconds or so). There is also static RAM (SRAM), which does not have to be refreshed. Although faster than DRAM, SRAM uses more transistors and is thus more costly; it is used primarily for CPU internal registers and cache memory.


In addition to main memory, computers generally have special video memory (VRAM) to hold graphical images, called bitmaps, for the computer display. This memory is often dual-ported—a new image can be stored in it at the same time that its current data is being read and displayed.
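
As a purely illustrative calculation of why a separate video memory is worthwhile, the sketch below estimates the size of one full-screen bitmap; the screen resolution and colour depth are assumptions chosen only for the example.

```python
# Illustrative sketch: memory needed for one full-screen bitmap.
# The 1024 x 768 resolution and 24 bits per pixel are assumed for illustration.

width, height = 1024, 768
bits_per_pixel = 24        # 8 bits each for red, green, and blue

bytes_needed = width * height * bits_per_pixel // 8
print(f"{bytes_needed / 1_000_000:.1f} megabytes per frame")   # about 2.4 MB
```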

It takes time to specify an address in a memory chip, and, since memory is slower than a CPU, there is an advantage to memory that can transfer a series of words rapidly once the first address is specified. One such design is known as synchronous DRAM (SDRAM), which became widely used by 2001.


Nonetheless, data transfer through the “bus”—the set of wires that connect the CPU to memory and peripheral devices—is a bottleneck. For that reason, CPU chips now contain cache memory—a small amount of fast SRAM. The cache holds copies of data from blocks of main memory. A well-designed cache allows up to 85–90 percent of memory references to be done from it in typical programs, giving a several-fold speedup in data access.
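
The benefit of a cache can be sketched with a simple weighted average. The access times and hit rate below are assumptions for illustration, not measurements of any particular machine.

```python
# Illustrative arithmetic only: effective access time with a cache.
# The timings and hit rate are assumed values, not measurements.

cache_time = 2      # nanoseconds for a cache hit (assumed)
memory_time = 20    # nanoseconds for a main-memory access (assumed)
hit_rate = 0.90     # fraction of references satisfied by the cache

effective = hit_rate * cache_time + (1 - hit_rate) * memory_time
print(effective)                 # 3.8 ns on average
print(memory_time / effective)   # roughly 5x faster than always going to memory
```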

The time between two memory reads or writes (cycle time) was about 17 microseconds (millionths of a second) for early core memory and about 1 microsecond for core in the early 1970s. The first DRAM had a cycle time of about half a microsecond, or 500 nanoseconds (billionths of a second), and today it is 20 nanoseconds or less. An equally important measure is the cost per bit of memory. The first DRAM stored 128 bytes (1 byte = 8 bits) and cost about $10, or $80,000 per megabyte (millions of bytes). In 2001 DRAM could be purchased for less than $0.25 per megabyte. This vast decline in cost made possible graphical user interfaces (GUIs), the display fonts that word processors use, and the manipulation and visualization of large masses of data by scientific computers.
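
The cost figures quoted above can be checked with a line of arithmetic, reproduced here only as a worked example.

```python
# Worked example of the cost-per-megabyte figures quoted in the text.

first_dram_bytes = 128
first_dram_price = 10.00                           # dollars
price_per_megabyte_early = first_dram_price / first_dram_bytes * 1_000_000
print(price_per_megabyte_early)                    # 78125.0, roughly $80,000 per megabyte

# Compared with less than $0.25 per megabyte in 2001:
print(price_per_megabyte_early / 0.25)             # 312500.0, a decline of over 300,000-fold
```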


Secondary memory

Secondary memory on a computer is storage for data and programs not in use at the moment. In addition to punched cards and paper tape, early computers also used magnetic tape for secondary storage. Tape is cheap, either on large reels or in small cassettes, but has the disadvantage that it must be read or written sequentially from one end to the other.

IBM introduced the first magnetic disk, the RAMAC, in 1955; it held 5 megabytes and rented for $3,200 per month. Magnetic disks are platters coated with iron oxide, like tape and drums. An arm with a tiny wire coil, the read/write (R/W) head, moves radially over the disk, which is divided into concentric tracks composed of small arcs, or sectors, of data. Magnetized regions of the disk generate small currents in the coil as it passes, thereby allowing it to “read” a sector; similarly, a small current in the coil will induce a local magnetic change in the disk, thereby “writing” to a sector. The disk rotates rapidly (up to 15,000 rotations per minute), and so the R/W head can rapidly reach any sector on the disk.
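
A purely illustrative calculation makes the speed of a spinning disk concrete: at 15,000 rotations per minute, the wait for a sector to come around under the head averages about two milliseconds.

```python
# Illustrative arithmetic only: average rotational delay of a disk
# spinning at 15,000 rotations per minute.

rpm = 15_000
seconds_per_rotation = 60 / rpm            # 0.004 s, i.e. 4 ms per rotation
average_wait = seconds_per_rotation / 2    # on average, half a rotation
print(f"{average_wait * 1000:.0f} ms")     # about 2 ms before the sector arrives
```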


Early disks had large removable platters. In the 1970s IBM introduced sealed disks with fixed platters known as Winchester disks—perhaps because the first ones had two 30-megabyte platters, suggesting the Winchester 30-30 rifle. Not only was the sealed disk protected against dirt, the R/W head could also “fly” on a thin air film, very close to the platter. By putting the head closer to the platter, the region of oxide film that represented a single bit could be much smaller, thus increasing storage capacity. This basic technology is still used.


Refinements have included putting multiple platters—10 or more—in a single disk drive, with a pair of R/W heads for the two surfaces of each platter in order to increase storage and data transfer rates. Even greater gains have resulted from improving control of the radial motion of the disk arm from track to track, resulting in denser distribution of data on the disk. By 2002 such densities had reached over 8,000 tracks per centimetre (20,000 tracks per inch), and a platter the diameter of a coin could hold over a gigabyte of data. In 2002 an 80-gigabyte disk cost about $200—only one ten-millionth of the 1955 cost and representing an annual decline of nearly 30 percent, similar to the decline in the price of main memory.

Optical storage devices—CD-ROM (compact disc, read-only memory) and DVD-ROM (digital videodisc, or versatile disc)—appeared in the mid-1980s and ’90s. They both represent bits as tiny pits in plastic, organized in a long spiral like a phonograph record, written and read with lasers. A CD-ROM can hold 2 gigabytes of data, but the inclusion of error-correcting codes (to correct for dust, small defects, and scratches) reduces the usable data to 650 megabytes. DVDs are denser, have smaller pits, and can hold 17 gigabytes with error correction.

The DVD player uses a laser that is higher-powered and has a correspondingly finer focus point than that of the CD player. This enables it to resolve shorter pits and narrower separation tracks and thereby accounts for the DVD's greater storage capacity. (Encyclopædia Britannica, Inc.)

Optical storage devices are slower than magnetic disks, but they are well suited for making master copies of software or for multimedia (audio and video) files that are read sequentially. There are also writable and rewritable CD-ROMs (CD-R and CD-RW) and DVD-ROMs (DVD-R and DVD-RW) that can be used like magnetic tapes for inexpensive archiving and sharing of data.

The decreasing cost of memory continues to make new uses possible. A single CD-ROM can store 100 million words, more than twice as many words as are contained in the printed Encyclopædia Britannica. A DVD can hold a feature-length motion picture. Nevertheless, even larger and faster storage systems, such as three-dimensional optical media, are being developed for handling data for computer simulations of nuclear reactions, astronomical data, and medical data, including X-ray images. Such applications typically require many terabytes (1 terabyte = 1,000 gigabytes) of storage, which can lead to further complications in indexing and retrieval.

David Hemmendinger

Peripherals

Computer peripherals are devices used to input information and instructions into a computer for storage or processing and to output the processed data. In addition, devices that enable the transmission and reception of data between computers are often classified as peripherals.

Input devices

A plethora of devices falls into the category of input peripherals. Typical examples include keyboards, mice, trackballs, pointing sticks, joysticks, digital tablets, touch pads, and scanners.

Keyboards contain mechanical or electromechanical switches that change the flow of current through the keyboard when depressed. A microprocessor embedded in the keyboard interprets these changes and sends a signal to the computer. In addition to letter and number keys, most keyboards also include “function” and “control” keys that modify input or send special commands to the computer.

Mechanical mice and trackballs operate alike, using a rubber or rubber-coated ball that turns two shafts connected to a pair of encoders that measure the horizontal and vertical components of a user’s movement, which are then translated into cursor movement on a computer monitor. Optical mice employ a light beam and camera lens to translate motion of the mouse into cursor movement.

Computer mouse. (Encyclopædia Britannica, Inc.)

Pointing sticks, which are popular on many laptop systems, employ a technique that uses a pressure-sensitive resistor. As a user applies pressure to the stick, the resistor increases the flow of electricity, thereby signaling that movement has taken place. Most joysticks operate in a similar manner.

Digital tablets and touch pads are similar in purpose and functionality. In both cases, input is taken from a flat pad that contains electrical sensors that detect the presence of either a special tablet pen or a user’s finger, respectively.

A scanner is somewhat akin to a photocopier. A light source illuminates the object to be scanned, and the varying amounts of reflected light are captured and measured by an analog-to-digital converter attached to light-sensitive diodes. The diodes generate a pattern of binary digits that are stored in the computer as a graphical image.

Output devices

Printers are a common example of output devices. New multifunction peripherals that integrate printing, scanning, and copying into a single device are also popular. Computer monitors are sometimes treated as peripherals. High-fidelity sound systems are another example of output devices often classified as computer peripherals. Manufacturers have announced devices that provide tactile feedback to the user—“force feedback” joysticks, for example. This highlights the complexity of classifying peripherals—a joystick with force feedback is truly both an input and an output peripheral.


Early printers often used a process known as impact printing, in which a small number of pins were driven into a desired pattern by an electromagnetic printhead. As each pin was driven forward, it struck an inked ribbon and transferred a single dot the size of the pinhead to the paper. Multiple dots combined into a matrix to form characters and graphics, hence the name dot matrix. Another early print technology, the daisy-wheel printer, made impressions of whole characters with a single blow of an electromagnetic printhead, similar to an electric typewriter.

Laser printers have replaced such printers in most commercial settings. They employ a focused beam of light to etch patterns of positively charged particles on the surface of a cylindrical drum made of negatively charged organic, photosensitive material. As the drum rotates, negatively charged toner particles adhere to the patterns etched by the laser and are transferred to the paper. Another, less expensive printing technology developed for the home and small businesses is inkjet printing. The majority of inkjet printers operate by ejecting extremely tiny droplets of ink to form characters in a matrix of dots—much like dot matrix printers.

Laser printer. (Encyclopædia Britannica, Inc.)

Inkjet printer. Colour inkjet printers can produce nearly any colour by simultaneously heating and depositing various amounts of pigment from black, cyan, magenta, and yellow ink cartridges. (Encyclopædia Britannica, Inc.)

Computer display devices have been in use almost as long as computers themselves. Early computer displays employed the same cathode-ray tubes (CRTs) used in television and radar systems. The fundamental principle behind CRT displays is the emission of a controlled stream of electrons that strike light-emitting phosphors coating the inside of the screen. The screen itself is divided into multiple scan lines, each of which contains a number of pixels—the rough equivalent of dots in a dot matrix printer. The resolution of a monitor is determined by its pixel size. More recent liquid crystal displays (LCDs) rely on liquid crystal cells that realign incoming polarized light. The realigned beams pass through a filter that permits only those beams with a particular alignment to pass. By controlling the liquid crystal cells with electrical charges, various colours or shades are made to appear on the screen.

Communication devices

The most familiar example of a communication device is the common telephone modem (from modulator/demodulator). Modems modulate, or transform, a computer’s digital message into an analog signal for transmission over standard telephone networks, and they demodulate the analog signal back into a digital message on reception. In practice, telephone network components limit analog data transmission to about 48 kilobits per second. Standard cable modems operate in a similar manner over cable television networks, which have a total transmission capacity of 30 to 40 megabits per second over each local neighbourhood “loop.” (Like Ethernet cards, cable modems are actually local area network devices, rather than true modems, and transmission performance deteriorates as more users share the loop.) Asymmetric digital subscriber line (ADSL) modems can be used for transmitting digital signals over a local dedicated telephone line, provided there is a telephone office nearby—in theory, within 5,500 metres (18,000 feet) but in practice about a third of that distance. ADSL is asymmetric because transmission rates differ to and from the subscriber: 8 megabits per second “downstream” to the subscriber and 1.5 megabits per second “upstream” from the subscriber to the service provider. In addition to devices for transmitting over telephone and cable wires, wireless communication devices exist for transmitting infrared, radiowave, and microwave signals.
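
To make the quoted rates concrete, the purely illustrative calculation below works out how long a 10-megabyte file (an assumed size) would take at each of them, ignoring protocol overhead and sharing.

```python
# Illustrative arithmetic only: transfer time for a 10-megabyte file (assumed size)
# at the nominal rates quoted in the text, ignoring overhead and sharing.

file_bits = 10 * 1_000_000 * 8              # 10 megabytes expressed in bits

for name, bits_per_second in [
    ("telephone modem", 48_000),            # about 48 kilobits per second
    ("ADSL downstream", 8_000_000),         # 8 megabits per second
    ("cable loop (unshared)", 30_000_000),  # 30 megabits per second, before sharing
]:
    print(f"{name}: {file_bits / bits_per_second:.0f} seconds")

# telephone modem: 1667 seconds (nearly half an hour)
# ADSL downstream: 10 seconds
# cable loop (unshared): 3 seconds
```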

Peripheral interfaces

A variety of techniques have been employed in the design of interfaces to link computers and peripherals. An interface of this nature is often termed a bus. This nomenclature derives from the presence of many paths of electrical communication (e.g., wires) bundled or joined together in a single device. Multiple peripherals can be attached to a single bus—the peripherals need not be homogeneous. An example is the small computer systems interface (SCSI; pronounced “scuzzy”). This popular standard allows heterogeneous devices to communicate with a computer by sharing a single bus. Under the auspices of various national and international organizations, many such standards have been established by manufacturers and users of computers and peripherals.

Buses can be loosely classified as serial or parallel. Parallel buses have a relatively large number of wires bundled together that enable data to be transferred in parallel. This increases the throughput, or rate of data transfer, between the peripheral and computer. SCSI buses are parallel buses. Examples of serial buses include the universal serial bus (USB). USB has an interesting feature in that the bus carries not only data to and from the peripheral but also electrical power. Examples of other peripheral integration schemes include integrated drive electronics (IDE) and enhanced integrated drive electronics (EIDE). Predating USB, these two schemes were designed initially to support greater flexibility in adapting hard disk drives to a variety of different computer makers.

William Morton Pottenger, The Editors of Encyclopaedia Britannica

Microprocessor integrated circuits

Before integrated circuits (ICs) were invented, computers used circuits of individual transistors and other electrical components—resistors, capacitors, and diodes—soldered to a circuit board. In 1959 Jack Kilby at Texas Instruments Incorporated and Robert Noyce at Fairchild Semiconductor Corporation filed patents for integrated circuits. Kilby found how to make all the circuit components out of germanium, the semiconductor material then commonly used for transistors. Noyce used silicon, which is now almost universal, and found a way to build the interconnecting wires as well as the components on a single silicon chip, thus eliminating all soldered connections except for those joining the IC to other components. Brief discussions of IC design, fabrication, and some design issues follow. For a more extensive discussion, see semiconductor and integrated circuit.


Design

Today IC design starts with a circuit description written in a hardware-specification language (like a programming language) or specified graphically with a digital design program. Computer simulation programs then test the design before it is approved. Another program translates the basic circuit layout into a multilayer network of electronic elements and wires.

Fabrication

The IC itself is formed on a silicon wafer cut from a cylinder of pure silicon—now commonly 200–300 mm (8–12 inches) in diameter. Since more chips can be cut from a larger wafer, the material unit cost of a chip goes down with increasing wafer size. A photographic image of each layer of the circuit design is made, and photolithography is used to expose a corresponding circuit of “resist” that has been put on the wafer. The unwanted resist is washed off and the exposed material then etched. This process is repeated to form various layers, with silicon dioxide (glass) used as electrical insulation between layers.


Between these production stages, the silicon is doped with carefully controlled amounts of impurities such as arsenic and boron. These create an excess and a deficiency, respectively, of electrons, thus creating regions with extra available negative charges (n-type) and positive “holes” (p-type). These adjacent doped regions form p-n junction transistors, with electrons (in the n-type regions) and holes (in the p-type regions) migrating through the silicon conducting electricity.

Layers of metal or conducting polycrystalline silicon are also placed on the chip to provide interconnections between its transistors. When the fabrication is complete, a final layer of insulating glass is added, and the wafer is sawed into individual chips. Each chip is tested, and those that pass are mounted in a protective package with external contacts.


