Computer, machine that performs tasks, such as calculations or electronic communication, under the control of a set of instructions called a program. Programs usually reside within the computer and are retrieved and processed by the computer’s electronics. The program results are stored or routed to output devices, such as video display monitors or printers. Computers perform a wide variety of activities reliably, accurately, and quickly.
People use computers in many ways. In business, computers track inventories with bar codes and scanners, check the credit status of customers, and transfer funds electronically. In homes, tiny computers embedded in the electronic circuitry of most appliances control the indoor temperature, operate home security systems, tell the time, and turn videocassette recorders (VCRs) on and off. Computers in automobiles regulate the flow of fuel, thereby increasing gas mileage, and are used in anti-theft systems. Computers also entertain, creating digitized sound on stereo systems or computer-animated features from a digitally encoded laser disc. Computer programs, or applications, exist to aid every level of education, from programs that teach simple addition or sentence construction to programs that teach advanced calculus. Educators use computers to track grades and communicate with students; with computer-controlled projection units, they can add graphics, sound, and animation to their communications (see Computer-Aided Instruction). Computers are used extensively in scientific research to solve mathematical problems, investigate complicated data, or model systems that are too costly or impractical to build, such as testing the air flow around the next generation of aircraft. The military employs computers in sophisticated communications to encode and unscramble messages, and to keep track of personnel and supplies.
The physical computer and its components are known as hardware. Computer hardware includes the memory that stores data and program instructions; the central processing unit (CPU) that carries out program instructions; the input devices, such as a keyboard or mouse, that allow the user to communicate with the computer; the output devices, such as printers and video display monitors, that enable the computer to present information to the user; and buses (hardware lines or wires) that connect these and other computer components. The programs that run the computer are called software. Software generally is designed to perform a particular type of task—for example, to control the arm of a robot to weld a car’s body, to write a letter, to display and modify a photograph, or to direct the general operation of the computer.
When a computer is turned on, it searches its memory for instructions. These instructions tell the computer how to start up. Usually, one of the first sets of these instructions is a special program called the operating system, which is the software that makes the computer work. It prompts the user (or other machines) for input and commands, reports the results of these commands and other operations, stores and manages data, and controls the sequence of the software and hardware actions. When the user requests that a program run, the operating system loads the program into the computer’s memory and runs the program. Popular operating systems, such as Microsoft Windows and the Macintosh system (Mac OS), have graphical user interfaces (GUIs), which use tiny pictures, or icons, to represent various files and commands. To access these files or commands, the user clicks the mouse on the icon or presses a combination of keys on the keyboard. Some operating systems allow the user to carry out these tasks via voice, touch, or other input methods.
To process information electronically, data are stored in a computer in the form of binary digits, or bits, each of which can take one of two values (0 or 1). If a second bit is added to a single bit of information, the number of possible patterns doubles, resulting in four possible combinations: 00, 01, 10, or 11. A third bit added to this two-bit representation again doubles the number of combinations, resulting in eight possibilities: 000, 001, 010, 011, 100, 101, 110, or 111. Each time a bit is added, the number of possible patterns doubles. A group of eight bits is called a byte; a byte has 256 possible combinations of 0s and 1s. See also Expanded Memory; Extended Memory.
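This doubling rule can be demonstrated with a short Python sketch (written purely for illustration; the function name is invented here):

```python
# Enumerate every possible pattern for a given number of bits.
from itertools import product

def bit_patterns(n_bits):
    """Return all patterns of n_bits as strings of 0s and 1s."""
    return ["".join(bits) for bits in product("01", repeat=n_bits)]

for n in range(1, 4):
    patterns = bit_patterns(n)
    print(f"{n} bit(s): {len(patterns)} patterns -> {patterns}")

# Each added bit doubles the count, so a byte (8 bits) has 2 ** 8 patterns.
print("8 bits (one byte):", 2 ** 8, "patterns")  # 256
```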
A byte is a useful quantity in which to store information because it provides enough possible patterns to represent the entire alphabet, in lower and upper cases, as well as numeric digits, punctuation marks, and several character-sized graphics symbols, including certain non-English characters. A byte also can be interpreted as a pattern that represents a number between 0 and 255. A kilobyte—1,024 bytes—can store about 1,000 characters; a megabyte can store about 1 million characters; a gigabyte can store about 1 billion characters; and a terabyte can store about 1 trillion characters. Computer programmers usually decide how a given byte should be interpreted—that is, as a single character, a character within a string of text, a single number, or part of a larger number. Numbers can represent anything from chemical bonds to dollar figures to colors to sounds.
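How a byte is interpreted is a decision made by the programmer, not a property of the byte itself. A brief Python sketch (illustrative only) shows a single byte value read first as a number and then as a character:

```python
# One byte holds a value between 0 and 255.
byte_value = 0b01000001  # the bit pattern 01000001

# Interpreted as a number, this pattern means 65.
print(byte_value)       # 65

# Interpreted as ASCII text, the same pattern means the letter 'A'.
print(chr(byte_value))  # A
```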
The physical memory of a computer is either random access memory (RAM), which can be read or changed by the user or computer, or read-only memory (ROM), which can be read by the computer but not altered in any way. One way to store data is within the circuitry of the computer, usually in tiny computer chips that hold millions of bytes of information. The memory within these computer chips is RAM. Data also can be stored outside the circuitry of the computer on external storage devices, such as magnetic floppy disks, which can store about 2 megabytes of information; hard drives, which can store gigabytes of information; compact discs (CDs), which can store up to 680 megabytes of information; and digital video discs (DVDs), which can store up to 8.5 gigabytes of information. A single CD can store nearly as much information as several hundred floppy disks, and some DVDs can hold more than 12 times as much data as a CD.
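The closing comparisons follow from the capacities just given; the quick Python calculation below (which assumes the common 1.44-megabyte formatted floppy capacity) confirms them:

```python
# Storage capacities from the text, expressed in megabytes.
floppy_mb = 1.44         # assumed: a standard 3.5-inch formatted floppy disk
cd_mb = 680
dvd_mb = 8.5 * 1024      # 8.5 gigabytes converted to megabytes

print(round(cd_mb / floppy_mb))   # ~472: several hundred floppies per CD
print(round(dvd_mb / cd_mb, 1))   # ~12.8: more than 12 times a CD per DVD
```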
The bus enables the components in a computer, such as the CPU and the memory circuits, to communicate as program instructions are being carried out. The bus is usually a flat cable with numerous parallel wires. Each wire can carry one bit, so the bus can transmit many bits along the cable at the same time. For example, a 16-bit bus, with 16 parallel wires, allows the simultaneous transmission of 16 bits (2 bytes) of information from one component to another. Early computer designs utilized a single or very few buses. Modern designs typically use many buses, some of them specialized to carry particular forms of data, such as graphics.
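To make the arithmetic concrete, the Python sketch below packs two 8-bit bytes into the single 16-bit value that such a bus could move in one transfer (a simplified model written for illustration, not a hardware simulation):

```python
# Two bytes (8 bits each) to be moved across a 16-bit bus at once.
high_byte = 0x12
low_byte = 0x34

# Combine them into one 16-bit word: high byte in the upper 8 bits.
word = (high_byte << 8) | low_byte
print(f"{word:016b}")  # 0001001000110100 -- 16 bits, or 2 bytes, per transfer
```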
Input devices, such as a keyboard or mouse, permit the computer user to communicate with the computer. Other input devices include a joystick, a rodlike device often used by people who play computer games; a scanner, which converts images such as photographs into digital images that the computer can manipulate; a touch panel, which senses the placement of a user’s finger and can be used to execute commands or access files; and a microphone, used to input sounds such as the human voice, which can activate computer commands in conjunction with voice recognition software. “Tablet” computers are being developed that will allow users to interact with their screens using a penlike device.
Information from an input device or from the computer’s memory is communicated via the bus to the central processing unit (CPU), which is the part of the computer that translates commands and runs programs. The CPU is a microprocessor chip—that is, a single piece of silicon containing millions of tiny, microscopically wired electrical components. Information is stored in a CPU memory location called a register. Registers can be thought of as the CPU’s tiny scratchpad, temporarily storing instructions or data. When a program is running, one special register called the program counter keeps track of which program instruction comes next by maintaining the memory location of the next program instruction to be executed. The CPU’s control unit coordinates and times the CPU’s functions, and it uses the program counter to locate and retrieve the next instruction from memory.
In a typical sequence, the CPU locates the next instruction in the appropriate memory device. The instruction then travels along the bus from the computer’s memory to the CPU, where it is stored in a special instruction register. Meanwhile, the program counter changes—usually increasing a small amount—so that it contains the location of the instruction that will be executed next. The current instruction is analyzed by a decoder, which determines what the instruction will do. Any data the instruction needs are retrieved via the bus and placed in the CPU’s registers. The CPU executes the instruction, and the results are stored in another register or copied to specific memory locations via a bus. This entire sequence of steps is called an instruction cycle. Frequently, several instructions may be in process simultaneously, each at a different stage in its instruction cycle. This is called pipeline processing.
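The fetch-decode-execute sequence described above can be modeled in a few lines of Python. The toy instruction set and single register below are invented for illustration and do not correspond to any real CPU:

```python
# A toy model of the instruction cycle: fetch, decode, execute.
memory = [
    ("LOAD", 5),      # put the value 5 into the register
    ("ADD", 3),       # add 3 to the register
    ("PRINT", None),  # send the register's contents to output
]

program_counter = 0  # location of the next instruction to execute
register = 0         # the CPU's scratchpad value

while program_counter < len(memory):
    # Fetch: copy the next instruction out of memory.
    instruction, operand = memory[program_counter]
    program_counter += 1  # now points at the instruction after this one

    # Decode and execute: determine what the instruction does, then do it.
    if instruction == "LOAD":
        register = operand
    elif instruction == "ADD":
        register += operand
    elif instruction == "PRINT":
        print(register)  # prints 8
```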
Once the CPU has executed the program instruction, the program may request that the information be communicated to an output device, such as a video display monitor or a flat liquid crystal display. Other output devices are printers, overhead projectors, videocassette recorders (VCRs), and speakers.
Programming languages provide the commands from which software is built. A CPU is capable of understanding only a limited set of instructions known as machine code; all other programming languages must be converted to machine code before the CPU can carry them out. Computer programmers, however, prefer to use languages built from words or other familiar commands because they are easier to work with. Programs written in these languages must first be translated, and the translation can produce code that runs less efficiently than code written directly in the machine’s language.
Computer programs that can be run by a computer’s operating system are called executables. An executable program is a sequence of extremely simple instructions known as machine code. These instructions are specific to the individual computer’s CPU and associated hardware; for example, Intel Pentium and PowerPC microprocessor chips have different machine languages and require different sets of codes to perform the same task. Machine code instructions are few in number (roughly 20 to 200, depending on the computer and the CPU). Typical instructions are for copying data from a memory location or for adding the contents of two memory locations (usually registers in the CPU). Complex tasks require a sequence of these simple instructions. Machine code instructions are binary—that is, sequences of bits (0s and 1s). Because these sequences are long strings of 0s and 1s and are difficult to read, computer instructions usually are not written in machine code. Instead, computer programmers write code in an assembly language or a high-level language.
Assembly language uses easy-to-remember commands that are more understandable to programmers than machine-language commands. Each machine language instruction has an equivalent command in assembly language. For example, in one Intel assembly language, the statement “MOV A, B” instructs the computer to copy the data in location B into location A (the destination is named first). The same instruction in machine code is a string of 16 0s and 1s. Once an assembly-language program is written, it is converted to a machine-language program by another program called an assembler.
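An assembler is, at heart, a translator from mnemonics to bit patterns. The Python sketch below assembles a made-up two-register instruction into an 8-bit code; the mnemonics, opcodes, and register numbers are all hypothetical, chosen only to show the idea:

```python
# A toy assembler: translate a mnemonic statement into (made-up) machine code.
OPCODES = {"MOV": 0b0001, "ADD": 0b0010}  # hypothetical 4-bit opcodes
REGISTERS = {"A": 0b00, "B": 0b01}        # hypothetical 2-bit register numbers

def assemble(line):
    """Turn a statement like 'MOV A, B' into one 8-bit instruction word."""
    mnemonic, operands = line.split(None, 1)
    dest, src = (part.strip() for part in operands.split(","))
    # Opcode in the high 4 bits, then destination, then source register.
    word = (OPCODES[mnemonic] << 4) | (REGISTERS[dest] << 2) | REGISTERS[src]
    return format(word, "08b")

print(assemble("MOV A, B"))  # 00010001 -- the string of 0s and 1s the CPU sees
```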
Assembly language is fast and powerful because of its correspondence with machine language. It is still difficult to use, however, because assembly-language instructions are a series of abstract codes and each instruction carries out a relatively simple task. In addition, different CPUs use different machine languages and therefore require different programs and different assembly languages. Assembly language is sometimes inserted into a high-level language program to carry out specific hardware tasks or to speed up parts of the high-level program that are executed frequently.
In 1965 semiconductor pioneer Gordon Moore predicted that the number of transistors contained on a computer chip would double every year. This is now known as Moore’s Law, and it has proven to be somewhat accurate. The number of transistors and the computational speed of microprocessors currently double approximately every 18 months. Components continue to shrink in size and are becoming faster, cheaper, and more versatile.
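A quantity that doubles every 18 months compounds quickly, as the short Python calculation below illustrates (the starting transistor count is hypothetical):

```python
# Growth under an 18-month doubling rule: count multiplies by 2 ** (months / 18).
start = 1_000_000  # hypothetical starting transistor count

for years in (3, 6, 9):
    months = years * 12
    print(f"after {years} years: {start * 2 ** (months / 18):,.0f} transistors")
# After 3 years the count is 4x the start; after 6 years, 16x; after 9, 64x.
```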
With their increasing power and versatility, computers simplify day-to-day life. Unfortunately, as computer use becomes more widespread, so do the opportunities for misuse. Computer hackers—people who illegally gain access to computer systems—often violate privacy and can tamper with or destroy records. Programs called viruses or worms can replicate and spread from computer to computer, erasing information or causing malfunctions. Other individuals have used computers to electronically embezzle funds and alter credit histories (see Computer Security). New ethical issues also have arisen, such as how to regulate material on the Internet and the World Wide Web. Long-standing issues, such as privacy and freedom of expression, are being reexamined in light of the digital revolution. Individuals, companies, and governments are working to solve these problems through informed conversation, compromise, better computer security, and regulatory legislation.
Computers will continue to become more advanced, and they will also become easier to use. Improved speech recognition will make the operation of a computer easier. Virtual reality, the technology of interacting with a computer using all of the human senses, will also contribute to better interfaces between humans and computers. Standards for virtual-reality programming languages—for example, Virtual Reality Modeling Language (VRML)—are currently in use or are being developed for the World Wide Web.
Other, exotic models of computation are being developed, including biological computing that uses living organisms, molecular computing that uses molecules with particular properties, and computing that uses deoxyribonucleic acid (DNA), the basic unit of heredity, to store data and carry out operations. These are examples of possible future computational platforms that, so far, are limited in abilities or are strictly theoretical. Scientists investigate them because of the physical limitations of miniaturizing circuits embedded in silicon. There are also limitations related to heat generated by even the tiniest of transistors.
Intriguing breakthroughs occurred in the area of quantum computing in the late 1990s. Quantum computers under development use components of a chloroform molecule (a combination of carbon, hydrogen, and chlorine atoms) and a variation of the technology behind a medical procedure called magnetic resonance imaging (MRI) to compute at a molecular level. Scientists use a branch of physics called quantum mechanics, which describes the behavior of subatomic particles (particles that make up atoms), as the basis for quantum computing. Quantum computers may one day be thousands to millions of times faster than current computers, because they take advantage of the laws that govern the behavior of subatomic particles. These laws, in effect, allow quantum computers to examine many possible answers to a query at once. Future uses of quantum computers could include code breaking (see cryptography) and large database queries. Theorists of chemistry, computer science, mathematics, and physics are now working to determine the possibilities and limitations of quantum computing.
Communications between computer users and networks will benefit from new technologies such as broadband communication systems, which can carry significantly more data at higher speeds to and from the vast interconnected databases that continue to grow in number and type.
In 1948, at Bell Telephone Laboratories, American physicists Walter Houser Brattain, John Bardeen, and William Bradford Shockley developed the transistor, a device that can act as an electric switch. The transistor had a tremendous impact on computer design, replacing costly, energy-inefficient, and unreliable vacuum tubes.
In the late 1960s integrated circuits (tiny transistors and other electrical components arranged on a single chip of silicon) replaced individual transistors in computers. Integrated circuits resulted from the simultaneous, independent work of Jack Kilby at Texas Instruments and Robert Noyce of the Fairchild Semiconductor Corporation in the late 1950s. As integrated circuits became miniaturized, more components could be designed into a single computer circuit. In the 1970s refinements in integrated circuit technology led to the development of the modern microprocessor, an integrated circuit that contained thousands of transistors. Modern microprocessors can contain more than 40 million transistors.
Manufacturers used integrated circuit technology to build smaller and cheaper computers. The first of these so-called personal computers (PCs)—the Altair 8800—appeared in 1975, sold by Micro Instrumentation Telemetry Systems (MITS). The Altair used an 8-bit Intel 8080 microprocessor, had 256 bytes of RAM, received input through switches on the front panel, and displayed output on rows of light-emitting diodes (LEDs). Refinements in the PC continued with the inclusion of video displays, better storage devices, and CPUs with more computational abilities. Graphical user interfaces were first designed by the Xerox Corporation, then later used successfully by Apple Computer, Inc. Today the development of sophisticated operating systems such as Windows, the Mac OS, and Linux enables computer users to run programs and manipulate data in ways that were unimaginable in the mid-20th century.
Several researchers claim the “record” for the largest single calculation ever performed. One large single calculation was accomplished by physicists at IBM in 1995. They solved one million trillion mathematical subproblems by continuously running 448 computers for two years. Their analysis demonstrated the existence of a previously hypothetical subatomic particle called a glueball. Japan, Italy, and the United States are collaborating to develop new supercomputers that will run these types of calculations 100 times faster.
In 1996 IBM challenged Garry Kasparov, the reigning world chess champion, to a chess match with a supercomputer called Deep Blue. The computer had the ability to compute more than 100 million chess positions per second. In a 1997 rematch Deep Blue defeated Kasparov, becoming the first computer to win a match against a reigning world chess champion with regulation time controls. Many experts predict these types of parallel processing machines will soon surpass human chess playing ability, and some speculate that massive calculating power will one day replace intelligence. Deep Blue serves as a prototype for future computers that will be required to solve complex problems. At issue, however, is whether a computer can be developed with the ability to learn to solve problems on its own, rather than one programmed to solve a specific set of tasks.