Origins
During the 1960s, computer processors were constructed out of small and
medium-scale ICs—each containing from tens to a few hundred transistors. These were
placed and soldered onto printed circuit boards, and often multiple boards were
interconnected in a chassis. The large number of discrete logic gates used more
electrical power—and therefore produced more heat—than a more integrated design
with fewer ICs. The distance that signals had to travel between ICs on the
boards limited a computer's operating speed.
During NASA's Apollo missions to the Moon in the 1960s and 1970s, all onboard computations for primary guidance, navigation, and control were handled by a small custom processor, the Apollo Guidance Computer. It used wire-wrap circuit boards whose only logic elements were three-input NOR gates.[3]
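Building an entire processor from a single gate type works because the NOR gate is functionally complete: NOT, OR, and AND can all be derived from it. A minimal sketch in Python (illustrative only, not actual AGC logic; the function names are invented for this example):

```python
def nor3(a, b, c):
    """3-input NOR: output is 1 only when all three inputs are 0."""
    return 0 if (a or b or c) else 1

def not_(a):
    # Tying all inputs together turns a NOR into an inverter.
    return nor3(a, a, a)

def or_(a, b):
    # NOR followed by NOT recovers OR.
    return not_(nor3(a, b, 0))

def and_(a, b):
    # De Morgan's law: a AND b == NOR(NOT a, NOT b, 0).
    return nor3(not_(a), not_(b), 0)
```

Every other combinational circuit in the machine can then be composed from these derived gates.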
The integration of a whole CPU onto a single chip, or onto a few chips, greatly reduced the cost of processing power. Integrated circuit processors were produced in large numbers by highly automated processes, so unit cost was low. Single-chip processors also increased reliability, because there were far fewer electrical connections to fail. As microprocessor designs get faster, the cost of manufacturing a chip (with smaller components built on a semiconductor die of the same size) generally stays the same.
Microprocessors combined into one or a few large-scale ICs the architectures that had previously been implemented using many medium- and small-scale integrated circuits. Continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.
The first microprocessors emerged in the early 1970s and were used for electronic calculators, performing binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, and various kinds of automation, followed soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers from the mid-1970s on.
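BCD suits 4-bit hardware because each decimal digit 0–9 fits exactly in one nibble; sums above 9 fall into the six unused nibble codes and need a +6 correction. A hedged sketch of single-digit BCD addition in Python (the function name is mine, not taken from any chip's documentation):

```python
def bcd_add_digit(a, b, carry_in=0):
    """Add two BCD digits (0-9) held in 4-bit nibbles.

    A binary sum above 9 no longer encodes a decimal digit, so 6 is
    added to skip the six invalid nibble values (10-15), yielding a
    corrected digit and a decimal carry-out.
    """
    s = a + b + carry_in
    if s > 9:
        s += 6              # correction for invalid BCD codes
        return s & 0xF, 1   # low nibble is the digit; carry out
    return s, 0
```

For example, 7 + 5 gives binary 12; after correction the result is digit 2 with a carry of 1, i.e. decimal 12.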
Since the early 1970s, the increase in capacity of microprocessors has followed Moore's law, which originally suggested that the number of components that can be fitted onto a chip doubles every year. With present technology the doubling actually takes about two years,[4] and Moore later revised the period accordingly.[5]
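The doubling claim can be made concrete with a small projection function, sketched in Python (the function name and starting count are illustrative assumptions, not figures from the cited sources):

```python
def moore_projection(initial_count, years, doubling_period=2):
    """Project a component count under Moore's law: the count
    doubles once per doubling_period years."""
    return initial_count * 2 ** (years / doubling_period)

# Starting from an assumed 2,300 components, ten years at a
# two-year doubling period gives five doublings, i.e. a 32x increase.
projected = moore_projection(2300, 10)
```

With the original one-year period, the same ten years would instead mean ten doublings, a roughly 1,000-fold increase, which is why the choice of period matters so much.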