Saturday, June 12, 2010

The Third Generation: Integrated Circuit Computers (1965–1980)

The real explosion in computer use came with the integrated circuit generation. Jack Kilby invented the integrated circuit (IC), or microchip, made of germanium. Six months later, Robert Noyce (who had also been working on integrated circuit design) created a similar device using silicon instead of germanium. This is the silicon chip upon which the computer industry was built. Early ICs allowed dozens of transistors to exist on a single silicon chip that was smaller than a single “discrete component” transistor. Computers became faster, smaller, and cheaper, bringing huge gains in processing power.

The IBM System/360 family of computers was among the first commercially available systems to be built entirely of solid-state components. The 360 product line was also IBM’s first offering in which all of the machines in the family were compatible, meaning they all used the same assembly language. Users of smaller machines could upgrade to larger systems without rewriting all of their software. This was a revolutionary new concept at the time.

The IC generation also saw the introduction of time-sharing and multiprogramming (the ability for more than one person to use the computer at a time). Multiprogramming, in turn, necessitated the introduction of new operating systems for these computers. Time-sharing minicomputers such as DEC’s PDP-8 and PDP-11 made computing affordable to smaller businesses and more universities.

IC technology also allowed for the development of more powerful supercomputers. Seymour Cray took what he had learned while building the CDC 6600 and started his own company, the Cray Research Corporation. This company produced a number of supercomputers, starting with the $8.8 million Cray-1 in 1976. The Cray-1, in stark contrast to the CDC 6600, could execute over 160 million instructions per second and could support 8 megabytes of memory.

[Figure: layered cross-section of a junction transistor, showing its N- and P-type regions with the collector and base labeled.]


The Second Generation: Transistorized Computers (1954–1965)

The vacuum tube technology of the first generation was not very dependable. In fact, some ENIAC detractors believed that the system would never run because the tubes would burn out faster than they could be replaced. Although system reliability wasn’t as bad as the doomsayers predicted, vacuum tube systems often experienced more downtime than uptime.

In 1948, three researchers with Bell Laboratories—John Bardeen, Walter Brattain, and William Shockley—invented the transistor. This new technology not only revolutionized devices such as televisions and radios, but also pushed the computer industry into a new generation. Because transistors consume less power than vacuum tubes, are smaller, and work more reliably, the circuitry in computers consequently became smaller and more reliable. Despite using transistors, computers of this generation were still bulky and quite costly. Typically only universities, governments, and large businesses could justify the expense. Nevertheless, a plethora of computer makers emerged in this generation; IBM, Digital Equipment Corporation (DEC), and Univac (now Unisys) dominated the industry. IBM marketed the 7094 for scientific applications and the 1401 for business applications. DEC was busy manufacturing the PDP-1. A company founded (but soon sold) by Mauchly and Eckert built the Univac systems. The most successful Unisys systems of this generation belonged to its 1100 series. Another company, Control Data Corporation (CDC), under the supervision of Seymour Cray, built the CDC 6600, the world’s first supercomputer. The $10 million CDC 6600 could perform 10 million instructions per second, used 60-bit words, and had an astounding 128 kilowords of main memory.

The First Generation: Vacuum Tube Computers (1945–1953)

Although Babbage is often called the “father of computing,” his machines were mechanical, not electrical or electronic. In the 1930s, Konrad Zuse (1910–1995) picked up where Babbage left off, adding electrical technology and other improvements to Babbage’s design. Zuse’s computer, the Z1, used electromechanical relays instead of Babbage’s hand-cranked gears. The Z1 was programmable and had a memory, an arithmetic unit, and a control unit. Because money and resources were scarce in wartime Germany, Zuse used discarded movie film instead of punched cards for input. Although his machine was designed to use vacuum tubes, Zuse, who was building his machine on his own, could not afford the tubes. Thus, the Z1 correctly belongs in the first generation, although it had no tubes.

Zuse built the Z1 in his parents’ Berlin living room while Germany was at war with most of Europe. Fortunately, he couldn’t convince the Nazis to buy his machine. They did not realize the tactical advantage such a device would give them. Allied bombs destroyed all three of Zuse’s first systems, the Z1, Z2, and Z3. Zuse’s impressive machines could not be refined until after the war and ended up being another “evolutionary dead end” in the history of computers.

Digital computers, as we know them today, are the outcome of work done by a number of people in the 1930s and 1940s. Pascal’s basic mechanical calculator was designed and modified simultaneously by many people; the same can be said of the modern electronic computer. Notwithstanding the continual arguments about who was first with what, three people clearly stand out as the inventors of modern computers: John Atanasoff, John Mauchly, and J. Presper Eckert.

John Atanasoff (1904–1995) has been credited with the construction of the first completely electronic computer. The Atanasoff Berry Computer (ABC) was a binary machine built from vacuum tubes. Because this system was built specifically to solve systems of linear equations, we cannot call it a general-purpose computer. There were, however, some features that the ABC had in common with the general-purpose ENIAC (Electronic Numerical Integrator and Computer), which was invented a few years later. These common features caused considerable controversy as to who should be given the credit (and patent rights) for the invention of the electronic digital computer. (The interested reader can find more details on a rather lengthy lawsuit involving Atanasoff and the ABC in Mollenhoff [1988].)

John Mauchly (1907–1980) and J. Presper Eckert (1919–1995) were the two principal inventors of the ENIAC, introduced to the public in 1946. The ENIAC is recognized as the first all-electronic, general-purpose digital computer. This machine used 17,468 vacuum tubes, occupied 1,800 square feet of floor space, weighed 30 tons, and consumed 174 kilowatts of power. The ENIAC had a memory capacity of about 1,000 information bits (about 20 10-digit decimal numbers) and used punched cards to store data.

John Mauchly’s vision for an electronic calculating machine was born from his lifelong interest in predicting the weather mathematically. While a professor of physics at Ursinus College near Philadelphia, Mauchly engaged dozens of adding machines and student operators to crunch mounds of data that he believed would reveal mathematical relationships behind weather patterns. He felt that if he could have only a little more computational power, he could reach the goal that seemed just beyond his grasp. Pursuant to the Allied war effort, and with ulterior motives to learn about electronic computation, Mauchly volunteered for a crash course in electrical engineering at the University of Pennsylvania’s Moore School of Engineering. Upon completion of this program, Mauchly accepted a teaching position at the Moore School, where he taught a brilliant young student, J. Presper Eckert. Mauchly and Eckert found a mutual interest in building an electronic calculating device. In order to secure the funding they needed to build their machine, they wrote a formal proposal for review by the school. They portrayed their machine as conservatively as they could, billing it as an “automatic calculator.” Although they probably knew that computers would be able to function most efficiently using the binary numbering system, Mauchly and Eckert designed their system to use base 10 numbers, in keeping with the appearance of building a huge electronic adding machine. The university rejected Mauchly and Eckert’s proposal. Fortunately, the United States Army was more interested.


Generation Zero: Mechanical Calculating Machines (1642–1945)

Prior to the 1500s, a typical European businessperson used an abacus for calculations and recorded the result of his ciphering in Roman numerals. After the decimal numbering system finally replaced Roman numerals, a number of people invented devices to make decimal calculations even faster and more accurate. Wilhelm Schickard (1592–1635) has been credited with the invention of the first mechanical calculator, the Calculating Clock (exact date unknown). This device was able to add and subtract numbers containing as many as six digits. In 1642, Blaise Pascal (1623–1662) developed a mechanical calculator called the Pascaline to help his father with his tax work. The Pascaline could do addition with carry and subtraction. It was probably the first mechanical adding device actually used for a practical purpose. In fact, the Pascaline was so well conceived that its basic design was still being used at the beginning of the twentieth century, as evidenced by the Lightning Portable Adder in 1908 and the Addometer in 1920. Gottfried Wilhelm von Leibniz (1646–1716), a noted mathematician, invented a calculator known as the Stepped Reckoner that could add, subtract, multiply, and divide. None of these devices could be programmed, nor did they have memory; they required manual intervention throughout each step of their calculations.

Although machines like the Pascaline were used into the twentieth century, new calculator designs began to emerge in the nineteenth century. One of the most ambitious of these new designs was the Difference Engine by Charles Babbage (1791–1871). Some people refer to Babbage as “the father of computing.” By all accounts, he was an eccentric genius who brought us, among other things, the skeleton key and the “cow catcher,” a device intended to push cows and other movable obstructions out of the way of locomotives.

Babbage built his Difference Engine in 1822. The Difference Engine got its name because it used a calculating technique called the method of differences. The machine was designed to mechanize the solution of polynomial functions and was actually a calculator, not a computer. Babbage also designed a general-purpose machine in 1833 called the Analytical Engine. Although Babbage died before he could build it, the Analytical Engine was designed to be more versatile than his earlier Difference Engine. The Analytical Engine would have been capable of performing any mathematical operation. The Analytical Engine included many of the components associated with modern computers: an arithmetic processing unit to perform calculations (Babbage referred to this as the mill), a memory (the store), and input and output devices. Babbage also included a conditional branching operation where the next instruction to be performed was determined by the result of the previous operation. Ada, Countess of Lovelace and daughter of poet Lord Byron, suggested that Babbage write a plan for how the machine would calculate numbers. This is regarded as the first computer program, and Ada is considered to be the first computer programmer. It is also rumored that she suggested the use of the binary number system rather than the decimal number system to store data.


HISTORICAL DEVELOPMENT

During their 50-year life span, computers have become the perfect example of modern convenience. Living memory is strained to recall the days of steno pools, carbon paper, and mimeograph machines. It sometimes seems that these magical computing machines were developed instantaneously in the form that we now know them. But the developmental path of computers is paved with accidental discovery, commercial coercion, and whimsical fancy. And occasionally computers have even improved through the application of solid engineering practices! Despite all of the twists, turns, and technological dead ends, computers have evolved at a pace that defies comprehension. We can fully appreciate where we are today only when we have seen where we’ve come from.

In the sections that follow, we divide the evolution of computers into generations, each generation being defined by the technology used to build the machine. We have provided approximate dates for each generation for reference purposes only. You will find little agreement among experts as to the exact starting and ending times of each technological epoch.

Every invention reflects the time in which it was made, so one might wonder whether it would have been called a computer if it had been invented in the late 1990s. How much computation do we actually see pouring from the mysterious boxes perched on or beside our desks? Until recently, computers served us only by performing mind-bending mathematical manipulations. No longer limited to white-jacketed scientists, today’s computers help us to write documents, keep in touch with loved ones across the globe, and do our shopping chores. Modern business computers spend only a minuscule part of their time performing accounting calculations. Their main purpose is to provide users with a bounty of strategic information for competitive advantage. Has the word computer now become a misnomer? An anachronism? What, then, should we call them, if not computers?

We cannot present the complete history of computing in a few pages. Entire books have been written on this subject, and even they leave their readers wanting more detail. If we have piqued your interest, we refer you to some of the books cited in the list of references at the end of this chapter.


STANDARDS ORGANIZATIONS

Suppose you decide that you’d like to have one of those nifty new .28mm dot pitch AG monitors. You figure that you can shop around a bit to find the best price. You make a few phone calls, surf the Web, and drive around town until you find the one that gives you the most for your money. From your experience, you know that you can buy your monitor anywhere and it will probably work fine on your system. You can make this assumption because computer equipment manufacturers have agreed to comply with connectivity and operational specifications established by a number of government and industry organizations.

Some of these standards-setting organizations are ad hoc trade associations or consortia made up of industry leaders. Manufacturers know that by establishing common guidelines for a particular type of equipment, they can market their products to a wider audience than if they came up with separate—and perhaps incompatible—specifications.

Some standards organizations have formal charters and are recognized internationally as the definitive authority in certain areas of electronics and computers. As you continue your studies in computer organization and architecture, you will encounter specifications formulated by these groups, so you should know something about them.

The Institute of Electrical and Electronics Engineers (IEEE) is an organization dedicated to the advancement of the professions of electronic and computer engineering. The IEEE actively promotes the interests of the worldwide engineering community by publishing an array of technical literature. The IEEE also sets standards for various computer components, signaling protocols, and data representation, to name only a few areas of its involvement. The IEEE has a democratic, albeit convoluted, procedure established for the creation of new standards. Its final documents are well respected and usually endure for several years before requiring revision.

The International Telecommunication Union (ITU) is based in Geneva, Switzerland. The ITU was formerly known as the Comité Consultatif International Télégraphique et Téléphonique, or the International Consultative Committee on Telephony and Telegraphy. As its name implies, the ITU concerns itself with the interoperability of telecommunications systems, including telephone, telegraph, and data communication systems. The telecommunications arm of the ITU, the ITU-T, has established a number of standards that you will encounter in the literature. You will see these standards prefixed by ITU-T or the group’s former initials, CCITT.

Many countries, including those of the European Community, have commissioned umbrella organizations to represent their interests within various international groups. The group representing the United States is the American National Standards Institute (ANSI). Great Britain has its British Standards Institution (BSI) in addition to having a voice on CEN (Comité Européen de Normalisation), the European committee for standardization.

The International Organization for Standardization (ISO) is the entity that coordinates worldwide standards development, including the activities of ANSI with BSI among others. ISO is not an acronym, but derives from the Greek word isos, meaning “equal.” The ISO consists of over 2,800 technical committees, each of which is charged with some global standardization issue. Its interests range from the behavior of photographic film to the pitch of screw threads to the complex world of computer engineering. The proliferation of global trade has been facilitated by the ISO. Today, the ISO touches virtually every aspect of our lives.

Throughout this book, we mention official standards designations where appropriate. Definitive information concerning many of these standards can be found in excruciating detail on the Web site of the organization responsible for establishing the standard cited. As an added bonus, many standards contain “normative” and informative references, which provide background information in areas related to the standard.

A Look Inside a Computer

Have you ever wondered what the inside of a computer really looks like? The example computer described in this section gives a good overview of the components of a modern PC. However, opening a computer and attempting to find and identify the various pieces can be frustrating, even if you are familiar with the components and their functions.

If you remove the cover on your computer, you will no doubt first notice a big metal box with a fan attached. This is the power supply. You will also see various drives, including a hard drive, and perhaps a floppy drive and CD-ROM or DVD drive. There are many integrated circuits — small, black rectangular boxes with legs attached. You will also notice electrical pathways, or buses, in the system. There are printed circuit boards (expansion cards) that plug into sockets on the motherboard, the large board at the bottom of a standard desktop PC or on the side of a PC configured as a tower or mini-tower. The motherboard is the printed circuit board that connects all of the components in the computer, including the CPU, and RAM and ROM memory, as well as an assortment of other essential components. The components on the motherboard tend to be the most difficult to identify. The accompanying photograph shows an Intel D850 motherboard with the more important components labeled.

[Figure: Intel D850 motherboard with major components labeled. Courtesy of Intel Corporation.]

The I/O ports at the top of the board allow the computer to communicate with the outside world. The I/O controller hub allows all connected devices to function without conflict. The PCI slots allow for expansion boards belonging to various PCI devices. The AGP connector is for plugging in the AGP graphics card. There are two RAM memory banks and a memory controller hub. There is no processor plugged into this motherboard, but we see the socket where the CPU is to be placed. All computers have an internal battery, as seen at the lower left-hand corner. This motherboard has two IDE connector slots, and one floppy disk controller. The power supply plugs into the power connector.

A note of caution regarding looking inside the box: There are many safety considerations involved with removing the cover, for both you and your computer. There are many things you can do to minimize the risks. First and foremost, make sure the computer is turned off. Leaving it plugged in is often preferred, as this offers a path for static electricity. Before opening your computer and touching anything inside, you should make sure you are properly grounded so static electricity will not damage any components. Many of the edges, both on the cover and on the circuit boards, can be sharp, so take care when handling the various pieces. Trying to jam misaligned cards into sockets can damage both the card and the motherboard, so be careful if you decide to add a new card or remove and reinstall an existing one.

AN EXAMPLE SYSTEM: WADING THROUGH THE JARGON

This book will introduce you to some of the vocabulary that is specific to computers. This jargon can be confusing, imprecise, and intimidating. We believe that with a little explanation, we can clear the fog.

For the sake of discussion, we have provided a facsimile computer advertisement (see Figure 1.1). The ad is typical of many in that it bombards the reader with phrases such as “64MB SDRAM,” “64-bit PCI sound card,” and “32KB L1 cache.” Without having a handle on such terminology, you would be hard-pressed to know whether the stated system is a wise buy, or even whether the system is able to serve your needs. As we progress through this book, you will learn the concepts behind these terms.

Before we explain the ad, however, we need to discuss something even more basic: the measurement terminology you will encounter throughout your study of computers.

It seems that every field has its own way of measuring things. The computer field is no exception. So that computer people can tell each other how big something is, or how fast something is, they must use the same units of measure. When we want to talk about how big some computer thing is, we speak of it in terms of thousands, millions, billions, or trillions of characters. The prefixes for these terms are given in the left side of Figure 1.2. In computing systems, as you shall see, powers of 2 are often more important than powers of 10, but it is easier for people to understand powers of 10. Therefore, these prefixes are given in both powers of 10 and powers of 2. Because 1,000 is close in value to 2^10 (1,024), we can approximate powers of 10 by powers of 2. Prefixes used in system metrics are often applied where the underlying base system is base 2, not base 10. For example, a kilobyte (1KB) of memory is typically 1,024 bytes of memory rather than 1,000 bytes of memory. However, a 1GB disk drive might actually be 1 billion bytes instead of 2^30 bytes (approximately 1.07 billion). You should always read the manufacturer’s fine print just to make sure you know exactly what 1K, 1KB, or 1G represents.

When we want to talk about how fast something is, we speak in terms of fractions of a second—usually thousandths, millionths, billionths, or trillionths. Prefixes for these metrics are given in the right-hand side of Figure 1.2. Notice that the exponents of the fractional prefixes are the negatives of the exponents on the left side of the figure. Therefore, if someone says to you that an operation requires a microsecond to complete, you should also understand that a million of those operations could take place in one second. When you need to talk about how many of these things happen in a second, you would use the prefix mega-. When you need to talk about how fast the operations are performed, you would use the prefix micro-.
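To make the powers-of-10 versus powers-of-2 distinction concrete, here is a small C program of our own (an illustrative sketch, not part of the advertisement discussion above). It prints both interpretations of the kilo, mega, and giga prefixes and shows why an operation that takes one microsecond can be repeated a million times in one second.

    #include <stdio.h>

    int main(void)
    {
        /* Size prefixes: what a vendor may mean (powers of 10) versus   */
        /* what memory sizes usually mean (powers of 2).                 */
        const char *names[] = { "kilo (K)", "mega (M)", "giga (G)" };
        double p10[] = { 1e3, 1e6, 1e9 };
        double p2[]  = { 1024.0, 1024.0 * 1024.0, 1024.0 * 1024.0 * 1024.0 };

        for (int i = 0; i < 3; i++)
            printf("%-8s  power of 10: %14.0f   power of 2: %14.0f\n",
                   names[i], p10[i], p2[i]);

        /* Time prefixes mirror the size prefixes with negative exponents: */
        /* one operation per microsecond means a million per second.       */
        double seconds_per_op = 1e-6;
        printf("operations per second at 1 microsecond each: %.0f\n",
               1.0 / seconds_per_op);
        return 0;
    }

Run, the program shows, for example, that the two meanings of “giga” differ by roughly 7 percent, which is exactly why the manufacturer’s fine print matters.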

The Main Components of a Computer

Although it is difficult to distinguish between the ideas belonging to computer organization and those ideas belonging to computer architecture, it is impossible to say where hardware issues end and software issues begin. Computer scientists design algorithms that usually are implemented as programs written in some computer language, such as Java or C. But what makes the algorithm run? Another algorithm, of course! And another algorithm runs that algorithm, and so on until you get down to the machine level, which can be thought of as an algorithm implemented as an electronic device. Thus, modern computers are actually implementations of algorithms that execute other algorithms. This chain of nested algorithms leads us to the following principle:

Principle of Equivalence of Hardware and Software: Anything that can be done with software can also be done with hardware, and anything that can be done with hardware can also be done with software.

A special-purpose computer can be designed to perform any task, such as word processing, budget analysis, or playing a friendly game of Tetris. Accordingly, programs can be written to carry out the functions of special-purpose computers, such as the embedded systems situated in your car or microwave. There are times when a simple embedded system gives us much better performance than a complicated computer program, and there are times when a program is the preferred approach. The Principle of Equivalence of Hardware and Software tells us that we have a choice. Our knowledge of computer organization and architecture will help us to make the best choice.
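As a small, concrete illustration of this principle (our own sketch, not an example from the text), the C function below computes the even parity of a byte entirely in software. The very same Boolean function is routinely built directly in hardware as a small tree of XOR gates, and either route yields identical results; the choice between them comes down to the usual trade-offs of speed, cost, and flexibility.

    #include <stdio.h>

    /* Software implementation of parity: XOR all eight bits together.
     * A hardware designer would implement the same function as a tree of
     * XOR gates; software and hardware compute the identical result.     */
    unsigned parity(unsigned char byte)
    {
        unsigned p = 0;
        for (int i = 0; i < 8; i++)
            p ^= (byte >> i) & 1u;
        return p;            /* 1 if the byte has an odd number of 1 bits */
    }

    int main(void)
    {
        printf("parity(0x5A) = %u\n", parity(0x5A));  /* 0x5A has four 1 bits, so 0 */
        return 0;
    }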

Driving Force: The Clock

The clock is a signal that keeps the control unit moving.
– At each clock “tick,” the control unit moves to the next machine cycle, which may be the next instruction or the next phase of the current instruction.

Clock generator circuit:
– Based on a crystal oscillator
– Generates a regular sequence of “0” and “1” logic levels
– Clock cycle (or machine cycle): rising edge to rising edge
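The role of the clock can also be sketched in software. The toy C program below is our own illustration (the phase names fetch, decode, and execute are a simplification we chose, not something defined above): each loop iteration stands for one rising edge, and the simulated control unit advances one phase of the machine cycle per tick. A real clock generator simply produces such ticks at a fixed rate; a 1 GHz clock, for example, has a period of 1 nanosecond.

    #include <stdio.h>

    /* One enumerator per phase of the machine cycle. */
    enum phase { FETCH, DECODE, EXECUTE, NUM_PHASES };

    int main(void)
    {
        const char *names[NUM_PHASES] = { "fetch", "decode", "execute" };
        enum phase state = FETCH;

        /* Each iteration models one clock cycle (rising edge to rising edge).
         * On every tick the control unit moves on: either to the next phase
         * of the current instruction or to the next instruction.             */
        for (int tick = 0; tick < 9; tick++) {
            printf("tick %d: %s\n", tick, names[state]);
            state = (state + 1) % NUM_PHASES;   /* advance one phase per tick */
        }
        return 0;
    }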

Examples of Electronics Packages

Dual In-line Package (DIP) - An older technology; requires the metal leads to go through a hole in the printed circuit board.

Dual Flat Pack (DFP) - A fairly recent technology; the metal leads solder to the surface of the printed circuit board.

Electronics Packaging

– There are several packaging technologies available that an engineer can use to create electronic devices.
– Some are suitable for inexpensive toys but not miniature consumer products, and some are suitable for miniature consumer products but not inexpensive toys.
– These packages have metal leads, the conductive wires that carry electricity from the outside world to the silicon inside the package.
– Leads between packages are connected with small copper traces on a printed circuit board (PCB), and the package leads are soldered to the PCB.

More Memory Details

This is not the way actual memory is implemented.
– fewer transistors, much more dense, relies on electrical properties
But the logical structure is very similar (see the sketch below):
– address decoder
– word select line
– word write enable
Two basic kinds of memory (RAM = Random Access Memory):
Static RAM (SRAM)
– fast; holds its data as long as power is applied, with no refresh needed
Dynamic RAM (DRAM)
– slower but denser; bit storage must be periodically refreshed
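The sketch below is a small behavioral C model we wrote to illustrate that logical structure (address decode, word select, write enable). It is not how a real SRAM or DRAM array is built, and every name in it is invented for the example.

    #include <stdint.h>
    #include <stdio.h>

    #define WORDS 16                  /* a 4-bit address decodes to 16 word lines */

    static uint8_t mem[WORDS];        /* one 8-bit word per decoded address       */

    /* The address decoder selects exactly one word; the write-enable line
     * decides whether the selected word is updated or simply read out.     */
    uint8_t memory_cycle(uint8_t addr, uint8_t data_in, int write_enable)
    {
        uint8_t word = addr % WORDS;      /* "decode" the address            */
        if (write_enable)
            mem[word] = data_in;          /* drive new data onto that word   */
        return mem[word];                 /* data read out of the word       */
    }

    int main(void)
    {
        memory_cycle(3, 0xAB, 1);                        /* write 0xAB to word 3 */
        printf("word 3 = 0x%02X\n",
               (unsigned)memory_cycle(3, 0, 0));         /* read it back         */
        return 0;
    }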

Even More Memory Details

There are other types of “non-volatile” memory devices:

• ROM

• PROM

• EPROM

• EEPROM

• Flash

Can you think of other memory devices?