Logic Circuits I
ECE 1411
Thursday 4:45pm-7:20pm
Lecture 11
Computer Design Fundamentals
• A Brief History
• The Antikythera mechanism is believed to be the earliest mechanical
analog "computer". Made of over 30 precise, hand-cut bronze gears, it
was designed to calculate astronomical positions. It was recovered from
a shipwreck off the Greek island of Antikythera in 1901 and has been
dated to circa 100 BC. Devices of comparable complexity would not
reappear until more than a thousand years later.
Brief History
Analog Computers
– An analog computer is a form of computer that uses the continuously
changeable aspects of physical phenomena such as electrical, mechanical,
or hydraulic quantities to model the problem being solved.
– As an analog computer does not use discrete values, but rather continuous
values, processes cannot be reliably repeated with exact equivalence.
– Analog computers do not suffer from the quantization noise inherent in digital
computers, but are limited instead by analog noise.
– Analog computers can be electronic, mechanical, or hybrid.
Norden bombsight
AKAT-1, used to solve differential equations
Brief History
Jacquard Loom
– The Jacquard loom is a mechanical loom, invented by Joseph Marie Jacquard, first
demonstrated in 1801.
– The loom was controlled by a "chain of cards", a number of punched cards, laced
together into a continuous sequence.
– The Jacquard head used replaceable punched cards to control a sequence of operations.
– It is considered an important step in the history of computing hardware. The ability to
change the pattern of the loom's weave by simply changing cards was an important
conceptual precursor to the development of computer programming and data entry.
– Charles Babbage knew of Jacquard looms and planned to use punched cards to store
programs in his Analytical Engine.
Brief History
• Charles Babbage (1791 - 1871)
– Considered the father of the computer. Credited with inventing the
first mechanical computer.
– As with modern computer architecture, his designs separated data and
program memory, operated on stored instructions, included a control unit
that could make conditional jumps, and provided separate I/O.
– Babbage began in 1822 with what he called the difference engine,
designed to calculate series of polynomial function values automatically.
By using the method of finite differences, it was possible to avoid the
need for multiplication and division.
– This first difference engine would have been composed of around
25,000 parts, weighed fifteen tons (13,600 kg), and stood 8 ft
(2.4 m) tall.
– In 1991, a perfectly functioning difference engine was constructed
from Babbage's original plans. Built to tolerances achievable in the
19th century, the success of the finished engine indicated that
Babbage's machine would have worked.
Brief History
• Enigma Machine
– An Enigma machine was any of several electromechanical rotor cipher machines used in the twentieth
century for enciphering and deciphering secret messages.
– Enigma was invented by the German engineer Arthur
Scherbius at the end of World War I. Early models were
used commercially from the early 1920s, and adopted by
military and government services of several countries, most
notably Nazi Germany before and during World War II.
Brief History
• Alan Turing (1912 - 1954)
– Turing is widely considered to be the father of theoretical computer
science and artificial intelligence. He devised a number of techniques
for breaking German ciphers, including improvements to the pre-war
Polish bomba method, an electromechanical machine that could find
settings for the Enigma machine.
– On 8 June 1954, Turing's housekeeper found him dead. A postmortem examination established that the cause of death was cyanide
poisoning. When his body was discovered, an apple lay half-eaten
beside his bed, and although the apple was not tested for cyanide, it
was speculated that this was the means by which a fatal dose was
consumed. Contrary to urban legend, this story is not the basis for the
Apple logo.
• John Von Neumann (1903 – 1957)
– John Von Neumann was a founding figure in computing. He
contributed to the development of the Monte Carlo method, which
allowed solutions to complicated problems to be approximated
using random numbers.
Brief History
• A Turing machine is a hypothetical device that manipulates symbols on a strip of
tape according to a table of rules. Despite its simplicity, a Turing machine can be
adapted to simulate the logic of any computer algorithm, and is particularly useful
in explaining the functions of a CPU inside a computer.
• The Von Neumann architecture, also known as the Von Neumann
model and Princeton architecture, is a computer architecture based on that
described in 1945 by the mathematician and physicist John von Neumann and
others in the First Draft of a Report on the EDVAC.
– This describes a design architecture for an electronic digital computer with parts consisting of
a processing unit containing an arithmetic logic unit and processor registers, a control unit containing
an instruction register and program counter, a memory to store both data and instructions,
external mass storage, and input and output mechanisms.
– The meaning has evolved to be any stored-program computer in which an instruction fetch and a
data operation cannot occur at the same time because they share a common bus. This is referred to
as the Von Neumann bottleneck and often limits the performance of the system.
Brief History
• Harvard Architecture
– The Harvard architecture is a computer architecture with physically separate storage and
signal pathways for instructions and data.
– The term originated from the Harvard Mark I relay-based computer, which stored
instructions on punched tape (24 bits wide) and data in electro-mechanical counters.
These early machines had data storage entirely contained within the central processing
unit, and provided no access to the instruction storage as data.
• Modified Harvard architecture
– A variation of the Harvard computer architecture that allows the contents of the
instruction memory to be accessed as if it were data. Most modern computers that are
documented as Harvard architecture are, in fact, Modified Harvard architecture.
Brief History
• Harvard Architecture: Contrast with von Neumann architectures
– Under pure von Neumann architecture the CPU can be either reading an instruction or
reading/writing data from/to the memory. Both cannot occur at the same time since the
instructions and data use the same bus system.
– In a computer using the Harvard architecture, the CPU can both read an instruction and
perform a data memory access at the same time, even without a cache. A Harvard
architecture computer can thus be faster for a given circuit complexity because
instruction fetches and data access do not contend for a single memory pathway.
• Contrast with modified Harvard architecture
– A modified Harvard architecture machine is very much like a Harvard architecture
machine, but it relaxes the strict separation between instruction and data while still
letting the CPU concurrently access two (or more) memory buses. The most common
modification includes separate instruction and data caches backed by a common address
space. While the CPU executes from cache, it acts as a pure Harvard machine. When
accessing backing memory, it acts like a von Neumann machine (where code can be
moved around like data, which is a powerful technique).
Brief History
• ENIAC (Electronic Numerical Integrator And Computer) was the first electronic general-purpose
computer. It was Turing-complete, digital, and capable of being reprogrammed to
solve "a large class of numerical problems".
• Though ENIAC was designed and primarily used to calculate artillery firing tables for
the United States Army's Ballistic Research Laboratory, its first programs included a study of
the feasibility of the hydrogen bomb. ENIAC was defined by the states of its patch cables and
switches, a far cry from the stored-program electronic machines that came later. Once a
program was written, it had to be mechanically set into the machine with manual resetting of
plugs and switches.
• It combined the high speed of electronics with the ability to be programmed for many
complex problems. It could add or subtract 5000 times a second, a thousand times faster
than any other machine. It also had modules to multiply, divide, and square root. High-speed
memory was limited to 20 words (about 80 bytes).
Brief History
• Transistor
– The transistor was developed in 1947 by American physicists John
Bardeen, Walter Brattain, and William Shockley. The transistor is on
the list of IEEE milestones in electronics, and the inventors were jointly
awarded the 1956 Nobel Prize in Physics for their achievement. The
term transistor was coined by John R. Pierce as a contraction of the
term transresistance.
– The Harwell CADET was the first fully transistorized computer in
Europe, and may have been the first fully transistorized computer in
the world.
Brief History
• Intel Corporation
– Intel Corporation was founded on July 18, 1968; the name is
a portmanteau of Integrated Electronics
– Intel was an early developer of SRAM and DRAM memory
chips, and this represented the majority of its business
until 1981.
– The Intel 4004 ("four-thousand-four") is a 4-bit central
processing unit (CPU) released in 1971. It was the first commercially
produced microprocessor, as well as the first general-purpose
programmable microprocessor on the market.
– The Intel 8080 ("eighty-eighty") was the second 8-bit
microprocessor designed by Intel and was released in April 1974
• The 8080 has sometimes been labeled "the first truly usable
microprocessor"
• The architecture of the 8080 strongly influenced Intel's 8086 CPU
architecture, which spawned the x86 family of processors.
Brief History
• Seymour Roger Cray (1925 – 1996)
– Seymour Cray was an American electrical
engineer and supercomputer architect who designed a series of
computers that were the fastest in the world for decades, and
founded Cray Research, which built many of these machines. Called
"the father of supercomputing", Cray has been credited with creating
the supercomputer industry.
– Cray died on October 5, 1996 (aged 71) of head and neck injuries
suffered on September 22, 1996 in a traffic collision. Another driver
tried to pass Cray on Interstate 25 in Colorado Springs, Colorado but
struck a third car that then struck Cray's Jeep Cherokee, causing it to
roll three times.
• Saturn Launch Vehicle Digital Computer (LVDC)
– The LVDC was a computer that provided the autopilot for the Saturn
V rocket from launch to Earth orbit insertion.
– The LVDC was capable of executing 12,190 instructions per second. For
comparison, a 2012-era microprocessor can execute 4 instructions per
cycle at 3 GHz, achieving 12 billion instructions per second, one
million times faster.
– Memory was in the form of 13-bit "syllables", each with a 14th parity
bit. Instructions were one syllable in size, while data words were two
syllables (26 bits).
– Main memory was in the form of 4,096-word memory modules. Up to
8 modules provided a maximum of 32,768 words of memory.
• IBM System/4 Pi
– The IBM System/4 Pi is a family of radiation-hardened avionics
computers used, in various versions, on the B-52
Stratofortress bomber, the F-15, NASA's Skylab and Space Shuttle, as well
as other aircraft.
– Processes up to 480,000 instructions per second
– It has 16 32-bit registers, and uses a microprogram to define
an instruction set of 154 instructions.
– Originally only 16 bits were available for addressing memory; later this
was extended with four bits from the program status word register,
allowing a directly addressable memory range of 1M locations.
Computer Design Basics
• Harvard Architecture
– The Harvard architecture is a computer architecture with physically separate storage and
signal pathways for instructions and data.
• Modified Harvard architecture
– A variation of the Harvard computer architecture that allows the contents of the
instruction memory to be accessed as if it were data. Most modern computers that are
documented as Harvard architecture are, in fact, Modified Harvard architecture.
• The architecture we will discuss will be divided into a datapath and control
• The datapath consists of:
– A set of registers
– The micro-operations performed on data stored in the registers
– The control interface
• The control consists of:
– Signals that control the micro-operations performed in/related to the datapath
– Controls its own operation, determining the sequence of events that occur.
Computer Design Basics
• Datapath:
– Computer systems often employ a number of storage
registers in conjunction with a shared operation unit
called an Arithmetic/Logic Unit (ALU).
• The contents of specified source registers are applied to the
inputs of the shared ALU
• The ALU performs the operation
• The result of the operation is transferred to a destination
register
– The ALU is combinational logic
• The entire register transfer operation completes in one clock cycle
(sketched below):
source registers -> ALU -> destination register
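A minimal Verilog sketch of this one-clock register transfer; the 8-bit width,
register names, and 2-bit operation encoding are illustrative assumptions, not
taken from the figure:

module reg_transfer (
    input  wire       clk,
    input  wire [7:0] r2, r3,   // contents of the source registers
    input  wire [1:0] op,       // ALU operation select
    output reg  [7:0] r1        // destination register
);
    reg [7:0] alu_out;          // the ALU itself is combinational

    always @* begin
        case (op)
            2'b00:   alu_out = r2 + r3;
            2'b01:   alu_out = r2 - r3;
            2'b10:   alu_out = r2 & r3;
            default: alu_out = r2 | r3;
        endcase
    end

    always @(posedge clk)
        r1 <= alu_out;          // destination loads on the clock edge
endmodule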
Computer Design Basics
• Generic bus-based Datapath:
• Four registers
• ALU
• Shifter
• Each register is connected to two muxes
to form ALU and shifter input buses A and B.
• The A/B select inputs on each mux select one
register for the corresponding bus.
• The Destination select inputs determine which
register is loaded with the data on Bus D.
• Since the Destination select inputs are
decoded, only one register Load signal is
active for any data transfer into a register
from Bus D.
Computer Design Basics
• For bus B, there is an additional mux (MUX B)
so that constants can be brought in.
• Bus B connects to Data out, to send data
outside the datapath
• Bus A connects to Address out, to send
address information outside the datapath
Computer Design Basics
• Arithmetic operations are performed on
the operands on the A and B buses by the ALU
• The G select inputs select the micro-operation
performed by the ALU.
• The H select input either passes the operand
on Bus B directly to the shifter output
or selects a shift operation to be performed
on the Bus B value
• MUX F selects the output of the ALU or
shifter output.
• MUX D selects the output of MUX F or external
data (Data in) to be applied to Bus D.
Computer Design Basics
• Four status bits are shown with the ALU
– C: Carry. 1 if the operation produces a carry out
– V: Overflow. 1 if the operation result requires n+1 bits,
i.e., it does not fit alongside the sign bit
– Z: Zero. 1 if the ALU output is all zeros
– N: Negative. The sign bit (leftmost bit)
• Used by the Control Unit (a sketch follows)
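A sketch of how these four bits fall out of an n-bit two's-complement
addition; the parameterization and port names are assumptions for illustration:

module status_bits #(parameter N = 8) (
    input  wire [N-1:0] a, b,
    input  wire         cin,
    output wire [N-1:0] f,
    output wire         c, v, z, n
);
    assign {c, f} = a + b + cin;  // C: carry out of the leftmost bit
    // V: operands share a sign but the result's sign differs
    assign v = (a[N-1] == b[N-1]) && (f[N-1] != a[N-1]);
    assign z = (f == {N{1'b0}});  // Z: output is all zeros
    assign n = f[N-1];            // N: sign (leftmost) bit
endmodule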
Computer Design Basics
• To perform the micro-operation
R1 <- R2 + R3
the control unit must provide selection values to the following
sets of control inputs (a Verilog sketch follows the list):
1. A select: places the contents of R2 onto
A data (Bus A).
2a B select: places contents of R3 onto the
0 input of MUX B
2b MB select: puts the 0 input of MUX B
onto Bus B
3. G select: provides the arithmetic
operation A + B
4. MF select: places the ALU output on
the MUX F output
5. MD select: places the MUX F output onto
Bus D
6. Destination select: Select R1 as the
destination of the data on Bus D
7. Load enable: Load R1 register
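The sketch below wires a small version of this datapath so that the seven
selections above realize R1 <- R2 + R3. The 8-bit width and the G select
encoding (0010 assumed for A + B) are illustrative, not the lecture's tables:

module tiny_datapath (
    input  wire       clk,
    input  wire [1:0] a_sel, b_sel, dest_sel,
    input  wire       mb_sel, mf_sel, md_sel, load_en,
    input  wire [3:0] g_sel,
    input  wire [7:0] constant_in, data_in
);
    reg  [7:0] r [0:3];                                // R0..R3
    wire [7:0] bus_a  = r[a_sel];                      // 1.  A select
    wire [7:0] mux_b0 = r[b_sel];                      // 2a. B select
    wire [7:0] bus_b  = mb_sel ? constant_in : mux_b0; // 2b. MB select = 0
    reg  [7:0] alu_out;
    always @* case (g_sel)                             // 3.  G select
        4'b0010: alu_out = bus_a + bus_b;              // A + B
        default: alu_out = bus_a;
    endcase
    wire [7:0] shifter_out = bus_b;                    // pass-through here
    wire [7:0] mux_f = mf_sel ? shifter_out : alu_out; // 4. MF select = 0
    wire [7:0] bus_d = md_sel ? data_in : mux_f;       // 5. MD select = 0
    always @(posedge clk)
        if (load_en) r[dest_sel] <= bus_d;             // 6, 7. load R1
endmodule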
Computer Design Basics
• The ALU Unit:
Computer Design Basics
• The B input logic can be implemented with n
multiplexers
– The number of gates can be reduced if we go
through the logic design of one bit of the B input
Yi = Bi S0 + Bi' S1
where S0 and S1 are common to all n stages
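A sketch of that one-bit slice replicated across n bits, giving Y = 0, B, B',
or 1 as S1 S0 steps through 00, 01, 10, 11; the module name and width are
illustrative:

module b_input_logic #(parameter N = 4) (
    input  wire [N-1:0] b,
    input  wire         s0, s1,   // common to all n stages
    output wire [N-1:0] y
);
    // Per bit: Yi = Bi*S0 + Bi'*S1
    assign y = (b & {N{s0}}) | (~b & {N{s1}});
endmodule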
Computer Design Basics
• 4-bit parallel adder
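A sketch of the 4-bit parallel (ripple-carry) adder the figure shows, built
from full-adder stages:

module full_adder (
    input  wire a, b, cin,
    output wire s, cout
);
    assign s    = a ^ b ^ cin;
    assign cout = (a & b) | (a & cin) | (b & cin);
endmodule

module adder4 (
    input  wire [3:0] a, b,
    input  wire       c0,   // input carry
    output wire [3:0] s,
    output wire       c4    // output carry
);
    wire c1, c2, c3;        // carries ripple between stages
    full_adder fa0 (a[0], b[0], c0, s[0], c1);
    full_adder fa1 (a[1], b[1], c1, s[1], c2);
    full_adder fa2 (a[2], b[2], c2, s[2], c3);
    full_adder fa3 (a[3], b[3], c3, s[3], c4);
endmodule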
Computer Design Basics
• Logic Circuit
– Four commonly used logic operations:
• AND, OR, XOR, NOT
Computer Design Basics
• Arithmetic/Logic Unit
– The Logic circuit can be combined with the
arithmetic unit to produce the ALU
• The circuit must be repeated n times for an n-bit ALU
• The output carry Ci+1 of a given stage is connected to
the input carry Ci of the next stage.
• The first stage input carry C0 is the input carry for the
ALU Cin
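A behavioral sketch of the combined n-bit ALU with its rippling carry chain;
the mode and operation encodings here are assumptions, not the lecture's
G select table:

module alu #(parameter N = 4) (
    input  wire [N-1:0] a, b,
    input  wire         cin,      // C0 = the ALU input carry Cin
    input  wire         mode,     // 0: arithmetic, 1: logic
    input  wire [1:0]   logic_op, // 00 AND, 01 OR, 10 XOR, 11 NOT
    output wire [N-1:0] f,
    output wire         cout      // output carry of the last stage
);
    wire [N-1:0] arith;
    assign {cout, arith} = a + b + cin;  // Ci+1 of each stage feeds Ci
    reg [N-1:0] lgc;
    always @* case (logic_op)
        2'b00:   lgc = a & b;
        2'b01:   lgc = a | b;
        2'b10:   lgc = a ^ b;
        default: lgc = ~a;
    endcase
    assign f = mode ? lgc : arith;
endmodule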
Computer Design Basics
• The Shifter
– Shifts the Bus B value, placing the result on a
MUX F input
– Two main functions
• Shift left or shift right
– A shifter can be constructed of muxes
S    Operation
00   B unchanged
01   Right shift
10   Left shift
11   Not used
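A mux-per-bit shifter sketch following the table above; the serial inputs are
assumed to be 0 for simplicity:

module shifter #(parameter N = 4) (
    input  wire [N-1:0] b,   // Bus B value
    input  wire [1:0]   s,
    output reg  [N-1:0] h    // to a MUX F input
);
    always @* case (s)
        2'b00:   h = b;                 // B unchanged
        2'b01:   h = {1'b0, b[N-1:1]};  // right shift
        2'b10:   h = {b[N-2:0], 1'b0};  // left shift
        default: h = {N{1'bx}};         // 11 not used
    endcase
endmodule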
Computer Design Basics
• Barrel Shifter
– Shifts more than one bit position in a single clock
cycle
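A 4-bit rotate-left barrel shifter sketch: log2(N) mux layers give any shift
amount in a single clock cycle. The rotate direction and names are assumptions:

module barrel_shifter (
    input  wire [3:0] d,
    input  wire [1:0] amt,   // rotate-left amount, 0..3
    output wire [3:0] y
);
    // Layer 0 rotates by 1, layer 1 by 2; together 0..3 positions
    wire [3:0] stage0 = amt[0] ? {d[2:0], d[3]} : d;
    assign     y      = amt[1] ? {stage0[1:0], stage0[3:2]} : stage0;
endmodule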
Computer Design Basics
• Datapath Representation
– A typical datapath has more than
four registers
• 32 or more are common and the
construction of the needed bus
system requires different
techniques than previously shown
– The registers are thus organized
into a register file which typically
is a special type of fast memory
that permits one or more words
to be simultaneously read or
written
• The A/B/Destination select are
now addresses with all accesses
occurring on the same clock cycle
• Load enable -> Write
– The ALU and Shifter are grouped
together into the Function Unit
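A register file sketch with two combinational read ports and one clocked write
port, so the A read, B read, and destination write all occur in the same clock
cycle; 32 registers of 16 bits are assumed here:

module register_file #(parameter N = 16, parameter AW = 5) (
    input  wire          clk,
    input  wire          write,       // RW: 1 = write
    input  wire [AW-1:0] da, aa, ba,  // destination, A, B addresses
    input  wire [N-1:0]  d,           // Bus D data
    output wire [N-1:0]  a, b         // Bus A and Bus B data
);
    reg [N-1:0] regs [0:(1<<AW)-1];
    assign a = regs[aa];              // reads are combinational
    assign b = regs[ba];
    always @(posedge clk)
        if (write) regs[da] <= d;     // write on the clock edge
endmodule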
Computer Design Basics
• The Control Word
– There are 16 binary control
inputs whose combined
values specify a control
word
– The Control Word consists
of seven fields
• The three bits of DA select
one of eight destination
registers, etc.
• The four bit FS field controls
the Function Unit operation
• MD selects the FU output or
Data in as the input to Bus D
• RW determines whether a
register is written or not
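A sketch of unpacking the 16-bit control word into its seven fields; the bit
ordering (DA in the high bits down to RW in bit 0) follows the field order
above but is an assumption:

module control_word_decode (
    input  wire [15:0] cw,
    output wire [2:0]  da, aa, ba, // destination, A, B register addresses
    output wire        mb,         // MUX B select
    output wire [3:0]  fs,         // Function Unit select
    output wire        md,         // MUX D select
    output wire        rw          // register write
);
    assign {da, aa, ba, mb, fs, md, rw} = cw;  // 3+3+3+1+4+1+1 = 16
endmodule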
Computer Design Basics
• The Control Word subtraction example
R1 <- R2 + R3' + 1 (that is, R1 <- R2 - R3)
Specifies
R2 for the ALU A input
R3 for the ALU B input
FU operation: F = A + B' + 1
R1 as the destination register
RW set to 1 to cause R1 to be written

Field:     DA   AA   BA   MB        FS              MD        RW
Symbolic:  R1   R2   R3   Register  F = A + B' + 1  Function  Write
Binary:    001  010  011  0         0101            0         1
Computer Design Basics
• A Simple Computer Architecture
– In a programmable system, a portion of the input
to the processor consists of a sequence of
instructions
– Instructions are usually stored in RAM or ROM
– Need to provide memory address of instruction
• Address stored in Program Counter (PC)
Computer Design Basics
• Program Counter
– As name implies, has logic to allow it to count
– Has parallel load to allow PC to change the
sequence of operations using decisions based on
status information
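A program counter sketch with count and parallel load, as described above;
the width and port names are illustrative:

module program_counter #(parameter N = 16) (
    input  wire         clk, reset,
    input  wire         load,        // 1: take the branch target
    input  wire [N-1:0] target,
    output reg  [N-1:0] pc
);
    always @(posedge clk)
        if (reset)     pc <= {N{1'b0}};
        else if (load) pc <= target;  // decision based on status info
        else           pc <= pc + 1;  // count to the next instruction
endmodule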
• Control Unit
– Contains
• PC
• Associated decision logic
• Logic needed to interpret the instruction to execute it
Computer Design Basics
• Storage Resources
– Includes two memories:
• Instructions
• Data
– Cache memory
• A CPU cache is a cache used by the
central processing unit (CPU) of a
computer to reduce the average time
to access data from the main memory.
The cache is a smaller, faster memory
(SRAM) which stores copies of the data
from frequently used main memory
locations
Computer Design Basics
• Reduced Instruction Set Computing (RISC):
– A CPU design strategy based on the insight that a simplified instruction
set (as opposed to a complex set) provides higher performance when
combined with a microprocessor architecture capable of executing
those instructions using fewer microprocessor cycles per instruction.
• ARM processors
• Complex Instruction Set Computing (CISC):
– A CPU design where single instructions can execute several low-level
operations (such as a load from memory, an arithmetic operation, and
a memory store) or are capable of multi-step operations or addressing
modes within single instructions.
• Intel x86 processor line
Computer Design Basics
• Interrupts
– An interrupt is a signal to the processor by hardware or software
indicating an event that needs immediate attention. An interrupt
alerts the processor to a high-priority condition requiring the
interruption of the current code the processor is executing. The
processor responds by suspending its current activities, saving its
state, and executing a function called an interrupt handler (or an
interrupt service routine, ISR) to deal with the event. This interruption
is temporary, and, after the interrupt handler finishes, the processor
resumes normal activities.
– There are two types of interrupts: hardware interrupts and software
interrupts.
• Hardware interrupts are used by devices to communicate that they require
attention from the operating system.
• A software interrupt is caused either by an exceptional condition in the
processor itself, or a special instruction in the instruction set which causes an
interrupt when it is executed.
– Much more to interrupts than indicated here; e.g. polled, edge versus
level, maskable, etc.
Computer Design Basics
• Hardware Acceleration
– Hardware acceleration is the use of computer
hardware to perform some functions faster than is
possible in software running on a more general-purpose CPU.
– Examples include graphics processing units,
cryptographic accelerators, etc.
Computer Design Basics
• Any sufficiently advanced technology is
indistinguishable from magic.
– Arthur C. Clarke’s third law
Lab 3
• Part 1
– Obtain the Hex inverter datasheet (74LS04) and find the propagation delay
– Find the approximate propagation delay for wire (e.g. XXns/in)
– Using Verilog, code a 7-stage ring oscillator. Use the propagation delays from
the data sheet plus an estimate of the wire delay (added to the gate delays)
based on your previous lab experience with the wire lengths needed.
• Note: the datasheet lists both rise and fall delays which you can model with Verilog. See
the following for examples:
– http://www.asic-world.com/verilog/gate3.html
• Note: you will have to force an initial value on one of the stages long enough for the ring
osc. to go to a known state:
initial begin
force n1 = 0;      // hold one stage at a known value
#100 release n1;   // release after 100 time units; the loop then oscillates
end
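A minimal sketch of the Part 1 testbench, assuming placeholder 10 ns rise /
8 ns fall delays per stage; substitute the 74LS04 datasheet values plus your
wire-delay estimate:

`timescale 1ns / 100ps
module ring_osc_tb;
    wire n1, n2, n3, n4, n5, n6, n7;

    // Seven inverters in a loop; the #(rise, fall) delays are placeholders
    not #(10, 8) i1 (n2, n1);
    not #(10, 8) i2 (n3, n2);
    not #(10, 8) i3 (n4, n3);
    not #(10, 8) i4 (n5, n4);
    not #(10, 8) i5 (n6, n5);
    not #(10, 8) i6 (n7, n6);
    not #(10, 8) i7 (n1, n7);

    initial begin
        force n1 = 0;     // drive the loop to a known state
        #100 release n1;  // oscillation starts here
        #2000 $finish;
    end
endmodule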
• Part 2
– Wire up the ring oscillator and, using an oscilloscope, see how close the
simulation results are to the measured behavior
– Vary the supply voltage within the allowed min/max range. Does the ring
oscillator speed up or slow down with voltage?
– Optional: If a can of freeze spray is available, cool the part and see
whether it slows down or speeds up
• 100 points