Figure 7.7 A Channel I/O Configuration
[Figure: the CPU and main memory share the memory bus; an I/O bridge connects it to the I/O bus, where I/O Processors (IOPs) manage groups of devices: disks, tapes, a terminal controller, a printer, and a local area network.]
I2O: Intelligent I/O
IOP: A small CPU tuned for I/O
It executes multiple DMA transfers before interrupting the CPU,
further reducing the CPU’s involvement in I/O
CPU's involvement in I/O: Software I/O overhead: instructions that must be executed in support of I/O.
Since DMA is used to do the I/O, Channel I/O’s response time is the same as that of DMA.
Response time (aka Latency): Time from when a device requests service (has a new piece of data
or needs a new piece of data) to when it gets service (is read from or is written to).
Memory-Mapped I/O
I/O devices share the same map as memory.
Any instruction that can access memory can access an I/O port.
One address space: 0x0000-0xEFFF Memory, 0xF000-0xFFFF I/O Devices
Load R1, 0x477E ; reads memory
Load R1, 0xF800 ; reads an I/O port

Instruction-Based I/O (Isolated I/O, I/O-Mapped I/O)
Only specialized instructions can access I/O devices,
and there are separate address spaces for I/O and memory.
Memory map: 0x0000-0xFFFF Memory; I/O map: 0x00-0xFF I/O Devices
In R1, 0xC0 ; reads an I/O port
Out 0xC2, R3 ; writes an I/O port
Memory-Mapped I/O and Instruction-Based I/O are not I/O control methods!
Programmed I/O, Interrupt-Driven I/O, DMA, and Channel I/O can all be done using either Memory-Mapped or Isolated I/O.
Note: the author uses the term “Isolated I/O” to refer to a system with a separate bus for memory and for I/O.
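To make the distinction concrete, here is a minimal C sketch of both styles (not from the text). It assumes a hypothetical device whose status register appears at memory address 0xF800 under memory-mapped I/O and at ports 0xC0/0xC2 under isolated I/O, matching the example addresses above, and it uses x86 GCC-style inline assembly for the port instructions.

#include <stdint.h>

/* Memory-mapped I/O: the device register is just an address
 * (0xF800 is the example address from the slide). Any load/store works. */
#define DEV_STATUS_REG ((volatile uint8_t *)0xF800u)

uint8_t read_status_memory_mapped(void)
{
    return *DEV_STATUS_REG;   /* an ordinary memory read reaches the device */
}

/* Isolated (I/O-mapped) I/O: a separate port address space reachable only
 * through dedicated instructions (x86 in/out, shown via inline assembly). */
uint8_t read_status_isolated(void)
{
    uint8_t value;
    __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"((uint16_t)0xC0));
    return value;
}

void write_control_isolated(uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"((uint16_t)0xC2));
}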
[Figure: the disk controller sits between the system bus and the disk; address decoders and an I/O controller connect the bus's n address lines and n data lines, through a cache/buffer, to the disk, using the control signals Request, Ready, Write/Read, Clock (Bus), Reset, and Error.]
Figure 7.10 A Disk Controller Interface
Write from DMA Controller to Disk (e.g. page 382-4)
1. The DMA circuit places the address of the disk controller on the address lines, and raises
(asserts) the Request and Write signals.
2. With the Request signal asserted, decoder circuits in the controller interrogate the address
lines.
3. Upon sensing its own address, the decoder enables the disk control circuits. If the disk is
available for writing data, the controller asserts the Ready line. At this point, the handshake
between the DMA and the disk controller is complete. With the Ready signal asserted, no
other devices may use the bus.
4. The DMA circuits then place the data on the lines and lower (negate) the Request signal.
5. When the disk controller sees the Request signal drop, it transfers the byte from the data
lines to the disk buffer, and then lowers its Ready signal.
Protocol: Exact form / meaning / timing (sequence) of signals exchanged between sender and receiver.
Handshake: A protocol in which commands and/or data from the sender are acknowledged by the receiver.
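The five numbered steps can be condensed into a short simulation. The sketch below is a toy C model of that handshake, not code from the text: the controller address (0x07) and the data byte are made-up values, and both sides of the bus are collapsed into one function purely to show the order in which the signals change.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy model of the bus signals used in the DMA-to-disk write handshake. */
struct bus {
    uint8_t address;
    uint8_t data;
    bool    request;
    bool    write;
    bool    ready;
};

#define DISK_CTRL_ADDR 0x07u   /* hypothetical controller address */

static void dma_write_byte(struct bus *b, uint8_t byte)
{
    /* Step 1: the DMA drives the controller's address and asserts
     * Request and Write. */
    b->address = DISK_CTRL_ADDR;
    b->write   = true;
    b->request = true;

    /* Steps 2-3: the controller decodes its own address and, if the disk
     * can accept data, asserts Ready, completing the handshake. */
    if (b->request && b->address == DISK_CTRL_ADDR)
        b->ready = true;

    /* Step 4: the DMA places the byte on the data lines and lowers Request. */
    b->data    = byte;
    b->request = false;

    /* Step 5: seeing Request drop, the controller latches the byte into its
     * buffer and lowers Ready, releasing the bus. */
    if (!b->request && b->ready) {
        printf("disk buffer <- 0x%02X\n", b->data);
        b->ready = false;
    }
}

int main(void)
{
    struct bus b = {0};
    dma_write_byte(&b, 0xA5);
    return 0;
}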
DMA to/from Disk Controller
[Timing diagram: clock ticks t0-t10 along the time axis; traces for Request, Address, Write/Read, Ready, Data, and the (Bus) Clock.]
Time | Bus Signal      | Meaning
t0   | Assert Write    | Bus is needed for writing (not reading)
t0   | Assert Address  | Indicates where bytes will be written
t1   | Assert Request  | Request write to address on address lines
t2   | Assert Ready    | Acknowledges write request; bytes placed on data lines
t4   | Data Lines      | Write data (requires several cycles)
t8   | Lower Ready     | Release bus
Figure 7.11 A Bus Timing Diagram
Graphical presentation of the sequence and precise timing of signals required for the protocol.
Computer to/from Printer
[Timing diagram: ticks t0-t6; traces for nStrobe, Busy, nAck, and Data.]
Figure 7.12 A Simplified Timing Diagram for a Parallel Printer
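For comparison, here is a hedged C sketch of the Centronics-style handshake that Figure 7.12 depicts: the host waits for Busy to drop, drives the data lines, and pulses nStrobe; the printer raises Busy while it latches the byte and pulses nAck when done. Only the signal names come from the figure; the ordering details and the code itself are illustrative, with the two sides interleaved in one function (in hardware they run concurrently).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy model of the parallel-printer port signals. */
struct printer_port {
    uint8_t data;      /* data lines D0-D7           */
    bool    nstrobe;   /* active low: data are valid */
    bool    busy;      /* printer cannot accept data */
    bool    nack;      /* active low: byte accepted  */
};

static void send_byte(struct printer_port *p, uint8_t byte)
{
    if (p->busy)                 /* host waits for Busy to drop     */
        return;
    p->data    = byte;           /* host drives the data lines      */
    p->nstrobe = false;          /* host pulses nStrobe low         */

    p->busy = true;              /* printer goes busy               */
    printf("printer latches 0x%02X\n", p->data);
    p->nstrobe = true;           /* host ends the strobe pulse      */

    p->nack = false;             /* printer pulses nAck: byte taken */
    p->nack = true;
    p->busy = false;             /* printer ready for the next byte */
}

int main(void)
{
    struct printer_port port = { .nstrobe = true, .nack = true };
    send_byte(&port, 'A');
    return 0;
}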
Parallel Transmission: multiple data lines
[Figure: eight data lines D0-D7, each carrying one bit of every byte, so a full byte is transferred at a time.]
PATA (Parallel AT Attachment): 16 data lines, 133 MB/s (~1 Gb/s)
PCI-X (133 MHz, 64-bit): ~1 GB/s
Skew: different delays on different lines.
Skew gets worse as distance and/or speed increases
Attenuation (loss of voltage) also limits distance
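As a sanity check on the quoted figures, the short C sketch below redoes the parallel-bandwidth arithmetic (bytes per second times 8, and bus width times clock rate); the rounding to "~1 Gb/s" and "~1 GB/s" is the slide's own.

#include <stdio.h>

int main(void)
{
    /* PATA: 133 MB/s quoted; 133 * 8 = 1064 Mb/s, roughly 1 Gb/s. */
    double pata_MBps = 133.0;
    printf("PATA : %.0f MB/s = %.0f Mb/s (~1 Gb/s)\n",
           pata_MBps, pata_MBps * 8.0);

    /* PCI-X: 64 bits per 133 MHz clock; 133e6 * 64 / 8 = 1064 MB/s,
     * roughly 1 GB/s. */
    double pcix_MBps = 133.0 * 64.0 / 8.0;
    printf("PCI-X: %.0f MB/s (~1 GB/s)\n", pcix_MBps);
    return 0;
}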
Serial Transmission: a single data line
[Figure: a single data line D0 carrying the bits of a byte one after another.]
Never any skew, but we still have attenuation. However, we can afford better signal conditioning, and a more expensive line.
With modern electronics, we can increase the data rate:
SATA (Serial ATA): 1.5 Gb/s (150 MB/s) to 6 Gb/s (600 MB/s)
PCIe v4.0: ~2 GB/s per lane (up to 32 lanes, with de-skew between lanes)
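The Gb/s-to-MB/s pairings above reflect line-coding overhead: SATA generations 1 through 3 use 8b/10b encoding, so ten bits travel on the wire for every eight-bit data byte delivered. The short C check below assumes only that encoding efficiency.

#include <stdio.h>

int main(void)
{
    /* SATA 1.0-3.0 use 8b/10b coding: payload = line rate * 8/10 / 8 bits. */
    double line_rates_Gbps[] = { 1.5, 3.0, 6.0 };
    for (int i = 0; i < 3; i++) {
        double line_Gbps    = line_rates_Gbps[i];
        double payload_MBps = line_Gbps * 1000.0 * (8.0 / 10.0) / 8.0;
        printf("SATA %.1f Gb/s -> %.0f MB/s payload\n", line_Gbps, payload_MBps);
    }
    return 0;
}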
Secondary (Mass) Storage
- We want large, cheap (per bit), non-volatile storage.
- Today this is based largely on magnetic recording
  and increasingly on flash (SSD).

Magnetic Recording Technology:
[Figure: an inductive write element (poles P1/P2, return pole, shields) and a GMR read sensor fly over the recording medium and its soft underlayer; the recorded magnetizations (alternating N/S domains) span the track width. An MFM (Magnetic Force Microscopy) image shows that magnetic domains are composed of a few hundred magnetic grains (10 nm each).]
Figure 1: Longitudinal recording diagram (top) and perpendicular recording diagram (bottom)
Source: Hitachi GST white paper, www.hitachiGST.com
http://www.hitachigst.com/tech/techlib.nsf/techdocs/F47BF010A4D29DFD8625716C005B7F34/$file/PMR_white_paper_final.pdf