
ФЕДЕРАЛЬНОЕ АГЕНТСТВО ПО ОБРАЗОВАНИЮ
УЛЬЯНОВСКИЙ ГОСУДАРСТВЕННЫЙ ТЕХНИЧЕСКИЙ УНИВЕРСИТЕТ
COMPUTER
WORLD
Учебное пособие
для студентов дневного отделения ФИСТ
Составитель Т. А. Матросова
УЛЬЯНОВСК
2007
УДК 004.3=111(075)
ББК 81.2.Англ я7
К63
Рецензенты: доцент, зав. кафедрой «Иностранные языки»
УВВИУС, канд. пед. наук, доцент Л. В. Шилак;
доцент, зав. кафедрой «Иностранные языки» УВВТУ, канд.
филол. наук, доцент Л. А. Жерехова.
Утверждено редакционно-издательским советом университета в
качестве учебного пособия.
Computer world: учебное пособие для студентов дневного отделения
К63 ФИСТ / сост. Т. А. Матросова. – Ульяновск: УлГТУ, 2007. − 118 с.
ISBN 978-5-9798-0026-3
Пособие составлено в соответствии с программой курса английского языка для
высшей школы, содержит оригинальные тексты по специальности, упражнения,
словарь, приложение с теорией по составлению аннотаций и рефератов,
дополнительные тексты для чтения, образцы аннотаций.
Работа подготовлена на кафедре «Иностранные языки».
УДК 004.3=111(075)
ББК 81.2.Англ я7
ISBN 978-5-9798-0026-3
© Т. А. Матросова, составление, 2007
© Оформление. УлГТУ, 2007
UNIT 1
HISTORY
TEXT 1
Read the text and decide on a suitable title for it.
In 1952, a major computing company took a decision to get out of the business
of making mainframe computers. They believed that there was only a market for four
mainframes in the whole world. That company was IBM. The following year they
reversed their decision.
In 1980, IBM decided that there was a market for 250,000 PCs, so they set up a
special team to develop the first IBM PC. It went on sale in 1981 and set a worldwide standard for IBM-compatibility which, over the next ten years, was only
seriously challenged by one other company, Apple Computers. Since then, over
seventy million PCs made by IBM and other manufacturers have been sold. Over this
period, PCs have become commodity items. Since IBM made the design non-proprietary, anyone can make them.
The history of the multi-billion dollar PC industry has been one of mistakes.
Xerox Corporation funded the initial research on personal computers in their Palo
Alto laboratory in California. However, the company failed to capitalize on this work,
and the ideas that they put together went into the operating system developed for
Apple's computers. This was a graphical interface: using a mouse, the user clicks on
icons which represent the function to be performed.
The first IBM PC was developed using existing available electrical components.
With IBM's badge on the box it became the standard machine for large corporations
to purchase. When IBM were looking for an operating system, they went initially to
Digital Research, who were market leaders in command-based operating systems
(these are operating systems in which the users type in commands to perform a
function). When the collaboration between IBM and Digital Research failed, IBM
turned to Bill Gates, then 25 years old, to write their operating system.
Bill Gates founded Microsoft on the basis of the development of MS/DOS, the
initial operating system for the IBM PC. Digital Research have continued to develop
their operating system, DR/DOS, and it is considered by many people to be a better
product than Microsoft's. However, without an endorsement from IBM, it has become
a minor player in the market. Novell, the leaders in PC networking, now own Digital
Research, so things may change.
The original IBM PC had a minimum of 16K of memory, but this could be
upgraded to 512K if necessary, and ran with a processor speed of 4.77MHz. Ten
years later, in 1991, IBM were making PCs with 16Mb of memory, expandable to
64Mb, running with a processor speed of 33MHz. The cost of buying the hardware
has come down considerably as the machines have become commodity items. Large
companies are considering running major applications on PCs, something which, ten
years ago, no one would have believed possible of a PC. In contrast, many computers
in people's homes are just used to play computer games.
The widespread availability of computers has in all probability changed the
world for ever. The microchip technology which made the PC possible has put chips
not only into computers, but also into washing-machines and cars. Some books may
never be published in paper form, but may only be made available as part of public
databases. Networks of computers are already being used to make information
available on a worldwide scale.
VOCABULARY
application n – приложение, применение
available a – пригодный, полезный
availability n – пригодность, полезность
badge n – эмблема
capitalize v – наживать капитал
challenge v – бросать вызов
collaboration n – сотрудничество
commodity item – предмет потребления, товар
compatibility n – совместимость
consider v – считать, полагать
considerably adv – значительно
endorsement n – одобрение, поддержка
expand v – расширять, развивать
fail v – оказаться не в состоянии, провалиться
go on sale – поступить в продажу
icon n – иконка, изображение, символ
major a – больший, более важный, главный
mainframe computers – универсальные ЭВМ
manufacturer n – производитель
minor a – незначительный, меньший
network n – сеть
non-proprietary a – товары, право производства и продажи которых
принадлежит не одной фирме
on a worldwide scale – в мировом масштабе
perform v – выполнять
public database – общедоступная база данных
purchase v – покупать
reverse v – отменять
set up v – основать (компанию, дело)
turn to v – обратиться к
upgrade v – модернизировать
widespread a – (широко) распространенный
SUGGESTED ACTIVITIES
Exercise 1. Answer these questions about the text.
1. How many mainframes did IBM think it was possible to sell in 1952?
2. How many PCs have now been sold?
3. Who paid for the initial research into PCs?
4. Which company later used the results of this research to develop their
operating system?
5. What are command-based operating systems?
6. DR/DOS is an acronym. What does it stand for?
7. Since the invention of the IBM PC, many of its features have been improved.
Which of the following features does the text not mention in this respect?
a. memory
b. speed
c. size
d. cost
8. Give three examples from the text of how the availability of computers has ‘in
all probability changed the world for ever’.
Exercise 2. Look back in the text and find words that have a similar meaning to:
1. international
2. contested
3. errors
4. paid for
5. buy
6. first
7. recommendation
8. improved
Exercise 3. Translate the sixth paragraph (starting 'The original IBM PC...'). Look
carefully at the tenses before you start.
Exercise 4. The article states that 'many computers in people's homes are just used to
play computer games'.
Discuss the following questions:
1. In what other ways are computers used at home, or outside work?
2. If you have a PC, how do you use it? (If not, how would you use one?)
TEXT 2
Read the text and make up a plan.
FROM ELECTROMECHANICAL TO ELECTRONIC COMPUTERS: AIKEN
TO ENIAC
For over 30 years, Thomas J. Watson, Sr., one of the supersalesmen of the 20th
century, ran International Business Machines Corp. (IBM) with an autocratic hand.
As a result, the company that emerged as a successor to Herman Hollerith's
Tabulating Machine Company was highly successful at selling mechanical
calculators to business.
It was only natural, then, that a young Harvard associate professor of
mathematics, Howard H. Aiken, after reading Charles Babbage and Ada Byron's
notes and conceiving of a modern equivalent of Babbage's analytical engine, should
approach Watson for research funds. The cranky head of IBM, after hearing a pitch
for the commercial possibilities, thereupon gave Aiken a million dollars. As a result,
the Harvard Mark I was born.
Nothing like the Mark I had ever been built before. Eight feet high and 55 feet
long, made of streamlined steel and glass, it emitted a sound that one person said was
«like listening to a roomful of old ladies knitting away with steel needles.» Whereas
Babbage's original machine had been mechanical, the Mark I was electromechanical,
using electromagnetic relays (not vacuum tubes) in combination with mechanical
counters. Unveiled in 1944, the Mark I had enormous publicity value for IBM, but it
was never really efficient. The invention of a truly electronic computer came from
other quarters.
Who is the true inventor of the electronic computer? In 1974, a federal court
determined, as a result of patent litigation, that Dr. John V. Atanasoff was the
originator of the ideas required to make an electronic digital computer actually work.
However, some computer historians dispute this court decision, attributing that
designation to Dr. John Mauchly. The background is as follows.
In the late 1930s, Atanasoff, a professor of physics at what is now Iowa State
University, spent time trying to build an electronic calculating device to help his
students solve complicated mathematical problems. One night, while sitting in an
Illinois roadside tavern, after having driven 189 miles to clear his thoughts, the idea
came to him for linking the computer memory and associated logic. With the help of
a graduate student, Clifford Berry, and using vacuum tubes, he built the first digital
computer that worked electronically. The computer was called the ABC, for
«Atanasoff-Berry Computer».
During the years of 1940-41, Atanasoff met with Mauchly, who was then a
professor with the Moore School of Electrical Engineering at the University of
Pennsylvania. Mauchly had been interested in building his own computer, and there
is a good deal of dispute as to how many of Atanasoff and Berry's ideas he might
have utilized. In any case, in 1942 Mauchly and his assistant, J. Presper Eckert, were
asked by American military officials to build a machine that would rapidly calculate
trajectories for artillery and missiles. The machine they proposed, which would cut
the time needed to produce trajectories from 15 minutes to 30 seconds, would employ
18,000 vacuum tubes–and all of them would have to operate simultaneously.
This machine, called ENIAC–for Electronic Numerical Integrator and
Calculator–was worked on 24 hours a day for 30 months and was finally turned on in
February 1946, too late to aid in the war effort. A massive machine that filled an
entire room, it was able to multiply a pair of numbers in about 3 milliseconds, which
made it 300 times faster than any other machine.
There were a number of drawbacks to ENIAC – including serious cooling
problems because of the heat generated by all the tubes and, more importantly,
ridiculously small storage capacity. Worst of all, the system was quite inflexible.
Each time a program was changed, the machine had to be rewired. This last obstacle
was overcome by the Hungarian-born mathematical genius Dr. John von Neumann.
The holder of degrees in chemistry and physics, a great storyteller, and a man
with total recall, von Neumann was a member of the Institute for Advanced Study in
Princeton, New Jersey. One day in 1945, while waiting for a train in Aberdeen,
Maryland, a member of the ENIAC development team, Herman Goldstine, ran into
von Neumann, who was then involved in the top-secret work of designing atomic
weapons. Since both persons had security clearances, they were able to discuss each
other's work, and von Neumann began to realize that the difficulties he was having in
the time-consuming checking of his advanced equations could be solved by the high
speeds of ENIAC. As a result of that chance meeting, von Neumann joined the
ENIAC team as a special consultant.
When the Army requested a more powerful computer than ENIAC, von
Neumann responded by proposing the EDVAC (for Electronic Discrete Variable
Automatic Computer), which would utilize the stored program concept. That is,
instead of people having to rewire the machine to go to a different program, the
machine would, in less than a second, «read» instructions from computer storage for
switching to a new program. Von Neumann also proposed that the computer use the
binary numbering system (the ENIAC worked on the decimal system), to take
advantage of the two-state conditions of electronics («on» and «off» to correspond to
1 and 0).
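To make the two-state idea concrete for readers who want to experiment, here is a small illustration (not part of the original article) of how decimal values map onto binary digits, using only standard Python built-ins:

```python
# Each binary digit is an "on" (1) or "off" (0) state, which is why
# binary suits two-state electronic circuits.
for n in (0, 1, 2, 5, 13):
    print(n, "->", format(n, "04b"))   # e.g. 13 -> 1101

# Reading a string of on/off states back as a decimal number:
print(int("1101", 2))                  # prints 13
```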
Mauchly and Eckert and others at the Moore School of Engineering set out to
build the EDVAC, but the first computer using the stored program concept was
actually the EDSAC, built in 1949 at Cambridge University in England. One reason
that EDVAC was delayed was that Eckert and Mauchly founded their own company
in 1946 to build what would ultimately be called the UNIVAC computer.
SUGGESTED ACTIVITIES
1. Point out the paragraph describing the Mark I and characterize this computer.
2. Find the paragraph about ENIAC and speak about its drawbacks.
3. Look through the text and say who designed the computer and who is
considered to be the true inventor of the electronic computer.
4. Retell the text briefly according to the plan.
TEXT 3
Translate the text orally without a dictionary
FROM UNIVAC TO PC
It is hard to believe now that people used to refer to a computer as a «Univac,»
but this is the name by which it probably first came to public attention. UNIVAC was
the name that Presper Eckert and John Mauchly gave to their Universal Automatic
Computer, on which they began work in 1946, fresh from their work on ENIAC. In
1949, Remington Rand acquired the company, and the first UNIVAC became
operational at the Census Bureau in 1951.
However, it was in the next year, a presidential election year, that the public
really came to know the term UNIVAC. During vote counting the night of the 1952
election, UNIVAC surprised CBS network executives by predicting–after analyzing
only 5% of the vote counted–that Eisenhower would defeat Stevenson for President.
Of course, since then, computers have been used extensively by television networks
to predict election outcomes.
UNIVAC was also the first computer used for data processing and record
keeping by a business organization–it was installed by General Electric in Louisville,
Kentucky, in 1954. Also in that year, IBM's 650 mainframe computer was first
installed, an upgrade of the company's punched-card machines. Because
businesspeople were already used to punched-card data processing, the IBM 650 was
readily accepted by the business community, thus giving IBM a substantial foot in the
door to the computer market, an advantage it still enjoys today.
We have described the movement of computers from vacuum tubes (1951) to
transistors (1959) to integrated circuits (1965). By 1960, a number of companies had
entered the computer market, among them Control Data Corporation (CDC), National
Cash Register (NCR), and General Electric.
In 1964, after reportedly spending a spectacular $5 billion, IBM announced an
entire new line of computers called the System/360, so-called because they covered
«360 degrees» of a circle. That is, the System/360 came in several models and sizes,
with about 40 different kinds of input and output and secondary storage devices, all
compatible so that customers could put together systems tailor-made to their needs
and budgets. Despite the tremendous disruption for users, the System/360 was a
resounding success and repaid IBM's investment many times over.
But in the 1960s and 1970s, competitors to IBM saw holes they could fill. Large
mainframe computers began to be supplemented by minicomputers, such as those
made by Digital Equipment Corporation (DEC), Data General Corporation, and
Hewlett-Packard. Cray, formed by Seymour Cray, began developing the
supercomputer. A former IBM employee named Gene Amdahl marketed his Amdahl
470V/6, which was one and one-half times faster than a comparable IBM computer,
yet cost less and occupied only a third the space.
Besides General Electric, RCA also tried to penetrate the mainframe computer
market, but later withdrew. Of the original mainframe makers, the survivors today are
IBM, NCR, UniSys (Sperry-Univac and Burroughs reconstituted), and Honeywell–
and IBM has the majority of the mainframe market.
In the 1970s, the volatile computer industry was thrown into an uproar when the
microprocessor was invented, pointing the way to the relatively inexpensive
microcomputer. Led by the famous Apple II, invented by Steve Jobs and Steve
Wozniak at Apple Computer, then by other products from Tandy-Radio Shack,
Commodore, Atari, and Texas Instruments, the microcomputer market has been the
battleground of over 150 different computer manufacturers–among them those two
industrial giants, IBM and AT&T.
IBM introduced its IBM PC (for Personal Computer) in the summer of 1981,
and immediately zoomed to the top in microcomputer sales–not surprising,
considering IBM's long-term presence in the computer industry and its strength as a
marketing-oriented organization. The IBM PC led to several enormous industries,
among them IBM-compatible hardware and «clones», popular software such as that
produced by Lotus and Software Publishing, and telecommunications entities such as
local area networks and on-line-retrieval bulletin boards.
American Telephone & Telegraph, on the other hand, which used to be thought
of as «Ma Bell» or «the Phone Company,» was forced by the U.S. government to
divest itself of 22 local Bell operating companies (regrouped into seven regional
holding companies) and to allow competition from other long-distance telephone services, such as MCI and GTE's Sprint, and Allnet Communications. In return, the
government permitted AT&T to enter the computer market. The question in many
observers' minds, however, was whether AT&T could relinquish the habits of a
monopoly and become an aggressive marketing force in a highly competitive
business. The announcement of AT&T's personal computer, the PC6300 (produced
by the Italian office equipment maker Olivetti) in June 1984 was the company's
opening gun. The strategy has been to approach office automation from the
company's historic base in communications, so that AT&T products can be linked
together for both computing and communicating.
In 1987, in an attempt to cut into sales of «IBM-compatible» microcomputers–
computers made by companies other than IBM (such as Compaq) that nevertheless
run IBM-type software and equipment–International Business Machines announced
its line of Personal System/2 computers, most of which significantly improved on
speed and memory capacity.
According to Business Week (April 17, 1987), the top 15 office equipment and
computer manufacturers, ranked in terms of their market value for the year 1986,
were the following: IBM, Digital Equipment, Hewlett-Packard, Xerox, NCR, UniSys,
Tandy, Apple Computer, Cray Research, Automatic Data Processing, Pitney-Bowes,
Tandem Computers, Honeywell, Wang Laboratories, and Amdahl.
SUGGESTED ACTIVITIES
1. Find the paragraph about the first Personal Computer. Translate it in written
form.
2. Write a summary of the text in Russian. Use the introductory patterns given
below:
в статье говорится…, обращается внимание…, особое внимание
уделяется…, обсуждаются…, рассматриваются…, подробно
анализируются.
TEXT 4
Read the text. What keywords can you write? How does the author describe a
humanlike PC?
HOWARD H. AIKEN AND THE COMPUTER
Howard Aiken's contributions to the development of the computer – notably the
Harvard Mark I (IBM ASCC) machine, and its successor the Mark II – are often
excluded from the mainstream history of computers on two technicalities. The first is
that Mark I and Mark II were electro-mechanical rather than electronic; the second
one is that Aiken was never convinced that computer programs should be treated as
data, in what has come to be known as the von Neumann concept, or the stored
program.
It is not proposed to discuss here the origins and significance of the stored
program. Nor do I wish to deal with the related problem of whether the machines before
the stored program were or were not «computers». This subject is complicated by the
confusion in actual names given to machines. For example, the ENIAC, which did
not incorporate a stored program, was officially named a computer: Electronic
Numerical Integrator And Computer. But the first stored-program machine to be put
into regular operation was Maurice Wilkes' EDSAC: Electronic Delay Storage
Automatic Calculator. It seems to be rather senseless to deny many truly significant
innovations (by H. H. Aiken and by Eckert and Mauchly), which played an important
role in the history of computers, on the arbitrary ground that they did not incorporate
the stored-program concept.
Aiken was a visionary, a man ahead of his times. Grace Hopper and others
remember his prediction in the late 1940s, even before the vacuum tube had been
wholly replaced by the transistor, that the time would come when a machine even
more powerful than the giant machines of those days could be fitted into a space as
small as a shoe box.
Some weeks before his death Aiken had made another prediction. He pointed
out that hardware considerations alone did not give a true picture of computer costs.
As hardware has become cheaper, software has been apt to get more expensive. And
then he gave us his final prediction: «The time will come», he said, «when
manufacturers will give away hardware in order to sell software». Time alone will
tell whether or not this was his final look ahead into the future.
THE DEVELOPMENT OF COMPUTERS IN THE USA
In the early 1960s, when computers were hulking mainframes that took up entire
rooms, engineers were already toying with the then-extravagant notion of building a
computer intended for the sole use of one person. By the early 1970s, researches at
Xerox's Palo Alto Research Center (XeroxPARC) had realized that the pace of
improvement in the technology of semiconductors – the chips of silicon that are the
building blocks of present-day electronics – meant that sooner or later the PC would
be extravagant no longer. They foresaw that computing power would someday be so
cheap that engineers would be able to afford to devote a great deal of it simply to
making non-technical people more comfortable with these new information-handling
tools. In their labs, they developed or refined much of what constitutes PCs
today, from «mouse» pointing devices to software «windows». Although the work at
XeroxPARC was crucial, it was not the spark that took PCs out of the hands of
experts and into the popular imagination. That happened inauspiciously in January
1975, when the magazine Popular Electronics put a new kit for hobbyists, called the
Altair, on its cover. For the first time, anybody with $400 and a soldering iron could
buy and assemble his own computer. The Altair inspired Steve Wozniak and Steve
Jobs to build the first Apple computer, and a young college dropout named Bill Gates
to write software for it. Meanwhile, the person who deserves the credit for inventing
the Altair, an engineer named Ed Roberts, left the industry he had spawned to go to
medical school. Now he is a doctor in a small town in central Georgia.
To this day, researchers at Xerox and elsewhere pooh-pooh the Altair as too
primitive to have made use of the technology they felt was needed to bring PCs to the
masses. In a sense, they are right. The Altair incorporated one of the first single-chip
microprocessors – a semiconductor chip that contained all the basic circuits needed to
do calculations – called the Intel 8080. Although the 8080 was advanced for its time,
it was far too slow to support the mouse, windows, and elaborate software Xerox had
developed. Indeed, it wasn't until 1984, when Apple Computer's Macintosh burst onto
the scene, that PCs were powerful enough to fulfill the original vision of researchers.
«The kind of computing that people are trying to do today is just what we made at
PARC in the early 1970s,» says Alan Kay, a former Xerox researcher who jumped to
Apple in the early 1980s.
Researchers today are proceeding in the same spirit that motivated Kay and his
XeroxPARC colleagues in the 1970s: to make information more accessible to
ordinary people. But a look into today's research labs reveals very little that
resembles what we think of now as a PC. For one thing, researchers seem eager to
abandon the keyboard and monitor that are the PC's trademarks. Instead they are
trying to devise PCs with interpretive powers that are more humanlike – PCs that can
hear you and see you, can tell when you're in a bad mood and can ask questions when
they don't understand something.
It is impossible to predict the invention that, like the Altair, will crystallize new
approaches in a way that captures people's imagination.
TEXT 5
Look through the text. Point out the introductory part and the main part.
Characterize each computer system.
TOP 20 COMPUTER SYSTEMS
From soldering irons to SparcStations, from MITS to Macintosh, personal
computers have evolved from do-it-yourself kits for electronic hobbyists into
machines that practically leap out of the box and set themselves up. What enabled
them to get from there to here? Innovation and determination. Here are top 20
systems that made that rapid evolution possible.
MITS Altair 8800
There once was a time when you could buy a top-of-the-line computer for $395.
The only catch was that you had to build it yourself. Although the Altair 8800 wasn't
actually the first personal computer (Scelbi Computer Consulting's 8008-based
Scelbi-8H kit probably took that honor in 1973), it grabbed attention. MITS sold 2000 of
them in 1975 – more than any single computer before it.
Based on Intel's 8-bit 8080 processor, the Altair 8800 kit included 256 bytes of
memory (upgradable, of course) and a toggle-switch-and-LED front panel. For
amenities such as keyboard, video terminals, and storage devices, you had to go to
one of the companies that sprang up to support the Altair with expansion cards. In
1975, MITS offered 4- and 8-KB Altair versions of BASIC, the first product
developed by Bill Gates' and Paul Allen's new company, Microsoft.
If the personal computer hobbyist movement was simmering, 1975 saw it come
to a boil with the introduction of the Altair 8800.
Apple II
Those of you who think of the IBM PC as the quintessential business computer
may be in for a surprise: The Apple II (together with VisiCalc) was what really made
people look at personal computers as business tools, not just toys.
The Apple II debuted at the first West Coast Computer Fair in San Francisco in
1977. With built-in keyboard, graphics display, eight readily accessible expansion
slots, and BASIC built into ROM, the Apple II was actually easy to use. Some of its
innovations, like built-in high-resolution color graphics and a high-level language
with graphics commands, are still extraordinary features in desktop machines.
Commodore PET
Also introduced at the first West Coast Computer Fair, Commodore's PET
(Personal Electronic Transactor) started a long line of inexpensive personal computers
that brought computers to the masses. (The VIC-20 that followed was the first
computer to sell 1 million units, and the Commodore 64 after that was the first to
offer a whopping 64 KB of memory.) The keyboard and small monochrome display
both fit in the same one-piece unit. Like the Apple II, the PET ran on MOS
Technology's 6502. Its $795 price, key to the PET's popularity, supplied only 4 KB of
RAM but included a built-in cassette tape drive for data storage and an 8-KB version of
Microsoft BASIC in its 14-KB ROM.
Radio Shack TRS-80
Remember the Trash 80? Sold at local Radio Shack stores in your choice of
color (Mercedes Silver), the TRS-80 was the first ready-to-go computer to use Zilog's
Z80 processor.
The base unit was essentially a thick keyboard with 4 KB of RAM and 4 KB of
ROM (which included BASIC). An optional expansion box that connected by ribbon
cable allowed for memory expansion. A Pink Pearl eraser was standard equipment to
keep those ribbon cable connections clean.
Much of the first software for this system was distributed on audiocassettes
played in from Radio Shack cassette recorders.
Osborne 1 Portable
By the end of the 1970s, garage start-ups were passé. Fortunately, there were
other entrepreneurial possibilities. Take Adam Osborne, for example. He sold
Osborne Books to McGraw-Hill and started Osborne Computer. Its first product, the
24-pound Osborne 1 Portable, boasted a low price of $1795. More important,
Osborne established the practice of bundling software. Business was looking good
until Osborne preannounced its next version while sitting on a warehouse full of
Osborne 1s. Oops. Reorganization followed soon thereafter.
Xerox Star
This is the system that launched a thousand innovations in 1981. The work of
some of the best people at Xerox PARC (Palo Alto Research Center) went into it.
Several of these – the mouse and a desktop GUI with icons – showed up two years
later in Apple's Lisa and Macintosh computers. The Star wasn't what you would call a
commercial success, however. The main problem seemed to be how much it cost. It
would be nice to believe that someone shifted a decimal point somewhere: The
pricing started at $50,000.
IBM PC
Irony of ironies that someone at mainframe-centric IBM recognized the business
potential in personal computers. The result was the 1981 landmark announcement of
the IBM PC. Thanks to an open architecture, IBM's clout, and Lotus 1-2-3
(announced one year later), the PC and its progeny made business micros legitimate
and transformed the personal computer world. The PC used Intel’s 16-bit 8088, and
for $3,000, it came with 64 KB of RAM and a floppy drive. The printer adapter and
monochrome monitor were extras, as was the color graphics adapter.
Compaq Portable
Compaq's Portable almost single-handedly created the PC clone market.
Although that was about all you could do with it single-handedly – it weighed a ton.
Columbia Data Products just preceded Compaq that year with the first true IBM PC
clone but didn't survive. It was Compaq's quickly gained reputation for engineering
and quality, and its essentially 100 percent IBM compatibility (reverse-engineering,
of course), that legitimized the clone market. But was it really designed on a napkin?
Radio Shack TRS-80 Model 100
Years before PC-compatible subnotebook computers, Radio Shack came out
with a book-size portable with a combination of features, battery life, weight, and
price that is still unbeatable. (Of course, the Z80-based Model 100 didn't have to run
Windows.)
The $800 Model 100 had only an 8-row by 40-column reflective LCD (large at
the time) but supplied ROM-based applications (including text editor,
communications program, and BASIC interpreter), a built-in modem, I/O ports,
nonvolatile RAM, and a great keyboard. Weighing under 4 pounds, and with a
battery life measured in weeks (on four AA batteries), the Model 100 quickly became
the first popular laptop, especially among journalists. With its battery-backed RAM,
the Model 100 was always in standby mode, ready to take notes, write a report, or go
on-line. NEC's PC 8201 was essentially the same Kyocera-manufactured system.
Apple Macintosh
Apple's Macintosh and its GUI generated even more excitement than the IBM
PC. Apple's R&D people were inspired by critical ideas from Xerox PARC (and
practiced on Apple's Lisa) but added many of their own ideas to create a polished
product that changed the way people use computers.
The original Macintosh used Motorola's 16-bit 68000 microprocessor. At
$2,495, the system offered a built-in high-resolution monochrome display, the Mac
OS, and a single-button mouse. With only 128 KB of RAM, the Mac was
underpowered at first. But Apple included some key applications that made the
Macintosh immediately useful. (It was MacPaint that finally showed people what a
mouse is good for.)
IBM AT
George Orwell didn't foresee the AT in 1984. The IBM AT set new standards
for performance and storage capacity. Intel's blazingly fast 286 CPU running at 6
MHz and 16-bit bus structure gave the AT several times the performance of previous
IBM systems. Hard drive capacity doubled from 10 MB to 20 MB, and the cost per
megabyte dropped dramatically. New 16-bit expansion slots meant new (and faster)
expansion cards but maintained downward compatibility with old 8-bit cards. These
hardware changes and new high-density 1.2-MB floppy drives meant a new version
of PC-DOS (the dreaded 3.0).
The price for an AT with 512 KB of RAM, a serial/parallel adapter, a high-density
floppy drive, and a 20-MB hard drive was well over $5,000 – but much less
than what the pundits expected.
Commodore Amiga 1000
The Amiga introduced the world to multimedia. Although it cost only $1,200,
the 68000-based Amiga 1000 did graphics, sound, and video well enough that many
broadcast professionals adopted it for special effects. Its sophisticated multimedia
hardware design was complex for a personal computer, as was its multitasking,
windowing OS.
Compaq Deskpro 386
While IBM was busy developing (would «wasting time on» be a better phrase?)
its proprietary Micro Channel PS/2 systems, clone vendors ALR and Compaq wrested
away control of the x86 architecture and introduced the first 386-based systems, the
Access 386 and Deskpro 386. Both systems maintained backward compatibility with
the 286-based AT.
Compaq's Deskpro 386 had a further performance innovation in its Flex bus
architecture. Compaq split the x86 external bus into two separate buses: a high-speed
local bus to support memory chips fast enough for the 16-MHz 386, and a slower I/O
bus that supported existing expansion cards.
Apple Macintosh II
When you first looked at the Macintosh II, you may have said, «But it looks just
like a PC.» You would have been right. Apple decided it was wiser to give users a case
they could open so they could upgrade it themselves. The monitor in its 68020-powered
machine was a separate unit that typically sat on top of the CPU case.
NeXT Nextstation
UNIX had never been easy to use, and only now, 10 years later, are we getting
back to that level. Unfortunately, Steve Jobs never developed the software base it
needed for long-term survival. Nonetheless, it survived as an inspiration for future
workstations.
Priced at less than $10,000, the elegant Nextstation came with a 25-MHz 68030
CPU, 8 MB of RAM, and the first commercial magnetooptical drive (256-MB
capacity). It also had a built-in DSP (digital signal processor). The programming
language was object-oriented C, and the OS was a version of UNIX.
NEC UltraLite
NEC's UltraLite is the portable that put subnotebook into the lexicon. Like Radio
Shack's TRS-80 Model 100, the UltraLite was a 4-pounder ahead of its time. Unlike
the Model 100, it was expensive (starting price, $2,999), but it could run MS-DOS.
(The burden of running Windows wasn't yet thrust upon its shoulders.)
Fans liked the 4.4-pound UltraLite for its trim size and portability, but it really
needed one of today's tiny hard drives. It used battery-backed DRAM (1 MB,
expandable to 2 MB) for storage, with ROM-based Traveling Software's LapLink to
move stored data to a desktop PC. Foreshadowing PCMCIA, the UltraLite had a
socket that accepted credit-card-size ROM cards holding popular applications like
WordPerfect or Lotus 1-2-3, or a battery-backed 256-KB RAM card.
Sun SparcStation 1
It wasn't the first RISC workstation, nor even the first Sun system to use Sun's
new SPARC chip. But the SparcStation 1 set a new standard for price/performance, at
a starting price of only $8,995 – about what you might spend for a fully configured
Macintosh. Sun sold lots of systems and made the words SparcStation and
workstation synonymous in many people's minds.
The SparcStation 1 also introduced S-Bus, Sun's proprietary 32-bit synchronous
bus, which ran at the same 20-MHz speed as the CPU.
IBM RS/6000
Sometimes, when IBM decides to do something, it does it right. The RS/6000
allowed IBM to enter the workstation market. The RS/6000’s RISC processor chip
set (RIOS) racked up speed records and introduced many to the term superscalar. But its
price was more than competitive. IBM pushed third-party software support, and as a
result, many desktop publishing, CAD, and scientific applications ported to the
RS/6000, running under AIX, IBM's UNIX.
A shrunken version of the multichip RS/6000 architecture serves as the basis for
the single-chip PowerPC, the non-x86-compatible processor with the best chance of
competing with Intel.
Apple Power Macintosh
Not many companies have made the transition from CISC to RISC this well.
The Power Macintosh represents Apple's well-planned and successful leap to bridge
two disparate hardware platforms. Older Macs run Motorola's 680x0 CISC line; the
Power Macs run existing 680x0-based applications yet provide PowerPC
performance, a combination that sold over a million systems in a year.
IBM ThinkPad 701C
It is not often anymore that a new computer inspires gee-whiz sentiment, but
IBM's Butterfly subnotebook does, with its marvelous expanding keyboard. The
701C's two-part keyboard solves the last major piece in the puzzle of building a
usable subnotebook: how to provide comfortable touch-typing. (OK, so the floppy
drive is still external.) With a full-size keyboard and a 10.4-inch screen, the 4.5-pound
701C compares favorably with full-size notebooks. Battery life is good, too.
TEXT 6
Translate the text in written form. You may use a dictionary.
INTEL PROCESSORS, THE HISTORY
Intel was one of the pioneering Microprocessor manufacturers when it created
the 4004 processor in 1971. This was followed by the 8080 processor in the late 70's,
which was developed into the 8086 and 8088 processors in 1979. It was only when,
in 1981, IBM selected the 8086 processor for its new Personal Computer, the IBM
PC, that the Intel processor design gained the opportunity to be used widely.
The Intel 8086/8088 range of processors were based upon Complex Instruction
Set Computing (CISC) which allows the number of bytes per instruction to vary
according to the instruction being processed. This is unlike Reduced Instruction Set
Computing (RISC) which has fixed length instructions (typically set at 32 bits each).
The architecture pioneered by Intel has become known as «x86» due to the early
naming system where processors were called 8086, 80186 (not used in PC's), 80286,
80386, and 80486.
In 1982 Intel introduced the 80286 (or 286) processor. This featured significant
enhancements over the 8086/8088 line, mainly by introducing protected mode and
the ability to address up to 16 megabytes of memory.
UNIT 2
THE INTERNET
TEXT 1
Look through the text and divide it into two parts. Translate the text.
WHAT IS THE INTERNET
USING the Internet, David, a teacher in the United States, acquired course
materials. A Canadian father accessed it to stay in contact with his daughter in
France. Loma, a housewife, used it to examine scientific research on the early
beginnings of the universe. A farmer turned to it to find information about new
planting methods that make use of satellites. Corporations are drawn to it because of
its power to advertise their products and services to millions of potential customers.
People around the globe read the latest national and international news by means of
its vast reporting and information services.
What is this computer phenomenon called the Internet, or the Net? Do you
personally have need of it? Before you decide to get «on» the Internet, you may want
to know something about it. In spite of all the hype, there are reasons to exercise
caution, especially if there are children in the home.
What Is It?
Imagine a room filled with many spiders, each spinning its own web. The webs
are so interconnected that the spiders can travel freely within this maze. You now
have a simplified view of the Internet–a global collection of many different types of
computers and computer networks that are linked together. Just as a telephone
enables you to talk to someone on the other side of the earth who also has a phone,
the Internet enables a person to sit at his computer and exchange information with
other computers and computer users anyplace in the world.
Some refer to the Internet as the information superhighway. Just as a road allows
travel through different areas of a country, so the Internet allows information to flow
through many different interconnected computer networks. As messages travel, each
network that is reached contains information that assists in connecting to the adjacent
network. The final destination may be in a different city or country.
Each network can «speak» with its neighbor network by means of a common set
of rules created by the Internet designers. Worldwide, how many networks are
connected? Some estimates say over 30,000. According to recent surveys, these
networks connect over 10,000,000 computers and some 30,000,000 users throughout
the world. It is estimated that the number of connected computers is doubling each
year.
What can people locate on the Internet? It offers a rapidly growing collection of
information, with topics ranging from medicine to science and technology. It features
exhaustive material on the arts as well as research material for students and coverage
of recreation, entertainment, sports, shopping, and employment opportunities. The
Internet provides access to almanacs, dictionaries, encyclopedias, and maps.
There are, however, some disturbing aspects to consider. Can everything on the
Internet be regarded as wholesome? What services and resources does the Internet
offer? What precautions are in order? The following articles will discuss these
questions.
Services and Resources of the Internet
A COMMON resource provided by the Internet is a worldwide system for
sending and receiving electronic mail, known as E-mail. In fact, E-mail represents a
large portion of all Internet traffic and is for many the only Internet resource they use.
How does it work? To answer that question, let's review the ordinary mail system
first.
Imagine that you live in Canada and wish to send a letter to your daughter living
in Paris. After properly addressing the envelope, you mail it, starting the letter's
journey. At a postal facility, the letter is routed to the next location, perhaps a
regional or national distribution center, and then to a local post office near your
daughter.
A similar process occurs with E-mail. After your letter is composed on your
computer, you must specify an E-mail address that identifies your daughter. Once
you send this electronic letter, it travels from your computer, often through a device
called a modem, which connects your computer to the Internet via the telephone
network. Off it goes, bound for various computers that act like local and national
postal routing facilities. They have enough information to get the letter to a
destination computer, where your daughter can retrieve it.
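For readers who would like to see these steps in code, below is a minimal sketch of composing and sending an E-mail with Python's standard smtplib and email modules. The addresses and the mail server name are hypothetical placeholders; a real Internet service provider supplies its own server name and login details.

```python
import smtplib
from email.message import EmailMessage

# Compose the letter: the E-mail address identifies the recipient,
# much as the envelope address identifies your daughter in Paris.
msg = EmailMessage()
msg["From"] = "parent@example.ca"      # hypothetical sender
msg["To"] = "daughter@example.fr"      # hypothetical recipient
msg["Subject"] = "Hello from Canada"
msg.set_content("This letter travels through several mail-routing computers.")

# Hand the letter to a mail server, which routes it toward the
# destination computer (server name and port are placeholders).
with smtplib.SMTP("smtp.example.ca", 587) as server:
    server.starttls()                  # many providers require encryption
    # server.login("user", "password") # ...and authentication
    server.send_message(msg)
```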
Unlike the regular mail, E-mail often reaches its destination, even on other
continents, in minutes or less unless some part of the network is heavily congested or
temporarily out of order. When your daughter inspects her electronic mailbox, she
will discover your E-mail. The speed of E-mail and the ease with which it can be sent
even to multiple recipients all over the world make it a popular form of
communication.
Newsgroups
Another popular service is called Usenet. Usenet offers access to newsgroups for
group discussions on specific topics. Some newsgroups focus on buying or selling
various consumer items. There are thousands of newsgroups, and once a user has
gained access to Usenet, there is no cost to subscribe to them.
Let’s imagine that someone has joined a newsgroup involved in stamp
collecting. As new messages about this hobby are sent by others subscribing to this
group, the messages become available to this newcomer. This person reviews not
only what someone has sent to the newsgroup but also what others have written in
response. If, for example, someone requests information about a particular stamp
series, shortly afterward there may be many responses from around the world,
offering information that would be immediately available to all who subscribe to this
newsgroup.
A variation of this idea is the Bulletin Board System (BBS). BBSs are similar to
Usenet, except that all files are located on a single computer, usually maintained by
one person or group. The content of newsgroups reflects the varied interests,
viewpoints, and moral values of those who use them, so discretion is needed.
File Sharing and Topic Searching
One of the original Internet goals was global information sharing. The teacher
mentioned in the previous article located another educator on the Internet who was
willing to share already developed course materials. Within minutes the files were
transferred, despite a 2,000-mile distance.
What help is available when one does not know where a subject may be located
within the Internet? Just as we locate a phone number by using a telephone directory,
a user may find locations of interest on the Internet by first gaining access to what are
known as search sites. The user supplies a word or a phrase, the site then replies with
a list of Internet locations where information can be found. Generally, the search is
free and takes only a few seconds.
The farmer mentioned earlier had heard of a new technique called precision
farming, which uses computers and satellite maps. By entering that phrase at a search
site, he found the names of farmers who were using it as well as detailed information
about the method.
The World Wide Web
The part of the Internet called World Wide Web (or Web) allows authors to use
an old-fashioned idea, that of footnotes, in a new way. When an author of a magazine
article or a book inserts a footnote symbol, we scan the bottom of the page and are
possibly directed to another page or book. Authors of Internet computer documents
can do essentially the same thing using a technique that will underline or highlight a
word, a phrase, or an image in their document.
The highlighted word or image is a clue to the reader that an associated Internet
resource, often another document, exists. This Internet document can be fetched and
displayed immediately for the reader. The document may even be on a different
computer and located in another country. David Peal, author of Access the Internet,
notes that this technique «links you to actual documents, not just references to them».
The Web also supports the storage and retrieval, or playing, of photographs,
graphics, animations, videos, and sounds. Loma, the housewife mentioned at the
outset of the previous article, obtained and played a short color movie of the current
theories regarding the universe. She heard the narration through her computer's audio
system.
Surfing the Net
By using a Web browser, a person can easily and quickly view information and
colorful graphics that may be stored on computers in many different countries. Using
a Web browser can be similar in some ways to actual travel, only easier. One can
visit the Web exhibits of the Dead Sea Scrolls or the Holocaust Memorial Museum.
This ability to move nimbly back and forth from one Internet Web site to another is
commonly called surfing the Net.
Businesses and other organizations have become interested in the Web as a
means to advertise their products or services as well as to offer other kinds of
information. They create a Web page, a sort of electronic storefront window. Once an
organization's Web page address is known, potential customers can use a browser to
go «shopping», or information browsing. As in any marketplace, however, not all
products, services, or information provided on the Internet are wholesome.
Researchers are trying to make the Internet secure enough for confidential and
safeguarded transactions. Another worldwide Internet – dubbed by some Internet II – is
being developed because of the increased traffic that this commercial activity has
generated.
What Is «Chat»?
Another common service of the Internet is the Internet Relay Chat, or Chat. Chat
allows a group of people, using aliases, to send messages to one another immediately.
While used by a variety of age groups, it is especially popular among young people.
Once connected, the user is brought into contact with a large number of other users
from all around the world.
So-called chat rooms, or chat channels, are created that feature a particular
theme, such as science fiction, movies, sports, or romance. All the messages typed
within a chat room appear almost simultaneously on the computer screens of all
participants for that chat room.
A chat room is much like a party of people mingling and talking at the same
general time, except that all are typing short messages instead. Chat rooms are
usually active 24 hours a day. Of course, Christians realize that the Bible principles
about association, such as the one found at 1 Corinthians 15:33, apply to participation
in chat groups just as they apply to all aspects of life.
Who Pays for the Internet?
You may be wondering “Who pays the charges for the large distances one can
travel on the Internet?” The expense is shared by all users, corporate and individual.
However, the end user is not necessarily presented with a long-distance telephone
bill, even if he has visited many international sites. Most users have an account with a
local commercial Internet service provider, who in many cases bills the user a fixed
monthly fee. Providers generally supply a local number to avoid extra phone costs. A
typical monthly access fee is approximately $20 (US).
As you can see, the potential of the Internet is enormous. But should you get on
this information superhighway?
Do You Really Need the Internet?
SHOULD you use the Internet? Of course, this is a personal matter, one that you
should weigh carefully. What factors might influence your decision?
Need–Have You Calculated the Expense?
Much of the recent growth of the Internet is due to strong marketing efforts of
the business world. Clearly, their motive is to create a sense of need. Once this
perceived need is cultivated, some organizations then require a membership or annual
subscription fee for the information or service that you initially accessed without cost.
This fee is in addition to your monthly Internet access costs. Some on-line
newspapers are a common example of this practice.
Have you calculated the expense of equipment and software versus your actual
need? Are there public libraries or schools with access to the Internet? Using these
resources at first may help you to assess your need without making a large initial
investment in a personal computer and related equipment. It may be that appropriate
public Internet resources can be used, as needed, until it is clear how often such
resources are actually required. Remember, the Internet existed for more than two
decades before the general public even became aware of it, let alone felt a need for it!
VOCABULARY
access n– доступ
account n – счет
adjacent a – смежный, соседний
alias n – вымышленное имя
clue n – подсказка
congested p.p – переполненный
discretion n – осмотрительность, осторожность
equipment n – оборудование, приборы
exhaustive a – исчерпывающий
expense n – расход
facilities n – средства, возможности
footnotes n – сноска, примечание
highlight v – выделить
hype n – беззастенчивая реклама
maze n – лабиринт
message n – сообщение
mingle v – общаться
multiple a – многочисленный
network n – сеть
nimbly adv – проворно
outset n – начало
perceived p.p – ощутимый, понятный
precaution n – предосторожность
precision n – точность
refer v – обращаться
retrieve v – восстановить, вернуть
survey n – обзор
share v – делить
specify v – точно определять
spider n – паук
subscribe v – присоединиться
underline v – подчеркнуть
web n – паутина
wholesome a – здоровый, плодотворный
SUGGESTED ACTIVITIES
Exercise 1. Find equivalents of the following expressions in the text:
послание, участие, личное дело, обратиться, во всем мире, проявлять
осторожность, взаимосвязаны, многочисленный, получатель, подобный,
помешать, хранить, получить доступ, безопасный
Exercise 2. Answer the following questions:
1. What does the Internet enable a person to do?
2. What does the author compare the Internet to?
3. How many computers are connected by networks?
4. What services and resources does the Internet provide?
5. What is E-mail? How does it work?
6. What does Usenet offer?
7. What is Web? What can it support?
8. How can businesses use a Web browser?
9. What does Chat allow people to do?
10. How can you pay for the Internet?
Exercise 3. Find a definition of the Internet.
Exercise 4. Sum up what you have learned about the Internet.
TEXT 2
Look through the text and define its theme. Translate the text orally.
Connecting to the Net depends on where you are. If you're a college student or
work at a company with its own Net connections, chances are you can gain access
simply by asking your organization's computing center or data-processing department
– they will then give you instructions on how to connect your already networked
computer to the Internet.
Otherwise, you'll need four things: a computer, telecommunications software, a
modem and a phone line to connect to the modem.
The phone line can be your existing voice line – just remember that if you have
any extensions, you (and everybody else in the house or office) won't be able to use
them for voice calls while connected to the Net.
A modem is a sort of translator between computers and the phone system. It's
needed because computers and the phone system process and transmit data, or
information, in two different, and incompatible ways. Computers «talk» digitally; that
is, they store and process information as a series of discrete numbers. The phone
network relies on analog signals, which on an oscilloscope would look like a series of
waves. When your computer is ready to transmit data to another computer over a
phone line, your modem converts the computer numbers into these waves (which
sound like a lot of screeching) – it «modulates» them. In turn, when information
waves come into your modem, it converts them into numbers your computer can
process, by «demodulating» them.
Increasingly, computers come with modems already installed. If yours didn't,
you'll have to decide what speed modem to get. Modem speeds are judged in «baud
rate» or bits per second. One baud means the modem can transfer roughly one bit per
second; the greater the baud rate, the more quickly a modem can send and receive
information. A letter or character is made up of eight bits.
You can now buy a 2,400-baud modem for well under $70 – and most now
come with the ability to handle fax messages as well. For $200 and up, you can buy a
modem that can transfer data at 9,600 baud (and often even faster, when using special
compression techniques). If you think you might be using the Net to transfer large
numbers of files, a faster modem is always worth the price. It will dramatically
reduce the amount of time your modem or computer is tied up transferring files and,
if you are paying for Net access by the hour, save you quite a bit in online charges.
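As a rough back-of-the-envelope check on those claims (using the text's own simplification that one baud is about one bit per second and one character is eight bits, and ignoring protocol overhead), the transfer time for a file of a given size can be estimated like this:

```python
# Estimate transfer time: bits = bytes * 8, seconds = bits / baud.
def transfer_seconds(file_bytes, baud):
    return (file_bytes * 8) / baud

file_size = 100_000                      # a hypothetical 100 KB file
for baud in (2_400, 9_600):
    minutes = transfer_seconds(file_size, baud) / 60
    print(f"{baud} baud: about {minutes:.1f} minutes")
# 2400 baud: about 5.6 minutes;  9600 baud: about 1.4 minutes
```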
Like the computer to which it attaches, a modem is useless without software to
tell it how to work. Most modems today come with easy-to-install software. Try the
program out. If you find it difficult to use or understand, consider a trip to the local
software store to find a better program. You can spend several hundred dollars on a
communications program, but unless you have very specialized needs, this will be a
waste of money, as there are a host of excellent programs available for around $100
or sometimes even less. Among the basic features you want to look for are a choice
of different «protocols» (more on them in a bit) for transferring files to and from the
Net and the ability to write «script» or «command» files that let you automate such
steps as logging into a host system.
When you buy a modem and the software, ask the dealer how to install and use
them. Try out the software if you can. If the dealer can't help you, find another dealer.
You'll not only save yourself a lot of frustration, you'll also have practiced the second
Net Commandment: «Ask. People Know».
To fully take advantage of the Net, you must spend a few minutes going over the
manuals or documentation that comes with your software. There are a few things you
should pay special attention to: uploading and downloading; screen capturing
(sometimes called «screen dumping»); logging; how to change protocols; and
terminal emulation. It is also essential to know how to convert a file created with your
word processing program into «ASCII» or «text» format, which will let you share
your thoughts with others across the Net.
Uploading is the process of sending a file from your computer to a system on the
Net. Downloading is retrieving a file from somewhere on the Net to your computer.
In general, things in cyberspace go «up» to the Net and «down» to you.
Chances are your software will come with a choice of several «protocols» to use
for these transfers. These protocols are systems designed to ensure that line noise or
static does not cause errors that could ruin whatever information you are trying to
transfer. Essentially, when using a protocol, you are transferring a file in a series of
pieces. After each piece is sent or received, your computer and the Net system
compare it. If the two pieces don't match exactly, they transfer it again, until they
agree that the information they both have is identical. If, after several tries, the
information just doesn't make it across, you'll either get an error message or your
screen will freeze. In that case, try it again. If, after five tries, you are still stymied,
something is wrong with a) the file; b) the telephone line; c) the system you're
connected to; or d) your own computer.
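A minimal sketch of the idea behind such error-correcting transfers (a generic illustration, not any particular protocol such as XMODEM or ZMODEM; the checksum, block size and «noisy line» here are invented for the example): each piece is sent with a checksum, the receiving side recomputes it, and the piece is retransmitted until both sides agree or five attempts have failed.

#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <vector>

// Toy checksum: the sum of all bytes truncated to 16 bits (real protocols use CRCs).
static uint16_t checksum(const std::vector<uint8_t>& block) {
    uint32_t sum = 0;
    for (uint8_t b : block) sum += b;
    return static_cast<uint16_t>(sum);
}

// Simulated noisy phone line: occasionally corrupts one byte of the block in transit.
static std::vector<uint8_t> transmit(std::vector<uint8_t> block) {
    if (!block.empty() && std::rand() % 3 == 0)
        block[std::rand() % block.size()] ^= 0x20;   // «line noise» flips a bit
    return block;
}

// Send one piece of a file: retry until the checksums match, giving up after five tries.
static bool send_block(const std::vector<uint8_t>& block) {
    const uint16_t expected = checksum(block);
    for (int attempt = 1; attempt <= 5; ++attempt) {
        std::vector<uint8_t> received = transmit(block);
        if (checksum(received) == expected) {
            std::cout << "block accepted on attempt " << attempt << "\n";
            return true;
        }
        std::cout << "checksum mismatch, retransmitting...\n";
    }
    return false;   // something is wrong with the file, the line, or the system
}

int main() {
    std::vector<uint8_t> piece(128, 'x');   // one 128-byte piece of a larger file
    return send_block(piece) ? 0 : 1;
}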
From time to time, you will likely see messages on the Net that you want to save
for later viewing – a recipe, a particularly witty remark, something you want to write
your Congressman about, whatever. This is where screen capturing and logging come
in.
When you tell your communications software to capture a screen, it opens a file
in your computer (usually in the same directory or folder used by the software) and
«dumps» an image of whatever happens to be on your screen at the time.
Logging works a bit differently. When you issue a logging command, you tell
the software to open a file (again, usually in the same directory or folder as used by
the software) and then give it a name. Then, until you turn off the logging command,
everything that scrolls on your screen is copied into that file, sort of like recording on
video tape. This is useful for capturing long documents that scroll for several pages –
using screen capture, you would have to repeat the same command for each new
screen.
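In a home-made terminal program the logging feature described above boils down to copying every line that arrives both to the screen and, while logging is switched on, to an open file. A minimal sketch of that idea, with the modem stream simulated by standard input and an assumed file name session.log:

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ofstream log("session.log", std::ios::app);   // opened when the logging command is issued
    bool logging = true;                                // turned off again by the same command
    std::string line;
    while (std::getline(std::cin, line)) {              // stands in for text scrolling in from the Net
        std::cout << line << '\n';                      // normal display on the screen
        if (logging) log << line << '\n';               // the «video tape» copy
    }
}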
Terminal emulation is a way for your computer to mimic, or emulate, the way
other computers put information on the screen and accept commands from a
keyboard. In general, most systems on the Net use a system called VT100.
Fortunately, almost all communications programs now on the market support this
system as well – make sure yours does.
You'll also have to know about protocols. There are several different ways for
computers to transmit characters. Fortunately, there are only two protocols that you're
likely to run across: 8-1-N (which stands for «8 bits, 1 stop bit, no parity» – yikes!)
and 7-1-E (7 bits, 1 stop bit, even parity).
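For readers curious what these settings look like to a programmer: on a POSIX system the serial line is configured through the standard termios interface. The sketch below is one plausible way to select 8-1-N on an already-opened port descriptor; for 7-1-E you would use CS7 and enable even parity (PARENB set, PARODD clear) instead.

#include <termios.h>

// Configure an already-opened serial port descriptor for «8 bits, 1 stop bit, no parity».
bool set_8_1_n(int fd) {
    termios tty{};
    if (tcgetattr(fd, &tty) != 0) return false;   // read the current line settings
    cfsetispeed(&tty, B2400);                     // 2,400 baud in both directions
    cfsetospeed(&tty, B2400);
    tty.c_cflag &= ~PARENB;                       // no parity
    tty.c_cflag &= ~CSTOPB;                       // one stop bit
    tty.c_cflag &= ~CSIZE;
    tty.c_cflag |= CS8;                           // eight data bits
    return tcsetattr(fd, TCSANOW, &tty) == 0;     // apply the new settings immediately
}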
In general, Unix-based systems use 7-1-E, while MS-DOS-based systems use 8-1-N. What if you don't know what kind of system you're connecting to? Try one of
the settings. If you get what looks like gobbledygook when you connect, you may
need the other setting. If so, you can either change the setting while connected, and
then hit enter, or hang up and try again with the other setting. It's also possible your
modem and the modem at the other end can't agree on the right baud rate. If changing
the protocols doesn't work, try using another baud rate (but no faster than the one
listed for your modem). Again, remember, you can't break anything! If something
looks wrong, it probably is wrong. Change your settings and try again. Nothing is
learned without trial, error and effort. Those are the basics. Now onto the Net!
SUGGESTED ACTIVITIES
1. Point out what you consider to be new or important information for you.
2. Find the passage dealing with the modem and sum up the information about it.
3. Write an abstract of the text.
TEXT 3
Read the text and speak about the Internet sites.
PUBLIC-ACCESS INTERNET SITES
What follows is a list of public-access Internet sites, which are computer
systems that offer access to the Net. All offer international e-mail and Usenet
(international conferences). In addition, they offer:
FTP
File-transfer protocol – access to scores of file libraries (everything from
computer software to historical documents to song lyrics). You'll be able to
transfer these files from the Net to your own computer.
Telnet
Access to databases, computerized library card catalogs, weather reports
and other information services, as well as live, online games that let you
compete with players from around the world.
Additional services that may be offered include:
WAIS
Wide-area Information Server; a program that can search dozens of
databases in one search.
Gopher
A program that gives you easy access to dozens of other online databases
and services by making selections on a menu. You'll also be able to use these
to copy text files and some programs to your mailbox.
IRC
Internet Relay Chat, a CB simulator that lets you have live keyboard chats
with people around the world.
Clarinet
News, sports, feature stories and columns from United Press
International; Newsbytes computer news.
TEXT 4
Translate the text in written form.
Due to the network’s complexity, simulation plays a vital role in attempting to
characterize both the behavior of the current Internet and the possible effects of
proposed changes to its operation. Yet modeling and simulating the Internet is not an
easy task. The goal of this paper is to discuss some of the issues and difficulties in
modeling Internet traffic, topologies, and protocols. The discussion is not meant as a
call to abandon Internet simulations as an impossible task. Instead, the purpose is to
share insights about some of the dangers and pitfalls in modeling and simulating the
Internet, in order to strengthen the contribution of simulations in network research. A
second purpose is to clearly and explicitly acknowledge the limitations as well as the
potential of simulations and of model-based research, so that we do not weaken our
simulations by claiming too much for them.
TEXT 5
Read the text.
MODEM ADVANCES
Modem technology has advanced tremendously since 1975, when 300bps
modems were considered the maximum practical «standard speed» for modems.
Built-in limits to copper wire circuits and telephone switching equipment made faster
modems practical only for corporate, capital-intensive leased-line networks, or so
many thought.
Things have changed since then. We've seen the top speed of affordable
modems rise from 300bps to around 19,200bps, a factor of 64. These newer modems
run faster and more reliably, pack more features, use less power in less space, and
cost less. It's the familiar computer pattern of moving toward more power in less
space for less money.
External modems sit on your desk or underneath your monitor, while internal
ones plug into an expansion slot inside the computer case. There are also acoustic
modems, the oldest modem technology of all.
Modem Basics
At its simplest, all any modem needs to do is provide a translation service from
computer signal to telephone signal and back. Computers «think» in digital, binary
form, but copper telephone wires carry signals that correspond roughly to the human
voice in loudness, tone, and range. It's little wonder the two need a translator to
communicate.
The modem provides this necessary translation. When a computer sends out its
data, the modem turns the computer's ON/OFF electrical signals into a telephone's
varying or modulated audible signal. In other words, the modem modulates the signal
– which accounts for the modulate part of a modem's name.
The newly modulated signal sounds like a whistle or, at higher speeds, like
«fuzzy» noise to the human ear. It would make no sense whatever to a computer's
logic circuits, but it travels just fine on copper wire. When it arrives at the other end
of the phone connection, another modem must turn it back into a digital, ON/OFF (or
demodulated) signal. Otherwise the receiving computer won't be able to make sense
of it. Hence the rest of the name: modulate/demodulate, or modem.
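As a very rough illustration of what «modulating» means in code, here is a simplified frequency-shift keying sketch: every bit becomes a short burst of one of two audio tones, which is exactly the kind of whistling the text describes. The tone frequencies, sample rate and bit rate are arbitrary values chosen for the example, not those of any real modem standard.

#include <cmath>
#include <vector>

// Turn a stream of bits into audio samples: one tone for 0, another tone for 1.
std::vector<double> modulate(const std::vector<int>& bits) {
    const double pi = 3.14159265358979;
    const double sample_rate = 8000.0;        // audio samples per second
    const double bit_duration = 1.0 / 300.0;  // seconds per bit (300 bps style)
    const double freq0 = 1000.0, freq1 = 2000.0;   // two audible tones
    std::vector<double> samples;
    for (int bit : bits) {
        const double f = bit ? freq1 : freq0;
        const int n = static_cast<int>(sample_rate * bit_duration);
        for (int i = 0; i < n; ++i)
            samples.push_back(std::sin(2 * pi * f * i / sample_rate));
    }
    return samples;   // demodulation would decide which tone dominates each bit slot
}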
To make this simple idea work, all modems share certain components, like a
transmitter and receiver. On a telephone, the transmitter is the mouthpiece and the
receiver is the earpiece. Why not just use those? The only other component really
needed is something to «talk» and «listen» to the phone while sending and receiving
data... an electronic translator that can talk over the phone.
Acoustic Modems
Early modems were just about that simple. You placed a normal telephone
receiver into a pair of noise-reducing, cushioned cups. These modems, still in use, are
called acoustic couplers or acoustic modems, because they have no direct electrical
connection. Instead, they send an audible signal directly into a small speaker, which
«speaks» into the telephone mouthpiece.
Acoustic coupling is not very efficient. In an office environment, the shortcomings quickly become obvious. If someone puts down a coffee cup too hard or
drops a spoon during a transmission, it introduces random blips and bleeps into the
signal. These get translated at the other end into «garbage» or nonsense characters. At
a pay phone, passing traffic can have similar effects.
Direct-Connect Modems
A better solution is to connect the modem directly – electrically, that is, instead
of acoustically into the phone circuit. This eliminates noise, interference, and
speaker-to-speaker signal loss. It also opens the door to various speed and reliability
enhancements.
Even people familiar with modem technology can have trouble with the monthly
crop of new bells and whistles. Some software authors smooth the way by providing
automatic setup routines for many different modem brands.
Connecting Your Modem to a Phone Line
Telephone lines are wires, and all wires have resistance. Wires allow electrons
to flow from one point to another. The resistance in these wires helps to impede the
electron flow.
Impedance is an inherent physical property of wire. The amount of impedance
depends on the width of the wire with respect to how much current is flowing, and the
length of the wire. At the end of the wires, at the phone company office, are electrical
circuits and switches–all designed to permit transmission over these phone lines, but
all with limitations on their capabilities.
You can hook up to a phone line in two ways: acoustically or directly.
Direct-connect modems connect to telephone lines by means of the familiar
RJ11 modular telephone jack. They are less sensitive to noise, and are easy to
connect. Back in the old days, before modular phone jacks, you had to cut and splice
wires if you wanted a direct electrical connection between your modem and the
phone line. This wasn't too popular with the family, as it tended to render the
telephone useless.
Most modems are direct-connect. They've become so popular that hotels
catering to business travelers provide an extra RJ11 jack in their rooms. Electronics
stores sell special one-line adapter jacks.
Connecting Your Modem to a PC
After successfully connecting your modem to the telephone system, the next
step is to make the connection between the modem and your personal computer.
Free-standing (external) modems are connected by either a DB-25 or DB-9 cable to
the computer's RS-232 serial communications port. Board (internal) modems are connected by plugging a card into one of the computer's expansion slots.
SUGGESTED ACTIVITIES
1. Explain what the word modem means.
2. Discuss the following with your partner:
What are the external and the internal types of modem?
Which type do you prefer? Why?
3. Write a short paragraph about modems.
TEXT 6
Look through the text and make up a plan of it. Retell the text briefly according
to the plan. Translate the introduction in written form.
Introduction
Welcome to the Microsoft Encyclopedia of Networking, a survey of computer
networking concepts, technologies, and services. This work is intended to be a
comprehensive, accurate, and timely resource for students, system engineers, network
administrators, IT implementers, and computing professionals from all walks of life.
Before I outline its scope of coverage, however, I'll ask a simple question that
surprisingly has no easy answer: What is networking?
What Is Networking?
In the simplest sense, networking means connecting computers so that they can
share files, printers, applications, and other computer-related resources. The
advantages of networking computers together are pretty obvious:
• Users can save their important files and documents on a file server,
which is more secure than storing them on their workstations because a
file server can be backed up in a single operation.
• Users can share a network printer, which costs much less than having a
locally attached printer for each user's computer.
• Users can share groupware applications running on application servers,
which enables users to share documents, send messages, and collaborate
directly.
• The job of administering and securing a company's computer resources
is simplified since they are concentrated on a few centralized servers.
This definition of networking focuses on the basic goals of networking
computers: increased manageability, security, efficiency, and cost-effectiveness over
non-networked systems. We could also focus on the different types of networks:
• Local area networks (LANs), which can range from a few desktop
workstations in a small office/home office (SOHO) to several thousand
workstations and dozens of servers deployed throughout dozens of
buildings on a university campus or in an industrial park.
• Wide area networks (WANs), which might be a company's head office
linked to a few branch offices or an enterprise spanning several
continents with hundreds of offices and subsidiaries.
• The Internet, the world's largest network and the «network of networks».
We could also focus on the networking architectures in which these types of
networks can be implemented:
• Peer-to-peer networking, which might be implemented in a workgroup
consisting of computers running Microsoft Windows 98 or Windows
2000 Professional.
• Server-based networking, which might be based on the domain model of
Microsoft Windows NT, the domain trees and forests of Active
Directory in Windows 2000, or another architecture such as Novell
Directory Services (NDS) for Novell NetWare.
• Terminal-based networking, which might be the traditional host-based
mainframe environment; the UNIX X Windows environment; the
terminal services of Windows NT 4, Server Enterprise Edition;
Windows 2000 Advanced Server; or Citrix MetaFrame.
Or we could look at the networking technologies used to implement each
networking architecture:
• LAN technologies such as Ethernet, ARCNET, Token Ring, Banyan
Vines, Fast Ethernet, Gigabit Ethernet, and Fiber Distributed Data
Interface (FDDI).
• WAN technologies such as Integrated Services Digital Network (ISDN),
T1 leased lines, X.25, frame relay, Synchronous Optical Network
(SONET), Digital Subscriber Line (DSL), and Asynchronous Transfer
Mode (ATM).
• Wireless communication technologies, including cellular systems such
as Global System for Mobile Communications (GSM), Code Division
Multiple Access (CDMA), Personal Communications Services (PCS),
and infrared systems based on the standards developed by the Infrared
Data Association (IrDA).
We could also consider the hardware devices that are used to implement these
technologies:
• LAN devices such as repeaters, concentrators, bridges, hubs, switches,
routers, and Multistation Access Units (MAUs).
• WAN devices such as modems, ISDN terminal adapters, Channel
Service Units (CSUs), Data Service Units (DSUs), packet
assembler/disassemblers (PADs), frame relay access devices (FRADs),
multiplexers (MUXes), and inverse multiplexers (IMUXes).
• Equipment for organizing, protecting, and troubleshooting LAN and
WAN hardware, such as racks, cabinets, surge protectors, line
conditioners, uninterruptible power supplies (UPS's), KVM switches,
and cable testers
• Cabling technologies such as coaxial cabling, twinax cabling, twisted-pair cabling, fiber-optic cabling, and associated equipment such as
connectors, patch panels, wall plates, and splitters.
• Unguided media technologies such as infrared communication, wireless
cellular networking, and satellite networking, and their associated
hardware.
• Data storage technologies such as RAID, network-attached storage
(NAS), and storage area networks (SANs), and the technologies used to
connect them, such as Small Computer System Interface (SCSI) and
Fibre Channel.
• Technologies for securely interfacing private corporate networks with
unsecured public ones, such as firewalls, proxy servers, and packet-filtering routers.
• Technologies for increasing availability and reliability of access to
network resources, such as clustering, caching, load balancing, and fault-tolerant technologies.
• Network management technologies such as the Simple Network
Management Protocol (SNMP) and Remote Network Monitoring
(RMON)
TEXT 7
Read the text and give it a suitable title.
The web creates new challenges for information retrieval. The amount of
information on the web is growing rapidly, as well as the number of new users
inexperienced in the art of web research. People are likely to surf the web using its
link graph, often starting with high quality human maintained indices such as Yahoo!
or with search engines. Human maintained lists cover popular topics effectively but
are subjective, expensive to build and maintain, slow to improve, and cannot cover all
esoteric topics. Automated search engines that rely on keyword matching usually
return too many low quality matches. To make matters worse, some advertisers
attempt to gain people's attention by taking measures meant to mislead automated
search engines. We have built a large-scale search engine which addresses many of
the problems of existing systems. It makes especially heavy use of the additional
structure present in hypertext to provide much higher quality search results. We chose our system name, Google, because it is a common spelling of googol, or 10¹⁰⁰, and fits well with our goal of building very large-scale search engines.
Search engine technology has had to scale dramatically to keep up with the
growth of the web. In 1994, one of the first web search engines, the World Wide Web
Worm (WWWW) had an index of 110,000 web pages and web accessible documents.
As of November, 1997, the top search engines claim to index from 2 million to 100
million web documents. At the same time, the number of queries search engines
handle has grown incredibly too. In March and April 1994, the World Wide Web
Worm received an average of about 1,500 queries per day. In November 1997
Altavista claimed it handled roughly 20 million queries per day. The goal of our
system is to address many of the problems, both in quality and scalability, introduced
by scaling search engine technology to such extraordinary numbers.
Creating a search engine which scales even to today's web presents many
challenges. Fast crawling technology is needed to gather the web documents and keep
them up to date. Storage space must be used efficiently to store indices and,
optionally, the documents themselves. The indexing system must process hundreds of
gigabytes of data efficiently. Queries must be handled quickly, at a rate of hundreds
to thousands per second.
These tasks are becoming increasingly difficult as the Web grows. However,
hardware performance and cost have improved dramatically to partially offset the
difficulty. There are, however, several notable exceptions to this progress such as
disk seek time and operating system robustness. In designing Google, we have
considered both the rate of growth of the Web and technological changes. Google is
designed to scale well to extremely large data sets. It makes efficient use of storage
space to store the index. Its data structures are optimized for fast and efficient access.
Further, we expect that the cost to index and store text or HTML will eventually
decline relative to the amount that will be available. This will result in favorable
scaling properties for centralized systems like Google.
TEXT 8
Divide the text into paragraphs. Express the main idea of each paragraph.
Google is designed to be a scalable search engine. The primary goal is to
provide high quality search results over a rapidly growing World Wide Web. Google
employs a number of techniques to improve search quality including page rank,
anchor text, and proximity information. Furthermore Google is a complete
architecture for gathering web pages, indexing them, and performing search queries
over them. A large-scale web search engine is a complex system and much remains to
be done. Our immediate goals are to improve search efficiency and to scale to
approximately 100 million web pages. Some simple improvements to efficiency
include query caching, smart disk allocation and subindices. Another area which
requires much research is updates. We must have smart algorithms to decide what old
web pages should be recrawled and what new ones should be crawled. Work toward
this goal has been done in. One promising area of research is using proxy caches to
build search databases, since they are demand driven. We are planning to add simple
features supported by commercial search engines like boolean operators, negation,
and stemming. However, other features are just starting to be explored such as
relevance feedback and clustering (Google currently supports a simple hostname
based clustering). We also plan to support user context (like the user's location), and
result summarization. We are also working to extend the use of link structure and link
text. Simple experiments indicate PageRank can be personalized by increasing the
weight of a user's home page or bookmarks. As for link text, we are experimenting
with using text surrounding links in addition to the link text itself. A Web search
engine is a very rich environment for research ideas. We have far too many to list
here so we do not expect this Future Work section to become much shorter in the
near future. The biggest problem facing users of web search engines today is the
quality of the results they get back. While the results are often amusing and expand
users' horizons, they are often frustrating and consume precious time. For example,
the top result for a search for Bill Clinton on one of the most popular commercial
search engines was the Bill Clinton Joke of the Day April 14, 1997. Google is
designed to provide higher quality search so as the Web continues to grow rapidly,
information can be found easily. In order to accomplish this Google makes heavy use
of hypertextual information consisting of link structure and link (anchor) text. Google
also uses proximity and font information. While evaluation of a search engine is
difficult, we have subjectively found that Google returns higher quality search results
than current commercial search engines. The analysis of link structure via PageRank
allows Google to evaluate the quality of web pages. The use of link text as a
description of what the link points to helps the search engine return relevant (and to
some degree high quality) results. Finally, the use of proximity information helps
increase relevance a great deal for many queries. Aside from the quality of search,
Google is designed to scale. It must be efficient in both space and time, and constant
factors are very important when dealing with the entire Web. In implementing
Google, we have seen bottlenecks in CPU, memory access, memory capacity, disk
seeks, disk throughput, disk capacity, and network IO. Google has evolved to
overcome a number of these bottlenecks during various operations. Google's major
data structures make efficient use of available storage space. Furthermore, the
crawling, indexing, and sorting operations are efficient enough to be able to build an
index of a substantial portion of the web – 24 million pages, in less than one week.
We expect to be able to build an index of 100 million pages in less than a month. In
addition to being a high quality search engine, Google is a research tool. The data
Google has collected has already resulted in many other papers submitted to
conferences and many more on the way. Recent research has shown a number of
limitations to queries about the Web that may be answered without having the Web
available locally. This means that Google (or a similar system) is not only a valuable
research tool but a necessary one for a wide range of applications. We hope Google
will be a resource for searchers and researchers all around the world and will spark
the next generation of search engine technology.
TEXT 9
Read the text and point out three Internet properties which make it hard to
simulate.
The Internet has several key properties that make it exceedingly hard to
characterize, and thus to simulate. First, its great success has come in large part
because the main function of the Internet Protocol (IP) architecture is to unify diverse
networking technologies and administrative domains. IP allows vastly different
networks administered by vastly different policies to seamlessly interoperate.
However, the fact that IP masks these differences from a user's perspective does not
make them go away. IP buys uniform connectivity in the face of diversity, but not
uniform behavior. Indeed, the greater IP's success at unifying diverse networks, the
harder the problem of understanding how a large IP network behaves.
A second key property is that the Internet is big. It included an estimated 100 million computers at the end of 2000. Its size brings with it two difficulties. The first is that the range of heterogeneity mentioned above is very large: if only a small fraction of the computers behave in an atypical fashion, the Internet still might
include thousands of such computers, often too many to dismiss as negligible.
Size also brings with it the crucial problem of scaling: many networking protocols and mechanisms work fine for small networks of tens or hundreds of
computers, or even perhaps «large» networks of tens of thousands of computers, yet
become impractical when the network is again three orders of magnitude larger
(today's Internet), much less five orders of magnitude (the coming decade's Internet).
Large scale means that rare events will routinely occur in some part of the network,
and, furthermore, that reliance on human intervention to maintain critical network
properties such as stability becomes a recipe for disaster.
A third key property is that the Internet changes in drastic ways over time. For
example, we mentioned above that in Dec. 2000, the network included 100 million computers. But in Jan. 1997, four years earlier, it comprised only 16 million computers, reflecting growth of about 60% per year. This growth then raises the question:
how big will it be in two more years? 5 years?
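One way to answer the closing question, on the (big) assumption that the roughly 60% annual growth simply continues, is to carry the multiplication forward:

#include <iostream>

int main() {
    double hosts = 100e6;          // about 100 million computers at the end of 2000
    const double growth = 1.6;     // roughly 60% growth per year
    for (int year = 1; year <= 5; ++year) {
        hosts *= growth;
        std::cout << "after " << year << " year(s): about "
                  << hosts / 1e6 << " million computers\n";
    }
    // about 256 million after two years and roughly a billion after five
}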
UNIT 3
PROGRAMMING LANGUAGES
TEXT 1
Look through the text and give it a title. Translate the text.
The precursors of object-oriented programming can be traced back to the late
1960's: Classes, inheritance and virtual member functions were integral features of
Simula 67, a programming language that was mainly used for writing event-driven simulations. When Smalltalk first appeared back in 1972, it offered a pure object-oriented programming environment. In fact, Smalltalk defined object-oriented
programming. This style of programming was so innovative and revolutionary at the
time that it took more than a decade for it to become a standard in the software
industry. Undoubtedly, the emergence of C++ in the early 80s provided the most
considerable contribution to this revolution.
The Origins of C++
In 1979, a young engineer at Bell (now AT&T) Labs, Bjarne Stroustrup, started to experiment with extensions to C to make it a better tool for implementing large-scale projects. In those days, an average project consisted of tens of thousands of
lines of code (LOC).
NOTE: Today, Microsoft's Windows 2000 (formerly NT 5.0) consists of more
than 30 million lines of code.
When projects leaped over the 100,000 LOC count, the shortcomings of С
became noticeably unacceptable. Efficient teamwork is based, among other things, on
the capability to decouple development phases of individual teams from one another
– something that was difficult to achieve in C.
С with Classes
By adding classes to C, the resultant language – «C with classes» – could offer
better support for encapsulation and information hiding. A class provides a distinct
separation between its internal implementation (the part that is more likely to change)
and its external interface. A class object has a determinate state right from its
construction, and it bundles together the data and operations that manipulate it.
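A minimal present-day C++ illustration of that point (not code from the original «C with classes», just an invented example): the data and the operations that manipulate it travel together, the internal representation stays private behind the external interface, and the object has a well-defined state from the moment it is constructed.

// External interface: the part callers may rely on.
class Counter {
public:
    explicit Counter(int start = 0) : value_(start) {}   // determinate state right from construction
    void increment() { ++value_; }
    int  value() const { return value_; }
private:
    int value_;   // internal implementation: free to change without breaking callers
};

int main() {
    Counter c(10);
    c.increment();
    return c.value() == 11 ? 0 : 1;   // always succeeds
}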
Enter C++
In 1983, several modifications and extensions had already been made to С with
classes. In that year, the name «C++» was coined. Ever since then, the ++ suffix has
become a synonym for object-orientation. (Bjarne Stroustrup could have made a
fortune only by registering ++ as a trademark). It was also in that year that C++ was
first used outside AT&T Labs. The number of users was doubling every few months
– and so was the number of compilers and extensions to the language.
C++ as Opposed to Other Object-Oriented Languages
C++ differs from other object-oriented languages in many ways. For instance,
C++ is not a root-based language, nor does it operate on a runtime virtual machine.
These differences significantly broaden the domains in which C++ can be used.
Backward Compatibility with Legacy Systems
The fact that legacy С code can be combined seamlessly with new C++ code is a
major advantage. Migration from С to C++ does not force you to throw away good,
functional С code. Many commercial frameworks, and even some components of the
Standard Library itself, are built upon legacy C code that is wrapped in an object-oriented interface.
Object-Orientation and Other Useful Paradigms
In addition to object-oriented programming, C++ supports other useful
programming styles, including procedural programming, object-based programming,
and generic programming – making it a multi-paradigm, general-purpose
programming language.
Procedural Programming
Procedural programming is not very popular these days. However, there are
some good reasons for C++ to support this style of programming, even today.
Gradual Migration of С Programmers To C++
С programmers who make their first steps in C++ are not forced to throw all
their expertise away. Many primitives and fundamental concepts of C++ were
inherited from C, including built-in operators and fundamental types, pointers, the
notion of dynamic memory allocation, header files, preprocessor, and so on. As a
transient phase, С programmers can still remain productive when the shift to C++ is
made.
Bilingual Environments
C++ and С code can work together. Under certain conditions, this combination
is synergetic and robust.
Object-Oriented Programming
This is the most widely used style of programming in C++. There is no universal
consensus as to what OO really is; the definitions vary among schools, languages,
and users. There is, however, a consensus about a common denominator – a
combination of encapsulation, information hiding, polymorphism, dynamic binding,
and inheritance. Some maintain that advanced object-oriented programming also includes generic
programming support and multiple inheritance.
C++ today is very different from what it was in 1983, when it was first named
«C++». Many features have been added to the language since then; older features
have been modified, and a few features have been deprecated or removed entirely
from the language. Some of the extensions have radically changed programming
styles and concepts. For example, downcasting a base to a derived object was
considered a bad and unsafe programming practice before the standardization of
Runtime Type Information. Today, downcasts are safe, and sometimes even
unavoidable. The list of extensions includes const member functions, exception
handling, templates, new cast operators, namespaces, the Standard Template Library,
bool type, and many more. These have made C++ the powerful and robust
multipurpose programming language that it is today. The evolution of C++ has been a
continuous and progressive process, rather than a series of brusque revolutions.
Programmers who learned C++ only three or five years ago and haven't caught up
with the extensions often discover that the language slips through their fingers:
Existing pieces of code do not compile any more, others compile with a plethora of
compiler warnings, and the source code listings in object-oriented magazines seem
substantially different from the way they looked not so long ago. «Namespaces?
Never heard of these before,» and «What was wrong with C-style cast? Why
shouldn't I use it anymore?» are some of the frequently asked questions in various
C++ forums and conferences.
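For instance, the «safe downcast» mentioned above is spelled dynamic_cast in modern C++; it uses Runtime Type Information to check at run time whether a base pointer really refers to the derived type. A small invented example:

#include <iostream>

struct Base { virtual ~Base() = default; };      // RTTI needs a polymorphic base
struct Derived : Base { void hello() const { std::cout << "derived\n"; } };

void use(Base* b) {
    // A C-style cast would «succeed» blindly; dynamic_cast checks the actual type.
    if (Derived* d = dynamic_cast<Derived*>(b))
        d->hello();                               // safe: b really points to a Derived
    else
        std::cout << "not a Derived object\n";
}

int main() {
    Derived d;
    Base b;
    use(&d);   // prints "derived"
    use(&b);   // prints "not a Derived object"
}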
VOCABULARY
average a – средний
backward a – обратный, отсталый
brusque a – резкий, внезапный
bundle v – связывать
cast n – приведение типов
coin v – придумывать, вводить в употребление (термин)
compatibility n – совместимость
deprecate v – возражать, выступать против
derive v – производить, выводить; derived a – производный
distinct a – четкий
domains n – (pl) область, сфера
downcast n, v – приведение (типа) к производному классу
emergence n – появление
event-driven a – событийно-управляемый, управляемый событиями
expertise n – опыт, компетентность
extension n – расширение
framework n – интегрированная система
generic a – общий, характерный для определенного рода
hiding n – сокрытие, утаивание
inheritance n – наследование
legacy n – наследство
namespace n – пространство имен
plethora n – изобилие
port v – переносить
precursor n – предшественник, предвестник
robust a – надежный, устойчивый
simulation n – моделирование
shortcomings n – (pl) недостаток
support n,v – поддержка, поддерживать
synergetic a – взаимодействующий
template n – шаблон
tools n – вспомогательные программы, средства разработки, сервисные
программы
valid a – допустимый, правильный
SUGGESTED ACTIVITIES
Exercise 1. Which sentences below are true and which are false?
1. Bjarne Stroustrup started to experiment with extensions to C in 2000.
2. C++ underwent a major reform in 1979.
3. The name C++ was coined in 1983.
4. C++ is a general-purpose programming language.
5. Procedural programming is very popular these days.
6. Many fundamental concepts of C++ were inherited from C.
7. C++ and C can work together.
8. C++ today is very different from what it was in 1983.
Exercise 2. Find information about C++ programmers in text 1 and make a written
translation of the paragraph.
Exercise 3. Make up a plan of the text and retell it briefly according to the plan.
Exercise 4. Write an abstract of the text.
TEXT 2
Look up the following words and word combinations in a dictionary:
portability, source program, timing, remote target, facilities, pointer, arbitrary
pointer address, sophisticated, single thread, multithreading, embedded systems,
timing constraints, net applets, target hardware.
Translate the text orally. Write an abstract of the text.
Java is a high-level language with which to write programs that can execute on a
variety of platforms. So are C, C++, Fortran and Cobol, among many others. So the
concept of a portable execution vehicle is not new. Why, then, has the emergence of
Java been trumpeted so widely in the technical and popular press?
Why is Java different from other languages?
Part of Java's novelty arises from its new approach to portability. In previous
high-level languages, the portable element was the source program. Once the source
program is compiled into executable form for a specific instruction set architecture
(ISA) and bound to a library of hardware-dependent I/O, timing and related operating
system (OS) services, portability is lost. The resultant executable form of the program
runs only on platforms having that specific ISA and OS. Thus, if a program is to run
on several different platforms, it has to be recompiled and relinked for each platform.
And if a program is sent to a remote target for execution, the sender must know in
advance the exact details of the target to be able to send the correct version.
With Java, source statements can be compiled into machine-independent,
«virtual instructions» that are interpreted at execution time. Ideally, the same virtual
code runs in the same way on any platform for which there is an interpreter and OS
that can provide that interpreter with certain multithreading, file, graphical, and
similar support services. With portability moved to the executable form of the
program, the same code can be sent over the net to be run without prior knowledge of
the hardware characteristics of the target. Executable programs in the Java world are
universal.
In principle, portability could have been achieved in the С or C++ world by
sending the source program over the net and then having the compilation and linkage
done as a pre-step to execution. However, this approach would require that the target
system have sufficient CPU speed and disk capacity to run the sophisticated
compilers and linkers required. In the future, network platforms may not have the
facilities to run even a simple compiler.
Is that all?
Java is not just a new concept in portability. The Java language evolved from С
and C++ by locating and eliminating many of the major sources of program error and
instability. For example, C has an element known as a pointer that is supposed to
contain the address at which a specific type of information is stored. However, the
pointer can be set to literally any address value, and by «casting» a programmer can
trick the compiler into storing any type of information at the arbitrary pointer address.
This is convenient if you write error-free code, and a snake pit if you don't. Java does
not have pointers.
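A small example of the kind of pointer trickery the author has in mind, written in C-style C++ purely to show why it is dangerous (reading a double's bytes through a long pointer is undefined behaviour):

#include <cstdio>

int main() {
    double d = 3.14;
    // The cast lets the programmer store or read *any* type at this address.
    // The compiler accepts it, but the result is the «snake pit» the text warns about.
    long* p = (long*)&d;
    std::printf("%ld\n", *p);   // prints the raw bit pattern of 3.14, not 3
    return 0;
}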
Equally important, Java has built-in support for multiprogramming. С and its
immediate descendant, C++, were designed to express a single thread of computing
activity.
There was no inherent support for multiple program threads executing
simultaneously (on multiple CPUs), or in parallel (timesharing a single CPU). Any
such facilities had to be supplied by an external multitasking operating system. There
are several good programs of this type readily available, such as MTOS-UX from
Industrial Programming. However, the services provided are all vendor-specific.
Neither ANSI nor any of the various committees set up to hammer out a universal set
of OS services ever produced a single, universally accepted standard. There are, in fact, several proposed standards, so there is no standard.
Java bypasses the problem by building multithreading and the data
synchronization it requires directly into the source program. You still need an OS to
make this happen, but the semantic meaning of the OS actions is standardized at the
source level.
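The «data synchronization» referred to here is, for example, making sure two threads never update a shared variable at the same moment. Java spells this with its built-in threads and the synchronized keyword; the sketch below expresses the same idea with standard C++ threads, only to stay in the language of the previous examples:

#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;
std::mutex counter_lock;   // plays roughly the role of Java's per-object monitor

void work() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> guard(counter_lock);   // the «synchronized» section
        ++counter;
    }
}

int main() {
    std::thread a(work), b(work);   // two threads sharing one variable
    a.join();
    b.join();
    std::cout << counter << "\n";   // always 200000 thanks to the lock
}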
A standard at last
Java has all of the technical requisites to become the standard programming
language for programs to be distributed over the net. And with a well-supported
campaign spearheaded by Sun Microsystems, Java is becoming the de facto working
standard. Will Java supersede С as the language of choice for new programs in
general? With network programming likely to play an increasingly larger part in the
overall programming field, I think so.
Java for embedded systems
Embedded or real-time systems include all those in which timing constraints imposed by the world outside of the computer play a critical role in the design and
implementation of the system. Common areas for embedded systems are machine and
process control, medical instruments, telephony, and data acquisition.
A primary source of input for embedded systems are random, short-lived,
external signals. When such signals arrive, the processor must interrupt whatever else
it is doing to capture the data, or it will be lost. Thus, an embedded program is most
often organized as a set of individual, but cooperating threads of execution. Some
threads capture new data, some analyze the new data and integrate it with past inputs,
some generate the outgoing signals and displays that are the products of the system.
Currently, most embedded programs are coded in C, with critical parts possibly in
assembler.
Putting the issue of execution efficiency aside, some of the major problems of С
for embedded systems are:
• The permissiveness of С operations, which can lead to undisciplined
coding practices and ultimately to unstable execution.
• The absence of universal standards for multithreading, shared data
protection, and intra-thread communication and coordination, which can
make the program hard to transfer to alternate platforms.
But these are just the problems that Java solves. Since many programmers will have
to learn Java because of its importance to the net, it will be natural for Java to
supplant C in the embedded world.
The use of Java may be different, however. We anticipate that Java programs
for embedded applications will differ from net applets in at least five major ways.
Embedded applications will be:
• compiled into the native ISA for the target hardware.
• capable of running in the absence of a hard or floppy disk, and a network
connection.
• supported by highly tailored, thus relatively small run-time packages.
• able to execute on multiple processors, if needed for capacity expansion.
• likely to contain significant amounts of legacy C code, at least during the transition
from C to Java.
Mixed systems: multiple languages, multiple CPUs
While we expect Java to supersede С as the primary programming language for
embedded systems in the near future, there is still an enormous number of lines of С
code in operation. Companies will have to work with that code for many years as the
transition to Java runs its course. Many systems will have to be a mixture of legacy С
code and Java enhancements.
It is not trivial to integrate an overall application with some components written
in Java and others in С or assembler. Part of the problem arises from security issues:
How can Java guarantee the security of the system if execution disappears into
«unknown» regions of code? Furthermore, the danger is compounded if the non-Java
code were to make OS support service calls, especially calls that alter the
application's threading and data-protection aspects. Java expects to be the sole master
of such matters.
Thus we see that mixed language systems may have to exist, but this is not
going to be easy. Similarly, there may be problems with multiple CPUs.
Current CPUs are fast, and get faster with each new generation. Yet there are some embedded applications for which a single CPU still does not have enough
power to keep up with a worst-case burst of external input. Such systems require
multiple CPUs working together to complete the required processing. Even if the
system can handle current work loads, the next version may not.
Do we have a problem?
When you combine the desire to write in Java with the need to execute on
unique, system-specific hardware, possibly with mixed source languages and multiple
CPUs, you introduce a major obstacle. You are not likely to get an off-the-shelf Java
OS from Sun Microsystems.
Many companies that have previously offered their own proprietary real-time
OS are now developing a Java OS, or are seriously considering such an offering. My
own company, Industrial Programming, is currently using its experience with
embedded multithreading/multiprocessor operating systems to create a new system
that will handle applications written in both Java and C. And as is the case with its traditional product, MTOS-UX, the OS is transparent to the number of tightly-coupled CPUs that are executing the application code. If one CPU is not enough, you
can add more without altering the application.
About the author
David Ripps has been in the computer field for over thirty years. He is currently
Vice President of Industrial Programming, Inc. (Jericho, NY). His functions there
include the technical supervision of the MTOS-UX product line. Among the technical
areas in which David has worked are: realtime operating systems, realtime
applications and computer simulation of physical processes. He is the author or coauthor of three books and numerous technical articles. David's formal education is in
Chemical Engineering, with a BChE from Cornell and an MChE plus a PhD from NYU.
David lives in New York City with his wife, Sylvia, and their children, Elana and
Samara.
TEXT 3
Read the text. Fill in the spaces in the text with one of these words:
bytes, chip, bits, access time, internal cache, external cache, registers, processor,
RAM, storage, data.
The main components of the computer of most significance to programmers are
disk, RAM, and the CPU; the first two of these store programs and data that are used
by the CPU.
Computers represent pieces of information (or data) as binary digits, universally
referred to as …. Each bit can have the value 0 or 1. The binary system is used
instead of the more familiar decimal system because it is much easier to make
devices that can store and retrieve 1 of 2 values, than 1 of 10. Bits are grouped into
sets of eight, called ….
The disk uses magnetic recording heads to store and retrieve groups of a few
hundred to a few thousand bytes on rapidly spinning platters in a few milliseconds.
The contents of the disk are not lost when the power is turned off, so it is suitable for
more or less permanent storage of programs and … .
…, which is an acronym for Random Access Memory, is used to hold programs
and data while they're in use. It is made of millions of microscopic transistors on a
piece of silicon called a... . Each bit is stored using a few of these transistors. RAM
does not retain its contents when power is removed, so it is not good for permanent…
However, any byte in a RAM chip can be accessed in about 10 nanoseconds, which is
about a million times as fast as accessing a disk. Each byte in a RAM chip can be
independently stored and retrieved without affecting other bytes, by providing the
unique memory address belonging to the byte you want.
The CPU (also called the …) is the active component in the computer. It is also
made of millions of microscopic transistors on a chip. The CPU executes programs
consisting of instructions stored in RAM, using data also stored in RAM. However,
the CPU is so fast that even the typical RAM … … of 10 nanoseconds is a
bottleneck; therefore, computer manufacturers have added both … … and… … which
are faster types of memory used to reduce the amount of time that the CPU has to
wait. The internal cache resides on the same chip as the CPU and can be accessed
without delay. … sits between the CPU and the regular RAM; it's faster than the
latter, but not as fast as the internal cache. Finally, a very small part of the on-chip
memory is organized as …which can be accessed within the normal cycle time of the
CPU, thus allowing the fastest possible processing.
TEXT 4
Look through the text and define its theme. What two characteristics are
discussed? What is the real reason for computer problems?
It may sound odd to describe computers as providing grand scope for creative
activities: Aren’t they monotonous, dull, unintelligent, and extremely limited? Yes,
they are. However, they have redeeming virtues that make them ideal as the canvas of
invention: they are extraordinarily fast and spectacularly reliable. These
characteristics allow the creator of a program to weave intricate chains of thought and
have a fantastic number of steps carried out without fail.
The most impressive attribute of modern computers, of course, is their speed; as
we have already seen, this is measured in MIPS (millions of instructions per second).
Of course, raw speed is not very valuable if we can't rely on the results we get.
ENIAC, one of the first electronic computers, had a failure every few hours, on the
average; since the problems it was solving took about that long to run, the likelihood
that the results were correct wasn't very high. Particularly critical calculations were
often run several times, and if the users got the same answer twice, they figured it
was probably correct. By contrast, modern computers are almost incomprehensibly
reliable. With almost any other machine, a failure rate of one in every million
operations would be considered phenomenally low, but a computer with such a
failure rate would make thousands of errors per second.
On the other hand, if computers are so reliable, why are they blamed for so
much that goes wrong with modern life? Who among us has not been the victim of an
erroneous credit report, or a bill sent to the wrong address, or been put on hold for a
long time because «the computer is down»? The answer is fairly simple: It's almost
certainly not the computer. More precisely, it's very unlikely that the CPU was at
fault; it may be the software, other equipment such as telephone lines, tape or disk
drives, or any of the myriad «peripheral devices» that the computer uses to store and
retrieve information and interact with the outside world. Usually, it's the software;
when customer service representatives tell you that they can't do something obviously
reasonable, you can count on its being the software.
TEXT 5
Read the text and speak about the reason for transforming SDL into UML.
This document describes an automatable approach for transforming system
specifications expressed in the ITU (International Telecommunication Union) language SDL (Specification and Description Language) to the industry-standard object-oriented Unified Modeling Language (UML).
The purpose behind such a translation is to take advantage of formalized system
specifications expressed in the SDL language. SDL is used mostly in the telecom
domain for specifying communication protocols. While several SDL-oriented tools
exist in the market, there are some significant advantages to doing software
development with UML. Although SDL has been around significantly longer than the
UML, the UML has penetrated the software community much more rapidly and more
extensively than SDL. This is because it has a broader scope and because it has been
marketed more successfully. Consequently, there is a much broader base of expertise
for UML than there is for SDL, even in these early days of UML. This rapid
penetration is accompanied by a corresponding growth in tool support, with the
number and variety of UML-oriented tools far exceeding SDL tools. Finally, in
contrast to the relative rigidity of SDL – which stems from its roots as a specification
language (as opposed to an implementation language) – UML generally has far
greater expressive power and versatility for representing the diversity of techniques
used in industrial software development.
TEXT 6
Read the text. What types of diagrams are mentioned in the text? What can
these diagrams do?
The Sequence Diagram is one of the most interesting and useful diagrams in the
Unified Modeling Language (UML). It helps you document and understand the
dynamic aspects of your software system–specifically the sequence of messages that
are sent and received between objects. Sequence diagrams can help you comprehend
and solve difficult issues in the process-intensive portions of your applications.
Fast Facts
The Sequence Diagram is one of the five UML diagrams that help you model
the dynamic aspects of your software. Sequence diagrams and their cousin,
collaboration diagrams, show the dynamic interaction between objects in the system.
A sequence diagram's focus is the time ordering of messages between objects
(usually business objects).
This article covers one of the most interesting diagrams in the UML–the
sequence diagram. They are most often used in the construction phase of software
projects and are especially useful when analyzing the process-intensive portions of
your application. Sequence diagrams are closely related to collaboration diagrams.
While the collaboration diagram's main focus is to show how objects are associated
with each other, sequence diagrams show the time ordering of messages between
objects.
Why use Sequence Diagrams?
Unless you are using business objects in your applications, you won't have much
need for sequence diagrams. This is because if you're not using business objects,
most of your application logic resides inside methods of user interface objects or in
functions and procedures–and there really isn't much messaging that occurs between
objects. However, once you decide to elevate your programming by using business
objects in your applications, sequence diagrams help you answer two very important
questions:
1. Which objects should be assigned a particular responsibility?
2. In what order should messages pass between objects?
These questions are very difficult to answer correctly when you simply try to
envision object messaging in your head. In contrast, when you document your
thought process in a sequence diagram, suddenly the answers to these questions
become crystal clear. At a higher level, it also helps you comprehend the overall flow
of a particular process. In addition, sequence diagrams help you easily identify
unnecessary messages between objects and factor them out. You may also discover
that objects you originally thought should be involved in a particular process
shouldn't be involved at all!
TEXT 7
Translate the text in written form.
In our competitive and dynamic world, businesses require quality software
systems that meet current needs and are easily adapted. These requirements are best
met by modeling business rules at a very high level, where they can be easily
validated with clients, and then automatically transformed to the implementation
level. The Unified Modeling Language (UML) is becoming widely used for both
database and software modeling, and version 1.1 was adopted in November 1997 by
the Object Management Group (OMG) as a standard language for object-oriented
analysis and design. Initially based on a combination of OMT (Object Modeling
Technique) and OOSE (Object-Oriented Software Engineering) methods, UML was
refined and extended by a consortium of several companies, and is undergoing minor
revisions by the OMG Revision Task Force.
TEXT 8
Read the text and find equivalents of the following expressions in the text:
отказоустойчивость; параллельная обработка данных; компьютерные технологии; инструкции; отказоустойчивые компьютеры; базы данных, работающие в оперативном режиме; неисправность; архитектура с двойной избыточностью; программное обеспечение; первичный процессор; вспомогательный процессор; шина; двойное и однократное повреждение; остановить систему; независимые процессоры; порт ввода/вывода; источник питания.
Translate the text in written form.
The March issue of Computer ignores some of the leading developments of
industry in the field of fault-tolerance. A company called Tandem manufactures a
«non-stop», parallel processing computer system. The only indirect reference to it in
the March issue is contained in Al Hopkins' article: «With certain exceptions, vendors
do not offer fault-tolerant computers and systems as off-the-shelf items».
Computer's omission is not unique. I specialized in fault-tolerance and received
a master's degree in computer science from Carnegie-Mellon University, where
several fault-tolerant multiprocessors were built. Yet until I left the university and
entered the job market, I had never heard of Tandem computers. I have now been
programming them for two years; let me give your readers some background
information gleaned from Tandem's manuals and discussions with Tandem
employees.
The company was started in the early 70's by disgruntled Hewlett-Packard users
who realized how difficult it would be to make their system fault-tolerant. The
founders commissioned marketing studies which determined that there was a demand
for fault-tolerant computing in commercial applications, like banking, securities
transfer, and online data bases. A commercially oriented architecture was designed,
with dual redundancy providing the fault tolerance. Tandem's basic claim is that no
one failure can bring the system down; processors, I/O ports, buses, and power
supplies are at least dual and a single failure should not halt the system.
For a program to run non-stop, it must execute on two distinct processors. The
program must be designed to checkpoint vital information from the primary processor
to the backup processor at certain points. The checkpointing software is provided by
Tandem, but the user must decide what needs to be checkpointed, and when.
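In outline, such checkpointing looks like the sketch below. This is a generic illustration, not Tandem's actual programming interface: the State structure, the send_to_backup call and the choice of checkpoint moment are all invented. The primary process decides which vital information to copy to the backup and when; the backup keeps the latest copy so it can take over after a failure.

#include <iostream>
#include <string>

// Whatever the application considers its vital information.
struct State {
    long next_transaction_id = 0;
    std::string last_committed_record;
};

// Stand-in for the vendor-supplied call that ships state to the backup
// processor; here it just keeps a local copy so the sketch can be run.
static State backup_copy;
static void send_to_backup(const State& s) { backup_copy = s; }

// The user decides what to checkpoint and when: here, after every transaction.
static void process_transaction(State& s, const std::string& record) {
    s.last_committed_record = record;   // the real work done on the primary processor
    ++s.next_transaction_id;
    send_to_backup(s);                  // checkpoint vital information to the backup
}

int main() {
    State primary;
    process_transaction(primary, "debit account 42");
    std::cout << "backup is now at transaction " << backup_copy.next_transaction_id << "\n";
}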
UNIT 4
OPERATING SYSTEMS
TEXT 1
Translate the text.
Is DOS dead?
When the original IBM PC debuted, there didn't seem to be anything unique or
special about the operating system that shipped with it. DOS was hardly the most
sophisticated or easiest to use OS on the market. When IBM hitched its wagon to
DOS, though, it swiftly rocketed to industry standard status, and broke all records for
marketshare.
When graphical operating systems like OS/2, Windows, and the Macintosh OS
showed themselves to be far easier to use, pundits predicted the swift demise of DOS,
but its death has been anything but rapid.
The original Microsoft Windows, although known as an operating system, was
actually just an operating environment built on top of DOS. Since DOS still lived
underneath Windows, DOS programs were still usable from within the Windows
environment.
Today, thousands of court reporters still use DOS-based CAT software like
Premier Power and TurboCAT, despite the availability of Windows-based products
that can accomplish the same tasks. Why? Because what they have works, and they
see no reason to upgrade.
Today, however, it's hard to buy a computer that doesn't already have Windows
loaded on it. Although most consumer varieties of Windows still have DOS
underneath, many features of the newer computers simply aren't available from DOS-based programs anymore. People trying to run DOS-based programs on Pentium III
computers find themselves struggling to make PCMCIA cards, fast modems,
software keys («dongles»), USB ports, and even plain old serial ports work with their
software.
Can you still use your old CAT?
Is it worth wrestling with memory management problems and compatibility
issues just to hang on to your old CAT software? Possibly. New CAT software
represents a significant investment. If you're tight on cash and happy with the old
program, here are a few ways to make your life easier:
■ Look for used computers rather than new ones. It's possible to pick up
used 486 and Pentium 1 computers for a song these days, and if it's running
nothing but your old CAT software, it should be perfectly adequate for the task.
■ Use a different computer for CAT than you use for everything else.
This lets you load an old version of DOS and/or Windows on the machine
instead of struggling to run your DOS software under Windows Me or
Windows 2000.
■ Look for an old version of DOS. Microsoft doesn't sell DOS anymore
(at least not to folks like us), but you may be able to find a copy at a garage
sale, a friend's house, or a swap meet. Most DOS-based CAT software was
designed to run under DOS 6.2, so if you're dedicating a computer to CAT,
that's the DOS to load.
But is it dead?
Okay, the title of this article is, «Is DOS Dead?» and all we've done so far is
discuss how you can continue to use it. The bottom line is that DOS is no longer sold
or supported by Microsoft. Even though there are still remnants of DOS deep down
underneath Windows 98 and Windows Me, Microsoft expects to switch everyone
over to the Windows NT kernel, effectively removing the last vestiges of DOS from
their operating system.
If you continue to use a DOS environment, you're locking yourself out of the
latest tools for court reporters, like SearchMaster and e-Transcript. Sure, you can run
them on a separate computer, but is it really worth the trouble?
They don't make Ford Model T's or Edsels any more, but that doesn't stop an
active group of people from still driving them. If you choose to stay with your DOS-based system, the Windows police won't knock on your doors (or windows?) in the
middle of the night to arrest you. You may get years of productive use from your
system, and save thousands in upgrade fees.
Is DOS dead? Yes. But let your own situation determine whether you stay with
it anyway. Good luck!
VOCABULARY
accomplish v – совершать, выполнять
bottom line n – суть, итог
CAT (Computer-Aided Transcription) – система компьютерной стенографии (для судебных стенографистов)
dedicate v – назначить, специализировать
demise n – смерть
dongle n – защитный ключ-заглушка для защиты ПО от несанкционированного
доступа
environment n – среда, условия работы
feature n – свойство, характеристика
folks n – люди
for a song – за бесценок
garage sale – распродажа на дому
hang on v – держаться за что-либо, оставаться верным
hitch one's wagon to – связать свою судьбу с, сделать ставку на
issue n –вопрос, пункт
kernel n – ядро (операционной системы)
marketshare n – доля на рынке
newsletter n – информационный бюллетень
predict v – предсказывать
pundit n – ученый муж (шутл)
PCMCIA – Personal Computer Memory Card International Association –
Международная ассоциация производителей плат памяти для ПК
rocket v – резко подниматься
remnants n – остатки, следы
ship v – поставлять (в комплекте)
sophisticated a – сложный, современный
swap v – менять
tight on cash – быть ограниченным в средствах
upgrade v – модернизировать
vestige n – след, призрак
wrestle v – бороться
SUGGESTED ACTIVITIES
Exercise 1. Find information about all operating systems mentioned in the text.
Exercise 2. Write keywords for the text.
Exercise 3. Match English and Russian equivalents:
1. to lock yourself out
2. to be worth the trouble
3. good luck
4. to be tight on cash
5. hardly
6. for a song
7. to hang on to
a. едва ли
b. удачи
с. за бесценок
d. стоить беспокойства
e. быть ограниченным в средствах
f. отгородиться
g. держаться за
Exercise 4. Write questions that can be a plan of the text.
Exercise 5. Retell the text.
TEXT 2
Read the text and answer the following questions:
1. What are the advantages of Windows 98 over Windows 95?
2. What is to be altered?
98'S GENERAL FEATURES
Windows 98 maintains support for 16-bit programs just as in Windows 95, but it is
orientated away from the FAT16 file system and towards the new FAT32 system.
More about this later.
If you choose the classic look, the appearance of the desktop is very similar to
Windows 95.
Operation is slightly smoother and faster than in Windows 95. For example, you
can launch programs with a single click and use the forward and back buttons. This can
make life easier. System startup is said to be faster, but subjectively the saving is so
small I didn’t notice it. System shutdown took slightly longer.
One of the big gripes in Windows 95 was that the minimise, maximise and close
buttons were so close together that accidental operation was possible, but this remains
the same. Why on earth these have not been altered, considering the widespread
criticism, is anyone's guess.
Those of you who were expecting an advance in reliability in multi-tasking will
also have to console yourselves. Has this aspect been improved on in Windows 98? I
could detect nothing in the literature or in operation to suggest this.
Better reliability?
Claims for better reliability in Windows 98 do not centre on improved multi-tasking but on testing and automatic error-fixing of the hard disk, system files and
configuration. Here there has been a significant improvement with Registry Checker
and System File Checker. Windows 98 can check itself on loading with Registry
Checker and correct itself from a set of backup files that it holds.
System File Checker as the name implies, checks Windows system files for
corruption and can restore them from the CD or from a backup set.
The most likely cause of trouble is badly-written third-party programs
interfering with the system files. This used to be a problem with Windows 95.
However, even with both registry and system file checkers, if the system is corrupt
enough it won’t load.
TEXT 3
Read the text and give it a title. Answer the following questions:
1. Where is UNIX used now?
2. What are the advantages of the system?
Despite the lack of unification, the number of UNIX systems continues to grow.
As of the mid 1990s, UNIX runs on an estimated five million computers throughout
the world. Versions of UNIX ran on nearly every computer in existence, from small
IBM PCs to large supercomputers such as Crays. Because it is so easily adapted to
new kinds of computers, UNIX is the operating system of choice for many of today's
high-performance microprocessors. Because a set of versions of the operating
system's source code is readily available to educational institutions, UNIX has also
become the operating system of choice for educational computing at many
universities and colleges. It is also popular in the research community because
computer scientists like the ability to modify the tools they use to suit their own
needs.
UNIX has become popular too, in the business community. In large part this
popularity is because of the increasing numbers of people who have studied
computing using a UNIX system, and who have sought to use UNIX in their business
applications. Users who become familiar with UNIX tend to become very attached to
the openness and flexibility of the system. The client-server model of computing has
also become quite popular in business environments, and UNIX systems support this
paradigm well (and there have not been too many other choices).
Unix vendors and users are the leaders of the «open systems» movement:
without UNIX the very concept of open systems would probably not exist.
TEXT 4
Translate the text without a dictionary.
SECURITY AND UNIX
Dennis Ritchie wrote about the security of UNIX: «It was not designed from the
start to be secure. It was designed with the necessary characteristics to make security
serviceable.»
UNIX is a multi-user, multi-tasking operating system. Multi-user means that the
operating system allows many different people to use the same computer at the same
time. Multi-tasking means that each user can run many different programs
simultaneously.
One of the natural functions of such operating systems is to prevent different
people (or programs) using the same computer from interfering with each other.
Without such protection, a wayward program (perhaps written by a student in an
introductory computer science course) could affect other programs or other users,
could accidentally delete files, or could even crash (halt) the entire computer system.
To keep such disasters from happening, some form of computer security has always
had a place in the UNIX design philosophy.
But UNIX security provides more than mere memory protection. UNIX has a
sophisticated security system that controls the ways users access files, modify system
databases, and use system resources. Unfortunately, those mechanisms don't help
much when the systems are misconfigured, are used carelessly, or contain buggy
software. Nearly all of the security holes that have been found in UNIX over the
years have resulted from these kinds of problems rather than from shortcomings in
the intrinsic design of the system. Thus, nearly all UNIX vendors believe that they
can (and perhaps do) provide a reasonably secure UNIX operating system.
TEXT 5
Read the text with a dictionary. Discuss the following in pairs:
Is Plan9 similar to UNIX?
Write an abstract of the text.
UNIX'S LITTLE BROTHER
Bell Labs’s Plan 9, named after a cult sci-fi film, resembles UNIX in many
ways. Like UNIX, Plan 9 was designed as a time sharing operating system primarily
for software developers. Like UNIX, it is a file-based operating system intended to be
portable between hardware environments. Plan 9 even shares some of UNIX’s
original developers. Along with Bell Labs research scientists Rob Pike, Dave
Presotto, Howard Trickey, UNIX originator Ken Thompson is credited as a primary
designer of Plan 9. Thompson created Plan 9’s fileserver and initial compiler. UNIX
co-creator Dennis Ritchie also helped on the project.
But Plan 9 differs from UNIX–and trends occurring in the UNIX world today–
in some important ways. For example Plan 9 rejects the currently popular idea of an
extended systems architecture made up of many powerful, self-contained
workstations that coexist and communicate on a network. Instead, Plan 9 creates a
networked architecture made up of three key components each designed for a
particular function and dependent on the others.
Gnot A Workstation
The Plan 9 architecture consists of what its creators call a «CPU server», a
fileserver and a terminal. The CPU server, implemented as a multiprocessor system
takes on all computation tasks. It has no local storage and instead relies on associated
remote fileservers for storage. The CPU server communicates via a 20-megabyte per
second direct memory access (DMA) link with fileservers that are equipped with lots of
solid - state, magnetic and optical memory.
The servers use a nonstandard, efficient protocol to communicate over standard
phone lines with a dedicated Plan 9 terminal called a Gnot. Plan 9 researchers say
they are fully aware that calling the Gnot a terminal rather than a workstation implies
a return to an older, more centralized style of computing. That, they say, is exactly
the idea.
Although the current prototype Gnot is plenty powerful–it's based on a 25-megahertz 68020 processor and has a 1,024 x 1,024 pixel display–the Plan 9
terminal, like terminals of old, is designed as an anonymous node on the network, not
a full-fledged workstation.
«If you have a network of workstations, each workstation has some files of its
own, and somebody has to worry about [managing] that,» says Peter Weinberger,
Bell Labs computing principles research department head. «Here [with Plan 9] there's
just a fileserver. I have files, but the workstation doesn't have any files.» This strategy
makes a Plan 9 network much easier to manage than one made up of full-function
workstations, says Weinberger. It also makes security easier to handle.
Complementing Plan 9's hardware architecture is a unique method for naming
files and creating name spaces. When a user sits down at a Plan 9 terminal, he or she
selects from a set of available services, and Plan 9 automatically creates a name space
by joining the user's private name space with the service's name spaces.
Researchers say the Plan 9 architecture will not only be easier to use and
administer but also better able to keep up with swiftly evolving computer technology
advances. By expressly isolating the terminal, researchers say, Plan 9 can take
advantage of the most rapid improvements in chip speed and display technologies
without having to replace whole workstations or large amounts of storage.
Plan 9 researchers insist they aren't trying to replace UNIX. «It's not as if anyone expects UNIX to go away. This [Plan 9] is just complementary, supplementary,»
says Weinberger. In fact, although UNIX and Plan 9 are not compatible, Plan 9 can
interoperate and share files with UNIX. A large percentage of the 60 Plan 9 users
inside Bell Labs currently use the operating system primarily to access UNIX
applications and files.
Nor do researchers expect Plan 9 to become available outside Bell Labs anytime soon. Although lately Bell Labs's management has been publicly discussing the
research effort, Weinberger says complete documentation on the operating system
became available within Bell Labs only recently.
Longer term, however, Weinberger says, many of Plan 9's concepts could find
their way into commercial UNIX. «In the long run, when somebody sits down at [a
terminal or workstation] you won't be sure whether you're in a Plan 9 environment or
a UNIX environment,» says Weinberger.
TEXT 6
Read the text. Express the main idea of each paragraph.
WHAT IS AN OPERATING SYSTEM?
For most people, a computer is a tool for solving problems. When running a
word processor, a computer becomes a machine for arranging words and ideas. With
a spreadsheet, the computer is a financial planning machine, one that is vastly more
powerful than a pocket calculator. Connected to an electronic network, a computer
becomes part of a powerful communications system.
At the heart of every computer is a master set of programs called the operating
system. This is the software that controls the computer's input/output systems such as
keyboards and disk drives, and that loads and runs other programs. The operating
system is also a set of mechanisms and policies that help define controlled sharing of
system resources.
UNIT 5
DATABASE SYSTEMS
TEXT 1
Translate the text.
THE WORLDS OF DATABASE SYSTEMS
Databases today are essential to every business. They are used to maintain
internal records, to present data to customers and clients on the World-Wide-Web,
and to support many other commercial processes. Databases are likewise found at the
core of many scientific investigations. They represent the data gathered by
astronomers, by investigators of the human genome, and by biochemists exploring the
medicinal properties of proteins, along with many other scientists.
The power of databases comes from a body of knowledge and technology that
has developed over several decades and is embodied in specialized software called a
database management system, or DBMS, or more colloquially a «database system». A
DBMS is a powerful tool for creating and managing large amounts of data efficiently
and allowing it to persist over long periods of time, safely. These systems are among
the most complex types of software available. The capabilities that a DBMS provides
the user are:
1. Persistent storage. Like a file system, a DBMS supports the storage of
very large amounts of data that exists independently of any processes that
are using the data. However, the DBMS goes far beyond the file system in
providing flexibility, such as data structures that support efficient access
to very large amounts of data.
2. Programming interface. A DBMS allows the user or an application program to
access and modify data through a powerful query language.
Again, the advantage of a DBMS over a file system is the flexibility to
manipulate stored data in much more complex ways than the reading and
writing of files.
3. Transaction management. A DBMS supports concurrent access to data,
i.e., simultaneous access by many distinct processes (called «transactions») at
once. To avoid some of the undesirable consequences of simultaneous access, the
DBMS supports isolation, the appearance that transactions execute one at-a-time,
and atomicity, the requirement that transactions execute either completely or not
at all. A DBMS also supports durability, the ability to recover from failures or
errors of many types.
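The transaction properties listed in point 3 can be illustrated with Python's built-in sqlite3 module; the accounts table, the names and the simulated failure are invented for the example. If anything goes wrong inside the transaction, the whole transfer is rolled back, so the database never shows a half-finished update.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount, fail=False):
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            if fail:
                raise RuntimeError("simulated crash in mid-transfer")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
    except RuntimeError:
        pass  # the rollback has already restored the old balances

transfer(conn, "alice", "bob", 30, fail=True)   # fails -> rolled back
print(list(conn.execute("SELECT * FROM accounts")))
# [('alice', 100), ('bob', 50)]  -- atomicity: the failed transfer left no trace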
The Evolution of Database Systems
What is a database? In essence a database is nothing more than a collection of
information that exists over a long period of time, often many years. In common
parlance, the term database refers to a collection of data that is managed by a DBMS.
The DBMS is expected to:
1. Allow users to create new databases and specify their schema (logical
structure of the data), using a specialized language called a data-definition
language (see the sketch after this list).
2. Give users the ability to query the data (a «query» is database lingo for
a question about the data) and modify the data, using an appropriate
language, often called a query language or data-manipulation language.
3. Support the storage of very large amounts of data – many gigabytes or
more – over a long period of time, keeping it secure from accident or
unauthorized use and allowing efficient access to the data for queries and
database modifications.
4. Control access to data from many users at once, without allowing the
actions of one user to affect other users and without allowing simultaneous
accesses to corrupt the data accidentally.
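Points 1 and 2 above – a data-definition language for the schema and a query language for retrieval and modification – can be sketched with SQL through Python's built-in sqlite3 module; the employee table and its contents are invented for the example.

import sqlite3

conn = sqlite3.connect(":memory:")

# 1. Data-definition language: describe the schema (logical structure).
conn.execute("""
    CREATE TABLE employee (
        emp_name  TEXT PRIMARY KEY,
        dept_name TEXT NOT NULL,
        salary    INTEGER
    )
""")

# 2. Data-manipulation / query language: modify and then query the data.
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [("Smith", "Sales", 30000),
                  ("Jones", "Research", 45000),
                  ("Brown", "Sales", 35000)])

query = "SELECT emp_name FROM employee WHERE dept_name = ? AND salary > ?"
for (name,) in conn.execute(query, ("Sales", 32000)):
    print(name)   # Brown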
Early Database Management Systems
The first commercial database management systems appeared in the late 1960's.
These systems evolved from file systems, which provide some of item (3) above: file
systems store data over a long period of time, and they allow the storage of large
amounts of data. However, file systems do not generally guarantee that data cannot
be lost if it is not backed up, and they don't support efficient access to data items
whose location in a particular file is not known.
Further, file systems do not directly support item (2), a query language for the
data in files. Their support for (1), a schema for the data, is limited to the creation
of directory structures for files. Finally, file systems do not satisfy (4). When they
allow concurrent access to files by several users or processes, a file system
generally will not prevent situations such as two users modifying the same file at
about the same time, so the changes made by one user fail to appear in the file.
VOCABULARY
accidentally adv – случайно
amount n – количество
appropriate a – соответствующий, подходящий
atomicity n – атомарность
avoid v – избегать
capability n – способность, возможность
concurrent a – параллельный
consequence n – следствие, последствие
creation n – создание
core n – суть, сущность
corrupt v – портить
data item – элемент данных
distinct a – различный, определенный
durability n – выносливость, стойкость, прочность
essence n – суть
execute v – выполнять
failure n – сбой, отказ
flexibility n – гибкость
in common parlance – говоря обычным языком
isolation n – изоляция
lingo n – жаргон
persist v – сохраняться
prevent v – предотвращать
protein n – белок
query n,v – запрос, запрашивать
recover v – восстанавливаться
satisfy v – удовлетворять
simultaneous a – одновременный
transaction n – обработка запроса
undesirable a – нежелательный
SUGGESTED ACTIVITIES
Exercise 1. Answer the following questions:
1. What is a database system?
2. Where are databases used today?
3. What are the capabilities of the DBMS?
4. What is DBMS expected to do?
5. When did the first commercial DBMS appear?
6. What is the advantage of DBMS over a file system?
7. What does DBMS support to avoid undesirable consequences of
simultaneous access?
Exercise 2. Read the paragraph. Some of the words are missing: they are listed at the
end of the paragraph. Put the right word in each space.
Relational Database Systems
Following a famous paper written by Ted Codd in 1970, (1)… systems
changed significantly. Codd proposed that database systems should present the
user with a view of data organized as (2)… called relations. Behind the scenes,
there might be a complex data (3)… that allowed (4)… response to a variety of
(5)… . But, unlike the user of earlier database systems, the (6)… of a relational
system would not be concerned with the storage structure. Queries could be
expressed in a very (7)… language, which greatly increased the (8)… of database
programmers.
tables, database, user, rapid, high-level, structure, queries, efficiency
Exercise 3. Find the paragraph dealing with the drawbacks of file systems. Translate
it in written form.
Exercise 4. Write an abstract of the text.
TEXT 2
Look through the text and define its theme. Decide on a suitable title for the
text. Make up a plan of the text and retell the text briefly according to the plan.
The first important applications of DBMS's were ones where data was composed
of many small items and many queries or modifications were made. Here are some of
these applications.
Airline Reservations Systems
In this type of system, the items of data include:
1. Reservations by a single customer on a single flight, including such information as assigned seat or meal preference.
2. Information about flights – the airports they fly from and to, their departure
and arrival times, or the aircraft flown, for example.
3. Information about ticket prices, requirements, and availability.
Typical queries ask for flights leaving around a certain time from one given
city to another, what seats are available, and at what prices. Typical data
modifications include the booking of a flight for a customer, assigning a seat, or
indicating a meal preference. Many agents will be accessing parts of the data at any
given time. The DBMS must allow such concurrent accesses, prevent problems such
as two agents assigning the same seat simultaneously, and protect against loss of
records if the system suddenly fails.
Banking Systems
Data items include names and addresses of customers, accounts, loans, and their
balances, and the connection between customers and their accounts and loans, e.g.
who has signature authority over which accounts. Queries for account balances are
common, but far more common are modifications representing a single payment
from, or deposit to, an account.
As with the airline reservation system, we expect that many tellers and
customers (through ATM machines or the Web) will be querying and modifying the
bank's data at once. It is vital that simultaneous accesses to an account not cause the
effect of a transaction to be lost. Failures cannot be tolerated. For example, once the
money has been ejected from an ATM machine, the bank must record the debit,
even if the power immediately fails. On the other hand, it is not permissible for the
bank to record the debit and then not deliver the money if the power fails. The
proper way to handle this operation is far from obvious and can be regarded as one of
the significant achievements in DBMS architecture.
Corporate Records
Many early applications concerned corporate records, such as a record of each
sale, information about accounts payable and receivable, or information about
employees – their names, addresses, salary, benefit options, tax status, and so on.
Queries include the printing of reports such as accounts receivable or employees'
weekly paychecks. Each sale, purchase, bill, receipt, employee hired, fired, or
promoted, and so on, results in a modification to the database.
TEXT 3
Read the text. The paragraphs are mixed, put them in their correct order.
For example, since the rate at which data can be read from a given disk is
fairly low, a few megabytes per second, we can speed processing if we use many
disks and read them in parallel (even if the data originates on tertiary storage, it is
“cached” on disks before being accessed by the DBMS). These disks may be part
of an organized parallel machine, or they may be components of a distributed
system, in which many machines, each responsible for a part of the database,
communicate over a high-speed network when needed.
Of course, the ability to move data quickly, like the ability to store large
amounts of data, does not by itself guarantee that queries can be answered quickly.
We still need to use algorithms that break queries up in ways that allow parallel
computers or networks of distributed computers to make effective use of all the
resources. Thus, parallel and distributed management of very large databases remains
an active area of research and development.
The ability to store enormous volumes of data is important, but it would be of
little use if we could not access large amounts of that data quickly. Thus, very large
databases also require speed enhancers. One important speedup is through index
structures. Another way to process more data in a given time is to use parallelism. This
parallelism manifests itself in various ways.
TEXT 4
Read the text and find information about storage manager and buffer manager.
STORAGE AND BUFFER MANAGEMENT
The data of a database normally resides in secondary storage; in today’s computer systems, «secondary storage» generally means magnetic disk. However, to
perform any useful operation on data, that data must be in main memory. It is the
job of the storage manager to control the placement of data on disk and its
movement between disk and main memory.
In a simple database system, the storage manager might be nothing more than
the file system of the underlying operating system. However, for efficiency purposes,
DBMS’s normally control storage on the disk directly, at least under some
circumstances. The storage manager keeps track of the location of files on the disk
and obtains the block or blocks containing a file on request from the buffer manager.
Recall that disks are generally divided into disk blocks, which are regions of
contiguous storage containing a large number of bytes, perhaps 2^12 or 2^14 (about
4,000 to 16,000 bytes).
The buffer manager is responsible for partitioning the available main memory
into buffers, which are page-sized regions into which disk blocks can be transferred.
Thus, all DBMS components that need information from the disk will interact with
the buffers and the buffer manager, either directly or through the execution engine.
The kinds of information that various components may need include:
1. Data: the contents of the database itself.
2. Metadata: the database schema that describes the structure of, and
constraints on, the database.
3. Statistics: information gathered and stored by the DBMS about data
properties such as the sizes of, and values in, various relations or other
components of the database.
4. Indexes: data structures that support efficient access to the data.
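A toy version of the buffer manager described above, written in Python with a simple least-recently-used replacement policy; the BufferManager class and its methods are invented for illustration, and real DBMSs use far more elaborate strategies.

from collections import OrderedDict

class BufferManager:
    """Keeps at most `capacity` disk blocks in page-sized main-memory buffers."""
    def __init__(self, disk, capacity=4):
        self.disk = disk              # stands in for disk: block number -> contents
        self.capacity = capacity
        self.buffers = OrderedDict()  # buffered blocks, kept in LRU order

    def get_block(self, block_no):
        if block_no in self.buffers:
            self.buffers.move_to_end(block_no)            # already buffered: touch it
        else:
            if len(self.buffers) >= self.capacity:
                self.buffers.popitem(last=False)          # evict least recently used
            self.buffers[block_no] = self.disk[block_no]  # "read" the block from disk
        return self.buffers[block_no]

disk = {n: "contents of block %d" % n for n in range(10)}
bm = BufferManager(disk, capacity=2)
bm.get_block(0)
bm.get_block(1)
bm.get_block(2)              # capacity exceeded: block 0 is evicted
print(list(bm.buffers))      # [1, 2]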
TEXT 5
Read the text. Speak about the advantages of DBMS and RDBMS.
EVOLUTION OF DATABASE SYSTEMS
Since the first database management systems (DBMSs) appeared in the early
1960s, they have evolved in terms of quality (functionality, performance, ease of use,
and so on) and quantity (number of different products). The qualitative evolution has
been driven by two complementary trends: significant progress in the areas of
database theory and database technology and increasingly sophisticated requirements
of database users. The quantitative evolution of DBMSs stems from the increasing
number and variety of database applications and the increasing diversity of
computing resources.
A DBMS is characterized mainly by the data model it supports. The first
DBMSs, based on the hierarchical or network model, remain the most used today.
They can be viewed as an extension of file systems in which interfile links are
provided through pointers. The data manipulation languages of those systems are
navigational, that is, the programmer must specify the access paths to the data by
navigating in hierarchies or networks.
In the early 1980s, the first systems based on the relational model appeared on
the market, bringing definite advantages over their predecessors. Today a large
number of relational products is available on mainframe computers, minicomputers,
microcomputers, and dedicated computers (database machines), and their market is
rapidly expanding. The success of the relational model among researchers, designers,
and users is due primarily to the simplicity and the power of its concepts.
What Is a Relational Database Management System?
The advantages of the relational model, invented by E. F. Codd, have been
thoroughly demonstrated by database researchers. One main advantage is the ability
to provide full independence between logical data descriptions, in conceptual terms,
and physical data descriptions, in terms of files.
As a consequence of this physical independence, high-level data manipulation
languages may be supported. Such languages free the programmer from physical
details, thereby allowing query optimization to be done by the system rather than by
the user. The promotion of the relational model has also been helped by database
language standardization, which yields the standard Structured Query Language
(SQL). SQL provides a uniform interface to all types of users (database
administrators, programmers, end users) for data definition, control, and
manipulation.
The relational data model can be characterized by three features:
1. The data structures are simple. These are two-dimensional tables, called
relations (or tables), whose elements are data items. A relation can be viewed as a
file; a row of a relation, called tuple (or row), can be viewed as a record; and a
column of a relation called attribute (or column), can be viewed as a data item. The
relationship linking two relations is specified by a common attribute in both relations.
For example, the relationship between an EMPLOYEE relation and a
DEPARTMENT relation can be specified by an attribute dept_name stored in both
relations.
2. A set of eight operators (union, intersection, difference, Cartesian product,
select, project, join, and divide), called relational algebra, facilitates data
definition, data retrieval, and data update. Each relational operator takes one or two
relations as input and produces one relation (see the sketch after this list).
3. A set of integrity constraints defines consistent states of the database.
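A miniature illustration of these features in Python: relations are represented as plain lists of dictionaries (rows), and select, project and a natural join are written out by hand, reusing the EMPLOYEE/DEPARTMENT example and its common dept_name attribute from point 1 (the concrete values are invented). Each operator takes one or two relations and produces a relation, as the text requires.

# Two relations linked by the common attribute dept_name (see point 1).
EMPLOYEE = [
    {"emp_name": "Smith", "dept_name": "Sales"},
    {"emp_name": "Jones", "dept_name": "Research"},
]
DEPARTMENT = [
    {"dept_name": "Sales",    "location": "Paris"},
    {"dept_name": "Research", "location": "London"},
]

def select(relation, predicate):
    """Keep only the tuples (rows) that satisfy the predicate."""
    return [row for row in relation if predicate(row)]

def project(relation, attributes):
    """Keep only the named attributes (columns) of each tuple."""
    return [{a: row[a] for a in attributes} for row in relation]

def join(r1, r2, attribute):
    """Natural join on a single common attribute."""
    return [{**row1, **row2}
            for row1 in r1 for row2 in r2
            if row1[attribute] == row2[attribute]]

sales = select(EMPLOYEE, lambda row: row["dept_name"] == "Sales")
print(project(join(sales, DEPARTMENT, "dept_name"), ["emp_name", "location"]))
# [{'emp_name': 'Smith', 'location': 'Paris'}]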
A relational DBMS (RDBMS) is a software program that supports the relational
model. A definition of a relational database system has been proposed by a Relational
Task Group. Such a definition is useful to characterize systems that are claimed to be
relational. A system is said to be minimally relational if it satisfies three conditions:
1. All information in the database is represented as values in tables.
2. No intertable pointers are visible to the user.
3. The system must support at least the following relational algebra operators:
select, project, and natural join. These operators must not be restricted by internal
constraints. An example of internal constraint that exists in some systems is that there
must exist an index on the join attribute in order to perform a join. These constraints limit the
power of the database language.
TEXT 6
Sum up information about Database Management Systems.
Retell the text.
♦ Database Management Systems: A DBMS is characterized by the ability
to support efficient access to large amounts of data, which persists over
time. It is also characterized by support for powerful query languages and
for durable transactions that can execute concurrently in a manner that
appears atomic and independent of other transactions;
♦ Comparison With File Systems: Conventional file systems are inadequate
as database systems, because they fail to support efficient search, efficient
modifications to small pieces of data, complex queries, controlled buffering
of useful data in main memory, or atomic and independent execution of
transactions;
♦ Relational Database Systems: Today, most database systems are based
on the relational model of data, which organizes information into tables.
SQL is the language most often used in these systems;
♦ Secondary and Tertiary Storage: Large databases are stored on secondary
storage devices, usually disks. The largest databases require tertiary storage
devices, which are several orders of magnitude more capacious than
disks, but also several orders of magnitude slower;
♦ Client-Server Systems: Database management systems usually support a
client-server architecture, with major database components at the server and the
client used to interface with the user;
♦ Future Systems: Major trends in database systems include support for
very large «multimedia» objects such as videos or images and the integration of
information from many separate information sources into a single
database;
♦ Database Languages: There are languages or language components for
defining the structure of data (data-definition languages) and for querying
and modification of the data (data-manipulation languages);
♦ Components of a DBMS: The major components of a database
management system are the storage manager, the query processor, and the
transaction manager;
♦ The Storage Manager: This component is responsible for storing data,
metadata (information about the schema or structure of the data), indexes
(data structures to speed the access to data), and logs (records of changes
to the database). This material is kept on disk. An important storage-management component is the buffer manager, which keeps portions of
the disk contents in main memory;
♦ The Query Processor: This component parses queries, optimizes them by
selecting a query plan, and executes the plan on the stored data;
♦ The Transaction Manager: This component is responsible for logging
database changes to support recovery after a system crashes. It also supports concurrent execution of transactions in a way that assures atomicity
(a transaction is performed either completely or not at all), and isolation
(transactions are executed as if there were no other concurrently executing
transactions).
TEXT 7
Translate the text in written form.
THE EVOLUTION OF OBJECT-ORIENTED DATABASES
Object-oriented database research and practice dates back to the late 1970’s and
had become a significant research area by the early 1980’s, with initial commercial
product offerings appearing in the late 1980’s. Today, there are many companies
marketing commercial object-oriented databases that are second generation products.
The growth in the number of object-oriented database companies has been
remarkable. As both the user and vendor communities grow there will be a user pull
to mature these products to provide robust data management systems.
UNIT 6
COMPUTER SECURITY
TEXT 1
Look through the text and comment on its title.
HOW TO BECOME A HACKER
Looking for advice on how to learn to crack passwords, sabotage systems,
mangle websites, write viruses, and plant Trojan horses? You came to the wrong
place. I'm not that kind of hacker.
Looking for advice on how to learn the guts and bowels of a system or network,
get inside it, and become a real expert? Maybe I can help there. How you use this
knowledge is up to you. I hope you'll use it to contribute to computer science and
hacking (in its good sense), not to become a cracker or vandal.
This little essay is basically the answers to all the e-mails I get asking how to
become a hacker. It's not a tutorial in and of itself. It's certainly not a guaranteed
success. Just give it a try and see what happens. If this ends up being of any use to
you, let me know. That said, here's where to start:
Be curious
Take things apart. Look under the hood. Dig through your system directories and
see what's in there. View the files with hex editors. Look inside your computer.
Wander around computer stores and look at what's there.
Read everything in sight
If you can afford it, buy lots of books. If you can't, spend time in libraries and
online. Borrow books from friends. Go through tutorials. Read the help files on your
system. If you're using Unix/Linux, read the man files. Check out the local college
bookstores and libraries. And as you're reading, try things (see next paragraph).
Experiment
Don't be afraid to change things, just to see what'll happen. Do this long enough,
of course, and you'll wipe out your system (see next paragraph), but that's part of
becoming a hacker. Try command options and switches you've never tried before.
Look for option menus on programs and see what they can do. In Windows, tweak
your registry and see what happens. Change settings in .INI files. In Unix, dig around
in the directories where you don't normally go. On the Macintosh, play around in the
system folder.
Make backups
If you start mucking around with system files, registries, password files, and
such, you will eventually destroy your system. Have a backup ready. If you can
afford it, have a system you use just for experimenting, ready to reload on a moment's
notice, and do the serious work on a different computer.
Don't limit yourself
Who says a computer or network is the only place to hack? Take apart your
telephone. Figure out your television (careful of the high voltage around the picture
tube – if you fry yourself, it's not my fault) and VCR. Figure out how closed
captioning works (that was a plug for my FAQ). Take apart your printer. Pick up the
latest issues of Nuts & Volts and Midnight Engineer. Take apart the locks on your
doors. Figure out how your radio works. Be insatiably curious and read voraciously.
Get some real tools
You can't cut a board in half with a screwdriver. Well, maybe, but it'll take a
long time. Dig around and find the proper tools for the operating systems you're
using. They're out there on the Web. You can get some pretty good stuff as shareware
or freeware (especially on Unix). The serious power tools often cost serious money.
What kinds of tools? Hex file editors. Snoopers that analyze system messages and
network traffic. Programming tools. Scripting tools. Disk editors/formatters.
Disassemblers. When you get good, write some of your own.
Learn to program
If you want to be a hacker, you're going to have to learn to program. The easiest
way to start depends on the operating system you're using. The choice of language is
very individual. It's almost a religious thing. Suggest a programming language to a
beginner, and someone will disagree. Heck, you'll probably get flamed for it in a
newsgroup. In Unix, I'd suggest getting started with Perl. Buy a copy of the camel
book (Programming Perl) and the llama book (Learning Perl). You'll have the
fundamentals of programming really fast! The best part is that the language itself is
free. In Windows, you can get started quickly using a visual development
environment like Visual Basic or Delphi. No matter what the system, if you want to
get serious, you'll eventually need to learn C (or C++ or Visual C++ or some other
variant). Real hackers know more than one programming language, anyway, because
no one language is right for every task.
Learn to type
Hackers spend a lot of time at their keyboards. I type 90+ wpm (according to the
Mavis Beacon typing tutor). HackingWiz (of hackers.com and Hacker's Haven BBS
fame) says he can type 140+ wpm. The typing tutor may be boring, but it pays off.
Use real operating systems
Everyone's using Windows 95/98 these days, but it's just a shell on top of a 32-bit patch to a 16-bit DOS. Get some real operating systems (Linux, Windows NT,
Mac OS, OS/2...) and learn them. You can't call yourself a linguist if you only know
one language, and you certainly can't call yourself a hacker if you only know one OS.
Linux is a hacker's dream. All the source code is freely available. Play with it,
analyze it, learn it. Eventually, perhaps you can make a contribution to Linux
yourself. Who knows, you might even have a chance to write your own OS.
Talk to people
It's hard to learn in a vacuum. Take classes. Join users groups or computer
clubs. Talk to people on IRC or newsgroups or Web boards until you find people to
learn with. That can take a while. Every third message on newsgroups like alt.hack is
«teach me to hack.» Sigh. The best way to be accepted in any group is to contribute
something. Share what you learn, and others will share with you.
Do some projects
It's important to pick some projects and work until you've finished them.
Learning comes from doing, and you must follow the project through start to finish to
really understand it. Start really simple. Make an icon. Customize your system (the
startup screen on Win95, or the prompt on Unix). Make a script that performs some
common operation. Write a program that manipulates a file (try encrypting
something).
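As one possible version of the last project suggested above, here is a toy file «encryption» program in Python. It uses a simple XOR cipher, which is fine for learning but offers no real security; the file name notes.txt is made up, and running the script a second time with the same key restores the original file.

# xorfile.py - toy project: "encrypt" a file by XOR-ing every byte with a key.
from itertools import cycle

def xor_file(path, key):
    with open(path, "rb") as f:
        data = f.read()
    scrambled = bytes(b ^ k for b, k in zip(data, cycle(key)))
    with open(path, "wb") as f:
        f.write(scrambled)

if __name__ == "__main__":
    xor_file("notes.txt", b"my secret key")   # hypothetical file in the same folder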
Learn to really use the Internet
Start with the Web. Read the help for the search engines. Learn how to use
boolean searches. Build up an awesome set of bookmarks. Then move on to other
Internet resources. Get on Usenet. Learn to use gopher. Get on IRC. You'll find
useful information in the strangest places. Get to the point where you can answer
your own questions. It's a whole lot faster than plastering them all over various
newsgroups and waiting for a serious answer.
Once you've gone through these steps, go out and contribute something. The
Internet was built by hackers. Linux was built by hackers. Usenet was built by
hackers. Sendmail was built by hackers. Be one of the hackers that builds something.
VOCABULARY
awesome a – потрясающий, впечатляющий
backup n – резервная копия
bookmark n – закладка
caption n – надпись, субтитр (closed captioning – скрытые субтитры)
customize v – изготовить по техническим условиям (заказчика)
directory n – каталог
editor n – редактор
encrypt v – зашифровать
FAQ (Frequently Asked Questions) – часто задаваемые вопросы, вопросы и ответы
freeware n – бесплатное программное обеспечение
folder n – папка
heck! int – черт возьми! проклятье!
hex a – шестнадцатеричный
insatiably adv – жадно, ненасытно
IRC (Internet Relay Chat) – система общения в Интернете в реальном времени (чат)
mangle v – искажать, калечить
muck v – слоняться, пачкать
on a moment's notice – сразу, немедленно
pay off v – расплатиться, окупиться
patch n – заплата
plug n – реклама, рекламное упоминание
setting n – настройка, установка
script n – скрипт, сценарий
share v – делиться
shareware n – условно бесплатное ПО (попробуй перед тем как заплатить)
snooper n – программа-перехватчик (анализатор трафика)
stuff n – вещи, материал
tweak v – подстраивать, настраивать
tutorial n – руководство
voraciously adv – жадно, ненасытно
SUGGESTED ACTIVITIES
Exercise 1. Answer the following questions:
1. What does the author mean saying «I’m not that kind of hacker»?
2. What operating systems are mentioned in the text?
3. What programming languages are recommended?
4. Why is it important to program?
5. What is the typing tutor?
6. Can you learn in a vacuum?
7. What Internet resources does the author advise?
8. Do you agree with the author?
9. Why is it important to pick some projects?
Exercise 2. Find equivalents of the following expressions in the text:
резервная копия; интерактивный; перезагрузить; клавиатура; доступный;
руководство; пароль; разобрать на части; основы; вопрос-ответ;
программные средства; документ; позволить cебе; понимать; скучный;
делать вклад.
Exercise 3. Choose one of the following situations and write a list of instructions and
warnings.
1. An English friend wants advice about how to become a good programmer.
2. A friend has been given a PC and asks you for some advice.
Exercise 4. Write an abstract of the text.
TEXT 2
Look through the text and say what it is about. Write questions which can be
issues of its plan. Retell the text briefly according to the plan.
A BIT OF HISTORY
On 2 November 1988 Robert Morris Jr., a graduate student at the computer
science faculty of Cornell University (USA), infected a great number of computers
connected to the Internet. This network unites machines of university centres,
private companies and government agencies, including the National Aeronautics and
Space Administration, as well as some military research centres and labs.
The network worm struck 6,200 machines, which made up 7.3% of the computers
on the network, and showed that UNIX is not invulnerable either. Among those damaged
were NASA, Los Alamos National Lab, a research centre of the US Navy, the California
Institute of Technology, and the University of Wisconsin (200 of 300 systems). Spreading
over the ARPANET, MILNET, Science Internet and NSFNET networks, it practically put
these networks out of action. According to the «Wall Street Journal», the virus infiltrated
networks in Europe and Australia, where cases of blocked computers were also
registered.
Here are some recollections from participants in the events:
Symptom: hundreds or thousands of jobs start running on a Unix system
bringing response to zero.
Systems attacked: Unix systems, 4.3BSD Unix & variants (e.g., SUNs); any
sendmail compiled with debug has this problem. This virus is spreading very quickly
over the Milnet. Within the past 4 hours, it has hit >10 sites across the country, both
Arpanet and Milnet sites. Well over 50 sites have been hit. Most of these are «major»
sites and gateways.
Method: Someone has written a program that uses a hole in SMTP Sendmail
utility. This utility can send a message into another program.
Apparently what the attacker did was this: he or she connected to sendmail (i.e.,
telnet victim.machine 25), issued the appropriate debug command, and had a small C
program compiled. (We have it. Big deal.) This program took as an argument a host
number, and copied two programs – one ending in VAX.OS and the other ending in
SunOS – and tried to load and execute them. In those cases where the load and
execution succeeded, the worm did two things (at least): spawn a lot of shells that did
nothing but clog the process table and burn CPU cycles; look in two places – the
password file and the internet services file – for other sites it could connect to (this is
hearsay, but I don't doubt it for a minute). It used both individual .host files (which it
found using the password file), and any other remote hosts it could locate which it
had a chance of connecting to. It may have done more; one of our machines had a
changed superuser password, but because of other factors we're not sure this worm
did it.
All of the Vaxen and some of the Suns here were infected with the virus. The virus
forks repeated copies of itself as it tries to spread itself, and the load averages on the
infected machines skyrocketed: in fact, it got to the point that some of the machines
ran out of swap space and kernel table entries, preventing login to even see what was
going on!
The virus also «cleans» up after itself. If you reboot an infected machine (or it
crashes), the /tmp directory is normally cleaned up on reboot. The other incriminating
files were already deleted by the virus itself.
On 4 November the author of the virus – Morris – came to FBI headquarters in
Washington on his own. The FBI imposed a prohibition on all material relating to the
Morris virus.
On 22 January 1989 a jury found Morris guilty. If the guilty verdict had been
approved without modification, Morris would have been sentenced to 5 years in prison
and a 250,000-dollar fine. However, Morris's attorney Thomas Guidoboni immediately
lodged a protest and directed all the papers to the Circuit Court with a petition to
overturn the decision of the court... In the end Morris was sentenced to 3 months in
prison and a fine of 270 thousand dollars, but in addition Cornell University suffered a
heavy loss, having expelled Morris from its members. The author then had to take part
in the liquidation of his own creation.
TEXT 3
Read the text. Divide it into two parts.
In which paragraph does the author write about the main problems?
To what extent do you agree or disagree with his opinion?
There's only one problem with software development these days, according
to security analyst and author Gary McGraw: It isn't any good.
McGraw, noted for his books on Java security, is out with a new book that
purports to tell software developers how to do it better. Titled Building Secure
Software and co-authored with technologist John Viega, the book provides a plan for
designing software better able to resist the hacker attacks and worm infestations that
plague the networked world.
At the root of the problem, McGraw argues, lies «bad software.» While the
market demands that software companies develop more features more quickly,
McGraw and others in the security field are sounding the alarm that complex and
hastily designed applications are sure to be shot through with security holes.
Raised in eastern Tennessee, McGraw studied philosophy at the University of
Virginia before getting his dual doctorate in computer and cognitive science from
Indiana University. He subsequently went to work for Reliable Software
Technologies, now called Cigital, and gained attention in computer security circles
for the books he co-authored on Java security.
McGraw spoke to CNET News.com about the state of software development
and education, outlining his 10 principles for better security and the five worst
software security problems.
Q: You've identified the root of the computer security problem as bad
software development. Why is software such a problem?
A: I would say there are three major factors influencing the problem. Number
one is complexity. It turns out that software is way more complicated than it used to
be. For example, in 1990, Windows 3.1 was two and a half million lines of code.
Today, Windows XP is 40 million lines of code. And the best way to determine how
many problems are going to be in a piece of software is to count how many lines of
code it has. The simple metric goes like this: More lines, more bugs.
The second factor in what I like to call the «trinity of trouble» is connectivity.
That is, the Internet is everywhere, and every piece of code written today exists in a
networked world. And the third factor is something where we've only seen the tip of
the iceberg. It's called extensibility. The idea behind an extensible system is that code
will arrive from God knows where and change the environment.
Such as?
A perfect example of this is the Java Virtual Machine in a Web browser, or the
.Net virtual machine, or the J2ME micro VM built into phones and PDAs. These are
all systems that are meant to be extensible. With Java and .Net, you have a base
system, and lots of functionality gets squirted down the wire just in time to assemble
itself. This is mobile code.
The idea is that I can't anticipate every kind of program that might want to run
on my phone, so I create an extensible system and allow code to arrive as it is needed.
Not all of the code is baked in. There are a lot of economic reasons why this is a good
thing and a lot of scary things that can happen as a result. I wrote lots about this in
1996 in the Java security book. So, if you look at those three problems together–
complexity, connectedness and extensibility–they are the major factors making it
much harder to create software that behaves.
What are some of the specific problems facing programmers trying to write
secure code?
There are many subtleties in writing good programs. There's too much to know,
and there aren't many good methods in how to develop software securely. The tools
that developers have are bad. Programming is hard. And popular languages like C
and C++ are really awful from a security standpoint. Basically, it's not an exact
science. So all of these factors work together to cause the problem.
Who else shares responsibility for this problem?
If you think about who practices security today, you'll find that it's usually a
network architect, someone who understands the network, an IT person. Now, who
develops software? Software architects and developers. Those guys don't talk to the
security or network guys. They're often not even in the same organization. The
software guys are associated with a line of business, and the IT staff is part of
corporate infrastructure.
Historically, isn't part of the problem the fact that a lot of software was
developed before computers were networked?
Sure, but computers have been networked for a long time now. You can't exactly
say that the Internet is new. Yet we're still producing code as if it were living in a
non-networked environment, which is why the connectivity thing is part of this trinity
of trouble. Most developers do not learn about security. And so we see this same
problem come up over and over again, like buffer overflows, for example.
You write a lot about the lack of security education in the computer science
field. What should be done to bridge the education gap?
One thing is that some universities are beginning to teach security, sometimes
even software security–UC Davis, the University of Virginia, Purdue, Princeton. And
then, the fact is that the world is catching on. The world realizes that if we want to get
a proactive handle on computer security, we're going to have to pay more attention to
software.
Where has the security focus been, if not on software?
It's been on firewalls.
Wait, aren't firewalls software?
They're software, but they're supposed to be a partial solution to the security
problem. You're connected to the Internet and all your ports are wide open, so you get
a firewall so only a few of your ports are going to be open, and that lessens your
exposure. But the problem is that what's listening to those few ports though the
firewall is a piece of code. A lot of people treat security as a network architecture
problem. They think, «If I put a firewall between myself and the evil, dangerous
Internet, I'll be secure.» And I say, «Good start, but your firewall has holes in it on
purpose.» What's on the other side of those holes?
In the back of their minds, people know that security problems are caused by
bad software, like (Microsoft's) IIS Web server. But they try to solve the problem in
the wrong way–like firewalls–and the second way is magic crypto fairy dust.
TEXT 4
Translate the text in written form.
INTERNET SAFETY AND POLICY LEADERSHIP
The Internet is fostering some of the fastest technological, social, and economic
changes in history. Since coming into widespread use in the mid-1990s, it has evolved
rapidly into a global network, connecting many of the world's personal computers and
an increasing number of cell phones and other devices.
While information and communications technology (ICT) has created previously
unimagined opportunities for millions of people worldwide, it has also provided new
tools for criminals. As a leader in the industry that is creating so many benefits,
Microsoft is committed to helping address technology abuses, both in our own
programs and in collaboration with governments, law enforcement officials, and
other industry leaders.
Public policy is also critical to shaping the Internet's future. We will continue to
work with government officials and other stakeholders to advance public policies that
improve economic and social well-being, deter criminal activity, and enable people to
realize their full potential.
TEXT 5
Read the text and divide it into paragraphs. Translate the first three paragraphs
in written form.
CREATING AND USING SECURE TECHNOLOGY
Software programs represent a unique combination of human authorship and
technology. The complexity of modern computing is compounded on the Internet as
programs interact with a broadening diversity of other programs and devices. This
complexity has become the target of hackers and writers of viruses and worms, who
have become increasingly sophisticated in probing for and exploiting vulnerabilities
in order to inflict senseless harm or worse. The launch of the Trustworthy Computing
Initiative was about fundamentally changing the way we design and develop software
in order to more holistically address this reality. Back in 2002, some 8,500 Microsoft
developers halted their work and dedicated their efforts to building security directly
into our software. While this initiative delayed the release of Windows Server 2003
and postponed work on other key products, we believe that the initial Trustworthy
Computing push created a quantum leap in our ability to help protect computing
systems from online attacks. Across the company, our developers have maintained
their focus on improving the security of our products, and employees around the
world helped customers, partners, and other key audiences understand how,
collectively, we can improve the security of the Internet. Our development of
Microsoft Windows XP Service Pack 2 (SP2)–a free upgrade to the Windows XP
operating system–was the most significant step of the past year. Windows XP SP2
contains a number of new security technologies. By consenting to and «turning on»
Automatic Updates, consumers permit Microsoft to send them updates when they
connect to the Internet. During the setup of Windows XP SP2, consumers are
presented with a screen that educates them in consumer-friendly terms on the
importance of enabling this feature.
TEXT 6
Read the text and express the main idea. Give the text a suitable title. Find a
definition of hacker in the text.
In today's world of international networks and electronic commerce, every
computer system is a potential target. Rarely does a month go by without news of
some major network or organization having its computers penetrated by unknown
computer criminals. Although some computer «hackers» have said that such
intrusions are merely teenage pranks or fun and games, these intrusions have become
more sinister in recent years: computers have been rendered inoperable; records have
been surreptitiously altered; software has been replaced with secret «back doors» in
place; proprietary information has been copied without authorization; and millions of
passwords have been captured from unsuspecting users.
Even if nothing is removed or altered, system administrators must often spend
hours or days reloading and reconfiguring a compromised system to regain some
level of confidence in the system's integrity. There is no way to know the motives of
an intruder and the worst must be assumed. People who break into systems simply to
«look around» do real damage, even if they do not read confidential mail and do not
delete any files. If computer security was once the subject of fun and games, those
days have long since passed.
Many different kinds of people break into computer systems. Some people –
perhaps the most widely publicized – are the equivalent of reckless teenagers out on
electronic joy rides. Like youths who «borrow» fast cars, their main goal isn't
necessarily to do damage, but to have what they consider to be a good time. Others
are far more dangerous: some people who compromise system security are
sociopaths, joyriding around the networks bent on inflicting damage on unsuspecting
computer systems. Others see themselves at «war» with rival hackers; woe to
innocent users and systems who happen to get in the way of cyberspace «drive-by
shootings!» Still others are out for valuable corporate information, which they hope
to resell for profit. There are also elements of organized crime, spies and saboteurs
motivated by both greed and politics, terrorists, and single-minded anarchists using
computers and networks.
Who Is a Computer Hacker?
HACKER noun 1. A person who enjoys learning the details of computer
systems and how to stretch their capabilities – as opposed to most users of computers,
who prefer to learn only the minimum amount necessary.
2. One who programs enthusiastically or who enjoys
programming rather than just theorizing about programming.
TEXT 7
Translate the text without a dictionary.
WHAT IS COMPUTER SECURITY?
Terms like security, protection, and privacy often have more than one meaning.
Even professionals who work in information security do not agree on exactly what
these terms mean. The focus of this book is not formal definitions and theoretical
models so much as practical, useful information. Therefore, we'll use an operational
definition of security and go from there.
Computer Security: «A computer is secure if you can depend on it and its
software to behave as you expect.»
If you expect the data entered into your machine today to be there in a few
weeks, and to remain unread by anyone who is not supposed to read it, then the
machine is secure. This concept is often called trust: you trust the system to preserve
and protect your data.
By this definition, natural disasters and buggy software are as much threats to
security as unauthorized users. This belief is obviously true from a practical standpoint. Whether your data is erased by a vengeful employee, a random virus, an
unexpected bug, or a lightning strike – the data is still gone.
SUPPLEMENTARY READING
TEXT 1
FUZZY LOGIC
Introduction
Welcome to the wonderful world of fuzzy logic, the new science you can use to
powerfully get things done. Add the ability to utilize personal computer based fuzzy
logic analysis and control to your technical and management skills and you can do
things that humans and machines cannot otherwise do.
Following is the base on which fuzzy logic is built:
As the complexity of a system increases, it becomes more difficult, and eventually
impossible, to make a precise statement about its behavior, until we arrive at a point
of complexity where the fuzzy logic method born in humans is the only way to get at
the problem.
(Originally identified and set forth by Lotfi A. Zadeh, Ph.D., University of
California, Berkeley).
Fuzzy logic is used in system control and analysis design, because it shortens the
time for engineering development and sometimes, in the case of highly complex
systems, is the only way to solve the problem.
Fuzzy logic can apply also to economics, psychology, marketing, weather
forecasting, biology, politics… to any large complex system.
The term «fuzzy» was first used by Dr. Lotfi Zadeh in «Proceedings of the IRE»,
a leading engineering journal, in 1962. Dr. Zadeh
became, in 1963, the Chairman of the Electrical Engineering department of the
University of California at Berkeley. Dr. Zadeh’s thoughts are not to be taken lightly.
Fuzzy logic is not the wave of the future. It is now! There are already hundreds
of millions of dollars of successful, fuzzy logic based commercial products,
everything from self-focusing cameras to washing machines that adjust themselves
according to how dirty the clothes are, automobile engine controls, subway control
systems and computer programs trading successfully in the financial markets.
Fuzzy Logic Analysis and Control
A major contributor to Homo sapiens success and dominance of this planet is
our innate ability to exercise analysis and control based on the fuzzy logic method.
Here is an example:
Suppose you are driving down a typical, two-way, 6-lane street in a large city,
one mile between signal lights. The speed limit is posted at 45 Mph. It is usually
optimum and safest to «drive with the traffic», which will usually be going about 48
Mph. How do you define with specific, precise instructions «driving with the
traffic»? It is difficult. But, it is the kind of thing humans do every day and do well.
There will be some drivers going more than 48 Mph and a few drivers driving
exactly the posted 45 Mph. But, most drivers will be driving 48 Mph. They do this by
exercising «fuzzy logic» – receiving a large number of fuzzy inputs, somehow
evaluating all the inputs in their human brains and summarizing, weighting and
averaging all these inputs to yield an optimum output decision. Inputs being
evaluated may include several images and considerations such as: How many cars are
in front. How fast are they driving. Any drivers going real slow. How about side
traffic entering from side streets. Do the police ever set up radar surveillance on this
stretch of road. What do you see in the rear view mirror. Even with all this, and more,
to think about, those who are driving with the traffic will all be going along together
at the same speed.
The same ability you have to drive down a modern city street was used by our
ancestors to successfully organize and carry out chases to drive wooly mammoths
into pits, to obtain food, clothing and bone tools.
Human beings have the ability to take in and evaluate all sorts of information
from the physical world they are in contact with and to mentally analyze, average and
summarize all this input data into an optimum course of action. All living things do
this, but humans do it more and do it better and have become the dominant species of
the planet.
If you think about it, much of the information you take in is not very precisely
defined, such as the speed of a vehicle coming up from behind. We call this fuzzy
input.
However, some of your «input» is reasonably precise and non-fuzzy such as the
speedometer reading. Your processing of all this information is not very precisely
definable. We call this fuzzy processing. Fuzzy logic theorists would call it using
fuzzy algorithms.
Fuzzy logic is the way the human brain works, and we can mimic this in
machines so they will perform somewhat like humans (not to be confused with
Artificial Intelligence, where the goal is for machines to perform EXACTLY like
humans). Fuzzy logic control and analysis systems may be electromechanical in
nature, or concerned only with data, for example economic data, in all cases guided
by «If-Then» rules stated in human language.
The Fuzzy Logic Method
The fuzzy logic analysis and control method is, therefore:
1. Receiving of one, or a large number, of measurement or other assessment of
conditions existing in some system we wish to analyze or control.
2. Processing all these inputs according to human based, fuzzy «If-Then» rules,
which can be expressed in plain language words, in combination with traditional non-fuzzy processing.
3. Averaging and weighting the resulting outputs from all the individual rules
into one single output decision or signal which decides what to do or tells a controlled
system what to do. The output signal eventually arrived at is a precise appearing,
defuzzified, «crisp» value.
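A short sketch may make these three steps easier to follow. The rules, speeds, and weights below are invented for illustration, not taken from the text; the pattern of fuzzy inputs, «If-Then» rules, and a weighted average producing one crisp output is the point.

# A minimal sketch of the three-step fuzzy method, with assumed rules and numbers.
def fuzzy_speed_decision(traffic_density, gap_ahead):
    """Both inputs are fuzzy ratings on a 0..1.0 scale; the output is a crisp speed in mph."""
    rules = []  # each rule contributes (suggested speed, firing strength)
    rules.append((35.0, traffic_density))         # IF traffic is heavy THEN slow down
    rules.append((48.0, gap_ahead))               # IF the gap ahead is large THEN drive with the traffic
    rules.append((45.0, 1.0 - traffic_density))   # IF traffic is light THEN drive the posted limit

    # Step 3: weight and average all rule outputs into one defuzzified, «crisp» value.
    total = sum(strength for _, strength in rules)
    return sum(speed * strength for speed, strength in rules) / total if total else 45.0

print(round(fuzzy_speed_decision(0.7, 0.3), 1))   # one crisp output decision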
Fuzzy Perception
A fuzzy perception is an assessment of physical condition that is not measured
with precision, but is assigned an intuitive value. In fact, the fuzzy logic people assert
everything in the universe is a little fuzzy, no matter how good your measuring
equipment is. It will be seen below that fuzzy perceptions can serve as a basis for
processing and analysis in a fuzzy logic control system.
Measured, non-fuzzy data is the primary input for the fuzzy logic method.
Examples: temperature measured by a temperature transducer, motor speed,
economic data, financial markets data, etc. It would not be usual in an electromechanical control system or a financial or economic analysis system, but humans
with their fuzzy perceptions could also provide input.
In the fuzzy logic literature, you will see the term «fuzzy set.» A fuzzy set is a
group of anything that cannot be precisely defined. Consider the fuzzy set of «old
houses.» How old is an old house? Where is the dividing line between new houses
and old houses? Is a fifteen year old house an old house? How about 40 years? What
about 39.9 years? The assessment is in the eyes of the beholder.
Other examples of fuzzy sets are: tall women, short men, warm days, high
pressure gas, small crowd, medium viscosity, hot shower water, etc.
When humans are the basis for an analysis, we must have a way to assign some
rational value to intuitive assessments of individual elements of a fuzzy set. We must
translate from human fuzziness to numbers that can be used by a computer. We do
this by assigning assessments of conditions a value from zero to 1.0. For «how hot the
room is» the human might rate it at .2 if the temperature were below freezing, and the
human might rate the room at .9, or even 1.0, if it is a hot day in summer with the air
conditioner off.
You can see these perceptions are fuzzy, just intuitive assessments, not precisely
measured facts.
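One common way to turn such intuitive assessments into numbers is a membership function. The sketch below uses assumed thresholds of 10 and 60 years for the «old houses» set mentioned above; the text itself gives no fixed dividing line, so these numbers are purely illustrative.

# A minimal sketch of a membership function for the fuzzy set «old houses».
def old_house_membership(age_years):
    """Returns a membership value from 0.0 (clearly not old) to 1.0 (clearly old)."""
    if age_years <= 10:          # assumed threshold: newer than 10 years -> not old
        return 0.0
    if age_years >= 60:          # assumed threshold: older than 60 years -> fully old
        return 1.0
    return (age_years - 10) / 50.0   # linear ramp in between

for age in (5, 15, 39.9, 40, 80):
    print(age, round(old_house_membership(age), 2))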
By making fuzzy evaluations, with zero at the bottom of the scale and 1.0 at the
top, we have a basis for analysis rules for the fuzzy logic method, and we can
accomplish our analysis or control project. The results seem to turn out well for
complex systems or systems where human experience is the only base from which to
proceed, certainly better than doing nothing at all, which is where we would be if
unwilling to proceed with fuzzy rules.
TEXT 2
DESIGN OF A BITMAPPED MULTILINGUAL WORKSTATION
Providing computer support for English text is simpler in several important
respects than supporting other natural languages. Most English words require no
diacritical marks. Internal storage of ASCII codes allows implicit collation of terms.
Display of English text requires only upper- and lowercase letters to generate output
comparable to printing generated by other means.
By contrast, many non-English languages use characters that increase the
complexity of computerization. Languages that use diacritical marks for certain
characters require different methods for input, storage, and display, often including
modified or special keyboards, character codes, and display techniques. Collation is
complex, requiring different rules for different languages or sometimes even for the
same language; for example, a German telephone directory is collated slightly
differently from a German dictionary. Standard character codes do not correspond to
any of these collation rules. Some languages display special characters for certain
letter combinations, such as the use of a special character for the double «s» in
German. Because many computer languages use English words and characters,
countries with different alphabets (such as Greek and Cyrillic) must also use English
characters. (I use the term «English» to describe the alphabet in the ASCII character
set, without diacritical characters.)
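The collation point can be seen directly in code. The sketch below uses Python's standard locale module; whether the German locale name «de_DE.UTF-8» is installed depends on the system, so treat that name as an assumption.

import locale

# A minimal sketch: plain code-point order versus locale-aware German collation.
words = ["Zebra", "Äpfel", "Apfel", "Ödem"]

print(sorted(words))   # code-point order puts Ä and Ö after Z, unlike a German dictionary

try:
    locale.setlocale(locale.LC_COLLATE, "de_DE.UTF-8")    # assumed to be installed
    print(sorted(words, key=locale.strxfrm))               # dictionary-style German order
except locale.Error:
    print("German locale not available on this system")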
Some Middle Eastern languages also require special treatment for input and display. Hebrew and Arabic, for example, are written from right to left, except for
numerics, which retain left-to-right place value notation. In addition, some Arabic
characters have different shapes depending on their position in a word.
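For the right-to-left scripts mentioned above, modern systems rely on per-character direction classes. A small illustration using Python's standard unicodedata module follows; the sample characters are arbitrary.

import unicodedata

# Each character carries a Unicode bidirectional class that layout engines use
# to mix right-to-left letters with left-to-right numerals in one line of text.
for ch in ("a", "1", "א", "ع"):
    print(repr(ch), unicodedata.bidirectional(ch))
# Typical classes: 'L' (left-to-right letter), 'EN' (European number),
# 'R' (right-to-left letter), 'AL' (Arabic letter)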
Input methodologies. For the most part, the design of today’s computers is
based on the English language, so it is natural that processing English text is easier
than processing text in other languages. In addition, the English character set is
simpler than that of most other languages except under special circumstances such as
representing foreign words, equations, or special fonts.
However, English input itself is hampered by the Qwerty keyboard. Despite its
inefficient layout and positioning of keys, the Qwerty keyboard has survived as the
principal means of input for English and other languages for over a century. A number of other devices have been proposed, the Dvorak keyboard being the best known.
The most promising recent design is the Maltron keyboard developed by Stephen
Hobday and Lillian Malt. This design overcomes most of the Qwerty keyboard’s
design flaws. It takes advantage of the dexterity of both thumbs by giving them
control of a number of keys, including the letter «e», space, period, and enter.
The keys are separated into two pods, one for each hand, and placed in a concave
configuration that eliminates the need for users to move their hands to access all keys.
These last two features reduce two known causes of severe strain resulting from
Qwerty keyboard use. The keys are also repositioned to increase alternate hand
typing and to make greater use of the most dextrous fingers (giving a slight bias to
the right hand). This keyboard can increase the speed of any user, even professional
typists.
Input problems in other languages are considerably more complex. The frequency of use of characters in foreign languages is different from that in English.
German, for example, makes little use of the letter «y» but frequent use of the letter
«z», so those two letters are transposed on German typewriters.
Western European languages other than English generally use diacritical
characters. Letters such as á and ê in French, ü in German, require special treatment
for both input and display. It is technically possible but cumbersome to type the characters just cited using ASCII characters and a Qwerty keyboard. As a result,
language-specific keyboard layouts have been adopted that allow for the relative
frequency of letter use in the different alphabets.
Input problems are compounded in languages that print from right to left, such
as Arabic and Hebrew. When the default input mode is Arabic, some computers require that English characters be input in reverse order. Becker describes a different
method in which the keyboard is temporarily reconfigured for each alphabet, so the
characters are accepted in the conventional mode for each language. A prompt
window on the display indicates the current keyboard configuration. The concept of a
language-independent keyboard is an important contribution to multilingual
processing.
Input of Oriental languages poses even more difficult problems. The use of
logographic, rather than alphabetic, characters to represent words stems from ancient
Chinese writing, which was also adopted by the Japanese and Koreans around 1,000
years ago. These symbols are effective in permitting communication between cultures
that speak different dialects, but the large number of characters and their different
pronunciations in each language create serious input difficulties.
The problem of providing a simple, efficient input method in a multilingual
environment remains complex. Multiple input devices, including voice as well as
keyboards, could be one solution. More research in this area is sorely needed.
TEXT 3
THE DO-IT-YOURSELF SUPERCOMPUTER
In the well-known stone soup fable, a wandering soldier stops at a poor village
and says he will make soup by boiling a cauldron of water containing only a shiny
stone. The townspeople are skeptical at first but soon bring small offerings: a head of
cabbage, a bunch of carrots, a bit of beef. In the end, the cauldron is filled with
enough hearty soup to feed everyone. The moral: cooperation can produce significant
achievements, even from meager, seemingly insignificant contributions.
Researchers are now using a similar cooperative strategy to build
supercomputers, the powerful machines that can perform billions of calculations in a
second. Most conventional supercomputers employ parallel processing: they contain
arrays of ultrafast microprocessors that work in tandem to solve complex problems
such as forecasting the weather or simulating a nuclear explosion. Made by IBM,
Cray and other computer vendors, the machines typically cost tens of millions of
dollars–far too much for a research team with a modest budget. So over the past few
years, scientists at national laboratories and universities have learned how to
construct their own supercomputers by linking inexpensive PCs and writing software
that allows these ordinary computers to tackle extraordinary problems.
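The idea of splitting one large job across many cheap processors can be sketched in a few lines. The code below uses Python processes on a single machine as a stand-in for cluster nodes, and the cell data is invented; a real Beowulf-style cluster would distribute the same kind of per-cell work across separate PCs using message-passing software.

from multiprocessing import Pool

# A minimal sketch of divide-and-conquer parallelism: many small, independent
# pieces of work are farmed out to a pool of workers and the results collected.
def classify_cell(cell):
    """Toy stand-in for a per-cell computation (e.g. summarizing map variables)."""
    cell_id, variables = cell
    return cell_id, sum(variables) / len(variables)

if __name__ == "__main__":
    cells = [(i, [i % 7, i % 5, i % 3]) for i in range(10_000)]
    with Pool(processes=4) as pool:                # four workers standing in for nodes
        results = pool.map(classify_cell, cells)   # scatter the work, gather the results
    print(results[:3])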
In 1996 two of us (Hargrove and Hoffman) encountered such a problem in our
work at Oak Ridge National Laboratory (ORNL) in Tennessee. We were trying to
draw a national map of ecoregions, which are defined by environmental conditions:
all areas with the same climate, landforms and soil characteristics fall into the same
ecoregion. To create a high-resolution map of the continental U.S., we divided the
country into 7.8 million square cells, each with an area of one square kilometer. For
each cell we had to consider as many as 25 variables, ranging from average monthly
precipitation to the nitrogen content of the soil. A single PC or workstation could not
accomplish the task. We needed a parallel-processing supercomputer–and one that we
could afford!
Our solution was to construct a computing cluster using obsolete PCs that
ORNL would have otherwise discarded. Dubbed the Stone SouperComputer because
it was built essentially at no cost, our cluster of PCs was powerful enough to produce
ecoregion maps of unprecedented detail. Other research groups have devised even
more capable clusters that rival the performance of the world’s best supercomputers
at a mere fraction of their cost. This advantageous price-to-performance ratio has
already attracted the attention of some corporations, which plan to use the clusters for
such complex tasks as deciphering the human genome. In fact, the cluster concept
promises to revolutionize the computing field by offering tremendous processing
power to any research group, school or business that wants it.
Beowulf and Grendel
The notion of linking computers together is not new. In the 1950s and 1960s the
U.S. Air Force established a network of vacuum-tube computers called SAGE to
guard against a Soviet nuclear attack. In the mid-1980s Digital Equipment
Corporation coined the term «cluster» when it integrated its mid-range VAX
minicomputers into larger systems. Networks of workstations – generally less
powerful than minicomputers but faster than PCs – soon became common at research
institutions. By the early 1990s scientists began to consider building clusters of PCs,
partly because their mass-produced microprocessors had become so inexpensive.
What made the idea even more appealing was the falling cost of Ethernet, the
dominant technology for connecting computers in local-area networks.
Advances in software also paved the way for PC clusters. In the 1980s Unix
emerged as the dominant operating system for scientific and technical computing.
Unfortunately, the operating systems for PCs lacked the power and flexibility of
Unix. But in 1991 Finnish college student Linus Torvalds created Linux, a Unix-like
operating system that ran on a PC. Torvalds made Linux available free of charge on
the Internet, and soon hundreds of programmers began contributing improvements.
Now wildly popular as an operating system for stand-alone computers, Linux is also
ideal for clustered PCs.
The first PC cluster was born in 1994 at the NASA Goddard Space Flight
Center. NASA had been searching for a cheaper way to solve the knotty
computational problems typically encountered in earth and space science. The space
agency needed a machine that could achieve one gigaflops–that is, perform a billion
floating-point operations per second. (A floating-point operation is equivalent to a
simple calculation such as addition or multiplication.) At the time, however,
commercial supercomputers with that level of performance cost about $1 million,
which was too expensive to be dedicated to a single group of researchers.
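To get a feel for these units, one floating-point operation is a single addition or multiplication, so a gigaflops machine performs a billion of them per second. The timing sketch below is a rough illustration of the measure, not a real benchmark.

import time

# A rough sketch: time a loop of floating-point work and convert it to flops.
n = 2_000_000
start = time.perf_counter()
total = 0.0
for i in range(n):
    total += 1.000001 * i            # one multiply and one add per iteration
elapsed = time.perf_counter() - start
print(f"about {2 * n / elapsed / 1e6:.0f} megaflops for this naive Python loop")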
One of us (Sterling) decided to pursue the then radical concept of building a
computing cluster from PCs. Sterling and his Goddard colleague Donald J. Becker
connected 16 PCs, each containing an Intel 486 microprocessor, using Linux and a
standard Ethernet network. For scientific applications, the PC cluster delivered
sustained performance of 70 megaflops–that is, 70 million floating-point operations
per second. Though modest by today’s standards, this speed was not much lower than
that of some smaller commercial supercomputers available at the time.
And the cluster was built for only $40,000, or about one tenth the price of a
comparable commercial machine in 1994.
NASA researchers named their cluster Beowulf, after the lean, mean hero of
medieval legend who defeated the giant monster Grendel by ripping off one of the
creature’s arms. Since then, the name has been widely adopted to refer to any low-cost cluster constructed from commercially available PCs. In 1996 two successors to
the original Beowulf cluster appeared: Hyglac (built by researchers at the California
Institute of Technology and the Jet Propulsion Laboratory) and Loki (constructed at
Los Alamos National Laboratory). Each cluster integrated 16 Intel Pentium Pro
microprocessors and showed sustained performance of over one gigaflops at a cost of
less than $50,000, thus satisfying NASA’s original goal.
TEXT 4
ERGONOMICS
Within the past two years, substantial media attention has been directed at
potential adverse health effects of long-term computer use. Renewed concerns about
radiation, combined with reports of newly-recognized «repetitive stress injuries» such
as carpal tunnel syndrome, have led some to call for regulation in the workplace and
others to rearrange their offices and computer labs. There is little evidence that
computer use is on the decline, however. On the contrary, more people are spending
more time doing more tasks with computers – and faculty, students and staff at
colleges and universities have some of the most computer-intensive work styles in the
world.
If, as is widely suspected, health effects are cumulative, then many of us are at
risk in our offices, labs, dormitories, and homes. Unfortunately, many years will be
required before epidemiological studies can provide definitive guidelines for
computer users, managers, furniture suppliers, and office designers. In the interim,
individuals and institutions must educate themselves about these issues and protective
measures.
One set of issues concerns workstation design, setup, and illumination, together
with users’ work habits. The City of San Francisco, which recently enacted worker
safety legislation, cited research by the National Institute of Occupational Safety and
Health (NIOSH) into VDT operator complaints of eyestrain, headaches, general
malaise, and other visual and musculoskeletal problems as the rationale for imposing
workplace standards, to be phased in over the next four years.
A second set of issues relates to suspected radiation hazards, including
miscarriage and cancer. A special concern with radiation is that nearby colleagues
could be affected as well, since radiation is emitted from the backs and sides of some
terminals. The most recent NIOSH study is reassuring, but some caution still seems
prudent.
Ergonomics and work habits
Most people can ride any bicycle on flat ground for a short distance with no
problems. On a fifty mile ride over hilly terrain, however, minor adjustments in seat
height, handlebar angle, and the like can mean the difference between top
performance and severe pain. Similarly, occasional computer users may notice no ill
effects from poorly designed or badly adjusted workstations, whereas those who
spend several hours a day for many years should pay careful attention to ergonomics,
the study of man-machine interfaces.
The key to most workstation comfort guidelines is adjustability–to
accommodate different body dimensions, personal workstyle preferences, and the
need to change positions to avoid fatigue. A recommended working posture shows
the body directly facing the keyboard and terminal, back straight, feet flat on the
floor, eyes aligned at or slightly below the top of the screen, and thighs, forearms,
wrists, and hands roughly parallel to the floor. Achieving this posture may require:
• A chair with a seat pan that adjusts both vertically and fore-and-aft, an
adjustable height backrest, and adjustable tilting tension.
• An adjustable height work surface or separate keyboard/mouse tray (note
that many keyboard trays are too narrow to accommodate a mouse pad,
leaving the mouse at an awkward height or reach on the desktop).
• A height adjustment for the video display (a good use for those manuals
you’ll never read!).
• An adjustable document holder to minimize head movement and eyestrain.
• Adjustable foot rests, arms rests, and/or wrist rests.
Studies show that many people are unaware of the range of adjustments possible
in their chairs and workstations. Although the best chairs permit adjustment while
seated, you may have to turn the chair upside down to read the instructions. (Be
careful not to strain your back while upending and righting the chair!). If you are
experiencing discomfort, experiment with adjustments or try exchanging chairs or
workstations with colleagues. A posture cushion, which maintains the natural
curvature of the spine and pelvis while supporting the lumbar region, may also prove
helpful. It should be noted that any adjustment may feel uncomfortable for a week or
so while your body readjusts itself.
(Some people have been advised by their physicians to use a backless «balans»
chair, which minimizes compression of the spine and shifts the body weight forward
with the aid of a shin rest. This posture may be uncomfortable, however, since it
requires stronger abdominal and leg muscles than conventional sitting positions. The
«balans» chair is not recommended for overweight or exceptionally tall persons).
Light and glare
Eyestrain, headaches, and impaired vision are often a product of improper
illumination resulting in glare, which is light within the field of vision that is brighter
than other objects to which the eyes are adapted. Both direct glare from sunlight and
lighting fixtures directed at the user’s eyes and indirect glare due to reflections from
video screens or glossy surfaces are common problems for VDT users.
Many offices are too bright for computer use, which may be a carryover from
the days when paperwork required such brightness or the result of many office
workers’ preferences for sunlight and open windows. A NIOSH study recommends
200-500 lux for general office work; other sources suggest 500-700 lux for light
characters on dark monitors and somewhat more for dark-on-light. If documents are
not sufficiently illuminated, desk lights are recommended in preference to ceiling
lights, which increase reflections from video screens. Reducing overhead lighting
could also result in substantial energy savings.
VDT workstation placement is also important. Terminal screens should be
positioned at right angles to windows, so sunlight is neither directly behind the
monitor nor behind the operator, where it will reflect off the screen. If this is
infeasible, blinds or drapes should be installed. Screens should also be positioned
between rows of overhead fixtures, which can be fitted with baffles or parabolic
louvers to project light downward rather than horizontally into the eyes or terminal
screens.
Some users have found filters placed in front of the screen to be effective in
reducing reflections; however, some dimming or blurring of the display may result.
Experts advise trial and error, since the best solution appears to depend upon specific
conditions and user preferences. Finally, if you wear glasses or contact lenses, be sure
your physician is aware of the amount of terminal work you do; special lenses are
sometimes necessary. Bifocals, in particular, are not recommended for extensive
terminal work, since the unnatural neck position compresses the cervical vertebrae.
Breaks and exercises
Working in the same position for too long causes tension buildup and is thought
to increase the risk of repetitive motion injuries, such as carpal tunnel syndrome.
Remedies include changing postures frequently, performing other work interspersed
with computing (some studies recommend a 10-15 minute break from the keyboard
every hour), and doing exercises such as tightening and releasing fists and rotating
arms and hands to increase circulation. Be aware, also, that the extra stress created by
deadline pressure exacerbates the effects of long hours at the computer.
Radiation hazards
For at least a decade, concerns have been raised about possible effects of
radiation from video display terminals, including cancer and miscarriages. Earlier
fears about ionizing radiation, such as X rays, have been laid to rest, since these rays
are blocked by modern glass screens. Also well below exposure standards are
ultraviolet, infrared, and ultrasound radiation.
More recent controversy surrounds very low frequency (VLF) and extremely
low frequency (ELF) electromagnetic radiation produced by video displays’
horizontal and vertical deflection circuits, respectively. Researchers have reported a
number of ways that electromagnetic fields can affect biological functions, including
changes in hormone levels, alterations in binding of ions to cell membranes, and
modification of biochemical processes inside the cell. It is not clear, however,
whether these biological effects translate into health effects.
Several epidemiological studies have found a correlation between VDT use and
adverse pregnancy outcomes, whereas other studies found no effect. The most recent
analysis, published this year, found no increased risk of spontaneous abortions
associated with VDT use and exposure to electromagnetic fields in a survey of 2,430
telephone operators. This study, which measured actual electromagnetic field strength
rather than relying on retrospective estimates, seems the most trustworthy to date.
The authors note, however, that they surveyed only women between 18 and 33 years
of age and did not address physical or psychological stress factors.
A 1990 Macworld article by noted industry critic, Paul Brodeur, proposed that
users maintain the following distances to minimize VLF and ELF exposure:
• 28 inches or more from the video screen
• 48 inches or more from the sides and backs of any VDTs.
Although these guidelines seem overly cautious, a fundamental principle is that
magnetic field strength diminishes rapidly with distance. Users could, for example,
select fonts with larger point sizes to permit working farther from the screen.
Remember that magnetic fields penetrate walls.
Over-reaction to ELF and VLF radiation can also compromise ergonomics. In a
campus computer lab, for example, all displays and keyboards were angled thirty
degrees from the front of desktops to reduce the radiation exposure of students behind
the machines. The risks of poor working posture in this case appear to be greater than
the radiation risks.
A final form of radiation, static electricity, can cause discomfort by bombarding
the user with ions that attract dust particles, leading to eye and skin irritations. Antistatic pads, increasing humidity, and grounded glare screens are effective remedies
for these symptoms.
Avoiding carpal tunnel syndrome: A guide for computer keyboard users
Carpal tunnel syndrome (CTS) is a painful, debilitating condition. It involves the
median nerve and the flexor tendons that extend from the forearm into the hand
through a “tunnel” made up of the wrist bones, or carpals, and the transverse carpal
ligament. As you move your hand and fingers, the flexor tendons rub against the
sides of the tunnel. This rubbing can cause irritation of the tendons, causing them to
swell. When the tendons swell they apply pressure to the median nerve. The result
can be tingling, numbness, and eventually debilitating pain.
CTS affects workers in many fields. It is common among draftsmen,
meatcutters, secretaries, musicians, assembly-line workers, computer users,
automotive repair workers, and many others. CTS can be treated with steroids,
anti-inflammatories, or physical therapy, or with surgery to loosen the transverse carpal
ligament. Recovery of wrist and hand function is often, but not always, complete.
Causes
Like many skeletomuscular disorders, CTS has a variety of causes. It is most often
the result of a combination of factors. Among these are:
Genetic predisposition. Certain people are more likely than others to get CTS.
The amount of natural lubrication of the flexor tendons varies from person to person.
The less lubrication, the more likely is CTS. One study has related the cross-sectional
shape of the wrist, and the associated geometry of the carpal tunnel, to CTS. Certain
tunnel geometries are more susceptible to tendon irritation.
Health and lifestyle. People with diabetes, gout, and rheumatoid arthritis are
more prone than others to develop CTS, as are those experiencing hormonal
changes. Job stress has also been linked to an increased likelihood of CTS. And CTS
seems to be more frequent among alcoholics.
Repetitive motion. The most common cause of CTS that’s been attributed to the
workplace is repetitive motion. When you flex your hand or fingers the flexor
tendons rub against the walls of the carpal tunnel. If you allow your hand time to
recover, this rubbing is not likely to lead to irritation. The amount of recovery time
you need varies from fractions of a second to minutes, depending on many
circumstances, including the genetic and health factors mentioned above, as well as
the intensity of the flexing, the weight of any objects in your hand, and the extent to
which you bend your wrist during flexing.
Trauma. A blow to the wrist or forearm can make the tendons swell and cause or
encourage the onset of CTS.
Prevention
Computer keyboard users can take several steps to lower their chances of
developing CTS. Some of these center around the configuration of the workplace, or
«ergonomics.» Others have to do with human factors.
Ergonomics. Proper seating is crucial to good ergonomics. The height of your
seat and the position of your backrest should be adjustable. The chair should be on
wheels so you can move it easily. Arm rests on the chair, though optional, are often
helpful.
Table height. To adjust the chair properly, look first at the height of the table or
desk surface on which your keyboard rests. On the average, a height of 27-29 inches
above the floor is recommended. Taller people will prefer slightly higher tables than
do shorter people. If you can adjust your table, set your waist angle at 90 degrees, then
adjust your table so that your elbow makes a 90-degree angle when your hands are on
the keyboard.
Wrist angle. If your keyboard is positioned properly your wrists should be able
to rest comfortably on the table in front of it. Some keyboards are so «thick» that they
require you to bend your hands uncomfortably upward to reach the keys. If so, it will
help to place a raised wrist rest on the table in front of the keyboard. A keyboard that
requires you to bend your wrists is a common cause of CTS among computer users.
Elbow angle. With your hands resting comfortably at the keyboard and your
upper arms vertical, measure the angle between your forearm and your upper arm
(the elbow angle). If it is less than 90 degrees, raise the seat of your chair. If the angle
is greater than 90 degrees, lower the seat. Try to hold your elbows close to your sides
to help minimize «ulnar displacement» – the sideways bending of the wrist (as when
reaching for the «Z» key).
Waist angle. With your elbow angle at 90 degrees, measure the angle between
your upper legs and your spine (the waist angle). This too should be about 90 degrees.
If it is less than 90 degrees, your chair may be too low (and your knees too high).
Otherwise, you may need to alter the position of the backrest or adjust your own
posture (nothing provides better support than sitting up straight). (Note: If making
your waist angle 90 degrees changes your elbow angle, you may need to readjust the
height of your chair or table.)
Feet. With your elbows and waist at 90-degree angles, your feet should rest
comfortably flat on the floor. If they don’t, adjust your chair and table height and
repeat the steps above. If your table isn’t adjustable and your feet don’t comfortably
reach the floor, a raised footrest can help. Otherwise, you may need a different table.
TEXT 5
BUILDING A WEB-BASED EDUCATION SYSTEM
What is a Web-based classroom?
The use of computers and communication technologies in learning has a history
going back at least 30 years. In that time it has been called by many names, including
computer-mediated communication (CMC), computer conferencing, online learning,
Internet-based learning, and telematics. The advent of the Web provides a new and
interesting environment for CMC that offers a host of new possibilities together with
many of the advantages of previous incarnations but without some of the problems
that have dogged computer-based learning.
A Web-based classroom is an environment created on the World Wide Web in
which students and educators can perform learning-related tasks. A Web-based
classroom is not simply a mechanism for distributing information to students; it also
performs tasks related to communication, student assessment, and class management.
Your imagination and resources are the only limits to how you utilize the Web.
Many of the tools that provide the functionality of the Web-based classroom
have very little to do with the Web at all. A Web-based classroom may use Internet
applications such as e-mail, Usenet News, FTP, and a variety of other computer
applications such as databases. The Web provides the simple, familiar interface by
which the students and educators in a class can access and use these applications.
Client, Server, and Support Software. A large collection of software can be
used in the development, maintenance, and daily activity of a Web-based classroom.
One way of categorizing the software is to use the following three categories:
Support. Software in this category generally has little or no direct
connection with the Web. Instead, it is software the participants use to support
their activity within the Web-based classroom. Some examples include word
processors, graphics programs, and databases.
Client. Students and educators participating in a Web-based classroom do
so via a computer and a collection of client software. The client software
provides the interface to the Web-based classroom that the participants use to
perform tasks and interact in the Web-based classroom. Examples of client
software include Web browsers such as Netscape, e-mail programs such as
Eudora, and programs that provide access to other Internet services such as
chats, MUDs, and videoconferencing.
Server. The client software provides the interfaces the participants use, but
it does not provide a method for supplying the management and distribution of
information required to allow a group of people to communicate and share
information. Management and distribution of information in a Web-based
classroom are the responsibility of the server software. Each of the major
services provided by a Web-based classroom – a Web server, e-mail, mailing
lists, interactive chats, and MUDs – requires a specific server.
Typically, the Web-based classroom participants' computers will provide the
support and client software, while the server software will reside on one or two
central computers. However, this is not always true. It is common for a Web-based
classroom's developers to use one computer for development and to move to a server
on a central computer when finished. During the development stage, the developer's
computer can contain all of the necessary support, client, and server software.
Connections. For a Web-based classroom to work, there must be a connection
between the client and server software (for example, between a student's browser and
the class Web server). Some variation in the types of connections is possible. The
following list breaks the possible connections into four broad categories:
LAN and faster. Most university campuses and businesses have some
form of local area network (LAN). These connections are among the fastest and
most expensive to set up.
Home connections. For most users today, connecting from home means
using a modem and a phone line. Although fast enough for most purposes, a
modem can be quite slow for the retrieval of large documents or multimedia files
(videos and sound clips, for instance). In some parts of the world, ISDN and
cable modems are available: these are approaching the speed of some LANs.
Although less expensive, these connections are usually paid for by students. In
some countries, fees for these connections can be charged on the basis of how
long the user is connected.
Hybrid. The slow speed of modem connections has led to the use of hybrid
connections. In a hybrid Web-based classroom, a CD-ROM is used to distribute
large amounts of information, and a modem connection is used to provide
updates and communication. Hybrid connections are a compromise designed to
address the problems of current technology, yet they increase development cost
due to the need to merge two different environments. As home connection
speeds increase, use of hybrid connections will decrease.
None. From one perspective, this connection category is the hybrid
approach without the network connection. It is important because, regardless of
the hype surrounding the technology, a large proportion of people in the world
still cannot gain access to the Internet. Some of these people have access to
computers and therefore can still use many of the elements of a Web-based
classroom (documents and self tests, for example). However, they will not
benefit from the immediate communication and sharing of information possible
with one of the other three connection categories.
From the point of view of users, a connection is judged on two essential
characteristics: speed and cost. The faster a connection, the less time it takes to
retrieve information from servers and the less time the participant has to wait.
Generally, the faster the connection, the more expensive it is in purely monetary
terms. On the other hand, a slower connection, although financially less expensive,
will cost more in time lost waiting for information to download and possibly result in
frustration. Another important characteristic is how the cost of the connection is
calculated. Construction of a LAN has a large up-front cost, some ongoing
maintenance charges, and usually no usage charges; generally the cost is paid by the
institution. On the other hand, a modem connection from home might cost less than a
few hundred dollars, but it is usually paid for by the student.
Protocols. Client and server software use network connections to communicate
and share information. However, before they can do so they must agree upon a set of
rules that govern how they can communicate; such a set of rules is called a
communications protocol. The Internet uses protocols that belong to the TCP/IP
protocol stack. The stack is a collection of protocols that each belong to one of four
layers. The top layer defines the application protocols that software such as Web
browsers and e-mail programs use to communicate with appropriate servers.
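As a small illustration of the client/server split and the layering just described, the sketch below runs a tiny TCP server and client on one machine with Python's standard socket module. The port number and the message format are arbitrary assumptions, standing in for a real application protocol such as HTTP.

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 8080        # assumed free local port

def tiny_server():
    # Server side: accept one connection and echo back whatever arrives.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)             # application-layer bytes carried by TCP/IP
            conn.sendall(b"server echo: " + data)

if __name__ == "__main__":
    threading.Thread(target=tiny_server, daemon=True).start()
    time.sleep(0.2)                            # give the server a moment to start listening
    # Client side: connect, send a request, read the reply.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"GET /lesson1")
        print(cli.recv(1024).decode())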
How Can You Use It?
What can be done with a Web-based classroom is limited only by the
imagination of the educator and the available resources. Because a Web-based
classroom is an extension of the educator who built it and designed with a particular
situation in mind, the range of possibilities is almost endless. This variety means that
we can't describe all the possibilities of a Web-based classroom within this article.
Furthermore, only now are people starting to fully understand how to use a Web-based classroom without treating it like a horseless carriage–simply doing
electronically what they did physically in the past. The next few years will bring
forward Web-based applications that were not even considered yesterday.
The most important thing to remember about a Web-based education system is
that, like conventional teaching aids such as videos and slide projectors, it cannot
teach the course on its own. It is not intended to replace the role of a teacher but
merely to act as a new form of educational tool.
Most types of classes can be put, in whole or in part, on the Web without any
great impact on the way the class is taught. Most ideal for the Web are courses that
emphasize in-depth coverage and discussion because these can be easily supported,
or given entirely, on the Web. Any course that involves extensive writing on the part
of the student would also be ideal because the student can share ideas and hand in
assignments rapidly using the Web.
It has been found that a Web-based classroom is more suitable in a learner-centered role, meaning that if you make the course information available for the
students to go through at their own pace and provide facilities that allow
communication between the members of the class or the lecturers, you are
encouraging them to take more control of their own education. This approach differs
from the traditional method of education, where students sit in a large lecture theater,
dutifully write down a lecturer's words, and follow a course of learning suggested by
that lecturer. It is worth remembering that students brought up on this force-feeding
education method may have difficulty in adapting to any new method of education.
With careful design and appreciation of the students' difficulties, however, you can
introduce students to a more effective and potentially satisfying way of learning using
a Web-based education system.
TEXT 6
MOVING BYTES
One of the biggest obstacles to buying a new PC is the drudgery of moving all
your programs, files and settings from the old machine to the new one. It can take
days to move every file using disks, then reinstall all your programs and re-create all
the preferences and settings you have built up over the years. You may also have to
download and reapply numerous patches and upgrades to your programs.
This is the sort of thing your operating system ought to handle with ease. But the
«Files and Settings Transfer Wizard» that Microsoft builds into Windows XP doesn't
even try to move software to a new machine. And I have never been able to get it to
work properly even for moving files and settings.
Techies, and those with techies in their employ, sometimes move the entire hard
disk from the old computer into the new one, configuring it as a secondary or «slave»
drive, from which data files, and even programs, with settings intact, can be accessed
as before. But this technique is beyond the knowledge and ability of mainstream
users.
Another option is to buy an external hard disk, attach it to your old PC, and copy
to it all of your key data files and settings – things such as your Web browser
bookmarks. Then you can move this external drive over to your new computer and
copy everything to that new machine's main hard disk. But this procedure still won't
transfer your programs, and it can be costly.
So for most people, the best option lies in so-called migration products –
combinations of software and cables. To use these products, you install the software
on both computers, connect the machines with one of the included cables and select
the stuff you want to move. The software does the rest.
Unfortunately, these products have a spotty track record, mainly because the
cables don't always work well. The best type is a special USB cable with a bulbous
section in the middle containing some transfer circuitry. But while all recent personal
computers have USB ports, some older models don't. So some of these products also
come with a so-called parallel cable, the kind used by old-fashioned printers. The
trouble is, some new computers no longer come with a parallel port built in, since all
new printers use USB.
A third kind of cable is a special type for Ethernet networking called a crossover
cable. This theoretically works on all PCs that have an Ethernet networking port. But
using it properly for file transfers requires changing detailed networking settings in
Windows, a procedure beyond the ability of most users.
In the past, I've recommended a product called Alohabob's PC Relocator Ultra
Control, from Eisenworld ($69.95). It's the only consumer migration product that can
transfer programs as well as files and settings. But the company recently dropped the
superior USB cable from PC Relocator, claiming users had trouble with it. Instead,
the product now comes with an Ethernet crossover cable – which the manual warns
requires networking knowledge to use – and a special «high speed» parallel cable.
Unfortunately, however, this special parallel cable also requires too much
technical skill, in my view. You have to go into the computer's very guts – the setup
menu accessible only before Windows launches – and change a setting to make it
work.
I recently tested PC Relocator and couldn't get either cable to work. The
program succeeded only when I borrowed a USB cable from the box of a competing
program, Detto's IntelliMover. And even then, PC Relocator reported that it couldn't
transfer many of the programs on the old machine, and at least one of the programs it
did transfer didn't work right.
A second program I tested, Miramar Systems' Desktop DNA Professional,
comes with only the network crossover cable ($39 without the cable; $49 with the
cable). No average mainstream user would be able to perform the Windows network
configuration required to make this cable work. And despite my own networking
knowledge, I couldn't do it either.
By contrast, IntelliMover worked like a charm. It set up easily, the included
USB cable worked smoothly and quickly, and all the files and settings I selected were
transferred perfectly. For PCs that can't use the USB cable, IntelliMover is also
available with a parallel cable, which is slow but doesn't demand any setting changes
($49.95 with parallel cable; $59.95 with USB).
Even IntelliMover isn't an ideal solution, however. For instance, it has no cable
that works when the old computer lacks a USB port and the new one lacks a parallel
port. And even if it works well, you still have to reinstall all your programs by hand –
a tedious process.
All of this is much easier in the Apple world. If you are moving up from an old
Macintosh to a new one, and both machines have FireWire ports – common on
Macintoshes – you can just link the two computers with a standard FireWire cable.
No special software is required.
After setting up the new Macintosh, you just reboot it while holding down the
«T» key. That puts the computer in a special mode in which it acts like an extra hard
disk on the old Mac, and it shows up on the old Mac's screen as a hard-drive icon. To
move data files and settings, you simply drag the contents of the «Home» folder from
the old Mac to the «Home» folder on the new one. Most programs can also be
transferred in a similar way, by simply dragging the icons representing them from the
Applications folder on the old machine to the Applications folder on the new one.
Someday, perhaps, Microsoft will come up with something just as simple and
effective for long-suffering Windows users. At least, we can dare to dream.
TEXT 7
THE ROLE OF GOVERNMENT IN THE EVOLUTION OF THE INTERNET
This paper discusses the role of government in the continuing evolution of the
Internet. From its origins as a U.S. government research project, the Internet has
grown to become a major component of network infrastructure, linking millions of
machines and tens of millions of users around the world. Although many nations are
now involved with the Internet in one way or another, this paper focuses on the
primary role the U.S. government has played in the Internet's evolution and discusses
the role that governments around the world may have to play as it continues to
develop.
Very little of the current Internet is owned, operated, or even controlled by
governmental bodies. The Internet indirectly receives government support through
federally funded academic facilities that provide some network-related services.
Increasingly, however, the provision of Internet communication services, regardless
of use, is being handled by commercial firms on a profit-making basis.
This situation raises the question of the proper long-term role for government in
the continued evolution of the Internet. Is the Internet now in a form where
government involvement should cease entirely, leaving private-sector interests to
determine its future? Or, does government still have an important role to play? This
paper concludes that government can still make a series of important contributions.
Indeed, there are a few areas in which government involvement will be vital to the
long-term well-being of the Internet.
Origins of the Internet
The Internet originated in the early 1970s as part of an Advanced Research
Projects Agency (ARPA) research project on «internetworking.» At that time, ARPA
demonstrated the viability of packet switching for computer-to-computer
communication in its flagship network, the ARPANET, which linked several dozen
sites and perhaps twice that number of computers into a national network for
computer science research. Extensions of the packet-switching concept to satellite
networks and to ground-based mobile radio networks were also under development
by ARPA, and segments of industry (notably not the traditional telecommunications
sector) were showing great interest in providing commercial packet network services.
It seemed likely that at least three or four distinct computer networks would exist by
the mid-1970s and that the ability to communicate among these networks would be
highly desirable if not essential.
In a well-known joint effort that took place around 1973, Robert Kahn, then at
ARPA, and Vinton Cerf, then at Stanford, collaborated on the design of an
internetwork architecture that would allow packet networks of different kinds to
interconnect and machines to communicate across the set of interconnected networks.
The internetwork architecture was based on a protocol that came to be known as
TCP/IP. The period from 1974 to 1978 saw four successively refined versions of the
protocol implemented and tested by ARPA research contractors in academia and
industry, with version number four eventually becoming standardized. The TCP/IP
protocol was used initially to connect the ARPANET, based on 50 kilobits per second
(kbps) terrestrial lines; the Packet Radio Net (PRNET), based on dual rate 400/100
kbps spread spectrum radios; and the Packet Satellite Net (SATNET), based on a 64
kbps shared channel on Intelsat IV. The initial satellite Earth stations were in the
United States and the United Kingdom, but subsequently additional Earth stations
were activated in Norway, Germany, and Italy. Several experimental PRNETs were
connected, including one in the San Francisco Bay area. At the time, no personal
computers, workstations, or local area networks were available commercially, and the
machines involved were mainly large-scale scientific time-sharing systems. Remote
access to time-sharing systems was made available by terminal access servers.
The technical tasks involved in constructing this initial ARPA Internet revolved
mainly around the configuration of «gateways,» now known as routers, to connect
different networks, as well as the development of TCP/IP software in the computers.
These were both engineering-intensive tasks that took considerable expertise to
accomplish. By the mid-1980s, industry began offering commercial gateways and
routers and started to make available TCP/IP software for some workstations,
minicomputers, and mainframes. Before this, these capabilities were unavailable;
they had to be handcrafted by the engineers at each site.
In 1979, ARPA established a small Internet Configuration Control Board
(ICCB), most of whose members belonged to the research community, to help with
this process and to work with ARPA in evolving the Internet design. The
establishment of the ICCB was important because it brought a wider segment of the
research community into the Internet decision-making process, which until then had
been the almost-exclusive bailiwick of ARPA. Initially, the ICCB was chaired by a
representative of ARPA and met several times a year. As interest in the ARPA
Internet grew, so did interest in the work of the ICCB.
During this early period, the U.S. government, mainly ARPA, funded research
and development work on networks and supported the various networks in the ARPA
Internet by leasing and buying components and contracting out the system's day-to-day operational management. The government also maintained responsibility for
overall policy. In the mid- to late 1970s, experimental local area networks and
experimental workstations, which had been developed in the research community,
were connected to the Internet according to the level of engineering expertise at each
site. In the early 1980s, Internet-compatible commercial workstations and local area
networks became available, significantly easing the task of getting connected to the
Internet.
The U.S. government also awarded contracts for the support of various aspects
of Internet infrastructure, including the maintenance of lists of hosts and their
addresses on the network. Other government-funded groups monitored and
maintained the key gateways between the Internet networks in addition to supporting
the networks themselves. In 1980, the U.S. Department of Defense (DOD) adopted
the TCP/IP protocol as a standard and began to use it. By the early 1980s, it was clear
that the internetwork architecture that ARPA had created was a viable technology for
wider use in defense.
Emergence of the operational Internet
The DOD had become convinced that if its use of networking were to grow, it
needed to split the ARPA Internet (called ARPANET) in two. One of the resulting
networks, to be known as MILNET, would be used for military purposes and mainly
link military sites in the United States. The remaining portion of the network would
continue to bear the name ARPANET and still be used for research purposes. Since
both would use the TCP/IP protocol, computers on the MILNET would still be able
to talk to computers on the new ARPANET, but the MILNET network nodes would
be located at protected sites. If problems developed on the ARPANET, the MILNET
could be disconnected quickly from it by unplugging the small number of gateways
that connected them. In fact, these gateways were designed to limit the interactions
between the two networks to the exchange of electronic mail, a further safety feature.
By the early 1980s, the ARPA Internet was known simply as the Internet, and
the number of connections to it continued to grow. Recognizing the importance of
networking to the larger computer science community, the National Science
Foundation (NSF) began supporting CSNET, which connected a select group of
computer science researchers to the emerging Internet. This allowed new research
sites to be placed on the ARPANET at NSF's expense, and it allowed other new
research sites to be connected via a commercial network, TELENET, which would be
gatewayed to the ARPANET. CSNET also provided the capacity to support dial-up e-mail connections. In addition, access to the ARPANET was informally extended to
researchers at numerous sites, thus helping to further spread the networking
technology within the scientific community. Also during this period, other federal
agencies with computer-oriented research programs, notably the Department of
Energy (DOE) and the National Aeronautics and Space Administration (NASA),
created their own «community networks.»
The TCP/IP protocol adopted by DOD a few years earlier was only one of many
such standards. Although it was the only one that dealt explicitly with
internetworking of packet networks, its use was not yet mandated on the ARPANET.
However, on January 1, 1983, TCP/IP became the standard for the ARPANET,
replacing the older host protocol known as NCP. This step was in preparation for the
ARPANET-MILNET split, which was to occur about a year later. Mandating the use
of TCP/IP on the ARPANET encouraged the addition of local area networks and also
accelerated the growth in numbers of users and networks. At the same time, it led to a
rethinking of the process that ARPA was using to manage the evolution of the
network.
In 1983, ARPA replaced the ICCB with the Internet Activities Board (IAB). The
IAB was constituted similarly to the old ICCB, but the many issues of network
evolution were delegated to 10 task forces chartered by and reporting to the IAB. The
IAB was charged with assisting ARPA to meet its Internet-related R&D objectives;
the chair of the IAB was selected from the research community supported by ARPA.
ARPA also began to delegate to the IAB the responsibility for conducting the
standards-setting process.
Following the CSNET effort, NSF and ARPA worked together to expand the
number of users on the ARPANET, but they were constrained by the limitations that
DOD placed on the use of the network. By the mid-1980s, however, network
connectivity had become sufficiently central to the workings of the computer science
community that NSF became interested in broadening the use of networking to other
scientific disciplines. The NSF supercomputer centers program represented a major
stimulus to broader use of networks by providing limited access to the centers via the
ARPANET. At about the same time, ARPA decided to phase out its network research
program, only to reconsider this decision about a year later when the seeds for the
subsequent high-performance computer initiative were planted by the Reagan
administration and then-Sen. Albert Gore (D-Tenn.). In this period, NSF formulated a
strategy to assume responsibility for the areas of leadership that ARPA had formerly
held and planned to field an advanced network called NSFNET. NSFNET was to join
the NSF supercomputer centers with very high speed links, then 1.5 megabits per
second (mbps), and to provide members of the U.S. academic community access to
the NSF supercomputer centers and to one another.
Under a cooperative agreement between NSF and Merit, Inc., the NSFNET
backbone was put into operation in 1988 and, because of its higher speed, soon
replaced the ARPANET as the backbone of choice. In 1990, ARPA decommissioned
the last node of the ARPANET. It was replaced by the NSFNET backbone and a
series of regional networks most of which were funded by or at least started with
funds from the U.S. government and were expected to become self-supporting soon
thereafter. The NSF effort greatly expanded the involvement of many other groups in
providing as well as using network services. This expansion followed as a direct
result of the planning for the High Performance Computing Initiative (HPCI), which
was being formed at the highest levels of government. DOD still retained the
responsibility for control of the Internet name and address space, although it
continued to contract out the operational aspects of the system.
The DOE and NASA both rely heavily on networking capability to support their
missions. In the early 1980s, they built High Energy Physics Net (HEPNET) and
Space Physics Analysis Net (SPAN), both based on Digital Equipment Corporation's
DECNET protocols. Later, DOE and NASA developed the Energy Sciences Net
(ESNET) and the NASA Science Internet (NSI), respectively; these networks
supported both TCP/IP and DECNET services. These initiatives were early
influences on the development of the multiprotocol networking technology that was
subsequently adopted in the Internet.
International networking activity was also expanding in the early and mid-1980s. Starting with a number of networks based on the X.25 standard as well as
international links to ARPANET, DECNET, and SPAN, the networks began to
incorporate open internetworking protocols. Initially, Open Systems Interconnection
(OSI) protocols were used most frequently. Later, the same forces that drove the
United States to use TCP/IP – availability in commercial workstations and local area
networks – caused the use of TCP/IP to grow internationally.
The number of task forces under the IAB continued to grow, and in 1989, the
IAB consolidated them into two groups: the Internet Engineering Task Force (IETF)
and the Internet Research Task Force (IRTF). The IETF, which had been formed as
one of the original 10 IAB Task Forces, was given responsibility for near-term
Internet developments and for generating options for the IAB to consider as Internet
standards. The IRTF remained much smaller than the IETF and focused more on
longer-range research issues. The IAB structure, with its task-force mechanism,
opened up the possibility of getting broader involvement from the private sector
without the need for government to pay directly for their participation. The federal
role continued to be limited to oversight control of the Internet name and address
space, the support of IETF meetings, and sponsorship of many of the research
participants. By the end of the 1980s, IETF began charging a nominal attendance fee
to cover the costs of its meetings.
The opening of the Internet to commercial usage was a significant development
in the late 1980s. As a first step, commercial e-mail providers were allowed to use the
NSFNET backbone to communicate with authorized users of the NSFNET and other
federal research networks. Regional networks, initially established to serve the
academic community, had in their efforts to become self-sufficient taken on
nonacademic customers as an additional revenue source. NSF's Acceptable Use
Policy, which restricted backbone usage to traffic within and for the support of the
academic community, together with the growing number of nonacademic Internet
users, led to the formation of two privately funded and competing Internet carriers,
both spin-offs of U.S. government programs. They were UUNET Technologies, a
product of a DOD-funded seismic research facility, and Performance Systems
International (PSI), which was formed by a subset of the officers and directors of
NYSERNET, the NSF-sponsored regional network in New York and the lower New
England states.
Beginning in 1990, Internet use was growing by more than 10 percent a month.
This expansion was fueled significantly by the enormous growth on the NSFNET and
included a major commercial and international component. NSF helped to stimulate
this growth by funding both incremental and fundamental improvements in Internet
routing technology as well as by encouraging the widespread distribution of network
software from its supercomputer centers. Interconnections between commercial and
other networks are arranged in a variety of ways, including through the use of the
Commercial Internet Exchange (CIX), which was established, in part, to facilitate
packet exchanges among commercial service providers.
Recently, the NSF decided that additional funding for the NSFNET backbone no
longer was required. The agency embarked on a plan to make the NSF regional
networks self-supporting over a period of several years. To assure the scientific
research community of continued network access, NSF made competitively chosen
awards to several parties to provide network access points (NAPs) in four cities. NSF
also selected MCI to provide a very high speed backbone service, initially at 155
mbps, linking the NAPs and several other sites, and a routing arbiter to oversee
certain aspects of traffic allocation in this new architecture.
The Internet Society was formed in 1992 by the private sector to help promote
the evolution of the Internet, including maintenance of the Internet standards process.
In 1992, the IAB was reconstituted as the Internet Architecture Board, which became
part of the Internet Society. It delegated its decision-making responsibility on Internet
standards to the leadership of the IETF, known as the Internet Engineering Steering
Group (IESG). While not a part of the Internet Society, the IETF produces technical
specifications as possible candidates for future protocols. The Internet Society now
maintains the Internet Standards Process, and the work of the IETF is carried out
under its auspices.
TEXT 8
CRASH-PROOF COMPUTING
Here's why today's PCs are the most crash-prone computers ever built and how
you can make yours more reliable.
Men are from Mars. Women are from Venus. Computers are from hell.
At least that's how it seems when your system suddenly crashes, wiping out an
hour of unsaved work. But it doesn't have to be that way. Some computers can and do
run for years between reboots. Unfortunately, few of those computers are PCs.
If mainframes, high-end servers, and embedded control systems can chug along
for years without crashing, freezing, faulting, or otherwise refusing to function, then
why can't PCs? Surprisingly, the answer has only partly to do with technology. The
biggest reason why PCs are the most crash-prone computers ever built is that
reliability has never been a high priority – either for the industry or for users. Like a
patient seeking treatment from a therapist, PCs must want to change.
«When a 2000-user mainframe crashes, you don't just reboot it and go on
working», says Stephen Rochford, an experienced consultant in Colorado Springs,
Colorado, who develops custom financial applications. «The customer demands to
know why the system went down and wants the problem fixed. Most customers with
PCs don't have that much clout».
Fortunately, there are signs that everyone is paying slightly more attention to the
problem. Users are getting fed up with time-consuming crashes – not to mention the
complicated fixes that consume even more time – but that's only one factor. For the
PC industry, the prime motives seem to be self-defense and future aspirations.
With regard to self-defense: Vendors are struggling to control technical-support
costs, while alternatives such as network computers (NCs) are making IT
professionals more aware of the hidden expenses of PCs. With regard to future
aspirations: The PC industry covets the prestige and lush profit margins of high-end
servers and mainframes. When the chips are down, high availability must be more
than just a promise.
That's why the PC industry is working on solutions that should make crashes a
little less frequent. We're starting to see OSes that upgrade themselves, applications
that repair themselves, sensors that detect impending hardware failures, development
tools that help programmers write cleaner code, and renewed interest in the time-tested technologies found in mainframes and mission-critical embedded systems. As
a bonus, some of those improvements will make PCs easier to manage, too.
But don't celebrate yet – it's hardly a revolution. Change is coming slowly, and
PCs will remain the least reliable computers for years to come.
Why PCs Crash
Before examining the technical reasons why PCs crash, it's useful to analyze the
psychology of PCs – by far the biggest reason for their misbehavior. The fact is, PCs
were born to be bad.
«The fundamental concept of the personal computer was to make trade-offs that
guaranteed PCs would crash more often,» declares Brian Croll, director of Solaris
product marketing at Sun Microsystems. «The first PCs cut corners in ways that
horrified computer scientists at the time, but the idea was to make a computer that
was more affordable and more compact. Engineering is all about making trade-offs.»
It's not that PC pioneers weren't interested in reliability. It's just that they were
more interested in chopping computers down to size so that everybody could own
one. They scrounged the cheapest possible parts to build the hardware, and they took
dangerous shortcuts when writing the software.
For instance, to wring the most performance out of slow CPUs and a few
kilobytes of RAM, early PCs ran the application program, the OS, and the device
drivers in a common address space in main memory. A nasty bug in any of those
components would usually bring down the whole system. But OS developers didn't
have much choice, because early CPUs had no concept of protected memory or a
kernel mode to insulate the OS from programs running in user mode. All the software
ran in a shared, unprotected address space, where anything could clobber anything
else, bringing the system down.
Ironically, though, the first PCs were fairly reliable, thanks to their utter
simplicity. In the 1970s and early 1980s, system crashes generally weren't as
common as they are today. (This is difficult to document, but almost everyone swears
it's true.) The real trouble started when PCs grew more complex.
Consider the phenomenal growth in code size of a modern OS for PCs:
Windows NT. The original version in 1992 contained 4 million lines of source code –
considered quite a lot at the time. NT 4.0, released in 1996, expanded to 16.5 million
lines. NT 5.0, due this year, will balloon to an estimated 27 million to 30 million
lines. That's about a 700 percent growth in only six years.
However, Russ Madlener, Microsoft's desktop OS product manager, says that
code expansion is manageable if developers expand their testing, too. He says the NT
product group now has two testers for every programmer. «I wouldn't necessarily say
that bugs grow at the same rate as code,» he adds.
It's true that NT is more crash-resistant than Windows 95, a smaller OS that's
been around a lot longer. And both crash less often than the Mac OS, which is older
still. In this case, new technology compensates for NT's youth and girth. NT has more
robust memory protection and rests on a modern kernel, while Windows 95 has more
limited memory protection and totters on the remnants of MS-DOS and Windows
3.1. The Mac OS has virtually no memory protection and allows applications to
multitask cooperatively in a shared address space – a legacy of its origins in the early
1980s.
Still, it will be interesting to see how stable NT remains as it grows fatter. And
grow fatter it will, because nearly everybody wants more features. Software vendors
want more features because they need reasons to sell new products and upgrades.
Chip makers and system vendors need reasons to sell bigger, faster computers.
Computer magazines need new things to write about. Users seem to have an
insatiable demand for more bells and whistles, whether they use them or not.
«The whole PC industry has come to resemble a beta-testing park,» moans Pavle
Bojkavski, a law student at the University of Amsterdam who's frustrated by the
endless cycle of crashes, bug fixes, upgrades, and more crashes. «How about
developing stable computers using older technology? Or am I missing a massive rise
in the number of masochists globally who just love being punished?»
Although there are dozens of technical reasons why PCs crash, it all comes
down to two basic traits: the growth spurt of complexity, which has no end in sight,
and the low emphasis on reliability. Attempts to sell simplified computers (such as
NCs) or scaled-down applications (such as Microsoft Write) typically meet with
resistance in the marketplace. For many users, it seems the stakes aren't high enough
yet.
«If you're using [Microsoft] Word and the system crashes, you lose a little work,
but you don't lose a lot of money, and no one dies,» explains Sun's Croll. «It's a
worthwhile trade-off.»
Causes Behind Crashes
You can sort the technical reasons for crashes into two broad categories:
hardware problems and software problems. Genuine hardware problems are much
less common, but you can't ignore the possibility. One downside to the recent sharp
drop in system prices is that manufacturers are cutting corners more closely than ever
before. Inexpensive PCs aren't necessarily shoddy PCs, but sometimes they are.
Generally, though, when a computer crashes, it's the software that's failed. If it's
an application, you stand to lose your unsaved work in that program, but a good OS
should protect the memory partitions that other programs occupy. Sometimes,
however, the crashed program triggers a cascade of software failures that brings
down the entire system.
So why do programs crash? Chiefly, there are two reasons: A condition arises
that the program's designer didn't anticipate, so the program doesn't handle the
condition; or the program anticipates the condition but then fails to handle it in an
adequate manner.
In a perfect world, every program would handle every possible condition, or at
least it would defer to another program that can handle it, such as the OS. But in the
real world, programmers don't anticipate everything. Sometimes they deliberately
ignore conditions that are less likely to happen – perhaps in trade for smaller code,
faster code, or meeting a deadline. In those cases, the OS is the court of last resort,
the arbiter of disturbances that other programs can't resolve. «At the OS level, you've
got to anticipate the unanticipated, as silly as that sounds,» says Guru Rao, chief
engineer for IBM's System/390 mainframes.
To deal with these dangers, programmers must wrap all critical operations in
code that traps an error within a special subroutine. The subroutine tries to determine
what caused the error and what should be done about it. Sometimes the program can
quietly recover without the user's knowing that anything happened. In other cases, the
program must display an error message asking the user what to do. If the error-handling code fails, or is missing altogether, the program crashes.
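A minimal Java sketch of this idea (the file name and fallback behaviour are illustrative, not taken from the text): the critical operation is wrapped in a try block, and the catch clauses act as the «special subroutine» that decides whether to recover quietly or to report the problem to the user.

    import java.io.FileNotFoundException;
    import java.io.FileReader;
    import java.io.IOException;

    public class ErrorTrapDemo {
        public static void main(String[] args) {
            try {
                // Critical operation wrapped in an error trap.
                FileReader in = new FileReader("settings.cfg");
                // ... read the configuration here ...
                in.close();
            } catch (FileNotFoundException e) {
                // Quiet recovery: fall back to defaults; the user never notices.
                System.err.println("settings.cfg not found - using built-in defaults");
            } catch (IOException e) {
                // Recovery is not possible here, so report and let the user decide.
                System.err.println("Could not read settings: " + e.getMessage());
            }
            // If these handlers were missing, the uncaught exception would
            // terminate the program - exactly the crash described above.
        }
    }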
Autopsy of a Crash
Crash is a vague term used to describe a number of misfortunes. Typically, a
program that crashes is surprised by an exception, caught in an infinite loop, confused
by a race condition, starved for resources, or corrupted by a memory violation.
Exceptions are run-time errors or interrupts that force a CPU to suspend normal
program execution. (Java is a special case: The Java virtual machine [VM] checks for
some run-time errors in software and can throw an exception without involving the
hardware CPU.) For example, if a program tries to open a nonexistent data file, the
CPU returns an exception that means «File not found.» If the program's error-trapping code is poor or absent, the program gets confused. That's when a good OS
should intervene. It probably can't correct the problem behind the scenes, but it can at
least display an error message: «File not found: Are you sure you inserted the right
disk?» However, if the OS's error-handling code is deficient, more dominoes fall, and
eventually the whole system crashes.
Sometimes a program gets stuck in an infinite loop. Due to an unexpected
condition, the program repeatedly executes the same block of code over and over
again. (Imagine a person so stupid that he or she follows literally the instructions on a
shampoo bottle: «Lather. Rinse. Repeat.») To the user, a program stuck in an infinite
loop appears to freeze or lock up. Actually, the program is running furiously.
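The hypothetical Java sketch below (names and behaviour invented for illustration) shows how such a loop arises: the exit condition depends on an operation that never succeeds, so the program spins forever and looks frozen while actually consuming all the CPU time it can get.

    public class InfiniteLoopDemo {
        // Simulates an operation that keeps failing (e.g. a device that never answers).
        static boolean tryOperation() {
            return false;
        }

        public static void main(String[] args) {
            boolean done = false;
            while (!done) {
                // BUG: the only statement that can end the loop is never reached,
                // because tryOperation() never returns true and nothing else
                // changes 'done' or limits the number of attempts.
                if (tryOperation()) {
                    done = true;
                }
            }
            System.out.println("never printed");
        }
    }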
Again, a good OS will intervene by allowing the user to safely stop the process.
But the process schedulers in some OSes have trouble coping with this problem. In
Windows 3.1 and the Mac OS, the schedulers work cooperatively, which means they
depend on processes to cooperate with each other by not hogging all the CPU time.
Windows 95 and NT, OS/2, Unix, Linux, and most other modern OSes allow a
process to preempt another process.
Race conditions are similar to infinite loops, except they're usually caused by
something external to the program. Maybe the program is talking to an external
device that isn't responding as quickly as the program expects – or the program isn't
responsive to the device. Either way, there's a failure to communicate. The software
on each end is supposed to have time-out code to handle this condition, but
sometimes the code isn't there or doesn't work properly.
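As one illustration of the time-out code the passage refers to, the sketch below sets a read time-out on a network socket using Java's standard socket API (the host and port are placeholders), so a silent peer produces a catchable SocketTimeoutException instead of an indefinite hang.

    import java.io.InputStream;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class TimeoutDemo {
        public static void main(String[] args) {
            try (Socket s = new Socket("example.com", 80)) {
                // Give up if the other side does not answer within 5 seconds.
                s.setSoTimeout(5000);
                s.getOutputStream().write("GET / HTTP/1.0\r\n\r\n".getBytes("US-ASCII"));
                InputStream in = s.getInputStream();
                int first = in.read();               // blocks for at most 5 seconds
                System.out.println("first byte: " + first);
            } catch (SocketTimeoutException e) {
                System.err.println("peer did not respond in time - giving up cleanly");
            } catch (Exception e) {
                System.err.println("I/O failure: " + e.getMessage());
            }
        }
    }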
Resource starvation is another way to crash. Usually, the scarce resource is
memory. A program asks the OS for some free memory; if the OS can't find enough
memory at that moment, it denies the request. Again, the program should anticipate
this condition instead of going off and sulking, but sometimes it doesn't. If the
program can't function without the expected resources, it may stop dead in its tracks
without explaining why. To the user, the program appears to be frozen.
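A small Java sketch of anticipating a denied memory request (the sizes are arbitrary and chosen only to provoke the failure): the program catches the error and falls back to a smaller buffer rather than stopping dead.

    public class AllocationDemo {
        public static void main(String[] args) {
            int[] buffer;
            try {
                // Ask for a very large block; the runtime may refuse the request.
                buffer = new int[500_000_000];
            } catch (OutOfMemoryError e) {
                // Anticipated starvation: degrade gracefully instead of freezing.
                System.err.println("not enough memory - continuing with a smaller buffer");
                buffer = new int[1_000_000];
            }
            System.out.println("working with " + buffer.length + " elements");
        }
    }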
Even worse, the program may assume it got the memory it asked for. This
typically leads to a memory violation. When a program tries to use memory it doesn't
legitimately own, it either corrupts a piece of its own memory or attempts to access
memory outside its partition. What happens next largely depends on the strength of
the OS's memory protection. A vigilant OS won't let a program misuse memory.
When the program tries to access an illegal memory address, the CPU throws an
exception. The OS catches the exception, notifies the user with an error message
(«This program has attempted an illegal operation: invalid page fault»), and attempts
to recover. If it can't, it either shuts down the program or lets the user put the program
out of its misery.
Not every OS is so protective. When the OS doesn't block an illegal memory
access, the errant program overwrites memory that it's using for something else, or it
steals memory from another program. The resulting memory corruption usually
sparks another round of exceptions that eventually leads to a crash.
Corruption also occurs when a program miscalculates how much memory it
already has. For instance, a program might try to store some data in the nonexistent
101st element of a 100-element array. When the program overruns the array bounds,
it overwrites another data structure. The next time the program reads the corrupted
data structure, the CPU throws an exception. Wham! Another crash.
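The overrun itself is easy to reproduce, as the short Java fragment below shows. In Java every array access is checked at run time, so the mistake surfaces immediately as an exception rather than silently overwriting a neighbouring data structure the way it can in C.

    public class BoundsDemo {
        public static void main(String[] args) {
            int[] table = new int[100];        // valid indices are 0 .. 99
            try {
                table[100] = 42;               // the nonexistent "101st element"
            } catch (ArrayIndexOutOfBoundsException e) {
                // The runtime catches the overrun at the moment it happens,
                // instead of letting it corrupt another data structure.
                System.err.println("bad index: " + e.getMessage());
            }
        }
    }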
TEXT 9
OBJECT-ORIENTED PROGRAMMING WITH JAVA
Object-oriented programming (OOP) is a programming paradigm that is
fundamentally different from traditional procedural programming styles. It is
centered around the concept of objects – programming constructs that have both
properties and the procedures for manipulating those properties. This approach
models the real world much more closely than conventional programming methods
and is ideal for the simulation-type problems commonly encountered in games.
You're probably already aware that Java is an object-oriented language, but you
might not fully understand what that means. To successfully use Java to write
Internet games, you need to embrace object-oriented programming techniques and
design philosophies. The goal of today's lesson is to present the conceptual aspects of
object-oriented programming as they relate to Java. By the end of today's lesson, you
will fully understand what OOP means to Java and maybe even have some new buzz
words to share with your friends! More important, you will gain some insight into
why the OOP paradigm built into Java is a perfect match for game programming.
The following topics are covered:
• What is OOP?
• OOP and games
• Java and other OOP languages
What Is OOP?
If you've been anywhere near the computer section of a bookstore or picked up a
programming magazine in the last five years, you've certainly seen the hype
surrounding object-oriented programming. It's the most popular programming
technology to come about in a long time, and it all revolves around the concept of an
object. The advent of Java has only served to elevate the hype surrounding OOP. You
might wonder what the big deal is with objects and object-oriented technology? Is it
something you should be concerned with, and if so, why? Is it really that crucial
when working with Java? If you sift through the hype surrounding the whole object-oriented issue, you'll find a very powerful technology that provides a lot of benefits to
software design.
But the question still remains: What is OOP? OOP is an approach to
programming that attempts to bridge the gap between problems in the real world and
solutions in the computer programming world. Prior to OOP, a conceptual stumbling
block always existed for programmers when trying to adapt the real world into the
constraints imposed by a traditional programming language. In the real world, people
tend to think in terms of «things,» but in the pre-OOP programming world people
have been taught to think in terms of blocks of code (procedures) and how they act on
data. These two modes of thinking are very different from each other and pose a
significant problem when it comes to designing complex systems that model the real
world. Games happen to be very good examples of complex systems that often model
the real world.
OOP presents an approach to programming that allows programmers to think in
terms of objects, or things, much like people think of things in the real world. Using
OOP techniques, a programmer can focus on the objects that naturally make up a
system, rather than trying to rationalize the system into procedures and data. The
OOP approach is a very natural and logical application of the way humans already
think.
The benefits of OOP go beyond easing the pain of resolving real world problems
in the computer domain. Another key issue in OOP is code reuse, when you
specifically design objects and programs with the goal of reusing as much of the code
as possible, whenever possible. Fortunately, it works out that the fundamental
approaches to OOP design naturally encourage code reuse, meaning that it doesn't
take much of an extra effort to reuse code after you employ standard OOP tactics.
The OOP design approach revolves around the following major concepts:
• Objects
• Classes
• Encapsulation
• Messages
• Inheritance
To understand how you can benefit from OOP design methods as a game
programmer, you must first take a closer look at what a game really is.
Think of a game as a type of abstract simulation. If you think about most of the
games you've seen or played, it's almost impossible to come up with one that isn't
simulating something. All the adventure games and sports games, and even the far-out space games, are modeling some type of objects present in the real world (maybe
not our world, but some world nevertheless). Knowing that games are models of
worlds, you can make the connection that most of the things (landscapes, creatures,
and so on) in games correspond to things in these worlds. And as soon as you can
organize a game into a collection of «things,» you can apply OOP techniques to the
design. This is possible because things can be translated easily into objects in an OOP
environment.
Look at an OOP design of a simple adventure game as an example. In this
hypothetical adventure game, the player controls a character around a fantasy world
and fights creatures, collects treasure, and so on. You can model all the different
aspects of the game as objects by creating a hierarchy of classes. After you design the
classes, you create them and let them interact with each other just as objects do in real
life.
The world itself probably would be the first class you design. The world class
would contain information such as its map and images that represent a graphical
visualization of the map. The world class also would contain information such as the
current time and weather. All other classes in the game would derive from a
positional class containing coordinate information specifying where in the world the
objects are located. These coordinates would specify the location of objects on the
world map.
The main character class would maintain information such as life points and any
items picked up during the game, such as weapons, lanterns, keys, and so on. The
character class would have methods for moving in different directions based on the
player's input. The items carried by the character also would be objects. The lantern
class would contain information such as how much fuel is left and whether the
lantern is on or off. The lantern would have methods for turning it on and off, which
would cause it to use up fuel.
There would be a general creature class from which all creatures would be
derived. This creature class would contain information such as life points and how
much damage the creature inflicts when fighting. It would have methods for moving
in different directions. Unlike the character class, however, the creature's move
methods would be based on some type of intelligence programmed into the creature
class. The mean creatures might always go after the main character if they are on the
same screen together, for example, but passive creatures might just ignore the main
character. Derived creature classes would add extra attributes such as the capability to
swim or fly.
I've obviously left out a lot of detail in the descriptions of these hypothetical
objects. This is intentional because I want to highlight the benefit of the object
approach, not the details of fully implementing a real example.
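To make the shape of such a design concrete, here is a compressed and entirely hypothetical Java sketch of the hierarchy described above (the names, numbers, and behaviour are invented for illustration only, not part of the original example):

    import java.util.ArrayList;
    import java.util.List;

    // Anything placed in the world carries coordinates on the world map.
    class Positional {
        int x, y;
        Positional(int x, int y) { this.x = x; this.y = y; }
        void moveTo(int nx, int ny) { x = nx; y = ny; }
    }

    // The world itself: map data, current time, weather, and so on.
    class World {
        int currentTime;
        String weather = "clear";
    }

    // An item the character can pick up and carry.
    abstract class Item { }

    // A lantern knows its fuel level and whether it is lit.
    class Lantern extends Item {
        int fuel = 100;
        boolean on = false;
        void turnOn()  { if (fuel > 0) on = true; }
        void turnOff() { on = false; }
        void burn()    { if (on && --fuel == 0) on = false; }
    }

    // The player's character: life points plus the items picked up so far.
    class PlayerCharacter extends Positional {
        int lifePoints = 100;
        List<Item> items = new ArrayList<Item>();
        PlayerCharacter(int x, int y) { super(x, y); }
    }

    // General creature; subclasses supply the "intelligence" for movement.
    abstract class Creature extends Positional {
        int lifePoints, damage;
        Creature(int x, int y, int lifePoints, int damage) {
            super(x, y);
            this.lifePoints = lifePoints;
            this.damage = damage;
        }
        abstract void takeTurn(PlayerCharacter player);
    }

    // A mean creature chases the character; a passive one would simply ignore it.
    class MeanCreature extends Creature {
        MeanCreature(int x, int y) { super(x, y, 30, 5); }
        void takeTurn(PlayerCharacter player) {
            moveTo(x + Integer.signum(player.x - x), y + Integer.signum(player.y - y));
        }
    }

    class AdventureDemo {
        public static void main(String[] args) {
            PlayerCharacter hero = new PlayerCharacter(0, 0);
            hero.items.add(new Lantern());
            Creature orc = new MeanCreature(5, 5);
            orc.takeTurn(hero);              // the orc steps toward the hero
            System.out.println("orc is now at (" + orc.x + ", " + orc.y + ")");
        }
    }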
Java and Other OOP Languages
You've learned that OOP has obvious advantages over procedural approaches,
especially when it comes to games. OOP was conceived from the ground up with the
intention of simulating the real world. However, in the world of game programming,
the faster language has traditionally always won. This is evident by the amount of
assembly language still being written in the commercial game-development
community. No one can argue the fact that carefully written assembly language is
faster than C, and that even more carefully written C is sometimes faster than C++.
And unfortunately, Java ranks a distant last behind all these languages in terms of
efficiency and speed.
However, the advantages of using Java to write games stretch far beyond the
speed benefits provided by these faster languages. This doesn't mean that Java is
poised to sweep the game community as the game development language of choice;
far from it! It means that Java provides an unprecedented array of features that scale
well to game development. The goal for Java game programmers is to write games in
the present within the speed limitations of Java, while planning games for the future
that will run well when faster versions of Java are released.
In fact, two separate speed issues are involved in Java game programming. The
first is the issue of the speed of the Java language and runtime environment, which
will no doubt improve as better compilers and more efficient versions of the runtime
environment are released. The second issue is that of Internet connection speed, which
is limited by the speed of the modem or physical line used to connect to the Internet.
Both of these issues are important, but they impact Java games in different ways: the first
speed limitation affects how fast a game runs, while the second limitation affects how
fast a game loads.
Due to languages such as Smalltalk, which treats everything as an object (an
impediment for simple problems), and their built-in memory-allocation handling (a
sometimes very slow process), OOP languages have developed a reputation for being
slow and inefficient. C++ remedied this situation in many ways but brought with it
the pitfalls and complexities of C, which are largely undesirable in a distributed
environment such as the Internet. Java includes many of the nicer features of C++,
but incorporates them in a simpler and more robust language.
The current drawback to using Java for developing games is the speed of Java
programs, which is significantly slower than C++ programs because Java programs
execute in an interpreted fashion. The just-in-time compilation enhancements
promised in future versions of Java should help remedy this problem.
Currently, Java programs are interpreted, meaning that they go through a
conversion process as they are being run. Although this interpreted approach is
beneficial because it allows Java programs to run on different types of computers, it
greatly affects the speed of the programs. A promising solution to this problem is
just-in-time compilation, which is a technique in which a Java program is compiled into
an executable native to a particular type of computer before being run.
Today, Java is still not ready for prime time when it comes to competing as a
game programmer's language. It just isn't possible yet in the current release of Java to
handle the high-speed graphics demanded by commercial games. To alleviate this
problem, you have the option of integrating native C code into Java programs. This
might or might not be a workable solution, based on the particular needs of a game.
Regardless of whether Java can compete as a high-speed gaming language, it is
certainly capable of meeting the needs of many other types of games that are less
susceptible to speed restrictions.
TEXT 10
THE JPEG STILL PICTURE COMPRESSION STANDARD
Introduction
Advances over the past decade in many aspects of digital technology –
especially devices for image acquisition, data storage, and bitmapped printing and
display – have brought about many applications of digital imaging. However, these
applications tend to be specialized due to their relatively high cost. With the possible
exception of facsimile, digital images are not commonplace in general-purpose
computing systems the way text and geometric graphics are. The majority of modern
business and consumer usage of photographs and other types of images takes place
through more traditional analog means.
The key obstacle for many applications is the vast amount of data required to
represent a digital image directly. A digitized version of a single, color picture at TV
resolution contains on the order of one million bytes; 35mm resolution requires ten
times that amount. Use of digital images often is not viable due to high storage or
transmission costs, even when image capture and display devices are quite affordable.
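For a rough sense of the numbers involved (the frame size below is an assumption, chosen only to match the order of magnitude quoted above): a 640 × 480 frame with 24-bit color needs 640 × 480 × 3 = 921,600 bytes, i.e. close to one megabyte, before any compression is applied.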
Modern image compression technology offers a possible solution. State-of-the-art techniques can compress typical images from 1/10 to 1/50 their uncompressed
size without visibly affecting image quality. But compression technology alone is not
sufficient. For digital image applications involving storage or transmission to become
widespread in today's marketplace, a standard image compression method is needed
to enable interoperability of equipment from different manufacturers.
For the past few years, a standardization effort known by the acronym JPEG, for
Joint Photographic Experts Group, has been working toward establishing the first
international digital image compression standard for continuous-tone (multilevel) still
images, both grayscale and color.
Photovideotex, desktop publishing, graphic arts, color facsimile, newspaper
wirephoto transmission, medical imaging, and many other continuous-tone image
applications require a compression standard in order to develop significantly beyond
their present state. JPEG has undertaken the ambitious task of developing a general-purpose compression standard to meet the needs of almost all continuous-tone still-image applications.
If this goal proves attainable, not only will individual applications flourish, but
exchange of images across application boundaries will be facilitated. This latter
feature will become increasingly important as more image applications are
implemented on general-purpose computing systems, which are themselves becoming
increasingly interoperable and internetworked. For applications which require
specialized VLSI to meet their compression and decompression speed requirements, a
common method will provide economies of scale not possible within a single
application.
This article gives an overview of JPEG's proposed image-compression standard.
Readers without prior knowledge of JPEG or compression based on the Discrete
Cosine Transform (DCT) are encouraged to study first the detailed description of the
Baseline sequential codec, which is the basis for all of the DCT-based decoders.
While this article provides many details, many more are necessarily omitted.
Some of the earliest industry attention to the JPEG proposal has been focused on
the Baseline sequential codec as a motion image compression method – of the
«intraframe» class, where each frame is encoded as a separate image.
Background: Requirements and Selection Process
JPEG's goal has been to develop a method for continuous-tone image
compression which meets the following requirements:
1) be at or near the state of the art with regard to compression rate and
accompanying image fidelity, over a wide range of image quality ratings, and
especially in the range where visual fidelity to the original is rated «very good» to
«excellent»; the method should also be parameterizable, so that the
application (or user) can set the desired compression/quality tradeoff;
2) be applicable to practically any kind of continuous-tone digital source
image (i.e. for most practical purposes not be restricted to images of certain
dimensions, color spaces, pixel aspect ratios, etc.) and not be limited to classes
of imagery with restrictions on scene content, such as complexity, range of
colors, or statistical properties;
3) have tractable computational complexity, to make feasible software
implementations with viable performance on a range of CPUs, as well as
hardware implementations with viable cost for applications requiring high
performance;
4) have the following modes of operation:
• Sequential encoding: each image component is encoded in a
single left-to-right, top-to-bottom scan;
• Progressive encoding: the image is encoded in multiple scans for
applications in which transmission time is long, and the viewer prefers to
watch the image build up in multiple coarse-to-clear passes;
• Lossless encoding: the image is encoded to guarantee exact
recovery of every source image sample value (even though the result is
low compression compared to the lossy modes);
• Hierarchical encoding: the image is encoded at multiple
resolutions so that lower-resolution versions may be accessed without
first having to decompress the image at its full resolution.
TEXT 11
CRYPTOGRAPHY
Cryptography is the science and art of secret writing – keeping information
secret. When applied in a computing environment, cryptography can protect data
against unauthorized disclosure; it can authenticate the identity of a user or program
requesting service; and it can disclose unauthorized tampering.
Cryptanalysis is the related study of breaking ciphers. Cryptology is the
combined study of cryptography and cryptanalysis.
Cryptography is an indispensable part of modern computer security.
A Brief History of Cryptography
Knowledge of cryptography can be traced back to ancient times. It's not difficult
to understand why: as soon as three people had mastered the art of reading and
writing, there was the possibility that two of them would want to send letters to each
other that the third could not read.
In ancient Greece, the Spartan generals used a form of cryptography so that the
generals could exchange secret messages: the messages were written on narrow
ribbons of parchment that were wound spirally around a cylindrical staff called a
scytale. After the ribbon was unwound, the writing on it could only be read by a
person who had a matching cylinder of exactly the same size. This primitive system
did a reasonably good job of protecting messages from interception and from the
prying eyes of the message courier as well.
In modern times, cryptography's main role has been in securing electronic
communications. Soon after Samuel F. B. Morse publicly demonstrated the telegraph
in 1845, users of the telegraph began worrying about the confidentiality of the
messages that were being transmitted. What would happen if somebody tapped the
telegraph line? What would prevent unscrupulous telegraph operators from keeping a
copy of the messages that they relayed and then divulging them to others? The
answer was to encode the messages with a secret code, so that nobody but the
intended recipient could decrypt them.
Cryptography became even more important with the invention of radio, and its
use in war. Without cryptography, messages transmitted to or from the front lines
could easily be intercepted by the enemy.
Code Making and Code Breaking
As long as there have been code makers, there have been code breakers. Indeed,
the two have been locked in a competition for centuries, with each advance on one
side being matched by counter-advances on the other.
For people who use codes, the code-breaking efforts of cryptanalysts pose a
danger that is potentially larger than the danger of not using cryptography in the first
place. Without cryptography, you might be reluctant to send sensitive information
through the mail, across a telex, or by radio. But if you think that you have a secure
channel of communication, then you might use it to transmit secrets that should not
be widely revealed.
For this reason, cryptographers and organizations that use cryptography
routinely conduct their own code-breaking efforts to make sure that their codes are
resistant to attack. The findings of these self-inflicted intrusions are not always
pleasant. The following brief story from a 1943 book on cryptography demonstrates
this point quite nicely.
The importance of the part played by cryptographers in military operations was
demonstrated to us realistically in the First World War. One instructive incident
occurred in September 1918, on the eve of the great offensive against Saint-Mihiel. A
student cryptographer, fresh from Washington, arrived at United States Headquarters
at the front. Promptly he threw the General Staff into a state of alarm by decrypting
with comparative ease a secret radio message intercepted in the American sector.
The smashing of the German salient at Saint-Mihiel was one of the most
gigantic tasks undertaken by the American forces during the war. For years that
salient had stabbed into the Allied lines, cutting important railways and
communication lines. Its lines of defense were thought to be virtually impregnable.
But for several months the Americans had been making secret preparations for
attacking it and wiping it out. The stage was set, the minutest details of strategy had
been determined – when the young officer of the United States Military Intelligence
spread consternation through our General Staff.
The dismay at Headquarters was not caused by any new information about the
strength of the enemy forces, but by the realization that the Germans must know as
much about our secret plans as we did ourselves – even the exact hour set for the
attack. The intercepted message had been from our own base. German cryptographers
were as expert as any in the world, and what had been done by an American student
cryptographer could surely have been done by German specialists.
The revelation was even more bitter because the cipher the young officer had
broken, without any knowledge of the system, was considered absolutely safe and
had long been used for most important and secret communications.
Cryptography and Digital Computers
Modern digital computers are, in some senses, the creations of cryptography.
Some of the first digital computers were built by the Allies to break messages that
had been encrypted by the Germans with electromechanical encrypting machines.
Code breaking is usually a much harder problem than code making; after the
Germans switched codes, the Allies often took several months to discover the new
coding systems. Nevertheless, the codes were broken, and many historians say that
World War II was shortened by at least a year as a result.
Things really picked up when computers were turned to the task of code making.
Before computers, all of cryptography was limited to two basic techniques:
transposition, or rearranging the order of letters in a message (such as the Spartan's
scytale), and substitution, or replacing one letter with another one. The most
sophisticated pre-computer cipher used five or six transposition or substitution
operations, but rarely more.
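As a toy illustration of substitution (a generic shift cipher, not one of the historical systems described in the text), the Java sketch below replaces every letter with the one a fixed number of places further down the alphabet and shifts it back to decrypt:

    public class CaesarDemo {
        // Substitution: replace each letter with the one k places later (A -> D for k = 3).
        static String shift(String text, int k) {
            StringBuilder out = new StringBuilder();
            for (char c : text.toCharArray()) {
                if (c >= 'A' && c <= 'Z') {
                    out.append((char) ('A' + ((c - 'A' + k) % 26 + 26) % 26));
                } else {
                    out.append(c);            // leave spaces and punctuation alone
                }
            }
            return out.toString();
        }

        public static void main(String[] args) {
            String plain  = "ATTACK AT DAWN";
            String cipher = shift(plain, 3);   // encryption
            String back   = shift(cipher, -3); // decryption
            System.out.println(cipher + " / " + back);
        }
    }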
With the coming of computers, ciphers could be built from dozens, hundreds, or
thousands of complex operations, and yet could still encrypt and decrypt messages in
a short amount of time. Computers have also opened up the possibility of using
complex algebraic operations to encrypt messages. All of these advantages have had
a profound impact on cryptography.
Modern Controversy
In recent years, encryption has gone from being an arcane science and the stuff
of James Bond movies, to being the subject of debate in several nations (but we'll
focus on the case in the U.S. in the next few paragraphs). In the U.S. that debate is
playing itself out on the front pages of newspapers such as The New York Times and
the San Jose Mercury News.
On one side of the debate are a large number of computer professionals, civil
libertarians, and perhaps a majority of the American public, who are rightly
concerned about their privacy and the secrecy of their communications. These people
want the right and the ability to protect their data with the most powerful encryption
systems possible.
On the other side of the debate are the United States Government, members of
the nation's law enforcement and intelligence communities, and (apparently) a small
number of computer professionals, who argue that the use of cryptography should be
limited because it can be used to hide illegal activities from authorized wiretaps and
electronic searches.
MIT Professor Ronald Rivest has observed that the controversy over
cryptography fundamentally boils down to one question: should the citizens of a
country have the right to create and store documents which their government cannot
read?
What Is Encryption?
Encryption is a process by which a message (called plaintext) is transformed
into another message (called ciphertext) using a mathematical function and a special
encryption password, called the key.
Decryption is the reverse process: the ciphertext is transformed back into the
original plaintext using a mathematical function and a key.
Indeed, the only way to decrypt the encrypted message and get printable text is
by knowing the secret key nosmis. If you don't know the key, and you don't have
access to a supercomputer, you can't decrypt the text. If you use a strong encryption
system, even the supercomputer won't help you.
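A minimal sketch of the idea using Java's standard javax.crypto API (AES is used here only as a convenient modern algorithm, not the specific system discussed in the text): the same key that turns the plaintext into ciphertext is needed to turn the ciphertext back into readable text.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class EncryptDemo {
        public static void main(String[] args) throws Exception {
            // The key: a randomly generated 128-bit AES key.
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey key = kg.generateKey();

            Cipher cipher = Cipher.getInstance("AES");
            byte[] plaintext = "attack at dawn".getBytes("UTF-8");

            // Encryption: plaintext -> ciphertext under the key.
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] ciphertext = cipher.doFinal(plaintext);

            // Decryption: only the same key recovers the original message.
            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] recovered = cipher.doFinal(ciphertext);

            System.out.println(new String(recovered, "UTF-8"));   // prints: attack at dawn
        }
    }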
What You Can Do with Encryption
Encryption can play a very important role in your day-to-day computing and
communicating:
• Encryption can protect information stored on your computer from unauthorized
access – even from people who otherwise have access to your computer
system.
• Encryption can protect information while it is in transit from one computer
system to another.
• Encryption can be used to deter and detect accidental or intentional alterations
in your data.
• Encryption can be used to verify whether or not the author of a document is
really who you think it is.
Despite these advantages, encryption has its limits:
• Encryption can't prevent an attacker from deleting your data altogether.
• An attacker can compromise the encryption program itself. The attacker might
modify the program to use a key different from the one you provide, or might
record all of the encryption keys in a special file for later retrieval.
• An attacker might find a previously unknown and relatively easy way to
decode messages encrypted with the algorithm you are using.
• An attacker could access your file before it is encrypted or after it is decrypted.
For all these reasons, encryption should be viewed as a part of your overall
computer security strategy, but not as a substitute for other measures such as proper
access controls.
The Elements of Encryption
There are many different ways that you can use a computer to encrypt or decrypt
information. Nevertheless, each of these so-called encryption systems shares common
elements:
Encryption algorithm
The encryption algorithm is the function, usually with some mathematical
foundations, which performs the task of encrypting and decrypting your data.
Encryption keys
Encryption keys are used by the encryption algorithm to determine how
data is encrypted or decrypted. Keys are similar to computer passwords: when
a piece of information is encrypted, you need to specify the correct key to
access it again. But unlike a password program, an encryption program doesn't
compare the key you provide with the key you originally used to encrypt the
file, and grant you access if the two keys match. Instead, an encryption
program uses your key to transform the ciphertext back into the plaintext. If
you provide the correct key, you get back your original message.
Key length
As with passwords, encryption keys have a predetermined length. Longer
keys are more difficult for an attacker to guess than shorter ones because there
are more of them to try in a brute-force attack. Different encryption systems
allow you to use keys of different lengths; some allow you to use variable-length keys (see the key-space figures after this list).
Plaintext
The information which you wish to encrypt.
Ciphertext
The information after it is encrypted.
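The key-space figures promised above can be computed with a short Java fragment (the key lengths are chosen only as familiar examples): every extra bit doubles the number of keys a brute-force attacker has to try.

    import java.math.BigInteger;

    public class KeySpaceDemo {
        public static void main(String[] args) {
            // 2^n possible keys for an n-bit key.
            System.out.println("40-bit key:  " + BigInteger.ONE.shiftLeft(40)  + " keys");   // about 1.1 x 10^12
            System.out.println("56-bit key:  " + BigInteger.ONE.shiftLeft(56)  + " keys");   // about 7.2 x 10^16
            System.out.println("128-bit key: " + BigInteger.ONE.shiftLeft(128) + " keys");   // about 3.4 x 10^38
        }
    }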
Cryptographic Strength
Different forms of cryptography are not equal. Some systems are easily
circumvented, or broken. Others are quite resistant to even the most determined
attack. The ability of a cryptographic system to protect information from attack is
called its strength. Strength depends on many factors, including:
• The secrecy of the key.
• The difficulty of guessing the key or trying out all possible keys (a key search).
Longer keys are generally harder to guess or find.
• The difficulty of inverting the encryption algorithm without knowing the
encryption key (breaking the encryption algorithm).
• The existence (or lack) of back doors, or additional ways by which an
encrypted file can be decrypted more easily without knowing the key.
• The ability to decrypt an entire encrypted message if you know the way that a
portion of it decrypts (called a known text attack).
• The properties of the plaintext and knowledge of those properties by an
attacker. (For example, a cryptographic system may be vulnerable to attack if
all messages encrypted with it begin or end with a known piece of plaintext.
These kinds of regularities were used by the Allies to crack the German
Enigma cipher during the Second World War.)
The goal in cryptographic design is to develop an algorithm that is so difficult to
reverse without the key that it is at least roughly equivalent to the effort required to
guess the key by trying possible solutions one at a time. We would like this property
to hold even when the attacker knows something about the contents of the messages
encrypted with the cipher. Some very sophisticated mathematics are involved in such
design.
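A rough sketch of the key-search idea, reusing the toy XOR cipher from the earlier sketch: with a deliberately tiny 2-byte key, trying every candidate (helped by a known piece of plaintext, as in the known text attack mentioned above) succeeds almost instantly, whereas a 16-byte key would mean roughly 3.4e38 candidates.

# A sketch of a brute-force key search against the toy XOR cipher shown
# earlier, illustrating why longer keys are harder to find.
from itertools import cycle, product

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

ciphertext = xor_cipher(b"launch at dawn", b"zq")   # a deliberately tiny 2-byte key

# Try all 256**2 = 65,536 possible 2-byte keys, using a known piece of the
# plaintext ("launch") to recognise the correct one (a known text attack).
for a, b in product(range(256), repeat=2):
    candidate = bytes([a, b])
    if xor_cipher(ciphertext, candidate).startswith(b"launch"):
        print("key found:", candidate)               # recovered almost instantly
        break

# A 16-byte (128-bit) key would give 256**16 (about 3.4e38) candidates,
# far beyond any exhaustive search.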
APPENDIX
What are an annotation and an abstract?
An annotation (from Latin annotatio, 'remark'; English: summary, annotation, abstract) is a brief characterization of the content of a printed work or manuscript. By content and purpose, annotations may be reference annotations, which describe the subject matter of a document and give some information about it without evaluating it critically, or recommendatory annotations, which assess the document's suitability for a particular category of readers. By the coverage of the annotated document's content and the intended readership, a distinction is made between general annotations, which characterize the document as a whole and are aimed at a broad readership; specialized annotations, which describe the document only in those aspects of interest to a narrow specialist; and descriptive annotations.
An annotation records only the essential features of the document's content, i.e. those that reveal its scientific and practical significance and its novelty and that distinguish it from other documents close to it in subject and purpose.
When writing an annotation, do not retell the content of the document. Keep complex constructions and the use of personal and demonstrative pronouns to a minimum. An annotation is 500 to 1000 characters long.
An annotation consists of:
1. the bibliographic description;
2. information about the author: academic degree, title, scientific school, etc. (detailed information about the author is not a required element of an annotation);
3. the specific form of the annotated document: monograph, textbook, study guide, etc.;
4. the subject matter and its main characteristics: topic, key concepts, processes, the place and time in which these processes occur, etc.;
5. the distinctive features of the document compared with works related in subject and purpose: whatever is new in its content, for example the statement of a problem, the solution of a particular question, a new method, a synthesis of data from different sources, a new assessment of facts, a new concept or hypothesis, etc.;
6. the intended readership: to whom the book or article is addressed.
An annotation serves only to inform the reader that a document of a certain content and character exists. The essence of annotating is the ability to summarize the content of a document concisely.
Outline of a descriptive annotation
1. Introductory part.
The author; the title of the work in the foreign language and its translation; the title of the journal in the foreign language, its issue number and year of publication; the name of the company (in the foreign language) for a patent or catalogue; the publisher for a book; the number of pages, tables and figures; the language the work is written in; the bibliography.
2. Descriptive part.
The work reports on (+ noun) ...
... is described in detail.
... is discussed briefly.
3. Concluding part.
A general conclusion that singles out the main point of the work. A recommendation may be added as to whom the work will be useful.
Skimming the work is enough for writing an annotation; this is usually done without a dictionary.
An abstract (Russian реферат, from Latin refero, 'I report'; English: essay, précis) is a brief account, in writing or in the form of a public presentation, of the content of a scholarly work (or works) or of the literature on a topic. Among the many kinds of abstracts, specialized abstracts deserve particular mention: their exposition is aimed at specialists in a particular field or activity and takes those specialists' needs into account.
An abstract serves a variety of purposes. Its functions are the following:
1. it answers the question of what main information the summarized document contains;
2. it describes the primary document;
3. it announces the publication and availability of the corresponding primary documents;
4. it serves as a source of reference data.
An abstract is also an independent means of scientific information in its own right and may be delivered as an oral report. For all their variety, abstracts share certain common features. An abstract contains no proofs, argumentation or historical digressions. The material is presented as guidance or as a description of facts. The information is conveyed accurately and concisely, without distortion or subjective judgements. Conciseness is achieved mainly through the predominant use of terminological vocabulary, and also through language-condensing devices (tables, formulas, illustrations).
An abstract usually includes the following parts:
1. the bibliographic description of the primary document;
2. the abstract proper (the text of the abstract);
3. the reference apparatus, i.e. additional information and notes.
The text of an abstract should follow this plan:
a) the aim and methods of the research (study) or development;
b) specific data on the subject of the research (study) or development and on the properties being studied;
c) the temporal and spatial characteristics of the research;
d) the results and conclusions.
The title of the abstract should not be repeated in the text. As in an annotation, superfluous introductory phrases should be avoided. The length of an abstract is roughly 10-15% of the length of the article being summarized; where necessary it may be longer. Abstracting requires skill in condensing the text of the primary document.
The title of an abstract may take one of two forms:
a) the title of the abstract is an exact Russian translation of the title of the primary document published in English, for example: Национальная информационная система по физике. Koch H.W. A national information system for physics. «Phys. Today», 1998, No. 4 (in English);
b) the title of the abstract is an interpretive translation of the title of the primary document, used when that title reflects the document's main content inaccurately or incompletely. In this case the title of the abstract is placed in square brackets, for example: [О месте информации среди социальных наук и о причинах, препятствующих её развитию] Batten W. K. We know the enemy – do we know the friends? «Libr. J», 1998, No. 5 (in English).
A title of this kind should be formulated only after the essence of the primary document has been fully grasped and the abstract has been written.
Terminology. An abstract should use the scientific terminology accepted in the literature of the given branch of science and technology. Foreign terms should not be used where equivalent Russian terms exist.
Formulas. Formulas should appear in the text of an abstract:
a) when the text of the abstract cannot be written without them;
b) when they express the results of the work described in the primary document;
c) when they significantly help the reader to understand the content of the primary document.
Units of measurement are converted to the International System of Units (SI). Illustrations (drawings, maps, diagrams, photographs) and tables may be included in the abstract, in full or in part, if they reflect the main content of the primary document and help to shorten the text of the abstract.
Surnames in the text of an abstract are, as a rule, given in English. The surnames of foreign scientists well known in Russia are written in Russian transcription, e.g. закон Бойля-Мариотта (the Boyle-Mariotte law). Geographical names are given in Russian transcription according to the latest edition of the «Атлас мира» (World Atlas). Country names follow the established abbreviations, e.g. США (USA). The names of companies, institutions and organizations are given in their original spelling, with the country in parentheses after the name, e.g. Lakheed (США).
References are made in the text of an abstract:
a) when the primary document discusses the content of another document;
b) when the primary document is a continuation of a previously published document.
References in the text are placed in parentheses.
In conclusion, it should be noted that the annotation, the abstract and the review are all types of bibliographic description, the annotation being the most common of them.
Exercises
Exercise 1. Use the given model to compress the phrases:
1. Example. Instead of: the parts of a machine
you can say: machine parts
the techniques of measurement, the change of speed, the production of computers, the design of an engine, the changes of temperature, the design of a computer keyboard.
2. Example. Instead of: the components for a circuit
you can say: circuit components
parts for production, the equipment for tests, the machine for demonstration, the arrangement for automatic checking, the equipment for automatic handling.
3. Example. Instead of: the conference which took place in 2005
you can say: the 2005 conference
the conference which was conducted in 2000;
the computer which was produced in 1999;
the show which took place in 2001;
the car which was produced in 2002;
the exhibition which was held in 2000.
4. Example. Instead of: the equipment which is designed for automatic handling
you can say: automatic handling equipment
the control system which is designed for measuring; the equipment which is used for radiographic examination; the equipment the function of which is measuring; the machines which are produced for industrial purposes.
5. Example. Instead of: the chips made of silicon
you can say: silicon chips
the models used in industry; the equipment used for the purpose of testing; the units used for the purpose of cooling.
6. Example. Instead of: sections which have the length of 300 feet
you can say: 300-foot-long sections
holes which have the width of 20 feet; holes which have the depth of 4 metres; the conveyer which has the length of 152 metres.
7. Example. Instead of: the data which are not important
you can say: unimportant data
the problems which are not solved (un-); the attempts which are not successful (un-); the element which is not active (in-); the work which is not effective (in-); the information which is not known (un-); the production which is not economic (un-).
Exercise 2. Use a past participle instead of the attributive clause; the two forms convey the same information.
Example: The power which is demanded is continually increased.
The power demanded is continually increased.
1. The research which is being carried out on this subject is important.
2. The manufacturing process which was adopted was a revolutionary one.
3. The data which are being produced now are interesting.
4. The power which is required here is great.
Exercise 3. Use a single verb instead of the corresponding phrase. The verbs are given at the end of the exercise.
Example: to go round and round – to rotate
1. The magnetic field appears to go round and round.
2. It is not possible to tell in advance what the results of experiment will be.
3. The speed should not be allowed to go beyond the rated limits.
4. The special properties of this material are now being made use of in industry.
5. This unit has the effect of making the voltage greater.
6. This work was formerly done manually, but it is now carried out by machines.
( to increase, to foretell, to exploit, to exceed, to mechanize, to rotate)
Exercise 4. In technical literature the agent or object is often omitted.
Example. Instead of: We can measure temperature changes
you can say: It is possible to measure temperature changes
Make up sentences using the table of recommended constructions:

It + is / was / seems / appears / proves / becomes + adjective + to do something

Adjectives: easy (difficult), possible (impossible), necessary (unnecessary), useful (useless), essential, desirable (undesirable), usual (unusual), common (uncommon), instructive, practicable.
Exercise 5. Combine the information of each pair of sentences into one sentence with an infinitive construction.
Example:
Hydrogen and oxygen combine chemically. They form the molecule H2O.
Hydrogen and oxygen combine chemically to form the molecule H2O.
1. The wires are bound together. They form a single thing.
2. Some metals are mixed in suitable proportions. This makes a good alloy.
3. A number of workshops were added. They produced more goods.
4. Two thousand more workers were taken on. This gave a total labour force of 8,000.
5. The unstable isotopes undergo radioactive decay. Other isotopes are formed as a result.
Exercise 6. Shorten the following texts as far as possible. A model shortened version follows each text.

1. A hybrid computer that introduces a new concept in engineering and scientific computation by combining the best operational features of analog and digital computers into an integrated system was demonstrated for the first time by its manufacturer, Electronic Associates, Inc., of Long Branch, N.J., at the Western Joint Computer Conference (the USA).
Shortened: A hybrid computer combining the best operational features of analog and digital machines was demonstrated by Electronic Associates, Inc., N.J., in the USA.

2. The new computer was designed primarily as a scientific instrument for a range of research, design and development applications in industry, defence and civilian space programs, as well as commercial applications for a variety of design and production problems.
Shortened: The new computer is used in research, design and development, and in defence and civilian space programs.

3. HYDAC (hybrid digital/analog computer) is the result of a four-year-long research program conducted by the computation division at Princeton, and represents the first major change of direction in computer development in 10 years.
Shortened: HYDAC (hybrid digital/analog computer) is the result of a 4-year-long program.

4. The new computer HYDAC combines the traditional advantages of both analog and digital computers (the analog's speed, lower cost and ease of programming, and the digital's capacity for storage, decision-making logic operations and time-sharing) into one centralized system, to achieve a computation efficiency which is well beyond the limits of either computer used alone.
Shortened: HYDAC combines the traditional advantages of analog and digital computers.
Sample abstracts in English
Abstract 1
Simulating how the global Internet behaves is an immensely challenging
undertaking because of the network’s great heterogeneity and rapid change. The
heterogeneity ranges from the individual links that carry the network’s traffic, to the
protocols that interoperate over the links, to the “mix” of different applications used
at a site, to the levels of congestion seen on different links. We discuss two key
strategies for developing meaningful simulations in the face of these difficulties:
searching for invariants, and judiciously exploring the simulation parameter space.
We finish with a brief look at a collaborative effort within the research community to
develop a common network simulator.
Abstract 2
In this paper, we present Google, a prototype of a large-scale search engine
which makes heavy use of the structure present in hypertext. Google is designed to
crawl and index the Web efficiently and produce much more satisfying search results
than existing systems. The prototype with a full text and hyperlink database of at least
24 million pages is available at http://google.stanford.edu/ To engineer a search
engine is a challenging task. Search engines index tens to hundreds of millions of
web pages involving a comparable number of distinct terms. They answer tens of
millions of queries every day. Despite the importance of large-scale search engines
on the web, very little academic research has been done on them. Furthermore, due to
rapid advance in technology and web proliferation, creating a web search engine
today is very different from three years ago. This paper provides an in-depth
description of our large-scale web search engine – the first such detailed public
description we know of to date. Apart from the problems of scaling traditional search
techniques to data of this magnitude, there are new technical challenges involved
with using the additional information present in hypertext to produce better search
results. This paper addresses this question of how to build a practical large-scale
system which can exploit the additional information present in hypertext. Also we
look at the problem of how to effectively deal with uncontrolled hypertext collections
where anyone can publish anything they want.
Keywords
World Wide Web, Search Engines, Information Retrieval, PageRank, Google
Abstract 3
Java is great for downloading applications over a network, but what about for
embedded systems? The same features that lend themselves to networkability, fit well
in the process-control field, too. This applet demonstrates some real-time principles.
Abstract 4
This document describes an automatable mapping of specifications written in
the International Telecommunication Union (ITU) Specification and Description
Language (SDL) to specifications expressed in the Object Management Group
(OMG) standard Unified Modeling Language (UML). Such transformed specifications
preserve the general semantics of the original while allowing design teams to take
advantage of UML – a modeling language that is more widespread among software
practitioners than SDL and that enjoys more extensive tool support.
The proposed mapping is based on a proper subset of general UML that is
executable. This means that it produces executable models with dynamic semantics
that are equivalent to that of the original SDL specification. Furthermore, this subset
of UML allows generation of complete program code in a standard programming
language such as C or C++. Finally, this subset has the advantage that it is currently
supported by a suite of tools available from Rational Software.
Abstract 5
For the past few years, a joint ISO/CCITT committee known as JPEG (Joint
Photographic Experts Group) has been working to establish the first international
compression standard for continuous-tone still images, both grayscale and color.
JPEG's proposed standard aims to be generic, to support a wide variety of
applications for continuous tone images. To meet the differing needs of many
applications, the JPEG standard includes two basic compression methods, each with
various modes of operation. A DCT-based method is specified for «lossy»
compression, and a predictive method for «lossless» compression. JPEG features a
simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely
implemented JPEG method to date, and is sufficient in its own right for a large
number of applications. This article provides an overview of the JPEG standard, and
focuses in detail on the Baseline method.
Note: DCT – Discrete Cosine Transform
Abstract 6
Simulating how the global Internet behaves is an immensely challenging
undertaking because of the network’s great heterogeneity and rapid change. The
heterogeneity ranges from the individual links that carry the network’s traffic, to the
protocols that interoperate over the links, to the «mix» of different applications used
at a site, to the levels of congestion seen on different links. We discuss two key
strategies for developing meaningful simulations in the face of these difficulties: searching for
invariants, and judiciously exploring the simulation parameter space. We finish with
a brief look at a collaborative effort within the research community to develop a
common network simulator.
CONTENTS
UNIT 1 HISTORY .............................................................................................................................3
TEXT 1............................................................................................................................................3
TEXT 2............................................................................................................................................6
TEXT 3............................................................................................................................................8
TEXT 4..........................................................................................................................................10
TEXT 5..........................................................................................................................................12
TEXT 6..........................................................................................................................................17
UNIT 2 THE INTERNET ...............................................................................................................17
TEXT 1..........................................................................................................................................17
TEXT 2..........................................................................................................................................23
TEXT 3..........................................................................................................................................25
TEXT 4..........................................................................................................................................26
TEXT 5..........................................................................................................................................26
TEXT 6..........................................................................................................................................29
TEXT 7..........................................................................................................................................31
TEXT 8..........................................................................................................................................32
TEXT 9..........................................................................................................................................33
UNIT 3 PROGRAMMING LANGUAGES ...................................................................................34
TEXT 1..........................................................................................................................................34
TEXT 2..........................................................................................................................................38
TEXT 3..........................................................................................................................................41
TEXT 4..........................................................................................................................................42
TEXT 5..........................................................................................................................................43
TEXT 6..........................................................................................................................................43
TEXT 7..........................................................................................................................................44
TEXT 8..........................................................................................................................................44
UNIT 4 OPERATING SYSTEMS..................................................................................................45
TEXT 1..........................................................................................................................................45
TEXT 2..........................................................................................................................................48
TEXT 3..........................................................................................................................................49
TEXT 4..........................................................................................................................................50
TEXT 5..........................................................................................................................................50
TEXT 6..........................................................................................................................................52
UNIT 5 DATABASE SYSTEMS ....................................................................................................52
TEXT 1..........................................................................................................................................52
TEXT 2..........................................................................................................................................55
TEXT 3..........................................................................................................................................56
TEXT 4..........................................................................................................................................57
TEXT 5..........................................................................................................................................57
TEXT 6..........................................................................................................................................59
TEXT 7..........................................................................................................................................60
UNIT 6 COMPUTER SECURITY.................................................................................................60
TEXT 1..........................................................................................................................................60
TEXT 2..........................................................................................................................................64
TEXT 3..........................................................................................................................................66
TEXT 4..........................................................................................................................................68
TEXT 5..........................................................................................................................................69
TEXT 6..........................................................................................................................................69
TEXT 7..........................................................................................................................................70
SUPPLEMENTARY READING....................................................................................................71
TEXT 1 Fuzzy Logic....................................................................................................................71
TEXT 2 Design of a Bitmapped Multilingual Workstation.....................................................73
TEXT 3 The Do-It-Yourself Supercomputer ............................................................................75
TEXT 4 Ergonomics ....................................................................................................................77
TEXT 5 Building a Web-Based Education System...................................................................82
TEXT 6 Moving Bytes .................................................................................................................85
TEXT 7 The Role of Government in the Evolution of the Internet........................................87
TEXT 8 Crash-Proof Computing...............................................................................................92
TEXT 9 Object-Oriented Programming with Java..................................................................97
TEXT 10 The JPEG Still Picture Compression Standard .....................................................100
TEXT 11 Cryptography ............................................................................................................102
Appendix.........................................................................................................................................108
Exercises..........................................................................................................................................111
Abstract 1........................................................................................................................................114
Abstract 2........................................................................................................................................114
Abstract 3........................................................................................................................................115
Abstract 4........................................................................................................................................115
Abstract 5........................................................................................................................................115
Abstract 6........................................................................................................................................116
A study guide in English
Computer world
for full-time students of ФИСТ
Responsible for the issue: T. A. Matrosova
Editor:
Signed for printing: 2006. Format 60×84 / 16
Writing paper. Conventional printed sheets: . Publisher's sheets:
Print run:      copies. Order:
Ulyanovsk State Technical University
432027, Ulyanovsk, Sev. Venets, 32.
UlSTU printing office, 432027, Ulyanovsk, Sev. Venets, 32.