Chapter 9
The Development of Computer Operating Systems

9.1 The Operating System
An operating system is a program that runs on a computer to simplify the use of the computer for the user. The operating system manages the use of peripheral devices such as printers, monitors and keyboards. In addition, the operating system runs other programs and displays the results. In order to carry out these functions the operating system must impose a systematic structure on inputs and outputs: files have a definite structure, and there is a systematic way in which files are stored on the data storage devices. Without an operating system a computer is largely an unresponsive hunk of metal and wires.

Although the concept of an operating system now appears natural and obvious, operating systems evolved over a considerable period of time. The first electronic computers were "hardwired" to carry out systematic computations. Initially the computations were for ballistics tables. The user would wire direct connections between the various components of the computer through a plug board. When the computations were finished, the next user would have to pull out the wires and rewire for the next set of computations. This was monumentally cumbersome by today's standards but a marvelous advance in speed and accuracy over hand computations with pencil and paper.

In the late 1960's M.I.T. had a time-sharing operating system called MULTICS, the name indicating it was a multiple-user system. Ken Thompson was working at Bell Labs in New Jersey and was given the use of a PDP-7 minicomputer. He decided to create an operating system for the minicomputer for the convenience it provided, even though there would be only one user. Initially he called this operating system UNICS, in analogy with MULTICS, but later changed the spelling to UNIX. At the same time Dennis Ritchie was involved in the creation of the programming language "C," so named because it was modeled on the earlier language "B," which in turn derived from the British language BCPL. The collaboration between Ken Thompson and Dennis Ritchie has been quite fruitful over the years. UNIX and C have also been closely linked.

9.2 Multics
In 1964, following implementation of the Compatible
Time-Sharing System (CTSS), serious planning began on
the development of a new computer system specifically
organized as a prototype of a computer utility. The plans
and aspirations for this system, called Multics (for
Multiplexed Information and Computing Service), were
described in a set of six papers presented at the 1965 Fall
Joint Computer Conference. The development of the
system was undertaken as a cooperative effort involving
the Bell Telephone Laboratories (from 1965 to 1969), the
computer department of the General Electric Company,
and Project MAC of M.I.T. Implicit in the 1965 papers
was the expectation that there should be a later
examination of the development effort.

From the present vantage point, however, it is
clear that a definitive examination cannot be presented
in a single paper. As a result, the present paper
discusses only some of the many possible topics. First
we review the goals, history and current status of the
Multics project. This review is followed by a brief
description of the appearance of the Multics system to
its various classes of users. Finally several topics are
given which represent some of the research insights
which have come out of the development activities. This
organization has been chosen in order to emphasize
those aspects of software systems having the goals of a
computer utility which we feel to be of special interest.

9.3 UNIX
• UNIX was an important innovation in computing. It was awkward, but computer professionals were perfectly willing to tolerate its difficulties in order to get the power it gave them access to. UNIX's shortcomings were not considered notable at the time; the concept of user-friendly software came a decade later. UNIX users were more concerned that something could be achieved at all than whether it required the use of non-mnemonic commands.
• The use of UNIX spread around the country, and initially Bell Labs gave it away free. Later Bell Labs realized that UNIX had commercial potential and arranged for it to be marketed.
Background
• The name "Unix" was intended as a pun on Multics
(and was written "Unics" at first, for UNiplexed
Information and Computing System).
• For the first 10 years, Unix development was essentially confined to Bell Labs, and most scripting-related work was also done in NJ. The initial versions of Unix were labeled "Version n" or "Nth Edition" (of the manuals), and some milestones in shell history are directly related to particular Unix releases. The major early Unix implementations were for DEC's 16-bit PDP-11, a machine tiny by today's hardware standards: typical configurations were limited to 128K of memory, a 2.4M disc, and a 64K per-process limit (including the kernel), a configuration now found only in palmtop computers and electronic watches. That they managed to create such powerful shells for such a computer is simply amazing, and it attests to the tremendous ingenuity of the early creators of Unix.

For computer science at Bell Laboratories, the
period 1968-1969 was somewhat unsettled. The main
reason for this was the slow, though clearly inevitable,
withdrawal of the Labs from the Multics project. To the
Labs computing community as a whole, the problem was
the increasing obviousness of the failure of Multics to
deliver promptly any sort of usable system, let alone the
panacea envisioned earlier. For much of this time, the
Murray Hill Computer Center was also running a costly
GE 645 machine that inadequately simulated the GE 635.
Another shake-up that occurred during this period was
the organizational separation of computing services and
computing research.

• Thompson is really the guy who is primarily credited with developing UNIX. He was an employee of AT&T Bell Labs at the time, and still is. Dennis Ritchie was the co-developer. It was really those two guys working together who developed UNIX.
• Bourne wrote the Bourne shell (sh), and Korn wrote the Korn shell (ksh). Steve Johnson was very involved in writing a lot of the early utilities associated with UNIX. Kernighan was involved in various utilities, but was primarily involved in the C language with Ritchie, as was Plauger, also a C language guy. Plauger wrote the first commercial C compiler. Interestingly enough, all these guys are still out there doing related things in the UNIX business.

In the case of UNIX, the stage was set
by events going back at least as far as
1945. There were four or five things that
happened over a period of years that
made it possible for the whole UNIX
thing to happen by the grass roots
method that it did.

In 1945, AT&T was involved in an antitrust case with the federal government. The federal government felt that AT&T was monopolistic, so in 1956 it pushed AT&T into a consent decree in which AT&T agreed to make all its patented technology licensable to the public. AT&T was also restricted to the communications business; as a result, it couldn't be in the computer business. This big event is the reason AT&T never commercialized UNIX along the way: it wasn't allowed to.

•One of the other significant events taking place at this same time was a project going on at MIT called Project MAC (Multiple Access Computers). They were doing research on time-sharing systems, trying to allow multiple users to interactively use a computer system. So time-sharing and multi-programming--all that multi-user stuff--was evolving and taking place in the early 1960's, around 1963.
•So AT&T and GE formed a partnership with MIT and Project MAC to try to develop a time-sharing operating system called MULTICS. MULTICS was meant to be a large multi-user system, but it turned out that this large multi-user system could initially support a total of only one or two users interactively, which isn't exactly what GE and AT&T had in mind when they started out on the project. (GE ultimately was able to turn MULTICS into a viable commercial product.)

• Not too long after that, in 1969, AT&T Bell Labs said enough is enough, we're pulling out of this Project MAC sinkhole. Around that same time, these two guys, Thompson and Ritchie, were finishing up some education requirements. Here's another Berkeley connection: Ken Thompson got a Masters Degree in Electrical Engineering from none other than Berkeley, so that's the link back to why the software ends up back at Berkeley.
• Dennis Ritchie was finishing up a degree in Mathematics at Harvard, but he ended up without his Ph.D.; he decided to bag it and just go to work. So Thompson and Ritchie went to work for Bell Labs thinking they were going to work on this fantastic new operating system called MULTICS.

• They're all fired up and excited, and do in fact spend a little time on it, but within months of when they get there, the plug gets pulled and they don't have a project. Well, no problem, because Thompson figures he could have done it better himself anyway, which is in fact the case. So he says, I'm going to write a multi-user system. One of the main motivations for Thompson embarking on this project was that he needed a decent operating system to run his game called Space Travel on, so instead of optimizing Space Travel he decided to write a new operating system. He jumped right on it.

• There was a one-month period when, with all the extra time, he could really focus on this thing, and he basically wrote UNIX in one month. He wrote a kernel, a shell, a file handling system, and one other utility set, and had a working operating system. He did this on a 4K machine with 18-bit words, and that's what UNIX ran on originally. You hear the folklore of Bill Gates writing his 4K BASIC compiler; well, Thompson wrote a 4K operating system--a 4K multi-user operating system--which is pretty impressive.

Value
• UNIX is the most innovative, influential operating system in the history of computing. And it really is. If you look at all the other operating systems, there are many ideas in them that are derived from UNIX. Look at the DOS commands: DOS took baby elements out of UNIX--its ideas are completely extracted from UNIX.
• The original version of UNIX was written in PDP assembly language. In 1972 it was rewritten in a language called C, which was another fundamental breakthrough in the whole process--they developed this new programming language just so they could write the operating system in it. So UNIX was one of the first operating systems written in a high-level language.

9.4 CP/M
• With the concept of an operating system widely popularized, it was standard practice to develop an operating system for each new line of computers. About this time the personal computer was developed.
• Gary Kildall of the Naval Postgraduate School in Monterey, California, acquired one of the early personal computers and immediately proceeded to develop an operating system for it. He called the operating system CP/M, for Control Program/Monitor. It was the first operating system for a personal computer.
Background
•In the beginning, there was CP/M. As the first easily configurable standard operating system for micros based on Intel's then-flagship 8080, this small but effective system became the MS-DOS of its day. With its logical, simple, and open architecture it captured the hearts of legions of amateur systems hackers the world over; so much so that even in the 1990's some diehards refused to surrender entirely to the overwhelming dominance of DOS/Windows. It also powered thousands of microcomputer-based business systems, often to the frustration of users who didn't care about its internals but hated dealing with its arcane command-line syntax.
•CP/M was developed on Intel's 8080 emulator under DEC's TOPS-10 operating system, so naturally many parts of CP/M were inspired by TOPS-10, including the eight-character filenames with a three-character extension that every MS-DOS/Windows 3.X user still lives with today.

• "Necessity is the mother of invention" the old saying
goes. And its true; but as we all know it takes two to
make a baby and in the case of CP/M the father was a
man named Gary Kildall, who in 1975 was working as
a consultant to Intel.

•Kildall's task at Intel that year was to design and develop a language called PL/M for the 8080 chip, to be used as a systems development language. At the time, the chips themselves barely existed, and Intel was just then starting to design a computer system that used the 8080. The plan was for Gary to use the 8080 emulator Intel had running on their big PDP-10, but he preferred to work directly on the 8080 itself, in part because by working on his own machine at home he could avoid the 50-mile drive to Intel every day. The only 8080-based computer Intel had available was called the "Intellec-8", but it didn't have any software or disk storage attached to it. So Kildall obtained a used test floppy drive free from Shugart Associates, attached it to the Intellec-8 with a controller designed by his friend John Torode, and wrote a primitive operating system for it which he called CP/M.

Development
Kildall's company, Digital Research, produced the seminal CP/M 2.0, which fully separated the three components of the operating system into logical pieces: the CCP (console command processor), the BDOS (Basic Disk Operating System), and the BIOS (Basic Input/Output System). Only the BIOS need be provided to get CP/M running on a new machine; the CCP and BDOS would be unchanged. CP/M 2.0 was quite buggy and was quickly followed by 2.1 as a fix-up release. However, 2.1 was limited in its internal capacity to small floppy drives, and by 1977, hard drives were coming on the scene. CP/M version 2.2 added expanded disk formatting tables which allowed access to up to eight megabytes per drive in up to eight total drives. It was version 2.2 that became the megahit that dominated microcomputing almost from its outset.
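The interesting engineering point is the layering: all machine dependence was confined to the BIOS behind a fixed set of entry points (historically an assembly-language jump table), so only that one layer had to be rewritten per machine. The C sketch below illustrates that layering idea only; the struct and function names are illustrative inventions, not CP/M's actual entry points.

/* Sketch of the CP/M layering idea: portable "BDOS"-style code calls
 * the hardware only through a fixed table of BIOS entry points, so a
 * port to a new machine replaces just this table. Names are illustrative. */
#include <stdio.h>

struct bios {                      /* one slot per machine-specific service */
    int  (*con_status)(void);      /* console input ready?                  */
    int  (*con_in)(void);          /* read one character from the console   */
    void (*con_out)(int c);        /* write one character to the console    */
};

/* A host-specific "BIOS"; a real port would drive the machine's
 * serial and disk hardware directly. */
static int  my_status(void)  { return 1; }
static int  my_in(void)      { return getchar(); }
static void my_out(int c)    { putchar(c); }

static const struct bios BIOS = { my_status, my_in, my_out };

/* Portable layer: knows nothing about the hardware. */
static void bdos_print(const char *s) {
    while (*s)
        BIOS.con_out(*s++);
}

int main(void) {
    bdos_print("CP/M-style layering: only the BIOS table is per-machine.\r\n");
    return 0;
}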

It was CP/M's adaptability that gave it appeal and
launched it on the road to success. It packed a surprising
amount of power in a tiny package, and did so in a
simple, clean, logical way. Many of its critics bemoaned
its sometimes cryptic commands (rightly) and also its
lack of powerful features. But it must be remembered
that CP/M was designed in an age when it was a rare,
high-end computer owner that could afford the
thousands of dollars it took to fill up the whole 64K of
the 8080's address space. The entire operating system
took only 8K of the computer's memory, and would run
in a mere 16K of total memory with room left over for
any of its system development utilities to run. More
features would have swelled the system to the point
where decently featured applications would have had no
room to execute.

And it was the applications that moved this
operating system out of the realm of the
computer enthusiasts and into the hands of
"real users" (people who don't care if their
computers are powered by hamsters, so long as
they run their necessary applications reliably).
The first real "killer app" for CP/M was
probably WordStar, a word processing
program that became very widely used. Also
famous was the first microcomputer database
application, dBASE II. These and many, many
other applications and utilities eventually made
CP/M a useful tool for a wide range of ordinary
people.
•By 1981, a new generation of Intel microprocessors was
on the horizon -- the 8086 and 8088 16-bit chips, which
could address an incredible 1 megabyte of memory. This
seemed at the time more than anyone could ever figure
out a use for, so Digital Research focused much of their
attention on producing CP/M 3.0 for the dominant
8080/Z80 platform. There were plans of course to port
CP/M to the new 16-bit chips with a version called
CP/M-86, but it was not a priority at the time.
•While DR did finally announce CP/M 3.0, a more fully featured successor to the successful 2.2, the upgrade was
only for 8080/Z80 based systems which were no longer
seen as the coming thing by the public. And CP/M-86
was ported to the IBM-PC, but by that time IBM was
practically giving away the new PC-DOS operating
system. Except for a diehard core of those that loved it
for what it was, CP/M began rapidly to vanish from the
land of living operating systems.

9.5 Microsoft Joins In
Microsoft rose to fame and power on the basis of the Disk Operating System, one of the most dramatic business coups of the twentieth century. But while DOS was great, it lacked the ease of use of the Apple system, so Microsoft launched a project to create an operating system with the ease of use of Apple's. The result was Windows. The first versions were not spectacularly successful, technically or commercially, but Microsoft continued to develop Windows until it became virtually the universal operating system for personal computers. This was in part due to the technical capabilities and ease of use of Windows, but it was also due to the marketing practices of Microsoft, which resulted in every personal computer coming with Windows, so that the acquisition of any other operating system would be superfluous and costly.

9.5.1 DOS
• Microsoft initially kept the IBM deal a secret from Seattle Computer Products. And in what was to become another extremely fortuitous move, Bill Gates, the not uncontroversial founder of Microsoft, persuaded IBM to let his company retain marketing rights for the operating system separately from the IBM PC project. Microsoft named the IBM version PC-DOS and its own version MS-DOS. The two versions were initially nearly identical, but they eventually diverged.
• MS-DOS soared in popularity with the surge in the PC market. Revenue from its sales fuelled Microsoft's phenomenal growth, and MS-DOS was the key to the company's rapid emergence as the dominant firm in the software industry. This product continued to be the largest single contributor to Microsoft's income well after it had become more famous for Windows.

The final major version was 7.0, which
was released in 1995 as part of Microsoft
Windows 95. It featured close integration
with that operating system, including
support for long filenames and removal
of numerous utilities, some of which were
on the Windows 95 CDROM. It was
revised in 1997 with version 7.1, which
added support for the FAT32 filesystem
on hard disks.

Although many of the features were
copied from UNIX, MS-DOS was never
able to come anywhere close to UNIX in
terms of performance or features. For
example, MS-DOS never became a
serious multi-user or multitasking
operating system (both of which were
core features of UNIX right from the
start) in spite of attempts to retrofit these
capabilities. Multitasking is the ability for
a computer to run two or more programs
simultaneously.
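As an aside, the Unix primitive behind that multitasking can be sketched in a few lines of C: fork() creates a second process that runs concurrently with its parent, something MS-DOS had no real equivalent of. This is an illustrative sketch, not code from either system.

/* Minimal sketch of Unix process creation: after fork() the parent and
 * child run concurrently, which is the basis of Unix multitasking. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* duplicate the calling process */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                     /* child's copy of the program */
        printf("child:  pid %d\n", (int)getpid());
        _exit(0);
    }
    printf("parent: pid %d started child %d\n", (int)getpid(), (int)pid);
    wait(NULL);                         /* wait for the child to finish */
    return 0;
}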

9.5.2 Windows
• Microsoft first began development of the
Interface Manager (subsequently renamed
Microsoft Windows) in September 1981.
• Windows promised an easy-to-use
graphical interface, device-independent
graphics and multitasking support.

• The development was delayed several times, however, and Windows 1.0 did not hit the store shelves until November 1985. The selection of applications was sparse, and Windows sales were modest.
• The Windows 1.0 package included: MS-DOS Executive, Calendar, Cardfile, Notepad, Terminal, Calculator, Clock, Reversi, Control Panel, PIF (Program Information File) Editor, Print Spooler, Clipboard, RAMDrive, Windows Write, and Windows Paint.


• Windows 2.0, introduced in the fall of 1987, provided
significant usability improvements to Windows. With the
addition of icons and overlapping windows, Windows
became a viable environment for development of major
applications (such as Excel, Word for Windows, Corel
Draw!, Ami, PageMaker and Micrografx Designer), and
the sales were spurred by the runtime ("Single Application
Environment") versions supplied by the independent
software vendors.
• In late 1987 Microsoft released Windows/386. While it was
functionally equivalent to its sibling, Windows/286, in
running Windows applications, it provided the capability
to run multiple DOS applications simultaneously in the
extended memory.


• Windows 3.0, released in May, 1990, was a complete
overhaul of the Windows environment. With the
capability to address memory beyond 640K and a
much more powerful user interface, independent
software vendors started developing Windows
applications with vigor. The powerful new
applications helped Microsoft sell more than 10
million copies of Windows, making it the best-selling
graphical user interface in the history of computing.

• Windows 3.1, released in April 1992, provided significant improvements over Windows 3.0. In its first two months on the market, it sold over 3 million copies, including upgrades from Windows 3.0.
• Windows 3.11 added no new features but corrected some existing, mostly network-related problems. It replaced Windows 3.1 at the retail and OEM levels, and the upgrade was available free from ftp.microsoft.com.


• Windows for Workgroups 3.1, released in October 1992, was the first integrated Windows and networking package offered by Microsoft. It provided peer-to-peer file and printer sharing capabilities highly integrated into the Windows environment. The simple-to-use-and-install networking allows the user to specify which files on the user's machine should be made accessible to others. The files can then be accessed from other machines running either Windows or DOS.
• Windows for Workgroups also includes two additional applications: Microsoft Mail, a network mail package, and Schedule+, a workgroup scheduler.
• In November 1993, Microsoft shipped Windows for Workgroups 3.11.

Windows NT
• Windows NT 3.1 (94-03-01) is Microsoft's platform of choice for high-end systems. It is intended for use in network servers, workstations and software development machines; it will not replace Windows for DOS. While Windows NT's user interface is very similar to that of Windows 3.1, it is based on an entirely new operating system kernel.
• Windows NT 3.5 (94-04-12) provides OLE 2.0, improved performance and reduced memory requirements. It was released in September 1994. Windows NT 3.5 Workstation replaces Windows NT 3.1, while Windows NT 3.5 Server replaces the Windows NT 3.1 Advanced Server.
• Windows NT 4.0 ("Cairo") (94-03-15) is Microsoft's project for object-oriented Windows, and a successor to the "Daytona" release of Windows NT.

Windows 95
Windows 95, released in August of 1995.
A 32-bit system providing full pre-emptive
multitasking, advanced file systems,
threading, networking and more. Includes
MS-DOS 7.0, but takes over from DOS
completely after starting. Also includes a
completely revised user interface.

Windows CE
• Windows CE has the look and feel of Windows 95
and NT. Users familiar with either of these
operating systems are able to instantly use
Handheld PCs and Palm-size PCs.
• Windows CE 1.0 devices appeared in November
1996. Over the next year, approximately 500,000
Handheld PC units were sold worldwide.
• Windows CE 2.0 became available in early 1998. It addressed most of the problems experienced by Windows CE 1.0 users and also added features to the operating system that made it more viable for corporate rather than home users.

Windows CE
• Windows CE 3.0 became available June 15, 2000: an embedded operating system with comprehensive development tools (Platform Builder 3.0 and eMbedded Visual Tools 3.0) which enable developers to build rich embedded devices that demand dynamic applications and Internet services. Windows CE 3.0 combines the flexibility and reliability of an embedded platform with the power of Windows and the Internet.
Windows 98
• Windows 98 was released in June of 1998.
• Integrated Web Browsing gives your desktop a browser-like interface. You will 'browse' everything, including stuff on your local computer.
• Active Desktop allows you to set up your desktop to be your personal web page, complete with links and any web content. You can also place active desktop items, such as a stock ticker, that will update automatically.
• Internet Explorer 4.0: a new browser that supports HTML 4.0 and has an enhanced user interface.
• ACPI supports OnNow specs for better power management of PCs.
• FAT32 with conversion utility: enhanced and efficient support for larger hard drives, including a utility to convert a FAT16 partition to FAT32.
• Multiple Display Support can expand your desktop onto up to 8 connected monitors.
• New hardware support for the latest technology such as DVD, FireWire, USB, and AGP.
• Win32 Driver Model: uses the same driver model as Windows NT 5.0.
• Disk Defragmenter Wizard: an enhanced hard drive defragmenter to speed up access to files and applications.

• Windows NT 5.0 will include a host of new features. Like Windows 98, it will integrate Internet Explorer 4.0 into the operating system. This new interface will be matched up with the Distributed File System, which Microsoft says will provide "a logical way to organize and navigate the huge volume of information an enterprise assembles on servers, independent of where the servers are physically located."
• As of November 1998, NT 5.0 will be known as Windows 2000, making NT a "mainstream" operating system.

Windows 2000
Released February 17, 2000, Windows 2000 provides an impressive platform of Internet, intranet, extranet, and management applications that integrate tightly with Active Directory. You can set up virtual private networks - secure, encrypted connections across the Internet - with your choice of protocol. You can encrypt data on the network or on-disk. You can give users consistent access to the same files and objects from any network-connected PC. You can use the Windows Installer to distribute software to users over the LAN.

Windows Me
On Thursday, September 14, 2000, Microsoft released Windows Me, short for Millennium Edition, which is aimed at the home user. The Me operating system boasts some enhanced multimedia features, such as an automated video editor and improved Internet plumbing. But unlike Microsoft's Windows 2000 OS, which offers advanced security, reliability, and networking features, Windows Me is basically just an upgrade to the DOS-based code on which previous Windows versions have been built.

Windows XP
• Microsoft officially launched Windows XP on October 25, 2001.
• XP is a whole new kind of Windows for consumers. Under the hood, it contains the 32-bit kernel and driver set from Windows NT and Windows 2000. Naturally it has tons of new features that no previous version of Windows has, but it also doesn't ignore the past--old DOS and Windows programs will still run, and may even run better.

XP comes in two flavors: Home and
Professional. XP Home is a $99 upgrade ($199
for the full version) and Professional is a $199
upgrade ($299 for the full version). Recognizing
that many homes have more than one PC,
Microsoft also plans to offer discounts of $8 to
$12 off the price of additional upgrades for
home users (the Open Licensing Program is still
available for business or home users who need 5
or more copies). That's fortunate because you'll
need the additional licenses since the Product
Activation feature makes it all but impossible to
install a single copy on more than one PC.

9.6 Linux
There has been some competition for Windows. A college student in Finland, Linus Torvalds, developed a UNIX-like operating system for personal computers. This operating system is called Linux, after Torvalds' first name. Linus Torvalds, in addition to writing the code for components of Linux himself, organized a community effort among programmers to get the code created and tested. Linux was made free to the general public.

Background
• It was 1991, and DOS was still reigning supreme in its vast empire of personal computers. Bought by Bill Gates from a Seattle hacker for $50,000, the bare-bones operating system had sneaked into every corner of the world by virtue of a clever marketing strategy. PC users had no other choice. Apple Macs were better, but with astronomical prices that few could afford, they remained a horizon away from the eager millions.
• The other dedicated camp of computing was the Unix world. But Unix itself was far more expensive. In quest of big money, the Unix vendors priced it high enough to ensure small PC users stayed away from it. The source code of Unix, once taught in universities courtesy of Bell Labs, was now cautiously guarded and not published publicly.

• A solution seemed to appear in the form of MINIX. It was written from scratch by Andrew S. Tanenbaum, a US-born professor in the Netherlands who wanted to teach his students the inner workings of a real operating system. It was designed to run on the Intel 8086 microprocessors that had flooded the world market.
• As an operating system, MINIX was not a superb one. But it had the advantage that the source code was available. Anyone who happened to get the book 'Operating Systems: Design and Implementation' by Tanenbaum could get hold of the 12,000 lines of code, written in C and assembly language. Students of Computer Science all over the world pored over the book, reading through the code to understand the very system that ran their computers. And one of them was Linus Torvalds.

• In 1991, Linus Benedict Torvalds was a second-year student of Computer Science at the University of Helsinki and a self-taught hacker. The 21-year-old sandy-haired, soft-spoken Finn loved to tinker with the power of computers and the limits to which a system could be pushed. But what was lacking was an operating system that could meet the demands of professionals. MINIX was good, but it was simply an operating system for students, designed as a teaching tool rather than an industrial-strength one.
• At that time, programmers worldwide were greatly inspired by the GNU project of Richard Stallman, a software movement to provide free, quality software.

• By 1991, the GNU project had created a lot of the tools. The much-awaited GNU C compiler was available by then, but there was still no operating system. Even MINIX had to be licensed. (Later, in April 2000, Tanenbaum released MINIX under the BSD License.) Work was going on on the GNU kernel, HURD, but it was not expected to come out for a few years.
• That was too much of a delay for Linus.

Development
• Linux version 0.01 was released by mid-September 1991 and was put on the net. Enthusiasm gathered around this new kid on the block, and code was downloaded, tested, tweaked, and returned to Linus. Version 0.02 came on October 5th.
• Linux version 0.03 came in a few weeks, and by December came version 0.10. Linux was still little more than a skeleton: it supported only AT hard disks and had no login (it booted directly into bash). Version 0.11 was much better, with support for multilingual keyboards, floppy disk drives, and VGA, EGA, and Hercules video. The version numbers went directly from 0.12 to 0.95 and 0.96 and so on. Soon the code went worldwide via ftp sites in Finland and elsewhere.

Soon Linus faced some confrontation from none other than Andrew Tanenbaum, the great teacher who wrote MINIX. In a post to Linus, Tanenbaum commented:
"I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design."

•Soon, commercial vendors moved in. Linux itself was, and is, free. What the vendors did was to compile various software and gather it in a distributable format, more like the other operating systems with which people were familiar. Red Hat, Caldera, and some other companies gained a substantial response from users worldwide. While these were commercial ventures, dedicated computer programmers created their very own volunteer-based distribution, the famed Debian. With the new graphical user interfaces (like the X Window System, KDE, and GNOME) the Linux distributions became very popular.
•Meanwhile, amazing things were happening with Linux. Besides the PC, Linux was ported to many different platforms. Linux was tweaked to run on 3Com's handheld PalmPilot computer. Clustering technology enabled large numbers of Linux machines to be combined into a single computing entity, a parallel computer.

Value
• Perhaps the greatest change is the spread of Linux to the developing world. In the days before Linux, developing countries were far behind in the field of computing. The cost of hardware fell, but the cost of software was a huge burden to the cash-strapped computer enthusiasts of the Third World. In desperation, people resorted to piracy of almost all sorts of software products, amounting to billions of dollars; the price tags of most commercial products were simply far beyond the reach of people in developing countries.
• The rise of Linux and related open source products has changed all that. Since Linux can be scaled to run on almost any computer with very few resources, it has become a suitable alternative for low-budget computer users. The use of open source software has also proliferated, since the price of software is a big question.

Chapter 10
A History of the Internet

A History of the Internet
• This Internet Timeline begins in 1962, before
the word ‘Internet’ is invented. The world’s
10,000 computers are primitive, although they
cost hundreds of thousands of dollars. They
have only a few thousand words of magnetic
core memory, and programming them is far
from easy.
• Domestically, data communication over the
phone lines is an AT&T monopoly. The
‘Picturephone’ of 1939, shown again at the New
York World’s Fair in 1964, is still AT&T’s
answer to the future of worldwide
communications.

1962
• At MIT, a wide variety
of computer
experiments are going
on. Ivan Sutherland
uses the TX-2 to write
Sketchpad, the origin
of graphical programs
for computer-aided
design.
TX-2 at MIT

Intergalactic Network Concept
• J.C.R. Licklider writes memos about his
Intergalactic Network concept, where everyone
on the globe is interconnected and can access
programs and data at any site from anywhere.
• He is talking to his own ‘Intergalactic Network’
of researchers across the country. In October,
‘Lick’ becomes the first head of the computer
research program at ARPA, which he calls the
Information Processing Techniques Office
(IPTO).

1962
• Leonard Kleinrock completes his doctoral dissertation
at MIT on queuing theory in communication networks,
and becomes an assistant professor at UCLA.
• The SAGE (Semi Automatic Ground Environment),
based on earlier work at MIT and IBM, is fully
deployed as the North American early warning system.
Operators of ‘weapons directing consoles’ use a light
gun to identify moving objects that show up on their
radar screens. SAGE sites are used to direct air defense.
This project provides experience in the development of
the SABRE air travel reservation system and later air
traffic control systems.

1963
• Licklider starts to talk with Larry Roberts of Lincoln Labs, director of the TX-2 project; Ivan Sutherland, a computer graphics expert whom he has hired to work at ARPA; and Bob Taylor, who joins ARPA in 1965. Lick contracts with MIT, UCLA, and BBN to start work on his vision.

The First Synchronous
Communication Satellite
• Syncom, the first
synchronous
communication satellite, is
launched. NASA’s satellite
is assembled in the Hughes
Aircraft Company’s
facility in Culver City,
California. Total payload is
55 pounds.
SYNCOM Satellite in production

1964
• Simultaneous work on secure
packet switching networks is
taking place at MIT, the RAND
Corporation, and the National
Physical Laboratory in Great
Britain. Paul Baran, Donald
Davies, Leonard Kleinrock, and
others proceed in parallel research.
Baran is one of the first to publish, in On Distributed Communications Networks. Kleinrock's thesis is also published as a seminal text on queuing theory.
Baran's paper on secure packet switched networks

1965
• On-line transaction processing debuts with
IBM’s SABRE air travel reservation system for
American Airlines. SABRE (Semi-Automatic
Business Research Environment) links 2,000
terminals in sixty cities via telephone lines.
• Licklider leaves ARPA to return to MIT, and
Ivan Sutherland moves to IPTO. With IPTO
funding, MIT's Project MAC acquires a GE-635 computer and begins the development of
the Multics timesharing operating system.

1965
• With ARPA funding, Larry Roberts and
Thomas Marill create the first wide-area
network connection. They connect the TX-2 at
MIT to the Q-32 in Santa Monica via a
dedicated telephone line with acoustic couplers.
The system confirms the suspicions of the
Intergalactic Network researchers that
telephone lines work for data, but are
inefficient, wasteful of bandwidth, and
expensive. As Kleinrock predicts, packet
switching offers the most promising model for
communication between computers.

1965
• Late in the year, Ivan Sutherland hires Bob
Taylor from NASA. Taylor pulls together the
ideas about networking that are gaining
momentum amongst IPTO’s computer-scientist
contractors.
• The ARPA-funded JOSS (Johnniac Open Shop
System) at the RAND Corporation goes on line.
The JOSS system permits online computational
problem solving at a number of remote electric
typewriter consoles. The standard IBM Model
868 electric typewriters are modified with a
small box with indicator lights and activating
switches. The user input appears in green, and
JOSS responds with the output in black.

1966
• Taylor succeeds Sutherland to become the
third director of IPTO. In his own office, he
has three different terminals, which he can
connect by telephone to three different
computer systems research sites around the
nation. Why can’t they all talk together? His
problem is a metaphor for that facing the
ARPA computer research community.
• Taylor meets with Charles Herzfeld, the head
of ARPA, to outline his issues. Twenty minutes later he has a million dollars to spend on
networking. The idea is to link all the IPTO
contractors. After several months of discussion,
Taylor persuades Larry Roberts to leave MIT
to start the ARPA network program.

1966
• Simultaneously, the
English inventor of packet
switching, Donald Davies,
is theorizing at the British
National Physical
Laboratory (NPL) about
building a network of
computers to test his
packet switching concepts.
Donald Davies

1967
• Larry Roberts convenes a conference in Ann
Arbor, Michigan, to bring the ARPA researchers
together. At the conclusion, Wesley Clark suggests
that the network be managed by interconnected
‘Interface Message Processors’ in front of the
major computers. Called IMPs, they evolve into
today’s routers.
• Roberts puts together his plan for the ARPANET.
The separate strands of investigation begin to
converge. Donald Davies, Paul Baran, and Larry
Roberts become aware of each other’s work at an
ACM conference where they all meet. From Davies,
the word ‘packet’ is adopted and the proposed line
speed in ARPANET is increased from 2.4 Kbps to
50 Kbps.

1967
• The acoustically coupled modem,
invented in the early sixties, is vastly
improved by John van Geen of the
Stanford Research Institute (SRI). He
introduces a receiver that can reliably
detect bits of data amid the hiss heard
over long-distance telephone connections.

1968
• Roberts and the ARPA team refine the overall
structure and specifications for the ARPANET.
They issue an RFQ for the development of the
IMPs.
• Roberts works with Howard Frank and his
team at Network Analysis Corporation
designing the network topology and economics.
Kleinrock’s team prepares the network
measurement system at UCLA, which is to
become the site of the first node.

1968
• The ILLIAC IV, the largest
supercomputer of its time, is
being built at Burroughs
under a NASA contract. More
than 1,000 transistors are
squeezed onto its RAM chip,
manufactured by the Fairchild
Semiconductor Corporation,
yielding 10 times the speed at
one-hundredth the size of
equivalent core memory.
ILLIAC-IV will be hooked to
the ARPANET so that remote
scientists can have access to its
unique capabilities.
ILLIAC IV

1969
• Frank Heart puts a team together to
write the software that will run the IMPs
and to specify changes in the Honeywell
DDP-516 they have chosen. The team
includes Ben Barker, Bernie Cosell, Will
Crowther, Bob Kahn, Severo Ornstein,
and Dave Walden.

1969
• Four sites are selected. At each, a
team gets to work on producing
the software to enable its
computers and the IMP to
communicate. At UCLA, the first
site, Vint Cerf, Steve Crocker,
and Jon Postel work with
Kleinrock to get ready. On April
7, Crocker sends around a memo
entitled ‘Request for Comments.’
This is the first of thousands of
RFCs that document the design
of the ARPANET and the
Internet.
4-node ARPANET diagram

1969
• The team calls itself the Network Working
Group (RFC 10), and comes to see its job as the
development of a ‘protocol,’ the collection of
programs that comes to be known as NCP
(Network Control Protocol).
• The second site is the Stanford Research
Institute (SRI), where Doug Engelbart saw the
ARPA experiment as an opportunity to explore
wide-area distributed collaboration, using his
NLS system, a prototype ‘digital library.’ SRI
supported the Network Information Center, led
by Elizabeth (Jake) Feinler and Don Nielson.

1970
• Nodes are added to the ARPANET at the
rate of one per month.
• Bob Metcalfe builds a high-speed (100 Kbps) network interface connecting the MIT IMP and a PDP-6 to the ARPANET.
It runs for 13 years without human
intervention. Metcalfe goes on to build
another ARPANET interface for Xerox
PARC’s PDP-10 clone (MAXC).

1970
• DEC announces the Unibus
for its PDP-11
minicomputers to allow the
addition and integration of
myriad computer-cards for
instrumentation and
communications.
• In December, the Network
Working Group (NWG) led
by Steve Crocker finishes
the initial ARPANET Host-to-Host protocol, called the Network Control Protocol
(NCP).
DEC's PDP-11

1971
• The ARPANET begins the
year with 14 nodes in
operation. BBN modifies and
streamlines the IMP design
so it can be moved to a less
cumbersome platform than
the DDP-516. BBN also
develops a new platform,
called a Terminal Interface
Processor (TIP) which is
capable of supporting input
from multiple hosts or
terminals.
ARPANET map, 1971

1971
Logical map of the ARPANET, April 1971

1971
• The Network Working Group completes the Telnet
protocol and makes progress on the file transfer
protocol (FTP) standard. At the end of the year, the
ARPANET contains 19 nodes as planned.
• Many small projects are carried out across the new
network, including the demonstration of an aircraft-carrier landing simulator. However, the overall traffic
is far lighter than the network’s capacity. Something
needs to stimulate the kind of collaborative and
interactive atmosphere consistent with the original
vision. Larry Roberts and Bob Kahn decide that it is
time for a public demonstration of the ARPANET.
They choose to hold this demonstration at the
International Conference on Computer
Communication (ICCC) to be held in Washington, DC,
in October 1972.

1972
• The ARPANET grows by ten more nodes in the first 10
months of 1972. The year is spent finishing, testing and
releasing all the network protocols, and developing
network demonstrations for the ICCC.
• At BBN, Ray Tomlinson writes a program to enable
electronic mail to be sent over the ARPANET. It is
Tomlinson who develops the ‘user@host’ convention,
choosing the @ sign arbitrarily from the non-alphabetic
symbols on the keyboard. Unbeknownst to
him, @ is already in use as an escape character,
prompt, or command indicator on many other systems.
Other networks will choose other conventions,
inaugurating a long period known as the e-mail
‘header wars.’ Not until the late 1980s will ‘@’ finally
become a worldwide standard.
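Mechanically the convention could hardly be simpler, which is part of why it survived: everything before the first '@' names the user, everything after it names the host. A tiny illustrative C sketch follows; the sample address is invented, not a historical one.

/* Split a user@host address at the first '@'; the address is invented. */
#include <stdio.h>
#include <string.h>

int main(void) {
    char addr[] = "ken@research-pdp11";
    char *at = strchr(addr, '@');    /* locate the separator */
    if (at == NULL) {
        puts("not a user@host address");
        return 1;
    }
    *at = '\0';                      /* cut the string in two at the '@' */
    printf("user: %s\nhost: %s\n", addr, at + 1);
    return 0;
}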

1973
• Thirty institutions are
connected to the
ARPANET. The network
users range from
industrial installations
and consulting firms like
BBN, Xerox PARC and
the MITRE Corporation,
to government sites like
NASA’s Ames Research
Laboratories, the
National Bureau of
Standards, and Air Force
research facilities.
ARPANET Map, 1973

1973
• The ICCC demonstrations prove packet switching a viable technology, and ARPA (now
DARPA, where the ‘D’ stands for ‘Defense’)
looks for ways to extend its reach. Two new
programs begin: Packet Radio sites are
modeled on the ALOHA experiment at the
University of Hawaii designed by Norm
Abramson, connecting seven computers on four
islands; and a satellite connection enables
linking to two foreign sites in Norway and the
UK.

1973
• Bob Kahn moves from BBN to
DARPA to work for Larry Roberts,
and his first self-assigned task is the
interconnection of the ARPANET
with other networks. He enlists
Vint Cerf, who has been teaching at
Stanford. The problem is that
ARPANET, radio-based PRnet,
and SATNET all have different
interfaces, packet sizes, labeling,
conventions and transmission rates.
Linking them together is very
difficult.
Bob Kahn

1973
• Kahn and Cerf set about
designing a net-to-net
connection protocol. Cerf
leads the newly formed
International Network
Working Group. In
September 1973, the two give
their first paper on the new
Transmission Control
Protocol (TCP) at an INWG
meeting at the University of
Sussex in England.
Vint Cerf

1973
• Meanwhile, at Xerox PARC, Bob
Metcalfe is working on a wire-based
system modeled on ALOHA protocols for
Local Area Networks (LANs). It will
become Ethernet.

1974
• Daily traffic on the ARPANET exceeds 3
million packets. DARPA funds three contracts,
one at Stanford (Cerf and his students), one at
BBN (directed by e-mail inventor Ray
Tomlinson), and one at University College
London (directed by Peter Kirstein) to develop
and implement the Kahn-Cerf TCP protocol.
Their presentation is published as A Protocol for Packet Network Intercommunication in May 1974 in the IEEE Transactions on Communications.
• Ethernet is demonstrated by networking Xerox
PARC’s new Alto computers.

1974
• BBN recruits Larry Roberts to direct a new venture, called Telenet, which is the first public packet-switched service. Roberts' departure creates a crisis in the DARPA IPTO office.
• ARPA has fulfilled its initial mission. Discussions about divesting DARPA of operational responsibility for the network are held. Because it is DARPA-funded, BBN has no exclusive right to the source code for the IMPs. Telenet and other new networking enterprises want BBN to release the source code. BBN argues that it is always changing the code and that it has recently undergone a complete rewrite at the hands of John McQuillan. Their approach makes Roberts' task of finding a new director for IPTO difficult. J.C.R. Licklider agrees to return to IPTO from MIT on a temporary basis.

1974
• In addition to DARPA, the National Science Foundation (NSF) is actively supporting computing
and networking at almost 120 universities. The
largest NSF installation is at the National Center for
Atmospheric Research (NCAR) in Boulder, Colorado.
There, scientists use a home-built ‘remote job entry’
system to connect to NCAR’s CDC 7600 from major
universities.

1975
• The ARPANET
geographical map now
shows 61 nodes. Licklider
arranges its administration
to be turned over to the
Defense Communications
Agency (DCA). BBN
remains the contractor
responsible for network
operations. BBN agrees to release the source code for the IMPs and TIPs.
ARPANET Map, 1975

1975
• The Network Working Group maintains its open
system of discussion via RFCs and e-mail lists.
Discomfort grows with the bureaucratic style of DCA.
• The Department of Energy creates its own net to
support its own research. This net operates over
dedicated lines connecting each site to the computer
centers at the National Laboratories.
• NASA begins planning its own space physics network,
SPAN. These networks have connections to the
ARPANET so the newly developed TCP protocol
begins to get a workout. Internally, however, the new
networks use such a variety of protocols that true
interoperability is still an issue.

1976
• DARPA supports computer scientists at
UC Berkeley who are revising a Unix
system to incorporate TCP/IP protocols.
Berkeley Unix also incorporates a second
set of Bell Labs protocols, called UUCP,
for systems to use dial-up connections.
• Vint Cerf moves from Stanford to
DARPA to work with Bob Kahn on
networking and the TCP/IP protocols.

1977
• Cerf and Kahn mount a major
demonstration, ‘internetting’
among the Packet Radio net,
SATNET, and the ARPANET.
Messages go from a van in the
Bay Area across the US on
ARPANET, then to University
College London and back via
satellite to Virginia, and back
through the ARPANET to the
University of Southern
California’s Information
Sciences Institute. This shows its
applicability to international
deployment.
Diagram of the Multinetwork
Demonstration

1977
• Larry Landweber of the University of
Wisconsin creates THEORYNET
providing email between over 100
researchers and linking elements of the
University of Wisconsin in different cities
via a commercial packet service like
Telenet.

1978
• The appearance of the first very small computers and
their potential for communication via modem to dial
up services starts a boom in a new set of niche
industries, like software and modems.
• Vint Cerf at DARPA continues the vision of the
Internet, forming an International Cooperation Board
chaired by Peter Kirstein of University College
London, and an Internet Configuration Control Board,
chaired by Dave Clark of MIT.
• The ARPANET experiment is formally complete. This
leaves an array of boards and task forces over the next
few years trying to sustain the vision of a free and
open Internet that can keep up with the growth of
computing.

1979
• Larry Landweber at Wisconsin
holds a meeting with six other
universities to discuss the
possibility of building a Computer
Science Research Network to be
called CSNET. Bob Kahn attends
as an advisor from DARPA, and
Kent Curtis attends from NSF’s
computer research programs. The
idea evolves over the summer
between Landweber, Peter
Denning (Purdue), Dave Farber
(Delaware), and Tony Hearn
(Utah).
Cover of COMPUTER
Magazine from
September 1979

1979
• In November, the group submits a proposal to
NSF to fund a consortium of eleven universities
at an estimated cost of $3 million over five
years. This is viewed as too costly by the NSF.
• USENET starts as a series of shell scripts written by Steve Bellovin at UNC to help communicate with Duke. Newsgroups start with a name that gives an idea of their content. USENET is an early example of a client-server arrangement: users dial in to a server with requests to forward certain newsgroup postings, and the server then 'serves' the request.

1980
• Landweber’s proposal has many enthusiastic
reviewers. At an NSF-sponsored workshop, the idea is
revised in a way that both wins approval and opens
up a new epoch for NSF itself. The revised proposal
includes many more universities. It proposes a three-tiered structure involving ARPANET, a TELENET-based system, and an e-mail only service called
PhoneNet. Gateways connect the tiers into a seamless
whole. This brings the cost of a site within the reach of
the smallest universities. Moreover, NSF agrees to
manage CSNET for two years, after which it will turn
it over to the University Corporation for Atmospheric
Research (UCAR), which is made up of more than 50
academic institutions.

1980
• The National Science Board approves the new
plan and funds it for five years at a cost of $5
million. Since the protocols for interconnecting
the subnets of CSNET include TCP/IP, NSF
becomes an early supporter of the Internet.
• NASA has ARPANET nodes, as do many
Department of Energy (DOE) sites. Now
several Federal agencies support the Internet,
and the number is growing.

1981
• By the beginning of the year, more than 200
computers in dozens of institutions have been
connected in CSNET. BITNET, another
startup network, is based on protocols that
include file transfer via e-mail rather than by
the FTP procedure of the ARPA protocols.
• The Internet Working Group of DARPA
publishes a plan for the transition of the entire
network from the Network Control Protocol
to the TCP/IP protocols developed since 1974
and already in wide use (RFC 801).
• At Berkeley, Bill Joy incorporates the new
TCP/IP suite into the next release of the Unix
operating system.
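The networking interface that grew out of this Berkeley work survives today as the 'Berkeley sockets' API. A minimal TCP client in that style is sketched below; it is illustrative rather than original 4.2BSD code, and it assumes an echo service is listening on the loopback address.

/* Minimal sketch of a TCP client using the Berkeley sockets API.
 * Assumes something is listening on 127.0.0.1 port 7 (echo). */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* TCP (stream) socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;                    /* IPv4 address + port */
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7);
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }

    const char msg[] = "hello, ARPANET\n";      /* send one line */
    write(fd, msg, sizeof msg - 1);

    char buf[128];                              /* read the echo back */
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; printf("received: %s", buf); }

    close(fd);
    return 0;
}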

1982
• The period during which ad hoc networking systems
have flourished has left TCP/IP as only one contender
for the title of ‘standard.’ Indeed, the International Organization for Standardization (ISO) has written and is
pushing ahead with a ‘reference’ model of an
interconnection standard called Open Systems
Interconnection (OSI) — already adopted in
preliminary form for interconnecting DEC
equipment. But while OSI is a standard existing for
the most part on paper, the combination of TCP/IP
and the local area networks created with Ethernet
technology are driving the expansion of the living
Internet.
• Digital Communications Associates introduces the
first coaxial cable interface for micro-to-mainframe
communications.

1983
• In January, the ARPANET standardizes on the
TCP/IP protocols adopted by the Department
of Defense (DOD). The Defense
Communications Agency decides to split the
network into a public ‘ARPANET’ and a
classified ‘MILNET,’ with only 45 hosts
remaining on the ARPANET. Jon Postel issues
an RFC assigning numbers to the various
interconnected nets. Barry Leiner takes Vint
Cerf’s place at DARPA, managing the Internet.

1983
Internet Topographic Map, 1983

1983
• Numbering the Internet hosts and
keeping tabs on the host names simply
fails to scale with the growth of the
Internet. In November, Jon Postel and
Paul Mockapetris of USC/ISI and Craig
Partridge of BBN develop the Domain
Name System (DNS) and recommend the
use of the now familiar
user@host.domain addressing system.
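That hierarchy is what every resolver walks today. A minimal illustrative sketch using the standard POSIX getaddrinfo interface follows; 'example.com' is a placeholder name, not one from the original RFCs.

/* Minimal sketch of a DNS lookup through the standard POSIX resolver
 * interface; the hostname is a placeholder. */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* accept IPv4 or IPv6 records */
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo("example.com", NULL, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "lookup failed: %s\n", gai_strerror(err));
        return 1;
    }

    /* Walk the returned records and print each address as text. */
    for (p = res; p != NULL; p = p->ai_next) {
        char host[NI_MAXHOST];
        if (getnameinfo(p->ai_addr, p->ai_addrlen, host, sizeof host,
                        NULL, 0, NI_NUMERICHOST) == 0)
            printf("%s\n", host);
    }
    freeaddrinfo(res);
    return 0;
}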

1983
• The number of computers connected via these hosts is
much larger, and the growth is accelerating with the
commercialization of Ethernet.
• Having incorporated TCP/IP into Berkeley Unix, Bill
Joy is key to the formation of Sun Microsystems. Sun
develops workstations that ship with Berkeley Unix
and feature built-in networking. At the same time, the
Apollo workstations ship with a special version of a
token ring network.
• In July 1983, an NSF working group, chaired by Kent
Curtis, issues a plan for ‘A National Computing
Environment for Academic Research’ to remedy the
problems noted in the Lax report. Congressional
hearings result in advice to NSF to undertake an even
more ambitious plan to make supercomputers available
to US scientists.

1984
• The newly developed DNS is introduced across
the Internet, with the now familiar domains
of .gov, .mil, .edu, .org, .net, and .com. A
domain called .int, for international entities, is
not much used. Instead, hosts in other countries
take a two-letter domain indicating the country.
The British JANET explicitly announces its
intention to serve the nation’s higher education
community, regardless of discipline.

749
1984
• Most important for the Internet, NSF issues a
request for proposals to establish
supercomputer centers that will provide access
to the entire U.S. research community,
regardless of discipline and location. A new
division of Advanced Scientific Computing is
created with a budget of $200 million over five
years.
• Datapoint, the first company to offer
networked computers, continues in the
marketplace, but fails to achieve critical mass.
1985
NSF announces the award of five supercomputing
center contracts:
1. Cornell Theory Center (CTC), directed by Nobel laureate Ken Wilson;
2. The John von Neumann Center (JVNC) at Princeton, directed by computational fluid dynamicist Steven Orszag;
3. The National Center for Supercomputing Applications (NCSA), directed at the University of Illinois by astrophysicist Larry Smarr;
4. The Pittsburgh Supercomputing Center (PSC), sharing locations at Westinghouse, the University of Pittsburgh, and Carnegie Mellon University, directed by Michael Levine and Ralph Roskies;
5. The San Diego Supercomputer Center (SDSC), on the campus of the University of California, San Diego, and administered by the General Atomics Company under the direction of nuclear engineer Sid Karin.

751
1985
• By the end of 1985, the number of hosts
on the Internet (all TCP/IP
interconnected networks) has reached
2,000.
• MIT Press translates and publishes Computers
and Communications by Dr. Koji Kobayashi,
the Chairman of NEC. Dr. Kobayashi, who
joined NEC in 1929, articulates his clear
vision of ‘C & C’, the integration of
computing and communication.

752
1986
• The 56Kbps backbone between the NSF centers
leads to the creation of a number of regional
feeder networks - JVNCNET, NYSERNET,
SURANET, SDSCNET and BARRNET -
among others. With the backbone, these
regionals start to build a hub-and-spoke
infrastructure. This growth in the number of
interconnected networks drives a major
expansion of the community, including the DOE,
DOD and NASA.
• Between the beginning of 1986 and the end of
1987 the number of hosts grows from 2,000
to nearly 30,000.

753
1986
• TCP/IP is available on workstations and PCs such as the
newly introduced Compaq portable computer. Ethernet
is becoming accepted for wiring inside buildings and
across campuses. Each of these developments drives the
introduction of terms such as bridging and routing and
the need for readily available information on TCP/IP in
workshops and manuals. Companies such as Proteon,
SynOptics, Banyan, Cabletron, Wellfleet, and Cisco
emerge with products to feed this explosion.
• At the same time, other parts of the U.S. Government
and many of the traditional computer vendors mount an
effort, in the form of the Corporation for Open Systems,
to validate their products built to the theoretical OSI
specifications.
• USENET starts a major shakeup which becomes known
as the ‘Great Renaming’. A driving force is that, because
many messages travel over the ARPANET, desirable
new newsgroups such as ‘alt.sex’ and ‘alt.drugs’ are not
allowed.

754
1987
• The NSF, realizing the rate and commercial
significance of the growth of the Internet, signs a
cooperative agreement with Merit Network, which
is assisted by IBM and MCI. Rick Adams co-founds
UUNET to provide commercial access to
UUCP and the USENET newsgroups, which are
now available for the PC. BITNET and CSNET
also merge to form CREN.
• The NSF starts to implement its T1 backbone
between the supercomputing centers, with 24 IBM
RT PCs in parallel implemented as ‘parallel
routers’. The T1 idea is so successful that
proposals for T3 speeds in the backbone begin.

755
1987
• In early 1987 the number of hosts passes
10,000 and by year-end there have been
over 1,000 RFCs issued.
• Network management starts to become a
major issue and it becomes clear that a
protocol is needed between routers to
allow remote management. SNMP is
chosen as a simple, quick, near-term
solution.

756
1987
Internet Map, 1987
NSFNet Map, 1987

757
1988
• The upgrade of the NSFNET backbone to T1 completes, and the Internet starts to become more international with the connection of Canada, Denmark, Finland, France, Iceland, Norway and Sweden.
NSFNet T-1 Backbone Map, 1988

758
1988
• In the US more regionals spring up - Los Nettos and
CERFnet, both in California. In addition, FidoNet, a
popular traditional bulletin board system (BBS), joins
the net.
• Dan Lynch organizes the first Interop commercial
conference in San Jose for vendors whose TCP/IP
products interoperate reliably. 50 companies make the
cut and 5,000 networkers come to see it all running, to
see what works, and to learn what doesn’t work.
• The US Government pronounces that its Government
OSI Profile (GOSIP) is to be supported in all products
purchased for government use, and states that TCP/IP
is an interim solution!
• The Morris worm burrows into 6,000 of the 60,000
hosts now on the network. This is the first worm
experience, and DARPA forms the Computer
Emergency Response Team (CERT) to deal with future
such incidents.

759
1989
• The number of hosts increases from 80,000 in
January to 130,000 in July to over 160,000 in
November!
• Australia, Germany, Israel, Italy, Japan,
Mexico, Netherlands, New Zealand and the
United Kingdom join the Internet.
• Commercial e-mail relays start between
MCI Mail (through CNRI) and CompuServe
(through Ohio State). The Internet Architecture
Board reorganizes again, reforming the IETF
and the IRTF.

760
1989
• Networks speed up. NSFNET T3 (45Mbps)
nodes operate. At Interop, 100Mbps LAN
technology, known as FDDI, interoperates
among several vendors. The telephone
companies start to work on their own wide-area
packet-switching service at higher speeds,
calling it SMDS.
• Bob Kahn and Vint Cerf at CNRI hold the first
Gigabit (1000Mbps) Testbed workshops with
funding from ARPA and NSF. Over 600 people
from a wide range of industry, government and
academia attend to discuss the formation of 6
gigabit testbeds across the country.

761
1989
• In Switzerland at CERN, Tim Berners-
Lee addresses the issue of the constant
change in the currency of information
and the turnover of people on projects.
Instead of a hierarchical or keyword
organization, Berners-Lee proposes a
hypertext system that will run across the
Internet on different operating systems.
This was the World Wide Web.

762
1989
Tim Berners-Lee
Berners-Lee's diagram describing 'hypertext'

763
1990
• ARPANET formally shuts down. In twenty years, ‘the
net’ has grown from 4 to over 300,000 hosts. Countries
connecting in 1990 include Argentina, Austria, Belgium,
Brazil, Chile, Greece, India, Ireland, South Korea,
Spain, and Switzerland.
• Several search tools, such as Archie, Gopher, and
WAIS, start to appear. Institutions like the National
Library of Medicine, Dow Jones, and Dialog are now
online.
• More ‘worms’ burrow on the net, with as many as 130
reports leading to 12 real ones! This is a further
indication of the transition to a wider audience.

764
1991
• The net’s dramatic growth continues with NSF lifting
restrictions on commercial use. Interchanges form
with popular providers such as UUNET and PSInet.
Congress passes the Gore Bill to create the National
Research and Education Network (NREN) initiative.
In another sign of popularity, privacy becomes an
‘issue,’ with proposed solutions such as PGP (Pretty
Good Privacy).
• The NSFNET backbone upgrades to T3, or 45 Mbps.
Total traffic exceeds 1 trillion bytes, or 10 billion
packets, per month! Over 100 countries are now
connected, with over 600,000 hosts and nearly 5,000
separate networks.
• WAIS and Gopher servers help meet the challenge of
searching for information throughout this exploding
infrastructure of computers.

765
1991
T-3 Network Map, 1991

766
1992
• The Internet becomes such a part of the computing
establishment that a professional society forms to
guide it on its way. The Internet Society (ISOC), with
Vint Cerf and Bob Kahn among its founders, validates
the coming of age of internetworking and its
pervasive role in the lives of professionals in developed
countries. The IAB and its supporting committees
become part of ISOC.
• The number of networks exceeds 7,500 and the
number of computers connected passes 1,000,000. The
MBONE for the first time carries audio and video.
The challenge to the telephone network’s dominance
as the basis for communicating between people is seen
for the first time; the Internet is no longer just for
machines to talk to each other.

767
1992
• During the summer, students at NCSA in
Champaign-Urbana modify Tim Berners-Lee’s
hypertext proposal. In a few weeks Mosaic is
born within the campus. Larry Smarr shows it
to Jim Clark, who founds Netscape as a result.
• The WWW bursts into the world and the
growth of the Internet explodes like a
supernova. What had been doubling each year
now doubles in three months. What began as
an ARPA experiment has, in little more than
two decades, become a part of the world’s
popular culture.

768
The World Wide Web (WWW)
• The World Wide Web is a network of
sites that can be searched and retrieved
by a special protocol known as a
Hypertext Transfer protocol (HTTP).
The protocol simplified the writing of
addresses and automatically searched the
internet for the address indicated and
automatically called up the document for
viewing.
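As a rough illustration of that retrieval step, the sketch below performs the basic HTTP exchange just described using Python's standard http.client module; the host name is a documentation placeholder, not one from the text.

```python
# Minimal sketch of an HTTP GET: connect to a host, ask for a document,
# and read the reply. "example.org" is a placeholder documentation host.
import http.client

conn = http.client.HTTPConnection("example.org", 80)
conn.request("GET", "/")                 # request the document at path "/"
response = conn.getresponse()            # status line, headers, then the body
print(response.status, response.reason)  # e.g. 200 OK
document = response.read()               # the document, ready for viewing
conn.close()
```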

769
The WWW Proposal (schematized) (1989)

770
1992
• Lots of different sort of programs use the
Internet: electronic mail, for example, was
around long before the global hypertext system
I invented and called the World Wide Web
('Web). Now, videoconferencing and streamed
audio channels are among other things which,
like the Web, encode information in different
ways and use different languages between
computers ("protocols") to do provide a service.

771
1992
• The Web is an abstract (imaginary) space of
information. On the Net, you find computers;
on the Web, you find documents, sounds,
videos, ... information. On the Net, the
connections are cables between computers; on
the Web, connections are hypertext links. The
Web exists because of programs which
communicate between computers on the Net.
The Web could not be without the Net. The
Web made the Net useful because people are
really interested in information (not to mention
knowledge and wisdom!) and don't really want
to have to know about computers and cables.

772
1993
• InterNIC created by NSF to provide specific Internet
services: directory and database services (by AT&T),
registration services (by Network Solutions Inc.), and
information services (by General Atomics/CERFnet).
• In 1993 Marc Andreessen of NCSA (National Center for
Supercomputing Applications, Illinois) launched
Mosaic X. It was easy to install, easy to use and,
significantly, backed by 24-hour customer support. It
also enormously improved the graphic capabilities (by
using 'in-line imaging' instead of separate boxes) and
introduced many of the features that are familiar to you
through the browsers you are using to view these
pages, such as Netscape (the successor company
established by Andreessen to exploit Mosaic) and Bill
Gates' Internet Explorer.
• Backbones: 45Mbps (T3) NSFNET, private
interconnected backbones consisting mainly of 56Kbps,
1.544Mbps, and 45Mbps lines, plus satellite and radio
connections - Hosts: 2,056,000

773
1994
• No major changes were made to the physical network.
The most significant thing that happened was the
growth. Many new networks were added to the NSF
backbone.Hundreds of thousands of new hosts were
added to the INTERNET during this time period.
• There were 3,2 mln hosts and 3,000 web-sites. Twelve
months later the number of hosts had doubled and the
number of web-sites had climbed to 25,000. By the end
of the next year the number of host computers had
doubled again, and the number of web-sites had
increased by more than ten-fold.
• Backbones: 145Mbps (ATM) NSFNET, private
interconnected backbones consisting mainly of 56Kbps,
1.544Mbps, and 45Mpbs lines, plus satellite and radio
connections - Hosts: 3,864,000

774
1995
• The National Science Foundation announced that as of
April 30, 1995 it would no longer allow direct access to
the NSF backbone. The National Science
Foundationcontracted with four companies that would
be providers of access to the NSF backbone (Merit).
These companies would then sell connections to groups,
organizations, and companies.
• $50 annual fee is imposed on domains, excluding .edu
and .gov domains which are still funded by the
National Science Foundation.
• Backbones: 145Mbps (ATM) NSFNET (now private),
private interconnected backbones consisting mainly of
56Kbps, 1.544Mbps, 45Mpbs, 155Mpbs lines in
construction, plus satellite and radio connections -
Hosts: 6,642,000

775
1996
• Most Internet traffic is carried by
backbones of independent ISPs, including
MCI, AT&T, Sprint, UUNET, BBN Planet,
ANS, and more.
• Backbones: 145Mbps (ATM) NSFNET
(now private), private interconnected
backbones consisting mainly of 56Kbps,
1.544Mbps, 45Mbps, and 155Mbps lines,
plus satellite and radio connections -
Hosts: over 15,000,000, and growing
rapidly

776
1996
• Currently the Internet Society, the group
that coordinates the development of Internet
standards, is working on a new version of
TCP/IP (IPv6) able to provide vastly more
addresses than the limited 32-bit system of
today. The problem that has arisen is that it
is not known how both the old and the new
addressing systems will be able to work at the
same time during a transition period.
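To put rough numbers on the two address spaces, here is a small sketch using Python's standard ipaddress module; both addresses are documentation-range examples, not values from the text.

```python
# Contrast the 32-bit IPv4 space with the 128-bit IPv6 space.
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")    # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")  # documentation-range IPv6 address

print(f"IPv4 addresses: {2**32:,}")       # about 4.3 billion
print(f"IPv6 addresses: {2**128:,}")      # about 3.4 x 10^38
print(v4.version, v6.version)             # 4 6
```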

777
1996
• The WWW browser war begins, fought
primarily between Netscape and
Microsoft. It ushers in a new age in
software development, whereby new
releases are made quarterly with the help
of Internet users eager to test upcoming
(beta) versions.

778
1997
• The American Registry for Internet Numbers
(ARIN) is established to handle administration
and registration of IP numbers for the
geographical areas currently handled by
Network Solutions (InterNIC), starting in
March 1998.
• Early in the morning of 17 July, human error
at Network Solutions causes the DNS table
for .com and .net domains to become corrupted,
making millions of systems unreachable.

779
1997
• FUNET-TV launched with 'multicasts' of
Web University lectures from CERN and
the information society forum Studia
Generalia.
• The World Wide Web Consortium
publishes version 4.0 of the HTML
language used to create web pages. This
includes multimedia features, Unicode
support for displaying the world's
various languages, and features that help
people with disabilities use the Net.

780
1997
• The Internet2 project is announced in the US to
develop, within two years, new Internet services
for the research community, such as interactive
TV, videoconferencing and remote presence for
teaching and research. For this collaboration,
the research community began to construct
new Internet connections, which initially ran at
622 Mbit/s, increasing to 2.4 Gbit/s at the
beginning of 1999.

781
1998
• FUNET sets up 155 Mbit/s ATM
connections to all Finnish universities. Links to
other countries are also upgraded to 155 Mbit/s.
The annual increase in traffic abroad has been
relatively steady, at 150%, at least for
FUNET and NORDUnet.
• FUNET-TV begins video-on-demand
transmissions using a media server, which is
capable, if required, of transmitting digital TV
images at several megabits per second.

782
1998
• NORDUnet launches the Nordunet2
development project and enters into an
agreement with the USA's Internet2
project on collaboration to develop new
Internet services.
• The World Wide Web Consortium
releases the specification for XML
(Extensible Markup Language) version
1.0, which will make it possible to extend
future web pages with custom, self-describing
markup.
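As a brief illustration of what XML 1.0 enables, the sketch below defines and parses a tiny custom vocabulary using Python's standard library; the catalog/book tags are invented for this example.

```python
# Parse a tiny, made-up XML vocabulary: XML lets authors invent their own
# tags (here <catalog>, <book>, <title>) rather than being limited to HTML's.
import xml.etree.ElementTree as ET

doc = """<catalog>
  <book year="1998">
    <title>Extensible Markup Language (XML) 1.0</title>
  </book>
</catalog>"""

root = ET.fromstring(doc)
for book in root.findall("book"):
    print(book.get("year"), book.findtext("title"))
```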

783
2003
• July: Caspian Networks introduces first
flow router where all TCP/IP flows are
managed to control rate and delay,
finally obtaining true Quality of Service
(QoS) using TCP/IP IPv4. The reduced
cost of memory permitted flow state to be
maintained for the duration of every flow
with no loss of scalability. This also
permitted P2P traffic to be controlled so
as not to use excessive network
resources.
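As a rough, hypothetical sketch of the per-flow state a flow router keeps (not Caspian's actual design), the code below classifies packets by their TCP/IP 5-tuple and accumulates a record for each flow, which a scheduler could then use to control rate and delay per flow.

```python
# Hypothetical sketch of per-flow state in a flow router: each flow is
# identified by its 5-tuple, and a record is kept for the flow's lifetime
# so rate and delay can be controlled individually.
from dataclasses import dataclass

@dataclass
class FlowState:
    packets: int = 0
    bytes: int = 0
    rate_limit_bps: int = 1_000_000  # illustrative per-flow rate cap

flow_table: dict = {}  # 5-tuple -> FlowState

def on_packet(src_ip, dst_ip, src_port, dst_port, proto, length):
    key = (src_ip, dst_ip, src_port, dst_port, proto)  # the 5-tuple
    state = flow_table.setdefault(key, FlowState())    # created on first packet
    state.packets += 1
    state.bytes += length
    return state  # a scheduler could now rate-limit this flow individually

on_packet("10.0.0.1", "10.0.0.2", 12345, 80, "tcp", 1500)
```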

784
2004
• August: The Telecommunications
Industry Association (TIA) approves a packet
header option for IPv6 the permits IP to fully
specify the QoS (Guaranteed Rate, Available
Rate, Delay, Burst Tolerance, and Precedence)
for each flow in the first packet. This permits
TCP to jump to the maximum rate the network
can support after one RTT thus improving
TCP performance by 10:1. It also allows voice
and video flows to be established with a
guaranteed rate, loss, and delay. This permits
IPv6 to support all the QoS that was available
in ATM. 
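To see where a roughly 10:1 figure could come from, here is a back-of-the-envelope sketch, with assumed illustrative numbers, comparing classic TCP slow start (which doubles its window every round trip) against a sender that learns the supportable rate in a single RTT.

```python
# Back-of-the-envelope comparison: slow start doubles the window every RTT,
# so reaching a large target window takes many round trips; an explicit
# rate signal in the first packet would need roughly one. The numbers
# below are illustrative assumptions, not values from the text.
import math

rtt_ms = 50              # assumed round-trip time
target_window = 1000     # assumed window, in segments, the path can support

slow_start_rtts = math.ceil(math.log2(target_window)) + 1
print(f"slow start: ~{slow_start_rtts} RTTs = {slow_start_rtts * rtt_ms} ms")
print(f"explicit rate: ~1 RTT = {rtt_ms} ms")
# the ratio here is about 11:1, in the neighborhood of the claimed 10:1
```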
