
AsseenonTV – The History of Domain Names

AsSeenOnTv.com sold for $5.1 million

Date: 01/01/2000

Asseenontv.com – This domain was purchased by LA Group in 2000 for $5.1 million. The purchase was made to protect LA Group’s “As Seen On TV” brand, making it the biggest domain deal done purely for defensive purposes.

AtlanticMedia – The History of Domain Names

The Atlantic buys QZ.com for new publication

September 24, 2012

The Atlantic today launched Quartz, a publication targeted to upscale readers.

What does this have to do with domain names, other than the decidedly upscale audience that reads Domain Name Wire?

The company bought QZ.com for the site.

Atlantic Media Incorporated bought the domain name around April this year according to historical whois records.

It sold for $76,000 at a Moniker/TRAFFIC auction back in 2007. The buyer back then was a Domain Capital client, and the whois record for the domain name has been under Domain Capital ever since.

No word on what the domain name sold for this time around.

ATT – The History of Domain Names

ATT.com was registered

Date: 04/25/1986

On April 25, 1986, AT&T registered the att.com domain name, making it the 15th .com domain ever registered.

AT&T Inc. is an American multinational telecommunications conglomerate, headquartered at Whitacre Tower in downtown Dallas, Texas.  AT&T is the second largest provider of mobile telephone services and the largest provider of fixed telephone services in the United States, and also provides broadband subscription television services through DirecTV. AT&T is the third-largest company in Texas (the largest non-oil company, behind only ExxonMobil and ConocoPhillips, and also the largest Dallas company). As of May 2014, AT&T is the 23rd-largest company in the world as measured by a composite of revenues, profits, assets and market value, and the 16th-largest non-oil company. AT&T is the largest telecommunications company in the world by revenue. As of 2016, it is also the 17th-largest mobile telecom operator in the world, with 130.4 million mobile customers. AT&T was ranked at #6 on the 2015 rankings of the world’s most valuable brands published by Millward Brown Optimor.

AT&T Inc. began its existence as Southwestern Bell Corporation, one of seven Regional Bell Operating Companies (RBOCs) created in 1983 in the divestiture of the American Telephone and Telegraph Company (founded 1885, later AT&T Corp.) following the 1982 United States v. AT&T antitrust lawsuit. Southwestern Bell changed its name to SBC Communications Inc. in 1995. In 2005, SBC purchased former parent AT&T Corp. and took on its branding, with the merged entity naming itself AT&T Inc. and using the iconic AT&T Corp. logo and stock-trading symbol. The current AT&T reconstitutes much of the former Bell System and includes ten of the original 22 Bell Operating Companies, along with the original long distance division.

History

Prior to the 2005 purchase by SBC

AT&T can trace its origin back to the original Bell Telephone Company founded by Alexander Graham Bell after his invention of the telephone. One of that company’s subsidiaries was American Telephone and Telegraph Company (AT&T), established in 1885, which acquired the Bell Company on December 31, 1899 for legal reasons, leaving AT&T as the main company. AT&T established a network of subsidiaries in the United States and Canada that held a government-authorized phone service monopoly, formalized with the Kingsbury Commitment, throughout most of the twentieth century. This monopoly was known as the Bell System, and during this period, AT&T was also known by the nickname Ma Bell. For periods of time, the former AT&T was the world’s largest phone company. In 1982, US regulators broke up the AT&T monopoly, requiring AT&T to divest its regional subsidiaries and turning them each into individual companies. These new companies were known as Regional Bell Operating Companies, or more informally, Baby Bells. AT&T continued to operate long distance services, but as a result of this breakup, faced competition from new entrants such as MCI and Sprint. Southwestern Bell was one of the companies created by the breakup of AT&T. The architect of divestiture for Southwestern Bell was Robert G. Pope. The company soon started a series of acquisitions. These included the 1987 acquisition of Metromedia’s mobile business and the acquisition of several cable companies in the early 1990s. In the latter half of the 1990s, the company acquired several other telecommunications companies, including some Baby Bells, while selling its cable business. During this time, the company changed its name to SBC Communications. By 1998, the company was in the top 15 of the Fortune 500, and by 1999 the company was part of the Dow Jones Industrial Average (lasting through 2015).

Since 2005

In 2005, SBC purchased AT&T for $16 billion. After this purchase, SBC adopted the AT&T name and brand. The original 1885 AT&T still exists as the long-distance phone subsidiary of this company. Although the current AT&T as a corporate structure has only existed since 1983, the company has adopted the original AT&T’s history as its own. In September 2013, AT&T announced it would expand into Latin America through a collaboration with Carlos Slim’s América Móvil. On December 17, 2013, AT&T announced plans to sell its Connecticut wireline operations to Stamford-based Frontier Communications. Roughly 2,700 wireline employees supporting AT&T’s operations in Connecticut were expected to transfer with the business to Frontier, as well as 900,000 voice connections, 415,000 broadband connections, and 180,000 U-verse video subscribers.

On May 18, 2014, AT&T announced it had agreed to purchase DirecTV for $48.5 billion, or $67.1 billion including assumed debt. The deal was aimed at increasing AT&T’s market share in the pay-TV sector and giving AT&T access to fast-growing Latin American markets. The transaction closed in July 2015. The deal is subject to conditions for four years, including requirements that AT&T expand its fiber-optic broadband service to at least 12.5 million customer locations, not discriminate against other online video services using bandwidth caps, submit any “interconnection agreements” for government review, and offer low-cost internet service for low-income households. AT&T subsequently announced plans to converge its existing U-verse home internet and IPTV brands into a combined platform with DirecTV, tentatively known as AT&T Entertainment.

On November 7, 2014, AT&T announced its purchase of Iusacell to create a wider North American network. In January 2015, AT&T announced it would be acquiring the bankrupt Mexican wireless business of NII Holdings for around $1.875 billion. AT&T subsequently merged the two companies to create AT&T Mexico. On March 29, 2016, AT&T announced that it would increase data caps on its Internet service on May 23, 2016.

The integration of AT&T and DirecTV was set to begin by the fourth quarter of 2016. On September 19, 2016, AT&T announced that it would eliminate the “U-verse” brand and rename its broadband and phone services “AT&T Internet” and “AT&T Phone”.

Austin – The History of Domain Names

Austin.com sold to iEstates, LLC

March 27, 2012

The domain name Austin.com has been purchased by iEstates, LLC for an undisclosed price.

As an Austinite, I’ve paid close attention to Austin.com over the years. Previous attempts to develop it have fallen flat, and I feel that’s mostly because the previous owners took the wrong approach to developing and marketing it.

I asked iEstates, LLC owner David Wieland for the scoop on why he bought the domain and what he plans to do with it.

Australian GenericDomains – The History of Domain Names

AUSTRALIAN BUSINESSES SHOW INTEREST IN NEW GENERIC DOMAIN NAMES

November 27, 2011

A survey of 200 small and medium enterprise owners and CEOs from Australia, Singapore, and Hong Kong has found that nearly half have some degree of interest in registering a domain within a new gTLD related to their business.

Of those, registrations are likely to be higher among sole traders and small businesses.

The survey was carried out by Empirica Research and commissioned by ARI Registry, an Australian company with interests in seeing ICANN’s gTLD initiative come to fruition. Under ICANN’s plan, companies will be able to create extensions such as .brand or generic Top Level Domains (gTLDs) such as .shop from next year.

The survey also found that those who would be interested in registering a domain name under a relevant new gTLD would be prepared to pay a premium for the privilege: an average of 47% more than they currently pay for their yearly domain name renewal.

Extrapolating the survey findings to the 219,000 retail stores in Australia, the survey indicates that a fifth of Australian retail stores are either “very” or “extremely” likely to register a domain under a new .shop top-level domain, translating to a potential 43,800 retail stores.

One of the other interesting key findings is the perception by small and medium enterprises that registering such a domain name would give them an edge in search engine rankings, even though this remains to be seen. While registering a .com.au domain name can have some advantages in ranking in terms of geographic matching, the weight that Google gives to most other generic extensions such as .com, .net and .org has been debated for years.

While ICANN has come under increasing pressure not to roll out the new gTLD program, the .com.au extension certainly doesn’t appear to be under threat from the release of new gTLDs should that come to pass.

When Australia’s 2 millionth .au domain name was registered earlier this year, auDA Board Chair, The Hon Tony Staley, commented that .au is a “trusted, well-recognised space for all Australian businesses, organisations and individuals.”

Babelfish – The History of Domain Names

Babel Fish automatic translation

Date: 01/01/1997

Yahoo! Babel Fish was a free web-based multilingual translation application. In May 2012 it was replaced by Bing Translator, to which queries were redirected. Although Yahoo! transitioned its Babel Fish translation services to Bing Translator, it did not sell its translation application to Microsoft outright. As the oldest free online language translator, the service translated text or web pages between 38 languages, including English, Simplified Chinese, Traditional Chinese, Dutch, French, German, Greek, Italian, Japanese, Korean, Portuguese, Russian, and Spanish.

The internet service derived its name from the Babel fish, a fictional species in Douglas Adams’s book and radio series The Hitchhiker’s Guide to the Galaxy that could instantly translate languages. In turn, the name of the fictional creature refers to the biblical account of the confusion of languages that arose in the city of Babel.

History

On December 9, 1997, Digital Equipment Corporation and SYSTRAN S.A. launched AltaVista Translation Service at babelfish.altavista.com, which was developed by a team of researchers at Digital Equipment. In February 2003, AltaVista was bought by Overture Services, Inc. In July 2003, Overture itself was taken over by Yahoo!.

The web address for Babel Fish remained babelfish.altavista.com until May 9, 2008, when the address changed to babelfish.yahoo.com.

As of May 30, 2012, the web address changed yet again, this time redirecting babelfish.yahoo.com to www.microsofttranslator.com when Microsoft’s Bing Translator replaced Yahoo Babel Fish.

Yahoo! Babel Fish should not be confused with The Babel Fish Corporation founded by Oscar Jofre, which was operated at the URL www.babelfish.com (created in 1995).

As of June 2013, babelfish.yahoo.com no longer refers to the Microsoft Bing Translator. Instead, it refers directly back to the main Yahoo.com page.

BDM – The History of Domain Names

Braddock, Dunn & McDonald – BDM.com was registered

Date: 10/27/1986

On October 27, 1986, BDM registered the bdm.com domain name, making it the 30th .com domain ever registered.

Braddock, Dunn & McDonald, later known as BDM, then BDM International, was a technical services firm founded in 1959 in New York City. Its founders were Dr. Joseph V. Braddock, Dr. Bernard J. Dunn, and Dr. Daniel F. McDonald, who each received a PhD from Fordham University in the Bronx, New York. In 1997, TRW purchased BDM, and in 2002 Northrop Grumman bought TRW.

Move to Texas

Within a year of its founding, the company moved to El Paso, Texas, to be close to the U.S. Army’s Air Defense Center at Fort Bliss, Texas, the White Sands Missile Range in New Mexico, and Holloman Air Force Base, also in New Mexico. The founders offered their experience in missile guidance, applied optics, electronic instrumentation, and radiation physics to the U.S. Defense Department, primarily to the U.S. Army.

Williams hired

A few years later the three founders hired Earle Williams, an engineer with a degree from Auburn University in Alabama, who eventually became President and CEO. He led the company through a time of rapid growth and expansion. Among Williams’s most significant decisions was to move BDM to the Virginia suburbs of Washington, D.C., a few miles west of the Pentagon. That location offered the company a better opportunity to compete for defense contracts than it could from El Paso. The corporation broadened its client base to include other military services, the Joint Chiefs of Staff, and other U.S. government organizations. For the rest of its existence as a company it occupied a series of ever-larger office spaces in an unincorporated area known as Tysons Corner, Virginia, formed by the interchange of the newly completed Capital Beltway and Virginia Routes 7 and 123. Along with Western Union and Honeywell, BDM was one of the first firms to locate in the Westpark section of Tysons Corner, occupying buildings on Jones Branch Drive (ca. 1978) and Westbranch Drive (ca. 1980).

Rapid growth

The company grew rapidly, along with Tysons Corner. In the early 1960s Tysons Corner was a sleepy crossroads, but has since grown into a classic “edge city”, and a home of many government and military contractors. Williams promoted the area as a suitable place for technology-oriented firms. Tysons Corner and the surrounding towns became the home of many of BDM’s competitors, including Planning Research Corporation, DynCorp International, and CACI. Although all competed with BDM, in the buildup of defense budgets in the late 20th century, nearly all prospered. For a time, the press referred to these companies as “Beltway Bandits,” because of their location near (mostly Virginia) interchanges of the Washington, D.C. circumferential freeway. Employees of those companies, including BDM President Earle Williams, took offense to that term. As the Virginia-based defense contractors lost their independence and were absorbed by large aerospace giants, the term fell from use. Although the location of the headquarters of these defense contractors was part of an overall trend of movement to the suburbs beginning in the 1960s, BDM played a leading role in the specifics of this movement into the Virginia suburbs of Washington.

Community work

BDM’s executives were also active in the local community. President Earle Williams served as Director of Wolftrap Foundation for the Performing Arts. BDM’s Executive Vice President Stanley E. Harrison worked to strengthen the academic programs of George Mason University in Fairfax, VA. He later became the Provost of Shenandoah University in Winchester, VA.

Change in ownership

In 1997 BDM was purchased by TRW, an aerospace systems and technical services company, which in turn was acquired by Northrop Grumman in 2002. Northrop Grumman maintains a major corporate facility in Tysons Corner to this day.

Beenz – The History of Domain Names

Beenz.com domain name sells for $20,000

April 25, 2012

A European company, Beenz Europe BV, has purchased the domain name Beenz.com for $20,000 and placed a “coming soon” page on the web site. The domain purchase was made through domain aftermarket Afternic.com. Beenz.com was one of the most spectacular dot.com failures to lose its, um, beenz in the dot.com bust last decade.

Beenz rewarded web surfers with a virtual currency called beenz when they did things like visit certain web sites, buy things online, etc. These beenz could then be cashed in for merchandise at various online retailers.

After raising nearly $100 million in venture capital, the company wound down operations in the bust. The purchaser of its assets let the Beenz.com domain name expire and the domain transferred owners multiple times after that prior to the recent sale. (The fate of rival Flooz and its domain name Flooz.com has been similar. It’s currently parked.)

Archie – The History of Domain Names

Archie search engine from McGill University

Date: 01/01/1990

Archie is a tool for indexing FTP archives, allowing people to find specific files. It is considered to be the first Internet search engine. The original implementation was written in 1990 by Alan Emtage, then a postgraduate student at McGill University in Montreal, and Bill Heelan, who studied at Concordia University in Montreal and worked at McGill University at the same time.

The first search engine was developed as a school project by Alan Emtage, a student at McGill University in Montreal. Back in 1990, Alan created Archie, an index (or archives) of computer files stored on anonymous FTP sites in a given network of computers (“Archie” rather than “Archives” fit name length parameters – thus it became the name of the first search engine). In 1991, Mark McCahill, a student at the University of Minnesota, effectively used a hypertext paradigm to create Gopher, which also searched for plain text references in files.

Archie’s and Gopher’s searchable databases did not have the natural language keyword capabilities used in modern search engines. Rather, in 1993 the graphical Mosaic web browser improved upon Gopher’s primarily text-based interface. About the same time, Matthew Gray developed Wandex, the first search engine in the form that we know search engines today. Wandex’s technology was the first to crawl the web, indexing and searching the catalog of indexed pages on the web. Another significant development in search engines came in 1994 when WebCrawler’s search engine began indexing the full text of web sites instead of just web page titles.

The archie service began as a project for students and volunteer staff at the McGill University School of Computer Science in 1987, when Peter Deutsch (systems manager for the School), Emtage, and Heelan were asked to connect the School of Computer Science to the Internet. The earliest versions of Archie, written by Alan Emtage, simply contacted a list of FTP archives on a regular basis (contacting each roughly once a month, so as not to waste too many resources of the remote servers) and requested a listing. These listings were stored in local files to be searched using the Unix grep command.
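
Conceptually, that earliest version of Archie amounted to little more than periodically pulling directory listings from a list of anonymous FTP servers, saving them to flat files, and grepping those files on demand. The sketch below is a hypothetical, simplified illustration of that idea in Python; the host names and index file name are placeholders, not part of the historical system.

```python
import ftplib

# Hypothetical list of anonymous FTP archives to index (placeholders only).
ARCHIVES = ["ftp.example.edu", "ftp.example.org"]
INDEX_FILE = "archie_index.txt"

def fetch_listing(host):
    """Log in anonymously and return the server's top-level file listing."""
    with ftplib.FTP(host, timeout=30) as ftp:
        ftp.login()          # anonymous login
        return ftp.nlst()    # bare file names, like a saved 'ls'

def build_index(path=INDEX_FILE):
    """Write one 'host <tab> filename' line per file, the way early Archie
    stored listings in local files for later searching."""
    with open(path, "w") as out:
        for host in ARCHIVES:
            for name in fetch_listing(host):
                out.write(f"{host}\t{name}\n")

def search(term, path=INDEX_FILE):
    """A grep-like, case-insensitive substring scan over the stored listings."""
    with open(path) as index:
        return [line.rstrip() for line in index if term.lower() in line.lower()]

if __name__ == "__main__":
    build_index()                       # the monthly crawl, compressed in time
    print("\n".join(search("readme")))  # a user's query against the stored index
```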

Bill Heelan and Peter Deutsch wrote a script allowing people to log in and search collected information using the Telnet protocol at the host “archie.mcgill.ca”.  Later, more efficient front- and back-ends were developed, and the system spread from a local tool, to a network-wide resource, and a popular service available from multiple sites around the Internet. The collected data would be exchanged between the neighbouring Archie servers. The servers could be accessed in multiple ways: using a local client (such as archie or xarchie); telnetting to a server directly; sending queries by electronic mail; and later via a World Wide Web interface. At the zenith of its fame the Archie search engine accounted for 50% of Montreal Internet traffic.

In 1992, Emtage, along with Peter Deutsch and some financial help from McGill University, formed Bunyip Information Systems, the world’s first company expressly founded for and dedicated to providing Internet information services, with a licensed commercial version of the Archie search engine used by millions of people worldwide. Bill Heelan followed them into Bunyip soon after, where, together with Bibi Ali and Sandro Mazzucato, he was part of the so-called Archie Group. The group significantly updated the Archie database and indexed web pages. Work on the search engine ceased in the late 1990s.

The name derives from the word “archive” without the v. Alan Emtage has said that, contrary to popular belief, there was no association with the Archie Comics and that he despised them. Despite this, other early Internet search technologies such as Jughead and Veronica were named after characters from the comics. Anarchie, one of the earliest graphical FTP clients, was named for its ability to perform Archie searches.

A legacy Archie server is still maintained active for historic purposes in Poland at University of Warsaw’s Interdisciplinary Centre for Mathematical and Computational Modelling.

ARI-Registry – The History of Domain Names

ARI Registry Services picks up .sydney, .melbourne and .victoria new top level domains.

February 22, 2012

ARI Registry Services picks up three more new top level domains.

ARI Registry Services has won a contract with the governments of New South Wales and Victoria to provide registry services for .sydney, .melbourne and .victoria, the company announced today.

ARI definitely had a leg up in responding to the government’s RFP given that it’s located in Australia.

Ernst & Young and Melbourne IT will assist with developing the applications for the three top level domains.

Yesterday Melbourne IT announced that it is working on 120 new top level domain applications and expects this number to grow by the application deadline next month.

Arpanet Connects – The History of Domain Names

1969 October ARPANET Connects

Date: 10/29/1969

On October 29th 1969, two computers connected to form the ARPANET and launch the world’s first successful packet-switched wide area computer network. This first connection, in the form of a logon request, was sent to SRI International (then known as Stanford Research Institute) from the University of California, Los Angeles (UCLA). This remote access initiated a new, flexibly formed network structure for computer resource sharing. While not yet an internet, it did lay critical groundwork for the subsequent Internet and the dramatic changes in how we conduct business, communicate, socialize, learn, distribute knowledge, and travel. Internetworking began in 1977, when SRI also played a pivotal role in the first known connection of three dissimilar networks.

The U.S. Department of Defense Advanced Research Projects Agency (DARPA, then known as ARPA) began the work that led to the ARPANET in the mid 1960s. A major goal was to create a reliable computer network with built-in network redundancy that would provide reliable communications between its major nodes as well as remote access to these same computing resources, even when the network was subject to attack.

The initial ARPANET was a network of just four computers located at four different sites: first UCLA and SRI, followed by the University of California, Santa Barbara and the University of Utah. By 1972, the ARPANET comprised 37 computers. In the ensuing years it was opened to other research and development sites including other universities, research contractors, and government labs. Its usefulness as a platform for the new world of digital communications and information sharing soon became evident.

The first message on the ARPANET was sent by UCLA student programmer Charles S Kline at 10:30 pm on October 29, from the campus’ Boelter Hall to the Stanford Research Institute’s SDS 940 host computer.

The message text was meant to be the word “login,” but only the L and O were transmitted before the system crashed.

About an hour after the crash, the system was recovered and a full “login” message was sent as the second transmission.

The first permanent ARPANET link was established weeks later on November 21, 1969, between the IMP at UCLA and the IMP at the Stanford Research Institute. By December 5, 1969, the entire four-node network was established.

By 1975, ARPANET was declared “operational” and the Defense Communications Agency took control of it. In 1983, the ARPANET was split, with US military sites moved onto their own Military Network (MILNET) for unclassified Defense Department communications. The combination was called the Defense Data Network.

Arpanet Decommissioned – The History of Domain Names

ARPANET Decommissioned

Date: 01/01/1990

The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990. Around the same time, Mike Karels, Phil Almquist, and Paul Vixie took over maintenance of the BIND DNS software, and BIND was ported to the Windows NT platform in the early 1990s.

The ARPANET was decommissioned in 1990 after more than 20 years of experimentation and service. In those years the ARPANET dramatically demonstrated the feasibility and efficiency of packet switching communication, the desirability and productivity of resource sharing, and the value of open standards and collaborative research and development. The great success of computer communications owes a great deal to the vision and scientific and engineering skill of the ARPANET pioneers.

Arpanet Development – The History of Domain Names

The 1970’s ARPANET Development

Date: 01/01/1970

The ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet protocols and systems. RFC 1, entitled “Host Software”, was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.

The Development of the Arpanet

In 1962, the report “On Distributed Communications” by Paul Baran was published by the Rand Corporation. Baran’s research, done under a grant from the U.S. Air Force, discusses how the U.S. military could protect its communications systems from serious attack. He outlines the principle of “redundancy of connectivity” and explores various models of forming communications systems and evaluating their vulnerability. The report proposes a communications system where there would be no obvious central command and control point, but all surviving points would be able to reestablish contact in the event of an attack on any one point. Thus damage to a part would not destroy the whole, and its effect on the whole would be minimized. One of his recommendations is for a national public utility to transport computer data, much in the way the telephone system transports voice data. “Is it time now to start thinking about a new and possibly non-existent public utility,” Baran asks, “a common user digital data communication plant designed specifically for the transmission of digital data among a large set of subscribers?” He cautions against limiting the choice of technology for such a data network to that which is currently in use. He proposes that a packet switching, store and forward technology be developed for a data network. However, because some of his research was then classified, it did not get very wide dissemination.

Other researchers were interested in computers and communications, particularly in the computer as a communication device. J.C.R. Licklider was one of the most influential. He was particularly interested in the man-computer communication relationship. Lick, as he asked people to call him, wondered how the computer could help humans to think and to solve problems. In an article called “Man Computer Symbiosis”, he explores how the computer could help humans to do intellectual work. Lick was also interested in the question of how the computer could help humans to communicate better. “In a few years men will be able to communicate more effectively through a machine than face to face,” Licklider and Robert Taylor wrote in an article they coauthored. “When minds interact,” they observe, “new ideas emerge.”

People like Paul Baran and J.C.R. Licklider were involved in proposing how to develop computer technology in ways that hadn’t been developed before.

While Baran’s work had been classified, and thus was known only around military circles, Licklider, who had access to such military research and writing, was also involved in the computer research and education community. Larry Roberts, another of the pioneers involved in the early days of network research, explains how Lick’s vision of an Intergalactic Computer Network changed his life and career. Lick’s contribution, Roberts explains, represented the effort to “define the problems and benefits resulting from computer networking.”

After informal conversations with Lick, F. Corbato and A. Perlis, at the Second Congress on Information System Sciences in Hot Springs, Virginia, in November 1964, Larry Roberts “concluded that the most important problem in the computer field before us at the time was computer networking; the ability to access one computer from another easily and economically to permit resource sharing.” Roberts recalls, “That was a topic in which Licklider was very interested and his enthusiasm infected me.”

During the early 1960s the U.S. military, under its Advanced Research Projects Agency (ARPA), established two new funding offices, the Information Processing Techniques Office (IPTO) and another for behavioral science. From 1962-64, Licklider took a leave of absence from his position at a Massachusetts research firm, BBN, to give guidance to these two newly created offices. In reviewing this seminal period, Alan Perlis recalls how Lick’s philosophy guided ARPA’s funding of computer science research. Perlis explains, “I think that we all should be grateful to ARPA for not focusing on very specific projects such as workstations. There was no order issued that said, ‘We want a proposal on a workstation.’ Goodness knows, they would have gotten many of them. Instead, I think that ARPA, through Lick, realized that if you get ‘n’ good people together to do research on computing, you’re going to illuminate some reasonable fraction of the ways of proceeding because the computer is such a general instrument.” In retrospect Perlis explains, “We owe a great deal to ARPA for not circumscribing directions that people took in those days. I like to believe that the purpose of the military is to support ARPA, and the purpose of ARPA is to support research.”

Licklider confirms that he was guided in his philosophy by the rationale that a broad investigation of a problem was necessary in order to solve that problem. He explains, “There’s a lot of reason for adopting a broad delimitation rather than a narrow one because if you’re trying to find out where ideas come from, you don’t want to isolate yourself from the areas that they come from.”

Licklider attracted others involved in computer research to his vision that computer networking was the most important challenge.

In 1966-67, Lincoln Labs in Lexington, Massachusetts, and SDC in Santa Monica, California, got a grant from the DOD to begin research on linking computers across the continent. Larry Roberts, describing this work, explains, “Convinced that it was a worthwhile goal, we set up a test network to see where the problems would be. Since computer time sharing experiments at MIT (CTSS) and Dartmouth (DTSS) had demonstrated that it was possible to link different computer users to a single computer, the cross country experiment built on this advance.” (i.e., once timesharing was possible, linking remote computers was also possible.)

Roberts reports that there was no trouble linking dissimilar computers. The problems, he claims, were with the telephone lines across the continent, i.e. that the throughput was inadequate to accomplish their goals. Thus their experiment set the basis for justifying research in setting up a nationwide store and forward packet switching data network.

During this period, ARPA was funding computer research at a number of U.S. universities and research labs. A decision was made to include research contractors in the experimental network – the Arpanet. A plan was created for a working network to link the 16 research groups together. A plan for the ARPANET was made available at the October 1967 ACM Symposium on Operating Systems Principles in Gatlinburg, Tennessee.

Shortly thereafter, Larry Roberts was recruited to head the IPTO office at ARPA to guide the research. The military set out specifications for the project and asked for bids. They wanted a proposal for a four-computer network and a design for a network that would include 17 sites.

The award for the contract went to the Cambridge, Massachusetts firm Bolt Beranek and Newman Inc. (BBN).

The planned network would make use of minicomputers to serve as switching nodes for the host computers at sites that were to be connected to the network. Honeywell minicomputers (516s) were chosen for the network of Interface Message Processors (IMPs) that would be linked to each other. And each of the IMPs would be linked to a host computer. These IMPs only had 12 kilobytes of memory, though they were the most powerful minicomputers available at the time.

On Sept 1, 1969, the first IMP arrived at UCLA, which was to be the first site of the new network. It was connected to the Sigma 7 computer at UCLA. Shortly thereafter, IMPs were delivered to the other three sites in this initial testbed network. At SRI, the IMP was connected to an SDS-940 computer. At UCSB, the IMP was connected to an IBM 360/75. And at the University of Utah, the fourth site, the IMP was connected to a DEC PDP-10.

By the end of 1969, the first four IMPs had been connected to the computers at their individual sites and the network connections between the IMPs were operational. The researchers and scientists involved could begin to identify the problems they had to solve to develop a working network.

There were programming and technical problems to be solved so the different computers would be able to communicate with each other. Also, there was a need for an agreed upon set of signals that would open up communication channels, allow data to pass through, and then close the channels. These agreed upon standards were called protocols. The initial proposal for the research required those involved to work to establish protocols. In April 1969, the first meeting of the group to discuss establishing these protocols took place. They put together a set of documents that would be available to everyone involved for consideration and discussion. They called these Requests for Comments (RFCs), and the first RFC was published in April 1969.

As the problems of setting up the four computer network were identified and solved, the network was expanded to several more sites.

By April 1971, there were 15 nodes and 23 hosts in the network.

These earliest sites attached to the network were connected to Honeywell DDP-516 IMPs. These were:

1. UCLA
2. SRI
3. UCSB
4. U of Utah
5. BBN
6. MIT
7. RAND Corp
8. SDC (Systems Development Corporation)
9. Harvard
10. Lincoln Lab
11. Stanford
12. U of Illinois (Urbana)
13. Case Western Reserve U.
14. CMU
15. NASA-AMES

Then smaller minicomputers, the Honeywell 316 (compatible with the 516 IMP but at half the cost), were connected. Some were configured as TIPs (i.e. Terminal IMPs), beginning with:

16. NASA-AMES TIP
17. MITRE TIP

(Listing of sites based on a post on Usenet, but the Completion Report also lists Burroughs as one of the first 15 sites.)

By January 1973, there were 35 nodes of which 15 were TIPs.

Early in 1973, a satellite link connected California with a TIP in Hawaii. With the rapid increase of network traffic, problems were discovered with the reliability of the subnet and corrections had to be worked on. In mid-1973, Norway and England were added to the net and the resulting problems had to be solved. By September 1973, there were 40 nodes and 45 hosts on the network. And the traffic had expanded from 1 million packets/day in 1972 to 2,900,000 packets/day by September 1973.

By 1977, there were 111 host computers connected via the Arpanet. By 1983 there were 4000.

As the network was put into operation, the researchers learned which of their original assumptions and models were inaccurate. For example, BBN describes how they had initially failed to understand that the IMP’s would need to do error checking. They explain:

“The first four IMPs were developed and installed on schedule by the end of 1969. No sooner were these IMPs in the field than it became clear that some provision was needed to connect hosts relatively distant from an IMP (i.e., up to 2000 feet instead of the expected 50 feet). Thus in early 1970 a `distant’ IMP/host interface was developed. Augmented simply by heftier line drivers, these distant interfaces made clear for the first time the fallacy in the assumption that had been made that no error control was needed on the host/IMP interface because there would be no errors on such a local connection.”

The network was needed to uncover the actual bugs. In describing the importance of a test network, rather than trying to do the research in a laboratory, Alex McKenzie and David Walden, in their article “Arpanet, the Defense Data Network, and Internet” write:

“Errors in coding control were another problem. However carefully one designs, codes, and performs quality control, errors can still slip through. Fortunately, with a large number of IMPs in the network, most of these errors are found quickly because they occur so frequently. For instance, a bug in an IMP code that occurs once a day in one IMP, occurs every 15 minutes in a 100-IMP network. Unfortunately, some bugs still will remain. If a symptom of a bug is detected somewhere in a 100-IMP network once a week (often enough to be a problem), then it will happen only once every two years in a single IMP in a development lab for a programmer trying to find the source of the symptom. Thus, achieving a totally bug-free network is very difficult.”
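
The scaling in that quote is simple rate arithmetic (a quick check, not part of the original article): with $n$ identical IMPs, a failure rate observed per IMP multiplies by $n$ when viewed network-wide, so

$$\frac{24 \times 60\ \text{min/day}}{100\ \text{IMPs}} = 14.4\ \text{min} \approx 15\ \text{min between network-wide occurrences}, \qquad 1\ \text{week} \times 100 = 100\ \text{weeks} \approx 2\ \text{years between occurrences in any single IMP}.$$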

In October 1972, the First International Conference on Computer Communications was held in Washington, D.C. A public demonstration of the ARPANET was given, setting up an actual node with 40 machines. Representatives from projects around the world including Canada, France, Japan, Norway, Sweden, Great Britain and the U.S. discussed the need to begin work on establishing agreed upon protocols. The InterNetwork Working Group (INWG) was created to begin discussions for such a common protocol, and Vinton Cerf, who was involved with the ARPANET work at UCLA, was chosen as the first Chairman. The vision proposed for the architectural principles for an international interconnection of networks was “a mess of independent, autonomous networks interconnected by gateways, just as independent circuits of ARPANET are interconnected by IMPs.”

The network continued to grow and expand.

In 1975 the ARPANET was transferred to the control of the DCA (Defense Communications Agency).

Evaluating the success of ARPANET research, Licklider recalled that he felt ARPA had been run by an enlightened set of military men while he was involved with it. “I don’t want to brag about ARPA,” he explains. “It is, in my view, however, a very enlightened place. It was fun to work there. I think I never encountered brighter, more creative people than the inhabitants of the third floor E-ring of the Pentagon. But that, I’ll say, was a long time ago, and I simply don’t know how bright and likeable they are now. But ARPA didn’t constrain me much.”

A post on Usenet by Eugene Miya, who was a student at one of the early Arpa sites, conveys the exciting environment of the early Arpanet. He writes:

“It was an effort to connect different kinds of computers back when a school or company had only one (that’s 1) computer. The first configuration of the ARPAnet had only 4 computers, I had luckily selected a school at one of those 4 sites: UCLA/Rand Corp, UCSB (us), SRI, and the U of Utah.

Who? The US DOD: Defense Department’s Advanced Research Projects Agency. ARPA was the sugar daddy of computer science. Some very bright people were given some money, freedom, and had a lot of vision. It not only started computer networks, but also computer graphics, computer flight simulation, head mounted displays, parallel processing, queuing models, VLSI, and a host of other ideas. Far from being evil warmongers, some neat work was done.

Why? Lots of reasons: intellectual curiosity, the need to have different machines communicate, study fault tolerance of communications systems in the event of nuclear war, share and connect expensive resources, very soft ideas to very hard ideas.

I first saw the term “internetwork” in a paper by folk from Xerox PARC (another ARPANET host). The issue was one of interconnecting Ethernets (which had the 256 [slightly less] host limitation). Schoch’s CACM worm program paper is a good one.

I learned much of this with the help of the NIC (Network Information Center). This does not mean the Internet is like this today. I think the early ARPAnet was kind of a wondrous neat place, sort of a golden era. You could get into other people’s machines with a minimum of hassle (someone else paid the bills). No more.

He continues:

Where did I fit in? I was a frosh nuclear engineering major, spending odd hours (2am-4am, sometimes on Fridays and weekends) doing hackerish things rather than doing student things: studying or dating, etc. I put together an interactive SPSS and learned a lot playing chess on an MIT[-MC] DEC-10 from an IBM-360. Think of the problems: 32-bit versus 36-bit, different character set [remember I started with EBCDIC], FTP then is largely FTP now, has changed very little. We didn’t have text editors available to students on the IBM (yes you could use the ARPAnet via punched card decks). Learned a lot. I wish I had hacked more.

One of the surprising developments to the researchers of the ARPANET was the great popularity of electronic mail. Analyzing the reasons for this unanticipated benefit from their network development, Licklider and Vezza write, “By the fall of 1973, the great effectiveness and convenience of such fast, informal message services had been discovered by almost everyone who had worked on the development of the ARPANET – and especially by the then Director of ARPA, S.J. Lukasik, who soon had most of his office directors and program managers communicating with him and with their colleagues and their contractors via the network. Thereafter, both the number of (intercommunicating) electronic mail systems and the number of users of them on the ARPANET increased rapidly.”

“One of the advantages of the message system over letter mail,” they add, “was that, in an ARPANET message, one could write tersely and type imperfectly, even to an older person in a superior position and even to a person one did not know very well, and the recipient took no offense. The formality and perfection that most people expect in a typed letter did not become associated with network messages, probably because the network was so much faster, so much more like the telephone … Among the advantages of the network message services over the telephone were the fact that one could proceed immediately to the point without having to engage in small talk first, that the message services produced a preservable record, and that the sender and receiver did not have to be available at the same time.”

Describing email, the authors of the Completion Report write:

The largest single surprise of the ARPANET program has been the incredible popularity and success of network mail. There is little doubt that the techniques of network mail developed in connection with the ARPANET program are going to sweep the country and drastically change the techniques used for intercommunication in the public and private sectors.

Not only was the network used to see what the actual problems would be, the communication it made possible gave the researchers the ability to collaborate to deal with these problems.

Summarizing the important breakthrough represented by the Arpanet, they conclude:

“This ARPA program has created no less than a revolution in computer technology and has been one of the most successful projects ever undertaken by ARPA. The program has initiated extensive changes in the Defense Department’s use of computers as well as in the use of computers by the entire public and private sectors, both in the United States and around the world.

Just as the telephone, the telegraph, and the printing press had far-reaching effects on human intercommunication, the widespread utilization of computer networks which has been catalyzed by the ARPANET project represents a similarly far-reaching change in the use of computers by mankind.”

Arpanet Idea – The History of Domain Names

First Arpanet ideas

Date: 01/01/1969

The first internet

Computer networks, in any real sense, didn’t exist until the ARPANET was built starting in 1969.

On October 4, 1957, the Soviet Union launched Sputnik, the world’s first man-made satellite, into orbit. After Sputnik’s launch, many Americans began to think more seriously about science and technology. Schools added courses on subjects like chemistry, physics and calculus. Corporations invested in scientific research and development. The federal government formed the National Aeronautics and Space Administration (NASA) and the Department of Defense’s Advanced Research Projects Agency (ARPA) to develop space-age technologies such as rockets, weapons and computers.

The initial idea of the Internet is credited to Leonard Kleinrock, who published his first paper on the subject, “Information Flow in Large Communication Nets”, on May 31, 1961.

In 1962, a scientist from M.I.T. and ARPA named J.C.R. Licklider proposed a solution to the concern about what might happen in the event of a Soviet attack on the nation’s telephone system: a “galactic network” of computers that could talk to one another and that would enable government leaders to communicate even if the Soviets destroyed the telephone system.

In addition to the ideas from Licklider and Kleinrock, Robert Taylor helped create the idea of the government’s computer network, which later became ARPAnet.

In 1965, another M.I.T. scientist developed a way of sending information from one computer to another that he called “packet switching” so that each packet of data can take its own route from place to place. Without packet switching, the government’s computer network would have been just as vulnerable to enemy attacks as the phone system.

On July 3, 1969, UCLA put out a press release introducing the Internet to the general public.

At 10:30 pm on 29 October 1969, ARPAnet delivered its first message: a “node-to-node” communication from one computer to another. (The first computer was located in a research lab at UCLA and the second was at Stanford Research Institute’s; each one was the size of a small house.) The message—“LOGIN”—was short and simple, but it crashed the fledgling ARPA network anyway: The Stanford computer only received the note’s first two letters. About an hour later, having recovered from the crash, the computer at UCLA effected a full login.

As ARPAnet grew, in 1973 Vinton Cerf at Stanford started working on a better host-to-host protocol. In the following 5 years, he invented the two-part Transmission Control Protocol / Internet Protocol (TCP/IP).

TCP/IP allows for the “handshake” that introduces distant and different computers to each other in a virtual space.

TCP controls and keeps track of the flow of data packets.
IP addresses and forwards individual packets.

TCP/IP became the required protocol of the ARPANET in 1983, which also allowed the ARPANET to expand into the Internet, facilitating features like remote login via Telnet and, later, the World Wide Web.
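
As a present-day illustration of that division of labor (not something from the original article), here is a minimal Python sketch: the application simply opens a TCP connection and reads and writes a byte stream, while TCP performs the handshake, ordering, and retransmission, and IP addresses and forwards the individual packets underneath. The host and port are placeholders.

```python
import socket

HOST, PORT = "example.com", 80  # placeholder endpoint

# create_connection() performs the TCP three-way handshake; from here on,
# TCP tracks the flow of segments while IP addresses and routes each packet.
with socket.create_connection((HOST, PORT), timeout=5) as conn:
    # The application writes one byte stream; TCP splits it into segments
    # and guarantees in-order, reliable delivery to the far end.
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = conn.recv(4096)  # whatever packets arrived, reassembled in order
    print(reply.decode(errors="replace").splitlines()[0])
```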

In 1989, the ARPANET officially became the Internet and moved from a government research project to an operational network; by then it had grown to more than 100,000 computers.

ARPANET itself was finally decommissioned in 1990.

The Internet took on its current form in 1993.

Arpanet – The History of Domain Names

ARPANET planning starts

Date: 01/01/1966

Planning the ARPANET: 1967-1968

In 1966, Robert (Bob) W. Taylor succeeded Sutherland as Director of the IPTO. Taylor completely subscribed to Licklider’s vision of an Intergalactic Network. Furthermore, he faced the essence of the problem every day in his office: he had three different computer terminals because he needed access to three different computers. Taylor remembers:

“By late ’65, early ’66, there were a number of ARPA sponsored research groups that had built for themselves, and were using in their own work, some of the first time-sharing systems. So, it occurred to me, sort of taking off from this tongue-in-cheek ‘Intergalactic Network’ phrase of Licklider’s, that the next thing to do was obvious, and that is — if we had singular communities who could interactively communicate through a time-sharing system that they were all members of, why couldn’t we have clusters of communities interact, members of one community could interact with members of another community, as though they might be sharing a single time-sharing system.” In February 1966, influenced by the experiments conducted by Roberts and Marill at Lincoln Labs, Taylor decided the time had come for ARPA to focus on interconnecting existing, and planned, time-sharing systems into an ARPA scientific community. Not having the money to launch a new project, he approached Charles Hertzfeld, Director of ARPA. He remembers explaining to Hertzfeld:

“There are certain experts in certain fields who sit in California, and there are other experts in that same field who sit in Massachusetts or someplace else, and if we can make this work, we can have a medium through which they can work cooperatively, and so we get amplification of ideas. Another advantage of tackling this problem is that we might be able to achieve some fail-soft characteristics in any collection of computing that the Defense Department would especially be interested in.’ So, that discussion probably lasted 15 minutes, and he immediately was excited about it, and he said: ‘You’ve got the money. How much do you need to get started?’ I gave him a number, and he pulled it out of another one of his ARPA projects, and said: ‘Go.’” Taylor next needed a program manager. His first choice was Roberts. Roberts, however, wanted nothing to do with becoming a program manager or of moving to Washington. Unable to think of anyone as qualified as Roberts, Taylor kept asking and Roberts kept declining. Taylor remembers the ruse that won the day:

“In September or October of ’66, it dawned on me that ARPA supported 51%, or thereabouts, of all of Lincoln Lab’s work. So I went to see Hertzfeld and I said: ‘Charley, is it still true that ARPA supports 51% or more of Lincoln Lab,’ and he said: ‘Yeah.’ I said: ‘Well, you know this network project that I’m trying to get off the ground?’ He said: ‘Yeah.’ I said: ‘Well, there’s a guy at Lincoln Lab that I want to be the program manager for it and I can’t get him to come down here. His name is Larry Roberts. I’d like for you to call the Director of Lincoln Lab and tell him that it would be in Lincoln Lab’s best interest and Larry Roberts’ best interest if the Director of Lincoln Lab encouraged Larry Roberts to come down to Washington and be the program manager for this project.’ Charley said: ‘Sure,’ and he picked up the phone with me in his office, and he called the Director of Lincoln Lab and had a short conversation and he hung up the phone, and about a month later Larry accepted the job.

In Christmas of that year, he came down with his family and they stayed at my house over the holidays because they didn’t have a place to live yet. I blackmailed him into fame.”

With Roberts aboard as program manager, Taylor began visiting various ARPA contractors, explaining the purpose of the network in an effort to generate support. Not wanting to give others access to their computers, most of the contractors resisted the concept. It became very clear that the challenge of bringing into being the world’s first computer communication system would involve more than just technology. With his legendary intensity, Roberts tackled the challenges of bringing into being a network to interconnect the ARPA community. To him, the concept of communications by exchanging messages or packets seemed neither strange nor novel. It was simply a logical extension of how computers communicated internally — exchanging blocks of data.

Roberts remembers the beginnings of building a computer network:

“All of us in computing were clearly not going to go after it on a circuit switched basis. We were all thinking in blocks. That’s the way computers worked. So we approached it very differently than the communications people. We thought in terms of: “How can we do this such that it will be a functionally useful service for the computers?” I got together groups and committees of the ARPA people and started working on it.” He first focused on how to build a packet network; fully realizing that there also existed the challenge of how computers would communicate across the intended peer-like network, and that the two were dependent, locked in some unseen co-evolution.

After numerous conversations, Roberts concluded his first major decision had to be the network topology: how to link the computers together. A topology of interconnecting every computer to every other computer didn’t make sense, based on the results of his experiment at Lincoln Labs and the absurdity of projecting hundreds of computers all interconnected to each other. The number of connections would explode as the square of the number of computers. A shared network, however, entailed solving the problem of switching when using a packet, or message block, architecture. To explore the questions of packet size and contents, Roberts requested Frank Westervelt of the University of Michigan write a position paper on the intercomputer communication protocol including “conventions for character and block transmission, error checking and retransmission, and computer and user identification.”

Two alternative architectures for a shared network emerged: a star topology or a distributed message switched system. A star topology, or centralized network, would have one large central switch to which every computer was connected. It represented the least development risk because it was well understood. However, it was also known to perform poorly given lots of small messages — the precise condition of packet messaging. On the other hand, a distributed message switching system as proposed by Baran and Davies, had never been built, but held the theoretical advantage of performing best given lots of small messages. With a choice needing to be made, the upcoming annual meeting of ARPA contractors seemed an ideal time to air the issues and reach a decision.

Wesley Clark remembers the discussion:

“Towards the end of the meeting it became clear to me what the problem was. So I slipped a note to Taylor stating ‘I know what to do.’ Taylor probably didn’t look at my message until after the meeting. It was fairly obvious to me to put the communication functions in a separate, smaller computer outside the computer ‘network.’ “

After the meeting, Taylor, Roberts, Clark and Dave Evans shared a taxi to the airport and discussed the network topology decision. Taylor remembers:

“Larry, prior to this taxi ride, was thinking of a network controller in a centralized sense; something in the center of the country, a large machine, that would control the network, and I was nervous about that. I talked to Licklider and to Wes about it, separately, and Larry wasn’t irrevocably wedded to the idea, but that was his model at the time. In the cab to the airport after this meeting, I got Wes talking about it. Whether he had sorted it all out prior to that cab ride or whether he sorted it out based spontaneously in the conversation in that cab, I don’t know. But he said: “Why have a central control. Why not have small machines?”

Clark remembers:

“I think it was fairly obvious. I think somebody else would have come to it with a little more time. Let those little computers talk to one another and serve as terminals to which the big machines would talk. In a sense, that took the big machines outside the network. The concept up to that point was that the network was the big machines plus all the interconnections, and my sudden realization at the meeting was that you wanted the machines outside the network, not inside it. Larry or Bob asked me how to get these little small computers built and I said I thought there was only one person who could do that job in the country, namely Frank Heart.” Back in Washington, Roberts and his staff studied Clark’s suggestion and concluded that the idea of a separate computer located at each site handling the network functions was an ingenious solution. In addition to its technical merits, it allowed the network to be designed and built independently of host computer hardware and software, greatly simplifying project management. The small computers, to be called Interface Message Processors (IMPs), together with the telephone lines and the modems, would constitute the message-switching communications network, or “Subnet” (also written “subnet”).

Roberts understood a deep and important dilemma of the emerging network. Computers wanted to communicate in message sizes very much larger than ideal for a packet network. This meant an IMP would receive a message from its Host computer that it would have to parse into packets that in turn would traverse the subnet. The packets would then have to be reassembled into the message before being usable by the Host and, most likely, even before being delivered to the Host. Realistically, that implied that the IMPs would have to store packets before forwarding them. Such a network had never been designed, much less built.

Roberts remembers:

“All of us thought, clearly, in those days, about computer switching rather than circuit switching; some sort of computerized switching. You got the traffic in and you put it out. It could have been that we put it out block for block as fast as it came in; it could have been that we stored the whole message and forwarded it. What we concluded was that you wanted to not store the whole message and forward it, and you couldn’t have a perfect virtual cut-through where you sent every block immediately synchronously because it might interfere with the next message. So you had to do it in some smaller breakdown, which is like a packet, or whatever, which, of course, is the size lump you’re in anyway, because you’ve got to put sum checks on it every interval. So, there wasn’t any question about packets — and clearly Donald gave it the name — that we had to be in that sort of size. Now and then, each one of us in the world would derive what length that ought to be ultimately for sum check, because of errors, and that was around the thousand-bit area then, with error rates that were in existence then.”
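
As a hedged, modern-language sketch (not the actual IMP software, whose details the text does not give), the store-and-forward behavior described above amounts to breaking a Host message into roughly thousand-bit packets, attaching a check value to each, and reassembling them in order at the far end. The packet size and the use of a simple "sum check" follow Roberts's description; the function names and data layout here are illustrative only.

```python
# Hypothetical sketch of the fragment-and-reassemble job an IMP-like node faces:
# split a large Host message into ~1000-bit packets, tag each with a sequence
# number and a simple check value, then verify and reorder them on arrival.

PACKET_BITS = 1000          # the "thousand-bit area" Roberts mentions
PACKET_BYTES = PACKET_BITS // 8

def split_into_packets(message: bytes) -> list[dict]:
    packets = []
    for seq, start in enumerate(range(0, len(message), PACKET_BYTES)):
        chunk = message[start:start + PACKET_BYTES]
        packets.append({
            "seq": seq,                     # needed to reassemble in order
            "check": sum(chunk) % 65536,    # stand-in for a real sum check
            "data": chunk,
        })
    return packets

def reassemble(packets: list[dict]) -> bytes:
    ordered = sorted(packets, key=lambda p: p["seq"])
    for p in ordered:
        assert sum(p["data"]) % 65536 == p["check"], "corrupted packet"
    return b"".join(p["data"] for p in ordered)

message = b"A Host message much larger than one packet..." * 20
assert reassemble(split_into_packets(message)) == message
```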

Roberts sought help from Leonard Kleinrock, his former officemate from MIT, now on the faculty of UCLA. Even though Kleinrock was not working on network queuing theory at the time, in his Ph.D. thesis he had shown that a large store-and-forward network could work, despite the delays introduced by store-and-forward switching. In 1964, Dover Publications had published his thesis as a book; the same year RAND had published Baran’s work. Kleinrock reflects, saying the ideas in “Communication Nets: Stochastic Message Flow and Delay” “lay fallow in the scientific literature because it was, in some sense, before its time. Nobody believed these networks could be built. It drew upon ideas from classical military communications networks.”

During this period, Roberts was also coming to the opinion that a limited experiment would not be very meaningful: a network of many nodes had to be created if the network was going to be useful for serious purposes. But that introduced the problem of how to convince that many more potential computer sites to devote the resources to the network, as opposed to their existing priorities. Licklider comments:

“I have the strong impression that Larry thought there was no little experiment that would be very helpful; that he had to have a network with many nodes — a dozen anyway and probably 100 — and that it had to be used for serious purposes, and that his problem was — what he had to excel at as a manager was actually getting this whole ARPA community to make ten or fifteen percent of everybody’s effort to get the network to go. He thought he was in a good position, that he had not only the people who could build it, but enough people to use it, if he could only get them really focused and interested. I believe I was director of Project MAC at that time, and I loved the network. I had trouble with my people about it. They didn’t like it, but I said: ‘I estimate as a minimum, ten percent of our effort is going to be the network. It’ll just happen that way,’ and partly that was promotion on my part and partly it was true belief, but most of the others were much more interested in what they were doing than Larry’s network.”

In 1967, Taylor began giving more responsibilities to Roberts, hoping that Roberts would succeed him as director of IPTO. It meant Roberts had less and less time for the network project. To compensate, Roberts recruited Barry Wessler from the University of Utah to be his assistant. (Wessler would not actually join IPTO until early 1968.)

Then, in another important community event that caused historical course-corrections, came the ACM Symposium on Operating System Principles in Gatlinburg, TN, October 1-4, 1967. Roberts presented a paper, “Multiple Computer Networks and Intercomputer Communications.” In it he lamented the deficiencies of dial-up telephone circuits, yet nevertheless concluded that the communication links between IMPs would be 2400 bit/second dial-up circuits. [18] That constraint would not last very long, however, for another speaker, Roger Scantlebury, presented a paper authored by Davies, Bartlett, Scantlebury and Wilkinson, describing a local network being developed at NPL based on Davies’ ideas and employing much higher speed circuits. [19] (On returning to the U.K. after the Gatlinburg conference, Scantlebury reported: “It would appear then that the ideas in the NPL paper at the moment are more advanced than any proposed in the USA.”) Roberts took the findings of the NPL team seriously, later describing how they influenced his thinking: “The NPL paper clearly impacted the ARPANET in several ways. The name ‘packet’ was adopted, a much higher speed was selected (50 Kilobit/second vs 2.4 Kilobit/second) for internode lines to reduce delay, and generally the NPL analysis helped confirm the concept of packet switching.”
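
A quick arithmetic sketch (my own illustrative numbers, using only the two line speeds quoted above) shows why the higher-speed circuits mattered for delay: merely clocking a single thousand-bit packet onto the line takes roughly 417 ms at 2.4 kbit/s but only about 20 ms at 50 kbit/s, and in a store-and-forward subnet that cost is paid again at every hop.

```python
# Illustrative only: per-hop serialization delay for a ~1000-bit packet at the
# two line speeds discussed at Gatlinburg (ignores propagation and queueing).

PACKET_BITS = 1000

for name, bits_per_second in [("2.4 kbit/s dial-up circuit", 2_400),
                              ("50 kbit/s leased line", 50_000)]:
    delay_ms = PACKET_BITS / bits_per_second * 1000
    print(f"{name:26s}: {delay_ms:6.1f} ms per packet per hop")
```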

Asia – The History of Domain Names

.Asia Plans to Release 1 and 2 Character Domain Names

August 18, 2011

Registry for .asia domain names wants to release short names.

.Asia registry DotAsia Organisation Ltd is joining fellow top level domain name operators with plans to release one and two character domain names. The registry has filed a Registry Services Evaluation Process request with ICANN explaining how it plans to allocate the domain names.

Like other registries, it plans a three-pronged release of domain names:

  1. Request for Proposal – interested parties can submit a proposal about how they will use and promote a one or two character .asia domain name. Qualified applicants will be awarded the domains.
  2. Sunrise – trademark holders of one and two character names can apply for domains. An auction will be held in the event of more than one qualified sunrise request for the same name.
  3. Landrush and auction – any remaining unallocated domain names will be auctioned off or made available to the community.

You can view .asia’s request here.

According to HosterStats, there are just under 200,000 .asia domain names currently registered.

Amdahl – The History of Domain Names

Amdahl Corporation – amdahl.com was registered

Date: 12/11/1986

On December 11, 1986, Amdahl Corporation registered the amdahl.com domain name, making it the 49th .com domain ever to be registered.

Amdahl Corporation was an information technology company which specialized in IBM mainframe-compatible computer products, some of which were regarded as supercomputers competing with those from Cray Research. Founded in 1970 by Gene Amdahl, a former IBM computer engineer best known as chief architect of System/360, it has been a wholly owned subsidiary of Fujitsu since 1997. The company is located in Sunnyvale, California. Amdahl was a major supplier of large mainframe computers, and later of UNIX and open systems software and servers, data storage subsystems, data communications products, application development software, and a variety of educational and consulting services. In the 1970s, when IBM had come to dominate the mainframe industry, Amdahl created plug-compatible machines that could be used with the same hardware and software as offerings from IBM, but were more cost-effective. Boasting faster uniprocessors, the largest single image, greater performance characteristics and higher reliability, the Amdahl mainframe was compelling to buyers who were willing to consider alternatives to IBM. These machines gave “Big Blue” some of the little competition it had in that very high-margin computer market segment. Proverbially, during this time savvy IBM customers liked to have Amdahl coffee mugs visible in their offices when IBM salespeople came to visit. While winning about 8% of the mainframe business worldwide, Amdahl won a position of market leader in some regions, most notably Charlotte, North Carolina. In the early to mid-1990s, Amdahl won most of the major contracts for mainframes in the Carolinas.

Company origins

Amdahl launched its first product in 1975, the Amdahl 470/6, which competed directly against high-end models in IBM’s then-current System/370 family. When IBM announced the introduction of Dynamic Address Translation (DAT), Amdahl announced the 470V/6 and dropped the 470/6. At the time of its introduction, the 470V/6 was less expensive but still faster than IBM’s comparable offerings. The first two 470V/6 machines were delivered to NASA (Serial Number 00001) and the University of Michigan (Serial Number 00002). For the next quarter century Amdahl and IBM competed aggressively against one another in the high-end mainframe market. At its peak, Amdahl had a 24% market share. Amdahl owed some of its success to antitrust settlements between IBM and the U.S. Department of Justice, which ensured that Amdahl’s customers could license IBM’s mainframe software under reasonable terms. Dr. Gene Amdahl was committed to expanding the capabilities of the uniprocessor mainframe during the late 1970s and early 1980s. Amdahl engineers, working with Fujitsu circuit designers, developed unique, air-cooled chips which were based on high-speed emitter-coupled logic (ECL) circuit macros. These chips were packaged in a chip package with a heat-dissipating cooling attachment (that looked like the heat-dissipating fins on a motorcycle engine) mounted directly on the top of the chip. This patented technology allowed the Amdahl mainframes of this era to be completely air-cooled, unlike IBM systems that required chilled water and its supporting infrastructure. In the 470 systems, the chips were mounted in a 6-by-7 array on multi-layer cards (up to 14 layers), which were then mounted in vertical columns. The cards had eight connectors that attached the micro-coaxial cables that interconnected the system components. A conventional backplane was not used in the central processing units. The card columns held at least three cards per side (two per column in rare exceptions, such as the processor’s “C-Unit”). Each column had two large “Tarzan” fans (a “pusher” and a “puller”) to move the considerable amount of air needed to cool the chips. In the 580 systems, the chips were mounted in an 11-by-11 array on multi-layer boards called Multi-Chip Carriers (MCCs) that were positioned for high-airflow cooling. The MCCs were mounted horizontally in a large rectangular frame. The MCCs slid into a complex, physical connection system and the processor “side panels” interconnected the system, providing clock propagation delays that maintained race-free synchronous operation at relatively high clock frequencies (15–18 ns base clock cycles). This processor box was cooled by high-speed fans generating horizontal air flow across the MCCs.

Additional models of Amdahl uniprocessor systems included the 470V/5, /7 and /8 systems. The 470V/8, first shipped in 1980, incorporated high speed 64 KB cache memories to improve performance, and the first real hardware-based virtualization (known as the “Multiple Domain Facility”). Amdahl also pioneered a variable-speed feature on the V5 and V7 systems that allowed the customer to run the CPUs at a higher level of performance when necessary. The customer was charged by the number of hours used. Some at Amdahl thought this feature would anger customers, but it became quite popular as management could now control expenses while still having greater performance available when necessary. Gene Amdahl left the company he founded in August 1979 to start Trilogy Systems. With Gene Amdahl’s departure, and increasing influence from Fujitsu, Amdahl entered the large-scale multiprocessor market in the mid-1980s with the 5870 (attached processor) and 5880 (full multiprocessor) models. Under the leadership of Tom O’Rourke, Amdahl entered the IBM-compatible peripherals business in front-end processors and storage products. These products were very successful for a number of years with the support of Jack Lewis, the former CEO of Amdahl. The reliance upon a single product, within the complex business of mainframes and their equally valuable peripherals, condemned the company’s hardware business when market forces shifted to x86-based processors. This had been foreseen, leading to an increasing emphasis on software and consulting services.

Company History:

Having abandoned its founding business of manufacturing mainframe computers, Amdahl Corporation has positioned itself in the early 21st century as a developer and implementer of information technology systems and services, and enterprise-level software, and as a provider of professional and consulting services. As an adjunct to its services businesses, the company, a wholly owned subsidiary of Japan’s Fujitsu Limited, continues to offer its customers computer servers and storage systems. Among Amdahl’s subsidiary operations are DMR Consulting Group, Inc., which focuses on e-consulting services and business solutions for both large corporations and Internet startups; Fujitsu Software Technology Corporation, which provides comprehensive software solutions in various areas of data storage; Fujitsu Technology Solutions, Inc., the unit that handles the company’s operations in the areas of servers and storage systems; and trustedanswer.com, a provider of outsourced customer service and customer support services.

Prehistory and the Startup Stage

Amdahl Corporation was founded on October 19, 1970, in Sunnyvale, California, by Gene M. Amdahl. Born in 1922 in South Dakota, Amdahl left his home state to pursue a doctoral degree in theoretical physics. With a knowledge of electronics gained in the Navy and a familiarity with computer programming garnered from a brief course, Amdahl designed and helped construct an early computer known as the WISC (Wisconsin Integrally Synchronized Computer). In 1952 Amdahl joined IBM and became chief designer of the IBM 704 computer, which was released in 1954. In 1955 Amdahl and other systems designers began conceptualizing a new computer for IBM, which they christened the Datatron. IBM’s Stretch, also known as the IBM 7030, was an outgrowth of the Datatron, a computer using new transistor technology. The name Stretch was not an acronym, but rather stood for ‘stretching the limits of computer technology development.’ Although Stretch was a financial failure for IBM, it was valuable as the precursor to the successful IBM System 360. In 1956 Amdahl left IBM; he worked at two other high-technology firms before returning to IBM four years later. Amdahl later became the principal architect for the phenomenally successful System 360, which was introduced in 1964. Amdahl was appointed an IBM fellow, and was thus free to pursue his own research projects. In early 1969, while director of IBM’s Advanced Computing Systems Laboratory in Menlo Park, California, he began to investigate the company’s cost-pricing cycle as it applied to a large computer they were developing. His team concluded that to make the computer pay for itself, IBM would also have to market two scaled-down versions of the advanced technology. IBM management insisted that Amdahl stay with the original plan to create only one large processor, while Amdahl recommended that they shut down the laboratory. The laboratory was closed in the spring of 1969. Over the following few months, Amdahl reviewed the policies that prevented IBM from aiming at the high end of computer development and presented his analysis to IBM’s top three executives. Although the officers agreed with his analysis, they maintained that it would not be in IBM’s best interest to change direction. Amdahl decided to strike out on his own.

Amdahl submitted his resignation to IBM for the second time in September 1970 and founded Amdahl Corporation just a few weeks later. Amdahl took none of IBM’s technical personnel with him when he left; he was joined only by young financial specialist Ray Williams and two secretaries. Amdahl and Williams determined that they would need between $33 million and $44 million to see a product to completion (in fact, it took $47.5 million). They had chosen a difficult year for raising money, as new capital gains taxes and an advancing recession made venture capital scarce. Amdahl and Williams first took their business plan to investment bankers, who rejected their proposal because they felt that Amdahl Corporation could not effectively challenge IBM. The pair eventually received $2 million from Heizer Corporation, venture capitalists in Chicago, the day after spending the last of their own investment. At the same time, three other young California computer companies–MASCOR (Multiple Access Systems Corporation, which was started by staff members who left IBM after the closing of the Advanced Computing Systems Laboratory), Berkeley Computers, and Gemini Computers–had gone bankrupt. Many of their employees joined Amdahl Corporation, forming an impressive technical team. During Amdahl Corporation’s first eight months, it continued the search for more capital. The needed funds came from Fujitsu Limited, a leading Japanese computer manufacturer, which suggested a joint development program and licensing under Amdahl’s patents. This 1971 agreement was accompanied by the $5 million investment that Amdahl needed to complete its second phase of development. In 1972 Nixdorf Computers, a leading German computer manufacturer, agreed to invest $6 million if Nixdorf could represent Amdahl in Europe. Fujitsu also increased its investment, and U.S. investors began to appear. Amdahl amassed a total of $20 million to build a prototype computer and a production facility. Also in 1972, IBM announced the debut of the 370, its first computer with virtual memory, a flexible, advanced memory technology. Amdahl had been developing a computer like the IBM 370, but without virtual memory, and IBM’s introduction forced Amdahl to scrap its initial design. Amdahl Corporation decided to offer stock publicly in early 1973, but could not find an underwriter. The company then experienced delays with the Securities and Exchange Commission until 1974, by which time the stock market had declined, so Amdahl returned to the private market. In August 1974 Eugene R. White, a vice-president at Fairchild Camera and Instrument Corporation, was appointed president of Amdahl Corporation. Effecting changes that helped save the company, White laid off almost half the employees and concentrated on marketing efforts and field support services. He was also instrumental in negotiations with Fujitsu and Heizer to get the funding necessary to complete the company’s first product.

1990s: Shrinking Mainframe Market, Focusing on Services

As the 1990s progressed, the major threat to Amdahl’s viability no longer appeared to be IBM, but the shrinking mainframe computer market. As smaller, cheaper, and more powerful machines became available, Amdahl found its sales slipping. Excessive costs forced the company to stop work on a mainframe Unix product that had long been underway. By September 1993, sales had collapsed. Amdahl’s Zemke (who became CEO in 1992) was quoted in Business Week as saying, ‘It was like Death Valley.’ Amdahl shut down factory lines and cut back the workforce three times that year. The company reported a net loss of more than $575 million for 1993 and revenues fell to $1.68 billion, a 33 percent drop from the record revenues of $2.52 billion the previous year.

Analysts predicted that Amdahl’s continued success would require stronger innovation. Amdahl’s strategy was to offer its customers integrated packages combining its hardware technology with the industry’s most advanced software, as well as stellar support and consulting services. Amdahl’s maintenance, support, and consulting services made up 28 percent of revenues in 1993 and increased another 11 percent in the first quarter of 1994. Margins on those services were almost double the hardware margins, and Amdahl’s service businesses were consistently given the highest ratings in the industry. In the following year, Amdahl entered into new partnerships with three computer firms: Electronic Data Systems, nCube, and Sun Microsystems. The agreement with Electronic Data Systems spawned the Antares Alliance Group, a joint software development group 80 percent owned by Amdahl. Antares was formed to market Amdahl’s Huron and research new software ideas and prototypes for business analysis and modeling programs. Helge Knudsen became director of the Antares Research Institute. In 1994 Amdahl introduced the Xplorer 2000 series. The new product was the result of an alliance between Amdahl, Oracle, and Information Builders, Inc. The partnership was formed, according to Software Magazine, to explore opportunities to create ‘massively parallel database servers and software that will let customers process thousands of transactions per second and share data between MVS and Unix systems.’ Later that year, Amdahl and Sun Microsystems introduced A+ Edition, a group of extensions that allowed Sun’s symmetrical multiprocessing servers to perform more efficiently when a higher number of total possible servers were working. The software accomplished this by providing tuning for database applications with a large number of users and more evenly distributing the workload among the processors in the servers. While the new product was well received, some potential customers expressed concern about the cost for the value. Expanding on its position as a provider of integrated services, Amdahl won a bidding war for DMR Group Inc., acquiring the Canadian firm in November 1995 for about $140 million. DMR provided information technology consulting services as well as systems development, systems integration, and outsourcing services on a worldwide basis. The firm had annual revenues of nearly $220 million, which boosted the share of Amdahl revenue that came from software and services to 40 percent. DMR was combined with Amdahl’s Business Solutions Group to form DMR Consulting Group, Inc., which operated as a subsidiary. Zemke resigned as CEO in March 1996 for ‘personal reasons.’ The move came in the wake of rather dismal results for 1995: net income of only $28.5 million and a further decline in revenues to $1.52 billion. Lewis became CEO once again. Soon thereafter, Amdahl acquired another services-oriented company, Trecom Business Systems Inc., for $145 million. Based in Edison, New Jersey, Trecom focused mainly on designing and providing client-server networks for corporations. Its geographic presence in the eastern and southern United States meshed well with DMR’s strength in the West and in Canada. The firm had annual revenues of $140 million. Trecom was eventually merged into DMR.

Amdahl returned to the red for 1996, reporting a $326 million net loss on revenues of $1.63 billion, partly due to costs related to the integration of its acquisitions. An even larger factor was a $130 million writeoff of outmoded water-cooled mainframe inventory. Amdahl’s mainframes were hurt by IBM’s 1995 introduction of CMOS-based mainframes, which were air-cooled and less costly to operate. Amdahl had to play catch-up in introducing its own CMOS-based models, the Millennium Global Servers, in late 1996. The later months of 1996 also saw Amdahl begin selling a line of Windows NT-based servers called EnVista. Amdahl packaged the servers with software and services related to the Internet, intracompany communications and data sharing, and database applications. In the area of open systems, the company added new storage systems to its product line and began reselling Sun Microsystems’ SPARC servers. Meantime, nearly two-thirds of revenues for 1996 were generated by Amdahl’s software and services operations. By mid-1997, with Amdahl having posted six straight quarterly losses, there was much skepticism about the future viability of the company. Those concerns were laid to rest in September of that year when Fujitsu purchased the 58 percent of Amdahl it did not already own for $878 million. Amdahl was now a wholly owned subsidiary of Fujitsu and could tap into the very deep pockets of the Japanese electronics giant. In the immediate wake of the buyout, David B. Wright was named to succeed Lewis as CEO. Wright had been executive vice-president of the company’s hardware and systems support operations, and his background in services was a key factor in his selection as Amdahl continued to increase its emphasis on software, services, and consulting. DMR in particular was growing at the rapid rate of 30 percent per year and its revenues reached $700 million in 1997. Part of this surge came from the accelerating demand for services related to the fixing or replacement of systems affected by the year 2000 computer bug. In early 1998 Amdahl entered into an alliance with Microsoft Corporation to provide products and services designed to integrate Microsoft’s Windows NT and BackOffice products with mainframe systems. The following year the company acquired Sentryl Software, which developed software for automated storage management systems.

In October 2000 Wright resigned as CEO, with Yasushi Tajiri named interim CEO. Just a few weeks later, the company announced that it was exiting from its founding mainframe business. On the hardware side, Amdahl would now be involved only in the server and storage sectors. The company’s future, in any event, clearly lay in the world of services and consulting. In December 2000 Amdahl announced that it would eliminate nearly one-fifth of its California workforce in connection with its exit from mainframes. At the same time, Fujitsu turned Amdahl into a holding company for several businesses: Amdahl Software; DMR Consulting; Amdahl IT Services, which focused on information technology infrastructure services for large-scale enterprises; and Fujitsu Technology Solutions, which was the company’s hardware arm, selling servers and storage systems. In March 2001 Amdahl Software was relaunched as Fujitsu Software Technology Corporation (Fujitsu Softek), with a mission of providing comprehensive software solutions in various areas of data storage. These moves early in the new millennium continued Amdahl’s transformation from mainframe manufacturer to provider of information technology software, services, and consulting.

AMNECN – The History of Domain Names

ARPANET, MILNET, NSI, ESNet, CSNET, and NSFNET

Date: 01/01/1983

After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA’s primary mission was funding cutting-edge research and development, not running a communications utility. Eventually, in July 1975, the network was turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.

The networks based on the ARPANET were government-funded and therefore restricted to non-commercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were.

Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE), became heavily involved in Internet research and started development of a successor to ARPANET. In the mid-1980s, all three of these branches developed the first wide area networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET, and DOE evolved the Energy Sciences Network, or ESNet.

NASA developed the TCP/IP-based NASA Science Network (NSN) in the mid-1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center, creating the first multiprotocol wide area network, called the NASA Science Internet, or NSI. NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.

In 1981 NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. Its experience with CSNET led NSF to use TCP/IP when it created NSFNET, a 56 kbit/s backbone established in 1986 that connected the NSF-supported supercomputing centers and regional research and education networks in the United States. However, use of NSFNET was not limited to supercomputer users, and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990. NSFNET was expanded and upgraded to 45 Mbit/s in 1991, and was decommissioned in 1995 when it was replaced by backbones operated by several commercial Internet Service Providers.

ANA GenericDomains – The History of Domain Names

ANA FORMS COALITION AGAINST NEW GENERIC DOMAIN NAMES

November 12, 2011

A few months back, the Association of National Advertisers (ANA) slammed ICANN’s new gTLD (generic Top Level Domain) initiative. Eighty-seven major national and international business associations and companies have joined the ANA to form the Coalition for Responsible Internet Domain Oversight (CRIDO), whose sole purpose is opposing the rollout of ICANN’s top-level domain expansion program.

It comes as no surprise the ANA has taken this action as back in August the body made its opinion known in no uncertain terms and warned it would “continue to vigorously oppose implementation of the program”.

And vigorous opposition it is, with some very big names lending their support; including U.S. Chamber of Commerce, Adobe Systems Incorporated, Burger King Corporation, The Coca-Cola Company, Hewlett-Packard Company and Samsung.

ANA says CRIDO members represent around 90 percent of global marketing communications spending – equivalent to $700 billion annually.

“This unprecedented, united opposition to ICANN’s top-level domain expansion program clearly demonstrates the enormity of the dissatisfaction across the Internet stakeholder community,” said Bob Liodice, President and CEO of the ANA.

The CRIDO petition states opposition to ICANN’s program based on “deeply flawed justification, excessive cost and harm to brand owners, likelihood of predatory cyber harm to consumers and lack of stakeholder consensus, a core requirement of its commitment to the U.S. Department of Commerce”.

ICANN’s program will allow companies and consortiums to create new gTLD domain name extensions; for example, .insurance and .bank. Cities around the world including New York, Paris, Sydney, Melbourne, Rome and Berlin have also expressed their eagerness to grab their relevant city gTLDs – before somebody else does.

ICANN is expected to start reviewing gTLD applications sometime during the first quarter of next year.

ANA – The History of Domain Names

ANA chief calls for new gTLDs to be suspended

August 9, 2011

The president of the Association of National Advertisers said the organization may sue ICANN unless it suspends its new top-level domains program.

Speaking to DomainIncite, ANA’s Bob Liodice said that American industry is “horrified” by the program, which he believes will cost his members a “quite humongous sum of money”.

Liodice wrote to ICANN president Rod Beckstrom a week ago, demanding the program be abandoned and dropping major hints that a lawsuit would be the alternative.

ANA’s board of directors, comprised of representatives of 36 of the largest companies in the US, is “unanimous” in its opposition to the program, he told me.

“We’ve had many conversations with our members, brand owners in the US, and nobody supports this to our knowledge,” Liodice said. “If American industry is not supporting the recommendation to do this, then who is? What is the benefit if brands owners are saying they’re horrified?”

ANA’s members simply do not understand why the program has been introduced, Liodice indicated.

“What’s the problem, what is ICANN trying to solve?” he said.

I put it to him that increasing competition in the registry space is in many ways ICANN’s raison d’etre, built into its founding principles.

“Just because this is something that was supposed to be done back in the Clinton days doesn’t mean it has to be done today,” he said. “The world has changed.”

“I think this is more for the benefit of ICANN than for the benefit of the [advertising] industry,” he said. “ICANN will secure substantial revenue for these changes and put incredible burdens on the industry to no benefit for the industry.”

ICANN, which is obviously a non-profit, says it has priced the program on a cost-recovery basis.

Not convinced by .brands

I asked Liodice if any of ANA’s members had expressed interest in “.brand” gTLDs, and put it to him that enjoy.coke or iwantmy.mtv might be innovative ways to advertise.

“That is not an issue right now,” he said. “The brand for the most part is in the URL anyway, what benefit does it get from moving to right of the dot?”

“The industry is in a period of stability and is very satisfied with status quo,” he added.

Liodice was not aware of the .brand announcements from Canon and Hitachi, but expressed skepticism about their reasons for applying.

“Are those companies saying this is important to me and will further my business interests?” he asked.

Canon USA does appear to be a member of ANA, although it does not have a seat on its board. Hitachi is not a member.

ANA’s plan

Last week’s letter gave Beckstrom an August 22 deadline to respond. The first thing ANA intends to do is wait for his reply, Liodice said.

Anything other than an undertaking to suspend the program for talks is likely to see an escalation.

“We first have to ensure this program is suspended,” Liodice said. “We’re trying to halt the introduction at this point in time and suspend it until we can have these conversations.”

ANA also hopes to speak to the US Department of Commerce, which has an oversight relationship with ICANN, as well as to members of Congress.

“We are lobbying members of Congress to make sure they’re aware of the detrimental characteristics of this, particularly at a time when the world is in great disorder with the financial crisis,” Liodice said.

There’s also the possibility of court action.

While stopping short of saying ANA will definitely sue, Liodice did say that the organization’s lawyers are looking into possible causes of action.

“If the reply is not consistent [with ANA’s requests] we will explore that possibility,” he said.

ANA would be represented by the law firm Reed Smith, which has already published its own statement of support for Liodice’s letter on its web site.

It’s good to talk

My feeling is that some of ANA’s concerns are already dealt with by the program’s Applicant Guidebook, and that a conversation explaining this could help reduce tensions.

Liodice, for example, appears convinced that top-level cybersquatting will be possible – that .coke could be registered by somebody other than Coca-Cola.

My view is that such an obvious transgression would be easily (and relatively cheaply) dealt with using the Legal Rights Objection mechanism already in the Guidebook.

That’s assuming, of course, that the $185,000 application fee failed to be a deterrent, and that a registry back-end provider dumb enough to put its name to the bid could be found.

But even if ANA can be convinced that the risk of TLD-squatting is negligible, its concerns about the potential for problems at the second level will be harder to address.

Let’s face it, while estimates of the increased cost of trademark enforcement vary wildly, nobody has disputed that there will be a cost.

One ANA member has estimated that the per-brand cost to companies would be $2 million over 10 years, Liodice said.

ANA does not appear to have spent much time getting involved in the development of the new gTLD program lately — the most recent submission I could find dates from 2009 — but Liodice said its counsel Reed Smith has been representing it in the ICANN process.

ANS Core – The History of Domain Names

ANS CO+RE allows commercial traffic

Date: 01/01/1991

ANS CO+RE

In 1991 a new ISP, ANS CO+RE (commercial plus research), was created as a for-profit subsidiary of the non-profit Advanced Network and Services. ANS CO+RE was created specifically to allow commercial traffic on ANSNet without jeopardizing its parent’s non-profit status or violating any tax laws.

The NSFNET Backbone Service and ANS CO+RE both used and shared the common ANSNet infrastructure. NSF agreed to allow ANS CO+RE to carry commercial traffic subject to several conditions:

  • that the NSFNET Backbone Service was not diminished;
  • that ANS CO+RE recovered at least the average cost of the commercial traffic traversing the network; and
  • that any excess revenues recovered above the cost of carrying the commercial traffic would be placed into an infrastructure pool to be distributed by an allocation committee broadly representative of the networking community to enhance and extend national and regional networking infrastructure and support.

ANS and in particular ANS CO+RE were involved in the controversies over who and how commercial traffic should be carried over what had, until recently, been a government sponsored networking infrastructure. These controversies are discussed in the “Commercial ISPs, ANS CO+RE, and the CIX” and “Controversy” sections of the NSFNET article.

ANS – The History of Domain Names

Advanced Network and Services (ANS)

Date: 01/01/1990

Advanced Network and Services (ANS) was a United States non-profit organization formed in September 1990 by the NSFNET partners (Merit Network, IBM, and MCI) to run the network infrastructure for the soon to be upgraded NSFNET Backbone Service.

HISTORY

ANSNet

In anticipation of the NSFNET Digital Signal 3 (T3) upgrade and the approaching end of the 5-year NSFNET cooperative agreement, in September 1990 Merit, IBM, and MCI formed Advanced Network and Services (ANS), a new non-profit corporation with a more broadly based Board of Directors than the Michigan-based Merit Network. Under its cooperative agreement with US National Science Foundation (NSF), Merit remained ultimately responsible for the operation of NSFNET, but subcontracted much of the engineering and operations work to ANS. Both IBM and MCI made substantial new financial and other commitments to help support the new venture. Allan Weis left IBM to become ANS’s first President and Managing Director. Douglas Van Houweling, former Chair of the Merit Network Board and Vice Provost for Information Technology at the University of Michigan, was the first Chairman of the ANS Board of Directors.

Completed in November 1991, the new T3 backbone was named ANSNet and provided the physical infrastructure used by Merit to deliver the NSFNET Backbone Service.

ANS CO+RE

In 1991 a new ISP, ANS CO+RE (commercial plus research), was created as a for-profit subsidiary of the non-profit Advanced Network and Services. ANS CO+RE was created specifically to allow commercial traffic on ANSNet without jeopardizing its parent’s non-profit status or violating any tax laws.

The NSFNET Backbone Service and ANS CO+RE both used and shared the common ANSNet infrastructure. NSF agreed to allow ANS CO+RE to carry commercial traffic subject to several conditions:

  • that the NSFNET Backbone Service was not diminished;
  • that ANS CO+RE recovered at least the average cost of the commercial traffic traversing the network; and
  • that any excess revenues recovered above the cost of carrying the commercial traffic would be placed into an infrastructure pool to be distributed by an allocation committee broadly representative of the networking community to enhance and extend national and regional networking infrastructure and support.

ANS and in particular ANS CO+RE were involved in the controversies over who and how commercial traffic should be carried over what had, until recently, been a government sponsored networking infrastructure. These controversies are discussed in the “Commercial ISPs, ANS CO+RE, and the CIX” and “Controversy” sections of the NSFNET article.

Sale of networking business to AOL and new life as a philanthropic organization

In 1995, after the transition to a new Internet architecture and the decommissioning of the NSFNET Backbone Service, ANS sold its networking business to AOL for a little over $30M and became a philanthropic organization with a mission “to advance education by accelerating the use of computer network applications and technology”. This work helped create ThinkQuest, the National Tele-Immersion Initiative, and the IP Performance Metrics program, and provided grant funding for educational programs including TRIO Upward Bound, the Internet Society, Internet2, Computers for Youth, Year Up, National Foundation for Teaching Entrepreneurship, Sarasota TeXcellence Program, and many others.

ANS closes

ANS closed down its operations in mid-2015.

ANS was incorporated in the State of New York and had offices in Armonk and Poughkeepsie, New York.

Apple – The History of Domain Names

Apple Inc – apple.com was registered

Date: 02/19/1987

On February 19, 1987, Apple Inc. registered the apple.com domain name, making it the 64th .com domain ever to be registered.

Apple Inc. is an American multinational technology company headquartered in Cupertino, California, that designs, develops, and sells consumer electronics, computer software, and online services. Its hardware products include the iPhone smartphone, the iPad tablet computer, the Mac personal computer, the iPod portable media player, the Apple Watch smartwatch, and the Apple TV digital media player. Apple’s consumer software includes the macOS and iOS operating systems, the iTunes media player, the Safari web browser, and the iLife and iWork creativity and productivity suites. Its online services include the iTunes Store, the iOS App Store and Mac App Store, and iCloud. Apple was founded by Steve Jobs, Steve Wozniak, and Ronald Wayne in April 1976 to develop and sell personal computers. It was incorporated as Apple Computer, Inc. in January 1977, and was renamed as Apple Inc. in January 2007 to reflect its shifted focus toward consumer electronics. Apple (NASDAQ: AAPL) joined the Dow Jones Industrial Average in March 2015. Apple is the world’s largest information technology company by revenue, the world’s largest technology company by total assets, and the world’s second-largest mobile phone manufacturer. In November 2014, in addition to being the largest publicly traded corporation in the world by market capitalization, Apple became the first U.S. company to be valued at over US$700 billion. The company employs 115,000 permanent full-time employees as of July 2015 and maintains 478 retail stores in seventeen countries as of March 2016. It operates the online Apple Store and iTunes Store, the latter of which is the world’s largest music retailer. There are over one billion actively used Apple products worldwide as of March 2016. Apple’s worldwide annual revenue totaled $233 billion for the fiscal year ending in September 2015. This revenue generation accounts for approximately 1.25% of the total United States GDP. The company enjoys a high level of brand loyalty and, according to Interbrand’s annual Best Global Brands report, has been the world’s most valuable brand for 3 years in a row, with a valuation in 2015 of $170.3 billion. The corporation receives significant criticism regarding the labor practices of its contractors and its environmental and business practices, including the origins of source materials.

In August 2016, after a three-year investigation by the EU’s competition commissioner that concluded that Apple received “illegal state aid” from Ireland, the EU ordered Apple to pay 13 billion euros ($14.5 billion), plus interest, in unpaid taxes.

Company History:

Apple Computer, Inc. is largely responsible for the enormous growth of the personal computer industry in the 20th century. The introduction of the Macintosh line of personal computers in 1984 established the company as an innovator in industrial design whose products became renowned for their intuitive ease of use. Though battered by bad decision-making during the 1990s, Apple continues to exude the same enviable characteristics in the 21st century that catapulted the company toward fame during the 1980s. The company designs, manufactures, and markets personal computers, software, and peripherals, concentrating on lower-cost, uniquely designed computers such as iMac and Power Macintosh models.

Origins

Apple was founded in April 1976 by Steve Wozniak, then 26 years old, and Steve Jobs, 21, both college dropouts. Their partnership began several years earlier when Wozniak, a talented, self-taught electronics engineer, began building boxes that allowed him to make long-distance phone calls for free. The pair sold several hundred such boxes. In 1976 Wozniak was working on another box–the Apple I computer, without keyboard or power supply–for a computer hobbyist club. Jobs and Wozniak sold their most valuable possessions, a van and two calculators, raising $1,300 with which to start a company. A local retailer ordered 50 of the computers, which were built in Jobs’s garage. They eventually sold 200 to computer hobbyists in the San Francisco Bay area for $666 each. Later that summer, Wozniak began work on the Apple II, designed to appeal to a greater market than computer hobbyists. Jobs hired local computer enthusiasts, many of them still in high school, to assemble circuit boards and design software. Early microcomputers had usually been housed in metal boxes. With the general consumer in mind, Jobs planned to house the Apple II in a more attractive modular beige plastic container. Jobs wanted to create a large company and consulted with Mike Markkula, a retired electronics engineer who had managed marketing for Intel Corporation and Fairchild Semiconductor. Chairman Markkula bought one-third of the company for $250,000, helped Jobs with the business plan, and in 1977 hired Mike Scott as president. Wozniak worked for Apple full time in his engineering capacity. Jobs recruited Regis McKenna, owner of one of the most successful advertising and public relations firms in Silicon Valley, to devise an advertising strategy for the company. McKenna designed the Apple logo and began advertising personal computers in consumer magazines. Apple’s professional marketing team placed the Apple II in retail stores, and by June 1977, annual sales reached $1 million. It was the first microcomputer to use color graphics, with a television set as the screen. In addition, the Apple II expansion slot made it more versatile than competing computers.

The earliest Apple IIs read and stored information on cassette tapes, which were unreliable and slow. By 1978 Wozniak had invented the Apple Disk II, at the time the fastest and cheapest disk drive offered by any computer manufacturer. The Disk II made possible the development of software for the Apple II. The introduction of Apple II, with a user manual, at a consumer electronics show signaled that Apple was expanding beyond the hobbyist market to make its computers consumer items. By the end of 1978, Apple was one of the fastest-growing companies in the United States, with its products carried by over 100 dealers. In 1979 Apple introduced the Apple II+ with far more memory than the Apple II and an easier startup system, and the Silentype, the company’s first printer. VisiCalc, the first spreadsheet for microcomputers, was also released that year. Its popularity helped to sell many Apple IIs. By the end of the year sales were up 400 percent from 1978, at over 35,000 computers. Apple Fortran, introduced in March 1980, led to the further development of software, particularly technical and educational applications.

In December 1980, Apple went public. Its offering of 4.6 million shares at $22 each sold out within minutes. A second offering of 2.6 million shares quickly sold out in May 1981. Meanwhile Apple was working on the Apple II’s successor, which was intended to feature expanded memory and graphics capabilities and run the software already designed for the Apple II. The company, fearful that the Apple II would soon be outdated, put time pressures on the designers of the Apple III, despite the fact that sales of the Apple II more than doubled to 78,000 in 1980. The Apple III was well received when it was released in September 1980 at $3,495, and many predicted it would achieve its goal of breaking into the office market dominated by IBM. However, the Apple III was released without adequate testing, and many units proved to be defective. Production was halted and the problems were fixed, but the Apple III never sold as well as the Apple II. It was discontinued in April 1984. The problems with the Apple III prompted Mike Scott to lay off employees in February 1981, a move with which Jobs disagreed. As a result, Mike Markkula became president and Jobs chairman. Scott was named vice-chairman shortly before leaving the firm. Despite the problems with Apple III, the company forged ahead, tripling its 1981 research and development budget to $21 million, releasing 40 new software programs, opening European offices, and putting out its first hard disk. By January 1982, 650,000 Apple computers had been sold worldwide. In December 1982, Apple became the first personal computer company to reach $1 billion in annual sales. The next year, Apple lost its position as chief supplier of personal computers in Europe to IBM, and tried to challenge IBM in the business market with the Lisa computer. Lisa introduced the mouse, a hand-controlled pointer, and displayed pictures on the computer screen that substituted for keyboard commands. These innovations came out of Jobs’s determination to design an unintimidating computer that anyone could use. Unfortunately, the Lisa did not sell as well as Apple had hoped. Apple was having difficulty designing the elaborate software to link together a number of Lisas and was finding it hard to break IBM’s hold on the business market. Apple’s earnings went down and its stock plummeted to $35, half of its sale price in 1982. Mike Markkula had viewed his presidency as a temporary position, and in April 1983, Jobs brought in John Sculley, formerly president of Pepsi-Cola, as the new president of Apple. Jobs felt the company needed Sculley’s marketing expertise.

1984 Debut of the Macintosh

The production division for Lisa had been vying with Jobs’s Macintosh division. The Macintosh personal computer offered Lisa’s innovations at a fraction of the price. Jobs saw the Macintosh as the ‘people’s computer’–designed for people with little technical knowledge. With the failure of the Lisa, the Macintosh was seen as the future of the company. Launched with a television commercial in January 1984, the Macintosh was unveiled soon after, with a price tag of $2,495 and a new 3.5-inch disk drive that was faster than the 5.25-inch drives used in other machines, including the Apple II. Apple sold 70,000 Macintosh computers in the first 100 days. In September 1984 a new Macintosh was released with more memory and two disk drives. Jobs was convinced that anyone who tried the Macintosh would buy it. A national advertisement offered people the chance to take a Macintosh home for 24 hours, and over 200,000 people did so. At the same time, Apple sold its two millionth Apple II. Over the next six months Apple released numerous products for the Macintosh, including a laser printer and a hard drive. Despite these successes, Macintosh sales temporarily fell off after a promising start, and the company was troubled by internal problems. Infighting between divisions continued, and poor inventory tracking led to overproduction. Although Jobs had originally been a strong supporter of Sculley, Jobs eventually decided to oust Sculley; Jobs, however, lost the ensuing showdown. Sculley reorganized Apple in June 1985 to end the infighting caused by the product-line divisions, and Jobs, along with several other Apple executives, left the company in September. They founded a new computer company, NeXT Incorporated, which would later emerge as a rival to Apple in the business computer market. The Macintosh personal computer finally moved Apple into the business office market. Corporations saw its ease of use as a distinct advantage. It was far cheaper than the Lisa and had the necessary software to link office computers. In 1986 and 1987 Apple produced three new Macintosh personal computers with improved memory and power. By 1988, over one million Macintosh computers had been sold, with 70 percent of sales to corporations. Software was created that allowed the Macintosh to be connected to IBM-based systems. Apple grew rapidly; income for 1988 topped $400 million on sales of $4.07 billion, up from income of $217 million on sales of $1.9 billion in 1986. Apple had 5,500 employees in 1986 and over 14,600 by the early 1990s.

In 1988, Apple management had expected a worldwide shortage of memory chips to worsen. They bought millions when prices were high, only to have the shortage end and prices fall soon after. Apple ordered sharp price increases for the Macintosh line just before the Christmas buying season, and consumers bought the less expensive Apple line or other brands. In early 1989, Apple released significantly enhanced versions of the two upper-end Macintosh computers, the SE and the Macintosh II, primarily to compete for the office market. At the same time IBM marketed a new operating system that mimicked the Macintosh’s ease of use. In May 1989 Apple announced plans for its new operating system, System 7, which would be available to users the next year and allow Macintoshes to run tasks on more than one program simultaneously. Apple was reorganized in August 1988 into four operating divisions: Apple USA, Apple Europe, Apple Pacific, and Apple Products. Dissatisfied with the changes, many longtime Apple executives left. In July 1990, Robert Puette, former head of Hewlett-Packard’s personal computer business, became head of the Apple USA division. Sculley saw the reorganization as an attempt to create fewer layers of management within Apple, thus encouraging innovation among staff. Analysts credit Sculley with expanding Apple from a consumer and education computer company to a business computer company, one of the biggest and fastest-growing corporations in the United States. Competition in the industry of information technology involved Apple in a number of lawsuits. In December 1989 for instance, the Xerox Corporation, in a $150 million lawsuit, charged Apple with unlawfully using Xerox technology for the Macintosh software. Apple did not deny borrowing from Xerox technology but explained that the company had spent millions to refine that technology and had used other sources as well. In 1990 the court found in favor of Apple in the Xerox case. Earlier, in March 1988, Apple had brought suits against Microsoft and Hewlett-Packard, charging copyright infringement. Four years later, in the spring of 1992, Apple’s case was dealt a severe blow in a surprise ruling: copyright protection cannot be based on ‘look and feel’ (appearance) alone; rather, ‘specific’ features of an original program must be detailed by developers for protection.

Mismanagement–Crippling an Industry Giant: 1990s

Apple entered the 1990s well aware that the conditions that made the company an industry giant in the previous decade had changed dramatically. Management recognized that for Apple to succeed in the future, corporate strategies would have to be reexamined.

Apple had soared through the 1980s on the backs of its large, expensive computers, which earned the company a committed, yet relatively small following. Sculley and his team saw that competitors were relying increasingly on the user-friendly graphics that had become the Macintosh signature and recognized that Apple needed to introduce smaller, cheaper models, such as the Classic and LC, which were instant hits. At a time when the industry was seeing slow unit sales, the numbers at Apple were skyrocketing. In 1990, desktop Macs accounted for 11 percent of the PCs sold through American computer dealers. In mid-1992, the figure was 19 percent. But these modestly priced models had a considerably smaller profit margin than their larger cousins. So even if sales took off, as they did, profits were threatened. In a severe austerity move, Apple laid off nearly ten percent of its workforce, consolidated facilities, moved production plants to areas where it was cheaper to operate, and drastically altered its corporate organizational chart. The bill for such forward-looking surgery was great, however, and in 1991 profits were off 35 percent. But analysts said that such pitfalls were expected, indeed necessary, if the company intended to position itself as a leaner, better-conditioned fighter in the years ahead. Looking ahead is what analysts say saved Apple from foundering. In 1992, after the core of the suit that Apple had brought against Microsoft and Hewlett-Packard was dismissed, industry observers pointed out that although the loss was a disappointment for Apple, the company wisely had not banked on a victory. They credited Apple’s ambitious plans for the future with quickly turning the lawsuit into yesterday’s news. In addition to remaining faithful to its central business of computer making–the notebook PowerBook series, released in 1991, garnered a 21 percent market share in less than six months–Apple intended to ride a digital wave into the next century. The company geared itself to participate in a revolution in the consumer electronics industry, in which products that were limited by a slow, restrictive analog system would be replaced by faster, digital gadgets on the cutting edge of telecommunications technology. Apple also experimented with the interweaving of sound and visuals in the operations of its computers.

For Apple, the most pressing issue of the 1990s was not related to technology, but concerned capable and consistent management. The company endured tortuous failures throughout much of the decade, as one chief executive officer after another faltered miserably. Sculley was forced out of his leadership position by Apple’s board of directors in 1993. His replacement, Michael Spindler, broke tradition by licensing Apple technology to outside firms, paving the way for ill-fated Apple clones that ultimately eroded Apple’s profits. Spindler also oversaw the introduction of the Power Macintosh line in 1994, an episode in Apple’s history that typified the perception that the company had the right products but not the right people to deliver the products to the market. Power Macintosh computers were highly sought after, but after overestimating demand for the earlier release of its PowerBook laptops, the company grossly underestimated demand for the Power Macintosh line. By 1995, Apple had $1 billion worth of unfilled orders, and investors took note of the embarrassing miscue. In a two-day period, Apple’s stock value plunged 15 percent. After Spindler’s much-publicized mistake of 1995, Apple’s directors were ready to hand the leadership reins to someone new. Gil Amelio, credited with spearheading the recovery of National Semiconductor, was named chief executive officer in February 1996, beginning another notorious era of leadership for the beleaguered Cupertino company. Amelio cut Apple’s payroll by a third and slashed operating costs, but drew a hail of criticism for his compensation package and his inability to relate to Apple’s unique corporate culture. Apple’s financial losses, meanwhile, mounted, reaching $816 million in 1996 and a staggering $1 billion in 1997. The company’s stock, which had traded at more than $70 per share in 1991, fell to $14 per share. Its market share, 16 percent in the late 1980s, stood at less than four percent. Fortune magazine offered its analysis, referring to Apple in its March 3, 1997 issue as ‘Silicon Valley’s paragon of dysfunctional management.’

Amelio was ousted from the company in July 1997, but before his departure a significant deal was concluded that brought Apple’s savior to Cupertino. In December 1996, Apple paid $377 million for NeXT, a small, $50-million-in-sales company founded and led by Steve Jobs. Concurrent with the acquisition, Amelio hired Jobs as his special advisor, marking the return of Apple’s visionary 12 years after he had left. In September 1997, two months after Amelio’s exit, Apple’s board of directors named Jobs interim chief executive officer. Apple’s recovery occurred during the ensuing months. Jobs assumed his responsibilities with the same passion and understanding that had made Apple one of the greatest success stories in business history. He immediately discontinued the licensing agreement that spawned Apple clones. He eliminated 15 of the company’s 19 products, withdrawing Apple’s involvement in making printers, scanners, personal digital assistants, and other peripherals. From 1997 forward, Apple would focus exclusively on desktop and portable Macintoshes for professional and consumer customers. Jobs closed plants, laid off thousands of workers, and sold stock to rival Microsoft Corporation, receiving a cash infusion of $150 million in exchange. Apple’s organizational hierarchy underwent sweeping reorganization as well, but the most visible indication of Jobs’s return was unveiled in August 1998. Distressed by his company’s lack of popular computers that retailed for less than $2,000, Jobs tapped Apple’s resources and, ten months after the project began, unveiled the massively successful iMac, a sleek and colorful computer that embodied Apple’s skill in design and functionality.

Because of Jobs’s restorative efforts, Apple exited the 1990s as a pared-down version of its former self, but, importantly, a profitable company once again. Annual sales, which totaled $11.5 billion in 1995, stood at $5.9 billion in 1998, from which the company recorded a profit of $309 million. In 1999, sales grew a modest 3.2 percent, but the newfound health of the company was evident in a 94 percent gain in net income, as Apple’s profits swelled to $601 million. Further, Apple’s stock mustered a remarkable rebound, climbing 140 percent to $99 per share in 1999. By the decade’s end, ‘interim’ was dropped from Jobs’s corporate title, signaling Jobs’s return on a permanent basis and fueling optimism that Apple could look forward to a decade of vibrant and consistent growth. In 2000 Steve Jobs formally took on the role of CEO, the company’s chief sales executive, Mitch Mandich, announced that he would be stepping down, and upcoming products and upgrades such as the Power Mac G4 Cube were announced. Apple’s success continued in 2001 with the launch of the PowerBook G4 line of notebook computers. Another milestone for Apple Inc. in 2001 was the launch of the popular iPod, a small handheld media player. 2001 was also the launch year for the Mac OS X operating system, and another important step that year was the licensing of Amazon’s 1-Click technology.

In 2002 Apple teamed up with Sun and Ericsson, and former Vice President of Education John Couch returned to the company. Other notable moves for Apple in 2002 included the acquisition of Emagic, a music software company, and Zayante, a FireWire technology company, as well as the announcement that its retail stores would soon expand to include overseas locations. Apple was awarded an Emmy for technology in 2002, and Larry Ellison announced that he would be resigning from the board. CEO Steve Jobs was diagnosed with pancreatic cancer in 2003 and later underwent surgery. A new ad campaign featuring the band U2 launched in 2004. One of Apple’s most notable advancements of 2004 was the international expansion of the iTunes Music Store, and the fourth-generation iPod was introduced the same year. In 2005 the iPod Nano was successfully launched, and Jef Raskin, the interface expert who had initiated the original Macintosh project, died of cancer. Further events of 2005 included the acquisition of SchemaSoft and the announced switch to Intel processor chips in Apple products. Apple’s continued success was apparent when more than one million videos were downloaded within three weeks of the launch of the video-capable iPod.

In 2006 Avie Tevanian, Apple’s software development leader, announced his resignation, and the company’s computer take-back recycling program also generated buzz. The popular MacBook Pro line of portable computers was introduced in 2006 as well. Although Apple was already a technology leader, the release of the iPhone in 2007 brought the company great gains and opened up a whole new world for users with its sleek single-button design, touch screen, and virtual keyboard. The same year saw the introduction of Apple TV and the iPod Touch, a device very similar to the iPhone, with wireless capabilities but without the telephone features. In 2008 the App Store was unveiled as an iTunes update, featuring small applications, from games to business and social tools, that could be easily downloaded to an iPhone or iPod Touch. The MacBook Air was also released in 2008. 2009 brought some problems for the company when CEO Steve Jobs had to take a leave of absence for health reasons; after a liver transplant he returned to work that same year.

Later in 2009 the iPhone 3GS was released as the new version of the original iPhone, and cumulative iPod sales surpassed 200 million units. In 2010 the new iPad was launched, featuring a 9.7-inch touchscreen; it quickly claimed more than 80 percent of the tablet market by the end of the year. Music from the British band The Beatles also became available on iTunes after much debate. Tim Cook, who had filled in for Jobs during his medical leave, was later awarded a bonus of $22 million for his leadership during the absence, a period in which Apple’s stock price increased by almost 70 percent.

In 2011 the announcement was made that Jobs would take another medical leave of absence. The iPhone also became available through Verizon Wireless, ending the exclusive arrangement that had given AT&T sole rights to sell the iPhone in the United States. The iPad 2 and a Verizon-compatible version of the iPhone 4 were also introduced in 2011, offering new features and a more streamlined, sleek design. October 2011 brought the launch of the iPhone 4S and the introduction of Siri, a voice-controlled assistant that provides maps, directions, phone calls, and other features on verbal request. Four million units were sold within the first few days of release.

2012 brought the release of the new iPhone 5 in September, with more than five million units sold within the first three days; the launch caused backorders and shipping delays because the company had not anticipated the demand.