
Vint Cerf and ICANN – The History of Domain Names

Vint Cerf was instrumental in the founding and formation of ICANN

Date: 01/01/2010

Vinton Gray “Vint” Cerf is an American computer scientist recognized as one of “the fathers of the Internet”, a title he shares with Bob Kahn. His contributions have been acknowledged and lauded, repeatedly, with honorary degrees and awards that include the National Medal of Technology, the Turing Award, the Presidential Medal of Freedom, and membership in the National Academy of Engineering.

Vinton Cerf was instrumental in the founding and formation of ICANN from the start. Cerf waited in the wings for a year before stepping forward to join the ICANN Board, and he eventually became its chairman. Cerf was elected president of the Association for Computing Machinery in May 2012, and in August 2013 he joined the Council on CyberSecurity’s Board of Advisors.

Cerf is active in many organizations that are working to help the Internet deliver humanitarian value in our world today. He is supportive of innovative projects that are experimenting with new approaches to global problems, including the digital divide, the gender gap, and the changing nature of jobs. Cerf is also known for his sartorial style, typically appearing in a three-piece suit, a rarity in an industry known for its casual dress norms.

Cerf attended Van Nuys High School along with Jon Postel and Steve Crocker; he later wrote Postel’s obituary. Both were also instrumental in the creation of the Internet as we know it.

Vinton Cerf, ICANN Chairman – The History of Domain Names

Vinton Cerf became the Chairman of the ICANN board

Date: 05/08/2011

Vint Cerf, the 57-year-old Internet pioneer, was elected chairman of the Internet Corporation for Assigned Names and Numbers (ICANN).

He assumed the one-year, unpaid position this week at ICANN’s annual meeting. On Thursday, Cerf was among the board members who unanimously approved seven new top level domains, additions to the existing .com and .net Web address suffixes. Currently senior vice president for Internet architecture and technology at WorldCom Corp., Cerf said he believes ICANN should have a limited role in the Internet’s development. In the 1970s, as a Stanford University professor, he helped create the TCP/IP protocol for Internet communication.

Vinton Cerf – The History of Domain Names

Vinton Cerf was a program manager

Date: 01/01/1984

Vint Cerf  – Widely known as a “Father of the Internet,” Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. In 2004 both Cerf and Kahn won the A.M. Turing Award, the highest honour in computer science, for their “pioneering work on internetworking, including the design and implementation of the Internet’s basic communications protocols, TCP/IP, and for inspired leadership in networking.”

In 1965 Cerf received a bachelor’s degree in mathematics from Stanford University in California. He then worked for IBM as a systems engineer before attending the University of California at Los Angeles (UCLA), where he earned a master’s degree and then a doctorate in computer science in 1970 and 1972, respectively. He then returned to Stanford, where he joined the faculty in computer science and electrical engineering.

While at UCLA, Cerf worked under fellow student Stephen Crocker in the laboratory of Leonard Kleinrock on the project to write the communication protocol (Network Control Program [or Protocol]; NCP) for the ARPANET (Advanced Research Projects Agency Network; see DARPA), the first computer network based on packet switching, a heretofore untested technology. (In contrast to ordinary telephone communications, in which a specific circuit must be dedicated to the transmission, packet switching splits a message into “packets” that travel independently over many different circuits.) UCLA was among the four original ARPANET nodes. Cerf also worked on the software that measured and tested the performance of the ARPANET. While working on the protocol, Cerf met Kahn, an electrical engineer who was then a senior scientist at Bolt Beranek & Newman. Cerf’s professional relationship with Kahn was among the most important of his career.
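The packet-switching idea described in parentheses above can be illustrated with a minimal sketch (not the actual NCP): split a message into numbered packets, let them travel and arrive in any order, and reassemble them by sequence number at the destination.

```python
import random

def packetize(message: str, size: int) -> list[tuple[int, str]]:
    """Split a message into numbered packets of at most `size` characters."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Rebuild the message regardless of the order in which packets arrived."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("LO AND BEHOLD", size=4)
random.shuffle(packets)                  # packets may arrive out of order
assert reassemble(packets) == "LO AND BEHOLD"
```

Because each packet carries its own sequence number, no dedicated circuit is needed: every packet can take whatever route is available at that moment.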

In 1972 Kahn moved to DARPA as a program manager in the Information Processing Techniques Office (IPTO), where he began to envision a network of packet-switching networks—essentially, what would become the Internet. In 1973 Kahn approached Cerf, then a professor at Stanford, to assist him in designing this new network. Cerf and Kahn soon worked out a preliminary version of what they called the ARPA Internet, the details of which they published as a joint paper in 1974. Cerf joined Kahn at IPTO in 1976 to manage the office’s networking projects. Together, with many contributing colleagues sponsored by DARPA, they produced TCP/IP (Transmission Control Protocol/Internet Protocol), an electronic transmission protocol that separated reliable, error-checked delivery (TCP) from the addressing and routing of packets between networks (IP).

Cerf’s work on making the Internet a publicly accessible medium continued after he left DARPA in 1982 to become a vice president at MCI Communications Corporation (WorldCom, Inc., from 1998 to 2003). While at MCI he led the effort to develop and deploy MCI Mail, the first commercial e-mail service that was connected to the Internet. In 1986 Cerf became a vice president at the Corporation for National Research Initiatives, a not-for-profit corporation located in Reston, Virginia, that Kahn, as president, had formed to develop network-based information technologies for the public good. Cerf also served as founding president of the Internet Society from 1992 to 1995. In 1994 Cerf returned to MCI as a senior vice president, and from 2000 to 2007 he served as chairman of the Internet Corporation for Assigned Names and Numbers (ICANN), the group that oversees the Internet’s growth and expansion. In 2005 he left MCI to become vice president and “chief Internet evangelist” at the search engine company Google Inc.

In addition to his work on the Internet, Cerf served on many government panels related to cybersecurity and the national information infrastructure. A fan of science fiction, he was a technical consultant to one of author Gene Roddenberry’s posthumous television projects, Earth: Final Conflict. Among his many honours were the U.S. National Academy of Engineering’s Charles Stark Draper Prize (2001), the Prince of Asturias Award for Technical and Scientific Research (2002), the Presidential Medal of Freedom (2005), the Queen Elizabeth Prize for Engineering (2013), and the French Legion of Honour (2014).

Vistaprint – The History of Domain Names

Vistaprint Buys Webs

Dec 19, 2011

Printing services giant Vistaprint announced on Monday that it had acquired web hosting provider Webs Inc. for $117.5 million, to be paid at closing through a combination of $100 million cash and $17.5 million in restricted shares.

Webs, which has raised $12 million in funding from Novak Biddle Venture Partners and Columbia Capital, allows businesses to create simple websites, Facebook pages and mobile sites. Premium upgrades include personalized domain names, customer support, email addresses, and enhanced web and video storage. Webs has helped build over 50 million sites to date and has over 100,000 paying subscribers. Over 20,000 new users register daily for Webs’ suite of products. Vistaprint says that Webs’ 2011 revenues are $9 million.

Earlier this year, Webs acquired Facebook Pages creator PageModo and lightweight CRM tool ContactMe. Webs also recently debuted a new mobile site builder that automatically keeps a smartphone-friendly website in sync with a business’s main website.

Perhaps the most interesting part of the deal is Webs’ website-building tool, which the company says has been refined over the years, via its “freemium” distribution model, into one of the finer website builders in the business.

WAIS – The History of Domain Names

Wide area information server (WAIS)

Date: 01/01/1991

Wide Area Information Server (WAIS) is a client–server text searching system that uses the ANSI Standard Z39.50 “Information Retrieval Service Definition and Protocol Specifications for Library Applications” to search index databases on remote computers. It was developed in the late 1980s as a project of Thinking Machines, Apple Computer, Dow Jones, and KPMG Peat Marwick.

WAIS (Wide Area Information Servers) is an Internet system in which specialized subject databases are created at multiple server locations, kept track of by a directory of servers at one location, and made accessible for searching by users with WAIS client programs. The user of WAIS is provided with or obtains a list of distributed databases. The user enters a search argument for a selected database, and the client then accesses all the servers on which the database is distributed. The results provide a description of each text that meets the search requirements. The user can then retrieve the full text.

WAIS (pronounced “ways”) uses its own Internet protocol, an extension of the Z39.50 standard (Information Retrieval Service Definition and Protocol Specification for Library Applications) of the National Information Standards Organization. Web users can use WAIS by either downloading a WAIS client and a “gateway” to the Web browser or by using Telnet to connect to a public WAIS client.


The WAIS protocol and servers were primarily promoted by Thinking Machines Corporation (TMC) of Cambridge, Massachusetts. TMC produced WAIS servers which ran on their massively parallel CM-2 (Connection Machine) and SPARC-based CM-5 MP supercomputers. WAIS clients were developed for various operating systems and windowing systems including Microsoft Windows, Macintosh, NeXT, X, GNU Emacs, and character terminals. TMC, however, released a free open source version of WAIS to run on Unix in 1991.

Inspired by the WAIS project on full-text databases and by emerging SGML projects, Z39.50 version 2 (Z39.50:1992) was released. Unlike its 1988 predecessor, it was a compatible superset of the ISO 10162/10163 work that had been done internationally.

With the advent of Z39.50:1992, the termination of support for the free WAIS from Thinking Machines, and the establishment of WAIS Inc. as a commercial venture, the U.S. National Science Foundation funded the Clearinghouse for Networked Information Discovery and Retrieval (CNIDR) to act as a clearinghouse of information on Internet search and discovery systems and to promote open source and standards. CNIDR created a new, freely available open-source WAIS: first the freeWAIS package, based on the wais-8-b5 codebase implemented by Thinking Machines, and then a wholly new software suite, Isite, based on Z39.50:1992 with Isearch as its full-text search engine.

Ulrich Pfeifer and Norbert Gövert of the computer science department of the University of Dortmund took the CNIDR freeWAIS code and extended it to become freeWAIS-sf; the “sf” stands for structured fields, its main improvement. Pfeifer later rewrote freeWAIS-sf in Perl, where it became WAIT.

Directory of Servers

Thinking Machines Corp provided a service called the Directory of Servers. It was a WAIS server like any other information source, but it contained information about the other WAIS servers on the Internet. When one created a WAIS server with the TMC WAIS code, it would generate a special kind of record containing metadata and some common words describing the content of the index. That record would be uploaded to the central server and indexed along with the records from other public servers. One could then search the directory to find servers likely to have content relevant to a specific field of interest. This model of searching for (WAIS) servers to search became the model for GILS and for Peter Deutsch’s WHOIS++ distributed white-pages directory.
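The description-record mechanism can be sketched as follows. The word-frequency heuristic and the hostnames are assumptions for illustration only, not the actual TMC record format.

```python
import re
from collections import Counter

def describe(index_texts: list[str], top: int = 5) -> set[str]:
    """Build a server-description record from the most common words in an index."""
    words = re.findall(r"[a-z]+", " ".join(index_texts).lower())
    return {w for w, _ in Counter(words).most_common(top)}

# Each public server uploads its description record to the directory of servers.
directory = {
    "cooking.example.org": describe(["rice recipes", "soup recipes", "bread recipes"]),
    "law.example.org": describe(["patent law", "case law digests"]),
}

def find_servers(interest: str) -> list[str]:
    """Search the directory to find servers that may cover a field of interest."""
    return [host for host, record in directory.items() if interest in record]

assert find_servers("recipes") == ["cooking.example.org"]
```

The directory is thus itself just another searchable index, whose “documents” are descriptions of other servers, which is what made the model easy to generalize.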


Two of the developers of WAIS, Brewster Kahle and Harry Morris, left Thinking Machines to found WAIS Inc. in Menlo Park, California, with Bruce Gilliat. WAIS itself had originally been developed as a joint project between Apple Computer, Peat Marwick, Dow Jones, and Thinking Machines. In 1992, the presidential campaign of Ross Perot used WAIS as a campaign-wide information system, connecting the field offices to the national office. Later, Perot Systems adopted WAIS to better access the information in its corporate databases. Other early clients were the Environmental Protection Agency, the Library of Congress, and the Department of Energy, and later the Wall Street Journal and Encyclopædia Britannica.

WAIS Inc. was sold to AOL in May 1995 for $15 million. Following the sale, Margaret St. Pierre left WAIS Inc. to start Blue Angel Technologies; her “WAIS variant” formed the basis of MetaStar. Georgios Papadopoulos left to found Atypon. François Schiettecatte left the Human Genome Project at Johns Hopkins Hospital and started FS-Consult, developing his own variant of WAIS that eventually became ScienceServer, which was later sold to Elsevier Science. Kahle and Gilliat went on to found the Internet Archive and Alexa Internet.

WAIS and Gopher

Public WAIS was often used as a full-text search engine for individual Internet Gopher servers, supplementing the popular Veronica system, which searched only the menu titles of Gopher sites. WAIS and Gopher share the World Wide Web’s client–server architecture and a certain amount of its functionality. The WAIS protocol is influenced largely by the Z39.50 protocol designed for networking library catalogs. It allows a text-based search, and retrieval following a search. Gopher provides a free-text search mechanism, but principally uses menus. A menu is a list of titles, from which the user may pick one. While Gopher space is a web containing many loops, the menu system gives the user the impression of a tree.

The Web’s data model is similar to the gopher model, except that menus are generalized to hypertext documents. In both cases, simple file servers generate the menus or hypertext directly from the file structure of a server. The Web’s hypertext model permits the author more freedom to communicate the options available to the reader, as it can include headings and various forms of list structure.

Because of the abundance of content and search engines now available on the Web, few if any WAIS servers remain in operation.

UUNET – The History of Domain Names

UUNET founded

Date: 01/01/1987

UUNET, founded in 1987, was one of the largest Internet service providers and one of the early Tier 1 networks. It was based in Northern Virginia and was one of the first commercial Internet service providers. Today, UUNET is an internal brand of Verizon Business (formerly MCI).

Company History:

Founded in 1987, UUNET is recognized not only as the first commercial Internet service provider (ISP) but also as the world’s leading Internet carrier. The company provides a comprehensive range of Internet services to more than 70,000 business customers worldwide, including Internet access, web hosting, remote access, virtual private networking (VPN), managed security, and multicast services, among others. It owns and operates one of the most widely deployed IP networks in the world, with more than 2,500 POPs (points of presence, or primary Internet connections) providing Internet connectivity in more than 100 countries. UUNET’s parent company, WorldCom Inc., is one of the world’s largest telecommunications companies.

A Pioneering Internet Service Provider: 1987-94

UUNet Technologies Inc. (UUNET) was founded in 1987, and in 1988 it sold the first commercial connection to the Internet. For the next several years UUNET would be a pioneering Internet service provider (ISP) and offer innovative services. In 1992 UUNET developed the first commercial application-layer firewall services for IP (Internet Protocol) networks. In 1993 UUNET began offering T-1 connections to the Internet. In 1994 UUNET designed and installed the first virtual private network (VPN) service, the Virtual Private Data Network (VPDN) product family. The company also began offering web hosting services by joining forces with electronic publisher Interse Corporation to provide companies with a turnkey solution for putting their businesses on the Internet. The service included a T-1 connection, offered at two different speeds, from UUNET, and Interse’s World Wide Web server. Interse would also build the company’s home page using HTML (hypertext markup language) and design an interactive, multi-tiered, and graphical presentation. In 1994 John Sidgmore became UUNET’s CEO and president. He was formerly president and CEO of CSC Intelicom.

Providing Internet Access and Other Services to Corporate Customers: 1995-97

In early 1995 Microsoft Corporation was busy preparing its Microsoft Network (MSN), which would be offered with the release of Windows 95. UUNET provided Microsoft with a dedicated TCP/IP network that would allow MSN users to have dial-up access to both MSN and the Internet. UUNET would provide MSN users with a variety of high-speed service options, including ISDN and 28.8K-bps (bits per second) modem access. Microsoft planned to offer full access to the Internet by the end of 1995. At the time UUNET had 25 points of presence and planned to have 100 connections in place within a few months. According to one analyst, UUNET was a good choice for Microsoft, because it had more expertise than telephone carriers about Internet protocol network management. Microsoft also took a minority interest in UUNET, with the funds to be used to expand UUNET’s 25-city dial-up network, and also gained a seat on UUNET’s board of directors.

UUNET completed its initial public offering (IPO) in May 1995. The stock was priced at $14 a share and raised more than $50 million. Two other ISPs–Netcom On-Line Communications Services Inc. and Performance Systems International (PSINet)–also had their IPOs that month, and all were well received by Wall Street. By July UUNET’s stock was trading in the $45 range. Although none of the companies had shown a profit, investors were betting on their potential for explosive revenue growth in the coming year. At the time of UUNET’s IPO, Microsoft owned about 15 percent of the company. As it turned out, UUNET’s revenue for 1995 was $94 million, compared to $12.4 million in the previous year.

While UUNET had a smaller network than competitor Netcom On-Line Communications Services Inc., UUNET was focused on the corporate market, while Netcom was targeting the consumer market. The corporate market for Internet access was expected to grow more quickly, with the consumer market set to explode toward the end of the 1990s. The corporate market gave UUNET higher revenue streams, bigger margins, and a more reliable customer base. UUNET also offered premium services, such as network management and security, for which it could charge more. UUNET’s corporate clients included America Online and AT&T as well as Microsoft, for whom it built Internet backbones. UUNET was not ignoring the consumer market, though, and most ISPs were pursuing a hybrid business model for both the corporate and consumer markets. UUNET was in the process of building out its network and adding points of presence.

In May 1995 UUNET introduced its Internet firewall and an encryption system for sending private data over public networks. The package included LanGuardian, a hardware-based encryption system, and Gauntlet, an application gateway firewall. It would allow workgroup users to create a virtual private network (VPN) over the Internet, thus facilitating collaborative development.

Later in the year UUNET introduced T-3 and SMDS (Switched Multimegabit Data Services), with pricing for the T-3 service starting at $5,000 per month. T-3 service was initially available in seven U.S. cities. The SMDS option was a public-switched data offering from the regional Bell operating companies (RBOCs), which allowed users to connect to UUNET’s Internet backbone for around $1,500 a month.

UUNET’s two principal services were leased-line connections for businesses under its AlterNet program and dial-up access through its AlterDial program. AlterDial was expected to have 130 points of presence by the end of 1995. In November 1995 UUNET and Premenos Corporation entered into an alliance to build a system for electronic data interchange (EDI) over the Internet. EDI was seen to be a key element in business-to-business electronic commerce. Premenos produced a popular EDI software suite that enabled confidentiality, data integrity, user authentication, and built-in security. In 1996 UUNET began offering more services related to electronic commerce over the Internet. They included end-to-end security, FTP (File Transfer Protocol) hosting, dedicated servers that companies could lease for processor-intensive tasks such as graphics applications or web-based compilation, and improved reporting capabilities. By early 1996 UUNET was refocused on the corporate market, letting other ISPs such as America Online and CompuServe concentrate on consumers. With 1995 revenue of $94 million, UUNET was the largest player in the industry.

New Owners for UUNET: 1996

At the end of April 1996 it was announced that MFS Communications Inc. would acquire UUNET for $2 billion in stock. MFS Communications paid a 37 percent premium over market value for UUNET, making 40 of UUNET’s 700 employees who owned UUNET stock millionaires. MFS Communications was based in Omaha, Nebraska, and offered local and long distance telephone service in New York and nationwide. It had built fiber-optic cable connections to 7,400 buildings in key financial districts in the United States and Europe. Together, the two companies would be able to offer end-to-end voice, data, and Internet services. Following the announcement UUNET extended its AlterDial service to 92 international cities.

In mid-1996 UUNET picked up GTE as a corporate customer when GTE introduced GTE Internet Solutions, which used UUNET’s existing network to offer Internet access service in 250 cities in 46 states. At the time GTE was the largest local telephone service provider in the United States. Its Internet access service would be offered to consumers for $19.95 a month. At the same time UUNET formed an alliance with USConnect Inc. that would combine UUNET’s national network with USConnect’s systems integration expertise. The partnership would provide businesses with a single source for integrating their LANs with the Internet.

Later in 1996 UUNET became the first major commercial ISP to offer web hosting services for Windows NT 4.0. UUNET’s hosting service included a personalized domain name for the customer, servers housed at UUNET with guaranteed power backup, constant web site monitoring, daily tape backups, monthly server traffic reports, and third-party audit services. UUNET priced the service at $3,000 a month, with a $5,000 start-up fee for the dedicated server. If clients required assistance with web site design, UUNET referred them to third-party content developers who were part of the Team UUNET program. UUNET first offered web hosting services in 1994 using the Unix platform and claimed to have more than 800 customers. Web hosting was one service that ISPs could offer to distinguish themselves from their numerous competitors. In late 1996 UUNET began offering another service, ExtraLink, a package of extranet services that would enable companies to share information with customers and suppliers over a virtual private network (VPN). A related service, called ExtraLink Remote Access, allowed remote workers to access the corporate network without a large bank of modems.

For 1996 UUNET’s revenues were estimated to be $216 million. Before the end of the year WorldCom Inc. acquired MFS Communications for approximately $12 billion in stock. Sidgmore remained CEO of UUNET and became vice-chairman and chief operating officer (COO) of WorldCom.

Prospering Under WorldCom: 1997-2000

With WorldCom as its parent company, UUNET was able to offer a pioneering service guarantee in early 1997. The company guaranteed uptime of more than 99.9 percent for intranet and business-to-business services. The guarantee applied to sites that were connected to one another by the UUNET network. It covered more than 300 locations in the United States and 500 in Europe and Asia.

UUNET also received $300 million from WorldCom to upgrade its Internet backbone in the United States and quadruple its capacity. Company officials estimated that the load on UUNET’s backbone doubled every quarter, which meant the company would need to increase its capacity tenfold within a year and a hundredfold within two years. UUNET expected to have to spend about $300 million a year on network expansion for the next four or five years. UUNET’s main ISP customers at the time were Microsoft, GTE, and EarthLink, with America Online also leasing a small portion of UUNET’s network.

At the Internet World show in March 1997, UUNET announced it would offer global web hosting facilities to help multinational companies overcome the problem of narrow bandwidth between continents. UUNET planned to provide local web hosting through peer web service providers in Canada, Germany, the United Kingdom, and Taiwan. UUNET also announced the availability of IDSL technology, a combination of ISDN and DSL developed with MFS Communications and Ascend Communications Inc., that would provide small and medium-size businesses with direct connections to the Internet at a low cost. IDSL, which offered high-speed Internet access over standard phone lines, would first be deployed by the regional Bell operating companies (RBOCs), starting in California.

With the ISP market evolving and businesses demanding better quality and faster throughput, the large Internet backbone providers–including UUNET and Sprint–announced they would no longer carry traffic for mid-size ISPs free of charge. In some cases, the traffic-exchange agreements would be terminated, while in other cases the mid-size ISPs would be charged a fee. Costs were expected to be passed along to the consumer. To further ensure that its Internet traffic reached its destination, UUNET introduced a Shadow Support Program that provided a secondary T-1 or T-3 line to businesses. Network administrators could then redirect traffic if there was a line outage or other problem, thus avoiding an interruption of service. At the time of the announcement, UUNET had been dealing with a growing number of customer complaints after two UUNET hubs in Los Angeles and Tysons Corner, Virginia, failed.

In mid-1997 UUNET announced it would begin offering an Internet fax service by the end of the year. Called UUFax, the service was based on technology developed with software maker Open Port Technology Inc. and remote access hardware provider Ascend Communications Inc. Instead of sending faxes as analog traffic over telephone lines, UUFax would turn faxes into IP packets and deliver them via UUNET’s global Internet backbone. By sending faxes over the Internet, companies could save the cost of a long-distance call. In September 1997 UUNET acquired NLnet, the leading ISP in the Netherlands. NLnet’s 45 points of presence there provided comprehensive local-dial access throughout the country. At the time the Netherlands was one of the top five countries for Internet usage on a per capita basis. With the acquisition UUNET had nearly 1,000 points of presence in the United States, Canada, Europe, and the Asia-Pacific region.

Also in September UUNET introduced multicast services for mass Internet broadcasting. Called UUCasting, the service allowed content providers to send only one stream of audio, video, or text data to UUNET, which would then deliver the content to large numbers of users. The service utilized 60 Cisco Systems routers dispersed throughout the UUNET network. UUCasting was the first commercially available multicasting service to give Internet content providers a way to reach hundreds of thousands of users without having to invest in their own T-3 lines. UUCasting became available in December 1997.

Later in the year UUNET became the first ISP to offer OC-3 service. The program, called OCDirect, was introduced at the 1997 Networld+Interop trade show. It would provide users with 155 Mbps (megabits per second) direct access to UUNET’s Internet backbone, the fastest speed then available and faster than anything being offered by other ISPs. OCDirect was offered only in the San Francisco Bay area, Washington, D.C., and New York City, mainly to ISP resellers, large web-hosting services, and large corporate users.

In November 1997 UUNET and BellSouth Corporation began offering 56K-bps dial-up services to provide faster access to the Internet and corporate LANs. UUNET’s 56K-bps access was immediately available through 415 points of presence, to be expanded to 500 POPs by the end of 1997. AT&T had been the first to provide 56K-bps service, starting in October 1997 and available in 11 U.S. cities. BellSouth’s 56K-bps service would be offered in four Southern cities. Following the adoption of the V.90 standard for 56K-bps modems in February 1998, UUNET upgraded 300,000 of its dial-up access ports in more than 700 U.S. points of presence by mid-1998, with an additional 200,000 upgrades planned by the end of the year.
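The economics of multicasting over per-user streaming come down to simple arithmetic on the content provider's outbound link; the listener count and stream bitrate below are hypothetical.

```python
# Unicast: the provider sends one copy of the stream per listener.
# Multicast (the UUCasting model): one copy goes to the carrier, which
# replicates it inside its own network at each router fan-out point.
listeners = 100_000
stream_kbps = 128                        # hypothetical audio stream bitrate

unicast_load_kbps = listeners * stream_kbps   # load on the provider's link
multicast_load_kbps = 1 * stream_kbps

assert unicast_load_kbps // multicast_load_kbps == listeners
```

A provider reaching 100,000 listeners this way needs only a single stream's worth of upstream bandwidth instead of a T-3-class connection of its own.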

Following the acquisition by WorldCom of three other ISPs–ANS Communications Inc., CompuServe Network Services, and GridNet International–WorldCom announced it would consolidate them with UUNET and reorganize into two main divisions: UUNET WorldCom, which would emphasize packaged services, and WorldCom Advanced Networks, which would provide Web hosting, intranet and extranet, VPN, and data security services. UUNET WorldCom would be headed by Mark Spagnolo, UUNET’s president and COO, while WorldCom Advanced Networks would be headed by former CompuServe Network Services president Peter Van Kamp. Both would report to John Sidgmore, WorldCom’s vice-chairman and UUNET’s CEO. The reorganization was necessitated in part by the need to stop providing overlapping Internet services from the different ISPs WorldCom had acquired, and also in part by its pending acquisition of MCI. Following the reorganization UUNET WorldCom would have about 3,000 employees and WorldCom Advanced Networks about 1,700 employees.

In August 1998 UUNET became the first ISP to offer a service-level agreement (SLA) that guaranteed customers 100 percent end-to-end network availability. To qualify companies had to have at least one-year contracts for frame relay, dedicated 56K-bps, T-1, T-3, or OC-3 Internet access services. UUNET also provided other SLAs covering problems with installation and transmission delays.

In November 1998 WorldCom closed its acquisition of MCI Corporation for $40 billion. John Sidgmore became vice-chairman of MCI WorldCom and remained CEO of UUNET.

In 1999 UUNET increased the capacity of its national network backbone from OC-12 to OC-48, quadrupling network speed from 622 Mb/s to 2.4 Gb/s. With more than 60,000 miles of OC-48 in its backbone, UUNET was attempting to stay ahead of increasing demand for Internet bandwidth. The company also upgraded its UUCast service by increasing the number of supporting router ports from 500,000 to one million and by offering transmission speeds up to OC-3 (155 Mbps). In 1999 UUNET began offering a managed global VPN service called UUsecure. The service, which would be available in 14 countries by the end of 1999, included network design, construction, management, and monitoring. Managed global VPN services represented a newly developing market, but competing services were already being offered by companies such as IBM, AT&T Corp., GTE Corporation, and Equant Network Services.

A mid-1999 survey of ISPs by Data Communications magazine revealed that UUNET was the largest, serving 178 of the 500 largest domains. UUNET was also ranked the best overall ISP and received the magazine’s User’s Choice Award. In 1999 the company expanded its DSL service, which it introduced in 1998, to include more than 1,000 points of presence in 850 cities. It also reasserted its presence in the web hosting market, expanding its hosting centers in San Jose, California, and Washington, D.C., and announcing plans to build seven others, which would give UUNET a total of 15 UUhost Data Centers. The overall web hosting market was projected to grow from $4.4 billion in 1999 to $14.4 billion in 2003, according to The Yankee Group.

For 2000 UUNET planned to upgrade its U.S. network to OC-192 (10 Gbps, or gigabits per second) using Juniper Networks Inc.’s new M160 routers. The company also began targeting small businesses, offering them a turnkey set of Internet services. Dubbed Business Essentials and Business Essentials Plus, the packages included the services and equipment most often needed by small businesses to use the Internet.

Later in 2000 UUNET established a Latin American regional headquarters in São Paulo, Brazil. UUNET’s parent company, WorldCom, held a controlling interest in Embratel, Brazil’s former state-owned long-distance company. UUNET planned to take advantage of Embratel’s infrastructure and presence in Brazil to enter the ISP market there, then expand throughout Latin America. At the time UUNET operated in 114 countries and served more than 70,000 businesses around the world.

For the future it was likely that UUNET would continue to expand globally and develop new services to maintain its leadership position as an ISP. For example, the company announced it would offer a bundled group of services called Access Choice, which was aimed at remote users and offered them wired as well as wireless access. While some new services would be developed internally, others might come from acquisitions made by UUNET’s parent company, WorldCom. For example, WorldCom acquired Intermedia for $6 billion in 2000, which added Intermedia’s web hosting subsidiary Digex to its group of companies. Tapping the resources of its parent company, UUNET was also well-positioned to build out its network, to increase the number of data centers it operated for web hosting and co-location services, and generally to invest $2–$3 million a day in its infrastructure.

VBNS – The History of Domain Names

Very high-speed Backbone Network Service (vBNS)

Date: 01/01/1995

The very high-speed Backbone Network Service (vBNS) came on line in April 1995 as part of a National Science Foundation (NSF) sponsored project to provide high-speed interconnection between NSF-sponsored supercomputing centers and select access points in the United States. The network was engineered and operated by MCI Telecommunications under a cooperative agreement with the NSF.

NSF support was available to organizations that could demonstrate a need for very high speed networking capabilities and wished to connect to the vBNS or later to the Abilene Network, the high speed network operated by the University Corporation for Advanced Internet Development (UCAID, which operates Internet2).

By 1998, the vBNS had grown to connect more than 100 universities and research and engineering institutions via 12 national points of presence with DS-3 (45 Mbit/s), OC-3c (155 Mbit/s), and OC-12c (622 Mbit/s) links on an all-OC-12c backbone, a substantial engineering feat for that time. The vBNS installed one of the first ever production OC-48c (2.5 Gbit/s) IP links in February 1999, and went on to upgrade the entire backbone to OC-48c.

In June 1999 MCI WorldCom introduced vBNS+ which allowed attachments to the vBNS network by organizations that were not approved by or receiving support from NSF. The vBNS pioneered the production deployment of many novel network technologies including Asynchronous Transfer Mode (ATM), IP multicasting, quality of service, and IPv6. After the expiration of the NSF agreement, the vBNS largely transitioned to providing service to the government. Most universities and research centers migrated to the Internet2 educational backbone.

In January 2006 MCI and Verizon merged. The vBNS+ is now a service of Verizon Business.

What is the vBNS?

vBNS stands for very high performance Backbone Network Service.

The vBNS is a nationwide network that operates at a speed of 622 megabits per second (OC12) using MCI’s network of advanced switching and fiber optic transmission technologies. At speeds of 622 megabits per second, 322 copies of a 300-page book can be sent every seven seconds.
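As a back-of-the-envelope check (assuming the full OC-12 line rate of 622.08 Mbps is usable throughput and 8 bits per byte), the claim works out to a 300-page book of roughly 1.7 MB, or about 5–6 KB per page:

```python
# Back-of-the-envelope check of the "322 books every seven seconds" claim,
# assuming the full OC-12 line rate is usable throughput.
line_rate_bps = 622.08e6
seconds = 7
books = 322

bytes_transferred = line_rate_bps * seconds / 8
bytes_per_book = bytes_transferred / books
print(f"{bytes_per_book / 1e6:.2f} MB per 300-page book")  # ~1.69 MB
```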

Launched in April 1995, the vBNS is the product of a five-year cooperative agreement between MCI and the National Science Foundation (NSF) to provide a high bandwidth network for research applications.

How does the vBNS work?

The vBNS relies on advanced switching and fiber optic transmission technologies, known as Asynchronous Transfer Mode (ATM) and Synchronous Optical Network (SONET). The combination of ATM and SONET enables very high speed, high capacity voice, data, and video signals to be combined and transmitted “on demand”.

The vBNS’ speeds are achieved by connecting Internet Protocol (IP) through an ATM switching matrix, and running this combination on the SONET network.
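That layering is not free: each 53-byte ATM cell carries only 48 bytes of payload, and SONET framing reserves part of the line rate for section, line, and path overhead. A rough estimate of the bandwidth left for IP on an OC-3c link (using the standard 149.76 Mbps STS-3c payload capacity, and ignoring AAL5 framing):

```python
# Rough usable-bandwidth estimate for IP over ATM over SONET on an OC-3c
# link. 149.76 Mbps is the standard STS-3c payload capacity after SONET
# overhead; each 53-byte ATM cell carries 48 payload bytes ("cell tax").
sonet_payload_mbps = 149.76
atm_efficiency = 48 / 53

usable_mbps = sonet_payload_mbps * atm_efficiency
print(f"~{usable_mbps:.1f} Mbps available to IP")
```

So of the nominal 155.52 Mbps line rate, roughly 135 Mbps remains for IP traffic, before any per-packet AAL5 overhead.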

Who can use the vBNS?

The vBNS was designed for the scientific and research communities and originally provided high speed interconnection among NSF supercomputing centers and connection to NSF-specified Network Access Points. Today the vBNS connects two NSF supercomputing centers and research institutions that are selected under the NSF’s high performance connections program.

The vBNS is only available for meritorious research projects with high bandwidth uses and is not used for general Internet traffic.

Since MCI owns and operates the network, can MCI determine who connects to the vBNS?

No. The NSF awards grants under its high performance connections program. MCI is not involved in the program or the decision process. For more information, contact the NSF’s high performance connections program.

What is the difference between the vBNS and the Internet?

The Internet is a ubiquitous network that has become an information tool for researchers, students, teachers, business and the general public. The vBNS is a non-commercial research platform for the advancement and development of high speed applications, data routing and data switching capabilities.

What is the relationship between vBNS and Internet2?

Through the NSF’s high performance connections program, fifty-three Internet2 university members have received grants to support the acquisition of high performance network connections to the vBNS. Internet2 members will use the vBNS to enable the advanced, networked computing applications they are developing. The vBNS serves as initial interconnect for Internet2 members.

Verisign Domainfees – The History of Domain Names

Verisign Announces Increase in .com/.net Domain Name Fees

Jul 14, 2011

VeriSign, Inc. (NASDAQ: VRSN), the trusted provider of Internet infrastructure services for the networked world, today announced, effective Jan. 15, 2012, an increase in registry domain name fees for .com and .net, per its agreements with the Internet Corporation for Assigned Names and Numbers (ICANN).

Verisign announced that as of Jan. 15, 2012, the registry fee for .com domain names will increase from $7.34 to $7.85 and that the registry fee for .net domain names will increase from $4.65 to $5.11.

Continued strong global Internet usage growth, along with increasingly powerful distributed denial of service (DDoS) attacks leveled against all parts of the Internet’s critical infrastructure, have dramatically increased the demands on Internet infrastructure providers such as Verisign. In the last five years, the volume of Domain Name System (DNS) queries on Verisign’s global Internet infrastructure has more than doubled, increasing to an average daily query load of 57 billion in the first quarter of 2011. Future growth is expected to occur at an even faster pace. Verisign’s infrastructure has maintained 100 percent operational security, accuracy and stability for more than a decade due to continued innovation and investment in the infrastructure.

The VeriSign fee doesn’t include ICANN’s 18 cent fee per year. So the wholesale cost of a .com domain name will be $8.03 and a .net will be $5.29.
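The wholesale figures follow directly from adding ICANN’s per-domain-year fee to the registry fee:

```python
# Wholesale cost per domain-year = Verisign registry fee + ICANN's $0.18 fee.
ICANN_FEE = 0.18
registry_fees = {"com": 7.85, "net": 5.11}

wholesale = {tld: round(fee + ICANN_FEE, 2) for tld, fee in registry_fees.items()}
for tld, cost in wholesale.items():
    print(f".{tld}: ${cost:.2f} wholesale")
```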

VeriSign just renewed its contract with ICANN to run .net. It allows VeriSign to continue jacking up .net prices 10% a year. ICANN didn’t provide an explanation for this arbitrary increase.

VeriSign’s press release about the price increase mentions the increasing load of DNS queries the company handles.

Verisign Domaintraffic – The History of Domain Names

VeriSign Releases Domain Traffic Treasure Trove to the Public

July 6, 2011

VeriSign has released its DomainScore tool to the public.

The beta tool allows anyone to enter an unregistered domain name and get an idea of how much traffic the domain gets.

DomainScore uses NXD data — basically visit requests to non existent domain names — to calculate a score ranging from 1-10. The higher the score the higher the traffic to the domain. The score is for the last full week, the last 30 days, and last 60 days.
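VeriSign has not published how DomainScore is actually calculated. Purely as an illustration, a 1–10 score of this kind could be produced by log-scaling NXD query counts, since type-in traffic spans several orders of magnitude; the function and thresholds below are hypothetical:

```python
import math

# Hypothetical illustration only: VeriSign has not disclosed DomainScore's
# formula. One plausible shape is a log-scaled bucketing of weekly NXD
# (non-existent domain) query counts into a 1-10 score.
def nxd_score(weekly_queries: int, max_score: int = 10) -> int:
    if weekly_queries <= 0:
        return 1
    # Each additional order of magnitude of traffic adds ~2 points.
    score = 1 + 2 * math.log10(weekly_queries)
    return max(1, min(max_score, round(score)))

for q in (0, 10, 1_000, 100_000):
    print(q, nxd_score(q))
```

A log scale keeps a domain with ten type-in visits a week distinguishable from one with a hundred thousand, without letting outliers dominate the range.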

Based on my experience with similar data for pending delete domains, I think you’ll find the data most relevant for domains that haven’t just expired. This reduces the amount of dead link traffic included in the total and gives a better impression of type-in traffic.

VeriSign already offers this data to registrars, some of which in turn offer it to customers. Dynadot offers the data, but with a one-day turnaround time. Some customers have apparently run millions of domains through Dynadot’s system. However, a number of large registrars have held this data close to their chests for their own use.

The public tool provides data instantly on up to 100 domain names.

Verisign Mobileview – The History of Domain Names

VeriSign Gets Mobile with MobileView

August 17, 2011

Last week at HostingCon I caught up with Patti Kelly, director of product management at Verisign, to discuss the launch of a new product that makes it easy for small businesses to create a mobile friendly version of their web site.

VeriSign MobileView uses mobile detection and redirection to render mobile friendly web sites while keeping the same domain name that customers already use. The key is simplicity. MobileView was created for small businesses who have small marketing budgets and little technical expertise. It’s offered through VeriSign’s partners, so a registrar can offer it to its existing client base.

MobileView automatically creates a mobile friendly display of an existing web site, but users can edit it through an online console. (Users can also create a separate, mobile-only web site from scratch). If the customers’ web host supports MobileView then they won’t have to edit their existing site’s HTML. This is one key difference between VeriSign’s offering and dotMobi’s GoMobi. GoMobi users either need to use a separate URL or edit their web site’s HTML to enter a mobile browser detection script.

To gain adoption, VeriSign is offering the service free to its partners on VeriSign managed domains (e.g. .com and .net) through December 2012. Partners can mark the cost up as they please.

US – The History of Domain Names

.us created

Date: 04/01/2002

.us is the Internet country code top-level domain (ccTLD) for the United States. It was established in 1985. Registrants of .us domains must be United States citizens, residents, or organizations, or foreign entities with a presence in the United States. Most registrants in the country have registered under .com, .net, .org, and other gTLDs instead of .us, which has primarily been used by state and local governments even though any eligible entity may register a .us domain.


On February 15, 1985, .us was created as the Internet’s first ccTLD.[2] Its original administrator was Jon Postel of the Information Sciences Institute (ISI) at the University of Southern California (USC). He administered .us under a subcontract that the ISI and USC had from SRI International (which held the .us and the gTLD contract with the United States Department of Defense) and later Network Solutions (which held the .us and the gTLD contract with the National Science Foundation).

Postel and his colleague Ann Cooper codified the .us ccTLD’s policies in December 1992 as RFC 1386 and revised them the following June in RFC 1480. Registrants could only register third-level domains or higher in a geographic and organizational hierarchy. From June 1993 to June 1997, Postel delegated the vast majority of the geographic subdomains under .us to various public and private entities. .us registrants could register with the delegated manager for the specific zone they wished to register in, but not directly with the .us administrator. In July 1997, Postel instituted a “50/500 rule” that limited each delegated manager to 500 localities maximum, 50 in a given state.

On October 1, 1998, the NSF transferred oversight of the .us domain to the National Telecommunications and Information Administration (NTIA) of the United States Department of Commerce. Postel died that month, leaving his domain administration responsibilities with ISI. In December 2000, these responsibilities were transferred to Network Solutions, which had recently been acquired by Verisign.

On October 26, 2001, Neustar was awarded the contract to administer .us. On April 24, 2002, second-level domains under .us became available for registration; one of the first .us domain hacks was registered shortly afterward, on May 3, 2002. A moratorium was placed on additional delegations of locality-based namespaces, and Neustar became the default delegate for undelegated localities. Neustar’s contract was renewed by the NTIA in 2007 and most recently in 2014.

Locality namespace

The .us ccTLD is historically organized under a complex locality namespace hierarchy. Until second-level registrations were introduced in 2002, .us permitted only fourth-level domain registrations of the form “<organization-name>.<locality>.<state>.us”, with some exceptions for government entities. Registrants of locality-based domains must meet the same criteria as in the rest of the .us ccTLD. Though the locality namespace is most commonly used for government entities, it is also open to registrations by private businesses and individuals. Since 2002, second-level domain registrations have eclipsed those in the locality namespace, and many local governments have transitioned to .org and other TLDs.

Many locality-based zones of .us are delegated to various public and private entities known as delegated managers. Domains in these zones are registered through the delegated manager, rather than through Neustar. As the delegated managers are expected to receive requests directly from registrants, few if any domain name registrars serve this space, possibly contributing to its lower visibility and utilization. RFC 1480 describes the rationale for the locality namespace’s deep hierarchy and local delegation, which has proven unappealing to companies that operate nationally or globally:

One concern is that things will continue to grow dramatically, and this will require more subdivision of the domain name management. Maybe the plan for the US Domain is overkill on growth planning, but there has never been overplanning for growth yet.

As of October 31, 2013, 12,979 domains were registered under the locality namespace, of which 3,653 were managed by about 1,300 delegated managers while 9,326 were managed by Neustar as the de facto manager.  According to a 2013 survey of 539 delegated managers, 282 were state or local government agencies, while 98 were private individuals and 85 were commercial Internet service providers. Nearly 90% of the respondents offer domain registrations for free.

States and territories

A two-letter second-level domain is formally reserved for each U.S. state, federal territory, and the District of Columbia. Each domain corresponds to a USPS abbreviation. For example, ny.us is reserved for websites affiliated with New York, while va.us is for those affiliated with Virginia. Second-level domains are also reserved for five U.S. territories: as.us for American Samoa, gu.us for Guam, mp.us for the Northern Mariana Islands, pr.us for Puerto Rico, and vi.us for the U.S. Virgin Islands. However, these domains go unused because each territory has its own ccTLD per ISO 3166-1 alpha-2: respectively, .as, .gu, .mp, .pr, and .vi.

A state’s main government portal is usually found at the third-level domain state.<state>.us, which is reserved for this purpose. However, some state administrations prefer .gov domains: for example, California’s government portal is located at ca.gov, while Massachusetts’ is located at mass.gov instead of state.ma.us. Fully spelled-out names of states are also reserved under .us, so the State of Ohio’s website can be found at both ohio.gov and ohio.us, with the latter serving as a redirect. Other than for state governments, no third-level domain registrations are permitted under state or territory second-level domains.

Locality domains

A large number of third-level domains are reserved for localities within states. Each fourth-level domain registration under this namespace follows the format “<organization-name>.<locality>.<state>.us”, where <state> is a state’s two-letter postal abbreviation and <locality> is a hyphenated name that corresponds to a ZIP code or appears in a well-known atlas.
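The locality format described above can be parsed mechanically; a minimal sketch, with hypothetical example names:

```python
# Minimal parser for the locality format "<org>.<locality>.<state>.us"
# described above. The example domain is hypothetical.
def parse_locality_domain(domain: str) -> dict:
    labels = domain.lower().rstrip(".").split(".")
    if len(labels) < 4 or labels[-1] != "us" or len(labels[-2]) != 2:
        raise ValueError(f"not a .us locality domain: {domain}")
    return {
        "state": labels[-2],           # two-letter USPS abbreviation
        "locality": labels[-3],        # hyphenated locality name
        "org": ".".join(labels[:-3]),  # organization name may contain dots
    }

print(parse_locality_domain("fire-dept.santa-monica.ca.us"))
```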

UTC – The History of Domain Names

United Technologies Corporation – was registered

Date: 05/27/1987

On May 27, 1987, United Technologies Corporation registered the domain name utc.com, making it the 76th .com domain ever registered.

United Technologies Corporation (UTC) is an American multinational conglomerate headquartered in Farmington, Connecticut. It researches, develops, and manufactures high-technology products in numerous areas, including aircraft engines, HVAC, fuel cells, elevators and escalators, fire and security, building systems, and industrial products, among others. UTC is also a large military contractor, producing missile and aircraft systems. United Technologies provides high technology products to the aerospace and building systems industries throughout the world. UTC’s companies are industry leaders and include Pratt & Whitney, Carrier, Otis, International Fuel Cells, Hamilton Sundstrand, Sikorsky, and the United Technologies Research Center.

Company History:

United Technologies Corporation (UTC) is one of the largest conglomerates in the United States and a major military contractor. Although it keeps a low profile, UTC’s holdings (Carrier, Hamilton Sundstrand, Otis, Pratt & Whitney, and Sikorsky) are among the leading companies in their respective fields. Stung by global recession in the 1990s, UTC has embarked upon a never-ending quest to cut costs, and jobs, much to the displeasure of its unions.


United traces its origins to Fred Rentschler, who founded the Pratt & Whitney Aircraft Company in 1925 as one of the first companies to specialize in the manufacture of engines, or ‘power plants,’ for airframe builders. Pratt & Whitney’s primary customers were Bill Boeing and Chance Vought. Interested in securing a market for his company’s engines, Rentschler convinced Boeing and Vought to join him in forming a new company called the United Aircraft and Transportation Company. The company was formed in 1929, and thereafter Pratt & Whitney, Boeing, and Vought gave exclusive priority to each other’s business.

Early in its history, United Aircraft became so successful that it was soon able to purchase other important suppliers and competitors, establishing a strong monopoly. The group grew to include Boeing, Pratt & Whitney, and Vought, as well as Sikorsky, Stearman, and Northrop (airframes); Hamilton Aero Manufacturing and Standard Steel Prop (propellers); and Stout Airlines, in addition to Boeing’s airline companies. The men who led these individual divisions of United Aircraft exchanged stock in their original companies for stock in United. The strong public interest in the larger company drove the value of United Aircraft’s stock up in subsequent flotations. The original shareholders quickly became very wealthy; Rentschler himself had turned a meager $253 cash investment into $35.5 million by 1929.

During this time, U.S. Postmaster General Walter Folger Brown cited United Aircraft as the largest airline network and the most stable equipment supplier in the country. Thus, the company was assured of winning the postal service’s lucrative airmail contracts before it applied for them. The company’s airmail business required the manufacturing division to devote all of its resources to expansion of the airline division. Soon United Aircraft controlled nearly half of the nation’s airline and aircraft business, becoming a classic example of an aeronautic monopoly.

Breaking Up in the 1930s

In 1934, Senator Hugo Black initiated an investigation of fraud and improprieties in the aeronautics business. Bill Boeing was called to the witness stand, and subsequent interrogation exposed United Aircraft’s monopolistic business practices, eventually leading to the break-up of the huge aeronautic combine. Thereafter, Boeing sold all of his stock in his company and retired. In the reorganization of the corporation, all manufacturing interests west of the Mississippi went to Boeing Airplane in Seattle, everything east of the river went to Rentschler’s United Aircraft in Hartford, and the transport services became a third independent company under the name of United Air Lines, which was based in Chicago.

Chance Vought died in 1930, and his company, along with Pratt & Whitney, Sikorsky, Hamilton Standard, and Northrop, became part of the new United Aircraft Company. Sikorsky became a principal manufacturer of helicopters, Pratt & Whitney continued to build engines, and Vought later produced a line of airplanes including the Corsair and the Cutlass.

At the onset of World War II, business increased dramatically at United’s Pratt & Whitney division. The company produced several hundred thousand engines for airplanes built by Boeing, Lockheed, Douglas, Grumman, and Vought. Over half the engines in American planes were built by Pratt & Whitney. After the war, United Aircraft turned its attention to producing jet engines. The Pratt & Whitney subsidiary’s entrance into the jet engine industry was hindered, however, as customers were constantly demanding improvements in the company’s piston-driven Wasp engine. In the meantime, Pratt & Whitney’s competitors, General Electric and Westinghouse, were free to devote more of their capital to the research and development of jet engines. Thus, when airframe builders started looking for jet engine suppliers, Pratt & Whitney was unprepared. Even United Aircraft’s Vought division had to purchase turbojets for its Cutlass model from Westinghouse.

Postwar Jets

Recognizing the gravity of the situation, United Aircraft began an ambitious program to develop a line of advanced jet engines. When the Korean War began in 1950, Pratt & Whitney was again deluged with orders. The mobilization of forces gave the company the opportunity to reestablish its strong relationship with the Navy and conduct business with its newly created Air Force. In the early 1950s, United Aircraft experienced a conflict of interest between its airframe and engine manufacturing subsidiaries, as Vought’s alternate engine suppliers–Westinghouse and General Motors’ Allison division–were reluctant to do business with a company so closely associated with their competitor, Pratt & Whitney. On the other hand, Pratt & Whitney’s other customers, Grumman, McDonnell, and Douglas, were concerned that their airframe technology would find its way to Vought. As a result, both of United Aircraft’s divisions were suffering, and, in 1954, the board of directors voted to dissolve Vought.

In 1959, Fred Rentschler died, following a long illness, at the age of 68. Commenting on Rentschler’s role in developing engine technology to keep pace with that of the Soviet Union, a reporter in the New York Times stated: ‘This nation’s air superiority is due in no small measure to Mr. Rentschler’s vision and talents.’ Rentschler was succeeded as president of United Aircraft by W.P. Gwinn, while Jack Horner became chairman of the company’s subsidiary Pratt & Whitney.

United Aircraft continued to manufacture engines and a variety of other aircraft accessories into the 1960s. Much of its business came from Boeing, which had several Pentagon contracts and whose 700-series jets were capturing 60 percent of the commercial airliner market. When Horner retired in 1968, he was succeeded by Gwinn. While this change in leadership was of little consequence to United Aircraft, which was running smoothly, Pratt & Whitney was about to enter a period of crisis.

First, there was considerable trouble with Pratt & Whitney’s engines for Boeing’s 747 jumbo jet. The problem, traced to a design flaw, cost Pratt & Whitney millions of dollars in research and redevelopment. Moreover, it also cost millions of dollars for Boeing in service calls and lost sales. Commercial airline companies suffered lost revenue from canceled flights and reduced passenger capacity.

A Change of Vision in the 1970s

By 1971, the performance of the Pratt & Whitney division had begun to depress company profits. The directors of United Aircraft acted quickly by hiring a new president, Harry Gray, who was drafted away from Litton Industries. Harry Gray was born Harry Jack Grusin in 1919. He suffered the loss of his mother at age six and was entrusted to the care of his sister in Chicago, when his father’s business was ruined in the Depression. In 1936, he entered the University of Illinois at Urbana, earning a degree in journalism before serving in Europe with General Patton’s Third Army infantry and artillery during World War II. After the war, he returned to Urbana, where he received a Master’s degree in journalism. In Chicago, Grusin went through a succession of jobs, working as a truck salesperson and as a manager of a transport company. In 1951, he changed his name to Harry Gray, according to the court record, for ‘no reason.’ He moved to California in 1954 to work for the Litton Industries conglomerate, and he spent the next 17 years at Litton working his way up the corporate ladder.

Hindered in promotion at Litton by superiors who were not due to retire for several years, Gray accepted an offer from United Aircraft. While at Litton, Gray had been invited to tour General Electric’s facility in Evandale, Ohio. Litton was a trusted customer of General Electric, and consequently Gray was warmly welcomed. He was made privy to rather detailed information on GE’s long-range plans. A few weeks later, officials at GE read that Gray had accepted the presidency at their competitor United Aircraft. The officials protested Gray’s actions but were casually reminded that Gray had asked not to be informed of any plans of a ‘proprietary’ nature during his visit to the GE plant.

One of Gray’s first acts at United Aircraft was to order an investigation into and reengineering of the Pratt & Whitney engines for Boeing’s 747. He then sought to reduce United Aircraft’s dependence on the Pentagon and began a purchasing program in an effort to diversify the business. In 1974, United Aircraft acquired Essex International, a manufacturer of wire and cables. One year later, the company purchased a majority interest in Otis Elevator for $276 million, and, in 1978, Dynell Electronics, a builder of radar systems, was added to the company’s holdings. Next came Ambac Industries, which made diesel power systems and fuel injection devices.

United Aircraft changed its name to United Technologies (UTC) in 1975 in order to emphasize the diversification of the company’s business. Acquisitions continued, as UTC purchased Mostek, a maker of semiconductors, for $345 million in 1981. Two years later, the company acquired the Carrier Corporation, a manufacturer of air conditioning systems. In addition, UTC purchased several smaller electronics, software, and computer firms.

Gray was reportedly known to maintain a portfolio of the 50 companies he most wanted to purchase; virtually all of his targets, including the ones he later acquired, viewed Gray’s takeovers as hostile. Some of the companies which successfully resisted Gray’s takeover attempts were ESB Ray-O-Vac (the battery maker), Signal (which built Mack Trucks), and Babcock and Wilcox (a manufacturer of power generating equipment).

During the 1980s, UTC operated four principal divisions: Power Products, including aircraft engines and spare parts; Flight Systems, which manufactured helicopters, electronics, propellers, instruments and space-related products; Building Systems, encompassing the businesses of Otis and Carrier; and Industrial Products, which produced various automotive parts, electric sensors and motors, and wire and printing products. The company, through its divisions, built aircraft engines for General Dynamics’ YF-16 and F-111 bomber, Grumman’s F-14 Tomcat, and McDonnell Douglas’ F-15 Eagle. In addition, it supplied Boeing with engines for its 700-series jetliners, AWACS, B-52 bombers, and other airplanes. McDonnell Douglas and Airbus also purchased Pratt & Whitney engines.

Gray, who aimed to provide a new direction for UTC away from aerospace and defense, proved to be one of the company’s most successful presidents. He learned the business of the company’s principal product, jet engines, in a very short time; upon his appointment as president of United Aircraft, sales for the year amounted to $2 billion, and, by 1986, the company was recording $16 billion in sales. A year after he joined the company, Gray was named CEO, and soon thereafter he became chairman as well. In his 15 years at UTC, Gray completely refashioned the company. As Gray’s retirement drew near, however, UTC’s directors had a difficult time convincing him to relinquish power and name a successor. When a potential new leader appeared to be preparing for the role, Gray would allegedly subvert that person’s power. One former UTC executive commented, ‘Harry equates his corporate position with his own mortality.’

One welcome candidate to succeed Gray was Alexander Haig, who had served on UTC’s board. However, Haig left the company after being appointed secretary of state in the Reagan administration. The members of the UTC board then created a special committee to persuade Gray to name a successor. Finally, in September 1985, Robert Daniell (formerly head of the Sikorsky division) was appointed to take over Gray’s responsibilities as CEO of UTC. Nevertheless, Gray remained chairman.

Getting Rid of Gray in 1986

In light of the poor performances posted by the company’s various divisions, some industry analysts were beginning to question Gray’s leadership. His refusal to step aside threatened the stability of UTC. With the $424 million write-off of the failed Mostek unit, many analysts began talking of a general dissolution of UTC; the divisions were worth more individually than together. But these critics were silenced when Gray announced in September 1986 that he would retire and that Daniell would take his place. Even before the official departure of Gray, Daniell had moved quickly to dismantle the company’s philosophy of ‘growth through acquisition.’ Hundreds of middle-management positions were eliminated, and there was speculation that some of the less promising divisions would be sold. Daniell told the Wall Street Journal, ‘This is a new era for United Technologies. Harry Gray was brought here to grow the company. But now the company is built, the blocks are in place and growth will be a secondary objective.’ Daniell then had to prove that neither Gray’s overstayed welcome nor his departure would affect the company adversely.

Daniell also had more pressing challenges. The U.S.S.R.’s collapse in the late 1980s revealed that it had been a much weaker military foe than previously believed. As a result, the end of the Cold War brought Congressional and public pressure to cut domestic defense budgets. While some other leading defense companies moved to carve out niches in the shrinking market, UTC worked to strengthen its interests in more commercial industries.

UTC’s transition was not smooth, and Pratt & Whitney suffered the most. While in 1990 Pratt & Whitney had brought in one-third of UTC’s sales and an impressive two-thirds of operating profit, the subsidiary’s losses from 1991 to 1993 reached $1.3 billion. Pratt & Whitney was hampered not only by defense cuts, but also by the serious downturn in the commercial airline industry, intense global competition, and a worldwide recession. Moreover, saturation of the commercial real estate market during this time caused declines in demand for elevators and air conditioners, products manufactured by UTC’s Otis and Carrier subsidiaries. These companies also recorded losses for 1991. That year, UTC also faced six charges of illegal dumping against its Sikorsky Aircraft division. In the largest penalty levied under the Resource Conservation & Recovery Act up to that time, UTC agreed to pay $3 million in damages.

In 1992, Daniell brought George David, who had been instrumental in the revival of both the Otis and Carrier units, on board as UTC president. David, in turn, tapped Karl Krapek, who was called a ‘veteran turnaround artist’ by Financial World, to lead the beleaguered Pratt & Whitney subsidiary. Krapek quickly reduced employment at the unit from a high of 50,000 to 40,000 by the beginning of 1993. The divisional reformation also focused on manufacturing, with the goals of shortening lead times, reducing capacity, and expediting processes. Overall employment at UTC was cut by 16,500 from 1991 to 1993. By the end of 1993, Daniell was able to report positive results; UTC made $487 million on sales of $20.74 billion. In April 1994, after leading the corporation for nearly a decade, Daniell appointed David as the company’s CEO, while retaining his own position as UTC’s chairman. Otis’s annual revenues remained in the $4.5 billion range in the early to mid-1990s, while Carrier’s rose from $4.3 billion in 1992 to nearly $5 billion in 1994. During the same period, automotive sales rose from $2.4 billion to $2.7 billion. Pratt & Whitney saw commercial engine revenues fall by $800 million, to $2.9 billion. Military and space engine sales fell from $2.5 billion to $1.8 billion, while general aviation sales fell by about ten percent to $1.1 billion. During this time, the company paid $180 million for environmental remediation at more than 300 sites. Although cost-cutting improved profits by the mid-1990s, UTC continued cutting jobs.

UTC entered more than a dozen joint ventures overseas while the aerospace industry suffered a recession. The company derived a little over half of its revenues from abroad, and enjoyed strong growth in Asia in the mid-1990s, at least until the Asian financial crisis of 1997. In 1996, UTC had revenues of $1 billion in the People’s Republic of China and Hong Kong. Some technical developments seemed promising. Pratt & Whitney unveiled its most powerful engine ever in 1996. The PW4090 was rated at 90,000 pounds of thrust. (Three years later, the company tested the PW4098, rated at 98,000 pounds.) The new Odyssey system at Otis allowed elevator cars to move both vertically and horizontally.

In January 1997, Sikorsky and Boeing won a $1.7 billion contract to continue developing their RAH-66 Comanche armed reconnaissance helicopter. Sikorsky was able to maintain production levels of its Black Hawk helicopter. Pratt & Whitney engines were chosen for two new military aircraft programs, the F-22 fighter and the C-17 freighter. On the civilian side, Otis Elevator cut 2,000 jobs as sales fell in the wake of the Asian financial crisis. It also closed its Paris headquarters and most of its engineering centers.

UTC was able to save money on commodities by having its thousands of vendors bid online. These types of products accounted for about a quarter of the $14 billion the company spent on outside goods and services, according to the Financial Times. International revenues accounted for about 56 percent of UTC’s total in the late 1990s, reaching 60 percent in 1999. Profits were rising in all divisions except UT Automotive.

Reconfiguring for the Future

UTC bought Sundstrand Corp. for about $4 billion in 1999, merging it with Hamilton Standard. Sundstrand derived 60 percent of its $2 billion in annual revenues from aerospace products. On the recommendation of the Goldman, Sachs & Co. investment bank, UTC decided to sell its automotive parts unit in the light of growing price pressure from automakers. Lear Corporation bought the business for $2.3 billion in May 1999. Otis Elevator entered a joint venture with LG Industries in South Korea while Carrier bought out North American competitor International Comfort Products and allied with the Toshiba Corporation.

Layoffs continued at Sikorsky, Carrier, Pratt & Whitney, Hamilton Sundstrand, and Otis–part of a new wave of company-wide restructuring designed to reduce UTC’s total workforce by ten percent, or 15,000 jobs. However, in February 2000, a federal judge barred Pratt & Whitney from moving engine repair work out of Connecticut, saying this violated an existing union agreement. Plans to close Hamilton Sundstrand’s Connecticut electronics facility also prompted complaints this was aimed at taking away about 400 jobs from the machinists’ union.

The company bought Cade Industries, a Michigan aerospace supplier, in February 2000. The next month, UTC announced a new engine overhaul joint venture with KLM, the Dutch airline, which already had a relationship with Hamilton Sundstrand.

UUCP – The History of Domain Names

UUCP and Usenet

Date: 01/01/1979

UUCP, together with cu or tip, let users communicate with both UNIX and non-UNIX systems, read news, post their own articles, and send mail to other Usenet members. In 1979, two students at Duke University, Tom Truscott and Jim Ellis, came up with the idea of using simple Bourne shell scripts to transfer news and messages over a serial-line UUCP connection with the nearby University of North Carolina at Chapel Hill. Following the public release of the software, the mesh of UUCP hosts forwarding Usenet news expanded rapidly. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly because of the lower costs involved, the ability to use existing leased lines, X.25 links, or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and BITNET (commercial organizations, for example, could participate and provide bug fixes). All connections were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984. Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and newsgroup messages throughout its Italian nodes (about 100 at the time), owned both by private individuals and small companies. Sublink Network was possibly one of the first examples of Internet technology advancing through popular diffusion.


Usenet is a worldwide distributed discussion system available on computers. It was developed from the general-purpose UUCP dial-up network architecture. Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980. Users read and post messages (called articles or posts, and collectively termed news) to one or more categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects and is the precursor to Internet forums that are widely used today. Usenet can be superficially regarded as a hybrid between email and web forums. Discussions are threaded, as with web forums and BBSs, though posts are stored on the server sequentially. The name comes from the term “users network”.

One notable difference between a BBS or web forum and Usenet is the absence of a central server and dedicated administrator. Usenet is distributed among a large, constantly changing conglomeration of servers that store and forward messages to one another in so-called news feeds. Individual users may read messages from and post messages to a local server operated by a commercial usenet provider, their Internet service provider, university, employer, or their own server.
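The decentralized store-and-forward behavior described above can be sketched in a few lines of Python. This is a toy illustration, not actual news-server code; the class, peers, and Message-IDs are hypothetical:

```python
# Minimal sketch of Usenet-style store-and-forward "flooding":
# each server keeps every article it has seen (keyed by Message-ID)
# and offers new articles to its peers, which accept only unseen IDs.

class NewsServer:
    def __init__(self, name):
        self.name = name
        self.articles = {}   # Message-ID -> article body
        self.peers = []      # servers this one feeds

    def post(self, msg_id, body):
        """Accept an article locally, then propagate it to peers."""
        if msg_id in self.articles:
            return  # already seen: the flood stops here
        self.articles[msg_id] = body
        for peer in self.peers:
            peer.post(msg_id, body)

# Three servers in a ring: a -> b -> c -> a
a, b, c = NewsServer("a"), NewsServer("b"), NewsServer("c")
a.peers, b.peers, c.peers = [b], [c], [a]

a.post("<1@a.example>", "hello, net")
# The article reaches every server exactly once, with no central host.
assert all("<1@a.example>" in s.articles for s in (a, b, c))
```

Because each server deduplicates by Message-ID, the same article can arrive over several news feeds without looping forever, which is what lets the network run with no central server.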


UUCP is an abbreviation of Unix-to-Unix Copy. The term generally refers to a suite of computer programs and protocols allowing remote execution of commands and transfer of files, email and netnews between computers.

A command named uucp is one of the programs in the suite; it provides a user interface for requesting file copy operations. The UUCP suite also includes uux (user interface for remote command execution), uucico (the communication program that performs the file transfers), uustat (reports statistics on recent activity), uuxqt (execute commands sent from remote machines), and uuname (reports the UUCP name of the local system). Some versions of the suite include uuencode/uudecode (convert 8-bit binary files to 7-bit text format and vice versa).

Although UUCP was originally developed on Unix in the 1970s and 1980s, and is most closely associated with Unix-like systems, UUCP implementations exist for several non-Unix-like operating systems, including Microsoft’s MS-DOS, IBM’s OS/2, Digital’s VAX/VMS, Commodore’s AmigaOS, classic Mac OS, and even CP/M.

Twitter – The History of Domain Names

Twitter – Microblogging

Date: 01/01/2006

Twitter is an online news and social networking service where users post and read short 140-character messages called “tweets”. Registered users can post and read tweets, but those who are unregistered can only read them. Users access Twitter through the website interface, SMS or mobile device app. Twitter Inc. is based in San Francisco and has more than 25 offices around the world.

Twitter was created in March 2006 by Jack Dorsey, Noah Glass, Biz Stone, and Evan Williams and launched in July, whereby the service rapidly gained worldwide popularity. In 2012, more than 100 million users posted 340 million tweets a day, and the service handled an average of 1.6 billion search queries per day. In 2013, it was one of the ten most-visited websites and has been described as “the SMS of the Internet”. As of March 2016, Twitter had more than 310 million monthly active users. On the day of the 2016 U.S. presidential election, Twitter proved to be the largest source of breaking news, with 40 million tweets sent by 10 p.m. that day.

Creation and initial reaction

Twitter’s origins lie in a “daylong brainstorming session” held by board members of the podcasting company Odeo. Jack Dorsey, then an undergraduate student at New York University, introduced the idea of an individual using an SMS service to communicate with a small group. The original project code name for the service was twttr, an idea that Williams later ascribed to Noah Glass, inspired by Flickr and the five-character length of American SMS short codes. The decision was also partly because the desired domain name was already in use, and it was six months after the launch of twttr that the crew purchased the domain and changed the name of the service to Twitter. The developers initially considered “10958” as a short code, but later changed it to “40404” for “ease of use and memorability”. Work on the project started on March 21, 2006, when Dorsey published the first Twitter message at 9:50 PM Pacific Standard Time (PST): “just setting up my twttr”. Dorsey has explained the origin of the “Twitter” title:

…we came across the word ‘twitter’, and it was just perfect. The definition was ‘a short burst of inconsequential information,’ and ‘chirps from birds’. And that’s exactly what the product was.

The first Twitter prototype, developed by Dorsey and contractor Florian Weber, was used as an internal service for Odeo employees, and the full version was introduced publicly on July 15, 2006. In October 2006, Biz Stone, Evan Williams, Dorsey, and other members of Odeo formed Obvious Corporation and acquired Odeo, together with its assets, from the investors and shareholders. Williams fired Glass, who was silent about his part in Twitter’s startup until 2011. Twitter spun off into its own company in April 2007. Williams provided insight into the ambiguity that defined this early period in a 2013 interview:

With Twitter, it wasn’t clear what it was. They called it a social network, they called it microblogging, but it was hard to define, because it didn’t replace anything. There was this path of discovery with something like that, where over time you figure out what it is. Twitter actually changed from what we thought it was in the beginning, which we described as status updates and a social utility. It is that, in part, but the insight we eventually came to was Twitter was really more of an information network than it is a social network.

The tipping point for Twitter’s popularity was the 2007 South by Southwest Interactive (SXSWi) conference. During the event, Twitter usage increased from 20,000 tweets per day to 60,000. “The Twitter people cleverly placed two 60-inch plasma screens in the conference hallways, exclusively streaming Twitter messages,” remarked Newsweek’s Steven Levy. “Hundreds of conference-goers kept tabs on each other via constant twitters. Panelists and speakers mentioned the service, and the bloggers in attendance touted it.”

Reaction at the conference was highly positive. Blogger Scott Beale said that Twitter was “absolutely ruling” SXSWi. Social software researcher danah boyd said Twitter was “owning” the conference. Twitter staff received the festival’s Web Award prize with the remark “we’d like to thank you in 140 characters or less. And we just did!” The first unassisted off-Earth Twitter message was posted from the International Space Station by NASA astronaut T. J. Creamer on January 22, 2010. By late November 2010, an average of a dozen updates per day were posted on the astronauts’ communal account, @NASA_Astronauts. NASA has also hosted over 25 “tweetups”, events that provide guests with VIP access to NASA facilities and speakers with the goal of leveraging participants’ social networks to further the outreach goals of NASA. In August 2010, the company appointed Adam Bain from News Corp.’s Fox Audience Network as president of revenue.


The company experienced rapid initial growth. It had 400,000 tweets posted per quarter in 2007. This grew to 100 million tweets posted per quarter in 2008. In February 2010, Twitter users were sending 50 million tweets per day. By March 2010, the company recorded over 70,000 registered applications. As of June 2010, about 65 million tweets were posted each day, equaling about 750 tweets sent each second, according to Twitter. As of March 2011, that was about 140 million tweets posted daily. Twitter moved up to become the third-highest-ranking social networking site in January 2009, from its previous rank of twenty-second.

Twitter’s usage spikes during prominent events. For example, a record was set during the 2010 FIFA World Cup when fans wrote 2,940 tweets per second in the thirty-second period after Japan scored against Cameroon on June 14. The record was broken again when 3,085 tweets per second were posted after the Los Angeles Lakers’ victory in the 2010 NBA Finals on June 17, and then again at the close of Japan’s victory over Denmark in the World Cup when users published 3,283 tweets per second. The record was set again during the 2011 FIFA Women’s World Cup Final between Japan and the United States, when 7,196 tweets per second were published. When American singer Michael Jackson died on June 25, 2009, Twitter servers crashed after users were updating their status to include the words “Michael Jackson” at a rate of 100,000 tweets per hour. The current record as of August 3, 2013 was set in Japan, with 143,199 tweets per second during a television screening of the movie Castle in the Sky (beating the previous record of 33,388, also set by Japan for the television screening of the same movie).

Twitter acquired application developer Atebits on April 11, 2010. Atebits had developed the Apple Design Award-winning Twitter client Tweetie for the Mac and iPhone. The application, now called “Twitter” and distributed free of charge, is the official Twitter client for the iPhone, iPad and Mac.

From September through October 2010, the company began rolling out “New Twitter”, an entirely revamped edition of its website. Changes included the ability to see pictures and videos without leaving Twitter itself by clicking on individual tweets containing links to images and clips from a variety of supported websites, including YouTube and Flickr, and a complete overhaul of the interface, which shifted links such as ‘@mentions’ and ‘Retweets’ above the Twitter stream, while ‘Messages’ and ‘Log Out’ became accessible via a black bar at the very top of the page. As of November 1, 2010, the company confirmed that the “New Twitter experience” had been rolled out to all users.


On April 5, 2011, Twitter tested a new homepage and phased out the “Old Twitter”. However, a glitch came about after the page was launched, so the previous “retro” homepage remained in use until the issues were resolved; the new homepage was reintroduced on April 20. On December 8, 2011, Twitter overhauled its website once more to feature the “Fly” design, which the service said is easier for new users to follow and promotes advertising. In addition to the Home tab, the Connect and Discover tabs were introduced, along with a redesigned profile and timeline of Tweets. The site’s layout has been compared to that of Facebook. On February 21, 2012, it was announced that Twitter and Yandex had agreed to a partnership. Yandex, a Russian search engine, found value in the partnership due to Twitter’s real-time news feeds. Twitter’s director of business development explained that it is important to have Twitter content where Twitter users go. On March 21, 2012, Twitter celebrated its sixth birthday while also announcing that it had 140 million users and saw 340 million tweets per day. The number of users was up 40% from September 2011, when the figure was said to have been 100 million.

In April 2012, Twitter announced that it was opening an office in Detroit, with the aim of working with automotive brands and advertising agencies. Twitter also expanded its office in Dublin. On June 5, 2012, a modified logo was unveiled through the company blog, removing the text to showcase the slightly redesigned bird as the sole symbol of Twitter. On October 5, 2012, Twitter acquired a video clip company called Vine that launched in January 2013. Twitter released Vine as a standalone app that allows users to create and share six-second looping video clips on January 24, 2013. Vine videos shared on Twitter are visible directly in users’ Twitter feeds. Due to an influx of inappropriate content, it is now rated 17+ in Apple’s app store. On December 18, 2012, Twitter announced it had surpassed 200 million monthly active users. Twitter hit 100 million monthly active users in September 2011.

On April 18, 2013, Twitter launched a music app called Twitter Music for the iPhone. On August 28, 2013, Twitter acquired Trendrr, followed by the acquisition of MoPub on September 9, 2013. As of September 2013, the company’s data showed that 200 million users sent over 400 million tweets daily, with nearly 60% of tweets sent from mobile devices. On June 4, 2014, Twitter announced that it would acquire Namo Media, a technology firm specializing in “native advertising” for mobile devices. On June 19, 2014, Twitter announced that it had reached an undisclosed deal to buy SnappyTV, a service that helps edit and share video from television broadcasts. The company was helping broadcasters and rights holders to share video content both organically across social media and via Twitter’s Amplify program. In July 2014, Twitter announced that it intended to buy a young company called CardSpring for an undisclosed sum. CardSpring enables retailers to offer online shoppers coupons that they can automatically sync to their credit cards in order to receive discounts when they shop in physical stores. On July 31, 2014, Twitter announced that it had acquired a small password-security startup called Mitro. On October 29, 2014, Twitter announced a new partnership with IBM, intended to help businesses use Twitter data to understand their customers, businesses and other trends.

2015 and slow growth

On February 11, 2015, Twitter announced that it had acquired Niche, an ad network for social media stars, founded by Rob Fishman and Darren Lachtman. The acquisition price was reportedly $50 million. On March 13, 2015, Twitter announced its acquisition of Periscope, an app which allows live streaming of video. In April 2015, the desktop homepage changed. Twitter also announced that it had acquired TellApart, a commerce ads tech firm, in a $532 million stock deal. Later in the year it became apparent that growth had slowed, according to Fortune, Business Insider, Marketing Land and other news websites, including Quartz (in 2016).

Initial public offering (IPO)

On September 12, 2013, Twitter announced that it had filed papers with the U.S. Securities and Exchange Commission ahead of a planned stock market listing. It revealed its prospectus in an 800-page filing. Twitter planned to raise US$1 billion as the basis for its stock market debut. The IPO filing stated that “200,000,000+ monthly active users” accessed Twitter and that “500,000,000+ tweets per day” were posted. In an October 15, 2013 amendment to its SEC S-1 filing, Twitter declared that it would list on the New York Stock Exchange (NYSE), quashing speculation that its stock would trade on the NASDAQ exchange. This decision was widely viewed as a reaction to the botched initial public offering of Facebook. On November 6, 2013, 70 million shares were priced at US$26 and issued by lead underwriter Goldman Sachs.

On November 7, 2013, the first day of trading on the NYSE, Twitter shares opened at US$26.00 and closed at US$44.90, giving the company a valuation of around US$31 billion. This was $18.90 above the initial offering price, and Twitter ended the day with a market capitalization of $24.46 billion. The November 7 paperwork shows that, among the founders, Williams received a sum of US$2.56 billion and Dorsey received US$1.05 billion, while Costolo’s payment was US$345 million. As of December 13, 2013, Twitter had “a market capitalization of $32.76 billion”. On February 5, 2014, Twitter published its first results as a public company, showing a net loss of $511 million in the fourth quarter of 2013. On January 5, 2016, CEO Jack Dorsey commented on a report that Twitter planned to expand its character limit to 10,000 (private messages already had the longer limit as of July), requiring users to click to see anything beyond 140 characters. He said that while Twitter would “never lose that feeling” of speed, users could do more with the text.

In 2016, its shares rose 20% after a report that it had received takeover approaches. Potential buyers included Alphabet (the parent company of Google), Microsoft, Verizon, and the Walt Disney Company. Twitter’s board of directors was open to a deal, which could have come by the end of 2016. In November 2016, however, Twitter’s stock price declined, and the potential acquirers passed on a deal.



Tweets are publicly visible by default, but senders can restrict message delivery to just their followers. Users can tweet via the Twitter website, compatible external applications (such as smartphone apps), or by Short Message Service (SMS) in certain countries. Users may subscribe to other users’ tweets—this is known as “following”, and subscribers are known as “followers” or “tweeps”, a portmanteau of Twitter and peeps. Individual tweets can be forwarded by other users to their own feed, a process known as a “retweet”. Users can also “like” (formerly “favorite”) individual tweets. Twitter allows users to update their profile via their mobile phone, either by text messaging or by apps released for certain smartphones and tablets. Twitter has been compared to a web-based Internet Relay Chat (IRC) client. In a 2009 Time essay, technology author Steven Johnson described the basic mechanics of Twitter as “remarkably simple”.


San Antonio-based market-research firm Pear Analytics analyzed 2,000 tweets (originating from the United States and in English) over a two-week period in August 2009 from 11:00 am to 5:00 pm (CST) and separated them into six categories:

  • Pointless babble – 40%
  • Conversational – 38%
  • Pass-along value – 9%
  • Self-promotion – 6%
  • Spam – 4%
  • News – 4%

Despite Jack Dorsey’s own open contention that a message on Twitter is “a short burst of inconsequential information”, social networking researcher danah boyd responded to the Pear Analytics survey by arguing that what the Pear researchers labelled “pointless babble” is better characterized as “social grooming” and/or “peripheral awareness” (which she justifies as persons “want[ing] to know what the people around them are thinking and doing and feeling, even when co-presence isn’t viable”). Similarly, a survey of Twitter users found that a more specific social role of passing along messages that include a hyperlink is an expectation of reciprocal linking by followers.


Users can group posts together by topic or type by use of hashtags – words or phrases prefixed with a “#” sign. Similarly, the “@” sign followed by a username is used for mentioning or replying to other users. To repost a message from another Twitter user and share it with one’s own followers, a user can click the retweet button within the Tweet.
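The hashtag and mention conventions can be illustrated with a simplified extraction sketch. Real Twitter parsing has many more rules (Unicode ranges, punctuation, position in the tweet), and the function name here is a hypothetical for illustration:

```python
import re

# Simplified patterns: a "#" or "@" followed by word characters.
HASHTAG = re.compile(r"#(\w+)")
MENTION = re.compile(r"@(\w+)")

def tags_and_mentions(text):
    """Return the hashtags and @-mentions found in a tweet's text."""
    return HASHTAG.findall(text), MENTION.findall(text)

tags, users = tags_and_mentions("Great panel @jack! #SXSW #twitter")
assert tags == ["SXSW", "twitter"]
assert users == ["jack"]
```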

In late 2009, the “Twitter Lists” feature was added, making it possible for users to follow ad hoc lists of authors instead of individual authors.

Through SMS, users can communicate with Twitter through five gateway numbers: short codes for the United States, Canada, India, New Zealand, and an Isle of Man-based number for international use. There is also a short code in the United Kingdom which is only accessible to those on the Vodafone, O2 and Orange networks. In India, since Twitter only supports tweets from Bharti Airtel, an alternative platform called smsTweet was set up by a user to work on all networks. A similar platform called GladlyCast exists for mobile phone users in Singapore and Malaysia.

Tweets were originally set to a constrictive 140-character limit for compatibility with SMS messaging, which introduced the shorthand notation and slang commonly used in SMS messages. The 140-character limit also increased the usage of URL shortening services, and of content-hosting services such as Twitpic and NotePub, to accommodate multimedia content and text longer than 140 characters. Since June 2011, Twitter has used its own domain for automatic shortening of all URLs posted on its website, making other link shorteners superfluous for staying within the 140-character limit.

On May 24, 2016, Twitter announced that media such as photos and videos, and the person’s handle, would not count against the 140 character limit. A user photo post used to count for about 24 characters. Attachments and links will also no longer be part of the character limit.
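One way to picture this accounting is a sketch in which every URL counts as a fixed-width shortened-link placeholder rather than its full length. The 23-character constant and the function are assumptions for illustration, not Twitter's actual implementation:

```python
import re

SHORT_LINK_LENGTH = 23            # assumed fixed width of a shortened link
URL = re.compile(r"https?://\S+")

def effective_length(text, limit=140):
    """Count a tweet's length with each URL normalized to a fixed width."""
    counted = URL.sub("x" * SHORT_LINK_LENGTH, text)
    return len(counted), len(counted) <= limit

n, ok = effective_length("read this: https://example.com/a/very/long/path/that/goes/on")
assert n == len("read this: ") + SHORT_LINK_LENGTH  # URL counts as 23, not its real length
assert ok
```

Under the May 2016 change, media attachments and the recipient's handle would simply be excluded from the counted text before a check like this.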

On July 17, Twitter launched a new way for advertisers to target users who have tweeted with a certain emoji or engaged with tweets containing a certain emoji.

Trending topics

A word, phrase or topic that is mentioned at a greater rate than others is said to be a “trending topic”. Trending topics become popular either through a concerted effort by users or because of an event that prompts people to talk about a specific topic. These topics help Twitter and its users to understand what is happening in the world and what people’s opinions are about it. Trending topics are sometimes the result of concerted efforts and manipulation by preteen and teenaged fans of certain celebrities or cultural phenomena, particularly musicians like Lady Gaga (known as Little Monsters), Justin Bieber (Beliebers), Rihanna (Rih Navy) and One Direction (Directioners), and novel series Twilight (Twihards) and Harry Potter (Potterheads). Twitter has altered the trend algorithm in the past to prevent manipulation of this type, with limited success.
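A toy version of the idea of being "mentioned at a greater rate than others" compares a term's share of mentions in the current window against a smoothed baseline share. The thresholds and the function are arbitrary assumptions, not Twitter's actual trend algorithm:

```python
from collections import Counter

def trending(current, baseline, ratio=3.0, min_count=3):
    """Return terms whose current share of mentions far exceeds their baseline share."""
    cur, base = Counter(current), Counter(baseline)
    total_cur, total_base = sum(cur.values()), sum(base.values())
    out = []
    for term, n in cur.items():
        if n < min_count:
            continue                      # ignore terms with too few mentions
        cur_share = n / total_cur
        base_share = (base[term] + 1) / (total_base + 1)  # add-one smoothing
        if cur_share / base_share >= ratio:
            out.append(term)
    return out

base = ["news", "sports", "weather"] * 10   # typical background chatter
now = ["worldcup"] * 5 + ["news", "sports"]  # a sudden burst of one term
assert trending(now, base) == ["worldcup"]
```

Manipulation of the kind described above works by coordinating enough mentions in a short window to beat whatever thresholds the real algorithm uses, which is why Twitter has had to keep adjusting it.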

The Twitter web interface displays a list of trending topics on a sidebar on the home page, along with sponsored content. Twitter often censors trending hashtags that are claimed to be abusive or offensive. Twitter censored the #Thatsafrican and #thingsdarkiessay hashtags after users complained that they found the hashtags offensive. There are allegations that Twitter removed #NaMOinHyd from the trending list and added an Indian National Congress-sponsored hashtag.

TxMod2 – The History of Domain Names

TX-2 Mod Top

Developed in the late 1950s, the TX-2 had a 64K-word core memory

The TX-2 was a transistor-based computer using the then-huge amount of 64K 36-bit words of core memory. It became operational in 1958. Because of its then-powerful capabilities, Ivan Sutherland’s revolutionary Sketchpad program was developed for, and ran on, the TX-2.

The MIT Lincoln Laboratory TX-2 computer was the successor to the Lincoln TX-0 and was known for its role in advancing both artificial intelligence and human-computer interaction. Wesley A. Clark was the chief architect of the TX-2.


One of the most influential groups in shaping interactive computing as we know it was based at MIT’s Lincoln Lab between about 1953 and 1969. The focal point of the group was the TX-2 computer (and its predecessor, the TX-0), designed by Wesley Clark.

As a direct beneficiary of this work (Ron Baecker, who did his PhD there, was one of my main mentors during my graduate student days), I have always held it in high esteem. I have also always felt that it has not gotten the attention that it deserved. This came to a head when I had the privilege to meet and work with Bert Sutherland in 2001. I knew of Bert, but we had never met, and I had never seen the film of the graphical programming system that he had done for his 1966 PhD thesis. Exposure to it spurred me into action, with the result that at the 2005 SIGCHI conference, I organized a panel which highlighted the work of the group and its relevance to computing and research today.

My purpose with this web page is to provide a portal to the archival material relating to the work of this group, including video demos, links to articles and theses, and a video and article documenting the SIGCHI panel that I organized.

My hope is to augment this material in the future with additional material, such as interviews with some of the key players in the lab – including some who were unable to attend the panel. All of this takes time, so this page is a work in progress; return from time to time to check what is new.

In the meantime, please cut me some slack about bad web design, etc. Of course, in the spirit of open source, if you want to redesign it and send me another version, that would be more than welcome. For the time being, I hope that what I have gotten up is of interest and value. And, of course, I welcome feedback, suggestions, etc., and will do my best to respond promptly.

Tymnet – The History of Domain Names

Tymnet packet-switched network

Date: 01/01/1971

Tymnet was an international data communications network headquartered in Cupertino, California that used virtual call packet switched technology and X.25, SNA/SDLC, ASCII and BSC interfaces to connect host computers (servers) at thousands of large companies, educational institutions, and government agencies. Users typically connected via dial-up connections or dedicated asynchronous connections. The business consisted of a large public network that supported dial-up users and a private network business that allowed government agencies and large companies (mostly banks and airlines) to build their own dedicated networks. The private networks were often connected via gateways to the public network to reach locations not on the private network. Tymnet was also connected to dozens of other public networks in the United States and internationally via X.25/X.75 gateways.

As the Internet grew and became almost universally accessible in the late 1990s, demand for services such as Tymnet migrated to Internet-style connections, though the network still had some value in the Third World and for specific legacy roles. The value of these links continued to decrease, however, and Tymnet was shut down in 2004.


Tymnet offered local dial-up modem access in most cities in the United States and to a limited degree in Canada, which preferred its own DATAPAC service.

Users would dial into Tymnet and then interact with a simple command-line interface to establish a connection with a remote system. Once connected, data was passed to and from the user as if connected directly to a modem on the distant system. For various technical reasons, the connection was not entirely “invisible”, and sometimes required the user to enter arcane commands to make 8-bit clean connections work properly for file transfer.

Tymnet was extensively used by large companies to provide dial-up services for their employees who were “on the road”, as well as a gateway for users to connect to large online services such as CompuServe or The Source.

Organization and functionality

In its original implementation, the network supervisor contained most of the routing intelligence in the network. Unlike the TCP/IP protocols underlying the Internet, Tymnet used a circuit-switched layout in which the supervisor was aware of every possible end-point. In the original incarnation, users connected to nodes built using Varian minicomputers, then entered commands that were passed to the supervisor, which ran on an XDS 940 host.

Circuits were character oriented and the network was oriented towards interactive character-by-character full-duplex communications circuits. The nodes handled character translation between various character sets, which were numerous at that time. This did have the side effect of making data transfers quite difficult, as bytes from the file would be invisibly “translated” without specific intervention on the part of the user.
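
This side effect is easy to demonstrate. The sketch below is a hypothetical illustration (not Tymnet code), using Python's cp500 EBCDIC codec to stand in for a node's translation table: interactive text survives the round trip, but arbitrary binary bytes do not.

```python
# Hypothetical sketch of the node-level character translation described
# above; Python's cp500 codec stands in for a node's EBCDIC/ASCII table.

# Interactive text survives the translation round trip unharmed:
msg = b"HELLO"
as_ebcdic = msg.decode("ascii").encode("cp500")   # ASCII -> EBCDIC
back = as_ebcdic.decode("cp500").encode("ascii")  # EBCDIC -> ASCII
assert back == msg

# But bytes from a binary file are not text; some of them translate to
# characters with no ASCII equivalent, so a naive transfer fails (or,
# worse, silently changes byte values):
binary = bytes([0x04, 0x9F, 0xFF])
try:
    binary.decode("cp500").encode("ascii")
    survived = True
except UnicodeEncodeError:
    survived = False
assert not survived
```

This is essentially why users had to force 8-bit clean connections before file transfers would work properly.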

Tymnet later developed its own custom hardware, the Tymnet Engine, which contained both nodes and a supervisor running on one of those nodes. As the network grew, the supervisor was in danger of being overloaded by the sheer number of nodes, since controlling the network consumed a great part of the supervisor’s capacity. Tymnet II was developed in response, off-loading some of the workload from the supervisor and providing greater flexibility by putting more intelligence into the node code. A Tymnet II node would set up its own “permuter tables”, eliminating the need for the supervisor to keep copies of them, and had greater flexibility in handling its inter-node links. Data transfers were also possible via “auxiliary circuits”.

Tymnet, Inc. spun off

In about 1979, Tymnet, Inc. was spun off from Tymshare, Inc. as a wholly owned subsidiary to continue administration and operation of the network, running it as a public common carrier within the United States. The network continued to grow, and customers who owned their own host computers became interested in connecting them to the network. Users could connect their host computers and terminals to the network, use the computers from remote sites, or sell time on their computers to other users of the network, with Tymnet charging them for the use of the network.

Electronic Data Interchange (EDI)

Tymshare EDI, MD Payment Systems Company, MCI EDI Department

Tymshare was one of the pioneers in the EDI field. Under McDonnell Douglas, the Payment Systems Company continued that legacy and maintained its own network monitoring and support group, using Tandem computers connected to a high-speed data link with Tymnet as the connection and translation medium. Tymshare developed a bisync modem interface (HSA), a translation module to convert between EBCDIC and ASCII (BBXS), and a highly customized X.25 module (XCOM) to interface with the Tandem computers.

Apparently there was no TCP/IP equivalent for this service, so to continue it after the shutdown of Tymnet, an ingenious solution was selected: a special version of the Tymnet Engine node code, which allows nodes and interfaces to communicate with one another and the rest of the network, was created. Instead of relying on the “supervisor” to validate calls, a table of permitted connections was defined per customer, allowing an incoming call to be made from the HSA interface to the BBXS interface to the XCOM interface and on to the Tandem computer. In effect, they created a “Tymnet Island”: a single Tymnet node that accepted calls for a predetermined list of clients, with no supervisor needed.

These islands of Tymnet have outlived not only the parent company, Tymshare, and the operations company, Tymnet, but also the Tymnet network itself. As of 2008, these Tymnet Island nodes were still running and doing their jobs.
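
The supervisor-free validation can be sketched as follows. This is a hypothetical illustration in Python: the interface names HSA, BBXS, and XCOM come from the text above, but the table layout and function names are invented.

```python
# Hypothetical sketch of "Tymnet Island" call validation: instead of
# asking a supervisor, the node consults a static per-customer table
# of permitted (source, destination) interface pairs.
PERMITTED_CONNECTIONS = {
    ("HSA", "BBXS"),     # bisync modem interface -> EBCDIC/ASCII translator
    ("BBXS", "XCOM"),    # translator -> customized X.25 module
    ("XCOM", "TANDEM"),  # X.25 module -> Tandem host
}

def accept_call(src: str, dst: str) -> bool:
    """Accept an incoming call only if the pair appears in the table."""
    return (src, dst) in PERMITTED_CONNECTIONS

# A call may traverse HSA -> BBXS -> XCOM -> Tandem, hop by hop:
assert accept_call("HSA", "BBXS")
# ...but any pairing not listed is simply refused:
assert not accept_call("HSA", "TANDEM")
```

Because the table is fixed at configuration time, a single node can keep accepting calls for its predetermined clients with no supervisor anywhere on the network.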



In operation, Tymshare’s Data Networks Division was responsible for the development and maintenance of the network, and Tymnet was responsible for its administration, provisioning, and monitoring. Each company had its own software development staff, and a line was drawn to separate what each group could do: Tymshare development engineers wrote all the code that ran in the network, and the Tymnet staff wrote code running on host computers connected to the network. It is for this reason that many of the Tymnet projects ran on the Digital Equipment Corporation DECSystem-10 computers that Tymshare offered as timesharing hosts for their customers. Tymnet operations formed a strategic alliance with the Tymshare PDP-10 TYMCOM-X operating systems group to assist them in developing new network management tools.


Trouble reports were initially tracked on pieces of paper, until a manager at Tymnet wrote a small FORTRAN IV program to maintain a list of problem reports and track their status in a System 1022 database (a hierarchical database system for TOPS-10 published by Software House). He called the program PAPER, after the old manual way of managing trouble tickets. The program grew as features were added to handle customer information, call-back contact information, escalation procedures, and outage statistics.

Company-wide use

Access to PAPER became critical as more and more functionality was added. It was eventually maintained on two dedicated PDP-10 computers, model KL-1090, accessible via the Tymnet Packet Network as Tymshare hosts 23 and 26. Each computer was the size of five refrigerators, with a string of disks that looked like 18 washing machines. Their non-switching power supplies produced +5 volts at 200 amps, making the machines expensive to operate.

Major upgrades

In 1996 the DEC PDP-10s that ran Tymnet’s trouble-ticket system were replaced by PDP-10 clones from XKL, Inc., accessible over TCP/IP by both TELNET and HTTP. A low-end workstation from Sun was used as a telnet gateway; it accepted logins from the Tymnet network via X.25-to-IP translation done by a Cisco router, forwarding them to “ticket” and/or “token”. The XKL TOAD-1 systems ran a modified TOPS-20. The application was ported to a newer version of the Fortran compiler and still used the 1022 database.


In mid-to-late 1998, Concert produced an inter-company trouble-tracking system for use by both MCI and Concert. This was adopted, and the data for ongoing tickets in TTS (the PAPER trouble-ticket system) was re-entered on the new system. TTS was kept up for historical information until the end of the year.

In January 1999, both XKL servers (ticket and token) were decommissioned. In late 2003 the hardware left onsite in San Jose was accidentally scrapped by the facilities manager during a scheduled cleanup.

UB – The History of Domain Names

Ungermann-Bass was registered on July 10, 1986

Date: 07/10/1986

On July 10, 1986, Ungermann-Bass became the 18th company to register their domain

Ungermann-Bass, also known as UB and UB Networks, was a computer networking company in the 1980s to 1990s. Located in Santa Clara, California, in Silicon Valley, UB was the first large networking company independent of any computer manufacturer. UB was founded by Ralph Ungermann and Charlie Bass. John Davidson, vice president of engineering, was one of the creators of NCP, the transport protocol of the ARPANET before TCP. UB specialized in large enterprise networks connecting computer systems and devices from multiple vendors, which was unusual in the 1980s. At that time most network equipment came from computer manufacturers and usually used only protocols compatible with that one manufacturer’s computer systems, such as IBM’s SNA or DEC’s DECnet. Many UB products initially used the XNS protocol suite, including the flagship Net/One, and later transitioned to TCP/IP as it became an industry standard in the late 1980s.

UB marketed a broadband (in the original technical sense) version of Ethernet known as 10BROAD36 in the mid-1980s. It was generally seen as hard to install. UB was one of the first network manufacturers to sell equipment that implemented Ethernet over twisted-pair wiring. UB’s AccessOne product line initially used the pre-standard StarLAN and, when it became standard, 10BASE-T.

In 1992, Ungermann-Bass engineering in Andover, Massachusetts produced the first virtual private network, marketed as a “Virtual Network Architecture”. The implementation used UB Ethernet switches, modified to add metadata to every packet indicating which VPN the packet originated from. Specialized segmentation and reassembly, compatible with Ethernet’s spanning tree algorithm, was implemented in each switch to support jumbo packets. While this product was not successful, the technology was transferred to Cisco when UB was broken up a few years later. Cisco subsequently offered the first successful commercial VPN, which “tunneled” ISO layer 2 VPN LAN packets through layer 3 to interconnect two LANs via Cisco routers.


UB went public in 1983. It was bought by Tandem Computers in 1988. UB was sold in 1997 by Tandem to Newbridge Networks. Over the next several months, Newbridge laid off the bulk of the Ungermann-Bass employees, and closed the doors of the Santa Clara operation. Newbridge was later acquired by Alcatel, a French telecommunications company.

Unisys – The History of Domain Names

Unisys Corporation was registered

Date: 12/11/1986

On December 11, 1986, Unisys registered the domain name, making it the 49th .com domain ever to be registered.

Unisys Corporation is an American global information technology company based in Blue Bell, Pennsylvania, that provides a portfolio of IT services, software, and technology. Unisys Corporation is a major provider of computer-related services and technologies to customers in the financial services, communications, transportation, publishing, commercial, and government sectors, in more than 100 countries. The company offers an integrated suite of products and services known as Unisys e-@ction Solutions designed to help its customers meet the challenges and seize upon the opportunities of the Internet economy. Unisys provides consulting, systems integration, and outsourcing services; designs, implements, and maintains computer networks and multivendor information systems; and manufactures high-end, mission-critical servers for such organizations as the NASDAQ and the New York Clearinghouse.

Company History:

Unisys traces its roots back to the founding of American Arithmometer Company (later Burroughs Corporation) in 1886 and the Sperry Gyroscope Company in 1910. Unisys predecessor companies also include the Eckert–Mauchly Computer Corporation, which developed the world’s first commercial digital computers, the BINAC and the UNIVAC.

In September 1986 Unisys was formed through the merger of the mainframe corporations Sperry and Burroughs, with Burroughs buying Sperry for $4.8 billion. The name was chosen from over 31,000 submissions in an internal competition when Chuck Ayoub submitted the word “Unisys” which was composed of parts of the words united, information and systems. The merger was the largest in the computer industry at the time and made Unisys the second largest computer company with annual revenue of $10.5 billion.

Adding Machine Origins

Unisys, formed from the 1986 merger of the Burroughs Corporation with Sperry Corporation, traces its origins to over 100 years before that: in 1885, William Seward Burroughs invented the first recording adding machine. Burroughs called his device an arithmometer, and the next year he and three partners founded the American Arithmometer Company in St. Louis, Missouri. Creating a commercially viable version proved difficult; Burroughs was unable to patent a salable model until 1892. Once on the market, though, the adding machine became a success–in 1897 Burroughs was awarded the Franklin Institute’s John Scott Medal in honor of his invention. Burroughs died of tuberculosis the next year, sadly before realizing much profit from his invention. The company, which moved to Detroit in 1905, was renamed the Burroughs Adding Machine Company in his memory.

During the early years of the 20th century, Burroughs consolidated a position in the adding machine business by acquiring both Universal Adding Machine and Pike Adding Machine in 1908, and Moon-Hopkins Billing Machine in 1921. By 1914 the company offered 90 different types of data-processing machines which, with the help of interchangeable parts, could be modified into 600 different configurations. Accountants formed the core customer base, and in 1917 Burroughs increased courtship of those customers with the debut of a magazine devoted to accounting called Burroughs Clearing House. By the 1920s Burroughs was an established mainstay of the office-machine industry and remained so for the next three decades, with adding machines still at the heart of the product line.

1950s: Expanding into Computers

All of that changed, however, as a result of J. Presper Eckert and John W. Mauchly’s invention of ENIAC, the first electronic computer, in 1946. At first the market for computers appeared to be limited to a handful of government agencies that used them for large-scale number crunching. The only companies to commit themselves to computer research and development were large electronics and office-machine firms for whom the computer was a natural extension. When the Defense Department awarded the design contract for the new SAGE early-warning computer system in 1952, Burroughs, IBM, RCA, Remington Rand, and Sylvania were all prime choices. IBM won, giving that company an advantage competitors struggled to overcome.

Burroughs did not immediately plunge wholeheartedly into computer technology, preferring, along with Sperry Rand’s UNIVAC unit, NCR, Control Data, and Honeywell, merely to keep pace with IBM during the 1950s. At the end of the decade Burroughs’s reputation was still, in the words of a Time magazine correspondent, that of ‘a stodgy old-line adding machine maker.’ Even so, in 1952 the company developed an add-on memory for Eckert and Mauchly’s ENIAC. The following year the company name was shortened to Burroughs Corporation, in recognition of its diversification. In 1956 Burroughs introduced its first commercial electronic computer and acquired ElectroData Corporation, a leading maker of high-speed computers. Burroughs also entered the field of automated office machines, introducing the Sensitronic electronic bank bookkeeping machine in 1958.

Burroughs entered the computer field during the tenure of John Coleman, whose last major act as president was to negotiate a partnership agreement between his company’s computer operations and those of RCA, which was also looking for a way to catch up to IBM through a pooling of financial resources. RCA approved the agreement in 1959, but Coleman died before he could sway Burroughs’s board of directors and the plan was never realized. Business historian Robert Sobel wrote that the Burroughs-RCA partnership might have produced ‘the best possible challenger for IBM.’

1960s Through Early 1980s: Struggling to Compete with IBM

Coleman was succeeded by executive vice-president Ray Eppert. Under Eppert, Burroughs expanded its place in the rapidly growing bank-automation market in 1960, as the company began selling magnetic inks and automatic check-sorting equipment. In 1961 the company introduced the B5000 computer, which was less expensive and simpler to operate than other commercial mainframes. Expansion and diversification during the early years of the computer age led to a fourfold increase in sales between 1948 and 1960, from $94 million to $389 million. At the same time, however, increased research and development costs cut profit margins, a problem the company struggled with until the late 1960s.

Despite this surge in sales, Burroughs remained among the smallest of IBM’s main competitors in the early 1960s. Although the B5000’s distinctive design had earned a solid following, Burroughs’s computer product line remained narrow and the company was still too dependent on accounting machines. Research and development costs continued hacking away at profit margins, leaving the company’s future clouded.

In 1964 Ray Macdonald became executive vice-president and began overseeing the company’s day-to-day operations. With the help of several like-minded executives, he took control of Burroughs from Eppert and committed the company to a course of steady profit growth through cost cutting. Macdonald succeeded Eppert as CEO in 1967. Burroughs’s financial performance continued improving and the company became a Wall Street favorite before the decade was out.

The Defense Department awarded Burroughs a contract in 1967 to build the Illiac IV supercomputer which had been designed by a team at the University of Illinois–a major coup for the company. The Illiac IV was 10 to 20 times faster than any existing supercomputer in 1972 and was delivered to NASA’s Ames Research Center in California. The sudden lag in research and development created by Macdonald’s policy of cutting costs also contributed to two significant technical failures around this time. The B8500 mainframe, which had been scheduled for delivery in 1967, had to be scrapped altogether in 1968, after Burroughs engineers realized they could not produce reliable components at a reasonable price. The B6500 was riddled with breakdowns caused by the development team’s strategy for bringing the project in on time and under budget–namely, cutting corners in the high-speed circuitry design and neglecting to test the completed machines properly before delivery.

An interesting aspect of Macdonald’s stewardship was his reemphasis on accounting machines as an integral part of Burroughs’s product line. Foremost among his talents was a genius for salesmanship; the company won a considerable chunk of the high-speed accounting machines market from rival NCR. In 1974 Burroughs entered the facsimile equipment business, acquiring Graphic Services for $30 million. The next year the company paid $8.8 million for Redactron, a maker of automatic typewriters and computer-related equipment.

Ray Macdonald retired in 1977 and was replaced by Paul Mirabito, his hand-picked successor. During Mirabito’s brief tenure, the consequences of Macdonald’s fiscal policies began manifesting themselves in earnest. In 1979 IBM announced a powerful new generation of computer systems. Burroughs countered by announcing its own new series of systems. Unfortunately, although Burroughs’s design ideas were good, the company did not have the development or manufacturing resources to translate them into actual computers. Burroughs’s inability to deliver finished products resulted in an embarrassing stream of canceled orders. Years of salary cuts and other forms of budget-tightening had engendered low morale among field engineers and a reputation for poor service among clients. Customer complaints came to a head in 1981, when 129 Burroughs users sued the company over product unreliability and difficulty in getting their machines fixed.

Mirabito had retired in 1979 and was replaced by W. Michael Blumenthal, the former chairman of Bendix and secretary of the treasury in the Carter administration–a move that surprised many industry observers. Blumenthal took over a company that was deceptively profitable, chalking up record sales of $2.8 billion in 1979. He immediately set about shaking up Burroughs’s corporate culture, firing veteran executives and replacing them with his own appointees, phasing out the adding machine and calculator businesses, implementing a plan to improve repair service, and discontinuing accounting practices that tended to inflate earnings. Blumenthal’s reforms did not come without cost, however; in July 1980, the company reported its first drop in quarterly profits in 17 years.

Blumenthal concentrated on Burroughs’s computer business in an effort to secure its position as the largest of IBM’s U.S. competitors. In 1981 the company covered one weak spot by acquiring System Development Corporation, a software development firm, for $9.6 million. Burroughs also procured Memorex, a maker of disc drives and other data-storage equipment, that year for $85.2 million, despite Memorex’s shaky financial condition. These moves added $1 billion to the company’s annual sales.

Mid-1980s: Burroughs + Sperry = Unisys

Blumenthal eventually decided that economies of scale were necessary to compete with IBM. In 1985 Burroughs launched a $65-per-share takeover bid, worth $3.7 billion, for Sperry Corporation. Sperry had been a takeover candidate since holding unsuccessful merger talks with ITT in March 1984. The Sperry board of directors and investors, from whom Burroughs hoped to obtain shares, balked at the offer, though, and the deal fell through. Burroughs came back with a $70-per-share bid, worth $4.1 billion, in May 1986, and a four-week battle ensued. Sperry executives, anxious to preserve the company’s independence, argued against selling out. The board put up a defense that included an $80-per-share stock buyback offer while casting about for a white knight. Sperry eventually agreed to a $76.50-per-share deal worth $4.8 billion–at the time, by far the largest merger in the history of the computer industry and one of the largest in U.S. corporate history. The resulting company was the second largest computer firm in the nation, leapfrogging over Digital Equipment Corporation.

Sperry, which was founded in 1933 but traced its roots back to the 1910-formed Sperry Gyroscope Co., originally made aircraft instruments. In 1955 the manufacturer jumped into the computer business, merging with Remington Rand, whose history dated back farther than Burroughs or Sperry. In 1873 E. Remington & Sons, forerunner of Remington Typewriter Co., introduced the first commercially successful typewriter. After producing the first ‘noiseless’ typewriter in 1909, Remington introduced the first electric typewriter in the United States in 1925. Two years later, Remington Typewriter merged with Rand Kardex to form Remington Rand. The latter introduced the world’s first business computer, the 409, in 1949. The following year, Remington Rand acquired Eckert-Mauchly Corporation, the company founded by the developers of the ENIAC and the UNIVAC. The 1955 merger of Sperry and Remington Rand resulted in Sperry Rand, which quickly became one of the industry’s leading companies due to its technical prowess and had, by the 1960s, gained a reputation for excellent products. At the same time, Sperry had inherited a legacy of poor management and marketing from Remington Rand. By the time Burroughs showed interest, the renamed Sperry Corporation had profitable defense-electronics operations, but a struggling computer business.

Six months after the acquisition, the combined company adopted the name Unisys Corporation. The moniker was selected from suggestions submitted by Burroughs and Sperry employees and was conceived as a synthesis of the words ‘United Information Systems.’ But the real work of fusing the two companies still remained. Over the next two years the Unisys workforce was reduced by 20 percent–24,000 of the 121,000 positions were eliminated–while unwanted and redundant businesses were placed on the market in order to generate cash. In December 1986, Unisys sold Sperry Aerospace to Honeywell and later sold off Memorex’s marketing arm.

Late 1980s and Early 1990s: Sinking Fortunes and a Turnaround

Meanwhile Unisys stepped up diversification of its product line. In 1987 Unisys acquired Timeplex, a high-tech communications equipment company, for $300 million, and Convergent Technologies, a maker of office workstations, for $351 million. By 1989 the company had begun to move into the small and mid-sized computer market, adopting AT&T’s popular Unix operating system as the standard configuration for Unisys machines. In 1989 Unisys also began manufacturing its own personal computers for the first time.

Unisys was not entirely successful in the late 1980s, however. Despite strong earnings growth from the time of the Sperry deal through 1988, the company posted a loss of nearly $100 million in the first quarter of 1989. Management shakeups in 1987 had resulted in the departure of two key executives–vice-chairman Joseph Kroger, the former president of Sperry who commanded intense loyalty from former Sperry employees, and Paul G. Stern, a physicist whom Blumenthal had brought into the company from IBM and made president and chief operating officer in 1982. Sluggish sales, manufacturing cost overruns, and fierce price competition among the many companies using the Unix system all cut into revenues.

Unisys also found itself caught up in the Pentagon procurement scandal of 1988. Federal prosecutors charged some Unisys executives–including former vice-president Charles Gaines, who headed the Washington, D.C., office of one of the company’s defense units–with fraud, bribing Defense Department officials to yield classified procurement documents, and making illegal campaign contributions to members of Congress; these activities allegedly occurred at Sperry prior to the merger. Unisys had already begun an internal investigation when the government made the accusations public. According to Paul Mann of Aviation Week & Space Technology, the company settled its part in the Operation Ill Wind court case in September 1991, pleading guilty to fraud and bribery and agreeing to pay a record of up to $190 million in damages, penalties, and fines. In the same article, James A. Unruh, who succeeded Blumenthal in 1990, was quoted as saying, ‘we as a company must accept responsibility for the past actions of a few people, even though today we have a completely different management team and different shareholders.’

Unisys’s difficulties continued and deepened in the early 1990s, with much of the troubles easily traced back to the merger of Burroughs and Sperry. The operations of the two companies had never been properly integrated, leaving duplicate R & D, marketing, and accounting departments. Already saddled with a huge debt load from the 1986 merger, Unisys was forced to take on an additional $1.4 billion in debt to cover negative cash flow, as the company’s mainframe computers were quickly losing market share to IBM and Amdahl. The company’s stock, which sold for about $50 in 1987, collapsed to a low of $1.75 during 1990. Unisys posted successive net losses of $639 million in 1989, $436 million in 1990, and $1.39 billion in 1991. Bankruptcy neared.

Amid a depressed global economy, Unruh managed to turn Unisys’s fortunes around by 1992 through a draconian restructuring, the success of which surprised many observers. Unisys made additional drastic employee reductions, eliminating some 23,000 positions from 1989 through 1991. At the end of 1991, the remaining Unisys workforce was roughly half the size of that at the time of the merger. An additional 6,000 jobs were cut in 1992, leaving a workforce of 54,300. Other major restructuring costs led Unisys to take massive charges of $1.2 billion in 1991, directly contributing to overall unprofitability for the year. These measures, however, were expected to reduce costs on an annual basis by approximately $800 million. In its aggressive drive to cut costs, Unisys reduced its 50,000-product line by 15,000 items, having determined that ten percent of its products were bringing in 90 percent of the revenue. Its mainframe computer lines were reduced from four to two (Sperry’s 2200 series and Burroughs’s A series). The Timeplex subsidiary, responsible for only a small fraction of overall revenues, was divested. The company shuttered seven of its 15 manufacturing facilities, and Unisys began concentrating on those market sectors where it was traditionally the strongest: banking, airlines, government, and communications. Debt was brought down to a more manageable $1.4 billion, from its peak of $3.5 billion.

This massive reengineering effort not only pulled Unisys from the brink of disaster, it also resulted in two solid years of financial performance: for 1992, net income of $361.2 million on sales of $8.7 billion, and for the following year, net income of $565.4 million on $7.74 billion in revenues. Unisys was much smaller–revenues had totaled $10.11 billion in 1990–but much more profitable.

Mid-1990s and Beyond: Shift to Services and Continued Restructuring

As the turnaround was taking shape, Unruh pushed the company in a new direction. With a clear shift taking place from mainframes to networked computing, Unruh moved to deemphasize the former through an expansion into computer services. Beginning in 1992 with the formation of a unit dedicated to providing information technology services, Unisys became active in the areas of systems consulting and design and systems integration services. One rationale behind the shift to services was that as computer systems grew ever more complex, in-house personnel were less and less able to cope, leading to a growing market for outside information technology expertise. Building on its existing mainframe maintenance activities, Unisys was able to generate $1.3 billion from services in 1992, then $2 billion the following year. By 1994 the company’s ‘services and solutions’ unit was generating more revenue than the mainstay mainframe hardware operations.

Unfortunately, Unisys’s comeback proved short-lived. Services revenues were growing about 30 percent per year but the company had failed to make a profit from its new activities, losing about $54 million during 1995 alone. Part of a 1994 profit decline was attributed to a delay in getting the company’s latest servers, the 550 and 580, to market. Another factor was a $186 million charge for a further restructuring of the mainframe operations, including a workforce reduction of 4,000 and the long overdue merging of the 2200 series and the A series into a single mainframe line. After the depressed profit figure of $100.5 million for 1994, Unisys posted a net loss of $624.6 million in 1995 thanks to a $717.6 million charge for another restructuring (the fifth in seven years). This time the company reorganized itself into three units: hardware and software, which included mainframes, servers, and a recent foray into PCs; maintenance and networking, which concentrated on servicing and connecting computers; and services, which involved consulting and outsourcing in integrated systems design. This restructuring also involved the paring of a few thousand more workers from the payroll and the consolidating of facilities and manufacturing, as well as the 1995 sale of its defense contracting unit to Loral for $862 million.

The following year Unisys introduced to positive market reaction the ClearPath line of computers, which combined proprietary mainframes with open systems capable of running standard Unix and Windows NT software and applications in a single system. In April 1996 Unruh managed to defeat Greenway Partners’ proposal to shareholders for a breakup of Unisys into three parts. (Greenway held nearly a five percent stake in Unisys.) A similar breakup proposal one year later failed as well. In September 1997 Unruh stepped aside from his leadership position at Unisys, having kept the company alive but having never fully turned it around. The financial ups and downs and the frequent restructurings had left the remaining workforce demoralized. Nevertheless, most observers praised Unruh’s shift into services, and during 1997 that unit finally turned its first profit.

Unruh helped select his successor, Lawrence A. Weinbach, former head of accounting and consulting giant Andersen Worldwide. The new chairman, CEO, and president immediately began working to improve employee morale, meeting with more than one-third of the workforce and revoking unpopular policies from recent austerity programs, such as the elimination of the company match on 401(k) contributions. Weinbach also initiated $1.04 billion in fourth-quarter 1997 charges, which resulted in a net loss for the year of $853.6 million. Some $900 million of the charges were to write down the value of goodwill left from the acquisition of Sperry, with the remainder going toward reducing debt. At year-end 1997 debt stood at $1.4 billion but was reduced to less than $1 billion by the end of 1999.

In addition to focusing on debt reduction, Weinbach moved Unisys out of the manufacturing of PCs and smaller servers. The company began outsourcing the manufacture of such hardware to Hewlett-Packard in 1998. He also jettisoned the company’s three-unit structure in favor of a simpler division between hardware, which would now focus on high-end servers and mainframes, and services, which included maintenance as well as consulting and systems design. On the hardware side, Unisys worked to upgrade its existing mainframe line, while also introducing in 1999 a mainframe-class server called the Unisys e-@ction ClearPath Enterprise Server, which was Intel microprocessor-based and ran Windows NT (later Windows 2000) software. This server was part of a comprehensive and integrated portfolio of hardware and services–known as Unisys e-@ction Solutions–that Unisys unveiled in 1999 to support the burgeoning e-business market. On the services side, Unisys became more selective in the type of projects it took on, concentrating on key markets where it had the most expertise–including financial services, government, communications, transportation, and publishing.

By 1999, 70 percent of the company’s revenues were being generated by the services operations. For the year, Unisys posted net income of $510.7 million on sales of $7.54 billion, its best year since 1993. It was difficult to predict whether this turnaround would last longer than that of the early 1990s. As Unisys’s services side grew, profit increases were likely to be harder won, as its services business was markedly less profitable than its hardware side. Nevertheless, one possible avenue for early 21st-century growth was in international markets, and Unisys was seeking acquisitions to fuel an overseas push in services. In 1999 the company made several acquisitions, including Datamec, a Brazilian application outsourcing company, and City Lifeline Systems Limited, a U.K.-based provider of services and solutions for firms trading in fixed-income securities.

TLD – The History of Domain Names

Country Code Top Level Domains

Date: 01/02/1981

Country Code Top-Level Domains (ccTLDs) are two-letter Internet top-level domains (TLDs) specifically designated for a particular country, sovereign state or autonomous territory for use to service their community. ccTLDs are derived from ISO 3166-1 alpha-2 country codes.

A country code top-level domain (ccTLD) is an Internet top-level domain generally used or reserved for a country, sovereign state, or dependent territory identified with a country code. All ASCII ccTLD identifiers are two letters long, and all two-letter top-level domains are ccTLDs. In 2010, the Internet Assigned Numbers Authority (IANA) began implementing internationalized country code top-level domains, consisting of language-native characters when displayed in an end-user application. Creation and delegation of ccTLDs is described in RFC 1591, corresponding to ISO 3166-1 alpha-2 country codes.

When the Domain Name System was created in the 1980s, the domain name space was divided into two main groups of domains. The country code top-level domains (ccTLD) were primarily based on the two-character territory codes of the ISO 3166 country abbreviations. In addition, a group of seven generic top-level domains (gTLD) was implemented, representing a set of categories of names and multi-organizations. These were the domains GOV, EDU, COM, MIL, ORG, NET, and INT.
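The split described above follows a simple rule: every two-letter ASCII TLD is a country code, while the original seven gTLDs form a fixed set. A minimal sketch of that classification logic (the function name `classify_tld` is our own, for illustration only):

```python
# Classify a top-level domain per the original DNS split: all two-letter
# ASCII TLDs are country codes (from ISO 3166-1 alpha-2); seven generic
# TLDs were defined alongside them.

ORIGINAL_GTLDS = {"gov", "edu", "com", "mil", "org", "net", "int"}

def classify_tld(tld: str) -> str:
    label = tld.lstrip(".").lower()
    if len(label) == 2 and label.isascii() and label.isalpha():
        return "ccTLD"
    if label in ORIGINAL_GTLDS:
        return "original gTLD"
    return "other TLD"

print(classify_tld(".uk"))   # ccTLD
print(classify_tld("com"))   # original gTLD
```

Note that this reflects only the 1980s name space; later gTLD rounds added many more generic strings, all of them three letters or longer.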


The implementation of ccTLDs was started by IANA. The delegation and creation of ccTLDs is described in RFC 1591. To determine whether new ccTLDs should be added, IANA follows the provisions of the ISO 3166 Maintenance Agency.

IANA’s Procedures for ccTLDs

Within its database, IANA contains authoritative information related to ccTLDs, referring to sponsoring organizations, technical and administrative contacts, name servers, registration URLs and other such information. This type of information provides extra details regarding the IANA’s procedures for maintaining the ccTLD database.

Delegation and Redelegation

The process through which the designated manager, or managers, is changed is known as redelegation. The process follows the provisions of ICP-1 and RFC 1591. IANA receives all requests from sponsoring organizations for delegation and redelegation of ccTLDs. The requests are then analyzed by IANA against various technical and public-interest criteria, and finally sent to the ICANN Board of Directors for approval or refusal. If approved, IANA is also responsible for the implementation of the request.

Conceptually speaking, the delegation and redelegation processes are simple, but they can easily become complex when many organizations and individuals are involved. There is a set of steps that must be followed before sending the request for delegation or redelegation. An initial request should be developed based on the Change Request Template, with supplementary information to show that the eligibility criteria have been met. All of the information supplied is used by IANA in evaluating the request.

ccTLDs and ICANN

The policies developed by ICANN are implemented by gTLD registry operators, ccTLD managers, root-nameserver operators, and regional Internet registries. One of the main activities of ICANN is to work with other organizations involved in the technical coordination of the Internet with the purpose of formally documenting their participatory role within the ICANN process. These organizations are committed to the ICANN policies that result from their work. Starting in 2000, ICANN began cooperating with ccTLD managers to document their relationship. Due to various circumstances, such as the type of organization, cultural issues, economics, and the legal environment, the relationships between ICANN and ccTLD managers are often complex. Another consideration is the role of the national government in “managing or establishing policy for their own ccTLD” (a role recognized in the June 1998 U.S. Government White Paper). In 2009, ICANN began the implementation of an IDN ccTLD Fast Track Process, whereby countries that use non-Latin scripts are able to claim ccTLDs in their native script alongside the corresponding Latin version. As of early 2011, 33 requests had been received, representing 22 languages; more than half had already been approved.
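An IDN ccTLD in native script is carried in the DNS as an ASCII label with the "xn--" prefix, produced by the Punycode conversion of the IDNA standards. A small sketch using Python's built-in "idna" codec (which implements the older IDNA 2003 mapping; production registries use newer IDNA 2008 libraries), with China's IDN ccTLD as the example:

```python
# An IDN ccTLD label in native script and its ASCII ("xn--") wire form.
# ".中国" is the delegated IDN ccTLD for China.

native = "中国"
ascii_form = native.encode("idna").decode("ascii")
print(ascii_form)  # xn--fiqs8s

# The mapping is reversible: decoding the ASCII label recovers the script.
round_trip = ascii_form.encode("ascii").decode("idna")
print(round_trip == native)  # True
```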

Toad – The History of Domain Names

Toad.com was registered

Date: 08/18/1987

On August 18, 1987, John Gilmore registered the domain name toad.com, making it the 84th .com domain ever to be registered.

Gilmore owns the toad.com domain name, which is one of the 100 oldest active .com domains. It was registered on August 18, 1987. He runs the mail server at toad.com as an open mail relay. In October 2002, Gilmore’s ISP, Verio, cut off his Internet access for running an open relay, a violation of Verio’s terms of service. Many people contend that open relays make it too easy to send spam. Gilmore protests that his mail server was programmed to be essentially useless to spammers and other senders of mass email, and he argues that Verio’s actions constitute censorship. He also notes that his configuration makes it easier for friends who travel to send email, although his critics counter that there are other mechanisms to accommodate people wanting to send email while traveling. The measures Gilmore took to make his server useless to spammers may or may not have helped, considering that in 2002, at least one mass-mailing worm that sent through open relays—W32.Yaha—had been hardcoded to relay through toad.com’s mail server.

Gilmore famously stated of Internet censorship that “The Net interprets censorship as damage and routes around it”. He unsuccessfully challenged the constitutionality of secret laws regarding travel security policies in Gilmore v. Gonzales. Gilmore is also an advocate for the relaxing of drug laws, and has given financial support to Students for Sensible Drug Policy, the Marijuana Policy Project, Erowid, MAPS, Flex Your Rights, and various other organizations seeking to end the war on drugs. He is a member of the boards of MAPS, the Marijuana Policy Project, and the Electronic Frontier Foundation.

John Gilmore (born 1955) is one of the founders of the Electronic Frontier Foundation, the Cypherpunks mailing list, and Cygnus Solutions. He created the alt.* hierarchy in Usenet and is a major contributor to the GNU project.

An outspoken civil libertarian, Gilmore has sued the Federal Aviation Administration, Department of Justice, and others. He was the plaintiff in the prominent case Gilmore v. Gonzales, challenging secret travel-restriction laws. He is also an advocate for drug policy reform.

He co-authored the Bootstrap Protocol, which evolved into DHCP – the primary way local networks assign devices an IP address.

As the fifth employee of Sun Microsystems and founder of Cygnus Support, he became wealthy enough to retire early and pursue other interests.

He is a frequent contributor to free software, and worked on several GNU projects, including maintaining the GNU Debugger in the early 1990s, initiating GNU Radio in 1998, starting Gnash in December 2005 to create a free software player for Flash movies, and writing the pdtar program which became GNU tar. Outside of the GNU project he founded the FreeS/WAN project, an implementation of IPsec, to promote the encryption of Internet traffic. He sponsored the EFF’s Deep Crack DES cracker, the Micropolis city building game based on SimCity, and he is a proponent of opportunistic encryption.

Gilmore co-authored the Bootstrap Protocol (RFC 951) with Bill Croft in 1985. The Bootstrap Protocol evolved into DHCP, the protocol by which Ethernet and wireless networks typically assign devices an IP address.
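RFC 951 defines BOOTP as a fixed 300-byte message, whose field layout DHCP later inherited. A sketch of building a client request from that layout (the transaction id and MAC address below are made up for illustration):

```python
import struct

# Fixed 300-byte BOOTP message layout from RFC 951.
BOOTREQUEST = 1
HTYPE_ETHERNET = 1

def build_bootrequest(xid: int, mac: bytes) -> bytes:
    assert len(mac) == 6
    return struct.pack(
        "!4BIHH4s4s4s4s16s64s128s64s",
        BOOTREQUEST,           # op: 1 = request, 2 = reply
        HTYPE_ETHERNET,        # htype: hardware address type
        6,                     # hlen: hardware address length
        0,                     # hops
        xid,                   # xid: transaction id chosen by the client
        0,                     # secs: seconds since the client began booting
        0,                     # unused in RFC 951 (later the DHCP flags field)
        bytes(4),              # ciaddr: client IP, zero if unknown
        bytes(4),              # yiaddr: "your" IP, filled in by the server
        bytes(4),              # siaddr: server IP
        bytes(4),              # giaddr: gateway IP for cross-net booting
        mac.ljust(16, b"\0"),  # chaddr: client hardware address
        bytes(64),             # sname: optional server host name
        bytes(128),            # file: boot file name
        bytes(64),             # vend: vendor area (became DHCP options)
    )

pkt = build_bootrequest(0x12345678, b"\xaa\xbb\xcc\xdd\xee\xff")
print(len(pkt))  # 300
```

In real use the packet would be broadcast over UDP to port 67; DHCP kept this wire format and layered its option negotiation into the vendor area.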

Toys – The History of Domain Names

Toys.com auctioned for $5.1 million

Date: 01/01/2009

Toys ‘R’ Us Buys Toys.com Domain Name for $5.1M

Reauction of Toys.com goes for $5.1M.

Toys ‘R’ Us will be purchasing the domain name Toys.com for $5.1M. The company outbid National A-1 Advertising in a live-blogged teleconference auction. (The live blog was written by Larry Fischer, who attended the auction in his role with Faculty Lounge Partners.)

The domain originally sold for $1.25M in an auction that Toys ‘R’ Us participated in. This raises the question: why four times as much now?

Sources tell Domain Name Wire that there was some horse trading in the initial auction. Faculty Lounge Partners bought Toys.com and Toys ‘R’ Us bought eToys. We’ll see what kind of horse trading went on when we see the original auction’s transcript next month.

The eToys estate isn’t the only winner today. Faculty Lounge Partners will get a break-up fee of $37,500 plus legal fees; $37,500 is 3% of Faculty Lounge’s bid of $1.25M.

I bet eToys (The Parent Company) creditors will be wondering how the law firm Pachulski Stang Ziehl & Jones LLP blew the initial auction. [Update 2-28-09: It has come to my attention that the law firm may not be on the hook for the results. They were there to collect offers and run the auction, but a separate party was supposed to get buyers. I’ll get more details and update this story.] If Toys.com sold for 4 times what it did in the original auction, were the other domains sold below value? All the law firm had to do was send a press release about the original auction and the results would have been drastically different.

Here’s a time line of the saga:

February 5, 2009: Domain Name Wire first reports about the closely held auction for domains from The Parent Company, owner of eToys.

February 6, 2009: Based on tips from multiple sources, DNW suggests that the buyer of and related properties is Toys ‘R’ Us, which attended the auction as “Eagle LLC”.

February 6, 2009: Domain Name Wire learns that at least one company that wasn’t at the original auction intends to challenge the sale.

February 12, 2009: After receiving court approval, Toys ‘R’ Us announces it was the buyer of Toys.com.

February 18, 2009: DNW learns that Toys.com will be auctioned again and the previous auction winner, Faculty Lounge Partners, will be the stalking horse bidder.

February 26, 2009: All bidders complete asset purchase agreements, with Toys ‘R’ Us, National A-1, and Frank Schilling among the bidders.

February 27, 2009: Toys.com sells for a staggering $5.1M.

Transition – The History of Domain Names

Transition Towards the Internet

Date: 01/01/1984

The term “internet” was adopted in the first RFC published on the TCP protocol (RFC 675: Internet Transmission Control Program, December 1974) as an abbreviation of the term internetworking, and the two terms were used interchangeably. In general, an internet was any network using TCP/IP. It was around the time ARPANET was interlinked with NSFNET in the late 1980s that the term came to be used as the name of the network: the Internet, a large, global TCP/IP network. As interest in widespread networking grew and new applications for it were developed, the Internet’s technologies spread throughout the rest of the world. The network-agnostic approach of TCP/IP meant that it was easy to use any existing network infrastructure, such as the IPSS X.25 network, to carry Internet traffic. In 1984, University College London replaced its transatlantic satellite links with TCP/IP over IPSS.

Many sites unable to link directly to the Internet started to create simple gateways to allow transfer of email, at that time the most important application. Sites which only had intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple email peering, such as allowing access to FTP sites via UUCP or email. Finally, the Internet’s remaining centralized routing aspects were removed. The EGP routing protocol was replaced by a new protocol, the Border Gateway Protocol (BGP). This turned the Internet into a meshed topology and moved away from the centralized architecture that ARPANET had emphasized. In 1994, Classless Inter-Domain Routing (CIDR) was introduced to support better conservation of address space, which allowed the use of route aggregation to decrease the size of routing tables.
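The route aggregation that CIDR enabled can be illustrated with Python's standard `ipaddress` module: adjacent prefixes collapse into a single shorter prefix, so a router advertises one entry instead of several. The prefixes below are documentation addresses chosen for illustration:

```python
import ipaddress

# CIDR route aggregation: two adjacent /25 prefixes collapse into one /24,
# halving the number of routing table entries needed for this block.
routes = [
    ipaddress.ip_network("192.0.2.0/25"),
    ipaddress.ip_network("192.0.2.128/25"),
]
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)  # [IPv4Network('192.0.2.0/24')]
```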

IPv4/IPv6 Transition towards the Next Generation Internet

Although the basic protocol of the Next-Generation Internet (NGI), IPv6, was defined more than ten years ago, the transition from the current IPv4-based Internet to an IPv6-based NGI still has a long way to go. With the growth of the Internet, it is predicted that IANA will exhaust its IPv4 address pool on 17-Jun-2011. Therefore, IPv6 networks and IPv6 applications will be widely used in the coming years. However, as different address families, IPv4 and IPv6 are difficult to interconnect, or even to have coexist long-term, in the complex topology of the Internet. After reviewing basic IPv6 transition technologies from the literature, the talk presents the active work in the IETF on IPv4/IPv6 coexistence. Finally, IPv6 progress in China will also be introduced.
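One reason the two address families are hard to interconnect is that they are simply different sizes: 32-bit versus 128-bit addresses. A small sketch with Python's `ipaddress` module showing one coexistence mechanism, the IPv4-mapped IPv6 form (`::ffff:0:0/96`) that dual-stack sockets use to represent IPv4 peers; the address itself is a documentation example:

```python
import ipaddress

# The same host as a 32-bit IPv4 address and as an IPv4-mapped IPv6
# address. The ::ffff:0:0/96 form lets IPv6-aware software carry IPv4
# endpoints without a separate code path.
v4 = ipaddress.ip_address("192.0.2.1")
mapped = ipaddress.ip_address("::ffff:192.0.2.1")

print(v4.version, mapped.version)  # 4 6
print(mapped.ipv4_mapped == v4)    # True
```

Mapped addresses only bridge representation inside one host; translating traffic between the families on the wire requires the heavier transition technologies (tunneling, dual stack, NAT64-style translation) that the talk surveys.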