Lycos – The History of Domain Names

Lycos

Date: 01/01/1993

Lycos, Inc., is a search engine and web portal established in 1994, spun out of Carnegie Mellon University. Lycos also encompasses a network of email, webhosting, social networking, and entertainment websites.

History

Lycos is a university spin-off that began as a research project by Michael Loren Mauldin of Carnegie Mellon University’s main Pittsburgh campus in 1994. Lycos Inc. was formed with approximately US $2 million ($3.2 million today) in venture capital funding from CMGI. Bob Davis became the CEO and first employee of the new company in 1995, and concentrated on building the company into an advertising-supported web portal. Lycos enjoyed several years of growth during the 1990s and became the most visited online destination in the world in 1999, with a global presence in more than 40 countries.

In 1996, the company completed the fastest IPO from inception to offering in NASDAQ history. In 1997, it became one of the first profitable internet businesses in the world. In 1998, Lycos paid $58 million ($84.3 million today) for Tripod in an attempt to “break into the portal market.” Over the course of the next few years, Lycos acquired nearly two dozen internet brands including Gamesville, WhoWhere, Wired Digital (eventually sold to Wired), Quote.com, Angelfire, Matchmaker.com and Raging Bull.

Lycos Europe was a joint venture between Lycos and the Bertelsmann transnational media corporation, although it was always a distinct corporate entity. Lycos Europe was the largest of Lycos’s overseas ventures; several other companies also entered into joint venture agreements, including Lycos Canada, Lycos Korea and Lycos Asia.

Near the peak of the internet bubble on May 16, 2000, Lycos announced its intent to be acquired by Terra Networks, the internet arm of the Spanish telecommunications giant Telefónica, for $12.5 billion ($17.8 billion today).[9] The acquisition price represented a return of nearly 3000 times the company’s initial venture capital investment and about 20 times its initial public offering valuation. The transaction closed in October 2000 and the merged company was renamed Terra Lycos, although the Lycos brand continued to be used in the United States. Overseas, the company continued to be known as Terra Networks.

On August 2, 2004, Terra announced that it was selling Lycos to Seoul, South Korea-based Daum Communications Corporation for $95.4 million in cash ($119.72 million today), less than 2% of Terra’s initial multibillion-dollar investment. The transaction, covering the sale of half of the business, closed in October 2004, and the company name was changed back to Lycos Inc. The remaining Terra half was reacquired by Telefónica.

Under new ownership, Lycos began to refocus its strategy. In 2005, the company moved away from a search-centric portal and toward a community destination for broadband entertainment content. With a new management team in place, Lycos also began divesting properties that were not core to its new strategy. In July 2006, Wired News, which had been part of Lycos since the purchase of Wired Digital in 1998, was sold to Condé Nast Publications and re-merged with Wired Magazine. The Lycos Finance division, best known for Quote.com and RagingBull.com, was sold to FT Interactive Data Corporation in February 2006, while its online dating site, Matchmaker.com, was sold to Date.com. In 2006, Lycos regained ownership of the Lycos trademark from Carnegie Mellon University.

During 2006, Lycos introduced several media services, including Lycos Phone which combined video chat, real-time video on demand, and an MP3 player. In August of the same year, a new version of Lycos Mail was released, which allowed sending and receiving large files, including unlimited file attachment sizes. In November 2006, Lycos began to roll out applications centered on social media, including the first “watch and chat” video application with the launch of its Lycos Cinema platform. In February 2007, Lycos MIX was launched, allowing users to pull video clips from YouTube, Google Video, Yahoo! Video and MySpace Video. Lycos MIX also allowed users to create playlists where other users could add video comments and chat in real-time.

As part of a corporate restructuring to focus on mobile, social networks and location-based services, Daum sold Lycos for $36 million in August 2010 to Ybrant Digital, an internet marketing company based in Hyderabad, India.

In May 2012 Lycos announced the appointment of former employee Rob Balazy as CEO.

A disagreement over the price of Lycos took Daum and Ybrant to court, which backed Daum’s claims. This prompted Daum in 2016 to seize Lycos’s shares back from Ybrant.

Marchex – The History of Domain Names

Marchex sells record $3.3 million in domains

August 2, 2012

Marchex just released earnings for its second quarter.

Most notable to the domain name industry is that the company had a record quarter for domain name sales — $3.3 million.

There’s no additional insight into the sales in the company press release, but perhaps we’ll get more from its conference call.

Marchex typically sells $1 million to $2.5 million in domain names each quarter from its portfolio. Last time I checked, the minimum offer it considered for a domain was $30,000.

Second quarter revenue was $34.0 million, compared to $38.8 million in the same quarter last year.

GAAP income was $330,000, or one cent per share.

Me Domains – The History of Domain Names

Sedo going to Auction Over 170 Premium .Me Domain Names

October 7, 2011

Sedo is going to auction more than 170 .me domain names currently held by the .me registry. The auction will start October 27th and run until Thursday, November 3rd.

Looking over the list, there are definitely a lot of good keywords. But missing are the catchy domains that take advantage of the .me extension, such as love.me or like.me.

Some of the domains include: Sydney.me, London.me, Rio.me, golf.me, baseball.me, basketball.me, rugby.me, lottery.me, astrology.me, Houses.me, Property.me, RealEstate.me, and Foreclosures.me.

MelbourneIT – The History of Domain Names

Melbourne IT to work on at least 120 new TLD applications

February 21, 2012

Melbourne IT has disclosed that it already has 120 new top level domain applications it is working on as of February 14. It expects that number to hit 150 by the end of the application period in April.

The disclosure was made in the company’s annual financial presentation.

Melbourne IT’s Digital Brand Services division saw revenue grow to AU$6.2 million in the second half of 2011, which it credits to “new .brand domain applications and brand protection” services.

The company previously said it was in talks with 150 potential new TLD applicants, but now it’s putting firm numbers to how many it expects to assist with applications.

Overall the company predicts 1,000 to 1,500 applications will be received, which is in line with what registry VeriSign anticipates.

Mentat – The History of Domain Names

Mentat Inc

Date: 09/30/1987

On September 30, 1987, Mentat Inc registered the mentat.com domain name, making it the 93rd .com domain ever to be registered.

Based in Los Angeles, California, Mentat is the leading supplier of standards-based, high-performance networking solutions to the computer and satellite industries. Mentat’s SkyX products, named the World Teleport Association’s 2002 Technology of the Year, overcome the limitations of TCP/IP to allow high-performance Internet access over satellite-based networks. SkyX products are used by ISPs, corporations, and government organizations in over 85 countries across all seven continents.
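As background on the TCP/IP limitation that SkyX-style products address: over a geostationary satellite hop, the round trip is so long that a sender using the classic unscaled 64 KB TCP window cannot keep the pipe full, no matter how fast the link is. Here is a back-of-the-envelope sketch in Python; the 550 ms round trip and the 64 KB window are illustrative assumptions, not Mentat’s figures.

    # Rough arithmetic behind TCP's trouble over satellite links.
    # Assumed figures (illustrative): ~550 ms geostationary round trip
    # and the classic 64 KB TCP receive window (no window scaling).
    WINDOW_BYTES = 65_535   # largest unscaled TCP window
    RTT_SECONDS = 0.55      # typical geostationary round-trip time

    # TCP keeps at most one window of unacknowledged data in flight,
    # so throughput is capped at window / RTT regardless of link speed.
    max_bps = WINDOW_BYTES * 8 / RTT_SECONDS
    print(f"throughput ceiling = {max_bps / 1e6:.2f} Mbit/s")  # ~0.95 Mbit/s

That sub-megabit ceiling on an otherwise fast link is why satellite operators deployed protocol gateways of this kind, which terminate TCP locally on each side of the satellite hop and run a delay-tolerant transport across it.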

Mentat, Inc. was acquired by Packeteer, Inc. on December 22, 2004. Mentat provided networking solutions and operating systems to the computer and satellite industries in the United States. Its networking protocol products for operating system developers included a TCP/IP protocol suite offering IPv6 and IPsec; Portable Streams, a portable implementation of the STREAMS infrastructure that provides a mechanism for porting networking protocols, terminal subsystems, and other kernel-level I/O facilities to various operating environments; and XTP, a commercial implementation of the Xpress Transport Protocol. The company also offered its SkyX Gateway and SkyX client/server solutions to allow Internet access over satellite-based networks for ISPs, corporations, and government organizations. Mentat, Inc. was founded in 1987 and was based in Los Angeles, California.

Mentor – The History of Domain Names

Mentor Graphics – mentor.com was registered

Date: 10/27/1986

On October 27, 1986, Mentor Graphics registered the mentor.com domain name, making it the 30th .com domain ever to be registered.

Mentor Graphics, Inc. is a US-based multinational corporation dealing in electronic design automation (EDA) for electrical engineering and electronics. In 2001, it was ranked third in the EDA industry it helped create. Founded in 1981, the company is headquartered in Wilsonville, Oregon, and employs roughly 5,200 people worldwide, with annual revenues of around $1 billion.

Company History

Mentor Graphics Corporation is among the world leaders in electronic design automation (EDA), the use of computer software to design and analyze electronic components and systems. Mentor Graphics designs, manufactures, and markets software used in a number of industries, including aerospace, consumer electronics, computer, semiconductor, and telecommunications. Software produced by Mentor Graphics assists engineers in all of these industries in developing complex integrated circuits. Missile guidance systems, microprocessors, and automotive electronics are among the products designed with the help of Mentor Graphics software. Mentor Graphics also offers customers support and training in the use of its EDA systems.

Mentor Graphics was founded in 1981 by a group of young, aggressive computer professionals at Tektronix, Inc., the largest electronics manufacturing company in Oregon. The main visionary in the group was Thomas Bruggere, a software engineering manager who had spent several years at Burroughs Corporation before joining Tektronix in 1977. Convinced that he could do better with his own company, Bruggere began assembling a core group of collaborators from among his associates at Tektronix. The group eventually consisted of seven members, who would meet after work to discuss what form a new company should take. Along with Bruggere, the group included Gerard Langeler, head of the Business Unit marketing department at Tektronix, and David Moffenbeier, the company’s manager of operations analysis.

Initially, the company’s pioneering group met in Bruggere’s living room and had only a vague idea of what they would be producing. They decided that the area with the best prospects for success was computer aided engineering, or CAE. Once startup financing was in place, members of the Mentor Graphics team traveled the country interviewing engineers to see what qualities were most important to them in a CAE system. For the company’s initial product, Bruggere and company settled on a workstation that used their own software run on a powerful desktop computer manufactured by Apollo Computer, a Massachusetts-based company also in its infancy. The system was named the IDEA 1000, and represented a substantial improvement over anything already in use in the CAE field.

Once the system was conceived, its production became a race against time. The Mentor Graphics team believed that it was critical to have a working product finished in time to unveil at the June 1982 Design Automation Conference in Las Vegas, the industry’s most important trade show. The IDEA 1000 made a big splash at the conference, and orders for the workstation began to pour in.

Throughout the planning stages, Bruggere and the others expected the company’s principal competition to come from established industry heavyweights like Hewlett-Packard and alma mater Tektronix. However, during Mentor Graphics’ first year of operation, two small companies, Daisy Systems and Valid Logic Systems, emerged in the Silicon Valley with CAE products and proved to be Mentor Graphics’ stiffest competition. For several years the computer press generally lumped the three companies together, referring to them collectively by their first initials, DMV. Two things actually distinguished Mentor Graphics from the others. First, Mentor Graphics bought its computers from Apollo, while Daisy and Valid Logic built their own hardware. This allowed Mentor Graphics to concentrate on the software side. Second, Mentor Graphics developed its software from scratch, in contrast to its competitors, whose software was either a hybrid or an adaptation of existing software packages. Because Mentor Graphics took the time following its conference success to develop its own database package rather than rely on the inferior one supplied by another company, Daisy gained a head start in the race for customers. From the fall of 1982 until about 1985, Mentor Graphics and Daisy engaged in a brutal war for domination of the CAE business, with nearly every decision made at Mentor Graphics aimed at gaining market share from its rival.

In 1983 Mentor Graphics made its first acquisition, California Automated Design, Inc. (CADI). CADI was developing software similar to Daisy’s, and the purchase both strengthened Mentor Graphics’ position against its chief rival and nipped another potential competitor in the bud. The results of the acquisition were mixed. Although the purchase gave Mentor Graphics an entrance into a new market segment, the two companies clashed philosophically. The relationship remained strained until 1986, when CADI founder Ning Nan stepped down from his position as vice chairman of Mentor Graphics’ board. A more clear-cut success for Mentor Graphics in 1983 was the introduction of a new product called MSPICE, an interactive analog simulator. The first product of its kind on the CAE market, MSPICE made the process of designing and analyzing the behavior of analog circuits much more efficient.

1983 also marked Mentor Graphics’ move into the international market with the formation of Mentor Graphics (UK) Ltd. Subsidiaries were added in France, Italy, the Netherlands, West Germany, Japan, and Singapore by the following year. By 1984, international sales were accounting for about 20 percent of the company’s total. In September 1984 Mentor Graphics completed the acquisition of Synergy Dataworks, Inc., another young company based in Oregon. Mentor Graphics turned its first profit that year, reporting net income of $8.3 million after losing $221,000 in 1983. In addition, Mentor’s initial public stock offering took place in January 1984, raising $51 million through the sale of about 3 million shares of Mentor Graphics common stock.

Mentor Graphics’ decision to use hardware produced outside the company in its workstations paid off handsomely in 1985. That year, archrival Daisy missed the deadline for its next generation of computers. Because Mentor Graphics’ industry-standard workstations built by Apollo were experiencing no such delays in upgrading, Mentor Graphics was able to move into the industry lead for the first time. 1985 did not pass without major problems, however. The U.S. electronics industry suddenly encountered its worst recession in 20 years. One result was a glut in the semiconductor market, and semiconductor manufacturers were responsible for a quarter of Mentor Graphics’ business. Mentor Graphics’ net income for 1985 slipped to $7.99 million. The company was spared from worse devastation by the relative health of the aerospace and telecommunications industries, plus substantial growth in the company’s international sales, which accounted for 37 percent of revenue for 1985.

By 1986, Mentor Graphics was releasing new products at the rate of one a month. The company’s international operations continued to grow briskly, consisting by that time of 260 individuals working out of 17 offices in 13 countries. Their share of Mentor Graphics’ revenue had reached 44 percent. One of the year’s highlights was the debut of the Compute Engine Accelerator, a device capable of breaking through the computer bottlenecks often encountered by engineers during complex, multifaceted CAE operations. That year, Mentor Graphics’ revenue reached $173.5 million. With both Daisy and Valid Logic losing money, Mentor Graphics’ position at the top of the CAE industry was more or less cemented. In the broader design automation arena, Mentor Graphics was fourth largest.

The downturn in the computer industry had ended by 1987, and Mentor Graphics was able to increase its profits by 85 percent for the year. Sales were up to $222 million. 1988 was even better, as revenue passed the $300 million mark, and net income grew by another 65 percent. That year, Mentor Graphics was the most profitable among all design automation firms, earning more per share than such major players as IBM and McDonnell Douglas. In March 1988 Mentor Graphics absorbed the CAE business of Tektronix, paying $5 million for a business into which Tektronix had already sunk $200 million in development costs. By the middle of the year, Mentor Graphics controlled about a fourth of the $900 million market for electronic computer-aided design products, whereas the fading Daisy’s share had dropped to 12 percent. About half the company’s business was overseas by this time. Mentor Graphics was making an especially good showing in Japan, where the company held 60 percent of the market for CAE workstations.

Mentor Graphics’ growth continued through 1989. The company’s net income made another big jump, reaching $44.8 million on sales of $380 million. With everything looking rosy, the company embarked on an ambitious new project that year. Mentor Graphics announced its commitment to develop Release 8.0, a new generation of design automation software with capabilities far exceeding those of any existing product. This dream package was a bundle of 50 integrated programs designed into a framework that would allow a customer to move data freely among the various programs. It was hoped that Release 8.0 would cut months off the time required to design a new computer chip.

Several problems in 1990 combined to halt Mentor Graphics’ dominance of the market. As Mentor Graphics’ engineers continued incorporating new features into Release 8.0, the project became increasingly complex. Work on Release 8.0 fell months behind schedule. The company suffered from a faltering economy and customers who stopped buying Mentor Graphics’ older products knowing that 8.0 was to be released soon. At the same time, new competition sprang up from Cadence Design Systems, a five-year-old company that sold only software rather than entire workstations. Whereas Mentor Graphics’ products were essentially a closed system, incompatible with other software packages, Cadence was producing software that could run on a wide range of workstations and design more complex chips. Between the delays in 8.0 and the emergence of Cadence, Mentor Graphics hit a wall.

The company made several changes to protect its position in the newly heated-up race for EDA preeminence. One was to strengthen its integrated circuit design capability by acquiring Silicon Compiler Systems, which was integrated as a division of Mentor Graphics. The company also adopted Sun Microsystems hardware as a second platform for its products. Toward the end of 1990, Mentor Graphics reorganized its command structure in an effort to get the 8.0 project back on track. The company was divided into three distinct product groups: Concurrent Engineering, headed by Philip Robinson, a former vice president at Tektronix; the Systems group, led by Langeler, whose previous titles of president and chief operating officer were eliminated; and World Trade, under David Moffenbeier, another member of Mentor Graphics’ core founding group. Bruggere remained chairman and CEO.

One of the most important causes of Mentor Graphics’ ills during this period was its reluctance to adapt to certain changes taking place in the electronic design automation industry. Prior to the 1990s, the bulk of Mentor Graphics’ sales came from complete packages of workstations and software. Around 1991, however, most customers already had workstations they were comfortable with and were interested mainly in purchasing software that could run on whatever hardware platform they preferred.

In April 1991, Mentor Graphics reported a quarterly loss for the first time in its history as a public company. A few months later, the company announced a round of layoffs that eliminated 435 jobs, or about 15 percent of its work force. By the end of 1991, Cadence had passed Mentor Graphics in software revenues. Mentor Graphics finished the year by losing $61.6 million on sales of $400 million. The company’s skid continued into 1992. When 8.0 was finally released early in the year, it performed more slowly than expected, and was plagued with bugs. Mentor Graphics’ stock plummeted, diving as low as $5 in October. For 1992, the company’s sales took another major plunge to $351 million, and the company reported a net loss of nearly $51 million.

Mentor Graphics’ struggle to turn itself around continued in 1993. The rivalry between Mentor Graphics and Cadence became fierce, with each company aggressively courting the other’s customers. Cadence won a three-year, multimillion-dollar contract from Tektronix, which had been a loyal Mentor Graphics customer for years. Mentor Graphics countered by forging a cooperative relationship with Harris Corporation, an early Cadence ally. The company underwent further restructuring in an effort to cut costs. Mentor Graphics still lost money in 1993 ($32 million on revenue of $340 million), but some of its business segments showed signs of recovery. A $17 million contract with Motorola contributed to the company’s slightly improving prospects. The process of changing itself more completely into a software company continued.

In March 1994, Bruggere announced that he was stepping down as chairman to pursue other interests. After a short period during which the company’s day-to-day operations were handled by president and chief executive Walden Rhines, Jon Shirley (a former Microsoft president) was named Mentor Graphics’ new chairman. With adjustments in the company’s approach to its products completed, the leadership at Mentor Graphics hoped that its offerings, once at the cutting edge of electronic design automation, had again caught up with the needs of its customers.

Merit – The History of Domain Names

Merit Network founded

Date: 01/01/1966

Merit Network, Inc., is a nonprofit member-governed organization providing high-performance computer networking and related services to educational, government, health care, and nonprofit organizations, primarily in Michigan. Created in 1966, Merit operates the longest running regional computer network in the United States.

Organization

Created in 1966 as the Michigan Educational Research Information Triad by Michigan State University (MSU), the University of Michigan (U-M), and Wayne State University (WSU),[2] Merit was formed to investigate resource sharing by connecting the mainframe computers at these three Michigan public research universities. Merit’s initial three-node packet-switched computer network was operational in October 1972, using custom hardware based on DEC PDP-11 minicomputers and software developed by the Merit staff and the staffs at the three universities. Over the next dozen years the initial network grew as new services such as dial-in terminal support, remote job submission, remote printing, and file transfer were added; as gateways to the national and international Tymnet, Telenet, and Datapac networks were established; as support for the X.25 and TCP/IP protocols was added; as additional computers such as WSU’s MVS system and the U-M electrical engineering department’s VAX running UNIX were attached; and as new universities became Merit members.

Merit’s involvement in national networking activities started in the mid-1980s with connections to the national supercomputing centers and work on the 56 kbit/s National Science Foundation Network (NSFNET), the forerunner of today’s Internet. From 1987 until April 1995, Merit re-engineered and managed the NSFNET backbone service. MichNet, Merit’s regional network in Michigan, was attached to NSFNET, and in the early 1990s Merit began extending “the Internet” throughout Michigan, offering both direct-connect and dial-in services, and upgrading the statewide network from 56 kbit/s to 1.5 Mbit/s, and on to 45, 155, and 622 Mbit/s, and eventually 1 and 10 Gbit/s. In 2003 Merit began its transition to a facilities-based network, using fiber optic facilities that it shares with its members, that it purchases or leases under long-term agreements, or that it builds. In addition to network connectivity services, Merit offers a number of related services within Michigan and beyond, including Internet2 connectivity, VPN, network monitoring, Voice over IP (VoIP), cloud storage, e-mail, Domain Name and Network Time services, VMware and Zimbra software licensing, colocation, Michigan Cyber Range cybersecurity courses, and professional development seminars, workshops, classes, conferences, and meetings.

Creating the network: 1966 to 1973

The Michigan Educational Research Information Triad (MERIT) was formed in the fall of 1966 by Michigan State University (MSU), University of Michigan (U-M), and Wayne State University (WSU). More often known as the Merit Computer Network or simply Merit, it was created to design and implement a computer network connecting the mainframe computers at the universities. In the fall of 1969, after funding for the initial development of the network had been secured, Bertram Herzog was named director of MERIT. Eric Aupperle was hired as senior engineer, and was charged with finding hardware to make the network operational. The National Science Foundation (NSF) and the State of Michigan provided the initial funding for the network. In June 1970, the Applied Dynamics Division of Reliance Electric in Saline, Michigan was contracted to build three Communication Computers, or CCs. Each would consist of a Digital Equipment Corporation (DEC) PDP-11 computer, dataphone interfaces, and interfaces that would attach them directly to the mainframe computers. The cost was to be slightly less than the $300,000 ($1,828,000, adjusted for inflation) originally budgeted. Merit staff wrote the software that ran on the CCs, while staff at each of the universities wrote the mainframe software to interface to the CCs. The first completed connection linked the IBM S/360-67 mainframe computers running the Michigan Terminal System at WSU and U-M, and was publicly demonstrated on December 14, 1971. The MSU node was completed in October 1972, adding a CDC 6500 mainframe running Scope/Hustler. The network was officially dedicated on May 15, 1973.

Expanding the network: 1974 to 1985

In 1974, Herzog returned to teaching in the University of Michigan’s Industrial Engineering Department, and Aupperle was appointed as director. Use of the all-uppercase name “MERIT” was abandoned in favor of the mixed-case “Merit”. The first network connections were host-to-host interactive connections, which allowed person-to-remote-computer or local-computer-to-remote-computer interactions. To this, terminal-to-host connections, batch connections (remote job submission, remote printing, batch file transfer), and interactive file copy were added. And, in addition to connecting to host computers over custom hardware interfaces, the ability to connect to hosts or other networks over groups of asynchronous ports and via X.25 was added. Merit interconnected with Telenet (later SprintNet) in 1976 to give Merit users dial-in access from locations around the United States. Dial-in access within the U.S. and internationally was further expanded via Merit’s interconnections to Tymnet, ADP’s Autonet, and later still the IBM Global Network, as well as Merit’s own expanding network of dial-in sites in Michigan, New York City, and Washington, D.C. In 1978, Western Michigan University (WMU) became the fourth member of Merit (prompting a name change, as the acronym no longer made sense once the group was no longer a triad). To expand the network, the Merit staff developed new hardware interfaces for the Digital PDP-11 based on printed circuit technology. The new system became known as the Primary Communications Processor (PCP), with the earliest PCPs connecting a PDP-10 located at WMU and a DEC VAX running UNIX at U-M’s Electrical Engineering department. A second hardware technology initiative in 1983 produced the smaller Secondary Communication Processors (SCP), based on DEC LSI-11 processors. The first SCP was installed at the Michigan Union in Ann Arbor, creating UMnet, which extended Merit’s network connectivity deeply into the U-M campus. In 1983 Merit’s PCP and SCP software was enhanced to support TCP/IP, and Merit interconnected with the ARPANET.

National networking, NSFNET, and the Internet: 1986 to 1995

In 1986 Merit engineered and operated leased lines and satellite links that allowed the University of Michigan to access the supercomputing facilities at Pittsburgh, San Diego, and NCAR. In 1987, Merit, IBM and MCI submitted a winning proposal to NSF to implement a new NSFNET backbone network. The new NSFNET backbone network service began 1 July 1988. It interconnected supercomputing centers around the country at 1.5 megabits per second (T1), 24 times faster than the 56 kilobits-per-second speed of the previous network. The NSFNET backbone grew to link scientists and educators on university campuses nationwide and connect them to their counterparts around the world. The NSFNET project caused substantial growth at Merit, nearly tripling the staff and leading to the establishment of a new 24-hour Network Operations Center at the U-M Computer Center. In September 1990 in anticipation of the NSFNET T3 upgrade and the approaching end of the 5-year NSFNET cooperative agreement, Merit, IBM, and MCI formed Advanced Network and Services (ANS), a new non-profit corporation with a more broadly based Board of Directors than the Michigan-based Merit Network. Under its cooperative agreement with NSF, Merit remained ultimately responsible for the operation of NSFNET, but subcontracted much of the engineering and operations work to ANS. In 1991 the NSFNET backbone service was expanded to additional sites and upgraded to a more robust 45 Mbit/s (T3) based network. The new T3 backbone was named ANSNet and provided the physical infrastructure used by Merit to deliver the NSFNET Backbone Service. On April 30, 1995 the NSFNET project came to an end, when the NSFNET backbone service was decommissioned and replaced by a new Internet architecture with commercial ISPs interconnected at Network Access Points provided by multiple providers across the country.

Bringing the Internet to Michigan: 1985 to 2001

During the 1980s, Merit Network grew to serve eight member universities, with Oakland University joining in 1985 and Central Michigan University, Eastern Michigan University, and Michigan Technological University joining in 1987. In 1990, Merit’s board of directors formally changed the organization’s name to Merit Network, Inc., and created the name MichNet to refer to Merit’s statewide network. The board also approved a staff proposal to allow organizations other than publicly supported universities, referred to as affiliates, to be served by MichNet without prior board approval. 1992 saw major upgrades of the MichNet backbone to use Cisco routers in addition to the PDP-11 and LSI-11 based PCPs and SCPs. This was also the start of relentless upgrades to higher and higher speeds, first from 56 kbit/s to T1 (1.5 Mbit/s), followed by multiple T1s (3.0 to 10.5 Mbit/s), T3 (45 Mbit/s), OC3c (155 Mbit/s), OC12c (622 Mbit/s), and eventually one and ten gigabits (1,000 to 10,000 Mbit/s). In 1993 Merit’s first Network Access Server (NAS) using RADIUS (Remote Authentication Dial-In User Service) was deployed. The NASs supported dial-in access separate from the Merit PCPs and SCPs. In 1993 Merit started what would become an eight-year phase-out of its aging PCP and SCP technology. By 1998 the only PCPs still in service were supporting Wayne State University’s MTS mainframe host. During their remarkably long twenty-year life cycle, the number of PCPs and SCPs in service reached a high of roughly 290 in 1991, supporting a total of about 13,000 asynchronous ports and numerous LAN and WAN gateways. In 1994 the Merit Board endorsed a plan to expand the MichNet shared dial-in service, leading to a rapid expansion of the Internet dial-in service over the next several years.

In 1994 there were 38 shared dial-in sites. By 1996 there were 131 shared dial-in sites, and more than 92% of Michigan residents could reach the Internet with a local phone call. And by the end of 2001 there were 10,733 MichNet shared dial-in lines in over 200 Michigan cities plus New York City, Washington, D.C., and Windsor, Ontario, Canada. As an outgrowth of this work, in 1997, Merit created the Authentication, Authorization, and Accounting (AAA) Consortium. During 1994 an expanded K-12 outreach program at Merit helped lead to the formation of six regional K-12 groups known as Hubs. The Hubs and Merit applied for and were awarded funding from the Ratepayer fund, which, as part of a settlement of an earlier Ameritech of Michigan ratepayer overcharge, had been established by the Michigan Public Service Commission to further the K-12 community’s network connectivity.

Transition to the commercial Internet, Internet2 and the vBNS: 1994 to 2005

In 1994, as the NSFNET project was drawing to a close, Merit organized the meetings for the North American Network Operators’ Group (NANOG). NANOG evolved from the NSFNET “Regional-Techs” meetings, where technical staff from the regional networks met to discuss operational issues of common concern with each other and with the Merit engineering staff. At the February 1994 regional techs meeting in San Diego, the group revised its charter to include a broader base of network service providers, and subsequently adopted NANOG as its new name. In 2000, Merit spun off two for-profit companies: NextHop Technologies, which developed and marketed GateD routing software, and Interlink Networks, which specialized in authentication, authorization, and accounting (AAA) software. Eric Aupperle retired as president in 2001, after 27 years at Merit. He was appointed President Emeritus by the Merit board. Hunt Williams became Merit’s new president.

Creating a facilities-based network, adding new services: 2003 to the present

Licklider – The History of Domain Names

J. C. R. LICKLIDER

Licklider and the ARPANET idea

Date: 01/01/1960

Another fundamental pioneer in the call for a global network was J. C. R. Licklider, who articulated the first ideas of the internet in his January 1960 paper “Man-Computer Symbiosis”. In it he envisioned a “thinking center” that would “incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval”, and wrote that “the picture readily enlarges itself into a network of such centers, connected to one another by wide-band communication lines and to individual users by leased-wire services.”

Licklider created the idea of a universal network, spread his vision throughout the IPTO, and inspired his successors to realize his dream by inventing the ARPANET, which led to the Internet. He also developed the concepts that led to the idea of the Netizen.

Licklider started his scientific career as an experimental psychologist and professor at MIT interested in psychoacoustics, the study of how the human ear and brain convert air vibrations into the perception of sound. At MIT he also worked on the SAGE project as a human factors expert, which helped convince him of the great potential for human/computer interfaces.

Licklider’s psychoacoustics research at MIT took an enormous amount of data analysis, requiring construction of several types of mathematical graphs based on the data collected by his research. By the late 1950s he realized that his mathematical models of pitch perception had grown too complex to solve, even with the analog computers then available, and that he wouldn’t be able to build a working mechanical model and advance the theory of psychoacoustics as he had wished.

In response to this revelation, in 1957 Licklider spent a day measuring the amount of time it took him to perform the individual tasks of collecting, sorting, and analyzing information, and then measured the time it took him to make decisions based on the data once it was collected. He discovered that the preparatory work took about 85% of the time, and that the decision could then be made almost immediately once the background data was available. This exercise had a powerful effect, and convinced him that one of the most useful long term contributions of computer technology would be to provide automated, extremely fast support systems for human decision making.

After Licklider was awarded tenure at MIT, he joined the company Bolt, Beranek & Newman to pursue his psychoacoustic research, where he was given access to one of the first minicomputers, a PDP-1 from Digital Equipment Corporation. The PDP-1 had computing power comparable to a mainframe of the time, at a cost of $250K, and took up only as much space as a couple of household refrigerators. Most importantly, instead of having to hand over punched cards to an operator and wait days for a printed response, Licklider could program the PDP-1 directly on paper tape, even stopping it and changing the tape when required, and view the results directly on a display screen in real time. The PDP-1 was among the first interactive computers.

Licklider quickly realized that minicomputers were becoming powerful enough to support the type of automated library systems that Vannevar Bush had described. He developed these ideas in his influential book “Libraries of the Future” (published in 1965), describing how a computer could provide an automated library with simultaneous remote use by many different people through access to a common database.

Licklider also realized that interactive computers could provide more than a library function, and could provide great value as automated assistants. He captured his ideas in a seminal paper in 1960 called Man-Computer Symbiosis, in which he described a computer assistant that could answer questions, perform simulation modeling, graphically display results, and extrapolate solutions for new situations from past experience. Like Norbert Wiener, Licklider foresaw a close symbiotic relationship between computer and human, including sophisticated computerized interfaces with the brain.

Licklider also quickly appreciated the power of computer networks, and predicted the effects of technological distribution, describing how the spread of computers, programs, and information among a large number of computers connected by a network would create a system more powerful than could be built by any one organization. In August 1962, Licklider and Welden Clark elaborated on these ideas in the paper “On-Line Man-Computer Communication”, one of the first conceptions of the future Internet.

In October 1962, Licklider was hired as Director of the newly established Information Processing Techniques Office (IPTO) at ARPA. His mandate was to find a way to realize his networking vision and interconnect the DoD’s main computers at the Pentagon, Cheyenne Mountain, and SAC HQ. He started by writing a series of memos to the other members of the office describing the benefits of creating a global, distributed network, addressing some memos to “Members and Affiliates of the Intergalactic Computer Network”. Licklider’s vision of a universal network had a powerful influence on his successors at the IPTO, and provided the original intellectual push that led to the realization of the ARPANET only seven years later.

In 1964, Licklider left the IPTO and went to work at IBM. In 1968, he went back to MIT to lead Project MAC. In 1973, he returned to lead the IPTO for two years. In 1979, he was one of the founding members of Infocom.

Netizen

In April 1968, Licklider and Robert Taylor published a ground-breaking paper, “The Computer as a Communication Device”, in Science and Technology, portraying the forthcoming universal network not just as a service for the transmission of data, but as a tool whose value came from the generation of new information through interaction with its users. In other words, the old golden rule applied to an as-yet-unbuilt network world, where each netizen contributes more to the virtual community than they receive, producing something more powerful and useful than anyone could create by themselves.

Michael Hauben, a widely read Internet pioneer, encountered this spirit still going strong in his studies of online Internet communities in the 1990s, leading to his coinage of the term “net citizen” or “netizen”. Newcomers to the Internet usually experience the same benefit of participating in a larger virtual world, and adopt the spirit of the netizen as it is handed down the generations. It cannot be a coincidence that so many Internet technologies are built specifically to leverage the power of community information sharing, such as Usenet newsgroups, IRC, MUDs, and mailing lists. The concept of the netizen is also the foundation for the motivation of netiquette.

LinkedIn – The History of Domain Names

LinkedIn business networking

Date: 01/01/2003

LinkedIn is a business- and employment-oriented social networking service that operates via websites. Founded on December 14, 2002, and launched on May 5, 2003, it is mainly used for professional networking, including employers posting jobs and job seekers posting their CVs. As of 2015, most of the site’s revenue came from selling access to information about its users to recruiters and sales professionals. As of September 2016, LinkedIn has more than 467 million accounts, of which more than 106 million are active. LinkedIn allows users (workers and employers) to create profiles and “connections” to each other in an online social network which may represent real-world professional relationships. Users can invite anyone (whether a site user or not) to become a connection. The site has an Alexa Internet ranking as the 14th most popular website (October 2016). According to the New York Times, US high school students are now creating LinkedIn profiles to include with their college applications.

Based in the United States, the site is, as of 2013, available in 24 languages, including Arabic, Chinese, English, French, German, Italian, Portuguese, Spanish, Dutch, Swedish, Danish, Romanian, Russian, Turkish, Japanese, Czech, Polish, Korean, Indonesian, Malay, and Tagalog. LinkedIn filed for an initial public offering in January 2011 and traded its first shares on May 19, 2011, under the NYSE symbol “LNKD”.

On June 13, 2016, Microsoft announced that it would acquire LinkedIn for $26.2 billion, a deal expected to be completed by the end of 2016.

LinkedIn is headquartered in Sunnyvale, California, with offices in Omaha, Chicago, Los Angeles, New York, San Francisco, Washington, Sao Paulo, London, Dublin, Amsterdam, Milan, Paris, Munich, Madrid, Stockholm, Singapore, Hong Kong, China, Japan, Australia, Canada, India and Dubai. In January 2016, the company had around 9,200 employees.

LinkedIn’s CEO is Jeff Weiner, previously a Yahoo! Inc. executive. Founder Reid Hoffman, previously CEO of LinkedIn, is Chairman of the Board. It is funded by Sequoia Capital, Greylock, Bain Capital Ventures, Bessemer Venture Partners and the European Founders Fund. LinkedIn reached profitability in March 2006. Through January 2011, the company had received a total of $103 million of investment.

Founding to 2010

The company was founded by Reid Hoffman and founding team members from PayPal and Socialnet.com (Allen Blue, Eric Ly, Jean-Luc Vaillant, Lee Hower, Konstantin Guericke, Stephen Beitzel, David Eves, Ian McNish, Yan Pujante, Chris Saccheri). In late 2003, Sequoia Capital led the Series A investment in the company. In June 2008, Sequoia Capital, Greylock Partners, and other venture capital firms purchased a 5% stake in the company for $53 million, giving the company a post-money valuation of approximately $1 billion. In 2006, LinkedIn reached 20 million members. In 2010, LinkedIn opened an international headquarters in Dublin, Ireland, received a $20 million investment from Tiger Global Management LLC at a valuation of approximately $2 billion, announced its first acquisition, mSpoke, and improved its 1% premium subscription ratio. In October of that year, Silicon Alley Insider ranked the company No. 10 on its Top 100 List of most valuable startups. By December, the company was valued at $1.575 billion in private markets.

2011 to present

LinkedIn filed for an initial public offering in January 2011. The company traded its first shares on May 19, 2011, under the NYSE symbol “LNKD”, at $45 per share. Shares of LinkedIn rose as much as 171 percent in their first day of trade on the New York Stock Exchange and closed at $94.25, more than 109 percent above the IPO price. Shortly after the IPO, the site’s underlying infrastructure was revised to allow accelerated revision-release cycles. In 2011, LinkedIn earned $154.6 million in advertising revenue alone, surpassing Twitter, which earned $139.5 million. LinkedIn’s fourth-quarter 2011 earnings soared because of the company’s growing success in the social media world. By this point, LinkedIn had about 2,100 full-time employees, compared to the 500 that it had in 2010.

In Q2 2012, LinkedIn leased 57,120 square feet on three floors of the One Montgomery Tower building in the Financial District of San Francisco, which was expanded to 135,000 square feet by 2014. In May 2012, LinkedIn announced its 2012 Q1 revenues were up to $188.5 million, compared to $93.9 million in Q1 2011. Net income increased 140% over Q1 2011 to $5 million. Revenue for Q2 was estimated to be between $210 million and $215 million. In November 2012, LinkedIn released its third quarter earnings, reporting earnings-per-share of $0.22 on revenue of $252 million. As a result of these numbers, LinkedIn’s stock increased in value, trading at roughly $112 a share.

In April 2014, LinkedIn announced that it had leased 222 Second Street, a 26-story building under construction in San Francisco’s SoMa district, to accommodate up to 2,500 of its employees, with the lease covering 10 years. The goal was to join all San Francisco-based staff (1,250 as of January 2016) in one building, bringing sales and marketing employees together with the research and development team. Staff began moving into the building in March 2016. In February 2016, following an earnings report, LinkedIn’s shares dropped 43.6% within a single day, down to $108.38 per share. LinkedIn lost $10 billion of its market capitalization that day. On June 13, 2016, Microsoft announced it would acquire LinkedIn for $196 a share, a total value of $26.2 billion and the largest acquisition made by Microsoft to date. The acquisition will be an all-cash, debt-financed transaction. Microsoft will allow LinkedIn to “retain its distinct brand, culture and independence”, with Weiner remaining as CEO, reporting to Microsoft CEO Satya Nadella. Analysts believe Microsoft saw the opportunity to integrate LinkedIn with its Office product suite to help better integrate the professional network system with its products. The deal is expected to be complete by the end of 2016.

LinkedIn’s international headquarters in Dublin currently has 1,000 employees. That number is expected to rise to 1,200 with the announced addition of 200 new jobs.

Security and technology

In June 2012, cryptographic hashes of approximately 6.4 million LinkedIn user passwords were stolen by hackers who then published the stolen hashes online. This action is known as the 2012 LinkedIn hack. In response to the incident, LinkedIn asked its users to change their passwords. Security experts criticized LinkedIn for not salting their password file and for using a single iteration of SHA-1. On May 31, 2013 LinkedIn added two-factor authentication, an important security enhancement for preventing hackers from gaining access to accounts. In May 2016, 117 million LinkedIn usernames and passwords were offered for sale online for the equivalent of $2,200. These account details are believed to be sourced from the original 2012 LinkedIn hack, in which the number of user IDs stolen had been underestimated. To handle the large volume of emails sent to its users every day with notifications for messages, profile views, important happenings in their network, and other things, LinkedIn uses the Momentum email platform from Message Systems.
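To see why the experts objected, here is a minimal sketch in Python, not LinkedIn’s actual code (the password and iteration count are illustrative): with unsalted, single-iteration SHA-1, identical passwords hash identically and each guess costs one fast hash, whereas a per-user salt defeats precomputed tables and an iterated scheme such as the standard library’s PBKDF2 makes every guess far more expensive.

    import hashlib
    import hmac
    import os

    # The criticized scheme: unsalted, single-iteration SHA-1.
    # Identical passwords yield identical hashes, so leaked hashes can be
    # attacked with precomputed tables and fast GPU guessing.
    weak = hashlib.sha1(b"hunter2").hexdigest()

    # A salted, iterated alternative using PBKDF2 from the standard library.
    salt = os.urandom(16)    # random per-user salt
    ITERATIONS = 600_000     # slows each guess by orders of magnitude
    strong = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, ITERATIONS)

    # Store the salt next to the hash; recompute and compare at login.
    def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password, salt, ITERATIONS)
        return hmac.compare_digest(candidate, stored)

    assert verify(b"hunter2", salt, strong)

The salt means two users with the same password get different stored hashes, and the iteration count turns an attacker’s billions of guesses per second into thousands.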

Applications

In October 2008, LinkedIn enabled an “applications platform” which allows external online services to be embedded within a member’s profile page. Among the initial applications were an Amazon Reading List that allows LinkedIn members to display books they are reading, a connection to Tripit, and a Six Apart, WordPress and TypePad application that allows members to display their latest blog postings within their LinkedIn profile. In November 2010, LinkedIn allowed businesses to list products and services on company profile pages; it also permitted LinkedIn members to “recommend” products and services and write reviews.

Mobile

A mobile version of the site was launched in February 2008, which gives access to a reduced feature set over a mobile phone. The mobile service is available in six languages: Chinese, English, French, German, Japanese and Spanish. In January 2011, LinkedIn acquired CardMunch, a mobile app maker that scans business cards and converts them into contacts. In June 2013, CardMunch was noted as an available LinkedIn app. In August 2011, LinkedIn revamped its mobile applications for iPhone, Android and HTML5. At the time, mobile page views of the application were increasing roughly 400% year over year, according to CEO Jeff Weiner. In October 2013, LinkedIn announced a service for iPhone users called “Intro”, which inserts a thumbnail of a person’s LinkedIn profile in correspondence with that person when reading mail messages in the native iOS Mail program. This is accomplished by re-routing all emails from and to the iPhone through LinkedIn servers, which security firm Bishop Fox asserts has serious privacy implications, violates many organizations’ security policies, and resembles a man-in-the-middle attack.

Groups

LinkedIn also supports the formation of interest groups, and as of March 29, 2012 there are 1,248,019 such groups whose membership varies from 1 to 744,662. The majority of the largest groups are employment related, although a very wide range of topics are covered, mainly around professional and career issues, and there are 128,000 groups for both academic and corporate alumni. Groups support a limited form of discussion area, moderated by the group owners and managers. Since groups offer the functionality to reach a wide audience without so easily falling foul of anti-spam solutions, there is a constant stream of spam postings, and there now exist a range of firms who offer a spamming service for this very purpose. LinkedIn has devised a few mechanisms to reduce the volume of spam, but recently took the decision to remove the ability of group owners to inspect the email address of new members in order to determine if they were spammers. Groups also keep their members informed through emails with updates to the group, including the most talked-about discussions within members’ professional circles. Groups may be private, accessible to members only, or may be open to Internet users in general to read, though they must join in order to post messages.

In December 2011, LinkedIn announced that it was rolling out polls to groups. In November 2013, LinkedIn announced the addition of Showcase Pages to the platform. In 2014, LinkedIn announced it would be removing Product and Services Pages, paving the way for a greater focus on Showcase Pages.

Job listings

LinkedIn allows users to research companies, non-profit organizations, and governments they may be interested in working for. Typing the name of a company or organization in the search box causes pop-up data about the company or organization to appear. Such data may include the ratio of female to male employees, the percentage of the most common titles/positions held within the company, the location of the company’s headquarters and offices, and a list of present and former employees. In July 2011, LinkedIn launched a new feature allowing companies to include an “Apply with LinkedIn” button on job listing pages. The new plugin allowed potential employees to apply for positions using their LinkedIn profiles as resumes.

Online recruiting

Job recruiters, head hunters, and HR personnel are increasingly using LinkedIn as a source for finding potential candidates. By using the advanced search tools, recruiters can find members matching their specific keywords with a click of a button. They can then reach out to those members by sending a request to connect or by sending InMail about a specific job opportunity they may have. Recruiters also often join industry-based groups on LinkedIn to create connections with professionals in that line of business.

Skills

Since September 2012, LinkedIn has enabled users to “endorse” each other’s skills. This feature also allows users to efficiently provide commentary on other users’ profiles, reinforcing network building. However, there is no way of flagging anything other than positive content. LinkedIn solicits endorsements using algorithms that generate skills members might have. Members cannot opt out of such solicitations, with the result that it sometimes appears that a member is soliciting an endorsement for a non-existent skill.

Publishing platform

LinkedIn continues to add different services to its platform to expand the ways that people use it. On May 7, 2015, LinkedIn added an analytics tool to its publishing platform. The tool allows authors to better track traffic that their posts receive.

Loans – The History of Domain Names

Loans.com sells for $3,000,000 becomes GreatDomains.com

Date: 01/01/2000

Loans.com Domain Auction Fetches Record $3M

Banking behemoth Bank of America has finalized a record $3 million (US$) auction bid for the domain name Loans.com. Domain name auctioneer GreatDomains.com, which hosted the auction, said that the name was sold by a San Jose, California computer consultant. The company claimed that the figure was the highest price ever paid at auction for a domain name, dwarfing the $1 million paid for WallStreet.com last April.

The company added that the highest price ever paid for a domain name was $7 million for the purchase of Business.com in November 1999. That purchase was not at an auction.

A number of financial institutions and investment capital firms apparently bid for Loans.com. The company closed the auction at the end of January and has spent the rest of the time negotiating the terms between the bank and Marcelo Siero, the seller.

Glee-Commerce

Siero, the company said, registered the name in September 1994 with the intention of creating an e-commerce application, but opted later to sell the name.

In a statement issued by GreatDomains.com, Siero said he almost accepted a $100,000 offer for Loans.com before he decided to list it with the company. GreatDomains.com takes a commission of 10 to 15 percent for hosting the auction and brokering the sale.

The company sold the domain name Cinema.com for $700,000 at the same time, but failed to sell Taxes.com — which was widely expected to draw top dollar. Company spokesman Paul Jouve said that recent negotiations for the sale of the name broke off and that it will be resold at auction.

Both Loans.com and Taxes.com have been receiving in excess of a thousand hits per day, Jouve said, as Internet surfers punch in the URL and expect to reach an associated Web site. Bank of America will surely welcome the traffic.

High-Priced Cyberspace

Los Angeles-based GreatDomains.com, which has 312,662 current domain name listings on its site and sells them for an average of $14,500 each, said it has brokered four of the top five auctioned domain names. In addition to Loans.com and Cinema.com, the company also brokered the sale of ForSaleByOwner.com for $835,000 and Drugs.com for $823,000.

Lockheed – The History of Domain Names

Lockheed Corporation

Date: 07/27/1987

On July 27, 1987, Lockheed Corporation registered the lockheed.com domain name, making it the 81st .com domain ever to be registered.

The Lockheed Corporation (originally the Loughead Aircraft Manufacturing Company) was an American aerospace company. Lockheed was founded in 1912 and later merged with Martin Marietta to form Lockheed Martin in 1995.

Company History:

Formed in 1995 via the union of the nation’s second- and third-ranking defense contractors, Lockheed Corporation and Martin Marietta Corporation, Lockheed Martin Corporation is the world’s largest defense contractor. Lockheed Martin further broadened its lead over second-ranked McDonnell Douglas with the January 1996 acquisition of Loral Corporation’s Defense Electronics and Systems Integration business for $9.1 billion, including $2.1 billion of assumed debt. Both Lockheed and Martin Marietta had evolved from relatively small aerospace manufacturers into titans of the global defense industry. A thorough treatment of Lockheed’s history appears elsewhere in this series, while the Martin Marietta saga is recounted here.

In 1905 a youthful Glenn Martin moved with his family to California. In the hills of Santa Ana, Martin built and flew his first experimental gliders. Not long afterwards Martin started a small airplane factory while working as a salesman for Ford and Maxwell cars. Martin applied his earnings from the auto sales, as well as money from barnstorming performances, to finance an airplane business. During this time he hired a man named Donald Douglas to help him develop new airplanes. Soon thereafter, Douglas and Martin collaborated to produce a small flight trainer called the Model TT, which was sold to the U.S. Army and the Dutch government.

On the eve of World War I, Douglas was summoned to Washington to help the Army develop its aerial capabilities. Less than a year later, he became frustrated with the slow-moving bureaucracy in Washington and returned to work for Martin, who had relocated to Cleveland. While there, Douglas directed the development of Martin’s unnamed twin-engine bomber. Neither he nor Martin was willing to compromise or shorten the time needed to develop their airplane; for that reason the “Martin” bomber arrived too late to see action in World War I. Douglas eventually left the company to start his own aircraft company in California, and Martin moved the business to Baltimore in 1929.

Martin continued to impress the military with his aircraft demonstrations even after the war. In July of 1921, off the Virginia Capes, seven Martin MB-2 bombers under the command of General Billy Mitchell sank the captured German battleship Ostfriesland. Continued interest from the War Department led Martin’s company to develop its next generation of airplanes, culminating with the B-10 bomber. The B-10 was a durable bomber, able to carry heavy payloads and cruise 100 miles per hour faster than conventional bombers of the day. Martin’s work on the B-10 bomber earned him the Collier Trophy in 1932.

Although Martin continued to manufacture bombers throughout the 1930s, he also began to branch out into commercial passenger aircraft. With substantial financial backing from Pan Am’s Juan Trippe, Martin developed the M-130 “China Clipper,” the first of which was delivered in 1935. The clipper weighed 26 tons, carried up to 32 passengers and was capable of flying the entire 2,500 miles between San Francisco and Honolulu. Pan Am flew Martin’s planes to a variety of Asian destinations, including Manila and Hong Kong.

But Martin’s consistent development of military aircraft through the decade prepared the company well for the start of World War II. The company produced thousands of airplanes for the Allied war effort, including the A-30 Baltimore, the B-26 and B-29 bombers, the PBM Mariner flying boat, and the 70-ton amphibious Mars air freighter. Martin invited some criticism in 1942 when he suggested that the United States could dispense with its costly two-ocean navy and defense of the Panama Canal if it had enough airplanes like the Mars.

After the war ended Martin continued to manufacture what few airplanes the Army and Navy were still ordering. In 1947 the company re-entered the highly competitive commercial airliner market with a model called the M-202. The development of later aircraft, the M-303 (which was never built) and the M-404, was a severe drain on company finances. Despite loans from the Reconstruction Finance Corporation, the Mellon Bank of Pittsburgh, and a number of other sources, the Martin Company was unable to generate an operating profit.

In July 1949 Chester C. Pearson was hired as president and general manager of the company. Glenn Martin, at the age of 63, was moved up to the position of chairman. Despite the new management and an increase in orders as a result of the Korean War, the Martin Company was still losing money. There were two reasons: first, production of the 404 was interrupted, which, in turn, halted delivery and therefore payment for the aircraft; second, the company hired hundreds of new but unskilled workers, which lowered productivity.

By the end of 1951 George M. Bunker and J. Bradford Wharton, Jr. were asked to take over the management of the company. As part of a refinancing plan Glenn Martin was given the title of honorary chairman and his 275,000 shares in the company were placed in a voting trust. Glenn Martin resigned his position in the company in May of 1953, but remained as a company director until his death. George Bunker succeeded Martin as president and chairman and directed the company for the next 20 years. Pearson, who was demoted to vice-president, later resigned. Bunker and Wharton were successful in arresting the company’s losses and by the end of 1954 declared the company out of debt. Martin, who never married, died of a stroke in 1955 at the age of 69.

Under its new leadership, Martin substantially reengineered a version of the English Electric Canberra bomber for the United States Air Force. Known as the M-272, the bomber was given the Air Force designation B-57. Martin built a number of scout and patrol planes, including the P5M and P6M flying boats, and expanded its interest in the development of rockets and missiles. One of Martin’s first projects in this area was the Viking high-altitude research rocket, followed by the Vanguard missile. By the 1960s the company was a leader in the manufacture of second-generation rockets like the Titan II.

Despite the company’s return to profitability after the Korean War, the larger airplane manufacturers such as Boeing, Douglas and Lockheed had the advantage of size, which allowed them to compete more effectively with smaller companies like Martin, Vought and Grumman. These smaller companies, however, retained very different kinds of engineering teams, which allowed them to continue developing unique aeronautic equipment and weapons systems.

The company was largely unsuccessful in achieving diversification in anything but its number of government customers. Martin remained subject to the whims of the Department of Defense, with its unstable pattern of purchases. By December of 1960 Martin’s last airplane, a Navy P5M-2 antisubmarine patrol plane, had rolled off the production line. From this point forward the company produced only missiles, including the Bullpup, Matador, Titan, and Pershing.

The Martin Company diversified through a merger with the American-Marietta Corporation, a manufacturer of chemical products, paints, inks, household products and construction materials, in 1961. After convincing the government that the merger would not reduce competition in any of either company’s industries, the two companies formed Martin Marietta. The diversification continued in 1968 with the purchase of Harvey Aluminum. The name of the subsidiary was changed to Martin Marietta Aluminum in 1971.

During the late 1960s and early 1970s, Martin Marietta became known for its space projects but remained a major producer of aluminum and construction materials. In 1969 the company’s aerospace unit was selected to lead construction of the two Viking capsules which landed on Mars in 1976. In 1973 the company was awarded a contract to build the external fuel tank for NASA’s space shuttles.

Thomas G. Pownall advanced to the presidency of Martin Marietta in 1977 and became chief executive officer in 1982, succeeding J. Donald Rauth. The same year, Martin Marietta faced the most significant challenge to its existence in its history: a hostile takeover bid from the Bendix Corporation. Bendix, which had earlier abandoned an attempt to take over RCA, was led at the time by Bill Agee. For several years Agee had been divesting Bendix of its residual businesses, accumulating a $500 million “war chest” in the process. In 1982, he leveraged that fund into a $1.5 billion bid for Martin Marietta.

Martin Marietta responded with a surprising turnabout. CEO Pownall invited a friend, Harry Gray of United Technologies, to assist with a takeover strategy of their own. Pownall and Gray agreed to divide Bendix between them in the event that either Martin Marietta or United Technologies was successful in taking over Bendix. The takeover was stalemated until a three-way deal was arranged wherein the Allied Corporation agreed to purchase Martin Marietta’s holdings in Bendix on the condition that Bendix abandon its bid for Martin Marietta. The deal left Allied with a 39 percent ownership of Martin Marietta, but it was agreed that Allied’s voting share would be directed by Martin’s board until such time as Allied could sell its interest in Martin. Bill Agee joined Allied’s board of directors but later left the company. In the meantime, Martin Marietta went $1.34 billion into debt as a result of its takeover defense.

In order to reduce the company’s debt load, Pownall divested its cement, chemical, and aluminum operations and accelerated a reorganization begun before the takeover crisis. By 1986 debt was down to $220 million, giving Martin Marietta a comfortable debt-to-total-capitalization ratio of 24 percent. In retrospect, Tom Pownall acknowledged that his company had emerged from Bendix’s takeover attempt as a more tightly managed and efficient business.

In the late 1980s, the company became active in the design, manufacture, and management of energy, electronics, communications, and information systems, including the highly sophisticated level of computer technology known as artificial intelligence. Even with this diversification, 80 percent of Martin Marietta’s revenues continued to be generated via U.S. government contracts. The company supplied the Pentagon with a number of weapons systems, including the Pershing II missile; a major part of the MX missile; the Patriot missile, designed for air defense of field armies; and the Copperhead, a “smart,” or guided, cannon shell. Martin Marietta also developed a series of night-vision devices for combat aircraft.

The company continued to build external fuel tanks for NASA’s space shuttle program, despite the temporary suspension of that program following the Challenger tragedy. Martin Marietta was also a major contractor for the American space station scheduled to be built in 1993. In another public project, the company was working on a new air traffic management system for the Federal Aviation Administration.

Norman R. Augustine, Tom Pownall’s hand-picked successor, succeeded his mentor as chairman and CEO upon the latter’s mid-1980s retirement. Augustine proved an auspicious choice. Anticipating the impending reductions in the U.S. defense budget, which slid from a high of $96 billion in 1987 down to $75 billion by 1992, the new leader and his executive team developed a three-pronged plan to survive the shakeout. Dubbed the “Peace Dividend Strategy,” the blueprint called for growth through acquisition, diversification into civil and commercial infrastructure markets, and maintaining financial health. Under Augustine, Martin Marietta dove into the wave of consolidation that swept over the American defense industry in the early 1990s. He guided the $3 billion acquisition of General Electric Co.’s aerospace operations in 1992. The merger, which added about $6 billion in annual sales, boosted Martin Marietta’s capabilities in digital processing, artificial intelligence, and electronics. Two years later, Martin Marietta expanded its capabilities in the wireless communications and commercial aviation markets with the acquisition of Grumman Corp. for $1.9 billion.

However, Augustine’s most dramatic move came in 1994, when Martin Marietta and Lockheed announced a “merger of equals.” It took the Federal Trade Commission several months to approve the union, which created the world’s largest defense company. While the federal government typically discouraged such massive combinations within the same business area, it regarded this consolidation in the defense industry with favor, since, according to one statement, it “boosts the industry’s efficiency and lowers costs for the government, which in turn benefits taxpayers, shareholders and employees.”

The spring 1995 exchange of stock created an advanced technology conglomerate with interests in the defense, space, energy, and government sectors serving the commercial, civil, and international markets. Daniel M. Tellep, chairman and CEO of Lockheed, held those same positions at the new company. Martin Marietta leader Augustine stepped into the office of president with the promise that he would advance into the top spots upon Tellep’s retirement.

Headquartered in Bethesda, Maryland, Lockheed Martin began a process of consolidation and reorganization even before the merger was completed in March 1995. An organizational consolidation grouped operations around four major business sectors: space and strategic missiles, aeronautics, electronics, and information technology services. The plan merged and eliminated dozens of offices and functions, rendering thousands of jobs redundant in the process. In fact, Lockheed Martin slashed its work force from a combined total of 170,000 people to 130,000 by mid-1995 and expected to furlough another 12,000 by 1999.

The unified company was involved in a number of well-publicized projects, including the Hubble Space Telescope, Motorola’s Iridium satellite telecommunications system, the F-22 Stealth fighter, Titan and Atlas space launch vehicles, the Space Shuttle program, and the space station Freedom.

The January 1996 acquisition of Loral Corp.’s Defense Electronics and Systems Integration business made it clear that Lockheed Martin would not soon relinquish its number-one status. Established in 1948, the Loral division was a $6.8 billion operation and a global leader in defense electronics, communications, space and systems integration. The acquisition was initially categorized as a sixth division, Tactical Systems, at Lockheed Martin. Anthony L. Velocci Jr., an analyst with Aviation Week and Space Technology, predicted that Lockheed Martin would encounter difficulty in consolidating the Loral operations into its own recently reorganized divisions, but that the acquisition would bring economies of scale and boost electronics, tactical systems, and information technology.

Loral Chairman and CEO Bernard Schwartz held those same positions at the newly-formed Lockheed Martin subsidiary and was invited to join the latter company’s board of directors. Schwartz, Tellep, and Augustine became the first members of Lockheed Martin’s three-man office of the chairman as a result of the acquisition.

Jackpot – The History of Domain Names

Jackpot.com domain sells for $500,000

April 18, 2012

A British Virgin Islands company has purchased the domain name Jackpot.com for $500,000.

The sale was brokered by Moniker. The domain name has been listed in multiple Moniker live auctions, but the sale occurred after the most recent live auction at DOMAINfest.

The updated whois record for the domain shows Palek International Ltd as the new owner. The company appears to have been set up just for this web site. However, using DomainTools’ reverse IP tool, I found that the new IP address for Jackpot.com hosts just one other site: lottery.net.

Jonpostel – The History of Domain Names

Jon Postel worked as the manager of IANA until his death in 1998

Date: 01/01/1998

Jonathan Bruce Postel made many significant contributions to the creation of the Internet, particularly in the area of standards. The Economist dubbed him the “God” of the Internet, and many still refer to him as the network’s principal founder. He is largely known for being the Editor of the RFC document series, and for managing the creation and allocation of Top Level Domains and IP addresses in the pre-ICANN era. When he passed away he was the Director of the University of Southern California’s Information Sciences Institute’s Computer Network Division, where he led a staff of 70. He pioneered many initiatives which led to the creation of the modern Internet and its governing body, ICANN; he established IANA, ICANN’s precursor and the current Internet numbering authority.

Jon Postel’s technical influence can be seen at the very heart of many of the protocols which make the Internet work: TCP/IP determines the way data is moved through a network; SMTP allows us to send emails; and DNS, the Domain Name System, helps people make sense of the Internet. He contributed to these and many other technologies. He studied at UCLA, ultimately gaining his Ph.D. in computer science in 1974. Those studies led to his early involvement in the ARPANET project, the packet-switching network from which the modern Internet evolved.

In addition, he was involved with the Request for Comments (RFC) document series, which contains the standards and practices of the Internet’s infrastructure. For almost three decades, Jon Postel was RFC Editor, shepherding drafts through the open consensus processes that characterize Internet development efforts.

For many, Jon’s greatest contribution to the Internet was his role in creating the Internet Assigned Numbers Authority (IANA). This task – which he volunteered to take on and which he at first performed manually – provided the stability the Internet’s numbering and protocol management systems needed for it to grow and scale. He was also involved with the Los Nettos network (a regional network for the greater Los Angeles area) and was one of the Internet Society’s founders and its first individual member; he served as a Trustee from 1993 to 1998.

Jon voluntarily took on the task of founding and running IANA, the Internet’s necessary numbering authority. He initially performed all numbering procedures and allocations manually. Thus, in Vint Cerf’s words, he kept track of the names of all things in the networked universe.[16] IANA sprang from the expansion of the ARPANET, and the vision of breaking messages into packets, each carrying an address, and sending them over a network to find their own way to another computer; the packets would then be reassembled into the original message. For this system to function, each computer had to have an individual address that would be both intelligible and constant; Jon invented this numbering address scheme. His system also allowed the numbers that computers used for addresses to be translated into names, so that a server could be reached at an address like www.example.com instead of by typing in something like 192.0.2.196. As the early network was quite small, Jon initially kept track of all of the existent addresses on scraps of paper. As the network grew, a more formal organization was needed, and USC’s ISI was contracted by the U.S. government to manage the address system, with Jon Postel as its founder and director. Thus, Jon was influential in establishing the protocols of the DNS, the roles of registries and registrars, and all necessary technical standards.
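
That name-to-number mapping is what DNS resolution still provides today. As a minimal illustration (a sketch using only the Python standard library; www.example.com is simply the placeholder name used above):

    import socket

    # Resolve a hostname to the numeric addresses DNS maps it to.
    for *_, sockaddr in socket.getaddrinfo("www.example.com", 80,
                                           proto=socket.IPPROTO_TCP):
        print(sockaddr[0])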

He died October 16, 1998 at the age of 55.

Kesmai – The History of Domain Names

Kesmai Corporation – kesmai.com was registered

Date: 10/27/1986

On October 27, 1986, Kesmai Corporation registered the kesmai.com domain name, making it the 30th .com domain ever to be registered.

Kesmai was a pioneering game developer and online game publisher, founded in 1981 by Kelton Flinn and John Taylor. The company was best known for the combat flight simulator Air Warrior on the GEnie online service, one of the first graphical massively multiplayer online games (MMOGs), launched in 1987. It also developed a text-based multi-user dungeon (MUD), Island of Kesmai, which ran on CompuServe.

Based in Charlottesville, VA, Kesmai was a world leader in multiplayer online games and the parent company of ARIES Online Games, Kesmai Studios and GameStorm. The company developed, published, and distributed interactive gaming content to over 12 million paying subscribers of America Online, Prodigy, CompuServe, EarthLink Network, Delphi, and major websites throughout the Internet. Popular Kesmai titles included Air Warrior, Online Casino, Harpoon Online, Legends of Kesmai, MultiPlayer BattleTech, Star Rangers Online, Stellar Emperor, CatchWord, Jack Nicklaus Online Tour and a collection of classic board and card games.

Kesmai GameStorm was unique among early online services. Instead of being a matchmaking service for players of popular retail computer games, GameStorm specialized in proprietary, online-only games for large numbers of players, though it was an expensive service, charging $9.95 a month. It was also one of the earliest browser-based services, in which the player did not need to download the full game to play.

The company was acquired by Rupert Murdoch’s News Corp. in 1994. The company continued to develop massively multiplayer games such as Air Warrior 2 and Legends of Kesmai. News Corp distributed their games through AOL. However, this proved a contentious courtship: in 1997, Kesmai Corp. filed a suit in U.S. District Court for the Eastern District of Virginia charging that AOL was exercising “monopolistic control” over online services and Internet access, preventing small online content developers, like Kesmai, from distributing their products. The lawyer for News Corp, Jonathan S. Abady, revealed that the case was settled in Kesmai’s favor.

Electronic Arts bought the company from News Corp in 1999, but Kesmai’s studios and subsidiaries were closed in 2001, as online service was increasingly sold to consumers by cable providers with little interest in destination landing pages, rather than by the destination online services of old.

Kevin Dunlap – The History of Domain Names

Kevin Dunlap of DEC significantly re-wrote the DNS implementation

Date: 01/01/1985

The first versions of BIND were developed at the University of California, Berkeley. All versions up to 4.8.3 were developed under the responsibility of Berkeley’s Computer Systems Research Group. From 1985 to 1987, Kevin Dunlap worked on BIND; Dunlap was employed by Digital Equipment Corporation and posted at the university. Versions 4.9 and 4.9.1 were released by Digital Equipment Corporation (now part of Hewlett-Packard), with DEC employee Paul Vixie leading the development. Version 4.9.2 was released by Vixie Enterprises. From version 4.9.3 onward, development was in the hands of the Internet Systems Consortium (ISC) and was financially supported by ISC’s sponsors. In 1997 the first production-ready version of BIND 8 came onto the market; the latest version, BIND 9, was completely redeveloped.

In 1985, Kevin Dunlap of DEC substantially revised the DNS implementation. Mike Karels, Phil Almquist, and Paul Vixie have maintained BIND since then. BIND was ported to the Windows NT platform in the early 1990s. BIND was widely distributed, especially on Unix systems, and is still the most widely used DNS software on the Internet.

Keydrive – The History of Domain Names

KeyDrive Acquires Moniker and SnapNames from Oversee

FEBRUARY 3 2012

Luxembourg-based KeyDrive S.A. announced on Friday it has acquired the Moniker and SnapNames business units of online marketing firm Oversee.  With the acquisition of the two businesses, KeyDrive S.A. manages more than 6 million domains, making it the sixth largest ICANN registrar in the world in terms of the number of managed gTLD domain names.

Moniker and SnapNames offer businesses and individuals a range of services for domain name registration, acquisition, brokerage and sales.  The two companies have been instrumental in the advance of domain aftermarket sales in the industry, with Moniker introducing the live domain name auction concept and SnapNames operating the largest online auction of expired and deleting domain names.

“The purchase of these two leaders in the domain aftermarket perfectly fits our global growth strategy,” said Alexander Siffrin, chairman of KeyDrive S.A. and CEO and founder of Key-Systems. “We now have the opportunity to extend our global outreach, target a broader customer base and cross-sell our services. Furthermore, our European clients will gain access to US buyers and sellers of domain names. We’re delighted to welcome the Moniker and SnapNames teams to KeyDrive S.A.”

Kindlefire – The History of Domain Names

Amazon Purchases Kindlefire.com

Sep 28, 2011

Amazon Confirms The purchase of “Kindle Fire” On Their Website

In a smart move, just before announcing the new Kindle Fire HD family of products to the world, Amazon acquired the domain name. According to Whois records, the name switched hands from its previous owner to Amazon.

The New York resident who sold the domain name didn’t divulge any details, but the domain name went under whois privacy in early 2010. It remains that way today.

Upon purchase of the domain, and on the very same day it unveiled the first-generation Kindle Fire in 2011, Amazon registered well over 500 domain names related to the Kindle Fire and Silk browser products through the internet brand-protection company MarkMonitor.

KSR – The History of Domain Names

Kendall Square Research – KSR.com was registered

Date: 11/24/1987

On November 24, 1987, Kendall Square Research registered the ksr.com domain name, making it the 99th .com domain ever to be registered.

Kendall Square Research (KSR) was a supercomputer company headquartered originally in Kendall Square in Cambridge, Massachusetts in 1986, near Massachusetts Institute of Technology (MIT). It was co-founded by Steven Frank and Henry Burkhardt III, who had formerly helped found Data General and Encore Computer and was one of the original team that designed the PDP-8. KSR produced two models of supercomputer, the KSR1 and KSR2.

History

As the company scaled up quickly to enter production, they moved in the late 1980s to 170 Tracer Lane, Waltham, Massachusetts. KSR refocused its efforts from the scientific to the commercial marketplace, with emphasis on parallel relational databases and OLTP operations. It then got out of the hardware business, but continued to market some of its data warehousing and analysis software products.

The first KSR1 system was installed in 1991. With new processor hardware, new memory hardware and a novel memory architecture, a new compiler port, a new port of a relatively new operating system, and exposed memory hazards, early systems were noted for frequent system crashes. KSR called their cache-only memory architecture (COMA) by the trade name Allcache; reliability problems with early systems earned it the nickname Allcrash, although memory was not necessarily the root cause of crashes. A few KSR1 models were sold, and as the KSR2 was being rolled out, the company collapsed amid accounting irregularities involving the overstatement of revenue.

KSR used a proprietary processor because 64-bit processors were not commercially available. However, this put the small company in the difficult position of doing both processor design and system design. The KSR processors were introduced in 1991 at 20 MHz and 40 MFlops. At that time, the 32-bit Intel 80486 ran at 50 MHz and 50 MFlops. When the 64-bit DEC Alpha was introduced in 1992, it ran at up to 192 MHz and 192 MFlops, while the 1992 KSR2 ran at 40 MHz and 80 MFlops.

One customer of the KSR2, the Pacific Northwest National Laboratory, a United States Department of Energy facility, purchased an enormous number of spare parts, and kept their machines running for years after the demise of KSR.

KSR, along with many of its competitors, went bankrupt during the collapse of the supercomputer market in the early 1990s. KSR went out of business in February 1994, when its stock was delisted from the stock exchange.

Technology

The KSR systems ran a specially customized version of the OSF/1 operating system, a Unix variant, with programs compiled by a KSR-specific port of the Green Hills Software C and FORTRAN compilers. The architecture was shared memory implemented as a cache-only memory architecture or “COMA”. Being all cache, memory dynamically migrated and replicated in a coherent manner based on the access pattern of individual processors. The processors were arranged in a hierarchy of rings, and the operating system mediated process migration and device access. Instruction decode was hardwired, and pipelining was used. Each KSR1 processor was a custom 64-bit reduced instruction set computing (RISC) CPU clocked at 20 MHz and capable of a peak output of 20 million instructions per second (MIPS) and 40 million floating-point operations per second (MFLOPS). Up to 1088 of these processors could be arranged in a single system, with a minimum of eight. The KSR2 doubled the clock rate to 40 MHz and supported over 5000 processors. The KSR-1 chipset was fabricated by Sharp Corporation while the KSR-2 chipset was built by Hewlett-Packard.

Software

Besides the traditional scientific applications, KSR, with Oracle Corporation, addressed the massively parallel database market for commercial applications. The KSR-1 and -2 supported the Micro Focus COBOL and C/C++ programming languages, as well as the Oracle parallel RDBMS and the MATISSE OODBMS from ADB, Inc. Their own product, the KSR Query Decomposer, complemented the functions of the Oracle product for SQL uses. The TUXEDO transaction monitor for OLTP was also provided. The KAP program (Kuck & Associates Preprocessor) provided preprocessing for source-code analysis and parallelization. The runtime environment, termed PRESTO, was a POSIX-compliant multithreading manager.

Hardware

The KSR-1 processor was implemented as a four-chip set in 1.2 micrometer complementary metal–oxide–semiconductor (CMOS) technology. These chips were: the cell execution unit (CEU), the floating-point unit, the arithmetic logic unit, and the external I/O unit (XIO). The CEU handled instruction fetch (two per clock) and all operations involving memory, such as loads and stores. 40-bit addresses were used, going to full 64-bit addresses later. The integer unit had 32 64-bit-wide registers. The floating-point unit is discussed below. The XIO had a capacity of 30 MB/s throughput to I/O devices. It included 64 control and data registers.

The KSR processor was a 2-wide VLIW, with instructions of six types: memory reference (load and store), execute, control flow, memory control, I/O, and inserted. Execute instructions included arithmetic, logical, and type-conversion operations; they were usually triadic-register in format. Control flow refers to branches and jumps. Branch instructions took two cycles. The programmer (or compiler) could implicitly control the quashing behavior of the subsequent two instructions that would be initiated during the branch; the choices were: always retain the results, retain results if the branch test is true, or retain results if the branch test is false. Memory control provided synchronization primitives. I/O instructions were provided. Inserted instructions were forced into a flow by a coprocessor: inserted loads and stores were used for direct memory access (DMA) transfers, and inserted memory instructions were used to maintain cache coherency. New coprocessors could be interfaced through the inserted-instruction mechanism. IEEE standard floating-point arithmetic was supported, with sixty-four 64-bit-wide registers.

A typical example of KSR assembly would perform an indirect procedure call to an address held in the procedure’s constant block, saving the return address in register c14, while also saving the frame pointer, loading integer register zero with the value 3, and incrementing integer register 31 without changing the condition codes. Most instructions had a delay slot of 2 cycles, and the delay slots were not interlocked, so they had to be scheduled explicitly; otherwise the resulting hazard meant that wrong values were sometimes loaded.

In the KSR design, all of the memory was treated as cache. The design called for no home location, to reduce storage overheads and to let software transparently and dynamically migrate or replicate memory based on where it was being utilized. A Harvard architecture, with separate buses for instructions and data, was used. Each node board contained 256 kB of I-cache and D-cache, essentially primary cache. At each node was 32 MB of memory for main cache. The system-level architecture was shared virtual memory, physically distributed in the machine. The programmer or application saw only one contiguous address space, spanned by a 40-bit address. Traffic between nodes traveled at up to 4 gigabytes per second. The 32 megabytes per node, in aggregate, formed the physical memory of the machine.

Specialized input/output processors could be used in the system, providing scalable I/O. A 1088 node KSR1 could have 510 I/O channels with an aggregate in excess of 15 GB/s. Interfaces such as Ethernet, FDDI, and HIPPI were supported.
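
Taking the per-node and per-channel figures above at face value, a quick back-of-the-envelope check in Python (a sketch only, using decimal units) reproduces both aggregate numbers:

    # Aggregate memory and I/O for a maximal 1088-node KSR1,
    # from the per-node figures quoted above.
    nodes = 1088
    mb_per_node = 32              # main-cache memory per node (MB)
    io_channels = 510
    mb_per_channel = 30           # XIO throughput per channel (MB/s)

    print(nodes * mb_per_node / 1000)           # ~34.8 GB of physical memory
    print(io_channels * mb_per_channel / 1000)  # ~15.3 GB/s aggregate I/O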

Lambdarail – The History of Domain Names

National Lambda Rail founded

Date: 01/01/2003

National LambdaRail, Inc. provides a network for a range of academic disciplines and public-private partnerships. It offers Lambda-based connectivity and transport services, an optical network infrastructure that provides services and applications for meeting the needs of network and scientific research projects; and Ethernet-based services that facilitate point-to-point and multipoint Ethernet transport. The company also provides IP-based services, including routed IP and IP VPN services that connect educational institutions with one another, as well as with various networks, such as regional, national, and international IP-based research and education networks; video-based collaboration services; Internet transit solutions; and co-location, cross connection, and interconnection services. It offers its services for exploration and discovery in the biomedical, engineering, network research, physics, and other disciplines at research institutions and federal agencies in the United States. The company was founded in 2003 and is based in Cypress, California.

National LambdaRail (NLR) is a 12,000-mile (19,000 km), high-speed national computer network owned and operated by the U.S. research and education community.

The goals of the National LambdaRail project are:

  • To bridge the gap between leading-edge optical network research and state-of-the-art applications research;
  • To push beyond the technical and performance limitations of today’s Internet backbones;
  • To provide the growing set of major computationally intensive science (often termed e-Science) projects, initiatives and experiments with the dedicated bandwidth, deterministic performance characteristics, and/or other advanced network capabilities they need; and
  • To enable creative experimentation and innovation that characterized facilities-based network research during the early years of the Internet.

NLR uses fiber-optic lines, and is the first transcontinental 10 Gigabit Ethernet network. Its high capacity (up to 1.6 Tbit/s aggregate), high bitrate (40 Gbit/s implemented; planning for 100 Gbit/s underway) and high availability (99.99% or more) enable National LambdaRail to support demanding research projects. Users include NASA, the National Oceanic and Atmospheric Administration, Oak Ridge National Laboratory, and over 280 research universities and other laboratories. In 2009 National LambdaRail was selected to provide wide-area networking for U.S. laboratories participating in research related to the Large Hadron Collider project, based near Geneva, Switzerland.

It is primarily oriented to aid terascale computing efforts and to be used as a network testbed for experimentation with large-scale networks. National LambdaRail is a university-based and -owned initiative, in contrast with the Abilene Network and Internet2, which are university-corporate sponsorships. National LambdaRail does not impose any acceptable use policies on its users, in contrast to commercial networks. This gives researchers more control to use the network for these research projects. National LambdaRail also supports a production layer on its infrastructure. Links in the network use dense wavelength-division multiplexing (DWDM), which allows up to 64 individual optical wavelengths to be used (depending on hardware configuration at each end) separated by 100 GHz spacing. At present, individual wavelengths are used to carry traditional OC-X (OC3, OC12, OC48 or OC192) time-division multiplexing circuits or Ethernet signals for Gigabit Ethernet or 10 Gigabit Ethernet.

National LambdaRail was founded in 2003 and in 2004 its national, advanced fiber optic network was completed. In addition to being the first transcontinental, production 10 Gigabit Ethernet network, National LambdaRail was also the first intelligently managed, nationwide peering and transit program focused on research applications.

In 2008, a company named Darkstrand purchased capacity on NLR for commercial use; by the end of the year the Chicago-based company was having trouble raising funding due to the Great Recession. In October 2009 Glenn Ricart was named president and CEO. On September 7, 2010 Ricart announced his resignation. On May 24, 2012 the NLR network operations center services were transferred to the Corporation for Education Network Initiatives in California.

In November 2011 control of NLR was purchased from its university membership by billionaire Patrick Soon-Shiong for $100 million; he indicated his intention to upgrade NLR’s infrastructure and repurpose portions of it to support an ambitious healthcare project through NantHealth. The upgrade never took place, and NLR ceased operations in March 2014.

Leonard – The History of Domain Names

Leonard Kleinrock and IMP1

Date: 01/01/1973

Leonard Kleinrock (born June 13, 1934) is an American engineer and computer scientist. A computer science professor at UCLA’s Henry Samueli School of Engineering and Applied Science, he made several important contributions to the field of computer networking, in particular to the theoretical foundations of computer networking. He played an influential role in the development of the ARPANET, the precursor to the Internet, at UCLA.

The Interface Message Processor (IMP) was the packet switching node used to interconnect participant networks to the ARPANET from the late 1960s to 1989. It was the first generation of gateways, which are known today as routers.

The first IMP was delivered to Leonard Kleinrock’s group at UCLA on August 30, 1969. It used an SDS Sigma-7 host computer. Douglas Engelbart’s group at the Stanford Research Institute (SRI) received the second IMP on October 1, 1969. It was attached to an SDS-940 host. The third IMP was installed in University of California, Santa Barbara on November 1, 1969. The fourth and final IMP was installed in the University of Utah in December 1969. The first communication test between two systems (UCLA and SRI) took place on October 29, 1969, when a login to the SRI machine was attempted, but only the first two letters could be transmitted. The SRI machine crashed upon reception of the ‘g’ character.  A few minutes later, the bug was fixed and the login attempt was successfully completed.

BBN developed a program to test the performance of the communication circuits. According to a report filed by Heart, a preliminary test in late 1969 based on a 27-hour period of activity on the UCSB-SRI line found “approximately one packet per 20,000 in error;” subsequent tests “uncovered a 100% variation in this number – apparently due to many unusually long periods of time (on the order of hours) with no detected errors.”

A variant of the IMP existed, called the TIP, which connected terminals instead of computers to the network; it was based on the Honeywell 316. Initially, some Honeywell-based IMPs were replaced with multiprocessing BBN Pluribus IMPs, but ultimately BBN developed a microprogrammed clone of the Honeywell processor.

IMPs were at the heart of the ARPANET until DARPA decommissioned ARPANET in 1989. Most IMPs were either taken apart, junked or transferred to MILNET. Some became artifacts in museums; Kleinrock placed IMP Number One on public view at UCLA. The last IMP on the ARPANET was the one at the University of Maryland.

IPV4 – The History of Domain Names

ICANN announced that it had distributed the last batch of its remaining IPv4 addresses to the world’s five Regional Internet Registries

Date: 02/03/2011

On February 3, 2011, ICANN announced that it had distributed the last batch of its remaining IPv4 addresses to the world’s five Regional Internet Registries, the organizations that manage IP addresses in different regions. These Registries began assigning the final IPv4 addresses within their regions until they ran out completely, which could come as soon as early 2012.

As the last blocks of IPv4 addresses are assigned, adoption of a new protocol—IPv6—is essential to the continued growth of the open Internet. IPv6 will expand Internet address space to 128 bits, making room for approximately 340 trillion trillion trillion addresses (enough to last us for the foreseeable future).
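
Those figures are just powers of two, as a short Python check (a minimal sketch) confirms:

    # 32-bit vs 128-bit address space.
    print(2**32)            # 4294967296, roughly 4.3 billion IPv4 addresses
    print(f"{2**128:.2e}")  # ~3.40e+38, i.e. about 340 trillion trillion trillion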

Google, along with others, has been working for years to implement the larger IPv6 format. We’re also participating in the planned World IPv6 Day, scheduled for June 8, 2011. On this day, all of the participating organizations will enable access to as many services as possible via IPv6.

Today’s ICANN announcement marks a major milestone in the history of the Internet. IPv6, the next chapter, is now under way.

The activation of this procedure was triggered when Latin America and Caribbean Network Information Centre’s (LACNIC) supply of addresses dropped to below 8 million.

This move signals that the global supply of IPv4 addresses is reaching a critical level. As more and more devices come online, the demand for IP addresses rises, and IPv4 is incapable of supplying enough addresses to facilitate this expansion. ICANN encourages network operators around the globe to adopt IPv6, which allows for the rapid growth of the Internet.

“We are grateful for the guidance we’ve received from the RIRs as the number of unallocated IPv4 addresses dwindles,” said Elise Gerich, Vice President of IANA and Technical Operations at ICANN. “This redistribution of the small pool of IPv4 addresses held by us ensures that every region receives an equal number of addresses while we continue to work with the community to raise support for IPv6.”

To handle this critical drop in the number of addresses available to LACNIC, the five RIRs’ policy-making communities established a policy for equal redistribution by ICANN. This is known as the allocation phase outlined in the Global Policy for Post Exhaustion IPv4 Allocation Mechanisms.

“The IANA IPv4 Recovered Address Space registry contained about 20 million IPv4 addresses earlier today and is now about half that size,” said Leo Vegoda, Operational Excellence Manager at ICANN. “Redistributing increasingly small blocks of IPv4 address space is not a sustainable way to grow the Internet. IPv6 deployment is a requirement for any network that needs to survive.”

IPv6 facilitates the exponential growth of the Internet by providing 340 undecillion unique addresses, compared to the 3.7 billion afforded by IPv4.

“To continue to fuel the economic growth and opportunity that is brought by the Internet, we are at the point where rapid adoption of IPv6 is a necessity to maintain that growth,” said Gerich.

IPv6 – The History of Domain Names

IPv6 proposed

Date: 01/01/1995

Internet Protocol version 6 (IPv6) is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion. IPv6 is intended to replace IPv4.

Every device on the Internet is assigned a unique IP address for identification and location definition. With the rapid growth of the Internet after commercialization in the 1990s, it became evident that far more addresses would be needed to connect devices than the IPv4 address space had available. By 1998, the Internet Engineering Task Force (IETF) had formalized the successor protocol. IPv6 uses a 128-bit address, theoretically allowing 2^128, or approximately 3.4×10^38 addresses. The actual number is slightly smaller, as multiple ranges are reserved for special use or completely excluded from use. The total number of possible IPv6 addresses is more than 7.9×10^28 times as many as IPv4, which uses 32-bit addresses and provides approximately 4.3 billion addresses. The two protocols are not designed to be interoperable, complicating the transition to IPv6. However, several IPv6 transition mechanisms have been devised to permit communication between IPv4 and IPv6 hosts.

IPv6 provides other technical benefits in addition to a larger addressing space. In particular, it permits hierarchical address allocation methods that facilitate route aggregation across the Internet, and thus limit the expansion of routing tables. The use of multicast addressing is expanded and simplified, and provides additional optimization for the delivery of services. Device mobility, security, and configuration aspects have been considered in the design of the protocol.

IPv6 addresses are represented as eight groups of four hexadecimal digits with the groups being separated by colons, for example 2001:0db8:0000:0042:0000:8a2e:0370:7334, but methods to abbreviate this full notation exist.
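
Python’s standard ipaddress module applies these abbreviation rules: leading zeros are dropped from each group, and a run of two or more all-zero groups collapses to “::”. A minimal sketch using the address from the text:

    import ipaddress

    addr = ipaddress.ip_address("2001:0db8:0000:0042:0000:8a2e:0370:7334")
    print(addr.compressed)  # 2001:db8:0:42:0:8a2e:370:7334
    print(addr.exploded)    # 2001:0db8:0000:0042:0000:8a2e:0370:7334

    # A longer run of zero groups is collapsed to "::".
    print(ipaddress.ip_address("2001:db8::1").exploded)
    # 2001:0db8:0000:0000:0000:0000:0000:0001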

Main features

IPv6 is an Internet Layer protocol for packet-switched internetworking and provides end-to-end datagram transmission across multiple IP networks, closely adhering to the design principles developed in the previous version of the protocol, Internet Protocol Version 4 (IPv4). IPv6 was first formally described in Internet standard document RFC 2460, published in December 1998.

In addition to offering more addresses, IPv6 also implements features not present in IPv4. It simplifies aspects of address assignment (stateless address autoconfiguration), network renumbering, and router announcements when changing network connectivity providers. It simplifies processing of packets in routers by placing the responsibility for packet fragmentation into the end points. The IPv6 subnet size is standardized by fixing the size of the host identifier portion of an address to 64 bits to facilitate an automatic mechanism for forming the host identifier from link layer addressing information (MAC address). Network security was a design requirement of the IPv6 architecture, and included the original specification of IPsec.

IPv6 does not specify interoperability features with IPv4, but essentially creates a parallel, independent network. Exchanging traffic between the two networks requires translator gateways employing one of several transition mechanisms, such as NAT64, or a tunneling protocol like 6to4, 6in4, or Teredo.
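
To give one concrete example of a transition mechanism: 6to4 derives an IPv6 /48 prefix directly from a host’s 32-bit IPv4 address by embedding it after the reserved 2002::/16 prefix. A minimal Python sketch, using an address from the IPv4 documentation range purely for illustration:

    import ipaddress

    v4 = ipaddress.ip_address("192.0.2.4")  # documentation address, for illustration
    # Embed the 32 IPv4 bits in bits 16-47 of the IPv6 address (2002:V4ADDR::/48).
    net = ipaddress.ip_network((0x2002 << 112 | int(v4) << 80, 48))
    print(net)                                                 # 2002:c000:204::/48
    print(ipaddress.ip_address("2002:c000:204::1").sixtofour)  # 192.0.2.4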

Motivation and origin

IPv4

Internet Protocol Version 4 (IPv4) was the first publicly used version of the Internet Protocol. IPv4 was developed as a research project by the Defense Advanced Research Projects Agency (DARPA), a United States Department of Defense agency, before becoming the foundation for the Internet and the World Wide Web. It is currently described by IETF publication RFC 791 (September 1981), which replaced an earlier definition (RFC 760, January 1980). IPv4 included an addressing system that used numerical identifiers consisting of 32 bits. These addresses are typically displayed in quad-dotted notation as decimal values of four octets, each in the range 0 to 255, or 8 bits per number. Thus, IPv4 provides an addressing capability of 2^32 or approximately 4.3 billion addresses. Address exhaustion was not initially a concern in IPv4 as this version was originally presumed to be a test of DARPA’s networking concepts. During the first decade of operation of the Internet, it became apparent that methods had to be developed to conserve address space. In the early 1990s, even after the redesign of the addressing system using a classless network model, it became clear that this would not suffice to prevent IPv4 address exhaustion, and that further changes to the Internet infrastructure were needed.

The last unassigned top-level address blocks of 16 million IPv4 addresses were allocated in February 2011 by the Internet Assigned Numbers Authority (IANA) to the five regional Internet registries (RIRs). However, each RIR still has available address pools and is expected to continue with standard address allocation policies until one /8 Classless Inter-Domain Routing (CIDR) block remains. After that, only blocks of 1024 addresses (/22) will be provided from the RIRs to a local Internet registry (LIR). As of September 2015, all of Asia-Pacific Network Information Centre (APNIC), the Réseaux IP Européens Network Coordination Centre (RIPE NCC), Latin America and Caribbean Network Information Centre (LACNIC), and American Registry for Internet Numbers (ARIN) have reached this stage. This leaves African Network Information Center (AFRINIC) as the sole regional internet registry that is still using the normal protocol for distributing IPv4 addresses.

Working-group proposals

By the beginning of 1992, several proposals appeared for an expanded Internet addressing system and by the end of 1992 the IETF announced a call for white papers. In September 1993, the IETF created a temporary, ad-hoc IP Next Generation (IPng) area to deal specifically with such issues. The new area was led by Allison Mankin and Scott Bradner, and had a directorate with 15 engineers from diverse backgrounds for direction-setting and preliminary document review: The working-group members were J. Allard (Microsoft), Steve Bellovin (AT&T), Jim Bound (Digital Equipment Corporation), Ross Callon (Wellfleet), Brian Carpenter (CERN), Dave Clark (MIT), John Curran (NEARNET), Steve Deering (Xerox), Dino Farinacci (Cisco), Paul Francis (NTT), Eric Fleischmann (Boeing), Mark Knopper (Ameritech), Greg Minshall (Novell), Rob Ullmann (Lotus), and Lixia Zhang (Xerox).

The Internet Engineering Task Force adopted the IPng model on 25 July 1994, with the formation of several IPng working groups. By 1996, a series of RFCs was released defining Internet Protocol version 6 (IPv6), starting with RFC 1883. (Version 5 was used by the experimental Internet Stream Protocol.)

It is widely expected that the Internet will use IPv4 alongside IPv6 for the foreseeable future. Direct communication between the IPv4 and IPv6 network protocols is not possible; therefore, intermediary trans-protocol systems are needed as a communication conduit between IPv4 and IPv6 whether on a single device or among network nodes.

Comparison with IPv4

On the Internet, data is transmitted in the form of network packets. IPv6 specifies a new packet format, designed to minimize packet header processing by routers. Because the headers of IPv4 packets and IPv6 packets are significantly different, the two protocols are not interoperable. However, in most respects, IPv6 is an extension of IPv4. Most transport and application-layer protocols need little or no change to operate over IPv6; exceptions are application protocols that embed Internet-layer addresses, such as File Transfer Protocol (FTP) and Network Time Protocol (NTP), where the new address format may cause conflicts with existing protocol syntax.

Larger address space

The main advantage of IPv6 over IPv4 is its larger address space. The length of an IPv6 address is 128 bits, compared with 32 bits in IPv4. The address space therefore has 2^128 or approximately 3.4×10^38 addresses.

In addition, the IPv4 address space is poorly allocated; in 2011, approximately 14% of all available addresses were utilized. While these numbers are large, it was not the intent of the designers of the IPv6 address space to assure geographical saturation with usable addresses. Rather, the longer addresses simplify allocation of addresses, enable efficient route aggregation, and allow implementation of special addressing features. In IPv4, complex Classless Inter-Domain Routing (CIDR) methods were developed to make the best use of the small address space. The standard size of a subnet in IPv6 is 2^64 addresses, the square of the size of the entire IPv4 address space. Thus, actual address space utilization rates will be small in IPv6, but network management and routing efficiency are improved by the large subnet space and hierarchical route aggregation.
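
The subnet arithmetic is easy to verify with the ipaddress module (a small sketch):

    import ipaddress

    # A standard /64 subnet holds 2^64 addresses:
    # the square of the entire 32-bit IPv4 address space.
    net = ipaddress.ip_network("2001:db8::/64")
    assert net.num_addresses == 2**64 == (2**32) ** 2
    print(net.num_addresses)  # 18446744073709551616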

Renumbering an existing network for a new connectivity provider with different routing prefixes is a major effort with IPv4. With IPv6, however, changing the prefix announced by a few routers can in principle renumber an entire network, since the host identifiers (the least-significant 64 bits of an address) can be independently self-configured by a host.

Multicasting

Multicasting, the transmission of a packet to multiple destinations in a single send operation, is part of the base specification in IPv6. In IPv4 this is an optional although commonly implemented feature. IPv6 multicast addressing shares common features and protocols with IPv4 multicast, but also provides changes and improvements by eliminating the need for certain protocols. IPv6 does not implement traditional IP broadcast, i.e. the transmission of a packet to all hosts on the attached link using a special broadcast address, and therefore does not define broadcast addresses. In IPv6, the same result can be achieved by sending a packet to the link-local all nodes multicast group at address ff02::1, which is analogous to IPv4 multicasting to address 224.0.0.1. IPv6 also provides for new multicast implementations, including embedding rendezvous point addresses in an IPv6 multicast group address, which simplifies the deployment of inter-domain solutions.

In IPv4 it is very difficult for an organization to get even one globally routable multicast group assignment, and the implementation of inter-domain solutions is arcane. Unicast address assignments by a local Internet registry for IPv6 have at least a 64-bit routing prefix, yielding the smallest subnet size available in IPv6 (also 64 bits). With such an assignment it is possible to embed the unicast address prefix into the IPv6 multicast address format, while still providing a 32-bit block, the least significant bits of the address, or approximately 4.2 billion multicast group identifiers. Thus each user of an IPv6 subnet automatically has available a set of globally routable source-specific multicast groups for multicast applications.
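
The multicast properties mentioned above can also be checked with the ipaddress module (a minimal sketch):

    import ipaddress

    print(ipaddress.ip_address("ff02::1").is_multicast)    # True: IPv6 all-nodes group
    print(ipaddress.ip_address("224.0.0.1").is_multicast)  # True: its IPv4 analog

    # Embedding a 64-bit unicast prefix in a multicast address still leaves
    # a 32-bit group ID, i.e. about 4.2 billion groups per subnet.
    print(2**32)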

Stateless address autoconfiguration (SLAAC)

IPv6 hosts can configure themselves automatically when connected to an IPv6 network using the Neighbor Discovery Protocol via Internet Control Message Protocol version 6 (ICMPv6) router discovery messages. When first connected to a network, a host sends a link-local router solicitation multicast request for its configuration parameters; routers respond to such a request with a router advertisement packet that contains Internet Layer configuration parameters.

If IPv6 stateless address auto-configuration is unsuitable for an application, a network may use stateful configuration with the Dynamic Host Configuration Protocol version 6 (DHCPv6) or hosts may be configured manually using static methods.

Routers present a special case of requirements for address configuration, as they often are sources of autoconfiguration information, such as router and prefix advertisements. Stateless configuration of routers can be achieved with a special router renumbering protocol.

Network-layer security

Internet Protocol Security (IPsec) was originally developed for IPv6, but found widespread deployment first in IPv4, for which it was re-engineered. IPsec was a mandatory specification of the base IPv6 protocol suite, but has since been made optional.

Simplified processing by routers

In IPv6, the packet header and the process of packet forwarding have been simplified. Although IPv6 packet headers are at least twice the size of IPv4 packet headers, packet processing by routers is generally more efficient, because less processing is required in routers. This furthers the end-to-end principle of Internet design, which envisioned that most processing in the network occurs in the leaf nodes.

The packet header in IPv6 is simpler than the IPv4 header. Many rarely used fields have been moved to optional header extensions.

IPv6 routers do not perform IP fragmentation. IPv6 hosts are required to either perform path MTU discovery, perform end-to-end fragmentation, or to send packets no larger than the default Maximum transmission unit (MTU), which is 1280 octets.

The IPv6 header is not protected by a checksum. Integrity protection is assumed to be assured by both the link layer or error detection and correction methods in higher-layer protocols, such as TCP and UDP. In IPv4, UDP may actually have a checksum of 0, indicating no checksum; IPv6 requires a checksum in UDP. Therefore, IPv6 routers do not need to recompute a checksum when header fields change, such as the time to live (TTL) or hop count.

The TTL field of IPv4 has been renamed to Hop Limit in IPv6, reflecting the fact that routers are no longer expected to compute the time a packet has spent in a queue.

Mobility

Unlike mobile IPv4, mobile IPv6 avoids triangular routing and is therefore as efficient as native IPv6. IPv6 routers may also allow entire subnets to move to a new router connection point without renumbering.

Options extensibility

The IPv6 packet header has a minimum size of 40 octets. Options are implemented as extensions. This provides the opportunity to extend the protocol in the future without affecting the core packet structure. However, recent studies indicate that there is still widespread dropping of IPv6 packets that contain extension headers.

Packet format

An IPv6 packet has two parts: a header and payload.

The header consists of a fixed portion with minimal functionality required for all packets and may be followed by optional extensions to implement special features.

The fixed header occupies the first 40 octets (320 bits) of the IPv6 packet. It contains the source and destination addresses, traffic classification options, a hop counter, and the type of the optional extension or payload which follows the header. This Next Header field tells the receiver how to interpret the data which follows the header. If the packet contains options, this field contains the option type of the next option. The “Next Header” field of the last option, points to the upper-layer protocol that is carried in the packet’s payload.

Extension headers carry options that are used for special treatment of a packet in the network, e.g., for routing, fragmentation, and for security using the IPsec framework.

Without special options, a payload must be less than 64KB. With a Jumbo Payload option (in a Hop-By-Hop Options extension header), the payload must be less than 4 GB.

Unlike with IPv4, routers never fragment a packet. Hosts are expected to use Path MTU Discovery to make their packets small enough to reach the destination without needing to be fragmented.

IPv6 in the Domain Name System

In the Domain Name System, hostnames are mapped to IPv6 addresses by AAAA resource records, so-called quad-A records. For reverse resolution, the IETF reserved the domain ip6.arpa, where the name space is hierarchically divided by the 1-digit hexadecimal representation of nibble units (4 bits) of the IPv6 address.

Deployment

The 1993 introduction of Classless Inter-Domain Routing (CIDR) in the routing and IP address allocation for the Internet, and the extensive use of network address translation (NAT) delayed IPv4 address exhaustion. The final phase of exhaustion started on 3 February 2011. However, despite a decade long development and implementation history as a Standards Track protocol, general worldwide deployment of IPv6 is increasing slowly. As of September 2013, about 4% of domain names and 16.2% of the networks on the Internet have IPv6 protocol support.

IPv6 has been implemented on all major operating systems in use in commercial, business, and home consumer environments. Since 2008, the domain name system can be used in IPv6. IPv6 was first used in a major world event during the 2008 Summer Olympic Games, the largest showcase of IPv6 technology since the inception of IPv6. Some governments including the Federal government of the United States and China have issued guidelines and requirements for IPv6 capability.

In 2009, Verizon mandated IPv6 operation, and reduced IPv4 to an optional capability, for LTE cellular hardware. As of June 2012, T-Mobile USA also supports external IPv6 access.

As of 2014, IPv4 still carried more than 99% of worldwide Internet traffic. The Internet exchange in Amsterdam is the only large exchange that publicly shows IPv6 traffic statistics, which as of November 2016 is tracking at about 1.6%, growing at about 0.3% per year. As of 12 November 2016, the percentage of users reaching Google services with IPv6 reached 15.0% for the first time, growing at about 5.6% per year, although varying widely by region. As of 22 April 2015, deployment of IPv6 on web servers also varied widely, with over half of web pages available via IPv6 in many regions, with about 14% of web servers supporting IPv6.

IPv6 – The History of Domain Names

IPv6 proposed

Date: 01/01/1995

Internet Protocol version 6 (IPv6) is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion. IPv6 is intended to replace IPv4.

Every device on the Internet is assigned a unique IP address for identification and location definition. With the rapid growth of the Internet after commercialization in the 1990s, it became evident that far more addresses would be needed to connect devices than the IPv4 address space had available. By 1998, the Internet Engineering Task Force (IETF) had formalized the successor protocol. IPv6 uses a 128-bit address, theoretically allowing 2^128, or approximately 3.4×10^38, addresses. The actual number is slightly smaller, as multiple ranges are reserved for special use or completely excluded from use. The total number of possible IPv6 addresses is more than 7.9×10^28 times as many as IPv4, which uses 32-bit addresses and provides approximately 4.3 billion addresses. The two protocols are not designed to be interoperable, complicating the transition to IPv6. However, several IPv6 transition mechanisms have been devised to permit communication between IPv4 and IPv6 hosts.
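
As a quick arithmetic check of these figures, a minimal Python sketch:

ipv4 = 2 ** 32      # 4,294,967,296 addresses (about 4.3 billion)
ipv6 = 2 ** 128     # about 3.4 x 10^38 addresses

print(f"IPv6 addresses: {ipv6:.1e}")            # 3.4e+38
print(f"IPv6/IPv4 ratio: {ipv6 // ipv4:.1e}")   # 7.9e+28 (= 2**96)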

IPv6 provides other technical benefits in addition to a larger addressing space. In particular, it permits hierarchical address allocation methods that facilitate route aggregation across the Internet, and thus limit the expansion of routing tables. The use of multicast addressing is expanded and simplified, and provides additional optimization for the delivery of services. Device mobility, security, and configuration aspects have been considered in the design of the protocol.

IPv6 addresses are represented as eight groups of four hexadecimal digits separated by colons, for example 2001:0db8:0000:0042:0000:8a2e:0370:7334. The notation can be abbreviated: leading zeros within a group may be dropped, and one run of consecutive all-zero groups may be replaced by a double colon (::), so the example above can be written 2001:db8:0:42:0:8a2e:370:7334.
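
A minimal sketch with Python's standard ipaddress module shows the full and abbreviated forms (the addresses are from the documentation range used above):

import ipaddress

# Leading zeros in each group are dropped; a run of two or more
# consecutive all-zero groups is collapsed to "::" (RFC 5952 style).
addr = ipaddress.IPv6Address("2001:0db8:0000:0042:0000:8a2e:0370:7334")
print(addr.compressed)   # 2001:db8:0:42:0:8a2e:370:7334
print(addr.exploded)     # 2001:0db8:0000:0042:0000:8a2e:0370:7334

short = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(short.compressed)  # 2001:db8::1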

Main features

IPv6 is an Internet Layer protocol for packet-switched internetworking and provides end-to-end datagram transmission across multiple IP networks, closely adhering to the design principles developed in the previous version of the protocol, Internet Protocol Version 4 (IPv4). IPv6 was first formally described in Internet standard document RFC 2460, published in December 1998.

In addition to offering more addresses, IPv6 also implements features not present in IPv4. It simplifies aspects of address assignment (stateless address autoconfiguration), network renumbering, and router announcements when changing network connectivity providers. It simplifies processing of packets in routers by placing the responsibility for packet fragmentation into the end points. The IPv6 subnet size is standardized by fixing the size of the host identifier portion of an address to 64 bits to facilitate an automatic mechanism for forming the host identifier from link layer addressing information (MAC address). Network security was a design requirement of the IPv6 architecture, and included the original specification of IPsec.
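
As a hedged illustration of that automatic mechanism, the sketch below derives the 64-bit interface identifier from a MAC address using the modified EUI-64 procedure; the MAC value is hypothetical, and many modern hosts use randomized identifiers (privacy extensions) instead:

def mac_to_modified_eui64(mac: str) -> str:
    """Insert 0xFFFE between the two halves of a 48-bit MAC and flip
    the universal/local bit to form the 64-bit interface identifier."""
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    octets[0] ^= 0x02                      # flip the universal/local bit
    eui64 = octets[:3] + bytearray([0xFF, 0xFE]) + octets[3:]
    return ":".join(f"{eui64[i] << 8 | eui64[i + 1]:x}" for i in range(0, 8, 2))

# With the link-local prefix fe80::/64 this yields
# fe80::21a:2bff:fe3c:4d5e for the MAC below.
print(mac_to_modified_eui64("00:1a:2b:3c:4d:5e"))  # 21a:2bff:fe3c:4d5e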

IPv6 does not specify interoperability features with IPv4, but essentially creates a parallel, independent network. Exchanging traffic between the two networks requires translator gateways employing one of several transition mechanisms, such as NAT64, or a tunneling protocol like 6to4, 6in4, or Teredo.
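
For example, the 6to4 mechanism mentioned above embeds a host's public IPv4 address into a 2002::/16 prefix. A minimal sketch of the mapping, using a documentation IPv4 address:

import ipaddress

def sixtofour_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Map a public IPv4 address to its 6to4 /48 prefix, 2002:V4ADDR::/48."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    v6 = (0x2002 << 112) | (v4 << 80)   # 16 prefix bits, 32 IPv4 bits, 80 zeros
    return ipaddress.IPv6Network((v6, 48))

print(sixtofour_prefix("192.0.2.4"))  # 2002:c000:204::/48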

Motivation and origin

IPv4

Internet Protocol Version 4 (IPv4) was the first publicly used version of the Internet Protocol. IPv4 was developed as a research project by the Defense Advanced Research Projects Agency (DARPA), a United States Department of Defense agency, before becoming the foundation for the Internet and the World Wide Web. It is currently described by IETF publication RFC 791 (September 1981), which replaced an earlier definition (RFC 760, January 1980). IPv4 included an addressing system that used numerical identifiers consisting of 32 bits. These addresses are typically displayed in quad-dotted notation as decimal values of four octets, each in the range 0 to 255, or 8 bits per number. Thus, IPv4 provides an addressing capability of 2^32, or approximately 4.3 billion addresses. Address exhaustion was not initially a concern in IPv4 as this version was originally presumed to be a test of DARPA’s networking concepts. During the first decade of operation of the Internet, it became apparent that methods had to be developed to conserve address space. In the early 1990s, even after the redesign of the addressing system using a classless network model, it became clear that this would not suffice to prevent IPv4 address exhaustion, and that further changes to the Internet infrastructure were needed.

The last unassigned top-level address blocks of 16 million IPv4 addresses were allocated in February 2011 by the Internet Assigned Numbers Authority (IANA) to the five regional Internet registries (RIRs). However, each RIR still has available address pools and is expected to continue with standard address allocation policies until one /8 Classless Inter-Domain Routing (CIDR) block remains. After that, only blocks of 1024 addresses (/22) will be provided from the RIRs to a local Internet registry (LIR). As of September 2015, the Asia-Pacific Network Information Centre (APNIC), the Réseaux IP Européens Network Coordination Centre (RIPE NCC), the Latin America and Caribbean Network Information Centre (LACNIC), and the American Registry for Internet Numbers (ARIN) had all reached this stage. This leaves the African Network Information Centre (AFRINIC) as the sole regional Internet registry still allocating IPv4 addresses under its standard policies.
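
As a quick check of the final-phase allocation size, a /22 block contains 2^(32 − 22) = 1024 addresses; the prefix below is an arbitrary example:

import ipaddress

block = ipaddress.IPv4Network("198.18.0.0/22")
print(block.num_addresses)  # 1024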

Working-group proposals

By the beginning of 1992, several proposals appeared for an expanded Internet addressing system and by the end of 1992 the IETF announced a call for white papers. In September 1993, the IETF created a temporary, ad-hoc IP Next Generation (IPng) area to deal specifically with such issues. The new area was led by Allison Mankin and Scott Bradner, and had a directorate with 15 engineers from diverse backgrounds for direction-setting and preliminary document review: The working-group members were J. Allard (Microsoft), Steve Bellovin (AT&T), Jim Bound (Digital Equipment Corporation), Ross Callon (Wellfleet), Brian Carpenter (CERN), Dave Clark (MIT), John Curran (NEARNET), Steve Deering (Xerox), Dino Farinacci (Cisco), Paul Francis (NTT), Eric Fleischmann (Boeing), Mark Knopper (Ameritech), Greg Minshall (Novell), Rob Ullmann (Lotus), and Lixia Zhang (Xerox).

The Internet Engineering Task Force adopted the IPng model on 25 July 1994, with the formation of several IPng working groups. By 1996, a series of RFCs was released defining Internet Protocol version 6 (IPv6), starting with RFC 1883. (Version 5 was used by the experimental Internet Stream Protocol.)

It is widely expected that the Internet will use IPv4 alongside IPv6 for the foreseeable future. Direct communication between the IPv4 and IPv6 network protocols is not possible; therefore, intermediary trans-protocol systems are needed as a communication conduit between IPv4 and IPv6 whether on a single device or among network nodes.

Comparison with IPv4

On the Internet, data is transmitted in the form of network packets. IPv6 specifies a new packet format, designed to minimize packet header processing by routers. Because the headers of IPv4 packets and IPv6 packets are significantly different, the two protocols are not interoperable. However, in most respects, IPv6 is an extension of IPv4. Most transport and application-layer protocols need little or no change to operate over IPv6; exceptions are application protocols that embed Internet-layer addresses, such as File Transfer Protocol (FTP) and Network Time Protocol (NTP), where the new address format may cause conflicts with existing protocol syntax.

Larger address space

The main advantage of IPv6 over IPv4 is its larger address space. The length of an IPv6 address is 128 bits, compared with 32 bits in IPv4. The address space therefore has 2^128, or approximately 3.4×10^38, addresses.

In addition, the IPv4 address space is poorly allocated; in 2011, only approximately 14% of all available addresses were actually utilized. While these numbers are large, it was not the intent of the designers of the IPv6 address space to assure geographical saturation with usable addresses. Rather, the longer addresses simplify allocation of addresses, enable efficient route aggregation, and allow implementation of special addressing features. In IPv4, complex Classless Inter-Domain Routing (CIDR) methods were developed to make the best use of the small address space. The standard size of a subnet in IPv6 is 2^64 addresses, i.e. (2^32)^2, the square of the size of the entire IPv4 address space. Thus, actual address space utilization rates will be small in IPv6, but network management and routing efficiency are improved by the large subnet space and hierarchical route aggregation.

Renumbering an existing network for a new connectivity provider with different routing prefixes is a major effort with IPv4. With IPv6, however, changing the prefix announced by a few routers can in principle renumber an entire network, since the host identifiers (the least-significant 64 bits of an address) can be independently self-configured by a host.

Multicasting

Multicasting, the transmission of a packet to multiple destinations in a single send operation, is part of the base specification in IPv6. In IPv4 this is an optional although commonly implemented feature. IPv6 multicast addressing shares common features and protocols with IPv4 multicast, but also provides changes and improvements by eliminating the need for certain protocols. IPv6 does not implement traditional IP broadcast, i.e. the transmission of a packet to all hosts on the attached link using a special broadcast address, and therefore does not define broadcast addresses. In IPv6, the same result can be achieved by sending a packet to the link-local all nodes multicast group at address ff02::1, which is analogous to IPv4 multicasting to address 224.0.0.1. IPv6 also provides for new multicast implementations, including embedding rendezvous point addresses in an IPv6 multicast group address, which simplifies the deployment of inter-domain solutions.
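
A minimal sketch of that analogue, assuming a Linux host; the interface name eth0 and the UDP port are placeholders:

import socket

# Send one UDP datagram to the link-local all-nodes group ff02::1,
# the IPv6 counterpart of IPv4 multicast to 224.0.0.1. The zone index
# ("%eth0") selects the outgoing interface, and a hop limit of 1
# keeps the packet on-link.
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_HOPS, 1)
sock.sendto(b"hello, all nodes", ("ff02::1%eth0", 9999))
sock.close()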

In IPv4 it is very difficult for an organization to get even one globally routable multicast group assignment, and the implementation of inter-domain solutions is arcane. Unicast address assignments by a local Internet registry for IPv6 have at least a 64-bit routing prefix, yielding the smallest subnet size available in IPv6 (also 64 bits). With such an assignment it is possible to embed the unicast address prefix into the IPv6 multicast address format, while still leaving a 32-bit block (the least-significant bits of the address) for approximately 4.2 billion distinct multicast group identifiers. Thus each user of an IPv6 subnet automatically has available a set of globally routable source-specific multicast groups for multicast applications.
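
A hedged sketch of that embedding, following the unicast-prefix-based multicast format of RFC 3306; the prefix, scope, and group ID are hypothetical documentation values:

import ipaddress

def unicast_prefix_multicast(prefix: str, group_id: int,
                             scope: int = 0xE) -> ipaddress.IPv6Address:
    """Build ff3S:00PP:<64-bit unicast prefix>:<32-bit group ID>, where
    S is the scope (0xE = global) and PP the prefix length in bits."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "this sketch only handles /64 prefixes"
    flags = 0x3  # P = 1 (prefix-based), T = 1 (non-permanent)
    addr = ((0xFF << 120) | (flags << 116) | (scope << 112)
            | (net.prefixlen << 96)
            | ((int(net.network_address) >> 64) << 32)
            | (group_id & 0xFFFFFFFF))
    return ipaddress.IPv6Address(addr)

print(unicast_prefix_multicast("2001:db8:1234:5678::/64", 1))
# ff3e:40:2001:db8:1234:5678:0:1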

Stateless address autoconfiguration (SLAAC)

IPv6 hosts can configure themselves automatically when connected to an IPv6 network using the Neighbor Discovery Protocol via Internet Control Message Protocol version 6 (ICMPv6) router discovery messages. When first connected to a network, a host sends a link-local router solicitation multicast request for its configuration parameters; routers respond to such a request with a router advertisement packet that contains Internet Layer configuration parameters.

If IPv6 stateless address autoconfiguration is unsuitable for an application, a network may use stateful configuration with the Dynamic Host Configuration Protocol version 6 (DHCPv6), or hosts may be configured manually using static methods.

Routers present a special case of requirements for address configuration, as they often are sources of autoconfiguration information, such as router and prefix advertisements. Stateless configuration of routers can be achieved with a special router renumbering protocol.

Network-layer security

Internet Protocol Security (IPsec) was originally developed for IPv6, but found widespread deployment first in IPv4, for which it was re-engineered. IPsec was a mandatory specification of the base IPv6 protocol suite, but has since been made optional.

Simplified processing by routers

In IPv6, the packet header and the process of packet forwarding have been simplified. Although IPv6 packet headers are at least twice the size of IPv4 packet headers, packet processing by routers is generally more efficient, because the fixed-format header requires less per-packet work and rarely used options are handled only at the end points. This furthers the end-to-end principle of Internet design, which envisioned that most processing in the network occurs in the leaf nodes.

The packet header in IPv6 is simpler than the IPv4 header. Many rarely used fields have been moved to optional header extensions.

IPv6 routers do not perform IP fragmentation. IPv6 hosts are required to either perform path MTU discovery, perform end-to-end fragmentation, or send packets no larger than the minimum maximum transmission unit (MTU) that every IPv6 link must support, which is 1280 octets.

The IPv6 header is not protected by a checksum. Integrity protection is assumed to be provided by the link layer and by error detection and correction methods in higher-layer protocols, such as TCP and UDP. In IPv4, UDP may have a checksum of 0, indicating no checksum; IPv6 requires UDP to carry a valid checksum. Therefore, IPv6 routers do not need to recompute a checksum when header fields change, such as the hop limit.
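
A minimal sketch of the mandatory IPv6 UDP checksum, computed over a pseudo-header that includes the source and destination addresses (RFC 2460, section 8.1); the addresses and ports below are hypothetical documentation values:

import ipaddress
import struct

def ipv6_pseudo_header(src: str, dst: str, length: int, next_header: int) -> bytes:
    """Pseudo-header: 16-byte source, 16-byte destination, 4-byte
    upper-layer length, 3 zero bytes, 1-byte next header."""
    return (ipaddress.IPv6Address(src).packed
            + ipaddress.IPv6Address(dst).packed
            + struct.pack("!I3xB", length, next_header))

def checksum16(data: bytes) -> int:
    """Internet ones'-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# UDP header (src port, dst port, length, zero checksum) plus payload.
udp = struct.pack("!4H", 12345, 53, 8 + 4, 0) + b"ping"
csum = checksum16(ipv6_pseudo_header("2001:db8::1", "2001:db8::2",
                                     len(udp), 17) + udp)
print(hex(csum or 0xFFFF))  # a zero result must be transmitted as 0xFFFF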

The TTL field of IPv4 has been renamed to Hop Limit in IPv6, reflecting the fact that routers are no longer expected to compute the time a packet has spent in a queue.

Mobility

Unlike mobile IPv4, mobile IPv6 avoids triangular routing and is therefore as efficient as native IPv6. IPv6 routers may also allow entire subnets to move to a new router connection point without renumbering.

Options extensibility

The IPv6 packet header has a minimum size of 40 octets. Options are implemented as extensions. This provides the opportunity to extend the protocol in the future without affecting the core packet structure. However, recent studies indicate that there is still widespread dropping of IPv6 packets that contain extension headers.

Packet format

An IPv6 packet has two parts: a header and a payload.

The header consists of a fixed portion with minimal functionality required for all packets and may be followed by optional extensions to implement special features.

The fixed header occupies the first 40 octets (320 bits) of the IPv6 packet. It contains the source and destination addresses, traffic classification options, a hop counter, and a Next Header field giving the type of the optional extension or payload which follows the header. The Next Header field tells the receiver how to interpret the data which follows the header: if the packet contains options, it holds the type of the next option, and the Next Header field of the last option points to the upper-layer protocol carried in the packet’s payload.
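
A minimal sketch of that 40-octet layout, packing the fields in order; the addresses and payload values are hypothetical:

import ipaddress
import struct

def ipv6_fixed_header(src: str, dst: str, payload_len: int, next_header: int,
                      hop_limit: int = 64, traffic_class: int = 0,
                      flow_label: int = 0) -> bytes:
    """Pack version (6), traffic class, flow label, payload length,
    next header, and hop limit, then the source and destination."""
    first_word = (6 << 28) | (traffic_class << 20) | flow_label
    return (struct.pack("!IHBB", first_word, payload_len, next_header, hop_limit)
            + ipaddress.IPv6Address(src).packed
            + ipaddress.IPv6Address(dst).packed)

hdr = ipv6_fixed_header("2001:db8::1", "2001:db8::2", payload_len=12,
                        next_header=17)  # 17 = UDP
print(len(hdr))  # 40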

Extension headers carry options that are used for special treatment of a packet in the network, e.g., for routing, fragmentation, and for security using the IPsec framework.

Without special options, a payload must be less than 64 KB (the Payload Length field is 16 bits). With a Jumbo Payload option (in a Hop-By-Hop Options extension header), the payload must be less than 4 GB (a 32-bit length).

Unlike with IPv4, routers never fragment a packet. Hosts are expected to use Path MTU Discovery to make their packets small enough to reach the destination without needing to be fragmented.

IPv6 in the Domain Name System

In the Domain Name System, hostnames are mapped to IPv6 addresses by AAAA resource records, so-called quad-A records. For reverse resolution, the IETF reserved the domain ip6.arpa, where the name space is divided hierarchically by single hexadecimal digits, each representing one nibble (4 bits) of the IPv6 address in reverse order.
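
A minimal sketch of that nibble reversal, using Python's standard ipaddress module with a documentation address:

import ipaddress

addr = ipaddress.IPv6Address("2001:db8::567:89ab")
print(addr.reverse_pointer)
# b.a.9.8.7.6.5.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa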

Deployment

The 1993 introduction of Classless Inter-Domain Routing (CIDR) for routing and IP address allocation on the Internet, and the extensive use of network address translation (NAT), delayed IPv4 address exhaustion. The final phase of exhaustion started on 3 February 2011. However, despite a decade-long development and implementation history as a Standards Track protocol, general worldwide deployment of IPv6 has been increasing only slowly. As of September 2013, about 4% of domain names and 16.2% of the networks on the Internet had IPv6 protocol support.

IPv6 has been implemented on all major operating systems in use in commercial, business, and home consumer environments. Since 2008, the Domain Name System root servers have been reachable over IPv6. IPv6 was first used in a major world event during the 2008 Summer Olympic Games, the largest showcase of IPv6 technology since the protocol's inception. Some governments, including those of the United States and China, have issued guidelines and requirements for IPv6 capability.

In 2009, Verizon mandated IPv6 operation for LTE cellular hardware and made IPv4 support optional. As of June 2012, T-Mobile USA also supported external IPv6 access.

As of 2014, IPv4 still carried more than 99% of worldwide Internet traffic. The Amsterdam Internet Exchange is the only large exchange that publicly shows IPv6 traffic statistics; as of November 2016, IPv6 accounted for about 1.6% of its traffic, growing by about 0.3 percentage points per year. As of 12 November 2016, the percentage of users reaching Google services over IPv6 reached 15.0% for the first time, growing by about 5.6 percentage points per year, although varying widely by region. As of 22 April 2015, deployment of IPv6 on web servers also varied widely: over half of web pages were available via IPv6 in many regions, while only about 14% of web servers overall supported IPv6.