
dotUK – The History of Domain Names

.uk domain hits 10 million milestone

March 16, 2012

10 million .uk domain names currently registered.

Today .uk domain registry Nominet announced that the .uk domain crossed the 10 million domain milestone this week.

The domain name swarvemagazine.co.uk, registered on March 12, represented the 10 millionth domain.

Of course, more than 10 million domains have been registered to date, but this is the base of currently registered .uk domain names.

The .uk domain ranks fourth in the world for size, following .com, .de (Germany), and .net, according to VeriSign’s latest domain industry report. That makes it number two for country code domain names, with .tk for Tokelau nipping at its heels.

Dudu – The History of Domain Names

Dudu.com Sold for $1 million

January 5, 2012

A Dubai-based social networking site, Dudu, has paid $1 million for dudu.com, making one Chinese domainer a very happy man indeed. Sedo brokered the deal over three months and announced the sale today.

Dudu was previously located at godudu.com. The lesson to be learned here is so painfully obvious it’s barely worth mentioning: if you’re going to launch a brand and try to make it successful, first make sure you have a domain to match.

Dudu.com is a memorable domain name and a rather short one, so it was bound to fetch a decent price. But there aren’t that many people or companies in the world that would pay $1 million for it.

Before Dudu built up the brand, dudu.com was probably a five-figure sale.

To Dudu’s credit, it does not appear to have ever attempted a reverse domain name hijacking using the UDRP.

The domain has since replaced godudu.com as the home of the social networking site. Dudu uses a unique translation technology to allow its users from around the world to communicate with each other in their native languages.

Dubai-based businessman and Chairman of Dudu Communications Alibek Issaev purchased the domain from its owner in China.

The social networking site was launched in April 2011 and, within a short span, has attracted over 2 million registered users, who can add friends, upload and share pictures, much like on Facebook, as well as listen to music and see what their friends are listening to.

Dupont – The History of Domain Names

Dupont – dupont.com was registered

Date: 07/27/1987

On July 27, 1987, Dupont registered the dupont.com domain name, making it the 81st .com domain ever registered.

E. I. du Pont de Nemours and Company, commonly referred to as DuPont, is an American conglomerate that was founded in July 1802 as a gunpowder mill by the French-born Éleuthère Irénée du Pont. In the 20th century, DuPont developed many polymers such as Vespel, neoprene, nylon, Corian, Teflon, Mylar, Kevlar, Zemdrain, M5 fiber, Nomex, Tyvek, Sorona, Corfam, and Lycra. DuPont developed Freon (chlorofluorocarbons) for the refrigerant industry, and later more environmentally friendly refrigerants. It also developed synthetic pigments and paints including ChromaFlair.

In 2014, DuPont was the world’s fourth largest chemical company based on market capitalization and eighth based on revenue. Its stock price is a component of the Dow Jones Industrial Average.

History

Establishment: 1802

DuPont was founded in 1802 by Éleuthère Irénée du Pont, using capital raised in France and gunpowder machinery imported from France. The company was started at the Eleutherian Mills, on the Brandywine Creek, near Wilmington, Delaware, two years after he and his family left France to escape the French Revolution and religious persecutions against Huguenot protestants. It began as a manufacturer of gunpowder, as du Pont noticed that the industry in North America was lagging behind Europe. The company grew quickly, and by the mid-19th century had become the largest supplier of gunpowder to the United States military, supplying half the powder used by the Union Army during the American Civil War. The Eleutherian Mills site is now a museum and a National Historic Landmark.

Expansion: 1902 to 1912

DuPont continued to expand, moving into the production of dynamite and smokeless powder. In 1902, DuPont’s president, Eugene du Pont, died, and the surviving partners sold the company to three great-grandsons of the original founder. Charles Lee Reese was appointed as director and the company began centralizing their research departments. The company subsequently purchased several smaller chemical companies, and in 1912 these actions gave rise to government scrutiny under the Sherman Antitrust Act. The courts declared that the company’s dominance of the explosives business constituted a monopoly and ordered divestment. The court ruling resulted in the creation of the Hercules Powder Company (later Hercules Inc. and now part of Ashland Inc.) and the Atlas Powder Company (purchased by Imperial Chemical Industries (ICI) and now part of AkzoNobel). At the time of divestment, DuPont retained the single base nitrocellulose powders, while Hercules held the double base powders combining nitrocellulose and nitroglycerine. DuPont subsequently developed the Improved Military Rifle (IMR) line of smokeless powders.

In 1910, DuPont published a brochure entitled “Farming with Dynamite”. The pamphlet was instructional, outlining the benefits of using their dynamite products on stumps and various other obstacles that were easier to remove with dynamite than by more conventional, less efficient means. DuPont also established two of the first industrial laboratories in the United States, where they began the work on cellulose chemistry, lacquers and other non-explosive products. DuPont Central Research was established at the DuPont Experimental Station, across the Brandywine Creek from the original powder mills.

Materials science: 1920 to 1940

In the 1920s, DuPont continued its emphasis on materials science, hiring Wallace Carothers to work on polymers in 1928. Carothers invented neoprene, a synthetic rubber; the first polyester superpolymer; and, in 1935, nylon. The invention of Teflon followed a few years later. DuPont introduced phenothiazine as an insecticide in 1935.

Second World War: 1941 to 1945

DuPont ranked 15th among United States corporations in the value of wartime production contracts. As the inventor and manufacturer of nylon, DuPont helped produce the raw materials for parachutes, powder bags, and tires. DuPont also played a major role in the Manhattan Project in 1943, designing, building and operating the Hanford plutonium producing plant in Hanford, Washington. In 1950 DuPont also agreed to build the Savannah River Plant in South Carolina as part of the effort to create a hydrogen bomb.

Space Age developments: 1950 to 1970

After the war, DuPont continued its emphasis on new materials, developing Mylar, Dacron, Orlon, and Lycra in the 1950s, and Tyvek, Nomex, Qiana, Corfam, and Corian in the 1960s. DuPont materials were critical to the success of the Apollo Project of the United States space program. DuPont has been the key company behind the development of modern body armor. In the Second World War DuPont’s ballistic nylon was used by Britain’s Royal Air Force to make flak jackets. With the development of Kevlar in the 1960s, DuPont began tests to see if it could resist a lead bullet. This research would ultimately lead to the bullet resistant vests that are the mainstay of police and military units in the industrialized world.

Conoco holdings: 1981 to 1999

In 1981, DuPont acquired Conoco Inc., a major American oil and gas producing company that gave it a secure source of petroleum feedstocks needed for the manufacturing of many of its fiber and plastics products. The acquisition, which made DuPont one of the top ten U.S.-based petroleum and natural gas producers and refiners, came about after a bidding war with the giant distillery Seagram Company Ltd., which would become DuPont’s largest single shareholder with four seats on the board of directors. On April 6, 1995, after being approached by Seagram Chief Executive Officer Edgar Bronfman, Jr., DuPont announced a deal in which the company would buy back all the shares owned by Seagram.

In 1999, DuPont sold all of its shares of Conoco, which merged with Phillips Petroleum Company, and acquired the Pioneer Hi-Bred agricultural seed company.

Activities, 2000–present

DuPont describes itself as a global science company that employs more than 60,000 people worldwide and has a diverse array of product offerings. The company ranks 86th in the Fortune 500 on the strength of nearly $36 billion in revenues and $4.848 billion in profits in 2013. In April 2014, Forbes ranked DuPont 171st on its Global 2000, the listing of the world’s top public companies.

DuPont businesses are organized into the following five categories, known as marketing “platforms”: Electronic and Communication Technologies, Performance Materials, Coatings and Color Technologies, Safety and Protection, and Agriculture and Nutrition.

The agriculture division, DuPont Pioneer, makes and sells hybrid seed and genetically modified seed, some of which goes on to become genetically modified food. Genes engineered into its products include LibertyLink, which provides resistance to Bayer’s Ignite/Liberty herbicides; the Herculex I insect protection gene, which protects against various insects; the Herculex RW insect protection trait, which protects against other insects; the YieldGard Corn Borer gene, which provides resistance to another set of insects; and the Roundup Ready Corn 2 trait, which provides crop resistance to glyphosate herbicides. In 2010, DuPont Pioneer received approval to start marketing Plenish soybeans, which contain “the highest oleic acid content of any commercial soybean product, at more than 75 percent. Plenish provides a product with no trans fat, 20 percent less saturated fat than regular soybean oil, and more stable oil with greater flexibility in food and industrial applications.” Plenish is genetically engineered to “block the formation of enzymes that continue the cascade downstream from oleic acid (that produces saturated fats), resulting in an accumulation of the desirable monounsaturated acid.”

In 2004, the company sold its textiles business, which included some of its best-known brands such as Lycra (Spandex), Dacron polyester, Orlon acrylic, Antron nylon and Thermolite, to Koch Industries.

In 2011, DuPont was the largest producer of titanium dioxide in the world, primarily provided as a white pigment used in the paper industry.

DuPont has 150 research and development facilities located in China, Brazil, India, Germany, and Switzerland with an average investment of $2 billion annually in a diverse range of technologies for many markets including agriculture, genetic traits, biofuels, automotive, construction, electronics, chemicals, and industrial materials. DuPont employs more than 10,000 scientists and engineers around the world.

On January 9, 2011, DuPont announced that it had reached an agreement to buy Danish company Danisco for US$6.3 billion. On May 16, 2011, DuPont announced that its tender offer for Danisco had been successful and that it would proceed to redeem the remaining shares and delist the company.

On May 1, 2012, DuPont announced that it had acquired from Bunge full ownership of the Solae joint venture, a soy-based ingredients company. DuPont previously owned 72 percent of the joint venture while Bunge owned the remaining 28 percent.

In February 2013, DuPont Performance Coatings was sold to the Carlyle Group and rebranded as Axalta Coating Systems.

Chemours

In October 2013, DuPont announced that it was planning to spin off its Performance Chemicals business into a new publicly traded company in mid-2015. The company filed its initial Form 10 with the SEC in December 2014 and announced that the new company would be called The Chemours Company. The spin-off to DuPont shareholders was completed on July 1, 2015, and Chemours stock began trading on the New York Stock Exchange the same day. DuPont said it would focus on production of GMO seeds, materials for solar panels, and alternatives to fossil fuels. Chemours became responsible for the cleanup of 171 former DuPont sites, which DuPont estimated would cost between $295 million and $945 million.

Merger with Dow

On December 11, 2015, DuPont announced that it would merge with the Dow Chemical Company, in an all-stock deal. The combined company, which will be known as DowDuPont, will have an estimated value of $130 billion, be equally held by the shareholders of both companies, and maintain their headquarters in Delaware and Michigan respectively. Within two years of the merger’s closure, expected in late 2016 and subject to regulatory approval, DowDuPont will be split into three separate public companies, focusing on the agricultural chemicals, materials science, and specialty product industries. Commentators have questioned the economic viability of this plan because, of the three companies, only the specialty products industry has prospects for high growth. The outlook on the profitability of the other two proposed companies has been questioned due to reduced crop prices and lower margins on plastics such as polyethylene. They have also noted that the deal is likely to face antitrust scrutiny in several countries.

DVD – The History of Domain Names

Netflix purchases DVD.com

March 30, 2012

Netflix has confirmed its purchase of DVD.com. A spokesperson tells Domain Name Wire “Netflix cares about keeping DVD healthy, and this is just one small investment in keeping DVD healthy.”

The nameservers and registrant information for DVD.com recently changed, and the domain now forwards to DVD.netflix.com, the first signal that Netflix had purchased the domain.

Sometime around March 25, the nameservers changed from worldnic.com to ultradns.net, the same DNS provider Netflix uses.

The domain’s whois also changed from a Network Solutions private registration to a DNStinations registration. DNStinations is essentially a whois proxy service for brand-protection company MarkMonitor. MarkMonitor has done work for Netflix in the past, including registering domains related to its short-lived Qwikster brand.
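The detective work described above, spotting a change in a domain's nameservers before any official announcement, boils down to comparing successive record snapshots. A minimal sketch follows; the hostnames are illustrative stand-ins, not the actual records:

```python
# Hypothetical snapshots of a domain's NS records, as a brand-watcher
# might collect them on successive days (values are illustrative only).
snapshot_before = {"ns1.worldnic.com", "ns2.worldnic.com"}
snapshot_after = {"udns1.ultradns.net", "udns2.ultradns.net"}

def nameserver_change(before, after):
    """Return (removed, added) nameserver sets between two snapshots."""
    return before - after, after - before

removed, added = nameserver_change(snapshot_before, snapshot_after)
print("dropped:", sorted(removed))
print("added:  ", sorted(added))
```

A non-empty "added" set pointing at a new DNS provider is exactly the kind of signal that tipped off observers here.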

Early Computers – The History of Domain Names

The Internet (early computers)

Initial concepts of packet networking originated in several computer science laboratories in the United States, United Kingdom, and France. The US Department of Defense awarded contracts as early as the 1960s for packet network systems, including the development of the ARPANET. The first message was sent over the ARPANET from computer science Professor Leonard Kleinrock’s laboratory at University of California, Los Angeles (UCLA) to the second network node at Stanford Research Institute (SRI).

Packet switching networks such as ARPANET, NPL network, CYCLADES, Merit Network, Tymnet, and Telenet were developed in the late 1960s and early 1970s using a variety of communications protocols. Donald Davies first designed a packet-switched network at the National Physical Laboratory in the UK, which became a testbed for UK research for almost two decades. The ARPANET project led to the development of protocols for internetworking, in which multiple separate networks could be joined into a network of networks.

Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet protocol suite (TCP/IP) was introduced as the standard networking protocol on the ARPANET. In the early 1980s the NSF funded the establishment of national supercomputing centers at several universities, and provided interconnectivity in 1986 with the NSFNET project, which also created network access to the supercomputer sites in the United States for research and education organizations. Commercial Internet service providers (ISPs) began to emerge in the very late 1980s. The ARPANET was decommissioned in 1990. Limited private connections to parts of the Internet by officially commercial entities emerged in several American cities by late 1989 and 1990, and the NSFNET was decommissioned in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic.

In the 1980s, research at CERN in Switzerland by British computer scientist Tim Berners-Lee resulted in the World Wide Web, linking hypertext documents into an information system, accessible from any node on the network. Since the mid-1990s, the Internet has had a revolutionary impact on culture, commerce, and technology, including the rise of near-instant communication by electronic mail, instant messaging, voice over Internet Protocol (VoIP) telephone calls, two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking, and online shopping sites. The research and education community continues to develop and use advanced networks such as NSF’s very high speed Backbone Network Service (vBNS), Internet2, and National LambdaRail. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1-Gbit/s, 10-Gbit/s, or more. The Internet’s takeover of the global communication landscape was almost instant in historical terms: it only communicated 1% of the information flowing through two-way telecommunications networks in the year 1993, already 51% by 2000, and more than 97% of the telecommunicated information by 2007. Today the Internet continues to grow, driven by ever greater amounts of online information, commerce, entertainment, and social networking.

The concept of data communication – transmitting data between two different places through an electromagnetic medium such as radio or an electric wire – predates the introduction of the first computers. Such communication systems were typically limited to point-to-point communication between two end devices. Telegraph systems and telex machines can be considered early precursors of this kind of communication. The telegraph in the late 19th century was the first fully digital communication system.

Fundamental theoretical work in data transmission and information theory was developed by Claude Shannon, Harry Nyquist, and Ralph Hartley in the early 20th century.

Early computers had a central processing unit and remote terminals. As the technology evolved, new systems were devised to allow communication over longer distances (for terminals) or with higher speed (for interconnection of local devices) that were necessary for the mainframe computer model. These technologies made it possible to exchange data (such as files) between remote computers. However, the point-to-point communication model was limited, as it did not allow for direct communication between any two arbitrary systems; a physical link was necessary. The technology was also considered unsafe for strategic and military use because there were no alternative paths for the communication in case of an enemy attack.

Development of packet switching

The issue of connecting separate physical networks to form one logical network was the first of many problems. In the 1960s, Paul Baran of the RAND Corporation produced a study of survivable networks for the U.S. military in the event of nuclear war. Information transmitted across Baran’s network would be divided into what he called “message-blocks”. Independently, Donald Davies (National Physical Laboratory, UK) proposed and was the first to put into practice a similar network based on what he called packet switching, the term that would ultimately be adopted. Leonard Kleinrock (MIT) developed the mathematical theory underlying the performance of such networks, though without the packet concept itself. Packet switching provides better bandwidth utilization and response times than the traditional circuit-switching technology used for telephony, particularly on resource-limited interconnection links.

Packet switching is a rapid store-and-forward networking design that divides messages up into arbitrary packets, with routing decisions made per packet. Early networks used message-switched systems that required rigid routing structures prone to single points of failure. This led Paul Baran’s U.S. military-funded research to focus on using message-blocks to build in network redundancy.
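The core idea, chopping a message into independently routed packets that the receiver reorders and reassembles by sequence number, can be sketched in a few lines. This is a toy illustration of the concept, not any historical implementation:

```python
def packetize(message: bytes, size: int):
    """Split a message into fixed-size packets, each tagged with a
    sequence number so the receiver can reorder and reassemble them."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort packets by sequence number and rejoin the payloads."""
    return b"".join(payload for _, payload in sorted(packets))

pkts = packetize(b"survivable networks route around damage", 8)
# Packets may arrive in any order; reassembly still recovers the message.
assert reassemble(reversed(pkts)) == b"survivable networks route around damage"
```

Because each packet carries its own sequence number, no single rigid path is required, which is precisely the redundancy property Baran's message-blocks were after.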

Networks that led to the Internet

ARPANET
Promoted to the head of the information processing office at Defense Advanced Research Projects Agency (DARPA), Robert Taylor intended to realize Licklider’s ideas of an interconnected networking system. Bringing in Larry Roberts from MIT, he initiated a project to build such a network. The first ARPANET link was established between the University of California, Los Angeles (UCLA) and the Stanford Research Institute at 22:30 hours on October 29, 1969.

By December 5, 1969, a 4-node network was connected by adding the University of Utah and the University of California, Santa Barbara. Building on ideas developed in ALOHAnet, the ARPANET grew rapidly. By 1981, the number of hosts had grown to 213, with a new host being added approximately every twenty days.

ARPANET development was centered around the Request for Comments (RFC) process, still used today for proposing and distributing Internet Protocols and Systems. RFC 1, entitled “Host Software”, was written by Steve Crocker from the University of California, Los Angeles, and published on April 7, 1969. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.

ARPANET became the technical core of what would become the Internet, and a primary tool in developing the technologies used. The early ARPANET used the Network Control Program (NCP, sometimes Network Control Protocol) rather than TCP/IP. On January 1, 1983, known as flag day, NCP on the ARPANET was replaced by the more flexible and powerful family of TCP/IP protocols, marking the start of the modern Internet.

International collaborations on ARPANET were sparse. For various political reasons, European developers were concerned with developing the X.25 networks. Notable exceptions were the Norwegian Seismic Array (NORSAR) in 1972, followed in 1973 by Sweden with satellite links to the Tanum Earth Station and Peter Kirstein’s research group in the UK, initially at the Institute of Computer Science, London University and later at University College London.

NPL
In 1965, Donald Davies of the National Physical Laboratory (United Kingdom) proposed a national data network based on packet switching. The proposal was not taken up nationally, but by 1970 he had designed and built the Mark I packet-switched network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions. By 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986. NPL, followed by ARPANET, were the first two networks in the world to use packet switching.

Merit Network
The Merit Network was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan’s public universities as a means to help the state’s educational and economic development. With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971, when an interactive host-to-host connection was made between the IBM mainframe computer systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit. In October 1972, connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host-to-host interactive connections, the network was enhanced to support terminal-to-host connections, host-to-host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet-attached hosts, and eventually TCP/IP; additional public universities in Michigan also joined the network. All of this set the stage for Merit’s role in the NSFNET project starting in the mid-1980s.

CYCLADES
The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. First demonstrated in 1973, it was developed to explore alternatives to the initial ARPANET design and to support network research generally. It was the first network to make the hosts responsible for the reliable delivery of data, rather than the network itself, using unreliable datagrams and associated end-to-end protocol mechanisms.

X.25 and public data networks
Based on ARPA’s research, packet switching network standards were developed by the International Telecommunication Union (ITU) in the form of X.25 and related standards. While using packet switching, X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976.

The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong, and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.

Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET.

The first public dial-in networks used asynchronous TTY terminal protocols to reach a concentrator operated in the public network. Some networks, such as CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy that also provided communications, content, and entertainment features. Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.

UUCP and Usenet
In 1979, two students at Duke University, Tom Truscott and Jim Ellis, originated the idea of using Bourne shell scripts to transfer news and messages on a serial line UUCP connection with nearby University of North Carolina at Chapel Hill. Following public release of the software in 1980, the mesh of UUCP hosts forwarding Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, the ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and Bitnet. All connections were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984. Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and newsgroup messages throughout its Italian nodes (about 100 at the time), owned both by private individuals and small companies. Sublink Network represented possibly one of the first examples of Internet technology spreading through popular diffusion.

Early Domain Names – The History of Domain Names

Early Domain Names

Date: 01/01/1960

As the first computers were connected, it became necessary to name them in order to track where information was being sent and received, and a form of identification was needed to properly access the various systems. At first the networks were composed of only a few computer systems associated with the U.S. Department of Defense and other institutions, and HOSTS.TXT files were used; however, as the number of connections grew, a more effective system was needed to regulate and maintain the domain paths throughout the networks.
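The flat host-table approach mentioned above can be illustrated with a short sketch. The entries and the format here are simplified stand-ins (real HOSTS.TXT entries followed RFC 952's richer field layout), meant only to show why a single shared file worked for a handful of hosts but not for a growing network:

```python
# A simplified, illustrative flat host table in the spirit of HOSTS.TXT.
# Addresses and most names are made up for the example.
HOSTS_TXT = """
10.0.0.1   SRI-NIC
10.0.0.2   UCLA-TEST
10.0.0.3   MIT-AI
"""

def parse_hosts(text):
    """Build a name -> address map from 'address name' lines,
    skipping blanks and malformed entries."""
    table = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2:
            addr, name = parts[0], parts[1]
            table[name.upper()] = addr
    return table

hosts = parse_hosts(HOSTS_TXT)
print(hosts["UCLA-TEST"])  # -> 10.0.0.2
```

Every host had to fetch a fresh copy of the whole file whenever any entry changed, which is exactly the scaling problem the DNS was later designed to solve.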

Before the World Wide Web

The ubiquitous domain name system (DNS) has its roots almost as early as the Internet itself. As the early Internet grew through the 1970s with the advent of email and newsgroups, the problem of locating computers on connected networks grew as well. In 1972, the Internet Assigned Numbers Authority (IANA) was created by the U.S. Defense Department agency responsible for the Internet to assign a unique number to each computer on its network. By 1973, the TCP/IP network technology had become the widely adopted communication protocol for locating and connecting to computers, the de facto standard among the many competing choices of the time.

In a key milestone in domain name history, the University of Wisconsin developed and helped to standardize the first name server in 1984, which converted familiar domain names into these unique numbers. In 1985, the top-level domain names .com, .net, and .org were organized, and the first domain name, Symbolics.com, was registered to a now-defunct computer hardware manufacturer. With these two milestones in domain name history, the modern Internet era was born.

History of the Internet Domain Name

When the first computers began connecting to each other over Wide Area Networks (WANs), like the ARPANET in the 1960s, a form of identification was needed to properly access the various systems. At first the networks were composed of only a few computer systems associated with the U.S. Department of Defense and other institutions. As the number of connections grew, a more effective system was needed to regulate and maintain the domain paths throughout the network. In 1972 the U.S. Defense Information Systems Agency created the Internet Assigned Numbers Authority (IANA). IANA was responsible for assigning unique ‘addresses’ to each computer connected to the Internet. By 1973, the Internet Protocol (IP) addressing system became the standard by which all networked computers could be located. The new Internet continued to grow throughout the 1970s with the creation of electronic mail (e-mail) and newsgroups.

Greater numbers of users networking with each other created a demand for a simpler, easier-to-remember system than the bulky and often confusing IP system of long, cumbersome strings of numbers. This demand was answered by researchers and technicians at the University of Wisconsin, who developed the first ‘name server’ in 1984. With the new name server, users were no longer required to know the exact path to other systems. Thus was born the addressing system in use today.

A year later the Domain Name System was implemented and the initial top-level domain names, including .com, .net, and .org, were introduced.
Suddenly 121.245.78.2 became ‘company.com’.
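The hierarchical scheme the DNS introduced can be sketched as a tree of labels walked from the top-level domain down, rather than one flat lookup table. The names and addresses below are made up for illustration:

```python
# Toy DNS-style namespace: top-level domains at the root, with
# second-level names beneath them. All entries are illustrative.
ROOT = {
    "com": {
        "company": "121.245.78.2",
    },
    "org": {
        "example": "10.1.2.3",
    },
}

def resolve(name):
    """Walk the labels right to left (TLD first) through the tree,
    mirroring how DNS delegates from the root downward."""
    node = ROOT
    for label in reversed(name.lower().split(".")):
        node = node[label]
    return node

print(resolve("company.com"))  # -> 121.245.78.2
```

Because each level of the tree can be administered separately (the registry for .com, then each registrant for its own name), the namespace scales in a way a single shared file never could.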

The World Wide Web, InterNIC, and the public domain

In 1990, the Internet exploded into commercial society and was followed a year later by the release of the World Wide Web by originator Tim Berners-Lee and CERN. The same year the first commercial service provider began operating and domain registration officially entered the public domain. Initially the registration of domain names was free, subsidized by the National Science Foundation through IANA, but by 1992 a new organization was needed to specifically handle the exponential increase in flow to the Internet. IANA and the NSF jointly created InterNIC, a quasi-governmental body mandated to organize and maintain the growing DNS registry and services.

Overwhelming growth forced the NSF to stop subsidizing domain registrations in 1995. InterNIC, due to budget demands, began imposing a $100.00 fee for each two-year registration. The next wave in the evolution of the DNS occurred in 1998 when the U.S. Department of Commerce published the ‘White Paper’. This document outlined the transition of management of domain name systems to private organizations, allowing for increased competition.

ICANN and the spirit of the Internet

That same year, the Internet Corporation for Assigned Names and Numbers (ICANN) was formed. This non-profit, private-sector corporation, formed by a broad coalition of the Internet’s business, technical, and academic interests worldwide, is recognized as the “global consensus entity to coordinate the technical management of the Internet’s domain name system, the allocation of IP address space, the assignment of protocol parameters, and the management of the root server system.” One of ICANN’s primary concerns is to foster greater competition within the domain registration industry. Where before there was only a single entity offering registration services, ICANN has now accredited a number of other companies to add to the global domain name database. This is called the Shared Registration System.

More names

Today there are an estimated 19 million domain names registered, with forty thousand more registered every day. The Internet continues its unprecedented growth, with no end in sight. This growth only serves to underline the benefits of moving registration from government control to private sector control, benefits that are embedded within the spirit of the Internet itself: accessibility, freedom, competition.

Domain name space

Today, the Internet Corporation for Assigned Names and Numbers (ICANN) manages the top-level development and architecture of the Internet domain name space. It authorizes domain name registrars, through which domain names may be registered and reassigned.

Domain name syntax

A domain name consists of one or more parts, technically called labels, that are conventionally concatenated and delimited by dots, such as example.com. The right-most label conveys the top-level domain; for example, the domain name www.example.com belongs to the top-level domain com. The hierarchy of domains descends from right to left; each label to the left specifies a subdivision, or subdomain, of the domain to the right. For example, the label example specifies the node example.com as a subdomain of the com domain, and www is a label that creates www.example.com, a subdomain of example.com. This tree of labels may consist of up to 127 levels. Each label may contain from 1 to 63 octets. The empty label is reserved for the root node. The full domain name may not exceed a total length of 253 ASCII characters in its textual representation. In practice, some domain registries impose shorter limits.
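
These length rules can be expressed in a few lines. The following is a minimal sketch of just the limits described above, checking lengths only, not label contents (it is not a full RFC 1035 validator):

```python
# Minimal sketch of the DNS length limits: labels of 1-63 octets,
# at most 127 levels, and a full name of at most 253 characters.

def valid_domain_length(name: str) -> bool:
    if len(name) > 253:          # overall textual length limit
        return False
    labels = name.split(".")
    if len(labels) > 127:        # depth of the label tree
        return False
    return all(1 <= len(label) <= 63 for label in labels)

print(valid_domain_length("www.example.com"))   # -> True
print(valid_domain_length("a" * 64 + ".com"))   # -> False (label too long)
```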

A hostname is a domain name that has at least one associated IP address. For example, the domain names www.example.com and example.com are also hostnames, whereas the com domain is not. However, other top-level domains, particularly country code top-level domains, may indeed have an IP address, and if so, they are also hostnames.

Hostnames impose restrictions on the characters allowed in the corresponding domain name. A valid hostname is also a valid domain name, but a valid domain name may not necessarily be valid as a hostname.
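
One common way to express the hostname restriction is the classic "LDH" rule (letters, digits, hyphen, with no hyphen at a label's edge). The sketch below illustrates that rule; it is not a complete standards implementation.

```python
import re

# Sketch of the "LDH" hostname rule: each label uses only letters,
# digits, and hyphens, and may not begin or end with a hyphen.
# A name passing this check is also a valid domain name, but not the
# reverse: "my_service.example" is a legal DNS name yet not a hostname.

LDH_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_hostname(name: str) -> bool:
    if not name or len(name) > 253:
        return False
    return all(LDH_LABEL.match(label) for label in name.split("."))

print(is_valid_hostname("www.example.com"))     # -> True
print(is_valid_hostname("my_service.example"))  # -> False (underscore)
```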

Domains-Growth – The History of Domain Names

REGISTERED DOMAIN NAMES GROW BY 5.9 MILLION

March 11, 2012

The latest Domain Name Industry Brief from Verisign shows that in the final quarter of last year, registrations across all Top Level Domains (TLDs) grew by 2.7 percent over the third quarter.

Q4 last year was a huge one for domain name registration. The quarter closed with more than 225 million registered names, which was 10% more than the same quarter in 2010.

The leaderboard for domain extension popularity remained unchanged from Q3 2011, with .com still the leader, followed by .de (Germany), .net, .uk (United Kingdom), .org, .info, .tk (Tokelau), .nl (Netherlands), .ru (Russian Federation) and .eu (European Union).
The popularity of .tk continues to puzzle some, given that Tokelau has a tiny population. As we mentioned in an earlier article, a Dutch firm bought the rights to .tk and now provides free domain registration under the extension.

While on the topic of country code Top Level Domains (ccTLDs), the top ccTLD registries by domain name base for the final quarter of last year reported by Verisign were:

1. .de – Germany
2. .uk – United Kingdom
3. .tk – Tokelau
4. .nl – Netherlands
5. .ru – Russian Federation
6. .eu – European Union
7. .cn – China
8. .br – Brazil
9. .ar – Argentina
10. .it – Italy

Verisign says there are now 290 ccTLD extensions globally and the top 10 ccTLDs account for 60 percent of all registrations.

Verisign manages two of the world’s 13 Internet root name servers and according to the latest Brief, its average daily Domain Name System (DNS) query load during the fourth quarter of 2011 was 64 billion, peaking at 117 billion. This represented a daily average increase of 8 percent and a peak increase of 51 percent. Year on year, the increase was 2% and 59% respectively.

Domains Portfolio – The History of Domain Names

150,000+ domain portfolio sells for $5.2 million

November 16, 2012

A portfolio of more than 150,000 domain names has sold in a bankruptcy auction for $5.2 million.

Yet that might not be the end of it.

The auction covered two separate portfolios belonging to bankrupt firm Ondova. The $5.2 million purchase price is 26.8% above the stalking horse bid.

The domains had significant traffic and revenue.

The first portfolio of about 3,000 premium domains earned close to $400,000 in the 12 months ending September 30, 2012. The second portfolio of 150,000+ domains averaged $12.60 in revenue per domain over the same 12-month period. After domain costs, that netted out to $3.80 per domain, or net revenue of around $580,000.

That means the overall sale went for roughly 5x-6x earnings. The premium domains probably have value beyond their PPC earnings, while the larger portfolio definitely has some trademark issues.
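
The quoted multiple can be checked with quick arithmetic on the article's own figures (the exact multiple depends on whether both portfolios' earnings are counted):

```python
# Back-of-the-envelope earnings multiple for the Ondova sale,
# using the figures quoted in the article above.

sale_price = 5_200_000                 # winning bid
premium_earnings = 400_000             # ~3,000 premium domains, trailing 12 months
bulk_domains = 150_000                 # size of the second portfolio
bulk_net_per_domain = 3.80             # net revenue per domain after costs
bulk_net = bulk_domains * bulk_net_per_domain   # ~ $570,000 net

total_earnings = premium_earnings + bulk_net
print(round(sale_price / total_earnings, 1))    # multiple over combined earnings
```

Over combined earnings the sale works out to roughly 5.4x, consistent with the 5x-6x range above.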

Donuts – The History of Domain Names

Donuts raises $100 million, applies for 307 new TLDs

June 5, 2012

Donuts rolls in a lot of dough, but you’ll have to wait another week to see which domains they applied for.

This morning Donuts, a new TLD applicant founded by eNom founder Paul Stahura, announced the most ambitious public new TLD plans to date: a $100 million investment and 307 applications.

The company also hired Mason Cole as Vice President, Communications and Industry Relations. I had the chance to connect with him this morning to understand more about Donuts’ plans. Cole told me the company looked at thousands of possible strings, ultimately settling on the 307 it has applied for. Back in January Bloomberg reported that Donuts was applying for 10 strings, but Cole says that may have been a miscommunication between Stahura and the Bloomberg reporter.

“At the time the final number hadn’t been settled on,” said Cole.

Cole isn’t sure how many of the 307 strings will be contested by other applicants. The company is prepared to bring all 307 to market.

“I can tell you we’ve made sure to resource the company in a way that would allow us to get all 307 strings if we decide to,” he said.

Cole was tight-lipped on what the company is doing in the digital archery process for batching. He also said the company does not plan to reveal its applications until ICANN makes them public on June 13.

The company announced today that it will use Demand Media as its backend registry provider. Stahura sold eNom to Demand Media in 2006. Demand Media has announced its own $18 million investment in new top level domains, but it has not invested in Donuts. Assuming Demand Media is applying for its own TLDs, it will be interesting to see if there’s any contention between Demand Media’s domains and those of Donuts. Cole said he can’t comment on Demand Media’s business.

For now, Donuts is engaged in a high-stakes game of digital archery and is preparing for whatever roadblocks may occur, while hoping for no more delays.

Dot-DE – The History of Domain Names

Germany’s .De crosses 15 million domain mark

April 19, 2012

.De registry DENIC has announced that Germany’s country code domain name has hit a major milestone: 15 million domains currently registered.

The milestone was hit yesterday with the registration of floristennetzwerk.de, which translates to “florists’ network” in English. The domain was registered at German domain registrar 1&1 Internet.

.De is the largest country code domain name in terms of registrations. DENIC says that 50,000 .de domain names are registered every month.

The United Kingdom’s .uk is the second largest country code domain name according to VeriSign’s most recent quarterly domain name brief. In third place and rising rapidly is Tokelau’s .tk, whose domains are offered at no or low cost.

Dot-Tel – The History of Domain Names

The state of .tel in 2012

March 13, 2012

.Tel has been quiet, at least in the domainer community, for quite some time. There are a couple good reasons for this: you can’t park .tel domains and no one is getting rich trying to resell them for a profit.

The company sent out its latest newsletter today and it has some interesting data.

The first thing that caught my eye was that you will soon be able to add video to your site, but only via an API. I’ve long thought that .tel is an over-engineered solution aimed at non-techies, and this is a prime example. Granted, there are plenty of third-party solutions for developing your .tel domain. But why should a third party be necessary? .Tel should be easy. It’s not.

The video also signals that .tel domains are becoming just a bit more like web pages.

Now, about those numbers. Here’s a handy infographic from .tel.

In 2011, .tel says the “number of members in the .tel community” expanded by 41%.

I’m not quite sure what “members” means. Unless the number of registrants jumped in December, this doesn’t represent total .tel domains. In November 2011, the last month for which official numbers are available, there were 280,502 .tel domains. At the beginning of 2012 there were 256,566.

This certainly isn’t what Telnic investors had in mind when they plowed $35 million into the company.

On the plus side, 79% of the “.tel community” owns just one domain. And as of February there were an impressive 64,274 .tel IDNs.

Dotcom Bubble – The History of Domain Names

Dot com Bubble

Date: 03/01/2000

The dot-com bubble (also known as the dot-com boom, the tech bubble, the Internet bubble, the dot-com collapse, and the information technology bubble) was a historic speculative bubble covering roughly 1995–2001, during which stock markets in industrialized nations saw their equity value rise rapidly from growth in the Internet sector and related fields. While the latter part of the period was a boom and bust cycle, the Internet boom sometimes refers to the steady commercial growth of the Internet that began with the advent of the World Wide Web, as exemplified by the first release of the Mosaic web browser in 1993, and continued through the 1990s.

The period was marked by the founding (and, in many cases, spectacular failure) of several new Internet-based companies commonly referred to as dot-coms. Companies could cause their stock prices to increase by simply adding an “e-” prefix to their name or a “.com” suffix, which one author called “prefix investing.” A combination of rapidly increasing stock prices, market confidence that the companies would turn future profits, individual speculation in stocks, and widely available venture capital created an environment in which many investors were willing to overlook traditional metrics, such as P/E ratio, in favor of basing confidence on technological advancements. By the end of the 1990s, the NASDAQ hit a price-to-earnings (P/E) ratio of 200, a truly astonishing plateau that dwarfed Japan’s peak P/E ratio of 80 a decade earlier.

The collapse of the bubble took place during 1999–2001. Some companies, such as pets.com and Webvan, failed completely. Others – such as Cisco, whose stock declined by 86% – lost a large portion of their market capitalization but remained stable and profitable. Some, such as eBay.com, later recovered and even surpassed their dot-com-bubble peaks. The stock of Amazon.com came to exceed $700 per share, for example, after having gone from $107 to $7 in the crash.

Bubble growth

Due to the rise in the commercial growth of the Internet, venture capitalists saw record-setting growth as “dot-com” companies experienced meteoric rises in their stock prices and therefore moved faster and with less caution than usual, choosing to mitigate the risk by investing in many contenders and letting the market decide which would succeed. The low interest rates of 1998–99 helped increase the start-up capital amounts. A canonical “dot-com” company’s business model relied on harnessing network effects by operating at a sustained net loss and building market share (or mind share). These companies offered their services or end product for free with the expectation that they could build enough brand awareness to charge profitable rates for their services later. The motto “get big fast” reflected this strategy.

This occurred in industrialized nations due to the shrinking “digital divide” of the late 1990s and early 2000s. Previously, individuals were less able to access the Internet, many held back by a lack of local access or connectivity to the infrastructure, or by a failure to understand what Internet technologies could be used for. The absence of infrastructure and a lack of understanding were two major obstacles that had previously obstructed mass connectivity, limiting what individuals could do and achieve with the technology. Greater connectivity than previously available allowed ICT (information and communications technology) to progress from a luxury good to a necessity. As connectivity grew, so did the potential for venture capitalists to take advantage of the growing field. The cost effectiveness of new Internet websites ultimately drove demand growth during this time.

Soaring stocks

In financial markets, a stock market bubble is a self-perpetuating rise or boom in the share prices of stocks of a particular industry; the term may be used with certainty only in retrospect, after share prices have crashed. A bubble occurs when speculators note the fast increase in value and decide to buy in anticipation of further rises, rather than because the shares are undervalued. Typically, during a bubble, many companies become grossly overvalued. When the bubble “bursts”, the share prices fall dramatically. The prices of many non-technology stocks increased in tandem and were also pushed up to valuations disconnected from fundamentals.

American news media, including respected business publications such as Forbes and the Wall Street Journal, encouraged the public to invest in risky companies, despite many of the companies’ disregard for basic financial and even legal principles.

Andrew Smith argued that the financial industry’s handling of initial public offerings tended to benefit the banks and initial investors rather than the companies. This is because company staff were typically barred from reselling their shares for a lock-in period of 12 to 18 months, and so did not benefit from the common pattern of a huge short-lived share price spike on the day of the launch. In contrast, the financiers and other initial investors were typically entitled to sell at the peak price, and so could immediately profit from short-term price rises. Smith argues that the high profitability of the IPOs to Wall Street was a significant factor in the course of events of the bubble. He writes:

“But did the kids [the often young dotcom entrepreneurs] dupe the establishment by drawing them into fake companies, or did the establishment dupe the kids by introducing them to Mammon and charging a commission on it?”

In spite of this, however, a few company founders made vast fortunes when their companies were bought out at an early stage in the dot-com stock market bubble. These early successes made the bubble even more buoyant. An unprecedented amount of personal investing occurred during the boom, and the press reported the phenomenon of people quitting their jobs to become full-time day traders.

Academics Preston Teeter and Jorgen Sandberg have criticized Federal Reserve chairman Alan Greenspan for his role in the promotion and rise in tech stocks. Their research cites numerous examples of Greenspan putting a positive spin on historic stock valuations despite a wealth of evidence suggesting that stocks were overvalued.

Free spending

According to dot-com theory, an Internet company’s survival depended on expanding its customer base as rapidly as possible, even if it produced large annual losses. For instance, Google and Amazon.com did not see any profit in their first years. Amazon was spending to alert people to its existence and expand its customer base, and Google was busy spending to create more powerful machine capacity to serve its expanding web search engine. The phrase “Get large or get lost” was the wisdom of the day. At the height of the boom, it was possible for a promising dot-com to make an initial public offering (IPO) of its stock and raise a substantial amount of money even though it had never made a profit—or, in some cases, earned any revenue whatsoever. In such a situation, a company’s lifespan was measured by its burn rate: that is, the rate at which a non-profitable company lacking a viable business model ran through its capital.

Public awareness campaigns were one of the ways in which dot-coms sought to expand their customer bases. These included television ads, print ads, and targeting of professional sporting events. Many dot-coms named themselves with onomatopoeic nonsense words that they hoped would be memorable and not easily confused with a competitor. Super Bowl XXXIV in January 2000 featured 16 dot-com companies that each paid over $2 million for a 30-second spot. By contrast, in January 2001, just three dot-coms bought advertising spots during Super Bowl XXXV. In a similar vein, CBS-backed iWon.com gave away $10 million to a lucky contestant on an April 15, 2000 half-hour primetime special that was broadcast on CBS.

Not surprisingly, the “growth over profits” mentality and the aura of “new economy” invincibility led some companies to engage in lavish internal spending, such as elaborate business facilities and luxury vacations for employees. Executives and employees who were paid with stock options instead of cash became instant millionaires when the company made its initial public offering; many invested their new wealth into yet more dot-coms.

Cities all over the United States sought to become the “next Silicon Valley” by building network-enabled office space to attract Internet entrepreneurs. Communication providers, convinced that the future economy would require ubiquitous broadband access, went deeply into debt to improve their networks with high-speed equipment and fiber optic cables. Companies that produced network equipment, like Nortel Networks, were irrevocably damaged by such over-extension; Nortel declared bankruptcy in early 2009. Companies like Cisco, which had no production facilities of their own but bought from other manufacturers, were able to adjust quickly and actually did well out of the situation as the bubble burst and products were sold cheaply.

In the struggle to become a technology hub, many cities and states used tax money to fund technology conference centers, advanced infrastructure, and created favorable business and tax law to encourage development of the dotcom industry in their locale. Virginia’s Dulles Technology Corridor is a prime example of this activity. Large quantities of high-speed fiber links were laid, and the State and local governments gave tax exemptions to technology firms. Many of these buildings could be viewed along I-495, after the burst, as vacant office buildings.

Similarly, in Europe the vast amounts of cash the mobile operators spent on 3G licences in Germany, Italy, and the United Kingdom, for example, led them into deep debt. The investments were far out of proportion to both their current and projected cash flow, but this was not publicly acknowledged until as late as 2001 and 2002. Due to the highly networked nature of the IT industry, this quickly led to problems for small companies dependent on contracts from operators. One example is the Finnish mobile network operator Sonera, which paid huge sums in the German auction for what were then dubbed 3G licenses. Third-generation networks, however, took years to catch on, and Sonera ended up as part of TeliaSonera, later simply Telia.

Aftermath

On January 10, 2000, America Online (now Aol.), a favorite of dot-com investors and pioneer of dial-up Internet access, announced plans to merge with Time Warner, the world’s largest media company, in the second-largest M&A transaction worldwide. The transaction has been described as “the worst in history”. Within two years, boardroom disagreements drove out both of the CEOs who made the deal, and in October 2003 AOL Time Warner dropped “AOL” from its name.

On March 10, 2000 the NASDAQ peaked at 5,132.52 intraday before closing at 5,048.62. Afterwards, the NASDAQ fell as much as 78%.

Several communication companies could not weather the financial burden and were forced to file for bankruptcy. One of the more significant players, WorldCom, was found to have engaged in illegal accounting practices, exaggerating its profits year after year. WorldCom was one of the last standing combined competitive local exchange and inter-exchange companies, and it struggled to survive after the implementation of the Telecommunications Act of 1996. That Act favored the incumbents formerly known as Regional Bell Operating Companies (RBOCs) and led to the demise of competition and the rise of consolidation, culminating in a present-day oligopoly ruled by the lobbyist-saturated powerhouses AT&T and Verizon.

WorldCom’s stock price fell drastically when this information went public, and it eventually filed the third-largest corporate bankruptcy in U.S. history. Other examples include NorthPoint Communications, Global Crossing, JDS Uniphase, XO Communications, and Covad Communications. Companies such as Nortel, Cisco, and Corning were at a disadvantage because they relied on infrastructure that was never built, which caused Corning’s stock to drop significantly.

Many dot-coms ran out of capital and were acquired or liquidated; the domain names were picked up by old-economy competitors, speculators or cybersquatters. Several companies and their executives were accused or convicted of fraud for misusing shareholders’ money, and the U.S. Securities and Exchange Commission fined top investment firms like Citigroup and Merrill Lynch millions of dollars for misleading investors. Various supporting industries, such as advertising and shipping, scaled back their operations as demand for their services fell. A few large dot-com companies, such as Amazon.com, eBay, and Google have become industry-dominating mega-firms.

The stock market crash of 2000–2002 caused the loss of $5 trillion in the market value of companies from March 2000 to October 2002. The September 11, 2001, attacks accelerated the stock market drop; the NYSE suspended trading for four sessions. When trading resumed, some of it was transacted in temporary new locations.

More in-depth analysis shows that 48% of the dot-com companies survived through 2004. From this it is reasonable to conclude that the value lost in the stock market does not map directly onto firm closures; even companies categorized as “small players” proved resilient enough to endure the financial destruction of 2000–2002. Additionally, retail investors who felt burned by the burst shifted their portfolios to more cautious positions.

Nevertheless, laid-off technology experts, such as computer programmers, found a glutted job market. University degree programs for computer-related careers saw a noticeable drop in new students. Anecdotes of unemployed programmers going back to school to become accountants or lawyers were common.

Turning to the long-term legacy of the bubble, Fred Wilson, who was a venture capitalist during it, said:

“A friend of mine has a great line. He says ‘Nothing important has ever been built without irrational exuberance’. Meaning that you need some of this mania to cause investors to open up their pocketbooks and finance the building of the railroads or the automobile or aerospace industry or whatever. And in this case, much of the capital invested was lost, but also much of it was invested in a very high throughput backbone for the Internet, and lots of software that works, and databases and server structure. All that stuff has allowed what we have today, which has changed all our lives… that’s what all this speculative mania built”.

As the technology boom receded, consolidation and growth by market leaders caused the tech industry to come to more closely resemble other traditional U.S. sectors. As of 2014, ten information technology firms are among the 100 largest U.S. corporations by revenues: Apple, Hewlett-Packard, IBM, Microsoft, Amazon.com, Google, Intel, Cisco Systems, Ingram Micro, and Oracle.

Conclusion

The dot-com bubble of the 1990s and early 2000s was characterized by a new technology that created a new market with many potential products and services, and by highly opportunistic investors and entrepreneurs who were blinded by early successes. After the crash, both companies and markets became far more cautious about investing in new technology ventures. That said, the current popularity of mobile devices such as smartphones and tablets, their almost limitless possibilities, and a string of recent successful tech IPOs will give life to a whole new generation of companies looking to capitalize on this new market. Let’s see if investors and entrepreneurs are a bit more sensible this time around.

Dotcom Bubblebursts – The History of Domain Names

Dot-com bubble bursts

Date: 01/01/2000

A year ago Americans could hardly run an errand without picking up a stock tip. Day-trading manuals were selling briskly. Neighbors were speaking a foreign tongue, carrying on about B2Bs and praising the likes of JDS Uniphase and Qualcomm. Venture capital firms were throwing money at any and all dot-coms to help them build market share, never mind whether they could ever be profitable. It was a brave new era, in which more than a dozen fledgling dot-coms that nobody had ever heard of could pay $2 million of other people’s money for a Super Bowl commercial.

What a difference a year makes. The Nasdaq sank. Stock tips have been replaced with talk of recession. Many pioneering dot-coms are out of business or barely surviving. The Dow Jones Internet Index, made up of dot-com blue chips, is down more than 72 percent since March. Online retailers Priceline and eToys, former Wall Street darlings, have seen their stock prices fall more than 99 percent from their highs.

Unlike the worrisome decline in the stock prices of solidly grounded technology firms due to a slowdown in profits, the sharper plunge taken by some of the trendy Internet companies that had no earnings in the first place has proved comforting to those who believe in the rationality of markets. After all, many of them lacked one key asset — a sensible business plan. Even the most traditional brokers and investment banks set aside the notion that a company’s stock price should reflect its profits and urged investors not to miss out on the gold rush. At the craze’s zenith, Priceline, the money-losing online ticket seller, was worth more than the airlines that provided its inventory.

The current sense of despair in the dot-com universe may be as overdone as last year’s euphoria. The Internet, after all, really is a transforming technology that has revolutionized the way we communicate. What recent months suggest, however, is that it may not be an indiscriminate, magical new means of making money.

Woeful tales of visionary innovators failing to capitalize on their revolutionary new technology are not new. The advent of railroads, the automobile and radio, to name other watershed innovations in history, also led to many a shattered dream. The number of failed auto makers far exceeded the number that ultimately succeeded.

In this holiday season, the financial implosion of so many dot-com retailers seems particularly cruel. It is not as if consumers do not appreciate shopping online. Online sales this season are expected to be about two-thirds greater than last year. But it is not the innovators who are reaping all the benefits. Online retailers are losing market share to the likes of Wal-Mart and Kmart. This holiday season, online sales of traditional retailers, initially hesitant to embrace the Web, will outpace those of the pure dot-coms for the first time.

The endearing Pets.com sock puppet is a fitting mascot for the demise of the dot-com mania. Less than a year ago the spokesdog for the online pet-supply retailer was starring in some of those $2 million Super Bowl commercials. Now, in the wake of his master’s bankruptcy, he is looking to shill for another company — one that can actually make money.

Factors That Led to the Dot-Com Bubble Burst

There were two primary factors that led to the burst of the Internet bubble:

The Use of Metrics That Ignored Cash Flow. Many analysts focused on aspects of individual businesses that had nothing to do with how they generated revenue or their cash flow. For example, one theory is that the Internet bubble burst due to a preoccupation with “network theory,” which stated that the value of a network increased exponentially as the number of nodes (computers hosting the network) increased. Although this concept made sense, it neglected one of the most important aspects of valuing the network: the ability of the company to use the network to generate cash and produce profits for investors.
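
This valuation logic can be sketched numerically. Metcalfe's law is one common formulation of the network-value idea (the choice of formula here is mine; the text above only says "exponentially"): it values a network in proportion to the square of its node count, and the sketch below shows how quickly that figure runs away from any linear revenue measure. All numbers are illustrative.

```python
# A sketch contrasting network-based "value" with linear revenue.
# Metcalfe-style value grows as n**2, while revenue here simply
# scales with users -- illustrating how the metric ignores cash flow.

def metcalfe_value(nodes: int, k: float = 1.0) -> float:
    """Network 'value' proportional to the square of the node count."""
    return k * nodes ** 2

def linear_revenue(users: int, arpu: float = 1.0) -> float:
    """Revenue that scales linearly with the number of users."""
    return arpu * users

for n in (1_000, 10_000, 100_000):
    print(n, metcalfe_value(n), linear_revenue(n))
```

Each tenfold increase in users multiplies the "network value" a hundredfold while revenue grows only tenfold, which is exactly the gap where cash flow dropped out of the analysis.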

Significantly Overvalued Stocks. In addition to focusing on unnecessary metrics, analysts used very high multipliers in their models and formulas for valuing Internet companies, which resulted in unrealistic and overly optimistic values. Although more conservative analysts disagreed, their recommendations were virtually drowned out by the overwhelming hype in the financial community around Internet stocks.

Avoiding Another Internet Bubble

Considering that the last Internet bubble cost investors trillions, getting caught in another is among the last things an investor would want to do. In order to make better investment decisions (and avoid repeating history), there are important considerations to keep in mind.

  1. Popularity Does Not Equal Profit

Sites such as Facebook and Twitter have received a ton of attention, but that does not mean they are worth investing in. Rather than focusing on which companies have the most buzz, it is better to investigate whether a company follows solid business fundamentals.

Although hot Internet stocks will often do well in the short-term, they may not be reliable as long-term investments. In the long run, stocks usually need a strong revenue source to perform well as investments.

  2. Many Companies Are Too Speculative

Companies are appraised by measuring their future profitability. However, speculative investments can be dangerous, as valuations are sometimes overly optimistic. This may be the case for Facebook. Given that Facebook may be making less than $1 billion a year in profits, it is hard to justify valuing the company at $100 billion.

Never invest in a company based solely on the hopes of what might happen unless it’s backed by real numbers. Instead, make sure you have strong data to support that analysis – or, at least, some reasonable expectation for improvement.

  3. Sound Business Models Are Essential

In contrast to Facebook, Twitter does not have a profitable business model, or any true method to make money. Many investors were not realistic concerning revenue growth during the first Internet bubble, and this is a mistake that should not be repeated. Never invest in a company that lacks a sound business model, much less a company that hasn’t even figured out how to generate revenue.

  4. Basic Business Fundamentals Cannot Be Ignored

When determining whether to invest in a specific company, there are several solid financial variables that must be examined, such as the company’s overall debt, profit margin, dividend payouts, and sales forecasts. In other words, it takes a lot more than a good idea for a company to be successful. For example, MySpace was a very popular social networking site that ended up losing over $1 billion between 2004 and 2010.

Dotcom Domains – The History of Domain Names

Fewer than 15,000 .com domains were registered

Date: 01/01/1992

By 1992, fewer than 15,000 .com domains were registered.

On 15 March 1985, the first commercial Internet domain name (.com) was registered under the name Symbolics.com by Symbolics Inc., a computer systems firm in Cambridge, Massachusetts.

In December 2009 there were 192 million domain names. A large fraction of them were in the .com TLD, which as of March 15, 2010 had 84 million domain names, including 11.9 million online business and e-commerce sites, 4.3 million entertainment sites, 3.1 million finance-related sites, and 1.8 million sports sites.

Domain name registration

The right to use a domain name is delegated by domain name registrars which are accredited by the Internet Corporation for Assigned Names and Numbers (ICANN), the organization charged with overseeing the name and number systems of the Internet. In addition to ICANN, each top-level domain (TLD) is maintained and serviced technically by an administrative organization operating a registry. A registry is responsible for maintaining the database of names registered within the TLD it administers. The registry receives registration information from each domain name registrar authorized to assign names in the corresponding TLD and publishes the information using a special service, the whois protocol.

Registries and registrars usually charge an annual fee for the service of delegating a domain name to a user and providing a default set of name servers. Often this transaction is termed a sale or lease of the domain name, and the registrant may sometimes be called an “owner”, but no such legal relationship is actually associated with the transaction, only the exclusive right to use the domain name. More correctly, authorized users are known as “registrants” or as “domain holders”.

DOD Registrationservices – The History of Domain Names

Department of Defense would no longer fund registration services

Date: 01/01/1992

By the 1990s, most of the growth of the Internet was in the non-defense sector, and even outside the United States. Therefore, the US Department of Defense would no longer fund registration services outside of the mil domain.

The National Science Foundation started a competitive bidding process in 1992; subsequently, in 1993, NSF created the Internet Network Information Center, known as InterNIC, to extend and coordinate directory and database services and information services for the NSFNET, and to provide registration services for non-military Internet participants. NSF awarded the contract to manage InterNIC to three organizations: Network Solutions provided registration services, AT&T provided directory and database services, and General Atomics provided information services. General Atomics was disqualified from the contract in December 1994 after a review found that its services did not conform to the standards of its contract. General Atomics’ InterNIC functions were assumed by AT&T.

Beginning in 1996, Network Solutions rejected domain names containing English language words on a “restricted list” through an automated filter. Applicants whose domain names were rejected received an email containing the notice: “Network Solutions has a right founded in the First Amendment to the U.S. Constitution to refuse to register, and thereby publish, on the Internet registry of domain names words that it deems to be inappropriate.” Domain names such as “shitakemushrooms.com” would be rejected, but the domain name “shit.com” was active since it had been registered before 1996.

Network Solutions eventually allowed domain names containing the words on a case-by-case basis, after manually reviewing the names for obscene intent. This profanity filter was never enforced by the government and its use was not continued by ICANN when it took over governance of the distribution of domain names to the public.

DOD – The History of Domain Names

DoD Internet Host Table Specification, Hostname Server, Domain Naming Convention for Internet User Applications, Distributed System for Internet Name Service

Date: 03/01/1982

DOD INTERNET HOST TABLE SPECIFICATION

The ARPANET Official Network Host Table, as outlined in RFC 608, no longer suits the needs of the DoD community, nor does it follow a format suitable for internetting.  This paper specifies a new host table format applicable to both ARPANET and Internet needs. In addition to host name to host address translation and selected protocol information, we have also included network and gateway name to address correspondence, and host operating system information. This Host Table is utilized by the DoD Host Name Server maintained by the ARPANET Network Information Center (NIC) on behalf of the Defense Communications Agency (DCA) (RFC 811).  It obsoletes the host table described in RFC 608.

HOSTNAME SERVER

The NIC Internet Hostname Server is a TCP-based host information program and protocol running on the SRI-NIC machine. It is one of a series of internet name services maintained by the DDN Network Information Center (NIC) at SRI International on behalf of the Defense Communications Agency (DCA). The function of this particular server is to deliver machine-readable name/address information describing networks, gateways, hosts, and eventually domains, within the internet environment. As currently implemented, the server provides the information outlined in the DoD Internet Host Table Specification.

Protocol

To access this server from a program, establish a TCP connection to port 101 (decimal) at the service host, SRI-NIC.ARPA (26.0.0.73 or 10.0.0.51).  Send the information request (a single line), and read the resulting response.  The connection is closed by the server upon completion of the response, so only one request can be made for each connection.
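The server’s replies carry entries in the colon-delimited host table format of RFC 810. A simplified parser sketch (field order is taken from the specification; the sample entry is illustrative, not a real table record):

```python
def parse_host_entry(line: str) -> dict:
    """Parse a simplified RFC 810-style host table entry of the form:
    HOST : addresses : names : cputype : opsys : protocols :
    Returns the fields as a dictionary, splitting comma-separated lists."""
    fields = [f.strip() for f in line.strip(" :").split(":")]
    keyword, addrs, names, cpu, opsys, protos = fields
    return {
        "keyword": keyword,
        "addresses": addrs.split(","),
        "names": names.split(","),
        "cputype": cpu,
        "opsys": opsys,
        "protocols": protos.split(","),
    }

entry = "HOST : 26.0.0.73,10.0.0.51 : SRI-NIC,NIC : DEC-2060 : TOPS20 : TCP/TELNET,TCP/FTP :"
print(parse_host_entry(entry)["addresses"])  # ['26.0.0.73', '10.0.0.51']
```

Because the server closes the TCP connection after each response, a client would open a fresh connection to port 101 for every query and parse each returned entry in this fashion.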

The Domain Naming Convention for Internet User Applications

Introduction

For many years, the naming convention “<user>@<host>” has served the ARPANET user community for its mail system, and the substring “<host>” has been used for other applications such as file transfer (FTP) and terminal access (Telnet). With the advent of network interconnection, this naming convention needs to be generalized to accommodate internetworking. A decision has recently been reached to replace the simple name field, “<host>”, by a composite name field, “<domain>”. This note is an attempt to clarify this generalized naming convention, the Internet Naming Convention, and to explore the implications of its adoption for Internet name service and user applications.

The following example illustrates the changes in naming convention:

ARPANET Convention:   Fred@ISIF

Internet Convention:  Fred@F.ISI.ARPA

The intent is that the Internet names be used to form a tree-structured administrative dependent, rather than a strictly topology dependent, hierarchy.  The left-to-right string of name components proceeds from the most specific to the most general, that is, the root of the tree, the administrative universe, is on the right. The name service for realizing the Internet naming convention is assumed to be application independent.  It is not a part of any particular application, but rather an independent name service serves different user applications.
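The specific-to-general ordering described above can be made concrete with a short sketch (the helper function is illustrative, not part of any RFC):

```python
def parse_internet_name(mailbox: str):
    """Split an Internet-convention mailbox into its user part and the
    domain hierarchy, ordered from the root (most general) downward."""
    user, _, domain = mailbox.partition("@")
    labels = domain.split(".")
    # The root of the administrative tree is the rightmost label.
    return user, list(reversed(labels))

user, hierarchy = parse_internet_name("Fred@F.ISI.ARPA")
print(user)       # Fred
print(hierarchy)  # ['ARPA', 'ISI', 'F']
```

Reading the reversed list from left to right walks the tree from the administrative universe (ARPA) down to the most specific component (F), which is exactly how delegation of name queries proceeds.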

A Distributed System for Internet Name Service

INTRODUCTION

For many years, the ARPANET Naming Convention “<user>@<host>” has served its user community for its mail system. The substring “<host>” has been used for other user applications such as file transfer (FTP) and terminal access (Telnet). With the advent of network interconnection, this naming convention needs to be generalized to accommodate internetworking. The Internet Naming Convention describes a hierarchical naming structure for serving Internet user applications such as SMTP for electronic mail, FTP and Telnet for file transfer and terminal access. It is an integral part of the network facility generalization to accommodate internetworking. Realization of the Internet Naming Convention requires the establishment of both naming authority and name service. In this document, we propose an architecture for a distributed System for Internet Name Service (SINS). We assume the reader’s familiarity with RFC 819, which describes the Internet Naming Convention.

Internet Name Service provides a network service for name resolution and resource negotiation for the establishment of direct communication between a pair of source and destination application processes. The source application process is assumed to be in possession of the destination name. In order to establish communication, the source application process requests name service. The SINS resolves the destination name to its network address, and provides negotiation for network resources. Upon completion of successful name service, the source application process provides the destination address to the transport service for establishing direct communication with the destination application process.

Overview

SINS is a distributed system for name service. It logically consists of two parts: the domain name service and the application interface. The domain name service is an application-independent network service for the resolution of domain names. This resolution is provided through the cooperation among a set of domain name servers (DNSs). With each domain is associated a DNS. The reader is referred to the specification of a domain name server. As noted earlier, a domain is an administrative but not necessarily a topological entity. It is represented in the networks by its associated DNS. The resolution of a domain name results in the address of its associated DNS.

Domain Arpa – The History of Domain Names

Domain Arpa

The domain arpa was the first Internet top-level domain. It was intended to be used only temporarily, aiding in the transition of traditional ARPANET host names to the domain name system. However, after it had been used for reverse DNS lookup, it was found impractical to retire it.

The domain name arpa is a top-level domain (TLD) in the Domain Name System of the Internet. It is used exclusively for technical infrastructure purposes. While the name was originally the acronym for the Advanced Research Projects Agency (ARPA), the funding organization in the United States that developed one of the precursors of the Internet (ARPANET), it now stands for Address and Routing Parameter Area.

arpa also contains the domains for reverse domain name resolution in-addr.arpa and ip6.arpa for IPv4 and IPv6, respectively.

Types

As of 2015, IANA distinguishes the following groups of top-level domains:

infrastructure top-level domain (ARPA)
generic top-level domains (gTLD)
restricted generic top-level domains (grTLD)
sponsored top-level domains (sTLD)
country code top-level domains (ccTLD)
test top-level domains (tTLD)
History

The arpa top-level domain was the first domain installed in the Domain Name System (DNS). It was originally intended to be a temporary domain to facilitate the transition of the ARPANET host naming conventions and the host table distribution methods to the Domain Name System. The ARPANET was one of the predecessors to the Internet, established by the United States Department of Defense Advanced Research Projects Agency (DARPA). When the Domain Name System was introduced in 1985, ARPANET host names were initially converted to domain names by adding the arpa domain name label to the end of the existing host name, separated with a full stop (i.e., a period). Domain names of this form were subsequently rapidly phased out by replacing them with domain names under the newly introduced, categorized top-level domains.

After arpa had served its transitional purpose, it proved impractical to remove the domain, because in-addr.arpa was used for reverse DNS lookup of IP addresses. For example, the mapping of the IP address 145.97.39.155 to a host name is obtained by issuing a DNS query for a pointer record of the domain name 155.39.97.145.in-addr.arpa.
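The in-addr.arpa name is constructed mechanically: reverse the four octets of the address and append the in-addr.arpa suffix. A minimal sketch (the helper name is illustrative):

```python
def reverse_pointer(ipv4: str) -> str:
    """Build the in-addr.arpa domain name used for reverse DNS
    lookup of an IPv4 address: octets reversed, suffix appended."""
    octets = ipv4.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_pointer("145.97.39.155"))  # 155.39.97.145.in-addr.arpa
```

Reversing the octets puts the most general part of the address (the network) closest to the root of the DNS tree, mirroring how forward domain names are delegated.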

It was intended that new infrastructure databases would be created in the top-level domain int. However, in May 2000 this policy was reversed, and it was decided that arpa should be retained for this purpose, and int should be used solely by international organizations.[3] In accordance with this new policy, arpa now officially stands for Address and Routing Parameter Area (a backronym).

In March 2010, the arpa zone was signed with DNSSEC.

Domain Name Concept – The History of Domain Names

The Domain Name Concept

Date: 01/01/1982

A domain name is the human-friendly name that we are used to associating with an internet resource. A domain name is an identification string that defines a realm of administrative autonomy, authority, or control within the Internet. Domain names are formed by the rules and procedures of the Domain Name System (DNS). Any name registered in the DNS is a domain name. Domain names are used in various networking contexts and application-specific naming and addressing purposes. In general, a domain name represents an Internet Protocol (IP) resource, such as a personal computer used to access the Internet, a server computer hosting a web site, or the web site itself or any other service communicated via the Internet. By 2015, 294 million domain names had been registered.

Domain names are organized in subordinate levels (subdomains) of the DNS root domain, which is nameless. The first-level set of domain names are the top-level domains (TLDs), including the generic top-level domains (gTLDs), such as the prominent domains com, info, net, edu, and org, and the country code top-level domains (ccTLDs). Below these top-level domains in the DNS hierarchy are the second-level and third-level domain names that are typically open for reservation by end-users who wish to connect local area networks to the Internet, create other publicly accessible Internet resources or run web sites. The registration of these domain names is usually administered by domain name registrars who sell their services to the public. A fully qualified domain name (FQDN) is a domain name that is completely specified with all labels in the hierarchy of the DNS, having no parts omitted. Labels in the Domain Name System are case-insensitive, and may therefore be written in any desired capitalization method, but most commonly domain names are written in lowercase in technical contexts.
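Because labels are case-insensitive, two spellings of the same name compare equal once normalized. A small sketch of such a comparison (assumes plain ASCII labels; helper name is illustrative):

```python
def same_domain(a: str, b: str) -> bool:
    """DNS labels are case-insensitive, so compare two domain names
    after lowercasing, stripping any trailing root dot, and splitting
    into labels."""
    def norm(d: str):
        return d.rstrip(".").lower().split(".")
    return norm(a) == norm(b)

print(same_domain("EN.Wikipedia.ORG", "en.wikipedia.org."))  # True
print(same_domain("example.com", "example.org"))             # False
```

The trailing dot handled here is the explicit root label of a fully qualified domain name, which is usually omitted in everyday writing.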

Purpose

Domain names serve to identify Internet resources, such as computers, networks, and services, with a text-based label that is easier to memorize than the numerical addresses used in the Internet protocols. A domain name may represent entire collections of such resources or individual instances. Individual Internet host computers use domain names as host identifiers, also called host names. The term host name is also used for the leaf labels in the domain name system, usually without further subordinate domain name space. Host names appear as a component in Uniform Resource Locators (URLs) for Internet resources such as web sites (e.g., en.wikipedia.org).

Domain names are also used as simple identification labels to indicate ownership or control of a resource. Such examples are the realm identifiers used in the Session Initiation Protocol (SIP), the Domain Keys used to verify DNS domains in e-mail systems, and in many other Uniform Resource Identifiers (URIs). An important function of domain names is to provide easily recognizable and memorizable names to numerically addressed Internet resources. This abstraction allows any resource to be moved to a different physical location in the address topology of the network, globally or locally in an intranet. Such a move usually requires changing the IP address of a resource and the corresponding translation of this IP address to and from its domain name. Domain names are used to establish a unique identity. Organizations can choose a domain name that corresponds to their name, helping Internet users to reach them easily. A generic domain is a name that defines a general category, rather than a specific or personal instance, for example, the name of an industry, rather than a company name. Some examples of generic names are books.com, music.com, and travel.info. Companies have created brands based on generic names, and such generic domain names may be valuable. Domain names are often simply referred to as domains and domain name registrants are frequently referred to as domain owners, although domain name registration with a registrar does not confer any legal ownership of the domain name, only an exclusive right of use for a particular duration of time. The use of domain names in commerce may subject them to trademark law.

Concept

1982 January As described in Computer Mail Meeting Notes, RFC 805, it was initially the need for a real-world solution to the complexity of email relaying that triggered the development of the domain concept. A group of ARPANET researchers, principals, and related parties held a meeting in January 1982 to discuss a solution for email relaying. As described on the email addresses page, email was often originally sent from site to site to its destination along a path of systems, and might need to go through half a dozen or more links that would connect at certain times of the day.

To send an email to someone, you had to first be a human router and specify a valid path to the destination as part of the address. If you didn’t know a valid route, the software couldn’t help you. In order to solve this problem, domain names were created to provide each person with one address regardless of where email was sent from. As RFC 805 put it, “The hierarchical domain type naming differs from source routing in that the former gives absolute addressing while the latter gives relative addressing”. RFC 805 outlines many of the basic principles of the eventual domain name system, including the need for top-level domains to provide a starting point for delegation of queries, the need for second-level domains to be unique (and therefore the requirement for a registrar type of administration), and the recognition that distribution of individual name servers responsible for each domain would provide administration and maintenance advantages. Within the year, the concept was developed through a series of communications. In March, the hosts table definition was updated with the DoD Internet Host Table Specification, RFC 810, and the NIC’s introduction of a server function to provide individual hostname/address translations was described in Hostnames Server, RFC 811, both documents including the domain concept. In August, The Domain Naming Convention for Internet User Applications, RFC 819, provided an excellent overview of the concept. And then, in October, the full concept of a distributed system of name servers, each serving its local domain, was described in A Distributed System for Internet Name Service, RFC 830, providing the main architectural outlines of the system still in use today.

1982 February 8th The conclusion in this area was that the current “user@host” mailbox identifier should be extended to “user@host.domain” where “domain” could be a hierarchy of domains. – J. Postel; Computer Mail Meeting Notes, RFC 805; 8 Feb 1982. The Domain Name System was originally invented to support the growth of email communications on the ARPANET, and now supports the Internet on a global scale. Alphabetic host names were introduced on the ARPANET shortly after its creation, and greatly increased usability since alphabetic names are much easier to remember than semantically meaningless numeric addresses. Host names were also useful for development of network-aware computer programs, since they could reference a constant host name without concern about changes to the physical address due to network alterations. Of course, the infrastructure of the underlying network was still based on numeric addresses, so each site maintained a “HOSTS.TXT” file that provided a mapping between host names and network addresses in a set of simple text records that could be easily read by a person or program.
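The flat-file lookup that HOSTS.TXT provided, and that the distributed DNS later replaced, can be sketched as a simple in-memory table (the entries below are illustrative, not the real file contents or format):

```python
# Illustrative host table: a flat name-to-address mapping of the kind
# HOSTS.TXT provided before the distributed Domain Name System existed.
HOSTS = {
    "SRI-NIC": "10.0.0.51",
    "ISIF": "10.2.0.22",  # hypothetical entry for illustration
}

def resolve(host: str) -> str:
    """Translate a host name to its numeric address via the flat table.
    Host names were case-insensitive, so normalize before the lookup."""
    return HOSTS[host.upper()]

print(resolve("sri-nic"))  # 10.0.0.51
```

The scaling problem is visible even in this sketch: every site had to fetch and store the entire table, and every new host meant redistributing the whole file, which is what motivated the delegated, hierarchical name service.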

Domain Transfer – The History of Domain Names

Domain Transfer Made Easier

Date: 10/01/2006

Domains can be transferred between registrars. Prior to October 2006, the procedure used by Verisign was complex and unreliable – requiring a notary public to verify the identity of the registrant requesting a domain transfer. In October 2006, a new procedure, requiring the losing registrar to provide an authorization code on instruction from the registrant (also known as EPP code) was introduced by Verisign to reduce the incidence of domain hijacking.
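The authorization-code handshake this procedure relies on is part of the Extensible Provisioning Protocol (EPP, RFC 5730/5731), the protocol registrars use to talk to the registry. A sketch of how a transfer request carrying the auth code could be assembled (the domain name and code below are placeholders):

```python
import xml.etree.ElementTree as ET

EPP = "urn:ietf:params:xml:ns:epp-1.0"
DOM = "urn:ietf:params:xml:ns:domain-1.0"

def transfer_request(domain: str, auth_code: str) -> bytes:
    """Build an EPP <transfer op="request"> command for a domain name,
    including the authInfo password (the "EPP code" the registrant
    obtains from the losing registrar)."""
    epp = ET.Element(f"{{{EPP}}}epp")
    cmd = ET.SubElement(epp, f"{{{EPP}}}command")
    xfer = ET.SubElement(cmd, f"{{{EPP}}}transfer", op="request")
    dx = ET.SubElement(xfer, f"{{{DOM}}}transfer")
    ET.SubElement(dx, f"{{{DOM}}}name").text = domain
    auth = ET.SubElement(dx, f"{{{DOM}}}authInfo")
    ET.SubElement(auth, f"{{{DOM}}}pw").text = auth_code
    return ET.tostring(epp)

xml = transfer_request("example.com", "2fooBAR")
```

The registry accepts the transfer only if the supplied authInfo password matches the one held for the domain, which is what makes the code an effective defense against hijacking.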

2006 September 29th On September 29, 2006, ICANN signed a new agreement with the United States Department of Commerce (DOC) that moves the private organization towards full management of the Internet’s system of centrally coordinated identifiers through the multi-stakeholder model of consultation that ICANN represents.

2007 It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication, by 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.

2007 The temporary reassignment of country code cs (Serbia and Montenegro) until its split into rs and me (Serbia and Montenegro, respectively) led to some controversies about the stability of ISO 3166-1 country codes, resulting in a second edition of ISO 3166-1 in 2007 with a guarantee that retired codes will not be reassigned for at least 50 years, and the replacement of RFC 3066 by RFC 4646 for country codes used in language tags in 2006.

2007 The org domain registry allows the registration of selected internationalized domain names (IDNs) as second-level domains. For German, Danish, Hungarian, Icelandic, Korean, Latvian, Lithuanian, Polish, and Swedish IDNs this has been possible since 2005. Spanish IDN registrations have been possible since 2007.

DomainNames – The History of Domain Names

215 MILLION DOMAIN NAMES REGISTERED AT THE END OF AUGUST

September 3, 2011

The end of the second quarter of this year saw over 215 million domain name registrations across all Top Level Domains (TLDs), an increase of 5.2 million domain names over the first quarter.

According to Verisign’s August 2011 Domain Name Industry Brief, this represented growth of 2.5 percent. Since the second quarter of last year, domain name registrations have increased by more than 16.9 million, or 8.6 percent.

Country Code Top Level Domains (ccTLDs) number 84.6 million, a 3.6 percent increase quarter over quarter, and an 8.4 percent increase year over year.

Among the 20 largest ccTLDs, Brazil, Spain and Australian domain registrations all grew more than 4% based on a quarter to quarter comparison.  The Australian domain name space has been performing particularly well over the last few quarters.

The largest Top Level Domains in terms of base size were .com, .de, .net, .uk, .org, .info, .nl, .cn, .eu and .ru, respectively.

Renewal rates for .com/.net fell during the second quarter of 2011 from 73.8 percent in the first quarter to 73.1 percent.

Verisign estimates that 88 percent of .com and .net domain names resolve to a website; however, 22 percent are one-page websites, a category that includes “under construction” pages, brochure pages, and parked pages, including the revenue-generating parked pages often used by domainers.

Verisign stated its average daily Domain Name System (DNS) query load during the quarter was 56 billion, down 1 percent on the previous quarter; it peaked at 68 billion queries, up 1 percent.

Verisign is a U.S.-based company that operates an array of network infrastructure, including two of the world’s 13 Internet root servers. The company has invested $500 million in Project Titan, which will allow Verisign to increase daily DNS query capacity by a factor of ten, from the current 400 billion queries a day to 4 trillion queries a day.

DomainRegistrations – The History of Domain Names

Domain registrations grow 12% to 240 million

October 2, 2012

Base of registered domains continues to grow.

The total base of registered domain names increased nearly 12% year over year to 240 million at the end of June, VeriSign announced in its quarterly domain industry brief today.

Country code domain name registrations continue to lead the way. There are now over 100 million registered country code domain names, an 18.5% increase year over year.

The total base of .com and .net domains, which are managed by VeriSign, increased 7.8% compared to the same period a year ago and inched up 1.6% compared to the first quarter of this year.

.TK became the second largest country code domain name in terms of registration base. .TK domains (for the country of Tokelau) are given away for free.

DealDash – The History of Domain Names

DealDash Acquires Swoopo

February 8, 2012

Penny auction site DealDash has acquired the Swoopo.com domain name. DealDash was one of the very first penny auction sites to launch in the US (April 2009), originally under the name BidRay. The penny auction site paid 10,000 euros ($13,170 US) to purchase the domain name on Sedo just a few weeks ago.

Swoopo was the world’s first penny auction site, an interesting type of online auction website that had bidders pay for bids.

When browsing to Swoopo.com, a message at the top reads, “Swoopo is no longer in service. DealDash, the longest running fair & honest auction site provides a similar service.”

Demand-Media – The History of Domain Names

Demand Media invests $18 million in new TLDs

May 8, 2012

Demand Media, parent company of eNom, announced today that it has invested $18 million into new top level domain names.

In April 2012, Demand Media invested $18 million in pursuit of its generic Top Level Domain (“gTLD”) initiative, which it believes represents a complementary strategic growth opportunity for its Registrar services.

Given that this refers only to the month of April, when Demand Media would have completed its applications, it’s possible that this is for application fees and related expenses only. That’s a whole lot of top level domains.

Kristen Moore, VP, Corporate Marketing & Communications at Demand Media, tells Domain Name Wire: “As the ICANN application process is not yet completed, we aren’t commenting on the specifics of any applications beyond the size of our investment and our enthusiasm for the opportunity at this time.”

On the investor conference call today, the company said it has committed $18 million in “support” of the program. It has signed two partners that will use its backend system. It also said it “may become a registry in our own right”, i.e., apply for domains itself. Its CFO said it “funded” $18 million in April, which still leads us back to application fees.

Interestingly, by the spirit of the rules, Demand Media shouldn’t be eligible to apply for new TLDs due to multiple UDRP losses. But there are plenty of technicalities to get around that.

Demandmedia – The History of Domain Names

Demand Media Renews Ad Deal With Google

Aug 9, 2011

Web content giant Demand Media is announcing a number of acquisitions and a renewal of its Google ad deal today. First, Demand says that it is renewing and expanding its advertising partnership with Google. Under the terms of the three year agreement, Demand will continue to monetize its properties via AdSense for Content and the DoubleClick Ad Exchange.

The company says Demand has extended its agreement to manage and serve ads through the DoubleClick for Publishers platform and the media company’s properties will be included in ‘premium, brand-safe channels within Google Display Network Reserve,’ according to a release.

Demand and Google haven’t had the rosiest of relationships in the past. Google issued its “Panda” update to search results earlier this year, which aimed to weed out low-quality content sites from search. It was thought that this could affect Demand content’s rank in search results. And in the company’s earnings call in May, Richard Rosenblatt told investors that changes in Google’s algorithm affected eHow’s traffic, with visits to the platform down in the past quarter.

But Demand has promised to clean up its content and is taking measures to improve the quality of the content posted on its family of sites.

Demand has also acquired online advertising and media planning company IndieClick, a company that delivers multi-platform advertising campaigns. IndieClick represents over 300 websites as their exclusive advertising sales and technology partner and focuses on the 13-35-year-old demographic. The company provides the complete outsourcing of sales, business processes, and ad serving technology for media publications.

And lastly, Demand is buying social media company RSS Graffiti, which helps publishers and brand program their Facebook pages with content from websites, blogs, Twitter, YouTube and more. The startup basically simplifies and automates the posting of updates to Facebook pages. Publishers configure the application to check their websites, blogs, Twitter streams and other social platforms. The application then automatically posts the content to the publisher’s Facebook page and to the activity stream of friends and fans at scheduled intervals.

The company also reported earnings for the quarter, beating analyst expectations. Demand’s non-GAAP revenue increased 34% to $76.6 million, from $57.3 million in Q2 2010. Net income came in at $5.0 million, an increase of 43% compared with $3.5 million in Q2 2010. Net income per share was $0.06, up 50% compared with $0.04 in Q2 2010.