DVD – The History of Domain Names

Netflix purchases DVD.com

March 30, 2012

Netflix has confirmed its purchase of DVD.com. A spokesperson tells Domain Name Wire “Netflix cares about keeping DVD healthy, and this is just one small investment in keeping DVD healthy.”

The nameservers and registrant information for DVD.com changed recently, and the domain began forwarding to DVD.netflix.com, the first signs that Netflix had purchased the domain.

Sometime around March 25, the nameservers changed from worldnic.com to ultradns.net, the same DNS provider Netflix uses.

The domain’s whois record also changed from a Network Solutions private registration to a DNStinations registration. DNStinations is essentially a whois proxy service for the brand protection company MarkMonitor, which has done work for Netflix in the past, including registering domains related to its short-lived Qwikster name.

Dupont – The History of Domain Names

Dupont – dupont.com was registered

Date: 07/27/1987

On July 27, 1987, DuPont registered the dupont.com domain name, making it the 81st .com domain ever registered.

E. I. du Pont de Nemours and Company, commonly referred to as DuPont, is an American conglomerate that was founded in July 1802 as a gunpowder mill by the French émigré Éleuthère Irénée du Pont. In the 20th century, DuPont developed many polymers such as Vespel, neoprene, nylon, Corian, Teflon, Mylar, Kevlar, Zemdrain, M5 fiber, Nomex, Tyvek, Sorona, Corfam, and Lycra. DuPont developed Freon (chlorofluorocarbons) for the refrigerant industry, and later more environmentally friendly refrigerants. It also developed synthetic pigments and paints including ChromaFlair.

In 2014, DuPont was the world’s fourth largest chemical company based on market capitalization and eighth based on revenue. Its stock price is a component of the Dow Jones Industrial Average.

History

Establishment: 1802

DuPont was founded in 1802 by Éleuthère Irénée du Pont, using capital raised in France and gunpowder machinery imported from France. The company was started at the Eleutherian Mills, on the Brandywine Creek, near Wilmington, Delaware, two years after he and his family left France to escape the French Revolution and the religious persecution of Huguenot Protestants. It began as a manufacturer of gunpowder, as du Pont noticed that the industry in North America was lagging behind Europe. The company grew quickly, and by the mid-19th century had become the largest supplier of gunpowder to the United States military, supplying half the powder used by the Union Army during the American Civil War. The Eleutherian Mills site is now a museum and a National Historic Landmark.

Expansion: 1902 to 1912

DuPont continued to expand, moving into the production of dynamite and smokeless powder. In 1902, DuPont’s president, Eugene du Pont, died, and the surviving partners sold the company to three great-grandsons of the original founder. Charles Lee Reese was appointed as director and the company began centralizing its research departments. The company subsequently purchased several smaller chemical companies, and in 1912 these actions gave rise to government scrutiny under the Sherman Antitrust Act. The courts declared that the company’s dominance of the explosives business constituted a monopoly and ordered divestment. The court ruling resulted in the creation of the Hercules Powder Company (later Hercules Inc. and now part of Ashland Inc.) and the Atlas Powder Company (purchased by Imperial Chemical Industries (ICI) and now part of AkzoNobel). At the time of divestment, DuPont retained the single-base nitrocellulose powders, while Hercules held the double-base powders combining nitrocellulose and nitroglycerine. DuPont subsequently developed the Improved Military Rifle (IMR) line of smokeless powders.

In 1910, DuPont published a brochure entitled “Farming with Dynamite”. The instructional pamphlet outlined the benefits of using its dynamite products on stumps and other obstacles that were far easier to remove with explosives than by conventional means. DuPont also established two of the first industrial laboratories in the United States, where it began work on cellulose chemistry, lacquers and other non-explosive products. DuPont Central Research was established at the DuPont Experimental Station, across the Brandywine Creek from the original powder mills.

Automotive investments: 1914

In 1914, Pierre S. du Pont invested in the fledgling automobile industry, and the company went on to acquire a substantial stake in General Motors. In the 1920s, DuPont continued its emphasis on materials science, hiring Wallace Carothers to work on polymers in 1928. Carothers invented neoprene, a synthetic rubber; the first polyester superpolymer; and, in 1935, nylon. The invention of Teflon followed a few years later. DuPont introduced phenothiazine as an insecticide in 1935.

Second World War: 1941 to 1945

DuPont ranked 15th among United States corporations in the value of wartime production contracts. As the inventor and manufacturer of nylon, DuPont helped produce the raw materials for parachutes, powder bags, and tires. DuPont also played a major role in the Manhattan Project in 1943, designing, building and operating the plutonium-production plant in Hanford, Washington. In 1950, DuPont also agreed to build the Savannah River Plant in South Carolina as part of the effort to create a hydrogen bomb.

Space Age developments: 1950 to 1970

After the war, DuPont continued its emphasis on new materials, developing Mylar, Dacron, Orlon, and Lycra in the 1950s, and Tyvek, Nomex, Qiana, Corfam, and Corian in the 1960s. DuPont materials were critical to the success of the Apollo program of the United States space effort. DuPont has also been the key company behind the development of modern body armor. In the Second World War, DuPont’s ballistic nylon was used by Britain’s Royal Air Force to make flak jackets. With the development of Kevlar in the 1960s, DuPont began tests to see if it could resist a lead bullet. This research ultimately led to the bullet-resistant vests that are a mainstay of police and military units in the industrialized world.

Conoco holdings: 1981 to 1999

In 1981, DuPont acquired Conoco Inc., a major American oil and gas producing company that gave it a secure source of petroleum feedstocks needed for the manufacturing of many of its fiber and plastics products. The acquisition, which made DuPont one of the top ten U.S.-based petroleum and natural gas producers and refiners, came about after a bidding war with the giant distillery Seagram Company Ltd., which would become DuPont’s largest single shareholder with four seats on the board of directors. On April 6, 1995, after being approached by Seagram Chief Executive Officer Edgar Bronfman, Jr., DuPont announced a deal in which the company would buy back all the shares owned by Seagram.

In 1999, DuPont sold all of its shares of Conoco, which merged with Phillips Petroleum Company, and acquired the Pioneer Hi-Bred agricultural seed company.

Activities, 2000–present

DuPont describes itself as a global science company that employs more than 60,000 people worldwide and has a diverse array of product offerings. The company ranks 86th in the Fortune 500 on the strength of nearly $36 billion in revenue and $4.848 billion in profit in 2013. In April 2014, Forbes ranked DuPont 171st on its Global 2000, the listing of the world’s top public companies.

DuPont businesses are organized into the following five categories, known as marketing “platforms”: Electronic and Communication Technologies, Performance Materials, Coatings and Color Technologies, Safety and Protection, and Agriculture and Nutrition.

The agriculture division, DuPont Pioneer, makes and sells hybrid seed and genetically modified seed, some of which goes on to become genetically modified food. Genes engineered into its products include LibertyLink, which provides resistance to Bayer’s Ignite/Liberty herbicides; the Herculex I insect protection gene, which provides protection against various insects; the Herculex RW insect protection trait, which protects against other insects; the YieldGard Corn Borer gene, which provides resistance to another set of insects; and the Roundup Ready Corn 2 trait, which provides crop resistance to glyphosate herbicides. In 2010, DuPont Pioneer received approval to start marketing Plenish soybeans, which contain “the highest oleic acid content of any commercial soybean product, at more than 75 percent. Plenish provides a product with no trans fat, 20 percent less saturated fat than regular soybean oil, and more stable oil with greater flexibility in food and industrial applications.” Plenish is genetically engineered to “block the formation of enzymes that continue the cascade downstream from oleic acid (that produces saturated fats), resulting in an accumulation of the desirable monounsaturated acid.”

In 2004, the company sold its textiles business, which included some of its best-known brands such as Lycra (Spandex), Dacron polyester, Orlon acrylic, Antron nylon and Thermolite, to Koch Industries.

In 2011, DuPont was the largest producer of titanium dioxide in the world, primarily provided as a white pigment used in the paper industry.

DuPont has 150 research and development facilities located in China, Brazil, India, Germany, and Switzerland with an average investment of $2 billion annually in a diverse range of technologies for many markets including agriculture, genetic traits, biofuels, automotive, construction, electronics, chemicals, and industrial materials. DuPont employs more than 10,000 scientists and engineers around the world.

On January 9, 2011, DuPont announced that it had reached an agreement to buy Danish company Danisco for US$6.3 billion. On May 16, 2011, DuPont announced that its tender offer for Danisco had been successful and that it would proceed to redeem the remaining shares and delist the company.

On May 1, 2012, DuPont announced that it had acquired from Bunge full ownership of the Solae joint venture, a soy-based ingredients company. DuPont previously owned 72 percent of the joint venture while Bunge owned the remaining 28 percent.

In February 2013, DuPont Performance Coatings was sold to the Carlyle Group and rebranded as Axalta Coating Systems.

Chemours

In October 2013, DuPont announced that it was planning to spin off its Performance Chemicals business into a new publicly traded company in mid-2015. The company filed its initial Form 10 with the SEC in December 2014 and announced that the new company would be called The Chemours Company. The spin-off to DuPont shareholders was completed on July 1, 2015, and Chemours stock began trading on the New York Stock Exchange the same day. DuPont said it would focus on production of GMO seeds, materials for solar panels, and alternatives to fossil fuels. Chemours became responsible for the cleanup of 171 former DuPont sites, which DuPont estimated would cost between $295 million and $945 million.

Merger with Dow

On December 11, 2015, DuPont announced that it would merge with the Dow Chemical Company, in an all-stock deal. The combined company, which will be known as DowDuPont, will have an estimated value of $130 billion, be equally held by the shareholders of both companies, and maintain their headquarters in Delaware and Michigan respectively. Within two years of the merger’s closure, expected in late 2016 and subject to regulatory approval, DowDuPont will be split into three separate public companies, focusing on the agricultural chemicals, materials science, and specialty product industries. Commentators have questioned the economic viability of this plan because, of the three companies, only the specialty products industry has prospects for high growth. The outlook on the profitability of the other two proposed companies has been questioned due to reduced crop prices and lower margins on plastics such as polyethylene. They have also noted that the deal is likely to face antitrust scrutiny in several countries.

Dudu – The History of Domain Names

Dudu.com Sold for $1 million

January 5, 2012

A Dubai-based social networking site, Dudu, has paid $1 million for dudu.com, making one Chinese domainer a very happy man indeed. Sedo brokered the deal over three months and announced the sale today.

Dudu was previously located at godudu.com. The lesson to be learned here is so painfully obvious it’s barely worth mentioning: if you’re going to launch a brand and try to make it successful, first make sure you have a domain to match.

Dudu.com is a memorable domain name and a rather short one, so it was bound to fetch a decent price. But there aren’t that many people or companies in the world that would pay $1 million for it.

Before Dudu built up the brand, dudu.com was probably a five-figure sale.

To Dudu’s credit, it does not appear to have ever attempted a reverse domain name hijacking using the UDRP.

The domain name now hosts the social networking site previously located at godudu.com. Dudu uses a unique translation technology to allow its users from around the world to communicate with each other in their native languages.

Dubai-based businessman and Chairman of Dudu Communications Alibek Issaev purchased the domain from its owner in China.

The social networking site was launched in April 2011 and, in that short span, has attracted over 2 million registered users, who can add friends, upload and share pictures, much like on Facebook, as well as listen to music and see what their friends are listening to.

dotUK – The History of Domain Names

.Uk domain hits 10 million milestone

March 16, 2012

10 million .uk domain names currently registered.

Today .uk domain registry Nominet announced that the .uk domain crossed the 10 million domain milestone this week.

The domain name swarvemagazine.co.uk, registered on March 12, represented the 10 millionth domain.

Of course, more than 10 million domains have been registered to date, but this is the base of currently registered .uk domain names.

The .uk domain ranks fourth in the world for size, following .com, .de (Germany), and .net, according to VeriSign’s latest domain industry report. That makes it number two for country code domain names, with .tk for Tokelau nipping at its heels.

DotTV – The History of Domain Names

.tv domain turns 10

September 6, 2012

The one .tv domain name that has done the most to build the .TV domain name brand turned ten last month.

And it doesn’t even resolve to a web site.

MLB.TV, Major League Baseball’s live game streaming service, launched at about this time in 2002. Since then it has delivered 1.5 billion live video streams. This season it is running at a clip of 1.1 million live video streams per day.

But MLB.TV is more of a brand than a .tv web site. If you type in MLB.TV you’ll be forwarded to a page on MLB.com. Also, a good percentage of viewers never type in the .tv to begin with; they either go to their team’s web site (e.g. Cardinals.com) or watch on their iPads and other mobile devices.

Still, MLB.TV has had a huge influence on the .tv domain name. Major League Baseball’s service has attracted lots of attention and media mentions, and the .tv branding has surely nudged other media executives to stake a claim to a .tv web address.

dotcom-growth – The History of Domain Names

.Com growth rate slowing

October 16, 2012

Data show a slight reduction in .com growth rate, but it’s still over 8% per year.

A couple of weeks ago Michael Berkens wrote a story about how growth in the base of .com and .net domain registrations was slowing “to a crawl”.

A few days later Trefis issued a report stating “Verisign’s Dropping .com And .net Is A Troubling Trend”.

The numbers in Berkens’ post didn’t quite make sense to me. The Trefis report refers to .com and .net having a falling market share, which doesn’t seem particularly relevant when the overall namespace continues to grow.

Curious what the real story is, I decided to dig into the numbers.

The most important thing I discovered is that the numbers some people compare aren’t always apples to apples. What VeriSign reports on its zone file page differs from what’s in its domain industry briefs and investor calls, which in turn differ from the numbers it reports to ICANN each month. For example, the numbers in the quarterly reports and domain briefs exclude domains in the five-day add-grace period.

If you compare numbers in the zone file to VeriSign’s other reports, you’ll get the wrong growth rates.

Perhaps the best monthly source of information on .com registrations is the set of monthly reports VeriSign sends to ICANN. They’re easy to access, and the archive goes back as far as you want to go.
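The arithmetic behind the headline growth figure is simple once the counts really are apples to apples. A toy sketch (the base figures below are invented for illustration, not VeriSign's actual numbers):

```python
def yoy_growth(previous: int, current: int) -> float:
    """Year-over-year growth rate of a domain base, in percent."""
    return (current / previous - 1) * 100

# Hypothetical .com base counts taken one year apart, from the SAME report
# series (mixing a zone-file count with a quarterly-brief count would skew this).
base_last_year = 100_000_000
base_this_year = 108_000_000
print(f"{yoy_growth(base_last_year, base_this_year):.1f}%")  # 8.0%
```

The comparison only holds if both counts come from the same source, since, as noted above, the different VeriSign reports treat add-grace-period domains differently.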

Dotcom Domains – The History of Domain Names

Fewer than 15,000 .com domains were registered

Date: 01/01/1992

By 1992, fewer than 15,000 .com domains had been registered.

On 15 March 1985, the first commercial Internet domain name (.com) was registered under the name Symbolics.com by Symbolics Inc., a computer systems firm in Cambridge, Massachusetts.

In December 2009 there were 192 million domain names. A big fraction of them are in the .com TLD, which as of March 15, 2010 had 84 million domain names, including 11.9 million online business and e-commerce sites, 4.3 million entertainment sites, 3.1 million finance related sites, and 1.8 million sports sites.

Domain name registration

The right to use a domain name is delegated by domain name registrars which are accredited by the Internet Corporation for Assigned Names and Numbers (ICANN), the organization charged with overseeing the name and number systems of the Internet. In addition to ICANN, each top-level domain (TLD) is maintained and serviced technically by an administrative organization operating a registry. A registry is responsible for maintaining the database of names registered within the TLD it administers. The registry receives registration information from each domain name registrar authorized to assign names in the corresponding TLD and publishes the information using a special service, the whois protocol.

Registries and registrars usually charge an annual fee for the service of delegating a domain name to a user and providing a default set of name servers. Often this transaction is termed a sale or lease of the domain name, and the registrant may sometimes be called an “owner”, but no such legal relationship is actually associated with the transaction, only the exclusive right to use the domain name. More correctly, authorized users are known as “registrants” or as “domain holders”.
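Whois responses themselves are plain text; the protocol (RFC 3912) defines only the transport, not the record format, so registries conventionally emit "Key: value" lines. A minimal sketch of pulling fields out of a response, assuming that common layout (the sample response is invented):

```python
def parse_whois(text: str) -> dict:
    """Collect the 'Key: value' lines of a whois response into a dict of
    lists; fields such as 'Name Server' may legitimately repeat."""
    fields = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if not sep:
            continue  # comment or blank line, no key/value pair
        key, value = key.strip(), value.strip()
        if value:
            fields.setdefault(key, []).append(value)
    return fields

# A made-up response in the common key/value layout.
sample = """\
Domain Name: EXAMPLE.COM
Registrar: Example Registrar, LLC
Name Server: NS1.EXAMPLE.NET
Name Server: NS2.EXAMPLE.NET
"""
print(parse_whois(sample)["Name Server"])  # ['NS1.EXAMPLE.NET', 'NS2.EXAMPLE.NET']
```

Because the format is a convention rather than a standard, real responses vary by registry, and a production parser would need per-registry handling.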

Dotcom Bubblebursts – The History of Domain Names

Dot-com bubble bursts

Date: 01/01/2000

The dot-com bubble (also known as the dot-com boom, the tech bubble, the Internet bubble, the dot-com collapse, and the information technology bubble) was a historic speculative bubble covering roughly 1995–2001 during which stock markets in industrialized nations saw their equity value rise rapidly from growth in the Internet sector and related fields. While the latter part was a boom and bust cycle, the Internet boom is sometimes meant to refer to the steady commercial growth of the Internet with the advent of the World Wide Web, as exemplified by the first release of the Mosaic web browser in 1993, and continuing through the 1990s.

A year ago Americans could hardly run an errand without picking up a stock tip. Day-trading manuals were selling briskly. Neighbors were speaking a foreign tongue, carrying on about B2B’s and praising the likes of JDS Uniphase and Qualcomm. Venture capital firms were throwing money at any and all dot-coms to help them build market share, never mind whether they could ever be profitable. It was a brave new era, in which more than a dozen fledgling dot-coms that nobody had ever heard of could pay $2 million of other people’s money for a Super Bowl commercial.

What a difference a year makes. The Nasdaq sank. Stock tips have been replaced with talk of recession. Many pioneering dot-coms are out of business or barely surviving. The Dow Jones Internet Index, made up of dot-com blue chips, is down more than 72 percent since March. Online retailers Priceline and eToys, former Wall Street darlings, have seen their stock prices fall more than 99 percent from their highs.

Unlike the worrisome decline in the stock prices of solidly grounded technology firms due to a slowdown in profits, the sharper plunge taken by some of the trendy Internet companies that had no earnings in the first place has proved comforting to those who believe in the rationality of markets. After all, many of them lacked one key asset — a sensible business plan. Even the most traditional brokers and investment banks set aside the notion that a company’s stock price should reflect its profits and urged investors not to miss out on the gold rush. At the craze’s zenith, Priceline, the money-losing online ticket seller, was worth more than the airlines that provided its inventory.

The current sense of despair in the dot-com universe may be as overdone as last year’s euphoria. The Internet, after all, really is a transforming technology that has revolutionized the way we communicate. What recent months suggest, however, is that it may not be an indiscriminate, magical new means of making money.

Woeful tales of visionary innovators failing to capitalize on their revolutionary new technology are not new. The advent of railroads, the automobile and radio, to name other watershed innovations in history, also led to many a shattered dream. The number of failed auto makers far exceeded the number that ultimately succeeded.

In this holiday season, the financial implosion of so many dot-com retailers seems particularly cruel. It is not as if consumers do not appreciate shopping online. Online sales this season are expected to be about two-thirds greater than last year. But it is not the innovators who are reaping all the benefits. Online retailers are losing market share to the likes of Wal-Mart and Kmart. This holiday season, online sales of traditional retailers, initially hesitant to embrace the Web, will outpace those of the pure dot-coms for the first time.

The endearing Pets.com sock puppet is a fitting mascot for the demise of the dot-com mania. Less than a year ago the spokesdog for the online pet-supply retailer was starring in some of those $2 million Super Bowl commercials. Now, in the wake of his master’s bankruptcy, he is looking to shill for another company — one that can actually make money.

Factors That Led to the Dot-Com Bubble Burst

There were two primary factors that led to the burst of the Internet bubble:

The Use of Metrics That Ignored Cash Flow. Many analysts focused on aspects of individual businesses that had nothing to do with how they generated revenue or their cash flow. For example, one theory is that the Internet bubble burst due to a preoccupation with “network theory,” which held that the value of a network increased exponentially as the number of nodes (computers hosting the network) increased. Although this concept made sense, it neglected one of the most important aspects of valuing the network: the ability of the company to use the network to generate cash and produce profits for investors.

Significantly Overvalued Stocks. In addition to focusing on unnecessary metrics, analysts used very high multipliers in their models and formulas for valuing Internet companies, which resulted in unrealistic and overly optimistic values. Although more conservative analysts disagreed, their recommendations were virtually drowned out by the overwhelming hype in the financial community around Internet stocks.
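The “network theory” cited in the first factor is usually traced to Metcalfe's law, under which a network's value is taken to be proportional to its number of possible pairwise connections, n(n-1)/2. A toy sketch of the metric, and of what it leaves out:

```python
def metcalfe_connections(nodes: int) -> int:
    """Possible pairwise connections in a network of n nodes: n(n-1)/2."""
    return nodes * (nodes - 1) // 2

# Connections (the era's proxy for "network value") grow quadratically...
print([metcalfe_connections(n) for n in (10, 100, 1000)])  # [45, 4950, 499500]

# ...but the count says nothing about revenue per user or cash flow, which
# is exactly the gap the bubble-era valuation models ignored.
```

The lesson is that a fast-growing node count is an input to a valuation, not a valuation itself.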

Avoiding Another Internet Bubble

Considering that the last Internet bubble cost investors trillions, getting caught in another is among the last things an investor would want to do. In order to make better investment decisions (and avoid repeating history), there are important considerations to keep in mind.

  1. Popularity Does Not Equal Profit

Sites such as Facebook and Twitter have received a ton of attention, but that does not mean they are worth investing in. Rather than focusing on which companies have the most buzz, it is better to investigate whether a company follows solid business fundamentals.

Although hot Internet stocks will often do well in the short-term, they may not be reliable as long-term investments. In the long run, stocks usually need a strong revenue source to perform well as investments.

  2. Many Companies Are Too Speculative

Companies are appraised by measuring their future profitability. However, speculative investments can be dangerous, as valuations are sometimes overly optimistic. This may be the case for Facebook. Given that Facebook may be making less than $1 billion a year in profits, it is hard to justify valuing the company at $100 billion.

Never invest in a company based solely on the hopes of what might happen unless it’s backed by real numbers. Instead, make sure you have strong data to support that analysis – or, at least, some reasonable expectation for improvement.

  3. Sound Business Models Are Essential

In contrast to Facebook, Twitter does not have a profitable business model, or any true method to make money. Many investors were not realistic concerning revenue growth during the first Internet bubble, and this is a mistake that should not be repeated. Never invest in a company that lacks a sound business model, much less a company that hasn’t even figured out how to generate revenue.

  4. Basic Business Fundamentals Cannot Be Ignored

When determining whether to invest in a specific company, there are several solid financial variables that must be examined, such as the company’s overall debt, profit margin, dividend payouts, and sales forecasts. In other words, it takes a lot more than a good idea for a company to be successful. For example, MySpace was a very popular social networking site that ended up losing over $1 billion between 2004 and 2010.

Dotcom Bubble – The History of Domain Names

Dot com Bubble

Date: 03/01/2000

The dot-com bubble (also known as the dot-com boom, the tech bubble, the Internet bubble, the dot-com collapse, and the information technology bubble) was a historic speculative bubble covering roughly 1995–2001 during which stock markets in industrialized nations saw their equity value rise rapidly from growth in the Internet sector and related fields. While the latter part was a boom and bust cycle, the Internet boom is sometimes meant to refer to the steady commercial growth of the Internet with the advent of the World Wide Web, as exemplified by the first release of the Mosaic web browser in 1993, and continuing through the 1990s.

The period was marked by the founding (and, in many cases, spectacular failure) of several new Internet-based companies commonly referred to as dot-coms. Companies could cause their stock prices to increase by simply adding an “e-” prefix to their name or a “.com” suffix, which one author called “prefix investing.” A combination of rapidly increasing stock prices, market confidence that the companies would turn future profits, individual speculation in stocks, and widely available venture capital created an environment in which many investors were willing to overlook traditional metrics, such as P/E ratio, in favor of basing confidence on technological advancements. By the end of the 1990s, the NASDAQ hit a price-to-earnings (P/E) ratio of 200, a truly astonishing plateau that dwarfed Japan’s peak P/E ratio of 80 a decade earlier.

The collapse of the bubble took place during 1999–2001. Some companies, such as pets.com and Webvan, failed completely. Others – such as Cisco, whose stock declined by 86% – lost a large portion of their market capitalization but remained stable and profitable. Some, such as eBay.com, later recovered and even surpassed their dot-com-bubble peaks. The stock of Amazon.com came to exceed $700 per share, for example, after having gone from $107 to $7 in the crash.

Bubble growth

Due to the rise in the commercial growth of the Internet, venture capitalists saw record-setting growth as “dot-com” companies experienced meteoric rises in their stock prices and therefore moved faster and with less caution than usual, choosing to mitigate the risk by investing in many contenders and letting the market decide which would succeed. The low interest rates of 1998–99 helped increase the start-up capital amounts. A canonical “dot-com” company’s business model relied on harnessing network effects by operating at a sustained net loss and building market share (or mind share). These companies offered their services or end product for free with the expectation that they could build enough brand awareness to charge profitable rates for their services later. The motto “get big fast” reflected this strategy.

This occurred in industrialized nations as the “digital divide” shrank in the late 1990s and early 2000s. Previously, many individuals could not access the Internet, whether for lack of local connectivity to the infrastructure or because they did not understand what Internet technologies were for. The absence of infrastructure and this lack of understanding were two major obstacles to mass connectivity, and they limited what individuals could do with the technology. As connectivity improved, information and communications technology (ICT) moved from a luxury good to a necessity, and the pool of potential customers that venture capitalists could target grew with it. The cost-effectiveness of new Internet websites ultimately drove the growth in demand during this period.

Soaring stocks

In financial markets, a stock market bubble is a self-perpetuating rise or boom in the share prices of stocks of a particular industry; the term may be used with certainty only in retrospect, after share prices have crashed. A bubble occurs when speculators note the fast increase in value and decide to buy in anticipation of further rises, rather than because the shares are undervalued. Typically, during a bubble, many companies become grossly overvalued. When the bubble “bursts”, the share prices fall dramatically. The prices of many non-technology stocks rose in tandem and were also pushed up to valuations disconnected from fundamentals.

American news media, including respected business publications such as Forbes and the Wall Street Journal, encouraged the public to invest in risky companies, despite many of the companies’ disregard for basic financial and even legal principles.

Andrew Smith argued that the financial industry’s handling of initial public offerings tended to benefit the banks and initial investors rather than the companies. This is because company staff were typically barred from reselling their shares for a lock-up period of 12 to 18 months, and so did not benefit from the common pattern of a huge short-lived share price spike on the day of the launch. In contrast, the financiers and other initial investors were typically entitled to sell at the peak price, and so could immediately profit from short-term price rises. Smith argues that the high profitability of the IPOs to Wall Street was a significant factor in the course of events of the bubble. He writes:

“But did the kids [the often young dotcom entrepreneurs] dupe the establishment by drawing them into fake companies, or did the establishment dupe the kids by introducing them to Mammon and charging a commission on it?”

In spite of this, however, a few company founders made vast fortunes when their companies were bought out at an early stage in the dot-com stock market bubble. These early successes made the bubble even more buoyant. An unprecedented amount of personal investing occurred during the boom, and the press reported the phenomenon of people quitting their jobs to become full-time day traders.

Academics Preston Teeter and Jorgen Sandberg have criticized Federal Reserve chairman Alan Greenspan for his role in the promotion and rise in tech stocks. Their research cites numerous examples of Greenspan putting a positive spin on historic stock valuations despite a wealth of evidence suggesting that stocks were overvalued.

Free spending

According to dot-com theory, an Internet company’s survival depended on expanding its customer base as rapidly as possible, even if it produced large annual losses. For instance, Google and Amazon.com did not see any profit in their first years. Amazon was spending to alert people to its existence and expand its customer base, and Google was busy spending to create more powerful machine capacity to serve its expanding web search engine. The phrase “Get large or get lost” was the wisdom of the day. At the height of the boom, it was possible for a promising dot-com to make an initial public offering (IPO) of its stock and raise a substantial amount of money even though it had never made a profit—or, in some cases, earned any revenue whatsoever. In such a situation, a company’s lifespan was measured by its burn rate: that is, the rate at which a non-profitable company lacking a viable business model ran through its capital.
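
Burn rate translates directly into runway, the months of life left before a company needs more capital; a toy calculation with invented figures:

```python
# Toy runway calculation: how long a non-profitable dot-com's capital lasts.
# The figures are illustrative, not from any real company.
capital_raised = 50_000_000   # dollars raised in the IPO
monthly_burn = 4_000_000      # monthly spending minus (negligible) revenue

runway_months = capital_raised / monthly_burn
print(runway_months)  # 12.5 months before more funding is needed
```

At the height of the boom, raising another round before the runway ran out was often treated as a given.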

Public awareness campaigns were one of the ways in which dot-coms sought to expand their customer bases. These included television ads, print ads, and targeting of professional sporting events. Many dot-coms named themselves with onomatopoeic nonsense words that they hoped would be memorable and not easily confused with a competitor. Super Bowl XXXIV in January 2000 featured 16 dot-com companies that each paid over $2 million for a 30-second spot. By contrast, in January 2001, just three dot-coms bought advertising spots during Super Bowl XXXV. In a similar vein, CBS-backed iWon.com gave away $10 million to a lucky contestant on an April 15, 2000 half-hour primetime special that was broadcast on CBS.

Not surprisingly, the “growth over profits” mentality and the aura of “new economy” invincibility led some companies to engage in lavish internal spending, such as elaborate business facilities and luxury vacations for employees. Executives and employees who were paid with stock options instead of cash became instant millionaires when the company made its initial public offering; many invested their new wealth into yet more dot-coms.

Cities all over the United States sought to become the “next Silicon Valley” by building network-enabled office space to attract Internet entrepreneurs. Communication providers, convinced that the future economy would require ubiquitous broadband access, went deeply into debt to improve their networks with high-speed equipment and fiber optic cables. Companies that produced network equipment, like Nortel Networks, were irrevocably damaged by such over-extension; Nortel declared bankruptcy in early 2009. Companies like Cisco, which had no production facilities of its own but bought from other manufacturers, were able to exit quickly and even do well from the situation when the bubble burst and products were sold cheaply.

In the struggle to become a technology hub, many cities and states used tax money to fund technology conference centers and advanced infrastructure, and created favorable business and tax laws to encourage development of the dot-com industry in their locales. Virginia’s Dulles Technology Corridor is a prime example of this activity. Large quantities of high-speed fiber links were laid, and state and local governments gave tax exemptions to technology firms. After the burst, many of these buildings could be seen standing vacant along I-495.

Similarly, in Europe the vast amounts of cash the mobile operators spent on 3G licences in Germany, Italy, and the United Kingdom, for example, led them into deep debt. The investments were far out of proportion to both their current and projected cash flow, but this was not publicly acknowledged until as late as 2001 and 2002. Due to the highly networked nature of the IT industry, this quickly led to problems for small companies dependent on contracts from operators. One example is Sonera, a Finnish mobile network company that paid huge sums in the German 3G license auction. Third-generation networks took years to catch on, however, and Sonera ended up as part of TeliaSonera, later simply Telia.

Aftermath

On January 10, 2000, America Online (now Aol.), a favorite of dot-com investors and pioneer of dial-up Internet access, announced plans to merge with Time Warner, the world’s largest media company, in the second-largest M&A transaction worldwide. The transaction has been described as “the worst in history”. Within two years, boardroom disagreements drove out both of the CEOs who made the deal, and in October 2003 AOL Time Warner dropped “AOL” from its name.

On March 10, 2000 the NASDAQ peaked at 5,132.52 intraday before closing at 5,048.62. Afterwards, the NASDAQ fell as much as 78%.

Several communication companies could not weather the financial burden and were forced to file for bankruptcy. One of the more significant players, WorldCom, was found to have engaged in illegal accounting practices to exaggerate its profits on a yearly basis. WorldCom was one of the last standing combined competitive local exchange and inter-exchange companies and struggled to survive after the implementation of the Telecommunications Act of 1996. This Act favored the incumbents formerly known as Regional Bell Operating Companies (RBOCs) and led to the demise of competition, the rise of consolidation, and a present-day oligopoly ruled by lobbyist-saturated powerhouses AT&T and Verizon.

WorldCom’s stock price fell drastically when this information went public, and it eventually filed the third-largest corporate bankruptcy in U.S. history. Other examples include NorthPoint Communications, Global Crossing, JDS Uniphase, XO Communications, and Covad Communications. Companies such as Nortel, Cisco, and Corning were at a disadvantage because they relied on infrastructure that was never built out, which caused Corning’s stock to drop significantly.

Many dot-coms ran out of capital and were acquired or liquidated; the domain names were picked up by old-economy competitors, speculators or cybersquatters. Several companies and their executives were accused or convicted of fraud for misusing shareholders’ money, and the U.S. Securities and Exchange Commission fined top investment firms like Citigroup and Merrill Lynch millions of dollars for misleading investors. Various supporting industries, such as advertising and shipping, scaled back their operations as demand for their services fell. A few large dot-com companies, such as Amazon.com, eBay, and Google have become industry-dominating mega-firms.

The stock market crash of 2000–2002 caused the loss of $5 trillion in the market value of companies from March 2000 to October 2002. The September 11, 2001, attacks accelerated the stock market drop; the NYSE suspended trading for four sessions. When trading resumed, some of it was transacted in temporary new locations.

More in-depth analysis shows that 48% of the dot-com companies survived through 2004. From this, it is safe to conclude that the assets lost in the stock market do not translate directly into the closing of firms. More importantly, even companies categorized as “small players” proved resilient enough to endure the destruction of the financial market during 2000–2002. Additionally, retail investors who felt burned by the burst shifted their investment portfolios to more cautious positions.

Nevertheless, laid-off technology experts, such as computer programmers, found a glutted job market. University degree programs for computer-related careers saw a noticeable drop in new students. Anecdotes of unemployed programmers going back to school to become accountants or lawyers were common.

Turning to the long-term legacy of the bubble, Fred Wilson, who was a venture capitalist during it, said:

“A friend of mine has a great line. He says ‘Nothing important has ever been built without irrational exuberance’. Meaning that you need some of this mania to cause investors to open up their pocketbooks and finance the building of the railroads or the automobile or aerospace industry or whatever. And in this case, much of the capital invested was lost, but also much of it was invested in a very high throughput backbone for the Internet, and lots of software that works, and databases and server structure. All that stuff has allowed what we have today, which has changed all our lives… that’s what all this speculative mania built”.

As the technology boom receded, consolidation and growth by market leaders caused the tech industry to come to more closely resemble other traditional U.S. sectors. As of 2014, ten information technology firms are among the 100 largest U.S. corporations by revenues: Apple, Hewlett-Packard, IBM, Microsoft, Amazon.com, Google, Intel, Cisco Systems, Ingram Micro, and Oracle.

Conclusion

The Dot-com Bubble of the 1990s and early 2000s was characterized by a new technology which created a new market with many potential products and services, and by highly opportunistic investors and entrepreneurs who were blinded by early successes. After the crash, both companies and the markets became far more cautious about investing in new technology ventures. It is worth noting, though, that the current popularity of mobile devices such as smartphones and tablets, with their almost limitless possibilities, along with a few recent successful tech IPOs, is giving life to a whole new generation of companies that want to capitalize on this new market. Let’s see if investors and entrepreneurs are a bit more sensible this time around.

Dot-Tel – The History of Domain Names

The state of .tel in 2012

March 13, 2012

.Tel has been quiet, at least in the domainer community, for quite some time. There are a couple of good reasons for this: you can’t park .tel domains and no one is getting rich trying to resell them for a profit.

The company sent out its latest newsletter today and it has some interesting data.

The first thing that caught my eye was that you will soon be able to add video to your site. But it can only be done via API. I’ve long thought that .tel is an over-technicalized solution geared at non-techies, and this is a prime example. Granted, there are plenty of third party solutions to develop your .tel domain. But why is a third party necessary? .Tel should be easy. It’s not.

The video also signals that .tel domains are becoming just a bit more like web pages.

Now, about those numbers. Here’s a handy infographic from .tel.

In 2011, .tel says the “number of members in the .tel community” expanded by 41%.

I’m not quite sure what “members” means. Unless the number of registrants increased by a bunch in December, this doesn’t represent total .tel domains. In November 2011, the last month for which official numbers are available, there were 280,502 .tel domains. At the beginning of 2012 there were 256,566.

This certainly isn’t what Telnic investors had in mind when they plowed $35 million into the company.

On the plus side, 79% of the “.tel community” owns just one domain. And as of February there were an impressive 64,274 .tel IDNs.

Dot-DE – The History of Domain Names

Germany’s .De crosses 15 million domain mark

April 19, 2012

.De registry DENIC has announced that Germany’s country code domain name has hit a major milestone: 15 million domains currently registered.

The milestone was hit yesterday with the registration of floristennetzwerk.de, which translates to “florists’ network” in English. The domain was registered at German domain registrar 1&1 Internet.

.De is the largest country code domain name in terms of registrations. DENIC says that 50,000 .de domain names are registered every month.

The United Kingdom’s .uk is the second largest country code domain name according to VeriSign’s most recent quarterly domain name brief. In third place and rising rapidly is Tokelau’s .tk, whose domains are offered at no or low cost.

Donuts – The History of Domain Names

Donuts raises $100 million, applies for 307 new TLDs

June 5, 2012

Donuts rolls in a lot of dough, but you’ll have to wait another week to see which domains they applied for.

This morning Donuts, a new TLD applicant founded by eNom founder Paul Stahura, announced the most ambitious public new TLD plans to date: a $100 million investment and 307 applications.

The company also hired Mason Cole as Vice President, Communications and Industry Relations. I had the chance to connect with him this morning to understand more about Donuts’ plans. Cole told me the company looked at thousands of possible strings, ultimately settling on the 307 it has applied for. Back in January Bloomberg reported that Donuts was applying for 10 strings, but Cole says that may have been a miscommunication between Stahura and the Bloomberg reporter.

“At the time the final number hadn’t been settled on,” said Cole.

Cole isn’t sure how many of the 307 strings will be contested by other applicants. The company is prepared to bring all 307 to market.

“I can tell you we’ve made sure to resource the company in a way that would allow us to get all 307 strings if we decide to,” he said.

Cole was tight-lipped on what the company is doing in the digital archery process for batching. He also said the company does not plan to reveal its applications until ICANN makes them public on June 13.

The company announced today that it will use Demand Media as its backend registry provider. Stahura sold eNom to Demand Media in 2006. Demand Media has announced its own $18 million investment in new top level domains, but it has not invested in Donuts. Assuming Demand Media is applying for its own TLDs, it will be interesting to see if there’s any contention between Demand Media’s domains and those of Donuts. Cole said he can’t comment on Demand Media’s business.

For now, Donuts is engaged in a high stakes game of digital archery and is preparing for whatever roadblocks may occur — but hoping for no more delays.

Domains Portfolio – The History of Domain Names

150,000+ domain portfolio sells for $5.2 million

November 16, 2012

A portfolio of more than 150,000 domain names has sold in a bankruptcy auction for $5.2 million.

Yet that might not be the end of it.

The auction comprised two separate portfolios that were part of the bankrupt firm Ondova. The $5.2 million purchase price is 26.8% over the stalking horse bid.

The domains had significant traffic and revenue.

The first portfolio of about 3,000 premium domains earned close to $400,000 in the 12 months ending September 30, 2012. The second portfolio’s 150,000+ domains averaged $12.60 in revenue each over the same 12-month period. After domain costs, that netted out to $3.80 per domain, or net revenue of around $580,000.

That means the overall sale was about 5x-6x earnings. The premium domains probably have value above the PPC earnings yet the larger portfolio definitely has some trademark issues.
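
A quick back-of-the-envelope check of that multiple, using the figures reported above:

```python
# Back-of-the-envelope check of the reported earnings multiple.
premium_earnings = 400_000              # premium portfolio, 12 months ending 9/30/2012
net_per_domain = 3.80                   # net revenue per domain after costs
domain_count = 150_000                  # lower bound on the larger portfolio's size
bulk_earnings = domain_count * net_per_domain   # roughly $570,000

total_earnings = premium_earnings + bulk_earnings
multiple = 5_200_000 / total_earnings   # purchase price over annual net earnings
print(round(multiple, 1))  # about 5.4, consistent with the 5x-6x estimate
```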

Domains-Growth – The History of Domain Names

REGISTERED DOMAIN NAMES GROW BY 5.9 MILLION

March 11, 2012

The latest Domain Name Industry Brief from Verisign shows that in the final quarter of last year, registrations across all Top Level Domains (TLDs) grew by 2.7 percent over the third quarter.

Q4 last year was a huge one for domain name registration. The quarter closed with more than 225 million registered names, which was 10% more than the same quarter in 2010.

The leaderboard in terms of domain extension popularity remained unchanged from 2011 Q3, with .com still the leader, followed by .de (Germany), .net, .uk (United Kingdom), .org, .info, .tk (Tokelau), .nl (Netherlands), .ru (Russian Federation) and .eu (European Union).

The popularity of .tk continues to puzzle some given Tokelau has a tiny population. As we mentioned in an earlier article, a Dutch firm bought the rights to .tk and now provides free domain registration of the extension.

While on the topic of country code Top Level Domains (ccTLDs), the top ccTLD registries by domain name base for the final quarter of last year reported by Verisign were:

1. .de – Germany
2. .uk – United Kingdom
3. .tk – Tokelau
4. .nl – Netherlands
5. .ru – Russian Federation
6. .eu – European Union
7. .cn – China
8. .br – Brazil
9. .ar – Argentina
10. .it – Italy

Verisign says there are now 290 ccTLD extensions globally and the top 10 ccTLDs account for 60 percent of all registrations.

Verisign manages two of the world’s 13 Internet root name servers and, according to the latest Brief, its average daily Domain Name System (DNS) query load during the fourth quarter of 2011 was 64 billion, peaking at 117 billion. Quarter over quarter, this represented a daily average increase of 8 percent and a peak increase of 51 percent; year over year, the increases were 2 percent and 59 percent respectively.

DomainRegistrations – The History of Domain Names

Domain registrations grow 12% to 240 million

October 2, 2012

Base of registered domains continues to grow.

The total base of registered domain names increased nearly 12% year over year to 240 million at the end of June, VeriSign announced in its quarterly domain industry brief today.

Country code domain name registrations continue to lead the way. There are now over 100 million registered country code domain names, an 18.5% increase over a year.

The total base of .com and .net domains, which are managed by VeriSign, increased 7.8% compared to the same period a year ago and inched up 1.6% compared to the first quarter of this year.

.TK became the second largest country code domain name in terms of registration base. .TK domains (for the country of Tokelau) are given away for free.

DomainNames – The History of Domain Names

215 MILLION DOMAIN NAMES REGISTERED AT THE END OF AUGUST

September 3, 2011

The end of the second quarter of this year saw over 215 million domain name registrations across all Top Level Domains (TLDs), an increase of 5.2 million domain names over the first quarter.

According to Verisign’s August 2011 Domain Name Industry Brief, this represented growth of 2.5%. Since the second quarter of last year, domain name registrations have increased by more than 16.9 million, or 8.6 percent.

Country Code Top Level Domains (ccTLDs) number 84.6 million, a 3.6 percent increase quarter over quarter, and an 8.4 percent increase year over year.

Among the 20 largest ccTLDs, Brazilian, Spanish and Australian domain registrations all grew more than 4% quarter over quarter. The Australian domain name space has been performing particularly well over the last few quarters.

The largest Top Level Domains in terms of base size were .com, .de, .net, .uk, .org, .info, .nl, .cn, .eu and .ru respectively.

Renewal rates for .com/.net fell during the second quarter of 2011 from 73.8 percent in the first quarter to 73.1 percent.

Verisign estimates that 88 percent of .com and .net domain names resolve to a website; however, 22 percent of those are one-page websites, a category that includes “under construction” pages, brochure sites and parked pages, including the revenue-generating parked pages often used by domainers.

Verisign stated its average daily Domain Name System (DNS) query load during the quarter was 56 billion, down 1 percent on the previous quarter; queries peaked at 68 billion, up 1 percent.

Verisign is a U.S.-based company that operates an array of network infrastructure, including two of the world’s 13 Internet root servers. The company has invested $500 million in Project Titan, which will allow Verisign to increase daily DNS query capacity by a factor of ten, from the current 400 billion queries a day to 4 trillion queries a day.

Domain Transfer – The History of Domain Names

Domain Transfer Made Easier

Date: 10/01/2006

Domains can be transferred between registrars. Prior to October 2006, the procedure used by Verisign was complex and unreliable – requiring a notary public to verify the identity of the registrant requesting a domain transfer. In October 2006, a new procedure, requiring the losing registrar to provide an authorization code on instruction from the registrant (also known as EPP code) was introduced by Verisign to reduce the incidence of domain hijacking.
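
Today registrars exchange such transfer requests over the Extensible Provisioning Protocol (EPP), where the authorization code travels inside the transfer command. A minimal sketch of what that request looks like, with a placeholder domain and auth code (the XML shape follows RFC 5731, but treat the details as illustrative):

```python
# Sketch of an EPP <transfer op="request"> command carrying the auth code.
# The domain name and authorization code below are placeholders.
EPP_TRANSFER_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
  <command>
    <transfer op="request">
      <domain:transfer xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
        <domain:name>{name}</domain:name>
        <domain:authInfo>
          <domain:pw>{auth_code}</domain:pw>
        </domain:authInfo>
      </domain:transfer>
    </transfer>
  </command>
</epp>"""

def build_transfer_request(name: str, auth_code: str) -> str:
    """Fill the template with the domain and the auth code from the losing registrar."""
    return EPP_TRANSFER_TEMPLATE.format(name=name, auth_code=auth_code)

print(build_transfer_request("example.com", "2fooBAR"))
```

The gaining registrar submits this to the registry; without a valid auth code the transfer is rejected, which is what curbs hijacking.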

2006 September 29th On September 29, 2006, ICANN signed a new agreement with the United States Department of Commerce (DOC) that moves the private organization towards full management of the Internet’s system of centrally coordinated identifiers through the multi-stakeholder model of consultation that ICANN represents.

2007 It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication, by 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.

2007 The temporary reassignment of country code cs (Serbia and Montenegro) until its split into rs and me (Serbia and Montenegro, respectively) led to some controversies about the stability of ISO 3166-1 country codes, resulting in a second edition of ISO 3166-1 in 2007 with a guarantee that retired codes will not be reassigned for at least 50 years, and the replacement of RFC 3066 by RFC 4646 for country codes used in language tags in 2006.

2007 The org domain registry allows the registration of selected internationalized domain names (IDNs) as second-level domains. For German, Danish, Hungarian, Icelandic, Korean, Latvian, Lithuanian, Polish, and Swedish IDNs this has been possible since 2005. Spanish IDN registrations have been possible since 2007.

Domain Name Concept – The History of Domain Names

The Domain Name Concept

Date: 01/01/1982

A domain name is the human-friendly name that we are used to associating with an internet resource. A domain name is an identification string that defines a realm of administrative autonomy, authority or control within the Internet. Domain names are formed by the rules and procedures of the Domain Name System (DNS). Any name registered in the DNS is a domain name. Domain names are used in various networking contexts and application-specific naming and addressing purposes. In general, a domain name represents an Internet Protocol (IP) resource, such as a personal computer used to access the Internet, a server computer hosting a web site, or the web site itself or any other service communicated via the Internet. In 2015, 294 million domain names had been registered.

Domain names are organized in subordinate levels (subdomains) of the DNS root domain, which is nameless. The first-level set of domain names are the top-level domains (TLDs), including the generic top-level domains (gTLDs), such as the prominent domains com, info, net, edu, and org, and the country code top-level domains (ccTLDs). Below these top-level domains in the DNS hierarchy are the second-level and third-level domain names that are typically open for reservation by end-users who wish to connect local area networks to the Internet, create other publicly accessible Internet resources or run web sites. The registration of these domain names is usually administered by domain name registrars who sell their services to the public. A fully qualified domain name (FQDN) is a domain name that is completely specified with all labels in the hierarchy of the DNS, having no parts omitted. Labels in the Domain Name System are case-insensitive, and may therefore be written in any desired capitalization method, but most commonly domain names are written in lowercase in technical contexts.
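
Two of the properties above, hierarchical labels and case-insensitive matching, can be sketched in a few lines (the helper names are ours, not part of any standard API):

```python
def labels(domain: str) -> list[str]:
    """Split a domain name into its DNS labels; a trailing dot marks the nameless root."""
    return domain.rstrip(".").split(".")

def same_domain(a: str, b: str) -> bool:
    """DNS labels are case-insensitive, so compare after lowercasing."""
    return labels(a.lower()) == labels(b.lower())

print(labels("en.wikipedia.org"))                 # ['en', 'wikipedia', 'org']
print(same_domain("Example.COM.", "example.com")) # True
```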

Purpose

Domain names serve to identify Internet resources, such as computers, networks, and services, with a text-based label that is easier to memorize than the numerical addresses used in the Internet protocols. A domain name may represent entire collections of such resources or individual instances. Individual Internet host computers use domain names as host identifiers, also called host names. The term host name is also used for the leaf labels in the domain name system, usually without further subordinate domain name space. Host names appear as a component in Uniform Resource Locators (URLs) for Internet resources such as web sites (e.g., en.wikipedia.org).

Domain names are also used as simple identification labels to indicate ownership or control of a resource. Such examples are the realm identifiers used in the Session Initiation Protocol (SIP), the Domain Keys used to verify DNS domains in e-mail systems, and in many other Uniform Resource Identifiers (URIs). An important function of domain names is to provide easily recognizable and memorizable names to numerically addressed Internet resources. This abstraction allows any resource to be moved to a different physical location in the address topology of the network, globally or locally in an intranet. Such a move usually requires changing the IP address of a resource and the corresponding translation of this IP address to and from its domain name. Domain names are used to establish a unique identity. Organizations can choose a domain name that corresponds to their name, helping Internet users to reach them easily. A generic domain is a name that defines a general category, rather than a specific or personal instance, for example, the name of an industry, rather than a company name. Some examples of generic names are books.com, music.com, and travel.info. Companies have created brands based on generic names, and such generic domain names may be valuable. Domain names are often simply referred to as domains and domain name registrants are frequently referred to as domain owners, although domain name registration with a registrar does not confer any legal ownership of the domain name, only an exclusive right of use for a particular duration of time. The use of domain names in commerce may subject them to trademark law.

Concept

1982 January As described in Computer Mail Meeting Notes, RFC 805, it was initially the need for a real-world solution to the complexity of email relaying that triggered the development of the domain concept. A group of ARPANET researchers, principals, and related parties held a meeting in January 1982 to discuss a solution for email relaying. As described on the email addresses page, email was often originally sent from site to site to its destination along a path of systems, and might need to go through half a dozen or more links that would connect at certain times of the day.

To send an email to someone, you had to first be a human router and specify a valid path to the destination as part of the address. If you didn’t know a valid route, the software couldn’t help you. In order to solve this problem, domain names were created to provide each person with one address regardless of where email was sent from. As RFC 805 put it, “The hierarchical domain type naming differs from source routing in that the former gives absolute addressing while the latter gives relative addressing”. RFC 805 outlines many of the basic principles of the eventual domain name system, including the need for top level domains to provide a starting point for delegation of queries, the need for second level domains to be unique (and therefore the requirement for a registrar type of administration), and the recognition that distribution of individual name servers responsible for each domain would provide administration and maintenance advantages. Within the year, the concept was developed through a series of communications. In March, the hosts table definition was updated with DoD Internet Host Table Specification, RFC 810, and NIC’s introduction of a server function to provide individual hostname / address translations was described in Hostnames Server, RFC 811, both documents including the domain concept. In August, The Domain Naming Convention for Internet User Applications, RFC 819, provided an excellent overview of the concept. And then, in October, the full concept of a distributed system of name servers, each serving its local domain, was described in A Distributed System for Internet Name Service, RFC 830, providing the main architectural outlines of the system still in use today.

1982 February 8th “The conclusion in this area was that the current ‘user@host’ mailbox identifier should be extended to ‘user@host.domain’ where ‘domain’ could be a hierarchy of domains.” – J. Postel; Computer Mail Meeting Notes, RFC 805; 8 Feb 1982. The Domain Name System was originally invented to support the growth of email communications on the ARPANET, and now supports the Internet on a global scale. Alphabetic host names were introduced on the ARPANET shortly after its creation, and greatly increased usability since alphabetic names are much easier to remember than semantically meaningless numeric addresses. Host names were also useful for development of network-aware computer programs, since they could reference a constant host name without concern about changes to the physical address due to network alterations. Of course, the infrastructure of the underlying network was still based on numeric addresses, so each site maintained a “HOSTS.TXT” file that provided a mapping between host names and network addresses in a set of simple text records that could be easily read by a person or program.
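
The HOSTS.TXT idea can be illustrated with a toy parser; the entries and record layout below are simplified stand-ins, not the exact RFC 810 syntax:

```python
# Simplified illustration of a HOSTS.TXT-style name-to-address table.
# Record layout here is "name address" per line, a stand-in for the real format.
HOSTS_TXT = """\
SRI-NIC.ARPA    10.0.0.51
USC-ISIF.ARPA   10.2.0.27
"""

def parse_hosts(text: str) -> dict[str, str]:
    """Build the hostname -> address mapping every site once maintained locally."""
    table = {}
    for line in text.splitlines():
        if line.strip():
            name, addr = line.split()
            table[name] = addr
    return table

print(parse_hosts(HOSTS_TXT)["SRI-NIC.ARPA"])  # 10.0.0.51
```

Every site kept its own copy of such a table, which is exactly the distribution problem the DNS later replaced with delegated name servers.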

Domain Arpa – The History of Domain Names

Domain Arpa

The domain arpa was the first Internet top-level domain.

It was intended to be used only temporarily, aiding in the transition of traditional ARPANET host names to the domain name system. However, after it had been used for reverse DNS lookup, it was found impractical to retire it.

The domain name arpa is a top-level domain (TLD) in the Domain Name System of the Internet. It is used exclusively for technical infrastructure purposes. While the name was originally the acronym for the Advanced Research Projects Agency (ARPA), the funding organization in the United States that developed one of the precursors of the Internet (ARPANET), it now stands for Address and Routing Parameter Area.

arpa also contains the domains for reverse domain name resolution in-addr.arpa and ip6.arpa for IPv4 and IPv6, respectively.

Types

As of 2015, IANA distinguishes the following groups of top-level domains:

infrastructure top-level domain (ARPA)
generic top-level domains (gTLD)
restricted generic top-level domains (grTLD)
sponsored top-level domains (sTLD)
country code top-level domains (ccTLD)
test top-level domains (tTLD)

History

The arpa top-level domain was the first domain installed in the Domain Name System (DNS). It was originally intended to be a temporary domain to facilitate the transition of the ARPANET host naming conventions and the host table distribution methods to the Domain Name System. The ARPANET was one of the predecessors to the Internet, established by the United States Department of Defense Advanced Research Projects Agency (DARPA). When the Domain Name System was introduced in 1985, ARPANET host names were initially converted to domain names by adding the arpa domain name label to the end of the existing host name, separated with a full stop (i.e., a period). Domain names of this form were subsequently rapidly phased out by replacing them with domain names under the newly introduced, categorized top-level domains.

After arpa had served its transitional purpose, it proved impractical to remove the domain, because in-addr.arpa was used for reverse DNS lookup of IP addresses. For example, the mapping of the IP address 145.97.39.155 to a host name is obtained by issuing a DNS query for a pointer record of the domain name 155.39.97.145.in-addr.arpa.
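
The mapping described above is purely mechanical: reverse the octets and append in-addr.arpa. A minimal sketch:

```python
def reverse_pointer(ip: str) -> str:
    """Build the in-addr.arpa name queried for a PTR record, as in the example above."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_pointer("145.97.39.155"))  # 155.39.97.145.in-addr.arpa
```

Python’s standard ipaddress module exposes the same mapping as `ip_address(...).reverse_pointer`.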

It was intended that new infrastructure databases would be created in the top-level domain int. However, in May 2000 this policy was reversed, and it was decided that arpa should be retained for this purpose, and int should be used solely by international organizations. In accordance with this new policy, arpa now officially stands for Address and Routing Parameter Area (a backronym).

The arpa zone has been signed with DNSSEC since March 2010.

DOD – The History of Domain Names

DoD Internet Host Table Specification, Hostname Server, Domain Naming Convention for Internet User Applications, Distributed System for Internet Name Service

Date: 03/01/1982

DOD INTERNET HOST TABLE SPECIFICATION

The ARPANET Official Network Host Table, as outlined in RFC 608, no longer suits the needs of the DoD community, nor does it follow a format suitable for internetting.  This paper specifies a new host table format applicable to both ARPANET and Internet needs. In addition to host name to host address translation and selected protocol information, we have also included network and gateway name to address correspondence, and host operating system information. This Host Table is utilized by the DoD Host Name Server maintained by the ARPANET Network Information Center (NIC) on behalf of the Defense Communications Agency (DCA) (RFC 811).  It obsoletes the host table described in RFC 608.

HOSTNAME SERVER

The NIC Internet Hostname Server is a TCP-based host information program and protocol running on the SRI-NIC machine. It is one of a series of internet name services maintained by the DDN Network Information Center (NIC) at SRI International on behalf of the Defense Communications Agency (DCA). The function of this particular server is to deliver machine-readable name/address information describing networks, gateways, hosts, and eventually domains, within the internet environment. As currently implemented, the server provides the information outlined in the DoD Internet Host Table Specification.

Protocol

To access this server from a program, establish a TCP connection to port 101 (decimal) at the service host, SRI-NIC.ARPA (26.0.0.73 or 10.0.0.51).  Send the information request (a single line), and read the resulting response.  The connection is closed by the server upon completion of the response, so only one request can be made for each connection.

The Domain Naming Convention for Internet User Applications

Introduction

For many years, the naming convention “<user>@<host>” has served the ARPANET user community for its mail system, and the substring “<host>” has been used for other applications such as file transfer (FTP) and terminal access (Telnet). With the advent of network interconnection, this naming convention needs to be generalized to accommodate internetworking. A decision has recently been reached to replace the simple name field, “<host>”, by a composite name field, “<domain>”. This note is an attempt to clarify this generalized naming convention, the Internet Naming Convention, and to explore the implications of its adoption for Internet name service and user applications.

The following example illustrates the changes in naming convention:

ARPANET Convention:   Fred@ISIF

Internet Convention:  Fred@F.ISI.ARPA

The intent is that Internet names form a tree-structured, administratively dependent hierarchy, rather than a strictly topology-dependent one. The left-to-right string of name components proceeds from the most specific to the most general; that is, the root of the tree, the administrative universe, is on the right. The name service for realizing the Internet naming convention is assumed to be application independent: it is not part of any particular application, but an independent name service that serves different user applications.
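The convention can be illustrated with a small Python sketch that splits an address like Fred@F.ISI.ARPA into its user part and its domain labels, ordered most specific to most general:

```python
def parse_internet_name(address: str) -> tuple[str, list[str]]:
    """Split an Internet-convention address into its user and domain labels.

    Labels run left to right from most specific to most general,
    so the last label is the root of the administrative tree.
    """
    user, _, domain = address.partition("@")
    if not user or not domain:
        raise ValueError(f"expected <user>@<domain>, got {address!r}")
    return user, domain.split(".")

user, labels = parse_internet_name("Fred@F.ISI.ARPA")
print(user)    # Fred
print(labels)  # ['F', 'ISI', 'ARPA']
```

Under the old ARPANET convention the domain part would have been the single label ISIF; under the Internet convention it is a path through the administrative tree ending at the root label ARPA.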

A Distributed System for Internet Name Service

INTRODUCTION

For many years, the ARPANET Naming Convention “<user>@<host>” has served its user community for its mail system. The substring “<host>” has been used for other user applications such as file transfer (FTP) and terminal access (Telnet). With the advent of network interconnection, this naming convention needs to be generalized to accommodate internetworking. The Internet Naming Convention describes a hierarchical naming structure for serving Internet user applications such as SMTP for electronic mail, and FTP and Telnet for file transfer and terminal access. It is an integral part of the network facility generalization to accommodate internetworking. Realization of the Internet Naming Convention requires the establishment of both naming authority and name service. In this document, we propose an architecture for a distributed System for Internet Name Service (SINS). We assume the reader is familiar with the Internet Naming Convention.

Internet Name Service provides a network service for name resolution and resource negotiation for the establishment of direct communication between a pair of source and destination application processes. The source application process is assumed to be in possession of the destination name. In order to establish communication, the source application process requests name service. The SINS resolves the destination name to its network address, and provides negotiation for network resources. Upon successful completion of name service, the source application process provides the destination address to the transport service for establishing direct communication with the destination application process.

Overview

SINS is a distributed system for name service. It logically consists of two parts: the domain name service and the application interface. The domain name service is an application-independent network service for the resolution of domain names. This resolution is provided through cooperation among a set of domain name servers (DNSs), one associated with each domain. A domain is an administrative, but not necessarily a topological, entity; it is represented in the networks by its associated DNS. The resolution of a domain name results in the address of its associated DNS.

DOD Registrationservices – The History of Domain Names

Department of Defense would no longer fund registration services

Date: 01/01/1992

By the 1990s, most of the growth of the Internet was in the non-defense sector, and even outside the United States. Therefore, the US Department of Defense would no longer fund registration services outside of the mil domain.

The National Science Foundation started a competitive bidding process in 1992; subsequently, in 1993, NSF created the Internet Network Information Center, known as InterNIC, to extend and coordinate directory, database, and information services for the NSFNET, and to provide registration services for non-military Internet participants. NSF awarded the contract to manage InterNIC to three organizations: Network Solutions provided registration services, AT&T provided directory and database services, and General Atomics provided information services. General Atomics was disqualified from the contract in December 1994 after a review found that its services did not conform to the standards of its contract. General Atomics’ InterNIC functions were assumed by AT&T.

Beginning in 1996, Network Solutions rejected domain names containing English language words on a “restricted list” through an automated filter. Applicants whose domain names were rejected received an email containing the notice: “Network Solutions has a right founded in the First Amendment to the U.S. Constitution to refuse to register, and thereby publish, on the Internet registry of domain names words that it deems to be inappropriate.” Domain names such as “shitakemushrooms.com” would be rejected, but the domain name “shit.com” was active since it had been registered before 1996.

Network Solutions eventually allowed domain names containing the words on a case-by-case basis, after manually reviewing the names for obscene intent. This profanity filter was never enforced by the government and its use was not continued by ICANN when it took over governance of the distribution of domain names to the public.

DNS – The History of Domain Names

Domain Name System (DNS)

Date:01/01/1983

The Domain Name System (DNS) is a hierarchical decentralized naming system for computers, services, or any resource connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates more readily memorized domain names to the numerical IP addresses needed for the purpose of locating and identifying computer services and devices with the underlying network protocols. By providing a worldwide, distributed directory service, the Domain Name System is an essential component of the functionality of the Internet.

The Domain Name System (DNS) is, at its core, a large database that resides on many computers and contains the names and IP addresses of hosts and domains on the Internet. The Domain Name System provides the data that the Domain Name Service uses when queries are made: the service is the act of querying the database, while the system is the data structure and the data itself.

The Domain Name System is similar to a file system in Unix or DOS, starting with a root. Branches attach to the root to create a huge set of paths. Each branch in the DNS is called a label. Each label, the text between the dots, can be up to 63 characters long, though most are shorter, and the total domain name (all the labels combined) is limited to 255 bytes in overall length.

The domain name system database is divided into sections called zones. A zone is a subtree of the DNS and is administered separately; the name servers in a zone are responsible for answering queries for that zone. There are usually multiple name servers for a zone: one primary name server and one or more secondary name servers. A name server may be authoritative for more than one zone.
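The label and name limits above can be checked with a small Python sketch (simplified: real validators also check the allowed characters and the encoded wire format):

```python
def validate_domain_name(name: str) -> bool:
    """Check the classic DNS limits: each label at most 63 characters,
    the whole name at most 255 bytes. A simplified sketch."""
    name = name.rstrip(".")  # a trailing dot denotes the root; ignore it
    if len(name) > 255:
        return False
    # Every label must be non-empty and no longer than 63 characters.
    return all(0 < len(label) <= 63 for label in name.split("."))

print(validate_domain_name("myhost.mycompany.com"))  # True
print(validate_domain_name("a" * 64 + ".com"))       # False
```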

The Domain Name System delegates the responsibility of assigning domain names and mapping those names to Internet resources by designating authoritative name servers for each domain. Network administrators may delegate authority over sub-domains of their allocated name space to other name servers. This mechanism provides distributed and fault-tolerant service and was designed to avoid a single large central database.

The Domain Name System also specifies the technical functionality of the database service at its core. It defines the DNS protocol, a detailed specification of the data structures and data communication exchanges used in the DNS, as part of the Internet Protocol Suite. Historically, other directory services preceding DNS were not scalable to large or global directories, as they were originally based on text files, most prominently the HOSTS.TXT resolver.

The Domain Name System has been in use since the 1980s. The Internet maintains two principal namespaces: the domain name hierarchy and the Internet Protocol (IP) address spaces. The Domain Name System maintains the domain name hierarchy and provides translation services between it and the address spaces. Internet name servers and a communication protocol implement the Domain Name System. A DNS name server is a server that stores the DNS records for a domain and responds with answers to queries against its database.

The most common types of records stored in the DNS database are for Start of Authority (SOA), IP addresses (A and AAAA), SMTP mail exchangers (MX), name servers (NS), pointers for reverse DNS lookups (PTR), and domain name aliases (CNAME). Although not intended to be a general purpose database, DNS can store records for other types of data for either automatic lookups, such as DNSSEC records, or for human queries such as responsible person (RP) records. As a general purpose database, the DNS has also been used in combating unsolicited email (spam) by storing a real-time blackhole list. The DNS database is traditionally stored in a structured zone file.
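To make the record types concrete, here is a hypothetical zone rendered as Python (owner, type, data) tuples; all names and addresses below are invented for illustration:

```python
# A hypothetical zone as (owner, record type, data) tuples,
# mirroring the common record types listed above.
ZONE = [
    ("example.org.",     "SOA",   "ns1.example.org. admin.example.org. 1 7200 3600 1209600 3600"),
    ("example.org.",     "NS",    "ns1.example.org."),
    ("example.org.",     "MX",    "10 mail.example.org."),
    ("example.org.",     "A",     "192.0.2.1"),
    ("example.org.",     "AAAA",  "2001:db8::1"),
    ("www.example.org.", "CNAME", "example.org."),
    ("1.2.0.192.in-addr.arpa.", "PTR", "example.org."),
]

def records(zone, owner, rtype):
    """Return the data fields of all records matching owner and type."""
    return [data for (o, t, data) in zone if o == owner and t == rtype]

print(records(ZONE, "example.org.", "A"))  # ['192.0.2.1']
```

A real zone file stores the same information in the traditional master-file text format, with the SOA record first.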

Function

An often-used analogy to explain the Domain Name System is that it serves as the phone book for the Internet by translating human-friendly computer hostnames into IP addresses. For example, the domain name www.example.com translates to the addresses 93.184.216.119 (IPv4) and 2606:2800:220:6d:26bf:1447:1097:aa7 (IPv6). Unlike a phone book, DNS can be quickly updated, allowing a service’s location on the network to change without affecting the end users, who continue to use the same host name. Users take advantage of this when they use meaningful Uniform Resource Locators (URLs) and e-mail addresses without having to know how the computer actually locates the services.

Additionally, DNS reflects administrative partitioning. For zones operated by a registry, also known as public suffix zones, administrative information is often complemented by the registry’s RDAP and WHOIS services. That data can be used to gain insight on, and track responsibility for, a given host on the Internet. An important and ubiquitous function of DNS is its central role in distributed Internet services such as cloud services and content delivery networks. When a user accesses a distributed Internet service using a URL, the domain name of the URL is translated to the IP address of a server that is proximal to the user. The key functionality of DNS exploited here is that different users can simultaneously receive different translations for the same domain name, a key point of divergence from a traditional “phone book” view of DNS. This process of using DNS to assign proximal servers to users is key to providing faster response times on the Internet and is widely used by most major Internet services today.

History

Using a simpler, more memorable name in place of a host’s numerical address dates back to the ARPANET era. The Stanford Research Institute (now SRI International) maintained a text file named HOSTS.TXT that mapped host names to the numerical addresses of computers on the ARPANET. Host operators obtained copies of the master file. The rapid growth of the emerging network required an automated system for maintaining the host names and addresses.

Paul Mockapetris designed the Domain Name System at the University of California, Irvine in 1983, and wrote the first implementation at the request of Jon Postel from ISI. The Internet Engineering Task Force published the original specifications in RFC 882 and RFC 883 in November 1983, which established the concepts that still guide DNS development.

In 1984, four UC Berkeley students—Douglas Terry, Mark Painter, David Riggle, and Songnian Zhou—wrote the first Unix name server implementation, called the Berkeley Internet Name Domain (BIND) Server. In 1985, Kevin Dunlap of DEC substantially revised the DNS implementation. Mike Karels, Phil Almquist, and Paul Vixie have maintained BIND since then. BIND was ported to the Windows NT platform in the early 1990s. BIND was widely distributed, especially on Unix systems, and is still the most widely used DNS software on the Internet.

In November 1987, RFC 1034 and RFC 1035 superseded the 1983 DNS specifications. Several additional Requests for Comments have proposed extensions to the core DNS protocols.

Structure and message format

The DNS hierarchy forms a tree. At the top is what is called the root, the start of all other branches in the DNS tree; it is designated with a period. Each branch moves down from level to level. When referring to DNS addresses, they are read from the bottom up, with the root designator (period) at the far right. Example: “myhost.mycompany.com.”.

DNS is hierarchical in structure. A domain is a subtree of the domain name space. From the root, the assigned top-level domains in the U.S. are:

GOV – Government bodies.

EDU – Educational bodies.

INT – International organizations.

NET – Networks.

COM – Commercial entities.

MIL – U.S. Military.

ORG – Any other organization not previously listed.

Outside this list are top level domains for various countries.

Usage and file formats

If a domain name is not found when a query is made, the server may search for the name elsewhere and return the information to the requesting workstation, or return the address of a name server that the workstation can query to get more information. There are special servers on the Internet that provide guidance to all name servers. These are known as root name servers. They do not contain all information about every host on the Internet, but they do provide direction as to where domains are located (the IP address of the name server for the uppermost domain a server is requesting). The root name server is the starting point to find any domain on the Internet.
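The referral process can be illustrated with a toy resolver over an in-memory set of “servers”; every name, address, and delegation below is invented for illustration:

```python
# Toy delegation data: each "server" either answers authoritatively for a
# name or refers the resolver to a more specific name server, just as a
# root server refers queries toward the right domain.
ROOT = "root"
SERVERS = {
    "root":         {"refer": {"com": "com-ns"}},
    "com-ns":       {"refer": {"mycompany.com": "mycompany-ns"}},
    "mycompany-ns": {"answer": {"myhost.mycompany.com": "192.0.2.10"}},
}

def resolve(name: str, server: str = ROOT) -> str:
    """Follow referrals from the root until some server answers."""
    for _ in range(10):  # guard against referral loops
        data = SERVERS[server]
        if name in data.get("answer", {}):
            return data["answer"][name]
        # Follow the referral whose zone contains the queried name.
        for zone, next_server in data.get("refer", {}).items():
            if name == zone or name.endswith("." + zone):
                server = next_server
                break
        else:
            break  # no matching referral
    raise LookupError(f"cannot resolve {name!r}")

print(resolve("myhost.mycompany.com"))  # 192.0.2.10
```

A real resolver does the same walk over the network, caching each answer and referral it receives along the way.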

Name Server Types

There are three types of name servers:

The primary master builds its database from files that were preconfigured on its hosts, called zone or database files. The name server reads these files and builds a database for the zone it is authoritative for.

Secondary masters can provide information to resolvers just like the primary masters, but they get their information from the primary. Any updates to the database are provided by the primary.

Caching name server – It gets all its answers to queries from other name servers and saves (caches) the answers. It is a non-authoritative server.

The caching-only name server generates no zone transfer traffic. A DNS server that can communicate outside of the private network to resolve a DNS name query is referred to as a forwarder.

DNS Invented – The History of Domain Names

DNS is Invented

Date: 01/01/1983

Before the DNS was invented in 1983

By the following November, 1983, the concept and schedule were developed and published in The Domain Names Plan and Schedule (RFC 881), Domain Names: Concepts and Facilities (RFC 882), and Domain Names: Implementation and Specification (RFC 883). Some of the technical discussion involved in developing the DNS was carried out on the namedroppers mailing list.

BIND. Because the DNS is such a fundamental part of the operation of the Internet, the software that runs it must be nearly fault free, easily upgraded when a bug is found, and completely trusted by the Internet community; in other words, free open source software. The application that runs almost every DNS server on the Internet is called BIND, for Berkeley Internet Name Domain. It was first developed as a graduate student project at the University of California at Berkeley and maintained through version 4.8.3 by the university’s Computer Systems Research Group (CSRG). The initial BIND development team consisted of Mark Painter, David Riggle, Douglas Terry, and Songnian Zhou. Later work was done by Ralph Campbell and Kevin Dunlap; others who contributed include Jim Bloom, Smoot Carl-Mitchell, Doug Kingston, Mike Muuss, Craig Partridge, and Mike Schwartz. Application maintenance was done by Mike Karels and O. Kure.

Versions 4.9 and 4.9.1 of BIND were released by Digital Equipment Corporation, then the number two computer company. The lead developer was Paul Vixie, with assistance from Paul Albitz, Phil Almquist, Fuat Baran, Alan Barrett, Bryan Beecher, Andy Cherenson, Robert Elz, Art Harkin, Anant Kumar, Don Lewis, Tom Limoncelli, Berthold Paffrath, Andrew Partan, Win Treese, and Christopher Wolfhugel. After Vixie left to establish Vixie Enterprises, he sponsored the development of BIND version 4.9.2 and became the application’s principal architect. Versions from 4.9.3 on have been developed and maintained by the Internet Systems Consortium. A major architectural update called version 8 was co-developed by Bob Halley and Paul Vixie and released in May 1997. Another major architectural rewrite, version 9, with enhanced security support, was developed and released in 2000.

1983 It wasn’t long before people realized that keeping multiple copies of the hosts file was inefficient and error-prone. Starting with a formal proposal for centralization in Host Names On-line (RFC 606) in December 1973, proceeding through agreement in Host Names On-Line (RFC 608) and further discussions in Comments on On-Line Host Name Service (RFC 623), it was settled by March 1974 with On Line Hostnames Service (RFC 625) that the Stanford Research Institute Network Information Center (NIC) would serve as the official source of the master hosts file. This centralized system worked well for about a decade, approximately 1973 to 1983. However, by the early 1980s the disadvantages of centralized management of a large amount of dynamic data were becoming apparent: the hosts file was becoming larger, the rate of change was growing as the network expanded, more hosts were downloading the entire file nightly, and there were always errors that were then propagated network-wide. Change was required, but a spark was needed.

1983 At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications were published by the Internet Engineering Task Force in RFC 882 and RFC 883, which were superseded in November 1987 by RFC 1034 and RFC 1035. Several additional Requests for Comments have proposed various extensions to the core DNS protocols.

DiscGolf – The History of Domain Names

Disc Golf Associations Buys DiscGolf.com Domain Name

July 9, 2011

Company that “founded” disc golf buys DiscGolf.com.

Go to any major park in Austin during the day and you’ll see people “throwing plastic”, a.k.a. playing disc golf. If no trademark attorneys are around you might also call it “frisbee golf”. It’s like golf but cheaper and less pretentious.

Heck, the church down the street dedicated much of its green space to an 18 hole disc golf course designed by a famous course designer. So apparently God approves of the sport, too.

Now Disc Golf Association, which apparently invented the sport 30 years ago, is the proud new owner of DiscGolf.com. The company acquired the domain name from a Santa Cruz man.

It’s a nice upgrade for the company, which no longer has to use DiscGolfAssoc.com. The purchase price was not disclosed.

The company’s general manager told the Santa Cruz Sentinel that he “hopes the Discgolf.com address will boost traffic and further solidify DGA’s position in the disc golf market.”