Monthly Archives: November 2022

Phil Drops Domains – The History of Domain Names

Phil Fischer drops all 12,303 of his one-word domain names for lack of payment.

Date: 01/12/1992

As an Internet entrepreneur, in 1989 he was one of the first domain investors and acquired over 12,000 one-word domains in the Internet's early history.

January 12th, 1992

Phil Fischer drops all 12,303 of his one-word domain names for lack of payment. A few years later Greatdomains.com brokers some of the same domain names, such as loans.com, car.com, and business.com, for millions. The estimated value of Fischer's portfolio was over twenty-three million dollars.

PeterDengate – The History of Domain Names

Peter Dengate Thrush Joins Minds + Machines

Jul 17, 2011

Peter Dengate Thrush was appointed Executive Chairman of Top Level Domain Holdings Limited. Mr. Dengate Thrush was until recently the Chairman of the Board of Directors of ICANN.

Less than a month after pushing through a vote approving the new top level domain name program at his last meeting as ICANN Chairman, Peter Dengate Thrush has joined a company applying for new top level domain names.

Dengate Thrush will join Top Level Domain Holdings (parent company of Minds + Machines) as Executive Chairman.

His compensation package will include options on 15 million shares at 8p per share.

This is certainly a big coup for Top Level Domain Holdings. But it also raises the question of why Dengate Thrush pushed through the vote in his last meeting as Chair. Most likely it was so the program would be part of his legacy, but joining a new TLD applicant so quickly after the vote will certainly raise eyebrows.

It makes you wonder if ICANN staffers that remain with the non-profit during the new TLD gold rush are fools for doing so. Certainly there’s more money on the outside, at least during the application process.

Peregrine – The History of Domain Names

Peregrine Systems – peregrine.com was registered

Date: 12/11/1986

On December 11, 1986, Peregrine Systems registered the peregrine.com domain name, making it the 49th .com domain ever registered.

Peregrine Systems, Inc. was an enterprise software company, founded in 1981, that sold enterprise asset management, change management, and ITIL-based IT service management software. Following an accounting scandal and bankruptcy in 2003, Peregrine was acquired by Hewlett-Packard in 2005. HP now markets the Peregrine products as part of its IT Service Management solutions, within the HP Software Division portfolio.

History

Peregrine Systems was founded in 1981 in Irvine, California. The founders and employees were Chris Cole, Gary Story, Ed Beck, Kevin Keyes and Richard Diederich. They started selling Peregrine Network Management System (PNMS) on a Series One computer while developing an MVS version. The MVS client/server solutions for PNMS became available in 1995.

In 1989, John Moores, founder of BMC Software and owner of the San Diego Padres Major League Baseball team, became a member of the Peregrine Board of Directors. He served as Chairman from March 1990 through July 2000 and then again in 2002. He resigned from the Board in 2003 during the company's bankruptcy filing. His involvement in the software industry continues today with investments through his venture capital firm JMI Equity. The legacy of his investments has been focused on ITSM software packages, with the most recent investments made in ServiceNow.

Peregrine had offices in the Americas, Europe and Asia Pacific and grew its product line rapidly both organically and via acquisitions, including Harbinger Corporation in 2000 and Remedy Corporation in 2001.

As of December 19, 2005, Peregrine Systems Inc. was acquired by Hewlett-Packard Company. Peregrine Systems, Inc. provides enterprise software worldwide. It offers information technology (IT) asset and service management software solutions.

The company's asset management solutions include Asset Tracking to discover, track, and consolidate hardware, software, and network assets throughout the enterprise; Expense Control to institute entitlement procedures, manage contracts, and initiate cost-center budgeting; and Process Automation to implement automated operational, financial, and compliance-driven processes across multiple applications.

Peregrine Systems' service management solutions include Service Establishment to centralize service management functions in a consolidated service desk; Service Control to enhance control over IT services; and Service Alignment to ensure services align with customer needs by managing to specified service levels, enhancing responsiveness to routine requests, and identifying infrastructure weaknesses.

The company also provides a Consolidation solution to provide the tools and processes to consolidate service operations, and to identify hardware and software redundancies. Further, it offers Discovery products that find, identify, and track IT and other infrastructure assets; IT Business Analytics products, which provide business intelligence, analytic, and reporting solutions to meet the needs of executive management; SelfService products to extend IT asset and service management capabilities to employees through a Web-based interface; and Integration products that enable customers to connect its products to various third-party applications.

Peregrine Systems sells its products to Fortune Global 2000 companies; medium-sized companies; federal, state, and local government agencies; and outsourcers through a direct sales force and alliance partners. The company was founded in 1981 and is headquartered in San Diego, California.

Paulbaran – The History of Domain Names

Paul Baran

Developed the distributed "message block" network design for the US military

Date: 01/01/1957

Paul Baran was a Polish-born American engineer who was a pioneer in the development of computer networks. He was one of the two independent inventors of packet switched computer networking, and went on to start several companies and develop other technologies that are an essential part of modern digital communication.

Early life
Paul Baran was born in Grodno (then Second Polish Republic, now part of Belarus) on April 29, 1926. He was the youngest of three children in a Polish-Jewish family, with the Yiddish given name “Pesach”. His family moved to the United States on May 11, 1928, settling in Boston and later in Philadelphia, where his father, Morris “Moshe” Baran (1884–1979), opened a grocery store. He graduated from Drexel University in 1949 (then called Drexel Institute of Technology), with a degree in electrical engineering. He then joined the Eckert-Mauchly Computer Company, where he did technical work on UNIVAC models, the first brand of commercial computers in the USA. In 1955 he married Evelyn Murphy, moved to Los Angeles, and worked for Hughes Aircraft on radar data processing systems. He obtained his master’s degree in engineering from UCLA in 1959, with advisor Gerald Estrin while taking night classes. His thesis was on character recognition. While Baran initially stayed on at UCLA to pursue his doctorate, a heavy travel and work schedule forced him to abandon his doctoral work.

Packet switched network design
After joining the RAND Corporation in 1959, Baran took on the task of designing a “survivable” communications system that could maintain communication between end points in the face of damage from nuclear weapons. At the time of the Cold War, most American military communications used high frequency connections which could be put out of action for many hours by a nuclear attack. Baran decided to automate RAND director Franklin R. Collbohm’s previous work with emergency communication over conventional AM radio networks and showed that a distributed relay node architecture could be survivable. The Rome Air Development Center soon showed that the idea was practicable.

Using the mini-computer technology of the day, Baran and his team developed a simulation suite to test basic connectivity of an array of nodes with varying degrees of linking. That is, a network of n-ary degree of connectivity would have n links per node. The simulation randomly ‘killed’ nodes and subsequently tested the percentage of nodes that remained connected. The result of the simulation revealed that networks where n ≥ 3 had a significant increase in resilience against even as much as 50% node loss. Baran’s insight gained from the simulation was that redundancy was the key. His first work was published as a RAND report in 1960, with more papers generalizing the techniques in the next two years.
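To make the redundancy result concrete, here is a small, purely illustrative Python simulation in the spirit of the experiment described above (it is not Baran's original code, and the topology, node count and 50% loss figure are assumptions chosen only to echo the numbers quoted in the text): it builds a randomly linked network of a chosen degree, removes half the nodes, and reports how much of the surviving network stays connected.

    # Illustrative sketch, not Baran's original simulation.
    import random
    from collections import deque

    def build_network(num_nodes, degree, rng):
        """Ring plus random extra links until each node has about `degree` neighbours."""
        links = {i: {(i - 1) % num_nodes, (i + 1) % num_nodes} for i in range(num_nodes)}
        for node in range(num_nodes):
            while len(links[node]) < degree:
                other = rng.randrange(num_nodes)
                if other != node:
                    links[node].add(other)
                    links[other].add(node)
        return links

    def largest_component(links, alive):
        """Size of the biggest connected group among surviving nodes (breadth-first search)."""
        seen, best = set(), 0
        for start in alive:
            if start in seen:
                continue
            queue, group = deque([start]), 0
            seen.add(start)
            while queue:
                node = queue.popleft()
                group += 1
                for neighbour in links[node]:
                    if neighbour in alive and neighbour not in seen:
                        seen.add(neighbour)
                        queue.append(neighbour)
            best = max(best, group)
        return best

    rng = random.Random(1)
    for degree in (2, 3, 4):
        net = build_network(200, degree, rng)
        survivors = set(rng.sample(range(200), 100))   # randomly 'kill' 50% of the nodes
        fraction = largest_component(net, survivors) / len(survivors)
        print(f"degree {degree}: {fraction:.0%} of surviving nodes still connected")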

After proving survivability Baran and his team needed to show proof of concept for this design such that it could be built. This involved high level schematics detailing the operation, construction, and cost of all the components required to construct a network that leveraged this new insight of redundant links. The result of this was one of the first store-and-forward data layer switching protocols, a link-state/distance vector routing protocol, and an unproved connection-oriented transport protocol. Explicit detail of these designs can be found in the complete series of reports On Distributed Communications, published by RAND in 1964.

The design flew in the face of telephony design of the time, placing inexpensive and unreliable nodes at the center of the network, and more intelligent terminating ‘multiplexer’ devices at the endpoints. In Baran’s words, unlike the telephone company’s equipment, his design didn’t require expensive “gold plated” components to be reliable. This Distributed Network that Baran introduced was intended to route around damage. It provided connection to others through many points, not one centralized connection. Fundamental to this scheme was the division of the information into “blocks” before sending them out across the network. This enabled the data to travel faster and communications lines to be used more efficiently. Each block was sent separately, traveling different paths and rejoining into a whole when they were received at their destination.

Selling the idea
After the publication of On Distributed Communications, Paul Baran presented the findings of his team to a number of audiences, including AT&T engineers (not to be confused with Bell Labs engineers, who at the time provided Paul Baran with the specifications for the first generation of T1 circuit which he used as the links in his network design proposal). In subsequent interviews Baran mentioned how the AT&T engineers scoffed at his idea of non-dedicated physical circuits for voice communications, at times claiming that Baran simply did not understand how voice telecommunication worked.

Donald Davies at the National Physical Laboratory in the United Kingdom also thought of the same idea and implemented a trial network. While Baran used the term "message blocks" for his units of communication, Davies used the term "packets" as it was capable of being translated into languages other than English without compromise. He applied the concept to a general-purpose computer network. Davies' key insight came in the realization that computer network traffic was inherently "bursty" with periods of silence, compared with relatively constant telephone traffic. It was in fact Davies' work on packet switching, and not Baran's, that initially caught the attention of the developers of ARPANET, at a conference in Gatlinburg, Tennessee, in October 1967.

Later work
In 1968 Baran was a founder of the Institute for the Future, and was then involved in other networking technologies developed in Silicon Valley. He participated in a review of the NBS proposal for a Data Encryption Standard in 1976, along with Martin Hellman and Whitfield Diffie of Stanford University. In the early 1980s, Baran founded PacketCable, Inc, “to support impulse-pay television channels, locally generated videotex, and packetized voice transmission”. PacketCable (also known as Packet Technologies) spun off StrataCom to commercialize his packet voice technology for the telephony market. This technology led to the first commercial pre-standard Asynchronous Transfer Mode product. He founded Telebit after conceiving its discrete multitone modem technology in the mid-1980s. This was one of the first commercial products to use Orthogonal frequency-division multiplexing, which was later widely deployed in DSL modems and Wi-Fi wireless modems. In 1985, Baran founded Metricom, the first wireless Internet company, which deployed Ricochet, the first public wireless mesh networking system. In 1992, he also founded Com21, an early cable modem company. Following Com21, Baran founded and was president of GoBackTV, which specializes in personal TV and cable IPTV infrastructure equipment for television operators. Most recently he founded Plaster Networks, providing an advanced solution for connecting networked devices in the home or small office through existing wiring.

Baran extended his work in packet switching to wireless-spectrum theory, developing what he called “kindergarten rules” for the use of wireless spectrum.

In addition to his innovation in networking products, he is also credited with inventing the first doorway gun detector. He received an honorary doctorate when he gave the commencement speech at Drexel in 1997.

Paul Mockapetris – The History of Domain Names

Paul Mockapetris invented the Domain Name System and wrote the first implementation. The original specs were published by the Internet Engineering Task Force

Date: 01/01/1983

At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications were published by the Internet Engineering Task Force in RFC 882 and RFC 883

1983: Paul Mockapetris and Jon Postel run the first successful test of the automated, distributed Domain Name System. DNS will lay the foundation for the massive expansion, popularization and commercialization of the internet. The fledgling internet of the time (Arpanet and CSnet) relied on a bulky and exponentially growing “phonebook” of addresses called the “host tables.” It was a text file maintained by SRI International in Menlo Park, California. You contacted another computer on the network by looking up its numerical address, and typing it in.
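As a rough sketch of what that flat host-table model amounted to, the snippet below parses a tiny HOSTS.TXT-style listing and looks a machine up by name. The host names and addresses are invented for illustration; this is not the actual SRI table.

    # Hypothetical host-table lookup; the entries are made up for illustration.
    HOSTS_TXT = [
        "10.0.0.5   SRI-NIC",
        "10.0.0.7   USC-ISIB",
    ]
    table = {name: addr for addr, name in (line.split() for line in HOSTS_TXT)}
    print(table["USC-ISIB"])   # -> 10.0.0.7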

He worked at the University of Southern California Information Sciences Institute, and his manager, Jon Postel, assigned him to devise a new way of assigning and recording internet addresses.

Their solution was brilliant. It still used an underlying system of numerical designations, but allowed you to reach a computer by name as well. It was also hierarchical and distributed. Top-level domains would mark out various types of users, like .mil or .edu. Once a name like berkeley.edu got assigned to the University of California at Berkeley, its local network administrator could independently add computers within the domain, numbering and naming them. Or the Berkeley administrator could subdelegate areas of the domain.
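The hierarchical, delegated naming just described can be sketched in a few lines: each zone knows only the names directly beneath it, and a lookup walks the labels from the top-level domain downward. This is a toy model of the idea rather than the DNS protocol itself, and the zone contents and addresses are invented.

    # Toy model of hierarchical delegation; names and addresses are invented.
    ROOT = {
        "edu": {
            "berkeley": {          # delegated to Berkeley's own administrator
                "cs": "10.0.0.1",
                "www": "10.0.0.2",
            },
        },
        "mil": {},
    }

    def resolve(name):
        """Resolve e.g. 'cs.berkeley.edu' by walking right-to-left through the zones."""
        node = ROOT
        for label in reversed(name.split(".")):
            node = node[label]     # each step could be answered by a different server
        return node

    print(resolve("cs.berkeley.edu"))   # -> 10.0.0.1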

After testing the new plan and tweaking it for a few months, Mockapetris, Postel and Partridge published their idea in a Request for Comments (RFC) memorandum in November 1983. The system gained gradual adoption over the next few years (with prodding from the Arpanet overlords at Darpa), first supplementing and then entirely supplanting the host tables.

Paul Mockapetris expanded the Internet beyond its academic origins by inventing the Domain Name System (DNS) in 1983. At USC’s Information Sciences Institute, Mockapetris recognized the problems with the early Internet (then ARPAnet)’s system of holding name to address translations in a single table on a single host (HOSTS.TXT). Instead, he proposed a distributed and dynamic naming system, essentially the DNS of today.

Rather than simply looking up host names, DNS created easily identifiable names for IP addresses, making the Internet far more accessible for everyday use. After the formal creation of the Internet Engineering Task Force (IETF) in 1986, DNS became one of the original Internet Standards.

Throughout his career, Mockapetris has contributed significantly to the evolution of the Internet through both research and industry. His earliest work at UC Irvine on distributed systems and LAN technology preceded the commercial Ethernet and Token Ring designs. During the early 1990s, he served as program manager for networking at ARPA, supervising efforts such as gigabit and optical networking. He has also held leadership roles at several Silicon Valley networking startups, including @Home, Software.com (now OpenWave), Fiberlane (now Cisco), and Siara (now Redback Networks).

Today, he serves as Chief Scientist and Chairman of the Board at Nominum, Inc., where his mission is to shepherd DNS and IP addressing to the next stage.

Packet Switching – The History of Domain Names

Packet Switching

Packet switching is a digital networking communications method.

Date: 01/01/1960

Packet switching is a digital networking communications method that groups all transmitted data into suitably sized blocks, called packets, which are transmitted via a medium that may be shared by multiple simultaneous communication sessions. Packet switching increases network efficiency, robustness and enables technological convergence of many applications operating on the same network.

Packets are composed of a header and payload. Information in the header is used by networking hardware to direct the packet to its destination where the payload is extracted and used by application software.
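A toy illustration of that header/payload split follows; the field names are simplified and are not tied to any particular protocol.

    # Simplified, hypothetical packet layout for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str        # read by networking hardware to route the packet
        dst: str
        seq: int        # lets the receiver put payloads back in order
        payload: bytes  # the application data carried inside

    pkt = Packet(src="10.0.0.1", dst="10.0.0.9", seq=0, payload=b"hello")
    print(pkt.dst, len(pkt.payload))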

Starting in the late 1950s, American computer scientist Paul Baran developed the concept Distributed Adaptive Message Block Switching with the goal to provide a fault-tolerant, efficient routing method for telecommunication messages as part of a research program at the RAND Corporation, funded by the US Department of Defense. This concept contrasted and contradicted the then-established principles of pre-allocation of network bandwidth, largely fortified by the development of telecommunications in the Bell System. The new concept found little resonance among network implementers until the independent work of British computer scientist Donald Davies at the National Physical Laboratory (United Kingdom) in the late 1960s. Davies is credited with coining the modern name packet switching and inspiring numerous packet switching networks in Europe in the decade following, including the incorporation of the concept in the early ARPANET in the United States.

To connect the early computers together, point-to-point connections and separate physical networks had to be overcome. The answer lay in forming one logical network. During the 1960s, Paul Baran (RAND Corporation) produced a study of survivable networks for the US military. Information transmitted across the Baran network would be divided into "message blocks". Independently, Donald Davies (National Physical Laboratory, UK) proposed and developed a similar network based on what he called packet switching, the term that would ultimately be adopted. Leonard Kleinrock (MIT) developed the mathematical theory behind this technology. Packet switching provides better bandwidth utilization and response times than the traditional circuit-switching technology used for telephony, particularly on resource-limited interconnection links. Packet switching is a rapid store-and-forward networking design that divides messages up into arbitrary packets, with routing decisions made per packet. Early networks used message-switched systems that required rigid routing structures prone to single points of failure. This led Paul Baran's U.S. military-funded research to focus on using message blocks to include network redundancy.

Concept

A simple definition of packet switching is:

The routing and transferring of data by means of addressed packets so that a channel is occupied during the transmission of the packet only, and upon completion of the transmission the channel is made available for the transfer of other traffic

Packet switching features delivery of variable bit rate data streams, realized as sequences of packets, over a computer network which allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques. When traversing network nodes, such as switches and routers, packets are buffered and queued, resulting in variable latency and throughput depending on the link capacity and the traffic load on the network.

Packet switching contrasts with another principal networking paradigm, circuit switching, a method which pre-allocates dedicated network bandwidth specifically for each communication session, each having a constant bit rate and latency between nodes. In cases of billable services, such as cellular communication services, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching may be characterized by a fee per unit of information transmitted, such as characters, packets, or messages.

Packet mode communication may be implemented with or without intermediate forwarding nodes (packet switches or routers). Packets are normally forwarded by intermediate network nodes asynchronously using first-in, first-out buffering, but may be forwarded according to some scheduling discipline for fair queuing, traffic shaping, or for differentiated or guaranteed quality of service, such as weighted fair queuing or leaky bucket. In case of a shared physical medium (such as radio or 10BASE5), the packets may be delivered according to a multiple access scheme.

Connectionless and connection-oriented modes
Packet switching may be classified into connectionless packet switching, also known as datagram switching, and connection-oriented packet switching, also known as virtual circuit switching.

Examples of connectionless protocols are Ethernet, Internet Protocol (IP), and the User Datagram Protocol (UDP). Connection-oriented protocols include X.25, Frame Relay, Multiprotocol Label Switching (MPLS), and the Transmission Control Protocol (TCP).

In connectionless mode each packet includes complete addressing information. The packets are routed individually, sometimes resulting in different paths and out-of-order delivery. Each packet is labeled with a destination address, source address, and port numbers. It may also be labeled with the sequence number of the packet. This precludes the need for a dedicated path to help the packet find its way to its destination, but means that much more information is needed in the packet header, which is therefore larger, and this information needs to be looked up in power-hungry content-addressable memory. Each packet is dispatched and may go via different routes; potentially, the system has to do as much work for every packet as the connection-oriented system has to do in connection set-up, but with less information as to the application's requirements. At the destination, the original message/data is reassembled in the correct order, based on the packet sequence number. Thus a virtual connection, also known as a virtual circuit or byte stream, is provided to the end-user by a transport layer protocol, although intermediate network nodes only provide a connectionless network layer service.
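The reassembly step mentioned above can be shown with a short sketch: datagrams that arrived out of order over different paths are restored to the original message using their sequence numbers. The fragments are invented for illustration.

    # Out-of-order arrivals, reassembled by sequence number (illustrative only).
    arrived = [
        (2, b"itching"),
        (0, b"pack"),
        (1, b"et sw"),
    ]
    message = b"".join(payload for _, payload in sorted(arrived))
    print(message)   # -> b'packet switching'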

Connection-oriented transmission requires a setup phase in each involved node before any packet is transferred to establish the parameters of communication. The packets include a connection identifier rather than address information and are negotiated between endpoints so that they are delivered in order and with error checking. Address information is only transferred to each node during the connection set-up phase, when the route to the destination is discovered and an entry is added to the switching table in each network node through which the connection passes. The signaling protocols used allow the application to specify its requirements and discover link parameters. Acceptable values for service parameters may be negotiated. Routing a packet requires the node to look up the connection id in a table. The packet header can be small, as it only needs to contain this code and any information, such as length, timestamp, or sequence number, which is different for different packets.
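A hypothetical per-node switching table, in the spirit of that description, might look like the sketch below: connection set-up installs an entry mapping a connection identifier to an outgoing link, and each later packet needs only a single table lookup. The identifiers and link names are made up.

    # Hypothetical switching table installed during connection set-up.
    switching_table = {
        # connection id -> (outgoing link, connection id used on the next hop)
        17: ("link-A", 42),
        23: ("link-B", 9),
    }

    def forward(conn_id, payload):
        link, next_id = switching_table[conn_id]   # one table lookup per packet
        print(f"forwarding {payload!r} on {link} as connection {next_id}")

    forward(17, b"voice sample")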

Packet switching in networks
Packet switching is used to optimize the use of the channel capacity available in digital telecommunication networks such as computer networks, to minimize the transmission latency (the time it takes for data to pass across the network), and to increase robustness of communication.

The best-known use of packet switching is the Internet and most local area networks. The Internet is implemented by the Internet Protocol Suite using a variety of Link Layer technologies. For example, Ethernet and Frame Relay are common. Newer mobile phone technologies (e.g., GPRS, I-mode) also use packet switching.

X.25 is a notable use of packet switching in that, despite being based on packet switching methods, it provided virtual circuits to the user. These virtual circuits carry variable-length packets. In 1978, X.25 provided the first international and commercial packet switching network, the International Packet Switched Service (IPSS). Asynchronous Transfer Mode (ATM) also is a virtual circuit technology, which uses fixed-length cell relay connection oriented packet switching.

Datagram packet switching is also called connectionless networking because no connections are established. Technologies such as Multiprotocol Label Switching (MPLS) and the resource reservation protocol (RSVP) create virtual circuits on top of datagram networks. Virtual circuits are especially useful in building robust failover mechanisms and allocating bandwidth for delay-sensitive applications.

MPLS and its predecessors, as well as ATM, have been called “fast packet” technologies. MPLS, indeed, has been called “ATM without cells”. Modern routers, however, do not require these technologies to be able to forward variable-length packets at multigigabit speeds across the network.

X.25 vs. Frame Relay
Both X.25 and Frame Relay provide connection-oriented operations, but X.25 does so at the network layer of the OSI Model, while Frame Relay does so at layer two, the data link layer. Another major difference is that X.25 requires a handshake between the communicating parties before any user packets are transmitted; Frame Relay does not define any such handshake. X.25 does not define any operations inside the packet network. It only operates at the user-network interface (UNI), so the network provider is free to use any procedure it wishes inside the network. X.25 does specify some limited retransmission procedures at the UNI, and its link layer protocol (LAPB) provides conventional HDLC-type link management procedures. Frame Relay is a modified version of ISDN's layer two protocols, LAPD and LAPB. As such, its integrity operations pertain only between nodes on a link, not end-to-end; any retransmissions must be carried out by higher layer protocols.

The X.25 UNI protocol is part of the X.25 protocol suite, which consists of the lower three layers of the OSI Model. It was widely used at the UNI for packet switching networks during the 1980s and early 1990s to provide a standardized interface into and out of packet networks. Some implementations used X.25 within the network as well, but its connection-oriented features made this setup cumbersome and inefficient. Frame Relay operates principally at layer two of the OSI Model. However, its address field (the Data Link Connection Identifier, or DLCI) can be used at the OSI network layer with a minimum set of procedures. Thus, it rids itself of many X.25 layer 3 encumbrances but still has the DLCI as an identifier beyond a node-to-node layer two link protocol. The simplicity of Frame Relay makes it faster and more efficient than X.25.

Because Frame Relay is a data link layer protocol, like X.25 it does not define internal network routing operations. For X.25, its packet IDs (the virtual circuit and virtual channel numbers) have to be correlated to network addresses; the same is true for Frame Relay's DLCI. How this is done is up to the network provider. Frame Relay, by virtue of having no network layer procedures, is connection-oriented at layer two, using the HDLC/LAPD/LAPB Set Asynchronous Balanced Mode (SABM). X.25 connections are typically established for each communication session, but it does have a feature allowing a limited amount of traffic to be passed across the UNI without the connection-oriented handshake.

For a while, Frame Relay was used to interconnect LANs across wide area networks. However, both X.25 and Frame Relay have been supplanted by the Internet Protocol (IP) at the network layer, and by Asynchronous Transfer Mode (ATM) and/or versions of Multi-Protocol Label Switching (MPLS) at layer two. A typical configuration is to run IP over ATM or a version of MPLS. (Uyless Black, ATM, Volume I, Prentice Hall, 1995)

OSI – The History of Domain Names

OSI Reference Model released

Date: 01/01/1988

The OSI Reference Model

Having a model in mind will help you understand how the pieces of the networking puzzle fit together. The most commonly used model is the Open Systems Interconnection (OSI) reference model. The OSI model, first released in 1984 by the International Organization for Standardization (ISO), provides a useful structure for defining and describing the various processes underlying open systems networking.

The OSI model is a blueprint for vendors to follow when developing protocol implementations. The OSI model organizes communication protocols into seven layers. Each layer addresses a narrow portion of the communication process.  Although you will examine each layer in detail later in this chapter, a quick overview is in order. Layer 1, the Physical layer, consists of protocols that control communication on the network media. Layer 7, the Application layer, interfaces the network services with the applications in use on the computer. The five layers in between—Data Link, Network, Transport, Session, and Presentation—perform intermediate communication tasks.
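For quick reference, the seven layers named above can be listed lowest-first; the short annotations merely restate the roles given in the text.

    # The seven OSI layers, lowest first.
    OSI_LAYERS = [
        (1, "Physical"),      # communication on the network media
        (2, "Data Link"),
        (3, "Network"),
        (4, "Transport"),
        (5, "Session"),
        (6, "Presentation"),
        (7, "Application"),   # interfaces network services with applications
    ]
    for number, name in OSI_LAYERS:
        print(f"Layer {number}: {name}")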

History of the OSI Reference Model

Looking at the origins of the OSI Reference Model takes us back to several issues that were discussed in the Networking Fundamentals chapter of this Guide; specifically, I am talking about standards and standards organizations. The idea behind the creation of networking standards is to define widely-accepted ways of setting up networks and connecting them together. The OSI Reference Model represented an early attempt to get all of the various hardware and software manufacturers to agree on a framework for developing various networking technologies.

In the late 1970s, two projects began independently, with the same goal: to define a unifying standard for the architecture of networking systems. One was administered by the International Organization for Standardization (ISO), while the other was undertaken by the International Telegraph and Telephone Consultative Committee, or CCITT (the abbreviation is from the French version of the name). These two international standards bodies each developed a document that defined similar networking models.

In 1983, these two documents were merged together to form a standard called The Basic Reference Model for Open Systems Interconnection. That's a mouthful, so the standard is usually referred to as the Open Systems Interconnection Reference Model, the OSI Reference Model, or even just the OSI Model. It was published in 1984 by both the ISO, as standard ISO 7498, and the renamed CCITT (now called the Telecommunications Standardization Sector of the International Telecommunication Union or ITU-T) as standard X.200. (Incidentally, isn't the new name for the CCITT much catchier than the old one? Just rolls off the old tongue, doesn't it?)

One interesting aspect of the history of the OSI Reference Model is that the original objective was not to create a model primarily for educational purposes—even though many people today think that this was the case. The OSI Reference Model was intended to serve as the foundation for the establishment of a widely-adopted suite of protocols that would be used by international internetworks—basically, what the Internet became. This was called, unsurprisingly, the OSI Protocol Suite.

However, things didn’t quite work out as planned. The rise in popularity of the Internet and its TCP/IP protocols met the OSI suite head on, and in a nutshell, TCP/IP won. Some of the OSI protocols were implemented, but as a whole, the OSI protocols lost out to TCP/IP when the Internet started to grow.

The OSI model itself, however, found a home as a device for explaining the operation of not just the OSI protocols, but networking in general terms. It was used widely as an educational tool—much as I use it myself in this Guide—and also to help describe interactions between the components of other protocol suites and even hardware devices. While most technologies were not designed specifically to meet the dictates of the OSI model, many are described in terms of how they fit into its layers. This includes networking protocols, software applications, and even different types of hardware devices, such as switches and routers. The model is also useful to those who develop software and hardware products, by helping to make clear the roles performed by each of the components in a networking system.

ORG – The History of Domain Names

.org created

Date: 01/01/2003

The domain name org is a generic top-level domain (gTLD) of the Domain Name System (DNS) used in the Internet. The name is truncated from organization. It was one of the original domains established in 1985, and has been operated by the Public Interest Registry since 2003. The domain was originally intended for non-profit entities, but this restriction was not enforced and has since been removed. The domain is commonly used by schools, open-source projects, and communities, as well as by for-profit entities. The number of registered domains in org increased from fewer than one million in the 1990s to ten million as of June 2012.

The domain “.org” was one of the original top-level domains, with com, us, edu, gov, mil and net, established in January 1985. It was originally intended for non-profit organizations or organizations of a non-commercial character that did not meet the requirements for other gTLDs. The MITRE Corporation was the first group to register an org domain with mitre.org in July 1985. The TLD has been operated since January 1, 2003 by Public Interest Registry, who assumed the task from VeriSign Global Registry Services, a division of Verisign.

Registrations

Registrations of subdomains are processed via accredited registrars worldwide. Anyone can register a second-level domain within org, without restrictions. In some instances subdomains are also used by commercial sites, such as craigslist.org. According to the ICANN Dashboard (Domain Name) report, the composition of the TLD is diverse, including cultural institutions, associations, sports teams, religious and civic organizations, open-source software projects, schools, environmental initiatives, social and fraternal organizations, health organizations, legal services, clubs, and community-volunteer groups. In some cases subdomains have been created for crisis management. Some cities, among them Rybnitsa in Transnistria, also have org domain names.


Although organizations anywhere in the world may register subdomains, many countries, such as Australia (au), Japan (jp), Argentina (ar), Bolivia (bo), Uruguay (uy), Turkey (tr), Somalia (so), Sierra Leone (sl), Russia (ru), Bangladesh (bd), and the United Kingdom (uk), have established a second-level domain with a similar purpose under their ccTLD. Such second-level domains are usually named org or or.

In 2009, the org domain consisted of more than 8 million registered domain names, growing to 8.8 million in 2010 and 9.6 million in 2011. The Public Interest Registry registered the ten millionth .ORG domain in June 2012. When the 9.5 millionth .org was registered in December 2011, .org became the third-largest gTLD.

Internationalized domain names

The org domain registry allows the registration of selected internationalized domain names (IDNs) as second-level domains. For German, Danish, Hungarian, Icelandic, Korean, Latvian, Lithuanian, Polish, and Swedish IDNs this has been possible since 2005. Spanish IDN registrations have been possible since 2007.

Domain name security

On June 2, 2009, The Public Interest Registry announced that the org domain is the first open generic top-level domain and the largest registry overall that has signed its DNS zone with Domain Name System Security Extensions (DNSSEC). This allows the verification of the origin authenticity and integrity of DNS data by conforming DNS clients.

As of June 23, 2010, DNSSEC was enabled for individual second-level domains, starting with 13 registrars.

Oldest-Dotcom – The History of Domain Names

WORLD’S OLDEST .COM DOMAIN NAME TURNS 27

March 17, 2012

Domain names may seem to have been around forever, but the first .com name was registered just 27 years ago.

Symbolics.com was registered on March 15, 1985 by the Massachusetts-based Symbolics Computer Corporation. Symbolics is no longer an active company, and the domain name was sold to XF.com Investments for an unknown sum in 2009.

While the World Wide Web would eventually take the world by storm, it certainly didn’t happen in 1985 – by the end of that year, there were only four other .com domain names registered – BBN.com, Think.com, DEC.com and MCC.com.

.EDU domain names outpaced .com registrations in 1985 with 12 universities registering .edu names in that year.

It would take until late 1987 before 100 .com domain names were registered, and by that time even tech industry heavyweights such as Microsoft had yet to register their company name as a .com, with Microsoft not doing so until the first of May in 1991.

Ten years after the first registration, there were still only 120,000 .com domains - a drop in the ocean compared to the close to 100 million .coms registered today. Use of the World Wide Web didn't start to blossom until the late '90s, and growth in the number of global Internet users would really start rocketing in subsequent years, with approximately 480% growth between 2000 and 2011.

According to Wikipedia, the oldest .org name will also celebrate its 27th birthday this year. Mitre.org’s registration occurred in July 1985 and the name is still registered to the Mitre Corporation – a  private, not-for-profit corporation providing engineering and technical services to the U.S. Federal government.

The oldest .com isn’t the oldest generic TLD however. On January 1, 1985; nordu.net was registered by the Nordic Infrastructure for Research & Education (NORDUnet). It would be nearly two years before the next .net registration would occur; which was nsf.net on November 1, 1986.

NTT Docomo – The History of Domain Names

NTT DoCoMo in Japan launched the first mobile Internet service, i-mode

Date: 01/01/1999

NTT DOCOMO, Inc. is the predominant mobile phone operator in Japan. The name is officially an abbreviation of the phrase "do communications over the mobile network", and is also from the compound word dokomo, meaning "everywhere" in Japanese. Docomo provides phone, video phone (FOMA and some PHS), i-mode (internet), and mail (i-mode mail, Short Mail, and SMS) services. The company's headquarters are in the Sanno Park Tower, Nagatachō, Chiyoda, Tokyo. At the beginning of 2015, it was the fourth largest public company in Japan when measured by market capitalization.

Docomo was spun off from Nippon Telegraph and Telephone (NTT) in August 1991 to take over the mobile cellular operations. It provides 2G (mova) PDC cellular services on the 800 MHz band, and 3G FOMA W-CDMA services on the 2 GHz (UMTS2100) and 800 MHz (UMTS800 (Band VI)) and 1700 MHz (UMTS1700 (Band IX)) bands, and 4G LTE services. Its businesses also included PHS (Paldio), paging, and satellite. Docomo ceased offering a PHS service on January 7, 2008.

i-mode was launched in Japan on 22 February 1999. The content planning and service design team was led by Mari Matsunaga, while Takeshi Natsuno was responsible for the business development. Top executive Keiichi Enoki oversaw the technical and overall development. A few months after DoCoMo launched i-mode in February 1999, DoCoMo's competitors launched very similar mobile data services: KDDI launched EZweb, and J-Phone launched J-Sky. Vodafone later acquired J-Phone including J-Sky, renaming the service Vodafone live!, although initially this was different from Vodafone live! in Europe and other markets. In addition, Vodafone KK was acquired by SoftBank, the operator of Yahoo! Japan, in October 2006 and changed its name to SoftBank Mobile. As of June 2006, the mobile data services i-mode, EZweb, and J-Sky had over 80 million subscribers in Japan.

NTT DoCoMo’s i-mode is a mobile internet (as opposed to wireless internet) service popular in Japan. Unlike Wireless Application Protocol, i-mode encompasses a wider variety of internet standards, including web access, e-mail, and the packet-switched network that delivers the data. i-mode users have access to various services such as e-mail, sports results, weather forecast, games, financial services, and ticket booking. Content is provided by specialized services, typically from the mobile carrier, which allows them to have tighter control over billing.

Like WAP, i-mode delivers only those services that are specifically converted for the service, or are converted through gateways. This has placed both systems at a disadvantage against handsets that use “real” browser software, and generally use a flat pricing structure for data. Even i-mode’s creator, Takeshi Natsuno, has stated “I believe the iPhone (a phone that uses the traditional TCP/IP model) is closer to the mobile phone of the future, compared with the latest Japanese mobile phones.”

In contrast with the Wireless Application Protocol (WAP) standard, which used Wireless Markup Language (WML) on top of a protocol stack for wireless handheld devices, i-mode borrows from fixed Internet data formats such as C-HTML based on HTML, as well as DoCoMo proprietary protocols ALP (HTTP) and TLP (TCP, UDP).

i-mode phones have a special i-mode button for the user to access the start menu. There are more than 12,000 official sites and around 100,000 or more unofficial i-mode sites, which are not linked to DoCoMo’s i-mode portal page and DoCoMo’s billing services. NTT DoCoMo supervises the content and operations of all official i-mode sites, most of which are commercial. These official sites are accessed through DoCoMo’s i-mode menu but in many cases official sites can also be accessed from mobile phones by typing the URL or through the use of QR code (a barcode).

An i-mode user pays for both sent and received data. There are services to avoid unsolicited e-mails. The basic monthly charge is typically on the order of JPY ¥200 – ¥300 for i-mode not including the data transfer charges, with additional charges on a monthly subscription basis for premium services. A variety of discount plans exist, for example family discount and flat packet plans for unlimited transfer of data at a fixed monthly charge (on the order of ¥4,000/month).

Markets

Seeing the tremendous success of i-mode in Japan, many operators in Europe, Asia and Australia sought to license the service through partnership with DoCoMo. Takeshi Natsuno was behind the expansion of i-mode to 17 countries worldwide. Kamel Maamria, who was a partner with the Boston Consulting Group and who was supporting Mr. Natsuno, is also thought to have had a major role in the expansion of the first Japanese service ever launched outside Japan.

i-mode showed very fast take-up in the various countries where it was launched, which led more operators to seek to launch i-mode in their markets, with the footprint reaching a total of 17 markets worldwide.

While the i-mode service was an exceptional service which positioned DoCoMo as the global leader in value-added services, another key success factor for i-mode was the Japanese handset makers who developed state-of-the-art handsets to support it. As i-mode was exported to the rest of the world, Nokia and the other major handset vendors who controlled the markets at the time refused at first to support i-mode by developing handsets that supported the service. The operators who decided to launch i-mode had to rely on Japanese vendors who had no experience in international markets. As i-mode showed success in these markets, some vendors started customizing some of their handsets to support i-mode; however, the support was only partial and came late.

While the service was successful during the first years after launch, the lack of adequate handsets, the emergence of new handsets from new vendors supporting new Internet services, and a change in the leadership of i-mode at DoCoMo led a number of operators to migrate or integrate i-mode into new mobile Internet services. These efforts were ultimately unsuccessful, and i-mode never became popular outside of Japan.

NTIA – The History of Domain Names

The National Telecommunications and Information Administration issued "A Proposal to Improve the Technical Management of Internet Names and Addresses."

Date: 01/30/1998

Today's Internet is an outgrowth of U.S. government investments in packet-switching technology and communications networks carried out under agreements with the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF) and other U.S. research agencies. The government encouraged bottom-up development of networking technologies through work at NSF, which established the NSFNET as a network for research and education. The NSFNET fostered a wide range of applications, and in 1992 the U.S. Congress gave the National Science Foundation statutory authority to commercialize the NSFNET, which formed the basis for today's Internet. As a legacy, major components of the domain name system are still performed by or subject to agreements with agencies of the U.S. government.

Assignment of Numerical Addresses to Internet Users

Every Internet computer has a unique IP number. The Internet Assigned Numbers Authority (IANA), headed by Dr. Jon Postel of the Information Sciences Institute (ISI) at the University of Southern California, coordinates this system by allocating blocks of numerical addresses to regional IP registries (ARIN in North America, RIPE in Europe, and APNIC in the Asia/Pacific region), under contract with DARPA. In turn, larger Internet service providers apply to the regional IP registries for blocks of IP addresses. The recipients of those address blocks then reassign addresses to smaller Internet service providers and to end users.
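The allocation chain described above, from IANA to the regional registries to providers and on to end users, amounts to carving large address blocks into successively smaller ones. The sketch below uses Python's ipaddress module only to illustrate that subdivision; the specific blocks and prefix lengths are invented for the example.

    # Illustrative subdivision of an address block; the blocks shown are invented.
    import ipaddress

    registry_block = ipaddress.ip_network("198.51.0.0/16")          # block allocated to a regional registry
    isp_blocks = list(registry_block.subnets(new_prefix=20))        # registry carves it up for large ISPs
    customer_blocks = list(isp_blocks[0].subnets(new_prefix=24))    # an ISP reassigns smaller blocks
    print(isp_blocks[0], customer_blocks[0])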

Management of the System of Registering Names for Internet Users

The domain name space is constructed as a hierarchy. It is divided into top-level domains (TLDs), with each TLD then divided into second-level domains (SLDs), and so on. More than 200 national, or country-code, TLDs (ccTLDs) are administered by their corresponding governments or by private entities with the appropriate national government's acquiescence. A small set of generic top-level domains (gTLDs) do not carry any national identifier, but denote the intended function of that portion of the domain space. For example, .com was established for commercial users, .org for not-for-profit organizations, and .net for network service providers. The registration and propagation of these key gTLDs are performed by Network Solutions, Inc. (NSI), a Virginia-based company, under a five-year cooperative agreement with NSF. This agreement includes an optional ramp-down period that expires on September 30, 1998.

Operation of the Root Server System

The root server system contains authoritative databases listing the TLDs so that an Internet message can be routed to its destination. Currently, NSI operates the “A” root server, which maintains the authoritative root database and replicates changes to the other root servers on a daily basis. Different organizations, including NSI, operate the other 12 root servers. In total, the U.S. government plays a direct role in the operation of half of the world’s root servers. Universal connectivity on the Internet cannot be guaranteed without a set of authoritative and consistent roots.

Protocol Assignment

The Internet protocol suite, as defined by the Internet Engineering Task Force (IETF), contains many technical parameters, including protocol numbers, port numbers, autonomous system numbers, management information base object identifiers and others. The common use of these protocols by the Internet community requires that the particular values used in these fields be assigned uniquely. Currently, IANA, under contract with DARPA, makes these assignments and maintains a registry of the assigned values.

The Need For Change

From its origins as a U.S.-based research vehicle, the Internet is rapidly becoming an international medium for commerce, education and communication. The traditional means of organizing its technical functions need to evolve as well. The pressures for change are coming from many different quarters:

There is widespread dissatisfaction about the absence of competition in domain name registration. Mechanisms for resolving conflict between trademark holders and domain name holders are expensive and cumbersome. Without changes, a proliferation of lawsuits could lead to chaos as tribunals around the world apply the antitrust law and intellectual property law of their jurisdictions to the Internet.

Many commercial interests, staking their future on the successful growth of the Internet, are calling for a more formal and robust management structure. An increasing percentage of Internet users reside outside of the U.S., and those stakeholders want a larger voice in Internet coordination. As Internet names increasingly have commercial value, the decision to add new top-level domains cannot continue to be made on an ad hoc basis by entities or individuals that are not formally accountable to the Internet community.

As the Internet becomes commercial, it becomes inappropriate for U.S. research agencies (NSF and DARPA) to participate in and fund these functions.

NSFNET – The History of Domain Names

NSFNET Created

Date: 01/01/1981

Created as a result of a 1985 National Science Foundation (NSF) initiative, NSFNET established a high-speed connection among the five NSF supercomputer centers and the National Center for Atmospheric Research, and provided external access for scientists, researchers, and engineers who were not located near the computing centers.

The National Science Foundation Network (NSFNET) was a program of coordinated, evolving projects sponsored by the National Science Foundation (NSF) beginning in 1985 to promote advanced research and education networking in the United States. NSFNET was also the name given to several nationwide backbone networks that were constructed to support NSF’s networking initiatives from 1985 to 1995. Initially created to link researchers to the nation’s NSF-funded supercomputing centers, through further public funding and private industry partnerships it developed into a major part of the Internet backbone.

Highlights

1981 military use In 1981 NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. Its experience with CSNET led NSF to use TCP/IP when it created NSFNET, a 56 kbit/s backbone established in 1986 that connected the NSF-supported supercomputing centers and regional research and education networks in the United States. However, use of NSFNET was not limited to supercomputer users and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990. NSFNET was expanded and upgraded to 45 Mbit/s in 1991, and was decommissioned in 1995 when it was replaced by backbones operated by several commercial Internet Service Providers.

1981 TCP/IP Stemming from the first specifications of TCP in 1974, TCP/IP emerged in mid-to-late 1978 in nearly final form. By 1981, the associated standards were published as RFCs 791, 792 and 793 and adopted for use. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP.

1981 UUCP and Usenet By 1981 the number of UUCP hosts had grown to 550.

1981 X.25 The international X.25 network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981.

1983 January 1st TCP/IP On January 1, 1983, known as flag day, the TCP/IP protocols became the only approved protocols on the ARPANET, replacing the earlier NCP protocol.

1981 The ARPANET had grown to 213 hosts, with a new host being added approximately every twenty days.

1981 Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) developed the Computer Science Network (CSNET)

The Launch of NSFNET

While CSNET was growing in the early 1980s, NSF began funding improvements in the academic computing infrastructure. Providing access to computers with increasing speed became essential for certain kinds of research. NSF’s supercomputing program, launched in 1984, was designed to make high performance computers accessible to researchers around the country.

The first stage was to fund the purchase of supercomputer access at Purdue University, the University of Minnesota, Boeing Computer Services, AT&T Bell Laboratories, Colorado State University, and Digital Productions. In 1985, four new supercomputer centers were established with NSF support—the John von Neumann Center at Princeton University, the San Diego Supercomputer Center on the campus of the University of California at San Diego, the National Center for Supercomputing Applications at the University of Illinois, and the Cornell Theory Center, a production and experimental supercomputer center. NSF later established the Pittsburgh Supercomputing Center, which was run jointly by Westinghouse, Carnegie-Mellon University, and the University of Pittsburgh.

In 1989, funding for four of the centers, San Diego, Urbana-Champaign, Cornell, and Pittsburgh, was renewed. In 1997, NSF restructured the supercomputer centers program and funded the supercomputer site partnerships based in San Diego and Urbana-Champaign.

A fundamental part of the supercomputing initiative was the creation of NSFNET. NSF envisioned a general high-speed network, moving data more than twenty-five times the speed of CSNET, and connecting existing regional networks, which NSF had created, and local academic networks. NSF wanted to create an “inter-net,” a “network of networks,” connected to DARPA’s own internet, which included the ARPANET. It would offer users the ability to access remote computing resources from within their own local computing environment.

NSFNET got off to a relatively modest start in 1986 with connections among the five NSF university-based supercomputer centers. Yet its connection with ARPANET immediately put NSFNET into the major leagues as far as networking was concerned. As with CSNET, NSF decided not to restrict NSFNET to supercomputer researchers but to open it to all academic users. The other wide-area networks (all government-owned) supported mere handfuls of specialized contractors and researchers.

The flow of traffic on NSFNET was so great in the first year that an upgrade was required. NSF issued a solicitation calling for an upgrade and, equally important, the participation of the private sector. Steve Wolff, then program director for NSFNET, explained why commercial interests eventually had to become a part of the network, and why NSF supported it.

“It had to come,” says Wolff, “because it was obvious that if it didn’t come in a coordinated way, it would come in a haphazard way, and the academic community would remain aloof, on the margin. That’s the wrong model—multiple networks again, rather than a single Internet. There had to be commercial activity to help support networking, to help build volume on the network. That would get the cost down for everybody, including the academic community, which is what NSF was supposed to be doing.”

To achieve this goal, Wolff and others framed the 1987 upgrade solicitation in a way that would enable bidding companies to gain technical experience for the future. The winning proposal came from a team including Merit Network, Inc., a consortium of Michigan universities, and the state of Michigan, as well as two commercial companies, IBM and MCI. In addition to overall engineering, management, and operation of the project, the Merit team was responsible for developing user support and information services. IBM provided the hardware and software for the packet-switching network and network management, while MCI provided the transmission circuits for the NSFNET backbone, including reduced tariffs for that service.

Merit Network worked quickly. In July 1988, eight months after the award, the new backbone was operational. It connected thirteen regional networks and supercomputer centers, representing a total of over 170 constituent campus networks and transmitting 152 million packets of information per month. Just as quickly, the supply offered by the upgraded NSFNET caused a surge in demand. Usage increased on the order of 10 percent per month, a growth rate that has continued to this day in spite of repeated expansions in data communications capacity. In 1989, Merit Network was already planning for the upgrade of the NSFNET backbone service from T1 (1.5 megabits per second or Mbps) to T3 (45 Mbps).
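
At that pace the load roughly triples every year. A quick check of the compound growth, assuming an idealized steady 10 percent per month, makes the point:

  # Compound growth at 10 percent per month over one year.
  monthly_growth = 1.10
  print(monthly_growth ** 12)   # about 3.14 -- traffic roughly triples each year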

“When we first started producing those traffic charts, they all showed the same thing—up and up and up! You probably could see a hundred of these, and the chart was always the same,” says Ellen Hoffman, a member of the Merit team. “Whether it is growth on the Web or growth of traffic on the Internet, you didn’t think it would keep doing that forever, and it did. It just never stopped.”

The T3 upgrade, like the original network implementation, deployed new technology under rigorous operating conditions. It also required a heavier responsibility than NSF was prepared to assume. The upgrade, therefore, represented an organizational as well as a technical milestone—the beginning of the Internet industry.

In 1990 and 1991, the NSFNET team was restructured. A not-for-profit entity called Advanced Network and Services continued to provide backbone service as a subcontractor to Merit Network, while a for-profit subsidiary was spun off to enable commercial development of the network.

The new T3 service was fully inaugurated in 1991, representing a thirtyfold increase in the bandwidth on the backbone. The network linked sixteen sites and over 3,500 networks. By 1992, over 6,000 networks were connected, one-third of them outside the United States. The numbers continued to climb. In March 1991, the Internet was transferring 1.3 trillion bytes of information per month. By the end of 1994, it was transmitting 17.8 trillion bytes per month, the equivalent of electronically moving the entire contents of the Library of Congress every four months.

NSFNET Upgraded – The History of Domain Names

NSFNET upgraded to 1.5 Mbit/s (T1)

Date: 01/01/1988

NSFNET signed a five year cooperative agreement with Merit, IBM, and MCI to upgrade the NSFNET backbone. In 1988, the NSFNET backbone was upgraded to a 1.5 Mbit/s T1 network that featured thirteen nodes. In 1991, NSFNET was upgraded to a 45 Mbit/s T3 network that featured sixteen nodes. IBM focused on upgrading the packet switching hardware and software of NSFNET, and MCI upgraded the transmission circuits.

As a result of a November 1987 NSF award to the Merit Network, a networking consortium of public universities in Michigan, the original 56 kbit/s network was expanded to include 13 nodes interconnected at 1.5 Mbit/s (T-1) by July 1988. The backbone nodes used routers based on a collection of nine IBM RT systems running AOS, IBM’s version of Berkeley UNIX.

Under its cooperative agreement with NSF the Merit Network was the lead organization in a partnership that included IBM, MCI, and the State of Michigan. Merit provided overall project coordination, network design and engineering, a Network Operations Center (NOC), and information services to assist the regional networks. IBM provided equipment, software development, installation, maintenance and operations support. MCI provided the T-1 data circuits at reduced rates. The state of Michigan provided funding for facilities and personnel. Eric M. Aupperle, Merit’s President, was the NSFNET Project Director, and Hans-Werner Braun was Co-Principal Investigator.

From 1987 to 1994 Merit organized a series of “Regional-Techs” meetings, where technical staff from the regional networks met to discuss operational issues of common concern with each other and the Merit engineering staff.

During this period, but separate from its support for the NSFNET backbone, NSF funded:

  • the NSF Connections Program that helped colleges and universities obtain or upgrade connections to regional networks;
  • regional networks to obtain or upgrade equipment and data communications circuits;
  • the NNSC, and successor Network Information Services Manager (aka InterNIC) information help desks;
  • the International Connections Manager (ICM), a task performed by Sprint, that encouraged connections between the NSFNET backbone and international research and education networks; and
  • various ad hoc grants to organizations such as the Federation of American Research Networks (FARNET).

The NSFNET became the principal Internet backbone starting in the summer of 1986, when MIDnet, the first NSFNET regional network, became operational. By 1988, in addition to the five NSF supercomputer centers, NSFNET included connectivity to the regional networks BARRNet, JVNCNet, Merit/MichNet, MIDnet, NCAR, NorthWestNet, NYSERNet, SESQUINET, SURAnet, and Westnet, which in turn connected about 170 additional networks to the NSFNET. Three new nodes were added as part of the upgrade to T-3: NEARNET in Cambridge, Massachusetts; Argonne National Laboratory outside of Chicago; and SURAnet in Atlanta, Georgia. NSFNET connected to other federal government networks including the NASA Science Internet, the Energy Sciences Network (ESnet), and others. Connections were also established to international research and education networks, first to France and Canada, then to NordUnet serving Denmark, Finland, Iceland, Norway, and Sweden, to Mexico, and many others.

Two Federal Internet Exchanges (FIXes) were established in June 1989 under the auspices of the Federal Engineering Planning Group (FEPG): FIX East, at the University of Maryland in College Park, and FIX West, at the NASA Ames Research Center in Mountain View, California. The existence of NSFNET and the FIXes allowed the ARPANET to be phased out in mid-1990.

Starting in August 1990 the NSFNET backbone supported the OSI Connectionless Network Protocol (CLNP) in addition to TCP/IP. However, CLNP usage remained low when compared to TCP/IP.

Traffic on the network continued its rapid growth, doubling every seven months. Projections indicated that the T-1 backbone would become overloaded sometime in 1990.

A critical routing technology, the Border Gateway Protocol (BGP), originated during this period of Internet history. BGP allowed routers on the NSFNET backbone to differentiate routes learned via multiple paths. Prior to BGP, interconnection between IP networks was inherently hierarchical, and careful planning was needed to avoid routing loops. BGP turned the Internet into a meshed topology, moving away from the centralized architecture that the ARPANET emphasized.
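
As a rough illustration of those two ideas, loop avoidance and choosing among routes heard over several paths, the sketch below walks through a BGP-style decision in Python. The AS numbers, the prefix and the shortest-AS-path tie-break are illustrative assumptions, not NSFNET’s actual configuration, and real BGP applies a much longer decision process.

  # Each advertisement carries the list of autonomous systems (the AS path)
  # it has already traversed on its way to us.
  MY_ASN = 64500  # illustrative private ASN, not a real NSFNET value

  # Three advertisements for the same prefix, learned from different neighbors.
  advertisements = [
      {"prefix": "192.0.2.0/24", "as_path": [64501, 64502, 64510]},
      {"prefix": "192.0.2.0/24", "as_path": [64503, 64510]},
      {"prefix": "192.0.2.0/24", "as_path": [64504, 64500, 64510]},  # contains our own ASN: a loop
  ]

  # Loop avoidance: discard any path that already contains our own ASN.
  usable = [adv for adv in advertisements if MY_ASN not in adv["as_path"]]

  # Differentiate the remaining routes; preferring the shortest AS path is
  # the classic first step of the (much longer) real BGP decision process.
  best = min(usable, key=lambda adv: len(adv["as_path"]))
  print(best["as_path"])   # [64503, 64510]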

From 1987 to 1991, the NSFNET backbone was connected to a variety of regional and federal computer networks. To name but a few:

  • BARRNet (Bay Area Regional Research Network)
  • ESnet (Energy Sciences Network)
  • MichNet (Michigan Network)
  • MIDnet (Midwest Network)
  • MILNET (Military Network)
  • NSN (NASA Science Network)
  • NorthWestNet (North West Network)
  • NYSERNet (New York State Education and Research Network)
  • SESQUINET (Sesquicentennial Network)
  • SURAnet (South Eastern Universities Research Association Network)
  • Westnet (West State Network)

Most of the regional networks listed above were connected to smaller (campus) networks, which numbered into the thousands. NSFNET was interconnected to other U.S. federal networks when the Federal Internet Exchange (FIX) was established in 1989. Therefore, for the first time, a ‘network of networks’ was formed, which would closely resemble the modern Internet. The NSFNET backbone was at the ‘heart’ of this configuration, and became known as the Internet’s backbone.

By 1992, over 4,000 networks in the United States were connected to the NSFNET backbone, and over 2000 international networks were connected to the NSFNET backbone.

NSFNET T3 – The History of Domain Names

NSFNET Backbone upgraded to T3

Date: 01/01/1992

During 1991 the backbone was upgraded to 45 Mbit/s (T-3) transmission speed and expanded to interconnect 16 nodes. The routers on the upgraded backbone were based on an IBM RS/6000 workstation running UNIX. Core nodes were located at MCI facilities, with end nodes at the connected regional networks and supercomputing centers. The transition from T-1 to T-3, completed in November 1991, did not go as smoothly as the transition from 56 kbit/s DDS to T-1; it took longer than planned, and as a result there was at times serious congestion on the overloaded T-1 backbone. Following the transition to T-3, portions of the T-1 backbone were left in place to act as a backup for the new T-3 backbone.

In anticipation of the T-3 upgrade and the approaching end of the 5-year NSFNET cooperative agreement, in September 1990 Merit, IBM, and MCI formed Advanced Network and Services (ANS), a new non-profit corporation with a more broadly based Board of Directors than the Michigan-based Merit Network. Under its cooperative agreement with NSF, Merit remained ultimately responsible for the operation of NSFNET, but subcontracted much of the engineering and operations work to ANS. Both IBM and MCI made substantial new financial and other commitments to help support the new venture. Allan Weis left IBM to become ANS’s first President and Managing Director. Douglas Van Houweling, former Chair of the Merit Network Board and Vice Provost for Information Technology at the University of Michigan, was Chairman of the ANS Board of Directors.

The new T-3 backbone was named ANSNet and provided the physical infrastructure used by Merit to deliver the NSFNET Backbone Service.


NSFNET Decommissioned – The History of Domain Names

NSFNET Decommissioned

Date: 01/01/1995

NSFNET Program takes next steps in advancing networking

Marking a new phase for the Internet, the NSFNET Backbone was decommissioned at midnight on April 30, 1995. The National Science Foundation, which established the NSFNET Program in 1985, began an effort two years ago to privatize the backbone functions.  With the transfer complete, the current backbone is no longer necessary.

The NSFNET Program will continue to lead in the development of new technology and new applications in networking for the research and education community, with the emphasis placed on bandwidth-intensive networking.

The proposal submitted by MERIT to the NSFNET Program to establish the NSFNET Backbone was founded on the belief that a national research network is crucial to the future of scientific research in the United States. MERIT proposed a partnership with IBM and MCI to provide backbone services and advance networking technologies. The backbone connected NSF-funded regional networks and Supercomputer Centers, making it possible for research and education institutions to connect to regional networks and gain access to each other and to resources such as the Supercomputer Centers. Until now, the backbone has been funded by the NSF.

Creation of the NSFNET Program spawned a new industry, encouraging the growth of electronic networks that collectively are known as the Internet. The Internet continues to evolve, and the NSFNET Program will continue to support the technologies and connections that help advance this evolution.

“The NSF has been successful in promoting the use of networking among and beyond the research and education community. This has changed the nature of collaboration, and facilitated new opportunities in research and education,” said Priscilla Huston, Program Director for the NSFNET Program.

“The commodity services that were supplied by the NSFNET Backbone can now be acquired in the commercial sector.  The NSFNET Program will work to advance the technology and the tools for bandwidth-intensive networking which will meet new applications needs.”

The NSFNET Backbone service has been managed by Merit Network, Inc. since 1987 in partnership with IBM and MCI. In 1990, the partnership formed a new non-profit corporation, Advanced Network and Services, Inc. (ANS), which was awarded a subagreement by Merit Network, Inc. to provide and operate the NSFNET Backbone.

“The success of this project has been phenomenal. Merit forged an exemplary partnership with IBM, MCI and the State of Michigan Strategic Fund, which serves as a model for university, industry, and state and federal cooperation. Each of the participants contributed key expertise and resources and together, under Merit’s leadership, led the research and education community into a new era,” said Jane Caviness, interim director of the division of Networking and Communications Research and Infrastructure at the NSF.

Speaking for the partnership, Merit’s President Eric Aupperle praised the leadership of the National Science Foundation.

“We are pleased and excited to have been the team behind the extraordinarily successful backbone activity for the last seven years,” Aupperle said. “It’s rewarding to witness the results of our collaboration as the Internet’s prominence continues its rapid pace. The Internet wouldn’t be what it is today without the guidance of NSF, particularly Steve Wolff and his colleagues.”

Wolff is the former director of NSF’s division of Networking and Communications Research and Infrastructure. He is retired from government service and is currently employed by Cisco Systems, manufacturers of internetworking hubs, routers, switches and software.

Throughout a transition period, the NSF is subsidizing, on a declining scale, regional networks that carry research and education traffic: 100 percent the first year; 75 the second; 50 the third; and 25 the fourth. These regional networks were formerly connected to the NSFNET backbone for free carriage of research and education traffic, but have now selected network service providers from competitive services. If the costs of the NSFNET Backbone were distributed across research and educational institutions, on average, they could be expected to pay approximately $1,500 more per year for connectivity (the average institution currently pays between $10,000 and $60,000 for connections to the Internet). Most consumers will experience no change in services or fees.

The NSF will continue to promote research on high bandwidth connections as well as high bandwidth connectivity among the supercomputer centers, with the very high speed Backbone Network Service (vBNS) recently awarded to MCI. The NSF also will continue to support connections of research and education institutions to the Internet; international connections services; the InterNIC, which provides nonmilitary domain name registration services; and new networking tools and applications development.

NSFNET 56 – The History of Domain Names

NSFNET with 56 kbit/s links

Date: 01/01/1986

NSFNET used a TCP/IP-based protocol compatible with ARPANET, as a backbone to which regional and academic networks would connect. It experienced exponential growth in its network traffic. As a result of a November 1987 NSF award to a consortium of universities in Michigan, the original 56 kbit/s links were upgraded to 1.5 Mbit/s by July 1988 and again to 45 Mbit/s in 1991.

The NSFNET greatly increased the speed and capacity of the Internet (increasing the bandwidth on backbone links from 56 kbit/s to 1.5 Mbit/s and then to 45 Mbit/s) and greatly extended its reliability and reach, serving more than 50 million users in 93 countries when control of the Internet backbone was transferred from the U.S. Government to the telecom carriers and commercial Internet Service Providers in April 1995.

NSFNET is also the name given to a nationwide physical network that was constructed to support the collective network-promotion effort. That network was initiated as a 56 kbit/s backbone in 1986. The network was significantly expanded from 1987 to 1995, when the early version of NSFNET was upgraded to T1 and then T3 speeds and expanded to reach thousands of institutions. Throughout this period, many projects were associated with the NSFNET program, even as the backbone itself became widely known as “the NSFNET.”

The initial NSFNET consisted of a network backbone built with 56 kbps lines by a team from the University of Illinois National Center for Supercomputing Applications (NCSA) and the Cornell University Theory Center, because they were the biggest TCP/IP proponents, with help from Dave Mills of the University of Delaware and Hans-Werner Braun of Merit Networks Inc. While 56 kbps sounds awfully slow compared to today’s Internet, the load on the early NSFNET was correspondingly less as well — there was no multimedia yet, and simple wireframe and contour graphics were as complex as most communications got.

56 kb/s backbone

The NSFNET initiated operations in 1986 using TCP/IP. Its six backbone sites were interconnected with leased 56-kb/s links, built by a group including the University of Illinois National Center for Supercomputing Applications (NCSA), Cornell University Theory Center, University of Delaware, and Merit Network. PDP-11/73 minicomputers with routing and management software, called Fuzzballs, served as the network routers since they already implemented the TCP/IP standard. This original 56 kbit/s backbone was overseen by the supercomputer centers themselves with the lead taken by Ed Krol at the University of Illinois at Urbana-Champaign. PDP-11/73 Fuzzball routers were configured and run by Hans-Werner Braun at the Merit Network and statistics were collected by Cornell University.

Support for NSFNET end-users was provided by the NSF Network Service Center (NNSC), located at BBN Technologies, and included publishing the softbound “Internet Manager’s Phonebook”, which listed the contact information for every issued domain name and IP address in 1990. Ed Krol also authored the Hitchhiker’s Guide to the Internet to help users of the NSFNET understand its capabilities; it became one of the first help manuals for the Internet. As regional networks grew, the 56 kbit/s NSFNET backbone experienced rapid increases in network traffic and became seriously congested. In June 1987, NSF issued a new solicitation to upgrade and expand NSFNET.

NSF – The History of Domain Names

NSF stops subsidizing domain registrations

Date: 01/01/1995

The World Wide Web, InterNIC, and the public domain

In 1990, the Internet exploded into commercial society, followed a year later by the release of the World Wide Web by its originator, Tim Berners-Lee, and CERN. The same year, the first commercial service provider began operating, and domain registration officially entered the public domain.

Initially the registration of domain names was free, subsidized by the National Science Foundation through IANA, but by 1992 a new organization was needed to handle the exponential growth of the Internet. IANA and the NSF jointly created InterNIC, a quasi-governmental body mandated to organize and maintain the growing DNS registry and services.

Overwhelming growth forced the NSF to stop subsidizing domain registrations in 1995. InterNIC, due to budget demands, began imposing a $100.00 fee for each two-year registration. The next wave in the evolution of the DNS occurred in 1998 when the U.S. Department of Commerce published the ‘White Paper’. This document outlined the transition of management of domain name systems to private organizations, allowing for increased competition.

NSC – The History of Domain Names

National Semiconductor – NSC.com was registered

Date: 08/05/1986

On August 5, 1986, National Semiconductor became the 20th company to register its domain, nsc.com.

National Semiconductor was an American semiconductor manufacturer which specialized in analog devices and subsystems, formerly with headquarters in Santa Clara, California, United States. The company produced power management integrated circuits, display drivers, audio and operational amplifiers, communication interface products and data conversion solutions. National’s key markets included wireless handsets, displays and a variety of broad electronics markets, including medical, automotive, industrial and test and measurement applications.

On September 23, 2011, the company formally became part of Texas Instruments as the “Silicon Valley” division.

History

Founding

National Semiconductor was founded in Danbury, Connecticut by Dr. Bernard J. Rothlein on May 27, 1959, when he and seven colleagues, Edward N. Clarke, Joseph J. Gruber, Milton Schneider, Robert L. Hopkins, Robert L. Hoch, Richard N. Rau and Arthur V. Siefert, left their employment at the semiconductor division of Sperry Rand Corporation. The founding of the new company was followed by Sperry Rand filing a lawsuit against National Semiconductor for patent infringement. By 1965, as it was reaching the courts, the preliminaries of the lawsuit had caused the stock value of National to be depressed. The depressed stock values allowed Peter J Sprague to invest heavily in the company with Sprague’s family funds. Sprague also relied on further financial backing from a pair of west coast investment firms and a New York underwriter to take control as the Chairman of National Semiconductor. At that time Sprague was 27 years old. Jeffrey S. Young characterised the era as the beginning of venture capitalism. That same year National Semiconductor acquired Molectro. Molectro was founded in 1962, in Santa Clara, California by J. Nall and D. Spittlehouse, who were formerly employed at Fairchild Camera and Instrument Corporation. The acquisition also brought to National Semiconductor two experts in linear semiconductor technologies, Robert Widlar and Dave Talbert, who were also formerly employed at Fairchild. The acquisition of Molectro provided National with the technology to launch itself in the fabrication and manufacture of monolithic integrated circuits.

In 1967, Sprague hired five top executives away from Fairchild, among whom were Charles E. Sporck and Pierre Lamond. At the time of Sporck’s hiring, Robert Noyce was de facto head of semiconductor operations at Fairchild and Sporck was his operations manager. Sporck was appointed President and CEO of National. To make the deal better for Sporck’s hiring and appointment for half his former salary at Fairchild, Sporck was allotted a substantial share of National’s stock. In essence, Sporck took four of his personnel from Fairchild with him as well as three others from TI, Perkin-Elmer and Hewlett Packard to form a new eight man team at National Semiconductor. Incidentally, Sporck had been Widlar’s superior at Fairchild before Widlar left Fairchild to join Molectro due to a compensation dispute with Sporck. In 1968, National shifted its headquarters from Danbury, Connecticut to Santa Clara, California. However, like many companies, National retained its registration as a Delaware corporation, for legal and financial expediency. Over the years National Semiconductor acquired several companies like Fairchild Semiconductor (1987), and Cyrix (1997). However, over time National Semiconductor spun off these acquisitions. Fairchild Semiconductor became a separate company again in 1997 and the Cyrix microprocessors division was sold to VIA Technologies of Taiwan in 1999.

From 1997 to 2002, National enjoyed a large amount of publicity and awards with the development of the Cyrix Media Center, Cyrix WebPad, WebPad Metro and National Origami PDA concept devices created by National’s Conceptual Products Group. Based largely on the success of the WebPad, National formed the Information Appliance division (highly integrated processors and “internet gadgets”) in 1998. The Information Appliance division was sold to AMD in 2003. Other businesses such as digital wireless chipsets, image sensors and PC I/O chipsets have also been closed down or sold off as National reincarnated itself as a high-performance analog semiconductor company.

The transformation of National Semiconductor

Peter Sprague, Pierre Lamond and Charles “Charlie” Sporck worked hand in hand, with the support of the board of directors, to transform the company into a multinational, world-class semiconductor concern. Immediately after becoming CEO, Sporck started a historic price war among semiconductor companies, which trimmed the number of competitors in the field; among the casualties to exit the semiconductor business were General Electric and Westinghouse. The cost control, overhead reduction and focus on profits implemented by Sporck were key to National surviving the price war and, subsequently, to its becoming in 1981 the first semiconductor company to reach the US$1 billion annual sales mark. The foundation that made National successful, however, was its expertise in analogue electronics, TTL (transistor–transistor logic) and MOSFET (metal-oxide-semiconductor field-effect transistor) integrated circuit technologies. As they had while employed at Fairchild, Sporck and Lamond directed National Semiconductor towards the growing industrial and commercial markets and to rely less on military and aerospace contracts. Those decisions, coupled with inflationary growth in the use of computers, provided the market for National’s expansion, while sources of funds associated with Sprague, together with the creative cash-flow buffering arranged by Sporck and Lamond, provided the financing required for it. Among Sporck’s cost control efforts was a move towards low-cost labour and the outsourcing of labour: National Semiconductor was among the pioneers in the semiconductor industry in investing in facilities to perform final manufacturing operations of integrated circuits in developing countries, especially in Southeast Asia. National Semiconductor’s manufacturing improvements under Sporck (in collaboration with Lamond) came not from an emphasis on process innovation but from improving and standardizing processes already established by other companies such as Fairchild and Texas Instruments, as well as from frequent raids on Fairchild’s pool of talent.

Acquisition by Texas Instruments

On April 4, 2011, Texas Instruments announced that it had agreed to buy National Semiconductor for $6.5 billion in cash. Texas Instruments paid $25 per share of National Semiconductor stock, an 80% premium over the April 4, 2011 closing share price of $14.07. The deal made Texas Instruments one of the world’s largest makers of analog technology components. On September 19, 2011, Chinese regulators approved the merger, the last approval needed. The companies formally merged on September 23, 2011.

NPL – The History of Domain Names

National Physical Laboratory (NPL)

Date: 01/01/1965

In 1965, Donald Davies of the National Physical Laboratory (United Kingdom) proposed a national data network based on packet switching. The proposal was not taken up nationally, but by 1970 he had designed and built the Mark I packet-switched network and proved the technology under operational conditions. By 1976, 12 computers and 75 terminal devices were attached, and more were added until the network was replaced in 1986.

Donald Watts Davies was a Welsh computer scientist employed at the UK National Physical Laboratory (NPL). In 1965 he developed the concept of packet switching in computer networking and implemented it in the NPL network. This was independent of the work of Paul Baran in the United States, who had a similar idea in the early 1960s. The developers of the ARPANET credited Davies’s work as a primary influence.

Early life

Davies was born in Treorchy in the Rhondda Valley, Wales. His father, a clerk at a coalmine, died a few months later, and his mother took Donald and his twin sister back to her home town of Portsmouth, where he went to school. He received a BSc degree in physics (1943) at Imperial College London, and then joined the war effort working as an assistant to Klaus Fuchs on the nuclear weapons Tube Alloys project at Birmingham University. He then returned to Imperial, taking a first class degree in mathematics (1947); he was also awarded the Lubbock Memorial Prize as the outstanding mathematician of his year.

National Physical Laboratory

From 1947, he worked at the National Physical Laboratory (NPL), where Alan Turing was designing the Automatic Computing Engine (ACE) computer. It is said that Davies spotted mistakes in Turing’s seminal 1936 paper On Computable Numbers, much to Turing’s annoyance. These were perhaps some of the first “programming” errors in existence, even if they were for a theoretical computer, the universal Turing machine. The ACE project was overambitious and floundered, leading to Turing’s departure. Davies took the project over and concentrated on delivering the less ambitious Pilot ACE computer, which first worked in May 1950. A commercial spin-off, the DEUCE, was manufactured by English Electric Computers and became one of the best-selling machines of the 1950s. Davies also worked on applications of traffic simulation and machine translation. In the early 1960s, he worked on government technology initiatives designed to stimulate the British computer industry.

Packet switched network design

In 1966 he returned to the NPL at Teddington just outside London, where he headed and transformed its computing activity. He became interested in data communications following a visit to the Massachusetts Institute of Technology, where he saw that a significant problem with the new time-sharing computer systems was the cost of keeping a phone connection open for each user. Davies’ key insight came in the realisation that computer network traffic was inherently “bursty”, with periods of silence, compared with relatively constant telephone traffic. His work on packet switching, presented by his colleague Roger Scantlebury, initially caught the attention of the developers of ARPANET, a U.S. defence network, at a conference in Gatlinburg, Tennessee, in October 1967. In Scantlebury’s report following the conference, he noted: “It would appear that the ideas in the NPL paper at the moment are more advanced than any proposed in the USA”. Davies first presented his own ideas on packet switching at a conference in Edinburgh on 5 August 1968.

In 1965, Davies coined the term packet switching for the concept of dividing computer messages into packets that are routed independently, possibly via differing routes, across a network and are reassembled at the destination, a concept previously described by Paul Baran under the more cryptic term “distributed adaptive message block switching”. Davies used the term “packets” because it could be translated into languages other than English without compromise. At NPL, Davies helped build a packet-switched network (the Mark I NPL network). It was replaced with the Mark II in 1973, and remained in operation until 1986, influencing other research in the UK and Europe. Larry Roberts of the Advanced Research Projects Agency in the United States applied Davies’ concepts of packet switching in the late 1960s for the ARPANET, which went on to become a predecessor to the Internet.

Later work

Davies relinquished his management responsibilities in 1979 to return to research. He became particularly interested in computer network security. He retired from the NPL in 1984, becoming a security consultant to the banking industry.

November Int – The History of Domain Names

November .int introduced

Date: 11/01/1988

In November 1988, another TLD was introduced: .int. This TLD was introduced in response to NATO’s request for a domain name which adequately reflected its character as an international organization. It was also originally planned to be used for some Internet infrastructure databases, such as ip6.int, the IPv6 equivalent of in-addr.arpa. However, in May 2000, the Internet Architecture Board proposed to exclude infrastructure databases from the int domain. All new databases of this type would be created in arpa (a legacy domain from the conversion of ARPANET), and existing usage would move to arpa wherever feasible, which led to the use of ip6.arpa for IPv6 reverse lookups.
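
For a concrete sense of what such a reverse-lookup name looks like, the short sketch below uses Python’s standard ipaddress module; the 2001:db8::1 address comes from the IPv6 documentation prefix and is used purely as an example.

  import ipaddress

  # reverse_pointer expands the address and reverses its hexadecimal
  # nibbles under the ip6.arpa domain (in-addr.arpa for IPv4).
  print(ipaddress.ip_address("2001:db8::1").reverse_pointer)
  # -> 1.0.0.0. ... .8.b.d.0.1.0.0.2.ip6.arpa  (32 reversed nibbles)

  print(ipaddress.ip_address("192.0.2.53").reverse_pointer)
  # -> 53.2.0.192.in-addr.arpa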

1988 TCP/IP goes global In 1988 Daniel Karrenberg, from Centrum Wiskunde & Informatica (CWI) in Amsterdam, visited Ben Segal, CERN’s TCP/IP Coordinator, looking for advice about the transition of the European side of the UUCP Usenet network (much of which ran over X.25 links) over to TCP/IP. In 1987, Ben Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks, and in 1989 CERN opened its first external TCP/IP connections. This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out co-ordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.

1988 The term “internet” was adopted in the first RFC published on the TCP protocol (RFC 675: Internet Transmission Control Program, December 1974) as an abbreviation of the term internetworking, and the two terms were used interchangeably. In general, an internet was any network using TCP/IP. It was around the time the ARPANET was interlinked with NSFNET in the late 1980s that the term came to be used as the name of the network: the Internet, a large, global TCP/IP network.

1988 NSFNET upgraded to T1 In 1981 NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange. Its experience with CSNET led NSF to use TCP/IP when it created NSFNET, a 56 kbit/s backbone established in 1986 that connected the NSF-supported supercomputing centers and regional research and education networks in the United States. However, use of NSFNET was not limited to supercomputer users and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990. NSFNET was expanded and upgraded to 45 Mbit/s in 1991, and was decommissioned in 1995 when it was replaced by backbones operated by several commercial Internet Service Providers.

NonLatin ICCTLD – The History of Domain Names

Countries with non-Latin based alphabets or scripting systems apply for internationalized country code top-level domain names, which are displayed in their language-native script or alphabet

Date: 01/01/2009

Since 2009, countries with non–Latin-based scripts may apply for internationalized country code top-level domain names, which are displayed in end-user applications in their language-native script or alphabet, but use a Punycode-translated ASCII domain name in the Domain Name System.
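
As a small illustration of that Punycode translation, the sketch below uses Python’s built-in “idna” codec. The codec implements the older IDNA 2003 rules rather than the IDNA 2008 rules registries apply today, and the labels are examples only, but the mechanism is the same: the native-script label a user sees maps to an ASCII “xn--” form carried in the DNS.

  # Encode a few native-script labels to their ASCII-compatible form.
  for label in ["рф", "مصر", "中国"]:
      ascii_form = label.encode("idna").decode("ascii")
      print(label, "->", ascii_form)   # e.g. "рф" -> "xn--p1ai"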

IDN ccTLDs are specially encoded domain names that are displayed in an end user application, such as a web browser, in their language-native script or alphabet, such as the Arabic alphabet, or a non-alphabetic writing system, such as Chinese characters. IDN ccTLDs are an application of the internationalized domain name system to top-level Internet domains assigned to countries, or independent geographic regions.

Although the domain class uses the term code, some of these ccTLDs are not codes but full words. For example, السعودية (as-Suʻūdiyya) is not an abbreviation of “Saudi Arabia”, but the common short-form name of the country in Arabic.

Countries with internationalized ccTLDs also retain their traditional ASCII-based ccTLDs.

As of December 2014 there are 45 approved internationalized country code top-level domains. The most used are .рф (Russia) with over 900,000 domain names, .台灣 (Taiwan) with around 500,000, and .中国 (China) with over 200,000 domains.

On May 5, 2010, the first implementations, all in the Arabic alphabet, were activated. Egypt was assigned the country code مصر., Saudi Arabia السعودية., and the United Arab Emirates امارات. (all reading right to left, as is customary in Arabic). ICANN CEO Rod Beckstrom described the launch as “historic” and “a seismic shift that will forever change the online landscape.” “This is the beginning of a transition that will make the Internet more accessible and user friendly to millions around the globe, regardless of where they live or what language they speak,” he added. ICANN’s senior director for internationalised domain names, Tina Dam, said it was “the most significant day” since the launch of the Internet. According to ICANN, Arabic was chosen for the initial rollout because it is one of the most widely used non-Latin languages on the Internet. Entering a mixed left-to-right and right-to-left text string on a keyboard is awkward, which makes fully Arabic web addresses especially useful.

As of June 2010, four such TLDs have been implemented: three using the Arabic alphabet, السعودية., مصر. and امارات. (for Egypt, Saudi Arabia and the United Arab Emirates, respectively), and one using Cyrillic, .рф (for Russia). Five new IDN ccTLDs using Chinese characters were approved in June 2010: .中国 with variant .中國 (for mainland China), .香港 (for Hong Kong), and .台灣 with variant .台湾 (for Taiwan).

Nominet – The History of Domain Names

Nominet Considers 1 Year .UK Registrations

July 5, 2011

Nominet is considering offering .uk domain names for periods other than two years, including shorter one year registrations.

According to the registry, around 90% of domain name registrations in major top level domain names are for only one year.

The group proposes several options, including one that would not offer one year registrations but would add five and ten year options to the mix.

In all of the options that include one year registrations, the annual wholesale cost of a single year is £3.50, or £1.00 above the standard annual price.

Nominet is accepting feedback on variable registration periods through August 5.

Nominet-UKdomains – The History of Domain Names

Nominet proposes second level .uk domain names

October 1, 2012

.UK domain name registry Nominet wants to allow businesses to register shorter .uk domain names.

The group has opened a three month consultation period for what it’s calling direct.uk — which is a catchy name for offering second level .uk domain names.

Right now only third level domains, such as name.co.uk, are available. Direct.uk would let businesses register name.uk.

The second level domains would likely include added security features such as daily malware scanning and Domain Name System Security Extensions (DNSSEC). Registrant verification may also be required.

Wholesale pricing would be £20 per year, which is nearly ten times the current third level pricing.

Registrants of existing third level .uk domain names may not automatically get rights in the same second level domain name, although there will be a sunrise period in which unregistered rights in a term may be considered.

NIC – The History of Domain Names

Network Information Center (NIC)

Date: 01/01/1970

The Network Information Center (NIC), also known as InterNIC from 1993 until 1998, was the organization primarily responsible for Domain Name System (DNS) domain name allocations and X.500 directory services. From its inception in 1972 until October 1, 1991, it was run by the Stanford Research Institute, now known as SRI International, and led by Jake Feinler. From October 1991 until September 18, 1998, it was run by Network Solutions. Thereafter, the responsibility was assumed by the Internet Corporation for Assigned Names and Numbers (ICANN).

It was accessed through the domain name internic.net, with email, FTP and World Wide Web services run at various times by SRI, Network Solutions, Inc., and AT&T. The InterNIC also coordinated the IP address space, including performing IP address management for North America prior to the formation of ARIN. InterNIC is a registered service mark of the U.S. Department of Commerce. The use of the term is licensed to the Internet Corporation for Assigned Names and Numbers (ICANN). The first central authority to coordinate the operation of the network was the Network Information Center (NIC). The NIC was based in Doug Engelbart’s lab, the Augmentation Research Center, at the Stanford Research Institute (now SRI International) in Menlo Park, California. In 1972, Elizabeth J. Feinler, better known as Jake, became principal investigator of the project. The Internet Assigned Numbers Authority (IANA) assigned the numbers, while the NIC published them to the rest of the network. Jon Postel fulfilled the role of manager of IANA, in addition to his role as the RFC Editor, until his death in 1998.

The NIC provided reference service to users (initially over the phone and by physical mail), maintained and published a directory of people (the “white pages”), a resource handbook (the “yellow pages”, a list of services) and the protocol handbook. After the Network Operations Center at Bolt, Beranek and Newman brought new hosts onto the network, the NIC registered names, provided access control for terminals, audit trail and billing information, and distributed Requests for Comments (RFCs). Feinler, working with Steve Crocker, Jon Postel, Joyce Reynolds and other members of the Network Working Group (NWG), developed RFCs into the official set of technical notes for the ARPANET and later the Internet. The NIC provided the first links to online documents using the NLS Journal system developed at SRI’s Augmentation Research Center. On the ARPANET, hosts were given names to be used in place of numeric addresses. Owners of new hosts sent email to HOSTSMASTER@SRI-NIC.ARPA to request an address. A file named HOSTS.TXT was distributed by the NIC and manually installed on each host on the network to provide a mapping between these names and their corresponding network address. As the network grew, this became increasingly cumbersome. A technical solution came in the form of the Domain Name System, designed by Paul Mockapetris.
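
The difference between the two schemes can be shown with a short Python sketch; the HOSTS.TXT lines and addresses below are a simplified, illustrative stand-in for the NIC’s actual file, and the DNS call simply queries whatever resolver the local machine is configured to use.

  import socket

  # Flat-file era: one table, distributed by the NIC and installed on every
  # host by hand, mapping names directly to numeric addresses.
  hosts_txt = """
  10.0.0.51   SRI-NIC
  10.1.0.6    MIT-AI
  """
  table = {}
  for line in hosts_txt.strip().splitlines():
      address, name = line.split()
      table[name] = address
  print(table["SRI-NIC"])                  # a plain dictionary lookup

  # DNS era: the name space is hierarchical and delegated, so a host asks
  # a resolver instead of carrying the whole table itself.
  print(socket.gethostbyname("example.com"))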

The Defense Data Network Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains mil, gov, edu, org, net, com and us. DDN-NIC also performed root nameserver administration and Internet number assignments under a United States Department of Defense contract starting in 1984.

Network Solutions

In 1990, the Internet Activities Board proposed changes to the centralized NIC/IANA arrangement. The Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC, which had been managed by SRI since 1972, to Government Systems, Inc (GSI), which subcontracted it to the small private-sector firm Network Solutions. On October 1, 1991, the NIC services were moved from a DECSYSTEM-20 machine at SRI to a Sun Microsystems SPARCserver running SunOS 4.1 at GSI in Chantilly, Virginia. By the 1990s, most of the growth of the Internet was in the non-defense sector, and even outside the United States. Therefore, the US Department of Defense would no longer fund registration services outside of the mil domain.

The National Science Foundation started a competitive bidding process in 1992; subsequently, in 1993, NSF created the Internet Network Information Center, known as InterNIC, to extend and coordinate directory, database and information services for the NSFNET, and to provide registration services for non-military Internet participants. NSF awarded the contract to manage InterNIC to three organizations: Network Solutions provided registration services, AT&T provided directory and database services, and General Atomics provided information services. General Atomics was disqualified from the contract in December 1994 after a review found that its services did not conform to the standards of its contract. General Atomics’ InterNIC functions were assumed by AT&T.

Beginning in 1996, Network Solutions rejected domain names containing English language words on a “restricted list” through an automated filter. Applicants whose domain names were rejected received an email containing the notice: “Network Solutions has a right founded in the First Amendment to the U.S. Constitution to refuse to register, and thereby publish, on the Internet registry of domain names words that it deems to be inappropriate.” Domain names such as “shitakemushrooms.com” would be rejected, but the domain name “shit.com” was active since it had been registered before 1996. Network Solutions eventually allowed domain names containing the words on a case-by-case basis, after manually reviewing the names for obscene intent. This profanity filter was never enforced by the government and its use was not continued by ICANN when it took over governance of the distribution of domain names to the public.
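
The over-blocking described above, where a legitimate name is rejected because it happens to contain a restricted word as a substring, is easy to reproduce in a few lines of Python; the word list and helper below are purely illustrative, not Network Solutions’ actual list or code.

  # A naive substring filter of the kind described above.
  RESTRICTED_WORDS = {"shit"}   # illustrative; the real list was longer

  def would_reject(domain: str) -> bool:
      label = domain.lower().removesuffix(".com")
      return any(word in label for word in RESTRICTED_WORDS)

  print(would_reject("shitakemushrooms.com"))  # True: an innocent name is blocked
  print(would_reject("mushrooms.com"))         # False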

Transfer to ARIN and ICANN

The InterNIC project included Internet IP number assignment, ASN assignment, and reverse DNS zone (in-addr.arpa) management tasks until December 1997 when the American Registry for Internet Numbers (ARIN) came into operation. At that time, responsibility for these tasks was transferred by the National Science Foundation from the InterNIC project to ARIN via modification of the cooperative agreement with Network Solutions.

The InterNIC Directory and Database services provided by AT&T were discontinued on March 31, 1998 after their cooperative agreement with NSF expired.

In 1998, both IANA and the InterNIC project were reorganized under the control of the Internet Corporation for Assigned Names and Numbers (ICANN), a California non-profit corporation contracted by the US Department of Commerce to manage a number of Internet-related tasks. The role of operating the DNS system was privatized and opened up to competition, while the central management of name allocations would be awarded on a contract-tender basis. In July 2010, the IAB and the Number Resource Organization agreed that ICANN should perform the in-addr.arpa zone technical management tasks, and this transition to ICANN was completed in February 2011.