
TLD – The History of Domain Names

Country Code Top Level Domains

Date: 01/02/1981

Country code top-level domains (ccTLDs) are two-letter Internet top-level domains (TLDs) designated for a particular country, sovereign state, or autonomous territory, for use by its community. ccTLDs are derived from ISO 3166-1 alpha-2 country codes.

A country code top-level domain (ccTLD) is an Internet top-level domain generally used or reserved for a country, sovereign state, or dependent territory identified with a country code. All ASCII ccTLD identifiers are two letters long, and all two-letter top-level domains are ccTLDs. In 2010, the Internet Assigned Numbers Authority (IANA) began implementing internationalized country code top-level domains, consisting of language-native characters when displayed in an end-user application. Creation and delegation of ccTLDs is described in RFC 1591, corresponding to ISO 3166-1 alpha-2 country codes.
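Since every ASCII ccTLD is exactly two letters, and every two-letter TLD is reserved for ccTLD use, a label can be classified by its form alone. A minimal Python sketch (the function name is my own, not from any standard library):

```python
def is_cctld(label: str) -> bool:
    """Return True if a top-level domain label has ccTLD form.

    Per the delegation rules described in RFC 1591, all ASCII ccTLDs
    are two-letter ISO 3166-1 alpha-2 codes, and every two-letter TLD
    is reserved for ccTLD use.
    """
    label = label.lower().lstrip(".")
    return len(label) == 2 and label.isascii() and label.isalpha()

print(is_cctld(".de"))   # True  (Germany)
print(is_cctld(".com"))  # False (generic TLD)
```

Note that this only tests the form of the label; whether a given two-letter code is actually delegated depends on IANA's root database.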

When the Domain Name System was created in the 1980s, the domain name space was divided into two main groups of domains. The country code top-level domains (ccTLDs) were primarily based on the two-character territory codes of the ISO 3166 country abbreviations. In addition, a group of seven generic top-level domains (gTLDs) was implemented, representing a set of name categories and multi-organization groupings. These were the domains GOV, EDU, COM, MIL, ORG, NET, and INT.

Implementation

The implementation of ccTLDs was started by IANA. The delegation and creation of ccTLDs is described in RFC 1591. To determine whether new ccTLDs should be added, IANA follows the decisions of the ISO 3166 Maintenance Agency.

IANA’s Procedures for ccTLDs

IANA’s database contains authoritative information about each ccTLD: the sponsoring organization, technical and administrative contacts, name servers, registration URLs, and similar details. This information is maintained according to IANA’s documented procedures for the ccTLD database.

Delegation and Redelegation

The process through which the designated manager, or managers, is changed is known as redelegation. The process follows the provisions of ICP-1 and RFC 1591. IANA receives all delegation and redelegation requests from sponsoring organizations for ccTLDs. The requests are analyzed by IANA against various technical and public-interest criteria, and then sent to the ICANN Board of Directors for approval or refusal. If a request is approved, IANA is also responsible for implementing it.

Conceptually, the delegation and redelegation processes are simple, but they can easily become complex when many organizations and individuals are involved. A set of steps must be followed before submitting a delegation or redelegation request. An initial request should be prepared using the Change Request Template, together with supplementary information demonstrating that the eligibility criteria have been met. All of the supplied information is used by IANA to substantiate the request.

ccTLDs and ICANN

The policies developed by ICANN are implemented by gTLD registry operators, ccTLD managers, root-nameserver operators, and regional Internet registries. One of ICANN’s main activities is to work with the other organizations involved in the technical coordination of the Internet to formally document their participatory role within the ICANN process. These organizations are committed to the ICANN policies that result from their work. Starting in 2000, ICANN began cooperating with ccTLD managers to document their relationship. Due to various circumstances, such as the type of organization, cultural issues, economics, and the legal environment, the relationships between ICANN and ccTLD managers are often complex. Another consideration is the role of the national government in “managing or establishing policy for their own ccTLD” (a role recognized in the June 1998 U.S. Government White Paper). In 2009, ICANN began implementing an IDN ccTLD Fast Track Process, whereby countries that use non-Latin scripts can claim ccTLDs in their native script alongside the corresponding Latin version. As of early 2011, 33 requests had been received, representing 22 languages, and more than half had already been approved.

TLD Trademarks – The History of Domain Names

More TLD Trademarks: .Law, .Kom, .Tom, .Construction, .Hub

July 5, 2011

The “ticking time bomb” in new TLDs continues, with no action from ICANN.

New top level domain name trademark “frontrunning” continues as 9 more trademarks have been filed related to new TLDs.

USM CHINA/HONG KONG LTD filed applications for .Hub, .Tom, Kom, and .Kom. The Kom trademark applications are troubling given VeriSign’s plans to apply for transliterations of .com. And .Tom? Sounds a lot like .com.

Thomas A. Brackey of Beverly Hills filed three separate trademark applications for .law. A business called Dot Construction LLC, which shares Brackey’s address, filed a trademark application for .construction.

I think these TLD trademark applications, some of which have slipped through examiners’ hands, represent a huge problem for new top-level domain names. Trademark holders will use them to their advantage.

Tim Schumacher – The History of Domain Names

Tim Schumacher to Leave SEDO

December 5, 2011

Sedo co-founder Tim Schumacher to resign as CEO and transition to board of directors, Tobias Flaitz to become new CEO

Sedo announced today that co-founder and CEO Tim Schumacher will be stepping down from the company at the end of January. He will transition to the company’s Board of Directors.

The board of Sedo Holding AG has named Tobias Flaitz as the new CEO of Sedo. On February 1, 2012, he will succeed Sedo’s current CEO and co-founder, Tim Schumacher, who is transitioning to the Board of Directors.

Tim Schumacher co-founded Sedo in 2001, together with Marius Würzner, Ulrich Essmann and Ulrich Priesner. He led Sedo to become the world’s preeminent domain trading and parking platform. In 2007, he was named “Entrepreneur of the Year” by Ernst & Young Germany. In 2009, he became CEO of Sedo Holding AG, which is comprised of Sedo’s domain business and ‘affilinet’, one of Europe’s leading affiliate marketing platforms.

Incoming CEO Tobias Flaitz has more than 13 years of professional experience, including eight years in strategic business consulting and five at Burda, one of Germany’s leading media companies. He holds a Master of Science in Chemical Engineering (Stuttgart, Germany, and Seville, Spain) and an MBA from the University of St. Gallen, Switzerland, and Berkeley, USA.

ThirdLevelDomains – The History of Domain Names

Third Level Domains

Date: 01/01/2002

A third-level domain is the next level after the second-level domain in the domain name hierarchy. It is the segment found directly to the left of the second-level domain. The third-level domain is often called a “subdomain”, and it adds a third section to the domain name in a URL.

In large organizations, every department or division may include a unique third-level domain that can act as a simple, yet effective, way of identifying that particular department.

Third-level domain names are also used to balance the load on sites with heavy traffic. At times, names like www1 or www2 are used for this purpose.

For example, in www.mydomain.com, “www” is the third-level domain. The default, and most commonly used, third-level domain is “www”. A third-level domain is generally used to designate a particular server or service within an organization.

Domain names consist of a minimum of two levels: a top-level domain (TLD) and a second-level domain. The TLD is the extension or suffix attached to the domain name. Only a limited selection of predefined TLDs is available, including .com, .org, .net, and .biz.

A second-level domain refers to the part of a Uniform Resource Locator (URL) that specifies the precise administrative owner linked to an Internet Protocol (IP) address. The second-level domain name incorporates the TLD name as well. For example, in www.mydomain.com, “.com” is the TLD and “mydomain.com” is the second level domain.

Third-level domain names are not mandatory unless the user has a specific requirement. It is actually possible to own a fully functional domain name like “mydomain.com”. All that is needed is two levels: the top level domain and the second-level domain name. However, the usage of third-level domain names can really add clarity to domain names, which makes them more intuitive.

Customized third-level domains are also used for specific purposes. For instance, if mydomain.com runs a file transfer protocol (FTP) server to let users download files, its third-level domain can be named ftp, and the full domain name would then be ftp.mydomain.com. Similarly, domain names such as support.mydomain.com and members.mydomain.com can be used to distinguish the support and members areas of mydomain.com, respectively. This helps direct Web traffic accordingly.
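The naming scheme above can be made concrete with a short sketch. Assuming simple one-label TLDs, a hostname splits into numbered levels from right to left (the hostname here is the article’s illustrative example):

```python
def domain_levels(hostname: str) -> dict:
    """Map each dot-separated label to its level, counting from the TLD."""
    labels = hostname.lower().rstrip(".").split(".")
    # Level 1 is the TLD, level 2 the second-level domain, and so on.
    return {f"level {i}": label
            for i, label in enumerate(reversed(labels), start=1)}

print(domain_levels("ftp.mydomain.com"))
# {'level 1': 'com', 'level 2': 'mydomain', 'level 3': 'ftp'}
```

Note that this naive split treats every label as its own level; multi-label public suffixes such as co.uk require the Public Suffix List to handle correctly.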

Third-level domains are written immediately to the left of a second-level domain. There can be fourth- and fifth-level domains, and so on, with virtually no limit. An example of an operational domain name with four levels of labels is sos.state.oh.us. Each label is separated by a full stop (dot): ‘sos’ is a sub-domain of ‘state.oh.us’, ‘state’ is a sub-domain of ‘oh.us’, and so on. In general, subdomains are domains subordinate to their parent domain. An example of very deep subdomain nesting is the IPv6 reverse-resolution DNS zones, e.g., 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa, which is the reverse DNS resolution domain name for the IP address of a loopback interface, or the localhost name.
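That deeply nested ip6.arpa name does not have to be typed by hand; Python’s standard ipaddress module can generate it directly from the loopback address:

```python
import ipaddress

# Build the reverse-DNS (ip6.arpa) name for the IPv6 loopback address.
# Each of the address's 32 hex nibbles becomes one subdomain label.
addr = ipaddress.ip_address("::1")
print(addr.reverse_pointer)
# 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa
```

The 32 labels make this a 34-level domain name, which illustrates how far the hierarchy can nest in practice.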

TheInternet-2000s – The History of Domain Names

The Internet in the 2000s

Date: 01/01/2000

It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication, by 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.

WHAT IS THE INTERNET?

The Internet is a worldwide system of interconnected computer networks that use the TCP/IP set of network protocols to reach billions of users. The Internet began as a U.S. Department of Defense network to link scientists and university professors around the world. A network of networks, the Internet today serves as a global data communications system that links millions of private, public, academic, and business networks via an international telecommunications backbone that consists of various electronic and optical networking technologies. The terms “Internet” and “World Wide Web” are often used interchangeably; however, the Internet and World Wide Web are not one and the same.

THE INFLUENCE AND IMPACT OF THE INTERNET

The influence of the Internet on society is almost impossible to summarize properly because it is so all-encompassing. Though much of the world, unfortunately, still does not have Internet access, the influence that it has had on the lives of people living in developed countries with readily available Internet access is great and affects just about every aspect of life. To look at it in the most general of terms, the Internet has definitely made many aspects of modern life much more convenient. From paying bills and buying clothes to researching and learning new things, from keeping in contact with people to meeting new people, all of these things have become much more convenient thanks to the Internet. Communication has also been made easier with the Internet opening up easier ways to not only keep in touch with the people you know, but to meet new people and network as well. The Internet and programs like Skype have made the international phone industry almost obsolete by providing everyone with Internet access the ability to talk to people all around the world for free instead of paying to talk via landlines. Social networking sites such as Facebook, Twitter, YouTube and LinkedIn have also contributed to a social revolution that allows people to share their lives and everyday actions and thoughts with millions.

The Internet has also turned into big business and has created a completely new marketplace that did not exist before it. There are many people today that make a living off the Internet, and some of the biggest corporations in the world like Google, Yahoo and EBay have the Internet to thank for their success. Business practices have also changed drastically thanks to the Internet. Off-shoring and outsourcing have become industry standards thanks to the Internet allowing people to work together from different parts of the world remotely without having to be in the same office or even city to cooperate effectively. All this only scratches the surface when talking about the Internet’s impact on the world today, and to say that it has greatly influenced changes in modern society would still be an understatement.

THE FUTURE: INTERNET2 AND NEXT GENERATION NETWORKS

The public Internet was not initially designed to handle massive quantities of data flowing through millions of networks. In response, experimental national research networks (NRNs), such as Internet2 and NGI (Next Generation Internet), are developing high-speed, next-generation networks. In the United States, Internet2 is the foremost not-for-profit advanced networking consortium, led by over 200 universities in cooperation with 70 leading corporations, 50 international partners, and 45 nonprofit and government agencies. The Internet2 community is actively engaged in developing and testing new network technologies that are critical to the future progress of the Internet. Internet2 operates the Internet2 Network, a next-generation hybrid optical and packet network with a 100 Gbps backbone, providing the U.S. research and education community with a nationwide, dynamic, robust, and cost-effective network that satisfies its bandwidth-intensive requirements. Although this private network does not replace the Internet, it provides an environment in which cutting-edge technologies can be developed that may eventually migrate to the public Internet. Internet2 research groups are developing and implementing new technologies such as IPv6, multicasting, and quality of service (QoS) that will enable revolutionary Internet applications.

New quality of service (QoS) technologies, for instance, would allow the Internet to provide different levels of service, depending on the type of data being transmitted. Different types of data packets could receive different levels of priority as they travel over a network. For example, packets for an application such as videoconferencing, which require simultaneous delivery, would be assigned higher priority than e-mail messages. However, advocates of net neutrality argue that data discrimination could lead to a tiered service model being imposed on the Internet by telecom companies that would undermine Internet freedoms. More than just a faster web, these new technologies will enable completely new advanced applications for distributed computation, digital libraries, virtual laboratories, distance learning and tele-immersion. As next generation Internet development continues to push the boundaries of what’s possible, the existing Internet is also being enhanced to provide higher transmission speeds, increased security and different levels of service.
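The prioritization idea in this paragraph can be sketched as a toy scheduler; the traffic classes and priority numbers below are illustrative only, not any real QoS standard:

```python
import heapq

# Toy QoS model: lower number = higher priority. The sequence number
# keeps FIFO order within a class and avoids comparing payloads.
PRIORITY = {"videoconference": 0, "web": 1, "email": 2}

def schedule(packets):
    """Return (kind, payload) pairs in QoS order rather than arrival order."""
    queue = []
    for seq, (kind, payload) in enumerate(packets):
        heapq.heappush(queue, (PRIORITY[kind], seq, kind, payload))
    order = []
    while queue:
        _, _, kind, payload = heapq.heappop(queue)
        order.append((kind, payload))
    return order

print(schedule([("email", "msg-1"),
                ("videoconference", "frame-1"),
                ("web", "page-1")]))
# [('videoconference', 'frame-1'), ('web', 'page-1'), ('email', 'msg-1')]
```

Even though the email packet arrives first, the delay-sensitive videoconference frame is transmitted ahead of it, which is exactly the behavior net neutrality advocates worry could be abused.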

The incredible growth of the Internet since 2000

Worldwide Internet users, 2000 and 2010

First off, the one thing you probably wanted to know right away. Here is how much the Internet has grown since the year 2000.

There were only 361 million Internet users in 2000, in the entire world. For perspective, that’s barely two-thirds of the size of Facebook today.

The chart really says it all. There are more than five times as many Internet users now as there were in 2000. And as has been noted elsewhere, the number of Internet users in the world is now close to passing two billion and may do so before the end of this year.

The Internet hasn’t just become larger, it’s also become more spread out, more global.

In 2000, the top 10 countries accounted for 73% of all Internet users.

By 2010, that number had decreased to 60%.

This becomes evident when viewing the distribution of Internet users for the top 50 countries in 2000 and in 2010. Note how much “thicker” the tail of the 2010 graph is.

Thanks to this growth, there are now many more countries with a significant presence on the Internet. Here’s another way to see how much things have changed:

Internet users by world region, 2000 and 2010

Now that we’ve established that the number of Internet users is more than five times as large as it was in 2000, how has that growth been distributed through the different regions of the world?

Back in 2000, Asia, North America and Europe were almost on an even footing in terms of Internet users. Now in 2010, the picture is a very different one. Asia has pulled away as the single largest region, followed by Europe, then by North America, and a significant distance exists between the three.

It’s also highly notable how the number of Internet users in Africa has increased. In 2000, the entire continent of Africa had just 4.5 million Internet users. By 2010, that had grown to more than 100 million.

History of the Internet – 2000 and Beyond

2000: After a bitter antitrust lawsuit, Microsoft is ordered to break into two separate businesses, one focused on its highly successful Windows operating system and the other on its numerous other software applications. 2000 would go down in Internet history as the year the dot-com bubble rose and burst. That year, Internet consumer companies were widespread and visible in everyday life, with dot-com companies paying millions of dollars for half-minute advertisements aired during the Super Bowl. When investors heard reports that Microsoft would be unable to settle its antitrust lawsuit with the government, the Dow Jones Industrial Average suffered the biggest one-day drop in its history up to that point. Microsoft would settle its lawsuit in 2001, allowing it to remain a single company, but in 2000 the Internet revolution marched on, with Google’s rise to become the world’s largest search engine and the first official instance of Internet voting taking place in the United States during a Democratic Party primary in Arizona.

2002: Napster is shut down after it fails to win a lawsuit brought by the Recording Industry Association of America.

2003: The rise of spam; unsolicited junk mail messages begin to account for over half of all e-mail messages sent and received. Though the US Congress passes anti-spam legislation, the scourge of unsolicited email remains.

What was it like to be on the Internet in the 2000s?

Since we are still in the 2000s, and the first decade of them only ended five years ago, I will assume you mean the earlier part of the decade, from 2000 to 2005.

The first bubble had already popped, so the first round of b/s start-ups just ripping each other off had been cleansed. Lots of services were just suddenly gone. This informs my outlook now: I know for a fact our current start-up environment is unstable, so I don’t get too invested in most new services.

Firefox was your web browser unless you were an idiot, were told by an idiot not to use it, or were old. Prior to this you were likely using Netscape or Opera, though the version of IE that came with Win2000 was OK too.

Though broadband was already fairly widespread, dial-up was more common just due to cost. AOL was still the biggest ISP, but was on a slight downward trajectory, mostly due to customer poaching by low-cost providers like NetZero.

Windows XP was king, but after dropping OS 9 in 1999, Mac was beginning to show signs of life with products like OS X and the iBook. Wi-Fi was becoming far more common, and seeing people out and about on their laptops was no longer limited to the business district.

Myspace launched in 2003 and, as time went on, poached people from earlier services like LiveJournal and Xanga, though at least to begin with, services like Tribe.net made it hard for Myspace to become the undisputed king of social.

Google was king, but just barely, with about 54% market share (today it’s around 70%). There were still several alternatives in Yahoo, MSN (separate at the time), AOL, Terra/Lycos, AltaVista, and AskJeeves. I don’t think I started using Google heavily until after 2005.

AIM, everyone used AIM. Most people had an AOL account solely to use AIM. This is unless you were one of those one or two people everyone knew who preferred Yahoo or MSN chat, and everyone hated you for it. Also, in a chat-related thing, EVERYONE wanted a Sidekick; it was the lone reason T-Mobile made it out of the first decade of the 2000s.

No YouTube, videos were pretty uncommon. The web was still text-heavy. This is why, when Myspace allowed users to post music to their personal pages, it was mind-blowing.

GeoCities and similar services such as Angelfire were very popular, but not as universal as social networking sites would become. Most people knew at least a couple of people with pages on networks like these, who were constantly pestering them via email to visit.

File sharing on the likes of Napster and Gnutella was a big freaking deal. These were the early days too when the files you got were of great quality and not filled with internet herpes.

With file sharing, music was plentiful and theoretically portable. But the idea that everyone had an iPod is revisionist history. The iPod was initially Mac-only, and almost no one had a Mac. Once released for PC, it took a while for them to end up in most people’s hands. Folks spent a lot of time downloading music and then burning it to CDs. In my college classes there were a couple of Creative Nomad players, and I had an Archos Jukebox, but we were the outliers. I got my first iPod in 2005; it was a Gen 3, the first one available for PC. I got it on sale when the Gen 4 was announced in 2004.

You were tethered, not to your phone, which you likely didn’t have, but to your desk. You would come home from school and race to the computer to catch up on stuff; instant, all-the-time access was still a few years off for most people.

Online games started becoming popular. Games Domain became Yahoo! Games, and companies like Blizzard, which had been around for years, became far more well known during this time with online offerings such as Battle.net.

Wikipedia launched in 2001, forever changing how people found many forms of source material. I immediately forgot all about Encarta.

The2010s – The History of Domain Names

The 2010s

Date: 01/01/2010

In 2010, ICANN approved a major review of its policies with respect to accountability, transparency, and public participation, conducted by the Berkman Center for Internet and Society at Harvard University. This external review was in support of the work of ICANN’s Accountability and Transparency Review Team.

2010: A country code top-level domain (ccTLD) is an Internet top-level domain generally used or reserved for a country, a sovereign state, or a dependent territory. All ASCII ccTLD identifiers are two letters long, and all two-letter top-level domains are ccTLDs. In 2010, the Internet Assigned Numbers Authority (IANA) began implementing internationalized country code TLDs, consisting of language-native characters when displayed in an end-user application. Creation and delegation of ccTLDs is described in RFC 1591, corresponding to ISO 3166-1 alpha-2 country codes. IANA is responsible for determining an appropriate trustee for each ccTLD. Administration and control is then delegated to that trustee, which is responsible for the policies and operation of the domain. The current delegation can be determined from IANA’s list of ccTLDs. Individual ccTLDs may have varying requirements and fees for registering subdomains. There may be a local presence requirement (for instance, citizenship or another connection to the ccTLD), as for example with the Canadian (ca) and German (de) domains, or registration may be open. Almost all current ISO 3166-1 codes have been assigned and do exist in DNS. However, some of these are effectively unused. In particular, the ccTLDs for the Norwegian dependency Bouvet Island (bv) and the designation Svalbard and Jan Mayen (sj) do exist in DNS, but no subdomains have been assigned, and it is Norid policy not to assign any at present. Two French territories, bl (Saint Barthélemy) and mf (Saint Martin), still await local assignment by France’s government.

The code eh, although eligible as the ccTLD for Western Sahara, has never been assigned and does not exist in DNS. Only one subdomain is still registered in gb (ISO 3166-1 for the United Kingdom), and no new registrations are being accepted for it. Sites in the United Kingdom generally use uk. The former .um ccTLD for the U.S. Minor Outlying Islands was removed in April 2008. Under RFC 1591 rules, .um is eligible as a ccTLD on request by the relevant governmental agency and local Internet user community.

Texas Instruments – The History of Domain Names

TI.com was registered

Date: 03/25/1986

On March 25, 1986, Texas Instruments registered the ti.com domain name, making it the 14th .com domain ever to be registered.

Texas Instruments Inc. (TI) is an American technology company that designs and manufactures semiconductors, which it sells to electronics designers and manufacturers globally. Headquartered in Dallas, Texas, United States, TI is one of the top ten semiconductor companies worldwide, based on sales volume. Texas Instruments’ focus is on developing analog chips and embedded processors, which account for more than 85 percent of its revenue. TI also produces digital light processing (DLP) technology and education technology products including calculators, microcontrollers, and multi-core processors. To date, TI holds more than 43,000 patents worldwide. Texas Instruments emerged in 1951 after a reorganization of Geophysical Service Incorporated, a company founded in 1930 that manufactured equipment for use in the seismic industry as well as defense electronics. TI produced the world’s first commercial silicon transistor in 1954 and designed and manufactured the first transistor radio that same year. Jack Kilby invented the integrated circuit in 1958 while working at TI’s Central Research Labs. TI also invented the hand-held calculator in 1967 and introduced the first single-chip microcontroller (MCU) in 1970, which combined all the elements of computing onto one piece of silicon. In 1987, TI invented the digital light processing device (also known as the DLP chip), which serves as the foundation for the company’s award-winning DLP technology and DLP Cinema. In 1990, TI came out with the popular TI-81 calculator, which made it a leader in the graphing calculator industry. In 1997, its defense business was sold to Raytheon, which allowed TI to strengthen its focus on digital solutions. After the acquisition of National Semiconductor in 2011, the company has a combined portfolio of nearly 45,000 analog products and customer design tools, making it the world’s largest maker of analog technology components.

History

Texas Instruments was founded by Cecil H. Green, J. Erik Jonsson, Eugene McDermott, and Patrick E. Haggerty in 1951. McDermott was one of the original founders of Geophysical Service Inc. (GSI) in 1930. McDermott, Green, and Jonsson were GSI employees who purchased the company in 1941. In November 1945, Patrick Haggerty was hired as general manager of the Laboratory and Manufacturing (L&M) division, which focused on electronic equipment. By 1951, the L&M division, with its defense contracts, was growing faster than GSI’s Geophysical division. The company was reorganized and initially renamed General Instruments Inc.; because there already existed a firm named General Instrument, the company was renamed Texas Instruments that same year. From 1956 to 1961, Fred Agnich of Dallas, later a Republican member of the Texas House of Representatives, was the Texas Instruments president. Geophysical Service, Inc. became a subsidiary of Texas Instruments. Early in 1988, most of GSI was sold to the Halliburton Company.

Geophysical Service Incorporated

In 1930, J. Clarence Karcher and Eugene McDermott founded Geophysical Service, an early provider of seismic exploration services to the petroleum industry. In 1939, the company reorganized as Coronado Corp., an oil company with Geophysical Service Inc (GSI), now as a subsidiary. On December 6, 1941, McDermott along with three other GSI employees, J. Erik Jonsson, Cecil H. Green, and H.B. Peacock purchased GSI. During World War II, GSI expanded their services to include electronics for the U.S. Army, Signal Corps, and the U.S. Navy. In 1951, the company changed its name to Texas Instruments, GSI becoming a wholly owned subsidiary of the new company. An early success story for TI-GSI came in 1965 when GSI was able (under a Top Secret government contract) to monitor the Soviet Union’s underground nuclear weapons testing under the ocean in Vela Uniform, a subset of Project Vela, to verify compliance of the Partial Nuclear Test Ban Treaty. Texas Instruments also continued to manufacture equipment for use in the seismic industry, and GSI continued to provide seismic services. After selling (and repurchasing) GSI, TI finally sold the company to Halliburton in 1988, at which point GSI ceased to exist as a separate entity.

Defense electronics

Texas Instruments entered the defense electronics market in 1942 with submarine detection equipment, based on the seismic exploration technology previously developed for the oil industry. The division responsible for these products was known at different points in time as the Laboratory & Manufacturing Division, the Apparatus Division, the Equipment Group, and the Defense Systems & Electronics Group (DSEG). During the early 1980s, Texas Instruments instituted a quality program that included Juran training, as well as promoting statistical process control, Taguchi methods, and Design for Six Sigma. In the late ’80s, the company, along with Eastman Kodak and Allied Signal, began working with Motorola to institutionalize Motorola’s Six Sigma methodology. Motorola, which originally developed the Six Sigma methodology, began this work in 1982. In 1992, the quality improvement efforts of Texas Instruments’ DSEG division were rewarded with the Malcolm Baldrige National Quality Award for manufacturing.

The following are some of the major programs of the former TI defense group.

Infrared and radar systems

TI developed the AAA-4 infrared search and track (IRST) system in the late ’50s and early ’60s for the F-4B Phantom, for passive scanning of jet engine emissions; it possessed limited capabilities and was eliminated on F-4Ds and later models. In 1956, TI began research on infrared technology that led to several line scanner contracts and, with the addition of a second scan mirror, the invention of the first forward-looking infrared (FLIR) in 1963, with production beginning in 1966. In 1972, TI invented the Common Module FLIR concept, greatly reducing cost and allowing reuse of common components. TI went on to produce side-looking radar systems, the first terrain-following radar, and surveillance radar systems for both the military and the FAA. TI demonstrated the first solid-state radar, called Molecular Electronics for Radar Applications (MERA). In 1976, TI developed a microwave landing system prototype. In 1984, TI developed the first inverse synthetic aperture radar (ISAR). The first single-chip gallium arsenide radar module was also developed. In 1991, the Military Microwave Integrated Circuit (MIMIC) program, a joint effort with Raytheon, was initiated.

Missiles and laser-guided bombs

In 1961, TI won the guidance and control system contract for the defense suppression AGM-45 Shrike anti-radiation missile. This later led to the prime contract for the High-speed Anti-Radiation Missile (AGM-88 HARM), with development beginning in 1974 and production in 1981. In 1964, TI began development of the first laser guidance system for precision-guided munitions (PGM), leading to the Paveway series of laser-guided bombs (LGBs). The first LGB was the BOLT-117. In 1969, TI won the Harpoon missile seeker contract. In 1986, TI won the Army’s FGM-148 Javelin fire-and-forget man-portable anti-tank guided missile contract in a joint venture with Martin Marietta. In 1991, TI was awarded the contract for the AGM-154 Joint Standoff Weapon (JSOW).

Military computers

Because of TI’s research and development of military-temperature-range silicon transistors and integrated circuits (ICs), TI won contracts for the first IC-based computer for the U.S. Air Force in 1961 and for ICs for the Minuteman missile the following year. In 1968, TI developed the data systems for the Mariner program. In 1991, TI won the F-22 radar and computer development contract.

Divestiture to Raytheon

As the defense industry consolidated, TI sold its defense business to Raytheon in 1997 for $2.95 billion. The Department of Justice required that Raytheon divest the TI Monolithic Microwave Integrated Circuit (MMIC) operations after closing the transaction.[19] The TI MMIC business, which accounted for less than $40 million in 1996 revenues, or roughly two percent of the $1.8 billion in total TI defense revenues, was sold to TriQuint Semiconductor, Inc. Raytheon retained its own existing MMIC capabilities and has the right to license TI’s MMIC technology from TriQuint for use in future product applications. Shortly after acquiring TI DSEG, Raytheon also acquired Hughes Aircraft from General Motors. Raytheon then owned TI’s mercury cadmium telluride detector business and infrared (IR) systems group, as well as Hughes’s infrared detector and IR systems businesses in California. When the US government again forced Raytheon to divest itself of a duplicate capability, the company kept the TI IR systems business and the Hughes detector business. As a result of these acquisitions, the former arch rivals, TI’s systems group and Hughes’s detector group, now work together. Immediately after the acquisition, DSEG was known as Raytheon TI Systems (RTIS). It is now fully integrated into Raytheon, and this designation no longer exists.

Semiconductors

In early 1952, Texas Instruments purchased a patent license to produce germanium transistors from Western Electric Co., the manufacturing arm of AT&T, for $25,000, beginning production by the end of the year. On January 1, 1953, Haggerty brought Gordon Teal to the company as a research director. Teal brought with him his expertise in growing semiconductor crystals. Teal’s first assignment was to organize what became TI’s Central Research Laboratories (CRL), which he based on his prior experience at Bell Labs. Among his new hires was Willis Adcock, who joined TI early in 1953. Adcock, who like Teal was a physical chemist, began leading a small research group focused on the task of fabricating “grown-junction silicon single-crystal small-signal transistors.” Adcock later became the first TI Principal Fellow.

First silicon transistor and integrated circuits

On January 26, 1954, M. Tanenbaum et al. at Bell Labs created the first workable silicon transistor. This work was reported in the spring of 1954 at the IRE off-the-record conference on Solid State Devices and later published in the Journal of Applied Physics, 26, 686–691 (1955). Working independently, Gordon Teal at TI created the first commercial silicon transistor and tested it on April 14, 1954. Teal announced the device on May 10, 1954, at the Institute of Radio Engineers (IRE) National Conference on Airborne Electronics in Dayton, Ohio, where he also presented a paper, “Some Recent Developments in Silicon and Germanium Materials and Devices.”

In 1954, Texas Instruments designed and manufactured the first transistor radio. The Regency TR-1 used germanium transistors, as silicon transistors were much more expensive at the time. This was an effort by Haggerty to increase market demand for transistors. Jack Kilby, an employee at TI’s Central Research Labs, invented the integrated circuit in 1958. Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and successfully demonstrated the world’s first working integrated circuit on September 12, 1958. Six months later, Robert Noyce of Fairchild Semiconductor (who went on to co-found Intel) independently developed the integrated circuit with integrated interconnect, and is also considered an inventor of the integrated circuit. In 1969, Kilby was awarded the National Medal of Science, and in 1982 he was inducted into the National Inventors Hall of Fame. Kilby also won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit. Noyce’s chip, made at Fairchild, was made of silicon, while Kilby’s chip was made of germanium. In 2008, TI named its new development laboratory “Kilby Labs” after Jack Kilby. In 2011, Intel, Samsung, LG, ST-Ericsson, Huawei’s HiSilicon Technologies subsidiary, Via Telecom, and three other undisclosed chipmakers licensed the C2C link specification developed by Arteris Inc. and Texas Instruments.

Teltone – The History of Domain Names

Teltone Corporation – teltone.com was registered

Date: 11/17/1986

On November 17, 1986, Teltone Corporation registered the teltone.com domain name, making it the 42nd .com domain ever registered.

Teltone was established in 1968 in Washington. The company produced telecommunications equipment, including products that transmitted computer data over telephony infrastructure. Over time, the company increasingly developed and supported industrial monitoring applications and electronic security products.

Company Overview

Teltone Corporation is an American company that designs, manufactures, and sells specialty electronic telecommunications equipment, software, and components to various business end users and original equipment manufacturers internationally. The company was incorporated in 1968 and is headquartered in Bothell, Washington. It is publicly traded on the Pink Sheets under the trading symbol TLTN. The company’s products include OfficeLink and Gauntlet. OfficeLink allows customers to extend their enterprise communications infrastructure by letting employees and others access network resources from remote locations. These products enable customers to maintain seamless connectivity between their customers, offices, and mobile workforce. The Gauntlet product provides defense-in-depth protection for substation control system equipment against malicious cyber attacks; it detects and prevents unauthorized entry. The company also provides telecommunications test tools that are used by application developers and manufacturers of telecommunications products. In addition, it provides line sharing products built to satisfy market and customer requirements. Line sharing products allow devices to share a single telephone line for remote data collection and other applications. The company primarily sells its products to customers located in North America, Asia, and western Europe. Its products are sold through various sales channels, including value-added resellers, original equipment manufacturers, distributors, and direct sales to customers, as well as on the company’s website.

Business Summary

Teltone Corporation is a global provider of telecommunications solutions. It designs and markets three product families: line sharing solutions for remote data collection; telecommunications test tools for the design, manufacture, and use of telecom equipment; and remote voice solutions for call centers, teleworkers, mobile professionals, and disaster preparedness. The Company’s remote voice, telecom test, and line sharing products enhance business communications for customers across the globe. Its OfficeLink solutions give remote call center agents and other teleworkers seamless access to the corporate telephone system. Teltone’s telecom simulators are designed for application development and production test applications. Its line sharing products enable remote data collection, utility substation meter reading, and remote equipment monitoring.

Line Sharing Solutions

Teltone’s line sharing products enable multiple devices to share a single telephone line for remote data collection and other applications. These solutions are focused on the unique requirements of the utilities industry and certain other specialized customers. Commonly used for remote data collection from utility substations, utility meter reading, elevator equipment monitoring, and self-service coin counting machines, the Company’s line sharing products are designed to meet specific standards for these specialized environments. Increasingly, the Company’s customers seek the capability to remotely access maintenance and diagnostic equipment incorporated into their own products, enabling them to sell service and maintenance contracts.

Telecom Test Solutions

Teltone’s telecommunications simulators and emulators enable product testing without a live telephone connection. Telecommunications software developers, equipment manufacturers, and end users can simulate various types of telephone lines for engineering and production testing of their products. The Company’s simulators can satisfy a variety of performance test applications, such as Voice over Internet Protocol (VoIP) gateways, which are interfaces between data and phone networks; interactive voice response (IVR) systems, which are automated pre-recorded response systems typically used in customer service applications; and modem testing. The Company’s test solutions incorporate the most advanced telephone network features, as well as behaviors that can vary from public switched telephone network (PSTN) specifications. This in turn helps such vendors stay competitive by developing products compatible with telephony features in a variety of environments. Teltone is continually researching next-generation test and demonstration requirements to capitalize on future telecommunications technologies. In July 2002, Teltone released the EDGE, the Company’s latest simulator product. This product incorporates automated test features, including scripting, which allow a much more efficient interface to the types of automated test equipment used by telecom manufacturers.

Remote Voice Solutions

The OfficeLink solution allows a person to access their headquarters telephone system from their personal computer or phone and take calls just as they would in their office through their regular office telephone number. Calls are routed directly to the remote location from the office and voicemail consolidates into a single location. OfficeLink functions in conjunction with analog or digital PBX (Private Branch Exchange) systems and with a variety of different PBX products. OfficeLink products can also be implemented as part of a disaster preparedness program. Businesses can employ the Company’s products to structure their networks to be flexible and resilient enough to operate in a productive, safe and cost-effective manner in an unpredictable environment. The most common application of the Company’s OfficeLink product is the enterprise customer service call center. Due to OfficeLink’s flexible product architecture, customers can readily expand OfficeLink’s functions to satisfy the customers’ needs for general teleworkers and business continuity. OfficeLink solutions have been deployed in a variety of vertical markets, including financial services, utilities, manufacturing, catalog sales, insurance, government, media publishing, high technology and consumer goods and services. In July 2002, Teltone introduced the latest version of OfficeLink. Called ReVo, this product incorporates new features to more readily address the market for general telework, mobile professionals, remote call center agent monitoring by supervisors, and enhanced flexibility by switch type.

Industrial Defender, Inc. acquired Teltone Corporation in June 2008.

Through this acquisition, Industrial Defender now offers the electric utility industry’s most comprehensive cyber security solution for industrial control and SCADA systems.

Telenet – The History of Domain Names

Telenet packet-switched network

Date: 01/01/1974

Telenet was an American commercial packet switched network which went into service in 1974. It was the first packet-switched network service that was available to the general public.

Various commercial and government interests paid monthly fees for dedicated lines connecting their computers and local networks to this backbone network. Free public dialup access to Telenet, for those who wished to access these systems, was provided in hundreds of cities throughout the United States.

The original founding company, Telenet Inc., was established by Bolt Beranek and Newman (BBN), which recruited Larry Roberts (former head of the ARPANET) as President of the company, along with Barry Wessler. GTE acquired Telenet in 1979. It was later acquired by Sprint and renamed “Sprintnet”. Sprint migrated customers from Telenet to the modern-day Sprintlink IP network, one of many networks composing today’s Internet. Telenet had its first offices in downtown Washington, DC, then moved to McLean, Virginia. It was acquired by GTE while in McLean, and then moved its offices to Reston, Virginia.

Under the various names, the company operated a public network, and also sold its packet switching equipment to other carriers and to large enterprise networks.

History

After “value added carriers” were legalized in the U.S., Bolt Beranek and Newman (BBN), the private contractor for the ARPANET, set out to create a private-sector version. In January 1975, Telenet Communications Corporation announced that it had acquired the necessary venture capital after a two-year quest, and on August 16 of the same year it began operating the first public packet-switching network.

Coverage

Originally, the public network had switching nodes in seven US cities:

  • Washington, D.C. (network operations center as well as switching)
  • Boston, Massachusetts
  • New York, New York
  • Chicago, Illinois
  • Dallas, Texas
  • San Francisco, California
  • Los Angeles, California

The switching nodes were fed by Telenet Access Controller (TAC) terminal concentrators both colocated and remote from the switches. By 1980, there were over 1000 switches in the public network. At that time, the next largest network using Telenet switches was that of Southern Bell, which had approximately 250 switches.

Internal Network Technology

The initial network used statically defined hop-by-hop routing, with Prime commercial minicomputers as switches, but then migrated to a purpose-built multiprocessing switch based on 6502 microprocessors. Among the innovations of this second-generation switch was a patented arbitrated bus interface that created a switched fabric among the microprocessors. By contrast, a typical microprocessor-based system of the time used a shared bus; switched fabrics did not become common until about twenty years later, with the advent of PCI Express.

Most interswitch lines ran at 56 kbit/s, with a few, such as New York-Washington, at T1 (i.e., 1.544 Mbit/s). The main internal protocol was a proprietary variant on X.75; Telenet also ran standard X.75 gateways to other packet switching networks.

Originally, the switching tables could not be altered separately from the main executable code, and topology updates had to be made by deliberately crashing the switch code and forcing a reboot from the network management center. Improvements in the software allowed new tables to be loaded, but the network never used dynamic routing protocols. Multiple static routes, on a switch-by-switch basis, could be defined for fault tolerance. Network management functions continued to run on Prime minicomputers.
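The fault-tolerance approach described above, ordered static routes with no dynamic recomputation, can be sketched in a few lines. This is an illustrative toy (the class, city names, and API are invented for this sketch, not taken from Telenet's actual switch software):

```python
# Toy sketch of per-switch static routing with ordered fallback routes,
# in the spirit of Telenet's design: no dynamic routing protocol, just
# multiple statically defined routes per destination.

class StaticRouter:
    def __init__(self):
        # destination -> ordered list of candidate routes
        self.routes = {}

    def add_route(self, dest, next_hop):
        """Append a static route; earlier entries are preferred."""
        self.routes.setdefault(dest, []).append({"next_hop": next_hop, "up": True})

    def mark_down(self, dest, next_hop):
        """Record a failed trunk; the table itself is never recomputed."""
        for route in self.routes.get(dest, []):
            if route["next_hop"] == next_hop:
                route["up"] = False

    def next_hop(self, dest):
        # First usable route wins; if all are down, the destination is
        # unreachable until new tables are loaded.
        for route in self.routes.get(dest, []):
            if route["up"]:
                return route["next_hop"]
        return None

router = StaticRouter()
router.add_route("los-angeles", "dallas")          # primary route
router.add_route("los-angeles", "san-francisco")   # static backup
router.mark_down("los-angeles", "dallas")          # simulate a failed trunk
print(router.next_hop("los-angeles"))              # prints: san-francisco
```

Failover here is purely a matter of scanning the pre-defined list, which mirrors the text's point: resilience came from extra static entries, not from any routing protocol.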

Its X.25 host interface was the first in the industry, and Telenet helped standardize X.25 in the CCITT.

Tektronix – The History of Domain Names

Tektronix – tek.com was registered

Date: 05/08/1986

On May 8, 1986, Tektronix became the 16th company to register its domain, tek.com.

Tektronix, Inc. (“Tek”) is an American company best known for manufacturing test and measurement devices such as oscilloscopes, logic analyzers, and video and mobile test protocol equipment. Originally an independent company, it is now a subsidiary of Fortive, a spinoff of Danaher Corporation. Several charities are or were associated with Tektronix, including the Tektronix Foundation and the M.J. Murdock Charitable Trust in Vancouver, Washington.

History

1946–1954

The company traces its roots to the electronics revolution that immediately followed World War II. The company’s founders, C. Howard Vollum and Melvin J. “Jack” Murdock, invented the world’s first triggered oscilloscope in 1946, a significant technological breakthrough. This oscilloscope, the model 511, was a triggered-sweep oscilloscope. The first oscilloscope with a true time base was the Tektronix model 513. The leading oscilloscope manufacturer at the time was DuMont Laboratories, which had pioneered the frequency-synch trigger and sweep. Allen DuMont personally tried the 511 at an electronics show and was impressed, but when he saw the price of $795, about twice as much as his equivalent model, he told Howard Vollum at the show that they would have a hard time selling many. Tektronix was incorporated in 1946 with its headquarters at SE Foster Road and SE 59th Avenue in Portland, Oregon. The company had 12 employees in 1947 and 250 in 1951. By 1950 the company had begun building a manufacturing facility in Washington County, Oregon, at Barnes Road and the Sunset Highway, and by 1956 it had expanded the facility to 80,000 square feet (7,000 m2). The company then moved its headquarters to this site, following an employee vote. A detailed story of Howard Vollum and Jack Murdock, along with the products that made Tektronix a leading maker of oscilloscopes, can be found at the Museum of Vintage Tektronix Equipment.

1955–1969

In 1956, a large piece of property in nearby Beaverton became available; the company’s employee retirement trust purchased the land and leased it back to the company. Construction began in 1957, and on May 1, 1959, Tektronix moved into its new Beaverton headquarters campus, on a 313-acre (1.27 km2) site which came to be called the Tektronix Industrial Park. In the late 1950s (1957–58), Tektronix set a new trend in oscilloscope applications that would continue into the 1980s: the plug-in oscilloscope. Starting with the 530 and 540 series oscilloscopes, the operator could switch in different horizontal sweep or vertical input plug-ins, making the oscilloscope a flexible, adaptable test instrument. Later, Tektronix added plug-ins that let the scope operate as a spectrum analyzer, waveform sampler, cable tester, and transistor curve tracer. The 530 and 540 series also ushered in the delayed trigger, which allows triggering partway through a sweep rather than only at its beginning, giving more stable triggering and better waveform reproduction. In 1961, Tektronix sold its first (possibly the world’s first practical) completely portable oscilloscope, the model 321. This oscilloscope could run on AC line power (mains) or on rechargeable batteries. It also brought the oscilloscope into the transistor age (only a Nuvistor ceramic tube was used for the vertical amplifier input). A year and a half later, the model 321A came out, which was all-transistor. The 560 series introduced the rectangular CRT to oscilloscopes. In 1964, Tektronix made an oscilloscope breakthrough: the world’s first mass-produced analog storage oscilloscope, the model 564. Hughes Aircraft Company is credited with the first working storage oscilloscope (the model 104D), but it was made in very small numbers and is extremely rare today. In 1966, Tektronix brought out a line of high-frequency, full-function oscilloscopes called the 400 series.
The oscilloscopes were packed with features for field work applications. These scopes were outstanding performers, often preferred over their laboratory bench counterparts. The first models were the 422, with a 16 MHz bandwidth, and the 453, a 50 MHz bandwidth model. The following year came the 454, a 150 MHz portable. These models put Tektronix well ahead of its competitors for years. The US military contracted with Tektronix for a “ruggedized” model 453 for field servicing. The 400 series models continued to be popular choices in the 1970s and ’80s, and their styling was copied by Tektronix’s competitors. 400 series oscilloscopes are still in use today.

1970–1985

The company’s IPO, when it publicly sold its first shares of stock, was on September 11, 1963. In 1974, the company acquired 256 acres (1.0 km2) in Wilsonville, Oregon, where it built a facility for its imaging group. By 1976, the company employed nearly 10,000 people and was the state’s largest employer. Tektronix’s 1956 expansion and, in 1962, Electro Scientific Industries’ similar move to Washington County and expansion are credited with fostering the development of a large high-tech industry in Washington County, a cluster of firms often referred to as the “Silicon Forest”. For many years, Tektronix was the major electronics manufacturer in Oregon, and in 1981 its U.S. payroll peaked at over 24,000 employees. Tektronix also had operations in Europe, South America, and Asia. European factories were located in Saint Peter’s, Guernsey (then in the European Free Trade Association) until 1990; Hoddesdon (north of London, UK); and Heerenveen, Netherlands (then in the European Common Market). Some oscilloscopes marketed in Europe and the UK were sold under the brand name Telequipment, but many in the UK used the Tektronix brand name in the 1960s and ’70s. For many years, Tektronix operated in Japan as Sony-Tektronix, a 50-50 joint venture of Sony Corporation and Tektronix, Inc.; this was due to Japanese trade restrictions at the time. Tektronix has since bought out Sony’s share and is now the sole owner of the Japanese operation.[14] Under the Sony-Tektronix name, the 300 series oscilloscopes were lightweight and totally portable; they replaced the model 321/321A oscilloscopes. Examples of the Sony/Tektronix models were the 314, 323, 335, and 370. During the early 1970s, Tektronix made a major design change to its oscilloscopes with the introduction of the 5000 and 7000 series. These oscilloscopes maintained the plug-in capabilities that originally started with the 530 and 540 series, but the choice of plug-ins was even greater.
These scopes used custom-designed integrated circuits fabricated by Tektronix. The CRTs were all rectangular and were all fabricated by Tektronix. These oscilloscopes provided on-screen display of control settings. The 5000 series was the general-purpose line, while the 7000 series was capable of a wide variety of applications and could accept as many as four plug-ins. One model, the 7104 (introduced in 1978), was a true 1 GHz bandwidth oscilloscope. Beginning with the firm’s first cathode-ray oscilloscopes, Tektronix has enjoyed a leading position in the test and measurement market. Although its equipment was expensive, it offered performance, quality, and stability. Most test equipment manufacturers built their oscilloscopes with off-the-shelf, generally available components, but Tektronix, in order to gain an extra measure of performance, used many custom-designed or specially selected components. They even had their own factory for making ultra-bright, sharp CRTs. Later on, they built their own integrated circuit manufacturing facility in order to make custom ICs for their equipment. Tektronix instruments contributed significantly to the development of computers, test, and communications equipment and to the advancement of research and development in the high-technology electronics industry generally. As time went on, Tektronix fabricated more and more of its own electronic parts. This led to very specialized skills and talents, which in time led to employees forming new businesses. Some former Tektronix employees left to create other successful “Silicon Forest” companies. Spin-offs include Mentor Graphics, Planar Systems, Floating Point Systems, Cascade Microtech, Merix Corporation, Anthro Corporation, and Northwest Instrument Systems (NWIS), later renamed MicroCase. Even some of the spin-offs have created spin-offs, such as InFocus.
As Tektronix fabricated more specialized parts, it broadened its product base to include logic analyzers, digital multimeters, and signal generators. The TM500 and TM5000 rack-mount series was born, featuring custom-designed test instruments chosen by the buyer.

1986–2006

Tektronix faced big challenges to its business structure. In the 1980s, Tektronix found itself distracted, with too many divisions in too many markets. This led to decreasing earnings in almost every quarter. A period of layoffs, top management changes, and sell-offs followed. In 1994, Tektronix spun off its printed circuit board manufacturing operation as a separate company, Merix Corp., headquartered in Forest Grove, Oregon.[16] Eventually, Tektronix was left with its original test and measurement business. Upon his promotion in 2000, the then-CEO, Richard H. “Rick” Wills, carefully limited corporate spending in the face of the collapsing high-tech bubble. This led the way for Tektronix to emerge as one of the largest companies in its product niche, with a market capitalization of $3 billion as of April 2006. However, this failed to prevent it from becoming an acquisition target, and Tektronix was acquired by Danaher Corporation in 2007. The major product change of this era was the digital sampling scope. With advances in signal sampling techniques and digital processing, oscilloscope manufacturers found a new horizon in the market: the ability to sample a signal and digitize it for real-time viewing, or store it digitally for future use, while maintaining the integrity of the waveform. In addition, a computer can be integrated with the scope to store many waveforms or instruct the scope to do further analysis, and color-enhanced waveforms can be produced for ease of identification. Tektronix was heavily involved in designing digital sampling oscilloscopes, which in the mid-1980s quickly replaced its analog oscilloscopes. The 400, 5000, and 7000 series oscilloscopes were replaced with a new generation of digital oscilloscopes with storage capability, the 11000 and TDS series. The 11000 series were large rack-mount laboratory models with a large flat CRT face and touch-screen, multiple-color, multi-waveform display capability.
They were still plug-in units and could accept both the older 7000 series (7-) plug-ins and the new 11000 series (11A-) plug-ins. The TDS series replaced the 300 and 400 series portable line, with the same panel layout but enhanced storage and measuring capabilities. During this period Tektronix also expanded its test equipment line to logic analyzers, signal generators, and the like. By the mid-1990s the CRT was dropped, and Tektronix started using LCD panels for display. The 11000 series was replaced by the MSO (Mixed Signal Oscilloscope), which featured a color active-matrix LCD. The TDS line continued but with LCD panels, starting with the TDS-210. Among the TDS models, the lower-priced units, which replaced the last of the 2000 series analog scopes, featured monochrome displays, while the higher-end models used color LCDs and were closer in performance to the older 400 series scopes. A spinoff of the TDS was the TBS storage scope series. Later, Tektronix replaced the 200 series mini oscilloscopes with the TH series handheld digital oscilloscopes. All TDS and spinoff series with LCD displays are totally portable (lightweight, and able to run on AC or batteries).

2007 to present

On November 21, 2007, Tektronix was acquired by Danaher Corporation for $2.85 billion. Prior to the acquisition, Tektronix traded on the New York Stock Exchange under the symbol TEK, the nickname by which Tektronix is known to its employees, customers, and neighbors. On October 15, 2007, Danaher Corporation had tendered an offer to acquire Tektronix for $38 per share in cash, which equated to a valuation of approximately $2.8 billion. The deal closed five and a half weeks later, with 90 percent of TEK shares being sold in the tender offer. As part of the acquisition by Danaher, the Communications Business division of Tektronix was spun off into a separate business entity under Danaher, Tektronix Communications.

The digital oscilloscope lines introduced in the 1990s (MSO, TDS, TH series) are still being manufactured in some form.

On February 1, 2016, Tektronix introduced a new logo design, replacing a logo that had been in use since 1992, and indicated a shift in strategy to offer measurement products tailored for specific fields such as computing, communications, and automotive. Danaher spun off several subsidiaries, including Tektronix, in 2016 to create Fortive.

TCP IP Protocol – The History of Domain Names

TCP/IP protocol suite formalized

Date: 01/01/1982

TCP/IP (Transmission Control Protocol/Internet Protocol) is the suite of communications protocols that is used to connect hosts on the Internet and on most other computer networks as well. It is also referred to as the TCP/IP protocol suite and the Internet protocol suite.

Introducing the TCP/IP Protocol Suite

“TCP/IP” is the acronym commonly used for the set of network protocols that compose the Internet Protocol suite. Many texts use the term “Internet” to describe both the protocol suite and the global wide area network. Here, “TCP/IP” refers specifically to the Internet protocol suite, and “Internet” refers to the wide area network and the bodies that govern it. To interconnect your TCP/IP network with other networks, you must obtain a unique IP address for your network. At the time of this writing, you obtain this address from an Internet service provider (ISP). If hosts on your network are to participate in the Internet Domain Name System (DNS), you must obtain and register a unique domain name. The InterNIC coordinates the registration of domain names through a group of worldwide registries.

Protocol Layers and the Open Systems Interconnection Model

Most network protocol suites are structured as a series of layers, sometimes collectively referred to as a protocol stack. Each layer is designed for a specific purpose. Each layer exists on both the sending and receiving systems. A specific layer on one system sends or receives exactly the same object that another system’s peer process sends or receives. These activities occur independently from activities in layers above or below the layer under consideration. In essence, each layer on a system acts independently of other layers on the same system. Each layer acts in parallel with the same layer on other systems.
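The peer-layer behavior described above can be sketched in a few lines of Python. This is a toy model, not a real protocol: the layer names and bracketed "headers" are illustrative. Each sending layer prepends its own header, and only the matching peer layer on the receiver removes it, independently of the layers above and below.

```python
# Toy model of protocol layering: each layer prepends a header on the
# way down the sender's stack, and the peer layer strips exactly that
# header on the way up the receiver's stack.

def wrap(data: bytes, layer: str) -> bytes:
    """Sender side: this layer adds its header in front of the payload."""
    return f"[{layer}]".encode() + data

def unwrap(data: bytes, layer: str) -> bytes:
    """Receiver side: the peer layer removes its own header, nothing more."""
    header = f"[{layer}]".encode()
    assert data.startswith(header), f"expected {layer} header"
    return data[len(header):]

# Application data travels down through transport, internet, and link layers.
payload = b"hello"
on_the_wire = payload
for layer in ["transport", "internet", "link"]:
    on_the_wire = wrap(on_the_wire, layer)

# On the receiving system the same layers peel off in reverse order.
received = on_the_wire
for layer in ["link", "internet", "transport"]:
    received = unwrap(received, layer)

print(received)  # b'hello'
```

Each `unwrap` call sees only its peer's header, which mirrors how a transport layer never inspects link-layer framing.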

OSI Reference Model

Most network protocol suites are structured in layers. The International Organization for Standardization (ISO) designed the Open Systems Interconnection (OSI) Reference Model, which uses structured layers. The OSI model describes a structure with seven layers for network activities. One or more protocols are associated with each layer. The layers represent data transfer operations that are common to all types of data transfers among cooperating networks.

TCP/IP Protocol Architecture Model

The OSI model describes idealized network communications with a family of protocols. TCP/IP does not correspond directly to this model: it either combines several OSI layers into a single layer, or does not use certain layers at all. The Oracle Solaris implementation of TCP/IP uses five layers, from the topmost (application) down to the bottommost (physical network): application, transport, internet, data-link, and physical network.
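The correspondence between the two models can be sketched as a small lookup. This assumes the commonly cited five-layer division used in the Solaris documentation; the exact grouping of the upper OSI layers is illustrative:

```python
# Rough mapping from the five TCP/IP layers (Solaris-style, topmost
# first) to the seven OSI layers they cover or combine.
TCPIP_TO_OSI = {
    "application":      ["application", "presentation", "session"],
    "transport":        ["transport"],
    "internet":         ["network"],
    "data-link":        ["data link"],
    "physical network": ["physical"],
}

# Flatten the mapping: every OSI layer should be accounted for exactly once.
osi_layers = [osi for group in TCPIP_TO_OSI.values() for osi in group]
print(len(TCPIP_TO_OSI), len(osi_layers))  # 5 TCP/IP layers cover 7 OSI layers
```

The single "application" entry absorbing three OSI layers is the point made in the next section: TCP/IP provides no separate session or presentation layers.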

Applications in TCP/IP

TCP/IP does not provide session or presentation services directly to an application. Programmers are on their own, but this does not mean they have to create everything from scratch. For example, applications can use a character-based presentation service called the Network Virtual Terminal (NVT), part of the Internet’s telnet remote access specification. Other applications can use Sun’s External Data Representation (XDR) or IBM’s (and Microsoft’s) NetBIOS programming libraries for presentation services. In this respect, there are many presentation layer services that TCP/IP can use, but there is no formal presentation service standard in TCP/IP that all applications must use. Host TCP/IP implementations typically provide a range of applications that give users access to the data handled by the transport-layer protocols. These applications use a number of protocols that are not part of TCP/IP proper, but are used with TCP/IP. These protocols include the Hypertext Transfer Protocol (HTTP) used by Web browsers, the Simple Mail Transfer Protocol (SMTP) used for email, and many others.

TCP History

Due to its prominent role, the history of TCP is impossible to describe without going back to the early days of the protocol suite as a whole. In the early 1970s, what we know of today as the global Internet was a small research internetwork called the ARPAnet, named for the United States Defense Advanced Research Projects Agency (DARPA or ARPA). This network used a technology called the Network Control Protocol (NCP) to allow hosts to connect to each other. NCP did approximately the jobs that TCP and IP do together today. Due to limitations in the NCP, development began on a new protocol that would be better suited to a growing internetwork. This new protocol, first formalized in RFC 675, was called the Internet Transmission Control Program (TCP). Like its predecessor NCP, TCP was responsible for basically everything that was needed to allow applications to run on an internetwork. Thus, TCP was at first both TCP and IP.

Several years were spent adjusting and revising TCP, with version 2 of the protocol documented in 1977. While the functionality of TCP was steadily improved, there was a problem with the basic concept behind the protocol. Having TCP by itself handle both datagram transmission and routing (layer three functions) as well as connections, reliability and data flow management (layer four functions) meant that TCP violated key concepts of protocol layering and modularity. TCP forced all applications to use the layer four functions in order to use the layer three functions. This made TCP inflexible, and poorly suited to the needs of applications that need only the lower-level functions and not the higher-level ones. As a result, the decision was made to split TCP in two: the layer four functions were retained, with TCP renamed the Transmission Control Protocol (as opposed to Program), while the layer three functions became the Internet Protocol. This split was finalized in version 4 of TCP, and so the first IP was given “version 4” as well, for consistency. Version 4 of TCP was defined in RFC 793, Transmission Control Protocol, published in September 1981, and is still the current version of the standard.

Even though it is more than 20 years old and is the first version most people have ever used, version 4 was the result of several years’ work and many earlier TCP versions tested on the early Internet. It is therefore a very mature protocol for its age. A precocious protocol, you could say.

TCP-IP – The History of Domain Names

TCP/IP

Date: 01/01/1981

TCP/IP (Transmission Control Protocol/Internet Protocol) is the basic communication language or protocol of the Internet. It can also be used as a communications protocol in a private network (either an intranet or an extranet). When you are set up with direct access to the Internet, your computer is provided with a copy of the TCP/IP program just as every other computer that you may send messages to or get information from also has a copy of TCP/IP.

TCP/IP is a two-layer program. The higher layer, Transmission Control Protocol, manages the assembling of a message or file into smaller packets that are transmitted over the Internet and received by a TCP layer that reassembles the packets into the original message. The lower layer, Internet Protocol, handles the address part of each packet so that it gets to the right destination. Each gateway computer on the network checks this address to see where to forward the message. Even though some packets from the same message are routed differently than others, they’ll be reassembled at the destination. TCP/IP uses the client/server model of communication in which a computer user (a client) requests and is provided a service (such as sending a Web page) by another computer (a server) in the network. TCP/IP communication is primarily point-to-point, meaning each communication is from one point (or host computer) in the network to another point or host computer. TCP/IP and the higher-level applications that use it are collectively said to be “stateless” because each client request is considered a new request unrelated to any previous one (unlike ordinary phone conversations that require a dedicated connection for the call duration).
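This division of labor is visible with Python's standard socket module on the loopback interface (the payload size here is arbitrary): the client hands TCP one large message, the kernel's TCP/IP layers segment, address and route it, and the server reads back a single ordered byte stream, already reassembled.

```python
import socket
import threading

received = []

def server(listener: socket.socket) -> None:
    """Accept one connection and collect the whole reassembled stream."""
    conn, _ = listener.accept()
    with conn:
        data = b""
        while chunk := conn.recv(4096):  # empty bytes means the peer closed
            data += chunk
        received.append(data)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=server, args=(listener,))
t.start()

# The client sends far more than fits in one packet; the TCP layer splits
# it into segments and the receiving TCP layer reassembles them in order.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"x" * 100_000)
client.close()

t.join()
listener.close()
print(len(received[0]))  # 100000
```

Neither side ever sees an individual packet: segmentation and reassembly happen entirely below the application.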

Being stateless frees network paths so that everyone can use them continuously. (Note that the TCP layer itself is not stateless as far as any one message is concerned. Its connection remains in place until all packets in a message have been received.) Many Internet users are familiar with the even higher-layer application protocols that use TCP/IP to get to the Internet. These include the World Wide Web’s Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), Telnet, which lets you log on to remote computers, and the Simple Mail Transfer Protocol (SMTP). These and other protocols are often packaged together with TCP/IP as a “suite.” Personal computer users with an analog phone modem connection to the Internet usually get to the Internet through the Serial Line Internet Protocol (SLIP) or the Point-to-Point Protocol (PPP). These protocols encapsulate the IP packets so that they can be sent over the dial-up phone connection to an access provider’s modem. Protocols related to TCP/IP include the User Datagram Protocol (UDP), which is used instead of TCP for special purposes. Other protocols are used by network host computers for exchanging router information. These include the Internet Control Message Protocol (ICMP), the Interior Gateway Protocol (IGP), the Exterior Gateway Protocol (EGP), and the Border Gateway Protocol (BGP).

With so many different network methods, something was needed to unify them. Robert E. Kahn of DARPA and ARPANET recruited Vinton Cerf of Stanford University to work with him on the problem. By 1973, they had worked out a fundamental reformulation, in which the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmermann, Gerard Le Lann and Louis Pouzin (designer of the CYCLADES network) with important work on this design.

The specification of the resulting protocol, RFC 675 – Specification of Internet Transmission Control Program, by Vinton Cerf, Yogen Dalal and Carl Sunshine, Network Working Group, December 1974, contains the first attested use of the term internet, as a shorthand for internetworking; later RFCs repeat this use, so the word started out as an adjective rather than the noun it is today.

With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn’s initial problem. DARPA agreed to fund development of prototype software, and after several years of work, the first somewhat crude demonstration of a gateway between the Packet Radio network in the SF Bay area and the ARPANET was conducted. On November 22, 1977 a three-network demonstration was conducted including the ARPANET, the Packet Radio Network and the Atlantic Packet Satellite network—all sponsored by DARPA. Stemming from the first specifications of TCP in 1974, TCP/IP emerged in mid-to-late 1978 in nearly final form.

By 1981, the associated standards were published as RFCs 791, 792 and 793 and adopted for use. DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems and then scheduled a migration of all hosts on all of its packet networks to TCP/IP. On January 1, 1983, known as flag day, TCP/IP protocols became the only approved protocols on the ARPANET, replacing the earlier NCP protocol.

TCP-IP Global – The History of Domain Names

TCP/IP Goes Global

Date: 01/01/1987

TCP/IP goes global (1989–2000)

Between 1984 and 1988 CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs and an accelerator control system. CERN continued to operate its limited self-developed system, CERNET, internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989.

In 1988 Daniel Karrenberg, from Centrum Wiskunde & Informatica (CWI) in Amsterdam, visited Ben Segal, CERN’s TCP/IP Coordinator, looking for advice about the transition of the European side of the UUCP Usenet network (much of which ran over X.25 links) over to TCP/IP. In 1987, Ben Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks, and in 1989 CERN opened its first external TCP/IP connections. This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out co-ordination work together.  Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.

At the same time as the rise of internetworking in Europe, ad hoc networks connecting to ARPA and linking Australian universities formed, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors’ Committee and provided a dedicated IP-based network for Australia.

The Internet began to penetrate Asia in the late 1980s. Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNET in 1989. It hosted the annual meeting of the Internet Society, INET’92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.

Tandy – The History of Domain Names

Tandy Corporation – tandy.com was registered

Date: 12/11/1986

On December 11, 1986, Tandy Corporation registered the tandy.com domain name, making it the 49th .com domain ever to be registered.

Tandy Corporation was an American family-owned leather goods company based in Fort Worth, Texas. Tandy Leather was founded in 1919 as a leather supply store, and acquired a number of craft retail companies, including RadioShack in 1963. In 2000, the Tandy Corporation name was dropped and the entity became the RadioShack Corporation, selling the Tandy Leather name and operating assets to Tandy Leather Factory.

History

Tandy began in 1919 when two friends, Norton Hinckley and Dave L. Tandy, decided to start the Hinckley-Tandy Leather Company and concentrated their efforts on selling sole leather and other supplies to shoe repair dealers in Texas. Hinckley and Tandy opened their first branch store in 1927 in Beaumont, Texas, and in 1932 Dave Tandy moved the store from Beaumont to Houston, Texas. Tandy’s business survived the economic storms of the Depression, gathered strength and developed a firm presence in the shoe findings business. Dave Tandy had a son, Charles Tandy, who was drafted into the business during his early twenties. Charles obtained a B.A. degree at Texas Christian University, then began attending the Harvard Business School to further expand his education. As World War II escalated, Charles was called to serve his country in the military and relocated to Hawaii. He wrote his father from overseas suggesting that leathercraft might offer new possibilities for growing the shoe findings business, since the same supplies were used widely in Navy and Army hospitals and recreation centers. Leathercraft gave the men something useful to do, and their handiwork, in addition to being therapeutic, had genuine value.

Charles Tandy returned home from the service as a Lieutenant Commander in 1948 and negotiated to operate the fledgling leathercraft division himself. He had encouraged and followed the development of that venture through correspondence with his father. Within a short time, in 1950, Charles opened the first of two retail stores that specialized exclusively in leathercraft. Mr. Hinckley did not share the enthusiasm of Dave and Charles Tandy for the new leathercraft division. As a result, the two original founders came to an agreement in 1950 that Hinckley would continue to pursue the shoe findings business and the Tandy partners would specialize in promoting leathercrafts. The first Tandy Catalog, only 8 pages long, was mailed to readers of Popular Science magazine who had responded to two-inch test ads placed by Tandy. From 1950 forward, Tandy operated retail mail-order stores supported by direct mail advertising.[1] This successful formula helped the company expand into a chain of some 150 leathercraft stores. A growing do-it-yourself movement, prompted by a shortage of consumer goods and high labor costs, continued to gather momentum. The fifteen leathercraft stores opened during the division’s first two years of operation became quite successful. Tandy began expanding by adding new product lines; the first acquisition was the American Handicrafts Company, which brought a broad line of do-it-yourself handicraft products, two established retail stores in the New York market, and useful knowledge of school and institutional markets. Sixteen additional retail stores were opened in 1953, and by 1955 Tandy Leather was a thriving company with leased sales sites in 75 cities across the United States. Tandy Leather became an attractive commodity and was purchased in 1955 by the American Hide and Leather Company of Boston (renamed General American Industries in 1956).
Charles continued to manage the Tandy Leather division while it was owned by GAI. During 1956, General American Industries acquired three other companies unrelated to the leather industry, and a struggle for control of the parent company began. Charles saw the need to break away from the direction set by GAI. He used all his resources, raised additional money, and exercised his right to purchase the 500,000 shares of stock that were included in the original settlement. When the votes were counted at that pivotal stockholders’ meeting, the Tandy group took management control of General American Industries.

Acquisition of Merribee & RadioShack

Tandy had a landmark year in 1961. The company name was changed to Tandy Corporation and the corporate headquarters were moved to Fort Worth, Texas where Charles Tandy became the President and Chairman of the Board. Tandy Leather was operating 125 stores in 105 cities of the United States and Canada and expansion was the name of the game. Tandy acquired the assets of Merribee Art Embroidery Co., manufacturer and retailer of needlecraft items, as well as 5 other companies, including Cleveland Crafts Inc. and brought on the owner, Werner Magnus, to help run the newly acquired Merribee division. The first of the ‘Tandy Marts’ was also opened in Fort Worth in December 1961. Charles Tandy believed that the do-it-yourself movement had gained sufficient momentum to support a new merchandising concept. The first Tandy Mart had twenty-eight different shops all devoted to craft and hobby merchandise and included American Handicraft, Tandy Leather, Electronics Crafts and Merribee in an area of about 40,000 square feet. Charles Tandy became intrigued with the potential for rapid growth that he saw in the electronics retail industry during 1962. He found RadioShack in Boston, a mail order company that had started in the twenties selling to ham operators and electronics buffs. By April 1963, the Tandy Corporation acquired management control of RadioShack Corporation and within two years, RadioShack’s $4 million loss was turned into a profit under the leadership of Charles Tandy.

Sales were going well for Tandy during this time. The “beads & fringe” days were in full swing with the hippie era, and the ‘Nature-Tand’ look was a big seller for belts, purses, sandals and wristbands. Under the leadership of Lloyd Redd (President) and Al Patten (VP of Operations), the company prospered. The number of Tandy store-fronts skyrocketed over the next five to six years, growing from 132 sites in 1969 to 269 sites in 1975. Ground was broken in downtown Fort Worth for the construction of the Tandy Towers in 1975. The 18-story office building was initiated as Phase I of a massive downtown development planned to cover eight city blocks and become the new headquarters of Tandy Corp. It contained an upscale retail shopping center with an indoor ice skating rink and had its own privately owned subway system. The Tandy Corporation Board of Directors then announced a plan to separate Tandy’s businesses into three distinct publicly held companies. The two new companies would be named Tandycrafts, Inc. and Tex Tan-Hickok, Inc. This plan was publicized as a strategy to provide intensive leadership and tailored management of the three distinct and diverse businesses of the company, each of which had recently reached a substantial size. With this transition, RadioShack and Tandy Leather Company were no longer under the same corporate umbrella.[3] Wray Thompson was promoted to President of Tandy Leather Company in 1976 and Dave Ferrill was promoted to the position of National Sales Manager; they oversaw 288 stores. Ron Morgan was promoted to Eastern Divisional VP in 1977. Although they opened their 300th store that year, the popularity of Nature-Tand’s products had begun to slide, as reflected by their sales and profit records. Charles Tandy died on November 4, 1978, at the age of 60. Concurrently, key stakeholders began to question the direction of the company.
Wray Thompson subsequently made the career decision to resign from his position as President and later started The Leather Factory with Ron Morgan, which eventually purchased Tandy Leather Corporation in 2000.

Tandy stores

In 1973 Tandy Corporation began an expansion program outside its home market of the USA, opening a chain of RadioShack-style stores in Europe and Australia under the Tandy name. The first store to open was in Aartselaar, Belgium, on August 9, 1973. The first UK store opened October 11, 1973, in Hall Green, Birmingham. Initially these new stores were under the direct ownership of Tandy Corporation. In 1986 Tandy Corporation formed its subsidiary InterTAN as a separate entity, though connections between them were still visible. For example, catalogue-number compatibility was maintained so that the same catalogue number in both companies would refer to the same item.

Tandy stores in the UK sold mainly own-brand goods under the ‘Realistic’ label, and the shops were distinguished on the high street by continuing to use written sales receipts and a cash drawer instead of a till as late as the early 1990s. Staff were required to take the name and address of any customer who made a purchase, however small, in order to put them on the company’s brochure mailing list, which often caused disgruntlement. A popular feature of Tandy stores was the free battery club, in which customers were allowed to claim a certain number of free batteries per year. In the early 1990s the chain ran the ‘Tandy Card’ store credit card scheme and the ‘Tandy Care’ extended warranty policies, which were heavily marketed by staff.

In 1999 the UK stores were acquired by Carphone Warehouse, as part of an expansion strategy that saw the majority of the Tandy stores converted either to Carphone Warehouse or Tecno photographic stores. By 2001 all former Tandy stores had been converted or closed. A small number of the stores were sold to a new company called T2 Retail Ltd, formed by former Tandy (Intertan UK) employees Dave Johnson, Neil Duggins and Philip Butcher, who continued the RadioShack-style theme for a while, but these stores also closed in 2005. A new company called T2 Enterprises now continues using the old T2 Retail web presence as an exclusively on-line retailer stocking a range of RadioShack products and other electronics.

In 2001 the Australian stores were sold to Woolworths Limited. In February 2009, Woolworths Limited announced that it would be closing all Tandy stores within the next two years. By the end of June 2012, all stores had closed. After Woolworths purchased Tandy Electronics, despite owning rival Dick Smith Electronics, both continued to trade as separate entities.

In Canada, the InterTAN stores were sold to rival Circuit City Inc. At that time, the stores were branded as RadioShack; however, because Circuit City lost the naming rights, all RadioShacks were re-branded as “The Source by Circuit City” (now called simply The Source). Some have closed. In 2009 Circuit City sold The Source to Bell Canada Enterprises (BCE).

In 2012, Tandy Corporation Ltd, a UK company, acquired the UK rights to the Tandy brand from RadioShack. It now operates as an on-line retailer of electronic components and kits.

Symbolics – The History of Domain Names

First commercial Internet domain name registered (Symbolics.com)

Date: 03/15/1985

The first domain name registered was Symbolics.com. It was registered on March 15, 1985, to Symbolics Inc., a computer systems company in Cambridge, Massachusetts, making it the first .com domain in the world. In August 2009, it was sold to XF.com Investments. Symbolics was best known for its Lisp machines, computers built to run the Lisp programming language, and even went public at one point.

The original Symbolics company pioneered computer development. Symbolics designed and manufactured a line of Lisp machines, single-user computers optimized to run the Lisp programming language. The Lisp Machine was the first commercially available “workstation” (although that word had not yet been coined). Symbolics also made significant advances in software technology, and offered one of the premier software development environments of the 1980s and 1990s.

Today, Symbolics.com remains the first, and oldest, registered domain name out of approximately 275,000,000 domain names in existence. Of these 275 million domain names, approximately 120 million use the .com extension. VeriSign reports that between 25 million and 30 million new domains are registered each year. There is a 72%–75% renewal rate among .com domains – so approximately three-quarters of new registrations are renewed the following year. VeriSign reports that 87% of .com domains resolve to an active website (13% are inactive).


Superhighway – The History of Domain Names

The Superhighway and Summit

Date: 01/11/1994

The information superhighway or infobahn was a popular term used through the 1990s to refer to digital communication systems and the Internet telecommunications network.

The Superhighway Summit was held at UCLA’s Royce Hall on 11 January 1994. It was the “first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications.” The conference was organized by Richard Frank of the Academy of Television Arts & Sciences and by Jeffrey Cole and Geoffrey Cowan, the former co-directors of UCLA’s Center for Communication Policy. It was introduced by former UCLA Chancellor Andrea L. Rich, and its keynote speaker was Vice President Al Gore.

Highlights

The conference was given extensive coverage by Cynthia Lee and Linda Steiner Lee over two issues of UCLA TODAY (January 13 and 27, 1994). In the article, Gore Details Telecommunications Ideas, Lee and Lee gave an overview of the opening speech given by Vice President Gore. They commented that “Vice President Al Gore outlined the Clinton Administration’s proposals to reform the communications marketplace and challenged his audience to provide links from the so-called information superhighway to every classroom, library, hospital, and clinic in the country by the year 2000 […] ‘We have a dream for…an information superhighway that can save lives, create jobs and give every American, young and old, the chance for the best education available to anyone, anywhere,’ Gore said.” During his talk, “Ernestine” (the fictional telephone operator created by Lily Tomlin for Rowan & Martin’s Laugh-In) made a surprise appearance. She complained “about the confusing and rapid transformation of communications technology. The Vice President laughingly assured Ernestine that the new technology would be simple to understand and available to all Americans.”

In the follow-up article, CEOs Ponder Direction of Information Superhighway, Cynthia Lee stated that leaders at the conference noted that the future of the Information Superhighway was still uncertain. “Here we are, all ready to go cruising off down this new information superhighway,” said Jeffrey Katzenberg, Chairman of The Walt Disney Studios, during one of the panel discussions, “and we really don’t know where we are going. It’s the first time we will be moving in a certain direction when we don’t even know our final destination.” Geoffrey Cowan, the former co-director of UCLA’s Center for Communication Policy, indicated that the key concept of the Information Superhighway was interactivity, or “the ability for the consumer to control it, to decide what they want to receive, and the ability of the technology to respond to highly sophisticated consumer demands.”

The participants underscored the point that the major challenge of the Information Highway would lie in access or the “gap between those who will have access to it because they can afford to equip themselves with the latest electronic devices and those who can’t.”

SUN – The History of Domain Names

sun.com was registered

Date: 03/19/1986

On March 19, 1986, Sun Microsystems registered the sun.com domain name, making it one of the earliest .com domains ever to be registered.

Sun Microsystems, Inc. was a company that sold computers, computer components, computer software, and information technology services and that created the Java programming language, Solaris Unix and the Network File System (NFS). Sun significantly evolved several key computing technologies, among them Unix, RISC processors, thin client computing, and virtualized computing. Sun was founded on February 24, 1982. At its height, Sun headquarters were in Santa Clara, California (part of Silicon Valley), on the former west campus of the Agnews Developmental Center.

On January 27, 2010, Sun was acquired by Oracle Corporation for US$7.4 billion, based on an agreement signed on April 20, 2009. The following month, Sun Microsystems, Inc. was merged with Oracle USA, Inc. to become Oracle America, Inc. Sun products included computer servers and workstations built on its own RISC-based SPARC processor architecture as well as on x86-based AMD Opteron and Intel Xeon processors; storage systems; and a suite of software products including the Solaris operating system, developer tools, Web infrastructure software, and identity management applications. Other technologies included the Java platform, MySQL, and NFS. Sun was a proponent of open systems in general and Unix in particular, and a major contributor to open-source software. Sun’s main manufacturing facilities were located in Hillsboro, Oregon, and Linlithgow, Scotland.

History

The initial design for what became Sun’s first Unix workstation, the Sun-1, was conceived by Andy Bechtolsheim when he was a graduate student at Stanford University in Palo Alto, California. Bechtolsheim originally designed the SUN workstation for the Stanford University Network communications project as a personal CAD workstation. It was designed around the Motorola 68000 processor with an advanced memory management unit (MMU) to support the Unix operating system with virtual memory support. He built the first ones from spare parts obtained from Stanford’s Department of Computer Science and Silicon Valley supply houses. On February 24, 1982, Vinod Khosla, Andy Bechtolsheim, and Scott McNealy, all Stanford graduate students, founded Sun Microsystems. Bill Joy of Berkeley, a primary developer of the Berkeley Software Distribution (BSD), joined soon after and is counted as one of the original founders. The Sun name is derived from the initials of the Stanford University Network. Sun was profitable from its first quarter in July 1982.

By 1983 Sun was known for producing 68000-based systems with high-quality graphics that were the only computers other than DEC’s VAX to run 4.2BSD. It licensed the computer design to other manufacturers, which typically used it to build Multibus-based systems running Unix from UniSoft. Sun’s initial public offering was in 1986 under the stock symbol SUNW, for Sun Workstations (later Sun Worldwide). The symbol was changed in 2007 to JAVA; Sun stated that the brand awareness associated with its Java platform better represented the company’s current strategy. Sun’s logo, which features four interleaved copies of the word sun, was designed by professor Vaughan Pratt, also of Stanford. The initial version of the logo was orange and had the sides oriented horizontally and vertically, but it was subsequently rotated to stand on one corner and re-colored purple, and later blue.

The “bubble” and its aftermath

In the dot-com bubble, Sun began making much more money, and its shares rose dramatically. It also began spending much more, hiring workers and building itself out. Some of this was because of genuine demand, but much was from web start-up companies anticipating business that would never happen. In 2000, the bubble burst. Sales in Sun’s important hardware division went into free-fall as customers closed shop and auctioned off high-end servers. Several quarters of steep losses led to executive departures, rounds of layoffs, and other cost cutting. In December 2001, the stock fell to its 1998, pre-bubble level of about $100. But it kept falling, faster than many other tech companies. A year later it had dipped below $10 (a tenth of what it was even in 1990) but bounced back to $20. In mid-2004, Sun closed its Newark, California factory and consolidated all manufacturing to Hillsboro, Oregon. In 2006, the rest of the Newark campus was put on the market.

Post-crash focus

In 2004, Sun canceled two major processor projects which emphasized high instruction-level parallelism and operating frequency. Instead, the company chose to concentrate on processors optimized for multi-threading and multiprocessing, such as the UltraSPARC T1 processor (codenamed “Niagara”). The company also announced a collaboration with Fujitsu to use the Japanese company’s processor chips in mid-range and high-end Sun servers. These servers were announced on April 17, 2007 as the M-Series, part of the SPARC Enterprise series. In February 2005, Sun announced the Sun Grid, a grid computing deployment on which it offered utility computing services priced at US$1 per CPU-hour for processing and US$1 per GB per month for storage. This offering built upon an existing 3,000-CPU server farm used for internal R&D for over 10 years, which Sun marketed as being able to achieve 97% utilization. In August 2005, the first commercial use of this grid was announced, for financial risk simulations, which was later launched as Sun’s first software-as-a-service product.

In January 2005, Sun reported a net profit of $19 million for the second quarter of fiscal 2005, its first profitable quarter in three years. This was followed by a net loss of $9 million on a GAAP basis for the third quarter of fiscal 2005, as reported on April 14, 2005. In January 2007, Sun reported a net GAAP profit of $126 million on revenue of $3.337 billion for its fiscal second quarter. Shortly following that news, it was announced that Kohlberg Kravis Roberts (KKR) would invest $700 million in the company.

Sun had engineering groups in Bangalore, Beijing, Dublin, Grenoble, Hamburg, Prague, St. Petersburg, Tel Aviv, Tokyo, and Trondheim. In 2007–2008, Sun posted revenue of $13.8 billion and had $2 billion in cash. First-quarter 2008 losses were $1.68 billion; revenue fell 7% to $12.99 billion. Sun’s stock lost 80% of its value between November 2007 and November 2008, reducing the company’s market value to $3 billion. With falling sales to large corporate clients, Sun announced plans to lay off 5,000 to 6,000 workers, or 15–18% of its work force. It expected to save $700 million to $800 million a year as a result of the moves, while also taking up to $600 million in charges.

Subnetting – The History of Domain Names

Subnetting Operation

Date: 01/01/1986

Subnetting Operation

Subnetting is the process of designating some high-order bits from the host part and grouping them with the network mask to form the subnet mask. This divides a network into smaller subnets. For IPv4, a network may also be characterized by its subnet mask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix.

Subnetting enables the network administrator to further divide the host part of the address into two or more subnets. In this case, a part of the host address is reserved to identify the particular subnet. This is easier to see if we show the IP address in binary format. The subnet mask covers the bits of the network address plus the bits reserved for identifying the subnetwork: all of these bits are set to 1, and the remaining host bits are set to 0. If, for example, four bits of the host part are borrowed for the subnet, the subnet mask would be 11111111.11111111.11110000.00000000 (255.255.240.0, or a /20 prefix). It is called a mask because it can be used to identify the subnet to which an IP address belongs by performing a bitwise AND operation on the mask and the IP address.
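The bitwise AND described above can be sketched in a few lines of Python. The /20 mask (11111111.11111111.11110000.00000000, i.e. 255.255.240.0) is the one from the text; the host address 192.168.23.42 is an arbitrary example chosen for illustration.

```python
def to_int(dotted):
    """Convert a dotted-decimal IPv4 string to a 32-bit integer."""
    a, b, c, d = (int(p) for p in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(n):
    """Convert a 32-bit integer back to dotted-decimal notation."""
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

mask = to_int("255.255.240.0")   # the /20 subnet mask from the text
addr = to_int("192.168.23.42")   # example host address

network = addr & mask            # bitwise AND yields the subnet's prefix
print(to_dotted(network))        # → 192.168.16.0
```

Two hosts belong to the same subnet exactly when this AND produces the same result for both of their addresses.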

Computers that belong to a subnet are addressed with a common, identical, most-significant bit-group in their IP address. This results in the logical division of an IP address into two fields, a network or routing prefix and the “rest” field or host identifier. The rest field is an identifier for a specific host or network interface. The routing prefix may be expressed in CIDR notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 192.168.1.0/24 is the prefix of the Internet Protocol Version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. The IPv6 address specification 2001:db8::/32 is a large address block with 2^96 addresses, having a 32-bit routing prefix.
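The CIDR examples above can be checked with Python’s standard ipaddress module; 192.168.1.0/24 and 2001:db8::/32 are the networks named in the text.

```python
import ipaddress

# IPv4: 24-bit prefix leaves 8 bits for hosts, so 2**8 addresses.
v4 = ipaddress.ip_network("192.168.1.0/24")
print(v4.prefixlen)       # → 24
print(v4.num_addresses)   # → 256

# IPv6: a 32-bit prefix leaves 128 - 32 = 96 bits, so 2**96 addresses.
v6 = ipaddress.ip_network("2001:db8::/32")
print(v6.num_addresses)   # → 2**96
```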

Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the network mask for the 192.168.1.0/24 prefix. Traffic is exchanged (routed) between subnetworks with special gateways (routers) when the routing prefixes of the source address and the destination address differ. A router constitutes the logical or physical boundary between the subnets.

The benefits of subnetting an existing network vary with each deployment scenario. In the address allocation architecture of the Internet using Classless Inter-Domain Routing (CIDR) and in large organizations, it is necessary to allocate address space efficiently. It may also enhance routing efficiency, or have advantages in network management when subnetworks are administratively controlled by different entities in a larger organization. Subnets may be arranged logically in a hierarchical architecture, partitioning an organization’s network address space into a tree-like routing structure.
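The routing decision described above can be sketched with the standard ipaddress module: traffic stays on the local subnet when source and destination share the routing prefix, and is handed to a gateway when they do not. The host addresses here are illustrative examples, not from the text.

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")      # the prefix from the text

dst_local = ipaddress.ip_address("192.168.1.200")  # same /24 prefix
dst_remote = ipaddress.ip_address("192.168.2.5")   # different prefix

# A host on 192.168.1.0/24 delivers directly when the destination is
# inside its own subnet, and forwards to its router otherwise.
print(dst_local in net)    # → True  (deliver on the local subnet)
print(dst_remote in net)   # → False (forward to the gateway)
```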

Stumbleupon – The History of Domain Names

StumbleUpon Buys Stumblers.com for $6,000

September 4, 2011

Social sharing site StumbleUpon has purchased the domain name Stumblers.com for $6,000. StumbleUpon users are called “stumblers”.

The sale was on Afternic’s sales list from the past week, although at the time it wasn’t clear who bought the domain name since it was registered to brand protection company MarkMonitor.

The seller was Baton Rouge, Louisiana based NameSeek.com.

As of right now the domain name still resolves to a Sedo domain parking page.

The purchase could be a simple brand protection move or StumbleUpon could have a plan up its sleeve.

Stumbled – The History of Domain Names

StumbleUpon Buys Stumbled.com

November 6, 2011

Earlier StumbleUpon bought Stumblers.com, now it has picked up Stumbled.com.

The whois record for Stumbled.com currently lists DNStinations, a MarkMonitor company, as the owner. This is the same company that helped the social sharing site purchase Stumblers.com for $6,000 back in September, so it’s highly likely that StumbleUpon is the buyer.

The domain name is currently listed for sale on Sedo for $8,000, so it is likely that StumbleUpon paid no more than that amount.

Steve Crocker – The History of Domain Names

Steve Crocker – Program Manager at ARPA

(Advanced Research Projects Agency)

Date: 01/01/1970

Steve Crocker was a program manager at ARPA (Advanced Research Projects Agency)

Stephen D. Crocker (born October 15, 1944, in Pasadena, California) is the inventor of the Request for Comments series, authoring the very first RFC and many more. He received his bachelor’s degree (1968) and PhD (1977) from the University of California, Los Angeles. Crocker is chair of the board of the Internet Corporation for Assigned Names and Numbers, ICANN.

Steve Crocker has worked in the Internet community since its inception. As a UCLA graduate student in the 1960s, he was part of the team that developed the protocols for the ARPANET which were the foundation for today’s Internet. For this work, Crocker was awarded the 2002 IEEE Internet Award.

While at UCLA Crocker taught an extension course on computer programming (for the IBM 7094 mainframe computer). The class was intended to teach digital processing and assembly language programming to high school teachers, so that they could offer such courses in their high schools. A number of high school students were also admitted to the course, to ensure that they would be able to understand this new discipline. Crocker was also active in the newly formed UCLA Computer Club.

Crocker has been a program manager at Defense Advanced Research Projects Agency (DARPA), a senior researcher at USC’s Information Sciences Institute, founder and director of the Computer Science Laboratory at The Aerospace Corporation and a vice president at Trusted Information Systems. In 1994, Crocker was one of the founders and chief technology officer of CyberCash, Inc. In 1998, he founded and ran Executive DSL, a DSL-based ISP. In 1999 he cofounded and was CEO of Longitude Systems. He is currently CEO of Shinkuro, a research and development company.

Steve Crocker was instrumental in creating the ARPA “Network Working Group”, which later was the context in which the IETF was created.

He has also been an IETF security area director, a member of the Internet Architecture Board, chair of the ICANN Security and Stability Advisory Committee, a board member of the Internet Society and numerous other Internet-related volunteer positions.

In 2012, Crocker was inducted into the Internet Hall of Fame by the Internet Society.