
YouTube – The History of Domain Names

YouTube – Video sharing

Date: 01/01/2005

YouTube is an American video-sharing website headquartered in San Bruno, California, United States. The service was created by three former PayPal employees, Chad Hurley, Steve Chen, and Jawed Karim, in February 2005. In November 2006, it was bought by Google for US$1.65 billion. YouTube now operates as one of Google’s subsidiaries. The site allows users to upload, view, rate, share, add to favorites, report, and comment on videos, and it makes use of WebM, H.264/MPEG-4 AVC, and Adobe Flash Video technology to display a wide variety of user-generated and corporate media videos. Available content includes video clips, TV show clips, music videos, short and documentary films, audio recordings, movie trailers, and other content such as video blogging, short original videos, and educational videos.

Most of the content on YouTube has been uploaded by individuals, but media corporations including CBS, the BBC, Vevo, Hulu, and other organizations offer some of their material via YouTube, as part of the YouTube partnership program. Unregistered users can watch videos on the site, but registered users are permitted to upload an unlimited number of videos and add comments to videos. Videos deemed potentially offensive are available only to registered users affirming themselves to be at least 18 years old. In July 2016, the website was ranked as the second most popular site by Alexa Internet, a web traffic analysis company.

YouTube earns advertising revenue from Google AdSense, a program which targets ads according to site content and audience. The vast majority of its videos are free to view, but there are exceptions, including subscription-based premium channels, film rentals, and YouTube Red, a subscription service offering ad-free access to the website and access to exclusive content made in partnership with existing users.

YouTube was founded by Chad Hurley, Steve Chen, and Jawed Karim, who were all early employees of PayPal. Hurley had studied design at Indiana University of Pennsylvania, and Chen and Karim studied computer science together at the University of Illinois at Urbana-Champaign. According to a story that has often been repeated in the media, Hurley and Chen developed the idea for YouTube during the early months of 2005, after they had experienced difficulty sharing videos that had been shot at a dinner party at Chen’s apartment in San Francisco. Karim did not attend the party and denied that it had occurred, but Chen commented that the idea that YouTube was founded after a dinner party “was probably very strengthened by marketing ideas around creating a story that was very digestible”.

Karim said the inspiration for YouTube first came from Janet Jackson’s role in the 2004 Super Bowl incident, when her breast was exposed during her performance, and later from the 2004 Indian Ocean tsunami. Karim could not easily find video clips of either event online, which led to the idea of a video sharing site. Hurley and Chen said that the original idea for YouTube was a video version of an online dating service, and had been influenced by the website Hot or Not.

YouTube began as a venture capital-funded technology startup, primarily from an $11.5 million investment by Sequoia Capital between November 2005 and April 2006. YouTube’s early headquarters were situated above a pizzeria and Japanese restaurant in San Mateo, California. The domain name www.youtube.com was activated on February 14, 2005, and the website was developed over the subsequent months. The first YouTube video, titled Me at the zoo, shows co-founder Jawed Karim at the San Diego Zoo. The video was uploaded on April 23, 2005, and can still be viewed on the site. YouTube offered the public a beta test of the site in May 2005. The first video to reach one million views was a Nike advertisement featuring Ronaldinho in September 2005. Following a $3.5 million investment from Sequoia Capital in November, the site launched officially on December 15, 2005, by which time the site was receiving 8 million views a day. The site grew rapidly, and in July 2006 the company announced that more than 65,000 new videos were being uploaded every day, and that the site was receiving 100 million video views per day. According to data published by market research company comScore, YouTube is the dominant provider of online video in the United States, with a market share of around 43% and more than 14 billion views of videos in May 2010.

In 2014, YouTube said that 300 hours of new videos were uploaded to the site every minute, three times more than one year earlier, and that around three quarters of the material came from outside the U.S. The site has 800 million unique users a month. It is estimated that in 2007 YouTube consumed as much bandwidth as the entire Internet in 2000. According to third-party web analytics providers Alexa and SimilarWeb, YouTube is the third most visited website in the world, as of June 2015; SimilarWeb also lists YouTube as the top TV and video website globally, attracting more than 15 billion visitors per month.

The choice of the name www.youtube.com led to problems for a similarly named website, www.utube.com. The site’s owner, Universal Tube & Rollform Equipment, filed a lawsuit against YouTube in November 2006 after being regularly overloaded by people looking for YouTube. Universal Tube has since changed the name of its website to www.utubeonline.com. In October 2006, Google Inc. announced that it had acquired YouTube for $1.65 billion in Google stock, and the deal was finalized on November 13, 2006.

In March 2010, YouTube began free streaming of certain content, including 60 cricket matches of the Indian Premier League. According to YouTube, this was the first worldwide free online broadcast of a major sporting event. On March 31, 2010, the YouTube website launched a new design, with the aim of simplifying the interface and increasing the time users spend on the site. Google product manager Shiva Rajaraman commented: “We really felt like we needed to step back and remove the clutter.” In May 2010, it was reported that YouTube was serving more than two billion videos a day, which it described as “nearly double the prime-time audience of all three major US television networks combined”. In May 2011, YouTube reported in its company blog that the site was receiving more than three billion views per day. In January 2012, YouTube stated that the figure had increased to four billion videos streamed per day.

In October 2010, Hurley announced that he would be stepping down as chief executive officer of YouTube to take an advisory role, and that Salar Kamangar would take over as head of the company. In April 2011, James Zern, a YouTube software engineer, revealed that 30% of videos accounted for 99% of views on the site. In November 2011, the Google+ social networking site was integrated directly with YouTube and the Chrome web browser, allowing YouTube videos to be viewed from within the Google+ interface.

In December 2011, YouTube launched a new version of the site interface, with the video channels displayed in a central column on the home page, similar to the news feeds of social networking sites. At the same time, a new version of the YouTube logo was introduced with a darker shade of red, the first change in design since October 2006. In May 2013, YouTube launched a pilot program to begin offering some content providers the ability to charge $0.99 per month or more for certain channels, but the vast majority of its videos would remain free to view.

In February 2015, YouTube announced the launch of a new app specifically for use by children visiting the site, called YouTube Kids. It allows parental controls and restrictions on who can upload content, and is available for both Android and iOS devices. Later, on August 26, 2015, YouTube Gaming was launched, a platform for video gaming enthusiasts intended to compete with Twitch.tv. 2015 also saw the announcement of a premium YouTube service titled YouTube Red, which provides users with ad-free content as well as the ability to download videos, among other features. On August 10, 2015, Google announced that it was creating a new company, Alphabet, to act as the holding company for Google, with the change in financial reporting to begin in the fourth quarter of 2015. YouTube remains a subsidiary of Google. In January 2016, YouTube expanded its headquarters in San Bruno by purchasing an office park for $215 million. The complex has 554,000 square feet of space and can house up to 2,800 employees.

Video technology

Playback

Previously, viewing YouTube videos on a personal computer required the Adobe Flash Player plug-in to be installed in the browser. In January 2010, YouTube launched an experimental version of the site that used the built-in multimedia capabilities of web browsers supporting the HTML5 standard. This allowed videos to be viewed without requiring Adobe Flash Player or any other plug-in to be installed. The YouTube site had a page that allowed supported browsers to opt into the HTML5 trial. Only browsers that supported HTML5 video using the H.264 or WebM formats could play the videos, and not all videos on the site were available. On January 27, 2015, YouTube announced that HTML5 would be the default playback method on supported browsers, including Google Chrome, Safari 8, and Internet Explorer 11. YouTube has also experimented with Dynamic Adaptive Streaming over HTTP (MPEG-DASH), an adaptive bit-rate HTTP-based streaming solution that optimizes the bitrate and quality for the available network, and has used Adobe Dynamic Streaming for Flash.
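
As a rough illustration of the capability check described above, the TypeScript sketch below (the function name and codec strings are assumptions for illustration, not YouTube code) probes whether a browser can play H.264 or WebM through the HTML5 video element:

```typescript
// Rough sketch: detect whether HTML5 playback with H.264 or WebM is available,
// before a player would fall back to a plug-in. Codec strings are illustrative.
function html5PlaybackSupport(): "h264" | "webm" | "none" {
  const probe = document.createElement("video");
  if (typeof probe.canPlayType !== "function") {
    return "none"; // no HTML5 media support at all
  }
  // canPlayType returns "", "maybe", or "probably"
  if (probe.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"') !== "") {
    return "h264";
  }
  if (probe.canPlayType('video/webm; codecs="vp8, vorbis"') !== "") {
    return "webm";
  }
  return "none";
}
```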

Uploading

All YouTube users can upload videos up to 15 minutes each in duration. Users who have a good track record of complying with the site’s Community Guidelines may be offered the ability to upload videos up to 12 hours in length, which requires verifying the account, normally through a mobile phone. When YouTube was launched in 2005, it was possible to upload long videos, but a ten-minute limit was introduced in March 2006 after YouTube found that the majority of videos exceeding this length were unauthorized uploads of television shows and films. The 10-minute limit was increased to 15 minutes in July 2010. If an up-to-date browser version is used, videos greater than 20 GB can be uploaded. Video captions are generated automatically using speech recognition technology when a video is uploaded. Such captioning is usually not perfectly accurate, so YouTube provides several options for manually entering captions for greater accuracy.

YouTube accepts videos uploaded in most container formats, including .AVI, .MKV, .MOV, .MP4, DivX, .FLV, .ogg, and .ogv. These include video formats such as MPEG-4, MPEG, VOB, and .WMV. It also supports 3GP, allowing videos to be uploaded from mobile phones. Videos with progressive scanning or interlaced scanning can be uploaded, but for the best video quality, YouTube suggests interlaced videos be deinterlaced before uploading. All the video formats on YouTube use progressive scanning.

Quality and formats

YouTube originally offered videos at only one quality level, displayed at a resolution of 320×240 pixels using the Sorenson Spark codec (a variant of H.263), with mono MP3 audio. In June 2007, YouTube added an option to watch videos in 3GP format on mobile phones. In March 2008, a high-quality mode was added, which increased the resolution to 480×360 pixels. In November 2008, 720p HD support was added. At the time of the 720p launch, the YouTube player was changed from a 4:3 aspect ratio to a widescreen 16:9. With this new feature, YouTube began a switchover to H.264/MPEG-4 AVC as its default video compression format. In November 2009, 1080p HD support was added. In July 2010, YouTube announced that it had launched a range of videos in 4K format, which allows a resolution of up to 4096×3072 pixels. In June 2015, support for 8K resolution was added, with the videos playing at 7680×4320 pixels.

In June 2014, YouTube introduced videos playing at 60 frames per second, in order to reproduce video games with a frame rate comparable to high-end graphics cards. The videos play back at a resolution of 720p or higher. YouTube videos are available in a range of quality levels. The former names of standard quality (SQ), high quality (HQ) and high definition (HD) have been replaced by numerical values representing the vertical resolution of the video. The default video stream is encoded in the VP9 format with stereo Opus audio; if VP9/WebM is not supported in the browser/device or the browser’s user agent reports Windows XP, then H.264/MPEG-4 AVC video with stereo AAC audio is used instead.
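
A minimal sketch of that fallback rule, assuming a browser environment, might look like the following TypeScript; the function name, codec strings, and the Windows XP user-agent test are illustrative assumptions rather than YouTube’s published player logic:

```typescript
// Rough sketch of the stream-selection rule described above.
function pickStream(userAgent: string): "vp9+opus" | "h264+aac" {
  const onWindowsXP = /Windows NT 5\.1/.test(userAgent); // XP identifies as NT 5.1
  const vp9Supported =
    typeof MediaSource !== "undefined" &&
    MediaSource.isTypeSupported('video/webm; codecs="vp9, opus"');
  // Prefer VP9/WebM with Opus audio; otherwise fall back to H.264 with AAC.
  return vp9Supported && !onWindowsXP ? "vp9+opus" : "h264+aac";
}

// Example usage: pickStream(navigator.userAgent)
```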

Content accessibility

YouTube offers users the ability to view its videos on web pages outside its website. Each YouTube video is accompanied by a piece of HTML that can be used to embed it on any page on the Web. This functionality is often used to embed YouTube videos in social networking pages and blogs. Users wishing to post a video discussing, inspired by, or related to another user’s video are able to make a “video response”. On August 27, 2013, YouTube announced that it would remove video responses because they were an underused feature. Embedding, rating, commenting, and response posting can be disabled by the video owner.
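
For illustration only, an embed snippet of the kind described above could be generated along these lines in TypeScript; the iframe-style player URL is the commonly seen form, and the helper name and default dimensions are assumptions, since the exact markup YouTube supplies has varied over time:

```typescript
// Illustrative sketch: build an iframe-style embed snippet for a video ID.
function embedSnippet(videoId: string, width = 560, height = 315): string {
  const src = `https://www.youtube.com/embed/${encodeURIComponent(videoId)}`;
  return (
    `<iframe width="${width}" height="${height}" src="${src}" ` +
    `frameborder="0" allowfullscreen></iframe>`
  );
}

// Example usage: embedSnippet("VIDEO_ID"), where VIDEO_ID is a real 11-character video ID
```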

YouTube does not usually offer a download link for its videos, and intends for them to be viewed through its website interface. A small number of videos, such as the weekly addresses by President Barack Obama, can be downloaded as MP4 files. Numerous third-party web sites, applications and browser plug-ins allow users to download YouTube videos. In February 2009, YouTube announced a test service, allowing some partners to offer video downloads for free or for a fee paid through Google Checkout. In June 2012, Google sent cease and desist letters threatening legal action against several websites offering online download and conversion of YouTube videos. In response, Zamzar removed the ability to download YouTube videos from its site. The default settings when uploading a video to YouTube will retain a copyright on the video for the uploader, but since July 2012 it has been possible to select a Creative Commons license as the default, allowing other users to reuse and remix the material if it is free of copyright.

Platforms

Most modern smartphones are capable of accessing YouTube videos, either within an application or through an optimized website. YouTube Mobile was launched in June 2007, using RTSP streaming for the video. Not all of YouTube’s videos are available on the mobile version of the site. Since June 2007, YouTube’s videos have been available for viewing on a range of Apple products. This required YouTube’s content to be transcoded into Apple’s preferred video standard, H.264, a process that took several months. YouTube videos can be viewed on devices including Apple TV, iPod Touch and the iPhone. In July 2010, the mobile version of the site was relaunched based on HTML5, avoiding the need to use Adobe Flash Player and optimized for use with touch screen controls. The mobile version is also available as an app for the Android platform. In September 2012, YouTube launched its first app for the iPhone, following the decision to drop YouTube as one of the preloaded apps in the iPhone 5 and iOS 6 operating system. According to GlobalWebIndex, YouTube was used by 35% of smartphone users between April and June 2013, making it the third most used app.

A TiVo service update in July 2008 allowed the system to search and play YouTube videos. In January 2009, YouTube launched “YouTube for TV”, a version of the website tailored for set-top boxes and other TV-based media devices with web browsers, initially allowing its videos to be viewed on the PlayStation 3 and Wii video game consoles. In June 2009, YouTube XL was introduced, which has a simplified interface designed for viewing on a standard television screen. YouTube is also available as an app on Xbox Live. On November 15, 2012, Google launched an official app for the Wii, allowing users to watch YouTube videos from the Wii channel. An app is also available for Wii U and Nintendo 3DS, and videos can be viewed on the Wii U Internet Browser using HTML5. Google made YouTube available on the Roku player on December 17, 2013, and, in October 2014, the Sony PlayStation 4.

YU – The History of Domain Names

The previous code for Yugoslavia, YU, was removed, but the .yu ccTLD remained. After a two-year transition to the Serbian .rs and Montenegrin .me domains, the .yu domain was phased out

Date: 03/01/2010

The previous code for Yugoslavia, YU, was removed in 2003, but the .yu ccTLD remained in operation. Finally, after a two-year transition to the Serbian .rs and Montenegrin .me domains, the .yu domain was phased out in March 2010.

The .YU domain delegation has been removed from the DNS root zone as the final step in the process to retire the domain from usage. ICANN staff provided this report to the community as information. All websites using the .YU country code top-level domain (ccTLD) will cease to be available online. The ccTLD assigned to the former Republic of Yugoslavia has been replaced by .rs (for Serbia) and .me (for Montenegro). ICANN allowed extra time for sites to make the transition before removing the .YU domain. It was reported that up to 4,000 websites were still using the .YU domain at the time.

On 6 November 2007, based on the verbally agreed approach, a letter was transmitted to RNIDS formally documenting the reporting protocol.

Following the delegation of .RS, the registry took a staged approach to the decommissioning of the .YU domain. In the first phase, all names registered within .YU had their respective .RS domain reserved. This was conducted as part of a sunrise process that involved other rights-based allocations prior to general availability.

During the first six months of .RS operations, only existing .YU domain holders were able to obtain domains corresponding to the reservations. As the domains have a hierarchical model (.CO.RS, .ORG.RS, etc.), rights were also awarded for domains directly under .RS on a first-come, first-served basis.

By September 2008, after the six-month period, unredeemed .RS reservations expired, and general availability started for .RS domains. The .YU registry was then curated, with inactive and unused .YU domains being identified. 2,769 .YU domains were deemed still active, and all remaining .YU domains were removed in March 2009. Between March and May 2009, 1,236 domain holders appealed to have their domains reinstated.

ICANN received a short status update from RNIDS in early 2008; however, nothing further was reported under the reporting protocol regarding the transition or any difficulties that had been encountered.

As of June 2009, there were 4,266 .YU domains still delegated, down from 32,772. At that time there were 26,294 domains registered in .RS. It is worth noting that of these remaining 4,266 domains, only approximately 200 did not also have the matching .RS domain.

Zooz – The History of Domain Names

Zooz uses 1% of its funding to buy Zooz.com

January 4, 2012

Zooz spends 1% of its recent funding round on a domain name, plus other interesting domain name purchases.

It’s time for the weekly roundup of end user domain name purchases.

Let’s start with in-app mobile payments startup Zooz. The company snagged $1.5 million in funding in November. How better to use the money than to buy Zooz.com? Actually, it only had to use 1% of the money to buy the domain for $15,000 at Sedo. The company’s domain name to date has been Zooz.co.

Zynga – The History of Domain Names

Zynga acquires Chefville.com domain name

July 18, 2012

Zynga pockets key domain name for new game.

Last month social gaming company Zynga announced its newest games, including Chefville. But it didn’t have the Chefville.com domain name — until now.

Chefville.com was registered back in January 2006. Its current owner, an individual in California, appears to have owned it that whole time. (The oldest whois record recorded at DomainTools for the domain is in 2007 but shows the same owner.) The registrant parked the domain name at Sedo for many years, and probably thought nothing of it until last month.

Wyse – The History of Domain Names

Wyse Technology – WYSE.com was registered

Date: 10/14/1987

On October 14, 1987, Wyse Technology registered the wyse.com domain name, making it the 94th .com domain ever to be registered.

Wyse is a manufacturer of cloud computing systems. The company is best known for its video terminal line introduced in the 1980s, which competed with the market-leading Digital. It also had a successful line of IBM PC compatibles in the mid-to-late 1980s, but was outcompeted by companies such as Dell starting late in the decade. Current products include thin client hardware and software as well as desktop virtualization. Other products include cloud software-supporting desktop computers, laptops, and mobile devices. Dell Cloud Client Computing is partnered with IT vendors such as Citrix, IBM, Microsoft, and VMware.

On April 2, 2012, Dell and Wyse announced that Dell intends to take over the company. With this acquisition Dell would surpass their rival Hewlett-Packard in the market for thin clients. On May 25, 2012 Dell informed the market that it had completed the acquisition of Wyse Technology, which is now known as Dell Wyse.

Company History:

Wyse Technology, Inc. is a designer and manufacturer of computer monitors and terminals for mainframe, mini, and desktop computing markets. It is the worldwide leader in the video display terminal market, with over seven million units shipped by 1995. Taiwanese investors purchased Wyse in 1989, but the company maintains headquarters offices in the United States and sales offices throughout the world.

Wyse was launched in 1981 by husband-and-wife team Bernard and Grace Tse. Bernard was a native of Hong Kong and Grace was Taiwanese. They met in the United States while studying engineering at the University of Illinois. While still in school, the couple became convinced that they could design a computer terminal that was better and less expensive to produce than terminals that were already being sold. They persuaded two colleagues to join them in a business venture that would eventually become the largest terminal manufacturer in the world.

The Tses were unlikely candidates for such an achievement. Indeed, by the time they decided to launch their company, the computer terminal market was already glutted with well over 100 manufacturers. Furthermore, the industry was about to enter a period of consolidation; as terminal producers began competing fiercely on price, economies of scale became paramount. The Tses, undeterred by the increasingly competitive business environment, marched ahead with plans to launch a manufacturing operation and began looking for investment capital. They first approached venture capital firms, which at the time were the most likely sources of cash for a technology start-up. But those groups believed that the Tses’ chances of success were slim, and refused to front any money.

Undaunted, the Tses mortgaged their house for startup cash. That show of good faith helped them to convince David Jackson, the founder of nearby Altos Computer Systems, to supply $1.6 million in additional funding. With cash in hand, the Tses scrambled during the next few years to design a low-cost, high-performance computer terminal that could beat the competition. They achieved their goal in 1983, when they introduced the company’s first big hit: the WY50. Offering a larger screen and higher resolution, the WY50 was priced a stunning 44 percent lower than its nearest competitors. “People thought we were giving away $50 with every shipment,” Grace Tse recalled in the November 16, 1987, Forbes. The WY50 was an immediate success. Within a few years Wyse had sprinted past its competitors to become one of the largest manufacturers of terminals in the world, second only to computer leviathan International Business Machines (IBM).

Wyse posted an impressive $4 million sales figure in 1983. More importantly, the company showed a profit for the first time, and would continue to record surpluses for more than four straight years. Indeed, between 1983 and 1987 Wyse managed to increase annual sales to more than $250 million. The Tses achieved that success using a relatively straightforward operating and sales strategy. Importantly, they tapped Grace’s Taiwan roots and set up low-cost manufacturing operations in that country. In addition to inexpensive labor, Wyse’s production facilities benefited from close proximity to low-cost parts suppliers. Wyse kept marketing costs low by selling through established distributors and resellers, rather than through a more expensive direct sales force or retail channel.

Complementing Wyse’s operations and distribution savvy was a shrewd product strategy. Rather than spend heavily to research and develop cutting-edge technology, Wyse focused on its core competencies of manufacturing and distribution. It waited for its competitors to establish a new technology. Once demand for the new technology reached a high volume, Wyse would jump in with its own low-cost version. The notable wrinkle in that tactic was that Wyse would also tag on neat, low-tech features that gave its terminals an edge in the marketplace other than a low price. Such gimmicks included European styling, larger screens, and tilt-and-swivel bases. The end result was surging demand for Wyse’s products and its stock. After the company went public with a stock sale in 1984, Wyse’s stock climbed from $7 to $39 per share by 1987 in the wake of investor excitement.

Wyse’s gains during the mid-1980s were primarily the result of the explosive success of Wyse’s WY50 and subsequent terminal models. But Wyse bolstered that core product line with its venture into personal computers. The Tses recognized that most future growth in the computer industry would be in personal computers, rather than in terminals that were connected to mainframes. To meet that demand, Wyse began developing and selling low-cost personal computers using roughly the same strategy it had used with its terminals. It utilized existing technology to mass produce low-cost, attractive computers. And instead of selling the units through traditional retail or direct sales channels, it sold the computers to other companies that simply attached their name to the units and resold them.

By 1985 Wyse was still generating more than 90 percent of its revenues from sales of terminals, of which it was shipping about 300,000 annually. As it moved to emphasize its personal computer business, however, that share dropped to about 75 percent by 1987. Among other initiatives, Wyse inked a deal in 1986 to supply personal computers as house-brand products to Businessland, which at the time was the top computer retailer in the United States. That helped boost Wyse’s 1987 net income 44 percent to $18 million. Wyse management boldly predicted that sales in 1988 would balloon to about $400 million.

Wyse’s optimistic sales target for 1988 reflected management’s intent to aggressively pursue the booming market for personal computers. To that end, in 1986 Wyse had purchased a computer equipment manufacturer named Amdek. Amdek was a leader in the market for computer monitors. Shortly after the buyout, Wyse announced a plan to begin marketing its computers under the Amdek name, launching Wyse to the status of computer retailer. Wyse quickly developed new lines of personal computers based on the then-popular 80286 and 80386 Intel microprocessors. To give its units an advantage, Wyse designed them to be easily upgradable, meaning that the owners could adapt the systems to changing technology instead of having to replace the entire unit when it became obsolete.

The jump into personal computer retailing marked a divergence from the route Wyse had taken during the mid-1980s. By becoming a retailer, the company was trying to establish itself as a full-line supplier of computer systems. It hoped to use its traditional competencies to prosper in the retail marketplace and steal market share from venerable competitors like IBM, Apple, and Compaq. To spearhead the effort, Wyse hired H.L. “Sparky” Sparks, the personal computer industry veteran who had developed IBM’s successful distribution strategy earlier in the decade. Sparks was intrigued by the opportunity, because he thought he could parlay Wyse’s inexpensive systems into retail success. Wyse’s newest computer in 1987 (the WY3216), for example, sold for about $1,500 to $2,000 less than comparable systems offered by IBM and Compaq.

Some analysts were skeptical of Wyse’s retail strategy, with good reason. By the late 1980s the personal computer market was becoming glutted with competition. Just a few years earlier, in fact, one of Wyse’s competitors in the terminal market, TeleVideo Systems, had ventured into the personal computer retailing market with disastrous results. But the Tses had faced intense competition before, and were confident that they could profit by retailing their PCs. Most observers were optimistic, as well. The company’s stock price shot up during 1987 (before the stock crash), and Inc. magazine predicted that Wyse would be the third fastest-growing small public company in 1987.

Wyse began shipping the first Amdek personal computers late in 1987. As a result, Wyse’s sales rose rapidly, surpassing $400 million by 1989. That growth belied profit setbacks, however. Indeed, a number of factors combined to put an end to the 23 consecutive quarters of profit growth Wyse had enjoyed since the early 1980s. Among other problems, Wyse’s Amdek computer line was slow to win the approval of top-tier PC dealers, largely as a result of aggressive price slashing throughout the industry. To make matters worse, sales of Wyse’s PCs through nonretail channels slowed after the company raised prices on the units during the height of the memory chip shortage of 1988. Then, after posting a miserable fourth quarter loss of $15 million in 1988, Wyse was dumped by one of its largest PC buyers, Tandem Computers Inc.

To try to stem the tide of red ink flowing from its balance sheet, Tse hired Larry Lummis to take control of the Amdek subsidiary. Lummis had been one of the original cofounders of Wyse back in 1981, and agreed to come out of retirement to help his old business partner. But Amdek continued to bleed cash, and Wyse was forced to announce big quarterly losses throughout 1988 and into 1989. That’s when Tse began looking for a deep-pocketed partner to help pull the company out of its slump. Wyse found its savior in 1989, when the Taiwan-based Mitac Group agreed to buy the company for $262 million. The Mitac Group was itself a division of Chanel International Corp., a Taiwanese government-supported consortium that was pushing to boost Taiwan’s presence in the global computer industry.

Moving to the chairman and chief executive posts at Wyse following the December 1989 buyout was Morris Chang, the head of Chanel International. Chang received an engineering degree at Massachusetts Institute of Technology before earning his doctorate in electrical engineering at Stanford. Among other management posts in Taiwan and the United States, he had served as president at General Instruments Corp. before agreeing to head Taiwan’s Industrial Technology Research Institute (ITRI) in 1985. It was through ITRI that Chang helped to construct the consortium of technology companies that, going into 1990, included Wyse. “The acquisition [of Wyse] was perceived as a step toward globalization for Taiwan industry…,” Chang explained in the June 25, 1990 Electronic Business.

Under new management, Wyse began to back off of its drive into personal computer retailing, focus on its traditional core strengths, and develop new products that would help it succeed in the more competitive and rapidly evolving computer industry. Wyse was aided in that effort by the other companies in the Taiwanese consortium, all of which worked together, exchanging technology, partnering manufacturing, and sharing marketing and distribution channels. Wyse introduced a full range of 386- and 486-based PCs during the early 1990s, and even jumped into the market for multiprocessor servers with a full family of systems. Meanwhile, it continued to bolster its lines of terminals. The result was that the company regained profitability in 1991 and doubled profits in 1992, when sales jumped to $480 million.

Wyse eventually decided to completely exit the systems side of the hyper-competitive computer business. Under the direction of president and chief executive Douglas Chance, whom Chang hired in 1994 (Chang remained as chairman of the board), Wyse shifted its focus to its traditional strengths in terminals and monitors. Indeed, restructuring during 1993 and 1994 left Wyse Technology Inc. a company focused entirely on video display terminals and monitors. In that niche, Wyse had become the leader after surpassing IBM in 1992. By 1995, in fact, its Qume, Link, and Wyse brand terminals controlled 37 percent of the general purpose computer terminal market (which included terminals connected to mainframes and minicomputers). Augmenting those products were its low-cost, high-performance desktop computer monitors, which proffered such features as digital panel controls and low-radiant emissions.

In November 1995, Wyse introduced the Winterm product line, the world’s first Microsoft Windows terminal. The terminal was designed to run the popular Windows interface, thus providing a low-cost alternative to networking several complete personal computer systems; for example, a company could purchase several Winterm terminals for $500 to $750 each and connect them to a server, allowing multiple users to get the look, feel, and approximate performance of a PC, but at a much lower cost. For the remainder of the decade, the privately-held Wyse planned to remain focused on markets for terminals and PC video displays.

x25 – The History of Domain Names

X.25 and public data networks

Date: 01/01/1974

X.25 is an ITU-T standard protocol suite for packet switched wide area network (WAN) communication. An X.25 WAN consists of packet-switching exchange (PSE) nodes as the networking hardware, and leased lines, plain old telephone service connections, or ISDN connections as physical links. X.25 is a family of protocols that was popular during the 1980s with telecommunications companies and in financial transaction systems such as automated teller machines. X.25 was originally defined by the International Telegraph and Telephone Consultative Committee (CCITT, now ITU-T) in a series of drafts and finalized in a publication known as The Orange Book in 1976.

A public data network is a network established and operated by a telecommunications administration, or a recognized private operating agency, for the specific purpose of providing data transmission services for the public. In communications, a PDN is a circuit- or packet-switched network that is available to the public and that can transmit data in digital form. A PDN provider is a company that provides access to a PDN and that provides any of X.25, frame relay, or cell relay (ATM) services. Access to a PDN generally includes a guaranteed bandwidth, known as the committed information rate (CIR). Costs for the access depend on the guaranteed rate. PDN providers differ in how they charge for temporary increases in required bandwidth (known as surges). Some use the amount of overrun; others use the surge duration.

While X.25 has, to a large extent, been replaced by less complex protocols, especially the Internet protocol (IP), the service is still used (e.g. as of 2012 in credit card payment industry) and available in niche and legacy applications.

Based on ARPA’s research, packet switching network standards were developed by the International Telecommunication Union (ITU) in the form of X.25 and related standards. While using packet switching, X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976.

The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure. Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET. The first public dial-in networks used asynchronous TTY terminal protocols to reach a concentrator operated in the public network. Some networks, such as CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols.

In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy, which also provided communications, content, and entertainment features. Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet, which was popular amongst hobbyist computer users.

History (X.25)

X.25 is one of the oldest packet-switched services available. It was developed before the OSI Reference Model.[4] The protocol suite is designed as three conceptual layers, which correspond closely to the lower three layers of the seven-layer OSI model.[5] It also supports functionality not found in the OSI network layer.

X.25 was developed in the ITU-T (formerly CCITT) Study Group VII based upon a number of emerging data network projects. Various updates and additions were worked into the standard, eventually recorded in the ITU series of technical books describing the telecommunication systems. These books were published every fourth year with different-colored covers. The X.25 specification is only part of the larger set of X-Series specifications on public data networks.

The public data network was the common name given to the international collection of X.25 providers. Their combined network had large global coverage during the 1980s and into the 1990s.

Publicly accessible X.25 networks (Compuserve, Tymnet, Euronet, PSS, Datapac, Datanet 1 and Telenet) were set up in most countries during the 1970s and 1980s, to lower the cost of accessing various online services.

Beginning in the early 1990s in North America, use of X.25 networks (predominated by Telenet and Tymnet) started to be replaced by Frame Relay, a service offered by national telephone companies. Most systems that required X.25 now use TCP/IP; however, it is possible to transport X.25 over TCP/IP when necessary.

X.25 networks are still in use throughout the world. A variant called AX.25 is also used widely by amateur packet radio. Racal Paknet, now known as Widanet, is still in operation in many regions of the world, running on an X.25 protocol base. In some countries, like the Netherlands or Germany, it is possible to use a stripped version of X.25 via the D-channel of an ISDN-2 (or ISDN BRI) connection for low volume applications such as point-of-sale terminals; but, the future of this service in the Netherlands is uncertain.

Additionally, X.25 is still under heavy use in the aeronautical business (especially in the Asian region), even though a transition to more modern protocols such as X.400 appears unavoidable as X.25 hardware becomes increasingly rare and costly. As recently as March 2006, the United States National Airspace Data Interchange Network was using X.25 to interconnect remote airfields with Air Route Traffic Control Centers.

France was one of the last remaining countries where a commercial end-user service based on X.25 operated. Known as Minitel, it was based on Videotex, itself running on X.25. In 2002, Minitel had about 9 million users, and in 2011 it still accounted for about 2 million users in France when France Télécom announced it would completely shut down the service by 30 June 2012. As planned, the service was terminated on 30 June 2012, with 800,000 terminals still in operation at the time.

X25protocol – The History of Domain Names

X.25 protocol approved

Date: 01/01/1976

X.25

A packet-switching protocol for wide area network (WAN) connectivity that uses a public data network (PDN) that parallels the voice network of the Public Switched Telephone Network (PSTN). The current X.25 standard supports synchronous, full-duplex communication at speeds up to 2 Mbps over two pairs of wires, but most implementations are 64-Kbps connections via a standard DS0 link.

X.25 was developed by common carriers in the early 1970s and approved in 1976 by the CCITT, the precursor of the International Telecommunication Union (ITU), and was designed as a global standard for a packet-switching network. X.25 was originally designed to connect remote character-based terminals to mainframe hosts. The original X.25 standard operated only at 19.2 Kbps, but this was generally sufficient for character-based communication between mainframes and terminals.

Because X.25 was designed when analog telephone transmission over copper wire was the norm, X.25 packets have a relatively large overhead of error-correction information, resulting in comparatively low overall bandwidth. Newer WAN technologies such as frame relay, Integrated Services Digital Network (ISDN), and T-carrier services are now generally preferred over X.25. However, X.25 networks still have applications in areas such as credit card verification, automatic teller machine transactions, and other dedicated business and financial uses.

How X.25 Works

The X.25 standard corresponds in functionality to the first three layers of the Open Systems Interconnection (OSI) reference model for networking. Specifically, X.25 defines the following:

The physical layer interface for connecting data terminal equipment (DTE), such as computers and terminals at the customer premises, with the data communications equipment (DCE), such as X.25 packet switches at the X.25 carrier’s facilities. The physical layer interface of X.25 is called X.21bis and was derived from the RS-232 interface for serial transmission.

The data-link layer protocol called Link Access Procedure, Balanced (LAPB), which defines encapsulation (framing) and error-correction methods. LAPB also enables the DTE or the DCE to initiate or terminate a communication session or initiate data transfer. LAPB is derived from the High-level Data Link Control (HDLC) protocol.

The network layer protocol called the Packet Layer Protocol (PLP), which defines how to address and deliver X.25 packets between end nodes and switches on an X.25 network using permanent virtual circuits (PVCs) or switched virtual circuits (SVCs). This layer is responsible for call setup and termination and for managing transfer of packets.

An X.25 network consists of a backbone of X.25 switches that are called packet switching exchanges (PSEs). These switches provide packet-switching services that connect DCEs at the local facilities of X.25 carriers. DTEs at customer premises connect to DCEs at X.25 carrier facilities by using a device called a packet assembler/disassembler (PAD). You can connect several DTEs to a single DCE by using the multiplexing methods inherent in the X.25 protocol. Similarly, a single X.25 end node can establish several virtual circuits simultaneously with remote nodes.

An end node (DTE) can initiate a communication session with another end node by dialing its X.121 address and establishing a virtual circuit that can be either permanent or switched, depending on the level of service required. Packets are routed through the X.25 backbone network by using the ID number of the virtual circuit established for the particular communication session. This ID number is called the logical channel identifier (LCI) and is a 12-bit address that identifies the virtual circuit. Packets are generally up to 128 bytes in size, although maximum packet sizes range from 64 to 4096 bytes, depending on the system.
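
To make the header layout concrete, here is a small TypeScript sketch assuming the standard PLP header arrangement of the General Format Identifier (GFI), the logical channel fields, and the packet type octet; the function name and the example values are illustrative and are not taken from the text above:

```typescript
// Illustrative sketch: pack the 4-bit GFI, the 12-bit LCI, and the packet type
// identifier into the three octets of an X.25 PLP packet header.
function packPlpHeader(gfi: number, lci: number, packetType: number): Uint8Array {
  if (lci < 0 || lci > 0x0fff) {
    throw new RangeError("LCI must fit in 12 bits (0..4095)");
  }
  const header = new Uint8Array(3);
  header[0] = ((gfi & 0x0f) << 4) | ((lci >> 8) & 0x0f); // GFI + logical channel group number
  header[1] = lci & 0xff;                                 // logical channel number
  header[2] = packetType & 0xff;                          // packet type identifier
  return header;
}

// Example (assumed values): a Call Request on logical channel 5 with modulo-8
// sequence numbering could be built as packPlpHeader(0b0001, 0x005, 0x0b).
```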

Xerox – The History of Domain Names

xerox.com was registered

Date: 01/09/1986

Xerox managed to register xerox.com on 09-Jan-1986, making it the 7th .com domain ever registered.

Xerox Corporation is an American global corporation that sells business services and document technology products. Xerox is headquartered in Norwalk, Connecticut (moved from Stamford, Connecticut in October 2007), though its largest population of employees is based around Rochester, New York, the area in which the company was founded. The company purchased Affiliated Computer Services for $6.4 billion in early 2010. As a large developed company, it is consistently placed in the list of Fortune 500 companies. Researchers at Xerox and its Palo Alto Research Center invented several important elements of personal computing, such as the desktop metaphor GUI, the computer mouse and desktop computing. These concepts were frowned upon by the then board of directors, who ordered the Xerox engineers to share them with Apple technicians. The concepts were adopted by Apple and, later, Microsoft. With the help of these innovations, Apple and Microsoft came to dominate the personal computing revolution of the 1980s, whereas Xerox was not a major player.

Xerox was founded in 1906 in Rochester as The Haloid Photographic Company, which originally manufactured photographic paper and equipment.

In 1938 Chester Carlson, a physicist working independently, invented a process for printing images using an electrically charged drum and dry powder “toner”. However, it would take more than 20 years of refinement before the first automated machine to make copies was commercialized, using a document feeder, scanning light, and a rotating drum. Joseph C. Wilson, credited as the “founder of Xerox”, took over Haloid from his father. He saw the promise of Carlson’s invention and, in 1946, signed an agreement to develop it as a commercial product. Wilson remained as President/CEO of Xerox until 1967 and served as Chairman until his death in 1971. Looking for a term to differentiate its new system, Haloid coined the term Xerography from two Greek roots meaning “dry writing”. Haloid subsequently changed its name to Haloid Xerox in 1958 and then Xerox Corporation in 1961.

Before releasing the 914, Xerox tested the market by introducing a developed version of the prototype hand-operated equipment known as the Flat-plate 1385. The 1385 was not actually a viable copier because of its speed of operation. As a consequence, it was sold as a platemaker to the offset lithography market, perhaps most notably as a platemaker for the Addressograph-Multigraph Multilith 1250 and related sheet-fed offset printing presses. It was little more than a high quality, commercially available plate camera mounted as a horizontal rostrum camera, complete with photo-flood lighting and timer. The glass film/plate had been replaced with a selenium-coated aluminum plate. Clever electrics turned this into a quick developing and reusable substitute for film. A skilled user could produce fast, paper and metal printing plates of a higher quality than almost any other method. Having started as a supplier to the offset lithography duplicating industry, Xerox now set its sights on capturing some of offset’s market share. The 1385 was followed by the first automatic xerographic printer, the Copyflo, in 1955. The Copyflo was a large microfilm printer which could produce positive prints on roll paper from any type of microfilm negative. Following the Copyflo, the process was scaled down to produce the 1824 microfilm printer. At about half the size and weight, this still sizable machine printed onto hand-fed, cut-sheet paper which was pulled through the process by one of two gripper bars. A scaled-down version of this gripper feed system was to become the basis for the 813 desktop copier.

XXX – The History of Domain Names

.XXX Plans to Spend $15 Million in First Year Promoting New Domain to Web Users

August 16, 2011

With the first .xxx domain names from the “Founder’s” program coming online and the trademark sunrise period soon to get underway, .XXX backer ICM Registry has some very ambitious marketing plans.

I caught up with ICM Registry CEO Stuart Lawley today to understand the latest on .xxx and the company’s marketing plans.

Ambitious Marketing

One of the striking differences between .xxx and new TLD launches of a decade ago is how much the registry plans to spend informing the world that .xxx exists. The company will spend $3 million promoting .xxx before the end of the year and budgets $15 million for the first year alone to reach web users.

“I haven’t seen many registries do that,” said Lawley. “They pump the names to people to try to register them. We’re advertising to users.”

The message to users is that .xxx exists and is “safe” (thanks to a virus scanner deal for all .xxx sites), so people looking for adult material can just enter a term followed by .xxx to find what they’re looking for.

The registry is also sacrificing what would surely be a big payday — selling porn.xxx and sex.xxx — to use the sites as search engines for .xxx domain names. ICM Registry wants these sites to become traffic generators for registrants and is spending seven figures on this effort.

Rights Protection

But the registry isn’t just promoting the domains to users. Lawley said his company has spent $1 million promoting its sunrise periods to trademark holders. It will also offer a lifetime blocking service for non-adult companies. The registry price for this service is $165 and the company is operating it on a cost recovery basis. It guesstimates 10,000 domains will be “blocked” using the service.

“We’ve done our job to promote sunrise,” said Lawley. “No one can say ‘I wasn’t aware of this’”.

As for Free Speech Coalition’s continued challenges to .xxx — including providing a template letter to put the registry on notice of trademark infringement — Lawley says the company’s position is clear. It has published a white paper on trademarks and registry responsibility.

Just because a company has a registered trademark doesn’t mean it’s the only company to have rights to a domain. For example, one company could have a term registered as a trademark in the U.S. while another has it registered elsewhere, and both companies compete.

“I think ICM’s rights protection mechanisms are stronger and better than any top level domain launched to date,” he said.

Software Compatibility and New TLD Challenges

.XXX is already getting indexed in search engines such as Google and Yahoo. But it will take a while longer for some software developers to recognize it as a top level domain.

A number of software programs don’t recognize .xxx yet. ICM is working with companies such as Skype and Google to address these issues.

This is an issue I’ve been writing about for years. New TLD operators will need to watch ICM’s launch to understand some of the challenges they’ll face.

Up Next

Sunrise for trademark holders kicks off September 7. Sunrise A is for adult site trademark holders; Sunrise B is for non-adult companies wishing to block their names from resolving.

XXXTLD – The History of Domain Names

.xxx TLD received initial approval from the ICANN

Date: 06/01/2010

XXX top level domain gets initial approval

ICANN, the Internet Corporation for Assigned Names and Numbers, has given initial approval for a .xxx top-level domain. Should the initial approval become final approval, you’ll be able to register .xxx domain names.

ICANN, for its part, wasn’t sure it wanted to create a .xxx TLD for fear that it would be forced into a sort of Internet policing role. That’s not ICANN’s job, and it couldn’t be an “Internet cop” even if it wanted to.

All of this started way back in 2005 when the first proposals were being put forth to create the TLD. Complaints and ICANN’s own fear (see above) delayed the proceedings.

ICM Registry filed its application for the .XXX domain extension (gTLD) in 2004 and initially had it approved by ICANN in June 2005, only to later have it denied. The triple-x domain name has been a controversial topic in the ICANN community as well as leaders in the adult community for which it is proposed to serve. After seven years of waiting in line, ICM Registry, the registry operator for .XXX, finally made it through the ICANN process.

ICANN’s agreement with ICM Registry will likely be much different from most of the new gTLDs that we’ll see in the coming years. ICM will need to adhere to very strict compliance measures as well as pay twice the per-transaction registration fee ($1 → $2) that was included in the original draft agreement. This additional portion of the fee will be used for “anticipated risks and compliance activities” due to the nature of the top-level domain.

The next step that needs to be completed is the actual delegation of .XXX to the root zone. After the delegation, the implementation schedule will be up to ICM Registry. We imagine it won’t be too long until we start seeing .XXX resolve on the Internet.

The Internet Corporation for Assigned Names and Numbers (ICANN) gave the .XXX top-level domain (TLD) its final seal of approval in 2011.

Yahoo – The History of Domain Names

Yahoo Founded

Date: 01/01/1994

Yahoo Inc. (also known simply as Yahoo!, styled as YAHOO!) is an American multinational technology company headquartered in Sunnyvale, California. Yahoo was founded by Jerry Yang and David Filo in January 1994 and was incorporated on March 2, 1995. Yahoo was one of the pioneers of the early internet era in the 1990s. Marissa Mayer, a former Google executive, serves as CEO and President of the company.

It is globally known for its Web portal, search engine Yahoo! Search, and related services, including Yahoo! Directory, Yahoo! Mail, Yahoo! News, Yahoo! Finance, Yahoo! Groups, Yahoo! Answers, advertising, online mapping, video sharing, fantasy sports, and its social media website. It is one of the most popular sites in the United States. According to third-party web analytics providers, Alexa and SimilarWeb, Yahoo! is the highest-read news and media website, with over 7 billion views per month, being the fifth most visited website globally, as of September 2016. According to news sources, roughly 700 million people visit Yahoo websites every month. Yahoo itself claims it attracts “more than half a billion consumers every month in more than 30 languages”.

Once the most popular website in the U.S., Yahoo began a slow decline starting in the late 2000s, and on July 25, 2016, Verizon Communications announced its intent to acquire Yahoo’s internet business for US$4.8 billion—the company was once worth over US$100 billion. Its 15% stake in Alibaba Group and 35.5% stake in Yahoo! Japan will remain intact.

Founding

In January 1994 Yang and Filo were electrical engineering graduate students at Stanford University, when they created a website named “Jerry and David’s guide to the World Wide Web”. The site was a directory of other websites, organized in a hierarchy, as opposed to a searchable index of pages. In March 1994, “Jerry and David’s Guide to the World Wide Web” was renamed “Yahoo!” The “yahoo.com” domain was created on January 18, 1995.

The word “yahoo” is a backronym for “Yet Another Hierarchically Organized Oracle” or “Yet Another Hierarchical Officious Oracle”. The term “hierarchical” described how the Yahoo database was arranged in layers of subcategories. The term “oracle” was intended to mean “source of truth and wisdom”, and the term “officious”, rather than being related to the word’s normal meaning, described the many office workers who would use the Yahoo database while surfing from work. However, Filo and Yang insist they mainly selected the name because they liked the slang definition of a “yahoo” (used by college students in David Filo’s native Louisiana in the late 1980s and early 1990s to refer to an unsophisticated, rural Southerner): “rude, unsophisticated, uncouth.” This meaning derives from the Yahoo race of fictional beings from Gulliver’s Travels.

Expansion

Yahoo grew rapidly throughout the 1990s. Like many web search engines and web directories, Yahoo added a web portal. By 1998, Yahoo was the most popular starting point for web users.[32] It also made many high-profile acquisitions. Its stock price skyrocketed during the dot-com bubble, closing at an all-time high of $118.75 a share on January 3, 2000. After the bubble burst, the stock reached a post-bubble low of $8.11 on September 26, 2001.

Yahoo began using Google for search in 2000. Over the next four years, it developed its own search technologies, which it began using in 2004. In response to Google’s Gmail, Yahoo began to offer unlimited email storage in 2007. The company struggled through 2008, with several large layoffs.

In February 2008 Microsoft Corporation made an unsolicited bid to acquire Yahoo for US$44.6 billion. Yahoo formally rejected the bid, claiming that it “substantially undervalues” the company and was not in the interest of its shareholders. Three years later Yahoo had a market capitalization of US$22.24 billion. Carol Bartz replaced Yang as CEO in January 2009. In September 2011 she was removed from her position at Yahoo by the company’s chairman Roy Bostock, and CFO Tim Morse was named as Interim CEO of the company.

In early 2012, after the appointment of Scott Thompson as CEO, rumors began to spread about looming layoffs. Several key executives, such as Chief Product Officer Blake Irving, left. On April 4, 2012, Yahoo announced a cut of 2,000 jobs, or about 14 percent of its 14,100 workers. The cut was expected to save around US$375 million annually after the layoffs were completed at the end of 2012. In an email sent to employees in April 2012, Thompson reiterated his view that customers should come first at Yahoo. He also completely reorganized the company.

On May 13, 2012, Yahoo issued a press release stating that Thompson was no longer with the company, and would immediately be replaced on an interim basis by Ross Levinsohn, recently appointed head of Yahoo’s new Media group. Thompson’s total compensation for his 130-day tenure with Yahoo was at least $7.3 million.

On July 16, 2012, Marissa Mayer was appointed President and CEO of Yahoo, effective the following day.

On May 19, 2013, the Yahoo board approved a US$1.1 billion purchase of blogging site Tumblr, with the company’s CEO and founder David Karp remaining a large shareholder. The announcement reportedly signified a changing trend in the technology industry, as large corporations like Yahoo, Facebook, and Google acquired start-up Internet companies that generate low amounts of revenue as a way to connect with sizeable, fast-growing online communities. The Wall Street Journal stated that the purchase of Tumblr would satisfy the company’s need for “a thriving social-networking and communications hub.” On May 20, the company officially announced the acquisition of Tumblr. The company also announced plans to open a San Francisco office in July 2013.

On August 2, 2013, Yahoo Inc. announced the acquisition of social web browser maker RockMelt. With the acquisition, the RockMelt team, including CEO Eric Vishria and CTO Tim Howes, became part of Yahoo, and all RockMelt applications and existing web services were terminated on August 31.

Data collated by comScore during July 2013 revealed that more people in the U.S. visited Yahoo websites during the month than Google websites, the first time Yahoo had outperformed Google since 2011. The data did not incorporate visit statistics for the Yahoo-owned Tumblr site or mobile phone usage.

Recent developments

On March 12, 2014, Yahoo officially announced its partnership with Yelp, Inc., which would help Yahoo boost its local search results to better compete with services like Google.

On November 11, 2014, Yahoo announced it would be acquiring video ad company BrightRoll for $640 million. Video was one of the company’s key growth areas, and the acquisition would make Yahoo’s video ad platform the largest in the U.S.

On November 21, 2014, it was announced that Yahoo had acquired Cooliris.

By the fourth quarter of 2013, the company’s share price had more than doubled since Marissa Mayer took over as president in July 2012; however, the share price peaked at about $35 in November 2013. It rose to $36.04 in the mid-afternoon of December 2, 2015, perhaps on news that the board of directors was meeting to decide on the future of Mayer, whether to sell the struggling internet business, and whether to continue with the spinoff of its stake in China’s Alibaba e-commerce site. Not all went well during Mayer’s tenure: the $1.1 billion acquisition of Tumblr had yet to prove beneficial, and the forays into original video content led to a $42 million write-down. Sydney Finkelstein, a professor at Dartmouth College’s Tuck School of Business, told the Washington Post that sometimes, “the single best thing you can do … is sell the company.” The closing price of Yahoo! Inc. on December 7, 2015, was $34.68.

The Wall Street Journal’s Douglas MacMillan reported that Yahoo CEO Marissa Mayer was expected to cut 15% of the company’s workforce, with the announcement expected after the market closed on February 2, 2016.

On September 22, 2016, Yahoo disclosed a data breach in which hackers stole information associated with at least 500 million user accounts in late 2014. According to the BBC, this was the largest technical breach reported to date.

Wikipedia – The History of Domain Names

Wikipedia the free encyclopedia

Date: 01/01/2001

Wikipedia is a free online encyclopedia that, by default, allows its users to edit any article. Wikipedia is the largest and most popular general reference work on the Internet and is ranked among the ten most popular websites. Wikipedia is owned by the nonprofit Wikimedia Foundation.

Wikipedia was launched on January 15, 2001, by Jimmy Wales and Larry Sanger. Sanger coined its name, a portmanteau of wiki and encyclopedia. There was only the English language version initially, but it quickly developed similar versions in other languages, which differ in content and in editing practices. With 5,294,660 articles, the English Wikipedia is the largest of the more than 290 Wikipedia encyclopedias. Overall, Wikipedia consists of more than 40 million articles in more than 250 different languages and as of February 2014, it had 18 billion page views and nearly 500 million unique visitors each month.

In 2005, Nature published a peer review comparing 42 science articles from Encyclopædia Britannica and Wikipedia, and found that Wikipedia’s level of accuracy approached Encyclopædia Britannica’s. Criticism of Wikipedia includes claims that it exhibits systemic bias, presents a mixture of “truths, half truths, and some falsehoods”, and that in controversial topics, it is subject to manipulation and spin.

History

Nupedia

Wikipedia originally developed from another encyclopedia project called Nupedia.

Other collaborative online encyclopedias were attempted before Wikipedia, but none were so successful.

Wikipedia began as a complementary project for Nupedia, a free online English-language encyclopedia project whose articles were written by experts and reviewed under a formal process. Nupedia was founded on March 9, 2000, under the ownership of Bomis, a web portal company. Its main figures were the Bomis CEO Jimmy Wales and Larry Sanger, editor-in-chief for Nupedia and later Wikipedia. Nupedia was licensed initially under its own Nupedia Open Content License, switching to the GNU Free Documentation License before Wikipedia’s founding at the urging of Richard Stallman. Sanger and Wales founded Wikipedia. While Wales is credited with defining the goal of making a publicly editable encyclopedia, Sanger is credited with the strategy of using a wiki to reach that goal. On January 10, 2001, Sanger proposed on the Nupedia mailing list to create a wiki as a “feeder” project for Nupedia.

Launch and early growth

Wikipedia was launched on January 15, 2001, as a single English-language edition at www.wikipedia.com, and announced by Sanger on the Nupedia mailing list. Wikipedia’s policy of “neutral point-of-view” was codified in its first months. Otherwise, there were relatively few rules initially and Wikipedia operated independently of Nupedia. Originally, Bomis intended to make Wikipedia a business for profit.

Wikipedia gained early contributors from Nupedia, Slashdot postings, and web search engine indexing. By August 8, 2001, Wikipedia had over 8,000 articles. On September 25, 2001, Wikipedia had over 13,000 articles. By the end of 2001, it had grown to approximately 20,000 articles and 18 language editions. It had reached 26 language editions by late 2002, 46 by the end of 2003, and 161 by the final days of 2004. Nupedia and Wikipedia coexisted until the former’s servers were taken down permanently in 2003, and its text was incorporated into Wikipedia. The English Wikipedia passed the mark of two million articles on September 9, 2007, making it the largest encyclopedia ever assembled, surpassing even the 1408 Yongle Encyclopedia, which had held the record for almost 600 years.

Citing fears of commercial advertising and lack of control in Wikipedia, users of the Spanish Wikipedia forked from Wikipedia to create the Enciclopedia Libre in February 2002. These moves encouraged Wales to announce that Wikipedia would not display advertisements, and to change Wikipedia’s domain from wikipedia.com to wikipedia.org.

Though the English Wikipedia reached three million articles in August 2009, the growth of the edition, in terms of the numbers of articles and of contributors, appears to have peaked around early 2007. Around 1,800 articles were added daily to the encyclopedia in 2006; by 2013 that average was roughly 800. A team at the Palo Alto Research Center attributed this slowing of growth to the project’s increasing exclusivity and resistance to change. Others suggest that the growth is flattening naturally because articles that could be called “low-hanging fruit”—topics that clearly merit an article—have already been created and built up extensively.

In November 2009, a researcher at the Rey Juan Carlos University in Madrid (Spain) found that the English Wikipedia had lost 49,000 editors during the first three months of 2009; in comparison, the project lost only 4,900 editors during the same period in 2008. The Wall Street Journal cited the array of rules applied to editing and disputes related to such content among the reasons for this trend. Wales disputed these claims in 2009, denying the decline and questioning the methodology of the study. Two years later, Wales acknowledged the presence of a slight decline, noting a decrease from “a little more than 36,000 writers” in June 2010 to 35,800 in June 2011. In the same interview, Wales also claimed the number of editors was “stable and sustainable”, a claim which was questioned by MIT’s Technology Review in a 2013 article titled “The Decline of Wikipedia”. In July 2012, the Atlantic reported that the number of administrators is also in decline. In the November 25, 2013, issue of New York magazine, Katherine Ward stated “Wikipedia, the sixth-most-used website, is facing an internal crisis. In 2013, MIT’s Technology Review revealed that since 2007, the site has lost a third of the volunteer editors who update and correct the online encyclopedia’s millions of pages and those still there have focused increasingly on minutiae.”

Recent milestones

In January 2007, Wikipedia entered for the first time the top-ten list of the most popular websites in the United States, according to comScore Networks. With 42.9 million unique visitors, Wikipedia was ranked number 9, surpassing The New York Times and Apple. This marked a significant increase over January 2006, when the rank was number 33, with Wikipedia receiving around 18.3 million unique visitors. As of March 2015, Wikipedia ranked sixth among websites in terms of popularity according to Alexa Internet. In 2014, it received 8 billion pageviews every month. On February 9, 2014, The New York Times reported that Wikipedia has 18 billion page views and nearly 500 million unique visitors a month, “according to the ratings firm comScore.”

On January 18, 2012, the English Wikipedia participated in a series of coordinated protests against two proposed laws in the United States Congress—the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA)—by blacking out its pages for 24 hours. More than 162 million people viewed the blackout explanation page that temporarily replaced Wikipedia content.

Loveland and Reagle argue that, in process, Wikipedia follows a long tradition of historical encyclopedias that accumulated improvements piecemeal through “stigmergic accumulation”.

On January 20, 2014, Subodh Varma reporting for The Economic Times indicated that not only had Wikipedia’s growth flattened but that it had “lost nearly 10 per cent of its page-views last year. That’s a decline of about 2 billion between December 2012 and December 2013. Its most popular versions are leading the slide: page-views of the English Wikipedia declined by 12 per cent, those of German version slid by 17 per cent and the Japanese version lost 9 per cent.” Varma added that, “While Wikipedia’s managers think that this could be due to errors in counting, other experts feel that Google’s Knowledge Graphs project launched last year may be gobbling up Wikipedia users.” When contacted on this matter, Clay Shirky, associate professor at New York University and fellow at Harvard’s Berkman Center for Internet & Society, indicated that he suspected much of the page view decline was due to Knowledge Graphs, stating, “If you can get your question answered from the search page, you don’t need to click.”

Openness

Unlike traditional encyclopedias, Wikipedia follows the procrastination principle regarding the security of its content. It started almost entirely open—anyone could create articles, and any Wikipedia article could be edited by any reader, even those who did not have a Wikipedia account. Modifications to all articles would be published immediately. As a result, any article could contain inaccuracies such as errors, ideological biases, and nonsensical or irrelevant text.

Restrictions

Due to the increasing popularity of Wikipedia, popular editions, including the English version, have introduced editing restrictions in some cases. For instance, on the English Wikipedia and some other language editions, only registered users may create a new article. On the English Wikipedia, among others, some particularly controversial, sensitive and/or vandalism-prone pages have been protected to some degree. A frequently vandalized article can be semi-protected, meaning that only autoconfirmed editors are able to modify it. A particularly contentious article may be locked so that only administrators are able to make changes.

In certain cases, all editors are allowed to submit modifications, but review is required for some editors, depending on certain conditions. For example, the German Wikipedia maintains “stable versions” of articles, which have passed certain reviews. Following protracted trials and community discussion, the English Wikipedia introduced the “pending changes” system in December 2012. Under this system, new users’ edits to certain controversial or vandalism-prone articles are “subject to review from an established Wikipedia editor before publication”.

Review of changes

Although changes are not systematically reviewed, the software that powers Wikipedia provides certain tools allowing anyone to review changes made by others. The “History” page of each article links to each revision. On most articles, anyone can undo others’ changes by clicking a link on the article’s history page. Anyone can view the latest changes to articles, and anyone may maintain a “watchlist” of articles that interest them so they can be notified of any changes. “New pages patrol” is a process whereby newly created articles are checked for obvious problems.
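
The same revision data behind the “History” page is also exposed through the public MediaWiki API. The sketch below is a minimal illustration in Python, not part of any Wikipedia tooling; the endpoint and parameter names are those of the English Wikipedia’s api.php, and the article title is just an example.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def recent_revisions(title, limit=5):
    """Fetch the latest revisions of an article via the public MediaWiki API."""
    params = urlencode({
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvlimit": limit,
        "rvprop": "timestamp|user|comment",
        "format": "json",
    })
    url = "https://en.wikipedia.org/w/api.php?" + params
    with urlopen(url) as response:
        data = json.load(response)
    # The API keys pages by page id; take the single page we asked for.
    page = next(iter(data["query"]["pages"].values()))
    return page.get("revisions", [])

for rev in recent_revisions("World Wide Web"):
    print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))
```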

In 2003, economics PhD student Andrea Ciffolilli argued that the low transaction costs of participating in a wiki create a catalyst for collaborative development, and that features such as allowing easy access to past versions of a page favor “creative construction” over “creative destruction”.

Vandalism

Any edit that changes content in a way that deliberately compromises the integrity of Wikipedia is considered vandalism. The most common and obvious types of vandalism include insertion of obscenities and crude humor. Vandalism can also include advertising language and other types of spam. Sometimes editors commit vandalism by removing information or entirely blanking a given page. Less common types of vandalism, such as the deliberate addition of plausible but false information to an article, can be more difficult to detect. Vandals can introduce irrelevant formatting, modify page semantics such as the page’s title or categorization, manipulate the underlying code of an article, or use images disruptively. Obvious vandalism is generally easy to remove from Wikipedia articles; the median time to detect and fix vandalism is a few minutes. However, some vandalism takes much longer to repair.

In the Seigenthaler biography incident, an anonymous editor introduced false information into the biography of American political figure John Seigenthaler in May 2005. Seigenthaler was falsely presented as a suspect in the assassination of John F. Kennedy. The article remained uncorrected for four months. Seigenthaler, the founding editorial director of USA Today and founder of the Freedom Forum First Amendment Center at Vanderbilt University, called Wikipedia co-founder Jimmy Wales and asked whether he had any way of knowing who contributed the misinformation. Wales replied that he did not, although the perpetrator was eventually traced. After the incident, Seigenthaler described Wikipedia as “a flawed and irresponsible research tool”. This incident led to policy changes at Wikipedia, specifically targeted at tightening up the verifiability of biographical articles of living people.

Accuracy of content

Articles for traditional encyclopedias such as Encyclopædia Britannica are carefully and deliberately written by experts, lending such encyclopedias a reputation for accuracy. Conversely, Wikipedia is often cited for factual inaccuracies and misrepresentations. However, a peer review in 2005 of forty-two scientific entries on both Wikipedia and Encyclopædia Britannica by the science journal Nature found few differences in accuracy, and concluded that “the average science entry in Wikipedia contained around four inaccuracies; Britannica, about three.” Reagle suggested that while the study reflects “a topical strength of Wikipedia contributors” in science articles, “Wikipedia may not have fared so well using a random sampling of articles or on humanities subjects.” The findings by Nature were disputed by Encyclopædia Britannica, and in response, Nature gave a rebuttal of the points raised by Britannica. In addition to the point-for-point disagreement between these two parties, others have examined the sample size and selection method used in the Nature effort, and suggested a “flawed study design” (in Nature’s manual selection of articles, in part or in whole, for comparison), absence of statistical analysis (e.g., of reported confidence intervals), and a lack of study “statistical power” (i.e., owing to small sample size, 42 or 4 × 10¹ articles compared, vs. >10⁵ and >10⁶ set sizes for Britannica and the English Wikipedia, respectively).

As a consequence of the open structure, Wikipedia “makes no guarantee of validity” of its content, since no one is ultimately responsible for any claims appearing in it. In 2009, PC World raised concerns regarding the lack of accountability that results from users’ anonymity, the insertion of false information, vandalism, and similar problems.

Economist Tyler Cowen wrote: “If I had to guess whether Wikipedia or the median refereed journal article on economics was more likely to be true, after a not so long think I would opt for Wikipedia.” He comments that some traditional sources of non-fiction suffer from systemic biases and novel results, in his opinion, are over-reported in journal articles and relevant information is omitted from news reports. However, he also cautions that errors are frequently found on Internet sites, and that academics and experts must be vigilant in correcting them.

Critics argue that Wikipedia’s open nature and a lack of proper sources for most of the information makes it unreliable. Some commentators suggest that Wikipedia may be reliable, but that the reliability of any given article is not clear. Editors of traditional reference works such as the Encyclopædia Britannica have questioned the project’s utility and status as an encyclopedia. Wikipedia’s open structure inherently makes it an easy target for Internet trolls, spammers, and various forms of paid advocacy seen as counterproductive to the maintenance of a neutral and verifiable online encyclopedia. In response to paid advocacy editing and undisclosed editing issues, Wikipedia was reported in an article by Jeff Elder in The Wall Street Journal on June 16, 2014, to have strengthened its rules and laws against undisclosed editing. The article stated that: “Beginning Monday [from date of article], changes in Wikipedia’s terms of use will require anyone paid to edit articles to disclose that arrangement. Katherine Maher, the nonprofit Wikimedia Foundation’s chief communications officer, said the changes address a sentiment among volunteer editors that, ‘we’re not an advertising service; we’re an encyclopedia.’” These issues, among others, had been parodied since the first decade of Wikipedia, notably by Stephen Colbert on The Colbert Report.

Most university lecturers discourage students from citing any encyclopedia in academic work, preferring primary sources; some specifically prohibit Wikipedia citations. Wales stresses that encyclopedias of any type are not usually appropriate to use as citeable sources, and should not be relied upon as authoritative. Wales once (2006 or earlier) said he receives about ten emails weekly from students saying they got failing grades on papers because they cited Wikipedia; he told the students they got what they deserved. “For God’s sake, you’re in college; don’t cite the encyclopedia”, he said.

In February 2007, an article in The Harvard Crimson newspaper reported that a few of the professors at Harvard University were including Wikipedia articles in their syllabi, although without realizing the articles might change. In June 2007, former president of the American Library Association Michael Gorman condemned Wikipedia, along with Google, stating that academics who endorse the use of Wikipedia are “the intellectual equivalent of a dietitian who recommends a steady diet of Big Macs with everything”.

A Harvard law textbook, Legal Research in a Nutshell (2011), cites Wikipedia as a “general source” that “can be a real boon” in “coming up to speed in the law governing a situation” and, “while not authoritative, can provide basic facts as well as leads to more in-depth resources”.

Quality of writing

In 2008, researchers at Carnegie Mellon University found that the quality of a Wikipedia article would suffer rather than gain from adding more writers when the article lacked appropriate explicit or implicit coordination. For instance, when contributors rewrite small portions of an entry rather than making full-length revisions, high- and low-quality content may be intermingled within an entry. Roy Rosenzweig, a history professor, stated that American National Biography Online outperformed Wikipedia in terms of its “clear and engaging prose”, which, he said, was an important aspect of good historical writing. Contrasting Wikipedia’s treatment of Abraham Lincoln to that of Civil War historian James McPherson in American National Biography Online, he said that both were essentially accurate and covered the major episodes in Lincoln’s life, but praised “McPherson’s richer contextualization, his artful use of quotations to capture Lincoln’s voice, and his ability to convey a profound message in a handful of words.” By contrast, he gives an example of Wikipedia’s prose that he finds “both verbose and dull”. Rosenzweig also criticized the “waffling—encouraged by the npov policy—[which] means that it is hard to discern any overall interpretive stance in Wikipedia history”. By example, he quoted the conclusion of Wikipedia’s article on William Clarke Quantrill. While generally praising the article, he pointed out its “waffling” conclusion: “Some historians remember him as an opportunistic, bloodthirsty outlaw, while others continue to view him as a daring soldier and local folk hero.”

Other critics have made similar charges that, even if Wikipedia articles are factually accurate, they are often written in a poor, almost unreadable style. Frequent Wikipedia critic Andrew Orlowski commented: “Even when a Wikipedia entry is 100 per cent factually correct, and those facts have been carefully chosen, it all too often reads as if it has been translated from one language to another, then into a third, passing an illiterate translator at each stage.” A 2010 study of cancer-related articles by Yaacov Lawrence of the Kimmel Cancer Center at Thomas Jefferson University was limited to those Wikipedia articles that could be found in the Physician Data Query and excluded articles written at the “start” class or the “stub” class level. Lawrence found the articles accurate but not very readable, and thought that “Wikipedia’s lack of readability (to non-college readers) may reflect its varied origins and haphazard editing”. The Economist argued that better-written articles tend to be more reliable: “inelegant or ranting prose usually reflects muddled thoughts and incomplete information”.

Worldwideweb – The History of Domain Names

World Wide Web

Date: 01/01/1990

In 1990, the Internet exploded into commercial society and was followed a year later by the release of the World Wide Web by originator Tim Berners-Lee and CERN. The same year, the first commercial service provider began operating and domain registration officially entered the public domain. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.

The World Wide Web (abbreviated WWW or the Web) is an information space where documents and other web resources are identified by Uniform Resource Locators (URLs), interlinked by hypertext links, and can be accessed via the Internet. English scientist Tim Berners-Lee invented the World Wide Web in 1989. He wrote the first web browser computer programme in 1990 while employed at CERN in Switzerland.

The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet. Web pages are primarily text documents formatted and annotated with Hypertext Markup Language (HTML). In addition to formatted text, web pages may contain images, video, audio, and software components that are rendered in the user’s web browser as coherent pages of multimedia content. Embedded hyperlinks permit users to navigate between web pages. Multiple web pages with a common theme, a common domain name, or both, make up a website. Website content can largely be provided by the publisher, or interactive where users contribute content or the content depends upon the user or their actions. Websites may be mostly informative, primarily for entertainment, or largely for commercial, governmental, or non-governmental organisational purposes.

History

Tim Berners-Lee’s vision of a global hyperlinked information system became a possibility by the second half of the 1980s. By 1985, the global Internet began to proliferate in Europe and the Domain Name System (upon which the Uniform Resource Locator is built) came into being. In 1988 the first direct IP connection between Europe and North America was made and Berners-Lee began to openly discuss the possibility of a web-like system at CERN.[7] In March 1989 Berners-Lee issued a proposal to the management at CERN for a system called “Mesh” that referenced ENQUIRE, a database and software project he had built in 1980, which used the term “web” and described a more elaborate information management system based on links embedded in readable text: “Imagine, then, the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document you could skip to them with a click of the mouse.” Such a system, he explained, could be referred to using one of the existing meanings of the word hypertext, a term that he says was coined in the 1950s. There is no reason, the proposal continues, why such hypertext links could not encompass multimedia documents including graphics, speech and video, so that Berners-Lee goes on to use the term hypermedia.

With help from his colleague and fellow hypertext enthusiast Robert Cailliau he published a more formal proposal on 12 November 1990 to build a “Hypertext project” called “WorldWideWeb” (one word) as a “web” of “hypertext documents” to be viewed by “browsers” using a client–server architecture. At this point HTML and HTTP had already been in development for about two months and the first Web server was about a month from completing its first successful test. This proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve “the creation of new links and new material by readers, authorship becomes universal” as well as “the automatic notification of a reader when new material of interest to him/her has become available.” While the read-only goal was met, accessible authorship of web content took longer to mature, with the wiki concept, WebDAV, blogs, Web 2.0 and RSS/Atom.

The proposal was modelled after the SGML reader Dynatext by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University. The Dynatext system, licensed by CERN, was a key player in the extension of SGML ISO 8879:1986 to Hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration. A NeXT Computer was used by Berners-Lee as the world’s first web server and also to write the first web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the first web browser (which was a web editor as well) and the first web server. The first web site, which described the project itself, was published on 20 December 1990.

The first web page may be lost, but Paul Jones of UNC-Chapel Hill in North Carolina announced in May 2013 that Berners-Lee gave him what he says is the oldest known web page during a 1991 visit to UNC. Jones stored it on a magneto-optical drive and on his NeXT computer. On 6 August 1991, Berners-Lee published a short summary of the World Wide Web project on the newsgroup alt.hypertext. This date is sometimes confused with the public availability of the first web servers, which had occurred months earlier. As another example of such confusion, several news media reported that the first photo on the Web was published by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes taken by Silvano de Gennaro; Gennaro has disclaimed this story, writing that media were “totally distorting our words for the sake of cheap sensationalism.”

The first server outside Europe was installed at the Stanford Linear Accelerator Center (SLAC) in Palo Alto, California, to host the SPIRES-HEP database. Accounts differ substantially as to the date of this event. The World Wide Web Consortium’s timeline says December 1992, whereas SLAC itself claims December 1991, as does a W3C document titled A Little History of the World Wide Web.[20] The underlying concept of hypertext originated in previous projects from the 1960s, such as the Hypertext Editing System (HES) at Brown University, Ted Nelson’s Project Xanadu, and Douglas Engelbart’s oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush’s microfilm-based memex, which was described in the 1945 essay “As We May Think”.

Berners-Lee’s breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested that a marriage between the two technologies was possible to members of both technical communities, but when no one took up his invitation, he finally assumed the project himself. In the process, he developed three essential technologies:

  • a system of globally unique identifiers for resources on the Web and elsewhere, the universal document identifier (UDI), later known as uniform resource locator (URL) and uniform resource identifier (URI);
  • the publishing language HyperText Markup Language (HTML);
  • the Hypertext Transfer Protocol (HTTP).
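
To make the relationship between the three concrete, here is a minimal Python sketch, not Berners-Lee’s original code: a URL identifies a resource, HTTP retrieves it, and the returned HTML is parsed to discover further hyperlinks. The address used is the restored copy of the first CERN page and is only an illustrative choice.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collect the href targets of <a> tags found in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# The URL (identifier) names the resource; urlopen performs the HTTP GET;
# the response body is HTML, which we parse to discover further hyperlinks.
url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
with urlopen(url) as response:
    html = response.read().decode("utf-8", errors="replace")

collector = LinkCollector()
collector.feed(html)
print(f"{url} links to {len(collector.links)} other resources, e.g.:")
for link in collector.links[:5]:
    print("  " + link)
```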

The World Wide Web had a number of differences from other hypertext systems available at the time. The Web required only unidirectional links rather than bidirectional ones, making it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing web servers and browsers (in comparison to earlier systems), but in turn presented the chronic problem of link rot. Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop servers and clients independently and to add extensions without licensing restrictions. On 30 April 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due. Coming two months after the announcement that the server implementation of the Gopher protocol was no longer free to use, this produced a rapid shift away from Gopher and towards the Web. An early popular web browser was ViolaWWW for Unix and the X Windowing System.

Scholars generally agree that a turning point for the World Wide Web began with the introduction of the Mosaic web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the U.S. High-Performance Computing and Communications Initiative and the High Performance Computing and Communication Act of 1991, one of several computing developments initiated by U.S. Senator Al Gore. Prior to the release of Mosaic, graphics were not commonly mixed with text in web pages and the web’s popularity was less than older protocols in use over the Internet, such as Gopher and Wide Area Information Servers (WAIS). Mosaic’s graphical user interface allowed the Web to become, by far, the most popular Internet protocol. The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October 1994. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet; a year later, a second site was founded at INRIA (a French national computer research lab) with support from the European Commission DG InfSo; and in 1996, a third continental site was created in Japan at Keio University. By the end of 1994, the total number of websites was still relatively small, but many notable websites were already active that foreshadowed or inspired today’s most popular services.

Connected by the Internet, other websites were created around the world. This motivated international standards development for protocols and formatting. Berners-Lee continued to stay involved in guiding the development of web standards, such as the markup languages to compose web pages and he advocated his vision of a Semantic Web. The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularising use of the Internet. Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet. The Web is an information space containing hyperlinked documents and other resources, identified by their URIs. It is implemented as both client and server software using Internet protocols such as TCP/IP and HTTP. Berners-Lee was knighted in 2004 by Queen Elizabeth II for “services to the global development of the Internet”.

WWW prefix

Many hostnames used for the World Wide Web begin with www because of the long-standing practice of naming Internet hosts according to the services they provide. The hostname of a web server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for a USENET news server. These host names appear as Domain Name System (DNS) or subdomain names, as in www.example.com. The use of www is not required by any technical or policy standard and many web sites do not use it; indeed, the first ever web server was called nxoc01.cern.ch. According to Paolo Palazzi, who worked at CERN along with Tim Berners-Lee, the popular use of www as subdomain was accidental; the World Wide Web project page was intended to be published at www.cern.ch while info.cern.ch was intended to be the CERN home page, however the DNS records were never switched, and the practice of prepending www to an institution’s website domain name was subsequently copied. Many established websites still use the prefix, or they employ other subdomain names such as www2, secure or en for special purposes. Many such web servers are set up so that both the main domain name (e.g., example.com) and the www subdomain (e.g., www.example.com) refer to the same site; others require one form or the other, or they may map to different web sites. The use of a subdomain name is useful for load balancing incoming web traffic by creating a CNAME record that points to a cluster of web servers. Since, currently, only a subdomain can be used in a CNAME, the same result cannot be achieved by using the bare domain root.
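
As a small illustration of these naming conventions, the following sketch, using only Python’s standard library, resolves a bare domain and its www subdomain; the alias list returned for a name reveals any CNAME chain the operator has configured. example.com is simply the reserved example domain used above and need not actually carry a CNAME.

```python
import socket

# Resolve a bare domain and its "www" subdomain and show what comes back.
# gethostbyname_ex returns (canonical name, alias list, IPv4 address list);
# a non-empty alias list means the queried name is an alias (e.g. a CNAME).
for host in ("example.com", "www.example.com"):
    canonical, aliases, addresses = socket.gethostbyname_ex(host)
    print(f"{host:20} canonical={canonical} aliases={aliases} addresses={addresses}")
```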

When a user submits an incomplete domain name to a web browser in its address bar input field, some web browsers automatically try adding the prefix “www” to the beginning of it and possibly “.com”, “.org” and “.net” at the end, depending on what might be missing. For example, entering ‘microsoft’ may be transformed to http://www.microsoft.com/ and ‘openoffice’ to http://www.openoffice.org. This feature started appearing in early versions of Mozilla Firefox, when it still had the working title ‘Firebird’ in early 2003, from an earlier practice in browsers such as Lynx. It is reported that Microsoft was granted a US patent for the same idea in 2008, but only for mobile devices.
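
A rough Python sketch of that guessing behaviour might look like the following; real browsers apply richer heuristics, and this version simply tries candidates until one resolves in DNS.

```python
import socket

def guess_url(fragment, suffixes=(".com", ".org", ".net")):
    """Return a plausible URL for an incomplete name typed into an address bar."""
    candidates = [fragment] + [f"www.{fragment}{suffix}" for suffix in suffixes]
    for host in candidates:
        try:
            socket.gethostbyname(host)   # keep the first candidate that resolves
            return f"http://{host}/"
        except socket.gaierror:
            continue
    return None

print(guess_url("microsoft"))    # likely http://www.microsoft.com/
print(guess_url("openoffice"))   # likely http://www.openoffice.org/
```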

In English, www is usually read as double-u double-u double-u. Some users pronounce it dub-dub-dub, particularly in New Zealand. Stephen Fry, in his “Podgrammes” series of podcasts, pronounces it wuh wuh wuh. The English writer Douglas Adams once quipped in The Independent on Sunday (1999): “The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it’s short for”. In Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng, which satisfies www and literally means “myriad dimensional net”, a translation that reflects the design concept and proliferation of the World Wide Web. Tim Berners-Lee’s web-space states that World Wide Web is officially spelled as three separate words, each capitalised, with no intervening hyphens. Use of the www prefix is declining as Web 2.0 web applications seek to brand their domain names and make them easily pronounceable. As the mobile web grows in popularity, services like Gmail.com, Outlook.com, MySpace.com, Facebook.com and Twitter.com are most often mentioned without adding “www.” (or, indeed, “.com”) to the domain.

WWW Concept – The History of Domain Names

The World Wide Network Concept

Date: 01/01/1982

In 1982 the Internet Protocol Suite (TCP/IP) was standardized and the concept of a world-wide network of fully interconnected TCP/IP networks called the Internet was introduced. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) developed the Computer Science Network (CSNET) and again in 1986 when NSFNET provided access to supercomputer sites in the United States from research and education organizations. Commercial internet service providers (ISPs) began to emerge in the late 1980s and 1990s. The ARPANET was decommissioned in 1990. The Internet was commercialized in 1995 when NSFNET was decommissioned, removing the last restrictions on the use of the Internet to carry commercial traffic.

WWW Logo – The History of Domain Names

WWW Logo by Robert Cailliau

Date: 01/01/1993

The name Robert Cailliau may not ring a bell to the general public, but his invention is the reason why you are reading this: Dr. Cailliau together with his colleague Sir Tim Berners-Lee invented the World Wide Web, making the internet accessible so it could grow from an academic tool to a mass communication medium. Last January Dr. Cailliau retired from CERN, the European particle physics lab where the WWW emerged.

Robert Cailliau is most well known for the proposal, developed with Tim Berners-Lee, of a hypertext system for accessing documentation, which eventually led to the creation of the World Wide Web. In 1992, Cailliau produced the first Web browser for the Apple Macintosh. In 1993, Cailliau started “WISE”, the first Web-based project at the European Commission (DGXIII) together with the Fraunhofer Gesellschaft. Cailliau also started the authentication scheme for the Web and supervised its implementation. He worked with CERN (European Organization for Nuclear Research) to produce and get the document approved whereby CERN placed the web technology into the public domain. Cailliau was one of the co-founders of the International WWW Conference Committee (IW3C2) after successfully organizing the first conference in 1994. During 1995, he was active in the transfer of the WWW development effort and the standards activities from CERN to the Web Consortium W3C. He then started, with the European Commission, the Web for Schools project, which has given support and access to 150 schools in the European Union.

There are some fun anecdotes about the beginning of the World Wide Web. We’ve just offered you a virtual beer, but the idea of the WWW started with a beer too, in this case with a (real) beer with Tim Berners-Lee, right? And there is a reason that the original WWW-logo is green, right? Why is it not called world-wide web, with a hyphen? Are there other anecdotes you want to share with our readers?

At the second conference, held by NCSA in Chicago, someone asked if it was not better to use the web for conferences instead of travelling to far places. Impulsively I answered, in front of 1500 programmers in the audience, that there was no such thing as a virtual beer and therefore we preferred to meet in the flesh. That was 1994.

In 1990, months before there was a shred of code, Tim and I wanted to find a good name for the project. Sometimes, after a hard day’s work in warm offices we drank a beer on the CERN cafeteria terrace before going home. On one such occasion Tim came up with “World-Wide Web”. I would have preferred something shorter, but to find a catching name is not easy. I agreed to use WWW for the new document that was to go to management and “find a better name later”. WWW stayed: it summarised well what it was.

Because I’m a synaesthete I see characters in colours and I perceive a W as green. I liked that. So it remained WWW. And there was indeed a logo that we used a lot in the beginning. It was made from three Ws: white, light green and darker green.

And yes, the hyphen was there for a long time too. But it confused people who were not so grammatical, and Tim finally cut the knot by stating that he had the right to decide how it was written since he had invented it: without the hyphen.

Some anecdotes were less funny: at one time I could have made Alexander Totic of NCSA come to CERN to join us. That was almost arranged when it appeared that Alex had a Serbian passport and CERN at the time did not admit Serbs.

The most recent and very positive anecdote is from October 2006: I gave the opening keynote at an Australian conference on e-learning and the internet in education. I started out by saying I knew nothing about education but would give my keynote anyway. In a later talk Jean Johnson presented NotSchool (http://www.notschool.net) and mentioned that the project “Web for Schools” of the European Commission had been very important for her work.

Later I asked her whether she meant the project that ended in a conference in Dublin in 1995. She said yes and asked if I had been there. I then had to admit that I had started that project myself and addressed the audience at the closing session of the Dublin conference and therefore maybe I did know a little about internet in education. It was heartwarming to know that even smaller initiatives had been quite important.

WWW Released – The History of Domain Names

The World Wide Web is released (www.)

Date: 01/01/1991

Web history – CERN open sourced World Wide Web today in 1993

On April 30, 1993, CERN, the European Organization for Nuclear Research, released World Wide Web into the public domain.

CERN issued a statement putting the Web into the public domain, ensuring that it would remain an open standard. The organization released the source code of Berners-Lee’s hypertext project, WorldWideWeb, into the public domain the same day. WorldWideWeb became free software, available to all. The move had an immediate effect on the spread of the web. By late 1993 there were over 500 known web servers, and the web accounted for 1% of internet traffic.

Berners-Lee moved to the Massachusetts Institute of Technology (MIT), from where he still runs the World Wide Web Consortium (W3C). By the end of 1994, the Web had 10,000 servers – of which 2000 were commercial – and 10 million users. Traffic was equivalent to shipping the collected works of Shakespeare every second.

The World Wide Web opened to the public

On 6 August 1991, the World Wide Web became publicly available. Its creator, the now internationally known Tim Berners-Lee, posted a short summary of the project on the alt.hypertext newsgroup and gave birth to a new technology which would fundamentally change the world as we knew it.

The World Wide Web has its foundation in work that Berners-Lee did in the 1980s at CERN, the European Organization for Nuclear Research. He had been looking for a way for physicists to share information around the world without all using the same types of hardware and software. This culminated in his 1989 paper proposing ‘A large hypertext database with typed links’.

There was no fanfare in the global press. In fact, most people around the world didn’t even know what the Internet was. Even if they did, the revolution the Web ushered in was still but a twinkle in Tim Berners-Lee’s eye. Instead, the launch was marked by way of a short post from Berners-Lee on the alt.hypertext newsgroup, which is archived to this day on Google Groups.

In 1993, it was announced by CERN that the World Wide Web was free for everyone to use and develop, with no fees payable – a key factor in the transformational impact it would soon have on the world.

While a number of browser applications were developed during the first two years of the Web, it was Mosaic which arguably had the most impact. It was launched in 1993 and by the end of that year was available for Unix, the Commodore Amiga, Windows and Mac OS. The first browser to be freely available and accessible to the public, it inspired the birth of the first commercial browser, Netscape Navigator, while Mosaic’s technology went on to form the basis of Microsoft’s Internet Explorer.

The growth of easy-to-use Web browsers coincided with the growth of the commercial ISP business, with companies like Compuserve bringing increasing numbers of people from outside the scientific community on to the Web – and that was the start of the Web we know today.

What was initially a network of static HTML documents has become a constantly changing and evolving information organism, powered by a wide range of technologies, from server-side technologies like PHP and ASP that can display database content dynamically, to streaming media and pages that can be updated in real-time. Plugins like Flash have expanded our expectations of what the Web can offer, while HTML itself has evolved to the point where its latest version can handle video natively.

The Web has become a part of our everyday lives – something we access at home, on the move, on our phones and on TV. It’s changed the way we communicate and has been a key factor in the way the Internet has transformed the global economy and societies around the world. Sir Tim Berners-Lee has earned his knighthood a thousand times over, and the decision of CERN to make the Web completely open has been perhaps its greatest gift to the world.

The future of the Web

So, where does the Web go from here? Where will it be in twenty more years? The Semantic Web will see metadata, designed to be read by machines rather than humans, become a more important part of the online experience. Tim Berners-Lee coined this term, describing it as “A web of data that can be processed directly and indirectly by machines,” – a ‘giant global graph’ of linked data which will allow apps to automatically create new meaning from all the information out there.

Warnerbros – The History of Domain Names

Warner Bros buys Injustice.com for $7500 – a new online game?

May 24, 2012

Movie studio buys domain Injustice.com — for a new online game or movie?

Movie studio Warner Bros. has purchased the domain name Injustice.com for $7,500 through Afternic.

This is one of those domains with many uses, but that the owner must have known could end up being the title of a movie or game. It’s kind of like Invasion.com, which is in Sedo’s Great Domains auction this week. However, Injustice.com has a lot more commercial uses than Invasion.com.

It doesn’t appear that Warner Bros. has any announced movies named Injustice. There’s a documentary about class action lawsuits by this name that’s gaining steam, though. Perhaps the movie studio has bought the distribution rights.

Web Dotcom – The History of Domain Names

Web.com Plans .Web Top Level Domain

October 24, 2011

Web.com plans to apply for the .web top level domain name and says no one else should be allowed to apply for the TLD due to its trademarks.

The company has a pre-reservation form up on its web site and just filed a trademark application for the name. This is the first we’ve heard of the company’s plans, although its trademark application says it has been using .web in commerce since June.

Expect multiple applications for .web. A number of companies have proposed it over the years, and an unofficial version runs outside the official root.

Still, Web.com is making it clear it thinks no one else should be able to operate the .web top level domain name.

Webcrawler – The History of Domain Names

WebCrawler introduced

Date: 01/01/1994

WebCrawler is a metasearch engine that blends the top search results from Google Search and Yahoo! Search. WebCrawler also provides users the option to search for images, audio, video, news, yellow pages and white pages. WebCrawler is a registered trademark of InfoSpace, Inc. It went live on April 20, 1994 and was created by Brian Pinkerton at the University of Washington.
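
As a purely illustrative sketch of the metasearch idea, and not WebCrawler’s actual ranking code, the snippet below interleaves two ranked result lists and removes duplicates:

```python
from itertools import zip_longest

def blend(results_a, results_b):
    """Interleave two ranked result lists, dropping duplicate URLs."""
    merged, seen = [], set()
    for pair in zip_longest(results_a, results_b):
        for url in pair:
            if url and url not in seen:
                seen.add(url)
                merged.append(url)
    return merged

google_results = ["https://a.example", "https://b.example", "https://c.example"]
yahoo_results  = ["https://b.example", "https://d.example", "https://e.example"]
print(blend(google_results, yahoo_results))
# ['https://a.example', 'https://b.example', 'https://d.example', 'https://c.example', 'https://e.example']
```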

History

WebCrawler was the first Web search engine to provide full text search. It was bought by America Online on June 1, 1995 and sold to Excite on April 1, 1997. WebCrawler was acquired by InfoSpace in 2001 after Excite (which was then called Excite@Home) went bankrupt. InfoSpace also owns and operates the metasearch engines Dogpile and MetaCrawler.

WebCrawler was originally a separate search engine with its own database, and displayed advertising results in separate areas of the page. More recently it has been repositioned as a metasearch engine, providing a composite of separately identified sponsored and non-sponsored search results from most of the popular search engines.

WebCrawler also changed its image in early 2008, scrapping its classic spider mascot.

In July 2010, WebCrawler was ranked the 753rd most popular website in the U.S., and 2994th most popular in the world by Alexa. Quantcast estimated 1.7 million unique U.S. visitors a month, while Compete estimated 7,015,395 — a difference so large that at least one of the companies has faulty methods, according to Alexa.

WebCrawler formerly fetched results from Bing Search (formerly MSN Search and Live Search), Ask.com, About.com, MIVA, and LookSmart.

WesleyClark – The History of Domain Names

Wesley A Clark developed LINC computers and helped plan some of the ideas behind the ARPANET

Date: 01/01/2002

Wesley Allison Clark (April 10, 1927 – February 22, 2016) was an American physicist who is credited with designing the first modern personal computer. He was also a computer designer and the main participant, along with Charles Molnar, in the creation of the LINC computer, which was the first minicomputer and shares with a number of other computers (such as the PDP-1) the claim to be the inspiration for the personal computer.

Clark was born in New Haven, Connecticut and grew up in Kinderhook, New York and northern California. His parents, Wesley Sr. and Eleanor Kittell, moved to California, and he attended the University of California, Berkeley, where he graduated with a degree in physics in 1947. Clark began his career as a physicist at the Hanford Site. In 1981, Clark received the Eckert-Mauchly Award for his work on computer architecture. He was awarded an honorary degree by Washington University in 1984. He was elected to the National Academy of Engineering in 1999. Clark is a charter recipient of the IEEE Computer Society Computer Pioneer Award for “First Personal Computer”.

At Lincoln Laboratory

Clark moved to the MIT Lincoln Laboratory in 1952 where he joined the Project Whirlwind staff. There he was involved in the development of the Memory Test Computer (MTC), a testbed for ferrite core memory that was to be used in Whirlwind. His sessions with the MTC, “lasting hours rather than minutes” helped form his views that computers were to be used as tools on demand for those who needed them. That view carried over into his designs for the TX-0 and TX-2 and the LINC. He expresses this view clearly here:

…both of the Cambridge machines, Whirlwind and MTC, had been completely committed to the air defense effort and were no longer available for general use. The only surviving computing system paradigm seen by M.I.T. students and faculty was that of a very large International Business Machine in a tightly sealed Computation Center: the computer not as tool, but as demigod. Although we were not happy about giving up the TX-0, it was clear that making this small part of Lincoln’s advanced technology available to a larger M.I.T. community would be an important corrective step.

Clark is one of the fathers of the personal computer… he was the architect of both the TX-0 and TX-2 at Lincoln Labs. He believed that “a computer should be just another piece of lab equipment.” At a time when most computers were huge remote machines operated in batch mode, he advocated far more interactive access. He practiced what he preached, even though it often meant bucking current “wisdom” and authority (in a 1981 lecture, he mentioned that he had the distinction of being, “the only person to have been fired three times from MIT for insubordination.”)

Clark’s design for the TX-2 “integrated a number of man-machine interfaces that were just waiting for the right person to show up to use them in order to make a computer that was ‘on-line’. When selecting a PhD thesis topic, an MIT student named Ivan Sutherland looked at the simple cathode ray tube and light pen on the TX-2’s console and thought one should be able to draw on the computer. Thus was born Sketchpad, and with it, interactive computer graphics.”

At Washington University

In 1964, Clark moved to Washington University in St. Louis where he and Charles Molnar worked on macromodules, which were fundamental building blocks in the world of asynchronous computing. The goal of the macromodules was to provide a set of basic building blocks that would allow computer users to build and extend their computers without requiring any knowledge of electrical engineering.

The New York Times series on the history of the personal computer had this to say in an August 19, 2001 article, “How the Computer Became Personal”:

In the pantheon of personal computing, the LINC, in a sense, came first—more than a decade before Ed Roberts made PC’s affordable for ordinary people. Work started on the Linc, the brainchild of the M.I.T. physicist Wesley A. Clark, in May 1961, and the machine was used for the first time at the National Institute of Mental Health in Bethesda, MD, the next year to analyze a cat’s neural responses.

Each Linc had a tiny screen and keyboard and comprised four metal modules, which together were about as big as two television sets, set side by side and tilted back slightly. The machine, a 12-bit computer, included a one-half megahertz processor. Lincs sold for about $43,000—a bargain at the time—and were ultimately made commercially by Digital Equipment, the first minicomputer company. Fifty Lincs of the original design were built.

Role in ARPANET

Clark had a small but key role in the planning for the ARPANET (the predecessor to the Internet). In 1967, he suggested to Larry Roberts the idea of using separate small computers (later named Interface Message Processors) as a way of standardizing the network interface and reducing load on the local computers.

Post-Nixon China trip

In 1972, shortly after President Nixon’s trip to China, Clark accompanied five other computer scientists to China for three weeks to “tour computer facilities and to discuss computer technology with Chinese experts in Shanghai and Peking. Officially, the trip was seen by the Chinese in two lights: as a step in reestablishing the long-interrupted friendship between the two nations and as a step in opening channels for technical dialogue.”[10] The trip was organized by his colleague Severo Ornstein from MIT Lincoln Laboratory and Washington University. The other members of the group were: Thomas E. Cheatham, Anatol Holt, Alan J. Perlis and Herbert A. Simon.

He was 88 when he died on February 22, 2016, at his home in Brooklyn of severe atherosclerotic cardiovascular disease.

Wheelsup – The History of Domain Names

WheelsUp.com sold to Marquis Jet

March 30, 2012

Marquis Jet founder bought the domain for a new investment firm and later shunned a $500k offer.

Owen Frager and Elliot Silver recently discovered that Marquis Jet was somehow related to the buyer of WheelsUp.com, a domain Frank Schilling sold last year.

Now we know the exact relationship.

Marquis Jet founder Kenny Dichter bought the domain name for a new investment firm he’s part of, Wheels Up Partners, LLC, he told CNBC in an interview today. Purchase price? $50,000 according to Dichter.

WHOIS – The History of Domain Names

WHOIS

Date: 01/01/2001

Whois is a TCP-based query/response protocol that is widely used for querying a database to determine the owner of a domain name, an IP address, or an autonomous system number on the Internet. Whois (pronounced as the phrase “who is”) is mainly used to find details about domain names, networks and hosts, and Whois records contain data about the organizations and contacts associated with a domain name. The protocol operates through a server to which anyone may connect and submit a query; the Whois server responds to the query and then closes the connection.
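The transaction described above is simple enough to sketch in a few lines of Python. The fragment below is a minimal illustration, not a full client: it opens a TCP connection to port 43, sends the query terminated by CRLF, and reads until the server closes the connection. The choice of whois.iana.org as the target server is purely illustrative; any public Whois server speaking this style of protocol would do.

import socket

def whois_query(server: str, query: str, port: int = 43) -> str:
    # Connect to the Whois server, send the query terminated by CRLF,
    # then read the reply until the server closes the connection.
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall(query.encode("ascii", errors="ignore") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# Example usage:
print(whois_query("whois.iana.org", "example.com"))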

History

When the Internet was emerging out of the ARPANET, there was only one organization that handled all domain registrations, which was DARPA itself. The process of registration was established in RFC 920. WHOIS was standardized in the early 1980s to look up domains, people and other resources related to domain and number registrations. As all registration was done by one organization at that time, one centralized server was used for WHOIS queries. This made looking up such information very easy.

WHOIS traces its roots to 1982, when the Internet Engineering Task Force published a protocol for a directory service for ARPANET users. Initially, the directory simply listed the contact information that was requested of anyone transmitting data across the ARPANET.

As the Internet grew, WHOIS began to serve the needs of different stakeholders such as registrants, law enforcement agents, intellectual property and trademark owners, businesses and individual users, but the protocol remained fundamentally based on those original IETF standards. This is the WHOIS protocol that ICANN inherited when it was established in 1998. On 30 September 2009, ICANN and the U.S. Department of Commerce signed an Affirmation of Commitments (AOC), which recognizes ICANN as an independent, private, non-profit organization.

A key provision in the AOC stated that ICANN “commits to enforcing its existing policy relating to WHOIS, subject to applicable laws. Such existing policy requires that ICANN implement measures to maintain timely, unrestricted and public access to accurate and complete WHOIS information, including registrant, technical, billing, and administrative contact information.” The AOC also set up specific provisions for periodic reviews of WHOIS policy.

In 1999, ICANN began allowing other entities (registrars) to offer domain name registration services. Registries, in turn, are responsible for maintaining the authoritative database of names registered within each top-level domain.

Over the years, ICANN has used its agreements with registrars and registries to modify the WHOIS service requirements. These agreements set up the basic framework that dictates how the WHOIS service is operated. In addition, ICANN adopted several consensus policies aimed at improving the WHOIS service.

Responsibility for domain registration remained with DARPA as the ARPANET became the Internet during the 1980s. UUNET began offering domain registration service; however, it simply handled the paperwork, which it forwarded to the DARPA Network Information Center (NIC). The National Science Foundation later directed that management of Internet domain registration be handled by commercial, third-party entities. InterNIC was formed in 1993 under contract with the NSF, consisting of Network Solutions, Inc., General Atomics and AT&T. The General Atomics contract was canceled after several years due to performance issues.

Twentieth-century WHOIS servers were highly permissive and allowed wild-card searches. A WHOIS query on a person’s last name would yield all individuals with that name, a query on a keyword returned all registered domains containing that keyword, and a query for a given administrative contact returned all domains the administrator was associated with. Since the advent of the commercialized Internet, with its multiple registrars and unethical spammers, such permissive searching is no longer available.

On December 1, 1999, management of the top-level domains (TLDs) com, net, and org was assigned to ICANN, and these TLDs were converted to a thin WHOIS model. Existing WHOIS clients stopped working at that time. Within a month, a replacement client had added self-detecting Common Gateway Interface support, so that the same program could operate a web-based WHOIS lookup, along with an external TLD table to support multiple WHOIS servers based on the TLD of the request. This eventually became the model of the modern WHOIS client.

By 2005, there were many more generic top-level domains than there had been in the early 1980s, as well as many more country-code top-level domains. This has led to a complex network of domain name registrars and registrar associations, especially as the management of Internet infrastructure has become more internationalized. As a result, performing a WHOIS query on a domain requires knowing the correct, authoritative WHOIS server to use, and tools that perform WHOIS proxy searches have become common.
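One common way clients cope with this is referral following: first ask IANA’s Whois server which server is authoritative for the domain’s TLD, then repeat the query there. The sketch below illustrates the idea under the assumption that the reply from whois.iana.org carries a “refer:” line naming the authoritative server, which is typical of how such referral-following tools work; it repeats the minimal query helper shown earlier so the example is self-contained.

import socket

def whois_query(server: str, query: str, port: int = 43) -> str:
    # Minimal Whois transaction: send the text plus CRLF, read to EOF.
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall(query.encode("ascii", errors="ignore") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def find_authoritative_server(domain: str) -> str:
    # Ask IANA which Whois server handles the domain's TLD; the "refer:"
    # field in the reply names the authoritative server (an assumption
    # about the reply format, but typical of referral-following clients).
    tld = domain.rsplit(".", 1)[-1]
    for line in whois_query("whois.iana.org", tld).splitlines():
        if line.lower().startswith("refer:"):
            return line.split(":", 1)[1].strip()
    raise ValueError(f"no Whois referral found for .{tld}")

server = find_authoritative_server("example.com")
print(whois_query(server, "example.com"))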

Whois and ICANN

ICANN’s requirements for registered domain names state that the registration data collected at the time a domain name is registered must be made accessible. That is, ICANN requires accredited registrars to collect and provide free public access, such as through a Whois service, to information about the registered domain name, its nameservers and registrar, the date the domain was created and when its registration expires, and the contact information for the registered name holder, the technical contact, and the administrative contact.

The WHOIS protocol ICANN inherited has remained largely unchanged since 1999, in spite of over a decade of task forces, working groups and studies, and changes in privacy laws. As a result, WHOIS is at the center of a long-running debate at ICANN, among other Internet governance institutions, and in the global Internet community.

The evolution of the Internet ecosystem has created challenges for WHOIS in every area: accuracy, access, compliance, privacy, abuse and fraud, cost and policing. Questions have arisen about the fundamental design of WHOIS, which many believe is inadequate to meet the needs of today’s Internet, much less the Internet of the future. Concerns about WHOIS obsolescence are equaled by concerns about the costs involved in changing or replacing WHOIS.

WHOIS faces these challenges because its use has expanded beyond what was envisaged when its founding protocol was designed. Many more stakeholders make legitimate use of it in ways not foreseen by its creators. ICANN has therefore had to modify WHOIS over the years; the consensus policies on accuracy are a prime example, as is the introduction of validation and verification requirements in the 2013 Registrar Accreditation Agreement (2013 RAA).

There are other challenges to WHOIS as well. As the fight against fraud and abuse involving domain names has grown in importance, ICANN’s Security and Stability Advisory Committee recommended in SAC 38: Registrar Abuse Point of Contact that registrars and registries publish abuse point of contact information. This abuse contact would be responsible for addressing and providing timely responses to abuse complaints received from recognized parties, such as other registries, registrars, law enforcement organizations and recognized members of the anti-abuse community. Beginning in 2014, registrars under the 2013 RAA are required to publish WHOIS data that includes registrar abuse contacts.

Even with these modifications, there are calls in the community for improvements to the current WHOIS model. ICANN’s Generic Names Supporting Organization (GNSO) explores these areas and works to develop new policies to address each issue, as appropriate. Over the last decade, the GNSO has undertaken a series of activities to reevaluate the current WHOIS system, and has sought to collect data examining the importance of WHOIS to stakeholders.

Whois Protocol

The Whois protocol has its origin in the ARPANET NICNAME protocol, which was based on the NAME/FINGER protocol (described in RFC 742 from 1977). In 1982, the NICNAME/WHOIS protocol was presented for the first time in RFC 812 by Ken Harrenstien and Vic White of the SRI International Network Information Center. Whois was first used over the Network Control Program, but its lasting use came with the standardization of TCP/IP across the ARPANET and the Internet.

Whois Replacements/Alternatives

Due to shortcomings of the protocol, various proposals exist to augment or replace it. Examples include the Internet Registry Information Service (IRIS) and the newer IETF working group called WHOIS-based Extensible Internet Registration Data Service (WEIRDS), which was chartered to develop a REST-based protocol.
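The REST-based approach that grew out of the WEIRDS effort (now known as RDAP) returns registration data as structured JSON over HTTPS rather than free-form text over port 43. As a minimal sketch, assuming the public rdap.org redirector as an entry point (any registry’s own RDAP base URL would work equally well), a lookup reduces to an ordinary HTTP GET:

import json
import urllib.request

def rdap_lookup(domain: str) -> dict:
    # Fetch the RDAP record for a domain as JSON; rdap.org is used here
    # purely as an illustrative redirector to the responsible registry.
    url = f"https://rdap.org/domain/{domain}"
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response)

record = rdap_lookup("example.com")
# Field names such as "ldhName" and "status" follow the RDAP JSON
# response format; exact contents vary by registry.
print(record.get("ldhName"), record.get("status"))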

Thick Whois

A Thick Whois Server stores complete and accurate information from all registrars regarding registered domain names and their registrants. This information is available to the registry operator and it can facilitate bulk transfers of all domain names to another registrar in the event of a registrar failure. Thick Whois also enables faster queries.

In November 2011, ICANN staff issued a Preliminary Issue Report on ‘Thick’ Whois to determine whether the GNSO Council needed to conduct a Policy Development Process (PDP) regarding the Whois requirements placed on existing gTLDs. The ICANN community was divided on the issue. In a statement, Verisign said that it would “neither advocate for nor against the initiation of a PDP.” The company also argued that its Whois model for .com, .net, .name and .jobs is effective, but that if the Internet community and its customers believed thick Whois to be better, it would respect and implement the policy. The Intellectual Property Constituency supported thick Whois implementation, believing it would help prevent abuses of intellectual property rights and consumer fraud. On the other hand, Wendy Seltzer of the Non-Commercial Users Constituency (NCUC) expressed concern about the impact of further Whois expansion on privacy rights, pointing out that “Moving all data to the registry could facilitate invasion of privacy and decrease the jurisdictional control registrants have through their choice of registrar.”

In February 2012, the GNSO Council postponed its decision on whether Verisign should be required to implement a thick Whois database for .com and the other gTLDs under its management. The Policy Development Process on the issue was also delayed at the request of the NCUC. All registry operators except Verisign were required to implement thick Whois. In August 2012, the GNSO Council, along with two other ICANN constituencies, sent a letter to ICANN chastising it for its decision not to require Verisign to implement thick Whois for the .com TLD.

Wikimedia – The History of Domain Names

Wikimedia wins dispute over web hosting comparison site.

August 3, 2011

Wikipedia owner Wikimedia Foundation Inc. has won a case against the owner of WebhostingWikipedia.com, which will be transferred to Wikimedia as a result.

From reading the case decision, it’s fairly clear the owner of the web hosting comparison site at WebhostingWikipedia.com had no idea that Wikipedia was a trademark.

So let’s be clear: the generic term is wiki. Wikipedia is a specific site.

If you want to create a site similar to this, WebhostingWiki.com would be OK. (It’s already registered.)

The case was filed at the World Intellectual Property Forum. Wikimedia has filed nine cases with the outfit and hasn’t lost yet. Some of its other wins are SoftwareWikipedia.com and IndiaWikipedia.com.

Verisign – The History of Domain Names

Verisign, the operator of .net after acquiring Network Solutions, had its contract automatically renewed for another six years due to a clause in its agreement with ICANN

Date: 06/30/2011

Verisign, the operator of .net after acquiring Network Solutions, held an operations contract that expired on 30 June 2005. ICANN, the organization responsible for domain management, sought proposals from organizations to operate the domain upon expiration of the contract. Verisign won the bid and secured its control over the .net registry for another six years. On 30 June 2011, the contract with Verisign was automatically renewed for a further six years because of a resolution approved by the ICANN board, which states that renewal will be automatic as long as Verisign meets certain ICANN requirements.

ICANN and Verisign have agreed to extend their .com registry contract for another six years, but there are no big changes in store for .com owners.

Verisign will now get to run the gTLD until November 30, 2024.

The contract was not due to expire until 2018, but the two parties have agreed to renew it now in order to synchronize it with Verisign’s new contract to run the root zone.

Separately, ICANN and Verisign have signed a Root Zone Maintainer Agreement, which gives Verisign the responsibility to make updates to the DNS root zone when told to do so by ICANN’s IANA department. That’s part of the IANA transition process, which will (assuming it isn’t scuppered by US Republicans) see the US government’s role in root zone maintenance disappear later this year. Cunningly, Verisign’s operation of the root zone is technically intermingled with its .com infrastructure, using many of the same security and redundancy features, which makes the two difficult to untangle.

There are no other substantial changes to the .com agreement. Verisign has not agreed to take on any of the rules that apply to new gTLDs, for example. It also means wholesale .com prices will be frozen at $7.85 for the foreseeable future. The deal only gives Verisign the right to raise prices if it can come up with a plausible security/stability reason, which for one of the most profitable tech companies in the world seems highly unlikely.

Pricing is also regulated by Verisign’s side deal (pdf) with the US Department of Commerce, which requires government approval for any price increases until such time as .com no longer has dominant “market power”.

Viacom – The History of Domain Names

Viacom company buys Film.com

May 21, 2012

NextMovie picks up Film.com.

MTV Networks site NextMovie has acquired Film.com, the company announced last week.

MTV Networks is part of publicly traded Viacom.

This was not a domain name purchase; it was a full-fledged web business. Film.com has a U.S. Quantcast rank of about 15,000.

Still, it’s always noteworthy to see web sites built on fantastic domains like this change hands.