Net coverage

First Private Auction for New Generic Top Level Domains Completed: 6 gTLDs Valued at Over $9 Million

CircleID posts - Fri, 2013-06-07 05:40

On behalf of Innovative Auctions, I am very happy to announce that we've successfully completed the first private auction for generic Top Level Domains (gTLDs). Our auction resolved contention for 6 gTLDs: .club, .college, .luxury, .photography, .red, and .vote. Auction winners will pay a total of $9.01 million. All other participants will be paid from these funds in exchange for withdrawing their application.

In ICANN's gTLD Applicant Guidebook, applicants for gTLD strings that are in contention are asked to resolve the contention among themselves. ICANN did not further specify how to do that. Our Applicant Auction, designed by my colleague Peter Cramton, has now become the most successful—and proven—alternative to tedious multilateral negotiations. The first withdrawal as a result of our auction (an application for .vote) has already been announced by ICANN.

All participants—winners and non-winners alike—indicated that they were pleased with the results of the first Applicant Auction. "The auction system was clear, user-friendly, and easy to navigate," said Monica Kirchner of Luxury Partners, applicant for .luxury. "The process worked smoothly, and we're very happy with the outcome."

"The Applicant Auction process is extremely well organized and we were very pleased with the results for us" said Colin Campbell, of .CLUB LLC. "It is a fair and efficient way to resolve contention and support the industry at the same time, with auction funds remaining among the domain contenders."

Top Level Design's CEO Ray King praised the auction's execution. "The applicant auction process was great, the software functioned without a hitch and all of the folks involved were responsive and highly professional. We look forward to participating in future auctions with Innovative Auctions."

In the final days leading up to the auction, many single-string and multiple-string applicants expressed an interest in participating in private auctions in general and the Applicant Auction in particular. Antony van Couvering's insightful article on CircleID a few days ago lays out the reasons why his company TLDH will participate in private auctions, and Colin Campbell, who announced earlier today that his company was the winner for .club, predicts that "many other parties who stood by the sidelines in this first auction will participate in future Applicant Auctions."

We'll hold additional auctions in the coming months, on a schedule and under terms mutually agreed upon by applicants, to resolve contention for many more of the roughly 200 gTLDs still pending. Please direct questions to info@applicantauction.com.

Written by Sheel Mohnot, Project Director, Applicant Auction

Follow CircleID on Twitter

More under: ICANN, Top-Level Domains

Categories: Net coverage

BIND 9 Users Should Upgrade to Most Recent Version to Avoid Remote Exploit

CircleID posts - Thu, 2013-06-06 21:02

A remotely exploitable flaw in the BIND 9 DNS software could allow hackers to trigger excessive memory use, significantly impacting the performance of DNS and other services running on the same server.

BIND is the most popular open source DNS server, and is almost universally used on Unix-based servers, including those running on Linux, the BSD variants, Mac OS X, and proprietary Unix variants like Solaris.

A flaw was recently discovered in the regular expression implementation used by the libdns library, which is part of the BIND package. The flaw enables a remote user to cause the 'named' process to consume excessive amounts of memory, eventually crashing the process and tying up server resources to the point at which the server becomes unresponsive.

Affected BIND versions include all 9.7 releases, 9.8 releases up to 9.8.5b1, and 9.9 releases up to version 9.9.3b1. Only versions of BIND running on UNIX-based systems are affected; the Windows version is not exploitable in this way. The Internet Systems Consortium considers this a critical vulnerability.

All authoritative and recursive DNS servers running the affected versions are vulnerable.

The most recent versions of BIND in the 9.8 and 9.9 series have been updated to close the vulnerability by disabling regular expression support by default.

The 9.7 series is no longer supported, and those using it should update to one of the more recent versions. If that is not desirable or possible, there is a workaround, which involves recompiling the software without regex support. Regex support can be disabled by editing the BIND software's 'config.h' file and replacing the line that reads "#define HAVE_REGEX_H 1" with "#undef HAVE_REGEX_H" before running 'make clean' and then recompiling BIND as usual.
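
As a rough illustration, that one-line edit could be scripted as follows. This is a minimal sketch, assuming it is run from the top of a configured BIND source tree; back up config.h before running it.

    # Minimal sketch of the workaround described above: rewrite config.h
    # so that BIND builds without regex support. Assumes the script runs
    # in the top of the BIND source tree after './configure'.
    from pathlib import Path

    cfg = Path("config.h")
    text = cfg.read_text()
    cfg.write_text(text.replace("#define HAVE_REGEX_H 1",
                                "#undef HAVE_REGEX_H"))
    # Then run 'make clean' and recompile BIND as usual.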

At the time of the initial report, ISC stated that there were no active exploits for the vulnerability, but a user reported that he was able to develop and implement a working exploit in ten minutes.

While most of the major DNS providers, including DNS Made Easy, have patched and updated their software, DNS software on servers around the Internet tends to lag behind the most recent version. Because BIND is so widely used and DNS is essential to the functioning of the Internet, knowledge of this vulnerability should be disseminated as widely as possible to encourage system administrators to update.

It should be noted that this exploit is entirely unrelated to the widely publicized DNS problems that allow criminals to launch DNS amplification attacks. Those attacks depend on misconfigured DNS servers rather than a flaw in the software. However, both problems can be used to create a denial of service attack: open recursive DNS servers can be used to direct large amounts of data at their targets, effectively using the DNS as a weapon to attack other parts of the Internet's infrastructure, whereas the regex vulnerability could be used to attack the DNS itself.

Written by Evan Daniels

Follow CircleID on Twitter

More under: DNS, DNS Security

Categories: Net coverage

A Look Ahead to Fedora 19

CircleID posts - Thu, 2013-06-06 21:00

Fedora is the community-supported Linux distribution that often serves as a testing ground for features that eventually find their way into the Red Hat Enterprise Linux commercial distribution and its widely used noncommercial twin, CentOS. Both distributions are enormously popular on servers, so it's often instructive for sysadmins to keep an eye on what's happening with Fedora.

Fedora prides itself on being at the bleeding edge of Linux software, so all the cool new features tend to get implemented there before they are included in Ubuntu and the other popular distros.

Late May saw the release of the beta version of Fedora 19, AKA Schrödinger's Cat, which has a number of new features that will be of interest to developers, system administrators, and desktop users.

Updated Programming Languages

This release seems to be primarily focused on developers, who will be pleased to hear that many of the most popular programming languages used on the web are getting a bump.

Ruby 2.0 – This is the first major Ruby release in half a decade. It adds a number of new features to the language, including keyword arguments, a move to UTF-8 as the default source encoding, and many updates to the core classes.

PHP 5.5 – PHP 5.5 brings some great additions to everyone's favorite web programming language, including support for generators with the new "yield" keyword, and the addition of a new password hashing API that should make it easier to manage password storage more securely.

OpenJDK 8 – Those who really like to live on the bleeding edge can check out the technology review of OpenJDK 8, which won't be officially released until September (if all goes according to plan). This release is intended to add support for programming in multicore environments by adding closures to the language in addition to the standard performance enhancements and bug fixes.

Node.js – The Node.js runtime and its dependencies will be included as standard for the first time.

Developer's Assistant

The Developer's Assistant is a new tool that automates setting up an environment suitable for programming in a particular language. It takes care of installing compilers, interpreters, and their dependencies, and runs scripts to set environment variables and everything else necessary to create the perfect development environment for the chosen language.

OpenShift Origin

OpenShift Origin is an application platform intended for building, testing, and deploying Platform-as-a-Service offerings. It was originally developed for RHEL and is now finding its way into Fedora.

Desktop environments are also getting the usual version increment, with KDE moving to version 4.10 and GNOME getting a bump to 3.8.

If you want, you can give the new Fedora Beta a try by grabbing the image from their site. The usual caveats apply: you shouldn't use it in a production environment.

Written by Graeme Caldwell, Inbound Marketer for InterWorx

Follow CircleID on Twitter

More under: Web

Categories: Net coverage

The Pros and Cons of Vectoring

CircleID posts - Thu, 2013-06-06 20:11

Vectoring is an extension of DSL technology that coordinates line signals to reduce crosstalk and improve performance. It is based on the concept of noise cancellation: the technology analyses noise conditions on copper lines and creates a cancelling anti-noise signal. While data rates of up to 100Mb/s are achievable, as with all DSL-based services performance is distance related: the maximum bit rate is available only at a range of about 300-400 meters. Performance degrades rapidly as loop attenuation increases, and vectoring becomes ineffective beyond 700-800 meters. The technology is seen as an intermediate step towards full FttH networks.
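
As a rough intuition for the cancellation idea, consider this toy single-pair sketch. The numbers are invented; real vectoring, per ITU-T G.993.5, estimates a full crosstalk coupling matrix across many lines and tones rather than a single sinusoid.

    import numpy as np

    # Toy illustration of vectoring's noise-cancellation idea: estimate
    # the crosstalk coupled in from a neighbouring pair and inject its
    # inverse ("anti-noise") so it largely cancels at the receiver.
    t = np.linspace(0.0, 1.0, 1000)
    wanted = np.sin(2 * np.pi * 5 * t)            # the DSL signal
    crosstalk = 0.3 * np.sin(2 * np.pi * 17 * t)  # neighbouring pair
    estimate = 0.95 * crosstalk                   # imperfect estimate

    plain = wanted + crosstalk
    vectored = wanted + crosstalk - estimate

    print(f"residual, plain:    {np.max(np.abs(plain - wanted)):.3f}")
    print(f"residual, vectored: {np.max(np.abs(vectored - wanted)):.3f}")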

Vectoring is also specific to the DSL environment, being more appropriate to DSL LLU but becoming severely limited when applied with VDSL2 sub-loops unless all the lines are managed by the same system. Vectoring requires that all copper pairs of a cable binder are operated by the same DSLAM, and several DSLAMs need to work in combination in order to eliminate crosstalk. A customer's DSL modem also needs to support vectoring. Though the ITU has devised a Recommendation for vectoring (G.993.5), the technology is still under development and currently there remains a lack of standardisation across these various elements.

The quality of the copper network is also an issue, with better quality (newer) copper providing better results. Poorer quality copper cabling (e.g. with poorer insulation or less tightly twisted pairs) can result in higher crosstalk, and thus a higher degree of pair-related interference. Nevertheless, these issues can be addressed within the vectoring process.

Vectoring is also incompatible with some current regulatory measures, though again future amendments could bring a resolution to these difficulties. While Telekom Deutschland has been engaged in vectoring since late 2012, the technology requires regulatory approval since it is based on DSL infrastructure, and some services which TD must provide to competitors are incompatible with vectoring. As such, TD must negotiate with the regulator to remove those services from its service obligations. A partial solution may be achieved through the proposal that the regulator restrict total unbundling obligations for copper access lines to the frequency space below 2.2MHz.

Operators which have looked to deploy vectoring are being driven by cost considerations. The European Commission's target in its 'Digital Agenda 2020' is for all citizens in the region to have access to speeds of at least 30Mb/s by 2020, with at least half of all premises to receive broadband at over 100Mb/s. This presupposes fibre for most areas, with the possibility of LTE furnishing rural and remote areas. However, some cash-strapped incumbents are considering vectoring to enable them to meet these looming targets more cheaply, while still pursuing fibre (principally FttC, supplemented by FttH in some cities).

Belgium was an early adopter of vectoring: the incumbent Belgacom had been one of the first players to deploy VDSL1, which has since been phased out for the more widely used VDSL2, supplying up to 50Mb/s for its bundled services customers. The company's investment in vectoring will enable it to upgrade a portion of its urban customers more quickly and cheaply than would otherwise be possible with FttH. Yet it is perceived as a stop-gap measure to buy it time and to forestall customer churn to the cablecos, which have already introduced 100Mb/s and 120Mb/s services across their footprints and are looking to release 200Mb/s services or higher. The inherent limitations of copper, regardless of technological tweaking, will mean that Belgacom will have to follow Scandinavian operators and deploy 1Gb/s FttH services in order to keep pace with consumer demand for bandwidth for the next decade.

Vectoring technology has also been trialled by Telekom Austria as part of its FttC GigaNet initiative, as also by P&T Luxembourg, which in early 2013 contracted Alcatel-Lucent (one of the vendors leading vectoring R&D) to develop one of the world's first trials of combined VDSL2 bonding and vectoring technologies. The Italian altnet Fastweb is also investing in vectoring, in conjunction with a programme to deliver FttC to about 20% of households by the end of 2014. Fastweb's parent company Swisscom has budgeted €400 million for the project (as part of a wider FttC co-investment with Telecom Italia), costing each connection at about €100 per home. The low figure is partly explained by Fastweb being able to utilise its existing fibre networks. Nevertheless, Fastweb is aiming in the long term to have an FttH-based network across its footprint, having recently committed an additional €2 billion investment to 2016 and contracted Huawei to upgrade its network from 100Mb/s to 1Gb/s.

Written by Paul Budde, Managing Director of Paul Budde Communication

Follow CircleID on Twitter

More under: Access Providers, Broadband, Telecom

Categories: Net coverage

ISOC Funds 11 Projects that Enhance Internet Environments in Underserved Regions

CircleID posts - Thu, 2013-06-06 19:59

Each year, a number of projects around the world receive funding from the Internet Society to do everything from connecting Sri Lankan farmers with up-to-date sustainable agriculture information, to teaching ICT skills to at-risk youth in Africa, to working with local engineers to further their IPv6 implementation knowledge. These projects are planned and brought to life by Internet Society members.

The Internet Society today announced funding for 11 community-based Internet projects that will enhance the Internet ecosystem in underserved communities around the world. The Community Grants are awarded twice each year to Internet Society Chapters and Members. Recipients receive up to US$10,000 to implement their projects.

The 11 projects funded in this round of grants will:

  • Enable teachers and students in the Sultanate of Oman to produce and share video presentations that meet Omani curriculum standards and students' needs
  • Facilitate access to the Internet via a wireless mesh network for students, parents, and others in rural Panama, enabling them to use their own equipment at home
  • Provide research for an evidence-based ICT policy to help bridge the Internet divide in Ethiopia
  • Develop online resources to help Internet Society chapters effectively create and implement cost-effective video streaming to its membership and the wider community
  • Create a digital community of women in Science, Technology, Engineering, and Mathematics (STEM) in Kenya to serve as a virtual mentorship program
  • Support the Koh Sirae School in Thailand by enhancing their wireless network, updating the learning center and classrooms with laptops and workstations, and providing furniture for 1,000 children and 53 teachers
  • Empower and connect the women of Chuuk State in the Pacific Islands by establishing an Internet-connected computer lab at the Chuuk Women's Council (CWC) building and offering classes in ICT usage
  • Promote child online safety in Uganda by educating children, teachers and parents at three urban schools; developing a user guide; and advocating for sound policies that ensure Internet safety
  • Build a collaborative, independent, and transparent observatory that quantitatively assesses the Internet quality in Lebanon to help providers enhance their services and the Lebanese government accelerate the transition to broadband Internet
  • Jump start the establishment of an Internet of Things (IoT) community-operated space in the University of the Philippines, where people with shared interests in computers, technology, science, digital art, or electronic art can meet and collaborate
  • Initiate a movement that will encourage and facilitate university students majoring in ICT subjects to contribute their knowledge, skills, and time to teach ICT courses at Indonesia's rural high schools

The next application round opens in September. Additional information about the Community Grants Programme and these winning projects is available on the Internet Society website.

Follow CircleID on Twitter

More under: Access Providers, Broadband

Categories: Net coverage

Michele Neylon, Blacknight CEO Elected as Chair of Registrar Stakeholder Group of ICANN

CircleID posts - Thu, 2013-06-06 18:38

Michele Neylon, CEO of Blacknight, announced today his election as Chair of the Registrar Stakeholder Group of ICANN; he is the first European ever to hold this position.

The Registrar Stakeholder Group (RrSG) is one of several Stakeholder Groups within the ICANN community and is the representative body of domain name Registrars worldwide. It is a diverse and active group that works to ensure the interests of Registrars and their customers are effectively advanced.

The chair, in consultation with the executive committee and members, organises the work of the Stakeholder Group and conducts RrSG meetings. The chair often confers with others in the ICANN community on Registrar-related policy and business issues, and is the primary point of contact between the RrSG and ICANN staff. Neylon has previously served as the Secretary to the RrSG and is the only European member of the executive committee.

Follow CircleID on Twitter

More under: Domain Names, ICANN

Categories: Net coverage

One Year Later: Who's Doing What With IPv6?

CircleID posts - Thu, 2013-06-06 11:00

One year on from the World IPv6 Launch in June 2012, we wanted to see how much progress has been made towards the goal of global IPv6 deployment.

Both APNIC and Google are carrying out measurements at the end-user level, which show that around 1.29% (APNIC) and 1.48% (Google) of end users are capable of accessing the IPv6 Internet. Measurements taken at this time last year show 0.49% (APNIC) and 0.72% (Google), which means the number of IPv6-enabled end users has more than doubled in the past 12 months.

Rather than looking at the end user, the measurements the RIPE NCC conducts look at the networks themselves. To what extent are network operators engaging with IPv6? And how ready are they to deploy it on their networks?

IPv6 RIPEness

The RIPE NCC measures the IPv6 "readiness" of LIRs in its service region by awarding stars based on four indicators (a scoring sketch follows the list). LIRs receive stars when:

  • They receive an initial allocation of IPv6 address space from the RIPE NCC
  • The IPv6 address space is visible in global routing
  • There is a route6 object registered in the RIPE Database
  • Reverse DNS has been set up for the IPv6 address space
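
A minimal sketch of that four-indicator scoring; the field names here are illustrative assumptions, not the RIPE NCC's actual data model.

    # Hedged sketch of the four-indicator RIPEness scoring described
    # above; the dictionary keys are assumptions for illustration.
    def ripeness_stars(lir: dict) -> int:
        indicators = (
            "has_ipv6_allocation",        # allocation received
            "prefix_visible_in_routing",  # visible in global routing
            "route6_in_ripe_db",          # route6 object registered
            "reverse_dns_set_up",         # reverse DNS configured
        )
        return sum(1 for key in indicators if lir.get(key, False))

    print(ripeness_stars({"has_ipv6_allocation": True,
                          "prefix_visible_in_routing": True}))  # -> 2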

The pie charts below show the number of LIRs holding 0-4 RIPEness stars at the time of the World IPv6 Launch in June 2012, and the number today.

The first RIPEness star is awarded when the LIR receives an allocation of IPv6 address space. When we look at the charts above, we see that the number of LIRs without an IPv6 allocation has decreased from 50% at the time of the World IPv6 Launch to 39% today.

One factor that shouldn't be overlooked here is that the current IPv4 policy requires that an LIR receive an initial IPv6 allocation before it can receive its last /22 of IPv4 address space. However, this does not explain the increase in 2-4 star RIPEness, which can only come from LIRs working towards IPv6 deployment.

Five-Star RIPEness

At the recent RIPE 66 Meeting in Dublin, we presented the results from our introduction of a fifth RIPEness star, which is still in the prototype stage. This fifth star measures actual deployment of IPv6. It looks at whether LIRs are providing content over IPv6 and the degree to which they are providing IPv6 access to end users. More information on the fifth star and the methodology behind it can be found on RIPE Labs. In this first version, 573 LIRs in the RIPE NCC service region qualify for the fifth star, which represents 6.24% of all LIRs in the region.

The Day We Crossed Over

Coincidentally, the World IPv6 Launch came at around the same time as another milestone for the RIPE NCC service region: it was roughly then that, for the first time, LIRs with IPv6 allocations outnumbered those without. This number has continued to increase, and there are currently 5,630 LIRs with IPv6 and 3,584 without.

The blue line on the graph below represents LIRs with an IPv6 allocation, while the red line indicates those with no IPv6.

ASNs Announcing IPv6

One of the things the RIPE NCC regularly checks is the percentage of autonomous networks announcing one or more IPv6 prefixes into the global routing system. This is an important step before a network can begin exchanging IPv6 traffic with other networks.

When we take a global view using the graph, we see that in the year since the World IPv6 Launch, the percentage of networks announcing IPv6 has increased from 13.7% to 16.1%. Of the 44,470 autonomous networks visible on the global Internet, 7,168 are currently announcing IPv6.

When we adopt a regional perspective, one of the things we would hope to see is increasing IPv6 deployment in those regions where the free pool of IPv4 has been exhausted. It is reassuring to see this confirmed — both the APNIC and the RIPE NCC service regions are leading the way, with 20.0% and 18.1% (respectively) of networks announcing IPv6.

The table below compares the percentage of autonomous networks announcing IPv6 — both now and at the time of the World IPv6 Launch in 2012.

The RIPE NCC's graph of IPv6-Enabled Networks (below) shows this as a comparison over time and allows for comparisons between countries and regions.

Reassuring, But The Real Work Is Still Ahead

While the above statistics provide good cause for optimism, there is still a long way to go. Now, more than ever, network operators need to learn about IPv6 and deploy it on their networks in order to safeguard the future growth of the Internet. To find out more about IPv6, visit IPv6ActNow.

Written by Mirjam Kuehne

Follow CircleID on Twitter

More under: IPv6

Categories: Net coverage

IPv6: Less Talk and More Walk

CircleID posts - Wed, 2013-06-05 18:40

The sixth month of the year is both symbolic and historic for IPv6 and a good time to take stock and see how we've progressed. But instead of looking at the usual suspects of number of networks, number of users, number of websites, etc… on IPv6, let's look at some new trends to see what's happening.

At gogo6 we've been measuring the "Buzz" of the IPv6 market every week over the last two and a half years. Each tweet, blog and news story on IPv6 has been counted, categorized and indexed for posterity. By graphing the 102,641 tweets, 6,620 blogs and 4,251 news stories during that time we capture the "Talk" of the market. Reviewing Graph 1 shows spikes in the right places but what is striking is the definitive downward trend in volume as time goes on. The "Talk" is going down.

This could be interpreted as a slowing of interest or a job complete so next I dug into the gogoNET social network database. By plotting the registration dates of the 47,142 networking professionals who joined during this same period of time I could infer the level of interest and work being done in deploying IPv6. The resulting trend line in Graph 2 is flat indicating a constant interest and flow of networking professionals preparing to implement IPv6. These are the "Workers".

The fruit of this steadfast labor pool can be seen in Graph 3. Plotting the first derivative of the IPv6 Adoption curve generated by the Google Access Graph over the same period of time yields a normalized curve of new IPv6 users. Though the original data is noisy, there is a definitive upward trend, indicating that the rate of new users is increasing over time. And this is what I call the "Walk" — the tangible result of the constant stream of IPv6 workers.
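
The derivative trick in code, on invented numbers: differencing a cumulative adoption series yields the per-interval rate of new users, and an upward trend in that rate is the acceleration described above.

    import numpy as np

    # Sketch of the first-derivative idea: difference a cumulative
    # adoption series to get the rate of new users per interval.
    # The data points are invented for illustration.
    adoption_pct = np.array([0.72, 0.80, 0.92, 1.06, 1.25, 1.48])
    new_user_rate = np.diff(adoption_pct)
    print(new_user_rate)  # increasing values = adoption accelerating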

The big headline on this one-year anniversary of World IPv6 Launch is that the number of IPv6 users has doubled. Taking a closer look indicates a market starting to get the job done. Sure, there are more people using IPv6, but more importantly this is happening at an increasing rate — the result of a constant stream of new workers walking the walk by spending less time on navel gazing and more time on doing. Less talk and more walk.

Written by Bruce Sinclair, CEO, gogo6

Follow CircleID on Twitter

More under: IPv6

Categories: Net coverage

France Drops Its Internet "Three Strikes" Anti-Piracy Law

CircleID posts - Tue, 2013-06-04 18:53

France has put an end to the most extreme measure of its notorious "three strikes" anti-piracy law, which came into effect in 2009. Cyrus Farivar, reporting in Ars Technica: The law is better known by its French acronym, Hadopi. In the last few years under the law, the Hadopi agency famously set up a system with graduating levels of warnings and fines. The threat of being cut off entirely from the Internet was the highest degree, but that penalty was never actually put into place. "Getting rid of the cut-offs and those damned winged elephants is a good thing. They're very costly," said Joe McNamee of European Digital Rights.

Follow CircleID on Twitter

More under: Law

Categories: Net coverage

ICANN Auctions or Private Auctions?

CircleID posts - Tue, 2013-06-04 01:34

By this time next year the allocation of the new Internet namespace will be complete. Several hundred contention sets, ranging from likely blockbusters like .WEB to somewhat less obvious money-makers like .UNICORN, will be decided by some method.

One way to resolve contention is to form a joint venture. We are in the process of doing this with Uniregistry for .country. That works well when there are only two competitors and there's a good basis of trust, and it's a great solution because there are no losers. But if there are three or more competitors, or if you don't like and trust your prospective partner-to-be, this really isn't an option. Realistically, there will be only a limited number of joint ventures.

It may happen that you and a competitor are head-to-head on two strings and if so, a second method for resolving contention is a straight-across trade. It's not a bad solution: it's cashless, it's quick, and each party gets something. But it's not as easy as it might at first appear. Some people want to win everything, so they view this solution as a loss. And who gets to pick first? Is a random draw an acceptable solution?

If you can't manage one of these two solutions, you're left with either arranging a private deal with someone (again, not very realistic if there are more than two parties), or else you're going to auction. There are two kinds of auction being talked about: ICANN's "mechanism of last resort," or a "public auction" as it's often called; and the much-debated private auctions.

The ICANN auction is simple enough: it's an ascending floor auction. The auctioneer asks (electronically) who's in at $100K, $250K, $1M, $2M, and so on. The last one left standing pays their money and walks away with the TLD. The losers walk home with nothing except some memories and a 20% refund on their application fee. ICANN walks away with a bundle of cash to add to the dragon's hoard of $350M that they have already reaped in application fees.

A private auction is just an auction between companies, without ICANN. The parties involved decide on the rules, so it may take any auction format, but the favorite today is to ape the ICANN format exactly, with one important difference: instead of ICANN getting the money, it's split evenly among the losers.

If you have more than one or two applications, a little money can go a long way (it's also good for single applicants, see below). Let's suppose, for example, that you are head-to-head with someone for .tremendous. The bidding goes up and up, but in the end your competitor likes it more and pays you $2M for .tremendous. The next day, you and your competitor are back again for .fantastic. This time, you value it more, and you win, again for $2M. Result: you both have $2M and you both have a TLD. Except for the auctioneer's fee, it ends up being a cashless transaction.
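
The arithmetic of that example, sketched in code (ignoring the auctioneer's fee; the figures are the illustrative ones above, not real auction data):

    # Toy sketch of the cash recycling described above: the same $2M
    # funds a loss payout in one auction and a winning bid in the next.
    auctions = [
        ("tremendous", "lost", 2_000_000),  # competitor wins; we are paid
        ("fantastic",  "won",  2_000_000),  # we win; we pay the loser
    ]

    cash, tlds = 0, []
    for string, outcome, price in auctions:
        if outcome == "won":
            cash -= price
            tlds.append("." + string)
        else:
            cash += price

    print(f"net cash: ${cash:,}; TLDs won: {tlds}")
    # -> net cash: $0; TLDs won: ['.fantastic']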

Across multiple private auctions, this recycling of cash is writ large. With a modest war chest, you can lose more auctions and walk away with more money than you started with; you can win and lose in equal monetary proportions and end up cash-neutral; or you can try to win more value than you lose and spend your war chest in exchange for TLDs. As long as auction prices are stable relative to one another, even a modest amount of cash will enable you to walk away with a return. Compare this to ICANN auctions, where you get nothing if you lose, and winning one auction could mean that you're unable to compete in any others.

Is this analysis only relevant for portfolio players like us? On inspection, the logic of the benefit holds no matter how many strings you are in contention for. If you have a single string, you should bid up to that string's value — given your financial resources — in either a public or private auction. In each case, a competitor who places a higher value on the TLD, and who has the financial resources, will ultimately beat you. The question is: do you want to be compensated for that loss?

It took us a while to get our heads around private auctions. Actually, auctions of any kind take some time to understand, but at first blush an auction under the aegis of ICANN seemed safer; private auctions provoked a lot of questions. For instance, what if someone overbids in order to drive the price up and get a bigger payout from you? If you win, why should your money go to a competitor who might use your money to beat you at the next auction? Are the bid prices in general going to be higher or lower in a private auction?

The basic answer to all these questions is that you should bid up to what you think a TLD is worth, and no more. If you follow that rule, you should do well in a private auction. Auction participants must have an idea of what they believe a TLD is worth. For example, if Minds + Machines bid on .awesome, we would estimate how many .awesome registrations we could sell in a given year, how many premium .awesome names we could sell, and what the brand uptake might be for a .awesome sunrise. We would then translate that into a discounted net present value for the TLD, and in no case bid higher than that in either a public or private auction. Keeping this in mind should spare you all kinds of woe, and it's equally valid in an ICANN or a private auction.

What about someone overbidding to drive up the price? If you were up against Awesome Industries, the 10-billion-dollar king of awesome products, you might be tempted to overbid for .awesome in a private auction with the view of getting a higher pay out. But that's a dangerous strategy, because the reality is that at any instant Awesome could drop out, leaving you with a very, very expensive bid for something you don't think is really that… awesome. Everyone has their limit, even Awesome Industries.

On the flip side, you might worry that by winning .great for $2M in a private auction you will be providing cash to your competitor for the next auction, for .amazing. But if they overbid on .amazing, beating you, you should be pleased to take their money — leaving you with a bunch of cash as well as .great, the TLD you presumably consider more amazing than .amazing.

Nobody likes the idea of enriching the competition. But consider these options. You can either:

  • Give money to the competition if you win, but you get money if you lose; or
  • Give your competitors nothing if you win, but you get nothing if you lose.

Which is the rational choice? Game theory says that it's the first choice.

There's another important point in favor of private auctions: we believe that our competitors will do more to promote the TLD space in general (thereby helping us) than ICANN will. So we're actually happy about paying our competitors instead of ICANN, as we view it as an investment in the promotion of the entire new gTLD program.

We struggled for a while with the question of which auction process, ICANN or private, would produce the higher winning bids. We think that private auction prices will be lower, for the simple reason that people who've been at this as long as we have are inevitably going to fall victim to the dreaded sunk cost fallacy and are going to hate the idea of walking away with nothing. The loser's consolation prize mitigates this effect, we think, and works to keep people from overbidding just to avoid being totally skunked.

Even larger corporate players like Google or Amazon have an economic incentive to enter private auctions, because overall it will lower their cost of acquiring their name portfolios. We don't believe that Google will pay any amount for .LOL — and even Google and Amazon could find themselves outbid for some strings which are not core to their business models.

Then there is the issue of anti-trust. Do private auctions constitute bid-rigging? I talked to the Justice Department, who told me that they might or might not issue a letter giving guidance, which might or might not say that private auctions were good or bad. In other words, they told me nothing at all. We've all seen opinions issued by lawyers on both sides of the question, and we've had our own lawyers opine as well. It's clearly an untested area, and that means it carries some risk. In the end, we decided that while we are bound to follow the law, we are not bound — in fact we're not qualified, and neither is anyone else — to decide what the law might or might not become. In our experience, successful startups do not succeed without taking risks, and they do not succeed if they let themselves be ruled by lawyers pointing out potential risks — that way madness lies. Collusion carries with it an implication of secrecy, of back-room deals, but private auctions are advertised and anyone with a contested application can join in. ICANN, perhaps the most conservative, risk-averse, lawyer-driven organization in our industry, clearly encourages applicants to "work it out" and we think that private auctions are a fair and open way to do so. The first private auction is being held in a few days, and we'll see if anyone gets a letter or phone call from the government. We feel that it's a remote possibility.

Minds + Machines will proceed with private auctions. We won't participate in the first set of private auctions: given ICANN's history of delays, and the fact that it has not yet delegated a single new gTLD, we don't see any need to enter private auctions just yet. But we intend to do so when the time is right: it's to our benefit, to the benefit of our competitors, and to the industry generally. We also believe it's to ICANN's benefit and to the benefit of consumers, because while the ICANN auctions would leave applicants even more depleted of cash and unable to invest in marketing, research, and technology, private auctions will provide money to create healthy, vigorous registries that will fulfill ICANN's mission to create choice and competition in the top-level namespace.

Written by Antony Van Couvering, CEO of Minds + Machines

Follow CircleID on Twitter

More under: ICANN, Top-Level Domains

Categories: Net coverage

The Rise of Cyrillic Domain Names

CircleID posts - Mon, 2013-06-03 21:05

This week, on a cruise ship navigating Russia's Neva river, around 250 domain registrars and resellers are gathered for the RU-CENTER annual domain conference.

RU-CENTER is the largest Russian registrar in a market that is dominated by three companies. RU-CENTER and competitor Reg.Ru both manage around 28% of domains registered in the country's national suffix .RU, whilst a third registrar R-01 (also part of the RU-CENTER group of companies) has 18%.

RU-CENTER is also a figurehead for Russia's drive to make Internet use more palatable for those who are not natural ASCII writers. Because the Latin alphabet has been the only available option to browse the Internet up until now, and because Russian Internet users learn Latin characters anyway, having to use a foreign script has not dampened their drive to get online and reap the Web's many benefits.

But give them a chance to type in Cyrillic, and Russians jump at it. That became evident when the country launched its own Cyrillic country code, Dot RF (Latin script spelling). Pent up demand for full local language web addresses meant Dot RF exceeded all registration expectations as soon as it went online. Now in its third year, the TLD already has almost 800,000 registrations compared to the 4.5 million names in Russia's original (ASCII) ccTLD Dot RU.
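
Under the hood, a Cyrillic name reaches the ASCII-only DNS via the IDNA/Punycode conversion. A quick sketch using Python's standard-library idna codec (which implements the older IDNA 2003 rules):

    # Minimal sketch: IDNA converts a Cyrillic label to the "xn--" ASCII
    # form actually carried in the DNS. Standard library only.
    for label in ("рф", "москва"):
        print(label, "->", label.encode("idna").decode("ascii"))
    # рф -> xn--p1ai
    # москва -> xn--80adxhks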

That trend could grow as the new gTLD program allows further Cyrillic script suffixes to be created on the Internet. "Even with new gTLDs, it seems the market is underestimating the potential for Cyrillic domains," says RU-CENTER's Marketing Director Pavel Khramtsov. "We've only seen 8 IDN applications in Cyrillic script in this first round."

By initiating the implementation of Dot Moskva, RU-CENTER became one of these Cyrillic IDN pioneers. The domain is one of a pair, the other being the Latin character version Dot Moscow (www.domainmoscow.org). "That's where we expect the highest demand to come from," explains Sergey Gorbunov, head of RU-CENTER's International Relations division, "because Russians have become so used to Latin characters as the default that the ASCII string 'moscow' is likely to be considered the more recognisable brand when they both launch."

But the Muscovite pair may help change that. On top of the hunger for Cyrillic URLs that Russians have shown through Dot RF, the Moscow twins stand to help further the Cyrillic IDN cause because of their geography. The majority of people accessing the Internet in Russia do so from the Moscow area. "On average, around 40% of the total pool of Dot RU and Dot RF domains are registered by people from the Moscow area," Gorbunov says. The country as a whole has an Internet penetration rate of around 70%, with up to 50 million Russians going online every day, making Russia the number one source of Internet users in Europe (European economic heavyweight Germany is number two). But most of that traffic comes from Moscow.

So having a local-script TLD for Russia's capital may make the rise of the Cyrillic script on the Internet even more of a reality. Both TLDs have been prepared and applied for with the support of the local authorities. The project is being coordinated by a non-profit foundation called the Foundation for Assistance for Internet Technologies and Infrastructure Development (www.faitid.org), or FAITID for short (pronounced "fated"). FAITID's governance structure is based on the multi-stakeholder model and brings together users, local government, business and industry to ensure that Dot Moscow and Dot Moskva serve the local Internet community. As an example, FAITID and Moscow city officials are working on reserving a number of second-level domains, such as school.moscow or museum.moscow, for public service use.

The plan was to launch both TLDs at the same time. "For domains like Dot Moscow and Dot Moskva, it's easier to launch them as one, with a single roll-out and marketing plan," says Gorbunov. "It also means less confusion for the end-user." But ICANN's prioritisation draw put paid to those plans. As an IDN, Dot Moskva was a priority application in the December 2012 draw used by ICANN to determine the processing order for the 1,930 applications it has received. Moskva drew number 69, compared to Dot Moscow's 881: a huge gap, which means the only way for both launches to coincide is for one to be put on hold whilst the other plays catch-up. "Because of the draw results, FAITID is now planning a long Sunrise period for Dot Moskva before moving on to initiate the rest of the launch schedule when Dot Moscow gets the green light," Gorbunov reveals.

It's still unclear when that would be. Partly because of general contract negotiations currently going on between ICANN and both the registrars and the registries, which need to be resolved before ICANN can put contracts on the table for new gTLD operators to sign. But even when that happens, Moscow will have to wait for more contract negotiations to be done. This time, it will be direct talks between FAITID and ICANN. "The current registry contract as proposed has clauses which are illegal under Russian law," Gorbunov explains. "We also have problems with the trademark clearinghouse because Russian law requires FAITID to give Russia trademarks priority. So doing a Sunrise where the trademarks registered in the clearinghouse get priority is a challenge."

FAITID is hoping for a November Dot Moskva Sunrise. That assumes two months for these specific contract negotiations, so that the foundation no longer finds itself between a rock and a hard place, having to decide whether to run afoul of its national law by signing the ICANN contract as-is, or to refuse the contract and risk having to give up on its TLD application.

That would be a pity for the millions of Internet users around the world who want to be able to type their web addresses in their own alphabet. Especially as Russia's innovative and cutting-edge Internet community could push the IDN system as a whole to new heights. "Email use remains limited with IDNs," says Khramtsov. "Most of the systems currently available can only work if both sender and receiver use the same technology provider. But this limited use does have some advantages. Spam is non-existent with Dot RF emails for example, because the technology for doing IDN spam just isn't there."

Imagine an Internet where spam is heavily reduced by applying new techniques first developed for namespaces which are younger and can therefore afford to start afresh and apply new solutions to problems which have plagued the ASCII Internet since its inception. This is just one way in which the development of local-language web addresses could help the Internet as a whole, ASCII namespace included. A true embodiment of one of the new gTLD's program founding goals "to open up the top level of the Internet's namespace to foster diversity, encourage competition, and enhance the utility of the DNS."

Written by Stéphane Van Gelder, Chairman, STEPHANE VAN GELDER CONSULTING

Follow CircleID on Twitter

More under: DNS, Domain Names, ICANN, Multilinguism, Top-Level Domains

Categories: Net coverage

Switzerland Overtakes Romania as Top IPv6 Adopter

CircleID posts - Mon, 2013-06-03 20:30

According to recent statistics from Google, Switzerland has taken the top spot for IPv6 adoption, passing Romania, which had topped the charts for nearly a year.

Jo Best reporting in ZDNet: IPv6 adoption stands at 10.11 percent in Switzerland — the highest penetration of any country, according to stats from Google, which takes a snapshot of adoption by measuring the proportion of users that access Google services over IPv6… It's been suggested that the sudden spike in Switzerland's IPv6 adoption has been down to Swisscom, the country's biggest telco with around 55 percent of the broadband market and 60 percent of mobile, moving to adopt it.

Comparison of IPv6-Enabled Web Browsers in Different Countries (Source: Google)

Follow CircleID on Twitter

More under: IPv6

Categories: Net coverage

The Company You Keep

CircleID posts - Sat, 2013-06-01 20:59

This story started earlier this year, with a posting to the Australian network operators' mailing list asking if anyone had more information about why the web site operated by an outfit called "Melbourne Free University" was inaccessible through a number of major Australian ISPs. When they asked their local ISP if there was some issue, they were informed that this was due to an Australian government request, and that the ISP could say no more about it. This was unusual, as it was very hard to see how this site would fall under the gamut of Australian Internet censorship efforts, or fall foul of various law enforcement or security investigations. As the name suggests, their web site was all about a community-based educational initiative. To quote from it: "The Melbourne Free University provides a platform for learning, discussion and debate which is open to everyone [...] and aims to offer space for independent engagement with important contemporary ideas and issues." What dastardly crime had the good folk at the Melbourne Free University committed to attract such a significant response?

One Australian technology newsletter, The Delimiter (delimiter.com.au), subsequently reported that its investigation revealed that the Australian Securities and Investments Commission (ASIC) had used its powers under Section 313 of the Australian Telecommunications Act (1997) to demand that a network block be applied by local Internet Service Providers. That section of the Act falls under Part 14, the part described as "National Security Matters." The mechanics of the network block was a demand that all Australian ISPs block IP-level access to the IP address 198.136.54.104.

As it turned out in subsequent ASIC announcements, it wasn't Melbourne Free University that had attracted ASIC's interest. What had led to this block was an investment scam operation that had absolutely nothing in common with the Melbourne Free University. Well, almost nothing. They happened to use a web hosting company for their web site, and that web hosting company used name-based virtual hosting, allowing multiple web sites to be served from a common IP address. Some financial scammers attracted ASIC's interest, and ASIC formed the view that the scammers had breached provisions of Australian corporate and/or financial legislation. It used its powers under Section 313 to require Australian ISPs to block the web site of this finance operation. However, the critical aspect here was that the block was implemented as a routing block in the network, operating at the level of the IP address. No packets could get to that IP address from any customers of ISPs that implemented the requested network-level block. The result was that the financial scammers were blocked, but so were Melbourne Free University and more than a thousand other web sites.

At this point the story could head in many different directions. There is the predominantly Australian issue of agency accountability in the use of Section 313 of the Australian Telecommunications Act (1997) to call for the imposition of network-level blocks by Australian carriers, and concerns over the more general ability under this section for Australian government agencies to initiate such blocking of content without clear accountability and evidently without consistent reporting mechanisms. However, what specifically interests me here is not the issue of agency behaviors and matters of the application of national security interests and criminal investigations. What interests me here is that this story illustrates another aspect of the collateral damage that appears to have arisen from IPv4 address exhaustion.

How do we make too few IP addresses span an ever-growing Internet? Yes, you can renumber all the network's internal infrastructure into private addresses, and reclaim the public addresses for use by customers and services. Yes, you can limit the addresses assigned to each customer to a single address, and require the customer to run a NAT to share this address across all the devices in their local network. And these days you may well be forced to run up larger Carrier Grade NATs (CGNs) in the interior of your network so that each public IP address can be shared across multiple customers.

What about at the server side of the client/server network? If you can get multiple clients to share an address with a CGN, can you share a single public address across multiple services?

For the web service at least, the answer is a clear "yes". The reason this can be done so readily on the web is that the HTTP 1.1 protocol specification includes a mandatory Host request-header field (see "Hypertext Transfer Protocol — HTTP/1.1", R. Fielding et al., RFC 2616, June 1999). This field carries the DNS name part of the URL being referenced, and must be provided by the client with every GET request. When multiple DNS names share a common IP address, a web server can distinguish between the various DNS names and select the correct server context by examining the Host part of the request. This allows the server to establish the appropriate virtual context for every request.
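
A minimal sketch of the mechanism: two HTTP/1.1 requests to the same IP address, distinguished only by the Host header. The address and names here are hypothetical placeholders.

    import http.client

    # Two HTTP/1.1 GETs to one shared IP address; only the Host header
    # tells the server which virtual site is wanted. The address and
    # names are placeholders for illustration.
    SHARED_IP = "198.51.100.7"

    for name in ("site-one.example", "site-two.example"):
        conn = http.client.HTTPConnection(SHARED_IP, 80, timeout=5)
        conn.request("GET", "/", headers={"Host": name})
        response = conn.getresponse()
        print(name, response.status, response.reason)
        conn.close()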

This form of virtual hosting appears to be very common. It allows a large number of small scale web servers to co-exist on a single platform without directly interfering with each other, and allows service providers to offer relatively low priced web service hosting. And it makes highly efficient use of IP addresses by servers, which these days is surely a good thing. Right?

Well, as usual, it may be good, but it's not all good.

As Melbourne Free University can attest, the problem is that we have yet to come to terms with these address sharing practices, and as a result we are still all too ready to assign reputation to IP addresses, and filter or otherwise block these IP addresses when we believe that they are used for nefarious purposes. So when one of the tenants on a shared IP address is believed to be misbehaving, then it's the common IP address that often attracts the bad reputation, and it's the IP address that often gets blocked. And a whole lot of otherwise uninvolved folk are then dragged into the problem space. It seems that in such scenarios the ability of clients to access your service does depend on all your online neighbors who share your IP address also acting in a way that does not attract unwelcome attention. And while you might be able to vet your potential online neighbors before you move your service into a shared server, such diligence might not be enough, in so far as it could just as easily be the neighbor who moves in after you that triggers the problem of a bad reputation.

Exactly how widespread is address sharing on the server side?

I haven't looked at that question myself, but others have. There is a web site that will tell you who shares a given address. When I enter the address 198.136.54.104 into http://sameid.net I see that more than a thousand domain names are mapped to this particular IP address. So in the case of Melbourne Free University, they were relying on the assumption that none of these 1,000 unrelated online services would attract unwelcome attention.
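
If you want to check for yourself, a few lines of standard-library Python will group names by the address they resolve to. The names below are placeholders for whatever names you are curious about:

    # Group DNS names by the IPv4 address they resolve to.
    import socket
    from collections import defaultdict

    names = ["www.example.com", "example.net", "example.org"]  # placeholders
    by_address = defaultdict(list)

    for name in names:
        try:
            by_address[socket.gethostbyname(name)].append(name)
        except socket.gaierror:
            pass    # name did not resolve

    for address, hosted in by_address.items():
        print(address, "hosts:", ", ".join(hosted))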

Does this sharing apply to all forms of web services? Do secure web sites that use Transport Layer Security (TLS) always have their own IP address, or are we seeing sharing there as well? By default, sharing of secure web services requires that all the secure web service names that coexist on the same service IP address be named in the public key certificate used within the startup of the TLS session. This means that the transport key security is shared across all the services located at the same IP address, which, unless that set of services is actually operated by a single entity, represents an unacceptable level of compromise. For this reason, there is a general perception that if you want to use a TLS-secured channel for your service then you need your own dedicated IP address.

But that's not exactly true. Back in 2006 the IETF published RFC 4366, which describes extensions to TLS that allow each service on a shared service platform to use its own keys for TLS sessions. (This has subsequently been obsoleted by a revised technical specification, RFC 6066.) The way this works is that the client includes the name of the server it wants when it starts the TLS session, allowing the server to select the desired service context and start the session using keys associated uniquely with the named service. So if both the server and the client support this Server Name Indication (SNI) extension to TLS, then it is possible to use name-based server sharing and also support secured sessions for service access. If you are running a recent software platform as either a server or a client then it is likely that SNI will work for you. But a client that does not support SNI, such as one on the still relatively ubiquitous Windows XP platform, or on Android version 2 or earlier, will not send this TLS extension, will encounter certificate warnings, and will be unable to use the appropriate secure channel. At this stage SNI support still appears to be relatively uncommon, so while it is feasible to use a shared server platform for a secure service, most secure services tend to avoid doing so, using a dedicated IP address instead and not requiring this extension functionality from TLS.
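
On the client side, sending SNI is typically a one-argument affair these days. Here is a minimal sketch using Python's standard ssl module — the hostname is a placeholder, and the server_hostname argument is what places the name in the TLS handshake so a shared server can pick the matching certificate:

    import socket
    import ssl

    hostname = "www.example.com"    # placeholder service name
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as raw:
        # server_hostname sends the SNI extension in the ClientHello
        with context.wrap_socket(raw, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("certificate subject:", cert.get("subject"))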

But back to the issue of shared service platforms and IP-level imposed filtering.

Why are IP addresses being targeted here? Why can't a set of distinct services share a common platform, yet lead entirely separate online lives? If diverse unrelated services are located at a common IP address, then perhaps a filter could be constructed at the DNS level rather than by blocking traffic to the IP address. Certainly the DNS has been used as a blocking mechanism in some regimes. In the world of imposed filters we see efforts directed at both the DNS name and the IP address. Both have their weaknesses.

DNS filters attempt to deny access to a resource by requiring all DNS resolvers in a particular regime not to return an IP address for a particular DNS name. The problem here is that circumvention is possible for those who are determined to get around such imposed DNS filters. There is a range of counter-measures, including using resolvers located in another regime that does not block the DNS name, running your own DNS resolver, or patching the local host by adding the blocked DNS entry to the local hosts file. The general ease of circumvention supports the view that the DNS filter approach is akin to what Bruce Schneier refers to as "security theatre."[7] In this case the desired outcome is to be seen to be doing something that gives the appearance of improved security, as distinct from actually doing something that truly improves the security of the system.
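
As a sketch of how low the hurdle is, here is the first of those counter-measures — directing queries to a resolver outside the filtering regime. This assumes the third-party dnspython package; the resolver address and query name are placeholders:

    import dns.resolver     # third-party: pip install dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["9.9.9.9"]   # a resolver beyond the local filter

    # resolve() is the dnspython 2.x call; older versions use query()
    for rdata in resolver.resolve("www.example.com", "A"):
        print(rdata.address)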

An IP block is the other readily available service filter mechanism. Its implementation can be as simple as an eBGP feed of the offending (or offensive) IP addresses where the BGP next hop address is unreachable. It has been argued that such network filter mechanisms are harder to circumvent than a DNS-based filter, in that you need to perform some form of tunneling to pass your packets over the network filter point. But in these days of Tor, VPNs and a raft of related IP tunnel mechanisms, that's hardly a forbidding hurdle. It may require a little more thought and configuration than simply using an open DNS service to circumvent a DNS block, which may make it a little more credible as an effective block. So it should not be surprising that many regulatory regimes use this form of network filtering on IP addresses as a means of implementing blocks. However, this approach assumes that IP addresses are not shared, and that blocking an IP address is synonymous with blocking the particular service located at that address. These days that's not a very good universal assumption. While many services are uniquely bound to a dedicated IP address, many others exist on shared platforms, where the IP address is no longer unique to just one service.
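
The collateral damage is easy to demonstrate: a block keyed on a single IP address takes down every name that maps to it. A standard-library sketch, with hypothetical names:

    import socket

    blocked_ips = {"198.136.54.104"}    # the address on the block list

    def is_reachable(name):
        """True if the name resolves to an address outside the block list."""
        try:
            return socket.gethostbyname(name) not in blocked_ips
        except socket.gaierror:
            return False

    # Every name mapped to the blocked address now fails, whether or not
    # it had anything to do with the original complaint.
    for name in ["target-site.example", "innocent-neighbor.example"]:
        print(name, "reachable:", is_reachable(name))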

It's not just the "official" IP blocks that can cause collateral damage in the context of shared service platforms. In the longstanding effort to counter the barrage of spam in the email world, the same response of maintaining blocking filters based on domain name and IP address "reputation" is used. Once an IP address gains a poor reputation and is placed on these lists as a spam originator, it can be a challenging exercise to "cleanse" the address, as such lists of spamming IP addresses exist in many forms and are maintained in many different ways.
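
The way such reputation lists are consulted is itself a DNS trick: reverse the octets of the address, append the list's zone, and query. A minimal Python sketch — the zone shown is one widely used list, and listing and query policies vary, so treat this as illustrative:

    import socket

    def is_listed(ip, zone="zen.spamhaus.org"):
        """Query a DNSBL: an answer means listed, NXDOMAIN means clean."""
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:
            return False

    print(is_listed("127.0.0.2"))   # conventional DNSBL test entry: True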

In sharing IP addresses, it's not just the collection of formal and informal IP filters that poses potential problems for your service. In the underworld of Denial of Service (DoS) attacks, the packet-level saturation attack is also based on victimizing an IP address. So even if your online neighbor has not attracted some form of official attention, and has not been brought to the attention of the various spam list maintainers, there is still the risk that the neighbor has managed to invite the unwelcome attentions of a DoS attack. Here again your online service becomes part of the collateral damage: when the attack overwhelms the common service platform, all the hosted services inevitably fall victim to the attack.

For those who can afford it, including all those who have invested what is for them a significant sum of money and effort in their online service, using a dedicated service platform, and a dedicated IP address, is perhaps an easy decision to make. When you share a service platform, your online presence is always going to be vulnerable to the vagaries of your neighbors' behavior. But there are costs involved in such a decision, and if you cannot afford it, or do not place such a premium value on your online service, then a shared service platform often represents an acceptable compromise between price and service integrity. Yes, the risk of your neighbors attracting unwelcome attention is a little higher, but that may well be an acceptable risk for your service.

And in those cases when your service is located on a shared service platform, if the worst does happen, and you find that your service has fallen foul of a sustained DDoS attack launched at the common service platform, or your service address has become the subject of some government agency's IP filter list, or is listed on some spam filter, you may take some small comfort in the knowledge that it's probably not personal. It's not about you. But there may well be a problem with the online company you keep.

Postscript: What about www.melbournefreeuniversity.org? They moved hosts. Today they can be found at 103.15.178.29, on a server rack facility operated within Australia. Are they still sharing? Well, sameid.net reports that they share this IP address with www.vantagefreight.com.au. I sure hope that they’ve picked better online company this time around!

Written by Geoff Huston, Author & Chief Scientist at APNIC

Follow CircleID on Twitter

More under: IP Addressing

Categories: Net coverage

Moving Beyond Telephone Numbers - The Need for a Secure, Ubiquitous Application-Layer Identifier

CircleID posts - Fri, 2013-05-31 19:18

Do "smart" parking meters really need phone numbers? Does every "smart meter" installed by electric utilities need a telephone number? Does every new car with a built-in navigation system need a phone number? Does every Amazon Kindle (and similar e-readers) really need its own phone number?

In the absence of an alternative identifier, the answer seems to be a resounding "yes" to all of the above.

At the recent SIPNOC 2013 event, U.S. Federal Communications Commission CTO Henning Schulzrinne gave a presentation (slides available) about "Transitioning the PSTN to IP" in which he made a point about the changes around telephone numbers and their uses (starting on slide 14) and specifically spoke about this use of phone numbers for devices (slide 20). While his perspective is obviously oriented to North America and country code +1, the trends he identifies point to a common problem:

What do we use as an application-layer identifier for Internet-connected devices?

In a subsequent conversation, Henning indicated that one of the area codes seeing the largest number of requests for new phone numbers is one in Detroit — because of automakers' need to provision new cars with navigation systems such as OnStar, which need an identifier.

Why Not IPv6 Addresses?

Naturally, doing the work I do promoting IPv6 deployment, my first reaction was of course:

"Can't we just give all those devices IPv6 addresses and be done with it?"

The answer turns out to be a bit more complex. Yes, we can give all those devices IPv6 addresses (and almost certainly will as we are simply running out of IPv4 addresses), but:

1. Vendors Don't Want To Be Locked In To Infrastructure – Say you are a utility and you deploy 1,000 smart meters in homes in a city that all connect back to a central server to provide their information. They can connect over the Internet using mobile 3G/4G networks and in this case they could use an IPv6 address or any other identifier. They don't need to use a telephone number when they squirt their data back to the server. However, the use of IP addresses as identifiers then ties the devices to a specific Internet Service Provider. Should the utility wish to change to a different provider of mobile Internet connectivity, they would now have to reconfigure all their systems with the new IPv6 addresses of the devices. Yes, they could obtain their own block of "Provider Independent (PI)" IPv6 addresses, but now they add the issue of having to have their ISP route their PI address block across that provider's network.

2. Some Areas Don't Have Internet Connectivity – In some places where smart meters are being deployed, or where cars travel, there simply isn't any 3G/4G Internet connectivity and so the devices have to connect back to their servers using traditional "2G" telephone connections. They need a phone number because they literally have to "phone home".

While we might argue that #2 is a transitory condition while Internet access continues to expand, the first issue of separating the device/application identifier from the underlying infrastructure is admittedly a solid concern.

Telephone Numbers Work Well

The challenge for any new identifier is that telephone numbers work rather well. They are:

  • easily understood – people in general are very comfortable with and used to phone numbers (assuming they have access to phone networks)
  • ubiquitous – phone numbers are everywhere and are available globally
  • well defined – they have a fixed format that is well known and standardized
  • easy to provision – they can be entered and configured very easily, including via keypads, speech recognition and more

For all these reasons, it is understandable that device vendors have chosen phone numbers as identifiers.

The Billing / Provisioning Conundrum

The last bullet above points to a larger issue that will be a challenge for any new identifier. Utilities, telcos and other industries have billing and provisioning systems that in some cases are decades old. They may have been initially written 20 or 30 (or more) years ago and then simply added on to in the subsequent years. These systems work with telephone numbers because that's what they know.

Changing them to use new identifiers may be difficult or in some cases near impossible.

So Why Change?

So if telephone numbers work so well and legacy systems are so tied to those numbers, why consider changing?

Several reasons come to mind:

1. Security – There really is none with telephone numbers. As Henning noted in his presentation and I've written about on the VOIPSA blog in the past, "Caller ID" is easily spoofable. In fact, there are many services you can find through a simple search that will let you easily do this for a small fee. If you operate your own IP-PBX you can easily configure your "Caller ID" to be whatever you want and some VoIP service providers may let you send that Caller ID on through to the recipient.

2. OTT mobile apps moving to desktop (and vice versa) – Many of the "over the top (OTT)" apps that have sprung up on iOS and Android devices for voice, video or chat communication started out using the mobile device's phone number as an identifier. It's a simple and easy solution as the device already has the number. We're seeing some of those apps, though, such as Viber, now move from the mobile space to the desktop. Does the phone number really make sense there? Similarly, Skype made the jump from desktop to mobile several years ago using its own "Skype ID" identifier — no need for a phone number there.

3. WebRTC – As I've written before, I see WebRTC as a fundamental disruption to telecommunications on so many different levels. It is incredibly powerful to have browser-based communication via voice, video or chat… in any web browser… on any platform, including ultimately mobile devices. But for WebRTC to work, you do need some way to identify the person you are calling. "Identity" is a key component here — and right now many of the WebRTC systems being developed are individual silos of communication (which in many cases may in fact be fine for their specific use case). WebRTC doesn't need phone numbers — but some kind of widely-accepted application-layer identifier could be helpful.

4. Global applications – Similarly, this rise of WebRTC and OTT apps has no connection to geography. I can use any of these apps in any country where I can get Internet connectivity (and yes, am not being blocked by the local government). I can also physically move from country to country either temporarily or permanently. Yet if I do so I can't necessarily take my phone number with me. If I move to the US from the UK, I'll probably want to get a new mobile device — or at least a new SIM card — and will wind up with a new phone number. Now I have to go back into the apps to change the identifier used by the app to be that of my new phone number.

5. Internet of Things / M2M – As noted in the intro to this post, we're connecting more and more devices to the Internet. We've got "connected homes" where every light switch and electrical circuit is getting a sensor and all appliances are wired into centralized systems. Devices are communicating with other devices and applications. We talk about this as the "Internet of Things (IoT)" or "machine-to-machine (M2M)" communication. And yes, these devices all need IP addresses — and realistically will need to have IPv6 addresses. In some cases that may be all that is needed for provisioning and operation. In other cases a higher-level identifier may be needed.

6. Challenges in obtaining phone numbers – We can't, yet, just go and obtain telephone numbers from a service the way we can for domain names. Obtaining phone numbers is a more involved process that, for instance, may be beyond many WebRTC startups (although they can use services that will obtain phone numbers for them). One of the points Henning made in his SIPNOC presentation was that the FCC is actually asking for feedback on this topic. Should phone numbers within the US be made more easily obtainable? But even if this were done within the US, how would it work globally?

7. Changes in user behavior – Add to all of this the fact that most of us have stopped remembering phone numbers and instead simply pull them up from contact/address books. We don't need a phone number any more… we just want to call someone; the underlying identifier is no longer critical.

All of these are reasons why a change to a new application-layer identifier would be helpful.

So What Do We Do?

What about SIP addresses that look like email addresses? What about other OpenID or other URL-based schemes? What about service-specific identifiers? What about using domain names and DNS?

Henning had a chart in his slides that compared these different options ("URL owned" is where you own the domain).

The truth is there is no easy solution.

Telephone numbers are ubiquitous, understood and easy-to-use.

A replacement identifier needs to be all of that… plus secure and portable and able to adapt to new innovations and uses.

Oh… and it has to actually be deployable within our lifetime.

Will there be only one identifier as we have with telephone numbers?

Probably not… but in the absence of one common identifier we'll see what we are already seeing — many different islands of identity for initiating real-time communications calls:

  • Skype has its own proprietary identity system for calls
  • Apple has its own proprietary identity system for FaceTime calls
  • Google has its own proprietary identity system for Hangouts
  • Facebook has its own proprietary identity system used by some RTC apps
  • Every WebRTC startup seems to be using its own proprietary identity system.
  • A smaller community of people who care about open identifiers are actually using SIP addresses and/or Jabber IDs (for XMPP/Jingle).

And in the meantime, Amazon is still assigning phone numbers to each of its Kindles, the utilities are assigning phone numbers to smart meters and automakers are embedding phone numbers in cars.

How can we move beyond telephone numbers as identifiers? Or are we already doing so but into proprietary walled gardens? Or are we stuck with telephone numbers until they just gradually fade away?

Related Notes:
Some additional pointers are worth mentioning:

• The Internet Society (my employer) has a team focused on the broader subject of online privacy and identity (beyond simply the telephone numbers I mention here), and the links and documents there may be of interest.

• There's a new Internet Draft out, draft-peterson-secure-origin-ps, that does an excellent job on the problem statement around "secure origin identification" as it relates to VoIP based on the SIP protocol and why there are security issues with what we think of as "Caller ID".

• Chris Kranky recently argued that telcos are missing the opportunity of leveraging telephone numbers as identifiers in the data world.

Written by Dan York, Author and Speaker on Internet technologies

Follow CircleID on Twitter

More under: Internet Protocol, IPv6, Telecom, VoIP

Categories: Net coverage

Why Trademark Owners Should Think Twice Before Reclaiming Domains

CircleID posts - Fri, 2013-05-31 01:42

A recent kerfuffle involving Italian chocolate and confectionery producer Ferrero SpA and fan Sara Rosso is the latest example of how important it is for companies to consider carefully the domain and user names they decide to reclaim. Sometimes, enforcing trademark rights online can go really wrong, really quickly.

In 2007, Ms. Rosso chose February 5 to be "World Nutella Day" — a time when "Nutella Lovers Unite for One Day!" She built a web presence around Nutella Day that included a nutelladay.com website.

Nutelladay.com did everything a brand could hope for from a brand advocate: It encouraged people to go out and buy Nutella to use in scores of listed recipes; it created awareness of the brand and its fan-base by giving tips on how to get involved with and spread the word on World Nutella Day; and it created a strong emotional bond with the brand, giving people a place to share stories about the first time they tried the chocolate/hazelnut spread.

It was powerful stuff. Browsing through the site made me nostalgic about my Polish grandmother, who introduced me to Nutella when I visited her one summer in Bytom, a small city in the southern part of Poland, about an hour's drive from the Czech Republic. She hoped the Nutella was close enough to the peanut butter that I ate in the U.S. Oh boy, that made me one happy 8-year-old.

Ms. Rosso's campaign not only had a great web presence, it came from a loyal fan who dedicated her own time to promote a product she loved.

Then, in a bizarre move, Ferrero issued a cease-and-desist letter to Ms. Rosso, who said she would comply. That sparked a public battering of Ferrero in publications such as The Huffington Post, Mashable, Business Insider, and Adweek. Reversing itself, Ferrero stopped legal action against Ms. Rosso and began backtracking.

Adweek reported that the brand called the incident "a routine procedure in defense of trademarks." But it moved quickly to undo the damage it had done. The company expressed "its sincere gratitude to Sara Rosso for her passion for Nutella, which extends gratitude to all the fans of the World Nutella Day" and noted that the brand is "lucky to have a fan of Nutella so devoted and loyal as Sara Rosso." Ms. Rosso posted the update on NutellaDay.com, and noted that Ferrero had been "gracious and supportive."

But you know this story will become a case study in how not to pursue trademarks online. FairWinds Partners would have advised Ferrero SpA to get its marketing, trademark, and domain name experts in the room together when deciding which domains to reclaim, to discuss the risks and benefits based on certain criteria — one of which, missed in the Nutella case, is: how harmful is the content on the domain name in question?

Written by Yvette Miller, Vice President of Communications and Marketing, FairWinds Partners

Follow CircleID on Twitter

More under: Cybersquatting, Domain Names

Categories: Net coverage

The Role of Trust in Determining a New TLD's Business Success

CircleID posts - Fri, 2013-05-31 00:25

Warren Buffett famously said, "It takes twenty years to build a reputation and five minutes to ruin it."

Like it or not, every Top-Level Domain (TLD) is a brand in the eyes of the consumer. So, just how important is trust in the success of the new top-level domains?

I'm no branding expert, but I grasp that no brand, no matter how memorable, will achieve its goals if it does not gain the public's trust. TLDs are no different. Several TLDs in the past have learned this the hard way by running pricing promotions that flooded their namespace with undesirable content or behavior. Once a TLD is tagged as untrustworthy, that reputation is difficult to erase from the public's mind.

In the future, building trust will be an even bigger issue for those TLDs that implicitly make some sort of "promise" about the type of registrants who are using the TLD to promote themselves.

The public will approach new TLDs in one of two ways: some will begin the relationship with little trust, which must be earned over time; others will begin with trust freely given, but withdrawn forever at the first sign of behavior deemed untrustworthy. Either way, trust must be established by the TLD or there will be no relationship.

So, how does a new TLD build trust?

We are working with several TLD applicants that have decided that their business success depends on checking the credentials of registrants up-front. These are typically TLDs whose chosen string represents some sort of recognizable community of special interests. Their rationale is simple: the success of their business depends on building trust in their TLD, and one of the best ways to achieve this is by checking registrants' eligibility for the TLD up front.

The ICANN Governmental Advisory Committee (GAC) thinks that this decision should not be left up to applicants whose strings fall into one of twelve categories: children, environmental, health and fitness, financial, gambling, charity, education, intellectual property, professional services, corporate identifiers, generic geographical terms and inherently governmental functions.

Potentially, hundreds of new TLDs are impacted by this advice. Whether this last-minute intervention by the GAC means sabotage or rescue for some of these TLDs is a question that will have to wait for history.

Let's assume that ICANN decides that vetting potential registrants is a business decision best left to the applicants.

Trust needs to be built with two key TLD constituencies: registrants and visitors to the new domains.

Trust from potential registrants

Potential registrants of these new domains will assess the risk to their business or their personal reputation in making an investment in the new domain name. The investment is not just financial but also emotional, as they will need to decide how far to go in their adoption of the new TLD.

Trust from visitors to these new domains

Success will also hinge on whether visitors to new websites on these new domains trust the website. If end-users consider a website to be suspect or somewhat shady, then registrants will abandon their investment in the new domains.

Hypothetical example:

At the risk of over-simplification, let's use a hypothetical TLD: .SURGEON.

Let's say the .SURGEON applicant has proposed an open and unrestricted namespace. Anyone who wants a .SURGEON domain name can get one. The applicant's argument for this is that although medical doctors might represent a significant registrant population of the TLD, there are also other people who consider themselves "surgeons", including the "Turkey Surgeon", the grandpa who carves the turkey on Thanksgiving Day; and the "Tree Surgeon", the chainsaw-owning brother-in-law who advertises on Craigslist. The argument is: If you restrict .SURGEON domain names just for medical doctors, then you disenfranchise these other valid uses of the domain.

Thus, no up-front review of credentials takes place for .SURGEON. Clearly, there is little potential harm in someone claiming to be a "Turkey Surgeon". To address more serious cases, such as medical doctor imposters, the applicant may propose community policing to catch such registrants.

But there may be another important question this applicant needs to ask themselves first: How important is trust to the success of my business? And can I achieve this trust without checking or otherwise policing the credentials of my registrants?

"But I have a really strong Acceptable Use and anti-abuse policy!"

EVERY TLD applicant is promising to be vigilant about policing abuse in their TLD. The GAC has called for such safeguards to be mandatory in all new TLDs. I call these principles "Motherhood and Apple Pie". The problem is that they are reactive rather than proactive. Once a TLD lets the wrong registrants in, it is likely that the public will encounter them before the registry is aware of the problem. And by then it may be too late.

What types of TLDs should care most about trust?

All new TLDs offer some sort of brand promise that will be delivered to registrants and visitors alike. There is a subset of TLDs that implicitly promise a lot more. Most, but not all, fall into one of the 12 categories identified by the GAC. Many of these TLD applicants have decided that their business success depends on building trust in their TLD by checking the credentials of registrants up-front. These applicants fall into three general categories:

  1. A well-defined community of people that share the same special interest, affinity or membership
  2. A well-defined geographical area that wants to give preferences to businesses and others living within their geography
  3. A highly-regulated profession or industry that requires some sort of credential, such as a business or professional license

Is a leap of faith the TLD's sole branding strategy?

Every TLD applicant will need to decide how to build trust in their new TLD. The question is, how will they do it? Will they vet up-front, or expect the public to take a leap of faith?

The right answer could be a combination of these two. But how this question is answered may well determine the business success of the TLD.

Even if the GAC advice is not made mandatory for the 12 categories it has identified, it may still make good business sense for the sub-set of TLD applicants planning to run completely open and unrestricted TLDs to take the extra step of vetting their registrants… unless these applicants can think of a better way to build trust in their TLD.

Written by Thomas Barrett, President - EnCirca, Inc

Follow CircleID on Twitter

More under: Top-Level Domains

Categories: Net coverage