CircleID posts

Latest posts on CircleID

gTLD Contention Auction in May: Request for Comments

Tue, 2013-04-23 02:15

Many gTLD applicants with strings in contention have already heard about the Applicant Auction, a voluntary private auction for resolving string contention that my colleagues and I are organizing. In this post we'd like to share some updates on our progress.

Most importantly, we realized that more than just an escrow agent is needed for the success of a private auction of this scale, and we have partnered with Morrison & Foerster LLP, a global law firm, which will act as the neutral party for our auctions.

We had the opportunity to talk to many applicants in Beijing last week, and we received some great feedback and suggestions. We have distilled these conversations into a more detailed proposal, covering the schedule, policies on which information is published and which is kept confidential, the procedure for handling withdrawals, the handling of bid deposits, and more.

Although many applicants have been asking us to hold an auction as soon as possible and several have already committed to participate in the first auction, we would like to give all applicants a chance to review the proposal and submit final comments, until Thursday this week (11pm UTC).

Based on the applicants' input, the final schedule and rules for the first auction will then be published by Tuesday, April 30, and applicants interested in participating can then sign up their TLDs in an online enrollment system.

We have summarized some of the suggested changes below, and we encourage participants to take a look at the full RFC and send us comments:

Schedule:

We propose beginning Thursday, May 2, with publication of the auction rules and other legal documents, and we plan to hold the auction on Thursday, May 23. Interested parties will need to commit online by May 8. Dates are subject to change with input from participating applicants.

Information policy:

As presented in the workshops, all bidders participating in a given auction can see the number of bidders still bidding for a domain in each round, for all domains being auctioned. However, the winning price is not disclosed to all bidders; only bidders for a particular domain can see the price at which the domain was sold. Amounts of bids and deposits will be kept strictly confidential.

Withdrawal procedure:

Several applicants asked: What if I don't win in the auction, and, as required, I withdraw my application, but some of my fellow non-winning competitors don't? We took this concern very seriously and propose the following solution:

Before the auction, bidders irrevocably authorize the neutral party to request a withdrawal with ICANN on their behalf. In addition, bidders that do not win are required to withdraw their applications via ICANN's online system and send a screenshot to the neutral party, along with a withdrawal statement signed by the bidder and two witnesses confirming that the seller performed the withdrawal. A bidder who does not submit proof of withdrawal will forfeit their deposit, and Morrison & Foerster LLP will take legal steps, if necessary, to execute the withdrawal. For bidders who do submit proof, the deposit is held until the neutral party has ensured that the withdrawal took place. ICANN has assured us that withdrawals will be made public within 48 hours, and the neutral party will not release any payments or deposits until withdrawals have been confirmed by ICANN.

Deposit:

As noted previously, each applicant must make a deposit of at least 20% of the maximum amount the applicant would like to be able to bid. The deposit must be at least $80,000. The purpose of the minimum deposit is to help ensure that bidders who didn't win in the auction withdraw their application. To level the playing field for single-domain applicants who had requested this, we also made an important change from the previously proposed policy: the effective deposit does not increase if a participant becomes a seller for a TLD, and payments received from one TLD cannot be used to pay for another TLD within that auction. Applicants who are participating in the auction with more than one TLD must make the minimum deposit for each TLD.
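To make the arithmetic of that rule concrete, here is a minimal Python sketch. The 20% ratio and the $80,000 floor come from the proposal described above; the function names and the idea of deriving a bid ceiling from a chosen deposit are my own illustrations, not part of the official auction rules.

```python
# Illustrative sketch of the proposed deposit rule (not official auction code).
# Assumptions: deposit must be at least 20% of the applicant's maximum intended
# bid, and never less than $80,000, per the proposal described above.

DEPOSIT_RATIO = 0.20
MINIMUM_DEPOSIT = 80_000  # USD

def required_deposit(max_intended_bid: float) -> float:
    """Deposit an applicant would need for a given maximum bid."""
    return max(DEPOSIT_RATIO * max_intended_bid, MINIMUM_DEPOSIT)

def bid_ceiling(deposit: float) -> float:
    """Highest bid a given deposit would support under the 20% rule."""
    if deposit < MINIMUM_DEPOSIT:
        raise ValueError("Deposit is below the $80,000 minimum")
    return deposit / DEPOSIT_RATIO

if __name__ == "__main__":
    print(required_deposit(1_000_000))  # 200000.0
    print(bid_ceiling(80_000))          # 400000.0
```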

We hope that the procedure we proposed adequately captures the feedback we received from applicants. Overall, there were surprisingly few topics on which we had to come up with a compromise; in most cases, applicants' preferences were in agreement. Where we did have to find a balance between different perspectives, we hope we have found solutions that will satisfy all applicants' concerns.

We look forward to receiving comments to the Request For Comments posted on the applicant auction website.

Written by Sheel Mohnot, Consultant


More under: ICANN, Top-Level Domains

Categories: Net coverage

SIP Network Operators Conference (SIPNOC) Starts Tonight in Herndon, Virginia

Tue, 2013-04-23 02:03

Tonight begins the third annual SIP Network Operators Conference (SIPNOC) in Herndon, Virginia, where technical and operations staff from service providers around the world will gather to share information and learn about the latest trends in IP communications services — and specifically those based on the Session Initiation Protocol (SIP). Produced by the nonprofit SIP Forum, SIPNOC is an educational event sharing best practices, deployment information and technology updates. Attendees range from many traditional telecom carriers to newer VoIP-focused service providers and application developers.

The SIPNOC 2013 agenda includes talks on:

  • VoIP and communications security
  • Business strategies for service providers
  • Regulatory and policy issues
  • Multiple sessions about WebRTC and how that will change IP communications
  • IPv6 and VoIP
  • HD audio
  • Standards relating to VoIP and SIP

The main sessions begin tomorrow with a keynote presentation from FCC CTO Henning Schulzrinne where I expect he will talk about some of the challenges the FCC has identified as they continue to push the industry to move away from the traditional PSTN to the world of IP communications.

I've very much enjoyed the past SIPNOC conferences and will be back there again this year leading sessions about: IPv6 and VoIP; how DNSSEC can help secure VoIP; and a couple of sessions related to VoIP security. I'm very much looking forward to the discussions and connections that get made there — and if any of you are attending I look forward to meeting you there.

SIPNOC 2013 will not be livestreamed, but if you are in the DC area (or can easily get there), registration is still open for the event. I suspect you'll also see some of us tweeting with the hashtag #sipnoc.

Written by Dan York, Author and Speaker on Internet technologies


More under: DNS Security, IPv6, Security, Telecom, VoIP

Categories: Net coverage

ICANN Releases 5th Round of Initial Evaluation Results - 169 TLDs Pass

Mon, 2013-04-22 17:37

Mary Iqbal writes to report that ICANN has released the fifth round of Initial Evaluation results, bringing the total number of applications that have passed the Initial Evaluation phase to 169. ICANN is targeting completion of Initial Evaluation for all applicants by August 2013. To learn more, see: http://www.getnewtlds.com/news/Fifth-Round-of-Initial-Evaluations.aspx


More under: ICANN, Top-Level Domains

Categories: Net coverage

Why Donuts Should Win All Wine New gTLD Applications

Mon, 2013-04-22 17:30

There are two reasons why Donuts, applicant for more than 300 Top-Level Domains, should become the official Registry for the wine applications.

• It is not because of the content of its application: there are three applicants in total, and all of them followed the rules provided by ICANN in its Applicant Guidebook.
• It is not because they protect the wine industry: the Applicant Guidebook did not "force" applicants to do so.
• It is not because they are American: there are also very good wines in Gibraltar and Ireland. In Gibraltar in particular.

So what are the reasons why Donuts is the right Registry for wine applications?

1) Donuts applied for both .VIN and .WINE Top-Level Domains.

Imagine a Registrant (the person who buys these domain names) who is able to register a domain name in .VIN but not in .WINE. That is what will probably happen if .VIN is owned by one applicant and .WINE by another. The same applies if the rules are different, if the Registrars are not the same, or if the launch dates are different (note this will probably happen anyway). If Donuts "wins" both applications, chances are high that a Registrant who has already registered a .vin domain will be served first when acquiring the matching .wine domain.
Having both wine applications in the hands of the same Registry is far more interesting for the end user: you don't want to buy your next car from two different garages.

2) Donuts is now experienced

Some institutions and I, involved in the protection of wine Geographical Indications, asked ICANN one question: "How are wine Geographical Indications going to be protected?" Note that this is not the only issue here; wine Trademarks won't be any better protected either, but at least our voice has been heard on one question.
The result of this long effort to inform wine institutions, of Project dotVinum, of their public comments, of my publications in the print and online press, of their questions to ICANN and more, ended in a GAC Advice.

"GAC" stands for "Governmental Advisory Committee": basically, it is a group founded by ICANN which represents Governments on such questions. Countries have their word to say when a question related to new gTLDs is a problem. The GAC advice is very important because the problem of protecting wine Geographical Indications is a serious issue for the wine Community and the GAC now seems to be the only body able to force ICANN to "do something about it". It started in 2010 with the dotVinum project. Only in 2013 ICANN listens to it…

So, why Donuts and not another applicant?

Donuts, through the GAC Early Warning procedure, was asked by France and Luxembourg to offer a protection mechanism for wine Geographical Indications in its .VIN application or to withdraw it. No solution was found between the applicant and the French Government, and this situation led to the same question being asked for both .WINE and .VIN. There have been many exchanges on this question. There is now a deadline set in July 2013 to answer it, and... all of this is going through the public comment procedure.
Donuts is the right applicant because it is the one facing these questions with Governments, and unless ICANN drops the matter in Durban, the reports on my desk suggest that Donuts is now the most experienced applicant to help find a solution… or not.

Many things can now happen:

• ICANN could "drop it" by not paying much attention to this question in Durban. This would lead to no protection for wine Geographical Indications. I wrote to its CEO with a solution but it looks like they do not want to confirm they received it;
• Donuts could drop its .VIN application: after all, they have more than 300, so why bother;
• ICANN could block, at the source, the registration of such second-level domains in all new registries to be launched;
• ICANN could force the .WINE and .VIN applicant(s) to protect wine Geographical Indications in their TLDs only;
• With all the promotion I am doing on both .WINE and .VIN, other applicants could decide to "bid high" to win .WINE, or find an arrangement with the other applicants and myself to make these TLDs a success;
• ICANN could decide to reject all wine applications because they do not offer sufficient protection mechanisms;
• ...

As a reminder, the .VIN application has prioritization number 618 on a list of 1,917; if a solution to the wine Geographical Indication question were found quickly, it need… not be delayed.

Written by Jean Guillon, New generic Top-Level Domain specialist


More under: Top-Level Domains

Categories: Net coverage

A Primer on IPv4, IPv6 and Transition

Sun, 2013-04-21 22:57

There is something badly broken in today's Internet.

At first blush that may sound like a contradiction in terms, or perhaps a wild conjecture intended only to grab your attention to get you to read on. After all, the Internet is a modern day technical marvel. In just a couple of decades the Internet has not only transformed the global communications sector, but its reach has extended far further into our society, and it has fundamentally changed the way we do business, the nature of entertainment, the way we buy and sell, and even the structures of government and their engagement with citizens. In many ways the Internet has had a transformative effect on our society that is similar in scale and scope to that of the industrial revolution in the 19th century. How could it possibly be that this prodigious technology of the Internet is "badly broken?" Everything that worked yesterday is still working today isn't it? In this article I'd like to explain this situation in a little more detail and expose some cracks in the foundations of today's Internet.

You see, it's all about addresses. In a communications network that supports individual communications it's essential that every reachable destination has its own unique address. For the postal network it's commonly your street address. For the traditional telephone network it's your phone number. This address is not just how other users of the network can select you, and only you, as the intended recipient of their communication. It's how the network itself can ensure that the communication is correctly delivered to the intended recipient. The Internet also uses addresses. In fact the Internet uses two sets of addresses. One set of addresses is for you and me to use. Domain names are the addresses we enter into web browsers, or what we use on the right hand side of the @ in an email address. These addresses look a lot like words in natural languages, which is what makes them so easy for us humans to use. The other set of addresses is used by the network. Every packet that is passing through the Internet has a field in its header that carries the network address of the packet's intended recipient: its "destination address." This address is a 32 bit value. A 2 bit field has four possible values, a 3 bit field has eight possible values, and by the same arithmetic a 32 bit field has 2 to the power 32, or some 4,294,967,296 unique values.
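For readers who want to check the arithmetic, the short Python sketch below (purely illustrative; the example address is a standard documentation address, not one mentioned in the article) confirms that a 32-bit field yields the 4,294,967,296 values quoted above, and shows that a dotted-quad address is simply that 32-bit number written in a friendlier notation.

```python
# A quick back-of-the-envelope check of the numbers above, using only the
# standard library. The example address is a documentation address (192.0.2.0/24),
# chosen purely for illustration.
import ipaddress

print(2 ** 3)     # 8          -> a 3-bit field has eight possible values
print(2 ** 32)    # 4294967296 -> the total IPv4 address space

# Every IPv4 address is just a 32-bit number written in dotted-quad form.
addr = ipaddress.IPv4Address("192.0.2.1")
print(int(addr))                         # the same address as a single integer
print(ipaddress.IPv4Address(int(addr)))  # and back again: 192.0.2.1
```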

If every reachable device on the Internet needs a unique address in order to receive packets, then does that mean that we can only connect at most some 4 billion devices to the Internet? Well, in general terms, yes! And once we reach that hard limit of the address size, should we expect to encounter problems? Well, in general terms, yes!

Running out of addresses in any communications network can pose a massive problem. We have encountered this a number of times in the telephone network, and each time we've managed to add more area codes, and within each area we've added more in-area digits to telephone numbers to accommodate an ever-growing population of connected telephone handsets. Every time we've made this change to the address plan of the telephone network we needed to reprogram the network. Luckily, we didn't need to reprogram the telephone handsets as well. We just had to re-educate telephone users to dial more digits. With care, with patience, and with enough money this on-the-fly expansion of the telephone system's address plan can be undertaken relatively smoothly. But this approach does not apply to the Internet. The address structure of the Internet is not only embedded into the devices that operate the network itself; the very same address structure is embedded in every device that is attached to the network. So if, or more correctly, when, we run out of these 32 bit addresses on the Internet we are going to be faced with the massive endeavour of not only reprogramming every part of the network, but also reprogramming every single device that is attached to the network. Given that the Internet today spans more than 2.3 billion users and a comparable number of connected devices, this sounds like a formidable and extremely expensive undertaking.

Frank Solensky's Report on Address Depletion, Proceedings of IETF 18, p. 61, Vancouver, August 1990 (PDF)

If running out of IP addresses is such a problem for the Internet then you'd like to hope that we could predict when the ominous event would occur, and then give ourselves plenty of lead time to dream up something clever as a response. And indeed we did predict this address depletion. Some 23 years ago, in August 1990, when the Internet was still largely a research experiment and not the foundation bedrock of the global communications enterprise, we saw the first prediction of address runout. At the time Frank Solensky, a participant in the Internet Engineering Task Force (IETF), extrapolated the growth of the Internet from the emerging experience of the US National Science Foundation's NSFNET, and similar experiences in related academic and research projects, and predicted that the pool of addresses would run out in some 6-10 years' time.

The technical community took this message to heart, and started working on the problem in the early 1990's.

From this effort emerged a stop gap measure that, while not a long term solution, would buy us some urgently needed extra time. At the time the Internet's use of addresses was extremely inefficient. In a similar manner to a telephone address that uses an area code followed by a local number part, the Internet's IP address plan divides an IP address into a network identifier and a local host identifier. At the time we were using an address plan that used fixed boundaries between the network identification part and the host identification part. This address plan was a variant of a "one size fits all" approach, where we had three sizes of address blocks within the network: one size was just too big for most networks, one size was too small, and the only one left was capable of spanning an Internet of just 16,382 networks. It was this set of so-called "Class B" address blocks that Frank Solensky predicted would run out in four years' time.

So what was the stop gap measure? Easy. Remove the fixed boundaries in the address plan and provide networks with only as many addresses as they needed at the time. It was hoped that this measure would give us a few more years of leeway to allow us to develop a robust long term answer to this address problem. The new address plan was deployed on the Internet in early 1993, and for a couple of years it looked like we were precisely on track, and, as shown in Figure 2, this small change in the address plan, known as Classless Inter-Domain Routing (CIDR), would buy us around 2 or 3 years of additional time to work on a longer term approach to IP address exhaustion.

Figure 2 – CIDR and Address Consumption
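To make the contrast between fixed and flexible boundaries concrete, here is a small illustrative Python sketch (my own example, not taken from the article): under the old classful plan an allocation came in only three sizes, whereas CIDR lets the prefix length, and hence the block size, match actual need.

```python
# Illustrative only: how many addresses a given prefix length provides.
# Classful addressing offered just three block sizes (/8, /16, /24);
# CIDR allows any prefix length, so allocations can match actual need.

def addresses_in_prefix(prefix_len: int) -> int:
    """Number of IPv4 addresses in a block with the given prefix length."""
    return 2 ** (32 - prefix_len)

for prefix in (8, 16, 24):            # the old Class A / B / C sizes
    print(f"/{prefix}: {addresses_in_prefix(prefix):,} addresses")

# With CIDR, a network needing roughly 1,000 addresses can receive a /22
# instead of being rounded up to an entire /16.
print(f"/22: {addresses_in_prefix(22):,} addresses")
```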

As things turned out, we were wrong in that 2 — 3 year estimate.

The reason why we were wrong was that a second stop gap measure was also developed in the early 1990's. This new technology cut right to the heart of the architecture of the Internet and removed the strict requirement that every attached device needed its own unique address on the Internet.

The approach of Network Address Translators (NATs) allowed a collection of devices to share a single public IP address. The devices located "behind" a NAT could not be the target of a new communication, so that, for example, you could not host a web service if you were behind a NAT, but as long as the devices behind the NAT initiated all communications, the NAT function became invisible, and the fact that an IP address was being shared across multiple devices was effectively irrelevant. In a model of clients and servers, as long as you only placed the clients behind a NAT it was possible to share a single IP address across multiple clients simultaneously.
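The toy Python sketch below illustrates the general idea only, not any particular NAT implementation: the device rewrites the private source address and port of outbound traffic to its single public address, remembers the mapping, and uses that memory to deliver replies back to the right internal device. Unsolicited inbound traffic finds no mapping, which is why hosting a service from behind a NAT is hard.

```python
# Toy model of a NAT translation table (illustrative and heavily simplified:
# no timeouts, no protocol state, a single public IP address).

class SimpleNAT:
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 40000
        self.outbound = {}   # (private_ip, private_port) -> public_port
        self.inbound = {}    # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip: str, private_port: int):
        """Rewrite an outgoing packet's source to the shared public address."""
        key = (private_ip, private_port)
        if key not in self.outbound:
            self.outbound[key] = self.next_port
            self.inbound[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.outbound[key]

    def translate_in(self, public_port: int):
        """Deliver a reply to the internal device that opened the connection.

        Unsolicited inbound packets have no mapping, which is why a host
        behind a NAT cannot easily act as a server."""
        return self.inbound.get(public_port)  # None if nothing initiated it

nat = SimpleNAT("203.0.113.7")
print(nat.translate_out("192.168.1.10", 51000))  # ('203.0.113.7', 40000)
print(nat.translate_out("192.168.1.11", 51000))  # ('203.0.113.7', 40001)
print(nat.translate_in(40001))                   # ('192.168.1.11', 51000)
print(nat.translate_in(12345))                   # None: no mapping exists
```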

The emerging retail ISP industry took up this NAT technology with enthusiasm. The provisioning model for retail Internet services was for a single IP address provided for each connected service, which was then shared by all the computers in the home using a NAT that was embedded into the DSL or cable modem that interfaced the home network to the service provider network. The IP address consumption levels dropped dramatically, as it was no longer a case of requiring a new IP address for each connected device, but instead requiring a single IP address for each connected service. And as the home collected more connected devices, none of these devices drew additional addresses from the IP address pool.

Instead of buying a couple of years of additional breathing space to design a long term solution to address depletion, the result of the combination of classless addressing and NATs was that it looked like we had managed to push the issue of address depletion out by some decades! Around 2001, the most optimistic projections of address longevity suggested that IPv4 address depletion might not occur for some decades, as the address consumption rate had flattened out, as shown in Figure 3.

Figure 3 – CIDR, NATs and Address Consumption

Perhaps it may have been an unwarranted over-reaction, but given this reprieve the industry appeared to put this entire issue of IP address depletion in the Internet onto the top shelf of the dusty cupboard down in the basement.

As events turned out, that level of complacency about the deferral of address depletion was misguided. The next major shift in the environment was the mobile Internet revolution of the last half of the 2000's. Before then mobile devices were generally just wireless telephones. But one major provider in Japan had chosen a different path, and NTT DOCOMO launched Internet-capable handsets onto an enthusiastic domestic market in the late 1990's. The rapid year-on-year expansion of its mobile Internet service piqued the interest of mobile service operators in other countries. And when Apple came out with a mobile device that included a relatively large, well-designed screen, good battery life, an impressive collection of applications and of course a fully functional IP protocol engine, the situation changed dramatically. The iPhone was quickly followed by devices from a number of other vendors, and mobile operators quickly embraced the possibilities of this new market for mobile Internet services. The dramatic uptake of these services implied an equally dramatic level of new demand for IP addresses to service these mobile IP deployments, and the picture for IP address depletion once more changed. What was thought to be a problem comfortably far in the future once more turned into a here and now problem.

Figure 4 – Address Consumption

Even so, we had exceeded our most optimistic expectations and instead of getting a couple of years of additional breathing space from these stop gap measures, we had managed to pull some 15 additional years of life out of the IPv4 address pool. But with the added pressures from the deployment of IP into the world's mobile networks we were once more facing the prospect of imminent address exhaustion in IPv4. So it was time to look at that long term solution. What was it again?

During the 1990's the technical community did not stop with these short term mitigations. They took the address depletion scenario seriously, and considered what could be done to define a packet-based network architecture that could span not just billions of connected devices but hundreds of billions of devices or more. Out of this effort came version 6 of the Internet Protocol, or IPv6. The changes to IPv4 were relatively conservative, apart from one major shift. The address fields in the IP packet header were expanded from 32 bits to 128 bits. Now every time you add a single bit you double the number of available addresses. This approach added 96 bits to the IP address plan. Yes, that's 340,282,366,920,938,463,463,374,607,431,768,211,456 possible addresses!
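The size of that number is easy to verify. A short, purely illustrative Python check reproduces it, along with the factor by which the 96 extra bits multiply the old 32-bit address space.

```python
# Verifying the IPv6 address-space arithmetic quoted above.
print(2 ** 128)   # 340282366920938463463374607431768211456 possible addresses
print(2 ** 96)    # the factor gained by adding 96 bits to a 32-bit field
print(2 ** 128 == (2 ** 32) * (2 ** 96))  # True: 32 + 96 = 128 bits
```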

This approach to IPv6 appeared to adequately answer the need for a long term replacement protocol with enough addresses to fuel a rapacious silicon industry that can manufacture billions of processors each and every year. However, there was one residual annoying problem. The problem arises from one of the underlying features of the Internet's architecture: IP is an "end-to-end" protocol. There is no defined role for intermediaries in packet delivery. In the architecture of the Internet, what gets sent in a packet is what gets received at the other end. So if a device sends an IPv4 packet into the network, what comes out is an IPv4 packet, not an IPv6 packet. Similarly, if a device sends an IPv6 packet into the network then what comes out at the other end is still an IPv6 packet. The upshot of this is that IPv6 is not "backward compatible" with IPv4. In other words, setting up a device to talk the "new" protocol means that it can only talk to other devices that also talk the same protocol. Such a device is completely isolated from the existing population of Internet users. What were these technology folk thinking in offering a new protocol that could not interoperate with the existing protocol?

What they were thinking was that this was an industry that was supposedly highly risk averse, and that once a long term replacement technology was available then the industry would commence broad adoption well before the crisis point of address exhaustion eventuated. The idea was that many years in advance of the predicted address exhaustion time, all new Internet devices would be configured to be capable of using both protocols, both IPv4 and IPv6. And the idea was that these bilingual devices would try to communicate using IPv6 first and fall back to IPv4 if they could not establish a connection in IPv6. The second part of the transition plan was to gradually convert the installed base of devices that only talked IPv4 and reprogram them to be bilingual in IPv6 and IPv4. Either that, or send these older IPv4-only devices to the silicon graveyard!
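A minimal Python sketch of that "try IPv6 first, fall back to IPv4" behaviour is shown below. It is illustrative only: real operating systems and browsers implement considerably more sophisticated fallback logic, and the host name used here is just a placeholder.

```python
# Illustrative dual-stack connection attempt: prefer IPv6, fall back to IPv4.
# "example.com" is a placeholder host, not a reference from the article.
import socket

def connect_prefer_ipv6(host: str, port: int) -> socket.socket:
    candidates = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Sort so that IPv6 (AF_INET6) results are tried before IPv4 (AF_INET).
    candidates.sort(key=lambda c: 0 if c[0] == socket.AF_INET6 else 1)
    last_error = None
    for family, socktype, proto, _canonname, sockaddr in candidates:
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(sockaddr)
            return sock                      # first successful connection wins
        except OSError as err:
            last_error = err                 # try the next candidate address
    raise last_error or OSError("no addresses returned for host")

if __name__ == "__main__":
    conn = connect_prefer_ipv6("example.com", 80)
    print("connected via", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")
    conn.close()
```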

The transition plan was simple. The more devices on the Internet that were bilingual the more that the conversations across the network would use IPv6 in preference to IPv4. Over time IPv4 would essentially die out and support for this legacy protocol would be no longer required.

However one part of this plan was critical. We were meant to embark on this plan well before the time of address exhaustion, and, more critically, we were meant to complete this transition well before we used that last IPv4 address.

Figure 5 – The IPv6 Transition Plan

And to some extent this is what happened. Microsoft added IPv6 to its operating systems from the mid 2000's with the Windows Vista and Windows Server 2008 products. Apple similarly added IPv6 to its Mac OS X system from around 2006. More recently, IPv6 support has been added to many mobile devices. These days it appears that around one half of all devices connected to the Internet are bilingual in IPv6 and IPv4. This is indeed a monumental achievement, and much of the effort of re-programming the devices attached to the Internet to speak the new protocol has been completed. So we are all ready to switch over the Internet to use IPv6, yes? Well, no, not at all.

So what's gone wrong?

Many things have not gone according to this plan, but perhaps there are two aspects of the situation that deserve highlighting here.

Firstly, despite the addition of IPv6 to the popular computer platforms, the uptake of IPv6 in the network is just not happening. While there was a general view that the initial phase of IPv6 adoption would be slow, the expectation was that the use of IPv6 would accelerate along exponentially increasing lines. But so far this has not been all that evident. There are many metrics of the adoption of IPv6 in the Internet, but one of the more relevant and useful measurements is that relating to client behaviour. When presented with a service that is available over both IPv4 and IPv6, what proportion of clients will prefer to use IPv6? Google provides one measurement point, based on a sample of the clients who connect to Google's services. Its results are shown in Figure 6.

Figure 6 – IPv6 Adoption (Source)

Over the past four years Google has seen this number rise from less than 1% of users in early 2009 to a current value of 1.2%. It's one of those glass half-full or half-empty stories. Although in this case the glass is either 1% full or 99% empty! If broad scale use of IPv6 is the plan, then right now we seem to be well short of that target. On a country-by-country basis the picture is even more challenging. Only 9 countries have seen the proportion of IPv6 users rise above 1%, and the list has some surprising entries.

Figure 7 – IPv6 Adoption (Source)

It's hard to portray this as evidence of broad based adoption of IPv6. It's perhaps more accurate to observe that a small number of network providers have been very active in deploying IPv6 to their customer base, but these providers are the minority, and most of the Internet remains locked deeply in IPv4. If a significant proportion of the end devices support IPv6 then why are these use metrics so unbelievably small? It appears that the other part of the larger network re-programming effort, that of enabling the devices sitting within the network to be IPv6-capable, has not taken place to any significant extent. It's still the case that a very large number of ISPs do not include IPv6 as part of their service offering, which means that even if an attached computer or mobile device is perfectly capable of speaking IPv6, if the access service does not support IPv6 then there is effectively no usable way for the device to use it. And even when the service provider supplies IPv6 as part of its service bundle, it may still be the case that the user's own network devices, such as the in-home NAT/modem and other consumer equipment that supports the in-home network, such as a WiFi base station or a home router, only support IPv4. Until this equipment is replaced or upgraded, IPv6 cannot happen. The result is what we see in the IPv6 usage metrics today: when offered a choice between IPv4 and IPv6, some 99% of the Internet's connected devices will only use IPv4.

Secondly, we've now crossed into a space that was previously regarded as unthinkable: we've started to run out of IPv4 addresses in the operating network. This address exhaustion started with the central address pool, managed by the Internet Assigned Numbers Authority (IANA). The IANA handed out its last address block in February 2011. IANA hands out large blocks of addresses (16,777,216 addresses per "block") to the Regional Internet Registries (RIRs), and in February 2011 it handed out the last round of address blocks to the RIRs. Each of the five RIRs operates independently, and each will itself exhaust its remaining pool of IPv4 addresses in response to regional demand. APNIC, the RIR serving the Asia Pacific region, was the first to run out of addresses, and in mid April 2011 APNIC handed out its last block of "general use" IPv4 addresses. (As a side remark here, APNIC still had 17 million addresses held aside at that point, but the conditions associated with allocations from this so-called "final /8" are that each recipient can receive at most a total of just 1,024 addresses from this block.) This represented an abrupt change in the region. In the last full year of general use address allocations, 2010, APNIC consumed some 120 million addresses. In 2012, the first full year of operation under this last /8 policy, the total number of addresses handed out in the region dropped to 1 million. The unmet address demand from this region appears to be growing at a rate of around 120-150 million addresses per year.

The region of Europe and the Middle East was the next to run out, and in September 2012 the RIPE NCC, the RIR serving this region, also reached its "last /8" threshold and ceased to hand out any further general use IPv4 addresses. The process of exhaustion continues: the registry that serves North America and parts of the Caribbean, ARIN, has some 40 million addresses left in its address pool. At the current consumption rate ARIN will be down to its last /8 block 12 months from now, in April 2014. LACNIC, the regional registry serving Latin America and the Caribbean, currently has some 43 million addresses in its pool, and is projected to reach its last /8 slightly later, in August 2014. The African regional registry, AFRINIC, has 62 million addresses, and at its current address consumption rate the registry will be able to service address requests for the coming seven years.
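As a rough back-of-the-envelope illustration (my own arithmetic, not a figure from the article), the Python sketch below shows why a single /8 stretches a long way under such a rationing policy: a 16,777,216-address block carved into 1,024-address allocations can serve more than sixteen thousand recipients.

```python
# Back-of-the-envelope arithmetic for the "final /8" policy described above.
BLOCK_SIZE = 2 ** (32 - 8)    # addresses in a /8 block: 16,777,216
PER_RECIPIENT_CAP = 1024      # maximum allocation under the final-/8 policy

print(BLOCK_SIZE)                        # 16777216
print(BLOCK_SIZE // PER_RECIPIENT_CAP)   # 16384 maximum-size allocations
                                         # could be made from a single /8
```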

Figure 8 – IPv4 Address Depletion (Source)

So if the concept was that we would not only commence, but complete the process of transition to use IPv6 across the entire Internet before we got to that last IPv4 address, then for Europe, the Middle East, Asia and the Pacific this is not going to happen. It's just too late. And for North and South America it's also highly unlikely to happen in time.

And the slow pace of uptake of IPv6 points to the expectation that this "running on empty" condition for the Internet address plan may well continue for some years to come.

We are now entering a period of potential damage for the Internet. If the objective of this transition from IPv4 to IPv6 was to avoid some of the worst pitfalls of exhaustion of the IPv4 address space in the Internet, then we've failed.

The consequence of this failure is that we are now adding a new challenge for the Internet. It's already a given that we are meant to sustain continued, and indeed accelerating, growth in terms of the overall size of the network and the population of connected devices. The pace of this growth is expressed as a demand for some 300 million additional IP addresses per year, and the numbers from the device manufacturers point to a larger figure of some 500-700 million new devices being connected to the Internet each year. And the number grows each year. We are expanding the Internet at ever faster rates. As if riding this phenomenal rate of growth on the existing infrastructure and existing technology base wasn't challenging enough, we also have the objective not just to maintain, but to accelerate the pace of transition to IPv6. These two tasks were already proving to be extremely challenging, and we've been slipping on the second. But we now have the additional challenge of trying to achieve these two objectives without the supply of any further IPv4 addresses. At this point the degree of difficulty starts to get uncomfortably close to ten!

This situation poses some architectural consequences for the Internet. Until now we've managed to push NATs out to the edge of the network, and make address compression something that end users did in their home networks. The consequences of failure of such devices and functions are limited to the edge network served by the NAT. We are now deploying mechanisms that allow this NAT function to be performed in the core of the carriage networks. This introduces a new set of unquantified factors. We've little experience in working with large scale NAT devices. We have no idea of the failure modes, or even the set of vulnerabilities in such an approach. We are still debating the appropriate technical approach in the standards bodies, so there are a variety of these service provider NAT approaches being deployed. Each NAT approach has different operational properties, and different security aspects. But now we don't have the luxury of being able to buy more time to explore the various approaches and understand the relative strengths and weaknesses of each. The exigencies of address exhaustion mean that the need for carrier level NAT solutions is now pressing, and given that this is a situation that we never intended to experience, we find ourselves ill-prepared to deal with the side effects from this subtle change in the network's architecture. The greater the level of complexity we add into the network, and the wider the variation in potential network behaviours as a result, the greater the burden we then place on applications. If the network becomes complex to negotiate then applications are forced to explore the local properties of the network environment in order to provide the user with a robust service.

If the hallmark of the Internet was one of efficiency and flexibility based on a simple network architecture, then as we add complexity into the network what we lose is this same efficiency and flexibility that made the Internet so seductively attractive in the first place. The result is a network that is baroquely ornamented, and one that behaves in ways that are increasingly capricious.

We are hopelessly addicted to using a network protocol that has now run out of addresses. At this point the future of the Internet, with its projections of trillions of dollars of value, with its projections of billions of connected silicon devices, with its projections of petabytes of traffic, with its projections of ubiquitous fibre optics conduits spanning the entire world is now entering a period of extreme uncertainty and confusion. A well planned path of evolution to a new protocol that could comfortably address these potential futures is no longer being followed. The underlying address infrastructure of the network is now driven by scarcity rather than abundance, and this is having profound implications on the direction of evolution of the Internet.

There really is something badly broken in today's Internet.

Written by Geoff Huston, Author & Chief Scientist at APNIC


More under: IP Addressing, IPv6

Categories: Net coverage

What May Happen to GAC Advice? 3 Fearless Predictions

Sun, 2013-04-21 10:25

1. Prediction: A Lesson in Story Telling.

Many TLD applicants are likely to respond to the GAC Advice in a manner that resembles storytelling: based on a mixture of fiction garnished with some facts from their applications, applicants will write savvy responses with only one aim — to calm the GAC's concerns and survive the GAC Advice storm. The "duck and cover" strategy.

Background:

According to the Applicant Guidebook, material changes to applications need to go through a Change Request process. In contention sets, Change Requests that are advantageous to a specific applicant are not likely to pass due to competitors' opposition. Even in non-contentious cases Change Requests may not pass, as they could be anti-competitive. Also, the permanent opportunity for applicants in contention sets to amend their applications (by PICs, Change Requests or the response to a GAC Advice) raises serious anti-competitive questions, as the Applicant Guidebook leaves very limited room for changes to an application.

Proposed solution:

No fiction — only facts! Applicants who have not been able to resolve privacy issues, consumer protection issues or other issues associated with their TLD application more than 12 months after filing raise serious concerns about whether they are the appropriate entity to operate a TLD.

2. Prediction: Pass the hot potatoes, Joe.

Close to no decisions will be made to reject applications that are included in the GAC Advice. It is to be expected that only a handful of applications, where there is overwhelming support for a rejection (such as those in section IV.1 of the Beijing Communiqué), will actually be rejected. This might happen due to legal and liability issues or simply the lack of a clear-cut process.

Background:

Governments demanded instruments — namely GAC Early Warning and GAC Advice — to prevent applications they were unhappy with. Now the GAC has filed Advice on more than 500 applications, asking for more security, more accountability and more appropriate operation of TLDs for regulated industries, among other issues. According to the Applicant Guidebook, the consequence of not fulfilling the GAC Advice (short of distorting the application to a non-credible extent) would be dismissal of the gTLD application.

Unfortunately, the current GAC Advice process contains loopholes that allow every party involved to avoid responsibility for such a dismissal by simply not making any decision at all. This could be the next occasion where ICANN serves not the Public Interest and the Community but those who play hardball in this application process through their lobbying and financial power.

Proposed solution:

GAC and ICANN Board should accept the responsibilities they asked for!

3. Prediction: Time and tide wait for no man.

GAC Advice has to be executed before contention resolution starts for applicants in contention sets. Otherwise an applicant might win the contention set only to be thrown out later in the process because of GAC Advice. That timing would not make sense.

Background:

The GAC Advice process should take into account the process and timing of the whole Application Process. The process following the execution of GAC Advice has to be finished before the Contention Resolution Process is initiated. Otherwise an applicant who is willing to provide the safeguards asked for in the GAC Advice may have been eliminated in the process (e.g. by an auction), while the winner of the Contention Resolution is an applicant who is not willing to abide by the GAC Advice. A TLD could then not be awarded at all even though a suitable candidate was in place, making the GAC Advice meaningless.

Proposed solution:

Don't wait! We have attached a detailed proposal (PDF chart here) for the harmonization of the GAC Advice process with the New gTLD Application Process. The chart clearly demonstrates how both processes may run in parallel and come together before the contention resolution.

Written by Dirk Krischenowski, Founder and CEO of dotBERLIN GmbH & Co. KG


More under: ICANN, Internet Governance, Top-Level Domains

Categories: Net coverage

Questions About the Robustness of Mobile Networks

Sat, 2013-04-20 20:54

With mobile phones having become a utility, people are beginning to rely completely on mobile services for a large range of communications. All mobile users, however, are aware of some level of unreliability in these phone systems. Blackspots remain all around the country, not just outside the cities, and in busy areas the quality of the service degrades rather quickly. Drop-outs are another fairly common occurrence with mobile services.

In most cases these are annoyances that we have started to take for granted. This is rather odd, as people do not have the same level of tolerance in relation to their supply of landline communication or, for example, electricity.

At the same time, in almost every disaster situation the mobile network collapses, simply because it can't handle the enormous increase in traffic. The latest example was the collapse of the mobile services in Boston shortly after the bombing.

The trouble is that in such events this is not simply an annoyance. At these times communications are critical, and sometimes a matter of life and death. The fact that we now have many examples of network meltdowns indicates that so far mobile operators have been unable to create the level of robustness needed to cope with catastrophic events.

Then there are the natural disasters, when it is more likely that infrastructure will be extensively damaged or totally destroyed. However, as we saw during the Brisbane floods two years ago, essential infrastructure has been built in areas that are known to be flood-prone. Infrastructure like mobile towers may not necessarily be physically affected but if the electricity substations are positioned in those areas mobile service operation will be affected.

There are also very few official emergency arrangements between electricity utilities and mobile operators, or for that matter local authorities.

Bucketty in the Hunter Valley, where my office is based, is in a bushfire-prone area and we have been working with Optus — the local, and only, provider of mobile services in the area — to prepare ourselves for bushfire emergencies, to date with limited result. Our idea was to work with the local fire brigade to get access to the mobile tower in emergency situations so that we could install a mobile back-up generator in case the power is cut off.

We were unable to get that organised as Optus insists it can provide these extra emergency services itself. Based on our experience, however, roads are closed in times of emergency and it would be impossible for anyone from the outside to come into the area to assist. This has to be organised on a local level, but large organisations don't work that way.

All of these examples show that the utility and emergency functions of mobile services have not yet been taken seriously enough, and so these problems will continue unless a more critical approach is taken towards guaranteeing a much higher level of robustness to our mobile services. The mobile communication meltdowns during disasters that we have witnessed over the last few years were largely preventable if mobile operators had prepared their network for such events, and if better emergency plans had been developed between various authorities involved in such emergencies, together with policies and procedures to address these issues.

With an increased coverage of WiFi — linked to fixed networks — we see that, particularly in cities, such services are proving to be more reliable, especially for the data services that are required almost immediately to locate people and provide emergency communication services. The social media play a key role in this. In Boston Google responded instantly with a location finder for those affected and their friends and family, and access was largely provided through hotspots.

With an increase of total reliance on mobile networks, especially in emergency situations, it is obvious that far greater attention will need to be given to the construction of mobile networks with disaster events in mind. So far the industry on its own has failed to do this and it will be only a matter of time for government authorities to step in and try to fix these problems.

Other problems — based in particular on experience in the USA — that will need to be addressed include unfamiliarity with SMS, especially among older people. During a network meltdown it is often still possible to send SMSs, and they are the best method of communication. Also, with the increase of smartphones people tend no longer to remember telephone numbers, and in emergency situations the batteries of smartphones often run flat quickly.

Smartphone manufacturers, as well as the society at large, will have to think of solutions to these problems.

This is a good interview with my American colleague Brough Turner on why cell phone (and other phone) networks get congested in time of crisis.

Written by Paul Budde, Managing Director of Paul Budde Communication


More under: Mobile, Wireless

Categories: Net coverage

Are There Countries Whose Situations Worsened with the Arrival of the Internet?

Fri, 2013-04-19 20:18

Are there countries whose situations worsened with the arrival of the internet? I've been arguing that there are lots of examples of countries where technology diffusion has helped democratic institutions deepen. And there are several examples of countries where technology diffusion has been part of the story of rapid democratic transition. But there are no good examples of countries where technology diffusion has been high, and the dictators got nastier as a result.

Over Twitter, Eric Schmidt, Google's executive chairman, recently opined the same thing. Evgeny Morozov, professional naysayer, asked for a graph.

So here is a graph and a list. I used Polity IV's democratization scores from 2002 and 2011, and the World Bank/ITU data on internet users. I merged the data and made a basic graph. On the vertical axis is the change in the percentage of a country's population online over the last decade. The horizontal axis reflects any change in the democratization score — any slide towards authoritarianism is represented by a negative number. For Morozov to be right, the top left corner of this graph needs to have some cases in it.

Change in Percentage Internet Users and Democracy Scores, By Country, 2002-2011
(Look at the Raw Data)
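For readers who want to reproduce this kind of chart, a rough sketch of the merge-and-plot step in Python (pandas and matplotlib) might look like the following. The file names, column names and the 30-point "high diffusion" threshold are placeholders I have invented for illustration; they are not the author's actual data or code.

```python
# Illustrative reconstruction of the merge-and-plot step described above.
# File names, column names and thresholds are invented placeholders.
import pandas as pd
import matplotlib.pyplot as plt

# Expected columns: country, polity_2002, polity_2011 (Polity IV scores)
polity = pd.read_csv("polity_iv_scores.csv")
# Expected columns: country, users_pct_2002, users_pct_2011 (World Bank/ITU)
users = pd.read_csv("internet_users_pct.csv")

df = polity.merge(users, on="country", how="inner")
df["democracy_change"] = df["polity_2011"] - df["polity_2002"]        # x-axis
df["users_pct_change"] = df["users_pct_2011"] - df["users_pct_2002"]  # y-axis

plt.scatter(df["democracy_change"], df["users_pct_change"])
plt.xlabel("Change in Polity IV democratization score, 2002-2011")
plt.ylabel("Change in % of population online, 2002-2011")
plt.title("Internet diffusion vs. democratization, by country")
plt.show()

# Cases that would support the counter-claim: became more authoritarian while
# internet use grew strongly (arbitrary 30-point threshold, for illustration).
print(df[(df["democracy_change"] < 0) & (df["users_pct_change"] > 30)])
```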

Are there any countries with high internet diffusion rates, where the regime got more authoritarian? The countries that would satisfy this condition should appear in the top left of the graph. Alas, the only candidates that might satisfy these two conditions are Iran, Fiji, and Venezuela. Over the last decade, the regimes governing these countries have become dramatically more authoritarian. Unfortunately for this claim, their technology diffusion rates are not particularly high.

This was a quick sketch, and much more could be done with this data. Some researchers don't like the PolityIV scores, and there are plenty of reasons to dislike the internet user numbers. Missing data could be imputed, and there may be more meaningful ways to compare over time. Some countries may have moved in one direction and then changed course, all within the last decade. Some only moved one or two points, and really just became slightly more or less democratic. But I've done that work too, without finding the cases Morozov wishes he had.

There are concerning stories of censorship and surveillance coming from many countries. Have the stories added up to dramatic authoritarian tendencies, or do they cancel out the benefits of having more and more civic engagement over digital media? Fancier graphic design might help bring home the punchline. There are still no good examples of countries with rapidly growing internet populations and increasingly authoritarian governments.

Written by Philip N. Howard, Professor in the Department of Communication at the University of Washington


More under: Censorship, Internet Governance, Privacy

Categories: Net coverage

US Fibre Projects: Go-Aheads Omit the Major Telcos

Fri, 2013-04-19 18:58

As the recent Senate vote on gun reform legislation has shown (wherein 42 of the 45 dissenting senators had recently received donations from gun industry lobbyists), getting things done for the good of the people is a hard task where legislation is concerned. It has been thus with the US's broadband infrastructure for years.

A number of states have legislated against community broadband networks, often as a result of the lobbying efforts of the main telcos affected. State legislatures commonly pass bills revoking local decision-making authority from communities, effectively making them dependent on the dominant cableco and DSL provider. The National Institute on State Politics has drawn a clear connection between industry contributions to politicians and bills that hamstring community networks, restricting competition to these telcos.

Yet alternatives to the major telcos are gaining ground. Following the success of Google's FttH offering in Kansas City, the FCC has promoted the so-called 'Gigabit City Challenge', aimed at encouraging broadband providers and state and municipal officials to provide communities in each state with a 1Gb/s service by 2015. These would serve as hubs for innovation, and act as regional drivers for economic growth. Thus far there are more than 40 gigabit communities in 14 states. As part of its support, the FCC is holding workshops on best practices to lower costs and develop greater efficiencies in building the networks. In tandem with municipal efforts, the GigU initiative has helped develop gigabit networks on a number of university campuses.

The prospect for increased municipal involvement has improved with Google's expansion of its 1Gb/s service to Austin, Texas and Provo, Utah, where (in a change from its other deployments) Google acquired an existing municipal fibre-optic system (iProvo, set up several years ago, palmed off to a series of investors and largely hobbled by difficulties which included restrictions imposed by the local telco). The network is currently connected to less than a third of premises, but the job will be completed by Google, which will also upgrade the network to be on a par with those in Kansas City and Austin. It is expected that the same subscriber offer will prevail: a 1Gb/s broadband service for $70 per month, with the option of TV for an additional fee, and with a Google Nexus 7 tablet thrown in. Free broadband at a scaled-down speed may also be provided if subscribers pay an installation fee.

Google has looked at partnering with other municipalities that would reach hundreds of thousands of people across the country.

Many of these municipalities, as well as rural communities, are either developing new schemes or looking anew at earlier ones. New schemes include United Services' 'United Fiber' FttH network in rural Missouri, while Palo Alto is looking to rekindle its longstanding effort to build a citywide fiber network. In its earlier incarnation, the fiber project was hobbled by the economic crash, which led to the withdrawal of a partner consortium and to nervousness among the city fathers about subsidising the scheme. Yet by the end of 2013 the city is expected to have accumulated $17 million in its project fund. The mood has become far more favourable, partly due to the encouragement from developments elsewhere. If other cities can work on delivering FttP as a community service and economic driver, and as a side benefit provide free WiFi, then why can't we?

Despite the obstructionism of the main telcos in realising municipal and rural broadband schemes, the can-do attitude which the US is known for is encouraged by developments thus far, and the snowball effect will be harder for telcos to stop.

Written by Henry Lancaster, Senior Analyst at Paul Budde Communication


More under: Access Providers, Broadband, Policy & Regulation, Telecom

Categories: Net coverage

Plural TLDs: Let's Stop Throwing Spanners in the Works!

Fri, 2013-04-19 17:54

I don't have strong religion on plural TLDs.

For that matter, I don't have strong feelings for or against closed generics either, another new gTLD issue that has recently been discussed even though it is not mentioned in the rules new gTLD applicants had to rely on.

What I do care about is predictability of process.

Yet, as Beijing showed, the ICANN community has an uncanny ability to throw last-minute wrenches at its own Great Matter, as Cardinal Wolsey called Henry VIII's plan to divorce Catherine of Aragon.

And we should all remember that the new gTLD program is our own master plan. It is born out of the community's bottom-up process for developing policy. We all own it. We all sanctioned it when it came up through our community and was given a green light by the people we elected to represent us on the GNSO Council, the body responsible for making gTLD policy. So we should now all feel responsible for seeing it to fruition.

Impressed by governments

So can this issue of plural TLDs that came out of nowhere during the ICANN Beijing meeting week cause yet more delays to the Great Matter that is the new gTLD program?

First of all, I was surprised to see it mentioned in the GAC Communiqué which provides the ICANN Board with Advice on the new gTLD program as required by the program's Bible, the Applicant Guidebook. The GAC said it believes: "that singular and plural versions of the string as a TLD could lead to potential consumer confusion. Therefore the GAC advises the ICANN Board to (...) Reconsider its decision to allow singular and plural versions of the same strings."

For governments to react so quickly shows that they now have their finger on the pulse of what goes on outside their own circle like never before. I digress here, but I think this is an extremely important development we should all take great pride in. The government representatives that attend ICANN meetings are knowledgeable and engaged in the community they are part of in a way that is probably unique in the world of governance. The rest of us may not always agree with their decisions or opinions, but we cannot disagree with their level of commitment. So much so that individual GAC members, coming straight out of a gruelling eight days of meetings, did not hesitate to stand up in the public forum and give voice to their own personal opinions only a few minutes after the GAC Beijing Communiqué was published. I am impressed.

But what about that advice? Will plural TLDs give rise to user confusion and should this debate even be opened at this time? And make no mistake, having GAC Advice on the matter is not the same as discussing it over coffee. Section 1.1.2.7 of the Applicant Guidebook is very clear: "If the Board receives GAC Advice on New gTLDs stating that it is the consensus of the GAC that a particular application should not proceed, this will create a strong presumption for the ICANN Board that the application should not be approved. If the Board does not act in accordance with this type of advice, it must provide rationale for doing so."

Stay the course

So will this advice from governments cause the new gTLD program to be delayed whilst its rules are rewritten for the umpteenth time? Not necessarily. ICANN is definitely learning fast these days. With a new business-oriented CEO to provide guidance on the importance of managing a project of this magnitude with some measure of predictability, the Board itself is showing increasing confidence to stay the course. ICANN Chairman Steve Crocker has said that as far as the ICANN Board is concerned, although the word of governments carries weight, it is not the be all and end all. "We have a carefully constructed multi-stakeholder process," Crocker explained in a video interview recorded at the end of the Beijing meeting. "We want very much to listen to governments, and we also want to make sure there's a balance."

That is reassuring. The Applicant Guidebook makes no mention of plural TLDs. Not one. These are the rules by which applicants have constructed their submissions for a TLD to ICANN. It is on the basis of this guidebook that they have defined their business models and done what ICANN itself was asking them to do: build a viable business and operational plan to operate a TLD.

The rules simply cannot be changed every couple of months. In what world is it OK to ask applicants to follow a process and then, once that process is closed, revisit it time and again and force change on those applicants? Would governments tolerate this in their own business dealings? Would those community members who call for rules revisions on a despairingly regular basis put up with it in their everyday commercial ventures?

So now governments have called upon the ICANN Board to act. But the Board always intended to keep TLD evaluations independent from those with interests in the outcomes. That is why evaluation panels were constituted, instead of getting ICANN Staff to evaluate applicants directly. And that is why we should not attempt to reopen and rearrange decisions of an expert panel basing its analysis on the program's only rulebook, the Applicant Guidebook as it stood when the new gTLD application window closed. After all, parties that disagree with panel outcomes have the objection process to address their concerns.

Singularity or plurality?

And anyway, is there really a case for prohibiting singular and plural TLDs? After all, singulars and plurals have always existed together at the second level and no-one ever took exception to that. Why is the fact that the domains car.com and cars.com are not owned and operated by the same entity less confusing to users than the equivalent singular/plural pair as a TLD? Wouldn't trying to limit the use of singular and plural TLDs amount to attempted content control and free speech limitations?

Isn't this call to limit singular and plural use just a very English-language-centric view of the new gTLD world? Is it true that adding or taking away the letter "S" at the end of a string means going from a singular to a plural form in every language, for every alphabet, for every culture? And if not, then how can a level playing field be guaranteed for applicants and users alike if new rules are introduced that prohibit singular/plural use based on the languages and alphabets that the mostly English-speaking ICANN community understands, but that are ill-suited to the wider world?

Can it really be argued that plurals are confusing, but phonetically similar strings aren't? Aren't we over-reaching if we try to convince anyone that .hotel, .hoteles, and .hoteis belong in the same contention set? And if that's true, why isn't it true for their second-level counterparts, like hotel.info, hoteles.info and hoteis.info?

As I've stated, I have no real preconceived opinion on the matter. So to try and form one, I am more than happy to listen to the people that have spent months, sometimes years, coming up with realistic ideas for new gTLDs. The applicants themselves.

Uniregistry's Frank Schilling thinks that "the GAC (while well-intentioned) has made an extraordinarily short-sighted mistake. For the entire new GTLD exercise to thrive in the very long run, the collective right-of-the-dot namespace simply must allow for the peaceful coexistence of singulars and plurals. There are words with dual meaning that will be affected, this will significantly and unnecessarily hem in future spectrum. Consumers expect singulars and plurals to peacefully coexist. If we want to move to a naming spectrum with tens of thousands of new G's in the future — a namespace which is easy, intuitive and useful for people to navigate, there is just no long term good that can come from setting such a poor precedent today."

Donuts, another new gTLD applicant, argues that the Applicant Guidebook sets an appropriately high threshold for string confusion as it is drafted now. Section 2.2.1.1.2 of the Guidebook defines a standard for string confusion as being (text highlighted by me) "where a string so nearly resembles another visually that it is likely to deceive or cause confusion. For the likelihood of confusion to exist, it must be probable, not merely possible that confusion will arise in the mind of the average, reasonable Internet user. Mere association, in the sense that the string brings another string to mind, is insufficient to find a likelihood of confusion."

Donuts suggests that string similarity exists in today's namespace without leading to user confusion. ".BIZ and .BZ, or .COM and .CO or .CM, for example," says Donuts. "At first glance, association of these strings might suggest similarity, but reporting or evidence that they are visually or meaningfully similar clearly does not exist, and the standard of confusion probability is not met. By these examples, it is clearly difficult to confuse the average, reasonable Internet user. Broader Internet usage, growth in name space, and specificity in identity and expression are the foundation of the new gTLD program, and are suitable priorities for the community. In the interest of consumer choice and competition, multiple strings and the variety and opportunity they present to users should prevail over all but the near certainty of actual confusion."

Obviously, these quotes from applicants will have critics dismissing them just because they are from applicants. I can hear them now: "well, they would say that, they want new gTLDs to come out asap." Right! And what's wrong with that? Why is it out of place for the people we, the community, have drawn into this through the policy development we approved, to want to get to the end point in a stable and predictable manner after they have invested so much time, effort and resources into this?

A professional ICANN is a strong ICANN

As usual with these calls for last-minute rule changes, we see the recurring argument that the rest of the world is watching ICANN and waiting for it to trip up and mess this up. And as usual, if we listen to those making this argument, the "this" is such a crucial issue that if it is ignored, the world as we know it may very well end. Really? Aren't ICANN critics more likely to be impressed by the organisation displaying an ability to properly project manage and get to the finish line? After having started a process which has brought in over $350 million in application fees, introduced the ICANN ecosystem to global entities, major companies and international organisations who are used to seeing rules being followed, after having shone the outside world's spotlight on itself like never before, wouldn't that be a real sign that ICANN deserves to be overseeing the Internet's namespace?

At this stage, with only a few weeks to go until ICANN declares itself in a position to approve the first TLD delegations, I contend that the real danger to the organisation is a lack of predictability imposed on the process by artificial limitations to the program's scope and rules.

Written by Stéphane Van Gelder, Chairman, STEPHANE VAN GELDER CONSULTING

Follow CircleID on Twitter

More under: Domain Names, ICANN, Internet Governance, Policy & Regulation, Top-Level Domains

Categories: Net coverage

Video: Watch This Bufferbloat Demo and See How Much Faster Internet Access Could Be!

Thu, 2013-04-18 23:01

What if there was a relatively simple fix that could be applied to home WiFi routers, cable modems and other gateway devices that would dramatically speed up Internet access through those devices? Many of us may have heard of the "bufferbloat" issue, where excessive buffering of packets causes latency and slower Internet connectivity, but at IETF 86 last month in Orlando I got a chance to see the problem illustrated in an excellent demonstration by Dave Täht as part of the "Bits-And-Bytes" session (as explained in the IETF blog).

My immediate reaction, as you'll hear in the video below, was "I WANT THIS!" We live at a time when it's easy to saturate home Internet connections… just think of a couple of people simultaneously streaming videos, downloading files or doing online gaming. The increase in web browsing speed you see in the video is something that, to me, needs to be deployed as soon as possible.

To that end, Dave Täht, Jim Gettys and a number of others have been documenting this problem — and associated solutions — at www.bufferbloat.net for some time now and that's a good place to start. If you are a vendor of home routers, cable modems or other Internet access devices, I would encourage you to look into how you can incorporate this in your device(s).
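For readers who want a rough sense of whether their own link is affected, here is a minimal sketch of the kind of measurement the demo performs (my illustration, not Dave Täht's code): compare ping latency on an idle connection with ping latency while a large download saturates it. The ping target and download URL are placeholders, and a Unix-style ping command is assumed.

    # Minimal bufferbloat check (illustrative sketch only).
    # Compare idle round-trip latency with latency while the link is
    # saturated by a large download; a big jump points at oversized buffers.
    import re
    import subprocess
    import threading
    import urllib.request

    PING_TARGET = "8.8.8.8"                              # assumption: any reachable host
    DOWNLOAD_URL = "http://example.com/large-file.bin"   # placeholder URL

    def average_ping_ms(count=10):
        """Run the system ping (Unix-style flags) and return the mean RTT in ms."""
        out = subprocess.run(
            ["ping", "-c", str(count), PING_TARGET],
            capture_output=True, text=True, check=True,
        ).stdout
        times = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
        return sum(times) / len(times)

    def saturate_link():
        """Pull a large file to fill the buffers along the path."""
        try:
            urllib.request.urlopen(DOWNLOAD_URL, timeout=60).read()
        except Exception:
            pass  # only the load matters here, not the payload

    if __name__ == "__main__":
        idle = average_ping_ms()
        threading.Thread(target=saturate_link, daemon=True).start()
        loaded = average_ping_ms()
        print(f"idle: {idle:.1f} ms, under load: {loaded:.1f} ms "
              f"({loaded / idle:.1f}x inflation)")

A large inflation factor is the symptom the demo makes visible; the queue-management fixes documented at bufferbloat.net (CoDel and fq_codel) aim to keep that number close to 1.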

Meanwhile, enjoy the demonstrations and information in this video (for the truly impatient who just want to see the demo, you can skip ahead to the 3:08 mark):

Written by Dan York, Author and Speaker on Internet technologies

Follow CircleID on Twitter

More under: Broadband, Web

Categories: Net coverage

Horse's Head in a Trademark Owner's Bed

Thu, 2013-04-18 19:12

Recently, the Internet Corporation for Assigned Names and Numbers (ICANN) unveiled its Trademark Clearinghouse (TMCH), a tool it proposes will help fight trademark infringement arising from another of its new programs, the generic top-level domain (gTLD) program.

As Lafeber describes, criticism of ICANN's gTLD program and subsequent TMCH database is mounting. Skeptics have noted that given the significant cost of registering a gTLD — the application fee is $185,000 and subsequent annual fees are $25,000 — the program appears to be solely a cash cow, without adding much value to Internet users. In fact, Esther Dyson, ICANN's founding chairwoman, was quoted in August 2011 (during the nascent stages of the gTLD program's development) as saying:

"Handling the profusion of names and TLDs is a relatively simple problem for a computer, even though it will require extra work to redirect hundreds of new names (when someone types them in) back to the same old Web site. It will also create lots of work for lawyers, marketers of search-engine optimization, registries, and registrars. All of this will create jobs, but little extra value."

While the gTLD program adds little intrinsic value, and may in fact have anticompetitive effects given its exorbitant fees, I think there may be something more nefarious at play here. Essentially, ICANN has positioned itself as the Corleone family of the Internet space, making an offer no one can refuse. ICANN created a market in which individuals can launch new gTLDs, even using another's trademark-protected brand as their domain extension. Subsequently — and here's where the mafia-like "protection" arises — it has "offered" trademark owners the ability to head off infringements by either buying their gTLDs or receiving notification if an infringing gTLD is registered by another party.

Programs to monitor the use of one's brand in a domain name have long existed. The TMCH charges subscribers $95 to $150 annually to be notified of the registration of infringing gTLDs. Instead of extorting fees to be the watchdog for illegal activity ICANN itself facilitates, it could more ethically operate its gTLD program by mining publicly available government databases and instituting a freeze on registration of questionable domain names. Moreover, it could even provide a valuable service by offering a clearly defined resolution process for trademark disputes.

The gTLD-TMCH pairing is the proverbial horse's head in a trademark owner's bed.

Written by James Delaney, Chief Operating Officer at DMi Partners

Follow CircleID on Twitter

More under: Domain Names, ICANN, Policy & Regulation, Top-Level Domains

Categories: Net coverage

Massive Spam and Malware Campaign Following Boston Tragedy

Thu, 2013-04-18 01:48

On April 16th at 11:00pm GMT, the first of two botnets began a massive spam campaign to take advantage of the recent Boston tragedy. The spam messages claim to contain news concerning the Boston Marathon bombing, reports Craig Williams from Cisco. The spam messages contain a link to a site that claims to have videos of explosions from the attack. Simultaneously, links to these sites were posted as comments to various blogs.

The link directs users to a webpage that includes iframes that load content from several YouTube videos plus content from an attacker-controlled site. Reports indicate the attacker-controlled sites host malicious .jar files that can compromise vulnerable machines.
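As a purely illustrative sketch (not Cisco's tooling), the pattern described above can be spotted by listing a page's iframe sources and flagging any that do not point at YouTube. The sample HTML and the attacker.example hostname below are invented.

    # Illustrative only: collect iframe src attributes and flag those that
    # point somewhere other than YouTube, matching the mix described above.
    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class IframeCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.sources = []

        def handle_starttag(self, tag, attrs):
            if tag == "iframe":
                src = dict(attrs).get("src")
                if src:
                    self.sources.append(src)

    def suspicious_iframes(html: str):
        collector = IframeCollector()
        collector.feed(html)
        allowed = {"www.youtube.com", "youtube.com"}
        return [s for s in collector.sources if urlparse(s).hostname not in allowed]

    if __name__ == "__main__":
        sample = ('<iframe src="https://www.youtube.com/embed/abc"></iframe>'
                  '<iframe src="http://attacker.example/news.html"></iframe>')
        print(suspicious_iframes(sample))  # ['http://attacker.example/news.html']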

On April 17th, a second botnet began using a similar spam campaign. Instead of simply providing a link, the spam messages contained graphical HTML content claiming to be breaking news alerts from CNN.

Cisco became aware of a range of threats forming on April 15th, when hundreds of domains related to the Boston tragedy were quickly registered. As for the botnet-driven spam specifically, its volume at peak approached 40% of all spam being sent. (Source: Cisco)

Follow CircleID on Twitter

More under: Malware, Security, Spam

Categories: Net coverage

Correlation Between Country Governance Regimes & Reputation of Their Internet Address Allocations

Thu, 2013-04-18 01:19

[While getting his feet wet with D3, Bradley Huffaker (at CAIDA) finally tried this analysis tidbit that's been on his list for a while.]

We recently analyzed the reputation of a country's Internet (IPv4) addresses by examining the number of blacklisted IPv4 addresses that geolocate to a given country. We compared this indicator with two qualitative measures of each country's governance. We hypothesized that countries with more transparent, democratic governmental institutions would harbor a smaller fraction of misbehaving (blacklisted) hosts. The available data confirms this hypothesis. A similar correlation exists between perceived corruption and fraction of blacklisted IP addresses.
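To make the comparison concrete, here is a toy sketch of the kind of calculation involved, using invented numbers rather than CAIDA's data or code: a Pearson correlation between each country's blacklisted-address fraction and a governance score (higher meaning more transparent, democratic institutions). The hypothesis above predicts a negative value.

    # Toy illustration only: both lists are hypothetical per-country values.
    blacklisted_fraction = [0.021, 0.004, 0.015, 0.002, 0.030, 0.007]
    governance_score     = [0.35,  0.90,  0.50,  0.95,  0.20,  0.80]

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    print(f"Pearson r = {pearson(blacklisted_fraction, governance_score):.2f}")
    # A clearly negative r would be consistent with the hypothesis.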

CAIDA's Country IP Reputation Graphs
See the interactive graph and analysis on the CAIDA website

For more details of data sources and analysis, see:
http://www.caida.org/research/policy/country-level-ip-reputation/

Written by kc claffy, Director, CAIDA and Adjunct Professor, UC San Diego

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, IP Addressing, Policy & Regulation, Spam

Categories: Net coverage

Over 80 European Organizations Demand Protection for Net Neutrality

Wed, 2013-04-17 21:37

Today, more than 80 organizations, represented by The European Consumer Organization (BEUC) and European Digital Rights (EDRi), sent a letter [PDF] to the European Commission demanding the end of dangerous experimentation with the functioning of the Internet in Europe and the protection of the principles of openness and neutrality.

"The Internet's unique value is openness. The experimentation by certain European access providers with blocking, filtering and throttling of services creates borders in an online world whose key value is the absence of borders." explains Joe McNamee, Executive Director of EDRi. "This reckless experimentation will continue unless the European Commission puts a stop to it."

Follow CircleID on Twitter

More under: Access Providers, Net Neutrality, Policy & Regulation

Categories: Net coverage

Live Today - "IPv4 Exhaustion and the Path to IPv6" from INET Denver

Wed, 2013-04-17 19:26

If you are interested in the current state of IPv4 address exhaustion within North America as well as the current state of IPv6 deployment, there will be a live stream today, April 17, of the sessions happening at INET Denver starting at 1:00pm US Mountain Daylight Time (UTC-6). The event is subtitled "IPv4 Exhaustion and the Path to IPv6" and you can view the live stream at:

http://www.internetsociety.org/events/inet-denver/inet-denver-livestream

Sessions include:

  • IPv4 Exhaustion Update
  • IPv4 Exhaustion at ARIN
  • Address Policy Workshop
  • Evaluation of Current Transfer Market
  • TCO of IPv6
  • Internet Society Initiatives and How To Get Involved

The list of speakers includes people from ARIN, CableLabs, Internet Society, Time Warner Cable, Google and more.

It sounds like a great event and I'm looking forward to watching it remotely.  It will be recorded so that you will be able to watch it later if you cannot view it live.

Written by Dan York, Author and Speaker on Internet technologies

Follow CircleID on Twitter

More under: Internet Protocol, IP Addressing, IPv6, Policy & Regulation

Categories: Net coverage

High-Performing Cloud Networks Are Critical to M2M Success

Tue, 2013-04-16 20:58

Machine to machine (M2M) communications may not be new, but with the rapid deployment of embedded wireless technology in vehicles, appliances and electronics, it is becoming a force for service providers to reckon with as droves of businesses and consumers seek to reap its benefits. By 2020, the GSM Association (GSMA) predicts that there will be 24 billion connected devices worldwide, while Forrester predicts that mobile machine interactions will exceed mobile human interactions by a factor of more than 30. To ensure competitive advantage, service providers must invest in their networks to enable M2M services more quickly, economically, securely and assuredly.

The principle of M2M communications is straightforward. Sensors are installed on consumer or commercial hardware to transfer application-relevant information to other sensors and/or to a centralized storage facility. Using this information, complex algorithms infer decisions relevant to the specific application, which are then executed accordingly. While this is simple in theory, in practice it requires the construction of a complex network, with a clear path between devices and storage; the ability to store, process and analyze large amounts of data; and the ability to take action based on this intelligence.

As evidenced by recent reports, it's clear that the industry believes that cloud computing is becoming a viable service option for mission-critical business applications. A 2012 survey conducted by North Bridge Venture Partners, and sponsored by 39 cloud companies including Amazon Web Services, Rackspace, Eucalyptus, and Glasshouse, found that a meager 3% of respondents considered adopting cloud services to be too risky — down from 11% the previous year. In addition, only 12% said the cloud platform was too immature, down from 26% the year prior. This evolution of the computing industry towards cloud has enabled the storage of vast amounts of data from devices and also made the analysis of this data more feasible. In fact, Microsoft recently said that its Azure cloud has more than four trillion objects stored in it, a fourfold increase from a year before. Its Azure cloud averages 270,000 requests per second, while peaking at 880,000 requests per second during some months. The requests per second have increased almost threefold in the past year, a Microsoft official wrote in a blog post. As a comparison, Amazon Web Services said that just its Simple Storage Service (S3) holds 905 billion objects, and was growing at a rate of one billion objects per day, while handling an average of 650,000 requests per second. As cloud becomes the de facto model for M2M communications, M2M vendors must understand what it takes to enable secure and reliable transfer of information via that vehicle.

It is also important to note that M2M communications can be triggered by both planned and unplanned events. For example, in a smart grid application, smart meters can send information about electricity consumption to a centralized database at pre-scheduled times. Sensors can also be designed to react to unplanned events, such as extreme weather conditions, and trigger increased communication in a certain geography or location. As such, the network that connects these devices to each other, and to the cloud, has to perform in both instances, adapting to both forecasted increases in traffic and random spikes, with automatic, assured performance.
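As a small illustration of that dual-trigger model, scheduled reports plus event-driven ones, here is a hypothetical meter-style sender; the interval, threshold and print-based "transmission" are all stand-ins, not details from any real smart-grid deployment.

    # Illustrative sketch: report on a fixed schedule, and immediately when an
    # unplanned event (a reading above a threshold) occurs.
    import json
    import random
    import time

    REPORT_INTERVAL_S = 5        # scheduled reporting period (assumption)
    EVENT_THRESHOLD_KW = 9.0     # trigger for unplanned reports (assumption)

    def read_sensor() -> float:
        """Placeholder for a real meter reading, in kW."""
        return random.uniform(0.5, 10.0)

    def report(kind: str, value: float) -> None:
        payload = {"kind": kind, "kw": round(value, 2), "ts": time.time()}
        print(json.dumps(payload))        # stand-in for the backhaul transmission

    def run(cycles: int = 20) -> None:
        last_scheduled = 0.0
        for _ in range(cycles):
            value = read_sensor()
            now = time.time()
            if value > EVENT_THRESHOLD_KW:
                report("unplanned-event", value)   # react immediately
            elif now - last_scheduled >= REPORT_INTERVAL_S:
                report("scheduled", value)         # periodic report
                last_scheduled = now
            time.sleep(1)

    if __name__ == "__main__":
        run()

The point of the sketch is simply that the same device generates both predictable periodic traffic and bursty event-driven traffic, which is why the network connecting devices to the cloud must handle both forecasted increases and random spikes.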

Cloud Infrastructure Requirements for M2M Communications

The network platform that enables M2M communications has multiple segments: the access segment (wireless radio or wireline-based), backhaul to the cloud and the cloud network.

Figure 1: Information from billions of sensors is captured in data centers for processing. Sensor data is transmitted over a wireless access network, mobile backhaul and core network to the data centers.

Sensor data travels to the cloud over wireless/radio or wireline access infrastructures. To be effective, the aggregation network has to provide highly resilient, scalable and cost-effective backhaul from either mobile or wireline access. If this is not the case, M2M communications would be unreliable and many of the new-age applications could never be fully realized.

In order to enable cloud as a platform for M2M adoption, innovation and communication, the cloud has to serve as a high-performance computing platform, often referred to as an enterprise-grade or carrier-grade cloud. High-performance cloud networks need terabit-level connectivity to be able to withstand the projected volume of M2M traffic. These networks will require a provisioning tool so that administrators can allocate resources to where and when they are needed, and also ensure that network assets are available to support delivery of bandwidth-rich applications and services. And, finally, data centers and the cloud backbone need to function as a seamless, single network — a data center without walls — to optimize performance and economics.

Widespread availability of M2M technology has already spurred innovative use cases across different industries, such as: smart grid in energy/utilities; communication between various devices for security and industrial/building control; environmental monitoring; and many applications in the consumer domain ranging from retail to home appliance intelligence.

For example:

  • In healthcare, mobile platforms can be connected wirelessly to a patient's body or garments for doctors to observe glucose, blood pressure, temperature, EKG and imaging data to alert staff to any abnormalities without the patient having to be checked into the hospital.
  • Innovative "green" solutions, ranging from solar-powered, wireless parking meters that allow credit card payments to web-based irrigation control systems, protect the environment and save money and time for businesses and consumers.
  • Fleet management specialists can optimize fleet performance through integrating GPS capability, vehicle diagnostics and wireless communications to provide real-time field status information, including current location and diagnostics alerts. Fleet managers are able to monitor and manage driving behavior to improve safety and reduce risk, as well as log drivers' hours to ensure they comply with regulations.

Keys to success

To foster adoption of M2M-enabled technology, initiatives such as GSMA's Connected Life regularly bring together thought leaders within the M2M ecosystem to share their insights to help increase availability of anywhere, anytime connectivity.

The successful adoption of M2M depends on the maturity of multiple elements in the ecosystem, including the wireless technology and business system; the network connectivity that connects the machines and sensors to the cloud; the cloud computing platform; and the software applications that translate the huge amount of data into useful intelligence.

To build an enterprise or carrier-grade cloud platform that can support the projected volume of M2M traffic, the underlying network that connects enterprise data centers, and data centers to the cloud, has to be reliable, high-performing, connection-oriented and have low latency. It must be responsive and integrated into the cloud ecosystem to satisfy connectivity requirements of storage and compute cloud subsystems. It must also enable elastic/liquid bandwidth to ensure the performance and economic benefits of the cloud are realized. Carrier-class network infrastructure — with the ability to scale to 100G today and terabit capacities in the future and with multiple levels of resiliency enabled by an intelligent control plane — will be critical to enabling these cloud networks.

Written by Mariana Agache, Director of Service Provider Industry Marketing at Ciena

Follow CircleID on Twitter

More under: Cloud Computing

Categories: Net coverage

What's the Best IPv6 Transition Option for You?

Tue, 2013-04-16 19:58

After decades of talk, the time for IPv6 has finally arrived. There are several transition options available, but whatever approach you choose, the challenge will be to make sure that your subscribers don't experience a reduction in quality of service.

IPv4 is likely to co-exist with IPv6 for some time, so a native dual-stack migration strategy will be the best transition option for most providers. Dual-stack mode allows both IPv4 and IPv6 to run simultaneously over the network, which lets end-user devices communicate via whichever protocol they are equipped for. With dual-stack mode, there is no disruption to the service if a client requests an IPv4 address. Clients that receive both an IPv4 and IPv6 address will prefer to access the IPv6 network, if it's available. The DNS can determine whether the service is reachable over IPv6 or whether to fall back to IPv4.
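The client-side behaviour described here can be sketched in a few lines. The following is an illustration only (not from the article or any particular operating system's stack) of resolving a dual-stack host, trying IPv6 first and falling back to IPv4; the host name is a placeholder.

    # Illustrative dual-stack client: prefer IPv6, fall back to IPv4.
    import socket

    def connect_dual_stack(host: str, port: int) -> socket.socket:
        infos = socket.getaddrinfo(host, port, socket.AF_UNSPEC, socket.SOCK_STREAM)
        # Try AAAA (IPv6) results before A (IPv4) results.
        infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
        last_error = None
        for family, socktype, proto, _canon, sockaddr in infos:
            try:
                sock = socket.socket(family, socktype, proto)
                sock.settimeout(5)
                sock.connect(sockaddr)
                label = "IPv6" if family == socket.AF_INET6 else "IPv4"
                print(f"connected to {host} via {label}")
                return sock
            except OSError as exc:
                last_error = exc
                continue
        raise last_error or OSError("no usable addresses")

    if __name__ == "__main__":
        connect_dual_stack("www.example.com", 80).close()  # placeholder host

Real clients typically do something smarter than strict sequential fallback (racing both families with a short head start for IPv6), but the sketch shows why subscribers on a dual-stack network see no disruption when only one protocol is reachable.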

Of course, dual-stack provisioning isn't perfect. Service disruptions can occur if you don't have enough IPv4 addresses to hand out to new subscribers. This is because dual-stack systems require devices to have both an IPv4 and IPv6 address. If this is a problem for you, it may be possible to use a tunneling technique or network address translation (NAT).

NAT, however, comes with its own set of problems, including:

  • Impaired quality of service for internal and external systems
  • Increased network complexity and fragmentation
  • Security concerns when multiple subscribers share a single, public IPv4 address
  • Difficulty with law enforcement compliance

Despite these issues, you may find it difficult to implement native dual-stack mode without NAT if you continue to delay your IPv6 preparations. The sooner you can begin handing out IPv6 addresses to new customers, the sooner you will be able to conserve IPv4 resources for providing addresses to older subscriber devices. This means you need to start your IPv6 preparations now — even if you still have plenty of IPv4 resources.

If you're interested in learning more about dual-stack migration or want to explore other transition options, download our free ebook: IPv6 eBook Series: Migration.

Written by Stephane Bourque, Founder, CEO and President at Incognito Software

Follow CircleID on Twitter

More under: IP Addressing, IPv6

Categories: Net coverage

ICANN gTLDs: When Names Are Borrowed from an Atlas

Tue, 2013-04-16 17:17

When names are borrowed from an atlas, things happen. The use of geographic names has always caused problems for two reasons: one, they are in the public domain, so anyone else can use them; and two, they connote that the business is confined to just that geographic area. Think of Paris Bakery, Waterloo Furniture or London Bank. Geographic naming was the biggest thing of the last couple of centuries, when using the name of a village or a city as a moniker was considered being on top of the hill. The sudden worldwide expansion of markets, thanks to the ease of communications in the early days of the computer age, created a massive exit of businesses from geographical names.

Back in 1985, ABC Namebank conducted a major study of all of the corporate names listed in the Fortune 500, from the first list published in 1955 all the way to 1985. This thirty-year, year-by-year comparison showed why most corporations replaced geographic names with appropriately borderless names: to reach an international audience.

Amazon, as a brand name for an online book retailer, is the largest and most successful example. At this stage, it's not important where and why that business name was chosen, whether from the female warriors of ancient Greek legend or from the Amazon River; the fact remains that it is now a geographical name in the public domain. So who should get the super-power gTLD dot.amazon: the book store or the Amazon region of Brazil? Names borrowed from the atlas often face sudden crossroads.

ICANN's gTLD name evaluation policy has only two clear options: either follow the proven rules of trademark registrability, or follow the first-come, first-served 'lawless' rule of early domain name registrations.

To go granular on this early, lawless domain name approval system, let's clarify two things. If the original 'no questions asked', 'first-come, first-served' policy created a massive expansion of domain names, did it not also create some 25,000 UDRP conflict resolution proceedings and a multi-billion-dollar defensive name registration industry? Who are the real beneficiaries of such lawless registrations? When a legitimate multi-billion-dollar company buys a name for a business, say ibm.com, the same system allows a kid to buy myibm.com next in line. Is this a way to earn a few dollars on a sale, or is it a plan to fuel massive global litigation and speculative markets in intellectual property? Under this lawless thinking, the trademark system would have collapsed a couple of centuries ago. Now let's fast-forward.

Name-centricity clashes with global branding

"A complete breakdown of the domain name registration system, a type of anarchy on the Internet, as allowing anybody to register anything. Registrars throw up the towels. Trademark offices threaten to shut down. Intellectual property becomes public domain. The part-time guy at the local Pizza Hut answers the phone "Hello this is IBM — how can I help you" Battalions of lawyers will band around the word, declaring war on each other, and forcing conflicting points of views in endless battles will win trademarks. This war, would be a great windfall for the profession, as monthly billings would only become perpetual ones." —Excerpted from Domain Wars, by Naseem Javed, Linkbridge Publishing 1999.

Back to gTLDs and another example: if ICANN approves the name for the athletic brand Patagonia, already objected to by the Patagonia region of South America, it will do serious damage to the credibility of gTLD ownership. Once that credibility is cracked there will be no end to it, as every tenth gTLD name poses its own conflicting issues, and giving in each time would chip away at gTLD quality.

If, on the other hand, ICANN recognizes geographic gTLDs as rightly belonging to the local communities and regions, it will send a shock wave through all the global name brands built on words borrowed from the atlas.

At least 10% of the big list of 1,930 pending applications involve very tough name approval decisions. If this alone does not place ICANN in the eye of a storm of naming complexity, then where else is it headed? ICANN is now approaching a crossroads where the seriousness and fairness of the use of names under trademark law must be clearly declared, or the gTLD program will crack, with litigious and hyper-defensive registration mechanisms sucking out its positive energy. Without such clarity and direction, the Trademark Clearinghouse is poised to become the Achilles heel of the gTLD battlefield.

ICANN is slowly approaching the crossroads, and so are the global brands with words borrowed from the atlas, but both sides still need good maps.

Written by Naseem Javed, Expert: Global Naming Complexities, Corporate Nomenclature, Image & Branding

Follow CircleID on Twitter

More under: Domain Names, ICANN, Internet Governance, Policy & Regulation, Top-Level Domains

Categories: Net coverage

China and the United States Agree on Forming Joint Cybersecurity Working Group

Mon, 2013-04-15 19:10

China and the United States will set up a working group on cybersecurity, U.S. Secretary of State John Kerry said on Saturday, as the two sides moved to ease months of tensions and mutual accusations of hacking and Internet theft. Speaking to reporters in Beijing during a visit to China, Kerry said the United States and China had agreed on the need to speed up action on cyber security, an area that Washington says is its top national security concern.

Read full story: Reuters

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, Internet Governance, Security

Categories: Net coverage