Google Plans Wireless Access to Remote Regions Using High-Altitude Balloons and Blimps

Google is reported to be building huge wireless networks across Africa and Asia using high-altitude balloons and blimps. The company aims to finance, build and help operate networks from sub-Saharan Africa to Southeast Asia, with the goal of connecting around a billion people to the web. To help enable the campaign, Google has been putting together an ecosystem of low-cost smartphones running Android on low-power microprocessors. Read full story: Wired News. More under: Access Providers, Mobile, Wireless.
Who Has Helped the Internet? May 31 Deadline for Nominations for 2013 Jonathan Postel Service Award

Do you know of a person or organization who has made a great contribution to the Internet community? If so, have you considered nominating that person or organization for the 2013 Jonathan B. Postel Service Award? The nomination deadline of May 31 is fast approaching! From the description of the award: "Each year, the Internet Society awards the Jonathan B. Postel Service Award. This award is presented to an individual or an organization that has made outstanding contributions in service to the data communications community. The award includes a presentation crystal and a prize of US$20,000. The award is focused on sustained and substantial technical contributions, service to the community, and leadership. The committee places particular emphasis on candidates who have supported and enabled others in addition to their own specific actions." The award will be presented at the 87th meeting of the Internet Engineering Task Force (IETF) in Berlin, Germany, in July. Anyone can nominate a person or organization for consideration. To understand more about the award, you can view the list of past Postel Service Award recipients and read more about Jon Postel and his many contributions to the Internet. Full disclosure: I am employed by the Internet Society but have nothing whatsoever to do with this award. I am posting this here on CircleID purely because I figure that people within the CircleID community of readers are highly likely to know of candidates who should be considered for the award. Written by Dan York, Author and Speaker on Internet technologies. More under: Web.
Removing Need at RIPE

I recently attended RIPE 66, where Tore Anderson presented his suggested policy change 2013-03, "No Need – Post-Depletion Reality Adjustment and Cleanup." In his presentation, Tore suggested that this policy proposal was primarily aimed at removing the requirement to complete the form(s) used to document need. There was a significant amount of discussion around bureaucracy, convenience, and "liking" (or not) the process of demonstrating need. Laziness has never been a compelling argument for me, and this is no exception. The fact is that any responsible network manager must keep track of IP address utilization in order to design and operate their network, regardless of RIR policy. Filling this existing information into a form really does not constitute a major hurdle to network or business operations. So, setting aside the laziness argument, let's move on to the rationale presented.

IPv4 is Dead?

Tore pointed to section 3.0.3 of RIPE-582, the "IPv4 Address Allocation and Assignment Policies for the RIPE NCC Service Region": "Conservation: Public IPv4 address space must be fairly distributed to the End Users operating networks. To maximise the lifetime of the public IPv4 address space, addresses must be distributed according to need, and stockpiling must be prevented." According to Mr. Anderson, this is "something that has served us well for quite a long time" but, now that IANA and RIPE have essentially exhausted their supply of free/unallocated IPv4 addresses, is obsolete. From the summary of the proposal: "Following the depletion of the IANA free pool on the 3rd of February 2011, and the subsequent depletion of the RIPE NCC free pool on the 14th of September 2012, the 'lifetime of the public IPv4 address space' in the RIPE NCC region has reached zero, making the stated goal unattainable and therefore obsolete." This argument appears to be the result of what I would consider a very narrow and unjustified interpretation of the goal of conservation.
Tore seems to interpret "maximise the lifetime of the public IPv4 address space" to mean "maximise the duration that public IPv4 space remains available at the RIPE NCC." Under this translation, it is possible to believe that a paradigm shift has occurred which calls for a drastic reassessment of the goal of conservation. If, however, we take the goal as written in RIPE NCC policy as a carefully crafted statement meant to convey its meaning directly, without interpretation or translation, a different conclusion seems obvious. While Tore is correct in his observation that IANA and the RIPE NCC (and APNIC, and soon ARIN) have all but depleted their reserves of "free" IPv4 addresses, that does not mean that the lifetime of the public IPv4 address space has come to an end. While I would love for everyone to enable IPv6 and turn off IPv4 tomorrow (or better yet, today), that is simply not going to happen all at once. The migration to IPv6 is underway and gaining momentum, but there are many legacy devices and legacy networks which will require the use of IPv4 for years to come. Understanding that the useful life of IPv4 is far from over (raise your hand if you have used IPv4 for a critical communication in the past 24 hours) makes it quite easy to see that we still have a need to "maximise the lifetime of the public IPv4 address space." In fact, the IANA and RIR free pools have essentially been a buffer protecting us from those who would seek to abuse the public IPv4 address space. As long as there was a reserve of IPv4 addresses, perturbations caused by bad actors could be absorbed to a large extent by doling out "new" addresses into the system under the care of more responsible folks. Now that almost all of the public IPv4 address space has moved from RIR pools into the "wild," there is arguably a much greater need to practice conservation.
The loss of the RIR free pool buffer does not mark the end of "the lifetime of the public IPv4 address space," as Tore suggests, but rather marks our entry into a new phase of that lifetime, in which stockpiling and hoarding have become even more dangerous.

A Paradox

Tore made two other arguments in his presentation, and I have trouble reconciling the paradox created by believing both of them at once. The two arguments are not new; I have heard them both many times before in similar debates, and they invariably go something like this: first, that LIRs will only take what they need, since they can no longer go back to the RIPE NCC for more addresses; and second, that people already lie and cheat to get around needs-based rules, so the rules should be removed.
I want to look at these arguments first individually, and then examine the paradox they create when combined. Early in his presentation, Tore said something to the effect that because the LIR cannot return to the RIPE NCC for more addresses, they would never give a customer more addresses than they need, and that the folks involved will find ways of assessing this need independently. OK, if this is true, then why not make it easy for everyone involved by standardizing the information and process required to demonstrate need? Oh, right, we already have that. Removing this standardization opens the door for abuse, large and small. The most obvious example is a wealthy spammer paying an ISP for more addresses than they can technically justify, in order to carry out their illegal bulk mail operation. The reverse is true as well: with no standard for efficient utilization to point to, it becomes easier for an ISP to withhold addresses from a downstream customer (perhaps a competitor in some service) who actually does have a justifiable technical need for them. The second argument is more ridiculous. I truly don't understand how anyone can be convinced by the "people are breaking the rules, so removing the rules solves the problem" argument. While I am in favor of removing many of the rules, laws, and regulations that I am currently aware of, I favor removing them not because people break them but because they are unjust rules which provide the wrong incentives to society. If you have a legitimate problem with people stealing bread, for example, then making the theft of bread legal does not in any way solve your problem. While it is possible that bread thieves may be less likely to lie about stealing the bread (since they no longer fear legal repercussions), and it is certainly true that they would no longer be breaking the law, law-breaking and lying are not the problem. The theft of bread is the problem.
Legalizing bread theft has only one possible outcome: encouraging more people to steal bread. So the fact that bad actors currently have an incentive to lie and cheat to get more addresses in no way convinces me that making their bad behavior "legal" would solve the problem. If anything, it is likely to exacerbate the issue by essentially condoning the bad behavior, encouraging others to obtain more addresses than they can technically justify. Of course, it gets even worse when you try to hold up both of these arguments as true at once. If people can be counted on to take only what they need, why are they lying and cheating to get more? If people are willing to lie and cheat to get around the needs-based rules, why would they abide by need when the rules are removed? I just can't make these two statements add up in a way that makes any sense.

Conclusions

Since we still need IPv4 to continue working for some time, maximizing the lifetime of the public IPv4 address space through conservation is still a noble and necessary goal of the RIRs, perhaps more important than ever. Filling out some paperwork (with information you already have at hand) is a very low burden for maintaining this goal. At this time, there is no convincing rationale for removing this core tenet of the Internet model which has served us so well. Written by Chris Grundemann, Network Architect, Author, and Speaker. More under: Internet Governance, Internet Protocol, IP Addressing, IPv6, Policy & Regulation, Regional Registries.
IPv6: Penny Wise and Pound Foolish

The theory put forward by the IETF was simple enough: while there were still enough IPv4 addresses, use transition technologies to migrate to dual stack, and then wean IPv4 off over time. All nice and tidy. The way engineers, myself included, liked it. However, those controlling the purse strings had a different idea. Theirs was: don't spend a cent on protocol infrastructure improvement until the absolute last minute, because there's no ROI in IPv6 for shareholders. Getting in front of the problem at the expense of more marketable infrastructure upgrades was career suicide. [Graph from my 2008 sales presentation: sound but not convincing.] By treating this as a technical issue rather than a business one, it was easier to delay the inevitable, but the delay had unintended consequences. The fewer IPv4 addresses there were, the fewer technical options remained to address the problem. This, coupled with the simpler user experience and lower expense, led us to today and the emergence of the so-called Carrier Grade NAT (CGN). [For a thorough overview of the various flavors of CGN and the choices in front of us, see Phil's post, The Hatred of CGN, on gogoNET. Don't let the title fool you.] By deploying CGNs, ISPs are sharing single IPv4 addresses among more and more households, and this isn't good. Why? Because two levels of NAT break things, and that leads to unhappy customers. Case in point: British Telecom. BT recently put its retail Option 1 broadband customers (the lowest tier) behind CGNs, and they are now feeling the pain from a variety of breakage, mostly because Xbox Live stopped working. Asian fixed-line operators were the first to deploy CGN as a Band-Aid to cover the problem until the rest of the world standardized on a transition solution. Japan and South Korea notwithstanding, I suspect the reasons we haven't heard the same outcry earlier are cultural and the result of lower expectations/SLAs.
However, in a mature broadband market like the UK, where customers are vocal and expectations/SLAs are high, you are going to hear about it. And since there isn't a steady stream of new customers to offset the churn, this can turn into a PR nightmare resulting in the loss of high-acquisition-cost customers. Expect to see more of these reports as more European and North American ISPs follow suit. The irony here is that it was the British who coined the term "penny wise and pound foolish." The reader comments on the article "BT Retail in Carrier Grade NAT Pilot" tell the story.
Written by Bruce Sinclair, CEO, gogo6. More under: IP Addressing, IPv6.
An Agreement in Geneva

For all the tranquility at the end of last week's World Technology/ICT Policy Forum (WTPF), E.B. White's words come to mind: "there is nothing more likely to start disagreement among people or countries than an agreement." One also has to wonder, though, what a literary stylist like White would think of the linguistic gyrations demanded by the compromises reached at the WTPF in Geneva, and what they portend.

Past as Prologue

The management of the International Telecommunication Union (ITU) and a number of influential Member States made best efforts to recalibrate the dialogue at the WTPF towards mending political fences battered by the ITU's last major gathering back in December, and delegates of all stripes found a decent hearing for their concerns. But attempts by the governments of Brazil and Russia to heighten the prominence of governments and of the ITU itself in Internet governance still clashed with traditional defenders of the multistakeholder model. Where the clashes could not be resolved, we are left with gems such as this: a formal recommendation dealing with the role for governments that "invites all stakeholders to work on these issues." Where, if anywhere, do you go from there?

Where to, ITU?

Uncertainty exists about how the next stages of the Internet governance debate will play out, but we at least know on what stages they will be played. Stakeholders needing to determine which venues to attend can choose among plenty of meetings and acronyms, from the IGF to the CSTD to the UNGA's 2C.
The next opportunity for the ITU to consider the issue of Internet governance will be its own Council Working Group on the World Summit on the Information Society (WSIS) in June, which takes place alongside the ITU's larger Council meetings, where a broader discussion around the organization's budget may prove more important in determining the organization's priorities and how much it should spend on traditional areas of expertise, like satellite and spectrum allocations, versus Internet policy. In the coming months the ITU will also host a series of regional meetings in preparation for the World Telecommunication Development Conference (WTDC), held from 31 March to 11 April 2014 in Sharm el-Sheikh, Egypt. The ITU is co-locating that meeting with its own ten-year review of WSIS (called WSIS+10), as well as its annual WSIS Forum, at which it has traditionally reviewed the WSIS action lines for itself and various other UN institutions.

Heralding what?

These meetings, and some of the new voices in them, imply that the ITU continues to position itself as a key forum for governments to come and make their views heard on Internet matters: a welcome if redundant function. So if the process of reaching the agreement struck at the WTPF suggests anything, it is that stakeholders can agree to disagree. In this case, that will not mean a stalemate or a halt to discussion, but rather an evolving debate about the role of government in Internet policymaking. The steady pace of ITU-sponsored engagements will provide further opportunities to agree, disagree, and, in the end, hopefully create a set of shared understandings and brokered solutions that actually advance the debate to the benefit of people and countries around the world. Written by Christopher Martin, Senior Manager, International Public Policy at Access Partnership. More under: Internet Governance.
How to Stop the Spread of Malware? A Call for Action

Webwereld published an article (in Dutch) following a new Kaspersky malware report for Q1 2013. Nothing new was mentioned here: The Netherlands remains number 3 when it comes to malware sent from Dutch servers. At the same time, Kaspersky writes that The Netherlands is one of the safest countries as far as infections go. So what is going on here?

Inbound, outbound and on site

From my anti-spam background, I know from experience that as long as a spammer remains under the radar of national authorities, e.g. by making sure that he never targets end users in his own country, he is pretty safe. International cooperation between national authorities is so limited that seldom does anything happen in cross-border cases. Priority is mainly given to national cases, as cooperation is nearly nonexistent (if priority is given to spam fighting at all). The same will be the case for the spreading of malware. National authorities focus on things national. Cross-border issues are just too much of a hassle, and no one was murdered, right? Of course, if the allegation is right and we are talking about 157 command-and-control servers for botnets among the thousands upon thousands, if not millions, of servers in The Netherlands, then 157 is a very low figure. But this does not mean we can ignore it if our country is the number 3 malware-spewing country in the world. Something needs to happen, preferably through self-regulation, and failing that, through regulation. If it is also true that it is the same few hosting providers that never respond to complaints, it is time to either make them listen or shut them down. There is no excuse for (regulatory) enforcement bodies not to do so. Harm is being done, the economic effects are huge, and the name of The Netherlands is mentioned negatively again and again.
In January 2005, at OPTA, we were very proud that we had dropped from the number 3 position worldwide for spamming to a position outside the top 20, in six months' time! I do not think it is much harder to do the same for sending malware.

A suggestion for an action plan

Here's an action plan:
And if the anti-botnet infection centre ABUSE-IX starts doing its part in disinfecting end users' devices, The Netherlands may have a winning combination. Of course, this can be duplicated in your respective countries, for spam, malware, phishing, cyber crime, etc.

International cooperation

Of course, the topics surrounding cyber security call for international cooperation and coordination. In 2013 it is still virtually impossible to cooperate on cross-border cyber crime, spam, and the spreading of malware. This needs addressing at the EU and world level. National institutions cannot afford not to do so, even if it is hard to give up a little national jurisdiction. There are in-between forms, like coordination.

Conclusion

Let's push the boundaries for cyber threats back. It all starts with ambition. Experience shows that (the threat of) enforcement works. This isn't rocket science; it is about political will and insight. Written by Wout de Natris, Consultant international cooperation cyber crime + trainer spam enforcement. More under: Cybercrime, Internet Governance, Law, Malware, Security, Spam.
A Royal Opinion on Carrier Grade NATs

There are still a number of countries that have Queen Elizabeth as their titular head of state. My country, Australia, is one of them. It's difficult to understand exactly what her role is these days in the context of Australian governmental matters, and I suspect that even in the United Kingdom many folk share my constitutional uncertainty. Nevertheless, it's all great theatre and rich pageantry, with great press coverage thrown in as well. In the United Kingdom every year the Queen reads a speech prepared by the government of the day, which details the legislative measures being proposed by the government for the coming year. Earlier this month the Queen's speech included the following statement: "In relation to the problem of matching Internet Protocol addresses, my government will bring forward proposals to enable the protection of the public and the investigation of crime in Cyberspace." [on YouTube, 5:45] As the Guardian pointed out: "The text of the Queen's speech gives the go-ahead to legislation, if needed, to deal with the limited technical problem of there being many more devices including phones and tablets in use than the number of internet protocol (IP) addresses that allow the police to identify who sent an email or made a Skype call at a given time." What's the problem here? The perspective of various law enforcement agencies is that the Internet is a space that has been systematically abused, where too many folk are falling prey to various forms of deceit and fraud. If you add to that the undercurrent of concern that the Internet contains a wide range of vulnerabilities from the perspective of what we could generally term "cybersecurity," then it's not surprising to see law enforcement agencies now turning to legislation to assist them in undertaking their role.
And part of their desired toolset in undertaking investigations and gathering intelligence is access to records from the public communications networks of exactly who is talking to whom. Such measures are used in many countries, falling under the generic title of "data retention." In the world of telephony, the term "data retention" was used to refer to the capture and storage of call detail records. Such records typically contain the telephone numbers used and the time and duration of the call, and may also include ancillary information such as location and subscriber details. Obviously, such detailed usage data is highly susceptible to data mining, and such call records can readily be used to identify an individual's associates and the members of a group. Such data has been of enormous interest to various forms of law enforcement and security agencies over the years, even without the call conversation logs from direct wiretapping of targeted individuals. The regulatory measures designed to protect access to these records vary from country to country, but access is typically made available to agencies on the grounds of national security, law enforcement, or even enforcement of taxation conformance. So if that's what happens in telephony, what happens on the Internet? Here the story is a continually evolving one, and these days the issues of IPv4 address exhaustion and IPv6 are becoming very important topics in this area. To see why, it is probably worth looking at how this used to happen and what technical changes have prompted changes to the requirements related to data retention for Internet Service Providers (ISPs). The original model of the analogous data records for the Internet was the registry of allocated addresses maintained by the Internet Network Information Center, or InterNIC. This registry did not record any form of packet activity, but was the reference data that showed which entity had been assigned which IP address.
So if you wanted to know what entity was using a particular IP address, then you could use a very simple "whois" query tool to interrogate this database:

$ whois -h whois.apnic.net 202.12.29.211
inetnum: 202.12.28.0 - 202.12.29.255
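Under the hood, a whois lookup of this kind is just a plain TCP exchange on port 43: send the query terminated by CRLF, then read until the server closes the connection. A rough sketch in Python, not a full client (real responses vary by registry, and the `parse_fields` helper is an illustrative assumption, not part of any whois standard):

```python
import socket

def whois(server: str, query: str) -> str:
    """Send a whois query to `server` on TCP port 43 and return the response."""
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:  # server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def parse_fields(response: str) -> dict:
    """Collect 'key: value' lines from a whois response (first value wins)."""
    fields = {}
    for line in response.splitlines():
        if ":" in line and not line.startswith(("%", "#")):
            key, _, value = line.partition(":")
            fields.setdefault(key.strip(), value.strip())
    return fields

# Example, equivalent to the command above (network access required):
#   print(parse_fields(whois("whois.apnic.net", "202.12.29.211"))["inetnum"])
```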
However, this model of the registry making direct allocations to end-user entities stopped in the early 1990s with the advent of the ISP. The early models of ISP service were commonly based on dial-up, where a customer would be assigned an IP address for the duration of their call, and the IP address would return to the free pool for subsequent reassignment at the end of the call. The new registry model was that the identity of the service provider was described in the public address registry, while the assignment of individual addresses to each of their dial-up customers was information private to the service provider. Now, if you wanted to know what entity was using a particular IP address, you also had to know the time of day, and while a "whois" query could point you in the direction of whom to ask, you then had to ask the ISP for access to their Authentication, Authorization and Accounting (AAA) records, typically the RADIUS log entries, in order to establish who was using a particular IP address at a given time. Invariably, this provider data is private data, and agencies wanting access to it had to obtain appropriate authorization or warrants under the prevailing regulatory regime. This model of traceback has been blurred by the deployment of edge NATs, where a single external IP address is shared across multiple local systems serviced by the NAT. The traceback exercise can therefore reach the NAT device, but no further. So with access to this data you can understand the interactions on the network at the granularity of customer end points, but not at the level of individual devices or users. We've used this model of Internet address tracking across the wave of cable and DSL deployments. The end customer presents their credentials to the service provider and is provided with an IPv4 address as part of the session initiation sequence.
The time of this transaction, the identity of the customer, and the IP address are logged, and when the session is terminated the address is pulled back into the address pool and the release of the address is logged. The implication is that as long as the traceback can start with a query that includes an IP address and a time of day, it is highly likely that the end user can be identified from this information. But, as the Guardian's commentary points out, this is all changing again. IPv4 address exhaustion is prompting some of the large retail service providers to enter the Carrier Grade NAT space and join what has already become well-established practice in the mobile data service world. The same week as the Queen's speech, BT announced a trial of Carrier Grade NAT in its basic IP service. At the heart of the Carrier Grade NAT approach is the concept of sharing a public IP address across multiple customers at the same time. An inevitable casualty of this approach is the concept of traceback in the Internet, and the associated matter of record-keeping rules. It is no longer adequate to front up with an IP address and a time of day; that is just not enough information to uniquely distinguish one customer's use of the network from another's. What is required now depends on the particular NAT technology being used by the ISP. If the CGN is a simple port-multiplexing NAT, then you need the external IP address and the port number. When combined with the CGN-generated records of the NAT's bindings of internal to external addresses, this can map you back to the internal customer's IP address, and the ISP's address allocation records will then lead to identification of the customer. So traceback is still possible in this context.
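The two traceback regimes described above can be sketched side by side. Everything here is illustrative: the log layouts, addresses and customer IDs are invented for the sketch, not any real ISP schema.

```python
from datetime import datetime

# Pre-CGN model: the ISP's AAA/session log maps a public IP plus a time
# interval to one customer, so (IP, time) is enough to identify the end user.
SESSION_LOG = [
    ("cust-1042", "192.0.2.17",
     datetime(2013, 5, 20, 8, 0), datetime(2013, 5, 20, 11, 30)),
    ("cust-2213", "192.0.2.17",
     datetime(2013, 5, 20, 12, 0), datetime(2013, 5, 20, 23, 59)),
]

def who_used(ip, when):
    """Return the customer who held `ip` at time `when`, if any."""
    for cust, addr, start, stop in SESSION_LOG:
        if addr == ip and start <= when <= stop:
            return cust
    return None

# Port-multiplexing CGN model: one public IP is shared concurrently, so the
# query must also carry the external port. The CGN binding log maps
# (external IP, external port, time) back to an internal address, and the
# ISP's allocation records map that internal address to a customer.
BINDING_LOG = [
    ("203.0.113.5", 40001, "100.64.1.17",
     datetime(2013, 5, 20, 9, 0), datetime(2013, 5, 20, 9, 45)),
    ("203.0.113.5", 40001, "100.64.3.9",
     datetime(2013, 5, 20, 10, 0), datetime(2013, 5, 20, 10, 20)),
]
ALLOCATIONS = {"100.64.1.17": "cust-1042", "100.64.3.9": "cust-2213"}

def trace_back(ext_ip, ext_port, when):
    """Map (external IP, external port, time) to a customer, if the logs allow."""
    for ip, port, int_ip, start, stop in BINDING_LOG:
        if (ip, port) == (ext_ip, ext_port) and start <= when <= stop:
            return ALLOCATIONS.get(int_ip)
    return None  # an IP and a time alone can no longer single out a customer

print(who_used("192.0.2.17", datetime(2013, 5, 20, 9, 15)))             # cust-1042
print(trace_back("203.0.113.5", 40001, datetime(2013, 5, 20, 10, 10)))  # cust-2213
```

Note that in the CGN case the same (external IP, port) pair maps to different customers at different times, which is precisely why the time of day has to accompany the query.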
In a story titled "Individuals can be identified despite IP address sharing, BT says," the newsletter Out-Law.com (produced by the law firm Pinsent Masons) reports: BT told Out-Law.com that its CGNAT technology would not prevent the correct perpetrators of illegal online activity from being identified. "The technology does still allow individual customers to be identified if they are sharing the same IP address, as long as the port the customer is using is also known," a BT spokesperson said in a statement. "Although the IP address is shared, the combination of IP address and port will always be unique and as such these two pieces of information, along with the time of the activity can uniquely identify traffic back to a broadband line. [...] If we subsequently receive a request to identify someone who is using IP address x, and port number y, and time z we can then determine who this is from the logs," the spokesperson said. [...] "If only the IP address and timestamp are provided for a CGNAT customer then we are unable to identify the activity back to a broadband line," they added. But port-multiplexing NATs are still relatively inefficient in terms of address utilization. A more efficient form of NAT multiplexing uses the complete 5-tuple of the connection signature, so that the NAT's binding table uses a lookup key of the protocol field plus the source and destination addresses and port values. This allows the NAT to achieve far higher address-sharing ratios, allowing a single external IP address to be shared across a pool of up to thousands of customers. So what data needs to be collected by the ISP to allow for traceback in this sort of CGN environment? In this case the ISP needs to collect the complete 5-tuple of the external view of the connection, plus the start and stop times at millisecond granularity or finer, together with the end-user identification codes.
Such a session state log entry typically takes around 512 bytes as a stored data unit. How many individual CGN bindings, or session states, does each user generate? One report I've seen points to an average of some 33,000 connections per end customer each day. If that's the case, then the implication is that each customer will generate some 17 Mbytes of log information every day. For a very large service provider with, say, some 25 million customers, that equates to a daily log file of 425 Tbytes. If these CGN records were produced at an (unrealistically) uniform rate over the day, that's a constant log data flow of some 40 Gbps. At a more realistic estimate of the busy period peaking at 10 times the average, the peak log data flow rate is some 400 Gbps. That's the daily load, but what about longer-term data retention storage demands? The critical question here is the prevailing data retention period. In some regimes it's 2 years, while in others it's up to 7 years. Continuing with our example, holding this volume of data for 7 years will consume 1,085,875 terabytes, or about 1.1 exabytes, to use the language of excessively large numbers. And that's before you contemplate backup copies of the data, and before you contemplate an Internet that becomes even more pervasive, and therefore even larger and more intensively used, in the coming years. Answering questions from such a data set also requires a very precisely defined query. It's no longer an option to ask "who used this IP address on this date?" Or even "who used this IP address and this port address in this hour?" A traceback that can penetrate the CGN-generated address overuse fog requires the question to include both the source and destination IP addresses and port numbers, the transport protocol, and the precise time of day, measured in milliseconds.
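The storage estimates above can be checked with a few lines of arithmetic; the small differences from the figures in the text come from its rounding of 16.9 MB up to 17 MB per customer per day.

```python
# Back-of-envelope check of the CGN log storage figures, using the
# article's own assumptions: 512-byte log records, ~33,000 connections
# per customer per day, 25 million customers, 7-year retention.
RECORD_BYTES = 512
CONNS_PER_DAY = 33_000
CUSTOMERS = 25_000_000

per_customer_day = RECORD_BYTES * CONNS_PER_DAY   # bytes per customer per day
daily_total = per_customer_day * CUSTOMERS        # bytes per day, provider-wide
avg_rate_bps = daily_total * 8 / 86_400           # sustained write rate in bits/s
seven_years = daily_total * 365 * 7               # bytes retained over 7 years

print(f"{per_customer_day / 1e6:.1f} MB per customer per day")
print(f"{daily_total / 1e12:.0f} TB per day")
print(f"{avg_rate_bps / 1e9:.0f} Gbps average log rate")
print(f"{seven_years / 1e18:.2f} EB over 7 years")
```

Running this prints roughly 16.9 MB per customer per day, 422 TB per day, a 39 Gbps sustained log rate, and about 1.08 EB over seven years, in line with the article's rounded figures.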
This last requirement, of precise coordinated time records, is a new addition to the problem, as traceback now requires that the incident being tracked be identified in time according to a highly accurate time source running in a known timezone, so that a precise match can be found in the ISP's data logs. It's unclear what it will cost to collect and maintain such massive data sets, but it's by no means a low-cost incidental activity for any ISP. No wonder the UK is now contemplating legislation to enforce such record-keeping requirements in the light of the forthcoming CGN deployments in large-scale service provider networks in that part of the world. Without such a regulatory impost it's unlikely that any service provider would, of their own volition, embark on such a massive data collection and long-term storage exercise. One comment I've heard is that in some regimes it may well be cheaper not to collect this information and opt to pay the statutory fine instead; it could well be cheaper! This is starting to look messy. The impact of CGNs on an already massive system is serious, in that it alters the granularity of rudimentary data logging from the level of a connection to the Internet to the need to log each and every individual component conversation that every consumer has. Not only is it every service you use and every site you visit, but it's even at the level of every image and every ad you download: everything. Because once we start sharing addresses we can only distinguish one customer from another at the level of these individual basic transactions. It's starting to look complicated and certainly very messy. But, in theory at least, we don't necessarily have to be in such a difficult place for the next decade and beyond.
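The timezone requirement amounts to a normalization step: an incident reported in local time must be converted to the same clock used to key the ISP's logs before any match is attempted. A minimal sketch, with UTC milliseconds as an assumed (illustrative) log key format:

```python
from datetime import datetime, timezone, timedelta

def to_log_key(incident_time: datetime) -> int:
    """Convert a timezone-aware timestamp to UTC milliseconds since the epoch."""
    if incident_time.tzinfo is None:
        # A naive timestamp is ambiguous and cannot be matched precisely.
        raise ValueError("incident timestamps must carry an explicit timezone")
    return int(incident_time.astimezone(timezone.utc).timestamp() * 1000)

# An incident logged at 14:30:05.250 in a UTC+1 timezone...
incident = datetime(2013, 6, 14, 14, 30, 5, 250_000,
                    tzinfo=timezone(timedelta(hours=1)))
# ...must be matched against log entries keyed at 13:30:05.250 UTC.
key = to_log_key(incident)
```

Rejecting naive timestamps outright reflects the point in the text: without a known timezone and millisecond precision, the log lookup simply cannot be performed reliably.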
The hopeful message is that if we ever complete the transitional leap to an all-IPv6 Internet, the data retention capability reverts to a far simpler model that bears a strong similarity to the very first model of IP address registration. The lack of scarcity pressure in IPv6 addresses allows the ISP to statically assign a unique site prefix to each and every customer, so that the service provider's data records can revert to a simple listing of customer identities and their assigned IPv6 prefixes. In such an environment the cyber-intelligence community would find that their role could be undertaken with a lot less complexity, and the ISPs may well find that regulatory compliance, in this aspect at least, would be a lot easier and a whole lot cheaper! Written by Geoff Huston, Author & Chief Scientist at APNIC Follow CircleID on Twitter More under: Access Providers, Cybercrime, Internet Governance, IP Addressing, IPv6, Policy & Regulation Categories: Net coverage
Major New Funding Opportunities for Internet Researchers and R&E Networks
Nationally Appropriate Mitigation Action (NAMA) is a new policy program that was developed at the Bali United Nations Climate Change Conference. As opposed to much-maligned programs like the CDM and other initiatives, NAMA refers to a set of policies and actions that developed and developing countries undertake as part of a commitment to reduce greenhouse gas emissions. Also unlike the CDM, NAMA recipients are not restricted to developing countries. The program recognizes that different countries may take different nationally appropriate actions based on their different capabilities and requirements. Most importantly, any set of actions or policies undertaken by a nation under NAMA will be recorded in a registry, along with relevant technology, finance and capacity-building support, and will be subject to international measurement, reporting and verification. Most industrialized countries have already committed funding, or intend to commit funding, to NAMA projects. It is expected that by 2020 over $100 billion will be committed to NAMA programs by various nation states. As I have blogged ad nauseam, I believe Internet researchers and R&E networks can play a critical leadership role in developing zero-carbon ICT and "Energy Internet" technologies and architectures. ICT is the fastest growing sector in terms of CO2 emissions and is rapidly becoming one of the largest GHG-emitting sectors on the planet. For example, a recent Australian study pointed out that the demand for new wireless technologies alone will equal the CO2 emissions of 4.5 million cars! Once you get past the mental block that energy efficiency solves all problems, and realize that the problem is not energy consumption but the type of energy we use, a whole world of research and innovation opportunities opens up.
More significantly, whether you believe in climate change or not, it is expected that within a couple of years the cost of power from distributed rooftop solar panels will be less than that from the grid. This is going to fundamentally change the dynamics of the power industry, much like the Internet disrupted the old telecom world. Those countries and businesses that take advantage of these new power realities are going to have a huge advantage in the global marketplace. I am pleased to see that Europe is at the forefront of these developments, with Future Internet initiatives like FINSENY.EU actively working with NRENs and Internet researchers to develop the architectural principles of an energy Internet built around distributed small-scale renewable power. My only concern is that Europe may screw it up, as it did with the early Internet, when most of the research funding went to incumbent operators. The global Internet started in the academic research community and R&E networks. It would be great to see these same organizations play a leadership role in deploying the global "Energy Internet". Universities in many cases have the energy profile of small cities, with 25-40% of their electrical consumption directly attributable to ICT. Most campuses also operate large fleets of utility vehicles that could easily be converted to dynamic charging to "packetize" power and provide it where and when needed on campus, especially when there is no power from the solar panels. I dream of the day when a university announces it is going zero carbon and off the grid. Written by Bill St. Arnaud, Green IT Networking Consultant Follow CircleID on Twitter More under: Access Providers, Broadband, Telecom Categories: Net coverage
Governmental Advisory Committee (GAC) Beijing Communiqué Inconsistent With ICANN's gTLD Policy
This is an edited version of comments submitted to ICANN on the Governmental Advisory Committee (GAC) Beijing Communiqué of 11 April 2013. The GAC Communiqué recommends that ICANN implement a range of regulations (which the GAC calls "safeguards") for all new generic top-level domains (gTLDs), covering areas ranging from malware to piracy to trademark and copyright infringement. The GAC proposes specific safeguards for regulated and professional sectors covering areas as diverse as privacy and security, consumer protection, fair lending and organic farming. Finally, the GAC proposes a "public interest" requirement for approval of new "exclusive registry access" gTLDs. The GAC's recommendations raise complex issues about ICANN's mission and governance and how they relate to the laws of the jurisdictions in which the registries operate. Without getting into the details of the specific recommendations, the expansion of ICANN's role implicit in them is inconsistent with ICANN's policy of opening entry into the domain name space, which is intended to bring the benefits of competition and greater innovation to the market for TLDs. A major benefit of a competitive market is that there is generally no need for regulation of product attributes, as the GAC is proposing. Indeed, regulation of such a market will be counterproductive to the interests of consumers. In a competitive gTLD market, registries can be expected to provide the services their customers demand. Registries that provide those services will flourish, and those that do not will not survive. Importantly, a competitive gTLD market allows for a range of services corresponding to different preferences and needs. The type of regulation the GAC is recommending will raise costs to registries and impede the development of innovative new TLD services, ultimately harming consumers.
The value of gTLDs as economic assets and the benefits of the new gTLD program will be diminished. Included in the GAC Communiqué is the recommendation that exclusive access or closed registries for generic terms should be in the "public interest." A public interest standard is vague and difficult to define, and is therefore susceptible to being applied in an arbitrary manner. As I indicated in my March 6, 2013, comments to ICANN on the subject, a major benefit of the new gTLD program, in addition to providing competition to incumbents, is the ability of entrants to develop new business models, products, and services. Valuable innovations are likely to be blocked if ICANN attaches a public interest requirement to exclusive access registries. There may be instances where regulation is warranted. For example, the protection of intellectual property in domain names has become a major issue, particularly in connection with the introduction of new gTLDs. ICANN's Trademark Clearinghouse is an attempt to address that issue. There may be other areas where regulation is warranted, but it is unclear whether ICANN is the appropriate venue. If ICANN wants to be more of a regulatory agency, it should adopt good regulatory policy practices. Specifically, ICANN should demonstrate that there is a significant market failure addressed by a proposed regulation (or safeguard), that the benefits of the regulation are likely to be greater than the costs, and that the proposal is the most cost-effective one available. It is preferable, however, for ICANN to minimize its regulatory role. ICANN should hew closely to the technical functions involved in administering the Domain Name System — i.e., coordinating the allocation of IP addresses, managing the DNS root, and ensuring the stability of the DNS. This has historically been ICANN's essential mission and should continue to be so.
Written by Tom Lenard, President, Technology Policy Institute Follow CircleID on Twitter More under: ICANN, Internet Governance, Top-Level Domains Categories: Net coverage
Joint Venture Promises Broadband Benefits with Potential Risks for Latin American, Caribbean Markets
When Columbus Networks and Cable & Wireless Communications announced the formation of their new joint venture entity at International Telecoms Week 2013, it signaled an important milestone for the telecommunications sector in Latin America and the Caribbean. The development comes at a time when the region's appetite for bandwidth is rapidly rising. The market for wholesale broadband capacity is experiencing solid growth and shows no sign of slowing anytime soon. It is no surprise, then, to see consolidation in the market as service providers position themselves to take full advantage of the expected growth in demand. Significant Development
Columbus Communications’ Submarine Cable Footprint Their new arrangement is not a union of equals. CWC's assets, subject to the joint venture arrangement, had a gross asset value of US$108.2 million, and recorded a loss before tax of US$0.9 million in the year to 31 March 2013. In contrast, Columbus's assets, subject to the joint venture arrangement, had a gross asset value of US$304.6 million and recorded a profit before tax of US$29.3 million in the year to 31 December 2012. Their joint venture, called CNL-CWC Networks, will be managed by Columbus, whose share will be 72.5% to CWC Wholesale Solutions' 27.5%. Columbus and CWC in a joint statement said, "The new joint venture company will serve as the sales agent of both Columbus Networks and CWC Wholesale Solutions for international wholesale capacity." It added, "Columbus Networks and CWC Wholesale Solutions will retain ownership and control of their respective existing networks in the region." The companies expect that after completing necessary network interconnections, the joint venture will offer wholesale customers an expanded network platform that spans more than 42,000 kilometers and reaches more than 42 countries in the region. Officials from both companies shared that they hope to offer customers greater IP traffic routing options, improved reliability and higher performance as the joint venture rolls out. However, for all their enthusiasm about the joint venture, the success of an enlarged Columbus/CWC is by no means guaranteed. Given the strong parent brands, there is the real possibility of potentially conflicting strategies from Columbus and CWC for development of the Caribbean market. It remains to be seen how the enlarged entity will position itself in the market.
For Columbus, the deal enables the supply of international wholesale capacity and IP services to markets the company does not currently reach, such as Grenada, Barbados, St Lucia, Antigua, and St Vincent and the Grenadines. It also provides additional connectivity options for the Dominican Republic and Jamaica. For Cable and Wireless, its current LIME territories will be able to benefit from enhanced bandwidth capacity, enabled by access to Columbus Networks' sub-sea capacity. However, both companies must await further regulatory approvals in Panama, Colombia, the Cayman Islands, The Bahamas, Anguilla, Antigua and Barbuda, the British Virgin Islands, Montserrat, and St Kitts and Nevis before they can begin rolling out services on behalf of the joint venture in those countries. It is anyone's guess how long this approval process will take. Unanswered Questions The promise of an expanded network that can offer greater resilience, redundancy and routing options for Caribbean and Latin American traffic is certainly laudable. So too is the possibility of improving the region's access to international capacity to better meet increasing demand. However, the benefits of this joint venture must be weighed against the possibility that the new entity could negatively influence pricing, competition and downstream market growth. Unhealthy collusion or price-fixing in this significant sector of the telecommunications market could deal a serious blow to already fragile economies in the region. This must not be allowed to happen. But who is to be tasked with the responsibility of ensuring that things proceed in the interest of healthy market growth and economic development? There is no official body with the means or mandate to provide oversight of the region's telecommunications sector.
The small markets of the Caribbean are marked by under-resourced national regulators, more practiced in responding to local telecom wrangling than in strategically analyzing the international wheeling and dealing of trans-national players. So the questions now are: who is going to act as watchdog to safeguard regional, national and public interests? And who is going to ensure that the promised efficiencies and capacity increases actually benefit the region? Hopefully, it will not be too long before the answers emerge. Written by Bevil Wooding, Internet Strategist at Packet Clearing House Follow CircleID on Twitter More under: Access Providers, Broadband, Telecom Categories: Net coverage
ICANN and GAC: A New Role Needed?
Syracuse University professor Milton Mueller published a blog post titled "Will the GAC go away if the Board doesn't follow its advice?". Having been to a number of (very limited) ICANN meetings on behalf of law enforcement cooperation, I would like to share a few — probably thought-provoking — observations. The GAC should not leave ICANN, but it may be more effective if its role changed and its efforts were aimed at a different form of output. Governments and direct influence I know that I should explain here what ICANN and the GAC are, but this article is only of interest if you already have some background. Over the past few years the role of the GAC, the Governmental Advisory Committee, within ICANN, the Internet Corporation for Assigned Names and Numbers, seems to have changed. Having started as an advisory body, giving advice that the ICANN board could ignore or heed only in part, the GAC now operates more forcefully. From advice to orders, it seems. ICANN is multi-stakeholder all the way and, like most internet-related bodies, works bottom-up and through consensus only. Perhaps the most stifling form of democracy, but democracy it is: show up or participate remotely and your voice is heard. In this environment governments are seeking attention for their needs and concerns about the internet. Shouldn't they ask themselves: is this the correct place to have direct influence? Why are governments concerned? The internet as we know it was created outside the view and influence of governments, and by the time of the commercial boom, say from 1998 on, most western countries had liberalised their telecommunication markets. If anything was regulated it was the old telephony and access fees, not the internet. With the rise of commercial opportunities, other opportunities also arose for criminal actors, hacktivists, activists, free speech advocates, state actors, etc.
The results of these opportunities concern governments (of all sorts, for different reasons), as all sorts of national interests, from public safety to economic ones, are at stake. By the time governments seriously started to look around for enforcement options and regulations, they faced a global challenge. Hence the drive to have more say in internet-related policy discussions, and hence more interest in ICANN, the ITU, the IGF, etc., but mostly ICANN, it seems. But again: is ICANN the right place to have direct influence? GAC and ICANN What also surprises me is that governments put all this effort into ICANN. In the end this organisation handles only one aspect of what makes the internet work. Is this because it is the best organised one? There are many more, and equally important, topics where there seems to be less involvement. The RIRs, technical internet bodies, CERT meetings, etc., are far less attended by governments. So again: is ICANN the right place to have influence? National laws If a government wants real influence it has to write law that is binding within its own country. It would be advisable for (several) governments to coordinate on laws and regulations, e.g. within the E.U., perhaps even beyond. The thrice-yearly GAC meetings could be a great venue for such coordination. Why go national? The internet is only as stateless as the first cable coming onto land somewhere. Everything behind that is within a nation state. This is where influence starts, or could start, should a government wish to exert it. Let's say that a government wants a ruling on: 1) validation of (a domain name registration by) registrars, registries and resellers. It can lobby with ICANN and hope for self-regulation, or it can write it into national law; 2) revocation of abused IP addresses. It can lobby with the RIRs (Regional Internet Registries) or write a regulation into national law; 3) revocation of abused domain names?
Idem; 4) national organisations implementing best practices developed at the IETF. It can lobby there, or oblige national organisations, e.g. ISPs, to respond and implement within six months through national law; 5) etc., etc. A national regulation, whether directly enforced or through mandatory self-regulation, would be much more effective from a government's perspective than lobbying within multi-stakeholder groups and hoping for the best. Does this mean governments have to leave these groups? A new role I'm not claiming that governments should leave ICANN. I'm not even advocating regulatory regimes here. To the contrary. But I do think the present effort could be better spent. Governments should use ICANN meetings, and all the others around the internet, to understand which topics are important and what issues are at stake, to inform themselves as well as possible from all sides by asking the right questions, and to gain a true understanding of it all. From this understanding they can build their policies, using all that acquired information: policy that on the one hand aids the development of the internet and the economy, while on the other assists in making it more secure. There is a fine line to walk here, but a line governments need to walk to be most effective on both sides. And without the aid of industry it will never come about. Conclusion So, governments, put your ear to the ground and give your advice, but then go home and act on it in the best way possible. Preferably coordinated. Written by Wout de Natris, Consultant international cooperation cyber crime + trainer spam enforcement Follow CircleID on Twitter More under: ICANN, Internet Governance, Policy & Regulation, Top-Level Domains Categories: Net coverage
ICANN at the Inflection Point: Implications and Effects of the GAC Beijing Communique
Author's Foreword Although this article was first published just a few days ago, on May 8th, there have been several important intervening developments. First, on May 10th ICANN released a News Alert on "NGPC Progress on GAC Advice" that provides a timetable for how the New gTLD Program Committee will deal with the GAC Communique.iii Of particular note is that, as the last action in an initial phase consisting of "actions for soliciting input from Applicants and from the Community", the NGPC will begin to "Review and consider Applicant responses to GAC Advice and Public Comments on how Board should respond to GAC Advice re: Safeguards" on June 20th. This will be followed by a second phase consisting of "actions for responding to each advice given by the GAC", including development of "a GAC scorecard similar to the one used during the GAC and the Board meetings in Brussels on 28 February and 1 March 2011". In regard to how this may affect the timeline for introduction of new gTLDs, the Alert notes, "Part 2 of the Plan is not yet finalized and, with respect to some of the advice, cannot be finalized until after the review of the Public Comments due to be completed on 20 June." Thus it is impossible to know at this point how much delay ICANN's response to the GAC Communique may create for the introduction of new gTLDs, especially for those subject to the additional or further targeted safeguards for strings related to regulated industries and professions — although the outlook seems to generally adhere to the projections made in the article. I would guesstimate that some strings affected solely by the GAC's basic safeguards could launch in the third quarter of 2013, while those encompassed by the additional safeguards probably face delay until the last quarter of the year at a minimum.
The next meeting of the NGPC takes place on May 18th in Amsterdam, where "Resolution(s) on GAC Advice" is on the agendaii; any such Resolutions are more likely to be procedural than substantive — with substantive reaction, much less implementation, waiting until after GAC interaction with the Board at the mid-July ICANN meeting in Durban. Of course, regardless of how ICANN deals with the Communique, no new gTLDs can launch until the standard Registry Agreement (RA) is made final and adopted by the Board (and it may require yet further amendment to implement GAC safeguards and other advice) — and the same steps are completed for the revised Registrar Accreditation Agreement (RAA) if, as seems likely, only registrars adopting the revised RAA will be permitted to provide domain registration services for new gTLDs. Second, on May 10th ICANN also released a video interview — "GAC Chair Heather Dryden on the Beijing Communiqué and New gTLD Advice"iii — in which Chairwoman Dryden makes some significant assertions:
Chairwoman Dryden also concedes that the GAC advice may have been misunderstood because it was developed behind closed doors, which deprived members of the ICANN community of an opportunity to better understand the GAC's concerns and reasoning, and she appears to pledge that the GAC will operate with greater transparency in the future. In addition to providing useful background on the GAC's thinking, the interview also reiterates that if ICANN fails to provide an adequate response to the Communique it risks disengagement from the ICANN model by GAC member nations. Besides providing an opportunity to demonstrate effective self-regulation, reasonable implementation of the safeguards can also head off more onerous top-down legislative and regulatory approaches. Imagine, for example, that in the absence of a meaningful response by ICANN to the GAC, the European Community (EC) were to adopt legislation incorporating the safeguards as a prerequisite for the sale of new gTLD domains by registrars operating in the Community, as well as for the transaction of online business with EC consumers by their registrants. Finally, initial public comments on the safeguards have started to be posted.iv Predictably, some support various elements while others urge rejection on the grounds that the Communique consists of tardy and ill-defined changes in policy that are at odds with the multi-stakeholder model. Notwithstanding some negative comments and related press treatment, the overarching politics of the situation will almost surely result in a very serious ICANN process for considering the proposed safeguards and other components of the Communique. That process will seek to implement them in a manner that is effective but does not impose undue or inappropriate burdens on contracted parties, while maintaining ICANN's role as technical manager of the DNS: a role that respects and enforces existing public policy but does not usurp functions that belong to legislators and regulators.
New gTLD applicants, other members of the ICANN community, and interested third parties have an opportunity to influence ICANN's further consideration and implementation of the GAC advice over the next several months. * * * NEW TOP-LEVEL DOMAINS (Synopsis) The Governmental Advisory Committee communique and responsive requests for comments provide an opportunity for everyone involved with the Internet Corporation for Assigned Names and Numbers and every interest affected by the new TLD program to submit final input on its proposed framework for the launch of new TLDs, the author writes. The added steps will likely cause delays and impose new duties, but will also provide a blueprint for ICANN and registry operators to work cooperatively with the global public sector in decades to come. * * * On the afternoon of April 11, 2013, the last day of ICANN's 46th Public Meeting in Beijing, China, its Governmental Advisory Committee (GAC) issued a long and detailed communique with significant implications for the approximately 1,400 unique applications submitted to ICANN's new TLDs program — and, based upon its implementation response, for ICANN itself. The communique — the end product of a week of intense work undertaken by more than 100 participants from governments attending and engaging in the Beijing meeting — was foreshadowed by a March 31 GAC announcement1 that GAC meetings in Beijing would focus on "controversial or sensitive strings and applications," with sessions organized on "safeguard advice on the basis of categories of strings" and "GAC advice/objections on specific applications." While the GAC has reverted to holding closed-door meetings — excessively, in our view, within an ICANN organization dedicated to transparency and accountability — during the days before the ICANN meeting and its initial days, it did reach out.
The GAC met with many parties, including the GNSO Council charged with TLD policy matters, the Commercial Stakeholder Group, the ICANN Board of Directors, and others. The GAC was striving to deliver its input before the Beijing meeting concluded. The communique arrived in the middle of the Beijing Public Forum, where individuals directly address the ICANN Board on relevant topics. The communique elicited immediate outcry from some that its proposals constituted major changes in the rules of the new TLD game after the game had begun, would cause undue delay, fostered internet censorship — and that it should be subject to public comment. But it received support from others who believe that the GAC is best positioned to address public interest issues implicated by ICANN activities. Further, many of the issues addressed by the GAC were not clearly evident until after the sheer volume and relevant specifics of new TLD applications had been fully digested. ICANN's Unprecedented Move In a somewhat unprecedented move, ICANN acquiesced to the call for public comments and is even requesting two separate types. First, on April 19, new TLD applicants were advised that they were being provided with 21 days, until May 10, to respond to the GAC advice.2 That notice, as well as the official "GAC Advice Response Form for Applicants," takes a wide open approach. The notice provides no guidance on how feedback should be structured, such as whether applicants should critique the advice, outline how they intend to comply with it, or both. The attached form asks only for the applicant's name, ID number, and applied for string — followed by "Response:" and a blank space to fill. Shortly thereafter, on April 23, ICANN published a general notice of request for public comment from any interested party on "New TLD Board Committee Consideration of GAC Safeguard Advice," with an initial comment deadline of May 14 and a subsequent reply period closing on June 4.3
The explanation of the general public comment invitation provides this background:
As can be seen, the scope of comment being solicited from the general public is circumscribed, with requested input limited to the portions of the communique proposing "safeguards" — although many commenters will likely ignore that restriction and address other portions as well. Again, ICANN has provided no further refinement of the request for comment, giving no indication as to what feedback would be most useful to the Board's new TLD program committee. This unique and noteworthy approach may well result in feedback being received from parties not normally engaged with or active within the ICANN community. Those most directly affected by the GAC advice, new TLD applicants, may well choose to participate in both their exclusive comment forum as well as this general one — especially as the reply period for the latter extends to nearly four weeks past their own May 10 cutoff date — if they are willing to make their responses public. Potential Implications Before getting into the specifics of the GAC safeguard advice, the following are some guesses about the implications and effects that will flow from it. Timing of New TLD Introductions From now until the end of the July 14-18 ICANN meeting in Durban, South Africa, the ICANN community will consider and react to the GAC Advice. The time from Durban until the final meeting of 2013, November 17-21 in Buenos Aires, Argentina, will likely be the period of ultimate determination as to how much of it will be accepted by ICANN's Board, followed by implementation on the part of both ICANN and applicants. ICANN's new TLD program committee, composed of non-conflicted Board members, has scheduled discussion of a "Plan for responding to the GAC advice issued in Beijing" as the only agenda item for its May 8 meeting.4 But substantive reaction is likely to await receipt and consideration of applicant and public feedback as well as staff analysis of both the communique and the comments. 
As the GAC wants all new TLD safeguards to be subject to "contractual oversight" by ICANN, it is highly probable that additional amendments to the proposed new TLD Registry Agreement (RA) will need to be drafted and put out for public comment prior to final adoption, adding some additional delay to the rollout of new TLDs. Registry Operator Responsibilities Acceptance of even portions of the GAC advice will likely impose duties on registry operators to update and strengthen their terms of service. Registries will also need to submit or update Public Interest Commitments Specifications (PICS), and assume registrant monitoring and coordination duties with regulators and industry bodies that they probably did not envision or price into their business model. Requirements that registries immediately suspend domains in certain circumstances could re-ignite "domain censorship" due process concerns that last flared during the PIPA/SOPA internet blackout. Role of Governments at ICANN ICANN's and key stakeholders' reactions to the GAC communique may well determine whether governments remain engaged in and embracing of the ICANN multistakeholder model — or begin to drift away. Internet governance options exist outside of ICANN that are generally less favorable to and welcoming of contracted parties, business, and civil society. A multi-governmental shift away from ICANN would carry negative long-term implications for its existence. It could also eventually subject the DNS to a maze of disparate national laws and policies or the more worrisome specter of intergovernmental oversight far more intrusive than GAC advice. ICANN, with the acquiescence of its multistakeholder community, will ultimately adopt a majority of the GAC recommendations in some form as doing so is in its long-term institutional interest.
Overall, the receipt of the GAC communique and ICANN's solicitation of applicant and public comments on it marks an inflection point for the organization, and the manner in which it assimilates the advice and the responsive feedback will define its working relationships with governments through the end of the decade, and perhaps beyond. In their video interview at the conclusion of the Beijing meeting, Board Chairman Steve Crocker stated that the communique raised "interesting issues that have to be dealt with, and we'll be quite thorough about it," while CEO Fadi Chehade committed that action would be taken only following consideration of public comment from the "entire community" along with staff analysis. As it is not at all customary to subject GAC advice to direct public comment, this will be politically sensitive, complicated, and highly detailed work invoking multiple judgment calls.

New TLD Advice on Which ICANN Has Not Requested General Public Comment

The April 18 notice to new TLD applicants solicits feedback on every aspect of the GAC communique, with applicant responses to be provided to the full ICANN Board. However, it is not clear whether individual applicant responses will be made public. Should any applicant respond to the GAC by seeking to file a PICS — which raises the collateral question of whether ICANN will waive the previously expired deadline for PICS submissions — those filings are made public at the updated application status page of the new TLDs website.
GAC advice affecting new TLD strings on which applicant feedback is being explicitly solicited, but general public response is not, includes:

Targeted Advice

Advice against proceeding further on a specific application for .africa and one for .gcc, as well as on applications for .islam and .halal; and advice not to proceed beyond initial evaluation for two Chinese Internationalized Domain Name (IDN) strings (.shenzhen and .guangzhou) as well as the applications for .persiangulf, .amazon (and related IDNs in Japanese and Chinese), .patagonia, .date, .spa, .yun, .thai, .zulu, .wine, and .vin.

Written Briefing

The GAC's request for "a written briefing about the ability of an applicant to change the string applied for in order to address concerns raised by a GAC Member and to identify a mutually acceptable solution." Such a briefing should also be made publicly available, as it relates to the central question of whether, and to what extent, an applicant can amend its application to comply with a relevant GAC safeguard if it is adopted by ICANN — a critical issue for applicants and the general public alike.

Community Support

The GAC's view on community support for applications, in which it advises "that in those cases where a community, which is clearly impacted by a set of new TLD applications in contention, has expressed a collective and clear opinion on those applications, such opinion should be duly taken into account, together with all other relevant information." That seems elementary, yet it fails to resolve ongoing disputes about whether certain strings legitimately fall into the "community" category, as well as who can legitimately claim to speak for the impacted community.
Singulars Versus Plurals

The GAC's belief that "singular and plural versions of the string as a TLD could lead to potential consumer confusion" and the consequent advice that the Board should "Reconsider its decision to allow singular and plural versions of the same strings." This is a reaction to the February 26 decision of ICANN's string similarity panel that singulars and plurals of the same term did not create a probability of visual similarity confusion — a conclusion that many have categorized as clueless, and one likely to receive general public comment notwithstanding that it falls outside the "safeguards" category. At the Board-GAC interaction in Beijing, the Board advised the GAC that it would not second-guess the panel's conclusion and that "the ball is now in your [the GAC's] court." The GAC has now forcefully tossed the ball back to the Board. Some ICANN constituencies have already weighed in with the view that singular and plural versions of a string should be placed in the same contention set.

IGO Protections

Reiteration of prior advice that "appropriate preventative initial protection for the IGO [Intergovernmental Organizations] names and acronyms on the provided list be in place before any new TLDs would launch."

The RAA

Advice that "the 2013 Registrar Accreditation Agreement should be finalized before any new TLD contracts are approved," with the notation that "The GAC also strongly supports the amendment to the new TLD registry agreement that would require new TLD registry operators to use only those registrars that have signed the 2013 RAA."5

IOC/Red Cross Protections

Strong advice that ICANN should "amend the provisions in the new TLD Registry Agreement pertaining to the [International Olympic Committee/Red Cross-Red Crescent] IOC/RCRC names to confirm that the protections will be made permanent prior to the delegation of any new TLDs."
PICs

A request for "more information on the Public Interest Commitments Specifications [PICS] on the basis of the questions listed in annex II." These GAC-posed questions may become critical matters to be addressed, especially for applicants seeking strings in categories raising heightened GAC concerns, as well as for third parties concerned by those applications. The questions raised in Annex II are addressed later in this article.

Annex I – The GAC's Proposed Safeguards

Annex I of the communique addresses "Safeguards on New TLDs" with introductory advice that "The GAC considers that Safeguards should apply to broad categories of strings. For clarity, this means any application for a relevant string in the current or future rounds, in all languages applied for." The GAC is clearly stating that its advice should be interpreted and implemented broadly, not narrowly. This introduction further advises that all the proposed safeguards should "be implemented in a manner that is fully respectful of human rights and fundamental freedoms," "respect all substantive and procedural laws under the applicable jurisdictions," and "be operated in an open manner consistent with general principles of openness and nondiscrimination." None of that seems particularly objectionable, but even this hortatory language raises interpretative questions: what are the "applicable jurisdictions" for a particular string, and how should operation in an open manner be squared with later admonitions that domain registrations be circumscribed at strings related to regulated industries and professions?

Safeguards Applicable to All New TLDs

The first detailed section of the advice proposes that six specific safeguards be applicable to all TLDs and "be subject to contractual oversight" by ICANN.
At a minimum, to the extent that ICANN accepts any of this advice it will then need to review the existing new TLD Registry Agreement (RA) — already the subject of some controversy, especially in regard to whether ICANN should have some unilateral right to amend it — and determine whether further amendments are needed to incorporate any parts of the GAC advice that are adopted. As ICANN is not a governmental body and all of its powers over registries and registrars derive from contractual enforcement, this is no small matter. On April 29, ICANN published the Proposed Final New TLD Registry Agreement for public comment, open through June 11.6 Yet, except in the highly unlikely event that ICANN rejects all of the GAC's safeguards proposals, adoption of any of them would seem inevitably to require further amendment of the RA to spell out related, contractually enforceable registry obligations — with such further amendment triggering yet another period of public comment. Further, as the following analysis illustrates, the question for ICANN's Board is not just whether to accept a particular safeguard but how to implement it in a manner that is effective yet reasonable. Determining the right balance will take time.

Six Basic Safeguards

The GAC's proposed six basic safeguards are:

1. WHOIS Verification and Checks

Registry operators are to conduct statistically significant checks at least twice a year for false, inaccurate, and incomplete WHOIS registrant identification data, and to notify registrars of inaccurate or incomplete data. This appears to impose proactive oversight and enforcement duties that registry operators were probably not contemplating. It also implicates matters addressed by the just-released-for-comment final Registrar Accreditation Agreement, as well as ongoing discussions focused on increasing WHOIS registrant data accuracy. All of these approaches must ultimately be reconciled and coordinated.

2. Mitigating Abusive Activity

Registrant terms of use must "include prohibitions against the distribution of malware, operation of botnets, phishing, piracy, trademark or copyright infringement, fraudulent or deceptive practices, counterfeiting or otherwise engaging in activity contrary to applicable law." No one can be in favor of such activities, but that raises the questions of whether this imposes some affirmative oversight duty on registry operators, and what steps they should take to monitor compliance with and enforce such prohibitions. Also, in some instances whether a violation has occurred may not be discernible absent other adjudicative processes. Trademark infringement, for example, is already the subject of the UDRP and national laws. It will also be addressed by the two new rights protection mechanisms in new TLDs — the trademark clearinghouse and the uniform rapid suspension system — but all of these mechanisms require some judicial or expert determination of whether infringement has actually occurred. Digital copyright infringement is an evolving and muddled area of the law in which courts in the same nation have reached sharply divergent opinions on similar fact patterns. While some "piracy" may be evident from a cursory review of a website, other alleged instances invoke unsettled legal issues. Ultimately, the question is whether registry operators should wait for law enforcement authorities or adjudicative processes to verify legally actionable harm, or take their own initiatives to identify and halt it.

3. Security Checks

In a bow to law enforcement concerns, registry operators are to periodically conduct technical analyses of whether domains are being used to perpetrate security threats "such as pharming, phishing, malware, and botnets," all the while "respecting privacy and confidentiality." Such information is already available from various industry groups, with existing registry operators typically engaged in these initiatives.
In addition, the new TLD registry application process already includes security checks. Nonetheless, this could require registries to take on proactive, quasi-police cybersecurity inquiries. More disturbingly, where security risks posing "an actual risk of harm" are identified, registry operators must notify the relevant registrar; if the registrar fails to "take immediate action," the registry operator must "suspend the domain name until the matter is resolved." This recommendation is almost sure to be controversial, as domain suspensions are widely viewed as equivalent to internet censorship. The notion that private parties will do this of their own accord, absent any due process requirements and with no additional definition as to how or by whom the matter will ultimately be resolved, raises significant questions concerning registrant rights.

4. Documentation

Registry operators are to maintain statistical reports on inaccurate WHOIS records or security threats and provide them to ICANN on request. This advice does not seem particularly burdensome or controversial.

5. Making and Handling Complaints

Registry operators must have a mechanism for other parties to submit complaints about domains with inaccurate WHOIS information or domains being used to facilitate bad acts. This safeguard, motivated by growing concerns about cybercrime, fraud, and abuse, is not particularly burdensome either. But questions remain unanswered: What is the registry operator's duty to further investigate such complaints, and what action should be taken if it finds them well founded? Will ICANN's compliance staff have an intermediary role in this area?

6. Consequences

Registry operators must, "consistent with applicable law" — to the extent it exists or is clear — "ensure that there are real and immediate consequences" for domains with false WHOIS data or domains being used in breach of "applicable law," and "these consequences should include suspension of the domain name."
Domain suspension, as was seen during the PIPA/SOPA debate, is viewed by many as synonymous with internet censorship, and the requirement that registry operators assume policing oversight powers may well generate substantial controversy. The requirement may also trigger discussion of the existence and adequacy of due process protections and a defined appeals process for affected registrants.
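The escalation path described in safeguard 3 — registry notifies registrar, and only upon registrar inaction does the registry itself suspend the name — can be sketched in a few lines. This is purely an illustration of the decision flow, not anything drawn from the communique or an actual registry system; the `Domain` class and the `registrar_acts` callback are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    status: str = "active"  # a suspended name would typically carry EPP serverHold

def escalate_security_threat(domain: Domain, registrar_acts) -> str:
    """Sketch of the GAC's safeguard-3 escalation path.

    The registry first notifies the sponsoring registrar of the
    identified threat; if the registrar fails to "take immediate
    action," the registry suspends the domain itself.
    """
    # Step 1: notify the registrar (modeled here as a callback that
    # reports whether the registrar remediated the threat).
    if registrar_acts(domain):
        return domain.status  # registrar handled it; name stays active
    # Step 2: registrar inaction -> registry suspends "until the matter
    # is resolved" (unresolved by the advice: how, and decided by whom).
    domain.status = "suspended"
    return domain.status
```

Against a cooperative registrar the name stays active; against an unresponsive one it is suspended — which is exactly where the due process concerns above arise, since nothing in the advice defines how or by whom "resolved" is determined.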
In sum, the six basic safeguards call for various oversight and investigative responsibilities that many registry operators may not have contemplated when they constructed their business plans.
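The first safeguard's call for "statistically significant checks" at least twice a year does hint at a concrete, bounded exercise. As a rough illustration of the scale involved — using the standard Cochran sample-size formula with a finite-population correction, and a hypothetical `looks_accurate` predicate standing in for whatever validation a registry would actually perform — such an audit could be sized and run as follows:

```python
import math
import random

def audit_sample_size(population: int, z: float = 1.96,
                      margin: float = 0.05, p: float = 0.5) -> int:
    """Cochran's sample-size formula (95% confidence, 5% margin of
    error by default), with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def run_whois_audit(records, looks_accurate):
    """Draw a statistically significant random sample of registrations
    and return those failing the accuracy predicate, for forwarding
    to the sponsoring registrars."""
    n = audit_sample_size(len(records))
    sample = random.sample(records, min(n, len(records)))
    return [r for r in sample if not looks_accurate(r)]
```

For a registry of one million names, roughly 385 randomly chosen records per check would suffice at those confidence levels — a modest burden in itself. The real cost lies in the accuracy predicate: verifying registrant identification data is far harder than sampling it.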
To some extent, these recommendations may be an attempt by fiscally strapped governments to shift the costs of policing and subduing negative externalities resulting from new TLDs onto registry operators, minimizing the need to allocate substantial new public sector resources to law enforcement and cybersecurity.

Additional Safeguards for Particular Categories of New TLDs

Beyond those six basic safeguards recommended for all new TLDs, the GAC prescribes additional safeguards for strings related to regulated or professional sectors for which end users generally anticipate targeted protections. The communique states:

Strings that are linked to regulated or professional sectors should operate in a way that is consistent with applicable laws. These strings are likely to invoke a level of implied trust from consumers, and carry higher levels of risk associated with consumer harm.

The dozen sectors identified by the GAC for application of these additional safeguards, accompanied in the communique by a non-exhaustive list of TLD applications asserted to fall within them, are: children,
One may certainly question why certain TLD applications made the GAC's non-exclusive list or have been placed in particular categories. For example, .free, .gratis, .discount and .sale are all placed in the intellectual property category even though they might attract domains with no relationship to goods and services of a primarily IP nature. And .law is given its own separate listing rather than being placed in the professional services category along with .abogado, .attorney, .lawyer and .legal. But, for the present purpose of this analysis, all the specifically listed applications are potentially subject to the additional safeguards depending on follow-up ICANN action. Other applicants with any possible relationship to the identified sectors should presume that they may be similarly affected before this process concludes. Those applicants, along with parties with concerns about or opposed to specific strings, should thoroughly review this advice.

Proposed Additional Safeguards for Regulated, Professional Sectors

The additional safeguards proposed for regulated and professional sectors — accompanied by some observations — are:

1. Applicable Use Policies

Registry operators will include in their acceptable use policies a requirement that registrants comply with all applicable laws, including those that relate to privacy, data collection, consumer protection, fair lending, debt collection, organic farming, disclosure of data, and financial disclosures. It seems axiomatic that registrants must be in compliance with applicable laws of all types. However, the questions raised again by such general use policies are to what extent a registry operator will be expected to proactively police and directly enforce them, and what the applicable laws are for a particular domain registrant.
What is a registry operator expected to do, for example, if a registrant is accused of operating in violation of a particular nation's laws and the registrant responds that, under applicable principles for determining jurisdiction, it is not subject to those laws? These are roles and decisions that have traditionally been delegated to law enforcers, regulators, and judicial forums, not to private parties lacking adjudicative expertise under contract to a nonprofit corporation.

2. Notifications

Registry operators will require registrars at the time of registration to notify registrants of this requirement. This is a relatively straightforward requirement to implement, although it will require registrars to identify and separate out affected TLDs and provide additional disclosures at or in close proximity to the time of domain registration. It also highlights the fact that it is registrars, not the registry operators of new TLDs, who have direct contact and contractual relations with registrants. To the extent that registrars of particular TLDs are tasked with going beyond offering a simple domain purchase interface to registrants, and must provide and obtain acceptance of particular disclosures — much less ascertain that registrants satisfy relevant registration eligibility criteria — this will both complicate the domain registration process and generate costs that must be reflected in compensation arrangements with the registry operator as well as in the prices charged to registrants.
The only exception to the registrar standing as a separate intermediary between the registry operator and the registrant will be those instances in which the registry operator has directly affiliated with a registrar, now that ICANN has relaxed the former prohibition against such relationships — although, even then, for all but ".brand" or whatever other "closed generic" TLDs are permitted, there will likely be many unaffiliated registrars offering identical domain registration and renewal services for the TLD.

3. Security for Sensitive Data

Registry operators will require that registrants who collect and maintain sensitive health and financial data implement reasonable and appropriate security measures commensurate with the offering of those services, as defined by applicable law and recognized industry standards. While clearly having direct bearing on registrants at strings falling within the health and fitness and financial categories, this safeguard may also implicate others — for example, at such professional services strings as .accountant(s), .doctor, and .realtor, where registrants will likely collect and maintain confidential health and financial data. Again, the more difficult issues are what "reasonable and appropriate security measures" registrants should implement to safeguard such data, what monitoring and enforcement duties are expected of registry operators to assure compliance, and what constitutes the "applicable law and recognized industry standards" that should be looked to in establishing relevant security measures. The proper standards for protection and disclosure of sensitive digital data remain among the most hotly debated matters of 21st century cyberlaw and policy, with sharp disagreements between governments and with and within affected industries — yet registry operators are being asked to require the implementation of responsive security measures by their registrants.

4. Working Relationships
Establish a working relationship with the relevant regulatory, or industry self-regulatory, bodies, including developing a strategy to mitigate as much as possible the risks of fraudulent, and other illegal, activities. For registry operators of TLDs falling within the listed sectors this would require the ongoing, perpetual maintenance of a "working relationship" — but with whom? As one example, with what financial regulatory authorities and industry self-regulatory bodies, located in which nations, must the operator of .retirement establish a working relationship? Is it to be based upon the nations to which .retirement registrants direct their activities, or must it involve global outreach so that any potential future registrant and its customers will be accommodated by an already existent working relationship? And what would comprise an effective strategy to mitigate potential fraud or other illegal activities by registrants — would this require proactive engagement, monitoring, and enforcement by registry operators, who may well be asked by regulators to establish such frontline risk mitigation activities? Overall, this safeguard must be read in conjunction with the others, with the expectation that regulators will likely seek proactive registry operator involvement in the development and implementation of risk mitigation strategies. Further, registry operators must take into account that a TLD is a global DNS resource. A registrant eligibility policy or regulatory engagement approach focused too narrowly on a specific nation or region may well, and rightly, be criticized by potential registrants, consumer groups, and other public and private sector entities.

5. Single Point of Contact
Registrants must be required by the registry operator to provide a single, up-to-date point of contact for the notification of complaints or reports of registration abuse, as well as the contact details of the relevant regulatory, or industry self-regulatory, bodies in their main place of business. Single points of contact are already standard practice for ISPs and web hosting companies. This safeguard again places a duty upon registry operators to obtain information from registrants with whom they otherwise likely have no direct dealings or contractual relationship. While the actual information that must be obtained — the unitary contact point for urgent notifications of reported abuse at a website — is relatively simple, the question again arises whether the registry operator has a duty to validate this data on an initial or continuing basis. Further, since this safeguard relies on the registrant to designate the contact details for what it claims to be its relevant regulatory and industry self-regulatory bodies in its main place of business, is there any duty for the registry operator to investigate whether the registrant has accurately done so? And does "main place of business" cover just the jurisdiction in which the registrant is domiciled — or all the additional jurisdictions in which it conducts, or may seek to conduct, substantial volumes of business with customers (e.g., a Bahamas-based .insurance registrant soliciting and conducting business in the U.S., E.U., and certain Latin American nations)?

Miscellaneous "Gripe Site" Registry Advice

In related GAC advice, applicants for the .fail, .gripe, .sucks, and .wtf TLDs are singled out to "develop clear policies and processes to minimize the risk of cyber bullying/harassment." Such "criticism" TLDs could be particularly susceptible to such abuses — though those abuses already occur today, often centered in "walled garden" social media platforms.
Further Targeted Safeguards

In addition to the six basic safeguards and the five additional ones for regulated and professional sectors, the GAC has also prescribed three further safeguards for at least seven of the twelve sectors listed above — financial, gambling, professional services, environmental, health and fitness, corporate identifiers, and charity. These additional safeguards are aimed at "market sectors which have clear and/or regulated entry requirements in multiple jurisdictions," and are applicable to some of the strings in the listed sectors — although the GAC provides no guidance as to which strings might be exempt, or on the basis of what criteria exemptions might be granted or denied. These further targeted safeguards consist of:

1. Added Checks

At the time of registration, the registry operator must verify and validate the registrants' authorizations, charters, licenses, and/or other related credentials for participation in that sector. This verification and validation duty is placed on the registry operator, rather than on the registrar who interfaces with the registrant at the time of registration. While the registry operator might prefer to delegate such responsibilities to registrars with which it has established business relationships, doing so as a thousand-plus diverse TLDs launch could prove infeasible. Thus, there are questions of how such a process would be coordinated and of the status of a registrant's registration until the verification/validation duty is completed. It clearly places a significant new responsibility on registry operators — although one already managed by many ccTLD operators — that will entail the use of in-house or outside compliance counsel and staff.

2. Consultations With Regulators

In case of doubt with regard to the authenticity of licenses or credentials, registry operators should consult with relevant national supervisory authorities, or their equivalents.
This would require each registry operator to develop policies relating to how the authenticity of credentials will be evaluated, as well as establish relationships with relevant supervisory authorities in all nations in which registrants may be domiciled or otherwise have significant jurisdictional contacts. Again, this creates significant new compliance responsibilities likely to require increased staffing by both registries and ICANN.

3. Post-Registration Checks

The registry operator must conduct periodic post-registration checks to ensure registrants' validity and compliance with the above requirements, in order to ensure they continue to conform to appropriate regulations and licensing requirements and generally conduct their activities in the interests of the consumers they serve. This would place a continuing, post-registration duty on registry operators not just to confirm the regulatory compliance and licensing validity of registrants but to make a subjective judgment on whether they are conducting their activities in consumers' interests. This raises the issue of whether it is reasonable and appropriate to place such subjective judgment responsibilities on what are primarily providers of technical DNS services. On the other hand, TLDs aiming to serve specialized communities associated with regulatory and licensing requirements may wish to accept this GAC advice and address it via responsive PICs as well as cooperative engagement with ICANN compliance staff to develop reasonable yet effective enforcement mechanisms.

Restricted Registration Policies — Limited or Exclusive Strings

In addition to the above proposed safeguards, the GAC provided advice regarding restricted or exclusive access to strings.
First, as "an exception to the general rule that the TLD domain name space is operated in an open manner[,] registration may be restricted," with such restrictions being particularly applicable for strings subject to the extra safeguards for regulated and professional sectors — especially those with entry requirements. However, the GAC advises that such registration restrictions be administered by registry operators "in a transparent way that does not give an undue preference to any registrars or registrants, including itself, and shall not subject registrars or registrants to an undue disadvantage." In other words, registrant entry can be restricted, but the restrictions must be geared to the relevant risks associated with the TLD. The restrictions must also be transparent and neutral under the subjective standard of not providing an "undue preference [or] disadvantage." What this means in practice will likely be a subject of some debate, and it certainly provides an opening for any party who believes that a TLD's proposed registration restrictions seek to advance goals other than legal/regulatory compliance and consumer protection — such as granting an undue competitive advantage to a subset of potential registrants, or advancing policy goals within the TLD program that more properly should fall to legislators or regulators.

The second and final piece of GAC advice in Annex I addresses the controversial subject of "closed generic" TLDs, for which ICANN recently conducted a public comment period that attracted one of the largest numbers of comments in recent years.7 That extensive public feedback has so far resulted in no formally announced ICANN policy or position. Amazon, Google, and other business applicants from both the United States and abroad have applied for generic word domains in which they hold no trademark rights yet for which they have proposed to be the sole registrant.
Critics of "closed generic" TLDs have charged that they are fundamentally incompatible with the new TLD program's stated goal of fostering innovation and competition. Google, for one, has responded to such criticism by proposing significant alterations to four of its most controversial applications. On this hot-button subject, the GAC simply states, "For strings representing generic terms, exclusive registry access should serve a public interest goal." That statement is followed by a non-exhaustive list of strings identified by the GAC as constituting generic terms.

Registry Operator Code of Conduct

This appears to be one bit of GAC advice that ICANN may have already taken into account. The revised RA released by ICANN on April 29 proposes to strike the phrase "that are reasonably necessary for the management, operations and purpose of the TLD" from Section 1b of Specification 9, otherwise known as the "REGISTRY OPERATOR CODE OF CONDUCT" (COC). The proposed change would replace the provision with an authorization for the registry operator to allocate up to 100 domain names for its own exclusive use. The deleted phrase constituted the prior parameters of the exception to the general rule that a registry operator will not register domain names in its own right — and some closed generic applicants had argued that the word "purpose" permitted avoidance of seeking a sole-registrant exemption under Section 6 of the COC. Presuming that the deletion carries through the public comment and Board approval process for the revised RA, it would seem that closed generic applicants may now have no way to avoid seeking a formal exemption from ICANN. ICANN staff provided no comprehensive explanation of the intended purpose of these proposed amendments to the evolving contractual documents, so there may well be parties who interpret this alteration differently.
The exemption language of Section 6 remains unchanged in the revised RA, and allows ICANN to grant an exemption in its "reasonable discretion" if a registry operator demonstrates to ICANN's reasonable satisfaction that:
Thus, the GAC's admonition that closed generics must "serve a public interest goal" dovetails well with the Section 6 requirement that ICANN must determine that permitting closed generic operation is not adverse to the public interest — if all TLDs that propose to have the registry operator as sole registrant are indeed required to affirmatively seek an exemption. The matter is not fully settled, as ICANN must still determine general principles for deciding when application of the code of conduct is not necessary to protect the public interest. ICANN must then apply those principles on a case-by-case basis to those proposed closed registries that can still muster a convincing rationale for exemption. It is quite possible that ICANN might find a public purpose in protecting trademarks at the top level of the DNS for non-generic, trademarked-term ".brand" TLD applications.

The revised RA contains multiple, extensive additional revisions beyond the code of conduct changes that may also be highly controversial. For example, on May 1 VeriSign Inc. filed an aggressive comment letter on the registry agreement,8 complaining that:

ICANN has broadened its unilateral amendment rights even further under a new and never before disclosed Section 7.7 which permits ICANN to make changes to the registry agreement on subjects that even the consensus policies are explicitly prohibited from considering — and beyond ... Under its bylaws, ICANN is to serve the Internet community based on bottom-up, transparent decision making. Sections 7.6 and 7.7 are the antithesis of ICANN's core values. They should not become part of registry agreements. The Governmental Advisory Committee and Commerce Dept. should rein in any such unprecedented expansion of ICANN's powers. In the Affirmation of Commitments, the DOC affirms its commitment to a private sector led, bottom-up policy development process. Sections 7.6-7.7 seek the opposite.
As one example of what VeriSign purports ICANN could do unilaterally, "without governmental oversight and over the objections of registry operators," the letter states that: ICANN unilaterally determines that no new TLDs should be operated in a closed manner and amends the agreement to require all TLDs to be open, endangering established registry business model. However, as discussed, governments represented on the GAC have already given consensus advice that closed registries must further public interest goals — and many parties who filed public comments on "closed generics" wanted ICANN to ban them outright. Regardless of the final provisions of the RA relevant to closed generics, the GAC's position is now clear — a string in which the registry operator is the only permissible registrant must serve a public interest goal. As for the overall RA, the new TLD program cannot go forward until all remaining disputes are resolved and it is made final, as there must be a standard contract document for registry operators to sign before their new TLDs can go forward. Annex II – The GAC's PICs Questions As noted earlier in this article, in the main body of the communique the GAC requests additional information on eight PICs-related questions contained in Annex II. These questions relate to such matters as:
While PICs were originally put on the table as an optional means for applicants to demonstrate their commitment to and recognition of responsibility to operate a particular TLD in a beneficial and non-abusive manner, many applicants did not file them because the self-imposed obligations result in no offsetting application award benefit. The new TLD program rules encourage applicants for the same string in contention sets to resolve matters among themselves. Failing that, contention sets will be settled by auction, where the highest bid settles matters irrespective of PICs or other qualitative applicant commitments. Now the GAC communique may well be pushing PICs toward the status of mandatory and enforceable guarantees. Indeed, a few months ago the United States suggested that all TLD applicants should submit PICs — especially for categories of strings for which the GAC has requested additional safeguards. If that is the case, then ICANN will eventually need to reopen the PICs submission window. Once filed, PICs are made available for public inspection — although not direct public comment — at the new TLD current application status page.9 Enforcement of Accepted GAC Advice ICANN's Board consideration of the GAC communique is now clearly underway. The process raises threshold questions of whether and how various categories of GAC recommendations will be accepted, as well as multiple subsidiary issues of consideration of public comments, modification and implementation. While we don't yet know which elements of the GAC advice will be accepted by ICANN, or with what modifications or implementation details, the realpolitik of the current situation appears to dictate that a substantial number will find their way into the final requirements for the first round of new TLDs. That raises the question of how the safeguards and other accepted elements of GAC advice can be implemented in a manner that is "subject to contractual oversight by ICANN." 
The standard approach would be to amend the RA so that the requirements for all similarly situated registry operators are uniform. But that could well require substantial additional delay in the new TLD program — first to draft concrete expressions of broad and subjective requirements and devise appropriate enforcement criteria, and then to republish the amended RA for further public comment. The apparent controversy being generated by the April 29 RA revision drives home the possibility of extended delay. The alternative approach would be to reopen the PICs window and require all applicants to submit initial or revised PICs that address the GAC's safeguards and other accepted advice. But that would place an enormous review and feedback/revision burden on ICANN staff, as well as result in significantly disparate approaches and commitments from applicants seeking to operate in the same sector categories. If consumer protection and harm mitigation are the main goals, then a uniform approach through RA modification would seem the best route to assuring consistent implementation of safeguards. Realpolitik 101: Substantial Portions of the GAC Communique Will Be Accepted and Implemented Critics of the Beijing GAC communique may well assert that it comes two years too late, imposes inappropriate and vague burdens on registry operators that negatively impact their business models, gives governments an inappropriately enhanced role in ICANN's multistakeholder process, offloads governmental responsibilities onto the private sector, and will cause further delay in the new TLD program, among other complaints. While there is some justification for those assertions, they are also beside the point. 
ICANN is a unique and inherently fragile entity — a standalone nonprofit corporation imbued with authority to manage the addressing system of the most powerful global telecommunications network ever devised, dealing with issues that routinely intrude on legal and policy decisions normally the province of national governments or multinational organizations.
While freed of formal U.S. oversight in 2009, ICANN lacks the mass and velocity to escape governmental oversight of some type. Further, with ICANN no longer under the clear protective wing of a superpower, it must forge a rapprochement with the multi-governmental GAC to assure long-term viability.
The Beijing communique can be regarded as the completion of a four-year governmental journey within ICANN since the termination of formal U.S. oversight and its replacement by the Affirmation of Commitments (AOC). There should be no surprise that it took so long — governments are by nature reactive and risk-averse entities, and the scale of the TLD program and the unexpected issues that developed added to the response time. GAC members arrived early in Beijing and labored long hours over the course of an entire week to produce the communique. In a way, that commitment of time and effort, and the delivery and content of the document, signaled a broad multi-governmental embrace of the ICANN model and of the new TLD program. Imagine if, instead of proposing safeguards, the GAC had announced that the perceived threats to consumer protection, intellectual property, online competition and innovation, DNS stability and security, and other potential negatives generated by the program simply outweighed the potential benefits — and that therefore it should be halted. ICANN and applicants would now be in a crisis state if that had occurred. If ICANN were now to reject the bulk of the GAC safeguards and other recommendations there might be no immediate dire consequences. What there likely would be is a collective decision by many governments that ICANN involvement is not worth the time and expense, and a drifting away of government involvement. If, on the other hand, ICANN now adopts, with reasonable modifications, the bulk of the GAC advice it will provide the feedback that participating governments need to justify continued engagement — as well as to defend ICANN's model within other forums. 
Continued Threats From ITU The threat to ICANN's role and existence is far from dissipated — the International Telecommunication Union (ITU) will hold its World Telecommunication Policy Forum (WTPF) in Geneva this month, and the UN Internet Governance Forum is preparing for its next meeting in Bali, Indonesia. ICANN must continue to befriend governments, not alienate them. A general embrace of the GAC communique can help ensure ICANN's long-term support from governments and thereby its survival — and, as for most organizations, self-preservation is a high priority. The survival of ICANN, whatever its flaws, is also better for business, civil society, and other constituencies than ICANN's replacement by a DNS manager in which governments have control rather than just substantial influence. The GAC communique and responsive requests for comments provide an opportunity for everyone involved in ICANN and every interest affected by the new TLD program to submit final input on its proposed framework for the launch of new TLDs. Yes, it will likely cause some delay; and yes, it will impose unanticipated duties and responsibilities on all registry operators, particularly those seeking to operate strings related to sensitive sectors. But it also provides a blueprint for the means by which ICANN and registry operators can work cooperatively with the global public sector in decades to come.
i http://www.icann.org/en/news/announcements/announcement-2-10may13-en.htm
Copyright © 2013 by The Bureau of National Affairs, Inc. Reproduced [or Adapted] with permission from Electronic Commerce & Law Report, Vol. 18, No. 20 (May 7, 2013). Copyright 2013 The Bureau of National Affairs, Inc. (800-372-1033) www.bna.com. Written by Philip S Corwin, Founding Principal of Virtualaw LLC, a Washington, DC Law and Public Policy Firm Follow CircleID on Twitter More under: DNS, Domain Names, ICANN, Internet Governance, Law, Policy & Regulation, Top-Level Domains Categories: Net coverage
What New gTLD Applicants Need Is a Quick, Lightweight Answer to the World's Governments. Here It Is.It's safe to say that with just a week to go before ICANN intended to sign the first contract for a new gTLD, the last thing anyone wanted was a 12-page document from the world's governments with 16 new "safeguards", six of which it wants to see applied to every new extension. But what the industry shouldn't overlook, especially in the face of the expected critical responses this week and next, is that the Governmental Advisory Committee's (GAC's) formal advice from the ICANN Beijing meeting represents an opportunity for the domain name industry to lock in self-regulation at a critical point in its evolution. IFFOR has been focused for some time on the question of what registries will need to do in a world where domain names can end in any word. As such, we see the GAC advice as a simple reflection of genuine, and understandable, concerns from a body whose main job is to identify public policy issues. It is also nothing new: IFFOR went through this exact process to find policy solutions to questions raised by GAC over the dot-xxx top-level domain. Many of the same issues are present in this most recent advice — something we highlighted at the beginning of the year. So here is the good news: it is perfectly possible to find a simple, effective and lightweight solution that will meet the concerns of governments — including that it be contractually binding — while keeping ICANN firmly out of content regulation. It is also possible to do it right now without compromising business plans, redrawing financial projections, or seeking hundreds of thousands of dollars in new investment. So what is this solution? As part of the process for reaching agreement with both ICANN and the GAC over the dot-xxx top-level domain, a set of "baseline policies" was created (by IFFOR) to demonstrate a clear commitment to resolving concerns. Those baseline policies covered issues such as:
The implementation of those policies was then left up to the registry operator — ICM Registry — and IFFOR was also given the role of auditing the subsequent systems. In response to the GAC advice in Beijing, IFFOR is close to completing a new set of "Safeguard Policies" designed specifically to encompass the six broadest safeguards that the GAC wishes to see apply to all new gTLDs. In so doing, we have drawn on our original "baseline policies" to develop policies for the gTLD market as a whole, and have used our experience as a registry policy body to ensure all six GAC safeguards are fully addressed. In an effort to make this work as widely accessible as possible, we plan to simply license these policies for a low annual fee. As well as the right to use, publish and reference the Safeguard Policies, each license will come complete with documentation to help registries implement each policy in the way most suited to their circumstances. We will also extend IFFOR's internal information service that provides ongoing information on related policy and regulatory topics to all licensees. Again, for one low annual fee. We believe this approach solves a number of issues:
Perhaps most importantly, adopting such an approach will give the industry a chance to demonstrate that it is committed to be a good actor while retaining the flexibility to develop the right systems for the right markets in the right way. The mark of a self-regulated market is how well it responds to issues identified by a third party. With the right mix of creative pragmatism, the GAC safeguard advice can act as a catalyst for this industry. If you are interested in learning more about IFFOR's Safeguard Policies, please visit our website at http://iffor.org/safeguard. Written by Kieren McCarthy, Executive Director at IFFOR; CEO at .Nxt Follow CircleID on Twitter More under: DNS, Domain Names, Registry Services, ICANN, Internet Governance, Regional Registries, Top-Level Domains, Whois Categories: Net coverage
Bloomberg on Netflix as World's Biggest User of Cloud ComputingNetflix is arguably one of the world's biggest users of cloud computing, renting all its computing power from Amazon Web Services, the cloud division of Amazon.com, which runs its own video-streaming service that competes with Netflix. Ashlee Vance from Bloomberg reports: "Netflix has more than 36 million subscribers. They watch about 4 billion hours of programs every quarter on more than 1,000 different devices. To meet this demand, the company uses specialized video servers scattered around the world. When a subscriber clicks on a movie to stream, Netflix determines within a split second which server containing that movie is closest to the user, then picks from dozens of versions of the video file, depending on the device the viewer is using. At company headquarters in Los Gatos, Calif., teams of mathematicians and designers study what people watch and build algorithms and interfaces to present them with the collection of videos that will keep them watching." Follow CircleID on Twitter More under: Cloud Computing, Data Center Categories: Net coverage
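The selection logic Vance describes (nearest server first, then a file variant suited to the viewer's device) can be sketched roughly as follows. This is a toy illustration, not Netflix's actual code or data; all server names, latencies, and bitrates are made up for the example.

```python
# Toy sketch of two-stage stream selection: pick the server closest to
# the viewer, then the best encoding the viewer's device can play.
# All names and numbers below are illustrative, not Netflix's real data.

def pick_server(servers, user_region):
    """Choose the server with the lowest latency to the user's region."""
    return min(servers, key=lambda s: s["latency_ms"][user_region])

def pick_variant(variants, device):
    """Choose the highest-bitrate encoding the device supports."""
    supported = [v for v in variants if device in v["devices"]]
    return max(supported, key=lambda v: v["bitrate_kbps"])

servers = [
    {"name": "us-east", "latency_ms": {"NY": 12, "CA": 70}},
    {"name": "us-west", "latency_ms": {"NY": 68, "CA": 9}},
]
variants = [
    {"bitrate_kbps": 1500, "devices": {"phone", "tv"}},
    {"bitrate_kbps": 5800, "devices": {"tv"}},
]

print(pick_server(servers, "CA")["name"])               # us-west
print(pick_variant(variants, "phone")["bitrate_kbps"])  # 1500
```

In practice the real system weighs far more than latency and bitrate, but the two-stage shape (locate content, then match the encoding to the client) is the point of the description.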
Hundreds of Syrian Domains Seized As a Result of Trade Sanctions"In apparent observation of international trade sanctions against Syria, a U.S. firm that ranks as the world's fourth-largest domain name registrar has seized hundreds of domains belonging to various Syrian entities, including a prominent Syrian hacker group and sites associated with the regime of Syrian President Bashar al-Assad," reports Brian Krebs. "The apparently coordinated action ended with each of the site's registration records being changed to include Web.com's Florida address, as well as the notation 'OFAC Holding'." Follow CircleID on Twitter More under: Domain Names Categories: Net coverage
America Closing Down Its Copper Network - So What's Next?We have reported in the past on the rapid decline of the copper telecoms network in the USA. A decade ago BuddeComm predicted that it would be impossible to move two customer access networks in parallel towards the new fibre future, the one operated by the telcos and the other operated by the cable companies. At that stage we indicated that a possible outcome could be that the telcos would upgrade their networks to FttH and that the cable companies would become the key tenants on that network. This, however, turned out not to be the case. The telcos were late moving into broadband, while the cablecos embraced these new opportunities and rapidly obtained a 50%+ share in the broadband market. For a long time the market anticipated that the telcos would fight back and regain their share: this never happened and the cablecos were able to extend their lead further. With 90% cable penetration in the country they had a captive market. Cablecos have also made considerable investments in network upgrades since 1996, including the rebuilding of around 1.6 million kilometres of cable plant. The vast majority of this infrastructure uses DOCSIS3.0 technology, which is far superior to the DSL products which telcos offer. The latest cable upgrade to DOCSIS3.1 promises a significant enhancement, which should be a great concern to telcos which, having failed to invest in FttH networks, are unable to compete with the technical ability of cable networks. Last year the telcos declared defeat and indicated that they would start closing down parts of the PSTN. Interestingly, these developments align with the discussions I had over the last few years with the newly nominated FCC chairman Tom Wheeler. He is also on the public record on this issue, believing that the PSTN would end its life around 2018 and that the cable companies would become the key broadband providers. 
Of course, with his extensive background in the mobile industry he also sees a golden future for mobile communications, since these players would start taking over large parts of the PSTN, especially for telephony services. One of the most serious problems that the telcos are facing is the escalating cost of maintaining copper plant — this is estimated to increase from $2.72 per line in 2007 to $17.50 by 2018. This rapid rise is a combination of real cost increases, partly because of the aging network and partly because telcos are actively reducing the number of users, so the cost has to be shared among fewer customers. Another reason for the rapid increase is that maintenance has been deferred for decades. Clearly the telcos are not closing down all of the PSTN willy-nilly. They do have good quality infrastructure that can deliver quality DSL services, and they will milk that infrastructure for as long as possible. They will specifically target areas where it is relatively cheap to maintain the copper network. The main casualty here will be areas of rural America, where maintenance costs are higher and where there are relatively few competing cablecos operating. As a result, many of these telcos' customers will only have mobile networks to access both voice and data services. Another, perhaps even more serious issue — and one that the new FCC chair will have to face — is the rapid monopolization of the fixed broadband sector, with one cableco being the sole provider. These companies operate within franchises, so there is no competition between them. Currently there are no policies in place that regulate this situation, and with the American plutocracy in full force it will be interesting to see if any action will (or can) be taken by the FCC to rein in this emerging monopoly. 
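The per-line arithmetic behind that escalation is simple: a largely fixed maintenance cost divided among a shrinking subscriber base rises sharply per line, even before real cost increases are factored in. A back-of-the-envelope illustration, using invented figures rather than the article's actual cost model:

```python
# Toy illustration of why per-line copper maintenance cost climbs as
# subscribers leave: a roughly fixed plant cost shared by fewer lines.
# The dollar amounts and line counts here are invented for the example.

def cost_per_line(fixed_annual_cost, lines):
    """Fixed network maintenance cost spread evenly across active lines."""
    return fixed_annual_cost / lines

base = cost_per_line(27_200_000, 10_000_000)   # 10M lines on the plant
shrunk = cost_per_line(27_200_000, 2_000_000)  # same plant, 2M lines left

print(round(base, 2))    # 2.72
print(round(shrunk, 2))  # 13.6
```

So a five-fold drop in lines alone quintuples the per-line cost; add aging plant and deferred maintenance and the jump from $2.72 to $17.50 becomes plausible.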
In the meantime the telcos are also under attack from companies such as Google: Google alone has reshaped the landscape, having invested in FttH networks with great success. These companies' high take-up rate is worrying both the telcos and the cable companies, who all charge exorbitantly high prices for services similar to the ones that Google now offers at close to half the price. They are increasingly working with municipalities around the country, many of whom either operate FttH networks or would like to do so but are blocked by court rulings forced upon them through the lobbying of vested telco and cable interests. Based on the strong American conviction that government (including local government) should not be involved in telecoms infrastructure, they get away with it. Increasingly however, citizens are asking why local councils can be involved in electricity infrastructure but not in telecoms infrastructure. There is a growing political groundswell that is providing municipalities with greater freedom to be involved in such infrastructure developments. This could become a turning point in the American telecoms industry. Potentially it could also see the telcos returning to the market rather than retreating from it, this time starting from scratch by building new fibre infrastructure. Written by Paul Budde, Managing Director of Paul Budde Communication Follow CircleID on Twitter More under: Access Providers, Broadband, Telecom Categories: Net coverage
New gTLDs: Money MakerThere are fascinating ideas about how, when and, above all, who is going to earn money from new gTLDs. I think back-end registry providers will earn money, and some applicants will earn money too, but my experience launching Eurid, the registry for .EU, reminded me of one thing: the days just before launch, up until the (first) Sunrise period opens, are special. They are special because the entire team is prepared, has been trained, and knows what to do.
So why were these days special and what does it have to do with earning money?
I remember those days well, because I was working on support to Registrants. Eurid was special because we could not launch until we had at least one accredited Registrar from every country in the European Union, and we had to be able to provide support in most languages. Support to Registrants I talk to a lot of applicants and I like to ask a very basic question about support: "what do you do before the Sunrise period when the phone starts to ring?" I am not sure all applicants are considering this question. I think they should, and here is why. Domain names are such a boring thing that many brand owners don't want to consider them important until the last minute, and guess which two questions your support team is going to have to answer most of the time when the phone won't stop ringing before launch: 1 - "how do I register my domain name?";
When it becomes important for a brand owner, he calls the Registry, not necessarily his Registrar. This is where the support to Registrants (at the Registry and at the Registrar) is going to work for the Trademark Clearinghouse. In the ICANN new gTLD program, new Registries and all existing Registrars are becoming the support arm of the Trademark Clearinghouse, which is going to take their clients' money. Good deal, isn't it? There should be more than a thousand accredited Registrars, hundreds of Registries, and many IP providers becoming the sales force of the Trademark Clearinghouse. In the past, a Registrant did not have to go through this process. With new gTLDs, he is forced to if he wants to "protect" his brand and register in the Trademark Clearinghouse. So yes, the Trademark Clearinghouse is also a monopoly in the new gTLD program, and I think it will be one of the organizations to earn A LOT of money. Written by Jean Guillon, New generic Top-Level Domain specialist Follow CircleID on Twitter More under: Top-Level Domains Categories: Net coverage