Intelligence Exchange in a Free Market Economy

Speaking on behalf of myself.

Dear U.S. Government: I was reading some interesting articles the past few weeks:
http://www.reuters.com/article/2013/05/15/...
and with the understanding that:
The USG is doing a huge disservice to protection and defense in the private sector (80%+ of CIKR [1]) by creating an ECS that contains a monetary incentive for a few large players to exert undue control over the availability, distribution, and cost of security threat indicators. While there may be a legitimate need for the federal government to share classified indicators with entities protecting critical infrastructure, the over-classification of indicator data is a widely recognized issue that presents real problems for the private sector. ECS as currently construed creates monetary incentives for continued or even expanded over-classification. The perception of a paid broker-dealer relationship with the USG sets a very unsettling precedent. Private citizens are already concerned about the relationship between the intelligence community and the private sector, and these types of stories do very little to help clear the FUD. Compounded with the lack of transparency about what constitutes classified data, how it protects us, and the relationship agreement between the entities sharing the data, this type of program could do much more economic harm than good. Many private sector orgs have indicators that the USG would find useful, but have given up trying to share them. The current flow suggests that we would send data through competitors to get it to the USG, which would never scale well in a free-market economy.

The network

As with the "PDF sharing programs" of the past (err… present?), it also appears to be a system that adds cost to the intelligence network with the addition of each new node, rather than reducing it. High barriers to entry for any network reduce that network's effectiveness and, in a free market economy, eventually isolate those nodes from the greater network where the barrier to entry is lower. I get it, I understand why certain things are happening; I'm arguing that it's NOT OK. My intent is to widen the dialog a bit to see where we, as an operational community, can step up and start doing a better job of leading, instead of allowing the divide between the USG community and the operational community to widen.

Before tackling ECS, the USG should strongly address the over-classification issue. It should establish efficient and effective means for engaging with existing operational information exchanges that are working now in the private sector. Most indicators useful to the non-govt community are not classified, and in my understanding much of the classified intel is classified due to its "source, method and/or attribution", not the actual threat data. Finding a way to mark the data appropriately and then share it directly with a (closed) community would be a good thing. Washing the data through a classified pipe does nothing to make the data more useful to the non-classified community. While the problem of exchanging classified intelligence still exists, figuring out how to scale exchange in the unclassified environment will more aggressively help solve scaling it in a classified environment (more players can help solve similar problems across many spaces).

Economics

In my opinion, we should be leveraging existing, trusted security operational fabrics such as the ISC (SIE), Team Cymru, Shadowserver, Arbor Networks, Internet Identity, the APWG and the ISACs (to name a few, based on the most recent industry-wide effort, the DNS Changer botnet takedown) that have facilitated great public/private partnerships in the past [2].
Leveraging this existing framework for intelligence exchange would have been a much more valuable investment than what this is perceived to be, or what development has taken place thus far. There are also a number of ISPs [2] who actively pursue a better, cleaner internet and have proven to be great partners in this game. The tools and frameworks for this type of intelligence sharing have existing, semi-developed (workable) economic models and, more importantly, they consist of those who actually run the internet: ISPs, DNS providers, malware researchers, a/v companies, large internet properties, financial institutions, international law enforcement, policy advisors (ICANN/ARIN/etc.) and other sector-based CSIRTs. These operational communities have already taken down botnets, put people in jail and, by some estimates, saved the economy billions of dollars at a global scale over the last few years. The process has proven to work, proven to scale, and is rapidly maturing.

It is my opinion that a subsection of USG agencies is falling behind in the realm of intelligence exchange with the operations space. The rest of the world is moving towards the full-scale automation of this exchange across political boundaries and entire cultures, all while finding unique, interesting and market-friendly ways of reducing our "exchange costs". As a nation, we're at a crossroads. There are operational folks within the USG who actively participate in these communities, help make the Internet safe, and "do the right thing". There are elements within the USG (mainly on the "national security" side) that appear to operate in isolation. The argument I'm sure to hear is "well, wait, we're working on that!". In my opinion, whatever "that" is, it is mostly a re-invention of existing technologies and frameworks that will mostly only ever be adopted by those who get funding in the .gov space to implement it, which still isolates the USG from what the rest of the operational community is already doing. Competition of ideas is good, it encourages innovation and all, but it's something we should be taking a hard look at and asking whether it's the best use of our limited resources… I've been pitched my own ideas by enough beltway startups that it almost makes me want to scream… almost.

The bigger picture

My concern is that it's becoming evident that the decision makers at some agencies are making choices that could ultimately isolate their operational folks from the rest of the operational world (whether in terms of principle, or in terms of trust, or fear of legal action, etc.). As private industry progresses and parts of the USG fall further and further behind, this can only hurt us as a nation, and as a culture.

My suggestions:
If you want to be more successful (reads: we want you to be more successful), put less emphasis on standards and on how to disseminate classified information, and more on how to aggressively share unclassified intel with your constituents. We have lots of data we'd like to share with you to help protect our national investments. If the USG can get to that place (without invoking something like CISPA, which makes zero sense in a free market economy), the classified problem will solve itself, while only accounting for .001% of the data being shared (reads: it will not be such a distraction). I know some in the USG understand this and are fighting the good fight, but it's clear that not enough at the higher levels of government do (reads: have you written your elected officials lately?). When you combine this with a haphazard style of reporting (terrible at best) and the lack of a clear message (reads: translucency), these types of ill perceptions can run rampant and do more economic harm than good to the national process.

I personally will be pushing harder in the coming months to figure out how we, as the operational community, can do more to bring more USG folks into the fold in terms of building out more sustainable operational relationships, and to facilitate ways we can share classified intel more aggressively in the future. My goal is that in the coming year or two we can change the culture of over-classification while bridging the gap with the rest of the operational industry when it comes to protecting the internet. In order to protect ourselves from economic threats that vastly outweigh our individual business models, there has to be a better solution than the [perceived?] sale of classified intel. Why we're re-inventing the wheel, and why our federal government clamors for "the need to share intel with industry" but appears not to be listening, at least to the right people, who have a good record of sharing highly sensitive intelligence globally and operationalizing it, is beyond me. Washington is a very large echo chamber, and is such a large economy unto itself, that I feel the process can sometimes drown out what's going on just a few miles down the road.

Sincerely, Wes.
[1] http://www.dhs.gov/blog/2009/11/19/cikr
Written by Wes Young, Security Architect Follow CircleID on Twitter More under: Cybercrime, Security Categories: Net coverage
CAN SPAM Issues in Zoobuh v. Better Broadcasting

Last week a Utah court issued a default judgment under CAN SPAM in Zoobuh v. Better Broadcasting et al. I think the court's opinion is pretty good, even though some observers, such as the very perceptive Venkat Balasubramani, have reservations. The main issues were whether Zoobuh had standing to sue, whether the defendants' domain names were obtained fraudulently, and whether the opt-out notice in the spam was adequate.

Standing

The standing issue was easy. Zoobuh is a small ISP with 35,000 paying customers that spends a lot of time and money doing spam filtering, using its own equipment. That easily met the standard of being adversely affected by spam, since none of the filtering would be needed if it weren't for all the spam.

Domain names

CAN SPAM prohibits "header information that is materially false or materially misleading." The spammer used proxy registrations at eNom and Moniker. The first subquestion was whether using proxies is materially false. Under the California state anti-spam law, courts have held that they are, and this court found that the California law is similar enough to CAN SPAM that proxies are materially false under CAN SPAM, too. Venkat has reservations, since in principle one can contact the domain owner through the proxy service, but I'm with the court here. For one thing, even the best of proxies take a while to respond, and many are in fact black holes, so the proxy does not give you useful information about the mail at the time you get or read the mail. More importantly, businesses that advertise are by nature dealing with the public, and there is no plausible reason for a legitimate business to hide from its customers. (Yes, if they put real info in their WHOIS they'll get more spam. Deal with it.)

CAN SPAM also forbids using a "domain name, ... the access to which for purposes of initiating the message was obtained by means of false or fraudulent pretenses or representations." Both eNom's and Moniker's terms of service forbid spamming, so the court found that the senders obtained the domain names fraudulently, hence another violation. Venkat finds this to be circular reasoning, arguing that the court found the spam to be illegal because the spam was illegal, but in this case he's just wrong. Despite what some bulk mailers might wish, CAN SPAM does not define what spam is, and mail that is entirely legal under CAN SPAM can still be spam. eNom's registration agreement forbids use "if your use of the Services involves us in a violation of any third party's rights or acceptable use policies, including but not limited to the transmission of unsolicited email". Moniker's registration agreement prohibits "the uploading, posting or other transmittal of any unsolicited or unauthorized advertising, promotional materials, "junk mail," "spam," "chain letters," "pyramid schemes," or any other form of solicitation, as determined by Moniker in its sole discretion." There is no question that the defendants sent "unsolicited email" or "unsolicited advertising", and there's nothing circular about the court finding that the defendants did what they had agreed they wouldn't.

Opt-out notice

The third issue is whether the spam contained the opt-out notices required by CAN SPAM. There were no notices in the messages themselves, only links to remote images that presumably were supposed to contain the required text.
As the court said: "The question presented to the Court in this case is whether Required Content provided in the emails through a remotely hosted image is clearly and conspicuously displayed. This Court determines that it is not." One issue is that many mail programs do not display external images for security reasons, or (as in my favorite program Alpine) because they don't display images at all. The court cites multiple security recommendations against rendering remote images, and concludes that there's nothing clear or conspicuous about a remote image. Even worse, the plaintiffs said that the remote images weren't even there when they tried to fetch them. The real point here is that the senders are playing games. There is no valid reason to put the opt-out notice anywhere other than text in the body of the message, which is where every legitimate sender puts it.

Summary

Overall, I am pleased with this decision. The court understood the issues, was careful not to rely on any of the plaintiff's claims that couldn't be verified (remember that the defendant defaulted, so there was no counterargument), and the conclusions about proxy registrations and remote images will be useful precedents in the next case against spammers who use the same silly tricks. Written by John Levine, Author, Consultant & Speaker Follow CircleID on Twitter Categories: Net coverage
NSA Builds Its Biggest Data Farm Amidst Controversy

As privacy advocates and security experts debate the validity of the National Security Agency's massive data gathering operations, the agency is putting the finishing touches on its biggest data farm yet. The gargantuan $1.2 billion complex at a National Guard base 26 miles south of Salt Lake City features 1.5 million square feet of top secret space. High-performance NSA computers alone will fill up 100,000 square feet. Read full story: NPR Follow CircleID on Twitter More under: Data Center, Privacy Categories: Net coverage
World IPv6 Day: A Year in the Life

On 6 June 2012 we held World IPv6 Launch Day. Unlike the IPv6 event of the previous year, World IPv6 Day, where the aim was to switch on IPv6 on as many major online services as possible, the 2012 program was somewhat different. This time the effort was intended to encourage service providers to switch on IPv6 and leave it on. What has happened since then? Have we switched it on and left it on? What has changed in the world of IPv6 over the past 12 months? Who's been doing all the work? In this article I'd like to undertake a comparison of then-and-now snapshots of IPv6 deployment data. For this exercise I'm using the data set that we have collected using a broad-based sampling of Internet users through online advertisements. The daily snapshots of the V6 measurement can be found here, and the breakdown of this data by economy and by provider can be found on this page and here.

First, a look at the big-number picture

A year ago, in June 2012, we measured some 0.60% of the world's Internet user population that was able to successfully retrieve a dual-stack web object using IPv6. At the time the estimate of the total user population of the Internet was some 2.24B users, so 0.60% equates to 13.5M users who were using a working IPv6 protocol stack, and preferring to use IPv6 when given a choice of protocols by a dual-stack service. What does it look like one year later? In June 2013 we see a rolling average of 1.29% of the Internet's users preferring to use IPv6 when presented with a dual-stack object to fetch. With a current estimate of the Internet user population of some 2.43B users, that figure equates to a count of 29.3M users. In one sense a growth from 0.60% to 1.29% of the Internet sounds like very small steps, but at the same time a growth from 13.5M to 29.3M users is indeed a significant achievement in 12 months, easily doubling the extent of IPv6 use in this period. The tracking of this metric across the past 12 months is shown in Figure 1. There is some indication that there was a significant exercise in the deployment of IPv6 in June 2012 at the time of the World IPv6 Launch event, but also some evidence of shutting IPv6 down in some parts of the network in the months thereafter. There was another cycle of growth and decline in the period November 2012 to March 2013, and a further period of growth from March 2013 until the present day.

Figure 1 – IPv6 Deployment: June 2012 - June 2013

Where did IPv6 happen?

One way to look at IPv6 deployment is by looking at IPv6 deployment efforts on a country-by-country basis. Which countries were leading the IPv6 deployment effort twelve months ago? Table 1 contains the list of the top 20 countries, ordered by percentage of the Internet user population who are showing that they can use IPv6, from June 2012.
Table 1 – Top 20 economies, ranked by % of Internet users who use IPv6, June 2012

2012 Rank | Economy | % of Internet Users who use IPv6 | # of IPv6 Users
1 | Romania | 7.40% | 641,389
2 | France | 4.03% | 2,013,920
3 | Luxembourg | 2.59% | 12,049
4 | Japan | 1.75% | 1,766,799
5 | Slovenia | 1.07% | 15,175
6 | United States of America | 1.01% | 2,500,684
7 | China | 1.01% | 5,209,030
8 | Croatia | 0.85% | 22,551
9 | Switzerland | 0.80% | 51,575
10 | Lithuania | 0.66% | 13,845
11 | Czech Republic | 0.55% | 39,694
12 | Norway | 0.51% | 23,333
13 | Slovakia | 0.44% | 19,112
14 | Russian Federation | 0.39% | 238,576
15 | Germany | 0.32% | 217,494
16 | Hungary | 0.31% | 19,896
17 | Portugal | 0.30% | 16,406
18 | Netherlands | 0.27% | 40,870
19 | Australia | 0.25% | 49,425
20 | Taiwan | 0.24% | 38,843
That's an interesting list. There are some economies in this list that were also rapid early adopters of the internet, such as the United States, Japan, Norway and the Netherlands, and also some of the larger economies, such as France, Japan, the United States, the Russian Federation and Germany, which are members of the G8 (Italy, the United Kingdom and Canada are the other members of the G8). Some 15 of the 20 are European economies, and neither South America nor Africa is represented on this list at all. Also surprising is the top economy at the time. The efforts in Romania earlier in 2012 to provision their fixed and mobile service network with IPv6 produced an immediate effect, and by June 2012 some 7.4% of their user base was using IPv6, after commencing the public deployment of IPv6 in late April 2012. Interestingly, in percentage terms the numbers trail off quickly, so that only 10 countries were above the global average, and by the time you get to the 20th-ranked economy in this list, Taiwan, the level of IPv6 deployment was some 0.24%. So the overall picture could be described as "piecemeal", with some significant efforts in just a small number of countries to deploy IPv6. There is another way to look at this 2012 list, which is to perform the same ranking of economies by the population of IPv6 users, as shown in the following table:
Table 2 – Top 20 economies, ranked by number of IPv6 users, June 2012

2012 Rank | Economy | % of Internet Users who use IPv6 | # of IPv6 Users
1 | China | 1.01% | 5,209,030
2 | United States of America | 1.01% | 2,500,684
3 | France | 4.03% | 2,013,920
4 | Japan | 1.75% | 1,766,799
5 | Romania | 7.40% | 641,389
6 | Russian Federation | 0.39% | 238,576
7 | Germany | 0.32% | 217,494
8 | Indonesia | 0.17% | 94,543
9 | Switzerland | 0.80% | 51,575
10 | Australia | 0.25% | 49,425
11 | United Kingdom | 0.08% | 41,461
12 | Netherlands | 0.27% | 40,870
13 | Czech Republic | 0.55% | 39,694
14 | Taiwan | 0.24% | 38,843
15 | India | 0.03% | 36,881
16 | Ukraine | 0.21% | 31,933
17 | Malaysia | 0.18% | 30,034
18 | Thailand | 0.15% | 27,617
19 | Brazil | 0.03% | 26,051
20 | Nigeria | 0.06% | 25,149
Of the 13.5M IPv6 users a year ago, some 5M were located in China, and between the four economies of China, the United States, France and Japan we can account for 85% of the total estimated IPv6 users of June 2012. This observation illustrates a somewhat fragmented approach to IPv6 adoption in mid-2012, where Internet Service Providers in a small number of economies had made significant levels of progress, while in other economies the picture of IPv6 deployment ranged from experimental or highly specialised programs through to simply non-existent. There are a number of interesting entrants in this economy list, including India, Brazil and Nigeria, which point to some levels of experimentation by service providers in the provision of IPv6 services in other economies. Hopefully this experimentation was a precursor to subsequent wider deployment programs. Was this the case? What has happened in the ensuing year? Here are the same two tables, using IPv6 use data as of June 2013, showing a comparable perspective of IPv6 deployment as it stands today.
[Table: 2013 Rank | Economy | % of Internet Users who use IPv6]

This table clearly shows that Switzerland, Belgium, Germany, Peru, the Czech Republic and Greece have made a significant change in their level of IPv6 deployment in the last 12 months. We now see 7 of the 20 economies being non-European economies. Eleven of these economies have IPv6 usage rates above the global average of 1.3%, an increase of 2 since 2012. We can plot these numbers onto a world map, as shown in Figure 2, using a colouring scale from 0 to 4% of each national Internet user population that is capable of using IPv6.

Figure 2 – IPv6 Deployment, ranked by % of national users: June 2013

The following table shows the estimated IPv6 user population per economy in June 2013.
[Table: 2013 Rank | Economy | % of Internet Users who use IPv6]

Again the distribution of IPv6 users appears to be somewhat skewed, in so far as just 5 economies account for 85% of the total population of IPv6 users in June 2013; these are the same four economies of the United States, Japan, France and China, this time joined by Germany. Unfortunately we no longer see India, Brazil or Nigeria in this top 20 economy list. The top 20 cut-off has risen from 38,000 IPv6 users per economy to 44,000, so the economies at the lower end of the 2012 top 20 were likely to slip off the list unless they continued to expand their IPv6 deployment through the year (as the United Kingdom did, rising from 41,000 users in mid-2012 to 135,000 in mid-2013). In percentage terms, what has changed over the past 12 months? The following table compares the values from mid-2012 to the values in mid-2013. The first table lists the top 20 economies that have lifted the percentage of their users who are capable of using IPv6, ranked by the rate of change of this percentage value.
[Table: 2013 Rank | Economy | Diff (%) | Diff IPv6]

The largest change was in Switzerland, where a further 10% of their users were able to use IPv6, and significant efforts were visible in Luxembourg, Belgium, Romania, Germany, Peru and Japan in terms of the ratio of IPv6 users in each economy. In terms of the user population who are IPv6-capable, the table of economies that deployed IPv6 over the largest set of users is provided in Table 6. Obviously one economy where there has been substantial effort in the past 12 months has been the United States, where some 4.2M additional users are now using IPv6 in just 12 months. That is an extremely impressive effort. Similarly, there has been a significant effort in Japan and Germany. It should also be noted that as of April 2011 the further provision of IPv4 addresses through the conventional Regional Internet Registry allocation system had ceased for the Asia Pacific region, so a case could be made that the efforts in this region, including those of Japan, Taiwan, Australia and New Zealand, were spurred on by this event. Similarly, the Regional Internet Registry serving Europe and the Middle East also exhausted its pool of available IPv4 addresses in September 2012, which may have some bearing on the IPv6 efforts in Germany, France, Switzerland, Romania, Belgium, the Czech Republic, the United Kingdom, the Netherlands, Greece, Norway, Portugal and Luxembourg. However, IPv4 addresses are still available for service providers in North and South America and in Africa, which makes the efforts in the United States all the more laudable for their prudence.
[Table 6: 2013 Rank | Economy | Diff (%) | Diff IPv6]

It would also appear that Europe remains a strong focal point for IPv6 deployment at present, while deployment in other regions is far more piecemeal, although I must mention Peru and South Africa in this context as two highly notable exceptions to this general observation. And where is China in June 2013? What we saw in our measurements is a relative decline in the population of users who are seen to use IPv6 from June 2012 to June 2013. This decline was estimated to be some 557,000 users. One of the more variable factors for China is the role of the national firewall structure and its capabilities with respect to IPv6, and as the IPv6 measurement system was hosted outside of China, the measurements relating to Chinese use of IPv6 are dependent on the behaviour of this filter structure. It is possible that the firewall has different behaviours for IPv6, and equally possible that these behaviours have altered over time. It could well be that an internal view of China would show a different result than that which we see from outside the country.

It is also possible to provide some insights as to which ISPs are undertaking this activity, by tracing the originating Autonomous System (AS) number of the IP addresses of users who have provided capability data to this measurement exercise. The following is a list of some of the larger service providers that have shown significant levels of IPv6 activity in the past 12 months. The list is by no means exhaustive, but it is intended to highlight those providers that have been seen to make significant changes in their IPv6 capability measurements over the past 12 months in the economies listed in Table 6. The percentage figures provided in the list are the percentage of clients whose IP address is originated by these ASes who are able to use IPv6 in June 2012 and in June 2013.
Economy | AS Number | AS Name | 2012 IPv6 (%) | 2013 IPv6 (%)
United States of America | AS6939 | Hurricane Electric | 29% | 37%
United States of America | AS22394 | Cellco Partnership DBA Verizon Wireless | 6% | 20%
United States of America | AS7018 | AT&T Services | 6% | 15%
United States of America | AS3561 | Savvis | 0.7% | 5%
United States of America | AS7922 | Comcast | 0.5% | 2.7%
Japan | AS2516 | KDDI | 16% | 27%
Japan | AS18126 | Chubu Telecommunications | 0.2% | 23%
Japan | AS17676 | Softbank | 0.5% | 4%
Germany | AS3320 | Deutsche Telekom AG | 0.01% | 4.9%
Germany | AS31334 | Kabel Deutschland | 1.18% | 7.4%
Germany | AS29562 | Kabel BW GmbH | 0% | 10.2%
France | AS12322 | Free SAS | 19% | 22%
Switzerland | AS67722 | Swisscom | 0.2% | 23%
Switzerland | AS559 | Switch, Swiss Education and Research Network | 11% | 18%
Romania | AS8708 | RCS & RDS SA | 11.5% | 24.7%
Belgium | AS12392 | Brutele SC | 0% | 33%
Belgium | AS2611 | BELNET | 2.6% | 22.4%
Peru | AS6147 | Telefonica del Peru SA | 0% | 3.1%
Czech Republic | AS2852 | CESNET z.s.p.o. | 20% | 27%
Czech Republic | AS5610 | Telefonica Czech Republic, a.s. | 0% | 3.5%
Czech Republic | AS51154 | Internethome, s.r.o. | 0% | 2.8%
United Kingdom | AS786 | The JNT Association (JANET) | 51% | 68%
United Kingdom | AS13213 | UK2 Ltd | 0% | 23%
Taiwan | AS9264 | Academia Sinica Network | 0% | 21%
Taiwan | AS1659 | Taiwan Academic Network | 1.6% | 7.6%
Australia | AS7575 | Australian Academic and Research Network | 13% | 21%
Australia | AS4739 | Internode | 5% | 11%
Netherlands | AS3265 | XS4ALL Internet BV | 6% | 27%
Singapore | AS7472 | Starhub Internet Pte Ltd | 0% | 13%
Singapore | AS4773 | MobileOne Ltd. | 0% | 10%
Greece | AS5408 | Greek Research and Technology Network S.A. | 17% | 19%
South Africa | AS2018 | TENET | 0.3% | 3%
Canada | AS6453 | TATA Communications | 10% | 13%
Canada | AS22995 | Xplornet Communications Inc | 0.1% | 9%
Norway | AS224 | Uninett, The Norwegian University and Research Network | 16% | 24%
Norway | AS39832 | Opera Software ASA | 1.3% | 100%
Norway | AS57963 | Lynet Internett | 0% | 56%
Portugal | AS3243 | PT Comunicacoes S.A. | 0.01% | 1.3%
Luxembourg | AS6661 | Entreprise des Postes et Telecommunications | 4% | 14%
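For readers who want to reproduce the arithmetic behind figures like these, here is a minimal sketch in Python. The numbers are taken from the article above, but the helper function names and the sample selection are only illustrative assumptions, not part of the APNIC measurement tooling.

```python
# Minimal sketch: converting "% of users who are IPv6-capable" into user counts
# and year-on-year change. Values come from the article; the helper names and
# the sample selection are illustrative only.

def ipv6_users(total_internet_users: float, pct_capable: float) -> int:
    """Estimated number of IPv6-capable users given a capability percentage."""
    return round(total_internet_users * pct_capable / 100)

# Global headline numbers quoted in the article.
print(ipv6_users(2.24e9, 0.60))   # ~13.4M users in June 2012
print(ipv6_users(2.43e9, 1.29))   # ~31M users in June 2013 (the article quotes 29.3M,
                                  #  presumably from a slightly different population estimate)

# Year-on-year change for a few of the networks in the table above.
samples = [
    ("AS6939  Hurricane Electric", 29.0, 37.0),
    ("AS7922  Comcast",             0.5,  2.7),
    ("AS3320  Deutsche Telekom AG", 0.01, 4.9),
]
for name, pct_2012, pct_2013 in samples:
    print(f"{name}: +{pct_2013 - pct_2012:.2f} percentage points")
```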
What can we say about the state of IPv6 deployment one year after the commencement of the IPv6 Launch program? The encouraging news is that the overall number of IPv6-capable end users has doubled in 12 months. The measurements presented here support an estimate that today some 30 million Internet users will use IPv6 when they can. But this is not happening everywhere. Indeed, it is happening in a small number of countries, and with a still relatively small set of service providers. What we appear to be seeing are concentrated areas of quite intense IPv6 activity. Many national academic and research networks have been highly active in supporting IPv6 deployment within their networks. In the commercial networks we are seeing a number of major commercial network service operators, primarily in the United States, Japan, Germany, France, Switzerland and Romania, launch programs that integrate IPv6 services into their retail offerings. Whether this effort will provide sufficient impetus to motivate other providers to commit to a similar program of IPv6 deployment is perhaps still an open issue today, but there is some evidence that there is now a building momentum and an emerging sense of inexorable progress with the deployment of IPv6. We'll be continuing these measurements, and providing further insights as to where we can see IPv6 deployment underway across the Internet over the coming months. You can find daily reports of our measurements, including breakdowns by economy and tracking of progress with IPv6 for individual network service providers, at http://labs.apnic.net/ipv6-measurement. If you would like to assist us in this measurement exercise we'd obviously like to hear from you; drop us a note to research@apnic.net. Written by Geoff Huston, Author & Chief Scientist at APNIC Follow CircleID on Twitter More under: IPv6 Categories: Net coverage
UNESCO Director-General on Linguistic Diversity on the Internet: Main Challenges Are Technical

Irina Bokova, Director-General, UNESCO

EURid, the .eu registry, in collaboration with UNESCO, last November released the 2012 World Report on Internationalised Domain Names (IDNs) deployment. It updated the previous year's study, Internationalised Domain Names State of Play, which was published in June 2011 and presented at the 2011 United Nations Internet Governance Forum in Nairobi, Kenya. Today, Irina Bokova, Director-General of UNESCO, released a statement concerning linguistic diversity on the Internet, stating: "UNESCO's experience and the 2012 study of the use of internationalized domain names undertaken with EURid show that the main challenges are technical. Obstacles lie with Internet browsers that do not consistently support non-ASCII characters, with limited e-mail functionality, and with the lack of support of non-ASCII characters in popular applications, websites and mobile devices."

Below is an excerpt from the 70-page EURid-UNESCO 2012 report: This year, the data set for this study is expanded from 53 to 88 TLDs, and includes 90% of all domain names registered as at December 2011, albeit that the data set is not complete for every parameter. The World Report includes case studies on the ccTLDs for the European Union, Russian Federation, Qatar, Saudi Arabia, Egypt and the Republic of Korea. Where an existing registry has launched an IDN ccTLD (for example, .sa and ????????.) these are considered as two separate entities for the purpose of the report. Part 1 of the World Report on IDN deployment sets out a background to IDNs and a timeline. It considers progress in supporting IDNs in email and browsers. It then reviews the IDN applications in ICANN's programmes to create new TLDs. A comparison of growth rates of IDN registrations versus general registrations is made within European registries, and usage rates are compared amongst .eu and .?? IDNs and benchmarked with other TLDs. Case studies follow, on the European Union (.eu) ccTLD, and country case studies on the Russian Federation, Qatar, Saudi Arabia, Egypt and the Republic of Korea.

Also noteworthy is the foreword included in the report by Vint Cerf (excerpt below) on the historical adoption of simple Latin characters in the early days of the Domain Name System (DNS). Cerf writes: "For historical reasons, the Domain Name System (DNS) and its predecessor (the so-called "host.txt" table) adopted naming conventions using simple Latin characters drawn from the letters A-Z, digits 0-9 and the hyphen ("-"). The host-host protocols developed for the original ARPANET project were the product of research and experimentation led in very large part by English language speaking graduate students working in American universities and research laboratories. The project was focused on demonstrating the feasibility of building a homogeneous, wide area packet switching network connecting a heterogeneous collection of time-shared computers. This project led to the Internetting project that was initially carried out by researchers in the United States of America and the United Kingdom, joined later with groups in Norway, Germany and Italy, along with a few visiting researchers from Japan and France. The primary focus of the Internetting project was to demonstrate the feasibility of interconnecting different classes of packet switched networks that, themselves, interconnected a wide and heterogeneous collection of timeshared computers.
The heterogeneity of interest was not in language or script but in the underlying networks and computers that were to be interconnected. Moreover, the Internet inherited applications and protocols from the ARPANET and these were largely developed by English language speakers (not all of them necessarily native speakers). The documentation of the projects was uniformly prepared in English. It should be no surprise, then, that the naming conventions of the Internet rested for many years on simple ASCII-encoded strings. The simplicity of this design and the choice to treat upper and lower case characters as equivalent for matching purposes, avoided for many years the important question of support for scripts other than Latin characters. As the Internet has spread across the globe, the absence of support for non-Latin scripts became a notable deficiency. For technical reasons, support for non-Latin scripts was treated as a design and deployment problem whose solution was intended to minimise change to the domain name resolution infrastructure. This was debated in the Internet Engineering Task Force more than once, but the general conclusion was always that requiring a change to every resolver and domain name server, rather than changes on the client side only, would inhibit deployment and utility. This led to the development of so-called "punycode" that would map Unicode characters representing characters from many of the world's scripts into ASCII characters (and the reverse). This choice also had the salient feature of making unambiguous the question of matching domain names, since the punycoded representations were unique and canonical in form. This design is not without its problems but that is where we are at present."

IDN introduction timeline – Source: EURid-UNESCO World report on Internationalised Domain Names deployment 2012

The full report can be downloaded in PDF here: EURid-UNESCO World report on Internationalised Domain Names deployment 2012 Follow CircleID on Twitter More under: DNS, Domain Names, Multilinguism, Top-Level Domains Categories: Net coverage
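As a small, concrete illustration of the punycode mapping Cerf describes in the foreword quoted above, the following Python sketch converts a Unicode domain name into its ASCII-compatible "xn--" form and back using the standard library's IDNA (2003) codec. The domain shown is a made-up example, not one drawn from the report.

```python
# Minimal sketch of the punycode/IDNA mapping described in the foreword above.
# The domain name below is an invented example.
name = "bücher.example"

# Map the Unicode name to its ASCII-compatible ("xn--") form for use in the DNS...
ascii_form = name.encode("idna")
print(ascii_form)                 # b'xn--bcher-kva.example'

# ...and map it back to Unicode for display to the user.
print(ascii_form.decode("idna"))  # bücher.example
```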
NSA PRISM Program Has Direct Access to Servers of Google, Skype, Yahoo and Others, Says Report

The National Security Agency has obtained direct access to the systems of Google, Facebook, Apple and other US internet giants, according to a top secret document obtained by the Guardian. The NSA access is part of a previously undisclosed program called PRISM, which allows officials to collect material including search history, the content of emails, file transfers and live chats, the document says. Follow CircleID on Twitter More under: Privacy Categories: Net coverage
Akram Atallah Named Head of ICANN's New Generic Domains Division

In a post today, ICANN's CEO, Fadi Chehadé, announced the creation of a new division within ICANN, called the Generic Domains Division, in order "to handle the tremendous increase in scale resulting from the New gTLD Program." Akram Atallah, who is currently the Chief Operating Officer (COO), will become divisional President of the Generic Domains Division that will include gTLD Operations, DNS Industry Engagement, and Online Community Services. Susanna Bennett, a financial and operational executive in technology services, will be joining ICANN as the new COO, filling Akram's position, starting July 1st. Follow CircleID on Twitter More under: ICANN, Top-Level Domains Categories: Net coverage
First Private Auction for New Generic Top Level Domains Completed: 6 gTLDs Valued at Over $9 Million

On behalf of Innovative Auctions, I am very happy to announce that we've successfully completed the first private auction for generic Top Level Domains (gTLDs). Our auction resolved contention for 6 gTLDs: .club, .college, .luxury, .photography, .red, and .vote. Auction winners will pay a total of $9.01 million. All other participants will be paid from these funds in exchange for withdrawing their applications. In ICANN's gTLD Applicant Guidebook, applicants for gTLD strings that are in contention are asked to resolve the contention among themselves. ICANN did not further specify how to do that. Our Applicant Auction, designed by my colleague Peter Cramton, has now become the most successful (and proven) alternative to tedious multilateral negotiations. The first withdrawal as a result of our auction (an application for .vote) has already been announced by ICANN. All participants, winners and non-winners alike, indicated that they were pleased with the results of the first Applicant Auction. "The auction system was clear, user-friendly, and easy to navigate," said Monica Kirchner of applicant Luxury Partners. "The process worked smoothly, and we're very happy with the outcome." "The Applicant Auction process is extremely well organized and we were very pleased with the results for us," said Colin Campbell of .CLUB LLC. "It is a fair and efficient way to resolve contention and support the industry at the same time, with auction funds remaining among the domain contenders." Top Level Design's CEO Ray King praised the auction's execution: "The applicant auction process was great, the software functioned without a hitch and all of the folks involved were responsive and highly professional. We look forward to participating in future auctions with Innovative Auctions." In the last days leading up to the auction, many single-string and multiple-string participants expressed an interest in participating in private auctions in general and the Applicant Auction in particular. Antony van Couvering's insightful article on CircleID a few days ago lays out the reasons why his company TLDH will participate in private auctions, and Colin Campbell, who announced earlier today that his company was the winner for .club, predicts that "many other parties who stood by the sidelines in this first auction will participate in future Applicant Auctions." We'll hold additional auctions in the coming months, on a schedule and under terms mutually agreed upon by applicants, to resolve contention for many more of the roughly 200 gTLDs still pending. Please direct questions to info@applicantauction.com. Written by Sheel Mohnot, Project Director, Applicant Auction Follow CircleID on Twitter More under: ICANN, Top-Level Domains Categories: Net coverage
BIND 9 Users Should Upgrade to Most Recent Version to Avoid Remote Exploit

A remote exploit in the BIND 9 DNS software could allow attackers to trigger excessive memory use, significantly impacting the performance of DNS and other services running on the same server. BIND is the most popular open source DNS server, and is almost universally used on Unix-based servers, including those running on Linux, the BSD variants, Mac OS X, and proprietary Unix variants like Solaris. A flaw was recently discovered in the regular expression implementation used by the libdns library, which is part of the BIND package. The flaw enables a remote user to cause the 'named' process to consume excessive amounts of memory, eventually crashing the process and tying up server resources to the point at which the server becomes unresponsive. Affected BIND versions include all 9.7 releases, 9.8 releases up to 9.8.5b1, and 9.9 releases up to version 9.9.3b1. Only versions of BIND running on UNIX-based systems are affected; the Windows version is not exploitable in this way. The Internet Systems Consortium considers this to be a critical exploit. All authoritative and recursive DNS servers running the affected versions are vulnerable. The most recent versions of BIND in the 9.8 and 9.9 series have been updated to close the vulnerability by disabling regular expression support by default. The 9.7 series is no longer supported and those using it should update to one of the more recent versions. However, if that is not desirable or possible, there is a workaround, which involves recompiling the software without regex support. Regex support can be disabled by editing the BIND software's 'config.h' file and replacing the line that reads "#define HAVE_REGEX_H 1" with "#undef HAVE_REGEX_H" before running 'make clean' and then recompiling BIND as usual. At the time of the initial report, ISC stated that there were no active exploits for the vulnerability, but a user reported that he was able to develop and implement a working exploit in ten minutes. While most of the major DNS providers, including DNS Made Easy, have patched and updated their software, DNS software on servers around the Internet tends to lag behind the most recent version. Because BIND is so widely used and DNS is essential to the functioning of the Internet, knowledge of this vulnerability should be disseminated as widely as possible to encourage system administrators to update. It should be noted that this exploit is totally unrelated to the widely publicized problems with the DNS that allow criminals to launch DNS amplification attacks. Those attacks depend on a misconfiguration of DNS servers rather than a flaw in the software. However, both problems can be used to create a denial of service attack. Open recursive DNS servers can be used to direct large amounts of data at their targets, effectively using DNS as a weapon to attack other parts of the Internet's infrastructure, whereas the regex vulnerability could be used to attack the DNS itself. Written by Evan Daniels Follow CircleID on Twitter More under: DNS, DNS Security Categories: Net coverage
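As a rough way to check whether a running server falls inside the affected ranges listed above, here is a minimal Python sketch. It simply parses the output of 'named -v' and compares it against the version ranges named in this article (all 9.7 releases, 9.8 up to 9.8.5b1, 9.9 up to 9.9.3b1). The helper functions are my own illustration, not an ISC tool, and ISC's advisory remains the authoritative reference, including for the interim -P patch releases it issued.

```python
# Minimal triage sketch for the libdns regex memory-exhaustion issue described
# above. The affected ranges mirror the article (all 9.7 releases, 9.8 up to
# 9.8.5b1, 9.9 up to 9.9.3b1); this is not an official ISC tool.
import re
import subprocess

def parse_bind_version(text):
    """Extract (major, minor, patch, is_prerelease) from 'named -v' output, e.g. 'BIND 9.9.2-P1'."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)(\S*)", text)
    if not m:
        raise ValueError(f"unrecognised version string: {text!r}")
    major, minor, patch = int(m.group(1)), int(m.group(2)), int(m.group(3))
    prerelease = bool(re.match(r"(a|b|rc)\d*$", m.group(4)))   # e.g. the 'b1' in 9.8.5b1
    return major, minor, patch, prerelease

def looks_affected(major, minor, patch, prerelease):
    if (major, minor) == (9, 7):
        return True                                        # all 9.7 releases (now end-of-life)
    if (major, minor) == (9, 8):
        return patch < 5 or (patch == 5 and prerelease)    # up to 9.8.5b1
    if (major, minor) == (9, 9):
        return patch < 3 or (patch == 3 and prerelease)    # up to 9.9.3b1
    return False

output = subprocess.run(["named", "-v"], capture_output=True, text=True).stdout
print("possibly affected:", looks_affected(*parse_bind_version(output)))
```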
A Look Ahead to Fedora 19

Fedora is the community-supported Linux distribution that is often used as a testing ground for features that eventually find their way into the Red Hat Enterprise Linux commercial distribution and its widely used noncommercial twin, CentOS. Both distributions are enormously popular on servers, so it's often instructive for sysadmins to keep an eye on what's happening with Fedora. Fedora prides itself on being at the bleeding edge of Linux software, so all the cool new features tend to get implemented there before they are included in Ubuntu and the other popular distros. Late May saw the release of the beta version of Fedora 19, AKA Schrödinger's Cat, which has a number of new features that will be of interest to developers, system administrators, and desktop users.

Updated Programming Languages

This release seems to be primarily focused on developers, who will be pleased to hear that many of the most popular programming languages used on the web are getting a bump.

Ruby 2.0 – This is the first major Ruby release in half a decade, and adds a number of new features to the language, including keyword arguments, a move to UTF-8 as the default source encoding, and many updates to the core classes.

PHP 5.5 – PHP 5.5 brings some great additions to everyone's favorite web programming language, including support for generators with the new "yield" keyword, and the addition of a new password hashing API that should make it easier to manage password storage more securely.

OpenJDK 8 – Those who really like to live on the bleeding edge can check out the technology preview of OpenJDK 8, which won't be officially released until September (if all goes according to plan). This release is intended to add support for programming in multicore environments by adding closures to the language, in addition to the standard performance enhancements and bug fixes.

Node.js – The Node.js runtime and its dependencies will be included as standard for the first time.

Developer's Assistant

The Developer's Assistant is a new tool that makes it easier to automate setting up an environment suitable for programming in a particular language, so it'll take care of installing compilers, interpreters, and their dependencies, and running various scripts to set environment variables and other factors necessary for creating the perfect development environment for the chosen language.

OpenShift Origin

OpenShift Origin is an application platform intended for building, testing and deploying Platform-as-a-Service offerings. It was originally developed for RHEL and is now finding its way into Fedora. Desktop environments are also getting the usual version increment, with KDE moving to version 4.10 and Gnome getting a bump to 3.10. If you want, you can give the new Fedora beta a try by grabbing the image from their site. The usual caveats apply: you shouldn't use it in a production environment. Written by Graeme Caldwell, Inbound Marketer for InterWorx Follow CircleID on Twitter More under: Web Categories: Net coverage
The Pros and Cons of Vectoring

Vectoring is an extension of DSL technology that employs the coordination of line signals to reduce crosstalk levels and improve performance. It is based on the concept of noise cancellation: the technology analyses noise conditions on copper lines and creates a cancelling anti-noise signal. While data rates of up to 100Mb/s are achievable, as with all DSL-based services this is distance related: the maximum available bit rate is possible at a range of about 300-400 meters. Performance degrades rapidly as the loop attenuation increases, becoming ineffective after 700-800 meters. The technology is seen as an intermediate step to full FttH networks. Vectoring is also specific to the DSL environment, being more appropriate to DSL LLU but becoming severely limited when applied to VDSL2 sub-loops unless all the lines are managed by the same system. Vectoring requires that all copper pairs of a cable binder are operated by the same DSLAM, and several DSLAMs need to work in combination in order to eliminate crosstalk. A customer's DSL modem also needs to support vectoring. Though the ITU has devised a Recommendation for vectoring (G.993.5), the technology is still under development and there currently remains a lack of standardisation across these various elements. The quality of the copper network is also an issue, with better quality (newer) copper providing better results. Poorer quality copper cabling (e.g. having poorer isolation, less copper pair drilling) can also result in higher crosstalk, and thus a higher degree of pair-related interference. Nevertheless, these issues could be addressed within the vectoring process. Vectoring is also incompatible with some current regulatory measures, though again future amendments could bring a resolution to these difficulties. While Telekom Deutschland has been engaged in vectoring since late 2012, the technology requires regulatory approval since it is based on DSL infrastructure, and some services which TD must provide to competitors are incompatible with vectoring. As such, TD must negotiate with the regulator the removal of those services from its service obligations. A partial solution may be achieved through the proposal that the regulator restrict total unbundling obligations for copper access lines to the frequency space below 2.2MHz. Operators which have looked to deploy vectoring are being driven by cost considerations. The European Commission's target in its 'Digital Agenda 2020' is for all citizens in the region to have access to speeds of at least 30Mb/s by 2020, with at least half of all premises receiving broadband at over 100Mb/s. This presupposes fibre for most areas, with the possibility of LTE to furnish rural and remote areas. However, some cash-strapped incumbents are considering vectoring to enable them to meet these looming targets more cheaply, while still pursuing fibre (principally FttC, supplemented by FttH in some cities). Belgium was an early adopter of vectoring: the incumbent Belgacom had been one of the first players to deploy VDSL1, which has since been phased out for the more widely used VDSL2, supplying up to 50Mb/s for its bundled services customers. The company's investment in vectoring will enable it to upgrade a portion of its urban customers more quickly and cheaply than would otherwise be possible with FttH.
Yet it is perceived as a stop-gap measure to buy time and to forestall customer churn to the cablecos, which have already introduced 120Mb/s services across their footprints and are looking to release 200Mb/s services or higher. The inherent limitations of copper, regardless of technological tweaking, will mean that Belgacom will have to follow the Scandinavian operators and deploy 1Gb/s FttH services in order to keep pace with consumer demand for bandwidth over the next decade. Vectoring technology has also been trialled by Telekom Austria as part of its FttC GigaNet initiative, as it has by P&T Luxembourg, which in early 2013 contracted Alcatel-Lucent (one of the vendors leading vectoring R&D) to develop one of the world's first trials of combined VDSL2 bonding and vectoring technologies. The Italian altnet Fastweb is also investing in vectoring, in conjunction with a programme to deliver FttC to about 20% of households by the end of 2014. Fastweb's parent company Swisscom has budgeted €400 million for the project (as part of a wider FttC co-investment with Telecom Italia), putting the cost at about €100 per home connected. The low figure is partly explained by Fastweb being able to utilise its existing fibre networks. Nevertheless, Fastweb in the long term is aiming to have an FttH-based network across its footprint, having recently committed an additional €2 billion investment to 2016, contracting Huawei to upgrade its network from 100Mb/s to 1Gb/s. Written by Paul Budde, Managing Director of Paul Budde Communication Follow CircleID on Twitter More under: Access Providers, Broadband, Telecom Categories: Net coverage
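To make the noise-cancellation idea behind vectoring, described at the start of this article, a little more concrete, here is a toy numerical sketch in plain Python. The coupling coefficient and the symbol values are invented for illustration; real vectoring systems estimate the crosstalk channel continuously, per tone and per pair, at the DSLAM.

```python
# Toy illustration of the vectoring idea: cancel crosstalk from a neighbouring
# pair by pre-subtracting an "anti-noise" signal. All values are invented for
# illustration; real systems estimate the coupling per tone and per pair.
victim_signal    = [1.0, -0.5, 0.8, 0.3]   # symbols we want to deliver on line A
disturber_signal = [0.7,  0.2, -0.9, 0.4]  # symbols sent on neighbouring line B
coupling = 0.2                              # fraction of line B that leaks into line A

# Without vectoring: the receiver on line A sees its own signal plus crosstalk.
received_plain = [a + coupling * b for a, b in zip(victim_signal, disturber_signal)]

# With vectoring: the DSLAM knows line B's symbols and the coupling estimate,
# so it pre-distorts line A's transmit signal to cancel the expected crosstalk.
precoded = [a - coupling * b for a, b in zip(victim_signal, disturber_signal)]
received_vectored = [p + coupling * b for p, b in zip(precoded, disturber_signal)]

print(received_plain)      # distorted by crosstalk from line B
print(received_vectored)   # matches victim_signal again
```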
ISOC Funds 11 Projects that Enhance Internet Environments in Underserved Regions
Each year, a number of projects around the world receive funding from the Internet Society to do everything from connecting Sri Lankan farmers with up-to-date sustainable agriculture information, to teaching ICT skills to at-risk youth in Africa, to working with local engineers to further their IPv6 implementation knowledge. These projects are planned and brought to life by Internet Society members. The Internet Society today announced funding for 11 community-based Internet projects that will enhance the Internet ecosystem in underserved communities around the world. The Community Grants are awarded twice each year to Internet Society Chapters and Members. Recipients receive up to US$10,000 to implement their projects. The 11 projects funded in this round of grants will:
The next application round opens in September. Additional information is available on the Community Grants Programme and these winning projects. More under: Access Providers, Broadband Categories: Net coverage
Michele Neylon, Blacknight CEO Elected as Chair of Registrar Stakeholder Group of ICANN
Michele Neylon, CEO of Blacknight, announced today his election as Chair of the Registrar Stakeholder Group of ICANN, the first European to ever hold this position. The Registrar Stakeholder Group (RrSG) is one of several Stakeholder Groups within the ICANN community and is the representative body of domain name Registrars worldwide. It is a diverse and active group that works to ensure the interests of Registrars and their customers are effectively advanced. The chair, in consultation with the executive committee and members, organises the work of the Stakeholder Group and conducts RrSG meetings. The chair often confers with others in the ICANN community on Registrar-related policy and business issues, and is the primary point of contact between the RrSG and ICANN staff. Neylon has previously served as Secretary to the RrSG and is the only European member of the executive committee. More under: Domain Names, ICANN Categories: Net coverage
One Year Later: Who's Doing What With IPv6?
One year on from the World IPv6 Launch in June 2012, we wanted to see how much progress has been made towards the goal of global IPv6 deployment. Both APNIC and Google are carrying out measurements at the end-user level, which show that around 1.29% (APNIC) and 1.48% (Google) of end users are capable of accessing the IPv6 Internet. Measurements taken at this time last year showed 0.49% (APNIC) and 0.72% (Google), which means the number of IPv6-enabled end users has more than doubled in the past 12 months. Rather than looking at the end user, the measurements the RIPE NCC conducts look at the networks themselves. To what extent are network operators engaging with IPv6? And how ready are they to deploy it on their networks?
IPv6 RIPEness
The RIPE NCC measures the IPv6 "readiness" of LIRs in its service region by awarding stars based on four indicators. LIRs receive stars when:
The pie charts below show the number of LIRs holding 0-4 RIPEness stars at the time of the World IPv6 Launch in June 2012, and the number today.
The first RIPEness star is awarded when the LIR receives an allocation of IPv6 address space. When we look at the charts above, we see that the number of LIRs without an IPv6 allocation has decreased from 50% at the time of the World IPv6 Launch to 39% today. One factor that shouldn't be overlooked here is that the current IPv4 policy requires that an LIR receive an initial IPv6 allocation before it can receive its last /22 of IPv4 address space. However, this does not explain the increase in 2-4 star RIPEness, which can only come from LIRs working towards IPv6 deployment.
Five-Star RIPEness
At the recent RIPE 66 Meeting in Dublin, we presented the results from our introduction of a fifth RIPEness star, which is still in the prototype stage. This fifth star measures actual deployment of IPv6. It looks at whether LIRs are providing content over IPv6 and the degree to which they are providing IPv6 access to end users. More information on the fifth star and the methodology behind it can be found on RIPE Labs. In this first version, 573 LIRs in the RIPE NCC service region qualify for the fifth star, representing 6.24% of all LIRs in the region.
The Day We Crossed Over
Coincidentally, the World IPv6 Launch was around the same time as another milestone for the RIPE NCC service region. It was roughly then that the number of LIRs with IPv6 allocations outnumbered those without IPv6 for the first time. This number has continued to increase, and there are currently 5,630 LIRs with IPv6 and 3,584 without. The blue line on the graph below represents LIRs with an IPv6 allocation, while the red line indicates those with no IPv6.
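As a quick sanity check on the figures above, the back-of-the-envelope calculation below assumes the 5,630/3,584 split accounts for every LIR in the RIPE NCC service region, which the article does not state explicitly; the small gap to the quoted 6.24% likely reflects a slightly different denominator at measurement time.

    # Rough consistency check on the quoted figures (assumption: the 5,630 / 3,584
    # split covers all LIRs in the RIPE NCC service region).
    lirs_with_v6, lirs_without_v6 = 5630, 3584
    total_lirs = lirs_with_v6 + lirs_without_v6                                 # 9,214
    print(f"{lirs_with_v6 / total_lirs:.1%} of LIRs hold an IPv6 allocation")   # ~61.1%
    print(f"{573 / total_lirs:.2%} qualify for the fifth star")                 # ~6.22%, close to the quoted 6.24%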
ASNs Announcing IPv6
One of the things the RIPE NCC regularly checks is the percentage of autonomous networks announcing one or more IPv6 prefixes into the global routing system. This is an important step before a network can begin exchanging IPv6 traffic with other networks. When we take a global view using the graph, we see that in the year since the World IPv6 Launch, the percentage of networks announcing IPv6 has increased from 13.7% to 16.1%. Of the 44,470 autonomous networks visible on the global Internet, 7,168 are currently announcing IPv6. When we adopt a regional perspective, one of the things we would hope to see is increasing IPv6 deployment in those regions where the free pool of IPv4 has been exhausted. It is reassuring to see this confirmed — both the APNIC and the RIPE NCC service regions are leading the way, with 20.0% and 18.1% (respectively) of networks announcing IPv6. The table below compares the percentage of autonomous networks announcing IPv6 — both now and at the time of the World IPv6 Launch in 2012.
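The per-AS building block behind this statistic — does a given autonomous system announce any IPv6 prefix? — can be approximated with the RIPE NCC's public RIPEstat Data API. The sketch below is only an outline: the announced-prefixes endpoint and the response fields used are assumptions based on RIPEstat's public documentation, not something taken from this article.

    # Hedged sketch: count the IPv6 prefixes an AS currently announces, using the
    # RIPEstat "announced-prefixes" data call (endpoint and fields assumed from
    # RIPEstat's public documentation).
    import requests

    def ipv6_prefixes_announced(asn: str) -> list[str]:
        """Return the IPv6 prefixes currently announced by the given AS."""
        url = "https://stat.ripe.net/data/announced-prefixes/data.json"
        resp = requests.get(url, params={"resource": asn}, timeout=30)
        resp.raise_for_status()
        prefixes = [entry["prefix"] for entry in resp.json()["data"]["prefixes"]]
        return [p for p in prefixes if ":" in p]   # IPv6 prefixes contain a colon

    if __name__ == "__main__":
        v6 = ipv6_prefixes_announced("AS3333")     # the RIPE NCC's own AS, as an example
        print(f"AS3333 announces {len(v6)} IPv6 prefix(es)")

Repeating such a query across all visible ASNs (or, more efficiently, working from a bulk routing-table snapshot such as RIS data) would yield the kind of percentage quoted above.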
The RIPE NCC's graph of IPv6-Enabled Networks (below) shows this as a comparison over time and allows for comparisons between countries and regions.
Reassuring, But The Real Work Is Still Ahead
While the above statistics provide good cause for optimism, there is still a long way to go. Now, more than ever, network operators need to learn about IPv6 and deploy it on their networks in order to safeguard the future growth of the Internet. To find out more about IPv6, visit IPv6ActNow. Written by Mirjam Kuehne More under: IPv6 Categories: Net coverage