CircleID posts

Will the GNSO Review Be Pushed Back Another Year?

Sat, 2013-06-15 20:12

ICANN bylaws mandate periodic reviews of the organisation's main structures. For the body that handles gTLD policy making, the GNSO, that review was due to start in February this year.

The review appears much needed. The GNSO Council is the manager of the gTLD policy process and as such, it has representatives of all GNSO groups. But according to repeated statements by many of those representatives, the Council's current bicameral structure has not lived up to expectations.

This two-house structure is the result of the last review and the recommendations that came out of it. Each GNSO Council house is divided in two sub-groups called stakeholder groups (SGs). But that's where the symmetry ends.

In the contracted parties house, the two SGs are the only entities. So the registry SG and the registrar SG are often able to find areas of common interest.

But in the non-contracted parties house, the two SGs are made up of five sub-groups, called constituencies. Seeing eye to eye is not always so easy for the commercial SG (the business, internet service providers and intellectual property constituencies) and the non-commercial SG (the non-commercial users and the not-for-profit organizations constituencies).

The review would help evaluate whether the GNSO's current structure is well suited to ICANN's changing environment and policy making needs in a world that has already been dramatically changed by the new gTLD program. Its results would pave the way for any changes that are deemed necessary.

Yet it seems the review is unlikely to start anytime soon. On June 12, in an email to the GNSO Council, ICANN staff explained that the "Board Structural Improvements Committee (SIC) is considering postponing the GNSO Review (potentially for a year) while it evaluates options for streamlining the organizational review process and considers relevant discussions involving development of a new ICANN Strategic Plan. The SIC expects to make a recommendation to the Board in Durban and staff will keep you apprised of these developments."

Those who feel the current bicameral structure is unbalanced will not be happy to hear the GNSO may not be reviewed for another year. It's also unclear how such a decision, should it be confirmed, would fit with the ICANN bylaws requirement which states that "reviews shall be conducted no less frequently than every five years, based on feasibility as determined by the Board." (Article IV, section 4).

Obviously, allowing the Board to determine feasibility gives the SIC the leeway to push the review back. But can a two-year delay still be considered reasonable?

Written by Stéphane Van Gelder, Chairman, STEPHANE VAN GELDER CONSULTING

More under: ICANN, Internet Governance, Top-Level Domains

Belgian Country Code Now Supports Internationalized Domain Names

Fri, 2013-06-14 19:53

Earlier this week dns.be launched Internationalized Domain Names (IDNs).

The Belgian registry opted to support the accented characters for Dutch, French and German. In so doing they've also ended up providing support for other European languages, such as Swedish, Finnish and Danish.

The characters supported are below:

ß*àáâãóôþüúðæåïçèõöÿýòäœêëìíøùîûñé

The registry reported quite a bit of interest in the launch with over 3000 IDN domains being registered in the first hour. That number had practically doubled by close of business on the first day.

So what domains are people registering?

The most popular requests were:

1. café.be
2. météo.be
3. hôtels.be
4. bébé.be
5. crédit.be
6. hôtel.be
7. één.be
8. italië.be
9. cinéma.be
10. château.be
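
For readers curious how these accented names are actually carried in the DNS, the short sketch below is purely illustrative (it is not anything published by dns.be): it uses Python's built-in IDNA codec to convert a few of the popular registrations into the ASCII-compatible "xn--" form that registries and resolvers work with.

```python
# Minimal illustration: how IDN labels such as café.be are represented in the
# DNS using the ASCII-compatible "xn--" (Punycode) form.
# Uses only Python's built-in IDNA (2003) codec; domains taken from the list above.

idn_domains = ["café.be", "météo.be", "château.be"]

for name in idn_domains:
    # encode() applies the IDNA ToASCII conversion label by label
    ace = name.encode("idna").decode("ascii")
    print(f"{name:<12} -> {ace}")

# café.be, for example, becomes xn--caf-dma.be on the wire.
```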

The registry announced that close to 80% of the IDN domains were registered to Belgian residents, reinforcing the view that IDNs were in demand from the local market.

More details here.

Written by Michele Neylon, MD of Blacknight Solutions

More under: Domain Names, Multilinguism

Don't Overlook the Network When Migrating to the Cloud

Fri, 2013-06-14 00:26

The success or failure of public cloud services can be measured by whether they deliver high levels of performance, security and reliability that are on par with, or better than, those available within enterprise-owned data centers. As an indication of how rapidly the cloud market is growing, IDC forecasts that public cloud IT spending will increase from $40 billion in 2012 to $100 billion in 2016. To provide the performance, security and reliability needed, cloud providers are moving quickly to build a virtualized multi-data center service architecture, or a "data center without walls."

This approach federates the data centers of both the enterprise customer and the cloud service provider so that all compute, storage, and networking assets are treated as a single, virtual pool with optimal placement, migration, and interconnection of workloads and associated storage. This "data center without walls" architecture gives IT tremendous operational flexibility and agility to better respond to and support business initiatives by transparently using both in-house and cloud-based resources. In fact, internal studies show that IT can experience resource efficiency gains of 35 percent over isolated provider data center architectures.

However, this architecture is not without its challenges. The migration of workloads between the enterprise and the public cloud creates traffic between the two, as well as between clusters of provider data centers. In addition, transactional loads and demands placed on the backbone network, including self-service customer application operations (application creation, re-sizing, or deletion in the cloud) and specific provider administrative operations, can cause variability and unpredictability in traffic volumes and patterns. To accommodate this variability in traffic, providers normally would have to over-provision the backbone to handle the sum of these peaks — an inefficient and costly approach.

Getting to Performance-on-Demand

In the future, rather than over-provisioning, service providers will employ intelligent networks that can be programmed to allocate bandwidth from a shared pool of resources where and when it is needed. This software-defined network (SDN) framework consists of three layers: a virtualized infrastructure layer (the transport and switching network elements); a network control layer, or SDN controller (the software that configures the infrastructure layer to accommodate service demands); and an application layer (the service-creation and delivery software, such as the cloud orchestrator, that drives the required network connectivity).

SDN enables cloud services to benefit from performance-on-demand

The logically centralized control layer software is the linchpin of orchestrated performance-on-demand. It allows the cloud orchestrator to request the allocation of network resources without needing to understand the complexity of the underlying network.

For example, the orchestrator may simply request a connection between specified hosts in two different data centers to handle the transfer of 1 TB with a minimum flow rate of 1 Gb/s and packet delivery ratio of 99.9999% to begin between the hours of 1:00 a.m. and 4:00 a.m. The SDN controller first verifies the request against its policy database, performs path computation to find the best resources for the request, and orchestrates the provisioning of those resources. It subsequently notifies the cloud orchestrator so that the orchestrator may initiate the inter-data center transaction.
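
As a rough sketch of what such a request could look like from the orchestrator's side, the snippet below builds a hypothetical performance-on-demand request. The field names, host names and the JSON shape are assumptions for illustration only and do not correspond to any particular SDN controller's API.

```python
import json

# Hypothetical sketch of the performance-on-demand request described above:
# move 1 TB between two data center hosts at a minimum of 1 Gb/s, with a
# 99.9999% packet delivery ratio, inside a 1:00-4:00 a.m. window.
# Field names, host names and JSON shape are invented for illustration.
bandwidth_request = {
    "service": "inter-dc-bulk-transfer",
    "source_host": "dc-east.example.net",
    "destination_host": "dc-west.example.net",
    "volume_bytes": 10**12,             # 1 TB to transfer
    "min_rate_bps": 10**9,              # 1 Gb/s floor
    "packet_delivery_ratio": 0.999999,  # "six nines" delivery
    "window_start": "01:00",
    "window_end": "04:00",
}

# The cloud orchestrator would hand this request to the SDN controller, which
# checks policy, computes a path, provisions the resources, and notifies the
# orchestrator when the connection is ready for the transfer to begin.
print(json.dumps(bandwidth_request, indent=2))
```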

The benefits of this approach include cost savings and operational efficiencies. Delivering performance-on-demand in this way can reduce cloud backbone capacity requirements by up to 50 percent compared to over-provisioning, while automation simplifies planning and operational practices, and reduces the costs associated with these tasks.

The network control and cloud application layers also can work hand-in-hand to optimize the service ecosystem as a whole. The network control layer has sight of the entire landscape of all existing connections, anticipated connections, and unallocated resources, making it more likely to find a viable path if one is possible — even if nodes or links are congested along the shortest route.
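
A minimal sketch of this idea, assuming an invented topology and a simple utilization threshold: a shortest-path search that skips congested links. This is a much-simplified stand-in for the constraint-based path computation a real SDN controller would perform.

```python
import heapq

# Toy constraint-aware path search: find the lowest-cost path while skipping
# any link whose utilization exceeds a threshold. Topology and numbers are
# invented for illustration only.

# graph[node] = list of (neighbor, cost, utilization)
graph = {
    "A": [("B", 1, 0.95), ("C", 2, 0.30)],   # A-B is short but congested
    "B": [("D", 1, 0.40)],
    "C": [("D", 2, 0.25)],
    "D": [],
}

def find_path(graph, src, dst, max_util=0.8):
    """Dijkstra over links whose utilization is below max_util."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, link_cost, util in graph.get(node, []):
            if util <= max_util and nbr not in visited:
                heapq.heappush(queue, (cost + link_cost, nbr, path + [nbr]))
    return None  # no viable path under the constraint

print(find_path(graph, "A", "D"))  # -> (4, ['A', 'C', 'D']), avoiding the congested A-B link
```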

The cloud orchestrator can automatically respond to inter-data center workload requirements. Based on policy and bandwidth schedules, the orchestrator works with the control layer to connect destination data centers and schedule transactions to maximize the performance of the cloud service. Through communication with the network control layer, it can select the best combination of connection profile, time window and cost.

Summary

Whether built with SDN or other technologies, an intelligent network can transform a facilities-only architecture into a fluid workload orchestration system, and a scalable, intelligent network can offer performance-on-demand, assigning network quality and bandwidth per application.

This intelligent network is the key ingredient in enabling enterprises to interconnect data centers with application-driven programmability, enhanced performance, and optimal cost.

Written by Jim Morin, Product Line Director, Managed Services & Enterprise at Ciena

More under: Cloud Computing, Data Center

Poll on New TLDs Shows Value of Brand Loyalty, Willingness to Try New Equivalents to .COMs

Thu, 2013-06-13 21:01

Internet users are willing to navigate to, use, and trust new web addresses that will be flooding the Internet later this year, and brand name websites will carry more weight with Internet users than generic sites.

These are among the results of a public opinion survey commissioned by FairWinds Partners, a consultancy that specializes in domain name strategy.

The poll also found that the owners and operators of these new addresses should be technically prepared or risk driving away or losing traffic intended for their sites.

Hundreds of new gTLDs are expected to roll out later this year and a total of approximately 1,400 could be operational in a year or so.

Internet users are untethered from the past, broad-minded, and, based on this survey, receptive to new ways of doing things. Respondents prefer to take control of their Internet experiences, expect to find their favorite brands at intuitive sites, and will adapt to the new addresses without too much difficulty.

The online poll of 1,000 Internet users found that consumers have an open mind about new gTLDs even though they remain a largely unknown and abstract concept:

  • 57 percent said they had no preference or would be willing to navigate to a new gTLD media website, while 43 percent said they would stick with a .COM media site
  • 52 percent said they had no preference or would be willing to shop on a new gTLD, compared to 48 percent who preferred .COM
  • 53 percent said they had no preference or would be willing to bank with a financial institution operating a new gTLD site, compared to 47 percent who would stick with .COM

The poll found that consumers trust the brands that they know and likely would embrace brand name gTLDs without much hesitation:

  • 14 percent of respondents navigating to a media site would prefer a brand name site, such as .CBS compared to 9 percent who preferred a generic such as .NEWS
  • 17 percent of respondents shopping online would navigate to a brand name site compared to 9 percent who preferred a generic gTLD, such as .SHOES
  • 15 percent of those banking online said they would prefer a brand gTLD, for example .CITI compared to 10 percent who opted for a generic site such as .LOANS

Brand owners — whether they applied for a new gTLD or not — can draw valuable lessons from FairWinds' research. Internet users indicated they expect to see their favorite brands adopt and use new gTLDs and that poor online user experiences will lead to missed revenue and opportunities for brand owners.

The better brand owners understand consumer behavior, the better prepared they will be to optimize use of their new gTLDs and remain competitive in the new Internet space.

The poll, conducted by InsightsNow! in April, questioned Internet users between the ages of 13 and 64. This is the second in a series of polls FairWinds is undertaking to gauge the impact of new gTLDs on consumers and businesses. The first FairWinds market research survey may be read here.

Written by Josh Bourne, Managing Partner at FairWinds Partners

More under: Domain Names, Top-Level Domains

The Cable Show Experience

Thu, 2013-06-13 19:20

National Cable & Telecommunications Association (NCTA) Cable Show, Washington, DC, June 10-12, 2013 (Photo: NCTA)

I had the opportunity this week to take part in the National Cable & Telecommunications Association (NCTA) Cable Show — a traveling show in the U.S. that took place in Washington, DC, this year. The Cable Show is one of the largest events of the cable industry, and this year was my first time attending.

In the U.S. capital, it's difficult to avoid the topic of politics and its effects on the telecommunications industry. This was especially true during The Cable Show in light of recent news around communication monitoring, wiretapping, and how far it's going. But while this was a hot topic on the minds of attendees, politics for the most part was left at the door when it came to the exhibition floor.

As expected, a wide variety of exhibitors brought their best efforts to The Cable Show, displaying tools, software, services, and content. From mega-sized displays showcasing the latest TV shows and series; to rubbing shoulders with famous actors, business celebrities, and reality TV cast members; to viewing the very precise equipment and software that allows all this to come true — this show had it all.

The number of companies in attendance and their technology categories are useful in identifying trends for where the cable industry is heading:

  • Content was definitely at the core of the show, with 81 exhibitors involved in cable programming
  • Multi-screen content and HDTV were also well represented, with more than 40 vendors each
  • IPTV followed closely, with 37 exhibiting companies
  • Mobile apps and cloud services also had a presence

This focus on content and new strategies indicates a disruption in traditional cable TV, the strengthening of over-the-top (OTT) services, and the adoption of IPTV. It also raises the question — how long before Quadrature Amplitude Modulation (QAM), which is the format used by cable providers to transmit content, is replaced by IP?

Even with all this on site, two displays placed strategically side-by-side caught my attention. One was called the "Observatory" and celebrated the history and evolution of the cable industry and its technologies. The other, "Imagine Park," looked at the path ahead of us. What is the cable industry working on to stay relevant, when competition is continuously increasing?

Technology is all about evolution and creating solutions to problems. That said, one cannot simply focus on the future and ignore the past, which is why these displays were so effective. It's good to see that someone is thinking of that — celebrating how far the cable industry has come and how far it will continue to take us.

Written by Rick Oliva, Sales Support Engineering Manager at Incognito Software

More under: Access Providers, Broadband, IPTV, Telecom

So, Your gTLD was Approved - What Now?

Thu, 2013-06-13 18:29

The world is just waking up to the fact that the Internet Corporation for Assigned Names and Numbers (ICANN) has been accepting applications for new generic top-level domains, or gTLDs, since 2012 and that hundreds of these gTLDs have already been approved through Initial Evaluation, with more being approved every week. It is expected that the new extensions will begin appearing online in the second half of 2013, and over 1,000 new extensions will likely be added to the Internet by 2014.

But if you're reading this, you've known this for a long time. In fact, you may have just gotten word that your application is approved.

Congratulations! Awesome news… but, what now?

You've put all of this time, money and effort into getting a valuable domain extension, but even if your application has been approved, there is still a lot to be done before you're able to go out and start marketing and selling. Consider taking this time to hone your strategy and prepare for a successful launch.

You're not alone if this is the first time you, or your company, has launched a TLD — after all, ICANN has controlled the Domain Name System very tightly until now. So how do you know what to expect next? Once you've gotten the "go ahead," how do you know where you should focus your efforts to launch with a big splash and finally begin generating revenue?

As a company that has helped launch TLDs in the past, and as a neutral observer in the new TLD process (i.e. we did not apply for any TLDs of our own), Sedo offers here a few tips gained over its ten-plus years in the industry.

When developing a launch strategy, it's important to place significant emphasis on your registry's premium names. Obviously they're the most valuable, so selling a few good names up front has the potential to jump start revenue. In addition, getting a few premium names in the hands of end users that have aggressive marketing plans (or budgets) is free advertising for your registry and could drive general interest. With that in mind, here are five things you need to think about when preparing your premium sales and auction strategy.

1. Data Gets You Started; Not All Premium Lists Are Created Equal – Identifying your most valuable domain assets is one of the first things you should do. But, at the same time, as you create a list of premium addresses, think about which ones you may want to place on reserve for later sales. Put simply, you need to know which possible addresses will be worth more to you than the others. You have one chance to do this correctly — and you don't want to leave money on the table or let a potential "category killer" slip through the cracks because you didn't correctly identify the opportunities in front of you.

A historical view is important in order to accurately crunch the numbers. What has been popular in the past? What types of domains have consistently sold or increased in value? Which ones have decreased? How about international opportunities — have you considered what domains wouldn't be successful in North America, but might be of huge interest in other parts of the world? What non-English domains could be valuable with your TLD?

History, as they say, offers lessons, and without access to historical data to make your decisions you will already be at a disadvantage. You need to use every advantage possible to ensure that you get the best possible list, so you don't miss out on potential revenue.

2. Auction Everything? Or Develop a Sales Strategy? Auctions are a good way to generate revenue quickly. However, many times the highest sale prices don't come in an auction. This is because it can be difficult for the 'perfect' buyer(s) to know that the auction is happening on a given date. Many registries are neglecting the idea of using a longer-term approach, including sales distribution channels and premium domain marketplaces.

It is important to understand, however, that there is no "one-size-fits-all" way to sell domains under your TLD:

• It's important to actively look for strategic deals early on, via the Brokerage and Business Development of your premium domains. Your focus should be finding end users that will develop, use and actively market their company or product under your new TLD.

• All new TLDs must hold a Sunrise period to give trademark holders an opportunity to pre-register related names. Sunrise is a key opportunity for early cash flow, but you need to properly drive awareness of when the period will begin and end and identify potential leads.

• A Landrush period is another excellent way to secure cash flow for your extension. It's customary to hold a Landrush so anyone can submit an application to get early access to the domains they really want. But did you know a Landrush is not something that's mandated by ICANN? It's optional, so it's worth carefully considering the benefits (quick cash flow, free publicity from usage of the domain) and drawbacks (potential for domain value to increase if extension is successful) to your new registry. Competition and conflict auctions give some high demand domains a strong chance at very high values (higher than you may ever have expected).

Auctions are a key element to your success as well – and to auction domains successfully, you need to have global reach, a way to weed out fraudulent bidders and the international expertise to make sure the widest audience possible can bid. A good thing to remember is that there is a huge appetite for English language domains outside of North America, so make sure you can reach those buyers globally!

3. Take Your Marketing Strategy Seriously – Businesses today understand the power of a good domain name. Whether a premium "category killer" name or a company's own proper name, the right domain makes a company easy to find and helps it stand out in searches. Businesses will want to get in on names they may have had to pay six- or seven-figure sums for as a .com, or names that line up with their existing or planned products. This is why you need to start marketing now. Where is my market and how can I reach it?

The first step is developing a consistent message that will connect with your most valuable audience, be it a specific audience like skiers (.ski, for example), or a general one like business technology users (.web, for example). When it comes to executing, stick to that message across all channels to really drive it home. You need to take marketing seriously and pick your strategy wisely — and early.

4. Choose Your Registrar Distribution Strategy – Target registrars that make the most sense for distributing your new gTLD. You want to look for global reach and areas of activity — in short, who do I need to work with to reach the greatest number of potential buyers in the shortest amount of time?

Registrars may actually come into the picture when you're considering premium name sales too. Many registrars are not set up to sell premium domains, while others have joined premium networks that have been in place for years, enabling end users that are looking for a "regularly priced" name to also see an option for a premium name that may suit them better. When choosing registrars and premium sales partners, it's worth looking into synergies between the two so all your domains get the best visibility in front of potential buyers. When doing so, make sure the partner you choose will act as a true partner, helping with launch, promotion and everything in between.

5. Data Keeps Your TLD Strong; Build Valuable Market Data and Harness it Moving Forward – Data is key, which is why it bookends a solid strategy. If you're successful with the above and have a solid sales and marketing strategy in place, then this will be a repeatable process and you'll want to track sales and customer data. It's important to retain and refine your data to help you grow as the TLD grows. A strong partner can help you to do this and re-market or continue marketing to the same groups in a way that keeps your premium domain strategy fresh.

The planning phase that you enter as soon as your application makes it through initial evaluation — if not sooner — is a critical period that will ultimately determine whether your new gTLD is a success or a failure. There are several steps that need to be undertaken correctly, from identifying which domains will be the most valuable under your new extension, to making sure that you find the audience most likely to purchase them. Taking the extra time to consider these steps carefully and begin executing on them immediately will give you a lasting advantage over other new gTLDs as they are approved and released.

Written by Kathy Nielsen, Head of Business Development, New gTLDs, Sedo

More under: ICANN, Top-Level Domains

Broadband Meets Content at ANGA COM 2013

Wed, 2013-06-12 23:31

ANGA COM Exhibition and Congress, 4-6 June 2013, Cologne, Germany (Photo: ANGA COM)

The Association of German Cable Operators' annual trade show has a new name. Europe's principal cable industry exhibition and convention was previously known as ANGA Cable, but last week (June 4-6, 2013), the show launched as ANGA COM. This new title — an abbreviation of communication — highlights how the convergence of technologies and networks is blurring the line between cable operators and other communication and entertainment services providers.

This new focus was reflected in the many service-oriented sessions centered on broadband, video, and all forms of entertainment delivered to consumers via various modes of access technologies. The annual convention in Cologne, Germany, brought together broadband, cable, and satellite operators, as well as their vendor partners. For the first time, major telcos Deutsche Telekom and Vodafone were invited and took to the stage to discuss trends, technologies, and how broadband is working to deliver content.

Germany is a major player in the cable industry and holds the lion's share of cable homes in Europe, with 18 million households subscribed to cable. Digital transition has helped drive cable adoption, and today, about half of all German cable households use digital TV packages offered by broadband cable, especially HDTV, VOD, DVR, and TV sets with integrated digital receivers. The German cable industry is poised for further growth, with Europe's largest cable operator, UPC — which operates cable services in 13 European countries — citing Germany as its "growth engine" and the company's CEO stating that some 40% of UPC's growth comes from this country.

Of course, cable faces fierce competition in Europe, as it does elsewhere in the world. Recent research from IHS Screen Digest shows that during the five-year period from 2007 to 2012, European cable operators lost 1.4 million subscribers. So where did the growth come from, and why did convention attendees seem so upbeat about cable's future, as evidenced by the show's record 16,000 attendees and 450 exhibiting companies? The fact is, it's not all doom and gloom. The same IHS research shows that cable actually gained 17.8 million revenue-generating units (RGUs) during the same five-year period in which it lost subscribers. RGU growth drives total revenue growth and is a positive sign for an industry facing fierce competition from traditional telcos, satellite, and OTT players.

So, what's fueling this RGU growth? There are a few factors at play here:

  • Digital transition
  • "Triple-play" or bundling of data, voice, and video
  • The multitude of new, value-added services

Value-added services have been made possible by the growing number of available consumer devices — tablets, laptops, PCs, and smartphones. These new services include home security and Wi-Fi, both in and around the house, as well as in public areas. Multi-screen services are also enabling cable operators to offer OTT-like, proprietary video services.

These offerings are becoming essential as consumers demand easy access to content as they move from room to room inside the home, as well as outside in public places such as stadiums, theaters, and shopping malls. It's not surprising, then, that multiscreen was a hot topic at ANGA COM, along with the usual topics of fiber expansion, IPTV, video on demand, smart TV, software solutions, and consumer electronics.

For an uninterrupted multi-screen experience, consumers need to be able to easily access content across devices, and device provisioning and service activation should occur seamlessly. This enables customers to enjoy the same quality of experience across multiple devices, both inside and outside the home. At the same time, service providers need to be aware of security concerns associated with the multi-device consumption of content — particularly security of content and consumer privacy.

Operators are also turning their attention to the monetization of services associated with multiple-device use. Clearly, the multi-screen experience is changing the dynamics of customer services and technical support. Consumers want ease of use and an uninterrupted experience, where they can simply order a service that is then provisioned so quickly that they don't even realize it's happened, and everything works without any issues. For operators, this type of experience requires network reliability and customer service representatives (CSRs) who have all the necessary information and tools at their fingertips for fast issue resolution.

The demand for quality of experience puts pressure on service providers to understand subscriber usage behavior patterns. Solutions from vendors like Incognito offer operators ways to construct meaning out of the massive amount of bandwidth utilization data that they accumulate, and enable them to use that intelligence to improve the user experience.

ANGA COM provided an excellent opportunity for us to catch up with customers and partners, and strategize ways to take advantage of new technologies to provide a better service for customers and their subscribers. Bring on ANGA COM 2014!

Written by Will Yan, Senior VP, Worldwide Sales at Incognito Software

More under: Broadband, IPTV

Google Asks U.S. Government to Allow Transparency for Its National Security Request Data

Tue, 2013-06-11 23:09

In an open letter published today, Google has asked the U.S. Attorney General and the Federal Bureau of Investigation for more transparency regarding national security request data in light of the NSA data collection controversy. The letter, signed by David Drummond, Google's Chief Legal Officer, states in part:

"We have always made clear that we comply with valid legal requests. And last week, the Director of National Intelligence acknowledged that service providers have received Foreign Intelligence Surveillance Act (FISA) requests.

Assertions in the press that our compliance with these requests gives the U.S. government unfettered access to our users' data are simply untrue. However, government nondisclosure obligations regarding the number of FISA national security requests that Google receives, as well as the number of accounts covered by those requests, fuel that speculation.

We therefore ask you to help make it possible for Google to publish in our Transparency Report aggregate numbers of national security requests, including FISA disclosures — in terms of both the number we receive and their scope. Google's numbers would clearly show that our compliance with these requests falls far short of the claims being made. Google has nothing to hide."

More under: Internet Governance, Law, Policy & Regulation, Privacy

Intelligence Exchange in a Free Market Economy

Tue, 2013-06-11 20:09

Speaking on behalf of myself,

Dear U.S. Government:

I was reading some interesting articles the past few weeks:

http://www.reuters.com/article/2013/05/15/...
http://www.networkworld.com/news/2013/...
http://www.securityweek.com/dhs-share-zero-day-intelligence
http://www.csoonline.com/article/733557/...
http://www.dhs.gov/enhanced-cybersecurity-services

and with the understanding that:

  • my livelihood right now depends on building tools that facilitate data-sharing and trust relationships
  • I'm sure there are misunderstandings in the reporting
  • this process requires some level of sustainability to be effective
  • this process requires some extra care with respect to sensitivity, legal and ethical constraints (not to mention cultural implications)

The USG is doing a huge disservice to protection and defense in the private sector (80%+ of CIKR1) by creating an ECS that contains a monetary incentive for a few large players to exert undue control over the availability, distribution, and cost of security threat indicators. While there may be a legitimate need for the federal government to share classified indicators with entities protecting critical infrastructure, the over-classification of indicator data is a widely recognized issue that presents real problems for the private sector. ECS as currently construed creates monetary incentives for continued or even expanded over-classification.

The perception of a paid broker-dealer relationship with the USG sets a very unsettling precedent. Private citizens are already concerned about the relationship between the intelligence community and the private sector, and these types of stories do very little to help clear the FUD. Compounded with the lack of transparency about what constitutes classified data, how it protects us and the relationship agreement between the entities sharing the data, this type of program could do much more economic harm than good. Many private sector orgs have indicators that the USG would find useful, but have given up trying to share them. The current flow, which suggests we would send data through competitors to get it to the USG, would never scale well in a free-market economy.

The network

As with the "PDF sharing programs" of the past (err… present?), it also appears to be a system that adds cost to the intelligence network with the addition of each new node, rather than reducing it. High barriers to entry for any network reduce that network's effectiveness, and in a free market economy, eventually isolates those nodes from the greater network where the barrier to entry is lower. I get it, I understand why certain things are happening, I'm arguing that it's NOT OK. My intent is to widen the dialog a bit to see where we, as an operational community can step up and start doing a better job of leading, instead of allowing the divide between the USG community and the operational community to widen.

Before tackling ECS, the USG should strongly address the over-classification issue. It should establish efficient and effective means for engaging with existing operational information exchanges that are working now in the private sector. Most of the indicators useful to the non-govt community are not classified, and in my understanding, much of the classified intel is classified due to its "source, method and/or attribution", not the actual threat data. Finding a way to mark the data appropriately and then share it directly with a (closed) community will be a good thing. Washing the data through a classified pipe does nothing to make the data more useful to the non-classified community. While the classified intelligence exchange problem still exists, figuring out how to scale it in the unclassified environment will more aggressively help solve scaling it in a classified environment (more players can help solve similar problems across many spaces).

Economics

In my opinion, we should be leveraging existing, trusted security operational fabrics such as the ISC (SIE), TeamCymru, Shadowserver, Arbor Networks, Internet Identity, the APWG and the ISACs (to name a few, based on the most recent industry-wide effort, the DNS Changer botnet takedown) that have facilitated great public/private partnerships in the past2. Leveraging this existing framework for intelligence exchange would have been a much more valuable investment than what this is perceived to be, or what development has taken place thus far. There are also a number of ISPs3 who actively pursue a better, cleaner internet and have proven to be great partners in this game.

The tools and frameworks for this type of intelligence sharing have existing, semi-developed (workable) economic models and, more importantly, they consist of those who actually run the internet (ISPs, DNS providers, malware researchers, a/v companies, large internet properties, financial institutions, international law enforcement, policy advisors (ICANN/ARIN/etc) and other sector-based CSIRTs). These operational communities have already taken down botnets, put people in jail and, by some estimates, saved the economy billions of dollars at a global scale over the last few years. The process has proven to work and scale, and it is rapidly maturing.

It is my opinion that a subsection of USG agencies is falling behind in the realm of intelligence exchange with the operations space. The rest of the world is moving towards the full-scale automation of this exchange across political boundaries and entire cultures, all while finding unique, interesting and market-friendly ways of reducing our "exchange costs". As a nation, we're at a crossroads. There are operational folks from within the USG who actively participate in these communities and help make the Internet safe and "do the right thing". There are elements within the USG (mainly on the "national security" side) that appear to operate in isolation.

The argument I'm sure to hear is "well, wait, we're working on that!". In my opinion, whatever "that" is, is mostly a re-invention of existing technologies and frameworks that will mostly only ever be adopted by those that get funding in the .gov space to implement it, which still isolates the USG from what the rest of the operational community is already doing. Competition of ideas is good, it encourages innovation and all, but it's something we should be taking a hard look at and asking if it's the best use of our limited resources…

I've been pitched my own ideas from enough belt-way startups that it almost makes me want to scream… almost.

The bigger picture

My concern is that it's becoming evident that the decision makers for some agencies are making choices that could ultimately isolate their operational folks from the rest of the operational world (whether in terms of principle, or in terms of trust, or fear of legal action, etc). As private industry progresses and parts of the USG fall further and further behind, this can only hurt us as a nation, and as a culture.

My suggestions:

  • fix the classification problem with respect to non-attribution type threat intelligence
  • in parallel with the classified sharing projects, DHS should be working more aggressively with the rest of industry on as much unclassified intel as possible, to figure out where we can bridge the gaps
  • encourage participation with things like the NCFTA, SIE, TeamCymru, Arbor Networks, Shadowserver, Internet Identity, the APWG and the ISACs (groups with a noted history as the industry standards for operationalizing and disseminating threat intelligence) when working to share intelligence, not through private 3rd parties.
  • encourage long-term participation with the FBI at the NCFTA, and take lessons learned from their adventures in intelligence sharing and locking up bad guys.

If you want to be more successful (reads: we want you to be more successful), put less emphasis on standards and on how to disseminate classified information, and more on how to aggressively share unclassified intel with your constituents. We have lots of data we'd like to share with you to help protect our national investments. If the USG can get to that place (without invoking something like CISPA, which makes zero sense in a free market economy), the classified problem will solve itself, while only accounting for .001% of the data being shared (reads: will not be such a distraction).

I know some in the USG understand this and are fighting the good fight, but it's clear that not enough at the higher levels of government do (reads: have you written your elected officials lately?). When you combine this with a haphazard style of reporting (terrible at best) and the lack of a clear message (reads: translucency), these types of ill perceptions can run rampant and do more economic harm than good to the national process.

I personally will be pushing harder in the coming months to figure out how we, as the operational community, can do more to bring more USG folks into the fold in terms of building out sustainable operational relationships, and to facilitate ways we can share classified intel more aggressively in the future. My goal is that, in the coming year or two, we can change the culture of over-classification while bridging the gap with the rest of the operational industry when it comes to protecting the internet. In order to protect ourselves from economic threats that vastly outweigh our individual business models, there has to be a better solution than the [perceived?] sale of classified intel.

Why we're re-inventing the wheel, and why our federal government clamors for "the need to share intel with industry" but appears not to be listening, at least to the right people who have a good record of sharing highly sensitive intelligence globally and operationalizing it, is beyond me. Washington is a very large echo chamber, and is such a large economy unto itself, that sometimes I feel like the process can drown out what's going on just a few miles down the road.

Sincerely,

Wes.

1 http://www.dhs.gov/blog/2009/11/19/cikr

2
http://www.fbi.gov/news/stories/2011/november/malware_110911/...
http://www.dcwg.org/isps/
http://www.dcwg.org/detect/
http://www.nytimes.com/2009/03/19/technology/...
http://krebsonsecurity.com/2012/05/microsoft-to-botmasters-abandon-your-inboxes/...

3 As denoted at the bottom of http://www.dcwg.org/detect:

• AT&T
• Bell Canada
• Century Link
• Comcast
• COX
• Shaw Communications
• Telecom Italia
• Time Warner
• Verizon

Written by Wes Young, Security Architect

More under: Cybercrime, Security

CAN SPAM Issues in Zoobuh V. Better Broadcasting

Tue, 2013-06-11 18:26

Last week a Utah court issued a default judgment under CAN SPAM in Zoobuh vs. Better Broadcasting et al. I think the court's opinion is pretty good, even though some observers, such as the very perceptive Venkat Balasubramani, have reservations.

The main issues were whether Zoobuh had standing to sue, whether the defendants' domain names were obtained fraudulently, and whether the opt-out notice in the spam was adequate.

Standing

The standing issue was easy. Zoobuh is a small ISP with 35,000 paying customers who spends a lot of time and money doing spam filtering, using their own equipment. That easily met the standard of being adversely affected by spam, since none of the filtering would be needed if it weren't for all the spam.

Domain names

CAN SPAM prohibits "header information that is materially false or materially misleading." The spammer used proxy registrations at eNom and Moniker. The first subquestion was whether using proxies is materially false. Under the California state anti-spam law, courts have held that they are, and this court found that the California law is similar enough to CAN SPAM that proxies are materially false under CAN SPAM, too.

Venkat has reservations, since in principle one can contact the domain owner through the proxy service, but I'm with the court here. For one thing, even the best of proxies take a while to respond, and many are in fact black holes, so the proxy does not give you useful information about the mail at the time you get or read the mail. More importantly, businesses that advertise are by nature dealing with the public, and there is no plausible reason for a legitimate business to hide from its customers. (Yes, if they put real info in their WHOIS they'll get more spam. Deal with it.)

CAN SPAM also forbids using a "domain name, ... the access to which for purposes of initiating the message was obtained by means of false or fraudulent pretenses or representations." Both eNom's and Moniker's terms of service forbid spamming, so the court found that the senders obtained the domain names fraudulently, hence another violation. Venkat finds this to be circular reasoning, arguing that the court found the spam to be illegal because the spam was illegal, but in this case, he's just wrong.

Despite what some bulk mailers might wish, CAN SPAM does not define what spam is, and mail that is entirely legal under CAN SPAM can still be spam. eNom's registration agreement forbids "if your use of the Services involves us in a violation of any third party's rights or acceptable use policies, including but not limited to the transmission of unsolicited email". Moniker's registration agreement prohibits "the uploading, posting or other transmittal of any unsolicited or unauthorized advertising, promotional materials, "junk mail," "spam," "chain letters," "pyramid schemes," or any other form of solicitation, as determined by Moniker in its sole discretion." There is no question that the defendants sent "unsolicited email" or "unsolicited advertising" and there's nothing circular about the court finding that the defendants did what they had agreed they wouldn't.

Opt out notice

The third issue is whether the spam contained the opt-out notices required by CAN SPAM. There were no notices in the messages themselves, only links to remote images that presumably were supposed to contain the required text. As the court said:

The question presented to the Court in this case is whether Required Content provided in the emails through a remotely hosted image is clearly and conspicuously displayed. This Court determines that it is not.

One issue is that many mail programs do not display external images, either for security reasons or (as in my favorite program, Alpine) because they don't display images at all. The court cites multiple security recommendations against rendering remote images, and concludes that there's nothing clear or conspicuous about a remote image. Even worse, the plaintiffs said that the remote images weren't even there when they tried to fetch them.

The real point here is that the senders are playing games. There is no valid reason to put the opt-out notice anywhere other than text in the body of the message, which is where every legitimate sender puts it.
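
To illustrate the distinction the court drew, here is a small sketch that checks whether any opt-out wording appears as actual text in a message body, or whether the body consists only of references to remote images. The message and the regular expressions are made up for illustration; this is a toy heuristic, not a compliance test.

```python
import email
import re
from email import policy

# Simplistic illustration of the court's distinction: opt-out wording present
# as text in the body vs. a body that contains only remote images.
raw_message = b"""\
From: promo@example.com
To: user@example.net
Subject: Big sale
Content-Type: text/html

<html><body><img src="http://example.com/offer.gif"></body></html>
"""

msg = email.message_from_bytes(raw_message, policy=policy.default)
body = msg.get_body(preferencelist=("plain", "html")).get_content()

has_optout_text = bool(re.search(r"unsubscribe|opt[- ]?out", body, re.I))
only_remote_images = (bool(re.search(r'<img[^>]+src="https?://', body, re.I))
                      and not has_optout_text)

print("Opt-out present as text:", has_optout_text)                     # False here
print("Opt-out (if any) only via remote images:", only_remote_images)  # True here
```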

Summary

Overall, I am pleased at this decision. The court understood the issues, was careful not to rely on any of the plaintiff's claims that couldn't be verified (remember that the defendant defaulted, so there was no counter argument) and the conclusions about proxy registrations and remote images will be useful precedents in the next case against spammers who use the same silly tricks.

Written by John Levine, Author, Consultant & Speaker

More under: Law, Spam

NSA Builds Its Biggest Data Farm Amidst Controversy

Tue, 2013-06-11 04:17

As privacy advocates and security experts debate the validity of the National Security Agency's massive data gathering operations, the agency is putting the finishing touches on its biggest data farm yet. The gargantuan $1.2 billion complex at a National Guard base 26 miles south of Salt Lake City features 1.5 million square feet of top secret space. High-performance NSA computers alone will fill up 100,000 square feet.

Read full story: NPR

More under: Data Center, Privacy

World IPv6 Day: A Year in the Life

Mon, 2013-06-10 20:58

On 6 June 2012 we held the World IPv6 Launch. Unlike the previous year's event, World IPv6 Day, where the aim was to switch on IPv6 on as many major online services as possible, the 2012 program was somewhat different. This time the effort was intended to encourage service providers to switch on IPv6 and leave it on.

What has happened since then? Have we switched it on and left it on? What has changed in the world of IPv6 over the past 12 months? Who's been doing all the work? In this article I'd like to undertake a comparison of then-and-now snapshots of IPv6 deployment data. For this exercise I'm using the data set that we have collected through a broad-based sampling of Internet users via online advertisements.

The daily snapshots of the V6 measurement can be found here, and the breakdown of this data by economy and by provider can be found on this page and here.

First a look at the big number picture

A year ago, in June 2012, we measured that some 0.60% of the world's Internet user population was able to successfully retrieve a dual-stack web object using IPv6. At the time the total user population of the Internet was estimated at some 2.24B users, so 0.60% equates to 13.5M users who were using a working IPv6 protocol stack, and preferring to use IPv6 when given a choice of protocols by a dual-stack service.

What does it look like one year later?

In June 2013 we see a rolling average of 1.29% of the Internet's users preferring to use IPv6 when presented with a dual-stack object to fetch. With the Internet user population currently estimated at some 2.43B users, that figure equates to 29.3M users.

In one sense, growth from 0.60% to 1.29% of the Internet sounds like a very small step, but at the same time growth from 13.5M to 29.3M users is a significant achievement for 12 months, easily doubling the extent of IPv6 use in this period.
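
As a quick sanity check on the "easily doubling" observation, the figures quoted above can be compared directly; this is a trivial sketch using the article's own rounded numbers.

```python
# Back-of-the-envelope check of the growth figures quoted above.
users_2012, users_2013 = 13.5e6, 29.3e6  # IPv6-preferring users
share_2012, share_2013 = 0.0060, 0.0129  # share of all Internet users
net_2012, net_2013 = 2.24e9, 2.43e9      # estimated Internet user population

print(f"IPv6 user growth:     {users_2013 / users_2012:.2f}x")  # ~2.17x
print(f"IPv6 share growth:    {share_2013 / share_2012:.2f}x")  # ~2.15x
print(f"Internet user growth: {net_2013 / net_2012:.2f}x")      # ~1.08x
```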

The tracking of this metric across the past 12 months is shown in Figure 1. There is some indication that there was a significant exercise in the deployment of IPv6 in June 2012 at the time of the World IPv6 Launch event, but also some evidence of shutting IPv6 down in some parts of the network in the months thereafter. There was another cycle of growth and decline in the period November 2012 to March 2013, and another period of further growth from March 2013 until the present day.

Figure 1 – IPv6 Deployment: June 2012 - June 2013

Where did IPv6 happen?

One way to look at IPv6 deployment is to examine deployment efforts on a country-by-country basis. Which countries were leading the IPv6 deployment effort twelve months ago?

Table 1 contains the list of the top 20 countries, ordered by percentage of the Internet user population who are showing that they can use IPv6, from June 2012.

2012 Rank | Economy | % of Internet Users who use IPv6 | # of IPv6 Users
----------|---------|----------------------------------|----------------
1 | Romania | 7.40% | 641,389
2 | France | 4.03% | 2,013,920
3 | Luxembourg | 2.59% | 12,049
4 | Japan | 1.75% | 1,766,799
5 | Slovenia | 1.07% | 15,175
6 | United States of America | 1.01% | 2,500,684
7 | China | 1.01% | 5,209,030
8 | Croatia | 0.85% | 22,551
9 | Switzerland | 0.80% | 51,575
10 | Lithuania | 0.66% | 13,845
11 | Czech Republic | 0.55% | 39,694
12 | Norway | 0.51% | 23,333
13 | Slovakia | 0.44% | 19,112
14 | Russian Federation | 0.39% | 238,576
15 | Germany | 0.32% | 217,494
16 | Hungary | 0.31% | 19,896
17 | Portugal | 0.30% | 16,406
18 | Netherlands | 0.27% | 40,870
19 | Australia | 0.25% | 49,425
20 | Taiwan | 0.24% | 38,843
Table 1 – IPv6 Deployment, ranked by % of national users: June 2012

That's an interesting list. There are some economies in this list that were also rapid early adopters of the internet, such as the United States, Japan, Norway and the Netherlands, and also some of the larger economies, such as France, Japan, the United States, the Russian Federation and Germany, which are members of the G8 (Italy, the United Kingdom and Canada are the other members of the G8). Some 15 of the 20 are European economies, and neither South America nor Africa is represented on this list at all.

Also surprising is the economy at the top of the list at the time. Romania's efforts earlier in 2012 to provision its fixed and mobile service networks with IPv6 produced an immediate effect: after commencing the public deployment of IPv6 in late April 2012, some 7.4% of its user base was using IPv6 by June 2012.

Interestingly, in percentage terms, the numbers trail off quickly, so that only 10 countries were above the global average, and by the time you get to the 20th-ranked economy in this list, Taiwan, the level of IPv6 deployment was some 0.24%. So the overall picture could be described as "piecemeal", with significant efforts to deploy IPv6 in just a small number of countries.

There is another way to look at this 2012 list, which is to perform the same ranking of economies by the population of IPv6 users, as shown in the following table:

2012 Rank | Economy | % of Internet Users who use IPv6 | # of IPv6 Users
----------|---------|----------------------------------|----------------
1 | China | 1.01% | 5,209,030
2 | United States of America | 1.01% | 2,500,684
3 | France | 4.03% | 2,013,920
4 | Japan | 1.75% | 1,766,799
5 | Romania | 7.40% | 641,389
6 | Russian Federation | 0.39% | 238,576
7 | Germany | 0.32% | 217,494
8 | Indonesia | 0.17% | 94,543
9 | Switzerland | 0.80% | 51,575
10 | Australia | 0.25% | 49,425
11 | United Kingdom | 0.08% | 41,461
12 | Netherlands | 0.27% | 40,870
13 | Czech Republic | 0.55% | 39,694
14 | Taiwan | 0.24% | 38,843
15 | India | 0.03% | 36,881
16 | Ukraine | 0.21% | 31,933
17 | Malaysia | 0.18% | 30,034
18 | Thailand | 0.15% | 27,617
19 | Brazil | 0.03% | 26,051
20 | Nigeria | 0.06% | 25,149
Table 2 – IPv6 Deployment, ranked by IPv6 users: June 2012

Of the 13.5M IPv6 users a year ago, some 5M were located in China, and the four economies of China, the United States, France and Japan together account for 85% of the total estimated IPv6 users of June 2012. This observation illustrates a somewhat fragmented approach to IPv6 adoption in mid-2012, where Internet Service Providers in a small number of economies had made significant levels of progress, while in other economies the picture of IPv6 deployment ranged from experimental or highly specialised programs through to simply non-existent.

There are a number of interesting entrants in this economy list, including India, Brazil and Nigeria, which point to some levels of experimentation by some service providers in the provision of IPv6 services in other economies. Hopefully this experimentation was a precursor to subsequent wider deployment programs.

Was this the case? What has happened in the ensuing year?

Here are the same two tables, using IPv6 use data as of June 2013, showing a comparable perspective of IPv6 deployment as it stands today.

2013 Rank | Economy | % of Internet Users who use IPv6 | # of IPv6 Users
----------|---------|----------------------------------|----------------
1 | Romania | 10.84% | 1,053,237
2 | Switzerland | 10.72% | 700,777
3 | Luxembourg | 6.96% | 32,535
4 | France | 5.46% | 2,824,465
5 | Belgium | 4.17% | 339,651
6 | Japan | 4.13% | 4,137,476
7 | Germany | 3.24% | 2,212,062
8 | United States of America | 2.72% | 6,768,264
9 | Peru | 2.42% | 273,370
10 | Czech Republic | 2.12% | 157,203
11 | Singapore | 1.58% | 54,060
12 | Norway | 1.21% | 53,677
13 | Slovenia | 0.92% | 13,230
14 | China | 0.90% | 4,651,953
15 | Greece | 0.78% | 44,572
16 | Portugal | 0.76% | 45,408
17 | Taiwan | 0.72% | 120,180
18 | Netherlands | 0.70% | 109,425
19 | Australia | 0.69% | 121,256
20 | Slovakia | 0.52% | 21,169
Table 3 – IPv6 Deployment, ranked by % of national users: June 2013

This table clearly shows that Switzerland, Belgium, Germany, Peru, the Czech Republic and Greece have made a significant change in their level of IPv6 deployment in the last 12 months. We now see 7 of the 20 economies as being non-European economies. Eleven of these economies have IPv6 usage rates above the global average of 1.3%, an increase of 2 since 2012.

We can plot these numbers onto a world map, as shown in Figure 2, using a colouring scale from 0 to 4% of each national Internet user population that is capable of using IPv6.

Figure 2 – IPv6 Deployment, ranked by % of national users: June 2013

The following table shows the estimated IPv6 user population per economy in June 2013.

2013 Rank | Economy | % of Internet Users who use IPv6 | # of IPv6 Users
----------|---------|----------------------------------|----------------
1 | United States of America | 2.72% | 6,768,264
2 | China | 0.90% | 4,651,953
3 | Japan | 4.13% | 4,137,476
4 | France | 5.46% | 2,824,465
5 | Germany | 3.24% | 2,212,062
6 | Romania | 10.84% | 1,053,237
7 | Switzerland | 10.72% | 700,777
8 | Belgium | 4.17% | 339,651
9 | Peru | 2.42% | 273,370
10 | Czech Republic | 2.12% | 157,203
11 | Russian Federation | 0.21% | 143,677
12 | United Kingdom | 0.27% | 135,076
13 | Australia | 0.69% | 121,256
14 | Taiwan | 0.72% | 120,180
15 | Netherlands | 0.70% | 109,425
16 | Canada | 0.19% | 55,492
17 | Singapore | 1.58% | 54,060
18 | Norway | 1.21% | 53,677
19 | Portugal | 0.76% | 45,408
20 | Greece | 0.78% | 44,572
Table 4 – IPv6 Deployment, ranked by IPv6 users: June 2013

Again the distribution of IPv6 users appears to be somewhat skewed, in so far as just 5 economies account for 85% of the total population of IPv6 users in June 2013: the same four economies of the United States, Japan, France and China, this time joined by Germany. Unfortunately we no longer see India, Brazil or Nigeria in this top 20 economy list. The cut-off for the top 20 economies has risen from 38,000 IPv6 users per economy to 44,000, so unless an economy continued to expand its IPv6 deployment through the year (as the United Kingdom did, rising from 41,000 users in mid-2012 to 135,000 in mid-2013), those at the lower end of the 2012 top 20 were likely to slip off the list.

In percentage terms, what has changed over the past 12 months? The following tables compare the values from mid-2012 to mid-2013. The first lists the top 20 economies that have lifted the percentage of their users who are capable of using IPv6, ranked by the change in this percentage value.

2013 Rank | Economy | Diff (%) | Diff IPv6 User Count
1 | Switzerland | +9.92% | +649,202
2 | Luxembourg | +4.37% | +20,486
3 | Belgium | +4.07% | +331,153
4 | Romania | +3.44% | +411,848
5 | Germany | +2.92% | +1,994,568
6 | Peru | +2.41% | +272,327
7 | Japan | +2.38% | +2,370,677
8 | United States of America | +1.71% | +4,267,580
9 | Czech Republic | +1.57% | +117,509
10 | Singapore | +1.43% | +48,524
11 | France | +1.43% | +810,545
12 | Greece | +0.70% | +40,530
13 | Norway | +0.70% | +30,344
14 | Taiwan | +0.48% | +81,337
15 | Portugal | +0.46% | +29,002
16 | Australia | +0.44% | +71,831
17 | Netherlands | +0.43% | +68,555
18 | New Zealand | +0.35% | +13,174
19 | South Africa | +0.33% | +34,022
20 | Bosnia and Herzegovina | +0.32% | +8,914
Table 5 – IPv6 Deployment, ranked by % of national users: change from June 2012 to June 2013

The largest change was in Switzerland, where a further 10% of users became able to use IPv6, and significant efforts were also visible in Luxembourg, Belgium, Romania, Germany, Peru and Japan in terms of the ratio of IPv6 users in each economy.

In terms of the user population that is IPv6-capable, Table 6 lists the economies that deployed IPv6 across the largest number of users.

Obviously one economy where there has been substantial effort in the past 12 months is the United States, where an additional 4.2M users started using IPv6 in just 12 months. That is an extremely impressive effort. Similarly, there has been a significant effort in Japan and Germany.

It should also be noted that as of April 2011 the further provision of IPv4 addresses through the conventional Regional Internet Registry allocation system had ceased for the Asia Pacific region, so a case could be made that the efforts in this region, including those of Japan, Taiwan, Australia and New Zealand, were spurred on by this event. Similarly, the Regional Internet Registry serving Europe and the Middle East exhausted its pools of available IPv4 addresses in September 2012, which may have some bearing on the IPv6 efforts in Germany, France, Switzerland, Romania, Belgium, the Czech Republic, the United Kingdom, the Netherlands, Greece, Norway, Portugal and Luxembourg. However, IPv4 addresses are still available for service providers in North and South America and in Africa, which makes the efforts in the United States all the more laudable for their prudence.

2013 Rank | Economy | Diff (%) | Diff IPv6 User Count
1 | United States of America | +1.71% | +4,267,580
2 | Japan | +2.38% | +2,370,677
3 | Germany | +2.92% | +1,994,568
4 | France | +1.43% | +810,545
5 | Switzerland | +9.92% | +649,202
6 | Romania | +3.44% | +411,848
7 | Belgium | +4.07% | +331,153
8 | Peru | +2.41% | +272,327
9 | Czech Republic | +1.57% | +117,509
10 | United Kingdom | +0.19% | +93,615
11 | Taiwan | +0.48% | +81,337
12 | Australia | +0.44% | +71,831
13 | Netherlands | +0.43% | +68,555
14 | Singapore | +1.43% | +48,524
15 | Greece | +0.70% | +40,530
16 | South Africa | +0.33% | +34,022
17 | Canada | +0.11% | +33,104
18 | Norway | +0.70% | +30,344
19 | Portugal | +0.46% | +29,002
20 | Luxembourg | +4.37% | +20,486
Table 6 – IPv6 Deployment, ranked by national users: change from June 2012 to June 2013

It would also appear that Europe remains a strong focal point for IPv6 deployment at present, while deployment in other regions is far more piecemeal, although Peru and South Africa are two highly notable exceptions to this general observation.

And where is China in June 2013? What we saw in our measurements is a relative decline in the population of users who are seen to use IPv6 from June 2012 to June 2013, estimated at some 557,000 users. One of the more variable factors for China is the role of the national firewall structure and its capabilities with respect to IPv6; as the IPv6 measurement system was hosted outside of China, the measurements relating to Chinese use of IPv6 are dependent on the behaviour of this filter structure. It is possible that the firewall behaves differently for IPv6, and equally possible that this behaviour has altered over time. It could well be that an internal view of China would give a different result than the one we see from outside the country.

It is also possible to provide some insight into which ISPs are undertaking this activity, by tracing the originating Autonomous System (AS) number of the IP addresses of the users who have provided capability data to this measurement exercise. The following is a list of some of the larger service providers that have shown significant levels of IPv6 activity in the past 12 months. The list is by no means exhaustive, but it is intended to highlight those providers seen to make significant changes in their IPv6 capability measurements over the past 12 months in the economies listed in Table 6. The percentage figures provided in the list are the percentages of clients whose IP addresses are originated by these ASes who were able to use IPv6 in June 2012 and in June 2013.

Economy | AS Number | AS Name | 2012 IPv6 (%) | 2013 IPv6 (%)
United States of America | AS6939 | Hurricane Electric | 29% | 37%
United States of America | AS22394 | Cellco Partnership DBA Verizon Wireless | 6% | 20%
United States of America | AS7018 | AT&T Services | 6% | 15%
United States of America | AS3561 | Savvis | 0.7% | 5%
United States of America | AS7922 | Comcast | 0.5% | 2.7%
Japan | AS2516 | KDDI | 16% | 27%
Japan | AS18126 | Chubu Telecommunications | 0.2% | 23%
Japan | AS17676 | Softbank | 0.5% | 4%
Germany | AS3320 | Deutsche Telekom AG | 0.01% | 4.9%
Germany | AS31334 | Kabel Deutschland | 1.18% | 7.4%
Germany | AS29562 | Kabel BW GmbH | 0% | 10.2%
France | AS12322 | Free SAS | 19% | 22%
Switzerland | AS67722 | Swisscom | 0.2% | 23%
Switzerland | AS559 | Switch, Swiss Education and Research Network | 11% | 18%
Romania | AS8708 | RCS & RDS SA | 11.5% | 24.7%
Belgium | AS12392 | Brutele SC | 0% | 33%
Belgium | AS2611 | BELNET | 2.6% | 22.4%
Peru | AS6147 | Telefonica del Peru SA | 0% | 3.1%
Czech Republic | AS2852 | CESNET z.s.p.o. | 20% | 27%
Czech Republic | AS5610 | Telefonica Czech Republic, a.s. | 0% | 3.5%
Czech Republic | AS51154 | Internethome, s.r.o. | 0% | 2.8%
United Kingdom | AS786 | The JNT Association (JANET) | 51% | 68%
United Kingdom | AS13213 | UK2 Ltd | 0% | 23%
Taiwan | AS9264 | Academia Sinica Network | 0% | 21%
Taiwan | AS1659 | Taiwan Academic Network | 1.6% | 7.6%
Australia | AS7575 | Australian Academic and Research Network | 13% | 21%
Australia | AS4739 | Internode | 5% | 11%
Netherlands | AS3265 | XS4ALL Internet BV | 6% | 27%
Singapore | AS7472 | Starhub Internet Pte Ltd | 0% | 13%
Singapore | AS4773 | MobileOne Ltd. | 0% | 10%
Greece | AS5408 | Greek Research and Technology Network S.A. | 17% | 19%
South Africa | AS2018 | TENET | 0.3% | 3%
Canada | AS6453 | TATA Communications | 10% | 13%
Canada | AS22995 | Xplornet Communications Inc | 0.1% | 9%
Norway | AS224 | Uninett, The Norwegian University and Research Network | 16% | 24%
Norway | AS39832 | Opera Software ASA | 1.3% | 100%
Norway | AS57963 | Lynet Internett | 0% | 56%
Portugal | AS3243 | PT Comunicacoes S.A. | 0.01% | 1.3%
Luxembourg | AS6661 | Entreprise des Postes et Telecommunications | 4% | 14%
Table 7 – IPv6 Deployment 2012-2013, Selected Autonomous System Measurements
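As an aside on the methodology behind Table 7: attributing each measured client to an originating AS boils down to a longest-prefix match of the client's IP address against a prefix-to-origin-AS table derived from a BGP routing table dump. The sketch below is not APNIC's measurement code; the file names and the 'prefix,origin ASN' and 'client IP,capability flag' input formats are hypothetical, and a production system would use a radix tree rather than a linear scan, but the principle is the same.

```python
# Minimal sketch: attribute measured clients to the Autonomous System that
# originates the most specific covering prefix, then summarise IPv6
# capability per AS. The input files and their formats are hypothetical
# examples, not APNIC's actual data feeds.

import ipaddress
from collections import defaultdict

def load_prefix_table(path):
    """Read 'prefix,origin_asn' lines (e.g. '193.0.0.0/21,3333') into a list."""
    table = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            prefix, asn = line.split(',')
            table.append((ipaddress.ip_network(prefix), int(asn)))
    # Most specific prefixes first, so the first containing network wins.
    table.sort(key=lambda entry: entry[0].prefixlen, reverse=True)
    return table

def origin_asn(ip, table):
    """Return the origin ASN of the most specific covering prefix, or None."""
    addr = ipaddress.ip_address(ip)
    for network, asn in table:
        if addr.version == network.version and addr in network:
            return asn
    return None

if __name__ == '__main__':
    prefixes = load_prefix_table('routing_table.csv')   # hypothetical route dump
    counts = defaultdict(lambda: [0, 0])                # asn -> [v6_capable, total]
    with open('measurements.csv') as f:                 # 'client_ip,capable(0/1)'
        for line in f:
            ip, capable = line.strip().split(',')
            asn = origin_asn(ip, prefixes)
            if asn is None:
                continue
            counts[asn][1] += 1
            counts[asn][0] += int(capable == '1')
    for asn, (v6, total) in sorted(counts.items()):
        print(f"AS{asn}: {100.0 * v6 / total:.1f}% IPv6-capable ({total} samples)")
```

Run separately over a 2012 and a 2013 sample set, a per-AS summary of this kind would yield before-and-after percentages of the sort shown in Table 7.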

What can we say about the state of IPv6 deployment one year after the commencement of the IPv6 Launch program?

The encouraging news is that the overall number of IPv6-capable end users has doubled in 12 months. The measurements presented here support an estimate that today some 30 million Internet users will use IPv6 when they can.

But this is not happening everywhere. Indeed, it is happening in a small number of countries, and within a still relatively small set of service providers. What we appear to be seeing are concentrated areas of quite intense IPv6 activity. Many national academic and research networks have been highly active in supporting IPv6 deployment within their networks. In the commercial networks we are seeing a number of major commercial network service operators, primarily in the United States, Japan, Germany, France, Switzerland and Romania, launch programs that integrate IPv6 services into their retail offerings. Whether this effort will provide sufficient impetus to motivate other providers to commit to a similar program of IPv6 deployment is perhaps still an open issue today, but there is some evidence of building momentum and an emerging sense of inexorable progress with the deployment of IPv6.

We'll be continuing these measurements, and providing further insights as to where we can see IPv6 deployment underway across the Internet over the coming months. You can find daily reports of our measurements, including breakdowns by economy and tracking of progress with IPv6 for individual network service providers, at http://labs.apnic.net/ipv6-measurement. If you would like to assist us in this measurement exercise we'd obviously like to hear from you – drop us a note to research@apnic.net.

Written by Geoff Huston, Author & Chief Scientist at APNIC

Follow CircleID on Twitter

More under: IPv6

Categories: Net coverage

UNESCO Director-General on Linguistic Diversity on the Internet: Main Challenges Are Technical

Fri, 2013-06-07 21:51

EURid, the .eu registry, in collaboration with UNESCO, released the 2012 World report on Internationalized Domain Names (IDNs) deployment in November of last year. It updated the previous year's study, Internationalised Domain Names State of Play, which was published in June 2011 and presented at the 2011 United Nations Internet Governance Forum in Nairobi, Kenya.

Today, Irina Bokova, Director-General of UNESCO, released a statement concerning linguistic diversity on the Internet, stating: "UNESCO's experience and the 2012 study of the use of internationalized domain names undertaken with EURid show that the main challenges are technical. Obstacles lie with Internet browsers that do not consistently support non-ASCII characters, with limited e-mail functionality, and with the lack of support of non-ASCII characters in popular applications, websites and mobile devices."

Below is an excerpt from the 70-page EURid-UNESCO 2012 report:

This year, the data set for this study is expanded from 53 to 88 TLDs, and includes 90% of all domain names registered as at December 2011, although the data set is not complete for every parameter. The World Report includes case studies on the ccTLDs for the European Union, Russian Federation, Qatar, Saudi Arabia, Egypt and the Republic of Korea. Where an existing registry has launched an IDN ccTLD (for example, .sa and its Arabic-script counterpart), these are considered as two separate entities for the purpose of the report.

Part 1 of the World Report on IDN deployment sets out a background to IDNs and a timeline. It considers progress in supporting IDNs in email and browsers. It then reviews the IDN applications in ICANN's programmes to create new TLDs. A comparison of growth rates of IDN registrations versus general registrations is made within European registries, and usage rates of IDNs under .eu are compared and benchmarked against other TLDs. Case studies follow on the European Union (.eu) ccTLD, with country case studies on the Russian Federation, Qatar, Saudi Arabia, Egypt and the Republic of Korea.

Also noteworthy is the report's foreword by Vint Cerf (excerpted below) on the historical adoption of simple Latin characters in the early days of the Domain Name System (DNS). Cerf writes:

"For historical reasons, the Domain Name System (DNS); and its predecessor (the so-called "host.txt" table) adopted naming conventions using simple Latin characters drawn from the letters a-Z, digits 0-9 and the hyphen ("-"). The host-host protocols developed for the original aRPaNET project were the product of research and experimentation led in very large part by English language speaking graduate students working in american universities and research laboratories. The project was focused on demonstrating the feasibility of building a homogeneous, wide area packet switching network connecting a heterogeneous collection of time-shared computers. This project led to the Internetting project that was initially carried out by researchers in the United States of america and the United kingdom, joined later with groups in Norway, Germany and Italy, along with a few visiting researchers from Japan and France. The primary focus of the Internetting project was to demonstrate the feasibility of interconnecting different classes of packet switched networks that, themselves, interconnected a wide and heterogeneous collection of timeshared computers.

The heterogeneity of interest was not in language or script but in the underlying networks and computers that were to be interconnected. moreover, the Internet inherited applications and protocols from the aRPaNET and these were largely developed by English language speakers (not all of them necessarily native speakers). The documentation of the projects was uniformly prepared in English. It should be no surprise, then, that the naming conventions of the Internet rested for many years on simple aSCII-encoded strings. The simplicity of this design and the choice to treat upper and lower case characters as equivalent for matching purposes, avoided for many years the important question of support for scripts other than Latin characters. as the Internet has spread across the globe, the absence of support for non-Latin scripts became a notable deficiency.

For technical reasons, support for non-Latin scripts was treated as a design and deployment problem whose solution was intended to minimise change to the domain name resolution infrastructure. This was debated in the Internet Engineering Task Force more than once, but the general conclusion was always that requiring a change to every resolver and domain name server, rather than changes on the client side only, would inhibit deployment and utility. This led to the development of so-called "punycode" that would map Unicode characters representing characters from many of the world's scripts into aSCII characters (and the reverse). This choice also had the salient feature of making unambiguous the question of matching domain names since the punycoded representations were unique and canonical in form. This design is not without its problems but that is where we are at present."
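To make the ASCII-compatible encoding Cerf describes a little more concrete, here is a small illustration using only the Python standard library. The sample name "bücher.example" is ours rather than anything from the report, and Python's built-in 'idna' codec implements the older IDNA 2003 rules, so treat this as a sketch of the principle rather than a reference implementation.

```python
# Illustration of the "punycode"/ACE mapping described above: a Unicode
# domain name is converted to an ASCII string carrying the "xn--" prefix,
# and the mapping is reversible. Uses only the standard library; the codec
# follows the older IDNA 2003 rules.

import codecs

unicode_name = "bücher.example"               # sample IDN, not from the report

ace_name = unicode_name.encode("idna")        # ASCII-compatible encoding
print(ace_name)                               # b'xn--bcher-kva.example'

print(ace_name.decode("idna"))                # bücher.example (round trip)

# Raw Punycode of a single label, without the "xn--" prefix:
print(codecs.encode("bücher", "punycode"))    # b'bcher-kva'
```

Because only client software and registries need to understand this mapping, the DNS resolution infrastructure itself continues to see nothing but the ASCII form, which is exactly the deployment trade-off Cerf describes.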

IDN introduction timeline – Source: EURid-UNESCO World report on Internationalised Domain Names deployment 2012

The full report can be downloaded in PDF here: EURid-UNESCO World report on Internationalised Domain Names deployment 2012

Follow CircleID on Twitter

More under: DNS, Domain Names, Multilinguism, Top-Level Domains

Categories: Net coverage

NSA PRISM Program Has Direct Access to Servers of Google, Skype, Yahoo and Others, Says Report

Fri, 2013-06-07 20:09

The National Security Agency has obtained direct access to the systems of Google, Facebook, Apple and other US internet giants, according to a top secret document obtained by the Guardian. The NSA access is part of a previously undisclosed program called PRISM, which allows officials to collect material including search history, the content of emails, file transfers and live chats, the document says.

Follow CircleID on Twitter

More under: Privacy

Categories: Net coverage

Akram Atallah Named Head of ICANN's New Generic Domains Division

Fri, 2013-06-07 08:52

In a post today, ICANN's CEO, Fadi Chehadé, announced the creation of a new division within ICANN, called the Generic Domains Division, in order "to handle the tremendous increase in scale resulting from the New gTLD Program."

Akram Atallah, who is currently the Chief Operating Officer (COO), will become divisional President of the Generic Domains Division that will include gTLD Operations, DNS Industry Engagement, and Online Community Services.

Susanna Bennett, a financial and operational executive in technology services, will be joining ICANN as the new COO, filling Akram's position, starting July 1st. 

Follow CircleID on Twitter

More under: ICANN, Top-Level Domains

Categories: Net coverage

First Private Auction for New Generic Top Level Domains Completed: 6 gTLDs Valued at Over $9 Million

Fri, 2013-06-07 05:40

On behalf of Innovative Auctions, I am very happy to announce that we've successfully completed the first private auction for generic Top Level Domains (gTLDs). Our auction resolved contention for 6 gTLDs: .club, .college, .luxury, .photography, .red, and .vote. Auction winners will pay a total of $9.01 million. All other participants will be paid from these funds in exchange for withdrawing their application.

In ICANN's gTLD Applicant Guidebook, applicants for gTLD strings that are in contention are asked to resolve the contention among themselves. ICANN did not further specify how to do that. Our Applicant Auction, designed by my colleague Peter Cramton, has now become the most successful, and proven, alternative to tedious multilateral negotiations. The first withdrawal as a result of our auction (an application for .vote) has already been announced by ICANN.

All participants—winners and non-winners alike—indicated that they were pleased with the results of the first Applicant Auction. "The auction system was clear, user-friendly, and easy to navigate," said Monica Kirchner, applicant for Luxury Partners. "The process worked smoothly, and we're very happy with the outcome."

"The Applicant Auction process is extremely well organized and we were very pleased with the results for us" said Colin Campbell, of .CLUB LLC. "It is a fair and efficient way to resolve contention and support the industry at the same time, with auction funds remaining among the domain contenders."

Top Level Design's CEO Ray King praised the auction's execution. "The applicant auction process was great, the software functioned without a hitch and all of the folks involved were responsive and highly professional.  We look forward to participating in future auctions with Innovative Auctions."

In the last days leading up to the auction, many single-string and multiple-string applicants expressed an interest in participating in private auctions in general and the Applicant Auction in particular. Antony van Couvering's insightful article on CircleID a few days ago lays out the reasons why his company TLDH will participate in private auctions, and Colin Campbell, who announced earlier today that his company was the winner for .club, predicts that "many other parties who stood by the sidelines in this first auction will participate in future Applicant Auctions."

We'll hold additional auctions in the coming months, on a schedule and under terms mutually agreed upon by applicants, to resolve contention for many more of the roughly 200 gTLDs still pending. Please direct questions to info@applicantauction.com.

Written by Sheel Mohnot, Project Director, Applicant Auction

Follow CircleID on Twitter

More under: ICANN, Top-Level Domains

Categories: Net coverage

BIND 9 Users Should Upgrade to Most Recent Version to Avoid Remote Exploit

Thu, 2013-06-06 21:02

A remote exploit in the BIND 9 DNS software could allow hackers to trigger excessive memory use, significantly impacting the performance of DNS and other services running on the same server.

BIND is the most popular open source DNS server, and is almost universally used on Unix-based servers, including those running on Linux, the BSD variants, Mac OS X, and proprietary Unix variants like Solaris.

A flaw was recently discovered in the regular expression implementation used by the libdns library, which is part of the BIND package. The flaw enables a remote user to cause the 'named' process to consume excessive amounts of memory, eventually crashing the process and tying up server resources to the point at which the server becomes unresponsive.

Affected BIND versions include all 9.7 releases, 9.8 releases up to 9.8.5b1, and 9.9 releases up to version 9.9.3b1. Only versions of BIND running on UNIX-based systems are affected; the Windows version is not exploitable in this way. The Internet Systems Consortium considers this to be a critical exploit.

All authoritative and recursive DNS servers running the affected versions are vulnerable.

The most recent versions of BIND in the 9.8 and 9.9 series have been updated to close the vulnerability by disabling regular expression support by default.

The 9.7 series is no longer supported and those using it should update to one of the more recent versions. However, if that is not desirable or possible there is a workaround, which involves recompiling the software without regex support. Regex support can be disabled by editing the BIND software's 'config.h' file and replacing the line that reads "#define HAVE_REGEX_H 1" with "#undef HAVE_REGEX_H" before running 'make clean' and then recompiling BIND as usual.
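Purely as an illustration of that workaround (in practice most administrators would simply edit config.h by hand or with sed), the sketch below performs the substitution described above. The location of config.h within the BIND source tree is an assumption, and the file should be backed up and the build re-tested afterwards.

```python
# Illustrative sketch of the documented workaround: disable regex support
# by flipping the HAVE_REGEX_H define in BIND's config.h before running
# 'make clean' and recompiling. Run from the BIND source directory; the
# file name and location are assumptions about a typical source tree.

from pathlib import Path

config = Path("config.h")
text = config.read_text()

if "#define HAVE_REGEX_H 1" in text:
    Path("config.h.bak").write_text(text)     # keep a backup of the original
    config.write_text(text.replace("#define HAVE_REGEX_H 1",
                                   "#undef HAVE_REGEX_H"))
    print("HAVE_REGEX_H disabled; now run 'make clean' and rebuild BIND.")
else:
    print("No '#define HAVE_REGEX_H 1' line found; nothing was changed.")
```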

At the time of the initial report, ISC stated that there were no active exploits for the vulnerability, but a user reported that he was able to develop and implement a working exploit in ten minutes.

While most of the major DNS providers, including DNS Made Easy, have patched and updated their software, DNS software on servers around the Internet tends to lag behind the most recent version. Because BIND is so widely used and DNS is essential to the functioning of the Internet, knowledge of this vulnerability should be disseminated as widely as possible to encourage system administrators to update.

It should be noted that this exploit is totally unrelated to the widely publicized problems with the DNS that allow criminals to launch DNS amplification attacks. Those attacks depend on a misconfiguration of DNS servers rather than a flaw in the software. However, both problems can be used to create a denial of service attack. Open recursive DNS servers can be used to direct large amounts of data at their targets, effectively using DNS as a weapon to attack other parts of the Internet's infrastructure, whereas the regex vulnerability could be used to attack the DNS itself.

Written by Evan Daniels

Follow CircleID on Twitter

More under: DNS, DNS Security

Categories: Net coverage

A Look Ahead to Fedora 19

Thu, 2013-06-06 21:00

Fedora is the community-supported Linux distribution that often serves as a testing ground for features that eventually find their way into the Red Hat Enterprise Linux commercial distribution and its widely used noncommercial twin, CentOS. Both distributions are enormously popular on servers, so it's often instructive for sysadmins to keep an eye on what's happening with Fedora.

Fedora prides itself on being at the bleeding edge of Linux software, so all the cool new features tend to get implemented there before they are included in Ubuntu and the other popular distros.

Late May saw the release of the beta version of Fedora 19, AKA Schrödinger's Cat, which has a number of new features that will be of interest to developers, system administrators, and desktop users.

Updated Programming Languages

This release seems to be primarily focused on developers, who will be pleased to hear that many of the most popular programming languages used on the web are getting a bump.

Ruby 2.0 – This is the first major Ruby release in half a decade, and it adds a number of new features to the language, including keyword arguments, a move to UTF-8 as the default source encoding, and many updates to the core classes.

PHP 5.5 – PHP 5.5 brings some great additions to everyone's favorite web programming language, including support for generators with the new "yield" keyword, and the addition of a new password hashing API that should make it easier to manage password storage more securely.

OpenJDK 8 – Those who really like to live on the bleeding edge can check out the technology review of OpenJDK 8, which won't be officially released until September (if all goes according to plan). This release is intended to add support for programming in multicore environments by adding closures to the language in addition to the standard performance enhancements and bug fixes.

Node.js – The Node.js runtime and its dependencies will be included as standard for the first time.

Developer's Assistant

The Developer's Assistant is a new tool that makes it easier to automate setting up an environment suitable for programming in a particular language: it takes care of installing compilers, interpreters and their dependencies, and runs the scripts needed to set environment variables and other settings for the chosen development environment.

OpenShift Origin

OpenShift Origin is an application platform for building, testing and deploying Platform-as-a-Service offerings. It was originally developed for RHEL and is now finding its way into Fedora.

Desktop environments are also getting the usual version increment, with KDE moving to version 4.10 and GNOME getting a bump to 3.8.

If you want, you can give the new Fedora Beta a try by grabbing the image from their site. The usual caveats apply: you shouldn't use it in a production environment.

Written by Graeme Caldwell, Inbound Marketer for InterWorx

Follow CircleID on Twitter

More under: Web

Categories: Net coverage

The Pros and Cons of Vectoring

Thu, 2013-06-06 20:11

Vectoring is an extension of DSL technology that coordinates line signals to reduce crosstalk and so improve performance. It is based on the concept of noise cancellation: the technology analyses noise conditions on copper lines and creates a cancelling anti-noise signal. While data rates of up to 100Mb/s are achievable, as with all DSL-based services this is distance-related: the maximum bit rate is available at a range of about 300-400 meters. Performance degrades rapidly as the loop attenuation increases, and the technique becomes ineffective beyond 700-800 meters. The technology is seen as an intermediate step to full FttH networks.

Vectoring is also specific to the DSL environment: it is more appropriate to DSL LLU, but becomes severely limited when applied to VDSL2 sub-loops unless all the lines are managed by the same system. Vectoring requires that all copper pairs of a cable binder are operated by the same DSLAM, and several DSLAMs need to work in combination in order to eliminate crosstalk. A customer's DSL modem also needs to support vectoring. Though the ITU has devised a Recommendation for vectoring (G.993.5), the technology is still under development and there remains a lack of standardisation across these various elements.

The quality of the copper network is also an issue, with better quality (newer) copper providing better results. Poorer quality copper cabling (e.g. with poorer isolation or less twisting of the copper pairs) can result in higher crosstalk, and thus a higher degree of pair-related interference. Nevertheless, these issues could be addressed within the vectoring process.

Vectoring is also incompatible with some current regulatory measures, though again future amendments could resolve these difficulties. While Telekom Deutschland has been engaged in vectoring since late 2012, the technology requires regulatory approval since it is based on DSL infrastructure, and some services which TD must provide to competitors are incompatible with vectoring. As such, TD must negotiate with the regulator the removal of those services from its service obligations. A partial solution may be achieved through the proposal that the regulator restrict total unbundling obligations for copper access lines to the frequency space below 2.2MHz.

Operators that have looked to deploy vectoring are driven by cost considerations. The European Commission's target in its 'Digital Agenda 2020' is for all citizens in the region to have access to speeds of at least 30Mb/s by 2020, with at least half of all premises receiving broadband at over 100Mb/s. This presupposes fibre for most areas, with the possibility of LTE to serve rural and remote areas. However, some cash-strapped incumbents are considering vectoring to enable them to meet these looming targets more cheaply, while still pursuing fibre (principally FttC, supplemented by FttH in some cities).

Belgium was an early adopter of vectoring: the incumbent Belgacom had been one of the first players to deploy VDSL1, which has since been phased out for the more widely used VDSL2, supplying up to 50Mb/s to its bundled services customers. The company's investment in vectoring will enable it to upgrade a portion of its urban customers more quickly and cheaply than would otherwise be possible with FttH. Yet it is perceived as a stop-gap measure to buy time and to forestall customer churn to the cablecos, which have already introduced 120Mb/s services across their footprints and are looking to release services of 200Mb/s or higher. The inherent limitations of copper, regardless of technological tweaking, will mean that Belgacom will have to follow Scandinavian operators and deploy 1Gb/s FttH services in order to keep pace with consumer demand for bandwidth over the next decade.

Vectoring technology has also been trialled by Telekom Austria as part of its FttC GigaNet initiative, and by P&T Luxembourg, which in early 2013 contracted Alcatel-Lucent (one of the vendors leading vectoring R&D) to develop one of the world's first trials of combined VDSL2 bonding and vectoring technologies. The Italian altnet Fastweb is also investing in vectoring, in conjunction with a programme to deliver FttC to about 20% of households by the end of 2014. Fastweb's parent company Swisscom has budgeted €400 million for the project (as part of a wider FttC co-investment with Telecom Italia), putting the cost at about €100 per home connection. The low figure is partly explained by Fastweb being able to utilise its existing fibre networks. Nevertheless, Fastweb in the long term is aiming to have an FttH-based network across its footprint, having recently committed an additional €2 billion investment to 2016 and contracted Huawei to upgrade its network from 100Mb/s to 1Gb/s.

Written by Paul Budde, Managing Director of Paul Budde Communication

Follow CircleID on Twitter

More under: Access Providers, Broadband, Telecom

Categories: Net coverage

ISOC Funds 11 Projects that Enhance Internet Environments in Underserved Regions

Thu, 2013-06-06 19:59

Each year, a number of projects around the world receive funding from the Internet Society to do everything from connecting Sri Lankan farmers with up-to-date sustainable agriculture information, to teaching ICT skills to at-risk youth in Africa, to working with local engineers to further their IPv6 implementation knowledge. These projects are planned and brought to life by Internet Society members. The Internet Society today announced funding for 11 community-based Internet projects that will enhance the Internet ecosystem in underserved communities around the world. The Community Grants are awarded twice each year to Internet Society Chapters and Members. Recipients receive up to US$10,000 to implement their projects.

The 11 projects funded in this round of grants will:

  • Enable teachers and students in the Sultanate of Oman to produce and share video presentations that meet Omani curriculum standards and students' needs
  • Facilitate access to the Internet via a wireless mesh network for students, parents, and others in rural Panama, enabling them to use their own equipment at home
  • Provide research for an evidence-based ICT policy to help bridge the Internet divide in Ethiopia
  • Develop online resources to help Internet Society chapters effectively create and implement cost-effective video streaming to its membership and the wider community
  • Create a digital community of women in Science, Technology, Engineering, and Mathematics (STEM) in Kenya to serve as a virtual mentorship program
  • Support the Koh Sirae School in Thailand by enhancing their wireless network, updating the learning center and classrooms with laptops and workstations, and providing furniture for 1,000 children and 53 teachers
  • Empower and connect the women of Chuuk State in the Pacific Islands by establishing an Internet-connected computer lab at the Chuuk Women's Council (CWC) building and offering classes in ICT usage
  • Promote child online safety in Uganda by educating children, teachers and parents at three urban schools; developing a user guide; and advocating for sound policies that ensure Internet safety
  • Build a collaborative, independent, and transparent observatory that quantitatively assesses the Internet quality in Lebanon to help providers enhance their services and the Lebanese government accelerate the transition to broadband Internet
  • Jump start the establishment of an Internet of Things (IoT) community-operated space in the University of the Philippines, where people with shared interests in computers, technology, science, digital art, or electronic art can meet and collaborate
  • Initiate a movement that will encourage and facilitate university students majoring in ICT subjects to contribute their knowledge, skills, and time to teach ICT courses at Indonesia's rural high schools

The next application round opens in September. Additional information is available here on the Community Grants Programme and these winning projects.

Follow CircleID on Twitter

More under: Access Providers, Broadband

Categories: Net coverage