Net coverage

Multi-Layer Security Architecture - Importance of DNS Firewalls

CircleID posts - Thu, 2013-05-30 20:09

In today's world with botnets, viruses and other nefarious applications that use DNS to further their harmful activities, outbound DNS security has been largely overlooked. As a part of multi-layer security architecture, a DNS Firewall should not be ignored.

After serving as a consultant for multiple organizations, I have encountered many companies that allow all internal devices to send outbound DNS queries to external DNS servers — a practice that can lead to myriad problems, including cache poisoning and misdirection to rogue IP addresses. For companies that want to enable internal devices to send these types of queries, having the ability to manually or automatically blacklist domains is a very effective way to add a layer of security to a broader security architecture.

DNS & Blacklisting

Companies of all sizes are susceptible to DNS attacks. Depending on the type of external recursive DNS server that is running, there are a number of ways to tighten your outbound DNS recursive service, from manual domain blocking to fully automated updates as threats appear.

I recently worked with a company that was infected by a virus that got ahead of the anti-virus software for a short period of time. The security team knew that approximately 100-150 domains were actively being resolved to aid in the spread of the virus and payload. We resolved the issue by manually blacklisting the affected domains.

Infoblox has created a very compelling solution that allows users to update their blacklist as threats emerge. While we were able to successfully help mitigate the threat with manual updates, the Infoblox solution would have enabled us to be even more proactive.

If your company is small and runs a DNS server in house using something tried and true, such as BIND, you can still benefit from this type of added security. Depending on where you prefer to source your list of blacklisted domains, these can be loaded onto the external recursive server — creating a DNS firewall effect. The server will need to be updated regularly, removing domains that no longer need to be blacklisted and adding new domains on an as-needed basis.
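As a rough illustration of how this can be done on a BIND 9 resolver, the sketch below uses a Response Policy Zone (RPZ); the zone name "rpz.blacklist", the file paths and the blocked domain are placeholders, and the exact directives will vary with your BIND version and wider configuration.

// named.conf fragment: attach a locally maintained response policy zone
options {
    response-policy { zone "rpz.blacklist"; };
};

zone "rpz.blacklist" {
    type master;
    file "/etc/bind/db.rpz.blacklist";
    allow-query { none; };   // policy data is not served to clients directly
};

; /etc/bind/db.rpz.blacklist fragment: the blacklist itself
$TTL 300
@ IN SOA localhost. admin.localhost. ( 1 3600 900 604800 300 )
@ IN NS localhost.
badsite.example CNAME .      ; answer NXDOMAIN for this domain
*.badsite.example CNAME .    ; and for all of its subdomains

Updating the blacklist is then a matter of editing (or regenerating) the RPZ zone file and reloading the zone.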

Ensuring that the DNS firewall architecture is as effective as possible will require reviewing your firewall rules. For example, I recommend restricting outbound port 53, both Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), so that only the recursive servers' IP addresses have access to the Internet on port 53. This rule would need to allow access to any IP address on the Internet, as these servers will have to walk the DNS tree and resolve names from servers worldwide.
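On a Linux-based firewall, that restriction might look something like the iptables sketch below; the resolver addresses 192.0.2.53 and 192.0.2.54 are placeholders for your own recursive servers, and the same policy can be expressed on any enterprise firewall platform.

# Permit the designated recursive resolvers to query any DNS server on the Internet
iptables -A FORWARD -p udp -s 192.0.2.53 --dport 53 -j ACCEPT
iptables -A FORWARD -p tcp -s 192.0.2.53 --dport 53 -j ACCEPT
iptables -A FORWARD -p udp -s 192.0.2.54 --dport 53 -j ACCEPT
iptables -A FORWARD -p tcp -s 192.0.2.54 --dport 53 -j ACCEPT
# Drop direct outbound DNS from every other internal host
iptables -A FORWARD -p udp --dport 53 -j DROP
iptables -A FORWARD -p tcp --dport 53 -j DROP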

Written by Jesse Dunagan, Senior Professional Services Engineer at Neustar

Follow CircleID on Twitter

More under: Cyberattack, DNS, Security

Categories: Net coverage

European ccTLDs Passed 64 Million Domains, Growth Slower, Reports CENTR

CircleID posts - Thu, 2013-05-30 18:21

CENTR, the European ccTLD organization, has published its biannual statistics report on the state of the domain name industry with a European ccTLD focus. From the report:

"European ccTLDs closed April 2013 with just over 64 million domains under management. Over the 12 months preceding, overall net growth was 6.7% — an increase of around 4 million domains. This growth however, is a lower rate compared with that of the same period in the year before. This could be most likely explained by factors such as the maturing ccTLD market in Europe (particularly among the larger Operators) as well as the ongoing financial crisis. Renewal rates remain consistent over the past 3 years at around 79% on the whole and actually increasing marginally in some zones."

Follow CircleID on Twitter

More under: Domain Names, Top-Level Domains

Categories: Net coverage

Are You Ready for the Launch of New gTLDs?

CircleID posts - Wed, 2013-05-29 23:24

It seems as though the inevitable is now upon us, and though there are many who wished this day would never come, the launch of the first new gTLD registries is approaching.

Now whether the first new gTLD registry will launch within the next few months or be delayed due to advice from world governments remains to be seen. However, most companies with which I have spoken could desperately use any extra time to prepare for the launch of new gTLDs.

So what exactly should companies be doing to prepare for the launch of new gTLDs?

1. Identify and Submit Trademarks to the Trademark Clearinghouse – The Trademark Clearinghouse will serve as a central repository of authenticated trademark information. The information contained within the Trademark Clearinghouse will be used to enable Sunrise Registrations and Domain Name Blocking.

2. Review All New gTLD Applications – Last year, ICANN revealed the entire list of 1,930 applications, representing approximately 1,400 new TLDs, about half of which were closed registries. It is important for brand owners to familiarize themselves with the applications and begin thinking about how these new gTLDs will affect their domain management policies and brand protection strategies. There are quite a few resources on the web to facilitate this process, including the MarkMonitor New gTLD Application Database.

3. Rationalize Existing Domain Name Portfolios – Now, more than ever, is the time to take a hard look at defensive holdings and decide if any of your existing domain names are no longer needed. Domain traffic statistics should be considered and used to add domains where needed or drop domains with little or no traffic.

4. Revise and Implement Domain Management Policies – It is important to create enterprise-wide policies and procedures covering topics such as who can register domains, what should be registered and how those registrations will be used. Policies should also include where you want your domains to "point" as well as security measures like domain locking.

5. Ensure that Your Existing Registrar is Committed to Providing New gTLD Registration Services – Select a Registrar that is committed to providing registration services for all new gTLDs. Working with a single Registrar (as opposed to multiple Registrars) will help to ease some of this anticipated complexity.

6. Become Familiar with New Rights Protection Mechanisms – ICANN has adopted a number of new Rights Protection Mechanisms, including Trademark Claims, Sunrise Registrations, the URS (Uniform Rapid Suspension), the PDDRP (Post-Delegation Dispute Resolution Procedure) and the RRDRP.

7. Police for Abuse and Take Action Only When Appropriate – It's important to monitor all new gTLD registrations for improper use of brands, trademarks and slogans. By monitoring domain registrations, companies can identify abuse and take immediate action where it makes sense.

8. Set Budgets Accordingly – Budgets will likely need to increase to take into account registration fees, Trademark Clearinghouse submission fees, as well as additional costs for policing and remediating domain name abuse.

Of course, there are still many unknowns surrounding the launch of new gTLDs, such as timing, costs, eligibility requirements, etc. That said, now is the time to prepare, given the complexity that is expected.

Written by Elisa Cooper, Director of Product Marketing at MarkMonitor

Follow CircleID on Twitter

More under: ICANN, Top-Level Domains

Categories: Net coverage

Liberty Reserve Now, Bitcoin Next?

CircleID posts - Wed, 2013-05-29 17:55

The papers have been abuzz with the shutdown of Liberty Reserve, an online payments system, due to accusations of large scale money laundering via anonymous transactions. Many people have noted similarities between LR and Bitcoin and wonder whether Bitcoin is next. I doubt it, because with Bitcoin, nothing is anonymous.

Liberty Reserve was designed to make it extremely difficult to figure out who paid what to whom. Accounts were anonymous, identified only by an email address and an unverified birth date. Users could direct LR to move funds from their account to another, optionally (and usually) blinding the transaction so the payee couldn't tell who the payor was. But they couldn't transfer money in or out. LR sold credits in bulk to a handful of exchangers, who handled purchases and sales. So to put money in, you'd contact an exchanger to buy some of their LR credits, which they would then transfer to your account. To take money out, you'd transfer LR credits to an exchanger who would in turn pay you. Nobody kept transaction records, so payments to exchangers couldn't be connected to the LR accounts they funded, there was no record of where the credits in each LR account came from, and outgoing payments from exchangers couldn't be connected to the accounts that funded those payments. This was an ideal setup for drug deals and money laundering, not so much for legitimate commerce.

Bitcoins are not like that. The wallets, analogous to accounts, are nominally anonymous, but the bitcoins aren't. Every wallet and every bitcoin has a serial number, and every transaction is publicly logged. It's as though you did all your buying and selling with $100 bills, but for each transaction the serial number of each bill and the two wallets involved are published with a timestamp for all the world to see. (This is how Bitcoin prevents double spending: the payee checks the public logs to ensure that the payor minted or received the bitcoins and hasn't paid them to someone else.) This makes truly anonymous transactions very hard.
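To make the traceability point concrete, here is a toy Python sketch that follows the article's simplified "serial number" framing; real Bitcoin tracks transaction outputs rather than per-coin serial numbers, so this illustrates the public-ledger idea rather than the actual protocol.

# Toy public ledger: every transfer is appended for all the world to see.
ledger = []  # entries of the form (timestamp, coin_serial, from_wallet, to_wallet)

def record_transfer(timestamp, coin_serial, from_wallet, to_wallet):
    ledger.append((timestamp, coin_serial, from_wallet, to_wallet))

def current_holder(coin_serial):
    # Anyone can walk the public log; the last transfer shows who holds the coin now.
    holder = None
    for _, coin, _, to_wallet in ledger:
        if coin == coin_serial:
            holder = to_wallet
    return holder

def accept_payment(coin_serial, payer_wallet):
    # The payee's double-spend check: only accept a coin the log says the payer holds.
    return current_holder(coin_serial) == payer_wallet

The same log that makes the double-spend check possible also lets anyone list every transaction a given wallet has ever taken part in, which is why the transactions are traceable rather than anonymous.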

Multiple transactions from the same wallet are trivially linked, so if the counterparty in any of your transactions knows who you are, all the transactions from that wallet are known to be you. This is roughly the same problem with using a prepaid debit card or throwaway cell phone purchased for cash — if one of the people you buy something from, or one of the people you call knows who you are, your cover is blown. While it's possible to obscure the situation by using multiple wallets, if you transfer bitcoins from one wallet to another, that transaction is public, and a sufficiently determined analyst can likely figure out they're both you. Doing all of your transactions so that the other party can't identify you is very hard, unless you're the kind of person who wears a different ski mask each time he buys groceries.

There have been some widely publicised thefts of large numbers of bitcoins, in one case by installing malware on the owner's PC which was visible on the Internet and using the malware to transfer bitcoins out of his wallet. But the thief hasn't spent the loot and probably never will, because everyone knows the serial numbers of the stolen bitcoins, and nobody will accept them for payment. This is sort of like unsalable stolen famous paintings, except that there's no analogy to the rich collector who'll buy the art and never show it to anyone else, because, frankly, bitcoins aren't much to look at. Again, the bitcoins aren't anonymous.

You could imagine a bitcoin mixmaster, which took in bitcoins from lots of people, mixed them around and sent back a random selection to each, less a small transaction fee, to try and obscure the chain of ownership. But that wouldn't be much of a business for anyone who wanted to live in the civilized world since it would just scream money laundering. (Yeah, we know cyberlibertarians would do it out of principle, but the other 99% of the business would be drug dealers.)

And finally, the only place where you can exchange any significant number of bitcoins for normal money is still MtGox. They are in Japan, and they take money laundering seriously, so you cannot sell more than a handful without providing extensive documentation such as an image of your passport, and your bank account numbers. Maybe there will be other exchanges eventually, but it's not an easy business to get into. MtGox is a broker, arranging sales between its clients, and doesn't keep bitcoins in inventory. For a broker to be successful, it needs enough clients that buyers can successfully find sellers and vice versa, which means that big brokers tend to get bigger, and it's hard to start a new one. You could try to be a broker buying and selling directly to customers, but given how volatile bitcoin prices are, you'd likely go broke when the market turned against you.

Or you could try to arrange a private transaction by finding someone with bitcoins to sell, or looking to buy. That can work for small transactions, but as soon as someone does very much of that, he's in the money transfer business and money laundering laws kick in.

So with all these factors (perfectly logged transactions, a complete public history of every bitcoin so that tainted ones are unusable, and a chokepoint on cashing out), bitcoin makes a great novelty (akin, as I have said before, to pet rocks) but not a very good medium for large scale money laundering.

Written by John Levine, Author, Consultant & Speaker

Follow CircleID on Twitter

More under: Cybercrime, Security, Web

Categories: Net coverage

US Should Take More Aggressive Counter-Measures On IP Theft, Including Use of Malware

CircleID posts - Tue, 2013-05-28 19:31

A bipartisan Commission recently produced a report titled, "The Report of the Commission on the Theft of American Intellectual Property". Karl Bode from dslreports.com writes: "The almost-respectfully-sounding Commission on the Theft of American Intellectual Property (read: the entertainment industry) has come up with a new 84 page report (pdf) that has a few curious recommendations for Congress. Among them is the request by the industry that they be allowed to use malware, trojans, and other countermeasures against pirates. That includes the use of so-called "ransomware," which would allow the entertainment industry to lock down your computer and all of your files — until you purportedly confess to downloading copyrighted materials."

Follow CircleID on Twitter

More under: Law, Malware, Policy & Regulation

Categories: Net coverage

Video Dominates Internet Traffic As File Sharing Networks' Overall Traffic Continues to Fall

CircleID posts - Tue, 2013-05-28 19:09

Video continues to be the trend to watch, as devices and tablets cater to higher-definition content and larger screen sizes enable the market for longer-form video on mobile, reports Sandvine in its latest Internet traffic trends report.

"The 'home roaming' phenomenon, the concept of subscribers voluntarily offloading mobile traffic onto Wi-Fi networks, has continued. This combined with increased consumption of real-time entertainment on mobile networks globally, and the doubling of Netflix traffic on mobile networks in North America, suggests that users are getting comfortable with watching longer form videos on their handheld devices."

Other findings include:

• Apple devices (iPads, iPhones, iPods, AppleTVs, and Macs) represent 35% of all audio and video streaming on North American home networks

• YouTube accounts for over 20% of mobile downstream traffic in North America, Europe and Latin America

• Netflix mobile data usage share doubled in the last 12 months in North America

Follow CircleID on Twitter

More under: Broadband, IPTV

Categories: Net coverage

Google Plans Wireless Access to Remote Regions Using High-Altitude Balloons and Blimps

CircleID posts - Tue, 2013-05-28 18:37

Google is reported to be building huge wireless networks across Africa and Asia, using high-altitude balloons and blimps. The company is aiming to finance, build and help operate networks from sub-Saharan Africa to Southeast Asia, with the aim of connecting around a billion people to the web. To help enable the campaign, Google has been putting together an ecosystem of low-cost smartphones running Android on low-power microprocessors.

Read full story: Wired News

Follow CircleID on Twitter

More under: Access Providers, Mobile, Wireless

Categories: Net coverage

Who Has Helped the Internet? May 31 Deadline for Nominations for 2013 Jonathan Postel Service Award

CircleID posts - Fri, 2013-05-24 19:53

Do you know of a person or organization who has made a great contribution to the Internet community? If so, have you considered nominating that person or organization for the 2013 Jonathan B. Postel Service Award? The nomination deadline of May 31 is fast approaching! From the description of the award:

Each year, the Internet Society awards the Jonathan B. Postel Service Award. This award is presented to an individual or an organization that has made outstanding contributions in service to the data communications community. The award includes a presentation crystal and a prize of US$20,000.

The award is focused on sustained and substantial technical contributions, service to the community, and leadership. The committee places particular emphasis on candidates who have supported and enabled others in addition to their own specific actions.

The award will be presented at the 87th meeting of the Internet Engineering Task Force (IETF) in Berlin, Germany, in July. Anyone can nominate a person or organization for consideration.

To understand more about the award, you can view the list of past Postel Service Award recipients and also read more about Jon Postel and his many contributions to the Internet.

Full disclosure: I am employed by the Internet Society but have nothing whatsoever to do with this award. I am posting this here on CircleID purely because I figure that people within the CircleID community of readers are highly likely to know of candidates who should be considered for the award.

Written by Dan York, Author and Speaker on Internet technologies

Follow CircleID on Twitter

More under: Web

Categories: Net coverage

Removing Need at RIPE

CircleID posts - Fri, 2013-05-24 02:25

I recently attended RIPE 66, where Tore Anderson presented his suggested policy change 2013-03, "No Need – Post-Depletion Reality Adjustment and Cleanup." In his presentation, Tore suggested that this policy proposal was primarily aimed at removing the requirement to complete the form(s) used to document need. There was a significant amount of discussion around bureaucracy, convenience, and "liking" (or not) the process of demonstrating need. Laziness has never been a compelling argument for me, and this is no exception. The fact is that any responsible network manager must keep track of IP address utilization in order to design and operate their network, regardless of RIR policy. Filling in a form with this existing information really does not constitute a major hurdle to network or business operations. So setting aside the laziness argument, let's move on to the rationale presented.

IPv4 is Dead?

Tore pointed to section 3.0.3 of RIPE-582, the "IPv4 Address Allocation and Assignment Policies for the RIPE NCC Service Region:"

Conservation: Public IPv4 address space must be fairly distributed to the End Users operating networks. To maximise the lifetime of the public IPv4 address space, addresses must be distributed according to need, and stockpiling must be prevented.

According to Mr. Anderson, this is "something that has served us well for quite a long time" but now that IANA and RIPE have essentially exhausted their supply of free/unallocated IPv4 addresses, is obsolete. From the summary of the proposal:

Following the depletion of the IANA free pool on the 3rd of February 2011, and the subsequent depletion of the RIPE NCC free pool on the 14th of September 2012, the "lifetime of the public IPv4 address space" in the RIPE NCC region has reached zero, making the stated goal unattainable and therefore obsolete.

This argument appears to be the result of what I would consider a very narrow and unjustified interpretation of the goal of conservation. Tore seems to interpret "maximise the lifetime of the public IPv4 address space" to mean "maximise the duration that public IPv4 space remains available at the RIPE NCC." Under this translation, it is possible to believe that a paradigm shift has occurred which calls for a drastic reassessment of the goal of conservation. If, however, we take the goal as written in RIPE NCC policy as a carefully crafted statement meant to convey its meaning directly and without interpretation or translation, a different conclusion seems obvious. While Tore is correct in his observation that IANA and RIPE NCC (and APNIC and soon ARIN) have all but depleted their reserves of "free" IPv4 addresses, that does not mean that the lifetime of the public IPv4 address space has come to an end. While I would love for everyone to enable IPv6 and turn off IPv4 tomorrow (or better yet, today), that is simply not going to happen all at once. The migration to IPv6 is underway and gaining momentum, but there are many legacy devices and legacy networks which will require the use of IPv4 to continue for years to come. Understanding that the useful life of IPv4 is far from over (raise your hand if you have used IPv4 for a critical communication in the past 24 hours) makes it quite easy to see that we still have a need to "maximise the lifetime of the public IPv4 address space."

In fact, the IANA and RIR free pools have essentially been a buffer protecting us from those who would seek to abuse the public IPv4 address space. As long as there was a reserve of IPv4 addresses, perturbations caused by bad actors could be absorbed to a large extent by doling out "new" addresses into the system under the care of more responsible folks. Now that almost all of the public IPv4 address space has moved from RIR pools into the "wild," there is arguably a much greater need to practice conservation. The loss of the RIR free pool buffer does not mark the end of "the lifetime of the public IPv4 address space" as Tore suggests but rather marks our entry into a new phase of that lifetime where stockpiling and hoarding have become even more dangerous.

A Paradox

Tore made two other arguments in his presentation, and I have trouble reconciling the paradox created by believing both of them at once. The two arguments are not new; I have heard them both many times before in similar debates, and they invariably go something like this:

  1. Because IPv4 addresses are now a scarce resource, people will only use what they need, so we don't need to require them to demonstrate need in policy.
  2. Because IPv4 addresses are now a scarce resource, people will lie and cheat to get more addresses than they can justify, so we should remove the incentives for them to lie and cheat.

I want to look at these arguments first individually, and then examine the paradox they create when combined.

Early in his presentation, Tore said something to the effect that, because the LIR cannot return to the RIPE NCC for more addresses, it would never give a customer more addresses than they need, and that the folks involved will find ways of assessing this need independently. OK, if this is true, then why not make it easy for everyone involved by standardizing the information and process required to demonstrate need? Oh, right, we already have that. Removing this standardization opens the door for abuse, large and small. The most obvious example is a wealthy spammer paying an ISP for more addresses than they can technically justify, in order to carry out their illegal bulk mail operation. The reverse is true as well: with no standard for efficient utilization to point to, it is easier for an ISP to withhold addresses from a downstream customer (perhaps a competitor in some service) who actually does have a justifiable technical need for them.

The second argument is more ridiculous. I truly don't understand how anyone can be convinced by the "people are breaking the rules so removing the rules solves the problem" argument. While I am in favor of removing many of the rules, laws, and regulations that I am currently aware of, I favor removing them not because people break them but because they are unjust rules which provide the wrong incentives to society. If you have a legitimate problem with people stealing bread, for example, then making the theft of bread legal does not in any way solve your problem. While it is possible that bread thieves may be less likely to lie about stealing the bread (since they no longer fear legal repercussions) and it is certainly true that they would no longer be breaking the law, law-breaking and lying are not the problem. The theft of bread is the problem. Legalizing bread theft has only one possible outcome: encouraging more people to steal bread. So the fact that bad actors currently have an incentive to lie and cheat to get more addresses in no way convinces me that making their bad behavior "legal" would solve the problem. If anything, it is likely to exacerbate the issue by essentially condoning the bad behavior, causing others to obtain more addresses than they can technically justify.

Of course it gets even worse when you try to hold up both of these arguments as true at once. If people can be counted on to take only what they need, why are they lying and cheating to get more? If people are willing to lie and cheat to get around the needs-based rules, why would they abide by need when the rules are removed? I just can't make these two statements add up in a way that makes any sense.

Conclusions

Since we still need IPv4 to continue working for some time, maximizing the lifetime of the public IPv4 address space through conservation is still a noble and necessary goal of the RIRs, perhaps more important than ever. Filling out some paperwork (with information you already have at hand) is a very low burden for maintaining this goal. At this time, there is no convincing rationale for removing this core tenet of the Internet model which has served us so well.

Written by Chris Grundemann, Network Architect, Author, and Speaker

Follow CircleID on Twitter

More under: Internet Governance, Internet Protocol, IP Addressing, IPv6, Policy & Regulation, Regional Registries

Categories: Net coverage

IPv6: Penny Wise and Pound Foolish

CircleID posts - Tue, 2013-05-21 23:24

The theory put forward by the IETF was simple enough… while there were still enough IPv4 addresses, use transition technologies to migrate to dual stack and then wean IPv4 off over time. All nice and tidy. The way engineers, myself included, liked it. However, those controlling the purse strings had a different idea. Theirs was: don't spend a cent on protocol infrastructure improvement until the absolute last minute — there's no ROI in IPv6 for shareholders. Getting in front of the problem at the expense of more marketable infrastructure upgrades was career suicide.

Graph from my 2008 sales presentation… sound but not convincing

By treating this as a technical issue rather than a business one, it was easier to delay the inevitable, but this had unintended consequences. The fewer IPv4 addresses there were, the fewer technical options there were to address the problem. This, coupled with a simpler user experience and lower expense, led us to today and the emergence of the so-called Carrier Grade NAT (CGN).

[For a thorough overview of the various flavors of CGN and the choices in front of us, see Phil's post, The Hatred of CGN on gogoNET. Don't let the title fool you.]

By deploying CGNs, ISPs are sharing single IPv4 addresses with more and more households, and this isn't good. Why? Because two levels of NAT break things, and that leads to unhappy customers. Case in point: British Telecom. BT recently put their retail Option 1 broadband customers (lowest tier) behind CGNs, and they are now feeling the pain from a variety of breakage, but mostly because Xbox Live stopped working.

Asian fixed line operators were the first to deploy CGN as a Band-Aid to cover over the problem until the rest of the world standardized on a transition solution. Japan and South Korea notwithstanding, I suspect the reasons we haven't heard the same outcry earlier are cultural and the result of lower expectations/SLAs. However, in a mature broadband market like the UK, where customers are vocal and expectations/SLAs are high, you are going to hear about it. And since there isn't a steady stream of new customers to offset the churn, this can turn into a PR nightmare resulting in the loss of high acquisition-cost customers.

Expect to see more of these reports as more European and North American ISPs follow suit. The irony here is it was the British who coined the term, "Penny wise and pound foolish".

Below is a selection of reader comments from the article, "BT Retail in Carrier Grade NAT Pilot".

Posted by zyborg47 13 days ago:
This IPv4 should have been sorted out a few years back if the larger ISPs have got off their backside and started to change to IPv6 then we would not have this problem and IPv6 routers/modems would not have stayed at such a high price for so long. The problem is now, we the paying public, will suffer because of this, or the poor sods on Bt option one anyway.

Posted by Kushan 13 days ago:
If you start trialing CGNAT before you trial IPv6, you're doing something wrong.

Posted by driz 13 days ago
Is CGNAT even technically an 'internet connection' anymore?

Written by Bruce Sinclair, CEO, gogo6

Follow CircleID on Twitter

More under: IP Addressing, IPv6

Categories: Net coverage

An Agreement in Geneva

CircleID posts - Tue, 2013-05-21 16:19

For all the tranquility at the end of last week's World Telecommunication/ICT Policy Forum (WTPF), E.B. White's words come to mind: "there is nothing more likely to start disagreement among people or countries than an agreement." One also has to wonder, though, what a literary stylist like White would think of the linguistic gyrations demanded by the compromises reached at the WTPF in Geneva, and what they portend.

Past as Prologue

The management of the International Telecommunication Union (ITU) and a number of influential Member States made best efforts to recalibrate the dialogue at the WTPF towards mending political fences battered by the ITU's last major gathering back in December, and delegates of all stripes found a decent hearing for their concerns. But attempts by governments of Brazil and Russia to heighten the prominence of governments and the ITU itself in Internet governance still clashed with traditional defenders of the multistakeholder model. Where the clashes could not be resolved, we are left with gems such as this: a formal recommendation dealing with the role for governments that "invites all stakeholders to work on these issues." Where, if anywhere, do you go from there?

Where to, ITU?

Uncertainty exists about how the next stages of the Internet governance debate will play out, but we at least know on what stages they will be played. Stakeholders in need of determining which venues to attend can choose among plenty of meetings and acronyms, from the IGF to the CSTD to UNGA's 2C. The next opportunity for the ITU to consider the issue of Internet governance will be its Council Working Group on the World Summit on the Information Society (WSIS) in June, which takes place alongside the ITU's larger Council meetings, where a broader discussion around the organization's budget may prove more important in determining priorities for the organization and how much of its resources it should spend in traditional areas of expertise, like satellite and spectrum allocations, and in Internet policy.

In the coming months the ITU will also host a series of regional meetings in preparation for the World Telecommunication Development Conference (WTDC), which takes place from 31 March – 11 April 2014 in Sharm el-Sheikh, Egypt. The ITU is co-locating that meeting with its own ten-year review of WSIS (called WSIS+10), as well as its annual WSIS Forum, in which it has traditionally reviewed the WSIS action lines for itself and various other UN institutions.

Heralding what?

These meetings, and some of the new voices in them, imply that the ITU continues to position itself as a key forum for governments to come and make their views heard on Internet matters — a welcome if redundant function. So if the process of getting the agreement struck at the WTPF suggests anything, it is that stakeholders can agree to disagree. This will not mean a stalemate or halt a discussion, in this case, but rather an evolving debate about the role for government in Internet policymaking. The steady pace of ITU-sponsored engagements will provide further opportunities to agree, disagree, and in the end, hopefully create a set of shared understandings and brokered solutions that actually advance the debate to the benefit of people and countries around the world.

Written by Christopher Martin, Senior Manager, International Public Policy at Access Partnership

Follow CircleID on Twitter

More under: Internet Governance

Categories: Net coverage

How to Stop the Spread of Malware? A Call for Action

CircleID posts - Mon, 2013-05-20 22:07

Webwereld published an article (in Dutch) following Kaspersky's new malware report for Q1 2013. Nothing new was mentioned there. The Netherlands remains number 3 as far as malware sent from Dutch servers is concerned. At the same time, Kaspersky writes that The Netherlands is one of the safest countries as far as infections go. So what is going on here?

Inbound, outbound and on site

From my anti-spam background I know from experience that as long as a spammer remains under the radar of national authorities, e.g. by making sure that he never targets end users in his own country, he is pretty safe. International cooperation between national authorities is so limited that something seldom happens in cross-border cases. Priority is mainly given to national cases, as cooperation is nearly non-existent. (If priority is given to spam fighting at all.)

The same will be the case for the spreading of malware. National authorities focus on things national. Cross border issues are just too much of a hassle and no one was murdered, right?

Of course it is true that if the allegation is right and we are talking about 157 command-and-control servers for botnets among thousands and thousands, if not millions, of servers in The Netherlands, then 157 servers is a very low figure. That does not mean we can ignore this figure if our country is the number 3 malware-spewing country in the world. Something needs to happen, preferably through self-regulation, and if not that way, then through regulation.

If it is also true that it is the same few hosting providers that never respond to complaints, it is time to either make them listen or shut them down. There is no excuse for (regulatory) enforcement bodies not to do so. Harm is being done, the economic effects are huge and the name of The Netherlands is mentioned negatively again and again.

In January 2005 at OPTA we were very proud that we had dropped from the number 3 position worldwide for spamming to a position outside the top 20. In six months' time! I do not think it would be much harder to do the same for malware.

A suggestion for an action plan

Here's an action plan:

  1. Give it priority
  2. Start a national awareness campaign
  3. Provide a final date to the hosting community
  4. Preferably coordinate on 1 to 3 with DHPA (Dutch Hosting Providers Association)
  5. Start acting against those that do not mend their ways.

And if anti-botnet infection centre ABUSE-IX starts doing its part on disinfecting end users' devices, The Netherlands may have a winning combination this way.

Of course this can be duplicated in your respective countries also for spam, malware, phishing, cyber crime, etc.

International cooperation

Of course the topics surrounding cyber security call for international cooperation and coordination. In 2013 it is still virtually impossible to cooperate on cross-border cyber crime, spam, and the spreading of malware. This needs addressing at the EU and world level. National institutions cannot afford not to do so, even if it is hard to give up a little national jurisdiction. There are in-between forms, like coordination.

Conclusion

Let's push back the boundaries of cyber threats. It all starts with ambition. Experience shows that (the threat of) enforcement works. This isn't rocket science; it is about political will and insight.

Written by Wout de Natris, Consultant international cooperation cyber crime + trainer spam enforcement

Follow CircleID on Twitter

More under: Cybercrime, Internet Governance, Law, Malware, Security, Spam

Categories: Net coverage

A Royal Opinion on Carrier Grade NATs

CircleID posts - Mon, 2013-05-20 02:13

There are still a number of countries who have Queen Elizabeth as their titular head of state. My country, Australia, is one of those countries. It's difficult to understand what exactly her role is these days in the context of Australian governmental matters, and I suspect even in the United Kingdom many folk share my constitutional uncertainty. Nevertheless, it's all great theatre and rich pageantry, with great press coverage thrown in as well. In the United Kingdom every year the Queen reads a speech prepared by the government of the day, which details the legislative measures that are being proposed by the government for the coming year. Earlier this month the Queen's speech included the following statement:

"In relation to the problem of matching Internet Protocol addresses, my government will bring forward proposals to enable the protection of the public and the investigation of crime in Cyberspace." [on Youtube, 5:45]

As the Guardian pointed out:

The text of the Queen's speech gives the go-ahead to legislation, if needed, to deal with the limited technical problem of there being many more devices including phones and tablets in use than the number of internet protocol (IP) addresses that allow the police to identify who sent an email or made a Skype call at a given time.

What's the problem here?

The perspective of various law enforcement agencies is that the Internet is seen as a space that has been systematically abused, and too many folk are falling prey to various forms of deceit and fraud. If you add to that the undercurrent of concern that the Internet contains a wide range of vulnerabilities from the perspective of what we could generally term "cybersecurity," then it's not surprising to see law enforcement agencies now turning to legislation to assist them in undertaking their role. And part of their desired toolset in undertaking investigations and gathering intelligence is access to records from the public communications networks of exactly who is talking to whom. Such measures are used in many countries, falling under the generic title of "data retention."

In the world of telephony the term "data retention" was used to refer to the capture and storage of call detail records. Such records typically contain the telephone numbers used, time and duration of the call, and may also include ancillary information including location and subscriber details. Obviously such detailed use data is highly susceptible to data mining, and such call records can be used to identify an individual's associates and can be readily used to identify members of a group. Obviously, such data has been of enormous interest to various forms of law enforcement and security agencies over the years, even without the call conversation logs from direct wire tapping of targeted individuals. The regulatory measures designed to protect access to these records vary from country to country, but access is typically made available to agencies on the grounds of national security, law enforcement or even enforcement of taxation conformance.

So if that's what happens in telephony, what happens on the Internet?

Here the story is a continually evolving one, and these days the issues of IPv4 address exhaustion and IPv6 are starting to be very important topics in this area. To see why, it is probably worth looking at how this used to happen and what technical changes have prompted changes to the requirements related to data retention for Internet Service Providers (ISPs).

The original model of the analogous data records for the Internet was the registry of allocated addresses maintained by Internet Network Information Centre, or Internic. This registry did not record any form of packet activity, but was the reference data that shows which entity had been assigned which IP address. So if you wanted to know what entity was using a particular IP address, then you could use a very simple "whois" query tool to interrogate this database:

$ whois -h whois.apnic.net 202.12.29.211

inetnum: 202.12.28.0 - 202.12.29.255
netname: APNIC-AP
descr: Asia Pacific Network Information Centre
descr: Regional Internet Registry for the Asia-Pacific Region
descr: 6 Cordelia Street
descr: PO Box 3646
descr: South Brisbane, QLD 4101
descr: Australia

However, this model of the registry making direct allocations to end user entities stopped in the early 1990s with the advent of the ISP. The early models of ISP service were commonly based on the dial-up model, where a customer would be assigned an IP address for the duration of their call, and the IP address would return to the free pool for subsequent reassignment at the end of the call. The new registry model was that the identity of the service provider was described in the public address registry, and the assignment of individual addresses to each of their dial-up customers was information that was private to the service provider. Now if you wanted to know what entity was using a particular IP address you also had to know the time of day, and while a "whois" query could point you in the direction of whom to ask, you now had to ask the ISP for access to their Authentication, Authorization and Accounting (AAA) records, typically the RADIUS log entries, in order to establish who was using a particular IP address at a given time. Invariably, this provider data is private data, and agencies wanting access to this data had to obtain appropriate authorization or warrants under the prevailing regulatory regime.

This model of traceback has been blurred by the deployment of edge NATs, where a single external IP address is shared across multiple local systems serviced by the NAT. This exercise can therefore trace back to the NAT device, but no further. So with access to this data you can get to understand the interactions on the network at a level of granularity of customer end points, but not at a level of individual devices or users.

We've used this model of Internet address tracking across the wave of cable and DSL deployments. The end customer presents their credentials to the service provider, and is provided with an IPv4 address as part of the session initiation sequence. The time of this transaction, the identity of the customer and the IP address are logged, and when the session is terminated the address is pulled back into the address pool and the release of the address is logged. The implication is that as long as the traceback can start with a query that includes an IP address and a time of day, it's highly likely that the end user can be identified from this information.

But, as the Guardian's commentary points out, this is all changing again. IPv4 address exhaustion is prompting some of the large retail service providers to enter the Carrier Grade NAT space, and join what has already become a well established practice in the mobile data service world. The same week of the Queen's speech, BT announced a trial of Carrier Grade NAT use in its basic IP service.

At the heart of the Carrier Grade NAT approach is the concept of sharing a public IP address across multiple customers at the same time. An inevitable casualty of this approach is the concept of traceback in the Internet and the associated matter of record keeping rules. It is no longer adequate to front up with an IP address and a time of day. That is just not enough information to uniquely distinguish one customer's use of the network from another's. But what is required is now going to be dependent on the particular NAT technology that is being used by the ISP. If the CGN is a simple port-multiplexing NAT then you need the external IP address and the port number. When combined with the CGN-generated records of the NAT's bindings of internal to external addresses, this can map you back to the internal customer's IP address, and using the ISP's address allocation records, this will lead to identification of the customer.

So traceback is still possible in this context. In a story titled "Individuals can be identified despite IP address sharing, BT says" the newsletter out-law.com (produced by the law firm Pinsent Masons) reports:

BT told Out-Law.com that its CGNAT technology would not prevent the correct perpetrators of illegal online activity from being identified.

"The technology does still allow individual customers to be identified if they are sharing the same IP address, as long as the port the customer is using is also known," a BT spokesperson said in a statement. "Although the IP address is shared, the combination of IP address and port will always be unique and as such these two pieces of information, along with the time of the activity can uniquely identify traffic back to a broadband line. [...] If we subsequently receive a request to identify someone who is using IP address x, and port number y, and time z we can then determine who this is from the logs," the spokesperson said. [...] "If only the IP address and timestamp are provided for a CGNAT customer then we are unable to identify the activity back to a broadband line," they added.

But port-multiplexing NATs are still relatively inefficient in terms of address utilization. A more efficient form of NAT multiplexing uses the complete 5-tuple of the connection signature, so that the NAT's binding table uses a lookup key of the protocol field and the source and destination addresses and port values. This allows the NAT to achieve far higher address sharing ratios, allowing a single external IP address to be shared across a pool of up to thousands of customers.

So what data needs to be collected by the ISP to allow for traceback in this sort of CGN environment? In this case the ISP needs to collect the complete 5-tuple of the external view of the connection, plus the start and stop times at a level of granularity to the millisecond or finer, together with the end-user identification codes. Such a session state log entry takes typically around 512 bytes as a stored data unit.
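As a rough sketch of what one of these records might contain (a hypothetical structure in Python; the field names are illustrative and not taken from any particular vendor's logging format):

from dataclasses import dataclass
from datetime import datetime

@dataclass
class CgnSessionRecord:
    # One entry per NAT binding/session, roughly 512 bytes once stored.
    subscriber_id: str       # the ISP's internal customer identifier
    internal_addr: str       # private address behind the CGN
    external_addr: str       # shared public address seen by the remote end
    external_port: int       # port the CGN selected for this session
    remote_addr: str         # destination IP address
    remote_port: int         # destination port
    protocol: int            # 6 for TCP, 17 for UDP
    session_start: datetime  # millisecond accuracy or finer
    session_end: datetime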

How many individual CGN bindings, or session states, does each user generate? One report I've seen points to an average of some 33,000 connections per end customer each day. If that's the case then the implication is that each customer will generate some 17Mbytes of log information every day. For a very large service provider, with, say, some 25 million customers, that equates to a daily log file of 425Tbytes. If these CGN records were produced at an unrealistically uniform rate per day, that's a constant log data flow of some 40Gbps. At a more realistic estimate of the busy period peaking at 10 times the average, the peak log data flow rate is some 400Gbps.

That's the daily load, but what about longer term data retention storage demands? The critical question here is the prevailing data retention period. In some regimes it's 2 years, while in other regimes it's up to 7 years. Continuing with our example, holding this volume of data for 7 years will consume 1,085,875 terabytes, or just over an exabyte, to use the language of excessively large numbers. And that's even before you contemplate backup copies of the data! And yes, that's before you contemplate an Internet that becomes even more pervasive and therefore of course even larger and used more intensively in the coming years.
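As a back-of-the-envelope check of those figures, the Python sketch below reproduces the arithmetic; the small differences from the numbers quoted above come from rounding the 16.9 Mbytes per customer up to 17 Mbytes.

RECORD_BYTES = 512                 # one stored session record
SESSIONS_PER_CUSTOMER = 33_000     # connections per end customer per day
CUSTOMERS = 25_000_000
RETENTION_DAYS = 7 * 365           # a 7-year retention regime

per_customer_per_day = RECORD_BYTES * SESSIONS_PER_CUSTOMER   # ~16.9 MB
per_provider_per_day = per_customer_per_day * CUSTOMERS       # ~422 TB
average_rate_bps = per_provider_per_day * 8 / 86_400          # ~39 Gbps
retained_bytes = per_provider_per_day * RETENTION_DAYS        # ~1.08 EB

print(f"{per_customer_per_day / 1e6:.1f} MB of logs per customer per day")
print(f"{per_provider_per_day / 1e12:.0f} TB of logs per provider per day")
print(f"{average_rate_bps / 1e9:.0f} Gbps average logging rate")
print(f"{retained_bytes / 1e18:.2f} EB retained over 7 years")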

The questions such a data set can answer also require a very precisely defined query. It's no longer an option to ask "who used this IP address on this date?" Or even "who used this IP address and this port address in this hour?" A traceback that can penetrate the CGN-generated address overuse fog requires the question to include both the source and destination IP addresses and port numbers, the transport protocol, and the precise time of day, measured in milliseconds. This last requirement, of precise coordinated time records, is a new addition to the problem, as traceback now requires that the incident being tracked be identified in time according to a highly accurate time source running in a known timezone, so that a precise match can be found in the ISP's data logs. It's unclear what it will cost to collect and maintain such massive data sets, but it's by no means a low cost incidental activity for any ISP.

No wonder the UK is now contemplating legislation to enforce such record keeping requirements in the light of the forthcoming CGN deployments in large scale service provider networks in that part of the world. Without such a regulatory impost it's unlikely that any service provider would, of their own volition, embark on such a massive data collection and long term storage exercise. One comment I've heard is that in some regimes it may well be cheaper not to collect this information and opt to pay the statutory fine instead — it could well be cheaper!

This is starting to look messy. The impact of CGNs on an already massive system is serious, in that it alters the granularity of rudimentary data logging from the level of a connection to the Internet to the need to log each and every individual component conversation that every consumer has. Not only is it every service you use and every site you visit, but it's even at the level of every image, every ad you download, everything. Because when we start sharing addresses we can only distinguish one customer from another at the level of these individual basic transactions. It's starting to look complicated and certainly very messy.

But, in theory in any case, we don't necessarily have to be in such a difficult place for the next decade and beyond.

The hopeful message is that if we ever complete the transitional leap over to an all-IPv6 Internet, the data retention capability reverts back to a far simpler model that bears a strong similarity to the very first model of IP address registration. The lack of scarcity pressure in IPv6 addresses allows the ISP to statically assign a unique site prefix to each and every customer, so that the service provider's data records can revert to a simple listing of customer identities and the assigned IPv6 prefix. In such an environment the cyber-intelligence community would find that their role could be undertaken with a lot less complexity, and the ISPs may well find that regulatory compliance, in this aspect at least, would be a lot easier and a whole lot cheaper!

Written by Geoff Huston, Author & Chief Scientist at APNIC

Follow CircleID on Twitter

More under: Access Providers, Cybercrime, Internet Governance, IP Addressing, IPv6, Policy & Regulation

Categories: Net coverage

Major New Funding Opportunities for Internet Researchers and R&E Networks

CircleID posts - Thu, 2013-05-16 23:02

Nationally Appropriate Mitigation Action (NAMA) is a new policy program that was developed at the Bali United Nations Climate Change Conference.

As opposed to much-maligned programs like the CDM and other initiatives, NAMA refers to a set of policies and actions that developed and developing countries undertake as part of a commitment to reduce greenhouse gas emissions. Also unlike CDM, NAMA recipients are not restricted to developing countries. The program recognizes that different countries may take different nationally appropriate action based on different capabilities and requirements. Most importantly, any set of actions or policies undertaken by a nation under NAMA will be recorded in a registry along with relevant technology, finance and capacity building support, and will be subject to international measurement, reporting and verification.

Already most industrialized countries have committed funding, or intend to commit funding, to NAMA projects. It is expected that by 2020 over $100 billion will be committed to NAMA programs by various nation states.

As I have blogged ad nauseam, I believe Internet researchers and R&E networks can play a critical leadership role in developing zero carbon ICT and "Energy Internet" technologies and architectures. ICT is the fastest growing sector in terms of CO2 emissions and is rapidly becoming one of the largest GHG emission sectors on the planet. For example, a recent Australian study pointed out that the demand for new wireless technologies alone will equal the CO2 emissions of 4 1/2 million cars!

Once you get past the mental block that energy efficiency solves all problems, and realize that the problem is not energy consumption but the type of energy we use, a whole world of research and innovation opportunities opens up. More significantly, whether you believe in climate change or not, it is expected that within a couple of years the cost of power from distributed rooftop solar panels will be less than that from the grid. This is going to fundamentally change the dynamics of the power industry, much as the Internet disrupted the old telecom world. Those countries and businesses that take advantage of these new power realities are going to have a huge advantage in the global marketplace.

I am pleased to see that Europe is at the forefront of these developments, with Future Internet initiatives like FINSENY.EU actively working with NRENs and Internet researchers to develop the architectural principles of an energy Internet built around distributed, small-scale renewable power. My only concern is that Europe may screw it up, as it did with the early Internet, when most of the research funding went to incumbent operators.

The global Internet started in the academic research community and R&E networks. It would be great to see these same organizations play a leadership role in deploying the global "Energy Internet". Universities, in many cases, have the energy profile of small cities, with 25-40% of their electrical consumption directly attributable to ICT. Most campuses also operate large fleets of utility vehicles that could easily be converted to dynamic charging to "packetize" power and deliver it where and when it is needed on campus, especially when there is no power from the solar panels.

I dream of the day when a university announces it is going zero carbon and off the grid.

Written by Bill St. Arnaud, Green IT Networking Consultant

Follow CircleID on Twitter

More under: Access Providers, Broadband, Telecom

Categories: Net coverage

Government Advisory Committee (GAC) Beijing Communiqué Inconsistent With ICANN's gTLD Policy

CircleID posts - Wed, 2013-05-15 00:32

This is an edited version of comments submitted to ICANN on the Government Advisory Committee (GAC) Beijing Communiqué of 11 April 2013.

The GAC Communiqué recommends that ICANN implement a range of regulations (which the GAC calls "safeguards") for all new generic top-level domains (gTLDs) covering areas ranging from malware to piracy to trademark and copyright infringement. The GAC proposes specific safeguards for regulated and professional sectors covering areas as diverse as privacy and security, consumer protection, fair lending and organic farming. Finally, the GAC proposes a "public interest" requirement for approval of new "exclusive registry access" gTLDs.

The GAC's recommendations raise complex issues about ICANN's mission and governance and how they relate to the laws of the jurisdictions in which the registries operate. Without getting into the details of the specific recommendations, the expansion of ICANN's role implicit in them is inconsistent with ICANN's policy of opening entry into the domain name space, a policy intended to bring the benefits of competition and greater innovation to the market for TLDs. A major benefit of a competitive market is that there is generally no need for regulation of product attributes, as the GAC is proposing. Indeed, regulation of such a market would be counterproductive to the interests of consumers.

In a competitive gTLD market, registries can be expected to provide the services their customers demand. Registries that provide those services will flourish, and those that do not will not survive. Importantly, a competitive gTLD market allows for a range of services corresponding to different preferences and needs. The type of regulation the GAC is recommending will raise costs to registries and impede the development of innovative new TLD services, ultimately harming consumers. The value of gTLDs as economic assets and the benefits of the new gTLD program will both be diminished.

Included in the GAC Communiqué is the recommendation that exclusive-access or closed registries for generic terms should be in the "public interest." A public interest standard is vague and difficult to define, and is therefore susceptible to being applied in an arbitrary manner. As I indicated in my March 6, 2013 comments to ICANN on the subject, a major benefit of the new gTLD program, in addition to providing competition to incumbents, is the ability of entrants to develop new business models, products, and services. Valuable innovations are likely to be blocked if ICANN attaches a public interest requirement to exclusive-access registries.

There may be instances where regulation is warranted. For example, the protection of intellectual property in domain names has become a major issue, particularly in connection with the introduction of new gTLDs, and ICANN's Trademark Clearinghouse is an attempt to address it. There may be other such areas, but it is unclear whether ICANN is the appropriate venue for addressing them.

If ICANN wants to be more of a regulatory agency, it should adopt good regulatory policy practices. Specifically, ICANN should demonstrate that there is a significant market failure that is addressed by the proposed regulation (or safeguard), that the benefits of the regulation are likely to be greater than the costs, and that the proposal is the most cost-effective one available.

It is preferable, however, for ICANN to minimize its regulatory role. ICANN should hew closely to the technical functions involved in administering the Domain Name System — i.e., coordinating the allocation of IP addresses, managing the DNS root, and ensuring the stability of the DNS. This has historically been ICANN's essential mission and should continue to be so.

Written by Tom Lenard, President, Technology Policy Institute

Follow CircleID on Twitter

More under: ICANN, Internet Governance, Top-Level Domains

Categories: Net coverage

Joint Venture Promises Broadband Benefits with Potential Risks for Latin American, Caribbean Markets

CircleID posts - Tue, 2013-05-14 21:03

When Columbus Networks and Cable & Wireless Communications announced the formation of their new joint venture entity at International Telecoms Week 2013, it signaled an important milestone for the telecommunications sector in Latin America and the Caribbean. The development comes at a time when the region's appetite for bandwidth is rapidly rising. The market for wholesale broadband capacity is experiencing solid growth and shows no sign of slowing anytime soon. It is no surprise, then, to see consolidation in the market as service providers position themselves to take full advantage of the expected growth in demand.

Significant Development

Columbus Communications' Submarine Cable Footprint (Source: Columbus Communications)

The two companies were already the most significant providers of wholesale bandwidth for the region. Barbados-registered Columbus International, which operates in 27 markets in the greater Caribbean, Central American and Andean region, estimates that it currently manages 70% of the region's traffic. CWC Wholesale Solutions is a subsidiary of UK-based Cable & Wireless Communications, which manages a diverse set of telecommunications businesses in Central America and the Caribbean, including the well-known LIME brand.

Their new arrangement is not a union of equals. CWC's assets subject to the joint venture arrangement had a gross asset value of US$108.2 million and recorded a loss before tax of US$0.9 million in the year to 31 March 2013. In contrast, Columbus's assets subject to the joint venture arrangement had a gross asset value of US$304.6 million and recorded a profit before tax of US$29.3 million in the year to 31 December 2012. Their joint venture, called CNL-CWC Networks, will be managed by Columbus, which will hold a 72.5% share to CWC Wholesale Solutions' 27.5%.

Columbus and CWC in a joint statement said, "The new joint venture company will serve as the sales agent of both Columbus Networks and CWC Wholesale Solutions for international wholesale capacity." It added, "Columbus Networks and CWC Wholesale Solutions will retain ownership and control of their respective existing networks in the region."

The companies expect that after completing necessary network interconnections, the joint venture will offer wholesale customers an expanded network platform that spans more than 42,000 kilometers and reaches more than 42 countries in the region.

Officials from both companies said they hope to offer customers greater IP traffic routing options, improved reliability and higher performance as the joint venture rolls out. However, for all their enthusiasm, the success of an enlarged Columbus/CWC is by no means guaranteed. Given the strong parent brands, there is a real possibility of conflicting strategies from Columbus and CWC for development of the Caribbean market.

It remains to be seen how the enlarged entity will position itself in the market. For Columbus, the deal enables the supply of international wholesale capacity and IP services to markets the company does not currently reach, such as Grenada, Barbados, St Lucia, Antigua and St Vincent and the Grenadines. It also provides additional connectivity options for the Dominican Republic and Jamaica. For Cable & Wireless, its current LIME territories will be able to benefit from enhanced bandwidth capacity, enabled by access to Columbus Networks' sub-sea capacity.

However, both companies must await further regulatory approvals in Panama, Colombia, the Cayman Islands, The Bahamas, Anguilla, Antigua and Barbuda, the British Virgin Islands, Montserrat, and St Kitts and Nevis before they can begin rolling out services on behalf of the joint venture in those countries. It is anyone's guess how long this approval process will take.

Unanswered Questions

The promise of an expanded network that can offer greater resilience, redundancy and routing options for Caribbean and Latin American traffic is certainly laudable. So too is the possibility of improving the region's access to international capacity to better meet the increasing demand.

However, the benefits of this joint venture must be weighed against the possibility that the new entity could negatively influence pricing, competition and downstream market growth. Unhealthy collusion or price-fixing in this significant sector of the telecommunications market could deal a serious blow to already fragile economies in the region. This must not be allowed to happen.

But who is to be tasked with the responsibility of ensuring that things proceed in the interest of healthy market growth and economic development?

There is no official body with the means or mandate to provide oversight of the region's telecommunications sector. The small markets of the Caribbean are marked by under-resourced national regulators, more practiced in responding to local telecom wrangling than in strategically analyzing the international wheeling and dealing of trans-national players.

So the questions now are: who is going to act as watchdog to safeguard regional, national and public interests? And who is going to ensure that the promised efficiencies and capacity increases actually benefit the region? Hopefully, it will not be too long before the answers emerge.

Written by Bevil Wooding, Internet Strategist at Packet Clearing House

Follow CircleID on Twitter

More under: Access Providers, Broadband, Telecom

Categories: Net coverage