Net coverage

Verisign Doesn't Think the Net Is Ready for a Thousand New TLDs

CircleID posts - Sat, 2013-03-30 05:12

Yesterday Verisign sent ICANN a most interesting white paper called New gTLD Security and Stability Considerations. They also filed a copy with the SEC as an 8-K, a document that their stockholders should know about.

It's worth reading the whole thing, but in short, their well-supported opinion is that the net isn't ready for all the new TLDs, and even if it were, ICANN's processes (or lack thereof) will cause other huge problems.

The simplest issues are administrative ones for ICANN. In the olden days, updates to the root zone were all handled manually: signed email from ICANN to Verisign, which manages the root zone, with a check by NTIA, which oversees it under longstanding contracts. As the number of changes increased (more due to added IPv6 and DNSSEC records than to the growing number of TLDs), the amount of email got unwieldy, so they came up with a new system in which the change data is handled automatically, with people looking at secure web sites rather than copying and pasting from their mailboxes. This system is still in testing and isn't in production yet; Verisign would really prefer that it were before ICANN starts adding large numbers of new TLDs.

The new domains all have to use the Trademark Clearinghouse (TMCH), a blacklist of names that people aren't allowed to register. Due to lengthy dithering at ICANN, the TMCH operator was only recently selected, and they haven't even started working out the technical details of how registry operators will query it in real time as registrations arrive.

There are other ICANN issues as well: the process for transferring a failed registry's data to a backup provider isn't ready, nor is zone file access for getting copies of zone data, nor are the pre-delegation testing requirements done, and the GAC (the representatives from various governments) could still retroactively veto new domains even after they'd been placed in service.

All of these issues are well known, and the technical requirements have been listed in the applicant guidebook for several years, so it does reflect poorly on ICANN that they're so far from being ready to implement the new domains.

Most importantly, Verisign notes that the root servers, which are run by a variety of fiercely independent operators, have no coordinated logging or problem reporting system. If something does go wrong at one root server, there's no way to tell whether it's just them or everyone, other than making phone calls. Verisign gives some examples of odd and unexpected things that happened as DNSSEC was rolled out, and again their concerns are quite reasonable.

An obvious question is what Verisign's motivation is in publishing this now. Since they are the registry for .COM and .NET and a few smaller domains, one possibility is FUD: trying to delay all the new domains to keep competitors out of the root. I don't think that's it. Over 200 of the applications say that they'll use Verisign to run their registries, so Verisign stands to make a fair amount of money from them. And everyone expects that, to the extent the new TLDs are successful at all, it'll be through additional, often defensive registrations, not people abandoning .COM and .NET.

So my take on this is that Verisign means what they say: the root isn't ready for all these domains, nor are ICANN's processes, and Verisign, as the root zone manager, is justifiably worried that if they go ahead anyway, the root could break.

Update: Thu April 4, 2013
A follow-up to Verisign's white paper discussed above, New gTLD Security and Stability Considerations, in which they listed a bunch of reasons that ICANN isn't ready to roll out lots of new TLDs. Among the reasons were that several of the services the new gTLDs are required to use aren't available yet, including the Emergency Back End Registry Operators (EBEROs), who would take over the registry functions for a TLD whose operator failed. They were supposed to have been chosen in mid-2012. By complete coincidence, ICANN has now announced that it has chosen the three Emergency Back End Registry Operators. I can't wait to see what happens next week.

Written by John Levine, Author, Consultant & Speaker

Follow CircleID on Twitter

More under: DNS, DNS Security, ICANN, Security, Top-Level Domains

Categories: Net coverage

The Spamhaus Distributed Denial of Service - How Big a Deal Was It?

CircleID posts - Sat, 2013-03-30 02:49

If you haven't been reading the news of late, venerable anti-spam service Spamhaus has been the target of a sustained, record-setting Distributed Denial-of-Service (DDoS) attack over the past couple of weeks.

Al Iverson over at Spamresource has a great round-up of the news. If you haven't managed to catch up yet, go check it out, then come on back; we'll wait ...

Of course, bad guys are always mad at Spamhaus, so they had a pretty robust set-up to begin with. But whoever was behind this attack was able to muster huge resources, of an intensity never seen before, and it had some impact on the Spamhaus website and, to a limited degree, on the behind-the-scenes services that Spamhaus uses to distribute their data to their customers.

Some reasonable criticism was aimed at the New York Times and Cloudflare for being a little hyperbolic in their headlines and so on, and sure, it was a bit 'Chicken Little'-like: the sky wasn't falling and the Internet didn't collapse.

But don't let the critics fool you: this was a bullet we all dodged.

For one, were Spamhaus to be taken offline, their effectiveness in filtering spam and malware would rapidly decay, due to the rate at which their blocklists need to be updated. The CBL anti-botnet feed and the DROP list both have many additions and deletions every day. These services are used to protect mail servers and networks against the most malicious criminal traffic. If they go down, a lot of major sites would have trouble staying up, or become massively infected with malware.

There are also a ton of small email systems that use the Spamhaus lists as a key part of their mail filtering (for free, as it turns out). Were those lookups prevented or tampered with, those systems would buckle under the load of spam that they currently dispense with easily thanks to Spamhaus.
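
For readers who haven't seen how those lookups work under the hood: a mail server checks a connecting IP address against a Spamhaus list with an ordinary DNS query, by reversing the octets of the address and appending the list's zone name. Below is a minimal, illustrative sketch in Python; zen.spamhaus.org is Spamhaus's combined zone, and the meaning of the individual 127.0.0.x return codes is simplified here.

    import socket

    def is_listed(ip, zone="zen.spamhaus.org"):
        """Check an IPv4 address against a DNSBL zone with a plain A-record lookup.

        A listed address resolves to something in 127.0.0.0/8; an unlisted
        address yields NXDOMAIN, which surfaces as socket.gaierror.
        """
        reversed_ip = ".".join(reversed(ip.split(".")))   # 192.0.2.1 -> 1.2.0.192
        query = "%s.%s" % (reversed_ip, zone)
        try:
            return True, socket.gethostbyname(query)      # e.g. a 127.0.0.x return code
        except socket.gaierror:
            return False, None                            # NXDOMAIN: not listed

    # A mail server would run a check like this for every inbound SMTP connection.
    listed, code = is_listed("192.0.2.1")
    print("listed" if listed else "not listed", code)

Because a real MTA performs a lookup like this for every inbound connection, a sustained outage of the lookup infrastructure translates very quickly into mail servers drowning in unfiltered spam.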

To put it into perspective, somewhere between 80% and 90% of all email is spam, and that's the stuff Spamhaus helps filter. So it doesn't take a rocket scientist to figure out that if the filters go out, so do the email systems, in short order. AOL's postmaster famously said at an FTC Spam Summit a decade ago, before the inception of massive botnets, that were their filtering to be taken offline, it would be 10 minutes before their email systems crashed.

Due to some poorly researched media reports (hello, Wolf Blitzer!), there is a perception that this is a fight between two legitimate entities, Spamhaus and Stophaus; some press outlets and bloggers have given equal time to the criminals (we use that word advisedly, there is an ongoing investigation by law enforcement in at least five countries to bring these people to justice). Nothing could be further from the truth. The attackers are a group of organized criminals, end of story. There is nothing to be celebrated in Spamhaus taking it on the chin, unless you want email systems and networks on the Internet to stop working.

So yeah, it was a big deal.

Written by Neil Schwartzman, Executive Director, CAUCE North America

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, Data Center, DDoS, DNS, DNS Security, Email, Malware, Security, Spam

Categories: Net coverage

DNS Reflection/Amplification Attack: Proved

CircleID posts - Fri, 2013-03-29 18:49

Last year there was a "threat" by the Anonymous group to black out the Internet by using a DNS Reflection/Amplification attack against the Internet DNS root servers. I even wrote a little article about it, "End of the world/Internet".

In that article, out of general interest and curiosity, I questioned whether this was even possible and what would be needed to pull it off.

Well, looking at the "Stophaus" attack of last week, we are getting some answers.

I would say it is a real threat now and a valid attack vector. It seems you only need a couple of ingredients:

Open recursive DNS servers

Many of these are already available, and their number keeps growing. This includes not only dedicated DNS server systems, but seemingly also any equipment attached to the internet that is capable of handling DNS requests (cable modems, routers, etc.). So the risk that this will be utilized again grows greater every day. (A minimal sketch of how easily such a resolver can be spotted follows after this list of ingredients.)

A party that is capable of, and willing to, set it off

It seems there are more and more parties on the Internet that are open to "attacking" certain entities in order to defend their beliefs. In the above case, that meant stressing the Internet itself and affecting everyone's use of it.

Infrastructure

Let's call it the "Internet", "logistics" and "bandwidth". Looking at the numbers, it is apparent that you need relatively little (in context) and that it is possible to assemble if you want to. In terms of technology, services or otherwise, it is not really challenging. And it does not have to be done from a shady area or country either.
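
To illustrate the first ingredient mentioned above: an open resolver is simply a DNS server that will answer recursive queries from anyone on the Internet. The sketch below, using the dnspython library, shows roughly how trivial it is to probe for one; the address 192.0.2.53 is a documentation placeholder, not a real resolver, and this is an illustration of the concept rather than a scanning tool.

    import dns.flags
    import dns.message
    import dns.query

    def looks_like_open_resolver(server_ip, probe_name="www.example.com"):
        """Send a recursive query for a name the server isn't authoritative for
        and see whether it answers it anyway (recursion available + an answer)."""
        query = dns.message.make_query(probe_name, "A")
        query.flags |= dns.flags.RD                    # request recursion
        try:
            response = dns.query.udp(query, server_ip, timeout=2)
        except Exception:
            return False                               # no reply at all
        recursion_offered = bool(response.flags & dns.flags.RA)
        return recursion_offered and len(response.answer) > 0

    print(looks_like_open_resolver("192.0.2.53"))      # placeholder address

The same openness is what an attacker abuses: a small query with a spoofed source address elicits a much larger response, which the resolver helpfully delivers straight to the victim.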

I suspect we will see more of this happening now that the "proof-of-concept" is done. What still worries me is the moment the real guns are pulled out and the focus shifts from particular entities to the root infrastructure of the Internet.

I had a couple of talks with my expert peers on how to mitigate this. It is very difficult, as it is sheer load coming from every corner of the Internet, and we really did not come up with a single solution. Mitigation would probably mean "breaking" some parts of the Internet as collateral damage, which at that scale would probably be disruptive enough in itself.

The main concern here, again, is the "open resolvers" out there, which we cannot control without education and regulation on how DNS is deployed (you know, the things we tend to be allergic or apathetic about on the Internet).

The more thought I give this, the more I think the solution is not only technical but mostly an organisational, educational and regulatory one… Before that is in place, we will probably experience some outages…

Written by Chris Buijs, Head of Delivery

Follow CircleID on Twitter

More under: Cyberattack, DDoS, DNS, DNS Security

Categories: Net coverage

Largest DDoS Attack To Date Aimed at Spamhaus Affects Global Internet Traffic

CircleID news briefs - Wed, 2013-03-27 18:31

The internet around the world has been slowed down in what security experts are describing as the biggest cyber-attack of its kind in history. A row between a spam-fighting group and a hosting firm has sparked retaliation attacks affecting the wider internet. It is having an impact on popular services like Netflix — and experts worry it could escalate to affect banking and email systems.

Read full story: BBC

Follow CircleID on Twitter

More under: Cyberattack, DDoS, Spam

Categories: Net coverage

Live Webcast Thursday March 28 of ION Singapore IPv6 and DNSSEC Sessions

CircleID posts - Wed, 2013-03-27 18:00

For those of you interested in IPv6 and/or DNSSEC, we'll have a live webcast out of the Internet Society's ION Singapore conference happening tomorrow, March 28, 2013, starting at 2:00pm Singapore time.

Sessions on the agenda include:

  • The Business Case for IPv6 & DNSSEC
  • Deploying DNSSEC: From End-customer to Content
  • Industry Collaboration: Working Together to Deploy IPv6

Joining the sessions are a variety of speakers from across the industry and within the Asia Pacific region. Information about the webcast can be found at:

http://www.internetsociety.org/deploy360/ion/singapore2013/webcast/

We'll also be recording the sessions so you can view them later. For example, given that Singapore time is 12 hours ahead of U.S. Eastern time, I don't expect many of the folks I know there to be up at 2am to watch these sessions!

The ION Singapore conference is produced by the Internet Society Deploy360 Programme and is part of the ICT Business Summit taking place this week in Singapore. I just got to meet some of the panelists at a dinner tonight and I think the sessions tomorrow should be quite educational and also quite engaging and fun. Please do feel free to tune in if you are interested and have the chance to do so.

P.S. In full disclosure I am employed by the Internet Society to work on the Deploy360 Programme and for once a post of mine at CircleID IS related to my employer.

Written by Dan York, Author and Speaker on Internet technologies

Follow CircleID on Twitter

More under: DNS, DNS Security, IPv6, Security

Categories: Net coverage

ICANN Launches the Trademark Clearinghouse Amid gTLD Expansion

CircleID news briefs - Tue, 2013-03-26 17:43

ICANN today launched a database to enable trademark holders to register their brands for protection as the upcoming new gTLDs roll out. The Trademark Clearinghouse, according to ICANN, is the only officially authorised solution offering brands a one-stop foundation for the safeguarding of their trademarks in domain names across the multiple new gTLDs that will go live from summer 2013. The cost of registering a trademark ranges between $95 and $150 a year.

Follow CircleID on Twitter

More under: ICANN, Top-Level Domains

Categories: Net coverage

SQL Injection in the Wild

CircleID posts - Mon, 2013-03-25 23:13

As attack vectors go, very few are as significant as obtaining the ability to insert bespoke code in to an application and have it automatically execute upon "inaccessible" backend systems. In the Web application arena, SQL Injection vulnerabilities are often the scariest threat that developers and system administrators come face to face with (albeit way too regularly). In fact the OWASP Top-10 list of Web threats lists SQL Injection in first place.

This "in the wild" SQL Injection attempt was based upon the premise that video cameras are actively monitoring traffic on a road, reading license plates, and issuing driver warnings, tickets or fines as deemed appropriate by local law enforcement.
(Click to Enlarge)More often than not, when security professionals discuss SQL Injection threats and attack vectors, they focus upon the Web application context. So it was with a bit of fun last week when I came across a photo of a slightly unorthodox SQL Injection attempt — that of someone attempting to subvert a traffic monitoring system by crafting a rather novel vehicle license plate.

My original tweet got retweeted a couple of thousand times — which just goes to show how many security nerds there are out there in the twitterverse.

This "in the wild" SQL Injection attempt was based upon the premise that video cameras are actively monitoring traffic on a road, reading license plates, and issuing driver warnings, tickets or fines as deemed appropriate by local law enforcement.

At some point the video captures of the passing vehicle's license plate must be converted to text and stored — almost certainly in some kind of backend database. The hacker who devised this attack hoped that the process would be vulnerable to SQL Injection, and crafted a simple SQL statement that could potentially cause the backend database to drop (i.e. "delete") the table containing all of the license plate information.
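
To make the mechanics concrete, here is a minimal sketch in Python with SQLite. The table layout and the payload text are invented for the example (the photographed plate's actual text isn't reproduced here), and executescript stands in for backends that accept stacked statements, since a single execute() in Python's sqlite3 would refuse the second statement. The point is simply that concatenating the OCR'd text into the INSERT lets it terminate the statement and smuggle in a DROP TABLE, while a parameterised query treats the same text as inert data.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE plates (plate TEXT, speed INTEGER)")

    # Hypothetical OCR output from a maliciously crafted plate.
    ocr_text = "ZU0666', 0); DROP TABLE plates;--"

    # Vulnerable: the OCR text is concatenated straight into the SQL statement.
    vulnerable_sql = "INSERT INTO plates (plate, speed) VALUES ('%s', %d)" % (ocr_text, 88)
    conn.executescript(vulnerable_sql)     # runs the INSERT, then the injected DROP TABLE

    remaining = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name='plates'").fetchall()
    print("plates table still exists?", bool(remaining))   # False: it was dropped

    # Safe: a parameterised query keeps the plate text as data, not as SQL.
    conn.execute("CREATE TABLE plates (plate TEXT, speed INTEGER)")
    conn.execute("INSERT INTO plates (plate, speed) VALUES (?, ?)", (ocr_text, 88))
    print(conn.execute("SELECT plate FROM plates").fetchone()[0])   # stored verbatim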

Whether or not this particular attempt worked, I have no idea (probably not, if I had to guess an outcome); but it does help nicely to raise attention to this category of vulnerability.

As surveillance systems become more capable — digitally storing information, distilling meta-data from image captures, and sharing observation data between systems — it opens many new doors for mischievous and malicious attack.

The physical nature of these systems, coupled with the complexities of integration with legacy monitoring and reporting systems, often makes them open to attacks that would be classed as fairly simple in the world of Web application security.

A common failure of system developers is to assume that the physical constraints of the data acquisition process are less flexible than they are. For example, if you're developing a traffic monitoring system it's easy to assume that license plates are a fixed size and shape, and can only contain 10 alphanumeric characters. Meanwhile, the developers of the third-party image processing code had no such assumptions and will digitize any image. It reminds me a little of the story in which reuse of some object-oriented code a decade ago resulted in Kangaroos firing Stinger missiles during a military training simulation.

While the image above is amusing, I've encountered similar problems before when physical tracking systems integrate with digital backend processes — opening the door to embarrassing and fraudulent events. For example, in the past I've encountered similar SQL Injection vulnerabilities within systems such as:

  • Toll booths reading RFID tags mounted on vehicle windshields — where the tag readers would accept up to 2k of data from each tag (even though the system was only expecting a 16 digit number).
  • Credit card readers that would accept pre-paid cards with negative balances — which resulted in the backend database crediting the wrong accounts.
  • RFID inventory tracking systems — where a specially crafted RFID token could automatically remove all record of the previous hours' worth of inventory logging information from the database allowing criminals to "disappear" with entire truckloads of goods.
  • Luggage barcode scanners within an airport — where specially crafted barcodes placed upon the baggage would be automatically conferred the status of "manually checked by security personnel" within the backend tracking database.
  • Shipping container RFID inventory trackers — where SQL statements could be embedded to adjust fields within the backend database to alter Custom and Excise tracking information.

Unlike the process of hunting for SQL Injection vulnerabilities within Internet accessible Web applications, you can't just point an automated vulnerability scanner at the application and have at it. Assessing the security of complex physical monitoring systems is generally not a trivial task and requires some innovative approaches. Experience goes a long way.

Written by Gunter Ollmann, Chief Technology Officer at IOActive

Follow CircleID on Twitter

More under: Security

Categories: Net coverage

So, How Big Is the Internet?

CircleID posts - Mon, 2013-03-25 21:26

The results of an excellent study made, for reasons that will become clear, by an anonymous author reach this conclusion:

So, how big is the Internet?
That depends on how you count. 420 Million pingable IPs + 36 Million more that had one or more ports open, making 450 Million that were definitely in use and reachable from the rest of the Internet. 141 Million IPs were firewalled, so they could count as "in use". Together this would be 591 Million used IPs. 729 Million more IPs just had reverse DNS records. If you added those, it would make for a total of 1.3 Billion used IP addresses. The other 2.3 Billion addresses showed no sign of usage.

Notice that, of the roughly 4 billion possible IPv4 addresses, less than half appear to be "owned" by somebody and only 591 million appear to be active.

The problem is that, to make the study, the author created a botnet — that is, he wrote a small program that took advantage of insecure devices to enlist additional machines to help in the study. What is amazing (if you are not a security researcher) is the extent to which he was able to co-opt insecure devices by testing only four name/password combinations, e.g. root:root, admin:admin, and both without passwords.

This is very valuable research, and it was apparently done without causing anyone any harm. Nonetheless, the US government has treated this kind of research as a crime in the past, even before all the cyber security laws of the past decade. So I hope this researcher's anonymity holds.

Written by Brough Turner, Founder & CTO at netBlazr

Follow CircleID on Twitter

More under: Web

Categories: Net coverage

ICANN Releases Initial Evaluation Results for First Set of New gTLD Applications

CircleID news briefs - Mon, 2013-03-25 18:58

The first round of Initial Evaluation results has been released exactly on schedule. On March 23, ICANN announced that 27 out of 30 new gTLD applications reviewed this round passed Initial Evaluation. The remaining three applicants are still marked as in Initial Evaluation. For more details see, '27 Applicants Passed Initial Evaluation in the First Round' via www.GetNewTLDs.com.

Follow CircleID on Twitter

More under: ICANN, Top-Level Domains

Categories: Net coverage

To Tax or Not to Tax

CircleID posts - Mon, 2013-03-25 18:36

The Writing's On The Wall

It is nothing new that the US has long maintained that the Internet should be a tax-free zone, as per the US Congress's Internet Tax Freedom Act of 1998 (authored by Representative Christopher Cox and Senator Ron Wyden and signed into law on October 21, 1998 by then President Clinton). Following its expiry the Act was repeatedly reauthorized; the most recent re-authorization (legal speak for extension) came in October 2007 and extends it until 2014. It is unclear whether there will be another extension post-2014. The Act places a moratorium on new taxes on e-commerce and on the taxing of internet access. Whilst it bars federal, state and local governments from taxing Internet access and from imposing discriminatory Internet-only taxes such as bit taxes, bandwidth taxes and email taxes, and also bars multiple taxes on electronic commerce, it does not exempt sales made on the Internet from taxation: these may be taxed at the same state and local sales tax rate as non-Internet sales.

New Bill in the House

The introduction of the US Marketplace Fairness Act of 2013 in both the Senate and the House of Representatives will make for some interesting discussions and lobbying on the Hill. Whilst the Bill in its current form acknowledges the exemptions that are currently in place, the manner in which discussions are playing out among both Senators and Representatives reflects a change in atmospheric pressure, which in my mind is significant.

In 1998 the US Senate voted 96-2 to approve the Internet Tax Freedom Act, so the mere fact that the new Bill has 28 co-sponsors in the Senate and 47 in the House of Representatives is indicative either of a paradigm shift or of State coffers screaming to be filled.

S.336, the Marketplace Fairness Act of 2013, was introduced on February 14, 2013 and is sponsored by US Senator Michael Enzi [R-WY]. There are 28 co-sponsors (21D, 6R, 1I).

The prognosis is that the Bill might not get past the Committee, with a 0% chance of getting enacted.

H.R.684, the Marketplace Fairness Act of 2013, was introduced on February 14, 2013 and is sponsored by US Rep. Steve Womack [R-AR3]; it has 47 co-sponsors (25D, 22R). The prognosis is that it has a 28% chance of getting past the committee and an 11% chance of getting enacted.

To Tax or Not to Tax

The term 'electronic commerce' (e commerce) means any transaction conducted over the Internet or through Internet access, comprising the sale, lease, license, offer, or delivery of property, goods, services, or information, whether or not for consideration, and includes the provision of Internet access.

As early as 2000, the problems of tax-free e-commerce were discussed during the first E-Commerce Roundtable meeting in Washington D.C. If e-commerce proceeds untaxed, state treasuries will face an eroding tax base. States within the United States of America rely on sales tax for approximately 25-40% of their revenue. As such there is a trade-off, or opportunity cost, as other taxes may have to increase to make up for the deficit caused by tax-free e-commerce.

The deficit caused by tax-free e-commerce means that other taxes may have to rise and that funding may be siphoned away from other priority areas. Traditional firms or businesses that do not trade electronically are at a disadvantage, as they are forced to collect sales tax at the register. This is why it is sometimes cheaper to purchase a pair of boots online than it is to walk into a traditional store.

One of the issues discussed at the E-Commerce Roundtable meeting was the widening of the digital divide, where people without credit cards or Internet access may be forced to shoulder the burden of sales tax.

E Commerce is blossoming

Global business-to-consumer e-commerce sales will pass the 1 trillion euro ($1.25 trillion) mark by 2013, and the total number of Internet users will increase to approximately 3.5 billion from around 2.2 billion at the end of 2011, according to a new report by the Interactive Media in Retail Group (IMRG), a U.K. online retail trade organization, as reported by InternetRetailer.com. The study estimates that business-to-consumer e-commerce sales in 2011 increased to 690 billion euros ($961 billion), an increase of close to 20% from a year earlier.

According to that study, the US remains the world's largest single market as far as e-commerce goes, although, given China's phenomenal growth rates, it is speculated that China will surpass the United States in this regard shortly.

The US Department of Commerce reported that total retail sales for the fourth quarter of 2012 were estimated at $1,105.8 billion, an increase of 4% from the third quarter of the same year.

Only Time Will Tell

Whether the US Marketplace Fairness Act will eventually be passed and enacted is something only time will tell, but the timing is certainly interesting.

Written by Salanieta Tamanikaiwaimaro, Director of Pasifika Nexus

Follow CircleID on Twitter

More under: Internet Governance, Law, Policy & Regulation

Categories: Net coverage

Fiber to the Home: 'Awesome' - But What Is Its Purpose?

CircleID posts - Fri, 2013-03-22 19:56

Two approaches can be taken towards the development of Fiber to the Home (FttH). One is all about its commercial potential — the sale of the most awesome commercial applications in relation to video entertainment, gaming and TV. The other is a perhaps more sophisticated approach — from the perspective of social and economic development.

Of course the two are not mutually exclusive. Those who successfully follow the commercial route create an infrastructure over which those other social and economic applications will eventually be carried as well. This is quite a legitimate route, but the reality is that most people in this situation will say 'the FttH entertainment applications are absolutely awesome, but totally useless'. In other words, nice to have but it is highly unlikely that people will pay for them.

We basically see this with such commercial FttH deployments around the world. Commercial FttH subscriptions cost consumers well over $100 per month, and at such a price penetration in developed countries will reach no more than approximately 20%. That will not be sufficient mass to launch other social and economic applications over such a network.

If we are serious about those national benefits we will have to treat FttH differently — not just as another telecoms network, but as national infrastructure. However the all-powerful telcos will fight such an approach tooth and nail, since that would make their network a utility. They are used to extracting premium prices based on their vertically-integrated monopolies and they are in no mood to relinquish this. Simply looking at the amount of money telcos spend on lobbying reveals that they do not want to see government making any changes to their lucrative money-making schemes.

It will be interesting to see what Google Fibre in Kansas City will do. Its price is more affordable (around $75) but it is still operating on that 'awesome entertainment' level. Will it be able to attract sufficient customers to eventually create that broader infrastructure that will be used by a far greater range of applications? We estimate that it would be able to achieve around 40% penetration, and if it could move past 'awesome but useless' that could grow to 60%. By that time sufficient mass would have been created to move to the next stage. So, all very doable over, let us say, a five-year period.

The good thing is that if any company can create such a breakthrough development it is Google. It is not a telco. It simply wants to prove the business case — that FttH makes business sense. If it can prove the commercial success of FttH, it is more likely that other telcos will follow. There is no way Google on its own can fibre the USA, let alone the world. So its role in relation to Google Fibre is to extend the global FttH footprint by example, as that would allow it to increase the number of next-gen applications and services. With its dominant position in this market, the spill-over from that is many times larger than the financial gains the company can make running a FttH network.

Written by Paul Budde, Managing Director of Paul Budde Communication

Follow CircleID on Twitter

More under: Access Providers, Broadband

Categories: Net coverage

Technology Fights Against Extreme Poverty

CircleID posts - Thu, 2013-03-21 19:05

One of the good things about participating in the meetings of the UN Broadband Commission for Digital Development is seeing the amazing impact our industry has on the daily lives of literally billions of people. While everybody — including us — is talking about healthcare, education and the great applications that are becoming available in these sectors, the real revolution is taking place at a much lower level.

If one looks in particular at those who live below the extreme poverty line of $1.25 per day then e-health and e-education are certainly not the first applications that reach these people. The most fundamental change happens when people get access to communications — thus extending their network beyond neighbours, who are probably living below the poverty line as well, and so are unable to do much to lift the community out of its misery. In the 1990s Broadband Commissioner Muhammad Yunus through his Grameen Bank initiative showed that a simple mobile phone (2G) in a Bangladesh village, and, by extension, in any other village operating below the poverty line, can lift the local economy by 20%. This technology gives access to data, and people can make calls to find out what is the best market to go to today to sell the fish they just caught, or find out what the market price is for their wheat (not just the price that their middleman is quoting).

Access to facts is liberating people, and with facts they can start improving their lives. Once people know something, it cannot be taken away from them and therefore will create a lasting change. People will use that knowledge, data and information to make social and economic improvements.

On a larger scale the same thing happens when access is obtained to facts that go beyond what the local politicians are providing, or hiding. The Arab Spring is a good example here. While its end result is not yet clear there is no way back once people have the facts; again, this is a very liberating experience and will ultimately lead to improving people's lives and lifestyles.

Another of the Broadband Commissioners, Dr Mohamed Ibrahim, the founder of Celtel in Africa, is a staunch supporter of the movement 'one.org'. This grassroots, non-political organisation is concentrating on eradicating extreme poverty and statistics are showing that this could be possible before 2030.

Extreme poverty has already declined and this trend is accelerating. In 1990 43% of the global population fell into the category of extreme poverty; by 2000 this had dropped to 33%; and by 2010 it had dropped further, to 21%. Interestingly, the fastest acceleration of this trend is taking place in most of the poorest countries in Africa.

Rock star and activist Bono stated in a recent TED presentation that the major obstacles to this process of acceleration are inertia, loss of momentum and corruption. The silver lining here, especially in relation to the latter, is that again technology is a driving force for change. With access to communications and facts it becomes much easier to expose corruption. Technology makes it easier to create a more transparent society and, while corruption will never be stamped out altogether, extreme corruption will be greatly reduced.

It is great to work with the Broadband Commission to develop projects and programs, using our technologies, to ensure that the social and economic processes accelerate these positive developments, creating greater equality. The high ranking of those involved makes it possible to get these messages across at the highest levels of government and the highest level governance of the international organisations addressing these issues.

Written by Paul Budde, Managing Director of Paul Budde Communication

Follow CircleID on Twitter

More under: Access Providers, Broadband

Categories: Net coverage

Research Group Releases International Law on Cyber Warfare Manual

CircleID news briefs - Wed, 2013-03-20 20:11

Tallinn Manual on the International Law Applicable to Cyber Warfare
Paperback / ISBN:9781107613775
Publication date: March 2013

The newly released handbook applies the practice of international law with respect to electronic warfare. The Tallinn Manual on the International Law Applicable to Cyber Warfare — named for the Estonian capital where it was compiled — was created at the behest of the NATO Co-operative Cyber Defence Centre of Excellence, a NATO think tank. It takes current rules on battlefield behaviour, such as the 1868 St Petersburg Declaration and the 1949 Geneva Convention, to the internet, occasionally in unexpected ways.

"The product of a three-year project by twenty renowned international law scholars and practitioners, the Tallinn Manual identifies the international law applicable to cyber warfare and sets out ninety-five 'black-letter rules' governing such conflicts. It addresses topics including sovereignty, State responsibility, the jus ad bellum, international humanitarian law, and the law of neutrality. An extensive commentary accompanies each rule, which sets forth the rule's basis in treaty and customary law, explains how the group of experts interpreted applicable norms in the cyber context, and outlines any disagreements within the group as to each rule's application."

Related Links:
First cyber war manual released The Age, Mar.20.2013
Tallinn Manual on the International Law Applicable to Cyber Warfare Cambridge University Press

Follow CircleID on Twitter

More under: Cyberattack, Law, Policy & Regulation

Categories: Net coverage

IPv6: SAVA, Ça va pas?

CircleID posts - Tue, 2013-03-19 23:28

Sender Address Validation and Authentication (SAVA) is the silver bullet. It will send to Cyberia all dark forces that make us shiver when we make a purchase on the internet, pose a threat to our very identities and have made DDoS a feared acronym.

Some of you will remember the heated debates when Calling Line Identification (CLID) was first introduced in telephony. Libertarians of all stripes called passionately to ban such an evil tool threatening our most precious civil liberties like the impunity of calling home from the bar, pretending to be still at work or with a customer. Today everybody welcomes the decline of crank and obscene calls even if telemarketers can continue to be a nuisance. Will SAVA be for the internet what CLID was for telephony?

One of the beauties of the internet design, and at the same time a source of potential vulnerability, is that it forwards packets connectionless, hop by hop, based on the destination address. This has proven a cornerstone of the amazing resiliency and scalability of the internet. The flip side is that it makes address spoofing, the blue box's offspring, more prevalent. Where the blue box once yielded occasional free calls in the 'telephony era', internet address spoofing now substitutes legitimate source addresses to fraudulently obtain personal information from unsuspecting end-users or to wreak havoc by flooding network hosts, DNS systems and even entire networks with DDoS attacks. So much so that a number of ISPs now offer 'scrubbing services' to their customers. Zacks Investment sees cyber security firms as a major investment opportunity. This is surely a growing and lucrative market segment; I might follow their advice.

SAVA was first presented at an IEEE conference in 2007 and subsequently proposed as an RFC to the IETF in 2008, with Tsinghua University of Beijing as lead author. The paper addressed the need for source address verification on the access network, intra-AS within a network, and inter-AS between networks across BGP boundaries. This led to the creation of a quite active IETF working group called SAVI to tackle the subject. An informational draft issued this February provides a good overview of a variety of 'attack vectors' and threats. How fast some of these RFCs will be completed and approved and, more importantly, implemented remains an open question, however.
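
To picture what validation at the access network means in practice: the first hop knows which address block was delegated to each subscriber port, so it can drop any packet whose source address falls outside that block, which is the same idea as classic ingress filtering (BCP 38). The sketch below is a simplified illustration of that check in Python, not an implementation of the SAVA or SAVI specifications, and the port-to-prefix bindings are invented.

    import ipaddress

    # Hypothetical bindings learned at provisioning time: access port -> delegated prefix.
    port_bindings = {
        "port-1": ipaddress.ip_network("198.51.100.0/28"),
        "port-2": ipaddress.ip_network("2001:db8:1::/48"),
    }

    def accept_packet(ingress_port, source_ip):
        """Access-network source validation: forward a packet only if its source
        address belongs to the prefix bound to the port it arrived on."""
        prefix = port_bindings.get(ingress_port)
        if prefix is None:
            return False                                  # unknown port: drop
        return ipaddress.ip_address(source_ip) in prefix

    print(accept_packet("port-1", "198.51.100.7"))        # True: legitimate source
    print(accept_packet("port-1", "192.0.2.99"))          # False: spoofed source, dropped

The inter-AS case is considerably harder, since a transit network has no such per-subscriber binding and has to infer plausible source prefixes from routing information, which is a large part of why deployment across BGP boundaries remains the open question.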

China has reported that it is experimenting with a SAVA implementation in its IPv6-only CNGI (China Next Generation Internet) R&E network, in no less a venue than the United Kingdom's prestigious Philosophical Transactions of the Royal Society. This has in turn triggered some activity in the blogosphere, ranging from the more factual to the more alarming. Concluding yet again that China is light years ahead of the United States in IPv6 deployment remains questionable, however. While CNGI has without question been the benchmark for native IPv6 deployment in a research and education networking environment for many years, China has so far been lagging in the commercial deployment of IPv6. They obviously bide their time.

While some will argue that SAVA would undermine their civil liberties and individual freedom, especially when they prefer anonymity in whatever they are doing on the internet, and others will see it as another step towards big brother watching us, the need for better security is undeniable and even more urgent as we accelerate towards a mobile broadband data environment. IDC predicts that, this year, smartphone sales will for the first time surpass those of feature phones. Mobile operators depend on usage-based services and billing; correctly identifying the source will always remain essential to revenue generation and corporate wellbeing. And what would the impact be of a DDoS attack choking a major LTE network?

Major ISPs and mobile operators might want to track SAVA more closely; ça va ou ça va pas?

Written by Yves Poppe, Director, Business Development IP Strategy at Tata Communications

Follow CircleID on Twitter

More under: DDoS, DNS Security, IPv6, Security

Categories: Net coverage

Google Announces DNSSEC Support for Public DNS Service

CircleID news briefs - Tue, 2013-03-19 22:13

Google today announced that its "Public DNS" service is now performing DNSSEC validation. Yunhong Gu, Team Lead for Google Public DNS, wrote in a post today:

"We launched Google Public DNS three years ago to help make the Internet faster and more secure.Today, we are taking a major step towards this security goal: we now fully support DNSSEC (Domain Name System Security Extensions) validation on our Google Public DNS resolvers. Previously, we accepted and forwarded DNSSEC-formatted messages but did not perform validation. With this new security feature, we can better protect people from DNS-based attacks and make DNS more secure overall by identifying and rejecting invalid responses from DNSSEC-protected domains."

Follow CircleID on Twitter

More under: DNS, DNS Security, Security

Categories: Net coverage
