Michele Neylon, Blacknight CEO, Elected as Chair of Registrar Stakeholder Group of ICANN

Michele Neylon, CEO of Blacknight, announced today his election as Chair of the Registrar Stakeholder Group of ICANN; he is the first European ever to hold the position. The Registrar Stakeholder Group (RrSG) is one of several Stakeholder Groups within the ICANN community and is the representative body of domain name Registrars worldwide. It is a diverse and active group that works to ensure the interests of Registrars and their customers are effectively advanced. The chair, in consultation with the executive committee and members, organises the work of the Stakeholder Group and conducts RrSG meetings. The chair often confers with others in the ICANN community on Registrar-related policy and business issues, and is the primary point of contact between the RrSG and ICANN staff. Neylon has previously served as the Secretary to the RrSG and is the only European member of the executive committee.
One Year Later: Who's Doing What With IPv6?

One year on from the World IPv6 Launch in June 2012, we wanted to see how much progress has been made towards the goal of global IPv6 deployment. Both APNIC and Google are carrying out measurements at the end-user level, which show that around 1.29% (APNIC) and 1.48% (Google) of end users are capable of accessing the IPv6 Internet. Measurements taken from this time last year show 0.49% (APNIC) and 0.72% (Google), which means the number of IPv6-enabled end users has more than doubled in the past 12 months. Rather than looking at the end user, the measurements the RIPE NCC conducts look at the networks themselves. To what extent are network operators engaging with IPv6? And how ready are they to deploy it on their networks?

IPv6 RIPEness

The RIPE NCC measures the IPv6 "readiness" of LIRs in its service region by awarding stars based on four indicators. LIRs receive stars when:

• they have received an allocation of IPv6 address space;
• the IPv6 prefix is announced and visible in the global routing system;
• a route6 object for the prefix is registered in the RIPE Database; and
• reverse DNS delegation is set up for the prefix.
The pie charts below show the number of LIRs holding 0-4 RIPEness stars at the time of the World IPv6 Launch in June 2012, and the number today.
The first RIPEness star is awarded when the LIR receives an allocation of IPv6 address space. When we look at the charts above, we see that the number of LIRs without an IPv6 allocation has decreased from 50% at the time of the World IPv6 Launch to 39% today. One factor that shouldn't be overlooked here is that the current IPv4 policy requires that an LIR receive an initial IPv6 allocation before it can receive its last /22 of IPv4 address space. However, this does not explain the increase in 2-4 star RIPEness, which can only come from LIRs working towards IPv6 deployment.

Five-Star RIPEness

At the recent RIPE 66 Meeting in Dublin, we presented the results from our introduction of a fifth RIPEness star, which is still in the prototype stage. This fifth star measures actual deployment of IPv6. It looks at whether LIRs are providing content over IPv6 and the degree to which they are providing IPv6 access to end users. More information on the fifth star and the methodology behind it can be found on RIPE Labs. In this first version, 573 LIRs in the RIPE NCC service region qualify for the fifth star, which represents 6.24% of all LIRs in the region.

The Day We Crossed Over

Coincidentally, the World IPv6 Launch was around the same time as another milestone for the RIPE NCC service region. It was roughly then that the number of LIRs with IPv6 allocations outnumbered those without IPv6 for the first time. This number has continued to increase, and there are currently 5,630 LIRs with IPv6 and 3,584 without. The blue line on the graph below represents LIRs with an IPv6 allocation, while the red line indicates those with no IPv6.
ASNs Announcing IPv6

One of the things the RIPE NCC regularly checks is the percentage of autonomous networks announcing one or more IPv6 prefixes into the global routing system. This is an important step before a network can begin exchanging IPv6 traffic with other networks. When we take a global view using the graph, we see that in the year since the World IPv6 Launch, the percentage of networks announcing IPv6 has increased from 13.7% to 16.1%. Of the 44,470 autonomous networks visible on the global Internet, 7,168 are currently announcing IPv6. When we adopt a regional perspective, one of the things we would hope to see is increasing IPv6 deployment in those regions where the free pool of IPv4 has been exhausted. It is reassuring to see this confirmed — both the APNIC and the RIPE NCC service regions are leading the way, with 20.0% and 18.1% (respectively) of networks announcing IPv6. The table below compares the percentage of autonomous networks announcing IPv6 — both now and at the time of the World IPv6 Launch in 2012.
The RIPE NCC's graph of IPv6-Enabled Networks (below) tracks this over time and allows comparisons between countries and regions.
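As a side note, the figures quoted above are easy to sanity-check. A quick sketch, using only the numbers already cited in this article:

```python
# Sanity-check of the growth figures quoted above.
end_user_share = {"APNIC": (0.49, 1.29), "Google": (0.72, 1.48)}  # % of users, June 2012 vs June 2013
for source, (year_ago, now) in end_user_share.items():
    print(f"{source}: {now / year_ago:.2f}x growth in IPv6-capable end users")

ipv6_asns, total_asns = 7168, 44470  # ASNs announcing IPv6 / all visible ASNs
print(f"ASNs announcing IPv6: {100 * ipv6_asns / total_asns:.1f}%")  # prints 16.1%
```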
Reassuring, But The Real Work Is Still Ahead

While the above statistics provide good cause for optimism, there is still a long way to go. Now, more than ever, network operators need to learn about IPv6 and deploy it on their networks in order to safeguard the future growth of the Internet. To find out more about IPv6, visit IPv6ActNow.

Written by Mirjam Kuehne
IPv6: Less Talk and More Walk

The sixth month of the year is both symbolic and historic for IPv6, and a good time to take stock and see how we've progressed. But instead of looking at the usual suspects of number of networks, number of users, number of websites, etc. on IPv6, let's look at some new trends to see what's happening. At gogo6 we've been measuring the "Buzz" of the IPv6 market every week over the last two and a half years. Each tweet, blog and news story on IPv6 has been counted, categorized and indexed for posterity. By graphing the 102,641 tweets, 6,620 blogs and 4,251 news stories during that time, we capture the "Talk" of the market. Reviewing Graph 1 shows spikes in the right places, but what is striking is the definitive downward trend in volume as time goes on. The "Talk" is going down.
This could be interpreted as a slowing of interest or as a job complete, so next I dug into the gogoNET social network database. By plotting the registration dates of the 47,142 networking professionals who joined during this same period, I could infer the level of interest and the work being done in deploying IPv6. The resulting trend line in Graph 2 is flat, indicating a constant interest and flow of networking professionals preparing to implement IPv6. These are the "Workers".
The fruit of this steadfast labor pool can be seen in Graph 3. Plotting the first derivative of the IPv6 Adoption curve generated by the Google Access Graph over the same period yields a normalized curve of new IPv6 users. Though the original data is noisy, there is a definitive upward trend, indicating that the rate of new users is increasing over time. And this is what I call the "Walk" — the tangible result of the constant stream of IPv6 workers.
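The "first derivative" step is straightforward to reproduce: given a time series of adoption percentages, the period-over-period differences approximate the rate at which new users arrive. A minimal sketch with hypothetical sample data, assuming numpy (the real input would be Google's adoption series):

```python
import numpy as np

# Hypothetical monthly IPv6 adoption percentages; the real input is Google's series.
adoption = np.array([0.72, 0.76, 0.83, 0.88, 0.97, 1.08, 1.15, 1.27, 1.36, 1.48])

new_users = np.diff(adoption)  # first difference approximates the first derivative
# A short moving average tames the noise in the raw differences.
smoothed = np.convolve(new_users, np.ones(3) / 3, mode="valid")

print(new_users)   # an upward trend here means adoption is accelerating
print(smoothed)
```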
The big headline on this one-year anniversary of World IPv6 Launch is that the number of IPv6 users has doubled. Taking a closer look indicates a market starting to get the job done. Sure, there are more people using IPv6, but more importantly this is happening at an increasing rate — the result of a constant stream of new workers walking the walk by spending less time on navel gazing and more time on doing. Less talk and more walk.

Written by Bruce Sinclair, CEO, gogo6
France Drops Its Internet "Three Strikes" Anti-Piracy Law

France has put an end to the most extreme measure of its notorious "three strikes" anti-piracy law, which came into effect in 2009. Cyrus Farivar, reporting in Ars Technica: The law is better known by its French acronym, Hadopi. In the last few years under the law, the Hadopi agency famously set up a system with graduating levels of warnings and fines. The threat of being cut off entirely from the Internet was the highest degree, but that penalty was never actually put into place. "Getting rid of the cut-offs and those damned winged elephants is a good thing. They're very costly," said Joe McNamee of European Digital Rights.
ICANN Auctions or Private Auctions?

By this time next year the allocation of the new Internet namespace will be complete. Several hundred contention sets, ranging from likely blockbusters like .WEB to somewhat less obvious money-makers like .UNICORN, will be decided by some method. One way to resolve contention is to form a joint venture. We are in the process of doing this with Uniregistry for .country. That works well when there are only two competitors and there's a good basis of trust, and it's a great solution because there are no losers. But if there are three or more competitors, or if you don't like and trust your prospective partner-to-be, this really isn't an option. Realistically, there will be only a limited number of joint ventures. It may happen that you and a competitor are head-to-head on two strings, and if so, a second method for resolving contention is a straight-across trade. It's not a bad solution: it's cashless, it's quick, and each party gets something. But it's not as easy as it might at first appear. Some people want to win everything, so they view this solution as a loss. And who gets to pick first? Is a random draw an acceptable solution? If you can't manage one of these two solutions, you're left with either arranging a private deal with someone (again, not very realistic if there are more than two parties), or else you're going to auction. There are two kinds of auction being talked about: ICANN's "mechanism of last resort," or a "public auction" as it's often called; and the much-debated private auctions. The ICANN auction is simple enough: it's an ascending floor auction. The auctioneer asks (electronically) who's in at $100K, $250K, $1M, $2M, and so on. The last one left standing pays their money and walks away with the TLD. The losers walk home with nothing except some memories and a 20% refund on their application fee. ICANN walks away with a bundle of cash to add to the dragon's hoard of $350M that they have already reaped in application fees. A private auction is just an auction between companies, but without ICANN. The parties involved decide on the rules, so it may be in any auction format, but the favorite today is to ape the ICANN format exactly — with one important difference — instead of ICANN getting the money, it's split evenly among the losers. If you have more than one or two applications, a little money can go a long way (it's also good for single applicants; see below). Let's suppose, for example, that you are head-to-head with someone for .tremendous. The bidding goes up and up, but in the end your competitor likes it more and pays you $2M for .tremendous. The next day, you and your competitor are back again for .fantastic. This time, you value it more, and you win, again for $2M. Result: you both have $2M and you both have a TLD. Except for the auctioneer's fee, it ends up being a cashless transaction. Across multiple private auctions, this recycling of cash is writ large. With a modest war chest, you can lose more auctions and walk away with more money than you started with; you can win and lose in equal monetary proportions and end up cash-neutral; or you can try to win more value than you lose and spend your war chest in exchange for TLDs. As long as auction prices are stable relative to one another, even a modest amount of cash will enable you to walk away with a return. Compare this to ICANN auctions, where you get nothing if you lose, and winning one auction could mean that you're unable to compete in any others.
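To make the settlement mechanics concrete, here is a minimal sketch of the payout rule described above. Names and figures are hypothetical, and the auctioneer's fee is ignored:

```python
def settle_private_auction(bids):
    """Winner pays their bid; the proceeds are split evenly among the losers."""
    winner = max(bids, key=bids.get)
    losers = [bidder for bidder in bids if bidder != winner]
    payout = bids[winner] / len(losers)
    return {bidder: (-bids[winner] if bidder == winner else payout) for bidder in bids}

# The .tremendous / .fantastic example from above, with two bidders:
day1 = settle_private_auction({"us": 1_900_000, "them": 2_000_000})  # they win .tremendous
day2 = settle_private_auction({"us": 2_000_000, "them": 1_900_000})  # we win .fantastic
print(day1, day2)  # each side ends up with one TLD, and the cash nets out
```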
Is this analysis only relevant for portfolio players like us? On inspection, the logic of the benefit holds no matter how many strings you are in contention for. If you have a single string, you should bid up to that string's value — given your financial resources — in either a public or private auction. In each case, a competitor who places a higher value on the TLD, and who has the financial resources, will ultimately beat you. The question is: do you want to be compensated for that loss? It took us a while to get our heads around private auctions. Actually, auctions of any kind take some time to understand, but at first blush an auction under the aegis of ICANN seemed safer; private auctions provoked a lot of questions. For instance, what if someone overbids in order to drive the price up and get a bigger payout from you? If you win, why should your money go to a competitor who might use your money to beat you at the next auction? Are the bid prices in general going to be higher or lower in a private auction? The basic answer to all these questions is that you should bid up to what you think a TLD is worth, and no more. If you follow that rule, you should do well in a private auction. Auction participants must have an idea of what they believe a TLD is worth. For example, if Minds + Machines bid on .awesome, we would estimate how many .awesome registrations we could sell in a given year, how many premium .awesome names we could sell, and what the brand uptake might be for a .awesome sunrise. We would then translate that into a discounted net present value for the TLD, and in no case bid higher than that in either a public or private auction. Keeping this in mind should spare you all kinds of woe, and it's equally valid in an ICANN or a private auction. What about someone overbidding to drive up the price? If you were up against Awesome Industries, the 10-billion-dollar king of awesome products, you might be tempted to overbid for .awesome in a private auction with the view of getting a higher payout. But that's a dangerous strategy, because the reality is that at any instant Awesome could drop out, leaving you with a very, very expensive bid for something you don't think is really that… awesome. Everyone has their limit, even Awesome Industries. On the flip side, you might worry that by winning .great for $2M in a private auction you will be providing cash to your competitor for the next auction, for .amazing. But if they overbid on .amazing, beating you, you should be pleased to take their money — leaving you with a bunch of cash as well as the TLD that you think is more amazing than .amazing. Nobody likes the idea of enriching the competition. But consider your options: you can either lose a private auction and be compensated by the winner, or lose an ICANN auction and walk away with nothing.
Which is the rational choice? Game theory says that it's the first choice. There's another important point in favor of private auctions: we believe that our competitors will do more to promote the TLD space in general (thereby helping us) than ICANN will. So we're actually happy about paying our competitors instead of ICANN, as we view it as an investment in the promotion of the entire new gTLD program. We struggled for a while with the question of which auction process, ICANN or private, would produce the higher winning bids. We think that private auction prices will be lower, for the simple reason that people who've been at this as long as we have are going to inevitably fall victim to the dreaded sunk cost fallacy and are going to hate the idea of walking away with nothing. The loser's consolation prize mitigates this effect, we think, and works to keep people from overbidding simply to avoid being totally skunked. Even larger corporate players like Google or Amazon have an economic incentive to enter private auctions, because overall it will lower their cost of acquiring their name portfolios. We don't believe that Google will pay any amount for .LOL — and even Google and Amazon could find themselves outbid for some strings which are not core to their business models. Then there is the issue of anti-trust. Do private auctions constitute bid-rigging? I talked to the Justice Department, who told me that they might or might not issue a letter giving guidance, which might or might not say that private auctions were good or bad. In other words, they told me nothing at all. We've all seen opinions issued by lawyers on both sides of the question, and we've had our own lawyers opine as well. It's clearly an untested area, and that means it carries some risk. In the end, we decided that while we are bound to follow the law, we are not bound — in fact we're not qualified, and neither is anyone else — to decide what the law might or might not become. In our experience, successful startups do not succeed without taking risks, and they do not succeed if they let themselves be ruled by lawyers pointing out potential risks — that way madness lies. Collusion carries with it an implication of secrecy, of back-room deals, but private auctions are advertised and anyone with a contested application can join in. ICANN, perhaps the most conservative, risk-averse, lawyer-driven organization in our industry, clearly encourages applicants to "work it out", and we think that private auctions are a fair and open way to do so. The first private auction is being held in a few days, and we'll see if anyone gets a letter or phone call from the government. We feel that it's a remote possibility. Minds + Machines will proceed with private auctions, though we won't participate in the first set: given ICANN's history of delays, and the fact that it has not yet delegated a single new gTLD, we don't see any need to enter private auctions just yet. But we intend to do so when the time is right: it's to our benefit, to the benefit of our competitors and to the industry generally. We also believe it's to ICANN's benefit and to the benefit of consumers, because while the ICANN auctions would leave applicants even more depleted of cash and unable to invest in marketing, research, and technology, private auctions will provide money to create healthy, vigorous registries that will fulfill ICANN's mission to create choice and competition in the top-level namespace.
Written by Antony Van Couvering, CEO of Minds + Machines
The Rise of Cyrillic Domain Names

This week, on a cruise ship navigating Russia's Neva river, around 250 domain registrars and resellers are gathered for the RU-CENTER annual domain conference. RU-CENTER is the largest Russian registrar in a market that is dominated by three companies. RU-CENTER and competitor Reg.Ru both manage around 28% of domains registered in the country's national suffix .RU, whilst a third registrar, R-01 (also part of the RU-CENTER group of companies), has 18%. RU-CENTER is also a figurehead for Russia's drive to make Internet use more palatable for those who are not natural ASCII writers. Because the Latin alphabet has been the only available option to browse the Internet up until now, and because Russian Internet users learn Latin characters anyway, having to use a foreign script has not dampened their drive to get online and reap the Web's many benefits. But give them a chance to type in Cyrillic, and Russians jump at it. That became evident when the country launched its own Cyrillic country code, Dot RF (Latin script spelling). Pent-up demand for full local-language web addresses meant Dot RF exceeded all registration expectations as soon as it went online. Now in its third year, the TLD already has almost 800,000 registrations, compared to the 4.5 million names in Russia's original (ASCII) ccTLD Dot RU. That trend could grow as the new gTLD program allows further Cyrillic script suffixes to be created on the Internet. "Even with new gTLDs, it seems the market is underestimating the potential for Cyrillic domains," says RU-CENTER's Marketing Director Pavel Khramtsov. "We've only seen 8 IDN applications in Cyrillic script in this first round." By initiating the implementation of Dot Moskva, RU-CENTER became one of these Cyrillic IDN pioneers. The domain is one of a pair, the other being the Latin character version Dot Moscow (www.domainmoscow.org). "That's where we expect the highest demand to come from," explains Sergey Gorbunov, head of RU-CENTER's International Relations division, "because Russians have become so used to Latin characters as the default that the ASCII string 'moscow' is likely to be considered the more recognisable brand when they both launch." But the Muscovite pair may help change that. On top of the hunger for Cyrillic URLs that Russians have shown through Dot RF, the Moscow twins stand to help further the Cyrillic IDN cause because of their geography. The majority of people accessing the Internet in Russia do so from the Moscow area. "On average, around 40% of the total pool of Dot RU and Dot RF domains are registered by people from the Moscow area," Gorbunov says. The country as a whole has an Internet penetration rate of around 70%, with up to 50 million Russians going online every day, making Russia the number one source of Internet users in Europe (European economic heavyweight Germany is number two). But most of that traffic comes from Moscow. So having a local script TLD for Russia's capital may make the rise of the Cyrillic script on the Internet even more of a reality. Both TLDs have been prepared and applied for with the support of the local authorities. The project is being coordinated by a non-profit foundation called the Foundation for Assistance for Internet Technologies and Infrastructure Development (www.faitid.org), or FAITID for short (pronounced "fated").
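As an aside on the mechanics: in the DNS itself, a Cyrillic name such as Dot RF is carried in an ASCII-compatible "Punycode" form with an xn-- prefix. A quick sketch using Python's built-in idna codec (which implements the older IDNA2003 rules; production registries use the newer IDNA2008):

```python
# Cyrillic labels are mapped to ASCII-compatible "xn--" forms for the DNS.
for name in ["рф", "пример.рф"]:  # .рф is Russia's Cyrillic ccTLD ("Dot RF")
    ascii_form = name.encode("idna").decode("ascii")
    print(f"{name} -> {ascii_form}")
# рф -> xn--p1ai
# пример.рф -> xn--e1afmkfd.xn--p1ai
```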
FAITID's governance structure is based on the multi-stakeholder model and brings together users, local government, business and industry to ensure that Dots Moscow and Moskva serve the local Internet community. As an example, FAITID and Moscow city officials are working on reserving a number of second-level domains such as school.moscow or museum.moscow for public service use. The plan was to launch both TLDs at the same time. "For domains like Dot Moscow and Dot Moskva, it's easier to launch them as one, with a single roll-out and marketing plan," says Gorbunov. "It also means less confusion for the end-user." But ICANN's prioritisation draw put paid to those plans. As an IDN, Dot Moskva was a priority application in the December 2012 draw used by ICANN to determine the processing order for the 1,930 applications it has received. Moskva drew number 69, as compared to Dot Moscow's 881: a huge gap, which means the only way for both launches to coincide is for one to be put on hold whilst the other plays catch-up. "Because of the draw results, FAITID is now planning a long Sunrise period for Dot Moskva before moving on to initiate the rest of the launch schedule when Dot Moscow gets the green light," Gorbunov reveals. It's still unclear when that would be, partly because of general contract negotiations currently going on between ICANN and both the registrars and the registries, which need to be resolved before ICANN can put contracts on the table for new gTLD operators to sign. But even when that happens, Moscow will have to wait for more contract negotiations to be done. This time, it will be direct talks between FAITID and ICANN. "The current registry contract as proposed has clauses which are illegal under Russian law," Gorbunov explains. "We also have problems with the trademark clearinghouse, because Russian law requires FAITID to give Russian trademarks priority. So doing a Sunrise where the trademarks registered in the clearinghouse get priority is a challenge." FAITID is hoping for a November Dot Moskva Sunrise. That assumes two months for these specific contract negotiations, so that the foundation no longer finds itself between a rock and a hard place, having to decide whether to fall foul of its national law by signing the ICANN contract as-is, or to refuse the contract and risk having to give up on its TLD application. That would be a pity for the millions of Internet users around the world who want to be able to type their web addresses in their own alphabet, especially as Russia's innovative and cutting-edge Internet community could push the IDN system as a whole to new heights. "Email use remains limited with IDNs," says Khramtsov. "Most of the systems currently available can only work if both sender and receiver use the same technology provider. But this limited use does have some advantages. Spam is non-existent with Dot RF emails, for example, because the technology for doing IDN spam just isn't there." Imagine an Internet where spam is heavily reduced by applying new techniques first developed for namespaces which are younger and can therefore afford to start afresh and apply new solutions to problems which have plagued the ASCII Internet since its inception. This is just one way in which the development of local-language web addresses could help the Internet as a whole, ASCII namespace included.
It would be a true embodiment of one of the new gTLD program's founding goals: "to open up the top level of the Internet's namespace to foster diversity, encourage competition, and enhance the utility of the DNS."

Written by Stéphane Van Gelder, Chairman, STEPHANE VAN GELDER CONSULTING
Switzerland Overtakes Romania as Top IPv6 Adopter

According to recent statistics by Google, Switzerland has claimed the top spot for IPv6 adoption, passing Romania, which had topped the charts for nearly a year. Jo Best, reporting in ZDNet: IPv6 adoption stands at 10.11 percent in Switzerland — the highest penetration of any country, according to stats from Google, which takes a snapshot of adoption by measuring the proportion of users that access Google services over IPv6… It's been suggested that the sudden spike in Switzerland's IPv6 adoption has been down to Swisscom, the country's biggest telco with around 55 percent of the broadband market and 60 percent of mobile, moving to adopt it. Comparison of IPv6-Enabled Web Browsers in Different Countries (Source: Google)
The Company You Keep

This story started earlier this year, with a posting to the Australian network operators' mailing list asking if anyone had more information about why the web site operated by an outfit called "Melbourne Free University" was inaccessible through a number of major Australian ISPs. When they asked their local ISP if there was some issue, they were informed that "this was due to an Australian government request," and that the ISP could say no more about it. This was unusual, as it was very hard to see how this site would fall within the gamut of Australian Internet censorship efforts, or fall foul of various law enforcement or security investigations. As the name suggests, their web site was all about a community-based educational initiative. To quote from their web site: "The Melbourne Free University provides a platform for learning, discussion and debate which is open to everyone [...] and aims to offer space for independent engagement with important contemporary ideas and issues." What dastardly crime had the good folk at the Melbourne Free University committed to attract such a significant response? One Australian technology newsletter, Delimiter (delimiter.com.au), subsequently reported that its investigation revealed that the Australian Securities and Investments Commission (ASIC) had used its powers under Section 313 of the Australian Telecommunications Act (1997) to demand that a network block be applied by local Internet Service Providers. That section of the Act falls under Part 14, the part that is described as "National Security Matters." The mechanics of the network block were a demand that all Australian ISPs block IP-level access to the IP address 198.136.54.104. As it turned out in subsequent ASIC announcements, it wasn't Melbourne Free University that had attracted ASIC's interest. What had led to this block was an investment scam operation that had absolutely nothing in common with the Melbourne Free University. Well, almost nothing. They happened to use a web hosting company for their web site, and that web hosting company used name-based virtual hosting, allowing multiple web sites to be served from a common IP address. Some financial scammers attracted ASIC's interest, and ASIC formed the view that the scammers had breached provisions of Australian corporate and/or financial legislation. It used its powers under Section 313 to require Australian ISPs to block the web site of this finance operation. However, the critical aspect here was that the block was implemented as a routing block in the network, operating at the level of the IP address. No packets could get to that IP address from any customers of ISPs that implemented the requested network-level block. The result was that the financial scammers were blocked, but so were Melbourne Free University and more than a thousand other web sites. At this point the story could head in many different directions. There is the predominately Australian issue of agency accountability in the use of Section 313 of the Australian Telecommunications Act (1997) to call for the imposition of network-level blocks by Australian carriers, or concerns over the more general ability under this section for Australian government agencies to initiate such blocking of content without clear accountability and evidently without consistent reporting mechanisms.
However, what specifically interests me here is not the issues of agency behaviors and matters of the application of national security interests and criminal investigations. What interests me here is that this story illustrates another aspect of the collateral damage that appears to have arisen from IPv4 address exhaustion. How do we make too few IP addresses span an ever-growing Internet? Yes, you can renumber all the network's internal infrastructure into private addresses, and reclaim the public addresses for use by customers and services. Yes, you can limit the addresses assigned to each customer to a single address, and require the customer to run a NAT to share this address across all the devices in their local network. And these days you may well be forced to run up larger Carrier Grade NATs (CGNs) in the interior of your network so that each public IP address can be shared across multiple customers. What about at the server side of the client/server network? If you can get multiple clients to share an address with a CGN, can you share a single public address across multiple services? For the web service at least, the answer is a clear "yes". The reason why this can be done so readily in the web is because the HTTP 1.1 protocol specification includes a mandatory Host request-header field (see "Hypertext Transfer Protocol — HTTP/1.1", R. Fielding et al., RFC 2616, June 1999). This field is the DNS name part of the URL being referenced, and must be provided by the client upon every GET request. When multiple DNS names share a common IP address, a web server can distinguish between the various DNS names and select the correct server context by examining the Host part of the request. This allows the server to establish the appropriate virtual context for every request. This form of virtual hosting appears to be very common. It allows a large number of small-scale web servers to co-exist on a single platform without directly interfering with each other, and allows service providers to offer relatively low-priced web service hosting. And it makes highly efficient use of IP addresses by servers, which these days is surely a good thing. Right? Well, as usual, it may be good, but it's not all good. As Melbourne Free University can attest, the problem is that we have yet to come to terms with these address sharing practices, and as a result we are still all too ready to assign reputation to IP addresses, and filter or otherwise block these IP addresses when we believe that they are used for nefarious purposes. So when one of the tenants on a shared IP address is believed to be misbehaving, it's the common IP address that often attracts the bad reputation, and it's the IP address that often gets blocked. And a whole lot of otherwise uninvolved folk are then dragged into the problem space. It seems that in such scenarios the ability of clients to access your service does depend on all your online neighbors who share your IP address also acting in a way that does not attract unwelcome attention. And while you might be able to vet your potential online neighbors before you move your service into a shared server, such diligence might not be enough, in so far as it could just as easily be the neighbor who moves in after you that triggers the problem of a bad reputation. Exactly how widespread is address sharing on the server side? I haven't looked at that question myself, but others have.
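To make the mechanism concrete, here is a minimal sketch of name-based virtual hosting, using Python's standard library and hypothetical host names: one listening socket, one IP address, and the mandatory Host header selecting which site answers.

```python
# A minimal sketch of name-based virtual hosting (hypothetical names).
# One socket, one IP address, many sites, selected by the HTTP/1.1 Host header.
from http.server import BaseHTTPRequestHandler, HTTPServer

SITES = {  # many DNS names, one shared IP address
    "www.university.example": b"A community education site",
    "www.finance.example": b"An entirely unrelated tenant",
}

class VirtualHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0]
        body = SITES.get(host)
        if body is None:
            self.send_error(404, "No such virtual host")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), VirtualHostHandler).serve_forever()
```

An IP-level block aimed at any one entry in that table necessarily takes out all of them. There is a web site that will let you know about the folk who share the same address.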
When I enter the address 198.136.54.104 into http://sameid.net, I see that more than a thousand various domain names are mapped to this particular IP address. So in the case of Melbourne Free University, they were relying on an assumption that none of these 1,000 unrelated online services were attracting unwelcome attention. Does this sharing apply to all forms of web services? Do secure web sites that use Transport Layer Security (TLS) have their own IP address all the time, or are we seeing sharing there as well? By default, sharing of secure web services requires that all the secure web service names that coexist on the same service IP address be named in the public key certificate used within the startup of the TLS session. This means that the transport key security is shared across all the services that are located at the same IP address, which, unless the set of services are actually operated by a single entity, represents an unacceptable level of compromise. For this reason, there is a general perception that if you want to use a TLS-enabled secure channel for your service then you need your own dedicated IP address. But that's not exactly true. Back in 2006 the IETF published RFC 4366, which describes extensions to TLS that allow each service on a shared service platform to use its own keys for TLS sessions. (This has subsequently been obsoleted by a revised technical specification, RFC 6066.) The way this is done is that the client can include the server name of the service connection when it starts the TLS session, allowing the server to then respond with the desired service context, and allowing the session to start up using keys associated uniquely with the named service. So if the server and the client support this Server Name Indication (SNI) extension in TLS, then it is possible to use name-based server sharing and also support secured sessions for service access. Now if you are running a recent software platform as either a server or a client, then it is likely that SNI will work for you. But if the client does not support SNI, such as the still relatively ubiquitous Windows XP platform, or version 2 or earlier of the Android platform, then the service client does not recognize this TLS extension and will encounter certificate warnings, and be unable to use the appropriate secure channel. At this stage SNI use still appears to be relatively uncommon, so while it is feasible to use a shared server platform for a secure service, most secure services tend to avoid that, use a dedicated IP address, and not require specific extension functionality from TLS. But back to the issue of shared service platforms and IP-level imposed filtering. Why are IP addresses being targeted here? Why can't a set of distinct services share a common platform, yet lead entirely separate online lives? If there are diverse unrelated services located on a common IP address, then maybe a filter could be constructed at the DNS phase rather than by blocking traffic at the IP address. Certainly the DNS has been used as a blocking measure in some regimes. In the world of imposed filters we see filter efforts directed at both the DNS name and at the IP address. Both have their weaknesses. The DNS filters attempt to deny access to the resource by requiring all DNS resolvers in a particular regime not to return an IP address for a particular DNS name. The problem here is that circumvention is possible for those who are determined to circumvent such imposed DNS filters.
There are a range of counter-measures, including using resolvers located in another regime that does not block the DNS name, running your own DNS resolver, or patching the local host by adding the blocked DNS entry to the local hosts file. The general ease of circumvention of this approach supports the view that the DNS filter approach is akin to what Bruce Schneier refers to as "security theatre." In this case the desired outcome is to be seen to be doing something that gives the appearance of improved security, as distinct from actually doing something that is truly effective in improving the security of the system. An IP block is the other readily available approach used as a service filter mechanism. Its implementation can be as simple as an eBGP feed of the offending (or offensive) IP addresses where the BGP next hop address is unreachable. It has been argued that such network filter mechanisms can be harder to circumvent than a DNS-based filter, in that you need to perform some form of tunneling to pass your packets over the network filter point. But in these days of Tor, VPNs and a raft of related IP tunnel mechanisms, that's hardly a forbidding hurdle. It may require a little more thought and configuration than simply using an open DNS service to circumvent a DNS block, which may make it a little more credible as an effective block. So it should not be surprising to learn that many regulatory regimes use this form of network filtering on IP addresses as a means of implementing blocks. However, this approach of blocking IP addresses assumes that IP addresses are not shared, and that blocking an IP address is synonymous with blocking the particular service that is located at that address. These days that's not a very good universal assumption. While many services do exist that are uniquely bound to a dedicated IP address, many others exist on shared platforms, where the IP address is no longer unique to just one service. It's not just the "official" IP blocks that can cause collateral damage in the context of shared service platforms. In the longstanding efforts to counter the barrage of spam in the email world, the same responses of maintaining blocking filters based on domain name and IP address "reputation" are used. Once an IP address gains a poor reputation and is placed on these lists as a spam originator, it can be a challenging exercise to "cleanse" the address, as such lists of spamming IP addresses exist in many forms and are maintained in many different ways. In sharing IP addresses, it's not just the collection of formal and informal IP filters that pose potential problems for your service. In the underworld of Denial of Service (DOS) attacks, the packet-level saturation attack is also based on victimizing an IP address. So even if your online neighbor has not attracted some form of official attention, and has not been brought to the attention of the various spam list maintainers, there is still the risk that your neighbor has managed to invite the unwelcome attentions of a DOS attack. Again your online service is then part of the collateral damage, as when the attack overwhelms the common service platform, all the hosted services inevitably fall victim to the attack. For those who can afford it, including all those who have invested what is for them significant sums of money and effort in their online service, using a dedicated service platform, and a dedicated IP address, is perhaps an easy decision to make.
When you share a service platform, your online presence is always going to be vulnerable to the vagaries of your neighbors' behavior. But there are costs involved in such a decision, and if you cannot afford it, or do not place such a premium value on your online service, then using a shared service platform often represents an acceptable compromise of price and service integrity. Yes, the risks of your neighbors attracting unwelcome attention are a little higher, but that may well be an acceptable risk for your service. And in those cases when your service is located on a shared service platform, if the worst case does happen, and you subsequently find that your service falls foul of a sustained DDOS attack that was launched at the common service platform, or your service address becomes the subject of some government agency's IP filter list, or is listed on some spam filter, you may take some small comfort in the knowledge that it's probably not personal. It's not about you. But there may well be a problem with the online company you keep. Postscript: What about www.melbournefreeuniversity.org? They moved hosts. Today they can be found at 103.15.178.29, on a server rack facility operated within Australia. Are they still sharing? Well, sameid.net reports that they share this IP address with www.vantagefreight.com.au. I sure hope that they've picked better online company this time around!

Written by Geoff Huston, Author & Chief Scientist at APNIC
Moving Beyond Telephone Numbers - The Need for a Secure, Ubiquitous Application-Layer Identifier

Do "smart" parking meters really need phone numbers? Does every "smart meter" installed by electric utilities need a telephone number? Does every new car with a built-in navigation system need a phone number? Does every Amazon Kindle (and similar e-readers) really need its own phone number? In the absence of an alternative identifier, the answer seems to be a resounding "yes" to all of the above. At the recent SIPNOC 2013 event, U.S. Federal Communications Commission CTO Henning Schulzrinne gave a presentation (slides available) about "Transitioning the PSTN to IP" where he made a point about the changes around telephone numbers and their uses (starting on slide 14) and specifically spoke about this use of phone numbers for devices (slide 20). While his perspective is obviously oriented to North America and country code +1, the trends he identifies point to a common problem: what do we use as an application-layer identifier for Internet-connected devices? In a subsequent conversation, Henning indicated that one of the area codes seeing the largest number of requests for new phone numbers is one in Detroit — because of automakers' need to provision new cars with navigation systems such as OnStar that need an identifier.

Why Not IPv6 Addresses?

Naturally, doing the work I do promoting IPv6 deployment, my first reaction was of course: "Can't we just give all those devices IPv6 addresses and be done with it?" The answer turns out to be a bit more complex. Yes, we can give all those devices IPv6 addresses (and almost certainly will, as we are simply running out of IPv4 addresses), but:

1. Vendors Don't Want To Be Locked In To Infrastructure – Say you are a utility and you deploy 1,000 smart meters in homes in a city that all connect back to a central server to provide their information. They can connect over the Internet using mobile 3G/4G networks, and in this case they could use an IPv6 address or any other identifier. They don't need to use a telephone number when they squirt their data back to the server. However, the use of IP addresses as identifiers then ties the devices to a specific Internet Service Provider. Should the utility wish to change to a different provider of mobile Internet connectivity, they would now have to reconfigure all their systems with the new IPv6 addresses of the devices. Yes, they could obtain their own block of "Provider Independent (PI)" IPv6 addresses, but now they add the issue of having to have their ISP route their PI address block across that provider's network.

2. Some Areas Don't Have Internet Connectivity – In some places where smart meters are being deployed, or where cars travel, there simply isn't any 3G/4G Internet connectivity, and so the devices have to connect back to their servers using traditional "2G" telephone connections. They need a phone number because they literally have to "phone home".

While we might argue that #2 is a transitory condition while Internet access continues to expand, the first issue of separating the device/application identifier from the underlying infrastructure is admittedly a solid concern.

Telephone Numbers Work Well

The challenge for any new identifier is that telephone numbers work rather well. They are:

• Ubiquitous – they work across virtually every network, device and country.
• Understood – people and businesses already know what they are and how to use them.
• Easy to use – a short, purely numeric string that any keypad can enter.
• Already embedded in existing billing and provisioning systems.
For all these reasons, it is understandable that device vendors have chosen phone numbers as identifiers.

The Billing / Provisioning Conundrum

The last bullet above points to a larger issue that will be a challenge for any new identifier. Utilities, telcos and other industries have billing and provisioning systems that in some cases are decades old. They may have been initially written 20 or 30 (or more) years ago and then simply added on to in the subsequent years. These systems work with telephone numbers because that's what they know. Changing them to use new identifiers may be difficult or in some cases near impossible.

So Why Change?

So if telephone numbers work so well and legacy systems are so tied to those numbers, why consider changing? Several reasons come to mind:

1. Security – There really is none with telephone numbers. As Henning noted in his presentation and I've written about on the VOIPSA blog in the past, "Caller ID" is easily spoofable. In fact, there are many services you can find through a simple search that will let you easily do this for a small fee. If you operate your own IP-PBX, you can easily configure your "Caller ID" to be whatever you want, and some VoIP service providers may let you send that Caller ID on through to the recipient.

2. OTT mobile apps moving to desktop (and vice versa) – Many of the "over the top (OTT)" apps that have sprung up on iOS and Android devices for voice, video or chat communication started out using the mobile device's phone number as an identifier. It's a simple and easy solution, as the device has the number already. We're seeing some of those apps, though, such as Viber, now move from the mobile space to the desktop. Does the phone number really make sense there? Similarly, Skype made the jump from desktop to mobile several years ago and used its own "Skype ID" identifier — no need for a phone number there.

3. WebRTC – As I've written before, I see WebRTC as a fundamental disruption to telecommunications on so many different levels. It is incredibly powerful to have browser-based communication via voice, video or chat… in any web browser… on any platform, including ultimately mobile devices. But for WebRTC to work, you do need to have some way to identify the person you are calling. "Identity" is a key component here — and right now many of the WebRTC systems being developed are all individual silos of communication (which in many cases may in fact be fine for their specific use case). WebRTC doesn't need phone numbers — but some kind of widely-accepted application-layer identifier could be helpful.

4. Global applications – Similarly, this rise of WebRTC and OTT apps has no connection to geography. I can use any of these apps in any country where I can get Internet connectivity (and yes, am not being blocked by the local government). I can also physically move from country to country, either temporarily or permanently. Yet if I do so I can't necessarily take my phone number with me. If I move to the US from the UK, I'll probably want to get a new mobile device — or at least a new SIM card — and will wind up with a new phone number. Now I have to go back into the apps to change the identifier used by the app to be that of my new phone number.

5. Internet of Things / M2M – As noted in the intro to this post, we're connecting more and more devices to the Internet. We've got "connected homes" where every light switch and electrical circuit is getting a sensor and all appliances are wired into centralized systems.
Devices are communicating with other devices and applications. We talk about this as the "Internet of Things (IoT)" or "machine-to-machine (M2M)" communication. And yes, these devices all need IP addresses — and realistically will need to have IPv6 addresses. In some cases that may be all that is needed for provisioning and operation. In other cases a higher-level identifier may be needed.

6. Challenges in obtaining phone numbers – We can't, yet, just go obtain telephone numbers from a service like we can for domain names. Obtaining phone numbers is a more involved process that, for instance, may be beyond many WebRTC startups (although they can use services that will get them phone numbers). One of the points Henning made in his SIPNOC presentation was that the FCC is actually asking for feedback on this topic. Should they open up phone numbers within the US to be more easily obtainable? But even if this were done within the US, how would it work globally?
7. Changes in user behavior – Add to all of this the fact that most of us have stopped remembering phone numbers and instead simply pull them up from contact / address books. We don't need a phone number any more… we just want to call someone; the underlying identifier is no longer critical.
So What Do We Do?

What about SIP addresses that look like email addresses? What about OpenID or other URL-based schemes? What about service-specific identifiers? What about using domain names and DNS? Henning had a chart in his slides that compared these different options ("URL owned" is where you own the domain).
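As one illustration of the "domain names and DNS" option: a SIP address such as sip:alice@example.com delegates call routing to the owner's domain through DNS SRV records (the SRV step of RFC 3263), so the identifier survives a change of provider. A minimal sketch, assuming the third-party dnspython package and a hypothetical domain:

```python
import dns.resolver  # third-party package: dnspython

def locate_sip_server(address):
    """Yield the SIP servers responsible for a user@domain address via DNS SRV."""
    domain = address.split("@", 1)[1]
    for service in ("_sip._tcp", "_sip._udp", "_sips._tcp"):
        try:
            answers = dns.resolver.resolve(f"{service}.{domain}", "SRV")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            continue  # no server advertised for this transport
        for srv in answers:
            yield service, str(srv.target), srv.port

for hit in locate_sip_server("alice@example.com"):  # hypothetical address
    print(hit)
```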
The truth is there is no easy solution. Telephone numbers are ubiquitous, understood and easy to use. A replacement identifier needs to be all of that… plus secure and portable and able to adapt to new innovations and uses. Oh… and it has to actually be deployable within our lifetime. Will there be only one identifier, as we have with telephone numbers? Probably not… but in the absence of one common identifier we'll see what we are already seeing — many different islands of identity for initiating real-time communications calls.
And in the meantime, Amazon is still assigning phone numbers to each of its Kindles, the utilities are assigning phone numbers to smart meters and automakers are embedding phone numbers in cars. How can we move beyond telephone numbers as identifiers? Or are we already doing so but into proprietary walled gardens? Or are we stuck with telephone numbers until they just gradually fade away?
Related Notes:
• The Internet Society (my employer) has a team focused on the broader subject of online privacy and identity (beyond simply the telephone numbers I mention here), and the links and documents there may be of interest.
• There's a new Internet Draft out, draft-peterson-secure-origin-ps, that does an excellent job on the problem statement around "secure origin identification" as it relates to VoIP based on the SIP protocol, and why there are security issues with what we think of as "Caller ID".
• Chris Kranky recently argued that telcos are missing the opportunity of leveraging telephone numbers as identifiers in the data world.

Written by Dan York, Author and Speaker on Internet technologies
Why Trademark Owners Should Think Twice Before Reclaiming Domains

A recent kerfuffle involving Italian chocolate and confectionery producer Ferrero SpA and fan Sara Rosso is the latest example of how important it is for companies to consider carefully the domain and user names they decide to reclaim. Sometimes, enforcing trademark rights online can go really wrong, really quickly. In 2007, Ms. Rosso chose February 5 to be "World Nutella Day" — a time when "Nutella Lovers Unite for One Day!" She built a web presence around Nutella Day that included a nutelladay.com website. Nutelladay.com did everything a brand could hope for from a brand advocate: it encouraged people to go out and buy Nutella to use in scores of listed recipes; it created awareness of the brand and its fan base by giving tips on how to get involved with and spread the word on World Nutella Day; and it created a strong emotional bond with the brand, giving people a place to share stories about the first time they tried the chocolate/hazelnut spread. It was powerful stuff. Browsing through the site made me nostalgic about my Polish grandmother, who introduced me to Nutella when I visited her one summer in Bytom, a small city in the southern part of Poland, about an hour's drive from the Czech Republic. She hoped the Nutella was close enough to the peanut butter that I ate in the U.S. Oh boy, that made me one happy 8-year-old. Ms. Rosso's campaign not only had a great web presence, it came from a loyal fan who dedicated her own time to promote a product she loved. Then, in a bizarre move, Ferrero issued a cease-and-desist letter to Ms. Rosso, who said she would comply. That sparked a public battering of Ferrero in publications such as The Huffington Post, Mashable, Business Insider, and Adweek. Reversing itself, Ferrero stopped legal action against Ms. Rosso and began backtracking. Adweek reported that the brand called the incident "a routine procedure in defense of trademarks." But it moved quickly to undo the damage it had done. The company expressed "its sincere gratitude to Sara Rosso for her passion for Nutella, which extends gratitude to all the fans of the World Nutella Day" and noted that the brand is "lucky to have a fan of Nutella so devoted and loyal as Sara Rosso." Ms. Rosso posted the update on NutellaDay.com, and noted that Ferrero had been "gracious and supportive." But you know this story will become a case study in how not to pursue trademarks online. FairWinds Partners would have advised Ferrero SpA to get the marketing, trademark, and domain name experts in the room together when deciding which domains to reclaim, to discuss the risks and benefits based on certain criteria — one of which, missed in the Nutella case, is: how harmful is the content on the domain name in question?

Written by Yvette Miller, Vice President of Communications and Marketing, FairWinds Partners
The Role of Trust in Determining a New TLD's Business Success

Warren Buffett famously said, "It takes twenty years to build a reputation and five minutes to ruin it." Like it or not, every Top-Level Domain (TLD) is a brand in the eyes of the consumer. So, just how important is trust in the success of the new top-level domains? I'm no branding expert, but I grasp that no brand, no matter how memorable, will achieve its goals if it does not gain the public's trust. TLDs are no different. Several TLDs in the past have learned this the hard way by running pricing promotions that flooded their namespace with undesirable content or behavior. Once a TLD is tagged as having a distrust issue, it is difficult to erase it from the public's mind. In the future, building trust will be an even bigger issue for those TLDs that implicitly make some sort of "promise" about the type of registrants who are using the TLD to promote themselves. The public will approach new TLDs in one of two ways: some will begin the relationship with little trust, which needs to be earned over time; others will begin with trust freely given, but forever withdrawn at the first sign of behavior deemed untrustworthy. Either way, trust must be established by the TLD or there will be no relationship. So, how does a new TLD build trust? We are working with several TLD applicants that have decided that their business success depends on checking credentials of the registrants up front. These are typically TLDs that have chosen a string that represents some sort of recognizable community of special interests. Their rationale is simple: the success of their business depends on building trust in their TLD. One of the best ways to achieve this is by checking registrants' eligibility for the TLD up front. The ICANN Governmental Advisory Committee (GAC) thinks that this decision should not be left up to applicants whose strings fall into one of twelve categories: children, environmental, health and fitness, financial, gambling, charity, education, intellectual property, professional services, corporate identifiers, generic geographical terms and inherently governmental functions. Potentially, hundreds of new TLDs are impacted by this advice. Whether this last-minute intervention by the GAC would mean sabotage or rescue for some of these TLDs is an issue that will need to wait for history. Let's assume that ICANN decides that vetting potential registrants is a business decision best left to the applicants. Trust needs to be built in two key TLD constituencies: both registrants and visitors to the new domains.

Trust from potential registrants

Potential registrants of these new domains will assess the risk to their business or their personal reputation in making an investment in the new domain name. The investment is not just financial, but also emotional, as they will need to decide how far to go in their adoption of the new TLD.

Trust from visitors to these new domains

Success will also hinge on whether visitors to new websites on these new domains trust the website. If end-users consider a website to be suspect or somewhat shady, then registrants will abandon their investment in the new domains.

Hypothetical example: At the risk of over-simplification, let's use a hypothetical TLD: .SURGEON. Let's say the .SURGEON applicant has proposed an open and unrestricted namespace. Anyone who wants a .SURGEON domain name can get one.
The applicant's argument for this is that although medical doctors might represent a significant share of the TLD's registrants, there are also other people who consider themselves "surgeons", including the "Turkey Surgeon", the grandpa who carves the turkey on Thanksgiving Day, and the "Tree Surgeon", the chainsaw-owning brother-in-law who advertises on Craigslist. The argument is: if you restrict .SURGEON domain names to medical doctors, then you disenfranchise these other valid uses of the domain. Thus, no up-front review of credentials takes place for .SURGEON.

Clearly, there is little potential harm in someone claiming to be a "Turkey Surgeon". To address more serious cases, such as medical-doctor imposters, the applicant may propose community policing to catch these registrants. But there may be another important question this applicant needs to ask itself first: How important is trust to the success of my business? And can I achieve this trust without checking or otherwise policing the credentials of my registrants?

"But I have a really strong Acceptable Use and anti-abuse policy!" EVERY TLD applicant is promising to be vigilant about policing for abuse in their TLD. The GAC has called for such safeguards to be mandatory in all new TLDs. I call these principles "Motherhood and Apple Pie". The problem is that they are reactive as opposed to proactive. Once a TLD lets the wrong registrants in, the public will likely encounter them before the registry is aware of the problem. And by then it may be too late.

What types of TLDs should care most about trust? All new TLDs offer some sort of brand promise to registrants and visitors alike. But there is a subset of TLDs that implicitly promises a lot more. Most, but not all, fall into one of the 12 categories identified by the GAC. Many of these TLD applicants have decided that their business success depends on building trust in their TLD by checking the credentials of registrants up-front. These applicants fall into three general categories:
Is a leap of faith the TLD's sole branding strategy? Every TLD applicant will need to decide how to build trust in its new TLD. The question is: how? Will the applicant vet registrants up-front, or expect the public to take a leap of faith? The right answer could be a combination of the two, but how this question is answered may well determine the business success of the TLD. Even if the GAC advice is not made mandatory for the 12 categories it has identified, it may still make good business sense for the applicants in this subset who are planning to run completely open and unrestricted TLDs to take the extra steps to vet their registrants… unless they can think of a better way to build trust in their TLD.
Written by Thomas Barrett, President - EnCirca, Inc
Follow CircleID on Twitter More under: Top-Level Domains Categories: Net coverage
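The piece above never describes any applicant's actual vetting system, but the up-front credential check it advocates is easy to sketch. Everything in this example is invented for illustration: the license-ID format, the stand-in accreditation list, and the field names. A real registry would validate against an authoritative professional register during sunrise or at registration time.

```python
# Hypothetical sketch of an up-front eligibility check for a restricted TLD.
# The accreditation data and field names are invented; a real registry would
# query an authoritative professional register, not a hard-coded set.

ACCREDITED_LICENSE_IDS = {"MD-12345", "MD-67890"}  # stand-in for a medical register

def eligible_for_restricted_tld(registrant: dict) -> bool:
    """Return True only if the registrant presents a known, accredited license ID."""
    license_id = registrant.get("license_id", "").strip().upper()
    return license_id in ACCREDITED_LICENSE_IDS

applications = [
    {"domain": "smith.surgeon", "license_id": "MD-12345"},
    {"domain": "turkey-carving.surgeon", "license_id": ""},
]

for app in applications:
    verdict = "accept" if eligible_for_restricted_tld(app) else "refer for manual review"
    print(f"{app['domain']}: {verdict}")
```

The design point is simply that the check runs before a name is created, rather than after abuse is reported.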
Multi-Layer Security Architecture - Importance of DNS Firewalls
In today's world of botnets, viruses and other nefarious applications that use DNS to further their harmful activities, outbound DNS security has been largely overlooked. As part of a multi-layer security architecture, a DNS Firewall should not be ignored. As a consultant, I have encountered many organizations that allow all internal devices to send outbound DNS queries to external DNS servers — a practice that can lead to myriad problems, including cache poisoning and misdirection to rogue IP addresses. For companies that want to enable internal devices to send these types of queries, the ability to manually or automatically blacklist domains is a very effective way to add a layer of security to a broader security architecture.

DNS & Blacklisting
Companies of all sizes are susceptible to DNS attacks. Depending on the type of external recursive DNS server that is running, there are a number of ways to tighten your outbound recursive DNS service, from manual domain blocking to fully automated updates as threats appear. I recently worked with a company that was infected by a virus that got ahead of the anti-virus software for a short period of time. The security team knew that approximately 100-150 domains were actively being resolved to aid in the spread of the virus and its payload. We resolved the issue by manually blacklisting the affected domains. Infoblox has created a very compelling solution that allows users to update their blacklist as threats emerge. While we were able to mitigate the threat with manual updates, the Infoblox solution would have enabled us to be even more proactive.

If your company is small and runs a DNS server in house, something tried and true, such as BIND, can give you this type of added security. Depending on where you prefer to source your list of blacklisted domains, the list can be loaded onto the external recursive server — creating a DNS firewall effect. The server will need to be updated regularly, removing domains that no longer need to be blacklisted and adding new domains on an as-needed basis.

Ensuring that the DNS firewall architecture is as effective as possible will require reviewing your firewall rules. For example, I recommend restricting outbound port 53, both Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), so that only the recursive servers' IP addresses can reach the Internet on port 53. This rule must allow those servers to reach any IP address on the Internet, as they will have to walk the DNS tree and resolve names from servers worldwide.
Written by Jesse Dunagan, Senior Professional Services Engineer at Neustar
Follow CircleID on Twitter More under: Cyberattack, DNS, Security Categories: Net coverage
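To make the BIND suggestion above concrete: since roughly BIND 9.8, the stock mechanism for this "DNS firewall effect" is a Response Policy Zone (RPZ). Below is a minimal sketch, not a production configuration; the zone name rpz.local, the file name db.rpz.local, and badsite.example are all placeholders.

```
// named.conf fragment (sketch): consult a local policy zone on every answer
options {
    recursion yes;
    response-policy { zone "rpz.local"; };
};

zone "rpz.local" {
    type master;
    file "db.rpz.local";
    allow-query { none; };  // policy data is consulted internally, never served out
};
```

The policy zone itself is an ordinary zone file; in RPZ, a record of "CNAME ." means "answer NXDOMAIN for this name":

```
$TTL 300
@                   IN SOA localhost. hostmaster.rpz.local. ( 1 3600 600 86400 300 )
                    IN NS  localhost.
; block the domain and all of its subdomains
badsite.example     IN CNAME .
*.badsite.example   IN CNAME .
```

Bumping the SOA serial and reloading, or transferring the zone from a master you update automatically, gives you the regular add/remove cycle the article describes.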
European ccTLDs Passed 64 Million Domains, Growth Slower, Reports CENTR
CENTR, the European ccTLD organization, has published its biannual statistics report on the state of the domain name industry with a European ccTLD focus. From the report: "European ccTLDs closed April 2013 with just over 64 million domains under management. Over the 12 months preceding, overall net growth was 6.7% — an increase of around 4 million domains. This growth however, is a lower rate compared with that of the same period in the year before. This could be most likely explained by factors such as the maturing ccTLD market in Europe (particularly among the larger Operators) as well as the ongoing financial crisis. Renewal rates remain consistent over the past 3 years at around 79% on the whole and actually increasing marginally in some zones."
Follow CircleID on Twitter More under: Domain Names, Top-Level Domains Categories: Net coverage
Are You Ready for the Launch of New gTLDs?
It seems the inevitable is now upon us: though many have wished this day would never come, the launch of the first new gTLD registries is approaching. Whether the first new gTLD registry will launch within the next few months or be delayed by advice from world governments remains to be seen. However, most companies with which I have spoken desperately need any extra time to prepare for the launch of new gTLDs. So what, exactly, should companies be doing to prepare?
1. Identify and Submit Trademarks to the Trademark Clearinghouse – The Trademark Clearinghouse will serve as a central repository of authenticated trademark information. The information contained within the Trademark Clearinghouse will be used to enable Sunrise Registrations and Domain Name Blocking.
2. Review All New gTLD Applications – Last year, ICANN revealed the entire list of 1,930 applications, representing approximately 1,400 new TLDs, about half of which were closed registries. It is important for brand owners to familiarize themselves with the applications and begin thinking about how these new gTLDs will affect their domain management policies and brand protection strategies. There are quite a few resources on the web to facilitate this process, including the MarkMonitor New gTLD Application Database.
3. Rationalize Existing Domain Name Portfolios – Now, more than ever, is the time to take a hard look at defensive holdings and decide whether any of your existing domain names are no longer needed. Domain traffic statistics should be used to add domains where needed or to drop domains with little or no traffic (a minimal triage sketch follows this article).
4. Revise and Implement Domain Management Policies – It is important to create enterprise-wide policies and procedures covering who can register domains, what should be registered and how those registrations will be used. Policies should also cover where you want your domains to "point," as well as security measures like domain locking.
5. Ensure that Your Existing Registrar is Committed to Providing New gTLD Registration Services – Select a Registrar that is committed to providing registration services for all new gTLDs. Working with a single Registrar (as opposed to multiple Registrars) will help to ease some of the anticipated complexity.
6. Become Familiar with New Rights Protection Mechanisms – ICANN has adopted a number of new Rights Protection Mechanisms, including Trademark Claims, Sunrise Registrations, the URS (Uniform Rapid Suspension), the PDDRP (Post-Delegation Dispute Resolution Procedure) and the RRDRP (Registry Restrictions Dispute Resolution Procedure).
7. Police for Abuse and Take Action Only When Appropriate – It's important to monitor all new gTLD registrations for improper use of brands, trademarks and slogans. By monitoring domain registrations, companies can identify abuse and take immediate action where it makes sense.
8. Set Budgets Accordingly – Budgets will likely need to increase to take into account registration fees and Trademark Clearinghouse submission fees, as well as additional costs for policing and remediating domain name abuse.
Of course, there are still many unknowns surrounding the launch of new gTLDs, such as timing, costs and eligibility requirements. That said, now is the time to prepare, given the complexity expected.
Written by Elisa Cooper, Director of Product Marketing at MarkMonitor Follow CircleID on Twitter More under: ICANN, Top-Level Domains Categories: Net coverage
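Step 3 above, rationalizing the portfolio, is the most mechanical of the eight and is easy to prototype. The CSV layout (domain,monthly_visits) and the traffic threshold below are assumptions for illustration only.

```python
# Sketch: flag low-traffic domains in a portfolio as candidates for drop review.
# The CSV format and the threshold figure are hypothetical.
import csv

DROP_REVIEW_THRESHOLD = 50  # monthly visits; a stand-in figure

def triage(path: str):
    """Split a portfolio file into (keep, review) lists by traffic."""
    keep, review = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            visits = int(row["monthly_visits"])
            (keep if visits >= DROP_REVIEW_THRESHOLD else review).append(
                (row["domain"], visits)
            )
    return keep, review

keep, review = triage("portfolio.csv")
print(f"{len(keep)} domains retained; {len(review)} flagged for drop review:")
for domain, visits in sorted(review, key=lambda d: d[1]):
    print(f"  {domain}: {visits} visits/month")
```

Traffic alone should not drive the final decision, of course; defensively held names with trademark value belong on the keep list regardless of visits.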
Liberty Reserve Now, Bitcoin Next?
The papers have been abuzz with the shutdown of Liberty Reserve, an online payments system, over accusations of large-scale money laundering via anonymous transactions. Many people have noted similarities between LR and Bitcoin and wonder whether Bitcoin is next. I doubt it, because with Bitcoin, nothing is anonymous.

Liberty Reserve was designed to make it extremely difficult to figure out who paid what to whom. Accounts were anonymous, identified only by an email address and an unverified birth date. Users could direct LR to move funds from their account to another, optionally (and usually) blinding the transaction so the payee couldn't tell who the payor was. But users couldn't move money in or out directly: LR sold credits in bulk to a handful of exchangers, who handled purchases and sales. To put money in, you'd contact an exchanger to buy some of their LR credits, which they would then transfer to your account. To take money out, you'd transfer LR credits to an exchanger, who would in turn pay you. Nobody kept transaction records, so payments to exchangers couldn't be connected to the LR accounts they funded, there was no record of where the credits in each LR account came from, and outgoing payments from exchangers couldn't be connected to the accounts that funded those payments. This was an ideal setup for drug deals and money laundering, not so much for legitimate commerce.

Bitcoins are not like that. The wallets, analogous to accounts, are nominally anonymous, but the bitcoins aren't. Every wallet and every bitcoin has a serial number, and every transaction is publicly logged. It's as though you did all your buying and selling with $100 bills, but for each transaction the serial numbers of the bills and of the two wallets involved were published with a timestamp for all the world to see. (This is how Bitcoin prevents double spending: the payee checks the public logs to ensure that the payor minted or received the bitcoins and hasn't already paid them to someone else.)

This makes truly anonymous transactions very hard. Multiple transactions from the same wallet are trivially linked, so if the counterparty in any of your transactions knows who you are, all the transactions from that wallet are known to be yours. This is roughly the same problem as using a prepaid debit card or a throwaway cell phone purchased for cash: if one of the people you buy something from, or one of the people you call, knows who you are, your cover is blown. While it's possible to obscure the situation by using multiple wallets, if you transfer bitcoins from one wallet to another, that transaction is public, and a sufficiently determined analyst can likely figure out that they're both you. Doing all of your transactions so that the other party can't identify you is very hard, unless you're the kind of person who wears a different ski mask each time he buys groceries.

There have been some widely publicised thefts of large numbers of bitcoins, in one case by installing malware on the owner's PC, which was visible on the Internet, and using the malware to transfer bitcoins out of his wallet. But the thief hasn't spent the loot and probably never will, because everyone knows the serial numbers of the stolen bitcoins, and nobody will accept them for payment. This is sort of like unsalable stolen famous paintings, except that there's no analogue to the rich collector who'll buy the art and never show it to anyone else, because, frankly, bitcoins aren't much to look at.
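The linkability argument above is easy to demonstrate. Since every transaction in the public ledger names its input and output addresses, a trivial union-find over co-spent inputs groups addresses that are very likely controlled by one party (the common-input-ownership heuristic, a standard analysis technique not specific to this article). The transactions below are invented; a real analysis would read the actual block chain.

```python
# Sketch of the common-input-ownership heuristic: addresses spent together
# as inputs to one transaction are assumed to share an owner.
# The transactions here are invented stand-ins for public ledger data.

parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]  # path halving keeps the tree shallow
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

transactions = [
    {"inputs": ["addr1", "addr2"], "outputs": ["addr9"]},
    {"inputs": ["addr2", "addr3"], "outputs": ["addr7"]},
    {"inputs": ["addr8"], "outputs": ["addr1"]},
]

for tx in transactions:
    first, *rest = tx["inputs"]
    for other in rest:
        union(first, other)  # co-spent inputs -> same presumed owner

clusters = {}
for addr in parent:
    clusters.setdefault(find(addr), set()).add(addr)
print(list(clusters.values()))  # addr1, addr2 and addr3 land in one cluster
```

If any one counterparty can put a name to a single address in a cluster, the whole cluster is deanonymized, which is exactly the ski-mask problem described above.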
Again, the bitcoins aren't anonymous. You could imagine a bitcoin mixmaster, which took in bitcoins from lots of people, mixed them around and sent back a random selection to each, less a small transaction fee, to try to obscure the chain of ownership. But that wouldn't be much of a business for anyone who wanted to live in the civilized world, since it would just scream money laundering. (Yeah, we know cyberlibertarians would do it out of principle, but the other 99% of the business would be drug dealers.)

And finally, the only place you can exchange any significant number of bitcoins for normal money is still MtGox. They are in Japan, and they take money laundering seriously, so you cannot sell more than a handful without providing extensive documentation, such as an image of your passport and your bank account numbers. Maybe there will be other exchanges eventually, but it's not an easy business to get into. MtGox is a broker, arranging sales between its clients, and doesn't keep bitcoins in inventory. For a broker to be successful, it needs enough clients that buyers can find sellers and vice versa, which means that big brokers tend to get bigger, and it's hard to start a new one. You could try to be a dealer, buying and selling directly with customers, but given how volatile bitcoin prices are, you'd likely go broke when the market turned against you. Or you could try to arrange a private transaction by finding someone with bitcoins to sell, or looking to buy. That can work for small transactions, but as soon as someone does very much of that, he's in the money transfer business and money laundering laws kick in.

So with all these factors (perfectly logged transactions, a complete public history of every bitcoin so that tainted ones are unusable, and a chokepoint on cashing out), bitcoin makes a great novelty (akin, as I have said before, to pet rocks) but not a very good medium for large-scale money laundering.
Written by John Levine, Author, Consultant & Speaker
Follow CircleID on Twitter More under: Cybercrime, Security, Web Categories: Net coverage
US Should Take More Aggressive Counter-Measures On IP Theft, Including Use of Malware
A bipartisan Commission recently produced a report titled "The Report of the Commission on the Theft of American Intellectual Property". Karl Bode from dslreports.com writes: "The almost-respectfully-sounding Commission on the Theft of American Intellectual Property (read: the entertainment industry) has come up with a new 84 page report (pdf) that has a few curious recommendations for Congress. Among them is the request by the industry that they be allowed to use malware, trojans, and other countermeasures against pirates. That includes the use of so-called 'ransomware,' which would allow the entertainment industry to lock down your computer and all of your files — until you purportedly confess to downloading copyrighted materials." Follow CircleID on Twitter More under: Law, Malware, Policy & Regulation Categories: Net coverage
Video Dominates Internet Traffic As File Sharing Networks' Overall Traffic Continues to Fall
Video continues to be the trend to watch, as devices and tablets cater to higher-definition content, with larger screen sizes enabling the market for longer-form video on mobile, reports Sandvine in its latest Internet traffic trends report. "The 'home roaming' phenomenon, the concept of subscribers voluntarily offloading mobile traffic onto Wi-Fi networks, has continued. This combined with increased consumption of real-time entertainment on mobile networks globally, and the doubling of Netflix traffic on mobile networks in North America, suggests that users are getting comfortable with watching longer form videos on their handheld devices." Other findings include:
• Apple devices (iPads, iPhones, iPods, AppleTVs, and Macs) represent 35% of all audio and video streaming on North American home networks
• YouTube accounts for over 20% of mobile downstream traffic in North America, Europe and Latin America
• Netflix mobile data usage share doubled in the last 12 months in North America
Follow CircleID on Twitter Categories: Net coverage
Google Plans Wireless Access to Remote Regions Using High-Altitude Balloons and Blimps
Google is reported to be building huge wireless networks across Africa and Asia, using high-altitude balloons and blimps. The company is aiming to finance, build and help operate networks from sub-Saharan Africa to Southeast Asia, with the aim of connecting around a billion people to the web. To help enable the campaign, Google has been putting together an ecosystem of low-cost smartphones running Android on low-power microprocessors. Read full story: Wired News Follow CircleID on Twitter More under: Access Providers, Mobile, Wireless Categories: Net coverage
Who Has Helped the Internet? May 31 Deadline for Nominations for 2013 Jonathan Postel Service Award
Do you know of a person or organization who has made a great contribution to the Internet community? If so, have you considered nominating that person or organization for the 2013 Jonathan B. Postel Service Award? The nomination deadline of May 31 is fast approaching!

From the description of the award: Each year, the Internet Society awards the Jonathan B. Postel Service Award. This award is presented to an individual or an organization that has made outstanding contributions in service to the data communications community. The award includes a presentation crystal and a prize of US$20,000. The award is focused on sustained and substantial technical contributions, service to the community, and leadership. The committee places particular emphasis on candidates who have supported and enabled others in addition to their own specific actions.

The award will be presented at the 87th meeting of the Internet Engineering Task Force (IETF) in Berlin, Germany, in July. Anyone can nominate a person or organization for consideration. To understand more about the award, you can view the list of past Postel Service Award recipients and read more about Jon Postel and his many contributions to the Internet.

Full disclosure: I am employed by the Internet Society but have nothing whatsoever to do with this award. I am posting this here on CircleID purely because I figure that people within the CircleID community of readers are highly likely to know of candidates who should be considered for the award.
Written by Dan York, Author and Speaker on Internet technologies
Follow CircleID on Twitter More under: Web Categories: Net coverage
Removing Need at RIPE
I recently attended RIPE 66, where Tore Anderson presented his suggested policy change 2013-03, "No Need – Post-Depletion Reality Adjustment and Cleanup." In his presentation, Tore suggested that this policy proposal was primarily aimed at removing the requirement to complete the form(s) used to document need. There was a significant amount of discussion around bureaucracy, convenience, and "liking" (or not) the process of demonstrating need. Laziness has never been a compelling argument for me, and this is no exception. The fact is that any responsible network manager must keep track of IP address utilization in order to design and operate their network, regardless of RIR policy. Filling this existing information into a form really does not constitute a major hurdle to network or business operations. So, setting aside the laziness argument, let's move on to the rationale presented.

IPv4 is Dead?
Tore pointed to section 3.0.3 of RIPE-582, the "IPv4 Address Allocation and Assignment Policies for the RIPE NCC Service Region": Conservation: Public IPv4 address space must be fairly distributed to the End Users operating networks. To maximise the lifetime of the public IPv4 address space, addresses must be distributed according to need, and stockpiling must be prevented.

According to Mr. Anderson, this is "something that has served us well for quite a long time" but, now that IANA and the RIPE NCC have essentially exhausted their supply of free/unallocated IPv4 addresses, is obsolete. From the summary of the proposal: Following the depletion of the IANA free pool on the 3rd of February 2011, and the subsequent depletion of the RIPE NCC free pool on the 14th of September 2012, the "lifetime of the public IPv4 address space" in the RIPE NCC region has reached zero, making the stated goal unattainable and therefore obsolete.

This argument appears to be the result of what I consider a very narrow and unjustified interpretation of the goal of conservation. Tore seems to interpret "maximise the lifetime of the public IPv4 address space" to mean "maximise the duration that public IPv4 space remains available at the RIPE NCC." Under this translation, it is possible to believe that a paradigm shift has occurred which calls for a drastic reassessment of the goal of conservation. If, however, we take the goal as written in RIPE NCC policy, as a carefully crafted statement meant to convey its meaning directly and without interpretation or translation, a different conclusion seems obvious.

While Tore is correct in his observation that IANA and the RIPE NCC (and APNIC, and soon ARIN) have all but depleted their reserves of "free" IPv4 addresses, that does not mean that the lifetime of the public IPv4 address space has come to an end. While I would love for everyone to enable IPv6 and turn off IPv4 tomorrow (or better yet, today), that is simply not going to happen all at once. The migration to IPv6 is underway and gaining momentum, but there are many legacy devices and legacy networks which will require the use of IPv4 for years to come. Understanding that the useful life of IPv4 is far from over (raise your hand if you have used IPv4 for a critical communication in the past 24 hours) makes it quite easy to see that we still have a need to "maximise the lifetime of the public IPv4 address space." In fact, the IANA and RIR free pools have essentially been a buffer protecting us from those who would seek to abuse the public IPv4 address space.
As long as there was a reserve of IPv4 addresses, perturbations caused by bad actors could be absorbed to a large extent by doling out "new" addresses into the system under the care of more responsible folks. Now that almost all of the public IPv4 address space has moved from RIR pools into the "wild," there is arguably a much greater need to practice conservation. The loss of the RIR free-pool buffer does not mark the end of "the lifetime of the public IPv4 address space" as Tore suggests, but rather marks our entry into a new phase of that lifetime, one in which stockpiling and hoarding have become even more dangerous.

A Paradox
Tore made two other arguments in his presentation, and I have trouble reconciling the paradox created by believing both of them at once. The two arguments are not new; I have heard them both many times before in similar debates, and they invariably go something like this:
1. Needs-based policy is no longer necessary: since LIRs can no longer go back to the RIPE NCC for more addresses, the parties involved will assess need on their own, and no one will hand out more addresses than are actually needed.
2. People already lie and cheat to get around the needs requirement anyway, so removing the requirement would do no harm.
I want to look at these arguments first individually, and then examine the paradox they create when combined.

Early in his presentation, Tore said something to the effect that because an LIR can not return to the RIPE NCC for more addresses, it would never give a customer more addresses than the customer needs, and that the folks involved will find ways of assessing this need independently. OK, if this is true, then why not make it easy for everyone involved by standardizing the information and process required to demonstrate need? Oh, right, we already have that. Removing this standardization opens the door for abuse, large and small. The most obvious example is a wealthy spammer paying an ISP for more addresses than they can technically justify, in order to carry out their illegal bulk mail operation. The reverse is true as well: with no standard for efficient utilization to point to, it becomes easier for an ISP to withhold addresses from a downstream customer (perhaps a competitor in some service) who actually does have a justifiable technical need for them.

The second argument is more ridiculous. I truly don't understand how anyone can be convinced by the "people are breaking the rules, so removing the rules solves the problem" argument. While I am in favor of removing many of the rules, laws, and regulations that I am currently aware of, I favor removing them not because people break them but because they are unjust rules which provide the wrong incentives to society. If you have a legitimate problem with people stealing bread, for example, then making the theft of bread legal does not in any way solve your problem. While it is possible that bread thieves may be less likely to lie about stealing the bread (since they no longer fear legal repercussions), and it is certainly true that they would no longer be breaking the law, law-breaking and lying are not the problem. The theft of bread is the problem. Legalizing bread theft has only one possible outcome: encouraging more people to steal bread. So the fact that bad actors currently have an incentive to lie and cheat to get more addresses in no way convinces me that making their bad behavior "legal" would solve the problem. If anything, it is likely to exacerbate the issue by essentially condoning the bad behavior, encouraging others to obtain more addresses than they can technically justify.

Of course it gets even worse when you try to hold up both of these arguments as true at once. If people can be counted on to take only what they need, why are they lying and cheating to get more? If people are willing to lie and cheat to get around the needs-based rules, why would they abide by need when the rules are removed? I just can't make these two statements add up in a way that makes any sense.

Conclusions
Since we still need IPv4 to continue working for some time, maximizing the lifetime of the public IPv4 address space through conservation is still a noble and necessary goal of the RIRs, perhaps more important than ever. Filling out some paperwork (with information you already have at hand) is a very low burden for maintaining this goal. At this time, there is no convincing rationale for removing this core tenet of the Internet model which has served us so well.
Written by Chris Grundemann, Network Architect, Author, and Speaker
Follow CircleID on Twitter More under: Internet Governance, Internet Protocol, IP Addressing, IPv6, Policy & Regulation, Regional Registries Categories: Net coverage
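For what it's worth, the utilization bookkeeping the article says operators already keep is a few lines with Python's ipaddress module. The prefixes and assignment records below are invented (the allocation uses a documentation prefix as a stand-in); the point is only that computing a utilization figure from records you already maintain is cheap.

```python
# Sketch: compute utilization of an IPv4 allocation from assignment records.
# Prefixes and counts are invented; real figures would come from the records
# a network operator already maintains for design and operations.
import ipaddress

allocation = ipaddress.ip_network("192.0.2.0/24")  # documentation prefix as a stand-in
assignments = [
    ipaddress.ip_network("192.0.2.0/26"),    # customer A
    ipaddress.ip_network("192.0.2.64/27"),   # customer B
    ipaddress.ip_network("192.0.2.128/25"),  # infrastructure
]

assigned = sum(net.num_addresses for net in assignments)
utilization = assigned / allocation.num_addresses
print(f"{assigned}/{allocation.num_addresses} addresses assigned "
      f"({utilization:.0%} of {allocation})")
# -> 224/256 addresses assigned (88% of 192.0.2.0/24)
```

Copying numbers like these into an RIR form is the entire "burden" the proposal sought to remove.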