The Swiss Education & Research Network
Switch ORG Proposal, Appendix B: Registry Concept

Table of content

  1. General description of the registry systems (Zurich and Geneva)
  2. Physical locations and buildings
  3. Internet connectivity, hardware architecture
  4. System capabilities and upgrades
  5. Billing and collection systems
  6. Publicly accessible look up / WHOIS service
  7. Security, reliability, availability, registry failure provisions and backup
  8. Description and implementation of RRP and migration to EPP server
  9. Transition plan
  10. Policies
  11. Technical and other support
  12. Contributions from CORE Internet Council of Registrars

1. General description of the registry systems (Zürich and Geneva)

The proposed registry model relies on technical expertise gained from operating the CH and LI top-level domains since 1987, on a financially stable and robust organization, and on the aim to deliver state-of-the-art performance to the ORG community.

The application describes a synchronized dual registry system with identical components, one located in Zürich at the Swiss Federal Institute of Technology (ETHZ), the other at CERN in Geneva, connected to each other and to the Internet over redundant gigabit fiber-optic lines. Both systems additionally have local backups, and disaster recovery copies of the databases will be stored in a Swiss mountain. The systems are designed to handle exceptionally high loads and to provide equal access for registrars using the RRP and EPP protocols.

The proposed design uses a system similar to the one already in use for CH and LI domain name registration, but with many enhancements and improvements. It has been possible to specify and measure the parameters for the ORG TLD based on the system in use (its OT&E components), and this system will serve for further developments and measurements.



2. Physical locations and buildings

Two identical and synchronized registry systems will be used, located in two data centers more than 250 km apart: one in Zürich at the Swiss Federal Institute of Technology (ETHZ) and one in Geneva (Meyrin) at the European Organization for Nuclear Research (CERN).

The following security requirements are met at each location:

  • 24/7 security personnel available within a quarter hour
  • restricted access to the data centers with personal ID-card only
  • redundant air conditioning systems
  • multiple fallback methods in case of power failure (alternative national suppliers, UPS and Diesel generators)

Both data centers have protective and preventive measures and procedures in case of fire and water damage.

The main machine rooms are equipped with a sensitive laser-based detection system at ceiling level and a less sensitive system in the false-floor plenums. An on-site fire brigade is alerted if smoke is detected by either system. CERN also has 24x7 on-site operator cover.

In both locations there is little or no water piping in the machine rooms.

Zürich and Meyrin (where CERN is located) are not in the lowest earthquake risk category, but the risk is very low for both sites. Refer to paras. 7 f and g for protective measures to recover from such events.



3. Internet connectivity, hardware architecture

a. Internet Connectivity


Both sites are located near the core nodes of the SWITCH backbone in Geneva and Zürich. The underlying DWDM infrastructure allows for the installation of a dedicated gigabit Ethernet channel between both systems.

SWITCH has peerings of up to 1 Gbit/s with global transit providers at the TIX and CIXP Internet exchange points, in Zürich and at CERN respectively, and with many other carriers co-located at these peering nodes. SWITCH is also part of the community of European educational and research networks connected by the GEANT backbone at 2.5 Gbit/s, which ensures excellent connectivity to the North American educational and research network Internet2 and its peers. The connectivity of the data centers to the SWITCH backbone is fully redundant.

b. Hardware Architecture

Each of the two identical systems uses a distributed architecture for the different registry components. The load is thereby distributed, and the specifications can be met simply by extending systems; the specifications are described in detail in the section “Hardware Specifications” (para. 3 c). Both identical systems are dimensioned for peak loads. This concept provides high reliability, availability and security: on a system or network crash, or any other unavailability, operation switches over to the system at the second location. The two locations are connected to each other at 1 Gbit/s, and each has redundant connections to the Internet. Availability is defined in terms of scheduled service outages; during normal operation and maintenance, the registry service will not be interrupted. If the service has to be interrupted, the registry will notify the registrars one week or more in advance.

Scalability, or extensibility, is another key issue. With the planned differentiation of the ORG name, additional demand for ORG domain names has to be anticipated. Every component has been designed to be scalable. The gateway, the RRP server and the WHOIS directory service can be scaled using parallel computer architectures (clustering). Before moving to any parallel architecture, the registry will upgrade each individual hardware component, which allows approximately a doubling of the specified load. The database is realized with Oracle ™ software and can be extended to parallel computer clusters within the proposed concept.

The hardware concept uses a gateway and a dedicated RRP server, allowing a smooth transition to EPP. Once the EPP draft recommendations become a proposed standard, the gateway can serve both RRP and EPP servers, which will run in parallel.

The diagram on the next page shows the proposed registry system.


c. Hardware Specifications

2 web servers and 2 gateway servers

Server: Sun Fire 280 R
Processor: 2 x 0.9 GHz, 8 MB L2 cache
Memory: 4 GB (extendable to 8)
Network adapter: Fast Ethernet
Disk: 2 discs of 36.4 GB each
OS: Solaris 9

Reporting and administration server
There are multiple computers involved. Basic system:

Server: Sun Fire 3800
Processor: 2 x 0.9 GHz, 8 MB L2 cache
Memory: 4 GB (extendable to 64)
Network adapter: Fast Ethernet
Disk: NETAPP filer with > 100 GB storage capacity
OS: Solaris 9

2 database servers and 2 RRP servers

Server: Sun Fire 3800
Processor: 4 (up to 8) x 0.9 GHz, 8 MB L2 cache
Memory: 4 GB (extendable to 64)
Network adapter: Fast Ethernet
Disk: 8 discs of 18.2 GB with 4 disc controllers
OS: Solaris 9

2 Whois servers

Server: Compaq Proliant DL 380 G2
Processor: 2 x 1.4 GHz, 0.5 MB L2 cache
Memory: 2 GB (extendable to 6)
Network adapter: 1 Gbit/s
Disk: 2 discs of 18 GB
OS: Debian Linux




4. System capabilities and upgrades

a. Database capabilities

The database size is ca. 75 GB, distributed over 8 discs to ensure a balanced load; it can be scaled up to 20 times the projected size. Database throughput has been measured on the actual CH and LI registry and extrapolated to the ORG registry size. The calculated throughput ranges from 200 up to 400 add or modification commands per second (average). These values will not be influenced by the synchronization of the lookup tables on the RRP server, as synchronization will be deferred. The Oracle ™ database scalability has been evaluated for sizes of up to 1 TB; if higher capacities are required, the registry will change to Oracle ™ database clusters to cope with such additional requests. Increasing numbers of registered domain names will cause a more than proportional increase in the number of requests; as a starting point, quadratic growth of the number of modification commands has to be expected.
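As a rough illustration of this quadratic starting assumption, the expected modification load can be sketched in a few lines. The baseline figures come from this section; the function and constant names are illustrative, not part of the registry software.

```python
# Illustrative load projection, assuming modification traffic grows with
# the square of the number of registered domains (the section's stated
# starting point). Baseline values are taken from this section.

BASELINE_DOMAINS = 2_700_000      # approximate current ORG registry size
BASELINE_CMDS_PER_SEC = 200       # lower bound of the measured average throughput

def projected_cmds_per_sec(domains: int) -> float:
    """Project add/modification commands per second for a given registry size."""
    scale = domains / BASELINE_DOMAINS
    return BASELINE_CMDS_PER_SEC * scale ** 2
```

Under this assumption, doubling the registry to 5.4 million names would roughly quadruple the modification load, which is why each component is specified with headroom for at least twice the projected demand.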

b. Reporting capabilities

The hardware architecture diagram (para. 3 a) shows the multiple computers involved in producing reports. A synchronized “reporting only” database will be used, so the main and second databases do not need to provide reporting capabilities. The intention is to provide at least the same information in the new registry's reports as is currently provided by the incumbent operator, VeriSign Inc. (VGRS); it is understood that sample reports will be made available in September 2002. See the transition plan (para. 9) for the time schedule.

c. Peak capabilities for larger-than-projected demands

Considerations for each component and measures required for handling continued high demands (larger-than-projected demands) are outlined below.

The gateways provide SSL connectivity and forward requests to the RRP server. This simple architecture can deal with about 2’000 parallel connections at any time. Increasing system memory will raise this maximum peak capacity if required.

The RRP server has a queue which is normally empty. This queue will begin to fill up during larger-than-projected peak demands and the time for processing a request will increase. If the number of demands remains high, the computers involved can be upgraded to deal with at least twice the projected demands.

Should further demand be encountered, the new registry will migrate to parallel computer architectures. The database can handle 10 times the current total of ORG domain names (2.7 million). A critical issue is the number of add or modification demands per second; with a hardware upgrade, twice the projected load can be processed. If demands increase still further, an Oracle cluster can be used for the main and second databases.

WHOIS directory services are also designed for parallel architecture. They are therefore scalable, using additional parallel computer parks (clusters) on the same anycast IP address.

With larger-than-projected demands, the only effect on backup will be longer backup times and larger files. The data escrow and backup section (para. 7) mentions an option to change to larger tape systems for storing more data. An online backup system will be used for the databases, and since the registration system remains available during such tasks, there will be no adverse effects for registrars connected to the registry.

The personnel count for registrar services (helpdesk) is calculated from the anticipated number of trouble tickets, phone calls and e-mails per unit of time and the estimated probability that such requests need to be escalated to back-office personnel. Estimates of average weekly customer-service volumes can be derived from the document “.org Metrics Data Package” and are used as a starting point. For further details see the section “Technical and other support” (para. 11).




5. Billing and collection systems

The default principle is: domain name registrations can be made until the combined amount of money in the registrar's bank accounts and bank guarantees reaches zero. Each registrar will be notified when a registrar-specific threshold has been reached (these thresholds can be set by the registrars themselves).
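The billing principle can be sketched as a small balance check: a charge is refused once the combined funds and guarantee would drop below zero, and a one-time notification fires at the registrar's chosen threshold. All names and figures here are illustrative, not the registry's actual accounting code.

```python
# Hedged sketch of the default billing principle described above.
# Fee amounts, thresholds and class names are illustrative.

class RegistrarAccount:
    def __init__(self, funds: float, guarantee: float, threshold: float):
        self.funds = funds
        self.guarantee = guarantee
        self.threshold = threshold   # registrar-specific warning level
        self.notified = False

    def balance(self) -> float:
        """Funds on account plus the bank guarantee."""
        return self.funds + self.guarantee

    def charge(self, fee: float) -> bool:
        """Charge a registration fee; refuse once the balance would go negative."""
        if self.balance() - fee < 0:
            return False             # no further registrations accepted
        self.funds -= fee
        if self.balance() <= self.threshold and not self.notified:
            self.notified = True     # registry sends a low-balance notification
        return True
```

The notification flag models the single threshold alert per registrar; in practice the registry would send the notification through the helpdesk or registrar Web site.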

The main database contains accounting information for each registrar. Security for accounting information is realized by allowing only restricted access to the database, using dedicated users and tablespaces in the Oracle ™ software package. The accounting tables for each registrar are modified (charged or credited) according to “payable” transactions.

Synchronization of the registry system with a dedicated on-site accounting system is ensured by interposing the reporting database. Reports are required if the number of “payable” transactions counted by the registry system and the amount of money in the registrar's accounts are inconsistent or contested. The dedicated on-site accounting systems perform monthly clearing and bookkeeping tasks. Accounting arrangements will be made with each registrar, enabling registrars to use banks in their region (Asia, America or Europe).

The reporting database and the dedicated on-site accounting system will provide accounting information for monthly and other reports. This information will be accessible at each registrar services office in Singapore, Memphis and Zürich and registrars can download lists displaying payable and received transactions and access further accounting information from a special individually restricted Web site.




6. Publicly accessible look up / WHOIS service

a. General information

The hardware architecture diagram in para. 3 above specifies two parallel computers for WHOIS directory services at both locations using an anycast IP address. No commercial bulk access to WHOIS data will be provided. Upgrading WHOIS directory services to handle increased demands is a linear scaling issue and will be solved with additional parallel computer clusters.

To provide an estimate: the system used so far for CH and LI second-level names runs on one computer and can handle up to 300 WHOIS requests per second. The search engine realized for CH and LI can deal with substrings and presents results within 6 seconds. The availability of the registry WHOIS directory service (thin registry concept) will be 99.9%. The thin registry concept employed at the beginning can be changed to a thick registry model if the need arises (see Appendix D, Community Concept, for methods to seek input from the ORG community).

Coordination with other WHOIS directory services, such as UWHOIS, is already established for CH and LI and will be extended for ORG. SWITCH also closely follows developments using SRV records to locate WHOIS servers, the referral RWHOIS service (RFC-2167) and the WHOIS++ index service.

b. WHOIS compliance

Compliance with the specifications of NICNAME/WHOIS (RFC-954) is assured. The WHOIS directory service for ORG is provided with one anycast IP address for the parallel working computer clusters to ensure highest availability. New developments for WHOIS or similar directory services will be evaluated and supported. Each registrar will be contractually required to run an independent and “full” WHOIS directory service and to provide accurate contact information for registrants. The registry itself runs a “thin registry” – at least in the beginning – and its WHOIS directory service will only provide the domain name, name servers, registration date and details of the sponsoring registrar.
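The thin-registry output described above is small enough to sketch directly. The field labels below are illustrative (RFC-954 does not prescribe an output format); only the set of published fields comes from this section.

```python
# Minimal sketch of a thin-registry WHOIS response carrying only the
# fields this section says the registry will publish. Labels are
# illustrative; RFC 954 responses are plain text with CRLF line endings.

def whois_response(domain: str, nameservers: list[str],
                   created: str, registrar: str) -> str:
    lines = [f"Domain Name: {domain.upper()}"]
    lines += [f"Name Server: {ns.upper()}" for ns in nameservers]
    lines.append(f"Creation Date: {created}")
    lines.append(f"Sponsoring Registrar: {registrar}")
    return "\r\n".join(lines) + "\r\n"
```

Registrant contact details are deliberately absent: under the thin model they live in each registrar's own “full” WHOIS service.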




7. Security, reliability, availability, registry failure provisions and backup

a. System security


Physical security of each data center is discussed in the hardware section (para. 3) above; security for each registry service is outlined below.

All computers providing the WHOIS directory service have software firewalls. The SSL port will be used to separate WHOIS access from the internal network.

The computers hosting Web sites for registrar related issues will be accessed via HTTPS and login requires individual username and password for each registrar. These computers have restricted IP routing access. The SSL port and a software firewall will be used to distribute and update Web site information.

The gateway servers have restricted IP routing access and provide the SSL connections for the registrars. The RRP server processes only the RRP instruction set, with additional improved commands requested by the registrars.

All other computers are accessible only locally.

The main database, carrying sensitive data, can only be accessed through RRP/EPP server commands. Security for such sensitive data is thus assured; recovery, fault prevention and backup strategies are described in the corresponding sections below.

For security reasons, all ports other than those mentioned above are disabled on all computers.

b. General availability

Availability has been defined to be ca. 99.4% for registry services. This means that disruptions of service will last no longer than 2.19 days (52.56 hours) per year. During normal monthly maintenance work, the second system will take over. The Oracle ™ database is specified for 99.9% availability with redundant systems, which corresponds to 8.76 hours of unavailability per year. Working on the basis that all 3 components (gateway, RRP/EPP server and database) have identical failure rates, half of the unavailability budget (26.28 hours per year) remains free for additional required outages or “unplanned disasters”. Planned outages, when the registry system will not be available, will be announced to the registrars at least one week in advance and will last no longer than 8 hours per event.

The “unplanned disasters” referenced above are events in which the systems at both sites fail; in any other case, operation continues immediately on the second system.
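The availability budget quoted above can be checked with a few lines of arithmetic; only the percentages and the three-component assumption come from this section.

```python
# Quick check of the availability figures quoted in this section.

HOURS_PER_YEAR = 365 * 24          # 8760 hours

def downtime_hours(availability: float) -> float:
    """Annual downtime implied by an availability fraction."""
    return (1 - availability) * HOURS_PER_YEAR

registry_downtime = downtime_hours(0.994)   # ~52.56 hours (2.19 days)
oracle_downtime = downtime_hours(0.999)     # ~8.76 hours

# Three components (gateway, RRP/EPP server, database) with identical
# failure rates consume 3 x 8.76 = 26.28 hours, i.e. half of the total
# budget, leaving the other half for unplanned events.
unplanned_budget = registry_downtime - 3 * oracle_downtime
```

The numbers confirm the section's claim: the per-component outages consume exactly half of the 52.56-hour annual budget.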

c. System reliability (QoS)

Quality of service can be defined for the proposed system as follows:

  a. Lookup commands to be answered within 3 seconds
  b. Add/modification commands to be answered within 5 seconds
  c. WHOIS queries, including sub-string queries, to be answered within 6 seconds
  d. E-mail transfer will be cached and retried until the mail server of the registrar has accepted it, or the registry has auto-approved. A change notification will be sent to the losing registrar.
  e. When inquiries by registrars are received at the helpdesk, a trouble ticket will be opened. When the problem is solved, the registry will close the ticket. The registrar will be able to look up the status of open tickets on the registrar Web site.
  f. Three helpdesks, located in different time zones, guarantee 24/7 support. Each helpdesk will cover the usual working hours of its location. All registrars will be served on an equal basis (see section “Provision of equal access by accredited registrars”, para. 10 b, and “Technical and other support”, para. 11).
  g. Technical maintenance and updates of the system, as the result of an open ticket, will be carried out within 24 hours or less. Routine changes will be applied while the system is running. Critical system changes will be carried out during announced outage times (see section “General availability” above). Major changes will be implemented only after a one-month prior notice period to all registrars. Before any significant updates or changes are made, the consequences for registrars will be considered carefully.
  h. Hardware reliability: see the sections on hardware architecture (para. 3) and system recovery procedures (para. 7).
  i. In the unlikely event that both systems are simultaneously damaged beyond use (force majeure), the critical system with full functionality, but reduced performance, will be brought back within 24 hours. Repurchase of hardware and restoration of full performance will be carried out within 6 weeks. Data security and storage will be assured up to the last transaction prior to the crash (see section on system recovery procedures, para. 7).

Quality aspects specified in a), b), c) and d) above will be provided by the system, even for peak capabilities, unless demand is significantly above the projected demands described in para. 4 above.

Quality aspects specified in e), f) and g) will be covered with a ticket system for registrar requests and system monitoring tools (see section d) below).

d. System outage prevention

The registry will perform measurements for all registry services except name server performance. A registry system is deemed “not available” if the quality points a), b), c) or d) described in the “System reliability” section above are not fulfilled.

The registry uses the Big Brother tool for monitoring network, computer and application status. This tool uses a client/server architecture combined with methods to push and pull data. Each service will have one or more tests on availability and performance.
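A typical availability test of the kind described above is a timed TCP connection attempt against a service, compared with the QoS thresholds from para. 7 c. The sketch below is generic and illustrative; it does not use Big Brother's own configuration or plug-in interface.

```python
import socket
import time

# Generic availability probe in the spirit of the monitoring checks
# described above. Hosts, ports and the timeout are illustrative.

def probe(host: str, port: int, timeout: float = 3.0) -> tuple[bool, float]:
    """Attempt a TCP connection; return (reachable, elapsed seconds)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, time.monotonic() - start
```

Each service would get one or more such tests (e.g. WHOIS on its service port, the gateway on its SSL port), with the elapsed time checked against the relevant response-time target.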

Redundancy is implemented at the architectural level (by multiple systems). In addition, each component has a backup for application and data recovery. The possibility of power failure is covered by multiple UPS installations at both sites and by other means. All critical components are built with Sun ™ computers. In case of failure, an on-site service agreement with Sun guarantees a technician on site within 4 hours during working hours. Both locations also have technical staff for maintaining or repairing failed hardware.

e. System recovery procedures

The hardware architecture (see diagram in para. 3) shows a second identical system to be used in case of failures or unavailability of the other. Therefore there is normally no need to recover data or system components.

Recovery procedures are only needed if both systems are unavailable. In this case each system has recovery strategies, which have been fully tested on the CH/LI system in use. These strategies guarantee that only the last open transaction will be lost. The critical points for data loss are the centralized databases of each system. SWITCH has technical staff with more than 12 years of Oracle ™ database experience; knowledge of backup and recovery strategies has been carefully built up at SWITCH, and the new registry will benefit from it.

An example of the recovery strategy in the case where all local discs have lost their data: the online backup files are used to recreate the data as of the time the backup was performed. The archived log files are then applied, and Oracle ™ recovers the data for the period up to the time of the crash. Because the transaction log files are so important, multiple mirroring of these files is required, not only on local but also on external discs.

If both systems fall victim to force majeure and no hardware can be reused, the following service agreements will be made and the following times for restoring the system can be stated:

Max. 24 hours to bring back all critical parts of the registry with slightly reduced performance

Time calculations:

  1. On-site technical staff available within 1 hour
  2. On-site service agreement with a technician available within 4 hours
  3. Restoration of the hardware within 8 hours
  4. Installation of the operational system (Sun ™ Solaris) within 4 hours
  5. Installation of the database and registry services within 2 hours
  6. Recovering data from backup within 2 hours
  7. Testing the recovered system within 1 hour
  8. Time allowed for retesting and other contingencies: 2 hours
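The timeline above should add up to the stated 24-hour restoration target, which a few lines verify (step names are paraphrased from the list):

```python
# Check that the restoration steps above sum to the 24-hour target.

recovery_steps_hours = {
    "on-site technical staff": 1,
    "service technician on site": 4,
    "hardware restoration": 8,
    "operating system installation": 4,
    "database and registry services": 2,
    "data recovery from backup": 2,
    "testing": 1,
    "retesting and contingencies": 2,
}

total_hours = sum(recovery_steps_hours.values())
```

Note that steps 1 and 2 overlap other steps in practice; the plan treats the sequence conservatively as strictly serial.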

f. Data escrow and backup

A backup of the database will be performed each day by local tape robots installed at each registry site while the registry system is in operation. This backup is used for restoring registry data. Additional transaction log files are used to recover the database up to the point at which a crash has taken place. These log files will be mirrored multiple times for optimum security and reliability: in hardware using RAID and in software using internal and external discs. Additional software is used to mirror these files on different systems and storage media, such as a NETAPP filer. All mirroring is performed immediately rather than deferred, so that in case of a crash all mirrored log files are up to date.

An external data escrow agent receives daily updates of the entire database, and these files can be used as escrow data (see para. 8 below). The zone file is escrowed to a dedicated secondary name server (C), and the manager of this name server acts as escrow agent for the zone file. It should be remembered that ORG is operated on the thin registry model, where the registry holds no data on registrants, just the domain name, name servers, registrar name and certain time information.

g. Registry failure provisions

An external backup agent will receive a backup of the entire registry database once or twice a day. SWITCH is in negotiation with MOUNT10, a Swiss-based expert for storage infrastructures with offices in Germany (Munich, Dresden and Hamburg), Austria (Vienna), Finland (Helsinki, Lappeenranta) and the USA (Houston). This backup data is intended to serve for disaster recovery in case of severe crashes. The recovery data is stored in a converted Swiss military bunker in a Swiss mountain (data fortress) and is secure against earthquakes and most conceivable man-made or natural incidents. This system is also fast and efficient, and it includes assessment, alignment, planning, certification, implementation and automation processes of the solution at the registry site (see the section on disaster protection for more information).

Other failures of the registry due to physical vulnerabilities are highly unlikely. A failure due to insolvency of the foundation can be ruled out, given the non-commercial character of the foundation, sound financial management, multiple sources of income and the public-sector interests in the foundation: the Swiss government and eight Swiss cantons are the founders of SWITCH.



8. Description and implementation of RRP and migration to EPP server

a. Registry-registrar model and protocol

The registry will be a "thin-registry" and will use RRP, version 1.1.0 (May 2000), as described in the Revised VeriSign Registry Agreements Appendix C. Transition from the VeriSign registry system to those operated by SWITCH will be transparent to the registrars.

b. Details and Performance of the RRP implementation

For a smooth transition to the new registry, SWITCH will continue to accept the certificates used by the registrars, originating from the appointed commercial certification authority.

The RRP can be divided into informational commands and add/modification commands. The informational commands, such as "check", "describe", "status" and "quit" are processed directly on the RRP server using a lookup table, synchronized with the main Oracle ™ database. The add/modification commands, such as "session", "add", "del", "mod", "renew" and "transfer" will operate on the centralized database.

Publicly available data from ICANN and VeriSign Inc. show that informational commands (malformed and correct) account for in excess of 98% of the command stream; data published by ICANN in August 2001 on “add storm” loads has also informed this approach to system design.

This allows load distribution on a protocol layer. The response time for the informational commands will be less than the 3 seconds specified for peak loads. For the add/modification commands the response time will be less than the 5 seconds specified for peak loads.

Up to 500 informational commands per second can be handled. This means up to 43 million requests per day. The planned maximum peak will be 1’000 requests per second.

The system can handle up to 400 modification commands per second. This means up to 34 million requests per day. The planned maximum peak will be 600 requests per second. To illustrate the scaling capabilities: the system could handle more than 10 renew requests per day for each of the existing ORG domain names (approx. 2.7 million).
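The daily figures above follow directly from the per-second rates, which a short sanity check confirms (the constants mirror the numbers in this section):

```python
# Sanity check of the throughput figures quoted above.

SECONDS_PER_DAY = 24 * 60 * 60           # 86'400 seconds

info_per_day = 500 * SECONDS_PER_DAY     # informational commands per day
mod_per_day = 400 * SECONDS_PER_DAY      # modification commands per day

# 10 renew requests per day for each of the ~2.7 million ORG names:
renew_load = 10 * 2_700_000
```

The 500/s and 400/s rates yield 43.2 and 34.56 million commands per day (quoted as 43 and 34 million above), and the hypothetical 27 million daily renews fit comfortably within the modification capacity.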

c. Procedure for object modification

c (i) Object creation

The “add” command creates objects like domain names or hostnames. The accepted add request is immediately processed on the main database and the registrar is charged for the registered domain name.

c (ii) Object modification

All modification commands change the object in the main database after the commands are accepted, either by the registry or the registrar. The centralized authority database guarantees data integrity and data consistency. Transfer and other pending requests are served from a request queue; this queue ensures first-come-first-served processing.

The 'Renew' command will be processed immediately and charged to the registrar's account.

If a registrar does not renew a domain name before its expiration date, an auto-renew is initiated by the registry and the registrar is charged. The expiration date of the domain is extended by one year.

The "transfer" command will insert a pending request into the request list. This request list ensures equal treatment of all registrars on a first-come-first-served basis. An e-mail is sent to the losing registrar, which can approve or reject the transfer during the transfer pending period. After that period the registry automatically approves the transfer and sends a change notification to the losing registrar.

The "delete" command will change the domain status from registered to registry-hold. During this period the domain name is no longer in the zone file, and the registrar receives a delete notification. After the delete command there is a delete pending period; after this period the domain name is deleted from the registry database and becomes available for new registration.
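The delete flow described above is effectively a small state machine: a delete moves the name out of the zone into registry-hold, the pending period allows retraction (see the delete pending rules in the next subsection), and only afterwards does the name become available again. The state and event names below are illustrative.

```python
# Hedged sketch of the delete lifecycle described above.
# State and event names are illustrative, not registry terminology.

TRANSITIONS = {
    ("registered", "delete"): "registry-hold",
    ("registry-hold", "retract"): "registered",          # retraction at no cost
    ("registry-hold", "pending-period-elapsed"): "available",
}

def next_state(state: str, event: str) -> str:
    """Apply an event to a domain's status, rejecting disallowed moves."""
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")
    return TRANSITIONS[key]
```

Disallowed moves (for example renewing a name in registry-hold) raise an error, mirroring the rule that renew and auto-renew requests are ignored during the delete pending period.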

d. Grace period

The “add grace” period will be 5 days. During this period the following rules are applied (consistent with VeriSign practices):

  • delete: registrar can delete but is credited for the registration amount
  • extend: registrar can extend and is credited for the registration and the extended years
  • transfer: the registrar cannot transfer for the 60 days following initial registration
  • bulk transfer: bulk transfer with ICANN approval can be made. The losing registrar is credited for the initial registration, with no further costs.

During the renew/extend grace period, 5 days after the renew command, the following rules are applied (consistent with VeriSign practices):

  • delete: registrar can delete, the sponsoring registrar receives a credit of the renew/extend fee
  • extend: registrar can extend, the registrar will be charged for the additional years
  • transfer: registrar can transfer at no cost
  • bulk transfer: ICANN approved bulk-transfer can be made, no cost

The "Auto-renew" grace period will be 45 days. During the auto-renew grace period, the following rules are applied (consistent with VeriSign practices):

  • delete: the registrar can delete, the registrar receives a credit for the auto-renew fee
  • extend: the registrar can extend, the registrar will be charged for the additional years
  • transfer: the registrar can transfer, the losing registrar will not receive a credit and the year added by
    the auto-renewal will be charged on the gaining registrar
  • bulk transfer: ICANN approved bulk-transfer can be made, no refunding or changes in the expiration dates of the domain names will take place

The transfer pending period is set to 5 calendar days. During the transfer pending period, the following rules are applied (consistent with VeriSign practices):

  • Transfer or renew requests are not accepted by the registry
  • Auto-renew will be performed
  • Delete requests are not accepted by the registry
  • Bulk Transfer: ICANN approved bulk-transfer can be made, no refunding or changes in the expiration dates of the domain names will be processed.

During the delete pending period the following rules are applied (consistent with VeriSign practices):

  • Retraction: the registrar can retract the delete process without costs, by contacting the registry
  • Renew and auto-renew requests are ignored
  • Add requests for this domain name will not be accepted by the registry
  • Transfer requests are denied
  • Bulk Transfer: ICANN-approved bulk transfers can be made; no refunds, and no changes in the expiration dates or the status of the domain names will take place.

e. Exceptions to the grace periods

e (i) Overlapping Grace Periods

If an operation is performed that falls into more than one grace period, the actions appropriate for each grace period apply (with some exceptions, as noted below).

  • If a domain is deleted during both the add grace period and the extend grace period, the registrar will be credited for the registration and extend amounts, taking into account the number of years for which the registration and extension were done.
  • If a domain is auto-renewed, then extended, and then deleted within the Extend Grace Period, the registrar will be credited for the Auto-Renew and the number of years for the extension.

(ii) Overlapping Transfer Grace Periods

  • If a domain is deleted within one or several Transfer Grace Periods, only the current sponsoring registrar is credited for the transfer amount. For example, if a domain is transferred from Registrar A to Registrar B and then to Registrar C, and finally deleted by Registrar C within the Transfer Grace Periods of the first, second and third transfers, then only the last transfer is credited, to Registrar C.
  • If a domain is extended within the Transfer Grace Period, the current registrar's account is charged for the number of years the registration is extended.
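The overlapping-transfer rule above reduces to: on delete, credit only the most recent transfer, to the current sponsoring registrar. A minimal sketch, with the fee value and function shape assumed for illustration:

```python
# Sketch of the overlapping Transfer Grace Period crediting rule:
# when a domain inside several Transfer Grace Periods is deleted,
# only the current sponsoring registrar is credited, once.
# TRANSFER_FEE is an illustrative assumption.

TRANSFER_FEE = 6.00

def credits_on_delete(transfer_history: list) -> dict:
    """Given the registrars of still-open Transfer Grace Periods in
    chronological order, return the credits issued when the current
    sponsoring registrar deletes the domain."""
    if not transfer_history:
        return {}
    # Only the last (current) transfer is credited.
    return {transfer_history[-1]: TRANSFER_FEE}

# Example from the text: A -> B -> C, then deleted by C within all
# three grace periods: only C is credited, once.
```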

f. Migration to provreg standard (EPP)

Wampumpeag, LLC (Eric Brunner-Williams, General Manager) is developing a high-performance, 2nd-generation RRP server in C++. This RRP server is scheduled to be operational at SWITCH in 4Q02 (Fall, 2002). Wampumpeag began work on a reference implementation of a multi-protocol registry backend, supporting the EPP (IETF), RRP (VGRS), and SRS (CORE) protocols, over TCP, BEEP, FTP, and SMTP, in January of 2002. Wampumpeag's EPP server is scheduled to be operational at SWITCH in 1Q03 (Winter 2003).

SWITCH is aware that RRP is used by many ccTLD registries and their registrars and that RRP is also important for the smooth continuation of operations of many ICANN-accredited registrars.

SWITCH will license its RRP server to ccTLD registries at no cost, either upon transitioning the ORG registry from RRP to EPP, or at an earlier point in time.

Within the proposed registry system, parallel usage of both protocols is foreseen. The gateway server will forward requests to the EPP and RRP servers operating in parallel. Intermixed usage (for example, polling via EPP after a transfer request has been submitted via RRP) will be assured, so the protocol transition can proceed individually for each registrar. A dedicated web site with hints and a discussion forum for the transition to EPP will be set up for registrars, and an EPP reference client will be posted on the registrar web site for download. Further help on problems raised in the forum will be provided. The transport layer for EPP is TCP, secured by SSL.
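The gateway idea described above can be sketched as a simple protocol dispatcher: requests arriving over RRP or EPP are forwarded to the matching backend, so both protocols run in parallel and may be intermixed per registrar. Class and method names below are our own illustration, not SWITCH's implementation:

```python
# Minimal sketch of a protocol gateway that forwards requests to
# parallel RRP and EPP backends (illustrative names, not real code).

class Gateway:
    def __init__(self):
        self.backends = {}  # protocol name -> handler callable

    def register_backend(self, protocol, handler):
        """Attach a backend server handler for a wire protocol."""
        self.backends[protocol] = handler

    def dispatch(self, protocol, request):
        """Forward a request to the backend for its wire protocol."""
        if protocol not in self.backends:
            raise ValueError(f"unsupported protocol: {protocol}")
        return self.backends[protocol](request)

gw = Gateway()
gw.register_backend("rrp", lambda req: f"RRP handled: {req}")
gw.register_backend("epp", lambda req: f"EPP handled: {req}")
```

Because dispatch is per request, a registrar can submit a transfer over RRP and later poll for its outcome over EPP, which is the intermix case mentioned above.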




9.Transition plan

a. Time schedule of the transition with milestones:

August 2002
  • Decision: ICANN makes its selection until the end of August.
  • End of month: ICANN decision (milestone); hardware ordered.

September 2002
  • Required from VeriSign: all kinds of reports, to be forwarded to the new registry during September (reports from the last year would be appreciated).
  • Decisions: system specifications fixed; main system available.
  • Start of month (milestone): form and content of the registrar reports finalized.
  • End of month: hardware, reports and registrar web site defined; all hardware components defined and first performance tests available.

October 2002
  • Decisions: installation and finalization of the software architecture.
  • Tasks: fine tuning of all software parts and testing of the different strategies.
  • End of month (milestone): registry system installed, software and services installed, performance test done.

November 2002
  • Required from VeriSign: full transition data for testing.
  • Tasks: first transfer of the registry, name server and billing databases from VeriSign for detection of transition problems; transition plan updated, tested and finalized at the end of November.
  • End of month (milestones): transition plan finalized; OT&E system available and accessible for registrars.

December 2002
  • Required from VeriSign: full transition data for the real take-over, and the zone file for distribution to the new name servers.
  • Tasks: all systems finalized and ready for the take-over from VeriSign; entire system and helpdesk completed; all registry services tested; web site for registrars available; zone file distributed to the new name servers.
  • Mid of month (milestones): registry systems ready for the take-over; helpdesk and registrar web sites available.
  • End of month (milestone): final transition done and start of the new registry.

January 2003
  • Tasks: registrar helpdesk and web sites optimized for serving registrars.
  • Mid of month (milestone): registrar web sites and helpdesk finalized and working.
  • Fallback: time is reserved at the end of each month in case the transition is not completed as planned; if not completed as planned, the registry transition is finalized in January.

b. Interruption of any registry services

The first number in the list below is the expected interruption time, the second the maximum time, provided all transition steps proceed as planned.

  • WHOIS service: 3 hours – up to 1 day
  • Registry service: 8 hours – up to 2 days, depending on the first transition test
  • Ticket service: immediately available on the registrar Web site
  • Web site: immediately available
  • Name server: immediately available

c. Effect on ORG registrants

Registrants will not be able to register domain names until the new registry service is available to registrars. See b) above for expected and maximum times.

d. Effect on Internet users seeking to resolve ORG domain names

There will be no interruption in name server services during the transition from the current operator (VeriSign, Inc.) to the new registry.

e. Criteria for a good transition

  • Start of the trial run with data from VeriSign in November 2002 at the latest; VeriSign has to provide valid registration data.
  • Two weeks before the transition: distribution of the zone file to the new name servers. The new name servers will start propagating the ORG zone but will not yet be queried by resolvers, because the list of name servers for the ORG zone in the root name servers remains unchanged.
  • At the transition time, VeriSign has to provide the real and final transition data. This is the starting point from which the interruption times above are calculated.
  • At the transition time, ICANN/IANA will be requested to activate the new zone file for ORG, listing the new name server network.

f. Experiences

The ccTLDs CH and LI underwent a system change in December 1999. Zone file updates were interrupted for several days until the new registry system could provide Web-based registration access.





10.Policies

a. Mechanism to ensure compliance with ICANN developed policies and requirements of the registry agreement

The new registry operator (SWITCH) will participate in appropriate stakeholder committees established by ICANN and will abide by ICANN developed policies and requirements of the registry agreement for the ORG TLD. The mechanisms to ensure compliance are a) to follow closely the developments within the ICANN community and b) to put such policies in force in close cooperation with the ORG community (registrars and registrants) and SWITCH personnel working on the implementation. Please also refer to the appendix D, Community Concept, for specific processes proposed for ORG.

If the registry agreement needs to be changed by either party (ICANN or SWITCH), SWITCH will enter into negotiations with ICANN to resolve such issues.

b. Provision for equivalent access by accredited registrars

The gateway server will have capabilities for all registrars to be connected on a non-discriminating basis. The Web sites for registrars will have a trouble ticket system and provide online information regarding open tickets until they are closed.

The 24/7 helpdesks will also be accessible on a non-discriminating basis; it will be ensured that all registrars receive help. The three proposed locations of the helpdesk (in Switzerland, Asia and America) will allow for close interaction between registrars and the registry.

c. Registry Code of Conduct

SWITCH recognizes that in most instances the Domain Name System (DNS) is the means by which businesses, consumers and individuals gain access to, navigate and reap the benefits of the global Internet. SWITCH also recognizes that the DNS resources need to be administered in a fair, efficient and neutral manner. The following Code of Conduct will therefore be applied by SWITCH:

  1. SWITCH will interact in a fair and transparent manner with registrars and will apply equal processes to them.
  2. All ICANN accredited registrars will be given equal access to the new registry services, provided that these registrars have also concluded a valid agreement with the new registry (SWITCH).
  3. SWITCH will commit to additional services for registrars of the ORG TLD and will implement structures to promote and differentiate the ORG TLD and support registrants in such efforts.
  4. SWITCH will not in any way attempt to warehouse ORG domain names for its own purposes.
  5. Only SWITCH personnel will have access to data of registrars served by the new registry (SWITCH).
  6. SWITCH will ensure that no data from third parties will be disclosed.
  7. SWITCH will allow ICANN to conduct reviews of the operation of the new registry by SWITCH, at ICANN’s expense. The results of such reviews will, if requested, be analyzed in cooperation with ICANN, and SWITCH will, if requested, try to resolve problems in cooperation with ICANN.

11.Technical and other support

a. Registrar services

Three registrar services offices, one in Zürich, one in the US, one covering the Asia Pacific region will provide customer care for registrars on a 24/7 basis. These three registrar services will monitor both ORG registries and the worldwide name server network for ORG domain names and serve as first level support for registrars. The headquarters will be located in Zürich, with approximately 12 to 15 FTE’s: 8 to 10 for first level support and marketing, 4 to 5 for accounting. The office in Asia will initially be set up with 4 FTE’s and the office in the US with 5 to 6 FTE’s. Supported languages will be English, Spanish, French, Chinese, Japanese, Portuguese and German, with each office free to offer additional support.

The distributed locations of registrar services allow registrars to use financial institutions in their own region.

b. Back-office services

Back-office services will provide expert know-how for in-depth technical problem analysis and accounting issues for the ORG registry.

The SWITCH network operations (NOC) group comprises 7 persons and will work on all issues related to name servers and the network.

The group of system administrators at SWITCH (8 persons) will maintain and upgrade the registration systems, and SWITCH security experts (5 persons) will monitor the systems from a security point of view and take action in case of incidents.

Personnel from the CH and LI front and back offices (more than 40 persons) can be assigned to ORG on short notice if additional human resources are needed, and the SWITCH Web design team (3 persons) will provide help setting up Web pages.




12.Contributions from CORE Internet Council of Registrars

CORE has agreed to contribute staff resources, expertise and software for the .org registry run by SWITCH. This collaboration is based on CORE's commitment to the cause of operating shared registries on a not-for-profit basis in the public trust. CORE's contribution helps to raise awareness of the concerns of registrars and providers at large (not just large ICANN-accredited registrars).

CORE has extensive experience in the cooperative development and operation of shared provisioning systems. Developers are affiliated with CORE members, and tasks are distributed to achieve redundant availability of data, systems and expertise. CORE currently operates as a decentralized ICANN-accredited registrar, enabling its members, under equal terms for all members, to register domains for their clients and resellers through CORE's shared registration systems. Currently CORE supports registrations under .biz, .com, .info, .name, .net, .org and .us. Since mid-1999, CORE has performed more than 1.1 million new domain registrations for its members. CORE also developed and operates the first shared registry for a restricted (sponsored) gTLD, .aero, under an outsourcing agreement with SITA.

Contributions of CORE include: (1) contribution to the development and extensive testing of RRP and EPP server and client tools; (2) contributions to the development and standardization of critical processes (e.g. bulk data, list requests, notifications) not defined in RRP and EPP; (3) input and critical review for the transition process from VGRS to the new registry, in particular with respect to the handling of accounts, grace and pending periods and NS glue records; (4) input for the thin-thick registry strategy, in particular with regard to the concerns of registrars and their channel partners; (5) input for the operation of technical working groups to define critical registry procedures in a multilateral consultation mode; (6) input for the policy definition framework, in particular for the representation of registrars, the TLD community at large and registrants. These contributions are ongoing and involve CORE and a number of its members.

CORE Internet Council of Registrars is an international not-for-profit association of registrars. Its legal form is that of an association as defined in Swiss law (Art. 60-79 CC), established in October 1997 for the purpose of creating a shared domain name registry for the introduction of new TLDs. The mission of CORE is to develop and operate standards and coordinating mechanisms for the central management of Internet domain registrations in the public trust. As a result of its legal structure and its bylaws, CORE is governed through a bottom-up democratic membership process.

The membership of the CORE association spans four continents and 25 countries. It includes highly specialized registrars as well as major telecommunications, Internet infrastructure and application service providers.