Sections C17.2-C17.16

 

Table of contents

Sections C17.2-C17.16

Table of contents

Table of figures

C17.2. Registry-registrar model and protocol

Registry Registrar Protocol (RRP) Layer

Extensible Provisioning Protocol (EPP) Layer

Protocol Independent Layer

Protocol specific Procedures and rules

Domain names

Contact objects (only exist in the Registry in “thick” mode)

Host objects

Registrar objects

General rules for Registry Objects

Add Grace Periods

Renew/Extend Grace Period

Transfer Grace Period

Bulk Transfer Grace Period

Overlapping Grace Periods

Transfer Pending Period

Delete Pending Period

Object Transfers under EPP

Reserved Strings

C17.3. Database capabilities

Overview

The databases

Intercontinental Registry Replication

Messages used for replication

Global Name Registry operates Multiple databases

Some considerations in Moving from a “Thin” SRS Database to a “thick” SRS Database

Reporting (handled by separate database)

Data Validation and consistency checking

Database Structure and Table structure

Scalability of the database

Performance

Database Hardware, Size, Throughput, and Scalability

Hardware platform and specifications

Database design parameters

Load optimization

Database Administration

Database Backup/Restore

Disaster Recovery

Database Security & Access Privileges

C17.4 Zone file generation

The update process and zone file generation

Frequency

Security and authentication

Backup and recovery

Zone file access

C17.5 Zone file distribution and publication

Locations of DNS servers

Distribution of Zone File

Frequency

Security and authentication

C17.6. Billing and collection systems

Global Name Registry billing and VAT Classification

Overview of billing systems

Mapping components to hardware – Deployment view of the Billing system

The billing process – a process view of the Billing system

Accessibility and Reporting

Increasing credits and adding accounts

Reporting to Registrars

System security

Basic Billing rules in the System

Deferred revenue calculation

VAT calculation and VAT information

C17.7. Data escrow and backup

Backup

Overview

Backup policy

Tivoli Storage Manager Implementation

Domains and Management Classes

Storage Pools

Schedules

Administrative Schedules

Procedures for taking tapes off-site

Testing and QA of backup

Backup agent contract/proposal

Escrow

Schedule for escrow deposits

Escrow deposit format specification

Distribution Of Public Keys

Escrow Transfer Procedure

Escrow Verification Procedure

Escrow Retrieval And Rebuild Procedure

Escrow Agent Proposal

C17.8 Publicly accessible look up/Whois service

Technical functioning of the Whois

Deployment of the Whois Service

Port 43 Whois service

Web-based Whois service

Hardware

Software

Security and reliability

Whois policy and Format of responses

Modifiers

Returned Whois Results

Data format policies

C17.9. System security

Overview

Firewalls

Intrusion Detection System (IDS)

Encrypted channels

Jump points and gateways

Physical security of the facilities

Alerts

Log and Monitoring Systems

Data protection Procedures

Passwords and pass phrases

Information control

Software security

Software diversity

Human Security

Assured Asynchronous communication with MQ

Registrar security and authentication

Security of offices and employees

C17.10. Peak capacities

Introduction

Projected Registry volumes

Handling “Add storms”

Overview of elements influencing Global Name Registry capacity

The current capacity of Global Name Registry Registry Systems

Whois capacity

DNS capacity

Backup

SRS capacity

WWW services

Mechanisms used by Global Name Registry to handle peaks and achieve peak capacity

General Scalability design

Load balancing

Queuing and batch allocation mechanisms

Use of solid state storage

Use of ESS

Burstable bandwidth

C17.11. Technical and other support

Global Name Registry’s Dedication to Customer Service

Technical help systems

Operational Procedures and Practices

Accreditation and initiation of Customer Support

Registrar Notification Procedure

Registration Requirements

Registrar Tool Kit

Security and availability of Customer support

Caring for Security

Global Name Registry supports a vast number of languages

Availability and roles

Support Priority levels

Escalation paths

Categories of Customer support Processes

C17.12. Compliance with specifications

Other RFCs with which Global Name Registry is totally compliant

RFC0954 Nicname/Whois

RFC1034 STD0013 Domain Names - Concepts And Facilities

RFC1035 STD0013 Domain Names - Implementation And Specification

RFC1101 DNS Encoding Of Network Names And Other Types

RFC2181 Clarifications To The DNS Specification

RFC2182 BCP0016 Selection And Operation Of Secondary DNS Servers

Other relevant RFCs with which Global Name Registry has total compliance

RFC1995 Incremental zone XFR

RFC1996 DNS Notify messages

RFC2136 Dynamic updates for DNS

RFC2845 TSIG Transaction signatures

RFC2535 DNS-SEC Security extensions for DNS

C17.13 System reliability

Analysis and quantification of QoS

C17.14. System outage prevention

Problem detection

Daily backups transported offsite

Redundant Power Systems

Redundant network

Proven software and operating systems, open source software used where appropriate

Multiple Server locations

Location

Disaster recovery site

24/7 availability of technical maintenance staff

Triple database servers and centralized database storage

Layered architecture

Option to add servers to the “hot” system

Mirrored, backed up storage for all data and software

Continuous log of database transactions transported offsite

Hardware encrypted communications with Registrars and external DNS servers

PIX firewalls in failover configuration

High availability active/active failover system on critical systems

Servers and hard disks in stock, pre-configured and ready to boot from central storage

Facility security including access control, environmental control, magnetic shielding and surveillance

Redundant DNS servers on different backbones

Repeatedly proven software and hardware

Focus on top class hardware, standardized on few different products from solid vendors

Some of the highest experience and competence in the industry on DNS and Registry operations

C17.15. System recovery procedures

Defining Outage

Overview of events that could lead to outages

Outage events and procedures for restoring operation

Single server failure

ESS failure

Data Center Destruction or otherwise complete one-sided data destruction

Software errors

Inconsistency

Complete data loss on main site and disaster recovery site

Restoring Software

Restoring Data

Recovery Training of Technical Staff and testing of procedures

Projected time for restoration of system

Summary of restoration procedures

Providing Service during outage

Extremely redundant systems

Failover to Disaster Recovery Site

Backup power systems

Protecting against unexpected outages

QA team

Potential system problems that may result in outages

Documentation of System Outages

C17.16. Registry failure provisions

Insolvency of Registry Operator

Destruction of UK offices

Emergency Registry Transfer

Conclusion

 

Table of figures

Figure 1 High level SRS layering

Figure 2 Core SRS database distribution at main site

Figure 3 Database distribution and replication main site – backup site

Figure 4: Illustration of data model

Figure 5 Core SRS – Whois communication

Figure 6: Zone file distribution - a deployment view

Figure 7: Real time zone file distribution - a process view

Figure 8: Periodical zone file distribution - a process view

Figure 9: High-level system view for Billing

Figure 10: Deployment view of the Billing package

Figure 11: The Billing process in accomplishment of a billable operation

Figure 12: The Billing, debit process

Figure 13: The Billing, credit process

Figure 14: Backup solution overview

Figure 15: All application data is backed up

Figure 16: Diagram of the dataflow

Figure 17: Domain, Management Classes, Copygroup and storage pool structure

Figure 18: Escrow agent procedure

Figure 19: Overview of the Whois system

Figure 20: Deployment diagram for the Publicly accessible Look up

Figure 21: Control flow for a port 43 Whois look-up

Figure 22: Whois look up through the Web-based interface

Figure 23: Geographical spread of Global Name Registry operations

Figure 24: Operational query volumes on entire Verisign Registry (com, net, org)

Figure 25: Operational transaction volumes pro-rated for .org

Figure 26: Evidence of Add Storms in the Verisign System, March 02 (numbers pro-rated for .org from total Verisign numbers, as found on gtldregistries.net)

Figure 27: Whois peak performance when returning queries, returning negative results, or inserting new entries

Figure 28: Peak performance per second of single DNS server

Figure 29: Peak performance per second on single DNS site, and across Global Name Registry DNS network

Figure 30: Zone loading time (full loading, not incremental), memory usage and disk usage

Figure 31: All application data is backed up

Figure 32: The Registrar interface to EPP/RRP servers

Figure 33: Core SRS overview

Figure 34: The layered structure of the EPP server, business logic and database logic

Figure 35: Illustrated Whois performance on solid state storage

Figure 36: Customer Support Priority levels

Figure 37: Priority 1 process flow

Figure 38: Priority 2 process flow

Figure 39: Priority 3 process flow

Figure 40: Illustration of feedback loops to prevent outage

Figure 41: BigIP (loadbalancer) traffic reporting on the UK main site (Note that all IP numbers are anonymized for security)

Figure 42: Global Name Registry monitors response times from all its DNS locations and to/from the Root Servers

Figure 43: Cross-location response times to all Global Name Registry services (and nameserver response quality)

Figure 44: Geographical spread of Global Name Registry locations

 

 

C17.2. Registry-registrar model and protocol.

Please describe in detail, including a full (to the extent feasible) statement of the proposed RRP and EPP implementations. See also item C22 below.

C17.2. Registry-registrar model and protocol

Registry Registrar Protocol (RRP) Layer

Extensible Provisioning Protocol (EPP) Layer

Protocol Independent Layer

Procedures and rules of the .org SRS

Domain names

Contact objects (only exist in the Registry in “thick” mode)

Host objects

Registrar objects

General rules for Registry Objects

Add Grace Periods

Renew/Extend Grace Period

Transfer Grace Period

Bulk Transfer Grace Period

Overlapping Grace Periods

Transfer Pending Period

Delete Pending Period

Object Transfers under EPP

Reserved Strings

Global Name Registry will fully support RRP as defined by RFC 2832.

Also, Global Name Registry will fully support EPP as defined by draft-ietf-provreg-epp-06. Global Name Registry will adopt higher draft versions if and when they become available, and will migrate all of its EPP interfaces to the final EPP standard once it is stable and recommended by the IETF.

Global Name Registry will initially support only the RRP, but after an initial period, both the RRP and the EPP will be fully and simultaneously supported. This gives Registrars the flexibility to migrate when they wish. Dual support also minimizes the risk of instability, since Registrars can attempt migration and move back to RRP in case of problems. The Registry Business Logic is enforced by an isolated, protocol-independent layer shared by all interfaces to the Registry. This ensures fair treatment of the Registrars regardless of their protocol of choice. Any changes made to Registry Policy will be reflected simultaneously in both protocols. The layering also simplifies the addition of new protocols (e.g. new versions of the EPP protocol) at a later stage.


 

Figure 1 High level SRS layering

A more detailed description of the Layering of our SRS application can be found in Section C22.

Registry Registrar Protocol (RRP) Layer

The RRP server will be implemented according to RFC 2832, with emphasis on compliance with the Verisign implementation already in use by the Registrars. To ensure a smooth transition from Verisign to Global Name Registry for the Registrars, behavior not specified in the RFC but implemented by Verisign will be implemented to reflect the current functionality. This includes issues such as out-of-band communication by email in the case of transfers. For a fuller discussion of the Registrar transition from Verisign to Global Name Registry, see Section C18.2.

Although it is impossible to guarantee 100% compatibility with all existing RRP clients, Global Name Registry will put great effort into the compatibility issues surrounding the transfer from Verisign. All clients built on the Verisign RRP SDK in its latest revision will be supported.

For more information about the RRP protocol layer, see Section C22.

Extensible Provisioning Protocol (EPP) Layer

The EPP protocol will be implemented according to the IETF provreg EPP draft (draft-ietf-provreg-epp-06). No extensions to the protocol will be needed for initial .org operation.

Protocol Independent Layer

The Protocol Independent Layer will be implemented as an object oriented C++ API library on which to build all our SRS applications. Global Name Registry is very experienced with C++ and C++ embedded SQL. Every application existing in the SRS at any given time will use the same library version to access the Registry Data to enforce equal policy treatment for all services. The interface exposed by the API will be a superset of the functionality needed by all protocols supported by the SRS. Any translations needed from the internal data representation to the client will be done by the Protocol Layer.
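To illustrate the idea, the following sketch shows what such a shared, protocol-independent interface could look like. All class, method and type names here are hypothetical illustrations rather than the actual Global Name Registry library; result codes are kept protocol-neutral so that each front end can map them to the appropriate RRP or EPP response.

// Hypothetical sketch of a protocol-independent registry API shared by the
// RRP and EPP front ends. Names and signatures are illustrative only.
#include <string>
#include <vector>

namespace srs {

// Result codes are protocol-neutral; each front end maps them to the
// appropriate RRP or EPP response code.
enum class Result { Ok, ObjectExists, ObjectNotFound, NotAuthorized, StatusProhibitsOperation };

struct DomainInfo {
    std::string name;          // fully qualified domain name, e.g. "example.org"
    std::string registrarId;   // sponsoring registrar
    int         periodYears;   // registration period in years (1-10)
    std::vector<std::string> nameservers;
};

// The interface exposed to every protocol layer. A single implementation
// enforces the business rules, so RRP and EPP clients are treated identically.
class RegistryApi {
public:
    virtual ~RegistryApi() = default;
    virtual Result checkDomain(const std::string& fqdn) = 0;
    virtual Result registerDomain(const DomainInfo& request) = 0;
    virtual Result renewDomain(const std::string& fqdn, int years) = 0;
    virtual Result transferDomain(const std::string& fqdn,
                                  const std::string& gainingRegistrar,
                                  const std::string& authInfo) = 0;
    virtual Result deleteDomain(const std::string& fqdn) = 0;
};

} // namespace srs

Both the RRP and the EPP servers would call the same implementation of such an interface, which is what guarantees that the business rules are applied identically regardless of the protocol used by the Registrar.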

For more information about the protocol independent layer, see Section C22.

Protocol specific Procedures and rules

An important part of the SRS (as defined in Section C17.1) is the business logic that translates a Registrar request, whether received through RRP or EPP, into transactions in the Oracle database. Some of this logic is handled by the front-end (RRP/EPP) servers described earlier in this section, and some by the business logic layer or by stored procedures in the database.

For a number of business operations (both billable and non-billable), the following high-level rules and policies apply to the operations in the database. This is a non-layer-specific rule set, and may be implemented in the database itself, in the business logic, or in the front-end servers (EPP or RRP), the latter only for protocol-specific rules.

Business operations include:

1.     Register (for billable objects)

2.     Create (for non billable objects)

3.     Delete

4.     Modify

5.     Transfer

6.     Renew (for billable objects)

7.     Information query (CHECK, INFO, POLL or TRANSFER query)

8.     Grace Period implementation

9.     Bulk Transfer

 

The procedures and rules of these functions are described in the following:

Domain names

Format

A domain name is of the format “example.org”. A domain name of this format will sometimes be called a Fully Qualified Domain Name (FQDN).

Eligibility requirements and dispute resolution

The domain name is subject to Eligibility Requirements and Dispute Resolution.

Uniqueness/Multiplicity

An FQDN is unique in the .org zone. Two identical FQDNs cannot simultaneously exist on .org.

String restrictions

The following restrictions apply to a domain name:

o        Certain strings are reserved as described in “Reserved Strings”

Associations

·         Child nameservers of the domain (e.g. "ns1.example.org" for the domain "example.org")

Whois

The domain name appears in the .org Whois service, as specified in the Whois specification.

DNS

The domain name resolves in the .org DNS

Registration

Registration is for 1 to 10 years, in one-year increments.

Registrations are subject to Grace Periods.

Modifications

All associated information described under “Associations”, including contact IDs, can be modified, unless the domain name has a status that prohibits modification.

The domain name can be modified, unless it has any statuses that prohibit this operation. Such status will for the Registrar be protocol specific (EPP or RRP) but will have a consistent internal specification (non-protocol specific).

(note that Deletions, Transfers and Renewals are described separately and are not considered as “modifications”)

Renewals

The domain name can be renewed, unless it has any statuses that prohibit this operation. Such status will for the Registrar be protocol specific (EPP or RRP), but will have a consistent internal specification (non-protocol specific). Protocol-specific statuses are described in more detail in Section C17.2

A request for renewal that would set the expiry date to more than 10 years into the future will be denied. However, a request that would set the expiry date to 10 years plus a fraction of a year will set the expiry date to exactly 10 years in the future, and the remaining fraction will be forfeited.

For example, 10 year renewals are possible if the expiry date of the domain name is less than 1 year in the future. In this case, the expiry date will be set to 10 years into the future, and any remaining time will be forfeited.
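A minimal sketch of this capping rule, using day counts rather than calendar arithmetic and entirely hypothetical function names, could look as follows:

// Illustrative sketch of the renewal capping rule described above, using a
// simplified 365-day year instead of real calendar arithmetic.
#include <algorithm>
#include <stdexcept>

constexpr int kDaysPerYear   = 365;               // simplification, ignores leap years
constexpr int kMaxFutureDays = 10 * kDaysPerYear; // 10-year ceiling on the expiry date

// 'daysUntilExpiry' is the current remaining registration time in days,
// 'renewYears' is the whole number of years requested (1-10).
// Returns the new remaining time in days, or throws if the request is denied.
int renewedExpiryDays(int daysUntilExpiry, int renewYears) {
    if (renewYears < 1 || renewYears > 10)
        throw std::invalid_argument("renewal must be 1-10 whole years");

    int requested = daysUntilExpiry + renewYears * kDaysPerYear;

    // A full year or more beyond the 10-year ceiling: the request is denied.
    if (requested >= kMaxFutureDays + kDaysPerYear)
        throw std::runtime_error("renewal denied: expiry would exceed 10 years");

    // 10 years plus a fraction of a year: cap at 10 years, forfeit the rest.
    return std::min(requested, kMaxFutureDays);
}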

Automatic renewal will happen when a domain name expires. In the case of Auto-Renewal of the domain name, a separate Auto-Renew Grace Period will apply.

Renewals are subject to Grace Periods as described below.

Transfers

The domain name can be transferred, unless it has any statuses that prohibit this operation. Such status will for the Registrar be protocol specific (EPP or RRP) but will have a consistent internal specification (non-protocol specific). Protocol-specific statuses are described in more detail in Section C17.2

Under thick EPP operations, a Transfer can only be initiated when the appropriate Authentication information is supplied. The only Registrar to which the Authentication information for Transfer is available is the Current Registrar. A Registrar other than the Current Registrar that wishes to initiate a Transfer on behalf of a Registrant must obtain the Authentication information from the Registrant.

The Authentication information shall be made available to the Registrant upon request. The Registrant is the only party other than the Current Registrar that shall have access to the Authentication information.

The section “Object Transfer under EPP” below describes in more detail how Transfers are performed with the relevant protocols.

Under RRP, a Transfer will be automatically accepted if the losing Registrar does not explicitly deny the Transfer within the applicable Transfer Pending Period.

Registrar Transfer entails a specified extension of the expiry date for the object. The Registrar Transfer is a billable operation and is charged identically to a renewal for the same extension of the period. This period can be from 1 to 10 years, in 1 year increments.

Since Registrar Transfer involves an extension of the registration period, the rules and policies applying to how the resulting expiry date is set after Transfer, are adopted from the Renewal policies on extension.

A domain name cannot be transferred to another Registrar within the first 60 days after Registration. This also continues to apply if the domain name is renewed during the first 60 days.

Transfer of the domain name changes the sponsoring Registrar of the domain name, and also transfers the subordinate, in-zone hosts (but not foreign hosts on other TLDs) associated with the domain name.

Deletion

The domain name can be deleted, unless it has any statuses that prohibit this operation. Such status will for the Registrar be protocol specific (EPP or RRP) but will have a consistent internal specification (non-protocol specific). Protocol-specific statuses are described in more detail in Section C17.2

A domain name is also prohibited from Deletion if it has any in-zone child hosts. For example, the domain name example.org cannot be deleted if an in-zone host ns.example.org exists.

The Delete operation marks the name for deletion and invokes the Deletion Mechanism. The current version of the Deletion Mechanism is described in the “Delete Pending Period” section below. Future versions of the Deletion Mechanism may change the way a domain is deleted.

Contact objects (only exist in the Registry in “thick” mode)

Transfer

Transfer of contact objects is allowed provided that no other objects are associated with the Contact object being transferred

Deletion

Deletion of contact objects is allowed provided that no other objects are associated with the Contact object being deleted

Billing

A contact object is not a billable object.

Usage

A contact object can be referred to by Registrars other than the Sponsoring Registrar.

Associations

·         Sponsoring Registrar

Host objects

Types of host objects

There are two types of hosts: In-zone hosts and out-of-zone hosts.

In-zone hosts

An in-zone host is a Host object whose host name is under .org.

Creation

Creation of an in-zone host is only allowed if the host name is subordinate to a domain name object owned by the Registrar.

For example, only the Registrar owning the domain name object example.org can create the host ns.example.org.
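A hypothetical sketch of this authorization check (names and signatures are illustrative only; the real check is performed inside the SRS business logic against the database) might look like this:

// Illustrative check that an in-zone host may only be created by the
// registrar sponsoring its parent .org domain. Hypothetical code only.
#include <string>

// Returns true if 'host' (e.g. "ns.example.org") is subordinate to
// 'domain' (e.g. "example.org").
bool isSubordinate(const std::string& host, const std::string& domain) {
    if (host.size() <= domain.size() + 1) return false;
    return host.compare(host.size() - domain.size(), domain.size(), domain) == 0 &&
           host[host.size() - domain.size() - 1] == '.';
}

// 'sponsorOfParent' would be looked up in the SRS database; it is the
// registrar that sponsors the parent domain name object.
bool mayCreateInZoneHost(const std::string& requestingRegistrar,
                         const std::string& host,
                         const std::string& parentDomain,
                         const std::string& sponsorOfParent) {
    return isSubordinate(host, parentDomain) &&
           requestingRegistrar == sponsorOfParent;
}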

Uniqueness/Multiplicity

Only single entries of in-zone hosts can exist in the Registry.

Usage

Any Registrar can refer to a host object in this category.

Deletion

Deletion of an in-zone host is allowed provided that no other objects are associated with the host.

Transfer

Explicit transfer of an in-zone host is not allowed (only implicit transfer through transfer of the domain name on which the host resides).

Associations

·         A host can be associated with up to 13 IP Addresses

·         A creation date and modification date

·         Status(es)

·         Sponsoring Registrar

·         Parent domain (e.g. "example.org" for "ns1.example.org")

Out-of-zone hosts: Host objects whose host name is not under .org

Creation

Any Registrar can create an out-of-zone host.

Uniqueness/Multiplicity

Multiple identical entries of out-of-zone hosts can exist in the Registry; however, out-of-zone hosts are unique per Registrar.

Usage

Only the Registrar owning a host object in this category can refer to it.

Deletion

Deletion of an out-of-zone host is allowed provided that no other objects are associated with the host.

Transfer

Explicit transfer of an out-of-zone host is not allowed.

Associations

·         A creation date and modification date

·         Status(es)

·         Sponsoring Registrar

Common rules for both types of hosts

String restrictions

Length restriction

o        Minimum length of the host name is determined by the minimum length of a second level and third level as specified for the domain names.

Billing

A Host object is not a billable object.

Registrar objects

Associations

The ICANN Accredited Registrar is associated with at least one of each of the following contact points. Up to 13 instances of each contact point can be associated.

·         Registrar Balance

Create

A Registrar object can only be created by the Registry.

Modification

Modification of a Registrar object is allowed, by the Registrar.

Transfer

Transfer of a Registrar object is not allowed.

Deletion

Deletion of a Registrar object can only be done by the Registry.

Billing

A Registrar object is not a billable object.

General rules for Registry Objects

General rules for objects

Add Grace Periods

All registrars and all registrations will have a 5 day (120-hour) add grace period.

The Add Grace Period is a specified number of hours following the initial registration of a domain. The current value of the Add Grace Period for all registrars is 5 days (120 hours). If a Delete, Extend (Renew), or Transfer operation occurs within the 5 days (120 hours), the following rules apply:

Delete

If a registration is deleted within the Add Grace Period, the sponsoring registrar at the time of the deletion is credited for the amount of the registration. The domain is deleted from the Registry database according to the Deletion mechanism in use at the time. See Overlapping Grace Periods for a description of overlapping grace period exceptions.

If a domain name is deleted after the 5-day (120 hours) grace period expires, it will be placed on HOLD for 5 days and then deleted through the system.

Renew

If a registration is extended within the Add Grace Period, there is no credit for the Add. The account of the sponsoring Registrar at the time of the extension will be charged for the initial add plus the number of years the registration is extended. The expiration date of the domain is extended by the number of years, up to a total of ten years, as specified by the registrar's requested Extend operation.

Transfer (other than ICANN-approved bulk transfer)

Registrar Transfers may not occur during the Add Grace Period or at any other time within the first 60 days after the initial registration. Enforcement is the responsibility of the registrar sponsoring the domain name registration, and the restriction is also enforced by the SRS.

Bulk Transfer (with ICANN approval)

Bulk transfers with ICANN approval may be made during the Add Grace Period. The expiration dates of transferred registrations are not affected. The losing Registrar's account is charged for the initial add.

Renew/Extend Grace Period

All registrars and all registrations will have a 5 day (120-hour) Renew/Extend grace period.

The Renew/Extend Grace Period is a specified number of hours following the renewal/extension of a domain name registration period. The current value of the Renew/Extend Grace Period is 5 days (120 hours). If a Delete, Extend, or Transfer occurs within that 5 days (120 hours), the following rules apply:

Delete

If a registration is deleted within the Renew/Extend Grace Period, the sponsoring registrar at the time of the deletion receives a credit of the renew/extend fee. The domain is deleted from the Registry database according to the Deletion mechanism in use at the time.

See Overlapping Grace Periods for a description of overlapping grace period exceptions.

Extend (Renew)

A registration can be extended within the Renew/Extend Grace Period for up to a total of ten years. The registrar's available credit will be charged for the additional number of years the registration is extended.

Transfer (other than ICANN-approved bulk transfer)

If a registration is transferred within the Renew/Extend Grace Period, there is no credit. The expiration date of the registration is extended by one year and the years added as a result of the Extend remain on the domain name up to a total of 10 years.

Bulk Transfer (with ICANN approval)

Bulk transfers with ICANN approval may be made during the Renew/Extend Grace Period. The expiration dates of transferred registrations are not affected. The losing Registrar's account is charged for the Renew/Extend operation.    

Transfer Grace Period

All Registrars and all registrations will have a 5 day (120-hour) Transfer grace period.

The Transfer Grace Period is the 5 days (120 hours) following the transfer of a domain. If a Delete, Extend, or Transfer occurs within that 5 days (120 hours), the following rules apply:

Delete

If a domain is deleted within the Transfer Grace Period, the sponsoring Registrar at the time of the deletion receives a credit of the transfer fee. The domain is deleted from the Registry database and is immediately available for registration by any Registrar. See Overlapping Grace Period for a description of overlapping grace period exceptions.

Renew

If a domain is extended within the Transfer Grace Period, there is no credit for the transfer. The Registrar's account will be charged for the number of years the registration is extended. The expiration date of the domain is extended by the number of years, up to a maximum of ten years, as specified by the registrar's requested Extend operation.

Transfer (other than ICANN-approved bulk transfer):

If a domain is transferred within the Transfer Grace Period, there is no credit. The expiration date of the domain is extended by one year up to a maximum term of ten years.

Bulk Transfer (with ICANN approval)

Bulk transfers with ICANN approval may be made during the Transfer Grace Period. The expiration dates of transferred registrations are not affected. The losing Registrar's account is charged for the Transfer operation that occurred prior to the Bulk Transfer.     

Bulk Transfer Grace Period

There is no grace period associated with Bulk Transfer operations as initiated by ICANN according to the Registry-Registrar Agreement. Upon completion of the Bulk Transfer, any associated fee is not refundable.

Overlapping Grace Periods

If an operation is performed that falls into more than one grace period, the actions appropriate for each grace period apply except as follows:

If a domain name is deleted within multiple Renew/Extend Grace Periods (after several subsequent Renewals), then the registrar is credited the extend amounts, taking into account the number of years for which each extension was done.

If a domain is deleted within the Add Grace Period and the Extend Grace Period, then the registrar is credited the registration and extend amounts, taking into account the number of years for which the registration and extend were done.

Grace Periods Overlap Exception

If a domain is deleted within one or several Transfer Grace Periods, then only the current sponsoring Registrar is credited for the transfer amount. For example if a domain is transferred from Registrar A to Registrar B and then to Registrar C and finally deleted by Registrar C within the Transfer Grace Period of the first and second transfers, then only the last transfer is credited to Registrar C.

If a domain is extended/renewed within the Transfer Grace Period, then the current Registrar's account is charged for the number of years the registration is extended.

Note: If several billable operations, including transfers, are performed on a domain and the domain is deleted within the grace periods of each of those operations, only those operations that were performed after the latest transfer, including the latest transfer, are credited to the current Registrar.   
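The following sketch illustrates that resolution rule under stated assumptions: the operation history is ordered oldest to newest, and only operations at or after the most recent transfer, still within their own grace periods and charged to the deleting Registrar, are credited. The types and function are hypothetical, not the actual billing implementation.

// Hypothetical sketch of the overlap rule above: when a domain is deleted,
// only billable operations performed at or after the most recent transfer,
// and still inside their own grace periods, are credited to the current
// (deleting) registrar.
#include <cstddef>
#include <string>
#include <vector>

enum class OpType { Add, Renew, Transfer };

struct BillableOp {
    OpType      type;
    std::string registrarId;   // registrar charged for the operation
    long        chargedCents;  // amount originally charged
    bool        inGracePeriod; // deletion happened within this op's grace window
};

// 'history' is ordered oldest to newest. Returns the total credit owed to
// 'deletingRegistrar' when the domain is deleted.
long creditOnDelete(const std::vector<BillableOp>& history,
                    const std::string& deletingRegistrar) {
    // Find the most recent transfer; earlier operations are never credited.
    std::size_t start = 0;
    for (std::size_t i = 0; i < history.size(); ++i)
        if (history[i].type == OpType::Transfer) start = i;

    long credit = 0;
    for (std::size_t i = start; i < history.size(); ++i) {
        const BillableOp& op = history[i];
        if (op.inGracePeriod && op.registrarId == deletingRegistrar)
            credit += op.chargedCents;
    }
    return credit;
}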

Transfer Pending Period

The Transfer Pending Period is a specified number of hours following a request from a registrar (“Gaining Registrar”) to transfer a domain, in which the current registrar of the domain (“Losing Registrar”) may explicitly approve or reject the transfer request.

The current value of the Transfer Pending Period is 5 days (120 hours) for all registrars. The transfer will be finalized upon receipt of explicit approval or rejection from the Losing Registrar.

If the Losing Registrar does not explicitly approve or reject the request initiated by the Gaining Registrar, the registry will approve the request automatically after the end of the Transfer Pending Period. During the Transfer Pending Period:

TRANSFER request or RENEW request is denied.

DELETE request is denied

MODIFY request is denied (modification of associated contact handles or Host handles)

Bulk Transfer operations are allowed.

Delete Pending Period

The Delete Pending Period is a specified number of hours following a request to delete a domain, in which the domain is placed in HOLD status without being removed from the Registry database. In this status, the domain name cannot be re-registered.

The current value of the Delete Pending Period for all registrars is 5 days (120 hours).

Registrars may request retraction of a delete request by calling Registry Operator Customer Service staff within the Delete Pending Period. The Registrar will need to provide its security phrase to the Registry Operator staff member before the staff member performs the retraction; this retraction cannot be performed by the Registrar itself. Retraction requests processed during the Delete Pending Period are currently at no cost to the registrar. If, by the end of the Delete Pending Period, no action is taken, the domain will be deleted from the Registry database and returned to the pool of domain names available for registration by any registrar.

During the Delete Pending Period:

Renew requests are denied.

Transfer request is denied.

MODIFY request is denied (modification of associated contact handles or Host handles).

Bulk Transfer operations are allowed.

Object Transfers under EPP

(Upon changes to the EPP standard, the exact procedure described below may change, however according to the same principles.)

The EPP <transfer> command is used to manage changes in Registrar sponsorship of a known object. Registrars may initiate a transfer request, cancel a transfer request, approve a transfer request, and reject a transfer request using the "op" command attribute of the <transfer> command.

The EPP <transfer> command can only be used to explicitly transfer Domain, Email Forwarding and Contact objects. Nameserver (host) objects are implicitly transferred when a domain is transferred between registrars.

Initiation of Transfers

A Registrar that wishes to assume sponsorship of a known object from another registrar uses the <transfer> command with the value of the "op" attribute set to "request". Once a transfer has been requested, the same Registrar may cancel the request using a command with the value of the "op" attribute set to "cancel". A request to cancel the transfer must be sent to the Registry before the current sponsoring Registrar either approves or rejects the transfer request and before the Registry automatically processes the request due to inactivity on the part of the current sponsoring Registrar.

Approval/Rejection

Once a transfer request has been received by the Registry, the Registry will notify the current sponsoring Registrar of the requested transfer. This notification will be put in the message queue of the affected Registrar and be retrieved when the Registrar later uses the EPP <poll> command. The current status of a pending <transfer> command for any object may be found by the losing and gaining registrar using the <transfer> query command.

The current sponsoring Registrar may explicitly approve or reject the transfer request. The Registrar may approve the request using a <transfer> command with the value of the "op" attribute set to "approve". The Registrar may reject the request using a <transfer> command with the value of the "op" attribute set to "reject".

Authorization

Every <transfer> command must include an authorization identifier to confirm transfer authority. This element contains authorization information associated with the object, or alternatively for domain and email forwarding objects, authorization information associated with the registrant or associated contacts as specified in the EPP drafts.

The Authorization identifier information must not be disclosed to any other Registrar or third party. A Registrar that wishes to transfer an object on behalf of a third party must receive authorization identifier information from the third party before a command can be executed.

Automatic Transfer

The Registry will automatically approve all transfer requests that are not explicitly approved or rejected by the current sponsoring Registrar within five calendar days of the transfer request. The losing registrar will be notified of the automatic transfer via email and through the EPP.

Transfer Notification

Transfer notifications will be put in a message queue in the Registry System. These notifications can be retrieved and acknowledged through the EPP <poll> command at any time. Information about the request can also be found using the <transfer> query command.

Protocol Details

When using the EPP <transfer> command for domain objects, the Registrar will specify the fully qualified domain name of the object for which a transfer request is to be created, approved, rejected or cancelled. For email forwarding objects, the fully qualified email address of the object should be specified, and for contact objects the contact ID that serves as a unique identifier should be specified.

For domain objects and email forwarding objects, the Registrar may also provide an element that contains the number of years to be added to the registration period of the object if the transfer is successful. The minimum and maximum allowable values for the extension are one year and ten years, respectively, and the default value is one year. Registry operator policy restricts the maximum outstanding expiration period for domain objects and email forwarding objects to ten years. A transfer with an extension period that exceeds this limit will be rejected. Exception: if adding the minimum allowable extension would extend the registration period past the maximum outstanding expiration period, the transfer will go through, but with no registration extension.

Every EPP <transfer> command issued by a Registrar must contain an "op" attribute that specifically identifies the transfer operation to be performed. Valid values, definitions, and authorizations for all attribute values are defined in the EPP specification.
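As an illustration, the sketch below shows how the "op" attribute values defined for the EPP <transfer> command might be mapped to registry-side actions. The attribute values themselves come from the EPP specification; the surrounding code and names are hypothetical.

// Hypothetical mapping of the EPP <transfer> "op" attribute values to
// registry-side actions. Illustration only, not the actual SRS code.
#include <stdexcept>
#include <string>

enum class TransferOp { Request, Cancel, Approve, Reject, Query };

TransferOp parseTransferOp(const std::string& op) {
    if (op == "request") return TransferOp::Request;  // gaining registrar initiates
    if (op == "cancel")  return TransferOp::Cancel;   // gaining registrar withdraws
    if (op == "approve") return TransferOp::Approve;  // losing registrar accepts
    if (op == "reject")  return TransferOp::Reject;   // losing registrar denies
    if (op == "query")   return TransferOp::Query;    // either party checks status
    throw std::invalid_argument("unknown transfer op: " + op);
}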

Reserved Strings

Global Name Registry would reserve certain strings from registration if this would help to ensure stability of the .org name space and operations. In particular, this may concern multi-lingual registrations (if any) already made in the .org zone by Verisign as a result of MLD trials conducted earlier. Global Name Registry proposes to establish such a list of reserved names in cooperation with ICANN and any other relevant parties.
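Purely as an illustration of how such a list could be enforced at registration time (the actual list and matching rules would be agreed with ICANN), a registration request could be screened as follows; all names are hypothetical:

// Hypothetical screening of a requested second-level label against a
// reserved-strings list. Matching is done case-insensitively.
#include <algorithm>
#include <cctype>
#include <set>
#include <string>

std::string toLower(std::string s) {
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    return s;
}

bool isReserved(const std::string& label, const std::set<std::string>& reserved) {
    // 'label' is the second-level label, e.g. "example" in "example.org".
    return reserved.count(toLower(label)) > 0;
}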

 

C17.3. Database capabilities

Database size, throughput, scalability, procedures for object creation, editing, and deletion, change notifications, registrar transfer procedures, grace period implementation, reporting capabilities, etc.

C17.3. Database capabilities

Overview

The databases

Intercontinental Registry Replication

Messages used for replication

Global Name Registry operates Multiple databases

Some considerations in Moving from a “Thin” SRS Database to a “thick” SRS Database

Reporting (handled by separate database)

Data Validation and consistency checking

Database Structure and Table structure

Scalability of the database

Performance

Database Hardware, Size, Throughput, and Scalability

Hardware platform and specifications

Database design parameters

Load optimization

Database Administration

Database Backup/Restore

Disaster Recovery

Database Security & Access Privileges

Overview

Global Name Registry is running a number of large databases today for .name, and will provide for .org an enterprise-strength, fault-tolerant database system capable of managing large databases and high transaction-processing loads reliably, with scalable growth to accommodate change.

The databases currently in operation by Global Name Registry include the .name DNS (a database of domain names mapping to IP addresses), Whois (a database of contact information, nameserver information and domain name information), .name Mail (a database of email addresses and forwarding addresses to which incoming email is forwarded), and SRS (the authoritative database of domain name registrations on .name).

The database system supports asynchronous replication of data between two SRS data centers, which are geographically dispersed.  The benefit to the Internet community is reliable, stable operations and scalable transaction-processing throughput to accommodate Internet growth. 

Global Name Registry anticipates moving the .org database from its current structure, a so-called “thin” database, where no contact information is stored, to a “thick” database, where contact information is stored with each object. The thick database structure has a number of advantages for the community, as explained in Section C18 to this application. Section C18 details how the transition of the database from thin to thick will happen.

The databases

The SRS built by Global Name Registry runs on a database consisting of three separate database servers in addition to an array of Whois database servers. Splitting the database tasks across multiple servers results in a highly dedicated and optimized system. The servers each perform different tasks, which greatly increases the SRS performance and optimizes data quality and access speed. In particular, all check queries done by the Core SRS will be handled by the scalable array of Whois servers, discussed in the "Database scalability using Whois" section.

Figure 2 Core SRS database distribution at main site

In addition, Global Name Registry operates a second database-set, also consisting of three database servers and a Whois array, running on the Global Name Registry failover site. This failover site, and its role in taking over the SRS operations in case of main site failure, is described in another part of this application.

The databases in operation are Oracle installations running on the AIX operating system on IBM hardware. They have been scaled to achieve consistently high performance during operations and to protect the data quality in the Registry. The selection of hardware and third party software allows for future scaling to protect the initial investment associated with building the Registry.

The Whois array is a GNR-db implementation running on the Linux operating system on IBM hardware. This database is constructed to allow for extreme query performance combined with ease of data propagation and a high degree of reliability. The array will be updated by the Update Handler mechanism described in C17.1 and C17.4.

The database is the primary container of all registered transactions and domain names. A Business Logic layer is responsible for the communication with the database via custom C++ applications and stored programs and procedures, all developed by Global Name Registry.  Recoverability of the database is enhanced through the use of redundant systems and failover databases, which includes Whois databases as well as the Core SRS database.

In designing the database(s), Global Name Registry has put significant effort and consideration into splitting the database and associated logic across several dedicated systems. Among other things, there is a separate database (both in terms of hardware and software) for reporting, a separate database for data validation and consistency checking, and separate servers run different parts of the logic and business rules that access the database. More on this can be explored in Sections C22 and C17.2 (RRP and EPP server descriptions).

The two Global Name Registry sites replicate data continuously to ensure that in disaster scenarios, the database stays intact and correct. In such scenarios, the Core SRS can do a failover to the disaster recovery site and run without loss of consistency and authority.

Intercontinental Registry Replication

The authoritative database will be replicated to our main failover site in Norway using a persistent MQ. The choice of technology is based on the following main goals:

1.     Minimal latency in Core SRS message processing

2.     Reliability in message distribution

3.     Redundancy in failover situations

Using the replicated database, the backup site will itself maintain the other databases needed for a swift failover, running the same software as on the main site.

 

Figure 3 Database distribution and replication main site – backup site

 

Messages used for replication

Archive Log Message

The Archive Log Message will consist of an Oracle Archive Log and corresponding status information including a strictly increasing Archive Log ID, and a reference to the last Update Message ID processed in the current Archive Log batch.

Single Update Message

The Single Update Message will be used to complete the backup database in case of an SRS failover. The message includes the update to be done in the database, and a strictly increasing Update Message ID. The messages will be held internally by the Replicator Software until an Archive Log containing the update is successfully processed by the backup database. To bring the backup database up to the current state, the Replicator Software will make sure that no Single Update Messages are left in the MQ, and then execute all the Single Update Messages received since the last successful Archive Log. This ensures that even in the case of failover, the database will be consistent up to the last second.
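A simplified sketch of this catch-up step, with hypothetical types standing in for the Replicator Software and the backup database, is shown below: every held Single Update Message whose ID is higher than the last ID covered by the applied Archive Log is replayed in order.

// Hypothetical sketch of the failover catch-up step described above.
#include <cstdint>
#include <map>
#include <string>

struct SingleUpdateMessage {
    std::uint64_t updateId;   // strictly increasing Update Message ID
    std::string   statement;  // the update to apply to the backup database
};

// Minimal stand-in for the backup database connection (hypothetical).
struct BackupDb {
    void apply(const std::string& /*statement*/) { /* execute the update */ }
};

// 'held' contains the messages retained by the Replicator Software, keyed and
// ordered by Update Message ID. 'lastIdInArchiveLog' is the ID of the last
// update already covered by the most recent Archive Log.
void replayPendingUpdates(const std::map<std::uint64_t, SingleUpdateMessage>& held,
                          std::uint64_t lastIdInArchiveLog,
                          BackupDb& db) {
    // upper_bound skips every update already contained in the archive log.
    for (auto it = held.upper_bound(lastIdInArchiveLog); it != held.end(); ++it)
        db.apply(it->second.statement);
}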

DB Validation Message

To validate the consistency of the backup database against the main authoritative database, the Core SRS system will initiate a DB Validation using a DB Validation Message during a regularly scheduled maintenance window. The message itself contains an Archive Log ID and MD5 sums for all tables included in the database. The process of validation will include the following steps on the main site:

1.     Stop updates

2.     Send Archive Logs to backup site

3.     Dump all tables to clear text

4.     Restart updates

5.     Generate MD5 sums, and send DB Validation Message to backup site

Upon receiving a DB Validation Message, the Replication Software will ensure that the backup database is in the state corresponding to the Archive Log ID supplied, dump the tables to clear text, and check that the MD5 sums match the expected result.
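The comparison on the backup site could be sketched as follows; the types and names are hypothetical, and the MD5 sums are assumed to have been computed over the clear-text table dumps on each site:

// Hypothetical sketch of the validation step on the backup site: the MD5
// sums received in the DB Validation Message are compared, table by table,
// against sums computed locally from the dumped clear-text tables.
#include <map>
#include <string>
#include <vector>

struct DbValidationMessage {
    long archiveLogId;                           // state the backup must be in
    std::map<std::string, std::string> tableMd5; // table name -> expected MD5 sum
};

// 'localMd5' holds the sums computed on the backup site after dumping the
// tables to clear text. Returns the names of any tables that do not match.
std::vector<std::string> findMismatches(const DbValidationMessage& msg,
                                        const std::map<std::string, std::string>& localMd5) {
    std::vector<std::string> bad;
    for (const auto& expected : msg.tableMd5) {
        auto it = localMd5.find(expected.first);
        if (it == localMd5.end() || it->second != expected.second)
            bad.push_back(expected.first);
    }
    return bad;
}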

Global Name Registry operates Multiple databases

Multiple databases are in operation by Global Name Registry at all times:

1.     SRS Database —This database’s primary function is to provide highly reliable persistent storage for all of the registry information required to provide domain-registration services.  The SRS database is highly secured, with access limited to authenticated registrars, trusted application-server processes, and the registry’s database administrators. 

 

2.     Billing Database —This database ensures that billing is handled for all Registrars, and is an integral part of the SRS database. However, this database communicates with Global Name Registry's financial systems operated by the Financial Controllers, rather than with the EPP server (as the SRS database does). Registrars can download and view their billing data through a secure web site at all times.

 

3.       Whois Database —The Whois database is a searchable database that any Internet user can access to view details of the domain name stored in the SRS. The Whois database maintains data about registrars, domain names, nameservers, IP addresses, and the associated TLD contacts. The Whois database is continuously updated from the SRS database through a data quality assurance and replication process. In addition, there are feedback loops that continuously monitor the quality of the Whois data.

In addition to these databases, Global Name Registry maintains various internal databases to support various operations, e.g., authorizing login user IDs and passwords, authorizing telephone conversations with pass phrases and passwords, authenticating encryption keys, maintaining access-control lists, content databases for e.g. www operations, MX databases for email provisioning, etc.

The Whois and DNS database systems are handled in other chapters of this application. Therefore, in the following, we have chosen to focus on the SRS database system, which acts as the authoritative repository of all .org domain names and associated objects. This will sometimes be called the “SRS database”.

Some considerations in Moving from a “Thin” SRS Database to a “thick” SRS Database

Global Name Registry anticipates moving the .org database from a “thin” database, where no contact objects are stored in the Registry, but rather held by the Registrar(s) for each of their domain objects, to a “thick” database, similar to the .name database, where each domain name is associated with the relevant contact objects also held in the database. The benefit of a thick database is that Whois can be centralized by the Registry, allowing faster, easier and much more consistent access to Whois information.

The transition from “thin” to “thick” will be a gradual process. At first, all objects transferred from Verisign will exist in the Global Name Registry database without contact information. When Registrars start using the EPP protocol for accessing the Registry, they will be able to add contact information to the domains. When all Registrars have converted to the EPP protocol, all new registrations must contain contact information. To migrate existing objects, there will also be a requirement that all domains to be renewed must contain contact information. Realizing that an abrupt switch from “thin” to “thick” is not possible, the next best thing is a time-limited, one-way road towards a completely “thick” registry.

In the following, the database aspects related to continuing operations on the existing (“thin”) .org data structure are described. Until the transition is completed, Global Name Registry will run the .org database as it is run by Verisign today – without extended and centralized information. However, some of the sections below discuss considerations that have to be made for the database when moving from “thin” to “thick”. It should be noted that Global Name Registry has extensive experience in building and operating a thick database, since the .name top-level domain is most likely the “thickest” and most comprehensive TLD database in existence today, with a multitude of interrelated services (e.g. domain names, email and Defensive Registrations) and associated contacts and statuses.

Reporting (handled by separate database)

The reporting will be handled by a separate database, as shown in Figure 1. This will be an aggregated database, where all information needed by the reporting software will be easily accessible. The Report Data Aggregation Process is run once per day, and extracts all transactions since the previous run. The Reporting Database will also keep a history of Registrar balances for easy extraction.

The separate reporting database will allow for online reporting capabilities without straining the authoritative database with massive queries.

Data Validation and consistency checking

The Global Name Registry system for data validation and consistency checking is thoroughly described in Section C17.1

Database Structure and Table structure

All data on .org domain names, hosts, audit tables, account data, domain history, etc., is stored in the database. This authoritative data is used for most or all of the previously described operations and rules in the SRS, and some of it is replicated to the external services, like Whois, DNS, and zone file access.

The following is a proposed structure for how the .org data may be organized internally in the database:

 

Figure 4: Illustration of data model

 

Additionally, when “thick”, the database will store information on the registrant. This will include name, address, telephone and email. Other details pertaining to the registrant’s financial status with the Registrar are not to be stored by the Registry Operator, thus allowing the Registrar-to-registrant relationship to remain private. The Registry Operator will additionally store similar data, where applicable, for the administrative, technical, and billing contacts.

The audit table will include all changes made to the objects in the database, and the amount of data held in the table will constantly increase. To keep the database at a manageable size, this table will be purged once a month to retain only the last two years of history. All older records will be written to a flat file archive outside the database. They can, if necessary, be retrieved through the Customer Services operational interface.
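A sketch of such a purge, under the assumption of a simple archive-then-delete ordering and with a hypothetical stand-in for the database layer, could look like this:

// Hypothetical sketch of the monthly audit purge: rows older than two years
// are exported to a flat-file archive and only then removed from the table.
#include <fstream>
#include <string>
#include <vector>

struct AuditRow {
    std::string objectId;
    std::string change;
    std::string timestamp;  // ISO 8601, e.g. "2002-06-18T12:00:00Z"
};

// Stand-in for the SRS database layer (illustrative only).
struct AuditTable {
    std::vector<AuditRow> selectOlderThan(const std::string& /*cutoff*/) { return {}; }
    void deleteOlderThan(const std::string& /*cutoff*/) {}
};

void purgeAuditHistory(AuditTable& audit, const std::string& cutoff,
                       const std::string& archivePath) {
    // 1. Write all rows older than the cutoff to the flat-file archive.
    std::ofstream archive(archivePath, std::ios::app);
    for (const AuditRow& row : audit.selectOlderThan(cutoff))
        archive << row.timestamp << '\t' << row.objectId << '\t' << row.change << '\n';
    archive.flush();

    // 2. Only after the archive is safely written, remove the rows.
    audit.deleteOlderThan(cutoff);
}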

Scalability of the database

In planning for growth, Global Name Registry operates a database system with the ability to add resources on an as-needed basis without interrupting processing. The database platform is extensible in several ways, including the following:

·         Size of storage and backup – the physical size of the database continuously increases and therefore the need for disk storage (and backup) increases. The database servers have no internal storage and are connected to the external ESS. The ESS (Enterprise Storage System) from IBM is a fully redundant, internally mirrored, RAID’ed storage solution which currently has 1.5 TB of storage space, and can easily be expanded to 22 TB. The backup solutions have to follow the storage space, and the Global Name Registry backup robots back up the database while it is hot and complete a daily backup cycle onto tape, which is transported to an offsite secure location every week.

·         Memory – as the volume increases, so does the need for increased buffer and lock-pool storage. The database platform has 8 GB of internal RAM, and scales to 16 GB.

·         CPUs – additional CPUs can be added as appropriate. The current hardware runs on four-way processors, and can scale up to 32 processors if system CPU load becomes an issue.

·         Adding database servers – Global Name Registry has the option to add dedicated database servers when needed to improve performance. The separation between the data validation database, the reporting database and the main/authoritative database allows Global Name Registry to scale either of the two former databases by adding servers to take load off the main database. For example, information queries (that do not change the content of the database) can be sent exclusively to a set of reporting databases that are constantly updated from the main database, thus offloading the check volumes from the main database. See also the separate "Database scalability using Whois" section.

·         Adding logic processing units – given the separation Global Name Registry has made between the various parts of the logic that process Registrar queries, Global Name Registry can move additional load onto the linearly scalable front ends and business logic processors to alleviate the load on the authoritative database. This comes in addition to adding database servers for specific purposes like reporting.

Database scalability using Whois

To ease the stress on the authoritative database and allow for unlimited horizontal scalability of check queries, we would use an array of Whois servers to answer any check query done in the Core SRS. This will enable the Global Name Registry .org SRS to handle large amounts of checks and failed adds, which constitute most of the queries from the Registrars. In particular, during an "add storm" for a popular domain, this scheme will guarantee a responsive system throughout the ordeal.

The Whois array will be updated as part of the standard Update Handler procedure described in Section C17.1, and will usually be updated within seconds of changes to the main database, and at most within 15 minutes. This array of servers will not be in public Whois production, but will be used exclusively by the Core SRS to relieve the authoritative database.

 

Figure 5 Core SRS – Whois communication

The possible problems of using a non-authorative data source as the preferred data source for checks in the .org namespace should be investigated. The two conditions that need attention are the following:

1.     Object present in the authoritative database, but not in Whois

2.     Object not present in the authoritative database, but still present in Whois

1. Object present in the authoritative database, but not in Whois

Check: The Core SRS will report the domain as available. A simple check can never guarantee the availability of a domain, but it is a good indication. In this case, the only operations not reflected in the indication are those made since the last update, most likely only seconds ago.

Add: The add command will always do an additional check against the authoritative database before a transaction is initiated. The overhead of a simple check is far less than that of a failed add transaction. During an "add storm" we want to keep the number of transactions started for a given domain to an absolute minimum.

Should the number of double checks turn out to be a performance problem, it can be reduced by giving the update messages from add commands priority through the Update Handler system.

2. Object not present in the authoritative database, but still present in Whois

Check: The result from the Whois array will be considered authoritative, and the Core SRS reports the name as not available.

Add: The result from the Whois array will be considered authoritative, and the Core SRS reports the name as not available.

Fairness: Since all Registrars share the same pipe to the Whois array, no Registrar will receive any systematic advantage from a more up-to-date Whois server.
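The check/add behaviour described above can be illustrated with the following sketch, which uses simple in-memory stand-ins for the Whois array and the authoritative database; all names are illustrative and this is not the actual SRS code.

# Illustrative, in-memory stand-ins; in the real system these are network lookups.
whois_array = {"example.org"}          # near-real-time copy, may lag by seconds
authoritative_db = {"example.org"}     # the authoritative Registry database

def check_domain(fqdn):
    """CHECK is answered from the Whois array only; a hit there is treated as
    authoritative and reported as not available (condition 2 above)."""
    return "not available" if fqdn in whois_array else "available"

def add_domain(fqdn):
    """ADD re-verifies against the authoritative database before a transaction
    is started, so a stale 'available' answer (condition 1 above) cannot lead
    to a duplicate registration."""
    if fqdn in whois_array or fqdn in authoritative_db:
        return "denied: object exists"
    authoritative_db.add(fqdn)   # stand-in for starting the add transaction
    whois_array.add(fqdn)        # propagated via the Update Handler shortly after
    return "registered"

print(check_domain("example.org"))   # not available
print(add_domain("new-name.org"))    # registered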

Performance

The database and SRS system is designed to provide the following functions:

1.     Persistence—storage and random retrieval of data

2.     Concurrency—ability to support multiple users simultaneously

3.     Distribution (data replication)—maintenance of relationships across multiple databases

4.     Integrity—methods to ensure data is not lost or corrupted (e.g., automatic two-phase commit, physical and logical log files, roll-forward recovery)

5.     Availability—support for 24 x 7 x 365 operations (requires redundancy, fault tolerance, on-line maintenance, etc.)

6.     Scalability—unimpaired performance as the number of users, workload volume, or database size increases

The system is designed for high performance and scalability. Global Name Registry has plans in place for scaling beyond tens of millions of registered names. Scalability is handled in more detail in Section C17.10.

Database Hardware, Size, Throughput, and Scalability

The following table lists design parameters for the initial design of the three major databases. The parameters are based on projected volumes in the first two (2) years. The scalability term in the table refers to the database’s ultimate capacity expressed as a multiple of the initial design capacity in terms of size and transaction processing power.

Hardware platform and specifications

For the SRS database platform, Global Name Registry uses the following proven hardware platform (the same platform is currently operated by Global Name Registry):

Database design parameters


SRS Database

Hardware:

·         IBM B80 PowerPC 64-bit high-end transaction server

·         4-processor RISC CPU at 450 MHz

·         64-bit architecture

·         Fitted with 8 Gb of memory, extensible up to 16 Gb

·         Connected to an Enterprise Storage Solution (ESS) from IBM, with 1.5 Tb of hot-backed-up RAID1 storage

·         Triple, redundant hot-swappable power supplies

·         Dual-attach 1000 BaseTX/FX Ethernet Adapter

·         Event-management software for remote management

Software:

·         Oracle 8i

·         IBM AIX operating system

Capacity of Domain registrations: 20 million

Database throughput: 1500 transactions per second

Storage available: up to total ESS volume of 22 Tb

Database scalability strategy:

·         Scales from 8 (current) to 32 processors

·         Scales from 8 (current) to 16 Gb memory

·         Clustering using Oracle clustering technology

 

 


Reporting database

Hardware:

·         IBM B80 PowerPC 64-bit high-end transaction server

·         2-processor RISC CPU at 450 MHz

·         64-bit architecture

·         Fitted with 1 Gb of memory, extensible up to 16 Gb

·         Connected to an Enterprise Storage Solution (ESS) from IBM, with 1.5 Tb of hot-backed-up RAID1 storage

·         Triple, redundant hot-swappable power supplies

·         Dual-attach 1000 BaseTX/FX Ethernet Adapter

·         Event-management software for remote management

Software:

·         Oracle 8i

·         IBM AIX operating system

Capacity of Domain registrations: 20 million

Database throughput: 1000 transactions per second

Storage available: up to total ESS volume of 22 Tb

Database scalability strategy:

·         Scales from 2 (current) to 32 processors

·         Scales from 1 (current) to 16 Gb memory

·         Clustering using Oracle clustering technology

 


QA database

Hardware:

·         IBM B80 PowerPC 64-bit high-end transaction server

·         2-processor RISC CPU at 450 MHz

·         64-bit architecture

·         Fitted with 1 Gb of memory, extensible up to 16 Gb

·         Dual 37 Gb internal SCSI RAID-controlled hard drives

·         Triple, redundant hot-swappable power supplies

·         Dual-attach 1000 BaseTX/FX Ethernet Adapter

·         Event-management software for remote management

Software:

·         Oracle 8i

·         IBM AIX operating system

Capacity of Domain registrations: 20 million

Database throughput: 1000 transactions per second

Storage available: up to total ESS volume of 22 Tb

Database scalability strategy:

·         Scales from 2 (current) to 32 processors

·         Scales from 1 (current) to 16 Gb memory

·         Clustering using Oracle clustering technology

 


Whois database

Hardware:

·         IBM x330 server

·         Intel Pentium III dual CPU at 1 GHz

·         Fitted with 1 Gb of memory, extensible up to 8 Gb

·         Connected to an Enterprise Storage Solution (ESS) from IBM, with 1.5 Tb of hot-backed-up RAID1 storage

·         Double, redundant hot-swappable power supplies

·         Dual-attach 1000 BaseTX/FX Ethernet Adapter

Software:

·         Global Name Registry Whois DB

·         Linux operating system

Capacity of Domain registrations: 20 million

Database throughput: average of 110 reads per second per server (a total of 330 reads/second at the UK main site); average of 100 inserts per second

Storage available: up to total ESS volume of 22 Tb

Database scalability strategy:

·         Scales from 2 (current) to 32 processors

·         Scales from 1 (current) to 8 Gb memory

·         Can add a virtually unlimited number of servers in load balancing to increase read capacity

·         Can use solid-state storage instead of hard drives to increase insert transaction capacity

Load optimization

The SRS database is optimized to handle large quantities of data and support both a thin and a thick data model. Global Name Registry has extensive experience with the thick data model from running the .name Registry. The multitude of objects in .name (including the .name Email and the Defensive Registrations) makes the .name Registry a complex structure which has to support a number of applications and operations.

Further, the database is optimized for a high number of users and transactions. In designing the database, Global Name Registry has taken into consideration that the volume of SRS transactions contains a high ratio of Reads versus Writes (“CHECK” vs “REGISTER”). The database is designed to give the fastest possible response for all operations and to scale well for future applications and volumes.

Database Administration

Global Name Registry  personnel who administer and maintain the database will perform their tasks at times and intervals scheduled to ensure maximum system availability. Typical database-administration tasks include the following:

·         Monitoring and tuning

·         Dumping of audit tables to disk, and storing

·         Starting and stopping

·         Backing up and recovering

·         Adding additional data volumes

·         Defining clustering strategies

·         Reorganizing

·         Adding and removing indexes

·         Evolving the schema

·         Granting access

·         Browsing and querying

·         Configuring fault tolerance

Database Backup/Restore

The backup of the database is assured both by the mirroring of the main database to the QA database, and also by storing backup images of the database both on disk (in the ESS) and on tape. Please see Section C17.7 for more detail on how the database is backed up.

Disaster Recovery

The main database replicates asynchronously to the QA database. Additionally, the database in the Disaster Recovery Site is replicated from the main site for the unlikely event of a catastrophe that forces a failover. This procedure is described in more detail in Sections C17.14, C17.15 and C17.16.

Database Security & Access Privileges

Security is described in more detail in other parts of this application. A brief summary of the database security measures includes:

·         Database servers are physically protected by locks, cages and hosting suites in the hosting center.

·         Only database administrators have privileges on the database.

·         Global Name Registry does logging/auditing/monitoring of all database access to ensure there is no unauthorized access.

·         Registrar access to the database is only via the protocol interface.

·         Global Name Registry has routine auditing/monitoring features to ensure there is no unauthorized activity, and will periodically review security features to ensure that the system is functioning as needed

C17.4 Zone file generation

Procedures for changes, editing by registrars, updates. Address frequency, security, process, interface, user authentication, logging, data back-up.

C17.4 Zone file generation. 38

The update process and zone file generation. 38

Frequency. 40

Security and authentication. 42

Backup and recovery. 42

Zone file access. 42

For this section, refer also to C17.5, as the generation of zone files and their distribution are closely linked due to the real-time characteristics of the zone file update process. For an overview of the end-to-end process of zone file update, see Section C17.1.

The traditional way of propagating changes in the Authoritative Registry Database to the Master DNS servers, where master zone files are completely regenerated at periodic intervals and then distributed to the resolving servers by means of, for example, FTP or other transport technologies, implies in most cases a delay of up to 24 hours before changes in the SRS are reflected in the resolving services.

To avoid this delay, and to assure a faster and more efficient propagation of changes to the resolving services, Global Name Registry has designed and currently operates for .name a system that solves the problem of zone file generation and zone file distribution (treated in the succeeding chapter C17.5) in a slightly different way than the traditional one. Global Name Registry therefore rarely uses the concept of zone file generation in isolation, but rather considers the fuller process associated with real-time object updates.

This Section C17.4 and the following C17.5 present a focused view of the DNS as described in chapter C17.1, concentrating on what can be called respectively the zone file generation part and the zone file distribution part of the specified system. Note that figures in this Section are occasionally split in two, with the remaining part appearing in C17.5.

The update process and zone file generation

Since the zone file is never completely regenerated, except in cases of major inconsistencies or disaster recovery where it is rebuilt from the information contained in the Authoritative Registry Database, we here define the zone file generation process as the process initiated by the Update Message Generator in the Core SRS (see Section C17.1, under Core SRS) and ending with the update of the zone file held in the Master Server’s memory.

From a deployment view the involved parts of the DNS system can be illustrated as in the figure on the following page (non-relevant system components are shaded):

Figure 1: Zone file generation - a deployment view

For details concerning hardware specifications see chapter C17.1, section 2.4 – The DNS Package.

Frequency

As the system is designed to provide near real-time reflection in the Resolving Services of changes made to the Authoritative Registry Database, the actual zone file update frequency depends on the frequency of changes made in the Authoritative Registry Database.

For the purpose of the zone file residing on the Master DNS, this is more a matter of latency than of frequency, as the zone file is continuously updated whenever changes occur. The latency aspect is described in more detail in Chapter C17.1 section 1.2.2 - Real time and asynchronous communication between Registry Systems.

In cases of heavy system load, some latency may be caused by the accumulation of MQ Messages in the different queues that a message may pass through on its way to the DNS Update Server. However, under normal operational conditions there should only be a minor delay caused by the processing time associated with the different steps a message passes through. The update latency under normal operating conditions would in most cases be measured in seconds. See chapter C17.10 for more details about congestion control.

The process of updating the DNS Masters zone file is illustrated in the figure on the following page (non-relevant processing steps are shaded):

Figure 2: Zone file generation - a process view

This is a stepwise explanation of the zone file update process run by the DNS Master server as illustrated above (an illustrative sketch follows the list):

1.     The update process MQ2DNS sends out dynamic DNS update messages to the DNS Master server according to RFC2136.

2.     The Master BIND9 receives dynamic updates on port 53 as a series of DNS "update" requests, and sends out reply messages acknowledging them or reporting errors. Multiple updates may be sent in one message. Each update message (potentially containing one or more updates) causes BIND9 to update the zone file in memory, making the requested changes and increasing the serial number of the SOA record by one; the journal file is also updated to reflect the changes between the previous zone and the new one.

3.     The zone is then updated/distributed as described in C17.5.
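The MQ2DNS process itself is part of the Registry software and is not reproduced here; the following is only a minimal sketch of what an RFC2136 dynamic update signed with a TSIG key looks like, written with the dnspython library. The key name, key secret and master address are placeholders.

import dns.query
import dns.rcode
import dns.tsigkeyring
import dns.update

# Placeholder TSIG key and stealth master address (illustrative values only).
keyring = dns.tsigkeyring.from_text({"update-key.": "c2VjcmV0LXNlY3JldC1zZWNyZXQ="})

# Build an RFC2136 update for the org zone: add the delegation for one domain.
update = dns.update.Update("org", keyring=keyring, keyname="update-key.")
update.add("example", 86400, "NS", "ns1.example.net.")
update.add("example", 86400, "NS", "ns2.example.net.")

# Send the signed update to the master on port 53 and inspect the response code.
response = dns.query.tcp(update, "192.0.2.1", timeout=10)
print(dns.rcode.to_text(response.rcode()))   # 'NOERROR' on success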

Security and authentication

As illustrated in the preceding figure, all communication between the main site, where the Update Handler resides, and the regional sites, where the DNS system (Local Update Server and DNS Master Server) resides, runs over a secure TCP/IP connection (VPN), thus preventing tampering with the information.

Point-to-point authentication and integrity checking according to RFC2845, using shared secret keys to establish a trust relationship between two entities, is used to safeguard the communication between the MQ2DNS process and the Bind9 process, both running on the DNS Master Server. This prevents any intruder from sending fake DNS update messages.

A critically important component assuring the validity and consistency of the zone file information is the Automated Consistency Validation system, described in detail in chapter C17.1, section 2.6. This system’s main task is to assure consistency between the information stored in the Authoritative Registry Database and the information available through the Resolving Services, including the DNS service.

Backup and recovery

The DNS zone file is not backed up. Rather, all the information contained in the zone file is backed up regularly, and any DNS server zone file can be restored at any time. If a DNS server crashes or otherwise becomes corrupt and needs to be restored, a full reload is initiated from the Master DNS server. The Master DNS server can in turn be fully reloaded from the database, which is carefully backed up regularly as described in C17.7.

Since a local zone file backup would always be out-of-date, not backing up the zone files leads to far higher consistency.

See chapter C17.7, C17.9 and C17.15 for more on system backup and recovery.

Zone file access

There are three ways of “accessing the zone file”:

1)     Registrars can modify the contents of the zone file by entering updates through the protocol interface to the SRS, or

2)     any entity can download a copy of the zonefile from the Global Name Registry Zone File Access FTP server, which is updated every 12 hours, or

3)     ICANN or IANA can transfer the zonefile from one of the DNS servers to a specified IP address at any point in time.

There is no other way of accessing the zone file as the zone file is the prime responsibility of the Registry and it is therefore strictly controlled and protected.

 

C17.5 Zone file distribution and publication

Locations of name servers, procedures for and means of distributing zone files to them.

C17.5 Zone file distribution and publication. 44

Locations of DNS servers. 44

Distribution of Zone File. 46

Frequency. 49

Security and authentication. 49

For this section, please refer also to C17.4, as the distribution of zone files and their generation are closely linked due to the real-time characteristics of the zone file update process. For an overview of the end-to-end process of zone file update, see Section C17.1.

This chapter presents a focused view of the DNS package described in chapter C17.1, concentrating on what can be called the zone file distribution part of the specified system.

Locations of DNS servers

Global Name Registry will operate five regional DNS server sites for .org distributed throughout the world:

·         United Kingdom (UK), this is the main site

·         Norway (NO), this is the disaster recovery site

·         Hong Kong (HK)

·         USA – east coast (US-e)

·         USA – west coast (US-w)

 

New locations may be added as load grows.

Each site consists of a Local Update Handler, responsible for the distribution of MQ Messages to the local Resolving Services, a Master DNS Server which acts as a stealth primary DNS server, and a cluster of Slave DNS servers responsible for providing the publicly accessible DNS resolving service. The deployment diagram on the following page gives an overview of the servers residing at each site (system components not directly relevant to the zone file distribution process are shaded):

Figure 6: Zone file distribution - a deployment view

For details concerning hardware specifications see chapter C17.1, section 2.4 – The DNS Package.

Each of the regional sites is identically configured, containing a cluster of DNS servers running standard Bind9 software, answering query requests on port 53 behind a set of redundant firewalls and load balancers. In addition there is a Master DNS Server, responsible for the propagation of zone file changes to each of the Slave DNS Servers.

Distribution of Zone File

The DNS servers will all be continuously updated from the Master DNS Server, which runs as a stealth primary DNS. We use BIND9, developed by ISC, which supports the DNS extensions necessary to allow Incremental Zone Transfers (IXFR). The sole mission of the Master DNS Server is to notify the Slave DNS Servers of zone file updates, and to provide the updated zone file information when the Slave DNS Servers request it.

The figure on the following page describes the distribution process from the Registry System to the DNS Slave Servers (non-relevant processing steps are shaded):

Figure 7: Real time zone file distribution - a process view

What happens in the DNS Master to Slave Server zone file distribution process as illustrated above is:

Upon having updated its local zone file and journal, the Master DNS Server (master) sends out a notify message, according to RFC1996, which instructs the Slave DNS Servers (slave) to initiate an immediate refresh of the zone.

The slave contacts the master server to get the current SOA record. If the serial number of the returned record is greater than that held on the slave, the slave would send out an IXFR request (as per RFC1995) to the master, with the serial number of its zone file in the request.

The master takes the two serial numbers and consults the journal to determine the necessary changes - if it doesn't have enough information, it will send a full zone transfer. If it has history information for all changes between the two serials, it can send an IXFR consisting of only the changed records.

Each slave, once it has performed a zone transfer, will in turn send out a notify, ensuring that if a packet is lost, multiple backup lines of communication will handle it.

In addition to this process triggered by the DNS Notify message, the "refresh" value defined by the SOA record defines a timer, and when this timer expires the same zone file distribution process is triggered. This timer mechanism prevents any two servers from being more than two hours, or whatever interval has been set, out of sync at any time. A typical SOA might look like this:

name  SOA   ns1.nic.name. hostmaster.nic.name. (

                  200206010   ; Serial number

                  7200        ; Refresh

                  3600        ; Retry

                  86400       ; Expire

                  300         ; Negative cache

            )

 

This negative cache value ensures that the near real-time updates will be found promptly by the resolving client.

Figure 8: Periodical zone file distribution - a process view

Frequency

The distribution of zone file updates is a continuous process triggered by the master’s sending of DNS Notify messages. Under normal conditions, when any slave DNS server receives a DNS Notify message from the Master, it immediately triggers an update which synchronizes the slave with the Master.
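As an illustration of how this propagation could be observed from the outside, the sketch below compares the SOA serial held by each slave with the master’s, using the dnspython library; the server addresses are placeholders and the script is not part of the described system.

import dns.message
import dns.query

def soa_serial(server_ip, zone="org"):
    """Query one server directly for the zone's SOA record and return its serial."""
    query = dns.message.make_query(zone, "SOA")
    response = dns.query.udp(query, server_ip, timeout=5)
    return response.answer[0][0].serial

master = "192.0.2.1"                      # placeholder: stealth primary
slaves = ["192.0.2.10", "192.0.2.11"]     # placeholder: slave DNS servers

master_serial = soa_serial(master)
for slave in slaves:
    lag = master_serial - soa_serial(slave)
    print(slave, "in sync" if lag == 0 else "%d update(s) behind" % lag)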

Security and authentication

As is the case for what has been called the zone file generation process in the previous chapter, the distribution process is also secured by a number of means. The public query interface provided by the cluster of Slave DNS Servers is available only through a firewall, and any attempts at intrusion or blocking through (distributed) Denial of Service attacks or other commonly known hacking patterns will be detected by intrusion detection systems, which will take action. For a fuller description of the security precautions taken, see chapter C17.9.

The Master DNS Server and the Slave DNS Servers use TSIG verification for the IXFR to ensure appropriate security and consistency. In particular, the signature with TSIG guarantees that no tampering has taken place during transmission. This is the same point-to-point authentication and integrity checking mechanism that is used for the MQ2DNS to Master DNS Server communication.

This prevents any process other than the Master DNS Server from updating the Slave DNS Servers.

C17.6. Billing and collection systems.

This section details Global Name Registry’s billing capabilities both in terms of processes and accounting, and also further describes the Billing systems briefly presented in C17.1.

C17.6. Billing and collection systems. 50

Global Name Registry billing and VAT Classification. 50

Overview of billing systems. 52

Mapping components to hardware – Deployment view of the Billing system. 54

The billing process – a process view of the Billing system. 55

Accessibility and Reporting. 58

Increasing credits and adding accounts. 58

Reporting to Registrars. 58

System security. 58

Basic Billing rules in the System. 58

Deferred revenue calculation. 59

VAT calculation and VAT information. 60

Global Name Registry billing and VAT Classification

When Global Name Registry launched the gTLD .name Registry in January 2002, it was the only company in the UK providing such a supply. The gTLD operation was not an established category with UK Customs, and as part of preparing its billing systems and processes, Global Name Registry desired to clarify the tax and VAT implications of its supply in advance, to ensure correct, consistent and legal billing of its operations, including billing to Registrars.

With the assistance of its accounting firm, PricewaterhouseCoopers, Global Name Registry obtained advice on the proper classification from UK Customs and is now adequately classified for billing of gTLD operations according to UK VAT rules.

The letter on the following page is from UK Customs and demonstrates the advice on the Global Name Registry billing classification:

Overview of billing systems

The figure below illustrates the Billing system at a high-level view.


Figure 9: High-level system view for Billing

The Financial Controller is an employee of Global Name Registry who administers Registrars’ accounts and the billing of Registrars. He/she has an interface to the Authoritative Registry Database where handling and editing of Registrars’ accounts can be done. This interface is offered by the Registrar Account Administration. The Financial Controller updates the balance of a Registrar’s account in the Registry System through this interface each time the Registry’s bank notifies him/her that funds have been received.

The Core SRS is an essential part of the Billing system because the Core SRS holds the accounts and the Reporting Database, from which billing information is polled and against which debits of accounts are processed in the case of a billable operation.

The Registrar Account Info enables the Registrar to access billing information and account status through a secure WWW interface and request the information needed, or to download a file from an FTP server at the UK main site. The support team, key account manager and accountants are also available as telephone support for this purpose, should extra information be needed.

The Financial System is an off-the-shelf application, SAGE 100 from SAGE Ltd. Among other things, SAGE fetches statements and data from the bank.

Each Registrar has a credit limit of $1000. This allows the Registrar balance to go negative, until a negative balance of $1000 is reached, after which further billable operations will be denied. This credit has been granted by Global Name Registry to each Registrar to buffer against any unexpected fluctuations that otherwise would restrict the Registrar’s operations.

The billing at the end of the month happens in the following way:

·         An accountant generates a list of all Registrars, with the number of domain names registered during the corresponding month. For each Registrar, a list of every domain registered for this billing period will be included.

·         Summary data goes into the financial system, SAGE 100, and invoices are generated, printed out and sent by email.

The security of the system is assured by its manual nature. As long as the number of Registrars can feasibly be handled with automated generation of invoices and report generation from the database, the process can be handled by an accountant. Additional recruiting of accountants allows the process to scale to a higher number of Registrars. The Financial Controller will implement such controls as are necessary to ensure that no errors are made.

Mapping components to hardware – Deployment view of the Billing system

The figure below presents the components in the Billing System.

Figure 10: Deployment view of the Billing package

The Reporting Database in the Core SRS offers an interface to the Account Info Generator, which polls account and billing information from the Reporting Database. The Account Info Generator generates reports for the Financial Controller, as well as files with the Registrars’ account status and billing information. These files are pushed to an FTP server through the FTP Server IF, and to a WWW server through the WWW Server IF. Registrars can then access account information by downloading the file from the FTP server, or by using a web browser.

The communication between the Billing system and the Reporting Database in Core SRS runs over the secure internal Global Name Registry network. The Registrar inquiry goes via the WWW-interface, or the FTP server. The WWW session can be encrypted with SSL, at the Registrar’s option. The FTP file retrieved can be encrypted with the Registrar’s PGP key, at the Registrar’s option.

The billing process – a process view of the Billing system

The two figures on the following pages show the billing process in relation to a billable operation.

Figure 11: The Billing process in accomplishment of a billable operation

Figure 12: The Billing, debit process

The figure below illustrates the process of crediting an account.

Figure 13: The Billing, credit process

Accessibility and Reporting

Increasing credits and adding accounts

The Financial Controller has an interface to the Authoritative Registry Database, where he/she can manually modify Registrars’ accounts if needed. Adding and removing accounts has to be done manually, as does modifying account information.

Reporting to Registrars

Registrars can access information about their Registry account in two ways: downloading account information from an FTP server, or using a WWW interface where balance, transactions and statements are available.

System security

The only interface for changing the SRS credit information is held by the Financial Controller. The credit information can only be updated from one particular location in the Global Name Registry offices on a secure network. This secures the SRS credit information and ensures that no unauthorized changes to Registrar balances can be made.

The bank interface to SAGE is secure and encrypted with software and processes from Barclays Bank plc.

The Registrar WWW interface runs over SSL and is therefore reasonably secure. Balances cannot be updated through the WWW interface, which only offers reading of account information and transactions. Should a Registrar’s password to the WWW interface be compromised, it can be promptly changed by the Global Name Registry Customer Support Team.

Employees are trained and followed up on security compliance. This is described in more detail in Section C17.9

Basic Billing rules in the System

Global Name Registry’s current billing system operates accounts for the Registrars as follows (a brief illustrative sketch follows the list):

·         Registrars’ accounts start at 0, but all Registrars have a credit limit, and all Registrars have the same credit limit. This credit limit is currently 1000 USD, which allows Registrars to overdraw their accounts by up to 1000 USD. Global Name Registry grants this credit to Registrars in an attempt to reduce failed billable operations resulting from problems with fund transfers etc., but does not encourage Registrars to use it intentionally.

·         For the purpose of .org operations, billable operations are described in Sections C25-C27.

·         Should the account balance reach the credit limit, further billable operations will be denied.

·         Any billable operation will reduce the balance.

·         Any refund will increase the balance.

·         Manual adjustments can be made by Global Name Registry Financial Controller(s) to increase the balance whenever a Registrar pays Global Name Registry, or to reduce the balance whenever a Registrar requests money back. Manual adjustments can also be applied for other reasons.

·         Manual adjustments generate entries for all relevant audit tables in the Registry.

·         All manual adjustments must include a field for free text description to describe the reason for the manual adjustment. This aids tracking and auditing of transactions.

·         Grace periods are implemented as described in Section C17.3
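As a rough illustration of the rules above, the following sketch models the balance and credit-limit check for billable operations; the account structure, amounts and names are illustrative only and do not reflect the actual SRS billing schema.

from dataclasses import dataclass

CREDIT_LIMIT = 1000.00   # USD, the same for all Registrars

@dataclass
class RegistrarAccount:
    registrar_id: str
    balance: float = 0.0   # accounts start at 0

    def debit(self, amount):
        """A billable operation: denied if it would take the balance below the
        credit limit, otherwise the balance is reduced."""
        if self.balance - amount < -CREDIT_LIMIT:
            return False               # billable operation denied
        self.balance -= amount
        return True

    def credit(self, amount):
        """Refunds, received funds and manual adjustments increase the balance."""
        self.balance += amount

account = RegistrarAccount("registrar-42")
print(account.debit(600.0))   # True: balance is now -600
print(account.debit(600.0))   # False: would exceed the 1000 USD credit limit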

Deferred revenue calculation

·         Each billable transaction has a start-date and end-date

·         The start date must be set to the current date at the time of the transaction, except for renewals, where the start date must be set to the start of the renewed period. For example, if a domain is due to expire on March 13th 2002 and is subsequently renewed for one year, the start date is March 13th 2002 and the end date is March 13th 2003, regardless of the date the transaction actually occurred.

·         End date must be set to the start-date + period of registration. For example: If the start date is April 5th 2002, and the period is 5 years, the end date should be set to April 5th 2007.

·         Due to the expiration-date policies in the Registry, end dates may be truncated if the expiry date would otherwise have ended up more than 10 years (but less than 11 years) into the future. (As explained in Section C17.3, transactions that seek to set the expiry date to more than 11 years in the future will be denied.)

·         "Period" should be taken to mean the amount of time (in days) that was bought with the transaction. 

·         Refunds as a result of a delete during the applicable grace period(s) must also have start and end dates set according to the same rule. Example: If the start date of the original transaction was April 5th 2002 for 5 years, and the delete happened on April 8th 2002, the start date of the refund transaction should be April 8th 2002 and the end date April 8th 2007. (See the sketch following this list.)
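The date rules above can be illustrated with a short sketch that reproduces the registration and renewal examples given in this section; the helper names are illustrative only.

from datetime import date
from typing import Optional

def add_years(d, years):
    """Add whole years; 29 February falls back to 28 February when needed."""
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, month=2, day=28)

def transaction_dates(today, years, renewal_expiry: Optional[date] = None):
    """Start date is the transaction date, except for renewals, where it is the
    start of the renewed period (the current expiry date); end date is
    start date + period."""
    start = renewal_expiry if renewal_expiry else today
    return start, add_years(start, years)

# New 5-year registration made on April 5th 2002: start 2002-04-05, end 2007-04-05.
print(transaction_dates(date(2002, 4, 5), 5))

# One-year renewal of a domain expiring March 13th 2002, whenever it is performed:
# start 2002-03-13, end 2003-03-13.
print(transaction_dates(date(2002, 2, 1), 1, renewal_expiry=date(2002, 3, 13)))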

VAT calculation and VAT information

The following rules apply for VAT codes and VAT calculation:

TAX CODE      VAT       COUNTRY               COMMENT

VAT_UK        17.50%    UK only               VAT number optional

VAT_UK_EU     17.50%    EU except the UK      VAT number has not been supplied

VAT_EU        0.00%     Any EU country        VAT number must be supplied

VAT_NONEU     0.00%     Any non-EU country    VAT number is optional

 

When working out the VAT per unit, the amount of VAT is worked out to 4 digits after the decimal point and then rounded to 3 digits. For example, if the VAT is 0.0024, it is rounded to 0.002.
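A minimal sketch of the tax-code selection and the per-unit VAT rounding rule described above is shown below; the country classification is simplified and the amounts are illustrative.

from decimal import Decimal, ROUND_HALF_UP

VAT_RATES = {"VAT_UK": Decimal("0.175"), "VAT_UK_EU": Decimal("0.175"),
             "VAT_EU": Decimal("0"), "VAT_NONEU": Decimal("0")}

def tax_code(is_uk, is_eu, vat_number_supplied):
    """Pick the tax code according to the table above."""
    if is_uk:
        return "VAT_UK"
    if is_eu:
        return "VAT_EU" if vat_number_supplied else "VAT_UK_EU"
    return "VAT_NONEU"

def vat_per_unit(net_unit_price, code):
    """Work the VAT out to 4 digits after the decimal point, then round to 3."""
    raw = (net_unit_price * VAT_RATES[code]).quantize(Decimal("0.0001"), ROUND_HALF_UP)
    return raw.quantize(Decimal("0.001"), ROUND_HALF_UP)

print(tax_code(is_uk=False, is_eu=True, vat_number_supplied=True))   # VAT_EU
print(vat_per_unit(Decimal("6.00"), "VAT_UK"))                       # 1.050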

C17.7. Data escrow and backup.

Frequency and procedures for backup of data. Describe hardware and systems used, data format, identity of escrow agents, procedures for retrieval of data/rebuild of database, etc.

C17.7. Data escrow and backup. 61

Backup. 61

Overview. 61

Backup policy. 62

Tivoli Storage Manager Implementation. 63

Domains and Management Classes. 65

Storage Pools. 66

Schedules. 66

Administrative Schedules. 67

Procedures for taking tapes off-site. 67

Testing and QA of backup. 67

Backup agent contract/proposal. 68

Escrow. 68

Schedule for escrow deposits. 68

Escrow deposit format specification. 69

Distribution Of Public Keys. 75

Escrow Transfer Procedure. 76

Escrow Verification Procedure. 76

Escrow Retrieval And Rebuild Procedure. 77

Escrow Agent Proposal. 78

Backup

Overview

The backup solution in use by Global Name Registry consists of a central backup server, software installed on the server and on all clients where data is taken from, as well as a tape library robot connected to the backup server. The diagram on the following page illustrates the backup solution:

Figure 14: Backup solution overview

The backup jobs run every 24 hours and write all files that are to be backed up to the Backup Server (IBM RS6000), after which the backup is written to tape by the tape robot, which holds more than 2 TB of tape storage.

The TSM solution is based on a Network Storage Manager package sold by IBM. The solution is comprised of an IBM H70 running AIX 4.3.3 and a SCSI attached 3583 library with two LTO drives. The IBM hardware was delivered with AIX 4.3.3 and TSM 4.1.3.0 pre-installed. Sagitta carried out the physical installation and implemented the configuration.

Backup policy

The data backed up includes all data which is not part of the base build or application build. All data changed or added by applications is backed up. The following illustrates this backup policy:

Figure 15: All application data is backed up

The backed-up data is therefore all the data necessary to reconstitute the Registry at a single point in time. All non-backed-up data consists of standard server builds and software that can be reinstalled from the ESS or CDs by the Operations team.

Backup of the main database is stored on tape, but Global Name Registry also holds at least two backups of the database on hard disk storage. This is in order to ease and speed up retrieval should restoration of the database be necessary.

The backup solution can back up a data volume filling its entire tape library of 2.3TB in less than 24 hours. With its current operations of about 140,000 domain names and .name email addresses, the full backup volume is around 200GB and is completed in less than 2 hours. Global Name Registry believes that the current backup solution is sufficient to back up the entire .org database in addition to the .name database (see Section C17.3 for the database sizing discussion), but should the backup volume grow to a level where full backups take more than 16 hours, Global Name Registry will add additional backup units to share the backup volume evenly between units. This can be easily handled by the Tivoli Backup Manager software.

Tivoli Storage Manager Implementation

Tivoli Storage Manager has been employed to provide automated backup and recovery for the Global Name Registry (GNR) Linux environment. All TSM clients serviced by the TSM server perform LAN-based backups and restores. All backups are flatfile backups.

A detailed listing of all configuration information completed on the TSM server can be found at appendix A and appendix B.

Backup overview

 

Figure 16: Diagram of the data flow

During a backup cycle, the TSM server, based on its predefined schedules, will poll the TSM clients to begin their backups. The characteristics of the backup are defined on the server; hence the server tells the client what should be backed up. The start of each backup will depend on the availability of the clients and the balancing of the backup load on the TSM server.

Referring to number 1 in the figure above, clients back up user data (flatfiles) across the LAN to the disk storage pool on the TSM server. Once backups have completed, data is migrated from disk to tape by the TSM server (number 2). Once the migration has completed, all data is copied from tapes in the primary storage pool to tapes in the copy storage pool. This is done as part of the administrative tasks performed every day (see later section). Thus, two copies of all backed-up data exist.

Domains and Management Classes

One TSM domain is defined on the TSM server. This domain contains one policyset with one management class. See figure below for an overview.

 

Figure 17: Domain, Management Classes, Copygroup and storage pool structure

All clients are defined to the GNRDOM TSM domain. All backups end up in the DISKPOOL disk storage pool, which effectively means that all backups are backed up to disk. As the disk storage pool fills up, data is migrated to the LTOPOOL tape storage pool. Later all data in the LTOPOOL is copied to the COPYPOOL tape storage pool. Tapes belonging to the COPYPOOL should be taken off-site on a weekly basis.  

The properties for the standard copy group in the default management class of the GNRDOM domain are:

·         Versions exist: 3

·         Versions deleted: 2

·         Retain extra version: 180

·         Retain only version: No Limit

Storage Pools

Storage pools are the management units that TSM uses to manage the data. Different types of data are assigned to different Storage pools, for easier collective management. There are two types of storage pools:

·         Primary storage pool. Primary storage pools are where the data is held and managed. A primary storage pool can be on either disk or tape.

·         Copy storage pool. A storage pool which is a complete duplicate of a primary storage pool. There can be a one-to-one or a many-to-one relationship between primary and copy storage pools.

The TSM client systems are backing up all flatfile data to primary disk storage pools defined on the TSM server and later migrated to tape. The primary storage pools are backed up to copy storage pools. Copying from primary to copy storage pools is done as part of the daily administrative tasks (see next section).

Schedules

A number of schedules are running on the TSM server. These are either client schedules that deal with client data backups or administrative schedules that perform several administrative tasks on the TSM server, e.g. TSM DB backup and copying of data from primary storage pools to copy storage pools.

Client Schedules

One client schedule is currently running on the TSM server. That is

DAILY_INCR, runs every day at 01:00. The schedule performs an incremental backup of all associated nodes.

Administrative Schedules

Three administrative schedules are running on the TSM server. They are

·         start_admin_tasks, runs every day at 03:00. The schedule runs the TSM server script start_admin_tasks. This script verifies that all client backups have completed. Once that has happened, the start_admin_tasks script kicks off the do_admin_tasks script, which in turn does the migration from the disk pool to the tape pool, backs up the primary storage pool to the copy storage pool, takes the TSM DB backup, and expires the inventory. A printout of all TSM server scripts can be found in Appendix B.

·         run_copypool_space_reclaim, runs every Saturday at 07:00. The schedule runs the TSM server script do_space_reclaimation with the argument copypool (do_space_reclaimation is a generic script, that will do space reclamation on the storage pool named by the argument passed to the script).

·         run_ltopool_space_reclaim, runs every Saturday at 11:00. The schedule runs the TSM server script do_space_reclaimation with the argument ltopool.

Procedures for taking tapes off-site

All procedures for taking tapes to an off-site location and bringing tapes back from the off-site location must be handled manually.

The scripts implemented on the TSM server will handle all preparation (TSM database backups and creation of copy storage pool volumes). But it is the TSM administrator’s responsibility to identify the volumes that must be taken off-site and update the volumes’ access values according to their location. It is also the TSM administrator’s responsibility to trace all off-site volumes, including TSM database backups. An unsupported script has been provided to help identify volumes that should go off-site and volumes that should be returned to site, but this script only works for data volumes (and only if volume access is updated correctly as tapes are taken in and out of the library) and does not work for tapes containing TSM database backups.

For DR purposes, a copy of the devconfig.out and volhist.out files must also go off-site along with the TSM database backup. These two files are only a few kilobytes each, therefore the easiest way of taking the files off-site is to mail them to an off-site location or print them at a remote printer. 

Testing and QA of backup

Global Name Registry regularly tests reconstruction from backup to ensure that backup systems, methods and processes work properly. This ensures that in the event a real backup restoration should be necessary, all systems and personnel will be primed to restore the data with no errors.

Backup agent contract/proposal

Please see Appendix 14 for the backup agent (Iron Mountain) contract for offsite tape storage. Also see Appendix 13 for the Sagitta report on the Global Name Registry backup installation.

Escrow

The Escrow software developed by Global Name Registry runs on an IBM350 server with 4 processors, RAIDed disks, 3 separate power supplies and 4 GB RAM. The server is configured according to standard Global Name Registry configurations. It runs a Linux operating system which is custom-hardened and secured by Global Name Registry engineers.

Schedule for escrow deposits

Full Deposit Schedule

Full Deposits consist of data that reflects the state of the registry as of 12:00 GMT (“Full Deposit Time”) on each Sunday.  Pending transactions at that time (i.e. transactions that have not been committed to the Registry Database) are not reflected in the Full Deposit.

Full Deposits start, according to the transfer process described in this document, within a four-hour window beginning at 06:00 GMT (“Full Deposit Transfer Time”) on the following Monday.  The time for transferring data will change during the ramp-up period to a start time that is related to the optimal bandwidth, within the 24 hour time limit from the Deposit Time on each Sunday. The escrow agent will be notified about the change within the time that Global Name Registry and the escrow agent agree upon.

Incremental Deposit Schedule

Incremental Deposits reflect database transactions made since the most recent Full or Incremental Deposit.  The Incremental Deposit for Monday includes transactions completed that had not been committed to the Registry Database at the time the last Full Deposit was taken. Incremental Deposits on Tuesday through Saturday include transactions completed by 12:00 GMT on the day of the deposit that were not reflected in the immediately prior Incremental Deposit.

Incremental Deposits will start, according to the transfer process described in this document, within a four-hour window beginning at 06:00 GMT on the following day. The time for transferring data will change during the ramp-up period to a start time that is related to the optimal bandwidth, within a 24-hour time limit from 06:00 GMT on each day the Incremental Deposit is made. The escrow agent will be notified about the change within the time that Global Name Registry and the escrow agent agree upon.

Escrow deposit format specification

This format is subject to change by agreement of Global Name Registry and ICANN as well as during the IETF standards process.  In addition, Global Name Registry will implement changes to this format as specified by ICANN to conform to the IETF provreg working group’s protocol specification no later than 135 days after the IETF specification is adopted as a Proposed Standard (RFC 2026, section 4.1.1).

Each Full and Incremental Deposit consists of a series of files that are generated in the Escrow Process.

Full Deposit Contents

The reports involved in a Full Deposit for .org will include, but not be limited to:

·         Registrar Object Report - this will detail the contents of known registrars in the Registry Database.

·         Domain Object Report - this will detail the contents of valid domains in the Registry Database

·         Contacts Object Report - this will detail active contacts within the Registry Database

·         Nameserver Object Report - this will detail all known nameservers within the Registry Database

Incremental Deposit Contents. 

This will consist solely of a transaction report.  The transaction report will detail the contents of all transaction records included in the Incremental Deposit.

Format of the Reports. 

All reports are to be formatted in XML format.  In compliance with the XML 1.0 specification, certain characters in the data must be "escaped", as described immediately below. Each Report shall then be prepared according to the general XML format described in the subsequent items. Item “The Escrow Container” describes the escrow container that is common to all reports. Subsequent items describe the structure of the contents of the report container for each of the specific reports.  The format of the reports may be amended or changed upon 90 days’ notice if the database structure or the number or structure of objects changes.

In compliance with the XML 1.0 specification, the following characters in any escrowed data elements must be replaced with the corresponding escape sequences listed here:

Character    Escape Sequence

"            &quot;

&            &amp;

'            &apos;

<            &lt;

>            &gt;
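As an illustration only, the following sketch applies these five substitutions to a data element using Python's standard xml.sax.saxutils module; it is not the escrow generator itself.

from xml.sax.saxutils import escape

# escape() handles &, < and > by default; the two quote characters are passed
# as extra entities so all five substitutions from the table above are applied.
EXTRA_ENTITIES = {'"': "&quot;", "'": "&apos;"}

def escape_escrow_value(value):
    return escape(value, EXTRA_ENTITIES)

print(escape_escrow_value('Smith & Sons "R&D" <org>'))
# Smith &amp; Sons &quot;R&amp;D&quot; &lt;org&gt;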

 

The Escrow Container. At its highest level, the XML format consists of an escrow container with header attributes followed by escrow data.

Attributes:
Tld – the name of the top level domain
Date – date of the escrow export
Report – type of data contained in the escrow container
Type – type of export (Full or Incremental)
Version – version number of the escrow

Elements:
Registrar - container for registrar data
Domain – container for domain data
Nameserver – container for nameserver data
Contact – container for contact data
Transaction – container for transaction data

For example, an escrow container may have the following:

<?xml version="1.0" encoding='UTF-8' ?>
<!DOCTYPE escrow SYSTEM "org-escrow-export.dtd" >
<escrow tld="org" date="1-jan-2003 3:15:00AM" report="domain" type="Full" version="1.0" >

 

{Here is where the report containing the actual data being escrowed is placed. It contains one element for each object of the type (domain, SLD email, nameserver, contact, registrar, defensive registration, namewatch or transaction) covered by the report. The specific format for each report is described in the items below.}

</escrow>

The Registrar Element.  The registrar element is a container consisting of the following data:

Attributes:
ID – unique identifier for this object type (corresponding to an entry in the IANA registrar-id database)
Status – the current status of the Registrar Element

Elements:
ContactID – contact point of registrar
URL – the home page for the company
Created - timestamp detailing when this record was created
Modified - timestamp detailing when this record was last altered

 

For example, a registrar container may have the following:

 

<registrar id="42" status="ok">

<contactid type="REGISTRAR">123</contactid>

<url>www.acmeregistrar.co.uk</url>

<created>2001-08-10-01.01.01</created>

<modified>2001-08-10-01.01.01</modified>

</registrar>

 

 

The Domain Element.  This element will be a container holding the following data:

Attributes:
ID - unique identifier for this object type
FQDN - Fully Qualified Domain Name
Status - the current position of the domain

Elements:
RegistrarID - the identifier for the registrar within the Registry Database
Created - the timestamp detailing when this record came into existence
Expires - when the active status of the domain will expire
Modified - the timestamp stating when this record was last altered
NameServerID - up to 13 different nameservers for this particular domain record
ContactID - up to 4 different types of contact id for this particular domain record

 

For example, a domain container may have the following:

 

<domain id="123" fqdn="redcross.org" status="ok">

 <registrarid>42</registrarid>

 <created>2001-08-10-12.34.56</created>

 <expires>2003-08-10-12.34.56</expires>

 <modified>2001-08-10-12.34.56</modified>

 <nameserverid>312</nameserverid>

 <nameserverid>321</nameserverid>

 <contactid type="REGISTRANT">111</contactid>

 <contactid type="ADMINISTRATIVE">222</contactid>

 <contactid type="TECHNICAL">333</contactid>

 <contactid type="BILLING">444</contactid>

</domain>

 

 

The Nameserver Element.  This element consists of the following data:

Attributes:
ID - unique identifier for this object type
FQDN - Fully Qualified Domain Name for the nameserver
Status - current standing

Elements:
RegistrarID - the identifier for the registrar within the Registry Database
Created - timestamp detailing when this record was created
Modified - timestamp detailing when this record was last altered
Ipaddress - up to 13 unique IP addresses for each nameserver

 

For example, a nameserver container may have the following:

 

 <nameserver id="312" fqdn="dns1.example.org" status="ok">

 <registrarid>42</registrarid>

 <created>2001-08-10-22.31.12</created>

 <modified>2001-08-10-22.31.12</modified>

 <ipaddress>192.168.1.11</ipaddress>

 <ipaddress>192.168.1.12</ipaddress>

 </nameserver>

 

The Contact Element.  The contact element is a container consisting of the following data:

Attributes:
ID – unique identifier for this object type
Status - current standing

Elements:
Name - the name of the contact
RegistrarID- the sponsoring registrar for this record
Address - a free form field for the address of the contact
City - the town or city where the contact resides
State/Province - the state/province where the contact resides
Country - the country for this contact point
PostCode - the postal code where the contact resides
Telephone - the voice telephone number for this contact
Fax - a facsimile number for this contact
Email - an electronic address to reach the contact by
Created - timestamp detailing when this record was created
Modified - the timestamp detailing the time of the last update

 

For example, a contact container may have the following:

 

<contact id="111" status="ok">

 <name>John Smith</name>

 <registrarid>222</registrarid>

 <address>32, Hill Avenue</address>

 <city>New Town</city>

 <state>Beluga</state>

 <country>UK</country>

 <postcode>PP1 2PP</postcode>

 <telephone>+44.2055551111</telephone>

 <fax>+44.2072061234</fax>

 <emailaddress>john@smith.name</emailaddress>

 <created>2001-08-10-02.31.12</created>

 <modified>2001-08-10-10.58.12.123456</modified>

 </contact>

The Transaction Element.  This element consists of the following data:

Attributes:
ID – unique identifier for this object type
Type - the action which was performed

Elements:
RegistrarID - the identifier for the registrar performing the update within the Registry Database
Object - the identifier to the object with the Registry Database
Timestamp - the time of the transaction within the database, and the moment this record came into existence

 

For example, a transaction container may hold the following data:

 

 <transaction id="1234" type="INSERT">

 <registrarid>42</registrarid>

 <object type="DOMAIN">123</object>

 <timestamp>2001-08-10-12.34.56.123456</timestamp>

 </transaction>

DTD for escrow files

The following section illustrates the DTD for validating Escrow files.

<?xml version="1.0" encoding='UTF-8' ?>

 

<!ELEMENT escrow (registrar*, domain*, nameserver*,

                 contact*, transaction*)>

<!ATTLIST escrow

          tld NMTOKEN #FIXED 'org'

          date CDATA #REQUIRED

          report (domain | nameserver | contact

                  | registrar | transaction) #REQUIRED

          type (Full | Incremental) #REQUIRED

          version CDATA #FIXED '1.0'>

 

<!ELEMENT registrar (contactid, url, created, modified)>

<!ATTLIST registrar

         id CDATA #REQUIRED

         status (inactive | ok) #REQUIRED>

 

 

<!ELEMENT domain (registrarid, created, expires, modified,

                  nameserverid*, contactid+)>

<!ATTLIST domain

          id CDATA #REQUIRED

          fqdn CDATA #REQUIRED

          status (clientDeleteProhibited | clientHold

                  | clientRenewProhibited | clientTransferProhibited |

                 clientUpdateProhibited | inactive | ok | pendingDelete

                 | pendingTransfer | pendingVerification |

                 serverDeleteProhibited | serverHold

                 | serverRenewProhibited | serverTransferProhibited |

                 serverUpdateProhibited) #REQUIRED>

 

 

<!ELEMENT nameserver (registrarid, created, modified, ipaddress+)>

<!ATTLIST nameserver

          id CDATA #REQUIRED

          fqdn CDATA #REQUIRED

          status (clientDeleteProhibited | clientUpdateProhibited

                  | linked | ok | pendingDelete |

                  pendingTransfer | serverDeleteProhibited

                  | serverUpdateProhibited) #REQUIRED>

 

 

<!ELEMENT contact (name, registrarid, address, city, state,

                   country, postcode, telephone, fax,

                   emailaddress, created, modified)>

<!ATTLIST contact

          id CDATA #REQUIRED

          status (clientDeleteProhibited | clientTransferProhibited

                  | clientUpdateProhibited | linked | ok |

                  pendingDelete | pendingTransfer | serverDeleteProhibited

                  | serverTransferProhibited |

                  serverUpdateProhibited) #REQUIRED>

 

 

<!ELEMENT transaction (registrarid, object, timestamp*)>

<!ATTLIST transaction

          id CDATA #REQUIRED

          type (INSERT | DELETE | TRANSFER | UPDATE) #REQUIRED>

 

 

<!ELEMENT name (#PCDATA)>

 

<!ELEMENT address (#PCDATA)>

 

<!ELEMENT city (#PCDATA)>

 

<!ELEMENT state (#PCDATA)>

 

<!ELEMENT country (#PCDATA)>

 

<!ELEMENT postcode (#PCDATA)>

 

<!ELEMENT telephone (#PCDATA)>

 

<!ELEMENT url (#PCDATA)>

 

<!ELEMENT created (#PCDATA)>

 

<!ELEMENT modified (#PCDATA)>

 

<!ELEMENT expires (#PCDATA)>

 

<!ELEMENT registrarid (#PCDATA)>

 

<!ELEMENT nameserverid (#PCDATA)>

 

 

<!ELEMENT forward (#PCDATA)>

<!ELEMENT ipaddress (#PCDATA)>

 

<!ELEMENT fax (#PCDATA)>

 

<!ELEMENT emailaddress (#PCDATA)>

 

<!ELEMENT report EMPTY>

<!ATTLIST report

          type (DAILY|WEEKLY|MONTHLY) #REQUIRED>

 

<!ELEMENT string (#PCDATA)>

 

<!ELEMENT wildcard EMPTY>

<!ATTLIST wildcard

       type (STARTS_WITH|CONTAINS|ENDS_WITH) #REQUIRED>

 

 

<!ELEMENT object (#PCDATA)>

<!ATTLIST object

          type (DOMAIN | EMAIL | NAMESERVER |

                CONTACT | NAMEWATCH | DEFREG) #REQUIRED>

 

<!ELEMENT timestamp (#PCDATA)>

Distribution Of Public Keys. 

Each of Global Name Registry and escrow agent will distribute its public key to the other party (Global Name Registry or escrow agent, as the case may be) via email to an email address to be specified. Each party will confirm receipt of the other party's public key with a reply email, and the distributing party will subsequently reconfirm the authenticity of the key transmitted. In this way, public key transmission is authenticated to a user able to send and receive mail via a mail server operated by the distributing party.  Escrow agent and ICANN shall exchange keys by the same procedure.

Escrow Transfer Procedure

Global Name Registry shall prepare and transfer the Deposit file by the following steps, in sequence:

1.     The files making up the Deposit will first be generated according to the format specification. (See above, "Escrow Deposit Format Specification").

2.     The Reports making up the Deposit will be concatenated. The resulting file shall be named according to the following format: "orgSEQN", where "SEQN" is a four-digit decimal number that is incremented as each report is prepared.

3.     Next the Deposit file will be processed by a program (provided by ICANN) that will verify that it complies with the format specification and contains reports of the same date/time (for a Full Deposit), count the number of objects of the various types in the Deposit, and append to the file a report of the program's results.

4.     Global Name Registry may optionally split the resulting file using the Unix SPLIT command (or equivalent) to produce files no less than 640MB each (except the final file). If Deposit files are split, a .MD5 file (produced with MD5SUM or equivalent) must be included with the split files to isolate errors in case of transfer fault.

5.     The files will then be encrypted using the escrow agent's public key for PGP and signed using Registry Operator’s private key for PGP, both PGP versions 6.5.1 or above, with keys of DH/DSS type and 2048/1024-bit length. The file will be named in the form <TLD>CCYYMMDD.PGP (e.g. org20030810.PGP). (Note that PGP compresses the Deposit file(s) in addition to encrypting them.)

6.     The formatted, encrypted and signed Deposit file(s) will be sent by anonymous file transfer, e-mail or a similar method, to be agreed between Global Name Registry and the escrow agent, starting within the specified time window. A sketch of steps 4 to 6 is shown after this list. Global Name Registry appreciates that after a certain period of time, the size of the files requiring transfer to escrow may exceed a size that will be safe for electronic transmission. At that time manual procedures will be developed to ensure that the escrow transfer obligations of the Registry continue to be complied with.
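The following is a minimal sketch, in Python, of how steps 4 to 6 above could be automated. It is illustrative only: it assumes GnuPG is used as the PGP implementation, and the chunk size, key identities and file names are placeholder assumptions rather than part of the specification.

import hashlib
import subprocess
from pathlib import Path

CHUNK = 640 * 1024 * 1024  # illustrative split size for step 4

def split_and_checksum(deposit: Path):
    """Split the concatenated Deposit into chunks and write a .MD5 manifest (step 4)."""
    data = deposit.read_bytes()  # simplified; a real tool would stream the file
    parts, md5_lines = [], []
    for i in range(0, len(data), CHUNK):
        part = deposit.with_suffix(f".{i // CHUNK:03d}")
        part.write_bytes(data[i:i + CHUNK])
        md5_lines.append(f"{hashlib.md5(data[i:i + CHUNK]).hexdigest()}  {part.name}")
        parts.append(part)
    deposit.with_suffix(".MD5").write_text("\n".join(md5_lines) + "\n")
    return parts

def encrypt_and_sign(part: Path, out: Path):
    """Step 5: encrypt with the escrow agent's public key and sign with the registry's key."""
    subprocess.run(
        ["gpg", "--output", str(out), "--encrypt", "--sign",
         "--recipient", "escrow-agent@example.com",   # hypothetical key identities
         "--local-user", "registry@example.com",
         str(part)],
        check=True,
    )

if __name__ == "__main__":
    for n, part in enumerate(split_and_checksum(Path("org0001"))):
        encrypt_and_sign(part, Path(f"org20030810.{n:03d}.PGP"))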

Escrow Verification Procedure

Escrow agent will verify the format and completeness of each Deposit by the following steps:

1.     When the transfer is done, all Deposit files will be moved to a not-publicly-accessible directory and the existence and size of each will be noted.

2.     Each Deposit file will be decrypted using escrow agent's private key for PGP and authenticated using Registry Operator’s public key for PGP. The PGP software will additionally decompress the data therein.

3.     If there are multiple files, they will be concatenated in sequence.

4.     The escrow agent will run a program on the Deposit file (without the report) that will split it into its constituent reports (including the format report prepared by Global Name Registry and appended to the Deposit), check its format, count the number of objects of each type, and verify that the data set is internally consistent. This program will compare its results with the results of the Registry-generated format report and generate a Deposit format and completeness report (a sketch of the counting step follows this list). The program will encrypt the report using ICANN's public key for PGP and sign it using the escrow agent's private key for PGP, both PGP versions 6.5.1 or above, with keys of DH/DSS type and 2048/1024-bit length. (Note that PGP compresses the report in addition to encrypting it.)

5.     The decrypted Deposit file will be destroyed and the database dropped to reduce the likelihood of data loss to intruders in the event of a partial security failure.
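As an illustration of the counting and cross-check in step 4 above, the sketch below (Python, standard library only) counts the escrow objects of each type in a decrypted Deposit and compares the counts with a Registry-generated report. The report format and file names are hypothetical; the escrow agent's actual verification program is supplied separately.

import sys
import xml.etree.ElementTree as ET

OBJECT_TYPES = ("registrar", "domain", "nameserver", "contact", "transaction")

def count_objects(escrow_file):
    """Count the top-level escrow objects of each type (see the escrow DTD above)."""
    root = ET.parse(escrow_file).getroot()  # the <escrow> element
    return {t: len(root.findall(t)) for t in OBJECT_TYPES}

def compare(escrow_file, registry_report):
    """registry_report: lines of the form 'domain 123456' (hypothetical format)."""
    expected = {}
    with open(registry_report) as fh:
        for line in fh:
            obj_type, count = line.split()
            expected[obj_type] = int(count)
    actual = count_objects(escrow_file)
    for t in OBJECT_TYPES:
        status = "OK" if actual.get(t, 0) == expected.get(t, 0) else "MISMATCH"
        print(f"{t:12s} deposit={actual.get(t, 0)} report={expected.get(t, 0)} {status}")

if __name__ == "__main__":
    compare(sys.argv[1], sys.argv[2])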

Escrow Retrieval And Rebuild Procedure

It is important to realize that reconstructing any Registry from an Escrow file is likely to require good knowledge of the Registry data model and database structure. Given such knowledge, ICANN, a designated recipient or Global Name Registry can retrieve the Registry data and rebuild from Escrow according to the following procedure:

1.     Each of the latest Deposit file(s) will be retrieved from the Escrow agent through FTP download or by courier of an optical medium, whichever format was used to originally deposit the Deposit File(s)

2.     Each Deposit file will be decrypted using either the escrow agent's private key for PGP or Global Name Registry’s private key for PGP. The PGP software will additionally decompress the data therein.

3.     If there are multiple files, they will be concatenated in sequence to reconstitute the entire XML Escrow file.

4.     The Escrow file will be validated against the Escrow File DTD specified in this document. Any errors at this point are unlikely but must be investigated manually.

5.     Global Name Registry will parse the Escrow file using the XML parser Xerces.

6.     After the parsed information is available, a script will, based on the object type (domain, contact, nameserver), generate an appropriate SQL statement and insert the information directly into the database, whose structure is identical to the database from which the Escrow was taken, using Global Name Registry database schemas. Alternatively, Global Name Registry has the option of generating an EPP command from the parsed information and passing it through the EPP servers. This would be far slower than the direct database approach, which is therefore preferred. A sketch of the direct approach is shown after this list.

7.     The auth-info token(s) and the object history (past transfers, past operations on the object) are not included in the Escrow specification and will not be part of the rebuild. This does not preclude restored operations, but it means that Registrars will have to re-set their auth-info tokens and that the object history will have to be regenerated from another source.

8.     The database can be subjected to Quality Assurance, after which it can be put into production.
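The following is a minimal sketch of the direct-to-database rebuild described in step 6. It uses Python's standard XML parser rather than Xerces, handles only domain objects, and the table and column names are hypothetical stand-ins for the actual Global Name Registry schema.

import xml.etree.ElementTree as ET

def sql_quote(value):
    """Very small SQL string-quoting helper, sufficient for this sketch."""
    return "'" + (value or "").replace("'", "''") + "'"

def domain_inserts(escrow_file):
    """Yield one INSERT statement per <domain> element in the escrow file."""
    root = ET.parse(escrow_file).getroot()
    for dom in root.findall("domain"):
        yield ("INSERT INTO domains (id, fqdn, status, registrar_id, created, expires) "
               "VALUES (%s, %s, %s, %s, %s, %s);" % (
                   sql_quote(dom.get("id")),
                   sql_quote(dom.get("fqdn")),
                   sql_quote(dom.get("status")),
                   sql_quote(dom.findtext("registrarid")),
                   sql_quote(dom.findtext("created")),
                   sql_quote(dom.findtext("expires"))))

if __name__ == "__main__":
    for statement in domain_inserts("org20030810.xml"):
        print(statement)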

Escrow Agent Proposal

The Escrow agent procedure is illustrated in the figure below:

Figure 18: Escrow agent procedure

The Escrow Agent Proposal and Contract is listed in Appendices 10 and 11 to this proposal.

C17.8 Publicly accessible look up/Whois service

This Section’s first part documents the Global Name Registry Whois from a technical point of view, by looking into its functionality, mapping that functionality to the hardware it runs on, and examining the Whois interfaces. The second part of this Section describes the proposed Whois policies, the returned results and other issues that belong more in the policy arena than the technical arena.

C17.8 Publicly accessible look up/Whois service. 79
Technical functioning of the Whois. 79
Deployment of the Whois Service. 81
Port 43 Whois service. 83
Web-based Whois service. 85
Hardware. 87
Software. 87
Security and reliability. 87
Whois policy and Format of responses. 88
Modifiers. 88
Returned Whois Results. 88
Data format policies. 92

Technical functioning of the Whois

The Global Name Registry WHOIS service is able to handle a sustained and significant load. The available WHOIS servers are distributed at the regional sites on a high-availability, active/passive load-balanced failover system. New servers can easily be added. Currently, there are four Whois servers in operation to handle the service volumes. Peak capacities are described in more detail in Section C17.10.

The Whois system has been designed for robustness, availability and performance. Detection of abusive usage, like excessive numbers of queries from one source, has been taken into account, and other countermeasures against abuse will be activated if necessary.

The WHOIS service will only give replies for exact matches of the object ID. Search capabilities beyond this will be implemented should the policy development allow it.

The Global Name Registry Whois system operates independently of other Whois systems. In the .org context it is proposed that all clients requesting Whois queries for .org domains will only need to use the Global Name Registry Whois system. This is related to the migration Global Name Registry proposes from a “thin” .org Registry to a “thick” .org Registry, described in more detail in Sections C18 and C17.3. A thick .org Registry will allow Whois to be centralized and remove the inconveniences of a fragmented Whois.

This section describes only the services that Whois offers to public clients. The updates of the Whois servers are described in Section C17.1. Except for the figure below, only subsystems, packages and components related to the publicly accessible look-up are presented in this section.

Figure 19: Overview of the Whois system

The figure above is an overview of the Whois service and the systems of which the service is constituted. The Whois Public Interface has a Whois database containing all Whois entries and offers a public interface to that database for look-ups, both through a port 43 interface accessible via UNIX commands like fwhois and through a web interface.

In addition, the Whois System generates an Escrow file for the purpose of Appendix P (“Whois provider Data Specification”) to the Registry Contract. The Whois Escrow does not offer any public service, and will therefore not be described here. Please see Section C17.1 for a description of the Whois Escrow.

Global Name Registry has developed a custom database for the operation of Whois. The database is optimized for the type of requests that are served by Whois and greatly improves single-server Whois performance compared to other designs. More details on the performance of the Global Name Registry Whois database can be found in Section C17.10.

Deployment of the Whois Service

The figure on the following page presents the use of the Whois resolving services in a deployment diagram. The diagram shows a client executing a Whois request and the message flow between components deployed to hardware. The Whois Server and WWW-Whois Server are located at regional sites and are both part of the Whois Public Interface System presented in the figure above.

Figure 20: Deployment diagram for the Public accessible Look up

The Whois service is compliant with RFC 954. It substantially consists of two parts:

·         Port 43 Whois services

·         Web-based Whois services

 

These two services, functionally similar but operating on two different interfaces and therefore varying slightly, are described in the following sections.

Port 43 Whois service

When a client makes a Whois query through port 43, one of the regional Whois servers is invoked.

Component descriptions

Rate controller: The rate controller limits the number of queries from one client and is able to detect abusive usage, such as excessive numbers of queries from one source, which it will then block.

The Whois logic processes queries against the Whois database. The output data of a query is determined by modifiers given in the query (see the ‘Whois Policy and Format of Responses’ section below for more about modifiers). The Whois logic also processes the queries that clients make through the Web-based Whois interface.

The response to any request is either an error or a successful query result. If an error occurs, the service uses different error messages, depending on the severity and cause of the error. The service will send a message describing the error, the reason for the failure, and possibly an explanation of how to solve the problem. Error handling in the Whois service is performed by the Whois logic; a sketch of such a handler is shown below.
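To illustrate how the rate controller, the Whois logic and the error handling fit together, the following is a minimal sketch of a port 43 style query handler in Python. The per-source limit, the in-memory lookup table and the error texts are illustrative assumptions, not the production implementation.

import socketserver
import time
from collections import defaultdict

MAX_QUERIES_PER_MINUTE = 60  # illustrative rate limit
WHOIS_DB = {"EXAMPLE.ORG": "Domain Name: EXAMPLE.ORG\nStatus: ok"}  # stand-in for the Whois database
query_times = defaultdict(list)  # rate controller state, per source address

class WhoisHandler(socketserver.StreamRequestHandler):
    def handle(self):
        source = self.client_address[0]
        now = time.time()
        query_times[source] = [t for t in query_times[source] if now - t < 60]
        if len(query_times[source]) >= MAX_QUERIES_PER_MINUTE:
            self.wfile.write(b"% Error: query limit exceeded, source temporarily blocked\r\n")
            return
        query_times[source].append(now)
        query = self.rfile.readline().strip().decode("ascii", "replace").upper()
        result = WHOIS_DB.get(query)
        if result is None:
            self.wfile.write(b"% Error: no match for the requested object ID\r\n")
        else:
            self.wfile.write(result.encode("ascii") + b"\r\n")

if __name__ == "__main__":
    # Port 4343 lets the sketch run unprivileged; the real service listens on port 43.
    socketserver.ThreadingTCPServer(("", 4343), WhoisHandler).serve_forever()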

The Whois Database at each site stores all objects required for Whois. There are four object types in the Whois Database:

·         Domain

·         Contact

·         Nameserver

·         Registrar

 

The Whois Log records all queries processed against the Whois Database and offers an interface to the Reporting System and the Monitoring System (see Section C17.1). To meet the reporting requirements for Whois service activity, the Whois Log includes logic that analyzes the log for the last 24 hours each night (a sketch of such an analysis follows below). Queries from port 43 and the web-based interface are logged separately.
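The nightly analysis can be pictured with the short sketch below, which tallies the previous 24 hours of queries per interface and per source. The log line format ('timestamp interface source query') is a hypothetical simplification of the actual Whois Log format.

from collections import Counter
from datetime import datetime, timedelta

def analyze(log_path):
    """Count the last 24 hours of queries per interface (port43/www) and per source."""
    cutoff = datetime.now() - timedelta(hours=24)
    per_interface, per_source = Counter(), Counter()
    with open(log_path) as fh:
        for line in fh:
            timestamp, interface, source, _query = line.split(" ", 3)
            if datetime.fromisoformat(timestamp) < cutoff:
                continue
            per_interface[interface] += 1
            per_source[source] += 1
    return per_interface, per_source.most_common(10)

if __name__ == "__main__":
    interfaces, top_sources = analyze("whois.log")
    print("Queries per interface:", dict(interfaces))
    print("Busiest sources:", top_sources)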

Control flow

The figure below illustrates the control flow in a Whois query through port 43.

Figure 21: Control flow for a port 43 Whois look-up.

“Send query result to client” indicates a response that can be either an error message or returned data.

Web-based Whois service

The Web-based Whois service is a Web-based interface to the port 43 Whois service, utilizing the services and components at the Whois server.

Component descriptions

Rate controller: see description in relation to port 43 Whois service.

The WWW-Whois logic contains a CGI script that is responsible for making queries to the Whois server for public look-ups (a sketch follows below). The WWW-Whois logic determines to which Whois server the query is to be sent, and on which port. Extensive queries are sent using a different port than regular queries, which are sent using port 43.

Extensive Whois queries can only be requested from the WWW-Whois server, and the WWW-Whois server uses the Whois server to process the requests. Information about the requestor is required when performing an extensive lookup (see the section about Whois policy and format of responses) and is logged separately. The CGI in the WWW-Whois logic queries TCP port 43 for regular look-ups, while another TCP port is used for extensive queries. Returned data from an extensive query is sent by email to the email address given by the requestor.

WWW-Whois log logs all Web-requests. Information given by the requestor in relation to an extensive query is logged in a separate file.
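A minimal sketch of the CGI forwarding described above is given below: the script takes the query from the web form, opens a TCP connection to a Whois server and returns the result as the web page. The host name, the extensive-query port and the form field names are placeholder assumptions.

#!/usr/bin/env python3
import cgi
import socket

WHOIS_HOST = "whois.regional.example"  # hypothetical regional Whois server
REGULAR_PORT = 43                      # regular look-ups
EXTENSIVE_PORT = 4444                  # hypothetical port for extensive queries

def whois_query(query, port):
    """Send a single query to the Whois server and read the complete response."""
    with socket.create_connection((WHOIS_HOST, port), timeout=10) as sock:
        sock.sendall(query.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("ascii", "replace")

if __name__ == "__main__":
    form = cgi.FieldStorage()
    query = form.getfirst("query", "")
    extensive = form.getfirst("extensive", "") == "yes"
    print("Content-Type: text/plain\r\n")
    print(whois_query(query, EXTENSIVE_PORT if extensive else REGULAR_PORT))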

Control flow

The figure on the following page illustrates the control flow in a Whois query through the Web-based Whois service.

Figure 22: Whois look up through the Web-based interface

Hardware

Specifications:

·         Dual Pentium III 1 GHz processors

·         1 GB RAM

·         100 Mbit switched connection

·         2 dual network cards (both internal and external)

 

This hardware performs extremely well with the custom Whois database software developed by Global Name Registry. It also has the significant advantage of being horizontally scalable at low cost. For the performance of the Whois service, please see Section C17.10.

Software

The Whois service runs on Linux OS.

Security and reliability

The Whois service has been through thorough testing of its resilience against denial-of-service and distributed denial-of-service attacks.

There are firewalls and load balancers in an active/passive configuration in front of the Whois servers and WWW-Whois servers, providing failover. All servers contain all objects and records, so if one goes down, queries are routed to another server. The Whois service is easily scalable due to the Update Handler: by adding servers, the performance increases linearly (see Section C17.10).

Whois policy and Format of responses

Modifiers

A modifier can be applied to a Whois query to determine the amount of data returned for the requested object. The table below presents the types of modifiers valid for publicly accessible look-ups and the level of detail returned for each object type.

Query type \ Modifier | ~ | (none) | =
Domain | Summary | Detailed | Detailed
Contact | Summary | Standard | Detailed
Nameserver | Summary | Standard | Detailed
Registrar | Summary | Standard | Detailed

Table 1: Query types and modifiers supported by the Whois service

Returned Whois Results

The subsequent tables describe the data returned from successful Whois queries. Global Name Registry supports public Whois with three levels of detail (summary, standard and detailed results), as well as extensive Whois. As stated earlier in the section, public Whois can be requested through the port 43 interface, while the web-based interface supports all Whois requests.

The Whois results for an object owned by a Registrar that only supports RRP will not have contacts associated, and therefore no contacts will be displayed in Whois for this object until the Registrar moves to EPP. This transition is described in more detail in Sections C22, C17.3 and C18.

Flags and Public/Extensive Whois fields:

·         X - Field will always be output if data is available.

·         O - Field is optional, and may not be displayed

·         M - Field may be represented as multiple key/value pairs

Sections:

·         M - Multiple sub-records may be displayed

 

If the data is not available, the key will not be displayed.

If the value is a handle, and the handle cannot be resolved in detailed queries, the actual handle will be displayed instead, with a message indicating that the rest of the data was not available.

The "flags" column applies to all output formats.

Domain record

 

 

Section | Field name | Flags | Summary (Public Whois) | Standard (Public Whois) | Detailed (Public Whois) | Extensive Whois
 | Domain Name ID |  | X |  |  | 
 | Domain Name |  |  | X | X | X
 | Sponsoring Registrar |  |  |  | X | X
 | Sponsoring Registrar ID |  |  | X |  | 
 | Domain Status | M |  | X | X | X
 | Registrant ID |  |  | X | X | X
 | Registrant Organization | O |  |  | X | X
 | Registrant Name |  |  |  | X | X
 | Registrant Address |  |  |  | X | X
 | Registrant City |  |  |  | X | X
 | Registrant State/Province | O |  |  | X | X
 | Registrant Country |  |  |  | X | X
 | Registrant Postal Code | O |  |  | X | X
 | Registrant Phone Number |  |  |  |  | X
 | Registrant Fax Number | O |  |  |  | X
 | Registrant Email |  |  |  |  | X
 | Other names registered by registrant | OM |  |  |  | X
 | Admin ID |  |  | X | X | X
 | Admin Organization | O |  |  | X | X
 | Admin Name |  |  |  | X | X
 | Admin Address |  |  |  | X | X
 | Admin City |  |  |  | X | X
 | Admin State/Province | O |  |  | X | X
 | Admin Country |  |  |  | X | X
 | Admin Postal Code | O |  |  | X | X
 | Admin Phone Number |  |  |  | X | X
 | Admin Fax Number | O |  |  | X | X
 | Admin Email |  |  |  | X | X
 | Tech ID |  |  | X | X | X
 | Tech Organization | O |  |  | X | X
 | Tech Name |  |  |  | X | X
 | Tech Address |  |  |  | X | X
 | Tech City |  |  |  | X | X
 | Tech State/Province | O |  |  | X | X
 | Tech Country |  |  |  | X | X
 | Tech Postal Code | O |  |  | X | X
 | Tech Phone Number |  |  |  | X | X
 | Tech Fax Number | O |  |  | X | X
 | Tech Email |  |  |  | X | X
 | Billing ID |  |  | X | X | X
 | Billing Organization | O |  |  | X | X
 | Billing Name |  |  |  | X | X
 | Billing Address |  |  |  | X | X
 | Billing City |  |  |  | X | X
 | Billing State/Province | O |  |  | X | X
 | Billing Country |  |  |  | X | X
 | Billing Postal Code | O |  |  | X | X
 | Billing Phone Number |  |  |  | X | X
 | Billing Fax Number | O |  |  | X | X
 | Billing Email |  |  |  | X | X
M | Name Server |  |  |  | X | X
 | Name Server ID |  |  | X |  | 
 | Created On |  |  |  | X | X
 | Expires On |  |  |  | X | X
 | Updated On |  |  |  | X | X

Table 2: Returned data for the Domain object from Whois queries

Contact record

 

 

Section | Field name | Flags | Summary (Public Whois) | Standard (Public Whois) | Detailed (Public Whois) | Extensive Whois
 | Contact ID |  | X | X | X | X
 | Contact Name |  |  |  | X | X
 | Contact Registrar |  |  |  | X | X
 | Contact Registrar ID |  |  | X |  | 
 | Contact Organization | O |  |  | X | X
 | Contact Address |  |  |  | X | X
 | Contact City |  |  |  | X | X
 | Contact State/Province | O |  |  | X | X
 | Contact Postal Code | O |  |  | X | X
 | Contact Country |  |  |  | X | X
 | Contact Email |  |  |  |  | X
 | Contact Telephone |  |  |  |  | X
 | Contact Fax |  |  |  |  | X
 | Contact Status | OM |  |  | X | X
 | Created On |  |  | X | X | X
 | Updated On |  |  | X | X | X

Table 3: Returned data for the Contact object from Whois queries

Nameserver record

 

 

Section | Field name | Flags | Summary (Public Whois) | Standard (Public Whois) | Detailed (Public Whois) | Extensive Whois
 | Name Server ID |  | X | X | X | N.A.
 | Name Server Name |  |  | X | X | N.A.
 | Name Server Registrar ID |  |  | X |  | N.A.
 | Name Server Registrar |  |  |  | X | N.A.
 | Name Server Status | M |  | X | X | N.A.
 | IP Address Associated | M |  | X | X | N.A.
 | Created On |  |  | X | X | N.A.
 | Updated On |  |  | X | X | N.A.

Table 4: Returned data for the Nameserver object from Whois queries

Registrar record

 

 

Section | Field name | Flags | Summary (Public Whois) | Standard (Public Whois) | Detailed (Public Whois) | Extensive Whois
 | Registrar ID |  | X | X | X | N.A.
 | Registrar Name |  |  | X | X | N.A.
 | Registrar URL |  |  | X | X | N.A.
 | Registrar Status | M |  | X | X | N.A.
 | Registrar Address |  |  | X | X | N.A.
 | Registrar City |  |  | X | X | N.A.
 | Registrar State/Province | O |  | X | X | N.A.
 | Registrar Country |  |  | X | X | N.A.
 | Registrar Postal Code | O |  | X | X | N.A.
 | Registrar Phone Number |  |  | X | X | N.A.
 | Registrar Fax Number | O |  | X | X | N.A.
 | Registrar E-mail |  |  | X | X | N.A.
M | Admin ID |  |  | X | X | N.A.
 | Admin Organization | O |  |  | X | N.A.
 | Admin Name |  |  |  | X | N.A.
 | Admin Address |  |  |  | X | N.A.
 | Admin City |  |  |  | X | N.A.
 | Admin State/Province | O |  |  | X | N.A.
 | Admin Country |  |  |  | X | N.A.
 | Admin Postal Code | O |  |  | X | N.A.
 | Admin Phone Number |  |  |  | X | N.A.
 | Admin Fax Number | O |  |  | X | N.A.
 | Admin Email |  |  |  | X | N.A.
M | Tech ID |  |  | X | X | N.A.
 | Tech Organization | O |  |  | X | N.A.
 | Tech Name |  |  |  | X | N.A.
 | Tech Address |  |  |  | X | N.A.
 | Tech City |  |  |  | X | N.A.
 | Tech State/Province | O |  |  | X | N.A.
 | Tech Country |  |  |  | X | N.A.
 | Tech Postal Code | O |  |  | X | N.A.
 | Tech Phone Number |  |  |  | X | N.A.
 | Tech Fax Number | O |  |  | X | N.A.
 | Tech Email |  |  |  | X | N.A.
M | Billing ID |  |  | X | X | N.A.
 | Billing Organization | O |  |  | X | N.A.
 | Billing Name |  |  |  | X | N.A.
 | Billing Address |  |  |  | X | N.A.
 | Billing City |  |  |  | X | N.A.
 | Billing State/Province | O |  |  | X | N.A.
 | Billing Country |  |  |  | X | N.A.
 | Billing Postal Code | O |  |  | X | N.A.
 | Billing Phone Number |  |  |  | X | N.A.
 | Billing Fax Number | O |  |  | X | N.A.
 | Billing Email |  |  |  | X | N.A.
 | Created On |  |  | X | X | N.A.
 | Updated On |  |  | X | X | N.A.

Table 5: Returned data for the Registrar object from Whois queries

Data format policies

Port 43 format

·         The format of responses will follow the semi-free text format outlined below, preceded by a disclaimer.

·         The data reported will be formatted in such a way that it should be possible, by relatively simple means, to use software to extract the data.

·         Each data object shall be represented as a set of key/value pairs, where each key runs from the start of the line until the first colon (":"), and where any white space found immediately preceding the first colon shall not be counted as part of the key. All data excluding the first continuous sequence of white space following the first colon, up to but excluding the line feed, shall count as part of the value.

·         All Whois data will be in the ASCII character set, whose encoding is compatible with UTF-8, allowing an easy transition to internationalized data in line with the IETF's recommendations on i18n in Internet protocols. For fields where more than one value exists, multiple key/value pairs with the same key shall be allowed (for example, to list multiple name servers). The first key/value pair after a blank line should be considered the start of a new record and as identifying that record; it is used to group data, such as hostnames and IP addresses, or a domain name and registrant information, together. (A parsing sketch follows this list.)

·         For the fields returned from the Port 43 based Whois query, please see the “Returned Whois Results” section in this document.
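The key/value format described above lends itself to simple machine extraction. The sketch below (Python) is one possible reading of these rules: it splits a response into records at blank lines and each line into a key and a value at the first colon, keeping repeated keys. It is illustrative only and not the normative definition of the format.

def parse_whois(text):
    """Parse a port 43 response into a list of records, each a list of (key, value) pairs."""
    records, current = [], []
    for line in text.splitlines():
        if not line.strip():          # a blank line ends the current record
            if current:
                records.append(current)
                current = []
            continue
        if ":" not in line:           # disclaimer or free text, ignored here
            continue
        key, _, value = line.partition(":")
        current.append((key.rstrip(), value.lstrip()))  # repeated keys (e.g. Name Server) are kept
    if current:
        records.append(current)
    return records

example = "Domain Name: EXAMPLE.ORG\nName Server: NS1.EXAMPLE.ORG\nName Server: NS2.EXAMPLE.ORG\n"
print(parse_whois(example))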

Format of responses from the www interface

Two different interfaces on the www

Global Name Registry provides a Web-based Whois interface on its website, which can also be linked to by each ICANN-Accredited Registrar that is a party to a Registry-Registrar Agreement with Global Name Registry. The information available in the Whois database will be returned in two formats: (i) a results page on the website and (ii), where requested as described below, an e-mail report.

Returned data on the Web-based Whois service

For the fields returned from the Web based Whois query, please see the “Returned Whois Results” section in this document.

Returned data in the Extensive WHOIS

The e-mail report will be compiled instantly and sent to the e-mail address specified by the requestor. The e-mail report will be provided upon request, subject to the requirements described below. The interface for requesting an e-mail report will be provided on the same web page as the interface for the website report.

When requesting the e-mail report, the requestor must provide the following fields:

1)     Domain name on which information is sought.

2)     A declaration that the data is being requested for a lawful reason, and that the data will not be used for marketing purposes, spamming or any other improper purpose.

3)     A declaration that the reason for collecting the data is to protect legal rights and obligations. Such a reason could be, but is not limited to:

a)     Investigating and defending a possible violation of intellectual property;

b)     Seeking information for use by a law-enforcement agency;

c)      In pursuit of defamation proceedings;

d)     Information collected for use within the applicable Dispute Resolution Procedures under the UDRP or ERDRP.

4)     The name, postal address, e-mail address, voice telephone number and (where available) fax number of the requestor, together with a declaration that this information is correct.

5)     The e-mail address to which the report will be instantly issued.

6)     Data collected from or about requestors will be used only to document the request and will not be used for any commercial purpose whatsoever.

For the fields returned from the Web based Extensive Whois query, please see the “Returned Whois Results” section in this document.

C17.9. System security

Technical and physical capabilities and procedures to prevent system hacks, break-ins, data tampering, and other disruptions to operations. Physical security.

C17.9. System security. 95
Overview. 95
Firewalls. 97
Intrusion Detection System (IDS). 97
Encrypted channels. 99
Jump points and gateways. 100
Physical security of the facilities. 100
Alerts. 102
Log and Monitoring Systems. 102
Data protection Procedures. 103
Passwords and pass phrases. 103
Information control. 103
Software security. 104
Software diversity. 105
Human Security. 105
Assured Asynchronous communication with MQ. 105
Registrar security and authentication. 106
Security of offices and employees. 106

Overview

The systems and security measures described below are a reflection of current compliance by Global Name Registry with its .name ICANN Agreement, which requires a high level of security on many fronts.  In working toward a secure and stable .org registry operated by Global Name Registry, Global Name Registry would use many of the existing systems in place for .name.  If enhanced security systems are required by the .org ICANN Agreement, Global Name Registry proposes simply to build on existing systems to effect compliance with the .org ICANN Agreement.

To ensure the security of the entire installation, and protection against loss of data, Global Name Registry has deployed several security measures across the entire organization and operations, from technology to human resources. Among the most important of these measures are the following, each of which will be described further down in the document:

1.     Firewalls - State-of-the-art firewalls, Cisco PIX, are employed on the borders of the network. Different firewalls and settings are used for the different interfaces (e.g. for VPN and public traffic).

 

2.     Intrusion Detection Systems – Global Name Registry uses two simultaneous IDS: passive systems listening to all traffic and all packets on the connected network, which use an extensive database of known vulnerabilities and intelligent algorithms to detect traffic and behavior that deviates from the normal and therefore could indicate intrusion.

 

3.     Encrypted channels to protect network – All channels between the world wide sites operated by Global Name Registry for DNS, Whois, escrow servers, FTP and MX servers (operated for .name) are encrypted. Logon services are only available through these encrypted channels.

 

4.     Jump points – Log on to systems can only happen from certain “jump points” or “safe havens”, which are protected by the firewalls.

 

5.     Tight physical security - The Global Name Registry premises and hosting locations are physically secured from unauthorized access and have been established in ways to minimize external influence on its operations, even in the case of major events.

 

6.     Alert systems – Monitoring systems (other than the IDS) are in place to alert operators about events that are normal but of high importance, such as Super User logons to any system.

 

7.     Log and monitoring systems - Important system parameters are constantly monitored to make sure all operations have the necessary infrastructure to run smoothly, such as available disk space, running processes, CPU load, bandwidth, and others.

 

8.     Data Protection Procedures – Procedures for updating data and protecting vital data from faults and compromises, both internally and externally, are designed to withstand faults such as memory errors or server breakdowns. Such vital data includes authentication tokens and the authoritative database. Data protection also includes use of Non-Disclosure Agreements with all third parties who may, due to a business relationship with Global Name Registry, obtain access to proprietary information of the registry. Data protection also includes consistency checking and data validation. Finally, as a data processor under the UK Data Protection Act 1998, protection of data by Global Name Registry is subject to the protections afforded by such act.

 

9.     Software security procedures - the systems run extremely tight software, either long proven (such as IBM AIX), fully controlled by Global Name Registry (such as open source Linux), or extremely limited (such as stripped-down Linux). All software always runs with the latest security patches applied.

 

10.  Software diversity – Vital systems run different software versions and types, meaning that an attacker (that can bypass other security measures) may be able to compromise one set of software due to an unknown exploit, but not all, and some of the services will therefore keep running uncompromised.

 

11.  Human Security policies – All employees of Global Name Registry have been trained in accordance with the Code of Conduct under the .name ICANN Agreement, both of which require certain levels of data protection. In addition, all operators, developers and other employees with any level of access to secure networks and systems are subject to stringent security policies. This includes thorough background checks, password protection policies, frequent password changes, security interviews and restrictions. Only a select few people have access to the full set of system passwords.

 

12.  Assured Asynchronous communication with MQ – Global Name Registry uses state-of-the art queuing systems to ensure that communication between systems and applications is assured at all times.

 

13.  Registrar security and authentication – Registrars must authenticate themselves through a set of passwords and passphrases when communicating with Global Name Registry customer services representatives. No exceptions are made. This ensures security from social engineering and minimizes risks of unauthorized access to Registrar-specific data.

 

14.  Security of offices and employees – All entrances to the Global Name Registry offices are electronically locked and monitored by CCTV and can only be opened by presenting a valid keycard or badge to the electronic readers, or be opened by the receptionist/guard from within. All guests of Global Name Registry are required to log in and out of the premises as well as to wear valid visitor badges, which allow more restricted access.

Each of these measures is described in more detail below:

Firewalls

The PIX series is Cisco's most advanced enterprise firewall, based on a proprietary operating system that not only allows the PIX to handle very high volumes of traffic (up to 250 000 connections per second), but also greatly improves security. It uses an adaptive algorithm for stateful inspection (SI) of the source and destination address, sequence number, port number and flags of the TCP connections.

This design is superior to the traditional packet filtering approach as it eliminates the need for complex filtering rules and allows for higher-level decisions.

The network diagrams shown in other parts of this document show how the PIX firewalls are protecting the different internal networks and how they protect the external connections.

Global Name Registry uses separate PIX firewalls for the separate systems, as shown on the system diagrams in this Section’s hardware topology section.

Intrusion Detection System (IDS)

The IDS is a passive system listening to all traffic and all packets on the connected network, and uses an extensive database of known vulnerabilities and intelligent algorithms to detect traffic and behavior that deviates from the normal and therefore could mean intrusion.

Global Name Registry uses two different IDS systems which are both active at the same time, consistent with Global Name Registry’s desire to ensure software diversity and to benefit from two different software vendors’ algorithms and databases simultaneously. Both of the IDS alert the 24/7 Intrusion Response Team and System Administrators of any suspicious events.

The IDS systems are connected to, and listening to, all packets in the De-militarized Zone (DMZ) of the internal network.

One of the IDS systems monitors all traffic before it reaches the firewalls, while the other IDS system monitors all traffic on the internal network.

This deployment approach allows traffic to be seen before it reaches the firewall; in other words, one IDS sees all malicious traffic coming in, regardless of whether the firewall passes the traffic on or not, while the other IDS sees the traffic that actually passed the firewall. This is important for the detection of any stealth attack that might pass the firewall regardless of the installed policy. Examples here would be any attack that might use a weakness in the firewall software and pass traffic on even though the access should be blocked by the firewall.

The detection of all possible malicious traffic and the detection of malicious activity that was able to pass the firewall are important mainly for forensic analysis in case an intrusion took place. In such a case it is necessary to detect the single steps an attacker took to compromise the security of the site.

Each IDS has its own weaknesses and strengths, and the use of different IDS makes it even harder for a potential attacker to flood, for example, one IDS with ‘noise’. The second IDS might not be susceptible to this flooding.

Neither IDS is visible to any attacker, since they use passive sniffing technologies: the actual sniffing interface has no IP address and is therefore not visible on the network. All sniffing interfaces are also read-only and will not send any traffic back to the network they are connected to. To ensure this read-only mode, the network cables are specially configured to not allow any write operation to the network. This is done by connecting only the RX lines and leaving the TX lines disconnected in the RJ45 plug.

The IDS uses plain e-mail alerting and logs all found signatures into text files on the IDS itself. Should the connection between the IDS and the mail server fail, the Global Name Registry Operations Team would not be able to receive any alert, nor detect the malfunction. To address this, all connections are also monitored by other means (here Big Brother, a Linux-based monitoring tool, as required by the .name ICANN Agreement). In case of a failed connection, Big Brother will generate mail alerts and additionally issue audio-visual alerts on the monitoring console in the NOC. This will draw the immediate attention of the Operations Team to the issue.

One of the IDS systems communicates with a management console in the NOC. This console is responsible for collecting all information from the IDS sensors in a central place and then generates, according to the installed policies, alerts and/or log entries. The management console displays all data in real time as data gets logged. Since there is no need to remotely connect to a mail server in order to receive alerts, there is no need for extensive external monitoring of the management console. If the communication between an IDS sensor and the management station fails, an alert is generated automatically and the sensor is visibly marked as not reachable. In such a case the Operations Team must investigate the cause of this communications loss.

The communication between the sensor and the management console is encrypted so that potential intruders cannot access any information about the internal network structure. Since the standard encryption is not very strong, the sensors and management station only communicate by means of strongly encrypted VPN connections. This also ensures that no intruder is capable of forging configuration settings and sending these to the individual sensors; this might otherwise be used, for example, to disable all known signatures so that in case of an intrusion the sensor would not issue any alert or log entry. The IDS uses off-site logging to make it harder for an attacker to disable the overall logging system. To place this extra obstacle in the way of an attacker, the Hosting Sites and the Global Name Registry headquarters use different firewall technologies.

The use of different IDS also allows for statistical queries against the logged intrusion attempts. It is easy to generate a report that, for example, states how often a certain source attempted any intrusion against our networks. As mentioned earlier, these statistics are important in case a source, notwithstanding the security measures in place, was able to infiltrate the security of the site; here, we would use such statistics to determine how the attack was planned. The logging also allows for trend analysis. Over the course of any day, roughly 800 alerts are expected; if this count should change significantly, we would know that a new exploit was found, or that an intrusion has taken place.

Since we know that our sites will not need to initiate any connections to the outside world, the occurrence of any internal address as a source means that we have to deal with an intrusion. The exception here is that the DNS and SMTP systems will also initiate connections to outside servers for the services TCP/domain, UDP/domain and SMTP. Any other service will trigger an immediate alert (for example FTP).

The IDS does not comprise the entire security operation by itself. The IDS is part of an overall security infrastructure, which also comprises the firewalls, service monitoring, logging systems, and the InfoSec Policy.

Encrypted channels

All channels between the world wide sites operated by Global Name Registry for DNS, Whois, escrow servers, FTP and MX servers (which Global Name Registry operates on .name) are encrypted. Logon services are only available through these encrypted channels.

Some of the channels from Global Name Registry hosting centers to external locations are protected in different ways. The regular data transfers to the assigned escrow agent will be done over a standard FTP channel, which as a transport channel is unencrypted. It is vital to ensure that the transferred data cannot be read or tampered with, and all transferred data will therefore be encrypted and signed using the asymmetric encryption method PGP (Pretty Good Privacy). The receiving escrow agent will be the only entity with the appropriate keys and the ability to decrypt the data and take it into escrow. The PGP keys, although virtually impossible to crack, will be changed every 6 months to reduce the risk of compromised keys due to human errors.

Global Name Registry has additionally been using PGP to secure Registrar .name Landrush submissions (although it has been optional for Registrars), and a large portion of the Registrars used this actively during the .name Landrush to have a fully secure and tamper-proof transmission of files to the Global Name Registry Landrush FTP server. Global Name Registry has developed fully automated systems that can use PGP with asymmetric key-sets to (optionally) split up large files, encrypt, transmit and decrypt escrow files over FTP channels.

Jump points and gateways

Log-on to Global Name Registry systems can only happen from certain “jump points”. These jump points are recognized by the firewalls as the only allowed entry points for certain kinds of traffic and to certain network cards. Further, the jump points are reachable for logon services only from known sources: the Global Name Registry networks in the hosting sites and the Global Name Registry headquarters. The jump points themselves are highly protected and allow the Global Name Registry Security Administrator(s) to focus their efforts on the jump points when securing larger parts of the network. This leverages security efforts and greatly improves total security, since an attacker cannot use logon services of any kind freely from the Internet, which severely restricts attack possibilities.

Physical security of the facilities

Global Name Registry controls and uses several facilities all over the world. The most important include:

1.     UK main hosting centre

2.     UK Operations Control Centre

3.     UK Offices

4.     Norway failover hosting centre

5.     Norway Offices

6.     Hong Kong hosting center

Global Name Registry also currently uses on an outsourced basis two DNS locations in the US for the .name operations.

The following diagram shows the geographical spread of the current Global Name Registry locations:

Figure 23: Geographical spread of Global Name Registry operations

For .org operations, it is important to note that Global Name Registry will not be using any outsourced services other than escrow (as described in Section C17.7).

Therefore, Global Name Registry proposes to set up two additional hosting centers for .org in the United States, one on the East Coast and one on the West Coast.

The security measures in all Global Name Registry hosting centers include the following, which will also be Global Name Registry’s standard for new hosting centers:

1.     None of the Global Name Registry equipment is on the ground floor, which makes it less vulnerable to physical attacks.

2.     CCTV cameras monitored by protected and remote staff are active and covering areas both before entering the exterior of the premises (like parking space), all entrances (if there are more than one), and interior.

3.     It is not possible to park cars or vehicles in close proximity of the building.

4.     All visitors are required to wear visible badges when entering the premises.

5.     There are guards guarding the entrances and reception.

6.     There are alarm systems to alert guards of unauthorized intruders.

7.     There is a text record of all visitors to the center.

8.     All Global Name Registry racks are in locked cages, with the cage extending from the floor to the roof, where it is securely and irreversibly mounted in both floor and roof.

9.     All Global Name Registry servers are in lockable racks, with all rack doors closed when no changes to equipment are being made and no one is present in the cage.

10.  There are three different power feeds to each rack.

11.  There are triple UPS available for each server (Global Name Registry operates one set of UPS inside the racks, the hosting center operates one set of UPS for all cages, and the hosting center operates external generators with close access to fuel).

Further, each hosting center has strict environment controls to further improve equipment, data and human security (the latter of which should not be forgotten). Environmental controls in each data center include:

·         Smoke, fire and water leak detection systems

·         UPS/CPS power feeds that ensure 99.99% power availability

·         Heating, ventilation and air conditioning systems

·         FM-200 fire suppression system, which does not harm electronic equipment and is safe for humans, unlike CO2 systems, which can suffocate humans who cannot leave fast enough.

Obviously, the hosting centers also have a number of non-security related characteristics, like fast Internet connections with high level network availability, as described in Section C17.1.

Alerts

Alerts are sent to operators upon occurrences of several types of events. This includes:

1.     log on of any user on any system

2.     change of user privileges to ROOT (super-user)

Any such alert results in human analysis and follow-up, and allows System Administrators to take action against possible intruders immediately.

Log and Monitoring Systems

Events are written to logs that are sent offsite. This means that a person compromising one system will have to compromise yet another system to get access to the logs, which will describe how the attacker got in.

As required by the .name ICANN Agreement, Global Name Registry uses several monitoring systems to observe and protect operations, systems, premises and employees. Such monitoring systems include CCTV (for premises), Big Brother, MRTG, cross-network transport time monitors (for network operations), entry logs, etc.  These monitoring systems are described in more detail in each appropriate section of this Proposal, including Section C17.1 (for premises), C28 (monitoring of performance), and this section C17.9 also contains more information on monitoring of employees.

Data protection Procedures

Data protection consists of measures to ensure that vital information (outside of the SRS database, Whois and similar databases described in Section C17.3) is not lost or compromised. These measures cover passwords and pass phrases, but also other, less operationally related information, such as registry business information that would typically be protected by Non-Disclosure Agreement(s).

Passwords and pass phrases

Passwords are never stored in an electronic format, which would be subject to remote intrusion. No full set of passwords exists in one location, and the passwords cannot be compromised simultaneously by one person. Certificate-based authentication is only used where necessary and only for unprivileged accounts. This prevents administrative errors.

Information control

As required by the .name ICANN Agreement, all Global Name Registry Personnel and other employees who have a need to know Global Name Registry business undergo a formal Training Program, providing the staff members with a clear understanding of the requirement to control proprietary and confidential information and the staff members’ responsibility in that respect.  Formal training is, and will continue to be, required before any potential staff member is given an assignment or access to Global Name Registry material.  Formal refresher training will be given on an annual basis.

Upon completion of the training program, all Global Name Registry Personnel and other employees who have a need to know Global Name Registry business will be required to sign a non-disclosure agreement and a Global Name Registry Business Avoidance Certification acknowledging, among other things, his/her understanding of the requirements, and certifying that he/she will strictly comply with the provisions of the Plan.  The signed agreements will be maintained in the program files and the individual’s personnel file.  Each staff member acknowledges verification of the annual refresher training required by the .name ICANN Agreement.

The information control practices of Global Name Registry already in place are described in more detail in Appendix 31.

Data validation and consistency checking

Global Name Registry uses state-of-the art queuing systems and software between platforms and applications to ensure that messages are not lost and delivered correctly and completely.

Global Name Registry uses separate and autonomous Quality Assurance mechanisms that constantly validate and check the consistency of all external information services. These automated QA mechanisms allow Global Name Registry to update the resolution services, Whois and MX (for .name Email forwarding addresses) continuously by checking their consistency and accuracy against the SRS database on an ongoing and incremental basis (as each new or updated record is put in place).

To update information, Global Name Registry uses a set of Update Servers (also called the Update Handler) to rapidly distribute modifications to the DNS zone and Whois to all servers. The Update Handler takes messages from the SRS whenever information is changed in the database and pushes increments/modifications/deletions to all resolution services. All communication to and from the Update Handler uses the assured delivery mechanisms of MQ.

The Update Handler, however, is more than a router of update messages. It can also process the XML data it receives according to pre-defined rules or plug-ins (a minimal sketch of this model is shown below). This makes adding new services extremely rapid, since processing and messaging (or only messaging) to new service applications can be triggered upon certain events depending on the XML content of messages.
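A minimal sketch of this plug-in model is given below: update messages (XML produced by the SRS) are matched against registered rules and routed to the relevant service handlers. The message format, rule keys and handler names are hypothetical; the production Update Handler additionally relies on MQ for assured delivery.

import xml.etree.ElementTree as ET

PLUGINS = {}  # registered plug-ins: object type -> handlers to notify for that type of update

def plugin(object_type):
    """Decorator registering a handler for updates to a given object type."""
    def register(handler):
        PLUGINS.setdefault(object_type, []).append(handler)
        return handler
    return register

@plugin("DOMAIN")
def update_dns(message):
    print("push to DNS zone:", message.findtext("fqdn"))

@plugin("DOMAIN")
@plugin("CONTACT")
def update_whois(message):
    print("push to Whois servers:", message.get("id"))

def dispatch(xml_text):
    """Route one SRS update message to every plug-in registered for its object type."""
    message = ET.fromstring(xml_text)
    for handler in PLUGINS.get(message.get("type"), []):
        handler(message)

dispatch('<update type="DOMAIN" id="123"><fqdn>example.org</fqdn></update>')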

Software security

The servers run stripped-down versions of Linux, only offering one service each in addition to the remote login service through SSH. This approach makes it simpler to monitor and maintain the systems, and minimizes the damage in case of a security breach or other events resulting in system downtime.

The services use a wide range of open source software. While statistics show that open source software has more or less the same amount of security problems as proprietary software, security patches are usually available much faster, often within 24 hours. Security staff monitor security-related web sites daily for relevant security problems, and apply patches as soon as they are available.

In cases where especially problematic security problems are found and/or patches do not seem to become available within a reasonable time, the open source software model allows the development and security staff of Global Name Registry to write/create a patch for the discovered breach. Global Name Registry has written such patches on several occasions and has made such patches (and the existence of the vulnerabilities, which previously were unknown to the public) available to the Internet public.

All of the systems will be running the secure shell (ssh) service, which utilizes heavily encrypted connections and strong authentication, to provide remote administration capabilities. The ssh service has been the standard secure remote login service for several years, and has no known security problems. The minimum protocol version used for ssh is V2.

The software that is open source, like BIND and Linux, will be monitored daily for updates and patches to cover potential security holes and possible malfunctions. It will be a daily task for the system administrator to check the relevant sites for updates to the current software installation.

Software diversity

Global Name Registry secures its most vital operations by using different software versions to operate the same function simultaneously. This ensures that consistent faults in one version of the software (should there be any) will not affect the entire function, but will leave parts of the function unaffected and fully operational. For example, Global Name Registry uses two different versions of BIND, the software that powers the .name DNS. Should there be any exploit or error in one version of BIND (in spite of every single line having been read and inspected by the open source community and Global Name Registry network engineers) that surfaces and takes down that version, the remaining DNS servers not running that version of BIND are likely to remain functional and stable and ensure continuous service.

Human Security

All employees of Global Name Registry have been trained in accordance with the Registry Code of Conduct under the .name ICANN Agreement, both of which require certain levels of data protection. In addition, all operators, developers and people with any level of access to secure networks and systems are subject to stringent security policies. This includes thorough background checks, password protection policies, password aging and changes, security interviews and restrictions. Only a select few people have access to the full set of system passwords.

Further, the physical security of each Global Name Registry employee is taken extremely seriously by the company. Measures to increase such security include regular fire-training, requirements that 24x7 system operations always contain two or more persons, night guards on premises, remote alert (to security forces and police), alarms, etc.

Assured Asynchronous communication with MQ

MQSeries is a communications system that provides assured, asynchronous, once-only delivery of data across a broad range of hardware and software platforms. These characteristics make MQSeries the ideal infrastructure for application-to-application communication, and make it an appropriate solution whether the applications run on the same machine or on different machines that are separated by one or more networks.

MQSeries supports all the important communication protocols and even provides routes between networks that use different protocols. MQSeries bridges and gateway products allow easy access (with little or no programming) to many existing systems and application environments.

The assured delivery capability reflects the many functions built in to MQSeries to ensure that data is not lost because of failures in the underlying system or network infrastructure. Assured delivery enables MQSeries to form the backbone of critical communication systems and to be entrusted with delivering high-value data.

The asynchronous processing support in MQSeries means that the exchange of data between the sending and receiving applications is time independent. This allows the sending and receiving applications to be decoupled so that the sender can continue processing, without having to wait for the receiver to acknowledge that it has received the data. In fact, the target application does not even have to be running when the data is sent. Likewise, the entire network path between the sender and receiver may not need to be available when the data is in transit.

MQ ensures that communication between Global Name Registry systems is assured at all times and avoids duplicated messages and corruption, which is vital for a consistent and error-free operation of the worldwide DNS system, Whois service, SRS service and the other services Global Name Registry performs for the global public.

Registrar security and authentication

Registrars must authenticate themselves through a set of passwords and passphrases when communicating with Global Name Registry customer services representatives. No exceptions are made. This ensures security from social engineering and minimizes risks of unauthorized access to Registrar-specific data.

Security of offices and employees

All entrances to the Global Name Registry offices are electronically locked and monitored by CCTV and can only be opened by presenting a valid keycard or badge to the electronic readers, or be opened by the receptionist/guard from within.

As required by the .name ICANN Agreement, only assigned personnel employed or contracted by Global Name Registry will have regular badge access to the premises and any other person will be treated as a visitor to the facility and will gain access only through established visitor sign-in and identification badge procedures.  Global Name Registry maintains an entry/exit log for all persons who enter the facility.

Global Name Registry provides access to all Registry customers through the mechanisms described above.

The office security mechanisms and physical access policies required by the .name ICANN Agreement have been implemented and are also generally described in more detail in Appendix 31 to this Proposal.

C17.10. Peak capacities

Technical capability for handling a larger-than-projected demand for registration. Effects on load on servers, databases, back-up systems, support systems, escrow systems, maintenance, personnel

C17.10. Peak capacities
Introduction
Projected Registry volumes
Handling “Add storms”
Overview of elements influencing Global Name Registry capacity
The current capacity of Global Name Registry Registry Systems
Whois capacity
DNS capacity
Backup
SRS capacity
WWW services
Mechanisms used by Global Name Registry to handle peaks and achieve peak capacity
General Scalability design
Load balancing
Queuing and batch allocation mechanisms
Use of solid state storage
Use of ESS
Burstable bandwidth

Introduction

This chapter describes the scalability and capacity of Global Name Registry in the following areas. Higher-than-projected demand for services run by the Registry is taken into consideration, including volumes of modifications, checks, requests to web services, more frequent escrow demands, Whois, DNS, zone file access, etc.

In order to define higher-than-projected demand on Registry services, this chapter also includes a projection of what those volumes may be on .org, as a basis for the discussion that follows.

Projected Registry volumes

The following table is a summary of Verisign service volumes as reported on http://www.gtldregistries.org

Figure 24: Operational query volumes on entire Verisign Registry (com, net, org)

Assuming that the .org operations account for their pro-rata percentage of transactions, the numbers for .org would be the following:

Figure 25: Operational transaction volumes pro-rated for .org

Handling “Add storms”

Note that the number of reported “Add” transactions in the database (about 66 million per month) is far higher than the number of “Delete” transactions (about 0.28 million). If the difference between Add and Delete transactions were committed registrations, the zone would grow by roughly 66 million names per month. Since the zone size is quite static, and even slowly decreasing, the vast majority of Add transactions cannot be adding any objects to the zone. It is likely that Registrars are using “Add” directly, instead of first checking whether the desired object is available, and that the majority of Add transactions therefore fail because the object is already registered. This leads to an enormous volume of Add queries that are denied because the name is not available.

The following extract from the table above shows the numbers characteristic of the Add storm:

Figure 26: Evidence of Add Storms in the Verisign System, March 02 (numbers pro-rated for .org from total Verisign numbers, as found on gtldregistries.net)

In the Global Name Registry system, such “Add storms” would be filtered out and handled by a horizontally scalable database layer, instead of by the main database. Any attempted Add where the object already exists would not reach the main database, but would instead be filtered out and answered by the layered business logic that processes the non-protocol-specific issues. This is done according to the following procedure:

Any check command, whether issued on its own or as part of an Add, will first look up in the Whois layer whether the object exists. If the object exists, the Add is denied. If the object does not exist in the Whois layer, the main database is checked. The Whois layer can be scaled horizontally, as described further below in this section, to serve hundreds or even thousands of requests per second. This greatly relieves the main database of the load experienced during Add storms.
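A minimal sketch of this filtering logic is shown below (illustrative only; the names and in-memory sets stand in for the Whois layer and the authoritative SRS database, and are not the actual Global Name Registry code):

# Illustrative sketch of the Add-storm filter described above.
WHOIS_LAYER = {"taken.org"}                           # stands in for the load balanced Whois layer
MAIN_DATABASE = {"taken.org", "just-registered.org"}  # stands in for the authoritative SRS database

def handle_add(name):
    # Most colliding Adds are rejected here, without touching the main database.
    if name in WHOIS_LAYER:
        return "denied: object already registered"
    # Only names unknown to the (slightly lagging) Whois layer hit the main database.
    if name in MAIN_DATABASE:
        return "denied: object already registered"
    return "accepted: forwarded to the SRS database for registration"

print(handle_add("taken.org"))            # filtered by the Whois layer
print(handle_add("just-registered.org"))  # caught by the authoritative check
print(handle_add("available.org"))        # reaches the registration path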

More information about how such “Add storms” are handled by redirecting check commands to the Whois layer can be found in the database section, C17.3.

Due to this efficient and robust structure, Global Name Registry’s peak performance during Add storms can be increased to meet even higher demands than what VeriSign is experiencing currently or may experience in the future.

Overview of elements influencing Global Name Registry capacity

In designing its systems, Global Name Registry took into consideration that the Registry and its services (Whois, DNS, MX, zone access, etc) would grow significantly during long periods and reach high volumes. The performance specification proposed is described in more detail in Section C28.

The Registry systems designed and built by Global Name Registry were designed to handle up to 25 million domain names. Global Name Registry has also planned the scalability to more than 50 million registered names, which under the system design can be easily accommodated with relatively simple measures like adding memory to DNS servers, moving Whois servers to solid state storage and clustering the main database.

The mechanisms that increase peak capacity, provide a predictable peak capacity and ensure stable operations even in the case of excess load are described and analyzed in the following, including the capacities of the following elements:

1.     Whois

2.     DNS

3.     Backup

4.     SRS systems

5.     Database systems

6.     EPP servers and RRP servers

7.     WWW services

The mechanisms used by Global Name Registry to handle peaks and achieve peak capacity that will be described include the following:

8.     General Scalability

9.     Load Balancing

10.   Queuing mechanisms

11.   Round Robin mechanisms

12.   Storage capacity

13.   Hot spare servers

14.   Burstable bandwidth

15.   Personnel

16.   Maintenance systems

The current capacity of Global Name Registry Registry Systems

Whois capacity

The Global Name Registry Whois is dimensioned to handle tens of millions of domain names, hundreds of updates (inserts and modifications) per second, simultaneous updates/queries, and asynchronous queries/updates. The main Global Name Registry Whois service is located in the main hosting facilities in London, but Global Name Registry also operates a fully functional Whois from the hot standby failover hosting location in Norway.

The current Whois capacity of Global Name Registry is described in the following stress test results. The quoted results describe the capacity in terms of queries and updates (inserts or modifications) on the production system when subjected to significant stress loads generated from a world wide network of servers querying the Whois while the Whois was being updated from the SRS. Capacity was tested for all objects that will be present in the .org Whois service: domains, contacts and nameservers.

The following numbers are per Whois server. The capacity for queries across all Whois servers scales linearly with the number of servers deployed, due to the load balancing methods used. However, the capacity for updates to the Whois information does not scale linearly, since updates are applied on all Whois servers simultaneously. The capacity for updates therefore cannot be increased by adding servers, but Global Name Registry has plans in place for increasing the insert/update capacity of the Whois servers. This is described further below.

Figure 27: Whois peak performance when returning queries, returning negative results, or inserting new entries

The tests were performed on the following hardware, identical to the hardware currently in operation on the Global Name Registry UK main site.

o        1 Gb PC133 SDRAM memory

o        Global Name Registry proprietary Whois Database software

The numbers reflect queries for domain names in a thick registry, i.e. with associated contacts.

DNS capacity

The peak capacity of the Global Name Registry network is designed and built to serve billions of queries per month, with less than 300ms latency, measured as described in Section C28.

The tested capacity on the Global Name Registry DNS network is plotted in four different graphs below:

1.     number of queries per second per single DNS server

2.     number of queries per second per DNS site

3.     number of queries per second across the entire Global Name Registry DNS network

4.     zone loading times and memory usage per server

All figures are real and averaged over at least three samples, on a system with the following specifications:

o        4 x 1 Gb PC133 SDRAM memory

o        BIND9 w/threading enabled

One Global Name Registry DNS server can at peak capacity perform 8000 queries per second, as shown below:

Figure 28: Peak performance per second of single DNS server

Each DNS site contains multiple, load balanced DNS servers. The architecture of each site is described in more detail in Section C17.1. The total capacity per DNS site is shown below, as well as the total peak capacity across the entire Global Name Registry DNS network of 5 independent sites on different backbones and continents.

Figure 29: Peak performance per second on single DNS site, and across Global Name Registry DNS network

The numbers for the graphs above are listed in the table below:

Zone size | Queries per second per server (with logging) | Queries per second per DNS site | Queries per second on entire network
100,000 | 7,900 | 31,600 | 189,600
200,000 | 7,900 | 31,600 | 189,600
500,000 | 7,800 | 31,200 | 187,200
1,000,000 | 7,900 | 31,600 | 189,600
2,000,000 | 7,800 | 31,200 | 187,200
3,000,000 | 7,600 | 30,400 | 182,400
5,000,000 | 7,700 | 30,800 | 184,800
10,000,000 | 7,500 | 30,000 | 180,000
20,000,000 | 6,900 | 27,600 | 165,600
30,000,000 | 4,600 | 18,400 | 110,400

* ALL QUERIES WERE DONE WITH SIMULTANEOUS UPDATES TO THE ZONE 

Note that the server visibly begins swapping at about the 25 million name level. This is due to the entire zone being larger than the available memory, and the swapping can be removed simply by adding more memory to the DNS servers.

It is also worthwhile to note that at the peak running rate, the DNS queries generate about 400MB of log per server every 15 minutes, or 1.6GB/hour. In the case of longer peak periods, the log volume will be handled by the Global Name Registry Operations team, which will compress, remove and store large logs.

The figures do not include any DNS-SEC data. As described in Section C12, Global Name Registry is fully compliant with the DNS-SEC standards. If implemented, DNS-SEC will result in a higher consumption of storage space, which Global Name Registry has tested and found to be about 3x the storage needed without DNS-SEC.

The following graph shows the memory usage and disk usage of the DNS server during peak load

Figure 30: Zone loading time (full loading, not incremental), memory usage and disk usage

Zone loading time scales linearly, at about 0.07ms per entry. The system exhausts the available memory of 4GB when the zone size reaches about 24 million names.

Disk space consumed is also linear, approximately following the formula:

space (bytes) = names * 47.2

Memory usage converges to about 3.5x the size of the zone file on disk. Internal structures and other overhead make this figure variable.
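Applying the measured rules of thumb above, the resource needs for a given zone size can be estimated as in the following illustrative sketch:

# A minimal sketch applying the measured rules of thumb above to estimate
# resource needs for a given zone size (illustrative only).
BYTES_PER_NAME = 47.2       # measured disk consumption per entry
MEMORY_FACTOR = 3.5         # memory converges to ~3.5x the on-disk zone size
LOAD_MS_PER_NAME = 0.07     # full (non-incremental) zone loading time per entry

def estimate(names):
    disk_bytes = names * BYTES_PER_NAME
    return {
        "disk_gb": disk_bytes / 1e9,
        "memory_gb": disk_bytes * MEMORY_FACTOR / 1e9,
        "load_minutes": names * LOAD_MS_PER_NAME / 1000 / 60,
    }

print(estimate(20_000_000))

For a 20 million name zone this gives roughly 0.9GB on disk, about 3.3GB of memory and a full (non-incremental) loading time of roughly 23 minutes, which is consistent with the observation that the 4GB servers begin swapping at around 24 million names.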

Backup

The IBM tape backup solution is a high-performance system with a 2.3TB tape library and regularly backs up the entire Registry dataset. Global Name Registry does incremental backups every day and full backups every week. (Backup procedures are described in more detail in Section C17.7.)

The data backed up includes all data which is not part of the base build or application build. All data changed or added by applications is backed up. The following illustrates this backup policy:

Figure 31: All application data is backed up

The backed up data is therefore all the data necessary to reconstitute the Registry at one single point in time. All the non-backed up data are standard server builds and software that can be reinstalled from ESS or CDs by the Operations team.

The tape robot can back up a data volume filling its entire tape library of 2.3TB in less than 24 hours. With its current operations of about 120,000 domain names and .name email addresses, the full backup volume is around 200GB and is completed in less than 2 hours. Global Name Registry believes that the current backup solution is sufficient to back up the entire .org database in addition to the .name database (see Section C17.3 for the database sizing discussion), but should the backup volume grow to a level where full backups take more than 16 hours, Global Name Registry will add additional backup units to share the backup volume evenly between units. This can be easily handled by the Tivoli Backup Manager software.
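For illustration, the backup throughput implied by the figures above (a 2.3TB library filled in less than 24 hours, i.e. roughly 96GB per hour) can be used to estimate when the stated 16-hour threshold for adding backup units would be reached:

# Illustrative estimate only, derived from the throughput figures quoted above.
TAPE_THROUGHPUT_GB_PER_HOUR = 2300 / 24   # 2.3 TB in under 24 hours, roughly 96 GB/hour

def full_backup_hours(volume_gb):
    return volume_gb / TAPE_THROUGHPUT_GB_PER_HOUR

print(full_backup_hours(200))    # current .name full backup volume: roughly 2 hours
print(full_backup_hours(1500))   # a volume of about 1.5 TB approaches the 16-hour threshold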

SRS capacity

The SRS access points, database and business logic form a system consisting of several separately scalable components. Some of the components are illustrated in the diagram below:

 

Figure 32: The Registrar interface to EPP/RRP servers

Figure 33: Core SRS overview

The SRS capacity is determined by the individual capacity of its three main elements: 1) the EPP/RRP access point for Registrars (also described in Section C17.2); 2) the Registry Policy Handler in the core SRS; and 3) the Core SRS database, which consists of several independent databases and servers.

The separation of the SRS into these three layers gives the system extreme scalability and peak performance. The following diagram is an overview of how these three layers interact:

Figure 34: The layered structure of the EPP server, business logic and database logic

The EPP/RRP access point scales linearly due to the load balancing on the front end towards the Registrar. Thanks to centralized storage (these servers are diskless) there is no practical limit to the number of EPP/RRP servers that can be added to the system to provide more connections and more processing capacity for the protocols supported. Further, all protocol-specific processing and logic resides on these front ends. This takes a lot of load off the business logic and database(s); Global Name Registry’s EPP testing in particular shows that about 50% of the total load in processing an EPP transaction comes from the XML parsing of the protocol. By putting this parsing on the linearly scalable front ends, Global Name Registry can offload the underlying system and scale in an almost unlimited fashion in terms of protocol processing power.
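As an illustration of the work these front ends take off the core, the following Python sketch parses an EPP domain check command into a protocol-neutral form that could be handed to the Registry Policy Handler. The schema URIs follow the IETF EPP specifications; the exact schema version and the protocol-neutral command format shown are assumptions, not the actual Global Name Registry interface:

import xml.etree.ElementTree as ET

# Namespace URIs follow the IETF EPP schemas; shown for illustration only.
EPP_NS = "urn:ietf:params:xml:ns:epp-1.0"
DOMAIN_NS = "urn:ietf:params:xml:ns:domain-1.0"

SAMPLE_CHECK = """<?xml version="1.0" encoding="UTF-8"?>
<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
  <command>
    <check>
      <domain:check xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
        <domain:name>example.org</domain:name>
      </domain:check>
    </check>
    <clTRID>ABC-12345</clTRID>
  </command>
</epp>"""

def parse_check(raw_xml):
    """Turn a protocol-specific EPP check into a protocol-neutral command."""
    root = ET.fromstring(raw_xml)
    names = [e.text for e in root.iter(f"{{{DOMAIN_NS}}}name")]
    cltrid = root.findtext(f".//{{{EPP_NS}}}clTRID")
    return {"command": "check", "objects": names, "client_trid": cltrid}

print(parse_check(SAMPLE_CHECK))
# {'command': 'check', 'objects': ['example.org'], 'client_trid': 'ABC-12345'}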

The Registry Policy Handler contains all the non-protocol-specific rules and processes. Its job is to translate the non-protocol-specific commands into appropriate database commands according to the database structure and business rules. The Registry Policy Handler scales linearly and more Registry Policy Handlers can be added just like EPP servers, since the Policy Handler does not lock any particular information internally for the transactions. Global Name Registry has the option of implementing a load-balancing policy in the EPP servers, which would allow them to use a layer of multiple Registry Policy Handlers. To further help scalability, the Policy Handler performs a significant amount of consistency and rules checking involving read commands in the database, which offloads the database and allows more processing to be done in a linearly scalable layer.

For more information on the business rules this layer processes, please consult Section C17.3 (database) and C22 (protocol specific to non-protocol specific mapping).

The authoritative database is by nature not linearly scalable. Although it would be possible to split the database onto several separate servers and put rules in the Policy Handler to access the different database servers depending on the objects (e.g. all objects starting with a – d are in database_one), Global Name Registry has not seen the need for this complexity. Given that so much of the processing happens before a command reaches the database, and given that the database software and hardware are both state of the art and optimized for high capacity, one logical authoritative database can easily handle a Registry with up to tens of millions of registrations and billions of transactions per month. The following are peak capacities of the SRS database:
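Should such a prefix-based split ever become necessary, it could be expressed as a simple routing rule in the Policy Handler. The following sketch is purely illustrative of the idea mentioned above; the shard names and boundaries are hypothetical, not part of the deployed system:

# Hypothetical illustration of the prefix-based split mentioned above.
SHARDS = [
    ("a", "d", "database_one"),
    ("e", "m", "database_two"),
    ("n", "z", "database_three"),
]

def shard_for(object_name):
    first = object_name.lower()[0]
    for low, high, shard in SHARDS:
        if low <= first <= high:
            return shard
    return "database_default"   # digits, other characters, etc.

print(shard_for("example.org"))   # -> database_two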


SRS Database

Hardware:
·         IBM B80 PowerPC 64-bit high-end transaction server
·         4-processor RISC CPU at 450 MHz
·         64-bit architecture
·         Fitted with 8 Gb of memory, extensible up to 16 Gb
·         Connected to an Enterprise Storage Solution (ESS) from IBM, with 1.5 Tb of hot-backed-up RAID1 storage
·         Triple, redundant hot-swappable power supplies
·         Dual-attach 1000 BaseTX/FX Ethernet Adapter
·         Event-management software for remote management

Software:
·         Oracle 8i
·         IBM AIX operating system

Capacity of domain registrations: 20 million
Database throughput: 1500 transactions per second
Storage available: up to total ESS volume of 22 Tb
Database scalability strategy:
·         Scales from 8 (current) to 32 processors
·         Scales from 8 (current) to 16 Gb memory
·         Clustering using Oracle clustering technology

 

 


Reporting database

Hardware:
·         IBM B80 PowerPC 64-bit high-end transaction server
·         2-processor RISC CPU at 450 MHz
·         64-bit architecture
·         Fitted with 1 Gb of memory, extensible up to 16 Gb
·         Connected to an Enterprise Storage Solution (ESS) from IBM, with 1.5 Tb of hot-backed-up RAID1 storage
·         Triple, redundant hot-swappable power supplies
·         Dual-attach 1000 BaseTX/FX Ethernet Adapter
·         Event-management software for remote management

Software:
·         Oracle 8i
·         IBM AIX operating system

Capacity of domain registrations: 20 million
Database throughput: 1000 transactions per second
Storage available: up to total ESS volume of 22 Tb
Database scalability strategy:
·         Scales from 2 (current) to 32 processors
·         Scales from 1 (current) to 16 Gb memory
·         Clustering using Oracle clustering technology

 


QA database

Hardware:
·         IBM B80 PowerPC 64-bit high-end transaction server
·         2-processor RISC CPU at 450 MHz
·         64-bit architecture
·         Fitted with 1 Gb of memory, extensible up to 16 Gb
·         Dual 37 Gb internal SCSI RAID-controlled hard drives
·         Triple, redundant hot-swappable power supplies
·         Dual-attach 1000 BaseTX/FX Ethernet Adapter
·         Event-management software for remote management

Software:
·         Oracle 8i
·         IBM AIX operating system

Capacity of domain registrations: 20 million
Database throughput: 1000 transactions per second
Storage available: up to total ESS volume of 22 Tb
Database scalability strategy:
·         Scales from 2 (current) to 32 processors
·         Scales from 1 (current) to 16 Gb memory
·         Clustering using Oracle clustering technology

 

For more information on the database, please consult Section C17.3.

WWW services

The Global Name Registry WWW services consist mainly of the Global Name Registry websites, including the public site, the corporate site, the Registrar site (access for Registrars only) and the family page site(s) (for domain names of the type www.smith.name or any other family name on .name).

The WWW servers powering the Global Name Registry websites are powerful dual processor servers with fast I/O due to their connection to the ESS. Global Name Registry currently has 9 such servers deployed and operational uniquely for serving www pages.

They are part of a load balanced layer and new WWW servers can be added at any time to increase capacity. Global Name Registry has estimated that the 9 currently deployed WWW servers are able to serve more than 100 million pageviews per month in their current roles (this includes the Registrar reporting interface where billing reports can be fetched).

Global Name Registry believes that this is more than sufficient to serve the Registry websites both for .name and for .org, including the traffic to the voting mechanisms and community sites proposed for .org.

Mechanisms used by Global Name Registry to handle peaks and achieve peak capacity

General Scalability design

Each of the components of the Registry front ends and back ends is scalable. This includes the Whois, DNS, FTP and WWW servers. New servers can be added at any point and will get a full software install from the ESS, and access to the relevant data on the ESS once installation of the software image is complete. This means that a new DNS, Whois, FTP or WWW server can be installed and operational within minutes.

The Global Name Registry software powering the Registry is structured in independent layers with messaging systems between them allowing separate scalability of each layer. Such messaging systems include the custom EPP-superset interface between the EPP server layer and the business logic layer, or the IBM MQ messaging system between e.g. the Update Handler and the DNS servers.

The databases are not horizontally scalable in the same way. Having three database servers with separate roles ensures that no single server is overloaded. The three-database server structure is designed to handle upwards of 40 domain name registrations/modifications per second, while simultaneously running reports and scrubbing data.

Load balancing

The load balancers operated by Global Name Registry allow traffic to be distributed equally over several hardware components. Diagrams of the network structure and load balancing servers can be found in Section C17.1 of this .org Proposal.

Load balancing makes it possible to put a virtually unlimited number of servers (limited obviously by factors like rack space and external power supply) into operation for tasks where load balancing is an option. As described in this chapter, Global Name Registry has designed its systems and software so that as many tasks as possible can be performed by load balanced configurations.

Queuing and batch allocation mechanisms

The use of the MQ messaging system between components in the Registry guarantees delivery of messages between systems. It also allows messages to be queued at any point in the system flow in the event of extremely high loads, which ensures that system components are not overloaded by load spikes. Queuing softens such spikes and ensures that all requests are fulfilled and none are dropped.

Global Name Registry has also developed and operated a round-robin mechanism for fair allocation of requests during extremely high load periods. This method, called the Landrush procedure, was in operation during the first part of the startup period of the .name TLD. Global Name Registry queued up requests from each Registrar, and requests were then processed in a random fashion until all queues were depleted. This allocation ensures a fair distribution of requests amongst Registrars during extremely high volume periods.

This same mechanism will be available for the .org TLD should the system need to allocate requests randomly during extreme volume periods, although as an established TLD, there will be no so-called Landrush opening period for the Global Name Registry operations of .org.
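The core idea of the Landrush allocation can be sketched as follows (illustrative only; the registrar names and requests are made up):

# A minimal sketch of the fair allocation idea described above: requests are
# queued per Registrar and drained in random order until all queues are empty,
# so no single Registrar can crowd out the others.
import random
from collections import deque

def landrush_allocate(queues):
    processed = []
    active = [r for r, q in queues.items() if q]
    while active:
        registrar = random.choice(active)            # pick a non-empty queue at random
        processed.append(queues[registrar].popleft())
        active = [r for r, q in queues.items() if q]
    return processed

queues = {
    "registrar_a": deque(["add foo.org", "add bar.org"]),
    "registrar_b": deque(["add baz.org"]),
}
print(landrush_allocate(queues))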

Use of solid state storage

Global Name Registry has tested and experimented with alternative storage solutions for extremely I/O intensive processes and applications. The Whois service is one such service which in the Global Name Registry implementation relies on I/O access to fulfill a Whois request.

As shown in the Whois performance testing graphs, the performance of the Whois is limited by how many entries can be inserted/updated per second, since updates have to be applied simultaneously on all Whois servers. The update performance is limited only by the disk seek and write times. Further, the performance of queries in general depends directly on disk seek times.

Using solid state storage is mainly a way to increase the number of inserts per second, since the number of lookups (single server capacity is 8.6M per day, or about 20 million if no results are found) can be increased tenfold by adding ten loadbalanced servers.

If the performance of roughly 8.6M updates per day should not be sufficient, there are ways to improve this dramatically.

Global Name Registry has an ESS solution which greatly improves disk access speeds compared to ordinary internal hard drives. However, Global Name Registry planned ahead and tested a solid state storage system for situations when the bottleneck for a function is the I/O throughput. The results achieved for Whois were astonishing.

The Platypus is a silicon RAM based disk with no moving parts, which to the operating system operates like a hard drive. Internally, the Platypus consists of up to 128GB of RAM, structured as a RAID mirrored filesystem. The unit has an internal UPS which, in the case of power failure, ensures that the RAM contents can be written to two internal, mirrored hard drives for safe storage until the external power comes back on (and all Global Name Registry hosting centers have 3 hours of UPS and generators).

More information about the solid-state solution used by Global Name Registry can be found in Appendix 25 to this proposal.

Traditionally, milliseconds (ms, 1/1,000 s) are used when measuring HDD access speed. This represents the time taken for the storage system to locate the start address of the data block(s) requested. Solid state systems measure access times in microseconds (µs, 1/1,000,000 s). Further, HDDs are limited in their ability to stream data by both the speed of the spinning magnetic platter and the need of the HDD controller to verify that each write has been accepted. Silicon (DRAM) based storage can stream data almost instantaneously.

 

The following figure illustrates what was obtained when using the solid state storage:

Figure 35: Illustrated Whois performance on solid state storage

(Note that testing was only performed in the areas around 0 zone size and 14 million zone size; the remaining data is interpolated. However, the testing showed that performance did not decrease with the size of the zone. Rather, the performance was entirely CPU bound, not disk bound.)

Should the Whois load, or load on any other function, be hindered by the speed of mechanical hard drives, Global Name Registry intends to deploy a solid state storage solution. Global Name Registry is confident that the Global Name Registry software on a solid state storage solution can remove any bottleneck from I/O intensive Registry services should volumes be higher than expected.

Use of ESS

The ESS (Enterprise Storage System) from IBM is a centralized storage solution that connects to each server in the system through SCSI or fiber connections, allowing servers to be without any internal disk, which greatly increases scalability through the capability of adding new servers at any time. Installation packages and applications are all stored on the ESS, so each new server can get its software from the ESS upon its first connection. The ESS also eases other operations like restore from backup, since servers are diskless.

The ESS uses redundant hardware and RAID 5 disk arrays to provide high availability. The ESS has all hardware functions mirrored and can function if any single set of hardware fails.

The Global Name Registry ESS is currently fitted with 800Gb of storage, and disks can be added to easily provide a total of 11TB storage with one ESS.

Both for .name and .org, even significant volumes of Registry operations will be sufficiently served with the storage capacity of the ESS.

Burstable bandwidth

Global Name Registry has burstable bandwidth available in each hosting center. The data transfer capacity can also be increased should Global Name Registry need more bandwidth in any location.

 

Global Name Registry currently has the following bandwidth available in its three hosting centers:

 

UK: 2x100 Mbit
Norway: 34 Mbit
Hong Kong: 10 Mbit

 

Further, as described in more detail in the technical executive summary to this .org Proposal, Global Name Registry will add two more hosting centers, mainly for DNS, to its network for the operations of .org. This will result in the following bandwidth capacity:

 

UK: 2x100 Mbit
Norway: 34 Mbit
Hong Kong: 10 Mbit
USA 1: 2 Mbit
USA 2: 2 Mbit

 

 

 

C17.11. Technical and other support.

Support for registrars and for Internet users and registrants. Describe technical help systems, personnel accessibility, web-based, telephone and other support, support services to be offered, time availability of support, and language-availability of support.

C17.11. Technical and other support
Global Name Registry’s Dedication to Customer Service
Technical help systems
Operational Procedures and Practices
Accreditation and initiation of Customer Support
Registrar Notification Procedure
Registration Requirements
Registrar Tool Kit
Security and availability of Customer support
Caring for Security
Global Name Registry supports a vast number of languages
Availability and roles
Support Priority levels
Escalation paths
Categories of Customer support Processes

Global Name Registry’s Dedication to Customer Service

Global Name Registry has a dedicated Customer Support team and a dedicated Account Management team (hereafter collectively referred to as “Customer Service”). These two teams exist solely to serve the Registrar community and assist each individual Registrar in their daily operations for .name, as well as guiding new Registrars through the accreditation process and connecting to the Registry. Additionally, these teams are constantly available to existing Registrars when changes or upgrades to the Global Name Registry systems are being deployed in the OT&E environment and Registrars require assistance to understand and implement any necessary changes on their side.

Throughout its entire business operations, Global Name Registry considers its customer support central to its success in the marketplace and constantly strives to provide the utmost service to its network of Registrars all over the world. Specifically, Customer Service aims to fulfill the following goals:

1.     Global Name Registry Customer Service shall act as the communication hub between the Registrars, Registry & other departments. Each Registrar shall have a named, personal contact point in Global Name Registry and always receive a personalized service. Our personnel shall be trained to deal with or forward the Registrar’s queries to the appropriate person(s) in or outside Global Name Registry to resolve the matter as promptly as possible.

2.     Global Name Registry Customer Service shall aim to provide each Registrar with the highest quality of service within the Registry Community. Global Name Registry shall be known for its responsiveness to Registrar issues and its dedication to results and resolutions. Through excellence in this field, Global Name Registry aims to be known in the Registrar Community as the most responsive, most innovative and most customer friendly Registry.

3.     Global Name Registry Customer Service shall aim to empower and motivate each Registrar to provide the same high standard of service to their customers. Global Name Registry and the Registrar live and die by the same person: the Subscriber. The Subscriber’s satisfaction is the ultimate goal for both of our Companies and has to be achieved through all available means. Although the Subscriber’s relationship is with the Registrar, Global Name Registry aims to be of maximum assistance to the Registrar in acquiring, motivating and retaining the Subscriber. Through marketing assistance, marketing material, research and knowledge about the Subscriber, Global Name Registry will aid the Registrar to fulfill a positive and excellent Subscriber Experience.

Global Name Registry uses technical help systems to distribute support knowledge and improve customer satisfaction. The Global Name Registry website supports Customer Service by providing FAQs, trouble ticketing systems, etc. that help customers by making Global Name Registry Customer Support even more available and accessible.

Different requests naturally have different priorities. Global Name Registry receives a high number of requests and questions and has to prioritize them according to their severity and importance to the stable and efficient operation of the Registry.

Also, requests may “escalate” through the system. Escalation means that a request cannot be handled by its current owner and will be passed on to a specialist/expert in the area, for example with spam enquiries, or to a manager in the case of more serious or unprecedented issues.

The technical help systems, priority level definitions and process flows, and escalation procedures are described in the following:

Technical help systems

Global Name Registry has the following technical help system to aid in its Customer Service experience:

1.     Email support with a ticketing system for Registrars and for Internet users and registrants.

 

2.     Telephone support with a ticketing system predominantly for Registrars but also some support for Internet users and registrants.

 

3.     FAQs on the website. These can be found on www.gnr.com and on www.gnr.com/registrars

 

4.     Web form for consumer queries. This can be found on www.gnr.com

 

5.     Escalation procedures. The email support system Ejournal supports escalation to other parts of the CSR team and Global Name Registry management.

Operational Procedures and Practices

Accreditation and initiation of Customer Support

Upon a Registrar's request to become an authorized registrar for the Registry TLD, Global Name Registry will assign the Registrar a dedicated Account Manager. The Account Manager will be the primary contact point for the Registrar, to whom queries of any nature may be directed. Registrars will conduct the majority of interactions with Global Name Registry through its Customer Service Department and the dedicated Account Manager, each of which will be responsible for escalating all queries according to the Escalation Procedures described in this Chapter.

Registry Operator's procedure for setting up a new Registrar account will consist of three phases: initiation, completion and approval. The tasks within each phase must be completed satisfactorily in order for the Registrar to proceed to the next phase. Prior to the initiation phase, upon initial contact with a Registrar interested in working with the Registry Operator, the assigned Account Manager will send out an information package that will include all necessary documentation, contracts, forms, and software required by the Registrar to complete all necessary phases.

The web interface for the knowledge base will become available after signature of the Registry Agreement prior to the launch of the Registry TLD.

Registrar Notification Procedure

Global Name Registry is careful to notify registrars of events that affect the performance of the Registry System. These events may include, but are not limited to, planned downtime; unplanned downtime due to force majeure (natural disaster), security breaches, Denial of Service (DoS) attacks or other such malicious events; and other system failure events.

Global Name Registry follows strict guidelines for notifying registrars of any planned downtime. At a minimum, all registrars will be made aware of any scheduled downtime no less than 10 working days prior to the event.

Further, Global Name Registry will use all reasonable efforts to alert all affected registrars of any unplanned downtime immediately.  

Registration Requirements

In addition to the relevant provisions on Registration Requirements that may exist in the Registry-Registrar Agreement, Registrar may be required in its registration agreement with each Registered Name Holder to require such Registered Name Holder to:

1.     represent and warrant that the data provided in the registration application is true, correct, up to date and complete;

2.     represent and warrant that the registrant will use best efforts at all times during the term of his or her registration to keep the information provided above up to date;

3.     represent and warrant that the registration satisfies any applicable Eligibility Requirements;

4.     agree to be subject to the Uniform Domain Name Dispute Resolution Policy (the "UDRP")

Global Name Registry will also as a part of its Customer Service advise Registrars about other terms and conditions the Registrar should or may choose to include in the Registration agreement with the Registered Name Holder.

Registrar Tool Kit

Global Name Registry provides software to registrars enabling them to write client applications that interface with the Registry System interface. This software consists of, among other things, a Java API with sample code and a C/C++ API with sample code.

Examples of XML message assembly will be shown in the sample code in addition to code illustrating how static XML requests can be sent to Registry Operator. The Registrar Tool Kit will give the Registrar a reference implementation that conforms to the RRP.

The Registrar Tool Kit will also contain documentation explaining the protocol specification in detail. Information included in this documentation will contain the format of request messages and possible response messages, as well as command sequences and message assembly rules for each SRS operation.

Descriptions of the software implementing the RRP specification will be in the documentation. This includes details about the software package hierarchy and explanations of the objects and methods defined within.

The Registrar Tool Kit will be licensed under the GNU Lesser General Public License and this Registry-Registrar Agreement.

Consistent with its dedication to Customer Support, Global Name Registry is also considering widening the toolkit support to also encompass point-of-sale systems and measurement and reporting systems. This may be made available to Registrars during the term of the Registry agreement.

Security and availability of Customer support

Caring for Security

Global Name Registry is extremely concerned about security and the CSR team will not give out any confidential or Registrar-specific information without gaining the authority to do so. Each Registrar has been assigned a unique passphrase, which must be given prior to any exchange of confidential information.

Further, Global Name Registry maintains a list of authorized contacts at each Registrar, and will not give information to people not on the list of authorized persons. The list can be updated when employees leave Registrars and the Registrar wishes to withdraw privileges from ex-employees. The Registrar must supply a list of no more than 10 named individuals that are authorized to contact the Global Name Registry by phone or email. Each of these individuals will be assigned a unique pass phrase such that the Global Name Registry can then authenticate the authorized Registrar representatives. Registrar will also designate the primary and secondary CSS Security List Managers, who are authorized to modify the list of 10 representatives.

Global Name Registry supports a vast number of languages

Through its many-faceted and versatile Customer Support Team, as well as by drawing upon its staff from many different parts of the world, Global Name Registry can offer support in a multitude of languages.

While the main support is in English, Global Name Registry can also read, respond or communicate in the following languages

1.     Norwegian

2.     German

3.     French

4.     Spanish

5.     Italian

6.     Japanese

7.     Chinese Cantonese

8.     Chinese Mandarin

9.     Punjabi

10.   Gaelic

11.   Polish

12.   Swedish

13.   Dutch & Russian

Availability and roles

Global Name Registry at all times has people available for support queries. Times and roles are divided as follows:

·         Operations Team

o        Available 24x7 the entire year

o        Emergency escalation only

 

·         Customer Services

o        available 09:00 – 17:30, extended hours where necessary.

o        general enquiries and technical guidance

 

·         Account Managers

o        available 09:00 – 17:30, extended hours where necessary.

o        general enquiries and sales guidance.

Support Priority levels

Different requests naturally have different priorities. Global Name Registry receives a high number of requests and questions and has to prioritize them according to their severity and importance to the stable and efficient operation of the Registry.

The figure below illustrates how priorities are set for different queries:

Figure 36: Customer Support Priority levels

Examples of Priority 1 questions:

·         I can’t log on to the production EPP Server.

·         The Whois server is showing no information

·         I can’t do a Whois query on whois.name

 

Possible priority 2 questions:

·         I can log on but it won’t let me send any EPP commands

·         I have forgotten my password

·         There is information missing in some parts of the Whois output

·         A particular domain name is not showing up on the Whois, but I have had confirmation of this registration

·         Your website is down (www.gnr.com or www.name)

·         Email I was sending to you (registrars@gnr.com or customer.service@gnr.com) bounced back, with a delivery failure message

Figure 37: Priority 1 process flow

Figure 38: Priority 2 process flow

Figure 39: Priority 3 process flow

Escalation paths 

Escalation paths describe the pre-defined route a query shall take. Certain types of queries will be only solved at certain levels of the support system, while others will have to be escalated further, to Global Name Registry management.

As an example, escalations may occur when

1.     Registrar needs an immediate answer to an urgent query that has not been previously resolved.

2.     Registrar is dissatisfied with the product, systems or services provided.

3.     The employee receiving the request does not have the authority or sufficient knowledge to resolve the query.

The escalation paths for the various priority levels are outlined below:

1.     Escalation for Priority 1:"Major Production System"   

a.     Step 1: Request is received through a Customer Service Representative (CSR), the dedicated Account Manager, or the emergency line.

b.     Step 2: Request is escalated immediately to the Operations Manager and/or the appropriate technical operations staff.

c.      Step 3: If the Operations Manager and/or technical operations staff are unable to resolve the issue in a timely manner it is escalated to the Chief Technology Officer (CTO) and/or other members of the Executive Committee.

2.     Priority 2 & 3: Primary Features or Non-Critical Features

a.     Step 1: Request is received through a Customer Service Representative (CSR) or the dedicated Account Manager.

b.     Step 2: If not resolved the request is transferred to the relevant personnel (Technical Operations, Finance, Compliance, Account Management).

c.      Step 3: If the issue is not resolved in a timely manner, it will be escalated to the Customer Service Manager.

Target resolution times for the different priority levels are outlined below:

Priority | Description | Response Time | Resolution Time
Priority 1 | A "major production system" is down or severely impacted. | Action taken as soon as reported | 60 minutes
Priority 2 | A serious issue with a "primary feature" necessary for daily operations. | First level analysis: 2 hours | 12 hours
Priority 3 | An issue with a "non-critical feature" for which a work-around may or may not exist, or a “general enquiry” or request for feature enhancement. | First level analysis: 24-48 hrs | TBD

Categories of Customer support Processes

Global Name Registry has processes for each of the following categories of Customer Support issues:

1)     Technical

a)     Resolution Services Problems

i)       DNS

ii)     Whois

b)     Planned and Unplanned outages

c)      Accreditation

d)     OT&E

e)     Website problems

 

2)     Finance

a)     Balance queries

b)     Monthly statements

c)      Transaction queries

d)     Fund submissions

e)     Refunds

 

3)     Contractual/Legal/ICANN/Authorities

a)     Compliance with the ICANN agreement

b)     Dispute resolution queries

 

4)     Product queries (domain, email, namewatch, defensive registrations)

a)     Registrations

b)     Renewals

c)      Transfers

d)     Deletions

 

5)     End-user

a)     Registrar complaints

b)     Products

c)      Legal

d)     Some Technical problems

 

6)     Email abuse

a)     ensure adherence to policy for email abuse

 

7)     Escalation

a)     Priority one

b)     Priority two

c)      Priority three

d)     Complaints

 

8)     Manual product maintenance

 

9)     Reporting

a)     Crystal Reports

b)     financial reports

 

10)   Ejournal information maintenance

11)   Up-to-date information

12)   Scheduling mass mailings

C17.12. Compliance with specifications.

Describe the extent of proposed compliance with technical specifications, including compliance with at least the following RFCs: 954, 1034, 1035, 1101, 2181, 2182.

C17.12. Compliance with specifications
Other RFCs with which Global Name Registry is totally compliant
RFC0954 Nicname/Whois
RFC1034 STD0013 Domain Names - Concepts And Facilities
RFC1035 STD0013 Domain Names - Implementation And Specification
RFC1101 Dns Encoding Of Network Names And Other Types
RFC2181 Clarifications To The Dns Specification
RFC2182 BCP0016 Selection And Operation Of Secondary Dns Servers
Other relevant RFCs with which Global Name Registry has total compliance
RFC1995 Incremental zone XFR
RFC1996 DNS Notify messages
RFC2136 Dynamic updates for DNS
RFC2845 TSIG Transaction signatures
RFC2535 DNS-SEC Security extensions for DNS

 

Number | Title | Author or Ed. | Date of RFC | Global Name Registry compliance
RFC1101 | DNS encoding of network names and other types | P.V. Mockapetris | Apr-01-1989 | Fully Compliant
RFC1035 (STD0013) | Domain names - implementation and specification | P.V. Mockapetris | Nov-01-1987 | Fully Compliant
RFC1034 (STD0013) | Domain names - concepts and facilities | P.V. Mockapetris | Nov-01-1987 | Fully Compliant
RFC0954 | NICNAME/WHOIS | K. Harrenstien, M.K. Stahl, E.J. Feinler | Oct-01-1985 | Fully Compliant
RFC2181 | Clarifications to the DNS Specification | R. Elz, R. Bush | July 1997 | Fully Compliant
RFC2182 (BCP0016) | Selection and Operation of Secondary DNS Servers | R. Elz, R. Bush, S. Bradner, M. Patton | July 1997 | Fully Compliant

Other RFCs with which Global Name Registry is totally compliant

Number | Title | Author or Ed. | Date of RFC | Global Name Registry compliance
RFC1995 | Incremental Zone Transfer in DNS | M. Ohta | August 1996 | Fully compliant
RFC1996 | A Mechanism for Prompt Notification of Zone Changes (DNS NOTIFY) | P. Vixie | August 1996 | Fully compliant
RFC2136 | Dynamic Updates in the Domain Name System (DNS UPDATE) | P. Vixie, Ed., S. Thomson, Y. Rekhter, J. Bound | April 1997 | Fully compliant
RFC2845 | Secret Key Transaction Authentication for DNS (TSIG) | P. Vixie, O. Gudmundsson, D. Eastlake, B. Wellington | May 2000 | Fully compliant
RFC2535 | Domain Name System Security Extensions | D. Eastlake | March 1999 | Fully compliant if/when the standard reaches widespread acceptance

RFC0954 Nicname/Whois

Global Name Registry is fully compliant with RFC 0954, although RFC 0954 is on certain occasions rather vague with regards to the exact specification of Whois.

Excerpt from RFC0954: "Note that the specification formats will evolve with time; the best way to obtain the most recent documentation on name specifications is to give the server a command line consisting of "?<CRLF>" (that is, a question-mark alone as the name specification)."

The following query types are described in the RFC, but none are explicitly mandatory. They just describe what one Whois server from 1985 responded to.

"The following three examples illustrate the use of NICNAME as of October 1985."

 

Smith     (looks for name or handle SMITH)

!SRI-NIC  (looks for handle SRI-NIC only)

.Smith, John (looks for name JOHN SMITH only)

 

Adding "..." to the argument will match anything from that point,

e.g. "ZU..." will match ZUL, ZUM, etc.

 

To search for mailboxes, use one of these forms:

 

Smith@     (looks for mailboxes with username SMITH)

@Host      (looks for mailboxes on HOST)

Smith@Host (Looks for mailboxes with username SMITH on HOST)

As an illustration, Global Name Registry supports the following query types for .name:

·        Domain (“domain =” or none)

o       Lookup types

§         Domain name. The input string is searched in the Domain Name field

o       Returned information

§         If object exists: detailed, summary or short listing, as described below and in the query examples.

§         If object does not exist: Message that object is available for registration

§         If object reserved because the Corresponding Service is registered, a message indicating that the object is reserved

 

·        Contact (“contact =”)

o       Lookup types

§         Contact object ID

o       Returned information

§         If object exists: detailed, summary or short listing, as described below and in the query examples.

§         If object does not exist: Message that object does not exist

 

·        Nameserver (“nameserver =”)

o       Lookup types

§         For local nameservers (hosts on the .name TLD)

·        Nameserver name

·        Nameserver IP address

§         For foreign nameservers

·        Nameserver name. If multiple foreign hosts with the same name exist, all will be returned.

o       Returned information

§         If object exists: detailed, summary or short listing, as described below and in the query examples.

§         If object does not exist: Message that object does not exist

 

·        Registrar (“registrar =”):

o       Lookup types

§         Registrar name

§         Registrar ID

o       Returned information

§         If object exists: detailed, summary or short listing, as described below and in the query examples.

§         If object does not exist: Message that object does not exist

 

·        Defensive registration (“blocked =”)

o       Lookup types

§         Defensive Registration name. If multiple Defensive Registrations with the same name exist, all will be returned.

o       Returned information

§         If object exists: detailed, summary or short listing, as described below and in the query examples.

§         If object does not exist: Message that object is available for registration

 

·        Blocked Personal Name (“blocked”)

o       Lookup types

§         Domain name or SLD Email: This will return a summary listing of all Defensive Registrations blocking the domain name or SLD Email.

o       Returned information

§         If object exists: detailed, summary or short listing, as described below and in the query examples.

§         If object does not exist: Message that object is available for registration

 

·        SLD Email Forwarding (“email=”)

o       This lookup is only available during Landrush for the purpose of availability checks

o       Lookup types

§         SLD Email Forwarding

o       Returned information

§         If object exists: Only the short listing is available for this object

§         If object does not exist: Message that object is available for registration

§         If object reserved because the Corresponding Service is registered, a message indicating that the object is reserved

 

·        Handle/ID (”handle=”)

o       Lookup types

§         Handle for object

o       Returned information

§         If object exists: Only the short listing is available for this object

§         If object does not exist: Message that object does not exist

 

·         Help (“?”)

o        Lookup type

§         Help file/specification for Whois

o        Returned information

§         Human readable result of supported query types

 

For the lookup types proposed supported for .org, please see Section C17.8 (Whois).
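For reference, a port 43 Whois lookup of the kind RFC 954 describes is simply one query line sent over a TCP connection. The following Python sketch performs such a query; the server name and the typed query form follow the .name examples above and are shown for illustration only:

import socket

def whois_query(server, query, port=43):
    """Send one RFC 954 style query line and read the reply until the server closes."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# e.g. the .name Whois supports typed queries such as "domain = example.name"
print(whois_query("whois.name", "domain = example.name"))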

RFC1034 STD0013 Domain Names - Concepts And Facilities

Global Name Registry has 100% compliance with this RFC. Global Name Registry uses BIND for its DNS servers; this application is the reference implementation of the standard DNS RFCs and has total compliance with the core DNS standards.

RFC1035 STD0013 Domain Names - Implementation And Specification

Global Name Registry has 100% compliance with this RFC. Global Name Registry uses BIND for its DNS servers; this application is the reference implementation of the standard DNS RFCs and has total compliance with the core DNS standards.

RFC1101 Dns Encoding Of Network Names And Other Types

This RFC is not directly applicable to the Global Name Registry DNS network and systems. RFC 1101 is primarily concerned with reverse-mapping IP addresses using the in-addr.arpa domain, which of course is a completely different zone to .name or .org. We provide reverse mapping for all our nameservers, but for the .org/.name zones, this RFC is irrelevant.

Registered domains in .org/.name contain only pointers (NS records, called "delegations") to other nameservers which contain A records, service information such as WKS/SRV records, etc.

This RFC also refers to YP (Sun's NIS / NIS+) services, which Global Name Registry does not provide and therefore has no reason to support.

RFC2181 Clarifications To The Dns Specification

Global Name Registry has 100% compliance with this RFC, again due to Global Name Registry's use of BIND, the reference DNS implementation. Global Name Registry has designed its zone files for .name with this RFC in mind.

RFC2182 BCP0016 Selection And Operation Of Secondary Dns Servers

Global Name Registry is 100% compliant with this RFC.

It recommends three, four or even five total servers for a high-availability zone. Global Name Registry currently provides six total servers for .name, and will provide at least five for .org.

Another requirement of this RFC is that all servers listed as authoritative for .org or .name will be accessible by anyone on the internet, which is also the case for our system. Also, we follow the recommendations here for wrapping of serial numbers should they become too large for the 32-bit limit - see RFC1982 for more information.

Other relevant RFCs with which Global Name Registry has total compliance

Global Name Registry also has total compliance with other RFCs, including the following:

RFC1995                Incremental zone XFR

Global Name Registry is 100% compliant with this RFC. The incremental zone transfer process is used internally among Global Name Registry servers and will not be visible to users external to the company.

RFC1996                DNS Notify messages

Global Name Registry is 100% compliant with this RFC. The DNS NOTIFY mechanism is used internally among Global Name Registry servers, for the purpose of signaling availability of a new zone to slave servers, and will not be visible to users outside the company.

RFC2136                Dynamic updates for DNS

Global Name Registry is 100% compliant with this RFC. The dynamic update mechanism is used for updating the master server, although this process is transparent to end-users since it only occurs between internal Global Name Registry servers.

RFC2845                TSIG Transaction signatures

Global Name Registry is 100% compliant with this RFC. Updates and zone transfers are signed using these transaction signatures, to enhance security. TSIG is not used for end-user transactions with Global Name Registry nameservers.
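To illustrate how RFC 2136 and RFC 2845 work together in practice, the following Python sketch (using the third-party dnspython library; the zone, key material and server address are purely illustrative and not the actual internal Global Name Registry configuration) sends a TSIG-signed dynamic update of a delegation:

import dns.query
import dns.tsigkeyring
import dns.update

# Illustrative key material and addresses only; real keys are of course secret.
keyring = dns.tsigkeyring.from_text({"update-key.": "c2VjcmV0IGtleSBtYXRlcmlhbA=="})

# Build an RFC 2136 dynamic update for a delegation in the zone ...
update = dns.update.Update("org.", keyring=keyring, keyname="update-key.")
update.replace("example.org.", 86400, "NS", "ns1.registrar-dns.example.")

# ... and send it over TCP; the message is signed per RFC 2845 (TSIG).
response = dns.query.tcp(update, "192.0.2.1", timeout=10)
print(response.rcode())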

Global Name Registry has the option of supporting the following when and if they gain widespread acceptance:

RFC2535                DNS-SEC Security extensions for DNS

DNSSEC provides signature records for all entries in the zone, which can be used to guarantee the authenticity of data received from Global Name Registry nameservers. This has the twofold benefit of providing a chain of trust for nameservers from the .name or .org zone downwards, and also the future possibility of signing public keys stored in DNS records in registered third-level domains.

C17.13 System reliability

Define, analyze, and quantify quality of service.

The concept of Quality of Service (QoS) is vast and comprehensive, and Global Name Registry constantly strives to achieve Quality of Service in each of its fulfillment steps, from Registrars to Registrants to Community. Global Name Registry applies the term Quality of Service to all business aspects of the Registry operation.

On a high level and in a simplistic way, Global Name Registry defines QoS as:

·         Satisfied Registrars

·         Satisfied Internet users

·         Satisfied DNS Community

·         Satisfied stakeholders

Global Name Registry achieves the QoS goal by constantly analyzing and benchmarking itself against the factors below:

·         Availability: Availability is the quality aspect of whether the service is present or ready for immediate use. Availability represents the probability that a service is available. Also associated with availability is ‘time-to-repair’, the time needed to restore the Registry System after any downtime. (A small worked example of these measures follows this list.)

·         Accessibility: Accessibility is the quality aspect of a service that represents the degree it is capable of serving a service request. It may be expressed as a probability measure denoting the success rate or chance of a successful service instantiation at a point in time. High accessibility of services can be achieved by building highly scalable systems. Scalability refers to the ability to consistently serve the requests despite variations in the volume of requests.

·         Integrity: Integrity is the quality aspect of how the service maintains the correctness of the interaction with respect to the source. Proper execution of service transactions provides this correctness. Further, Integrity is a desired quality aspect of the Authoritative Database.

·         Performance: Performance is the quality aspect of service, which is measured in terms of throughput and latency. Higher throughput and lower latency values represent good performance of a service. Throughput represents the number of service requests served at a given time period. Latency is the round-trip time between sending a request and receiving the response.

·         Reliability: Reliability is the quality aspect of a service that represents the degree of being capable of maintaining the service and service quality. The number of failures per month or year represents a measure of reliability of a service.

·         Regulatory: Regulatory is the quality aspect of the service's conformance with rules, the law, applicable standards, and the established service level agreement.

·         Security: Security is the quality aspect of the service of providing confidentiality and non-repudiation by authenticating the parties involved, encrypting messages, and providing access control.

·         Community: The DNS community is a vibrant community in which Global Name Registry intends to participate and contribute. Further, the .org space is a community in itself, which Global Name Registry intends to serve through the measures described in C38 and the points above.
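To make the availability and performance factors above concrete, the short sketch below turns raw measurements into those figures. The numbers used are illustrative examples only, not measured Global Name Registry values.

```python
# Hedged sketch: computing the availability and performance factors defined
# above from raw measurements. All figures are illustrative, not measured values.

def availability(downtime_minutes: float, period_minutes: float = 30 * 24 * 60) -> float:
    """Fraction of a period (default: a 30-day month) that a service was available."""
    return 1.0 - downtime_minutes / period_minutes

def throughput(requests_served: int, period_seconds: float) -> float:
    """Requests served per second over a period."""
    return requests_served / period_seconds

# 26 seconds of downtime in a month corresponds roughly to "five nines".
print(f"availability with 26 s downtime/month:  {availability(26 / 60):.5%}")
# Roughly 4.3 hours of downtime in a month corresponds to about 99.4%.
print(f"availability with 4.3 h downtime/month: {availability(4.3 * 60):.3%}")
# 3,456,000 transactions in a day is an average of 40 per second.
print(f"throughput: {throughput(3_456_000, 24 * 3600):.1f} requests/s")
```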

 

Much of the effort and work that has gone into Quality of Service is represented throughout the Sections and Chapters of this .org Proposal. In particular, technical QoS is demonstrated in, among others, C17.1, C17.9, C17.10 and C17.16.

Some aspects of the Global Name Registry Quality of Service are not explicitly discussed in the above chapters, and we believe they are worth mentioning. A discussion of these follows.

Analysis and quantification of QoS

In addition to the definitions given above, Global Name Registry defines Quality of Service as the achievement and fulfilment of two sets of goals: a strict set of goals (“hard goals”) that must not be stretched under any circumstances, and a set of intentions and aims (“soft goals”) that should be met but can be stretched under special circumstances. The hard goals are listed below; the soft goals cover a large number of processes in the Registry, and only some of them are mentioned below.

Several of the high-level goals may seem obvious, but it is important to realize that meeting them is not straightforward. Significant effort has been deployed to make sure that QoS is maximized in all areas where possible and that the following conditions will be met:

Hard goals:

·         Registered domains will not disappear

·         Once a registration has been confirmed to the Registrar, the domain name is registered.

·         All domain names in the DNS servers will be properly registered domain names.

·         All domain names in the WHOIS servers will be properly registered domain names.

·         A properly registered domain name will after updates be in the DNS servers.

·         A properly registered domain name will after updates be in the WHOIS servers.

·         The database of registrations can be fully restored even in the case of a partial or full destruction of the main data center, with the possible (but not certain) loss of the last domain name registration in the case of a full destruction.

Global Name Registry considers the above goals met when they are each and all quantified with “yes”.

Global Name Registry also operates under a number of hard goals that are not binary. They include things such as uptime, response times, average transaction times, etc. The performance specification below defines this aspect of the Global Name Registry quality of service:

1.     DNS Service.  Global Name Registry considers the DNS Service to be the most critical service of the Registry, and will ensure that unavailability times are kept to an absolute minimum.  The hardware, software and geographic redundancy built into the DNS Service will reduce unavailability times to a minimum.

a.     DNS Service Availability = 99.999%.  Global Name Registry will provide the above-referenced DNS Service Availability.  Global Name Registry will log DNS Service unavailability:  (a) when such unavailability is detected by the monitoring tools described in Exhibit A, or (b) once an ICANN-Accredited Registrar reports an occurrence by phone, e-mail or fax as described in the customer support escalation procedures described in Appendix F.  The committed Performance Specification is 99.999% measured on a monthly basis.

b.     Performance Level.  At any time, each nameserver (including a cluster of nameservers addressed at a shared IP address) MUST be able to handle a load of queries for DNS data that is three times the measured daily peak (averaged over the Monthly Timeframe) of such requests on the most loaded nameserver.

c.      Response Time.  The DNS Service will meet the Cross-Network Nameserver Performance Requirements described in this document.

2.     SRS Service.  Global Name Registry provides built-in redundancy into the SRS Service in the form of 2 databases capable of running the SRS Service.  Such redundancy will ensure that SRS Unavailability is kept to an absolute minimum.

a.     SRS Service Availability = 99.4%.  Global Name Registry will provide the above-referenced SRS Service Availability. Global Name Registry will log SRS Unavailability once an ICANN-Accredited Registrar reports an occurrence by phone, e-mail or fax. The committed Performance Specification is 99.4% measured on a monthly basis.

b.     Performance Level.  The Global Name Registry SRS will, on average, be capable of processing 40 Transactions per second.

c.      Response Time.  The SRS Service will have a worst-case response time of 3 seconds, not including network delays, before it will be considered Unavailable.

3.     Whois Service.  Global Name Registry provides built-in redundancy into the Whois Service in the form of multiple servers running in 2 different data centers.  Such redundancy will ensure that unavailability of the Whois Service is kept to an absolute minimum.

a.     Whois Service Availability = 99.4%.  Global Name Registry will provide the above-referenced Whois Service Availability.  Global Name Registry will log Whois Service unavailability:  (a) when such unavailability is detected by the monitoring tools described in Exhibit A, or (b) once an ICANN-Accredited Registrar reports an occurrence by phone, e-mail or fax as described in the customer support escalation procedures described in Appendix F.  The committed Performance Specification is 99.4% measured on a monthly basis.

b.     Performance Level.  Global Name Registry will offer a Whois Service to query certain domain name information.  Whois Service will, on average, be able to handle 200 queries per second.

c.      Response Times.  The Whois Service will have a worst-case response time of 1.5 seconds, not including network delays, before it will be considered unavailable.

4.     Cross-Network Nameserver Performance Requirements.

a.     Nameserver Round-trip time and packet loss from the Internet are important elements of the quality of service provided by Global Name Registry.  These characteristics, however, are affected by Internet performance and therefore cannot be closely controlled by Global Name Registry.

b.     The committed Performance Specification for cross-network nameserver performance is a measured round-trip time of under 300 ms and measured packet loss of under 10%. 

c.      The measurements will be conducted by sending strings of DNS request packets from each of four measuring locations to each of the .name nameservers and observing the responses from the .name nameservers.  (These strings of requests and responses are referred to as a "CNNP Test".)  The measuring locations will be four root nameserver locations (on the US East Coast, US West Coast, Asia, and Europe).

d.     Each string of request packets will consist of 100 UDP packets at 10-second intervals requesting NS records for arbitrarily selected .name second-level domains, preselected to ensure that the names exist in the Registry TLD and are resolvable.  The packet loss (i.e. the percentage of response packets not received) and the average round-trip time for response packets received will be noted. (A sketch of such a measurement follows this list.)

e.     To meet the packet loss and Round-trip-time requirements for a particular CNNP Test, all three of the following must be true:

i.      The Round-trip time and packet loss from each measurement location to at least one .name nameserver must not exceed the required values.

ii.     The Round-trip time to each of 75% of the .name nameservers from at least one of the measurement locations must not exceed the required value.

iii.    The packet loss to each of the .name nameservers from at least one of the measurement locations must not exceed the required value.

f.      Any failing CNNP Test result obtained during an identified Core Internet Service Failure shall not be considered.

g.     To ensure a properly diverse testing sample, the CNNP Tests will be conducted at varying times (i.e. at different times of the day, as well as on different days of the week).  Global Name Registry will be deemed to have failed to meet the cross-network nameserver performance requirement only if the .org nameservers persistently fail the CNNP Tests (see item 2.1.3 above), with no fewer than three consecutive failed CNNP Tests required to be considered persistent failure.
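A minimal sketch of the measurement described in items (c) and (d) above is given below, using the dnspython library. The nameserver address and probed names are placeholders; the packet count, interval and pass thresholds are taken from the description above, and the invocation at the bottom is shortened purely for illustration.

```python
# Hedged sketch of a single CNNP Test against one nameserver, per items (c)-(e)
# above. The server address and probed names are placeholders; a real test sends
# 100 packets at 10-second intervals from each of the four measuring locations.
import time
import dns.exception
import dns.message
import dns.query

def cnnp_test(server: str, names: list[str], packets: int = 100, interval: float = 10.0):
    rtts = []
    for i in range(packets):
        query = dns.message.make_query(names[i % len(names)], "NS")
        start = time.monotonic()
        try:
            dns.query.udp(query, server, timeout=5.0)
            rtts.append((time.monotonic() - start) * 1000.0)   # milliseconds
        except dns.exception.Timeout:
            pass                                               # counted as packet loss
        time.sleep(interval)
    loss = 1.0 - len(rtts) / packets
    avg_rtt = sum(rtts) / len(rtts) if rtts else float("inf")
    return avg_rtt, loss

avg_rtt, loss = cnnp_test("192.0.2.53", ["example.name", "another.example.name"],
                          packets=10, interval=1.0)            # shortened for illustration
print(f"average RTT {avg_rtt:.1f} ms, packet loss {loss:.0%} "
      f"(pass if RTT < 300 ms and loss < 10%)")
```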

Example of some Soft Goals:

·         A properly registered domain name will be in DNS and Whois within 15 minutes

·         Asynchronously mirrored databases shall be synchronized every 5 seconds

·         Customer service shall answer the phone within 15 seconds during working hours.

·         The Global Name Registry technical operations emergency line phone shall be answered within 15 seconds 24 hours per day.

The statements, interpretations and implementations of these numerous goals are described throughout the entire .org Proposal.

C17.14. System outage prevention.

Procedures for problem detection, redundancy of all systems, back up power supply, facility security, technical security, availability of back up software, operating system, and hardware, system monitoring, technical maintenance staff, server locations.

C17.14. System outage prevention

Problem detection

Daily backups transported offsite

Redundant Power Systems

Redundant network

Proven software and operating systems, open source software used where appropriate

Multiple Server locations

Location

Disaster recovery site

24/7 availability of technical maintenance staff

Triple database servers and centralized database storage

Layered architecture

Option to add servers to the “hot” system

Mirrored, backed up storage for all data and software

Continuous log of database transactions transported offsite

Hardware encrypted communications with Registrars and external DNS servers

PIX firewalls in failover configuration

High availability active/active failover system on critical systems

Servers and hard disks in stock, pre-configured and ready to boot from central storage

Facility security including access control, environmental control, magnetic shielding and surveillance

Redundant DNS servers on different backbones

Repeatedly proven software and hardware

Focus on top class hardware, standardized on few different products from solid vendors

Some of the highest experience and competence in the industry on DNS and Registry operations

Problem detection

The Global Name Registry operations and customer support teams are the most important internal feedback loops in Global Name Registry. Problems can be detected at many levels, but with very few (and non-critical) exceptions, problems have always been detected by Global Name Registry testing, quality control processes or operational problem detection and monitoring.

Figure 40: Illustration of feedback loops to prevent outage

Using a combination of stand-alone and client-server systems, monitoring data is collected using both push and pull technology. Statistical analysis is performed on this data to allow more useful representation of the information and trend analysis to aid in diagnosing behavioural patterns. Operations Staff are promptly notified of any errors, problems, and suspicious or unexpected behaviour on any server in any site.

All services are also monitored externally by AlertSite, Servers Alive, Big Brother and several bespoke tests. It is essential that systems are not monitored by any one system alone, and that they are tested by several means and from alternate sources. This ensures that we have a clear picture of what is happening on a global scale, with ample redundancy and no single points of failure.
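The bespoke tests mentioned above are, in essence, small independent probes run from outside the production network. The sketch below shows the flavour of such a probe using only the Python standard library; the host names and the 1.5-second threshold are placeholders rather than actual monitoring configuration.

```python
# Hedged sketch of a bespoke external probe: time a port 43 Whois lookup and an
# HTTP request, and flag slow responses. Hosts and thresholds are illustrative
# placeholders, not real monitoring configuration.
import http.client
import socket
import time

def probe_whois(host: str, query: str = "example.name", timeout: float = 5.0) -> float:
    """Return the response time of a port 43 Whois lookup, in seconds."""
    start = time.monotonic()
    with socket.create_connection((host, 43), timeout=timeout) as sock:
        sock.sendall(query.encode("ascii") + b"\r\n")
        sock.recv(4096)                                  # read (part of) the reply
    return time.monotonic() - start

def probe_web(host: str, timeout: float = 5.0) -> float:
    """Return the response time of an HTTP HEAD request, in seconds."""
    start = time.monotonic()
    conn = http.client.HTTPConnection(host, timeout=timeout)
    conn.request("HEAD", "/")
    conn.getresponse()
    conn.close()
    return time.monotonic() - start

for name, seconds in [("whois", probe_whois("whois.example.net")),
                      ("web", probe_web("www.example.net"))]:
    print(f"{name}: {seconds * 1000:.0f} ms [{'OK' if seconds < 1.5 else 'ALERT'}]")
```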

Global Name Registry is constantly monitoring the entire system to ensure that the entire Registry operates optimally. Performance problems, bottlenecks and any other issues that may impact operational performance and stability can be spotted long before they ever surface.

The following monitoring tools are in operation by Global Name Registry, some with custom extensions and modifications developed by Global Name Registry:

Big Brother

Homepage:    http://www.bb4.com/

This is the main monitoring tool. Its high extensibility means that we can monitor any aspect of the system. Big Brother uses a client-server architecture combined with methods that both push and pull data. One or more network monitors poll all monitored services and report the results to the display and notification server(s). For internal system information, a BB client is installed on each machine, which sends CPU, process, disk space, and log file status reports periodically. Each report is timestamped with an expiration date, allowing us to know when a report is no longer valid, which is usually an indication of a more serious problem.

MRTG

Homepage:    http://people.ee.ethz.ch/~oetiker/webtools/mrtg/

The Multi Router Traffic Grapher (MRTG) is a tool to monitor the traffic load on network links. MRTG generates HTML pages containing graphical images that provide a live visual representation of this traffic. The graphs drawn by MRTG do not satisfy our needs, so RRDtool is used for graphing instead.

RRD

Homepage:    http://people.ee.ethz.ch/~oetiker/webtools/rrdtool/

RRD is a system to store and display time-series data (e.g. network bandwidth, machine-room temperature, server load average). It stores the data in a very compact way that will not expand over time, and it presents useful graphs by processing the data to enforce a certain data density.

RRDtool replaces MRTG’s graphing and logging features and is orders of magnitude faster and more flexible.
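The fixed-size, density-enforcing storage that RRD provides can be pictured as consolidating raw samples into progressively coarser averages held in a fixed number of slots. The sketch below shows that idea in plain Python; it illustrates the concept only and is not a representation of RRDtool's actual on-disk format.

```python
# Hedged sketch of the round-robin/consolidation idea behind RRD: raw samples
# are averaged into fixed-size buckets, so storage never grows over time.
from collections import deque

class RoundRobinArchive:
    def __init__(self, step: int, slots: int):
        self.step = step                      # seconds of raw data per consolidated slot
        self.slots = deque(maxlen=slots)      # fixed size: oldest averages fall off the end
        self._bucket = []
        self._bucket_start = None

    def add(self, timestamp: float, value: float) -> None:
        if self._bucket_start is None:
            self._bucket_start = timestamp
        if timestamp - self._bucket_start >= self.step:
            self.slots.append(sum(self._bucket) / len(self._bucket))
            self._bucket, self._bucket_start = [], timestamp
        self._bucket.append(value)

# Example: consolidate per-second bandwidth samples into 5-minute averages,
# keeping at most one day of consolidated data (288 slots) however much comes in.
archive = RoundRobinArchive(step=300, slots=288)
for t in range(3600):
    archive.add(t, 40.0 + (t % 60))           # synthetic Mbit/s samples
print(len(archive.slots), "consolidated slots stored after one hour of samples")
```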

SNMP

Homepage:    http://net-snmp.sourceforge.net/

Various tools relating to the Simple Network Management Protocol. The SNMP daemon makes available various statistics and information relating to the activities and performance of the machine. Global Name Registry Ltd also has its own enterprise namespace where we publish data generated by our own software and custom mining scripts.

Snort

Homepage:    http://www.snort.org/

Snort is an open-source, lightweight network intrusion detection system. It uses an easy-to-learn rules system to detect and log the signatures of possible attacks. This is covered in greater detail under Intrusion Detection.

Servers Alive

Homepage:    http://www.woodstone.nu/salive/

Servers Alive is an end-to-end network monitoring program. Among its many checks, it can monitor any Winsock service, ping a host, check whether an NT service/process is running, check the available disk space on a server, retrieve a URL, check a database engine, and more. When it detects a down condition it can issue warnings in various ways, including sending an email (SMTP) describing what is down, or paging staff with a numeric or alphanumeric warning.

AlertSite

Homepage:    http://www.alertsite.com/

AlertSite monitors web sites and other Internet-connected devices to ensure that they are reachable and performing optimally. Staff are promptly notified if any aspect of the systems is not meeting expected service levels.

The following are screenshots from some of the monitoring systems Global Name Registry Operations uses to ensure stability:

Figure 41: BigIP (loadbalancer) traffic reporting on the UK main site (Note that all IP numbers are anonymized for security)

Figure 42: Global Name Registry monitors response times from all its DNS locations and to/from the  Root Servers

Figure 43: Cross-location response times to all Global Name Registry services (and nameserver response quality)

In the case of any malfunction or bottleneck, the relevant staff will be alerted via the monitoring systems in the Network Operations Center, and additional staff will receive alerts on email and SMS, at all hours. This will ensure that appropriate measures are taken if the failover systems or others are at risk.

Daily backups transported offsite

There will always be a backup available on tape that is less than 24 hours old. The backup robot writes the entire ESS (that is, all registry data, configurations and server partitions in the centre) to tape, which is transported offsite daily by a security firm dedicated to this service for the co-location centre. The tapes are then stored in magnetically shielded vaults away from the site.

Global Name Registry uses the IBM product Tivoli Storage Manager to ensure that backups are made consistently and accurately on schedule.

Incremental backups are made every day and full backups every week. Tapes are rotated weekly, and the shielded offsite location keeps 4 months of backups available.

Global Name Registry uses IronMountain, Inc (www.ironmountain.com) for safe storage and electronic vaulting of backup. The IronMountain contract is attached as appendix 14.

Redundant Power Systems

Global Name Registry has redundant power supplies to all of its servers. Each server, ESS, Foundry switch and other piece of equipment has 2 independent power feeds and is additionally connected to a UPS (battery-based power), which will keep servers running for up to 3 hours while generators start up. All hosting centers where Global Name Registry hosts its servers and equipment have generators in the building and local fuel storage for up to 3 days of power failure, after which fuel must be replenished from external sources.

Redundant network

All servers run dual network cards, and there are redundant LANs in every server location. Should a network card or an entire LAN fail, the network and services on the network will continue to operate unhindered.

Proven software and operating systems, open source software used where appropriate.

All Global Name Registry operating systems have custom hardened kernels to limit unnecessary services and secure operations.

Open source software is running much of the Internet infrastructure today, and well proven open source software is used wherever appropriate. As an example, Linux is extensively used by Global Name Registry because of the level of control it is possible to have over its functionality.

This is extremely useful in the case of intrusion attempts and DDoS attacks (where changes can be made at the TCP layer), but also during normal operations, since the software is well known and much competence is available. The DNS software, BIND, is also open source, as is the Apache web server. This software is continuously updated and improved, and is among the safest available.

Multiple Server locations

Global Name Registry will operate 5 server locations for the operation of .org. Three of these are the locations currently powering .name; the other two will be new hosting locations that Global Name Registry will set up (not outsourced) in the US for the purpose of .org DNS.

The diagram below illustrates the Global Name Registry locations:

Figure 44: Geographical spread of Global Name Registry locations

Location

| Location  | Bandwidth |
|-----------|-----------|
| UK        | 2x100Mbit |
| Norway    | 34Mbit    |
| Hong Kong | 10Mbit    |
| USA 1     | 2Mbit     |
| USA 2     | 2Mbit     |

Providers across these locations: GlobalSwitch, Colt, MCI WorldCom, NTT, MCI WorldCom and Level 3.

Status: Currently fully operated and owned by Global Name Registry. *Currently outsourced, but will be wholly operated by Global Name Registry for .org.

Disaster recovery site

Global Name Registry operates a full set of all services on a Disaster Recovery Site. As shown in the diagram under “Server Locations” in this document, the Disaster Recovery Site is operated in Norway. Global Name Registry has personnel in place, both operational and development, to ensure that the Disaster Recovery Site is ready to take over the main operations at all times. Should a failover to the Disaster Recovery Site be necessary because the UK main site goes down, all services, including SRS, Whois, update services, data validation, MX (for .name), etc, will be run from Norway. The capacity of the Disaster Recovery Site is not as high as that of the UK main site, as the table above indicates, but it will ensure that operations run continuously without interruption even in the case of catastrophic failure of the UK site.

Global Name Registry has both tested failover to the Disaster Recovery Site and performed a real failover of all services from the UK to Norway (as described in more detail in Section C15 of this .org Proposal), and knows well the challenges involved. Both the test and the real failover worked successfully and resulted in no downtime or loss of authority for any of the Global Name Registry services.

24/7 availability of technical maintenance staff

Global Name Registry operations staff are in place 24/7/365 (24 hours per day, 7 days per week, the whole year), and at least two operations staff are always on site in the UK. For all external locations, Global Name Registry has staff or outsourced staff available for either full engineering support (personnel who know the system, the code and the services and equipment; UK and Norway) or green-light management (personnel trained to reboot servers or gain terminal access to servers in the case of lock-ups). All 24/7 personnel will alert management should any issue not be immediately resolved and threaten operational stability.

Triple database servers and centralized database storage

The Registry database runs on three separate database servers, each running Oracle on the IBM operating system AIX, an extremely stable combination. The database data for each server is stored in the ESS, an internally mirrored storage solution running RAID on each internal storage structure. This double redundancy gives additional security.

Using three database servers, each with dedicated tasks, provides strong protection against outage. As described further in Section C17.3 of this .org Proposal, dedicating each component results in higher performance, higher security, higher consistency between databases and external services, and increased error detection/correction and consistency checking.

Layered architecture

The system designed, built and operated by Global Name Registry has a layered design that makes it possible to scale almost any element in the system linearly. The design makes the layers independent of one another, as long as they know how to communicate. For example, the EPP/RRP frontend layer does all the protocol-specific processing of Registrar requests. By adding more servers to the frontend layer, more EPP/RRP requests can be forwarded to the business logic layer (which is not protocol specific) per unit of time. The business logic layer, in turn, can be reinforced if it becomes a bottleneck, allowing more processing of non-protocol-specific consistency checking, etc. This is explained in more detail in Sections C17.2 and C22 of this .org Proposal.

The layered architecture permeates the entire Global Name Registry system and makes it easy and cost-efficient to scale up any particular function at will.
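A purely illustrative sketch of this separation is shown below: the protocol layer turns an EPP-style request into a protocol-independent command, and the business logic layer handles that command without knowing which protocol it arrived over. All class, function and field names are hypothetical and are not the actual Global Name Registry code.

```python
# Hedged, hypothetical sketch of the layered separation described above:
# protocol frontends translate EPP/RRP requests into protocol-independent
# commands; the business logic layer only ever sees those commands.
from dataclasses import dataclass

@dataclass
class Command:                       # protocol-independent representation of a request
    action: str                      # e.g. "check", "create", "renew"
    domain: str

class BusinessLogic:
    """Business logic layer: applies registry rules, has no protocol knowledge."""
    def __init__(self, registered: set[str]):
        self.registered = registered
    def execute(self, cmd: Command) -> bool:
        if cmd.action == "check":
            return cmd.domain not in self.registered     # True if the name is available
        raise NotImplementedError(cmd.action)

class EppFrontend:
    """Protocol layer: translates EPP-style requests into Commands and back."""
    def __init__(self, business_logic: BusinessLogic):
        self.bl = business_logic
    def handle(self, epp_request: dict) -> dict:
        cmd = Command(action=epp_request["command"], domain=epp_request["name"])
        available = self.bl.execute(cmd)
        return {"result_code": 1000, "name": cmd.domain, "available": available}

# More frontend instances can be added to scale protocol handling, and more
# business logic instances can be added independently if rule processing
# becomes the bottleneck.
frontend = EppFrontend(BusinessLogic(registered={"taken.org"}))
print(frontend.handle({"command": "check", "name": "available.org"}))
```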

Option to add servers to the “hot” system

Global Name Registry has constructed its architecture to support additions of servers at any time should a particular processing task become a bottleneck. This includes adding EPP/RRP servers, adding business logic processors, Whois servers, DNS servers, WWW servers, FTP servers, update handlers, data validation servers, MX servers, etc. Servers of these types can be added while the system is fully operational. When a new server is added, it will take its necessary software from the ESS, install and boot, and be ready for operation within minutes, rather than hours. The layered architecture design makes this possible.

Mirrored, backed up storage for all data and software

The ESS is an internally mirrored, RAID-controlled system which is backed up daily. The internal mirroring ensures that even if the entire RAID array of disks on one side breaks down, the system still retains full functionality. It can accommodate a total of 11TB of storage, should the Global Name Registry databases grow significantly.

Full breakdown of the ESS is extremely unlikely except in the case of fire (fire prevention systems are, of course, in use in the hosting center). As a future addition, the ESS can also be mirrored via a dedicated fiber link to an external centre over a distance of up to 100km. This would effectively double the already doubled storage security, in addition to providing catastrophe protection for the data stored on the ESS.

See Appendices 20 and 21 to this .org Proposal for more information on the ESS.

Continuous log of database transactions transported offsite

All transactions committing changes to the database on .org will be logged and the log will be continuously sent to the Disaster Recovery Site over an encrypted link. In case of breakdowns where the internal log is corrupted, the external log can be retrieved.

Additionally, if failover to the Disaster Recovery Site is needed, the transaction log will be used for a final consistency check of the failover database (which would have been updated by the Update Handlers, unless some changes in the Update Handler never reached the Disaster Recovery Site just before the breakdown).

Hardware encrypted communications with Registrars and external DNS servers

All communication between Global Name Registry offices, Global Name Registry UK hosting center and Global Name Registry Disaster Recovery Site is strongly encrypted with a Netscreen 204 VPN link and protected against eavesdropping and/or decryption.

PIX firewalls in failover configuration.

The PIX firewall is Cisco’s most advanced enterprise firewall and will be operating in an active/passive failover configuration. In case one goes down, the second will take over immediately. The firewall is described in more detail in the Security section of this proposal, in C17.9.

High availability active/active failover system on critical systems

Firewalls, load balancers, Registrar interface servers, web servers, DNS servers, WHOIS servers and database servers all operate in a high-availability active/active failover configuration. This is shown in the network diagram on the following page:

Servers and hard disks in stock, pre-configured and ready to boot from central storage

Global Name Registry has a selection of critical hardware in stock, in preparation for any hardware error that could result in server failure or degraded performance. This includes memory, hard drives, network cards, Foundry blades, and others.

Global Name Registry does not hold a large stock of servers, since the operational policy has been, and continues to be, to deploy all servers rather than stock them. As an example, Global Name Registry has 26 DNS servers serving queries for .name, far more than the current volume of requests dictates.

A few spare servers are kept, configured to boot from the appropriate partition of the ESS, and can be put in as replacements for servers where a hardware error occurs or where the reason for failure is not known.

The numerous extra hard drives, memory, Foundry Blades, processors, power supplies, etc, can be hot-swapped while servers are still running. This will ensure continuous operations in case of hardware failure.

However, given that more servers are deployed than are necessary to run the services, failure of one server or part of its hardware does not normally need to be fixed as an emergency; the server can instead be taken down entirely and replaced calmly while all services remain fully functional and performant. This is part of Global Name Registry’s operational planning.

Facility security including access control, environmental control, magnetic shielding and surveillance.

The physical security of the hosting centers is important to prevent system outages. These security measures are described in more detail in this proposal’s Section C17.9.

Further, the physical conditions in the hosting centers are important to prevent system outage. Global Name Registry assures the following in its hosting centers:

·         24 hours monitoring of the physical environmental factors, including temperature, humidity, power, security systems, water detection etc. If any system is found to be operating outside normal working parameters, the on-site staff will investigate and arrange for the appropriate service or maintenance work to be carried out.

·         Redundant air conditioning capable of maintaining the environmental temperature at 20°C ± 5°C and humidity between 20% and 65%, suitable for server equipment.

·         A fully automatic fire detection and alarm system linked to automatic suppression systems, with the suppression based on Argonite (or similar).

·         Smoke, fire and water leak detection systems

·         UPS/CPS power feeds that ensure 99.99% power availability through battery and generator power back up supplies

·         Heating, ventilation and air conditioning systems

·         An FM-200 fire suppression system, which does not harm electronic equipment and is safe for humans, unlike CO2 systems, which can suffocate people who cannot leave quickly enough.

Redundant DNS servers on different backbones

The most critical element of the Registry operations is the DNS service, which ensures stable operation of the registered domain names all across the world. The DNS servers are duplicated in failover configurations and spread across different backbones to ensure continuous and stable operation at all times.

Repeatedly proven software and hardware

Oracle on AIX on the IBM M80 is one of the most proven configurations in the industry. It is a tried, tested and operated solution at many of the highest-performing transaction operators today.

The same system will run important parts of the Registry for Global Name Registry.

Focus on top class hardware, standardized on few different products from solid vendors

By standardizing its hardware on certain well-proven series, Global Name Registry will operate a minimum of different hardware types, making it easier to maintain, replace, install, upgrade and secure. Top-class suppliers have been chosen to provide best-of-breed solutions.

Global Name Registry draws extensively on IBM hardware and has a very close relationship with IBM. All Global Name Registry servers are stable and well known configurations of IBM hardware, and the Global Name Registry team is experienced and confident in setting up, modifying and trouble-shooting the IBM hardware.

Global Name Registry uses the network equipment from Foundry, which takes the network to the next level in terms of quality, reliability and speed. The Foundry network equipment is described in more detail in Appendix 22, 23 and 24 to this proposal.

Some of the highest experience and competence in the industry on DNS and Registry operations

Through the build-up, launch and operation of .name, Global Name Registry has gained Registry operations experience that is among the best in the marketplace today. The Global Name Registry team will ensure consistent and stable operation of .org as well.

C17.15. System recovery procedures

Procedures for restoring the system to operation in the event of a system outage, both expected and unexpected. Identify redundant/diverse systems for providing service in the event of an outage and describe the process for recovery from various types of failures, the training of technical staff who will perform these tasks, the availability and backup of software and operating systems needed to restore the system to operation, the availability of the hardware needed to restore and run the system, backup electrical power systems, the projected time for restoring the system, the procedures for testing the process of restoring the system to operation in the event of an outage, the documentation kept on system outages and on potential system problems that could result in outages.

C17.15. System recovery procedures

Defining Outage

Overview of events that could lead to outages

Outage events and procedures for restoring operation

Single server failure

ESS failure

Data Center Destruction or otherwise complete one-sided data destruction

Software errors

Inconsistency

Complete data loss on main site and disaster recovery site

Restoring Software

Restoring Data

Recovery Training of Technical Staff and testing of procedures

Projected time for restoration of system

Summary of restoration procedures

Providing Service during outage

Extremely redundant systems

Failover to Disaster Recovery Site

Backup power systems

Protecting against unexpected outages

QA team

Potential system problems that may result in outages

Documentation of System Outages

Defining Outage

An outage is an event that takes down a part of the Global Name Registry system. An outage has a severity level depending on the system component that suffers it. Most components in the Registry are redundant, and an outage in any such component has a lesser impact than where major elements or multiple elements are concerned.

While this is not a rigorous definition, an Outage may have any of the following effects:

1.     Cause Service denial on one or multiple services

2.     Cause major service disruption

3.     Cause performance degradations

4.     Cause minor performance degradations unlikely to be noticeable to anyone outside of the Registry

5.     No effect at all on any services

Overview of events that could lead to outages

Outages caused by any of the following events will be discussed:

1.     Single server failure.

2.     ESS failure

3.     Data center destruction

4.     Software failure

5.     Inconsistency discovery

6.     Power failure

7.     Network failure

8.     Vulnerabilities in software and intrusion attempts

These events, their impact and their restoration procedures will be discussed below.

Outage events and procedures for restoring operation

In the event of Outage, operations will be restored as quickly as possible.

Single server failure

Should a single server fail, Global Name Registry can quickly either remove the server from operations, which would be done where the server was part of a redundant layer, or replace the server with another.

General on server failures

Servers may fail for a variety of reasons, including disk failure, network card failure, power supply failure, etc. Global Name Registry can replace most components of each of its servers, and in the case of an unknown failure, the server can be replaced with another. All server applications and software reside on readily available CDs or DVDs, which means that any server can be reinstalled with the appropriate software within minutes. The next sections go into more detail on the consequences of single server failure; the restoration or fix procedure is largely the same for all servers.

Protocol servers like EPP and RRP

In the case of outage to any EPP or RRP server, the server will be taken out of operation and either fixed or replaced. In the meantime, protocol requests will be handled by the remaining servers in the load balanced layer.

Business logic servers

In the case of outage to a BL server, the server will be taken out of operation and fixed/replaced. In the meantime, protocol independent requests will be handled by the remaining servers in the load balanced layer.

Whois server

In the case of outage to a Whois server, the server will be taken out of operation and fixed/replaced. In the meantime, the remaining load balanced Whois servers will continue to serve requests to the Whois service. The Whois service is scaled to handle such outages without any impact on service.

When replaced/reinserted, the Whois server will send a request to the Update Handler for an update of Whois information which will trigger a reload of the Whois data to the Whois server. After this is complete, the Whois server will resume operations.

DNS server

In the case of outage to a DNS server, the server will be taken out of operation and fixed/replaced. This includes DNS servers on any of Global Name Registry’s remote locations, where staff, engineers or green-light management are available. In the meantime, the remaining load balanced DNS servers will continue to serve requests to the DNS service. The DNS service is scaled to handle such outages without any impact on service.

When replaced/reinserted, the DNS server will send a request to the Master DNS server for a reload of the zone file, which will synchronize the DNS server. After this is complete, the DNS server will resume operations.

WWW server

In the case of outage to a WWW server, the server will be taken out of operation and fixed/replaced. In the meantime, the remaining load balanced WWW servers will continue to serve requests to the WWW service. The WWW service is scaled to handle such outages without any impact on service.

FTP server

In the case of an outage to an FTP server, the server will be taken out of operation and fixed/replaced. The FTP servers mostly serve zone-file access requests or Registrar invoices, which are not time-critical requests. If the FTP server cannot be fixed in a reasonable timeframe (24 hours), another server will be converted or put in place to fill its role.

Update server

In case of an unexpected system outage on the update server, operation can be rapidly resumed.  In the event of fatal hardware errors, the update server can be replaced with identical hardware.

Once the update server is started, it will install its software from the ESS or from a CD/DVD fed by the operator, and resume operations. The MQ logic ensures that no messages will have been lost on the update server and all messaging will be resumed and restored.

OT&E environment

The OT&E environment consists of several servers that are treated as a non-critical service cluster. In the case of an unexpected system outage to an element in the OT&E that makes it unavailable or partly unavailable, Global Name Registry will inform all Registrars that the OT&E is temporarily unavailable. Since the data in OT&E is non-critical, Global Name Registry will restore the OT&E environment from the last backup, which resides in the on-site tape library. This will ensure that the environment comes back up. In the case of hardware failure, Global Name Registry will follow its procedures for hardware failure. The monitoring systems will in most cases of HW failure let the Operations team know which component failed, so it can be easily replaced or fixed.

Database

In the case of an outage to one of the two non-authoritative SRS databases (either the QA database or the reporting database), operations would continue, since the main database is only asynchronously mirrored to these two databases. In the case of an Expected or Unexpected Outage to any of these servers, the Global Name Registry operations team may choose to fail over to the DRS depending on the evaluation of the outage, the time-to-fix and the impact on operations.

In the case of an outage to the main SRS database (the authoritative database used when committing any changes to the Registry data), the main database would be failed over to the mirrored QA database, which effectively also serves as a hot standby to the main database in addition to its normal use as a data quality and consistency checking database.

The QA database has a separate disk array from the main database, so failure to the main database disks will not affect the QA database.

Since the QA database is asynchronously mirrored from the main database, a (small) number of transactions may not yet be committed to the QA database. Operators would therefore have to manually replay the last transactions from the logs (choosing from protocol logs, MQ logs and database logs, depending on what is available) before failing over to the QA standby. An outage to the main database could therefore result in up to 2 hours of unexpected SRS downtime.
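A minimal, hypothetical sketch of this replay step is shown below: transactions recorded in the main database's logs but not yet present in the asynchronously mirrored QA standby are re-applied before the standby is promoted. The log format, transaction IDs and apply() call are assumptions made for illustration only, not the actual Oracle or MQ procedures.

```python
# Hedged, hypothetical sketch of replaying the tail of the transaction log onto
# the QA standby before promoting it. Log format, IDs and apply() are
# illustrative assumptions, not the actual Oracle/MQ procedures.
def replay_missing(log_entries, standby) -> int:
    """Re-apply every logged transaction the standby has not yet seen."""
    last_applied = standby.last_transaction_id()
    replayed = 0
    for entry in log_entries:                     # ordered oldest -> newest
        if entry["id"] > last_applied:
            standby.apply(entry)
            replayed += 1
    return replayed

class FakeStandby:                                # stand-in for the QA database
    def __init__(self, applied_up_to: int):
        self._last = applied_up_to
    def last_transaction_id(self) -> int:
        return self._last
    def apply(self, entry) -> None:
        self._last = entry["id"]

log = [{"id": i, "op": "domain-update"} for i in range(1, 101)]
standby = FakeStandby(applied_up_to=97)           # mirror lagging by three transactions
print("replayed", replay_missing(log, standby), "transactions")   # -> replayed 3
```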

In the case of an Expected Outage to the main database server, the outage would be communicated well in advance and the server brought up again as planned. By definition, if Global Name Registry chose to take an Expected Outage on the main database, it would not fail over.

ESS failure

In the case of an ESS failure where both of the internally mirrored data storage areas are destroyed or fail, as would be possible in the case of a fire in the ESS, all authoritative data on the main site would be inaccessible or destroyed.

In such a case, Global Name Registry would fail over operations to the Disaster Recovery Site. Continuous replication between the UK main site and the Disaster Recovery Site in Norway keeps the entire authoritative dataset mirrored on another ESS. Global Name Registry would then resume operations on the Norwegian Disaster Recovery Site until the UK main site could be reinstated.

Global Name Registry has previously both tested Disaster Failover and performed a real Disaster Failover (see Section C15 for more detail), without any loss of uptime.

In case of an ESS breakdown, the following procedure would also be followed:

·         An analysis of the reasons why the ESS broke down would be conducted.

·         The ESS would be replaced with another identical ESS, or a similar storage facility

·         The disaster recovery site would be mirrored back to the UK main site, while continuing to operate. Since the UK ESS then would be “empty”, this procedure would be similar to the Transition process described in Section C18.

·         All servers could be rebooted from the central storage where each of the partitions would then be remounted

·         The systems would be tested and consistency checked before “failing back” to the UK site.

Data Center Destruction or otherwise complete one-sided data destruction

In the unlikely event of the whole data center being destroyed by fire, bombing, earthquake or another force majeure event severe enough to impact the strongly protected data center (see also Section C17.1 on the hosting environment), the system can still be recovered.

It is notable that in all events except a full Internet breakdown or full destruction of all production centers and external DNS centers, the operation of the DNS and the WHOIS would go on normally; only new registrations and updates to DNS records would be halted in the case of major destruction of the main data center.

It is highly unlikely that an event would destroy the hosting center or otherwise render the database system inaccessible through full destruction. An event such as a nuclear attack, major earthquake or similar would have other and more severe impacts on society than the unavailability of new registrations. The DNS would in such a case still be up.

In such a case, Global Name Registry would fail over operations to the Disaster Recovery Site. Continuous replication between the UK main site and the Disaster Recovery Site in Norway keeps the entire authoritative dataset mirrored on another ESS. Global Name Registry would then resume operations on the Norwegian Disaster Recovery Site until the UK main site could be reinstated.

In case of a full data center destruction, a full or at least partial server park would need to be acquired from the supplier, IBM.

All software and applications running on the server park are written to CDs and DVDs and stored offsite. These would be recovered and re-installed on the restored UK site, before QA and testing of all systems.

In the meantime, the Disaster Recovery Site would continue to operate all services.

Global Name Registry has previously both tested Disaster Failover and performed a real Disaster Failover (see Section C15 for more detail), without any loss of uptime.

Software errors

Software errors or “bugs” can cause outage events. Global Name Registry categorizes software errors according to their impact:

1.     Level 1 bug – This is an error that threatens the system integrity and is likely to damage data in the registry if not immediately fixed.

a.     Global Name Registry operations team will take down the system immediately.

b.     Global Name Registry operators and developers will evaluate whether the problem may be solved by rolling back the software part where the bug was discovered, to an earlier version.

c.      A Level 1 bug will thus result in unexpected downtime.

2.     Level 2 bug – An error that does not threaten system integrity but should be fixed as soon as possible

a.     The software correction will be commenced immediately, and depending on the complexity of the fix, a maintenance window will be scheduled for the release. If the bug is complex, the maintenance window will be scheduled after the software correction is well underway.

b.     A level 2 bug will thus not result in Unexpected Outage, but may result in Expected Outage.

3.     Level 3 bug – An error that is non-critical

a.     The bug does not warrant immediate correction, but a fix will be scheduled and will await installation until next appropriate maintenance window/release.

b.     A level 3 bug will thus not result in any additional Expected Outage.

Inconsistency

The automated consistency validation (ACV) system developed and operated by Global Name Registry will constantly monitor the system for inconsistencies as described in Section C17.1

Discoveries of major inconsistencies will result in Global Name Registry operations taking the system down for examination and review. For example, this would happen in cases where DNS, Whois, MX or FTP servers are inconsistent with the QA database. Possible reasons could be incomplete mirroring from main database to QA database; error in ACV; error in update handler mechanisms; error in queue handling, etc.

In such cases, Global Name Registry operations would block the system from further changes and manually investigate why the system does not appear to be consistent. Upon conclusion, Global Name Registry Operations would restore consistency on zone files, Whois, MX and other services, manually if necessary.
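In outline, such a consistency check compares what the externally visible services publish against what the QA database says should be published. The sketch below compares a set of zone delegations against the domains recorded in the database; the data, names and check itself are simplified, hypothetical illustrations of the idea rather than the actual ACV implementation.

```python
# Hedged sketch of an automated consistency check between the QA database and
# the published zone: every properly registered domain must be delegated, and
# every delegation must belong to a registered domain. Data is hypothetical.
def check_consistency(db_domains: set[str], zone_delegations: set[str]):
    missing_from_zone = db_domains - zone_delegations       # registered but not in DNS
    unexpected_in_zone = zone_delegations - db_domains      # in DNS but not registered
    return missing_from_zone, unexpected_in_zone

db_domains = {"example.org", "charity.org", "museum-friends.org"}
zone_delegations = {"example.org", "charity.org", "stale-entry.org"}

missing, unexpected = check_consistency(db_domains, zone_delegations)
if missing or unexpected:
    print("INCONSISTENT - block further changes and investigate:")
    print("  registered but missing from zone:", sorted(missing))
    print("  delegated but not registered:   ", sorted(unexpected))
```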

Any software errors would be subject to the software error correction procedures.

Complete data loss on main site and disaster recovery site

The extremely unlikely scenarios that would cause a complete loss of operational data and require restoration from backup are the following:

1.     The ESS in the UK data center breaks (or all of the mirrored ESS partitions where database data is stored break) and the offsite log sent to the Disaster Recovery Site breaks, both beyond recovery and at the same time, before the updates are distributed to the WHOIS and DNS servers.

2.     The entire UK main data center is destroyed by fire, earthquake, bomb or another catastrophe, and the offsite log sent to the Disaster Recovery Site is not received.

In both cases, Global Name Registry would analyze why the ESS (in case 1) and the log broke, and would at the same time fail over to the failover data center. Some unexpected downtime would occur, since the failover could not be planned even a minute in advance.

Global Name Registry would then analyze which data source was most recently updated: a) the backup (taken daily), or b) the replicated database at the failover site (which has not been updated with the log, which failed).

Global Name Registry would then restore the data from the most recently updated data source, possibly from a backup fetched from offsite (especially in case 2). The number of transactions lost would be the minimum of a) the number of transactions in the log and b) the number of transactions since the last backup.

These lost transactions could be recovered from the database log (the Oracle log) if the hard drives were recoverable (even fire-damaged drives can be restored to some extent; Global Name Registry would use Ibas (www.ibas.com) for this). Alternatively, the lost transactions could be recovered from logs at the protocol servers, which may not have been lost.

Restoring Software

All software on any server resides on CD or DVD, readily available to the Global Name Registry operations team. Any software can be restored/rebuilt on a new or replaced server at any time.

Restoring Data

All application data is backed up regularly and can be restored promptly. Backups reside both on-site and off-site. Tapes are brought off-site weekly; there is therefore a probability of 1/7 that the most recent backup is off-site, in which case the operations team must fetch it from IronMountain, the backup storage provider.

Otherwise, the data for restoration will be on-site and can be restored very easily through the Tivoli backup system. Restoration to a previous state can be done within a few minutes, depending on which data is to be restored. A full restore of all ESS partitions may take up to a few hours.

Recovery Training of Technical Staff and testing of procedures

The Global Name Registry Operations team trains regularly on recovery procedures. Through detailed operations manuals provided by suppliers or written by the Global Name Registry development teams, any operator on shift has access to full descriptions of restoration and installation procedures.

Further, each operator is thoroughly trained in handling disaster scenarios:

·         Backup restoration and testing is performed monthly. The operators do a full restore to a point in time onto “clean” partitions of the ESS. (This is to ensure that operations are not affected.)

·         Each operator has been part of server installations and is well aware of the hardware, network and components like cards, blades and disks. Server replacements are tested monthly, by taking a horizontally scalable server out of production, simulating a fix, and replacing it.

·         The entire operations team has tested a disaster failover to the Norwegian site. Also, the entire operations team has performed a REAL disaster failover to the Norwegian Disaster Recovery Site, when it became known that KPNQwest would turn off their network (see C15 for more information). The Operations team demonstrated that they could fail the system over without incurring ANY downtime on the services.

Projected time for restoration of system

See table below for indicative restoration times. The actual restoration time may differ depending on the severity of the outage and the circumstances of each particular case.

Summary of restoration procedures

| Outage type | Indication of time to restore | Impact on Service | Solution Method | Documentation on outage |
|---|---|---|---|---|
| Failed single server in load-balanced layer (www, EPP, RRP, DNS, Whois, API, FTP, etc) | Few hours | None | Analyze, replace element in server or take out of production | Outage logged. Operations manual(s) describe diagnosis and fix |
| Failure on mirrored databases | Hours to days | ACV (automated consistency verification) halted; otherwise none | Take down mirror, fix problem, reinstate mirroring. If complex, fail over to DRS | Outage logged. Operations manual(s) describe diagnosis and fix |
| Failure on main database | Hours to days (depending on severity) | Few minutes or hours of downtime | Fail over to DRS | Outage logged and reported to CTO and supplier |
| Failure on ESS | Hours (replace disks) to days (acquire new ESS) | Few minutes or hours of downtime | Fail over to DRS | Outage logged and reported to CTO and supplier |
| Denial of Service attack | Minutes or hours | Low | Fail over; manually block attack. IDS may stop or halt attack | Attack logged. IDS, Operations and developers may stop attack |
| Software bug | Minutes or hours | Most likely low, but may result in hours of downtime | Restore previous software version | Bug logged and, if appropriate, reported to Sourceforge |
| Software vulnerability | Hours to day(s) | Most likely low | Apply patch | Logged and, if appropriate, reported to Sourceforge |
| Failure of data center | Days to weeks | Few hours of downtime. May in extreme cases lose the last (few) transaction(s) | Fail over to DRS | Outage logged and reported to CTO, suppliers, management and CEO |

Providing Service during outage

Depending on the severity of an outage, full service can be maintained while it is being resolved.

Extremely redundant systems

Global Name Registry has extremely redundant systems which will allow minor outages to go unnoticed.

Failover to Disaster Recovery Site

Global Name Registry will in the case of major outages fail over to the Norwegian DRS to keep services up during the outage in the UK.

Backup power systems

There are three layers of backup power available for each server (Global Name Registry operates one set of UPS inside the racks, the hosting center operates one set of UPS for all cages, and the hosting center operates external generators with close access to fuel). These backup power systems can power the Global Name Registry systems for up to three days in the absence of external grid power.

Protecting against unexpected outages

QA team

Global Name Registry has a QA team which validates all software releases. The process involving the QA team is the following:

1.     A software component approaches its release date.

2.     A release candidate is set up and given to QA

3.     The QA team installs the release candidate on the QA testing system, a setup similar to the platforms on which the software will run, and checks all business rules and functionality that the release candidate should fulfill.

4.     If any errors or discrepancies are found, they are flagged to the development team.

5.     Problems are fixed and a new release candidate is given to QA. This can happen as quickly as a few hours after the first release candidate. New release candidates are made as many times as necessary until QA signs off on the release.

6.     The release is given to Operations, which will install it on the production system in the next available maintenance window.

The release cycle for software upgrades is usually one week.
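The QA sign-off gate in step 5 above could, as a minimal sketch, be encoded as a set of automated business-rule checks run against the release candidate on the QA testing system. The check functions and endpoint below are hypothetical placeholders, not the actual QA test suite.

```python
"""Hypothetical sketch: QA sign-off gate for a release candidate.

The check functions and the QA endpoint are illustrative assumptions; the
real QA team validates all business rules and functionality listed for the
release before signing off.
"""
from typing import Callable

QA_ENDPOINT = "https://qa.example.net"   # assumed QA testing system

def check_domain_lifecycle() -> bool:
    # Placeholder: register, renew, transfer and delete a test domain
    # through the release candidate and verify grace-period handling.
    return True

def check_whois_format() -> bool:
    # Placeholder: verify Whois responses follow the documented format.
    return True

CHECKS: dict[str, Callable[[], bool]] = {
    "domain lifecycle": check_domain_lifecycle,
    "whois format": check_whois_format,
}

def qa_signoff() -> bool:
    """Run every check; the release is handed to Operations only if all pass."""
    failures = [name for name, check in CHECKS.items() if not check()]
    for name in failures:
        print(f"FLAGGED TO DEVELOPMENT: {name}")
    return not failures

if __name__ == "__main__":
    print("QA sign-off:", "granted" if qa_signoff() else "withheld")
```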

Potential system problems that may result in outages

Global Name Registry has no known system problems that may result in outages. It is Global Name Registry policy to correct all known problems as soon as possible.

Documentation of System Outages

Global Name Registry documents all system outages and performance problems. When experiencing any outage or performance problems, the Global Name Registry operations team will answer each of the following five questions, known as the Five Qs:

The Five Qs of Outages:

1.     What happened?

2.     When did it happen?

3.     What was affected?

4.     What was done to fix it?

5.     What has been done to prevent recurrence?

Global Name Registry will then document the event in one of two ways: 1) in the case of smaller incidents, the event documentation is stored in the event ticketing system, an event reporting system maintained for this purpose; or 2) in the case of major events, a formal report is made to the CTO.
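As an illustration only, the answers to the Five Qs could be captured as a structured record in the event ticketing system. The field names, the sample data and the "major" flag below are hypothetical assumptions about how such a record might look.

```python
"""Hypothetical sketch: an outage record capturing the Five Qs.

Field names, sample data and the "major" flag are illustrative assumptions
about how the event ticketing system might store the answers.
"""
from dataclasses import dataclass, asdict
from datetime import datetime
import json

@dataclass
class OutageRecord:
    what_happened: str          # Q1: What happened?
    when_it_happened: datetime  # Q2: When did it happen?
    what_was_affected: str      # Q3: What was affected?
    fix_applied: str            # Q4: What was done to fix it?
    prevention: str             # Q5: What has been done to prevent recurrence?
    major: bool = False         # assumed flag: major events also go to the CTO as a formal report

    def to_ticket(self) -> str:
        """Serialise the record for the event ticketing system."""
        payload = asdict(self)
        payload["when_it_happened"] = self.when_it_happened.isoformat()
        return json.dumps(payload, indent=2)

if __name__ == "__main__":
    record = OutageRecord(
        what_happened="Single Whois server failed in the load-balanced layer",
        when_it_happened=datetime(2002, 5, 31, 14, 5),
        what_was_affected="No customer-visible impact; capacity reduced",
        fix_applied="Server taken out of production and replaced",
        prevention="Faulty component batch replaced across spare servers",
        major=False,
    )
    print(record.to_ticket())
```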

C17.16. Registry failure provisions.

Please describe in detail your plans for dealing with the possibility of a registry failure due to insolvency or other factors that preclude restored operation.

 

C17.16. Registry failure provisions. 177

Insolvency of Registry Operator. 177

Destruction of UK offices. 178

Emergency Registry Transfer. 179

Conclusion. 181

This chapter deals with Registry Failure: the unlikely event that the registry experiences an event that would preclude further operations (events, even on a catastrophic level, that do not preclude full recovery of the registry are described in section C17.15).

The following registry failure scenarios are described in this chapter:

1.     Insolvency of Global Name Registry.

2.     Destruction of UK offices during working hours (loss of key personnel).

Common to these scenarios is that they have the potential to severely obstruct or prohibit further registry operations. They are described and analyzed in more detail in the following sections.

Insolvency of Registry Operator

Global Name Registry employs well-trained and experienced financial accountants, lawyers, bankers and analysts to administer and maintain Global Name Registry assets. Additionally, Global Name Registry has a Board which includes executive and non-executive directors whose primary role is to protect and further the shareholder value, fulfilling their fiduciary duties as directors of a UK company. Among other tasks, this includes taking the necessary steps to ensure correct and proper accounting and proper analysis of Global Name Registry’s financial status. It is therefore extremely unlikely that Global Name Registry should enter insolvency without fair warning and planning for the insolvency process.

However, there are certain scenarios that can cause unexpected insolvency. These include the loss of major contracts (i.e. the ICANN Agreement), insolvency or discontinuation of services by key suppliers or major customers, lack of access to financing sources, acts of God, or unexpected lawsuits. While each of these events may happen, Global Name Registry tracks the market and has implemented measures to ensure preparedness in adverse situations, so the occurrence of any of them is unlikely to cause insolvency.

With respect to the loss of major contracts, Global Name Registry believes that it maintains good working relationships with all entities with which it has entered into contracts.  Therefore, Global Name Registry is confident that if a material threat to any contract integral to its existence arose, there would be ample time to cure any threat. 

Regarding the discontinuation of key services, in May 2002 Global Name Registry demonstrated its ability to anticipate adverse events in the market by reacting nimbly to the insolvency proceedings of KPNQwest, the co-location service and web provider which had provided a majority of the infrastructure services enabling Global Name Registry to provide registry services to its customers.

To ensure the company’s longevity, senior management, together with the Board of Directors of Global Name Registry Limited, keeps a keen eye on the capital markets, to ensure that prior to Global Name Registry’s reaching profitability, it has continued access to financing sources. 

In the UK, a company is rarely required to cover non-material damages, so a large debt arising from a lawsuit is unlikely provided the legal fees can be covered. On the legal fee side, Global Name Registry employs lawyers and budgets for, and uses, top-ranked law firms to assist Global Name Registry in all legal matters. Global Name Registry is therefore properly resourced to handle lawsuits and other legal matters.

All of this being said, in the unlikely event that Global Name Registry finds itself contemplating or near insolvency proceedings, Global Name Registry would adopt the following procedures:

1.     Enter into discussions with creditors and establish whether creditors would write off debt and continue operations.

2.     Report to all creditors, business partners and others and otherwise fulfill the company’s duties under UK insolvency laws, including informing Registrars and ICANN about the insolvency.

3.     Initiate a transfer of registry operations to another registry operator (as described in “Emergency Registry Transfer” below).

Destruction of UK offices

A complete destruction of the UK offices during working hours, due to 1) an act of God; 2) terrorism targeted specifically at the registry; or 3) another type of man-made destructive event, would have a significant and possibly crippling effect on the Registry due to the potential loss of key personnel.

The event’s consequences would be greatest if the destruction happened during working hours, when most of the Global Name Registry staff and management would be present in the building. The event could lead to serious injury or death of some, most or all of Global Name Registry staff and management and would have a significant effect on normal operations.

However, the hosting center where the main .org servers would be located would remain intact and fully functional even in such an event, including the SRS servers, Whois servers, DNS zone file generation servers, storage, escrow servers, backup servers, monitoring systems, etc. Additionally, Global Name Registry has a 24/7 operations staff that works in shifts. Even a complete destruction of the Global Name Registry offices during office hours would leave 66% of the operations staff intact.

Additionally, Global Name Registry employs staff at its failover location in Norway, a site which has the capacity to run full DNS and Whois service in case the UK main site is taken down. Such staff would not be affected physically (although they obviously would be emotionally) by the catastrophic event.

The emergency plan in this scenario consists of:

1.     Creating an emergency plan to deal with exactly this scenario.

2.     Connecting and informing remote staff, off-duty staff, non-executive board members and investors of the destruction.

3.     Establishing treatment, counsel and care for the remaining staff and board members.

4.     Reporting the destruction to Registrars, ICANN and the UK government.

5.     Assessing whether continued technical operations can be assured until new management is in place.

6.     Assessing whether new management and staff can be, or should be, obtained to re-establish normal operations (customer support, account management, etc).

7.     Depending on the outcome of the previous points,

a.     Fund and assign non-technical operations to outsourced operator, preferably another registry operator until operations can be re-assumed by Global Name Registry; or

b.     Initiate a registry transfer to another registry operator (as described in “Emergency Registry Transfer” below).

In the event of a natural catastrophe that affected Global Name Registry so materially, it is likely that other countries and continents would be similarly affected, and the Internet as such could be down or materially degraded in large parts of the world. This may mean that Global Name Registry, instead of transferring the registry to another registry operator (other operators may be affected as well), will take the time to re-establish the .org registry over several months or years while the world recovers from the event.

Emergency Registry Transfer

In the event of a “Registry Failure,” as defined by this section of the .org Proposal, Global Name Registry believes it has shown in Section C18 of this proposal that a Registry Transfer to another operator is possible. However, a complete transfer may be difficult to initiate and complete in emergency circumstances where possibly only a few weeks or months are available.

As outlined in the proposal’s Section C18, a Registry Transfer is not a task to be taken lightly; it involves meticulous planning and benefits from maximum cooperation from the registry operator from which the .org registry is migrating away.

Should an emergency Registry Transfer be necessary, through escrow arrangements required by the .org ICANN Agreement, ICANN would be able to effect a transfer of the required data to a new registry operator.  It is important to note that Global Name Registry maintains excellent relationships with the other gTLD registry operators which would be sufficiently equipped to handle a transition and subsequent ongoing maintenance of the .org registry in the event of an emergency Registry Transfer.

The following are elements Global Name Registry would provide in an emergency Registry Transfer:

·         Any commercially reasonable resources, efforts and cooperation to ensure that the emergency Registry Transfer could be initiated and completed as smoothly as possible.

·         Escrow files and/or database dumps to the new registry operator.

The procedures and descriptions from Section C18 would be extremely relevant in case an emergency Registry Transfer became necessary. Even given the extent of the catastrophes described in the first part of this document, Global Name Registry believes it could still maintain the technical operation of the TLD in a more limited fashion, either by failing over to the Global Name Registry disaster recovery site or by continuing limited operations on the main site.

Global Name Registry intends to make available an “insolvency fund” that will be built up during the Global Name Registry operation of the .org registry and used exclusively to ensure available resources while the registry is being transitioned in an emergency Registry Transfer. This insolvency fund would be capable of ensuring continued operations of the Global Name Registry services for existing users, including the DNS network, MX and potentially also modifications/updates in the SRS. Additionally, it would ensure cover of personnel and management costs to handle the transition.

Should the entire operations of Global Name Registry be made wholly unusable and impossible to transfer, ICANN will have full access to escrow files stored in a safe location as described in the escrow section of the .org Proposal (Section C17.7), and could regenerate the Global Name Registry registry functions by handing the escrow file over to another registry operator. However, if all Global Name Registry SRS data centers (including the Disaster Recovery Site in another country) had been destroyed, some downtime would be incurred on the SRS, although DNS and MX services would still be fully functional during the SRS downtime.

See the transition section C18 for a more complete response to the registry transfer considerations.

Conclusion

Global Name Registry believes that a Registry Failure scenario is extremely unlikely, and that the probability of needing an emergency Registry Transfer where the Registry Failure could not be foreseen is virtually nil. However, even in such scenarios it would be possible to restore the entire Registry function operated by Global Name Registry while retaining most, if not all, of the vital functions of the Registry.

