Switch ORG Proposal, Appendix B: Registry Concept
1. General description of the registry systems (Zürich and Geneva) The registry model relies on the technical expertise gained from operating the CH and LI top-level domains since 1987, on a financially stable and robust organization, and on the aim of delivering state-of-the-art performance to the ORG community. The application describes a synchronized dual registry system with identical components, one located in Zürich at the Swiss Federal Institute of Technology (ETHZ), the other at CERN in Geneva, connected to each other and to the Internet over redundant gigabit fiber-optic lines. Both systems additionally have local backup and disaster recovery, with copies of the databases stored in a Swiss mountain. The systems are designed to handle exceptionally high loads and to provide equal access for registrars using the RRP and EPP protocols. The proposed design uses a system similar to the one already in use for CH and LI domain name registration, but with many enhancements and improvements. It has been possible to specify and measure the parameters for the ORG TLD based on the system in use (its OT&E components), and this system will serve for further developments and measurements.
2. Physical locations and buildings Two identical and synchronized registry systems will be used, located in two data centers more than 250 km apart: one in Zürich at the Swiss Federal Institute of Technology (ETHZ, www.ethz.ch) and one in Geneva (Meyrin) at the European Organization for Nuclear Research (CERN, www.cern.ch). The following security requirements are met at each location:
Both data centers have protective and preventative measures and procedures against fire and water damage. The main machine rooms are equipped with a sensitive laser-based detection system at ceiling level and a less sensitive system in the false-floor plenums. An on-site fire brigade is alerted if smoke is detected by either system. CERN also has 24x7 on-site operator cover. In both locations there is no (or almost no) water piping in the machine rooms. Zürich and Meyrin (where CERN is located) are not in the lowest earthquake-risk category, but the risk is very low for both sites. Refer to paras. 7 f and g for protective measures to recover from such events.
3. Internet connectivity, hardware architecture a. Internet Connectivity Both sites are located near the core nodes of the SWITCH backbone in Geneva and Zürich. The underlying DWDM infrastructure allows for the installation of a dedicated gigabit Ethernet channel between the two systems. SWITCH has peerings of up to 1 Gbit/s with global transit providers at the TIX and CIXP Internet exchange points in Zürich and at CERN, respectively (http://www.tix.ch/ and http://wwwcs.cern.ch/public/services/cixp/), and with many other co-location carriers at these peering nodes. SWITCH is also part of the community of European educational and research networks connected by the GEANT backbone at 2.5 Gbit/s, which ensures excellent connectivity to the North American educational and research network Internet2 and its peers. The connectivity of the data centers to the SWITCH backbone is fully redundant. b. Hardware Architecture Each of the two identical systems uses a distributed architecture for the different registry components. The load is thus distributed, and the specifications can be met simply by extending individual systems; they are described in detail in the hardware specifications (para. 3 c). Each of the two identical systems can deal with peak loads on its own. This concept provides high reliability, availability and security: on a system or network crash, or on simple unavailability, operation switches over to the second location. The two systems are connected to each other at 1 Gbit/s, and each location has redundant connections to the Internet. Availability is defined by specifying scheduled service outages, but during normal operation or maintenance the registry service will not be interrupted. If the system has to be interrupted, the registry will notify the registrars one week or more in advance. Scalability, or extensibility, is another key issue: with the planned differentiation of the ORG name, additional demand for ORG domain names has to be anticipated.
Each component is designed to be scalable. The gateway, the RRP server and the WHOIS directory service can be scaled by using parallel computer architectures (clustering). Before moving toward any parallel architecture, the registry will upgrade each single hardware component, allowing approximately a doubling of the specified load. The database is realized with Oracle software and is extendable using parallel computer clusters within the proposed concept. The hardware concept uses a gateway and a dedicated RRP server, allowing a smooth transition to EPP: once the EPP draft recommendations become a proposed standard, the gateway can serve both RRP and EPP servers, which will run in parallel. The diagram on the next page shows the proposed registry system. c. Hardware Specifications 2 web servers and 2 gateway servers
Reporting and administration server
2 database servers and 2 RRP servers
2 Whois servers
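The dual-protocol gateway described in para. 3 b above can be sketched as follows. This is an illustrative dispatch only, assuming the gateway distinguishes sessions by their opening exchange; the detection rule and pool names are assumptions, not part of the proposal.

```python
def route_session(first_frame: str) -> str:
    """Route a new registrar connection to the matching server pool.

    EPP (XML over TCP) sessions open with an XML frame; anything
    else is treated as a legacy RRP session. Pool names are
    illustrative placeholders.
    """
    stripped = first_frame.lstrip()
    if stripped.startswith("<?xml") or stripped.startswith("<epp"):
        return "epp-pool"   # parallel EPP servers (planned for 1Q03)
    return "rrp-pool"       # RRP 1.1.0 servers

print(route_session("session id password\r\n"))        # rrp-pool
print(route_session('<?xml version="1.0"?><epp/>'))    # epp-pool
```

Because routing happens per connection, each registrar can migrate from RRP to EPP individually, as foreseen in para. 8 f.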
4. System capabilities and upgrades a. Database capabilities The database size is ca. 75 GB, distributed over 8 discs to ensure a balanced load. The size can be scaled up to 20 times the projected size. Database throughput has been measured on the actual CH and LI registry and extrapolated to the ORG registry size. Calculated throughput ranges from 200 up to 400 add or modification commands per second (average). These values will not be influenced by the synchronization of the lookup tables on the RRP server, as synchronization will be deferred. Oracle database scalability is evaluated for a size of up to 1 TB; if higher capacities are required, the registry will change to Oracle database clusters to cope with such additional requests. Increasing numbers of registered domain names will cause a more than proportional increase in the number of requests; as a starting point, quadratic growth of the number of modification commands has to be expected. b. Reporting capabilities The hardware diagram of the architecture (para. 3 b) shows multiple computers involved in producing reports. A synchronized reporting-only database will be used, so the main and second databases do not need to provide reporting capabilities. The intention is for the reports of the new registry to provide at least the same information as those currently provided by the current operator, VeriSign Inc. (VGRS); it is understood that sample reports will be made available in September 2002. See the transition plan (para. 9) for the time schedule. c. Peak capabilities for larger-than-projected demands Considerations for each component, and measures required for handling continued high demands (larger-than-projected demands), are outlined below. The gateways provide SSL connectivity and forward requests to the RRP server. This simple architecture can deal with about 2000 connections in parallel at any time. Increasing system memory will increase this maximum peak capacity, if required.
The RRP server has a queue which is normally empty. This queue will begin to fill up during larger-than-projected peak demands, and the time for processing a request will increase. If the number of demands remains high, the computers involved can be upgraded to deal with at least twice the projected demands; should further demand be encountered, the new registry will migrate to parallel computer architectures. The database can handle 10 times the current total of ORG domain names (2.7 million). A critical issue is the number of add or modification demands per second. With a hardware upgrade, twice the projected load can be processed; if demands increase still further, an Oracle cluster can be used for the main and second databases. WHOIS directory services are also designed for parallel architecture. They are therefore scalable, using additional parallel computer clusters on the same anycast IP address. With larger-than-projected demands, the only effect on backup will be increased backup times and larger files. The data escrow and backup section (para. 7) mentions an option to change to larger tape systems for storing more data. An online backup system will be used for the databases, and since the registration system remains available during such tasks, there will be no adverse effects for registrars connected to the registry. The personnel count for registrar services (helpdesk) is calculated from the number of anticipated trouble tickets, phone calls and e-mails per unit time and the estimated probability that such requests need to be escalated to back-office personnel. Some estimates of average weekly customer-service volumes can be made from the document .org Metrics Data Package, and these are used as a starting point. For further details see the section on technical and other support (para. 11).
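The queue behaviour described above can be sketched as follows, using the projected average throughput of 400 add/modification commands per second from para. 4 a; the class and method names are illustrative, not taken from the proposal.

```python
from collections import deque

class RRPQueue:
    """Sketch of the RRP server's request queue under peak load.

    The queue is normally empty; during larger-than-projected
    demand it fills up, and the expected processing delay grows
    linearly with its depth.
    """

    def __init__(self, service_rate_per_s: float = 400.0):
        self.pending = deque()
        self.rate = service_rate_per_s  # projected commands/second

    def submit(self, request: str) -> None:
        self.pending.append(request)

    def expected_wait_s(self) -> float:
        """Delay a newly queued request sees before processing."""
        return len(self.pending) / self.rate


q = RRPQueue()
for i in range(800):            # a short burst at twice capacity
    q.submit(f"add example{i}.org")
print(q.expected_wait_s())      # 2.0 seconds of added latency
```

This illustrates why a sustained overload calls for the hardware upgrades and parallel architectures mentioned above: the wait grows without bound unless the service rate is raised.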
5. Billing and collection systems The default principle is: domain name registrations can be made until the combined value of a registrar's bank account balance and bank guarantees reaches zero. Each registrar will be notified when a certain registrar-specific threshold has been reached (these thresholds can be set by the registrars). The main database contains accounting information for each registrar. Security for accounting information is realized by allowing only restricted access to the database, using dedicated users and tablespaces in the Oracle software package. The accounting tables for each registrar are modified (charged or credited) according to payable transactions. Synchronization of the registry system with a dedicated on-site accounting system is ensured by inserting the reporting database in between. Reports are required if the number of payable transactions counted by the registry system and the amount of money on a registrar's account are inconsistent or contested. The dedicated on-site accounting systems perform monthly clearings and bookkeeping tasks. Accounting arrangements with each registrar will be made, enabling registrars to use banks in their neighborhood (Asia, America or Europe). The reporting database and the dedicated on-site accounting system will provide accounting information for monthly and other reports. This information will be accessible at each registrar services office in Singapore, Memphis and Zürich, and registrars can download lists of payable and received transactions and access further accounting information from a dedicated, individually restricted Web site.
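The default billing principle and the threshold notification can be sketched as two simple balance checks. The function names, and the treatment of balance and guarantees as two plain numbers, are illustrative assumptions.

```python
def registration_allowed(balance: float, guarantee: float,
                         fee: float) -> bool:
    """A registration is accepted while the account balance plus
    bank guarantees still covers the transaction fee, i.e. the
    combined funds would not drop below zero."""
    return balance + guarantee - fee >= 0

def threshold_reached(balance: float, guarantee: float,
                      threshold: float) -> bool:
    """Trigger the registrar-specific low-funds notification."""
    return balance + guarantee <= threshold

print(registration_allowed(balance=4.0, guarantee=3.0, fee=6.0))  # True
print(registration_allowed(balance=4.0, guarantee=0.0, fee=6.0))  # False
```

In the proposed system these checks would run against the accounting tables in the main database; the reporting database reconciles the result with the dedicated accounting system.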
6. Publicly accessible lookup / WHOIS service a. General information The hardware architecture diagram in para. 3 above specifies two parallel computers for WHOIS directory services at both locations, using an anycast IP address. No commercial bulk access to WHOIS data will be provided. Upgrading WHOIS directory services to handle increased demands is a linear scaling issue and will be solved with additional parallel computer clusters. To provide an estimate: the system used so far for CH and LI second-level names runs on one computer and can deal with up to 300 WHOIS requests per second. The search engine realized for CH and LI can deal with substrings and presents results within 6 seconds (see https://wwws.nic.ch/reg/domain/searchdomain.cfm). The availability of the registry WHOIS directory service (thin registry concept) will be 99.9%. The thin registry concept employed in the beginning can be changed to a thick registry model if the need arises (see appendix D, Community Concept, for methods to seek input from the ORG community). Coordination with other WHOIS directory services, such as UWHOIS, is already established for CH and LI and will be extended for ORG. SWITCH also closely follows developments using SRV records to locate WHOIS servers, the referral RWHOIS service (RFC-2167) and the WHOIS++ index service. b. WHOIS compliance Compliance with the specifications of NICNAME/WHOIS (RFC-954) is assured. The WHOIS directory service for ORG is provided on one anycast IP address for the parallel working computer clusters to ensure highest availability. New developments for WHOIS or similar directory services will be evaluated and supported. Each registrar will be contractually required to run an independent and full WHOIS directory service and to provide accurate contact information for registrants.
The registry itself runs a thin registry at least in the beginning and its WHOIS directory services will only provide domain name, name servers, registration date information and details of the sponsoring registrar.
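The thin-registry WHOIS output just described can be sketched as follows; the exact field labels are assumptions modeled on common registry WHOIS formats, not taken from the proposal.

```python
def thin_whois_record(domain, nameservers, created, registrar):
    """Build a thin-registry WHOIS reply containing only the data
    the registry actually holds: domain name, name servers,
    registration date and sponsoring registrar. No registrant
    contact data is present, matching the thin-registry model."""
    lines = [f"Domain Name: {domain.upper()}"]
    lines += [f"Name Server: {ns.upper()}" for ns in nameservers]
    lines += [f"Creation Date: {created}",
              f"Sponsoring Registrar: {registrar}"]
    return "\r\n".join(lines) + "\r\n"

print(thin_whois_record("example.org",
                        ["ns1.example.net", "ns2.example.net"],
                        "2002-06-01", "Example Registrar Inc."))
```

Registrant contact data would have to be obtained from the sponsoring registrar's own WHOIS service, which each registrar is contractually required to run.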
7. Security, reliability, availability, registry failure provisions and backup a. System security Physical security of each data center is discussed in section 3 above; security for each registry service is outlined below. All computers providing WHOIS directory service have software firewalls. The SSL port will be used to separate WHOIS access from the internal network. The computers hosting Web sites for registrar-related issues will be accessed via HTTPS, and login requires an individual username and password for each registrar. These computers have restricted IP routing access. The SSL port and a software firewall will be used to distribute and update Web site information. The gateway servers have restricted IP routing access and provide the SSL connections for the registrars. The RRP server processes only the RRP instruction set, with additional improved commands desired by the registrars. All other computers are accessible only locally. The main database, carrying sensitive data, can only be accessed through RRP/EPP server commands. Security for such sensitive data is thus assured; recovery, fault prevention and backup strategies are described in the corresponding sections below. For security reasons, all ports other than those mentioned above are disabled on all computers. b. General availability Availability has been defined to be ca. 99.4% for registry services. This means that disruptions of service will last no longer than 52.56 hours (2.19 days) per year. During normal monthly maintenance work, the second system will take over. The Oracle database is specified for 99.9% availability with redundant systems, which corresponds to 8.76 hours of unavailability per year. Working on the basis that all 3 components (gateway, RRP/EPP server and database) have identical failure rates, half of the unavailability (26.28 hours per year) remains free for additional required outages or unplanned disasters.
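The availability figures above follow directly from the yearly hour budget; a quick check, assuming a 365-day year:

```python
HOURS_PER_YEAR = 365 * 24          # 8760 hours

def downtime_hours(availability: float) -> float:
    """Yearly downtime budget implied by an availability target."""
    return (1.0 - availability) * HOURS_PER_YEAR

print(downtime_hours(0.994))       # ~52.56 h/year, i.e. ~2.19 days
print(downtime_hours(0.999))       # ~8.76 h/year for the database
print(downtime_hours(0.994) / 2)   # ~26.28 h/year left for disasters
```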
Planned outages, when the registry system will not be available, will be announced to the registrars at least one week in advance. Such outages will last no longer than 8 hours per event. The unplanned disasters referred to above are those in which the systems at both sites fail; in all other cases, operation continues immediately from the second system. c. System reliability (QoS) Quality of service can be defined for the proposed system as follows:
The quality aspects specified in a), b), c) and d) above will be provided by the system even at peak capacity, unless demand is significantly above the projected demands described in para. 4 above. The quality aspects specified in e), f) and g) will be covered by a ticket system for registrar requests and by system monitoring tools (see section d below). d. System outage prevention The registry will perform measurements for all registry services except name server performance. The registry system is deemed unavailable if quality points a), b), c) or d) described in the system reliability section above are not fulfilled. The registry uses the Big Brother tool for monitoring network, computer and application status. This tool uses a client/server architecture combined with methods to push and pull data. Each service will have one or more tests of availability and performance. Redundancy is implemented at the architectural level (by multiple systems). In addition to this architectural prevention, each component has a backup for application and data recovery. The possibility of power failure is covered by multiple UPS units on both systems and by other means. All critical components are built with Sun computers; in the case of failure, an on-site service agreement with Sun guarantees a technician on site within 4 hours during working hours. Both locations have additional technical staff for maintaining hardware and repairing failures. e. System recovery procedures The hardware architecture (see diagram in para. 3) shows a second identical system to be used in case of failure or unavailability of the other. Therefore there is normally no need to recover data or system components. Recovery procedures are only needed if both systems are unavailable. In this case each system has recovery strategies, which have been fully tested on the CH/LI system in use.
These strategies guarantee that only the last open transaction will be lost. The critical points for data loss are the centralized databases of each system. SWITCH has technical staff with more than 12 years of Oracle database experience. Knowledge of backup and recovery strategies has been carefully built up at SWITCH, and the new registry will be able to benefit from that. An example of the recovery strategy in the case where all local discs have lost their data: the online backup files are used to recreate the data as of the time the backup was performed; the archived log files are then applied, and Oracle recovers the data for the period up to the time of the crash. Because the transaction log files are very important, multiple mirroring of these files is required, not only on local but also on external discs. If both systems fall victim to force majeure and no hardware can be reused, the following service agreements will be made and the following time statements for restoring the system can be issued:
Time calculations:
f. Data escrow and backup A backup of the database will be performed each day by local tape robots installed at each registry site, while the registry system is in operation. This backup is used for restoring registry data. Additional log files for transactions are used to recover the database to the point at which a crash took place. These additional log files will be mirrored multiple times for optimum security and reliability. This multiple mirroring is realized in hardware using RAID and in software using internal and external discs. Additional software is used to mirror these files on different systems and storage media, such as a NetApp filer. All mirroring is performed immediately, not deferred, so that in case of a crash all mirrored log files are up to date. An external data escrow agent receives daily updates of the entire database, and these files can be used as escrow data (see para. 8 below). The zone file is escrowed to a dedicated secondary name server (C), and the manager of this name server acts as escrow agent for the zone file. It should be remembered that ORG is operated on the thin registry model, where the registry has no data on registrants, just the domain name, name servers, registrar name and certain time information. g. Registry failure provisions An external backup agent will receive a backup of the entire registry database file once or twice a day. SWITCH is in negotiation with MOUNT10, a Swiss-based expert for storage infrastructures, with offices in Germany (Munich, Dresden and Hamburg), Austria (Vienna), Finland (Helsinki, Lappeenranta) and the USA (Houston). This backup data is intended to serve for disaster recovery in case of severe crashes. The recovery data is stored in a converted Swiss military bunker in a Swiss mountain (data fortress) and is secure against earthquakes and most conceivable man-made or natural incidents.
This system is also fast and efficient, and it includes assessment, alignment, planning, certification, implementation and automation processes for the solution at the registry site (see http://www.mount10.ch, section disaster protection, for more information). Other failures of the registry due to physical vulnerabilities are highly unlikely. A failure due to insolvency of the foundation can be discounted given the non-commercial character of the foundation, sound financial management, multiple sources of income and the public-sector interests in the foundation: the Swiss government and eight Swiss cantons are the founders of SWITCH.
8. Description and implementation of RRP and migration to EPP server a. Registry-registrar model and protocol The registry will be a "thin registry" and will use RRP, version 1.1.0 (May 2000), as described in the Revised VeriSign Registry Agreements, Appendix C. The transition from the VeriSign registry system to the systems operated by SWITCH will be transparent to the registrars. b. Details and performance of the RRP implementation For a smooth transition to the new registry, SWITCH will continue to accept the certificates used by the registrars, originating from the appointed commercial certification authority. The RRP can be divided into informational commands and add/modification commands. The informational commands, such as "check", "describe", "status" and "quit", are processed directly on the RRP server using a lookup table synchronized with the main Oracle database. The add/modification commands, such as "session", "add", "del", "mod", "renew" and "transfer", operate on the centralized database. Publicly available data from ICANN and VeriSign Inc. show that informational commands (malformed and correct) account for in excess of 98% of the command stream, and data published by ICANN in August 2001 on "add storm" load is reflected in this approach to system design. This allows load distribution at the protocol layer. The response time for the informational commands will be less than the 3 seconds specified for peak loads; for the add/modification commands it will be less than the 5 seconds specified for peak loads. Up to 500 informational commands per second can be handled, i.e. up to 43 million requests per day; the planned maximum peak is 1000 requests per second. The system can handle up to 400 modification commands per second, i.e. up to 34 million requests per day; the planned maximum peak is 600 requests per second.
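The split of the RRP command set into informational and add/modification commands, and the stated daily capacities, can be sketched and checked as follows; the routing target names are illustrative.

```python
INFORMATIONAL = {"check", "describe", "status", "quit"}
MODIFICATION = {"session", "add", "del", "mod", "renew", "transfer"}

def route(command: str) -> str:
    """Distribute load at the protocol layer: informational commands
    are answered from the RRP server's synchronized lookup table,
    while add/modification commands go to the centralized Oracle
    database."""
    if command in INFORMATIONAL:
        return "lookup-table"
    if command in MODIFICATION:
        return "main-database"
    return "malformed"

print(route("check"))   # lookup-table
print(route("add"))     # main-database

# Sanity check of the stated daily capacities (86400 s per day):
print(500 * 86400)      # 43,200,000 -> "up to 43 million" informational
print(400 * 86400)      # 34,560,000 -> "up to 34 million" modification
```

Since over 98% of the command stream is informational, this split keeps the vast majority of traffic off the centralized database.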
For the purposes of understanding the scaling capabilities: the system could handle more than 10 renew requests per day for each name on the existing ORG domain name list (approx. 2.7 million). c. Procedure for object modification
The add command creates objects like domain names or hostnames. The accepted add request is immediately processed on the main database and the registrar is charged for the registered domain name.
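A minimal sketch of that add flow, with an illustrative in-memory object model; the fee value and all names are placeholders, not figures from the proposal.

```python
def process_add(db, registrar, domain, fee=6.00):
    """Process an accepted 'add' command: create the domain object
    in the main database and charge the sponsoring registrar in one
    step. The fee is a placeholder value."""
    if domain in db["domains"]:
        return "error: object exists"
    db["domains"][domain] = {"registrar": registrar,
                             "status": "registered"}
    db["accounts"][registrar] -= fee   # charge on registration
    return "ok"

db = {"domains": {}, "accounts": {"registrar-a": 100.0}}
print(process_add(db, "registrar-a", "example.org"))  # ok
print(db["accounts"]["registrar-a"])                  # 94.0
```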
All modification commands change the object in the main database after the commands are accepted by either the registry or the registrar. The centralized authoritative database guarantees data integrity and consistency. Transfer and other pending requests are served from a request queue; this queue ensures first-come, first-served handling. The "renew" command is processed immediately and charged to the registrar's account. If a registrar does not renew a domain name before its expiration date, an auto-renew is initiated by the registry and the registrar is charged; the expiration date of the domain is extended by one year. The "transfer" command inserts a pending request into the request list. This request list ensures equal treatment of all registrars on a first-come, first-served basis. An e-mail is sent to the losing registrar, which can approve or reject the transfer during the transfer pending period. After that period the registry automatically approves the transfer and sends a change notification to the losing registrar. The "delete" command updates the domain status from registered to registry-hold. During this period the domain name is no longer in the zone file, and the registrar receives a delete notification. After the delete command there is a delete pending period; after this period the domain name is deleted from the registry database and becomes available for new registration. d. Grace period The add grace period will be 5 days. During this period the following rules are applied (consistent with VeriSign practices):
During the renew/extend grace period (the 5 days following a renew command), the following rules are applied (consistent with VeriSign practices):
The "Auto-renew" grace period will be 45 days. During the auto-renew grace period, the following rules are applied (consistent with VeriSign practices):
The transfer pending period is set to 5 calendar days. During the transfer pending period, the following rules are applied (consistent with VeriSign practices):
During the delete pending period the following rules are applied (consistent with VeriSign practices):
e. Exceptions for the grace periods
If an operation is performed that falls into more than one grace period, the actions appropriate to each grace period apply (with some exceptions, as noted below).
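The overlap rule above can be sketched as follows, using the grace-period lengths stated in section d; the event representation and function name are illustrative assumptions.

```python
from datetime import date, timedelta

# Grace-period lengths from para. 8 d (days)
GRACE_DAYS = {"add": 5, "renew": 5, "auto-renew": 45, "transfer": 5}

def active_grace_periods(events, on):
    """Return every grace period that covers the given date. An
    operation performed inside several periods triggers the actions
    appropriate to each, subject to the stated exceptions."""
    return [kind for kind, start in events
            if start <= on <= start + timedelta(days=GRACE_DAYS[kind])]

events = [("add", date(2002, 7, 1)), ("renew", date(2002, 7, 4))]
print(active_grace_periods(events, date(2002, 7, 5)))  # ['add', 'renew']
```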
f. Migration to provreg standard (EPP) Wampumpeag, LLC (Eric Brunner-Williams, General Manager) is developing a high-performance, 2nd-generation RRP server in C++. This RRP server is scheduled to be operational at SWITCH in 4Q02 (fall 2002). Wampumpeag began work on a reference implementation of a multi-protocol registry backend, supporting the EPP (IETF), RRP (VGRS) and SRS (CORE) protocols over TCP, BEEP, FTP and SMTP, in January 2002. Wampumpeag's EPP server is scheduled to be operational at SWITCH in 1Q03 (winter 2003). SWITCH is aware that RRP is used by many ccTLD registries and their registrars, and that RRP is also important for the smooth continuation of operations of many ICANN-accredited registrars. SWITCH will license its RRP server to ccTLD registries at no cost, either upon transitioning the ORG registry from RRP to EPP or at an earlier point in time. Within the proposed registry system, the parallel usage of both protocols is foreseen: the gateway server will forward demands to the EPP and RRP servers operating in parallel. Intermixed usage, for example polling via the EPP protocol after a transfer request has been made through the RRP protocol, will be assured. The protocol transition is therefore individual for each registrar. For the registrars, an extra Web site with hints and a discussion forum on the transition toward EPP will be provided; an EPP reference client will be posted on the registrar Web site for downloading, and further help on problems discussed in the forum will be provided. The transport layer for EPP is TCP, secured by SSL.
9. Transition plan a. Time schedule of the transition with milestones:
b. Interruption of any registry services The first number in each item of the list below is the expected time; the second is the maximum time for the transition, assuming all transitions proceed as planned.
c. Effect on ORG registrants Registrants will not be able to register domain names until the new registry service is available to the registrars. See b) above for expected and maximum times. d. Effect on Internet users seeking to resolve ORG domain names There will be no interruption in name server services during the transition from the current operator (VeriSign, Inc.) to the new registry. e. Criteria for a good transition
f. Experiences The ccTLDs CH and LI had a system change in December 1999. Zone file updates were interrupted for several days until the new registry system could provide Web based registration access.
a. Mechanism to ensure compliance with ICANN-developed policies and requirements of the registry agreement The new registry operator (SWITCH) will participate in appropriate stakeholder committees established by ICANN and will abide by ICANN-developed policies and the requirements of the registry agreement for the ORG TLD. The mechanisms to ensure compliance are a) to follow closely the developments within the ICANN community and b) to put such policies into force in close cooperation with the ORG community (registrars and registrants) and the SWITCH personnel working on the implementation. Please also refer to appendix D, Community Concept, for specific processes proposed for ORG. If the registry agreement needs to be changed by either party (ICANN or SWITCH), SWITCH will enter into negotiations with ICANN to resolve such issues. b. Provision for equivalent access by accredited registrars The gateway server will have capabilities for all registrars to be connected on a non-discriminating basis. The Web sites for registrars will have a trouble-ticket system and provide online information on open tickets until they are closed. The 24/7 helpdesks will also be accessible on a non-discriminating basis, ensuring that all registrars receive help. The three proposed helpdesk locations (in Switzerland, Asia and America) will allow for close interaction of registrars with the registry. c. Registry Code of Conduct SWITCH recognizes that in most instances the Domain Name System (DNS) is the means by which businesses, consumers and individuals gain access to, navigate and reap the benefits of the global Internet. SWITCH also recognizes that DNS resources need to be administered in a fair, efficient and neutral manner. The following Code of Conduct will therefore be applied by SWITCH:
11. Technical and other support a. Registrar services Three registrar services offices, one in Zürich, one in the US and one covering the Asia-Pacific region, will provide customer care for registrars on a 24/7 basis. These three registrar services will monitor both ORG registries and the worldwide name server network for ORG domain names, and will serve as first-level support for registrars. The headquarters will be located in Zürich, with approximately 12 to 15 FTEs: 8 to 10 for first-level support and marketing, 4 to 5 for accounting. The office in Asia will initially be set up with 4 FTEs and the office in the US with 5 to 6 FTEs. Supported languages will be English, Spanish, French, Chinese, Japanese, Portuguese and German, with each office free to offer additional support. The distributed locations of registrar services allow registrars to use financial institutions in their neighborhood. b. Back-office services Back-office services will provide expert know-how for in-depth technical problem analysis and accounting issues for the ORG registry. The SWITCH network operations (NOC) group comprises 7 persons and will work on all issues related to name servers and the network. The group of system administrators at SWITCH (8 persons) will maintain and upgrade the registration systems, and SWITCH security experts (5 persons) will monitor the systems from a security point of view and take action in case of events. Personnel from the CH and LI front and back offices (more than 40 persons) can be assigned to ORG on short notice should additional human resources be needed, and the SWITCH Web design team (3 persons) will provide help setting up Web pages.
12. Contributions from CORE Internet Council of Registrars CORE has agreed to contribute staff resources, expertise and software for the .org registry run by SWITCH. This collaboration is based on CORE's commitment to the cause of operating shared registries on a not-for-profit basis in the public trust. CORE's contribution helps to raise awareness of the concerns of registrars and providers at large (not just large ICANN-accredited registrars).
CORE has extensive experience in the cooperative development and operation of shared provisioning systems. Developers are affiliated with CORE members, and tasks are distributed to achieve redundant availability of data, systems and expertise. CORE currently operates as a decentralized ICANN-accredited registrar, enabling its members - under equal terms for all members - to register domains for their clients and resellers through CORE's shared registration systems. CORE currently supports registrations under .biz, .com, .info, .name, .net, .org and .us. Since mid-1999, CORE has performed more than 1.1 million new domain registrations for its members. CORE also developed and operates the first shared registry for a restricted (sponsored) gTLD, .aero, under an outsourcing agreement with SITA.
Contributions of CORE include: (1) contribution to the development and extensive testing of RRP and EPP server and client tools, (2) contributions to the development and standardization of critical processes (e.g. bulk data, list requests, notifications) not defined in RRP and EPP, (3) input and critical review for the transition process from VGRS to the new registry, in particular with respect to the handling of accounts, grace and pending periods and NS glue records, (4) input for
CORE Internet Council of Registrars is an international not-for-profit association of registrars. Its legal form is that of an Association as defined in Swiss law (Art. 60-79 CC), established in October 1997 for the purpose of creating a shared domain name registry for the introduction of new TLDs. The mission of CORE is to develop and operate standards
The membership of the CORE association spreads over four continents and 25 countries. It includes highly specialized registrars and major telecommunications or Internet infrastructure and application service providers.