C17.1. General description of proposed facilities and systems. Address all locations of systems. Provide diagrams of all of the systems operating at each location. Address the specific types of systems being used, their capacity, and their interoperability, general availability, and level of security. Describe buildings, hardware, software systems, environmental equipment, Internet connectivity, etc.
Unity Registry will implement a world-class registry system that meets the most stringent testing criteria and the highest standards imposed on any existing registry system. The system we plan to implement will use the extensively tested and already-deployed registry software built by AusRegistry.
The software is fully compliant with the current draft (version 06) of the EPP specification and implements all elements of the protocol. The framework of the software is easily adaptable to implement the required RRP protocol (see Section 17.2).
AusRegistry will also supply a fully RFC 954 compliant WHOIS server that integrates with the proposed registry software. This too has been extensively tested and proven in a production environment. The software is currently in use in the .au Registry and will be maintained and supported by AusRegistry for the entire period of this contract.
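To make the protocol concrete: an RFC 954 WHOIS exchange is simply a TCP connection to port 43, the queried name terminated by CRLF, and a free-text response read until the server closes the connection. The following Python sketch (our own illustration, not part of the AusRegistry software; the function names are hypothetical) shows the shape of such a client:

```python
import socket

def format_query(name: str) -> bytes:
    # RFC 954: a WHOIS query is the bare name terminated by CRLF
    return name.encode("ascii") + b"\r\n"

def whois(name: str, server: str, port: int = 43, timeout: float = 10.0) -> str:
    # Connect to port 43, send the query, then read until the server
    # closes the connection, as RFC 954 specifies.
    with socket.create_connection((server, port), timeout=timeout) as sock:
        sock.sendall(format_query(name))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")
```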
The registration software will be backed by an industry-leading Oracle9i database (see Section 17.3), which will provide the relational storage for the registry system.
Powerful software is useless without powerful hardware to match. Unity Registry will use only the highest-quality, “best of breed” equipment. All equipment used in the registry will be duplicated at the standby location. Every application machine is capable of acting as an EPP, RRP or WHOIS service provider (or any combination). WHOIS and OTE requests will be served from the standby location through the application machines configured there, while EPP and RRP requests will be served from the primary location.
Overview of data flow and load balancing
(The following discussion applies to both standby and primary registry locations.)
As a request from a registrar enters the registry network, it will come in through one of two Cisco 3640 routers redundantly configured using HSRP (Hot Standby Router Protocol).
From here the request will pass through a Packeteer packet shaper, which will apply bandwidth-utilization policies and rate limiting, as well as logging the request. The Packeteer will pass the request to a Cisco LocalDirector 417G load balancer, which will select a machine from the application cluster to service the request using a predetermined policy based on machine utilization and the number of queries serviced. The load balancer constantly monitors each machine in the cluster, and when it detects that a machine is no longer offering its service it removes that machine from the list of those available to service requests.
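The selection and health-check behaviour described above can be modelled as follows (a simplified Python sketch of least-connections balancing with health checks, not the LocalDirector's actual algorithm; the class and method names are illustrative):

```python
class Cluster:
    """Toy model of a load balancer's view of an application cluster."""

    def __init__(self, servers):
        # map server name -> current number of serviced connections
        self.load = {s: 0 for s in servers}
        self.healthy = set(servers)

    def mark_down(self, server):
        # health probe failed: remove the machine from the pool
        self.healthy.discard(server)

    def mark_up(self, server):
        # machine is answering probes again: reinstate it
        if server in self.load:
            self.healthy.add(server)

    def pick(self):
        # least-connections policy over healthy members only
        if not self.healthy:
            raise RuntimeError("no healthy servers")
        choice = min(self.healthy, key=lambda s: self.load[s])
        self.load[choice] += 1
        return choice
```

A failed machine simply stops being a candidate in `pick`, which is exactly the observable behaviour registrars see: requests keep flowing, just to fewer machines.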
The application servers themselves are dual-processor Pentium III based systems, each with 2GB of memory, running the Linux operating system with a version 2.4 kernel and AusRegistry’s EPP/RRP daemon and/or WHOIS daemon. These machines will be connected to the load balancer via a 100 megabit network, and to the database machine via a gigabit backbone.
A Sun Fire 4800 series machine with 6 x 900MHz UltraSPARC III CPUs will power the Oracle database. The Sun Fire will run the Solaris 8 operating system and Oracle 9i, and the entire database will be replicated through the redundant dual gigabit fiber links to the standby site (for more detail see Section 17.3). A Fibre Channel storage disk array will provide the required storage for the database system. The Sun StorEdge 3900, with 655GB of disk running at 15,000 RPM, will provide more than sufficient storage at high speed. The disks will be configured in a RAID 10 configuration, meaning that they will provide both the high-speed read capabilities of a striped system and the reliability and redundancy of mirroring, with the ability to reconstruct a failed disk without the need to compute parity on each disk write. This in turn provides the highest possible read and write speeds in the disk array without sacrificing redundancy.
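The RAID 10 layout can be illustrated with a small address-mapping sketch (our own simplification; real controllers use configurable stripe sizes and disk counts). Logical blocks are striped round-robin across mirror pairs, every write lands on both disks of a pair, and no parity is ever computed:

```python
def raid10_targets(block: int, pairs: int, stripe_blocks: int = 1):
    """Return the mirrored pair of physical disks holding a logical block.

    Blocks are striped round-robin across the mirror pairs; each pair
    holds two identical copies, so a write touches both disks and a
    read may be served by either one (no parity calculation needed).
    """
    pair = (block // stripe_blocks) % pairs
    return (2 * pair, 2 * pair + 1)   # (primary disk, mirror disk)
```

Because each block exists on two independent disks, a failed drive is rebuilt by a straight copy from its mirror, rather than by recomputing parity across the whole stripe as RAID 5 would.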
The Sun Fire allows for total redundancy, as every component, from CPUs to power supplies, can be 'hot swapped' without powering down the machine. The StorEdge storage array can likewise have disk drives inserted and removed without a power-down. This allows for a highly available and highly scalable database solution.
The network through which all this traffic will flow has been designed to ensure that there is no single point of failure, and there is more than one path a packet can follow to reach its destination. Packeteers and load balancers are duplicated for redundancy, so that if one fails the other can continue in its place. They are linked by their own independent serial cables, so that they are not reliant on the network for synchronizing their data (internal caches etc.), and the process of one failing over to the other is transparent to the users of the network.
Management machines are situated at each data center and are used by Unity Registry staff to perform management tasks; they are also the machines that run monitoring scripts, SMTP gateways, SMS messaging services, etc. These machines are the only ones directly accessible via ssh from outside on their “live” interfaces. For a staff member to start or stop services on an application machine, they must first connect to one of these management servers and then connect via ssh to the internal interface of the application machine. Shell access to the registry system will be limited to critical Unity Registry staff.
Two nameservers are also located at each site. One accepts dynamic updates from the database, while the other, using a combination of monitoring and heartbeat scripting, acts as a secondary until it detects a failure of the master. It then assumes the master's IP address and becomes the master. This failover ensures that DNS service will be constantly available.
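The heartbeat-driven takeover described above can be sketched as follows (an illustrative Python model, not the actual monitoring scripts; the timeout value and class names are assumptions):

```python
import time

class Failover:
    """Secondary nameserver that claims the master's IP when heartbeats stop."""

    def __init__(self, master_ip, standby_ip, dead_after=3.0):
        self.master_ip = master_ip
        self.ip = standby_ip          # address this node currently answers on
        self.dead_after = dead_after  # seconds without a heartbeat => dead
        self.last_beat = time.monotonic()
        self.is_master = False

    def heartbeat(self):
        # called each time a heartbeat arrives from the master
        self.last_beat = time.monotonic()

    def check(self, now=None):
        # periodic monitor: promote ourselves if the master has gone silent
        now = time.monotonic() if now is None else now
        if not self.is_master and now - self.last_beat > self.dead_after:
            self.ip = self.master_ip   # assume the master's IP address
            self.is_master = True
        return self.is_master
```

Because the standby takes over the master's IP address rather than requiring clients to switch servers, resolvers querying the failed master see no configuration change.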
Application machine failover will be handled by the load balancer, which will detect machines that are no longer supplying their designated services and remove them from the configuration. When a machine returns to service, the load balancer will detect this and reinstate it in the cluster. For a full discussion of this refer to Section 17.14 below.
If major networking problems occur, or in the unlikely event that all application machines fail at the same time, the load balancers can transparently “fail over” the application services to the secondary site through the dedicated gigabit links. This automated transition is expected to take seconds to perform and should be almost unnoticeable to clients, beyond the need to reconnect to the registry. Service performance may suffer during the failover period, and registry engineers will naturally work non-stop until the issue is resolved.
Should the Sun Fire or its associated storage fail, the application machines will fail over to the secondary-site database. This automated transition should be seamless to registrars and will not even require them to reconnect to the registry services.
A major networking or hardware failure can cause a complete failover of all application and database services to the secondary site. This changeover involves a shift in BGP advertisement to “show” the Internet the new route to our registry services. This transition is expected to take about 15 minutes to complete. We feel that 15 minutes of downtime for this highly unlikely event is an acceptable amount of time for an entire-site failover.
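The effect of the BGP advertisement shift can be illustrated with a toy model of path selection (a sketch only, assuming LOCAL_PREF-style preference between two sites advertising the same prefix; this is not our actual router configuration, and the names are invented for illustration):

```python
class RouteTable:
    """Minimal model of BGP path selection between redundant sites."""

    def __init__(self):
        self.routes = {}   # prefix -> list of (local_pref, next_hop)

    def advertise(self, prefix, next_hop, local_pref):
        # both sites advertise the same registry prefix, with the
        # primary site given a higher preference value
        self.routes.setdefault(prefix, []).append((local_pref, next_hop))

    def withdraw(self, prefix, next_hop):
        # site failure: its advertisement is withdrawn
        self.routes[prefix] = [r for r in self.routes[prefix] if r[1] != next_hop]

    def best(self, prefix):
        # BGP prefers the highest LOCAL_PREF among remaining paths
        candidates = self.routes.get(prefix, [])
        if not candidates:
            return None
        return max(candidates)[1]
```

Withdrawing the primary site's advertisement leaves the standby's path as the best route, which is exactly what the roughly 15-minute site failover accomplishes at Internet scale.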
Location of Equipment
Unity Registry's partners have experience building the infrastructure for both a gTLD (.coop) and a ccTLD (.au). We believe that the correct approach is to build the .org infrastructure in a similar way.
Unity Registry believes that this choice of data centers results in a highly available solution that delivers carrier dispersion and the best bandwidth access to both America and Europe. We have decided not to base our core registry systems in Australia, as the majority of current and prospective registrars are based in the US and Europe.
The main network operation center for Unity Registry will be located at the Business Serve Data Center in Salford, England.
The secondary network operations center will be co-located with the Poptel-managed .coop registry in TeleCity, Manchester, England. A standby network operations center could be commissioned at short notice at AusRegistry’s Melbourne data center.
Figure 3: Primary Data Center Schematic
Main NOC: Business Serve Data Center
The Business Serve Data Centre is located at Salford Quays within the Greater Manchester area, with sufficient distance between the Salford and Manchester TeleCity centres that even in the most extreme circumstances (for example, a terrorist attack) both centres would not be affected at the same time.
Business Serve was identified as the most appropriate company for Poptel to partner with as they have a Dual Triangulated Network across the UK.
Their dedicated facilities at the Manchester site incorporate both physical and technological security measures.
The Business Serve Data Centre is the only NEBS3 Compliant Carrier Class ISP in the UK to date and offers:
· diverse transit connectivity from a number of IP Bandwidth Suppliers
· full 3 phase UPS
· diesel generator built into the basement of the Data Centre
· 24 hour building security guards
· dedicated 24 hour data centre security guards
· steel belted window bars
· physical and electronic tag key access (transactions are logged)
· 24 hour web cam access
Unity Registry’s Operations Centre at Salford is “On-Net” with the Business Serve Data Centre. The current transit capability of Business Serve includes 100Mb/s from Genius and 100Mb/s from Tulia (the backbone currently runs at 1Gb/s but is upgradeable on demand to the maximum of current technology). Business Serve supplies the Unity Registry Operations Centre with two dedicated 100Mb/s links with guaranteed bandwidth and a dedicated burstable ceiling. Both Business Serve and Unity Registry continually monitor traffic utilization for billing purposes and bandwidth utilization for guaranteeing SLAs. Upstream requirements can be fine-tuned on demand, with a supplier turn-around of under 12 hours.
For service and content providers, peering with TeliaNet gives rapid access to Internet content anywhere in the world, via one of the world's largest and best-managed IP backbones.
Poptel’s Network Operation Center (NOC) was designed from the outset with the key objective of ultra high availability. It is located in a purpose built suite in TeleCity Manchester (http://www.telecity.co.uk/), a facilities managed data operations center specifically targeted at ISPs and Telcos. Consultants to the project included the A L Digital Group (http://www.aldigital.co.uk/) one of the UK’s leading experts on high-availability Open Source architectures.
Incorporating a range of equipment from leading manufacturers such as Cisco and Foundry, the network is highly available, has no single point of failure, and incorporates the following features:
· it has a UPS capable of a full hour of operation
· it has a standby diesel generator
· it features 24 Hour manned building security
· it has steel belted window bars
· there is physical and electronic tag key access (transactions are logged)
· it has an air conditioning unit and backup with FM200 Fire Suppressant
Manchester Science Park is a multi building development. Poptel occupies space in two buildings. Williams House (Telecity) is a major peering point in Manchester and purposely designed and constructed for use as a secure location for Highly Available Computer Equipment. It is a multi storey building with the Operations Centre being located on the Ground floor. Security requirements are met by means of CCTV, 24 hour Staff and Electronic KeyTags that have each transaction logged.
Rutherford House is a multi-storey building comprising a range of office suites. A computer network is installed with multiple Servers, PC and peripheral connections and dedicated links to the Operations Centre at Williams House. Staff use both desktop and laptop equipment to access data stored on the servers etc.
The file servers are used to record all of Poptel’s documentation (Board Minutes, change control, project papers, technical papers, etc) paper copies of which are usually filed in filing cabinets at Rutherford House. Poptel makes extensive use of e-mail and electronic distribution of information via the Internet. All information contained within Poptel’s file server is backed-up regularly and stored off-site.
Poptel’s Manchester Operations Center incorporates both an Air Conditioning unit and a fail over air conditioning unit to control temperature within the Center.
FM 200 gas fire suppression is also installed to ensure that in the event of a fire, oxygen is depleted from the Center quickly, without damage to the equipment.
Manchester’s Operations Center includes the following security measures:
24-hour building security guards
CCTV at building perimeter (motion sensing / 24 hr recorded)
24-hour operations staff on-site
Secure access systems: electronic Key Tags (logged transactions), Access Control Lists (ACLs)
CCTV cover of data center (24hr Recorded)
Easynet’s network is truly optical end to end (SDH and DWDM), with both massive capacity and a high fiber count: Easynet Telecom has more available modern fiber bandwidth nationwide than almost any other UK carrier. A minimum of 48 fibers is deployed throughout the network, with a maximum of 240. Each fiber is capable of 80 wavelengths, with each wavelength operating at 10Gbps.
The core network is fully redundant i.e. for each path connection there is a permanently provided standby path. If the main path breaks, the standby is switched over within 50 milliseconds. Network resilience is based on a network architecture that mixes SDH rings and a mesh topology.
Underpinning the physical network is a leading-edge service management center (SMC) that delivers unbeatable network availability, because it has been designed as a single logical network with one management interface and a single, integrated network management system.
Currently, XO has multiple high capacity 622Mbps transatlantic links and connectivity into continental Europe with peering points in Amsterdam, Frankfurt, and Paris.
One of only a few Tier One providers, XO has international and peering arrangements second to none.
With over 200 public and private peering arrangements with other major Internet backbones worldwide, customers benefit from huge flexibility in where and how XO can provision capacity beyond its network. Traffic is exchanged directly so that packets reach their end destination quickly and without any loss.
Poptel, Unity Registry’s partner, is currently working towards BS7799, the British Standards Institution standard for Information Security Management. To that end, Poptel has a cross-organization information-security working group to coordinate good information-security practices across the whole organization. In line with the standard, Poptel is currently defining the scope and controls used for its Information Security Management System. When this process is completed, it will seek certification from a third-party accreditation body.
In addition, Poptel has negotiated with Akita Systems (http://www.akitasystems.co.uk), a third-party computer security company, to provide at minimum an annual scan of its networks to check for security problems; this frequency is increased on demand whenever network changes are carried out. This is in addition to the usual monitoring of security mailing lists for problems in the operating systems and applications, performed by our system administrators. Poptel also uses SNORT (http://www.snort.org/), an Open Source network intrusion detection system, with weekly updates of intrusion signatures, to monitor suspicious network traffic.
Poptel filters connections through the NOC using ACLs on both the border routers and the Ethernet switches. An IDS (intrusion detection system) forms part of the Bandwidth Monitoring servers. IDS checks all packets flowing in or out of the NOC for suspicious activity based on attack signatures such as overflow code or common worms. A rule-set is used to identify events that warrant SMS and email alerts to the security officer.
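Signature-based detection of the kind described can be sketched as follows (the signatures here are hypothetical examples of our own invention, not actual Snort rules, and the function names are illustrative):

```python
SIGNATURES = {
    # hypothetical rule names mapped to byte patterns found in hostile payloads
    "shellcode-nop-sled": b"\x90" * 16,   # x86 NOP sled typical of overflow code
    "cmd-exe-probe": b"cmd.exe",          # common worm/scanner probe string
}

def scan(payload: bytes):
    """Return the names of all attack signatures found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

def should_alert(payload: bytes) -> bool:
    # any signature match warrants an SMS/email alert to the security officer
    return bool(scan(payload))
```

A production IDS adds stream reassembly, protocol decoding and a much larger, regularly updated rule set, but the core decision, matching payloads against known attack patterns and raising alerts on a hit, is the one shown.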
Victoria Building at Salford Quays is a multi storey “Flagship” building, both physically and technically secure. The building contains a number of offices and an open foyer and conference area.
The dedicated suite housing the data centre contains swipe security and key security as well as alarm (24 hour monitored) and closed circuit television. Security guards man the data centre and separate guards control building access.
A separate office suite is furnished with normal office equipment, including desks, filing cabinets and telephones. An internal server room is separately secured with further access limitations and a separate alarm.
A computer network has been established with a number of servers and network connections in each of the offices. Staff make use of desktop and laptop computers to connect to the network and access shared files and peripherals (e.g. printers).
The file servers connect to a Storage Area Network (SAN), used to record all of Business Serve’s documentation (Board Minutes, Committee Minutes, working papers, technical papers, etc.), paper copies of which are usually filed in filing cabinets. Business Serve makes significant use of e-mail and electronic distribution of information via the Internet. Information contained on Business Serve’s network is backed up regularly.
Business Serve provides both an Air Conditioning Unit and fail over air conditioning unit.
Fire and water detection is also installed, and in the event of fire, FM 200 gas fire suppression is provided. Further fire protection is provided by the one-hour fire-rated shell of the data center.
Business Serve offers the following security measures:
To meet stringent security requirements, Business Serve utilises under-floor cabling throughout, and access is restricted per rack cabinet: all sides and doors are fitted, secured and double-locked.
Business Serve provides Dual Phase power to each socket within the data centre, further maintained by UPS for one hour.
In exceptional circumstances, the Diesel generator built into a well beneath the Data Centre will ensure power is maintained indefinitely due to its “In-Flight” refuelling capability.
Further details of Business Serve and their capabilities can be found in Section C13.
Poptel’s Manchester Operations Centre has redundant power to each rack within the data centre, further maintained by UPS for one hour. In exceptional circumstances, a Diesel generator will ensure power is maintained for up to 24 hours.
Unity Registry will adopt all of these procedures in its registry operation.
Inter-Data Center Connectivity
Unity Registry will also connect Poptel's Network Operations Centre at Williams House, Manchester Science Park and Business Serve's Salford facility at BGP and core-switch level using single-mode fibre, providing quad high-speed links at gigabit speed. This allows for much wider carrier dispersion and permits cross-site redundant network configurations.
Poptel’s London Office
Poptel’s Business Office is located at 21-25 Bruges Place.
Power is currently maintained by UPS with enough power to last 4 hours.
The server room contains both climate control and duplicate back-ups of office administration systems. Back-up tapes are also created regularly and kept off-site. In the event of the London office becoming unavailable, the Manchester office, which has a mirrored system, could act as a back-up office until either the London premises were made good or an alternative office suite was sourced in London. Fully equipped suites are readily available, and in this event tape backups could be used to recover office files.
Poptel's London offices are centrally located in Camden Town. Bruges Place is a modern development and the Poptel offices are fully equipped and cabled, including video links to the Manchester offices in Rutherford House. Connectivity to the Manchester offices in Rutherford House is provided by both leased line and ISDN. It has secure entry systems, closed circuit TV to monitor and control access, and is fully alarmed. This location does not house any of the technical infrastructure proposed for the .org Registry.
Bruges Place is a sound and physically secure building. It is a Multi-storey building with the office being located 2nd floor above ground. It contains swipe card access to the building and has swipe security and key security to the office as well as alarm (24 hour monitored). The building contains a number of offices for staff and an open conference area. The building is furnished with normal office equipment, including desks, filing cabinets and telephones. A server room is separately secured with further access limitations and a separate alarm.
A computer network has been established with a file server and network connections in the office. Staff make use of desktop and laptop computers to connect to the network and access shared files and peripherals (e.g. printers).
The file server is used to record all of Poptel’s business documentation (Board Minutes, Committee Minutes, working papers, technical papers, etc) paper copies of which are usually filed in filing cabinets and on the Intranet at Poptel. Poptel makes significant use of e-mail and electronic distribution of information via the Internet. Information contained on the Poptel network is backed-up regularly and stored off-site.
10 Queens Rd
The building is sound and physically secure. It is a Multi-storey building with the office being located 6 floors above ground. It contains swipe card access to the building after hours. It has swipe security and key security to the office as well as alarm (24 hour monitored) and closed circuit television. The building contains a number of offices for staff and an open conference area. The building is furnished with normal office equipment, including desks, filing cabinets and telephones. A server room is separately secured with further access limitations and a separate alarm.
A computer network has been established with a file server and network connections in each of the offices and public areas. Staff make use of desktop and laptop computers to connect to the network and access shared files and peripherals (e.g. printers).
The file servers are used to record all of AusRegistry’s documentation (Board Minutes, Committee Minutes, working papers, technical papers, etc) paper copies of which are usually filed in filing cabinets at AusRegistry and with various executives. AusRegistry makes significant use of e-mail and electronic distribution of information via the Internet. Information contained in AusRegistry’s file server is backed-up regularly.
A back-up office is currently being developed in Sydney (1000 kilometres away) in case of disaster. This office will have a mirrored server and similar security to the Melbourne office.
As a further back-up, copies of the data will be stored overseas. In case of system loss the server and internal network can be replaced by off-the-shelf items obtained from local computer stores. The information contained in the file server will be loaded from off-site back-up copies of the data. Such an occurrence would be considered an inconvenience rather than a catastrophe.
AusRegistry maintains dual Internet access at each of the business offices. Staff laptops also have built-in modems, and dial-up access to the Internet would be available even if Internet communication to the AusRegistry file server were interrupted.
This redundancy provides protection; even a long-term failure of Internet access to AusRegistry’s Business Office would impact the efficiency of the office, not the performance of the registry.
Registry Hardware & Software
As the details of the equipment at each site are identical the following information applies both to the primary and the secondary locations.
Intel Application Servers:
1 x Intel SR1200 1RU Server Chassis (KCR)
1 x Intel PRO/1000 XT Server Adapter (PWLA8490XTL)
1 x Intel SCB2 SCSI Server Board (SCB2SCSI)
1 x Intel SCSI Backplane (BCR1USBPWB)
2 x Intel Pentium III 1.13GHz Processors, 512KB Cache
1 x Intel Slim line CD-ROM/Floppy Combination (AXXCDFLOPPY)
2 x 1GB 133MHz RAM (Intel Certified)
1 x 18GB SCSI Ultra3 HDD (Intel Certified)
(see Appendices A-D for White Papers on Server)
SUN Database Server:
1 x SunFire 4800/4810
6 x UltraSPARC III Cu 900MHz 8MB Cache Processors
1 x 6GB RAM
2 x Gigabit Nic
1 x Storage Board with 2 x 18GB SCSI HDD
1 x Fiber Channel Controller
(see Appendices A-D for White Papers on Storage)
SUN Database Storage:
655GB of 15,000 RPM FCAL Disk Drives
(see Appendices A-D for White Papers on Server)
4 x Cisco Catalyst 2950 10/100 24 port Switch
2 x Cisco Catalyst 3550 10/100/1000 12 port Switch
1 x Fiber GBIC
2 x Cisco Catalyst 4000
1 x 32x10/100/1000 Module
2 x 24x10/100 Module
1 x Fiber GBIC
2 x Cisco Router 3640
Including “Firewalling Plus” IOS
2 x Cisco Local Director 417G
2 x Packeteer Packet Shaper 6500 Series
(See Appendices A-D for White Papers on Networking Equipment)
Category 6 Networking Cables.
Category 5 Networking Cables.
Serial Link Cables.
Two operating systems are involved:
OS (Intel Application Servers) - Linux (Kernel 2.4)
OS (Sun Database Servers) - Solaris 8
Database Software - Oracle 9.2i, Advanced Dataguard
Backup Software (Sun) - Sun Solstice
Backup Software (Linux) - Legato
Web Server - Apache
NameServer - BIND 9
SSL Libraries - OpenSSL