The network infrastructure is designed to be robust, fast, and scalable. It follows a standard, well-tested architecture that has proven highly reliable, with no single point of failure, so that the network can continue operating even if multiple systems fail.
The network will consist of three zones: border, core, and access. At the border, two Cisco 7206 routers will attach to upstream ISPs via Fast Ethernet and DS-3 interfaces. Each router will maintain a Border Gateway Protocol (BGP) peering session with the ISPs attached to it. BGP will be used to announce routes for all public IP addresses used within the location, as well as to receive routing updates from the ISPs. The two routers will be connected to one another via a gigabit Ethernet link and will use Cisco’s Hot Standby Router Protocol (HSRP) to provide automatic failover in the event of a failure. The routers will have public IP addresses, but they will apply extensive access lists as a preliminary layer of network security, preventing large streams of malicious packets from reaching the firewalls.
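The access-list screening described above can be sketched in a few lines. This is an illustrative model only, not actual router configuration; the rule set and prefixes are invented, but the evaluation order (top-down, first match wins, implicit deny at the end) follows the way Cisco access lists behave.

```python
import ipaddress

# Hypothetical border ACL, evaluated top-down with an implicit deny at
# the end. The specific rules are invented for illustration only.
ACL = [
    ("deny", ipaddress.ip_network("10.0.0.0/8")),       # drop spoofed private sources
    ("deny", ipaddress.ip_network("192.168.0.0/16")),
    ("permit", ipaddress.ip_network("0.0.0.0/0")),      # allow everything else
]

def acl_permits(src_ip: str) -> bool:
    """Return True if the first matching rule permits the source address."""
    addr = ipaddress.ip_address(src_ip)
    for action, network in ACL:
        if addr in network:
            return action == "permit"
    return False  # implicit deny
```

A packet sourced from private address space would be dropped at the border before it ever reaches the firewalls.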
Also in the border zone are the firewalls. Currently, Newco plans to run Check Point’s FireWall-1 software on a Sun hardware platform, with StoneBeat software from Stonesoft providing high-availability monitoring and failover capability. All traffic will pass from the routers through the firewalls before moving to the internal network. The firewalls will use public IP address space on their outside interfaces and private IP address space on their inside interfaces.
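The high-availability monitoring can be thought of as a heartbeat check between an active and a standby node. The sketch below is a toy model of that idea, not StoneBeat’s actual mechanism; the class names and the timeout value are invented for illustration.

```python
import time

class FirewallNode:
    """Toy model of one firewall in an active/standby pair."""
    def __init__(self, name: str):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        """Record that this node checked in as alive."""
        self.last_heartbeat = time.monotonic()

    def healthy(self, timeout: float = 3.0) -> bool:
        """A node is healthy if it has checked in within the timeout."""
        return time.monotonic() - self.last_heartbeat < timeout

def active_node(primary: FirewallNode, standby: FirewallNode,
                timeout: float = 3.0) -> FirewallNode:
    """Traffic follows the primary while it is healthy; otherwise fail over."""
    return primary if primary.healthy(timeout) else standby
```

The real software also handles state synchronization between the pair so that existing connections survive a failover; that is omitted here.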
The final component of the border zone is a subzone known as the “demilitarized zone” (DMZ). The DMZ contains publicly addressable servers that sit outside the additional layer of protection provided by the load-balancing equipment in the core zone. No planned services currently make use of the DMZ, but it has been created as a security contingency for unexpected future services.
In the core, Newco will operate two Cisco Catalyst 6509 switches, each equipped with a Multilayer Switch Feature Card (MSFC) and a Policy Feature Card (PFC). These multilayer switches will provide core switching (layer 2) and routing (layer 3) functions. Network paths will exist from both switches through the firewalls to the border routers, and the switches will be connected to each other by redundant gigabit links on separate modules. The 6509s will be configured primarily with gigabit Ethernet interfaces, although one module of Fast Ethernet interfaces will also be provided. The switches will use internal IP address space only.
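The layer-3 function of these switches amounts to a longest-prefix-match lookup: when several routes cover a destination, the most specific one wins. The sketch below illustrates that rule with a hypothetical routing table; the prefixes and next-hop names are invented.

```python
import ipaddress

# Hypothetical core routing table: (prefix, next hop).
# Names and prefixes are invented for illustration.
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "core-peer"),
    (ipaddress.ip_network("10.1.0.0/16"), "access-sw-1"),
    (ipaddress.ip_network("0.0.0.0/0"), "border-fw"),
]

def next_hop(dst_ip: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst_ip)
    matching = [(net.prefixlen, hop) for net, hop in ROUTES if addr in net]
    return max(matching)[1]
```

A destination inside 10.1.0.0/16 follows the /16 route even though the /8 and the default route also match it.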
The other service provided in the core zone is load balancing, handled by two Big-IP load balancers from F5 Networks. Each device will attach to one switch via two gigabit Ethernet connections. (The two connections do not provide redundancy: the Big-IP treats one as its “outside” interface, which will use public IP addresses, and the other as its “inside” interface, which will use private IP address space.) Requests from outside the network for public services such as Whois or the shared registry service will actually be routed to addresses on the “outside” Big-IP interface. The load balancer will process each request and hand it off to an appropriate internal system. The Big-IP can use a number of algorithms to choose the best server for an individual request, but generally requests are distributed across the hosts in a cluster on a round-robin basis. Note that the load balancer will only attempt to process requests destined for legitimate public services; no attempt is made to translate packets and move them into secure internal systems such as the database or NFS storage arrays. The Big-IPs will use a high-availability configuration: at any given time, only one of the load balancers will be active. If a Big-IP or one of its network links should fail, the other load balancer will become active and take over all virtual IP addresses used to provide load-balancing functions.
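Round-robin distribution, the default behavior described above, simply cycles through the members of a server cluster in order. The sketch below models that selection; the hostnames are invented for illustration, and a real Big-IP would also weigh server health and other metrics.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy round-robin distribution across a server cluster."""
    def __init__(self, pool):
        # cycle() repeats the pool indefinitely in order.
        self._cycle = cycle(pool)

    def pick(self) -> str:
        """Return the next server in rotation for an incoming request."""
        return next(self._cycle)

# Hypothetical cluster behind a Whois virtual IP.
lb = RoundRobinBalancer(["whois-1", "whois-2", "whois-3"])
```

After the last host in the pool, the rotation wraps back to the first, so load spreads evenly across the cluster over time.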
Finally, the access layer will consist of a number of Cisco 3548 layer-2 switches. These switches will connect to each of the Catalyst 6509s via a gigabit Ethernet connection, and will use the spanning tree algorithm to prevent switching loops while allowing redundancy in the event of a link or core switch failure. Individual hosts will attach to the 3548 switches via Fast Ethernet; hosts requiring gigabit Ethernet access to the network will attach directly to the Catalyst 6509s.
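The effect of spanning tree on this redundant topology can be sketched as a breadth-first traversal from a root switch: links that would close a loop are left blocked, and only a loop-free tree carries traffic. Real STP elects the root and selects ports via BPDU exchange; this sketch, with invented switch names, shows only the resulting structure.

```python
from collections import deque

# Hypothetical redundant topology: each access switch uplinks to both
# core switches, and the cores are linked to each other.
LINKS = {
    ("core-1", "core-2"),
    ("core-1", "access-1"), ("core-2", "access-1"),
    ("core-1", "access-2"), ("core-2", "access-2"),
}

def spanning_tree(root: str) -> set:
    """BFS from the root bridge; redundant links are left out (blocked)."""
    adjacency = {}
    for a, b in LINKS:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    active, seen, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for neigh in sorted(adjacency[node]):
            if neigh not in seen:
                seen.add(neigh)
                # This link joins a new switch to the tree; keep it active.
                active.add(frozenset((node, neigh)))
                queue.append(neigh)
    return active
```

With four switches, exactly three links remain active; the blocked links sit idle until a failure, at which point spanning tree reconverges and brings one of them into service.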