Technical Summary

NOTE: Due to the confidential nature of the business information, certain proprietary designs and methodologies have been omitted from this technical summary but will be graciously supplied with proper acknowledgement of

2292 Paradise Drive, Tiburon, California 94920

Phone: +1 (415) 437-4700  Fax: +1 (603) 462-3939  E-mail:

Copyright © 2000. All rights reserved.


Design of the hardware and software infrastructure is being undertaken internally under the direction of Dan Sokol. The company is preparing product proposals for software development companies and is evaluating a pool of experienced vendors including IBM, DigiPro, and Sapient. Software maintenance will be managed internally.

dotYP is creating a unique registry outside the traditional sense. All .yp URLs are, by definition, valid, thereby obviating the need to sell them.

  1. Hardware

dotYP's servers will be located in Hong Kong (PRC), London (UK), Bombay (India), Sydney (Australia), and Santa Clara (California). They will be hosted at major co-location facilities (Exodus, iAsiaWorks, IBM). Bandwidth limitations at these facilities are a non-issue.

A. Facilities.

All facilities, both those presently contracted and those under negotiation, include:

    1. Redundant high availability power
      1. Where available, two phases of power from local grid
      2. High-capacity UPS
    2. High velocity air conditioning
    3. Enhanced security
      1. Observed man-trap entry into facility
      2. Gated and guard-monitored access to network operations center
      3. Transparent, fully-enclosed, and locked "cage" for hardware
      4. Monitored walkways
      5. Badge-only access to building
    4. Raised flooring and lowered ceiling

B. Machines.

The machines will be grouped in logical clusters. Multiple clusters may exist in any one YP-Root location. One YP-Root location may serve multiple regions. One YP-Root location will be designated the master back-end server, whose duties will be explained below. Any back-end machine will be able to function permanently or temporarily as a master back-end.

YP Root Locations. Every YP-Root location will contain:

    1. Border/Edge
      1. 2 Cisco 7200 series routers (with plans for 12000 series as need expands)
        1. Running BGP4 to the outside
        2. Running HSRP inside
        3. Connectivity via peering arrangements per location.
          1. ATM nodes (where applicable) and AS numbers pending assignment
        4. Extended access control lists for security
    2. YP-Root server cluster (see below)
    3. Two Cisco PIX
      1. Virtual private network on T1 to administrative WAN
      2. Security between back end and rest of network
        1. Minimize loss of response time due to security overhead without compromising security
    4. YP-Root server back end
      1. IBM RS/6000 SP, HP 3000, HP 9000, or Sun 4500 with fail-over machine/nodes
        1. Number of nodes (IBM)/processors (others) scalable as per need
        2. All four solutions are being considered. Two have already been proven effective
      2. Compiles regional zone tables for each yp.<topic>, yp.<location>.<topic>, and yp.<location> zone
      3. Disseminates local regional zone table updates to master back-end server
        1. [Location acts as start of authority for designated geographic region(s)]
      4. Incorporates zone tables from master back-end server to generate zone files
      5. Verifies database with master back-end server
        1. Validates connectivity with master back-end server and stores hierarchical list for selecting new master back-end server
        2. Hourly version checks
        3. 8 daily hash comparison checks
        4. Daily full database verification and correction
      6. Performs backups
        1. DLT tape jukebox on fiber-channel storage area network
        2. In RS/6000 variant, Magstar Tape Subsystem
        3. Daily full backups will run for the database
        4. Weekly full backup for system
          1. Daily incremental backup for system
          2. Hierarchical primary for master back-end server will perform daily full backup for system
      7. Disseminates completed zone files to local clusters
      8. Stores billing information gathered from middle-layer machines and generates both daily reports and reports on demand for directory publishers
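The version and hash checks above can be illustrated with a short sketch. This is a minimal illustration only, assuming zone tables are held as lists of (name, address) rows; the function and table names are hypothetical, not part of the design:

```python
import hashlib

def table_hash(rows):
    """Order-independent digest of one zone table (a list of (name, address) rows)."""
    h = hashlib.sha256()
    for row in sorted(rows):
        h.update(("%s=%s\n" % row).encode("utf-8"))
    return h.hexdigest()

def verify_against_master(local_tables, master_tables):
    """Return the zones whose local hash disagrees with the master's.

    Disagreeing zones would then be pulled in full from the master and
    corrected, per the daily full verification step.
    """
    return sorted(
        zone for zone in local_tables
        if table_hash(local_tables[zone]) != table_hash(master_tables.get(zone, []))
    )
```

Because the digest is computed over sorted rows, two tables that hold the same entries in different order compare as equal, and only genuinely divergent zones trigger a full transfer.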

YP-Root Server Clusters. Every YP-Root server cluster will consist of:

    1. 2 layer 3 switches (model to be determined)
      1. Extended access control lists for security
    2. YP-Root server front end (3 variants tested)
      1. Variant 1: 2 to 8 Sun E250s
        1. 2 to 4 processors
        2. 1 GB RAM per processor
      2. Variant 2: IBM RS/6000
        1. 2 or more nodes
        2. 2GB RAM per node
      3. Variant 3: 6 to 8 x86 machines running Linux
        1. Two processors
      4. Processes queries
        1. Converts malformed or non-standard queries into <topic>.<location>.<region>.yp
        2. Responds to DNS queries with business telephone directory IP addresses corresponding to appropriate locations and topics
      5. Collects statistical data for billing
        1. Appropriate publisher(s) associated with each listing as provided
    3. YP-Root server middle layer
      1. IBM RS/6000
        1. 4 or more nodes
        2. 2 GB RAM per node
      2. Preprocesses regional data
      3. Generates common-use tables keyed on <topic> and <location> fields within each region
        1. Allows a domain "ls" in either zone to show every name entry within that zone in order of regional proximity to query source
        2. Does not require any changes to DNS protocol (only extends its utility using short times-to-live and a robust, high availability root service)
      4. Gathers billing statistics from front-end machines and preprocesses them for delivery to back end
        1. Multiple billing schemes will be in place to support sliding-scale billing, reducing the cost of entry for developing and newly industrialized nations
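The front end's conversion of non-standard queries into the canonical <topic>.<location>.<region>.yp form might work along these lines. A minimal sketch, assuming the front end falls back to defaults inferred from the query source; the function name and default labels are placeholders, not part of the design:

```python
def normalize_query(name, default_location="city", default_region="region"):
    """Rewrite a .yp query into canonical <topic>.<location>.<region>.yp form.

    The defaults stand in for whatever locale the front end would infer
    from the query source; extra labels beyond three are ignored here.
    """
    labels = [label for label in name.lower().rstrip(".").split(".") if label]
    if not labels or labels[-1] != "yp":
        raise ValueError("not a .yp query: %r" % name)
    topic, *rest = labels[:-1] or ["unknown"]
    location = rest[0] if rest else default_location
    region = rest[1] if len(rest) > 1 else default_region
    return "%s.%s.%s.yp" % (topic, location, region)
```

A partially qualified query such as "Plumbers.yp." would thus be padded out with the inferred location and region before being answered, while a fully qualified query passes through unchanged.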
  2. Database

The database size is limited by a finite organizational nomenclature set (approximately 4,000 topical subcategories) recognized by existing directory publishers throughout the world, a finite number of geographically distinct regions, and, most importantly, a finite group of businesses. The first two lists are common, public knowledge and freely available; the business database will be provided by current publishers as a value-added resource to their paying customers.

  3. Privacy, Security, and Whois

DNS queries to the .yp gTLD will be forwarded to a YP-Root server. Coordination with Whois has yet to be determined, as the data may differ greatly by region. Network intrusion will be monitored at each location both by software and by network operations staff. An intrusion will be treated as a data integrity failure: the tampered system will be disconnected from other local machines, backed up with relevant forensic data preserved to the extent possible, rebuilt from the last backups, and its database verified as per the backup plan above.

  4. Technical Failure Contingency Plans

Each location will be devoid of any known single point of failure, and all devices will be selected and configured for high availability, ease of replacement, and rapid response (where applicable) as noted above. All on-site network operations staff will have simple, step-by-step instructions for disaster recovery.

In the event of a data integrity failure compounded by loss of connectivity, or by failure to verify a secure connection to other locations, databases will run from a last-known-good model until verification and correction are once again available. Network operations staff will log all disasters, real or suspected, along with their circumstances and response times. Once the response to a disaster is initiated, the dotYP central office will be contacted as quickly as possible for process verification; recovery as per plan will take precedence over plan verification in all cases.

Regional account managers will provide telephone and e-mail support. Each region will maintain language support in the dominant regional language (as available) and in English.
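Promotion of a new master back-end from the hierarchical list that each back end stores (see the verification steps under the Hardware section) can be sketched as follows. The location names and selection rule are illustrative assumptions, not the actual procedure:

```python
def select_master(hierarchy, reachable):
    """Return the first reachable location from the stored hierarchical list.

    `hierarchy` is a back end's ordered preference list for a replacement
    master; `reachable` is the set of locations whose secure connection the
    local site can currently verify. Location names are illustrative only.
    """
    for candidate in hierarchy:
        if candidate in reachable:
            return candidate
    # Isolated: run from the last-known-good model until connectivity returns.
    return None
```

Because every back end holds the same ordered list, all sites that can still reach one another converge on the same replacement master without further coordination, and a fully isolated site simply falls back to its last-known-good database.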