Phase 1: The Original NSF-Funded Research Network
The current implementation of the Penn State Research Network consists of a central network core and edge switches (Brocade 6740), funded by the National Science Foundation's CC-NIE program (NSF 12-541). To ascertain the extent of the data-movement problem, research network flows were monitored on the existing network, and the locations where the largest data movements were occurring were identified (e.g., from national atmospheric and environmental data sources to the Walker Building, and from Huck Institute buildings to the Computer Building Data Center). Edge switches were added in those buildings to address the lion's share of research data movement. This approach, however, made use of the Research Network contingent on the researcher's location.
Phase 2: Scaling the Research Network
To remedy the location dependence of the existing Research Network, we have developed the following plan to scale out the network and make it accessible to all researchers with large data sets.
- Premium (Available Now): At the premium level, a researcher, department, or College could purchase additional Brocade 6740 switches to extend the network to their "big data" location. This option is the most similar to an edge connection on the existing Research Network. A group considering it should meet with our Engagement and Implementation teams to discuss the specifications for the device and any physical or geographic limitations. This 20Gb/s option provides two (2) 10Gb/s connections to the Research Network core and up to 48 10Gb/s or 1Gb/s connections to computers and equipment (these can be mixed and matched). This option has the same advanced switching capability as the original Research Network locations.
- Data Center (Available Now): Researchers with equipment already in a Data Center (either the Computer Building Data Center or the forthcoming Data Center on Tower Road) are encouraged to connect to the Penn State Research Network at 10Gb/s via the RN aggregation switches in those Data Centers. This also provides the researcher with direct connections to ICS-ACI compute clusters and resources located in those Data Centers. Provisions can be made for those connections to comply with different levels of Federal and/or granting agency requirements. This option also includes the above-mentioned advanced switching capability.
- Ethernet Fabric (Available Fall 2016: testing complete, in trials): Another high-speed option consists of a 10Gb/s Ethernet Fabric switch. This option provides one (1) 10Gb/s connection from the switch to the Research Network, an additional 10Gb/s fiber edge port, and either 24 or 48 1Gb/s connections to individual research workstations or instruments. It will provide faster access to other points on the Research Network, including the ICS-ACI equipment, and reduce network congestion on a department or College's firewall and local area network (LAN). The existing building wiring plant should suffice for 1Gb/s connections over Category 6/5e copper Ethernet cabling and wall jacks. We are investigating the design of a Federal/granting agency compliant solution on a switch-by-switch basis. Again, this option should be coordinated with our Engagement and Implementation teams to ensure seamless integration into the Research Network.
- Compliance Port (Proof of Concept): At the base level of connectivity, we can provision an individual "research or compliance port" on an existing ITS-managed converged network switch. Using the capabilities of these switches, a wall-jack network connection can be "virtualized" as a connection on the Research Network. This will be the least expensive solution. It is not yet clear whether the virtual port can be made Federal/granting agency compliant at the audit level; further investigation is needed. As with the above solution, this will provide a single 1Gb/s connection to the Research Network.
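To give a feel for the capacity trade-off in the Premium tier above, the following sketch computes the worst-case oversubscription of its two 10Gb/s core uplinks against a fully populated edge. This is an illustration only: it assumes all 48 edge ports run at the 10Gb/s maximum, whereas a real deployment will mix 1Gb/s and 10Gb/s ports, and bulk research flows rarely saturate every port at once.

```python
# Worst-case oversubscription sketch for the Premium (Brocade 6740) option.
# Assumption: all 48 edge ports populated at 10 Gb/s; actual deployments
# mix 1 Gb/s and 10 Gb/s ports, so real oversubscription will be lower.

UPLINKS = 2          # two 10 Gb/s connections to the Research Network core
UPLINK_GBPS = 10
EDGE_PORTS = 48      # up to 48 edge connections per switch
EDGE_GBPS = 10       # assumed: every edge port at the 10 Gb/s maximum

uplink_capacity = UPLINKS * UPLINK_GBPS    # 20 Gb/s toward the core
edge_capacity = EDGE_PORTS * EDGE_GBPS     # 480 Gb/s of edge capacity
oversubscription = edge_capacity / uplink_capacity

print(f"Core uplink capacity:        {uplink_capacity} Gb/s")
print(f"Maximum edge capacity:       {edge_capacity} Gb/s")
print(f"Worst-case oversubscription: {oversubscription:.0f}:1")
```

In practice this means the 20Gb/s uplink is sized for the aggregate research flows observed in Phase 1, not for every port transmitting simultaneously; groups expecting sustained many-port transfers should raise this with the Engagement and Implementation teams.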