In November we attended the 2017 International Conference for High Performance Computing, Networking, Storage, and Analysis, better known as the 2017 Supercomputing Conference (SC17). Each year at the Supercomputing Conference, researchers and vendors from academia and industry assemble SCinet: a supercomputer and network fabric that serves as a testbed and proof-of-concept for new high performance compute, networking, and storage technologies. Premio is one of the vendors that contributed hardware and expertise to the 2017 SCinet, in the form of two FlacheSAN1N4C-D4 data node enclosures, each benchmarking up to 24GB/s (192Gbps) sustained read throughput and up to 3.6M IOPS. That performance gave SCinet the capacity to move large datasets at near-cache speeds.
The Premio FlacheSAN nodes took on several distinct roles during the 2017 Supercomputing Conference: a test point for large data transfer projects, a platform for large flow projects, and an experimental delay/disruption-tolerant network services provider for the SC conference and the SCinet WAN. That first role is probably the most significant for industry observers like us, who were scrutinizing SCinet this year for indications about the future of large data transfer technology.
SCinet has a track record of demonstrating network technologies that have subsequently entered the enterprise information technology marketplace, including Asynchronous Transfer Mode (ATM), the Fiber Distributed Data Interface (FDDI), and the High Performance Parallel Interface (HIPPI). During SC17, Brian Beeler, StorageReview Editor-in-Chief, spoke with members of a Northwestern University research team that works with the FlacheSAN1N4C-D4. Outside of the Supercomputing Conference proper, Premio's FlacheSAN1N4C-D4 is being used to migrate petabyte-scale datasets from CERN's Large Hadron Collider as data is tiered from the collider's Tier 1 site in Chicago, Illinois to Caltech in Pasadena, California.
The enterprise storage sector has been working for years to keep pace with the ever-expanding volume of data being generated and processed. One challenge taking on growing importance is the need to move massive amounts of data quickly and fluidly. It will become increasingly common to transfer large datasets across long geographical distances, between dissimilar platforms, and in ways that can tolerate the delays and other disruptions inherent to complex, large-scale data transfer. In this regard, SCinet is dealing today with data transfer realities that enterprises will face within five to ten years.
To use the Caltech research project as an example, a Large Hadron Collider experiment can easily generate raw sensor data in the neighborhood of 25 petabytes, which is in turn tiered to affiliated research institutions around the globe for analysis. In fact, compute, networking, and storage constraints have proven to be key bottlenecks on the rate at which experimentation can be conducted at the collider, so advancing performance in these areas is of critical concern. On the enterprise side, recent advances in Artificial Intelligence and other Big Data analytics suggest that it won't be long before transferring such massive datasets becomes common in the private sector (think everything from self-driving cars to environmental sensors on mining reclamation sites).
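To make those numbers concrete, here is a quick back-of-the-envelope sketch (our own arithmetic, not a figure from CERN or SCinet) of the ideal time to move a 25PB dataset at different line rates:

```python
# Ideal transfer time for a petabyte-scale dataset at a given line rate,
# ignoring protocol overhead, retransmits, and storage bottlenecks.

PETABYTE = 1e15  # decimal petabyte, in bytes

def transfer_hours(dataset_bytes: float, link_gbps: float) -> float:
    """Hours to move dataset_bytes over a link running at link_gbps."""
    bytes_per_second = link_gbps * 1e9 / 8
    return dataset_bytes / bytes_per_second / 3600

# A 25PB experiment dataset over a single 100Gb/s research WAN link:
print(f"{transfer_hours(25 * PETABYTE, 100):.0f} hours")  # ~556 hours (~23 days)

# The same dataset at 400Gb/s, the aggregate disk-to-disk rate of the
# two Premio DTNs described later in this article:
print(f"{transfer_hours(25 * PETABYTE, 400):.0f} hours")  # ~139 hours (~6 days)
```

Even at rates only SCinet-class networks can reach today, moving a dataset of that size is measured in days, which is why storage and network throughput gate the pace of experimentation.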
The FlacheSAN1N4C-D4 is powered by dual Intel Xeon E5-2600 v4/v3 Broadwell/Haswell processors, which manage a storage array composed of four NVMe PCIe3 x8 Flash drives. The FlacheSAN1N4C-D4 benefits from Intel's 14nm Broadwell process technology, which reduces power consumption and delivers better clock frequencies at equivalent or lower TDPs than its predecessor. The FlacheSAN1N4C-D4 is specified for 24GB/s sustained read throughput and 3.6M IOPS, performance that is essential for moving the massive quantities of data required by SCinet and other cutting-edge high performance computing networks.
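A little arithmetic from those published figures (ours, not a Premio datasheet table) shows what each drive and its PCIe link must sustain for the enclosure to hit that spec:

```python
# What each of the four front NVMe drives must sustain to reach the
# enclosure-level spec of 24GB/s sustained read and 3.6M IOPS.

AGGREGATE_READ_GB_S = 24.0   # enclosure sustained read spec, GB/s
AGGREGATE_IOPS = 3_600_000   # enclosure IOPS spec
DRIVE_COUNT = 4              # front-accessible PCIe3 x8 NVMe bays

per_drive_read = AGGREGATE_READ_GB_S / DRIVE_COUNT   # 6.0 GB/s per drive
per_drive_iops = AGGREGATE_IOPS / DRIVE_COUNT        # 900,000 IOPS per drive

# PCIe Gen3 carries roughly 0.985GB/s of usable bandwidth per lane
# (8GT/s with 128b/130b encoding), so an x8 link tops out near 7.9GB/s,
# comfortable headroom above the 6GB/s each drive must deliver.
pcie3_x8_ceiling = 8 * 0.985
print(f"{per_drive_read:.1f} GB/s per drive vs "
      f"~{pcie3_x8_ceiling:.1f} GB/s PCIe3 x8 ceiling")
```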
Premio FlacheSAN1N4C-D4 Specifications
- Supported CPU: Dual Intel Xeon E5-2600 v4/v3 (Broadwell/Haswell), up to 135W TDP, Socket R3
- Chipset: Intel C612 chipset
- Memory Support: 16x DDR4 ECC RDIMM/LRDIMM, 1600/1866/2133/2400MT/s, max. 1TB capacity
- Expansion Slots: Up to 2x full-height PCIe3 x16, 1x PCIe3 x8 I/O module
- Storage:
- 4x Low Profile Hot-Swappable PCIe3 x8 NVMe
- 2x Internal 2.5-inch bays for OS
- Network:
- Dual GbE Intel i210
- Optional 56Gb FDR InfiniBand QSFP+ or Dual 40GbE Ethernet through I/O Module
- 1x IPMI Management RJ45 port
- 2x PCIe3 x16 slots for other NIC card options
- Power: 1+1 750W AC/DC 80 Plus Platinum Redundant PSU
- Security: Intel Trusted Execution Technology; TPM 1.2
- Supported OS: Windows Server 2012 R2, RHEL 6.5, SLES 11 SP3, Windows Server 2008 R2, VMware ESXi 5.5, FreeBSD 9.2, CentOS 6.5
- Front Panel: Power On/Off switch with LED, Reset switch, NMI switch, Locate switch with LED, 4x LAN LEDs, Warning LED
- Rear I/O:
- DB15 VGA, 2x RJ45 1GbE, 1x Serial DB9,
- 1x RJ45 MGMT, 2x USB 3.0, 2x USB 2.0
- 1x ID LED, optional dual QSFP+
- Cooling: 3x 97mm cooling fans
- Other Features: Dedicated GbE for IPMI 2.0
- Weight:
- Gross: 23 kg / 50 lbs
- Net: 17 kg / 37.4 lbs
- Dimensions:
- System: 31.38” x 19” x 1.75” (L x W x H)
- Packaging: 37.8” x 24” x 9.45” (L x W x H)
- Logistic HTS Code: 8473 30 5100; ECCN: 4A994
- Environmental:
- Operating Temperature: 0°C to 35°C
- Non-Operating Temperature: -20°C to 70°C
- Humidity: 5% to 95% non-condensing
- Compliance: CE, FCC Class A, RoHS 6/6 compliant
The FlacheSAN1N4C-D4 incorporates four front-accessible bays for low profile PCIe3 x8 NVMe storage. These NVMe bays make up the storage pool that holds data in flight in a data transfer node deployment. Two internal 2.5-inch drives host the operating system.
This FlacheSAN array's storage pool is hot-swappable and is certified for use with a variety of NVMe storage devices. Premio publishes an Approved Vendor List, derived from internal Premio laboratory testing, on the FlacheSAN1N4C-D4 product information page.
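To illustrate how a DTN operator might pool those four bays, here is a minimal sketch assuming a Linux host with mdadm; the device names, the RAID 0 layout, and the choice of XFS are our assumptions for the example, not a documented Premio procedure:

```python
"""Illustrative sketch: stripe the four front NVMe bays into one volume
for data in flight on a Linux DTN. Device names, mdadm, and XFS are
assumptions for this example, not a documented Premio procedure."""
import subprocess

NVME_DEVICES = ["/dev/nvme0n1", "/dev/nvme1n1",
                "/dev/nvme2n1", "/dev/nvme3n1"]  # the four hot-swap bays

def create_striped_pool(devices: list[str], md: str = "/dev/md0") -> None:
    # RAID 0 stripes I/O across all four drives so sequential throughput
    # scales with drive count; it offers no redundancy, which is usually
    # acceptable for transient data in flight on a transfer node.
    subprocess.run(
        ["mdadm", "--create", md, "--level=0",
         f"--raid-devices={len(devices)}", *devices],
        check=True,
    )
    subprocess.run(["mkfs.xfs", md], check=True)

if __name__ == "__main__":
    create_striped_pool(NVME_DEVICES)
```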
The rear of the array provides access to the integrated dual GbE Intel i210 interfaces. The array can optionally be configured with 56Gb FDR InfiniBand QSFP+ or dual 40GbE Ethernet through I/O modules. There is also an IPMI RJ45 management port, along with two PCIe3 x16 slots for other NIC options.
Premio's FlacheStreams server line focuses on NVMe Flash arrays as the vehicle for Premio's "balanced architecture" bottleneck-elimination strategy, which balances resources between the NVMe drives and the Ethernet cards. FlacheStreams systems can also provide direct access to the array's NVMe storage across the network fabric to reduce latency.
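A toy model makes the balancing idea concrete (our illustration, not a Premio sizing tool); the dual-100GbE pairing below is an assumed configuration for the PCIe3 x16 slots, not a stated spec:

```python
# Toy model of the "balanced architecture" idea: size the NVMe pool and
# the NICs so that neither side becomes the bottleneck.

def bottleneck(drive_gb_s: float, drives: int,
               nic_gbps: float, nics: int) -> str:
    storage = drive_gb_s * drives    # GB/s the NVMe pool can stream
    network = nic_gbps * nics / 8    # GB/s the NICs can carry (Gb/s -> GB/s)
    limited = "storage" if storage < network else "network"
    return (f"storage {storage:.1f} GB/s vs network {network:.1f} GB/s "
            f"-> {limited}-bound")

# Four 6GB/s NVMe drives against two assumed 100GbE NICs:
print(bottleneck(drive_gb_s=6.0, drives=4, nic_gbps=100, nics=2))
# storage 24.0 GB/s vs network 25.0 GB/s -> storage-bound (nearly balanced)
```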
Building on Premio's experience working with Caltech to tier experimental data from Chicago to Pasadena, the company contributed two FlacheSAN1N4C-D4 arrays to SCinet to serve as Data Transfer Node (DTN) servers. In aggregate, the two Premio DTNs offered 400Gb/s of disk-to-disk throughput for technology demonstrations, including an integrated Jupyter frontend for data-intensive science workflows, network fabrics requiring a programmable API, and the array's DTN monitoring API.
The kinds of workloads being managed by the Large Hadron Collider's experimental data distribution network, as well as those being modeled by SCinet at the Supercomputing Conference, are important bellwethers for how large data transfer technology might be deployed in the private sector in the future. Whether transferring across large distances or between dissimilar information management systems, large data transfers are going to be the new reality.