OCZ Technology has announced that its Deneva 2 Series SSDs will be the storage devices of choice in the upcoming ‘Data-Scope’ research project at The Johns Hopkins University (JHU), where they will be used to build servers for scientific data processing. The project will be led by Dr. Alexander Szalay, Alumni Centennial Professor in the university’s Department of Physics and Astronomy and Director of the JHU Institute for Data Intensive Engineering and Science.
The JHU project comprises a system of almost one hundred servers equipped with hundreds of OCZ Deneva 2 SSDs, combined with conventional hard disk drives in a two-tier storage and computing architecture. These powerful, affordable systems also serve to expose students and researchers to leading-edge technology at an early stage.
One of Data-Scope’s projects is a digital “multiverse,” a database holding the largest set of astronomical objects ever detected, which will allow any astronomer on the planet to perform their own data analyses through remote access to the entire database without needing to download tens to hundreds of terabytes of data. Similar projects are in the works to analyze hundreds of terabytes of genomic data and petabyte-scale numerical simulations of turbulence, cosmology and ocean circulation; all are “Big Data” problems that do not fit traditional methods of scientific computing.
When Data-Scope is completed, it will drive a new approach to science in which discovery is driven by the analysis of large data sets. Scientists must be able to build statistical aggregations over petabytes of data while still exploring the smallest details of the underlying collections. The system’s unique advantage is that it lets users wield it as both a “microscope” and a “telescope” for data, on top of its 6 petabytes of storage capacity, 500 gigabytes per second of sequential I/O throughput and 20 million IOPS. The SSDs also have a smaller operating footprint than traditional HDDs and significantly reduce power consumption while delivering the same IOPS performance.
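To put those figures in perspective: at a sustained 500 gigabytes per second of sequential throughput, a full pass over the 6-petabyte corpus would take roughly 6,000,000 GB ÷ 500 GB/s ≈ 12,000 seconds, or about three and a half hours. That assumes peak throughput is sustained end to end, but it suggests why whole-dataset statistical scans become practical on hardware of this class.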
Randomly accessed data is streamed directly from the SSDs into co-hosted GPUs over the system backplane, taking advantage of General Purpose Computing on GPUs (GPGPU) for scientific and engineering workloads. The two major benefits of this architecture are the elimination of access latency by the SSD tier of the storage hierarchy and the elimination of the network bottleneck, accomplished by co-locating storage and processing on the same server.
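The mechanics of such a pipeline can be illustrated with a minimal CUDA sketch. The file path, chunk size and reduction kernel below are hypothetical stand-ins, not details of the Data-Scope design; the point is simply that data read from a local SSD can be staged in host memory and pushed across the PCIe backplane to the GPU without ever touching a network link.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Hypothetical reduction kernel: accumulate a global sum over one chunk.
__global__ void accumulate(const float *chunk, size_t n, float *total) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(total, chunk[i]);
}

int main(void) {
    const size_t CHUNK = 64 << 20;               // 64 MiB per read (assumption)
    FILE *f = fopen("/ssd/dataset.bin", "rb");   // hypothetical path on a local SSD
    if (!f) { perror("fopen"); return 1; }

    float *host, *dev, *total;
    cudaMallocHost((void **)&host, CHUNK);       // pinned host buffer for fast DMA
    cudaMalloc((void **)&dev, CHUNK);
    cudaMalloc((void **)&total, sizeof(float));
    cudaMemset(total, 0, sizeof(float));

    size_t bytes;
    while ((bytes = fread(host, 1, CHUNK, f)) > 0) {
        size_t n = bytes / sizeof(float);
        // Move the chunk over the local backplane (PCIe), then process it on the GPU.
        cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
        accumulate<<<(unsigned)((n + 255) / 256), 256>>>(dev, n, total);
    }
    cudaDeviceSynchronize();

    float sum;
    cudaMemcpy(&sum, total, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %f\n", sum);

    fclose(f);
    cudaFreeHost(host);
    cudaFree(dev);
    cudaFree(total);
    return 0;
}
```

A production pipeline would go further, using cudaMemcpyAsync with streams and double-buffering so that SSD reads, PCIe transfers and kernel execution overlap, but the co-location principle is the same.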
The JHU Data-Scope project is scheduled to begin this spring.