One of StorageReview’s hallmarks has been our consistent testbeds that enable direct comparison of a wide variety of drives, not just those found within a given review. Our third-generation Testbed has carried us for more than 3.5 years. Testbed4’s era now dawns. The hardware has been updated. Software has been revised. Temperature assessment has been overhauled. There are winners and there are losers. Join us as we take a look at SR’s updated hard drive test suite and see how your favorite disk stacks up!
A Brief History of StorageReview’s Testbeds
When we launched StorageReview back in 1998, one of our principal goals was to maintain a consistent, unchanging test platform that would enable readers to directly compare a wide variety of drives with each other. Back then, when one could find them at all, hard drive reviews were always conducted on “the latest and greatest” machine that the individual reviewer could put together. Though such articles occasionally featured one or two other drives tested in the same machine for comparison, by and large, it was difficult to directly compare contemporary drives with one another. This changed, however, with StorageReview’s debut of Testbed1.
Testbed1:
Our initial testbed was a 440LX-based 266 MHz Pentium II machine featuring an ATA-33 controller operating off of the PIIX4 southbridge. An Adaptec 2940U2W provided Ultra2 (80 MB/sec) SCSI functionality. Windows 95 and NT 4.0 hosted our two benchmarks, ZD’s WinBench 98 and Adaptec’s ThreadMark 2.0.
Despite its flaws, WinBench 98 truly was the best tool to measure single-user disk performance at the time. ThreadMark was our initial attempt to present multi-user performance. Looking back, however, it’s clear that the benchmark came up short.
Testbed1 nonetheless carried us through dozens of drive reviews spanning two years from March 1998 to March 2000. By then, it was clear that both the hardware and benchmarks required updating.
Testbed2:
Plans for Testbed2 initially hinged upon three key factors. First was the impending introduction of Windows 2000, which, at the time, was heralded as the release that would unify Microsoft’s consumer (Win9x) and professional (NT) kernels. Next was Intel’s i820, the first chipset to support Rambus memory, then touted as the RAM of the future. Lastly, Testbed2 was to take advantage of ZD’s WinBench 2000 to update our single-user performance tests.
All three of these updates failed to materialize. Microsoft decided it needed more time to move its consumer operating system to the NT core and instead updated the Win9x line yet again in the form of Windows Me. The i820 chipset suffered from delays and bugs… and Rambus memory, of course, never took off. Finally, it turned out that WinBench 99 was the last iteration of ZD’s venerable component-level benchmark.
Thus, we stuck with a 700 MHz Pentium III paired with Intel’s tried-and-true 440BX chipset. Promise’s Ultra66 provided ATA-66 connectivity while Adaptec’s 29160 delivered Ultra160 SCSI compatibility. We chose to go with Windows 2000 Professional and abandoned the Win9x core entirely, since the former, paired with the NTFS file system, represented the future of desktop machines.
Though we presented WinBench 99 results on Testbed2, our big focus was on testing with IOMeter. On the surface, IOMeter’s highly-customizable nature seemed promising: tinkering with its settings yielded a pattern that we dubbed “workstation,” one that we believed could best represent single-user performance. Unfortunately, we were dead wrong. IOMeter lacks any facility to simulate localized drive access, i.e., the tendency for a given piece of required data to be very close to the last piece of data accessed. This delivered “workstation” numbers that differed very little from the server results returned by IOMeter. Though Testbed2 was originally mapped out for a two-year run, it became painfully clear that our methodology was flawed and that an update was needed as soon as possible. Hence, Testbed2 lasted just 19 months.
Testbed3:
Testbed3 was truly a massive undertaking, the fruition of nearly one thousand hours of research into developing the ideal way to assess single-user performance. The hardware consisted of an i850-based Pentium 4 featuring Promise ATA and SATA controllers and Adaptec’s 29160 SCSI controller. The most significant change, however, was on the software side. Microsoft finally released Windows XP, the version that at last unified both of its OS lines into a single whole. More importantly, Intel’s IPEAK SPT 3.0 provided us with an opportunity to capture and exactingly reproduce the accesses generated by any Windows-based application. IOMeter, which was always a great tool for assessing multi-user performance, remained for our server-side tests.
We thus segued from a “dark age” of sorts, where Testbed2 presented inadequate and downright misleading results, to a renaissance with Testbed3: IPEAK in effect allows us to design our own custom benchmarks, the StorageReview Desktop DriveMarks. Testbed3 debuted in November of 2001 and provided a stable, unchanging platform for nearly four years. It was retired this summer.
This brings us to Testbed4. Unlike our three previous changes, Testbed4 is far from revolutionary. When we moved from Testbed1 to Testbed2 and from Testbed2 to Testbed3, we were motivated primarily by concerns that our methodologies were yielding inaccurate results. Testbed3, however, has stood the test of time: it is only the dated hardware and captures of aging applications that have finally driven us to move on. Testbed4 is an evolutionary update: IPEAK SPT and IOMeter remain our cornerstone benchmarks. We recommend that readers re-read Testbed3’s introduction for extensive information and discussion of IPEAK SPT and its superior ability to assess single-user drive performance.
Hardware
Unlike StorageReview’s three previous testbeds, Testbed4 aims to be significantly more enterprise-oriented as we prepare the machine for regular tests of multiple-drive arrays. Hence, while Testbeds 1, 2, and 3 were all built largely from an enthusiast’s point of view, Testbed4 is predominantly an enterprise-style machine with some enthusiast-like concessions thrown in. Key among these is a low noise floor: while Testbed4 needs to be a beefy machine, low noise levels remain a key factor in subjectively assessing drive acoustics.
Motherboard: SuperMicro X6DAE-G2
For the fourth time in a row, we have chosen to go with an Intel platform, this one based on the 7525 (Tumwater) chipset and a pair of 800 MHz FSB Nocona Xeons. AMD’s Opteron has made considerable strides into the business world, but at the time of assembly and installation, no supporting chipset and motherboard combined the features we sought.
The SuperMicro X6DAE-G2 delivers the wide variety of expansion slots necessary to conduct tests with controllers/HBAs of varying types. A 16x PCIe slot anchors the configuration while a secondary 4x slot provides a path through which newer controllers may be integrated. Three PCI-X slots, all 100 MHz or greater, provide the backbone necessary to incorporate the wide variety of tried-and-true solutions available today.
Processors: 2x Intel Xeons, 3.0 GHz, 800 MHz FSB
In many ways, processor speed remains among the least important factors when it comes to assessing storage subsystem performance. This time around, the duty falls to a pair of 3.0 GHz Noconas, beefy enough to handle most tasks thrown at them.
The fans that accompany the retail processors and heatsinks are quite noisy. Bart Lane of 1COOLPC was kind enough to lend a hand by removing the fans and strapping on a pair of far-quieter Vantec units.
Memory: 4x Crucial 512 MB PC2-3200 Registered DDR2 SDRAM w/ECC
Initially, we sought to keep Testbed4’s RAM to no more than one gigabyte in an effort to maintain as much disk access as possible when capturing key application traces. Such a configuration, however, stood in stark contrast to servers in the enterprise, which routinely feature far more memory. As a compromise, we equipped Testbed4 with 2 GB of RAM.
Display Adapter: MSI NX6600GT-TD128E
Among the least expensive PCIe video cards we could find, this Nvidia 6600GT unit occupies the motherboard’s 16x PCIe slot and provides enough horsepower to run the titles featured in our gaming performance tests.
SATA Host Adapter: Silicon Image SI3124-2 Reference Board
A key drawback of virtually all server/workstation motherboards is the lack of an NCQ-capable SATA controller built into the southbridge. The 7525 is no exception: it relies on the tried-and-true ICH5 rather than the newer, NCQ-capable ICH6. As a result, we had to turn to an add-on controller. Though SR has used Promise adapters in the past, Silicon Image has made great strides and established itself as a de facto standard in the SATA world. The SI3124-2 features full 133 MHz PCI-X support, SATA-2 style 300 MB/sec transfers, and NCQ as well as legacy TCQ support.
SCSI Host Adapter: LSI 21320 Dual-Channel Ultra320 SCSI
Though SR has equipped past testbeds with an Adaptec SCSI HBA, it is clear that in today’s world, LSI rules the SCSI roost. The 21320 provides a robust PCI-X dual-channel Ultra320 solution to Testbed4 as well as delivering base-level mirroring and striping capability.
SAS Host Adapter: LSI SAS1068-IR
An 8-port SAS HBA, this PCI-X board also incorporates some rudimentary software RAID functionality.
Boot Drive: 2x Western Digital Raptor WD740GD
Configuring and running StorageReview testbeds over the years has involved a lot of imaging and restoring using PowerQuest’s Drive Image utility. With Testbed3 we relied on a single Barracuda ATA IV divided into two partitions. Though an ultra-quiet drive, the Barracuda, with its single actuator, struggled through the copy and restore operations so ubiquitous on the machine.
Testbed4 features dual Raptors. WD’s second-generation units are speedy and, just as important, very quiet. Independent use of these two drives (no RAID here!) permits one drive to serve as a constantly refreshed boot unit while the second disk stores all necessary images and archives test results.
Chassis and Power Supply: SuperMicro 743i-645
Housing all of Testbed4’s components is SuperMicro’s 743i, a modular 4U rackmount chassis that also works well as a stand-alone tower. The case integrates perfectly with the SuperMicro motherboard and provides easy access and great cooling for up to eight hot-swappable drives.
The basic 645-watt power supply that accompanies the 743i, surprisingly enough, operates as quietly as the manufacturer boasts. Its healthy output and wonderful acoustics make it the ideal power supply for Testbed4.
Fans: 6x Panaflo 80mm
Testbed4’s use of a rack-mount chassis rather than an enthusiast-oriented case allows for a move from individual drive coolers to a more holistic approach of robust system-wide cooling.
While they certainly move a lot of air, the fans that come with the SuperMicro case are far from quiet. Swapping them out for some of the ever-popular 80mm Panaflos yields a system that maintains decent ventilation even with multi-drive arrays.
Miscellanea:
- Display: Dell UltraSharp 2001FP
- Sound Card: Realtek AC’97 Audio (Built-in)
- Speakers: Cambridge Soundworks Soundworks
- Optical Drive: NEC ND3500A
- Keyboard: NMB RT8255TW+
- Network Interface Card: Intel 82546GB Dual Port Gigabit
- Mouse: Logitech MX1000
Software
- Operating System: Windows XP Professional SP2
- Chipset Driver: Intel 7525 6.3.0.1005
- Display Driver: Nvidia ForceWare 7.1.8.9
- SAS Driver: LSI SAS 3000 series 1.20.5.0
- SCSI Driver: LSI Ultra320 SCSI 200 series 1.20.5.0
- SATA Driver: Sil 3124 SATALink 1.3.0.16
- Sound Driver: Realtek AC’97 5.10.0.5790
Benchmarks
- WinBench 99 v2.0
- Business Winstone 2004 v1.1
- Multimedia Content Creation Winstone 2004 v1.1
- Intel IPEAK SPT v3.0
- IOMeter v2004.7.30
- FarCry v1.3
- The Sims 2 University v1.0
- World of Warcraft v1.4
The 2006 Desktop DriveMarks
With Testbed3, we debuted tests based on capture and playback of actual drive use via Intel’s IPEAK Storage Performance Toolkit v3.0. IPEAK SPT’s WinTrace32 and RankDisk offer the best way to exactingly capture a “real world” sequence of disk accesses which may then be precisely replayed on a variety of target storage subsystems. For more information, check out this writeup. The 2002 Desktop DriveMarks that premiered with Testbed3’s introduction consisted of captures of 8 separate disk activities. The Office DriveMark 2002 represented thirty minutes of typical use by yours truly. The High-End DriveMark 2002 was a capture of Veritest’s 2001 Content Creation Winstone. Due to popular request (though we do not believe such performance is truly relevant), we also included a capture of Windows XP’s bootup process. Finally, captures of 5 different popular PC entertainment titles represented the Gaming DriveMark 2002.
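IPEAK SPT itself is a closed Intel toolkit, so the following is only a minimal sketch of the capture-and-replay idea, not of WinTrace32 or RankDisk internals. It assumes a hypothetical trace format of (offset, size, operation) records and replays them back-to-back against a scratch file, timing the total service time; a real replay tool would also bypass the OS cache and preserve the original request concurrency.

```python
import os
import time

# Hypothetical trace format: (byte offset, request size in bytes, 'R'/'W'),
# in the order the original application issued the requests.
trace = [
    (1_048_576, 4096, 'R'),
    (1_052_672, 4096, 'R'),    # localized: adjacent to the previous request
    (52_428_800, 65536, 'R'),
    (1_056_768, 8192, 'W'),
]

def replay(path, trace):
    """Replay a captured access sequence against an existing scratch file
    and report the total service time. POSIX-only (os.pread/os.pwrite)."""
    fd = os.open(path, os.O_RDWR)
    scratch = bytes(65536)
    start = time.perf_counter()
    for offset, size, op in trace:
        if op == 'R':
            os.pread(fd, size, offset)
        else:
            os.pwrite(fd, scratch[:size], offset)
    elapsed = time.perf_counter() - start
    os.close(fd)
    return elapsed

# e.g. print(replay('scratch.bin', trace))
```

The score such a replay produces is a pure storage-subsystem number: the same sequence of requests, byte for byte, hits every drive under test.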
Our goal with the 2006 Desktop DriveMarks was to update the represented applications while streamlining the test process. The 2002 DriveMarks, replete with captures from five different games, took a considerable amount of time to run on each test drive. Through a reduction in the number of titles represented (from 5 to 3) as well as in the time taken to capture each (30 minutes of gameplay per title in the 2002 DriveMarks as opposed to 5-10 minutes of key disk-intensive gameplay in the 2006 components), we have shaved the time it takes to run a drive through the Desktop DriveMarks by 75%. This in turn will facilitate more timely coverage of newly-released products and permit more in-depth looks at multiple-drive performance. These traces were captured on Testbed4’s Raptor WD740GD boot drive with a single 40 GB NTFS partition running on the Intel motherboard’s ICH5 controller.
StorageReview.com Office DriveMark 2006 – The SR Office DriveMark is a trace recording of Veritest’s Business Winstone 2004 test. It consists of Office XP, Winzip 9.0, and Symantec Antivirus 2003 run in a lightly multitasked manner. While the Winstone suite itself primarily aims to measure aggregate system performance, the use of IPEAK SPT to capture and replay the test’s disk accesses delivers relevant and highly repeatable results that home in on the performance of the storage subsystem. Let us examine how these applications request data from a drive:
How do contemporary drives stack up in the SR Office DriveMark 2006? Note that in these tests, the “Server” and “Desktop” next to Seagate’s SCSI drives indicate Server Mode (Seatools Performance Mode Off) and Desktop Mode (Seatools Performance Mode On) respectively. SATA drives capable of either TCQ or NCQ were tested with queuing both enabled and disabled. “no CQ” indicates that the drive’s queuing was disabled while the “standard” label represents performance with command queuing enabled.
The 2006 Desktop DriveMarks (Continued)
The SR High-End DriveMark 2006 is a trace recording of Veritest’s Multimedia Content Creation Winstone 2004 suite. It consists of Adobe Photoshop 7.01 and Premiere 6.50, Macromedia Director MX 9.0 and Dreamweaver MX 6.1, Microsoft Windows Media Encoder 9, Newtek Lightwave 3D 7.5b, and Steinberg WaveLab 4.0f run in a lightly multitasked manner. While the Winstone suite itself primarily aims to measure aggregate system performance, the use of IPEAK SPT to capture and replay the test’s disk accesses delivers relevant and highly repeatable results that zero in on the performance of the storage subsystem. Let’s take a look at how these applications request data:
What does all this mean for high-level performance? Let us take a look. Note that in these tests, the “Server” and “Desktop” next to Seagate’s SCSI drives indicate Server Mode (Seatools Performance Mode Off) and Desktop Mode (Seatools Performance Mode On) respectively. SATA drives capable of either TCQ or NCQ were tested with queuing both enabled and disabled. “no CQ” indicates that the drive’s queuing was disabled while the “standard” label represents performance with command queuing enabled.
Gaming Tests
With Testbed3’s introduction, we presented access patterns from five different PC games that were subsequently normalized and averaged into the StorageReview Gaming DriveMark 2002. Gaming, after all, represents only one facet of a myriad of applications. By presenting a single gaming score rather than results from five different tests, we balanced our review presentation and avoided overwhelming our articles with figures from entertainment titles. Normalization, however, tends to dilute results between individual test drives and muddies distinctions that may occur. As a result, we are abandoning the averaging process and will instead present individual results from three separate games.
When it comes to first-person shooters, Half-Life 2 and Doom3 are arguably the latest and greatest. Last year’s FarCry, however, remains king when it comes to sheer level loads… the process is agonizingly slow, usually taking over half a minute on even the swiftest drives. FarCry is thus our title of choice to represent the FPS genre. Our FarCry trace consists of a capture of the game’s initial startup and the loading of three separate maps with one minute of gameplay in between each console-initiated map change.
The Sims franchise has been incredibly successful, serving as publisher EA’s cash cow for over half a decade now. Like its predecessor, The Sims 2 generates significant disk access when one switches between neighborhoods and individual households. Equipped with the University expansion pack, The Sims 2 is Testbed4’s “strategy game” entry. Our trace captures the game’s startup, the “import” of a university into the default neighborhood, a switch to the game’s character creator, and a few loads of individual lots.
When it comes to online role-playing games, there’s World of Warcraft and there’s everybody else. With Blizzard’s titan boasting more than 4 million players worldwide, WoW represents Testbed4’s online/RPG entry. World of Warcraft’s most notable disk access arises when zoning: upon initial game startup as well as when switching between the game’s continents or entering individual dungeons. Intermittent requests continue throughout the game, however, as textures for lands and characters are swapped into memory. Our trace consists of the game’s initial load, a change of continents, an entry into an instance, and a long flight that passes over several different lands.
IOMeter (Server/Multi-user) Tests
StorageReview introduced Intel’s IOMeter to the media world in March of 2000 with the launch of Testbed2. The benchmark delivered what was at the time a bewildering array of multi-dimensional results that proved difficult to present to readers in an easy-to-comprehend manner. Towards the end of Testbed2’s life as well as throughout Testbed3’s reign, we attempted to simplify results by offering a weighted average of results under a variety of loads. While the “SR Server DriveMarks” distilled figures down to a single number, a weighted average by definition tends to dilute results and mask differentiation, as the hypothetical example below illustrates. In the year 2000, SR was the only major source presenting IOMeter results. Since then, however, use of the tool has proliferated across the internet. These days, most readers are familiar with the default “File Server” pattern that accompanies the benchmark and understand the varying queue depths under which the pattern may be run. Thus, with Testbed4 we are going to discard the weighted average and return to presenting results over progressively increasing loads.
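To see the dilution at work, consider two invented drives (the figures below are illustrative, not measurements from any drive reviewed here) whose IOPS diverge sharply as queue depth climbs. A weighting that favors light loads collapses a large heavy-load advantage into a few percentage points:

```python
# Hypothetical IOPS figures for two drives at queue depths 1, 4, 16, and 64.
depths  = [1, 4, 16, 64]
weights = [0.50, 0.30, 0.15, 0.05]   # an example weighting favoring light loads
drive_a = [90, 100, 105, 105]        # no command queuing: flat scaling
drive_b = [80, 105, 140, 180]        # command queuing: scales with load

weighted = lambda iops: sum(w * x for w, x in zip(weights, iops))
print(weighted(drive_a), weighted(drive_b))
# 96.0 vs. 101.5 -- a ~6% spread that hides a 71% gap at a depth of 64
```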
Soon after Testbed3’s launch, Intel, which had not updated IOMeter for some time, released the benchmark to the public domain. A team of developers over at sourceforge.net now maintains the project.
At its heart, IOMeter is simply a random access time test. Its strengths arise from its ability to vary the requested block sizes as well as its capability to issue new requests before a previous request is fully serviced, thus driving up the simulated load. IOMeter’s principal weakness remains its lack of facilities to simulate locality. As demonstrated in the IPEAK SPT single-user tests featured in the SR Desktop DriveMarks, the vast majority of requests occur very close to their immediate predecessors. Fortunately, highly-random, non-localized activity simulates the loads under which busier Unix- and Windows-based servers run. IOMeter remains an excellent tool for assessing a drive or an array of drives under multi-user conditions.
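In spirit (and only in spirit; IOMeter’s engine, access specifications, and result set are far richer), the core of the test can be sketched in a few lines: keep a fixed number of random requests outstanding against a target and count completions. The file name and parameters below are our own illustration, not IOMeter’s:

```python
import os, random, threading, time

def run_load(path, block_size=4096, queue_depth=8, seconds=10):
    """Keep `queue_depth` random reads outstanding against `path`
    and report achieved I/Os per second. POSIX-only (os.pread)."""
    blocks = os.path.getsize(path) // block_size
    counts = []
    stop = time.perf_counter() + seconds

    def worker():
        # One outstanding request per worker; queue depth = worker count.
        fd = os.open(path, os.O_RDONLY)
        n = 0
        while time.perf_counter() < stop:
            # Purely random placement: no locality, unlike a desktop trace.
            os.pread(fd, block_size, random.randrange(blocks) * block_size)
            n += 1
        os.close(fd)
        counts.append(n)

    workers = [threading.Thread(target=worker) for _ in range(queue_depth)]
    for w in workers: w.start()
    for w in workers: w.join()
    return sum(counts) / seconds

# e.g., watch IOPS scale (or not) as the load deepens:
# for qd in (1, 4, 16, 64, 128): print(qd, run_load('target.bin', queue_depth=qd))
```

A drive with working command queuing can reorder the deep backlog to minimize seek and rotational overhead, which is precisely why IOPS climb with queue depth in the results that follow.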
With Testbed4, we are formally dropping the included “Web Server” pattern. It differs from the “File Server” configuration mainly by consisting purely of reads. While a static HTTP server may indeed exhibit a preponderance of reads, factors such as virtual memory and databases running on the same drive make a 100% read pattern unrealistic. Today’s web servers are similar to file servers: they feature requests of varying sizes and incorporate both reads and writes.
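For reference, the stock “File Server” access specification mixes request sizes and operations roughly as follows. These figures are quoted from memory as an illustration; the access-spec file that ships with IOMeter is the authoritative source:

```python
# Approximate stock IOMeter "File Server" access specification
# (illustrative; consult IOMeter's shipped spec for exact values).
FILE_SERVER = {
    "read_fraction":   0.80,   # 80% reads, 20% writes
    "random_fraction": 1.00,   # fully random placement
    "block_size_weights": {    # request size (bytes) -> share of requests
        512: 0.10, 1024: 0.05, 2048: 0.05, 4096: 0.60,
        8192: 0.02, 16384: 0.04, 32768: 0.04, 65536: 0.10,
    },
}
```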
A high-level, “real world” benchmark such as IPEAK SPT’s WinTrace32 and RankDisk duo is by definition heavily influenced by “recording side” hardware such as processor speed, amount of RAM, etc. IOMeter, on the other hand, is affected only by the controller, controller driver, and drive(s). As a result, the changes in IOMeter results when moving from Testbed3 to Testbed4 are relatively minor and may be attributed predominantly to our move from Promise and Adaptec to LSI and Silicon Image controllers. Let us examine how today’s drives stack up:
Contrast these results with those from the Maxtor MaXLine III (featuring NCQ), Seagate Barracuda 7200.8 (also NCQ), and Hitachi Deskstar 7K400 (with TCQ). These drives deliver a steady gain in achieved I/Os per second as loads increase. Hitachi’s drive in particular manages to scale upwards ever so slightly even when moving from 64 to 128 outstanding operations. The MaXLine, while starting slightly behind the Barracuda under a linear load, scales much more elegantly than Seagate’s drive and pulls significantly ahead as loads rise. Even the modest Barracuda, however, demonstrates command queuing’s benefit over drives that have none.
The future of SATA includes NCQ. While it is true that drives lacking command queuing such as the SpinPoint P80 and WD’s Caviars aren’t receiving a fair shake here, the fact remains that most, if not all, upcoming SATA drives will incorporate the feature and as a result will scale properly as loads increase. When publishing IOMeter results in our stand-alone drive reviews, we will always present them with available command queuing options enabled.
The Seagate Savvio 10K.1, with its unique 2.5″ form factor (as opposed to the 3.5″ form factor sported by all other enterprise-class drives) and inherently short-stroked design, delivers performance on par with that of the Atlas all the way through a queue depth of 32. The drive regresses a bit at a depth of 64, however, and lags the leaders at 128.
Hitachi’s Ultrastar 10K300 and Seagate’s Cheetah 10K.7 trail the others a bit throughout all depths. The Ultrastar manages to close the gap between itself and the Atlas to a razor-thin one by the time loads hit 128 I/Os. The Cheetah, on the other hand, stumbles like the Savvio at a queue depth of 64, only recapturing rather than improving upon the performance delivered at 32 I/Os by the time loads hit 128.
WD’s Raptor still stands alone as the sole SATA drive featuring a 10,000 RPM spindle speed. It scales relatively slowly through a depth of 8 I/Os, finally delivering a large jump at the 16 mark. From there the Raptor quickly levels off. In the end, though the WD’s performance remains well above that of a 7200 RPM drive, its implementation of ATA TCQ is not robust enough to let it deal with progressively increasing loads as well as that of a SCSI drive.
Environmental Measurements
When Testbed3 debuted, we discussed in some detail the difficulties associated with assessing the various facets of hard drive acoustics. Please revisit this page of the Testbed3 writeup for an overview of potential problems and caveats. With Testbed4, we are continuing with the approach of objectively measuring the sound pressure of a test drive at idle and providing subjective commentary on seek noise. An important change, however, is the distance from which we take idle measurements. Previously, we settled on a distance of 18 millimeters, one close enough to negate most ambient noise. Four years ago, Seagate’s Barracuda ATA IV was the end-all in quiet drives. The proliferation of fluid dynamic bearing motors and heightened attention to acoustic design in general, however, have combined to significantly reduce the average noise level of today’s hard drives. Further, with Testbed4 poised to evaluate notebook drives, the mean noise level of test drives will drop even further. To deliver meaningful differentiation in measured idle noise, we’ve reduced the distance between the microphone and the drive to 3 millimeters. As one would expect, this raises the scores of most drives by a decibel or two. Here’s a look at how current drives stack up with this adjustment:
Power dissipation measurements will replace surface temperature assessment. Readers are (or should be, at any rate) concerned about how much heat a drive will add to their system as a whole rather than the temperature the drive itself reaches. Given the (relatively) closed nature of a system case, a drive that dissipates more power will raise the chassis’s overall temperature more. The actual temperature reached by various drives at a given level of power dissipation, of course, may vary: drive cases may be made of different materials, some drives feature a serrated design that increases surface area, and so on. One should remember, however, that a drive that “feels” hotter may in fact be doing a better job of moving heat energy away from its internals. Conversely, a drive that remains very cool to the touch may in fact be insulating and thus trapping heat in a place where it should not be. Over time, however, all drives release this heat into the chassis environment. It is this holistic temperature that matters. As a result, we are confident that measured drive power dissipation is by far the most useful thermal measurement that StorageReview can offer readers.
To deliver these results, we are using a custom-built instrument devised and assembled by long-time SR reader and forum participant jtr1962. Featuring a standard 4-pin molex input and output, four LCD displays, and a variety of modes, this device enables us to quantify the instantaneous and peak power draw of drives under evaluation. Here’s how today’s drives fare in this new thermal measurement:
The following graph summarizes the peak power draw of today’s drives on both the 5V and 12V rails during spin-up. Generally speaking, the 12V rail achieves its maximum value as the drive’s spindle starts to turn and ramp up to full speed. The 5V rail, however, usually peaks upon head/actuator initialization, well after the 12V rail crests.
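The arithmetic behind these figures is straightforward: total dissipation is simply the sum of voltage times current across the two molex rails. The sketch below is our own illustration of that calculation (jtr1962’s meter performs the equivalent in hardware, and the sample currents are invented):

```python
def drive_power(i_5v, i_12v):
    """Total dissipation in watts from the two rail currents (amps):
    P = 5 * I_5V + 12 * I_12V."""
    return 5.0 * i_5v + 12.0 * i_12v

# Invented spin-up samples as (I_5V, I_12V) pairs: the 12V rail peaks
# early as the spindle ramps, while the 5V rail peaks later as the
# heads and actuator initialize.
samples = [(0.30, 1.80), (0.35, 2.10), (0.70, 1.10), (0.55, 0.45)]
peak_12v   = max(12.0 * i12 for _, i12 in samples)   # 25.2 W
peak_5v    = max(5.0 * i5 for i5, _ in samples)      # 3.5 W
peak_total = max(drive_power(*s) for s in samples)   # 26.95 W
print(peak_5v, peak_12v, peak_total)
```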
Concluding Thoughts
Newer hardware and updated application traces shuffle the status quo a bit. On the high end, Maxtor’s Atlas 15K II continues to deliver the best multi-user performance one can get. When it comes to single-user applications, however, Fujitsu’s MAU3147 generally bests the Atlas. The raison d’être of 15,000 RPM drives is, of course, servers. Thus, the Atlas 15K II retains the leaderboard crown. The MAU3147, however, is the drive to look at for workstations and hot-rod desktops. Things are slightly different when one ratchets it down a notch and looks at 10K RPM drives. Here the Fujitsu MAT3300 scales a bit more robustly than the Atlas 10K V and manages the best server performance under heavy load. Further, Fujitsu’s drive easily offers the best single-user performance one can get out of a 10,000 RPM drive. As a result, the MAT usurps the Atlas 10K’s leaderboard slot.
On the SATA front, Samsung’s SpinPoint P80 and Western Digital’s Caviar WD2500KS make large strides. The SpinPoint in particular posts solid gains and proves that with today’s applications it no longer brings up the rear. As was the case in Testbed3, however, it is Maxtor’s MaXLine III and Hitachi’s Deskstar 7K400 gunning for the top spot. Though the MaXLine remains impressive overall, the Deskstar fares a bit better with the newer hardware and software than it did in the last testbed… and it certainly was no slouch there either. Further, Hitachi’s contender delivers a pleasantly robust legacy TCQ implementation that scales very well when dealing with multi-user loads. So, in the end, the Deskstar 7K400 rises to the occasion and seizes the leaderboard slot.
The WD Raptor WD740GD remains an iconoclast as the sole 10,000 RPM SATA offering on the market. Despite its relative age, the Raptor continues to deliver excellent performance and maintains its position as the fastest SATA drive around.
Where do we go from here?
Testbed4’s formal launch clears the way for quite a few exciting developments. A backlog of drives such as the WD Caviar RE2, the SAS Cheetah 15K.4, the Seagate NL35, the Deskstar 7K500, and more await formal tests. The new testbed’s ability to easily handle multiple drives will allow us to more frequently present results for multi-drive RAID arrays. Finally, our new power measurement suite paves the way for a long-requested review category: notebook drives! Thanks for sticking with StorageReview throughout the years. We look forward to many more years of delivering the most reliable, consistent, and downright best drive reviews around.
Recommended Reading and Discussion:
Reader Comments on the Office DriveMark
Reader Comments on the High-End DriveMark
Reader Comments on Power Dissipation Methodology