We had two Huawei 2488 V5 drawers loaded to the gills with 64 GB RAM modules and Mellanox cards, one storage system with 75 SSDs, two WD HGST Ultrastar Data60 racks with 60 disks each, half a box of transceivers and a pile of optics, antidepressants and stuff like that, a jar of coffee and two dozen peanut bars.
The only thing that bothered us was the optics. There is nothing more helpless, irresponsible and immoral than a person with a pile of mixed-up optics on his hands. And we knew that pretty soon we would plunge into it.
The market offers a variety of solutions for storage virtualization and for building software-defined storage (SDS). Each system affects the infrastructure both positively and negatively. Storage virtualization can expand functionality and increase performance, while SDS makes it possible to switch to standardized equipment, improve scalability, and reduce the cost of the solution.
WD HGST Ultrastar Data60 by Western Digital is a solution we commonly use in our infrastructure. We have already talked about our Acronis-based backup storage and the ease of scaling the Veeam repository; this time, WD HGST Ultrastar Data60 serves as the basis for one of the layers of an SDS solution combined with storage network virtualization. We put together a test rig and once again got back to testing. The task looked simple at first glance; all we needed to do was:
1. Unpack all the stuff
2. Assemble the pyramid
All the equipment was delivered in bulk, and there was no time to take pictures. Unpacking and arranging it took more than half a day, and we needed to move fast. This is a standard procedure for any large company; the only remarkable part is how everything is packed. I'm sure that even if the truck had flipped over, our racks would have reached us safe and sound…
Installation and cabling took one full day. Since we had several WD HGST Ultrastar Data60 racks, the photos are mixed with another installation of 3.2 PB.
The “fish” was placed under WD HGST Ultrastar Data60 on the neighboring rack.
We'll skip the details of how we assembled and installed the racks. Everything is organized conveniently and logically, with great rails and handy cable-management arm (CMA) kits. If everything is assembled correctly, a single engineer can easily re-cable the rack, replace a disk and, in general, do any work short of dismantling the rack.
Let's have a closer look at the architecture of the WD HGST Ultrastar Data60. It has a well-designed and convenient scheme for replacing all enclosure components, plus good airflow. During our test, the disks in the last enclosure (rack units 35-38), in slots 48-59, did not heat above 49°C, while the disks in the first row stayed at a stable 26°C. And during the load test, which lasted a month, we did not lose a single drive, unlike our NetApps, but that is a different story.
- The system combining SDS and storage virtualization functionality runs on two Huawei 2488 V5 servers. Unfortunately, at the moment we cannot disclose which solution it is based on;
- Huawei Dorado 6000 V3 storage system carved into 6 LUNs of 30 TB each, with 3 LUNs allocated to each host. The LUNs were not balanced between the storage controllers;
- Each NetApp E2812 storage system was divided into two disk pools with 3 LUNs of 50 TB. The test involved 2 servers and 2 NetApps because it implied complete mirroring of all data; each host was assigned its own NetApp;
- The WD HGST Ultrastar Data60 disk racks were connected to the servers via MegaRAID SAS 9380-8e controllers with CacheVault LSICVM02 modules, installed in PCI-Express 3.0 x8 slots.
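The symmetry of this layout can be sketched in a few lines of Python. The host and LUN names below are hypothetical placeholders, the capacities come from the list above, and I read "two disk pools with 3 LUNs of 50 TB" as 3 LUNs per pool; adjust if your reading differs.

```python
# Huawei Dorado 6000 V3: 6 LUNs of 30 TB each, 3 per host (from the text)
dorado_luns = {f"dorado_lun{i}": 30 for i in range(6)}

# Two NetApp E2812 arrays, each split into two pools;
# assumption: 3 LUNs of 50 TB per pool
netapp_luns = {
    "netapp1": {f"pool{p}_lun{i}": 50 for p in (1, 2) for i in range(3)},
    "netapp2": {f"pool{p}_lun{i}": 50 for p in (1, 2) for i in range(3)},
}

# Symmetric assignment: 3 Dorado LUNs and one whole NetApp per host,
# which is what full data mirroring between the hosts relies on
hosts = {
    "host1": {"dorado": list(dorado_luns)[:3], "netapp": "netapp1"},
    "host2": {"dorado": list(dorado_luns)[3:], "netapp": "netapp2"},
}

def host_capacity_tb(host):
    """Raw capacity visible to one host, in TB."""
    cap = sum(dorado_luns[l] for l in hosts[host]["dorado"])
    cap += sum(netapp_luns[hosts[host]["netapp"]].values())
    return cap
```

Under these assumptions each host sees the same raw capacity, so neither side of the mirror becomes the bottleneck on space.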
There are three options for configuring disk racks:
1. Huawei SP Boot: time-consuming, tiresome and requires server downtime;
2. MegaRAID StorCLI: flexible but not very convenient to use;
3. MegaRAID Storage Manager: a stable GUI, a must-have for Windows servers.
The WD HGST Ultrastar Data60 disks were configured as RAID 60. Of the 60 disks available, 6 were allocated as global hot spares.
Three RAID 60 disk groups were assembled, each consisting of two spans of 9 disks (18 disks per group); 3 virtual disks were created in each RAID group.
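A minimal sketch of this layout, together with the kind of StorCLI command strings that option 2 above would use. The controller index (/c0) and enclosure ID (252) are hypothetical placeholders that must match the actual hardware, and for brevity one virtual drive per group is shown, whereas the rig carved 3 VDs per group.

```python
# RAID 60 layout described above: 60 slots, 6 global hot spares,
# three RAID 60 groups of 18 disks, each made of two 9-disk spans.
TOTAL_SLOTS = 60
HOT_SPARES = 6
GROUPS = 3
SPANS_PER_GROUP = 2
DISKS_PER_SPAN = 9

DATA_DISKS = GROUPS * SPANS_PER_GROUP * DISKS_PER_SPAN  # 54; 54 + 6 = 60

# Each RAID 6 span loses two disks to parity, so each 18-disk group
# contributes the capacity of 14 data disks
usable_disks_per_group = SPANS_PER_GROUP * (DISKS_PER_SPAN - 2)

def storcli_cmds(ctrl=0, encl=252):
    """Broadcom StorCLI command strings for this layout (placeholders:
    controller /c0, enclosure 252)."""
    cmds = []
    for g in range(GROUPS):
        first, last = g * 18, g * 18 + 17
        cmds.append(f"storcli64 /c{ctrl} add vd type=raid60 "
                    f"drives={encl}:{first}-{last} pdperarray={DISKS_PER_SPAN}")
    # The six remaining slots become global hot spares
    for slot in range(DATA_DISKS, TOTAL_SLOTS):
        cmds.append(f"storcli64 /c{ctrl}/e{encl}/s{slot} add hotsparedrive")
    return cmds
```

The arithmetic also shows the trade-off: with two parity disks per span plus the hot spares, 18 of the 60 disks are not available for data.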
Overall, we managed to build a symmetrical distribution of disk resources across the servers, with full mirroring of resources.
What happened next?
Cases, cases, cases…
Will we be able to put the new scheme into production? The question remains open. There are still concerns about the storage virtualization layer and the SDS solution.
It is too early to speak of real load-testing results. I can only note that we did not get the performance boost we had counted on, and reached neither the calculated bandwidth nor the calculated IOPS, either for the installation as a whole or for each layer separately.
We are sure that our architects, together with the vendor of the storage virtualization system, will be able to resolve all technical issues, and then we will share the test results and show real-life examples of the solution within a complex infrastructure. IMHO, the most reliable principle for building an infrastructure is still the classic one, based on time-tested solutions: it is more practical and brings more advantages to our customers.