I’m really glad I had the opportunity to participate in a Redbooks residency on IBM FlashSystem V9000 and VMware best practices (it was actually one of the last things I did as an IBMer). Until the Redpaper is released to the public, I decided to write a sneak peek with some guidelines so you can get the best of it even now.
For those who don’t know it, IBM FlashSystem V9000 is an all-flash array from IBM, and I can tell you it is most probably the best-performing array out there right now. To give you an even better idea, the V9000 is basically an IBM FlashSystem 900 with two IBM SAN Volume Controller (SVC) nodes packaged and sold together ;). And if that still hasn’t rung a bell, FlashSystem 900 is an absolute beast among all-flash arrays when it comes to performance; however, it has one small issue – it is kind of “dumb”.
Therefore FlashSystem V9000 comes with additional out-of-the-box features like:
- Thin Provisioning
- Data Migration
- Real-Time Compression (RTC)
- Remote Mirroring
Additionally, you can buy those features for your existing external storage 😉
Some of you may wonder where deduplication is. Unfortunately, it is not there; nothing is perfect, so you will have to be satisfied with better performance compared to the vendors who offer it.
IBM FlashSystem V9000 General design guidelines for performance
- Use one mdisk group per flash storage enclosure
- For optimum performance use four (redundant) paths to your LUN
- Use one host object per host defined in the storage. Use more only if you need to reduce the number of paths, i.e. when you have more than 2 HBA ports in your server
- To get the best Real-Time Compression performance, use at least 8 compressed volumes (LUNs) per V9000. Regardless of what salespeople tell you, creating one big volume is not a good idea from a performance point of view (not even talking about the VMware point of view). There are 8 threads dedicated to RTC, and one volume can be handled by only one thread.
- Use Round-Robin as multipathing policy
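Most of these recommendations can be verified from the ESXi side. A quick sketch, assuming a volume presented from the V9000 (the device ID naa.xxxx is a placeholder, and your path counts will differ):

```shell
# List all devices claimed by NMP, with their SATP, path selection
# policy and working paths - you want to see VMW_PSP_RR here
esxcli storage nmp device list

# Show the paths to a specific device; for optimum performance
# you should count four of them per LUN
esxcli storage core path list -d naa.xxxx
```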
I definitely recommend checking out our paper once it is published if you want to know more.
ESXi obviously comes with some preconfigured defaults which work great most of the time for standard environments. I’m not a huge fan of changing defaults, but sometimes it is needed if you want to get the best out of it.
Consistent LUN numbering
Although since ESXi 5.0 (I think) it is no longer required to have the same LUN number across the whole ESXi cluster, it is still recommended to keep it consistent. It is required if you are using RDMs with MSCS clustering.
If you are for whatever reason using an ESXi version prior to 5.5, you would have to change it manually.
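To audit LUN numbering consistency, you can compare the LUN number reported per path on each host. A minimal sketch (the device ID is a placeholder):

```shell
# Show all paths with their runtime names; the "L" part of
# vmhbaX:C0:T0:L5 is the LUN number presented by the array
esxcli storage core path list | grep "Runtime Name"

# Or inspect a single device
esxcli storage core path list -d naa.xxxx
```

Run this on every host in the cluster and make sure the same device shows up with the same LUN number everywhere.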
Round-Robin path switching
By default ESXi switches paths after every 1,000 I/Os, which generally works fine in big environments with lots of LUNs and VMs. However, for some workloads, especially when you are dealing with a single volume, you can drastically improve your storage latency and throughput by decreasing this value.
You can do it for all volumes presented from the V9000 (you will have to reboot the ESXi host for it to apply to already-present volumes); note that this will actually change it for other IBM Storwize-based systems too:
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "IBM" -M "2145" -P "VMW_PSP_RR" -O "iops=1"
Or per LUN:
esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device naa.xxxx
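You can then verify that the new setting took effect; a sketch, assuming naa.xxxx is one of your V9000 devices:

```shell
# Show the round-robin configuration of a single device;
# "IOOperation Limit: 1" confirms the change
esxcli storage nmp psp roundrobin deviceconfig get -d naa.xxxx

# List the SATP claim rules and check that the custom IBM/2145 rule is there
esxcli storage nmp satp rule list | grep 2145
```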
Adapter queue depth
This is something you should change only if you know that it is already your bottleneck; increasing this value can increase throughput, but it can also have a negative impact on latency.
Before changing anything, check your queues and their current utilization, and make sure you understand what each queue does.
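As a rough sketch of how to inspect and change the HBA queue depth (the module and parameter names below are assumptions, as they depend on your HBA driver, e.g. qlnativefc for QLogic or lpfc for Emulex):

```shell
# Live queue statistics: run esxtop, press "d" for the disk adapter view,
# and watch AQLEN (adapter queue length) and ACTV (active commands)
esxtop

# Show the current parameters of the HBA driver module
esxcli system module parameters list -m qlnativefc

# Increase the per-LUN queue depth (QLogic example; a reboot is required)
esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=64"
```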
vStorage APIs for Array Integration (VAAI)
This is enabled by default, but if for some reason it is not in your environment, make sure you enable it.
Atomic test-and-set locking (ATS) especially is a must, but accelerated init and copy will not hurt you either.
Important: do not forget to disable ATS Heartbeat feature if you are running vSphere 5.5 U2 or later.
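To check the VAAI status per device, enable the primitives if needed, and disable the ATS heartbeat, something like the following should do (the device ID is a placeholder):

```shell
# Check which VAAI primitives a device supports
esxcli storage core device vaai status get -d naa.xxxx

# Make sure the three primitives are enabled (1 = enabled)
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 1
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 1
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1

# Disable the ATS heartbeat (vSphere 5.5 U2 and later)
esxcli system settings advanced set -i 0 -o /VMFS3/UseATSForHBOnVMFS5
```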
HyperSwap
Make sure all your VMs running on a HyperSwap volume run on hosts at one site only; to do this, create and maintain DRS “should run” rules for your VMs based on datastores. HyperSwap has an active-active architecture which dynamically switches the preferred site based on the I/Os issued. Obviously, you will suffer performance issues when issuing I/Os to a single volume from both sites at the same time.
Dead Space Reclamation
Unfortunately, FlashSystem V9000 does not support SCSI UNMAP for dead space reclamation when using thin provisioning; however, there are still ways to do it pretty easily.
As always, the first step is to zero out all the dead space you want to reclaim.
- To do this from the guest operating system you can use a tool from Microsoft called “sdelete”
- If you want to do it on a VMware datastore, you can simply create and then delete a new thick eager-zeroed virtual disk (VMDK) sized to the free space you want to reclaim, of course. You can do it from the GUI by creating a new virtual machine or adding a new disk to an existing one, or you can use vmkfstools from the console.
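From the console, the zeroing step can be sketched with vmkfstools (the datastore path and size are examples; size the disk to the free space you want to reclaim, minus a safety margin):

```shell
# Create an eager-zeroed thick disk, which writes zeros across its whole size
vmkfstools -c 50G -d eagerzeroedthick /vmfs/volumes/Datastore1/zerofill.vmdk

# Then delete it again; the zeros stay on the array and can be reclaimed
vmkfstools -U /vmfs/volumes/Datastore1/zerofill.vmdk
```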
If you are using Real-Time Compression on your volumes, then your work is done, as RTC will reclaim the space automatically!
In the case of plain thin-provisioned volumes, you have to create a thin-provisioned mirror of the volume and delete the source copy after synchronization finishes (you have to do this on the FlashSystem V9000 itself).
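On the V9000 CLI this mirroring step can be sketched with addvdiskcopy; the pool name and volume ID below are examples, and the -autodelete flag removes the original copy once the new thin copy is in sync:

```shell
# Add a thin (space-efficient) copy of volume 12 into pool mdiskgrp0 and
# automatically delete the primary copy after synchronization completes
svctask addvdiskcopy -mdiskgrp mdiskgrp0 -rsize 2% -autoexpand -autodelete 12
```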
That’s all for now, I hope it was helpful and don’t forget to share 😉
Update: added the ATS heartbeat note to the VAAI section. Thanks to Pavol for pointing that out.