SAP HANA on Google Cloud + NetApp Cloud Volumes Service: Resizing volume size and performance non-disruptively to fit your workload needs.
If your HANA instance runs on Google Cloud using NetApp CVS, you can take advantage of its non-disruptive, flexible volume scaling to match your performance needs. Because CVS disk throughput scales with the allocated volume size, you can increase or decrease the volume size during uptime to balance performance against cost.
For example, you can easily increase the volume size to boost disk throughput and shorten the duration of HANA startup, data loading, system migration, S/4HANA conversion, import/export, or backup/restore. It also helps eliminate system standstills and performance issues during critical workloads (month-end processing, periods of high change activity, etc.) that can be caused by long savepoint durations resulting from disk I/O bottlenecks. Once the ad-hoc workload is completed, the volume can be scaled back online to a size that fits your HANA database size and still meets the HANA disk KPIs during normal operation, avoiding unnecessary cost.
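The resize can be done in the CVS console or scripted against the NetApp Cloud Volumes Service REST API. The snippet below is a minimal sketch only: the endpoint layout, the quotaInBytes field, and token handling via an environment variable are assumptions based on the CVS-for-GCP API, and the project number, region, volume ID, and sizes are placeholders to replace with your own values.

```python
"""Minimal sketch: resize a NetApp CVS volume online via its REST API.

Assumptions (verify against your CVS documentation):
  - The CVS-for-GCP API base URL and Volumes endpoint layout.
  - The volume size is controlled by the 'quotaInBytes' field.
  - A valid bearer token is supplied in the CVS_TOKEN environment variable.
PROJECT_NUMBER, REGION and VOLUME_ID below are placeholders.
"""
import os
import requests

API_BASE = "https://cloudvolumesgcp-api.netapp.com/v2"  # assumed base URL
PROJECT_NUMBER = "123456789012"                          # placeholder
REGION = "europe-west4"                                  # placeholder
VOLUME_ID = "your-volume-id"                             # placeholder


def resize_volume(new_size_gib: int) -> None:
    """Send an update that only changes the volume quota; CVS applies it online."""
    url = (f"{API_BASE}/projects/{PROJECT_NUMBER}/locations/{REGION}"
           f"/Volumes/{VOLUME_ID}")
    headers = {
        "Authorization": f"Bearer {os.environ['CVS_TOKEN']}",
        "Content-Type": "application/json",
    }
    payload = {"quotaInBytes": new_size_gib * 1024 ** 3}
    resp = requests.put(url, json=payload, headers=headers, timeout=60)
    resp.raise_for_status()
    print(f"Resize to {new_size_gib} GiB accepted (HTTP {resp.status_code})")


if __name__ == "__main__":
    # Example flow: scale up before a heavy ad-hoc workload, scale back afterwards.
    resize_volume(4096)   # boost throughput for the import / migration / backup
    # ... run the ad-hoc workload ...
    resize_volume(2560)   # return to a size that meets normal-operation KPIs
```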
Below you can see how disk throughput varies between volume sizes.

Testing environment:
- HANA DB size: ~2 TB
- Row store size: ~120 GB (deliberately large)
- The server was rebooted before each HANA startup to ensure the row store is loaded from persistence instead of shared memory.
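One way to quantify the difference yourself is to read HANA's I/O statistics after the cold startup, for example from the M_VOLUME_IO_TOTAL_STATISTICS monitoring view via the hdbcli Python driver. The sketch below is an illustration only: the connection parameters are placeholders, and it assumes TOTAL_READ_SIZE is reported in bytes and TOTAL_READ_TIME in microseconds, which you should verify against the documentation of your HANA revision.

```python
"""Minimal sketch: derive the average read throughput of the HANA DATA volume
from M_VOLUME_IO_TOTAL_STATISTICS, e.g. right after a cold HANA startup.

Assumptions: hdbcli is installed; TOTAL_READ_SIZE is in bytes and
TOTAL_READ_TIME in microseconds; host/port/user/password are placeholders.
"""
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana-host",     # placeholder
    port=30015,              # placeholder (indexserver SQL port)
    user="MONITORING_USER",  # placeholder
    password="********",
)

SQL = """
SELECT host, type, total_read_size, total_read_time
  FROM m_volume_io_total_statistics
 WHERE type = 'DATA'
"""

cur = conn.cursor()
cur.execute(SQL)
for host, vol_type, read_bytes, read_time_us in cur.fetchall():
    if read_time_us:
        # MB read divided by seconds spent reading = average read throughput
        mb_per_s = (read_bytes / 1024 ** 2) / (read_time_us / 1_000_000)
        print(f"{host} {vol_type}: ~{mb_per_s:.0f} MB/s average read throughput")
cur.close()
conn.close()
```

Running this once with the smaller volume size and once after scaling up makes the throughput difference between the two configurations directly comparable.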