Monday, 3 May 2021

SAP HANA on Google Cloud + NetApp CVS: non-disruptive volume size and performance scaling to fit workload needs

SAP HANA on Google Cloud + NetApp Cloud Volumes Service: resizing volume capacity and performance to fit your workload needs, without disruption.

If your HANA instance runs on Google Cloud using NetApp CVS, you can take advantage of its non-disruptive, flexible volume scaling to match performance needs. It gives you the flexibility to increase or decrease the volume size during uptime, balancing performance against cost.
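
If you prefer to script the resize rather than click through the console, a minimal Python sketch along the following lines is possible. The API base URL, path layout, quotaInBytes field and authentication header are assumptions based on the public Cloud Volumes Service API, and the project, region and volume ID are placeholders; verify everything against the CVS documentation for your setup before use.

# Minimal sketch: resize a NetApp CVS volume through its REST API while HANA stays online.
# Endpoint, path and field names are assumptions; check the CVS API docs for your region.
import requests

API = "https://cloudvolumesgcp-api.netapp.com/v2"   # assumed CVS API endpoint
PROJECT = "123456789012"                            # GCP project number (placeholder)
REGION = "europe-west4"                             # region hosting the volume (placeholder)
VOLUME_ID = "hana-data-vol-id"                      # CVS volume ID (placeholder)
HEADERS = {"Authorization": "Bearer <token>",       # placeholder auth; follow the CVS docs
           "Content-Type": "application/json"}

def resize_volume(new_size_tib: float) -> None:
    """Grow or shrink the volume quota; CVS applies the change without unmounting."""
    url = f"{API}/projects/{PROJECT}/locations/{REGION}/Volumes/{VOLUME_ID}"
    body = {"quotaInBytes": int(new_size_tib * 1024**4)}   # assumed field name
    resp = requests.put(url, json=body, headers=HEADERS, timeout=60)
    resp.raise_for_status()

# Scale the data volume from 3 TiB up to 10 TiB before a heavy workload:
resize_volume(10)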

For example, you can increase the volume size to boost disk throughput and shorten the duration of HANA startup, data loading, system migration, S/4HANA conversion, import/export, and backup/restore. It can also eliminate system standstills or performance issues during critical workloads (month-end processing, high volumes of change activity, and so on) that can be caused by long savepoint durations resulting from disk I/O bottlenecks. Once the ad-hoc workload is completed, the volume can be scaled back down during uptime to a size that fits your HANA DB and still meets the HANA disk KPIs for normal operation, saving unnecessary cost.
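
To judge whether disk I/O really is the bottleneck before scaling up, you can look at recent savepoint statistics. Below is a minimal sketch using the hdbcli Python client and the M_SAVEPOINTS monitoring view; the hostname, port, user and the 1-second threshold are placeholders, and the column names and units should be checked against your HANA revision.

# Minimal sketch: list recent savepoints whose blocking (critical) phase ran long,
# a typical symptom of a disk I/O bottleneck that a larger/faster volume can relieve.
from hdbcli import dbapi

conn = dbapi.connect(address="hanahost", port=30015, user="MONITOR", password="<pwd>")
cur = conn.cursor()
cur.execute("""
    SELECT TOP 20 start_time, duration, critical_phase_duration
    FROM m_savepoints
    WHERE critical_phase_duration > 1000000   -- assumed microseconds, i.e. > 1 s blocking
    ORDER BY start_time DESC
""")
for start_time, duration, critical in cur.fetchall():
    print(f"{start_time}  total {duration/1e6:.1f}s  blocking {critical/1e6:.2f}s")
conn.close()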

Below, you'll see how disk throughput differs between volume sizes.

Testing environment:
HANA DB size: ~2 TB
Row store size: ~120 GB (intentionally enlarged for this test)
The server is rebooted before initiating HANA startup to ensure the row store is loaded from persistence instead of shared memory.

HANA Startup:

Although row store startup is I/O intensive, its performance is also significantly impacted by the amount of log replay, undo processing, garbage collection, consistency checks, and so on.

With a 3 TB volume at the Extreme performance level, you can see the disk throughput averaging around 390 MB/s, and it takes close to 2 hours for HANA to return to full operation.

Next, adjust the volume from 3 TB to 10 TB during uptime. Shut down HANA, release the row store shared memory, and issue "HDB start". You will notice the disk throughput dynamically increases up to 1 GB/s, and the time to reach full operation drops from 2 hours to around 50 minutes.
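
If you want to watch the throughput yourself while "HDB start" is running, one option is to sample the NFS client counters for the CVS-backed data mount. The sketch below assumes the usual /proc/self/mountstats layout of the Linux NFS client (nfsiostat from nfs-utils reports the same counters); the mount point and the field position in the bytes: line are assumptions to verify on your system.

# Minimal sketch: print NFS read throughput for the HANA data mount every 10 seconds.
import time

MOUNT = "/hana/data"   # placeholder: the CVS-backed HANA data mount point

def read_bytes_counter() -> int:
    """Return the cumulative NFS read-byte counter for MOUNT (assumed 'bytes:' layout)."""
    counter, in_mount = 0, False
    with open("/proc/self/mountstats") as f:
        for line in f:
            if line.startswith("device "):
                in_mount = f" mounted on {MOUNT} " in line
            elif in_mount and line.strip().startswith("bytes:"):
                counter = int(line.split()[1])   # assumed: first value = normal read bytes
    return counter

prev = read_bytes_counter()
while True:                                      # Ctrl-C to stop
    time.sleep(10)
    cur = read_bytes_counter()
    print(f"read throughput: {(cur - prev) / 10 / 1024**2:.0f} MB/s")
    prev = cur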

Now that we have achieved what we wanted (shorter startup and column store table reload times), we can easily reduce the volume size back to its initial value without bringing HANA down, even with SGEN currently running.

While the volume is scaled back to 3 TB, there is no disruption or system standstill for the ongoing SGEN run.

NetApp CVS on Google Cloud is SAP HANA certified, and it provides the flexibility to scale volume size dynamically to fit performance needs in a non-disruptive manner. If your systems run on this solution, try it out in a sandbox environment and test the improvement across different workloads before using it on production systems.

Source: sap.com
