Monday, 18 October 2021

Optimizing BDLS For HANA Large Scale Instances (HLIs)

Introduction:

The purpose of this blog post is to give the audience an overview of how I handled the BDLS process during refreshes of HANA Large Scale Instances. I will try to give as much information as possible (most of which is already publicly available, though perhaps not all in one place) and show how I used it to build a solution.

The Motivation: 

All SAP Basis consultants have had to deal with the BDLS process at some point in their careers, but it becomes a real challenge and a pain when the database is several TB in size. The process then needs many hours or even days to run.

We faced similar challenges with HANA Large Scale Instance (HLI) refreshes: BDLS consumed most of the refresh window and left little or no buffer for unforeseen issues. Our existing process had some level of optimization, but the runtime was still huge, so a better optimized solution was needed.

The Challenge:

Current Process:

We used a manual BDLS process for the HLI landscape instead of the standard BDLS transaction (which is practical only for smaller systems, typically less than 1 TB).

This manual run involved running BDLS at two levels:

1. SAP level: BDLS is run for all tables in the conversion except for 20 pre-identified large tables

2. SQL level: parallel execution of "UPDATE TOP" statements on splits of those 20 large tables (a hedged sketch follows this list). This step took by far the most time.
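Purely as an illustration of what the old approach looked like (not the exact generated statements), here is a minimal sketch of one such UPDATE for a hypothetical split of VBAP, with placeholder LSN values and a placeholder key range; the exact TOP syntax may vary by HANA revision:

-- Hedged sketch of the old UPDATE-based approach for one split of one table.
-- SRCCLNT500 / TGTCLNT500 and the VBELN range are placeholders.
UPDATE TOP 1000000 VBAP
   SET LOGSYS = 'TGTCLNT500'
 WHERE LOGSYS = 'SRCCLNT500'
   AND VBELN BETWEEN '0000000001' AND '0999999999';
-- Repeated until no rows remain in the split, then the next split is processed,
-- with similar statements for other BDLS-relevant fields such as AWSYS.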

Limitations:

Some of the limitations of this process were:

◉ Generating the SQL statements was a huge and complex effort; multiple manual steps were needed to produce the required statements

◉ The runtime was huge. Even with all SQL statements running in parallel, the smallest of the HLIs needed a minimum of 25 hours

◉ The generated update statements for a given table cannot run in parallel, i.e. the second update statement for a table cannot start until the first one completes

◉ The process handled only a single LSN conversion; converting multiple LSNs would multiply the runtime by the number of conversions

Overall, the runtime could go up to ~96 hours, i.e. around 4 days.

The Concepts:

I went through various blog posts and SAP Notes to see what could be done, and one blog post written in 2012 was exactly what I was looking for. You can read that blog post here.

That blog post, although related to Oracle, gave great insight into how to tackle this problem. It suggested using the Create Table As Select (CTAS) method instead of UPDATE queries on the existing tables (which is what our SQL statements and the standard BDLS run do).

But Why CTAS?

◉ CTAS is the approach used in other DBs like Oracle to reduce the overall BDLS runtime, using "decode" or "case" statements for the conversion

◉ SELECT queries run faster than UPDATE queries.

◉ It has the potential to reduce the conversion runtime of a single table by more than 90% and the overall runtime of the BDLS step by 60-70% (taking the additional activities into account)

The author of that blog post had used "decode" or "case" statements to build the process. There was no direct equivalent of "decode" to handle this in HANA, so I went with the MAP function for HANA DB instead.

MVP of the solution: HANA’s MAP Function

The MAP function replaces one value with another during a query. If you run a SELECT or INSERT with MAP in the query, it works like "if you find LOST, replace it with FOUND": the result of that query will contain FOUND in every spot where the mapped field previously contained LOST.

Syntax: MAP(<FIELDNAME>, '<ORIGINAL>', '<REPLACEMENT>', <FIELDNAME>) "<FIELDNAME>"

◉ The first <FIELDNAME> is used to tell the function which field to work on.

◉ The <ORIGINAL> and the <REPLACEMENT> are self-explanatory.

◉ The second <FIELDNAME> is the default: wherever the ORIGINAL value is not found, the field value is left as is, i.e. not changed. We don't want all LSNs to be converted to the same value.

◉ The third and last <FIELDNAME> is the column alias; it keeps the field name in the output instead of replacing it with a generated expression name.

Example:

A simple select query:

select * from T000

Result:

MANDT,MTEXT,ORT01,MWAER,ADRNR,CCCATEGORY,CCCORACTIV,CCNOCLIIND,CCCOPYLOCK,CCNOCASCAD,CCSOFTLOCK,CCORIGCONT,CCIMAILDIS,CCTEMPLOCK,CHANGEUSER,CHANGEDATE,LOGSYS

"500","LSN DESCRIPTION","Waldorf","EUR","","C","1","","","","","","","","DDIC","20211107","SRCCLNT500"

Query with MAP:

select MANDT,MTEXT,ORT01,MWAER,ADRNR,CCCATEGORY,CCCORACTIV,CCNOCLIIND,CCCOPYLOCK,CCNOCASCAD,CCSOFTLOCK,CCORIGCONT,CCIMAILDIS,CCTEMPLOCK,CHANGEUSER,CHANGEDATE,MAP(LOGSYS, 'SRCCLNT500', 'ITWORKED', LOGSYS) "LOGSYS" from T000

Result:

MANDT,MTEXT,ORT01,MWAER,ADRNR,CCCATEGORY,CCCORACTIV,CCNOCLIIND,CCCOPYLOCK,CCNOCASCAD,CCSOFTLOCK,CCORIGCONT,CCIMAILDIS,CCTEMPLOCK,CHANGEUSER,CHANGEDATE,LOGSYS

"500","LSN DESCRIPTION","Waldorf","EUR","","C","1","","","","","","","","DDIC","20211107","ITWORKED"

We can use the MAP function in a CTAS query as shown in the following example:

CREATE TABLE T000_COPY AS (SELECT MANDT,MTEXT,ORT01,MWAER,ADRNR,CCCATEGORY,CCCORACTIV,CCNOCLIIND,CCCOPYLOCK,CCNOCASCAD,CCSOFTLOCK,CCORIGCONT,CCIMAILDIS,CCTEMPLOCK,CHANGEUSER,CHANGEDATE,MAP(LOGSYS, 'SRCCLNT500', 'ITWORKED', LOGSYS) "LOGSYS" from T000)

So a new table is created based on the existing table, but with the LSN value changed to ITWORKED. The only thing left is to rename the _COPY table back to the original table name.

I chose the above examples to showcase two things: first, how the MAP function works, and second, how MAP is used in a conversion. We need the full list of fields in the SELECT if we want to create a complete copy of the existing table; if we shorten the query to just the field in question, like LOGSYS, the output would contain only that column and a full copy of the table could not be created.

So some of the most important questions now are:

◉ How to get the tables in real time for each system individually?

◉ How to get the fields in each table to build these queries?

◉ What about tables that are partitioned?

◉ What about indexes on tables?

Brief answers to these questions are given below:

Q: How to get the tables in real time for each system individually?

Ans: We can use the same logic that SAP uses for its BDLS run, i.e. querying table TBDBDLSALL for BDLS-relevant tables. So I created an SQL query with multiple JOINs to find the largest tables in the system and then filter them down to the ones relevant for BDLS conversion (i.e. the ones that also appear in TBDBDLSALL).

I then ran a sorting logic on the shortlisted tables to keep only those that actually contain records with the source LSN in BDLS-relevant fields like LOGSYS, AWSYS, LOGSYS_EXT etc., which can be determined from tables like TBDBDLSUKEY & TBDBDLSUARG. A hedged sketch of such a selection query follows.
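Purely as an illustration (the actual script used more elaborate JOINs and per-field record counts), here is a minimal sketch of how the largest BDLS-relevant tables could be shortlisted from the HANA catalog, assuming the monitoring view M_CS_TABLES, a placeholder schema name, a placeholder record-count limit, and that the table-name field of TBDBDLSALL is TABNAME:

-- Hedged shortlist query: largest column-store tables that are also BDLS-relevant.
SELECT t.TABLE_NAME,
       SUM(t.RECORD_COUNT) AS TOTAL_RECORDS
  FROM M_CS_TABLES AS t
  JOIN TBDBDLSALL  AS b
    ON b.TABNAME = t.TABLE_NAME          -- keep only BDLS-relevant tables
 WHERE t.SCHEMA_NAME = 'SAPABAP1'        -- placeholder schema name
 GROUP BY t.TABLE_NAME
HAVING SUM(t.RECORD_COUNT) > 10000000    -- placeholder for the "Limit" input described later
 ORDER BY TOTAL_RECORDS DESC;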

Q: How to get the fields in each table to build these queries?

Ans: I used R3ldctl to generate STR files for each table that comes out of the first answer's logic, and then used the field lists from those STR files to generate my queries, with the MAP function inserted for the BDLS-relevant fields. (For a pure SQL alternative, see the hedged sketch below.)
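Note that this is not what my script did (it parsed the R3ldctl STR files); purely as an illustrative alternative, the field list could also be read from the HANA catalog view TABLE_COLUMNS, for example:

-- Alternative, hedged sketch: read the column list of a table from the catalog
-- instead of from R3ldctl STR files. 'SAPABAP1' is a placeholder schema name.
SELECT COLUMN_NAME, DATA_TYPE_NAME, POSITION
  FROM TABLE_COLUMNS
 WHERE SCHEMA_NAME = 'SAPABAP1'
   AND TABLE_NAME  = 'VBAP'
 ORDER BY POSITION;
-- The returned column names can then be concatenated into the CTAS SELECT list,
-- with the BDLS-relevant fields (LOGSYS, AWSYS, ...) wrapped in MAP(...) expressions.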

Q: What about tables that are partitioned?

Ans: From the first answer's output we identify which tables are partitioned and generate separate CREATE & INSERT queries for them. Partitioned tables are converted in two steps instead of the single CREATE TABLE used for non-partitioned ones, so that we benefit from partition-level parallelism: we CREATE an empty table with the same partitioning as the source and then run the INSERTs on each partition in parallel (a hedged sketch follows; timings for VBAP are shown in the conclusion).
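As an illustration only (not the script's verbatim output), here is a minimal sketch of the two-step conversion for a partitioned table, assuming a hypothetical VBAP with two HASH partitions on MANDT, a shortened column list for readability, and placeholder LSN values; the PARTITION() restriction in the FROM clause (availability depends on the HANA revision) is what allows each INSERT to be started in its own session:

-- Step 1: create an empty copy of the table (no rows) and partition it like the source.
-- Column list shortened for readability; the generated statement lists every column.
CREATE COLUMN TABLE VBAP_COPY AS (
    SELECT MANDT, VBELN, POSNR,
           MAP(LOGSYS, 'SRCCLNT500', 'TGTCLNT500', LOGSYS) "LOGSYS"
      FROM VBAP
) WITH NO DATA;

ALTER TABLE VBAP_COPY PARTITION BY HASH (MANDT) PARTITIONS 2;

-- Step 2: one INSERT per source partition, each run in its own session so the
-- partitions are converted in parallel.
INSERT INTO VBAP_COPY
    SELECT MANDT, VBELN, POSNR,
           MAP(LOGSYS, 'SRCCLNT500', 'TGTCLNT500', LOGSYS) "LOGSYS"
      FROM VBAP PARTITION (1);

INSERT INTO VBAP_COPY
    SELECT MANDT, VBELN, POSNR,
           MAP(LOGSYS, 'SRCCLNT500', 'TGTCLNT500', LOGSYS) "LOGSYS"
      FROM VBAP PARTITION (2);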

Q: What about indexes on tables?

Ans: A table can have multiple indexes, and these need to exist on the converted table after the CTAS, the rename of the original table to a backup name, and the rename of the converted table back to the original name are completed. So the indexes are dropped from the original table and later recreated, with their original names, on the newly converted table. Dropping the indexes from the original table also reduces its size on disk and shortens the rename time.

In the script I used the stored procedure get_object_definition to extract the DDL statements of all indexes on a table and then used them to recreate the indexes later.
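As a minimal sketch of this step (the index name 'VBAP~Z01' and the schema 'SAPABAP1' are placeholders), the indexes of a table can be listed from the INDEXES view and the DDL of each one extracted and saved for the rebuild phase:

-- Hedged sketch: list the indexes of a table, then extract the DDL of one of them.
SELECT INDEX_NAME
  FROM INDEXES
 WHERE SCHEMA_NAME = 'SAPABAP1'   -- placeholder schema name
   AND TABLE_NAME  = 'VBAP';

-- GET_OBJECT_DEFINITION returns the CREATE statement for the given object,
-- which can be stored and replayed after the rename to rebuild the index.
CALL GET_OBJECT_DEFINITION('SAPABAP1', 'VBAP~Z01');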

The Solution:

The solution uses HANA's built-in CTAS capability along with the R3ldctl tool to complete a convert, drop, rename & rebuild process:

1. Convert the selected tables using CTAS in HANA, storing the result as a temporary table

2. Drop the indexes on the original table

3. Rename the original table to a backup name and rename the converted temporary table to the original name

4. Rebuild the dropped indexes on the newly converted, renamed table

The original (now backup) tables can be dropped after validation to reclaim the additional disk usage. A hedged end-to-end sketch for a small non-partitioned table is shown below.
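Putting the four steps together for the non-partitioned example table T000 from above (a sketch only: the index name "T000~Z01", its column list and the target LSN 'TGTCLNT500' are placeholders, and the real statements are generated by the script):

-- 1. Convert: create the converted copy via CTAS.
CREATE TABLE T000_COPY AS (
    SELECT MANDT, MTEXT, ORT01, MWAER, ADRNR, CCCATEGORY, CCCORACTIV, CCNOCLIIND,
           CCCOPYLOCK, CCNOCASCAD, CCSOFTLOCK, CCORIGCONT, CCIMAILDIS, CCTEMPLOCK,
           CHANGEUSER, CHANGEDATE,
           MAP(LOGSYS, 'SRCCLNT500', 'TGTCLNT500', LOGSYS) "LOGSYS"
      FROM T000
);

-- 2. Drop the indexes of the original table ("T000~Z01" is a placeholder name).
DROP INDEX "T000~Z01";

-- 3. Rename the original to a backup name, then rename the copy to the original name.
RENAME TABLE T000 TO T000_BDLS_BAK;
RENAME TABLE T000_COPY TO T000;

-- 4. Rebuild the dropped indexes on the converted table using the saved DDL.
CREATE INDEX "T000~Z01" ON T000 (MANDT, LOGSYS);

-- After validation, the backup table can be dropped to reclaim disk space.
-- DROP TABLE T000_BDLS_BAK;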

For non-partitioned tables the process can be depicted as below:

[Diagram: conversion process for non-partitioned tables]

For partitioned tables the process can be depicted as below. As mentioned before, it is important to note that the table is created empty and the data is then inserted into all partitions in parallel.

[Diagram: conversion process for partitioned tables]

After scripting all of this logic, the overall flow of the BDLS process was as below:

[Diagram: overall BDLS process flow]

The script took three inputs, namely:

1. Limit = the number of records a table must have to be included in the selection
2. Source LSN = used to find the relevant tables containing this LSN and their record counts, based on the Limit input
3. Target LSN = once we are happy with the selection, the target LSN is supplied to generate the SQL scripts

Once the script finishes (around 20 minutes for a system of 25+ TB), we have:

◉ List of tables
◉ Total records in each table
◉ Records relevant to the source LSN in each table
◉ All indexes for the relevant tables
◉ The SQL commands, generated separately for partitioned and non-partitioned tables

Now we can put all the tables selected for special handling into the exception table BDLSEXZ and run the SQL queries via HANA Studio or hdbsql, while in parallel running SAP-level BDLS in multiple runs from A* to Z*, as shown below.

[Screenshot: SAP-level BDLS runs split into table-name ranges A* to Z*]

The Conclusion:


This process gave us the following advantages:

1. CTAS, thanks to the inherent advantage of SELECT queries over UPDATE queries, made the overall conversion very fast

2. Partitioning is used to achieve in-table parallelism during the convert phase for partitioned tables

3. SAP-level execution time is reduced significantly because 1) all major tables are converted by the script and 2) the remaining tables are executed in alphabetical ranges

4. The two-part output of the script lets us run the selection part multiple times until we are happy with the table list, before generating the relevant SQL commands

5. The script can be modified to convert multiple LSNs in a single run, with no additional time required (see the hedged MAP sketch after this list)

6. There is considerable scope for further automation, with the basic target of reducing human intervention to almost zero
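To illustrate point 5: the MAP function accepts multiple search/replacement pairs, so two (or more) LSNs could be converted in the same CTAS pass. A minimal sketch, with hypothetical source/target pairs:

-- Hedged sketch: converting two LSNs in one pass with a single MAP expression.
-- SRCCLNT500/SRCCLNT600 and TGTCLNT500/TGTCLNT600 are placeholder names.
SELECT MAP(LOGSYS,
           'SRCCLNT500', 'TGTCLNT500',
           'SRCCLNT600', 'TGTCLNT600',
           LOGSYS) "LOGSYS"
  FROM T000;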

We normally ran around 7-8 tables in parallel per person, with 2 people working together, so around 16 tables were processed in parallel. A snapshot of the timing for VBAP, the longest-running table, is shown below.

It had two partitions of 500 million+ records each, with the INSERTs running in parallel per partition. The LSN conversion took ~1 hr 13 mins and the remaining statements (rename, index recreation etc.) took ~47 mins, giving a total runtime of ~2 hours for VBAP, in which more than a billion records were converted. Note that during those 2 hours many other large tables like EDIDC, VBRK etc. were also being converted in parallel.

[Screenshot: VBAP Run]

Total runtime: note that these results were consistent across multiple runs and include the total time for the complete BDLS, i.e. SAP level plus SQL level, including the script runtime for preparation.

[Screenshot: total runtime summary]

So, all in all, the solution was designed and implemented by combining the MAP function, SQL JOINs, R3ldctl, the logic for finding BDLS-relevant tables, and finally Python to script all of it.

I hope this blog post provided some valuable insight into how BDLS on HANA can be tackled with existing tools to get astonishing results, and also showcased how the MAP function works and how it can be leveraged to change LSNs without the need for an UPDATE query. The process and features built were specific to our use case, where I included some additional functionality to get the most flexibility out of the solution with limited manual intervention.

Readers can certainly use the concepts explained here to enhance their own BDLS conversions on HANA, even on systems that are not large scale instances, for example by taking a fixed set of tables, handling them at the SQL level with pre-built MAP queries, and in parallel running BDLS at the SAP level with those tables excluded.
