Maybe some of you know that it is a myth that SAP HANA needs no indexes. Every primary key in an ABAP system implicitly creates a single-column index on each of its columns. If, for example, a primary key is created on the columns MANDT, BELNR and POSNR, implicit single-column indexes are created on column MANDT, on column BELNR and on column POSNR.
Furthermore, it may be wise to additionally create secondary indexes, depending on your data and SQL statements. So, nothing new if you compare it to other databases like Oracle or Db2.
Initial questions:
◉ Do indexes cost me additional main memory or disk space?
◉ How to determine the right index?
◉ Are there any tools like index advisors?
◉ How to identify the statement hash in a root cause performance analysis?
1. Starting with indexes

SAP HANA is able to process data efficiently, so that good performance is often possible even without the use of indexes. In case of frequent, selective accesses to large tables it is nevertheless useful to create additional secondary indexes. As a rule of thumb, column store indexes should be created on single columns whenever possible, because single-column indexes require much less memory than multi-column indexes.
At the beginning we have to know that there are two different kinds of indexes:

◉ Single-column indexes on individual columns
◉ Indexes on multiple columns, aka concatenated indexes
There are different kinds of these concatenated indexes, which can be viewed in detail here.
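To make the two kinds concrete, here is a minimal sketch — the schema, table and index names are hypothetical; the syntax follows the SAP HANA CREATE INDEX statement:

```sql
-- Single-column index: a separate inverted index structure on one column
CREATE INDEX "VBAP~MAT" ON "SAPSR3"."VBAP" ("MATNR");

-- Concatenated (multi-column) index: internally backed by a concat attribute
CREATE INDEX "VBAP~MCI" ON "SAPSR3"."VBAP" ("MANDT", "MATNR");
```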
Every existing index slows down INSERT operations. Check whether you can reduce the number of indexes during mass INSERTs and data loads. SAP BW provides possibilities to automatically drop and recreate indexes during data loads. The primary index normally must not be dropped. This means you have to evaluate whether an index is useful when a lot of the statements consist of INSERT, UPDATE and MODIFY operations. More about this in the analysis part.
All initial questions (see above) will be answered in this blog series besides the first one, which can be answered right away:
A: Concat attributes and inverted index structures (main part) are persisted to disk and therefore need additional disk space. In addition, each index needs its own memory and has to be adjusted on data changes.
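To get a feeling for that footprint, the internal index columns (e.g. concat attributes like $trexexternalkey$) can be listed from the monitoring view M_CS_ALL_COLUMNS — a sketch, with ordering chosen arbitrarily:

```sql
-- Memory consumed by internal (index-related) column store attributes
SELECT SCHEMA_NAME, TABLE_NAME, COLUMN_NAME,
       ROUND(MEMORY_SIZE_IN_TOTAL / 1024 / 1024, 2) MEM_SIZE_MB
FROM M_CS_ALL_COLUMNS
WHERE INTERNAL_ATTRIBUTE_TYPE IS NOT NULL   -- internal columns only
ORDER BY MEMORY_SIZE_IN_TOTAL DESC;
```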
2. Limitations

2.1 High runtime of MIN and MAX searches
Indexes in SAP HANA can be used to support MIN / MAX searches if all of the following conditions are met:
◉ SAP HANA >= 2.0 SPS 04
◉ Column store table
◉ Number column or character column without mixture of upper / lower case letters
In all other scenarios an index can’t be used to identify the maximum or minimum value of a column directly. Instead, the whole column / table has to be scanned. Therefore, you should avoid frequent MAX or MIN searches on large data volumes. Possible alternatives are:
◉ Definition of an ABAP number range object
◉ Sequences (SAP Note 2600095)
◉ Identities (GENERATED [BY DEFAULT] AS IDENTITY, CURRENT_IDENTITY_VALUE())
◉ Additional selective conditions
◉ Maintaining the MIN and MAX values independently in a separate table
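As an example of the identity alternative, here is a sketch with a hypothetical table ZDOC — the current identity value is retrieved without any MAX scan:

```sql
CREATE COLUMN TABLE "ZDOC" (
  "DOC_ID"  BIGINT GENERATED BY DEFAULT AS IDENTITY,
  "PAYLOAD" NVARCHAR(100)
);

INSERT INTO "ZDOC" ("PAYLOAD") VALUES ('first document');

-- Returns the most recently generated identity value of the session:
SELECT CURRENT_IDENTITY_VALUE() FROM DUMMY;
```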
In most cases the column store provides better performance for MIN / MAX searches than the row store, so you can consider a move to the column store (SAP Note 2222277) in cases where the performance issue is linked to a row store table.

2.2 ORDER BY

SAP HANA indexes don’t support sorting. Therefore, an ORDER BY requires explicit sorting, even if an index on the related column(s) exists. In cases where a high number of records needs to be sorted before the first few records are returned, the runtime can be rather high.
Both limitations result from the insert-only-on-delta design of SAP HANA, where a column store table consists of two data areas => main and delta:

◉ The main store is read-optimized, sorted and compressed
◉ The delta store is write-optimized, neither sorted nor compressed
2.3 SPARSE / PREFIXED compression
If you create an index on a column with SPARSE or PREFIXED compression, you should trigger an optimize compression run afterwards. Normally this happens with one of the next delta merges, but to avoid such a situation, just analyze it and trigger it manually if necessary:
UPDATE "<table_name>" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE')
This can be analyzed with the Data Collector (HANA_SQL_StatementHash_DataCollector – part II of this blog series) or the column store columns script (HANA_Tables_ColumnStore_Columns).
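To check up front whether an affected column actually uses SPARSE or PREFIXED compression, a simple query against the monitoring view M_CS_COLUMNS can be used (a sketch; <table_name> is a placeholder):

```sql
-- Columns of a table that currently use SPARSE or PREFIXED compression
SELECT COLUMN_NAME, COMPRESSION_TYPE, LOADED
FROM M_CS_COLUMNS
WHERE TABLE_NAME = '<table_name>'
  AND COMPRESSION_TYPE IN ('SPARSE', 'PREFIXED');
```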
The recommendations of SAP for the most common tables and SQL hashes can be found in SAP Note 2000002. Some recommendations can also be obtained from the tools section below.
Check the system for statements which are executed frequently and have a high execution time per row.
For an overview you can start with the most scanned columns.
HANA_Tables_ColumnStore_Columns_Statistics_2.00.030+
( SELECT /* Modification section */
'%' HOST,
'%' PORT,
'%' SCHEMA_NAME,
'%' TABLE_NAME,
'%' COLUMN_NAME,
-1 MIN_MEM_SIZE_MB,
10000000 MIN_SCANNED_RECORDS_PER_S,
' ' ONLY_UNUSED_CONCAT_ATTRIBUTES,
'TABLE' OBJECT_LEVEL, /* TABLE, PARTITION */
'NONE' AGGREGATE_BY, /* HOST, PORT, SCHEMA, TABLE, COLUMN, or comma separated combinations, NONE for no aggregation */
'SCANNED_RECORDS' ORDER_BY /* NAME, SCANNED_RECORDS, INDEX_LOOKUPS, SIZE */
Depending on the size and lifetime (amount of statistics data) of your DB, you have to adjust MIN_SCANNED_RECORDS_PER_S.
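If you want a quick look without the full script, a rough equivalent against the plan cache looks like this (thresholds are arbitrary; times in M_SQL_PLAN_CACHE are in microseconds):

```sql
-- Statements with a high average execution time per returned record
SELECT STATEMENT_HASH,
       EXECUTION_COUNT,
       TOTAL_RESULT_RECORD_COUNT,
       TOTAL_EXECUTION_TIME / GREATEST(TOTAL_RESULT_RECORD_COUNT, 1) TIME_PER_RECORD_US
FROM M_SQL_PLAN_CACHE
WHERE EXECUTION_COUNT > 1000
ORDER BY TIME_PER_RECORD_US DESC;
```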
----------------------------------------------------------------------------------------------------
|TABLE_NAME|COLUMN_NAME|  SCANNED_RECORDS|  SCR_PER_S|INDEX_LOOKUPS|IL_PER_S|MEM_SIZE_MB|INDEX_TYPE|
----------------------------------------------------------------------------------------------------
|VBAP      |MATNR      | 1885008657244212| 2089965282|            0|    0.00|     265.15|NONE      |
|MSEG      |MATNR      |  396765902062072|  439906181|     42262934|   46.85|    6873.89|BLOCK     |
|MKPF      |BKTXT      |  292087907375924|  323846568|            0|    0.00|      25.00|NONE      |
|MSEG      |MJAHR      |  276763424959580|  306855858|         7436|    0.00|     662.68|BLOCK     |
|VBEP      |EDATU      |  243911240701207|  270431662|            0|    0.00|      24.75|NONE      |
|KONV      |KNTYP      |  237690523994145|  263534568|            0|    0.00|    2496.31|NONE      |
|BSIS      |XOPVW      |  103255810457226|  114482794|            0|    0.00|     150.09|NONE      |
|MSEG      |ZEILE      |  102177134639927|  113286834|       375568|    0.41|    2704.73|BLOCK     |
|MSEG      |BUDAT_MKPF |   37645623055548|   41738824|            0|    0.00|    2412.83|NONE      |
|VBPA      |KUNNR      |   32958632854528|   36542218|      1869284|    2.07|      14.67|BLOCK     |
----------------------------------------------------------------------------------------------------
For every column with index type “NONE” there is room for improvement. This means we now have the columns which are frequently accessed / scanned, with and without an index.
On this basis you can look into the expensive statements, searching for those tables.
SELECT STATEMENT_HASH, STATEMENT_STRING, EXECUTION_COUNT, TOTAL_EXECUTION_TIME
FROM M_SQL_PLAN_CACHE WHERE STATEMENT_STRING LIKE '%VBAP%' ORDER BY TOTAL_EXECUTION_TIME DESC;
If you observe frequently high CPU usage, you should start an analysis with the TFR (TimeFrameReport):
◉ HANA_Global_CurrentStateReport_2.00.040+ – for just-in-time (current) workloads
◉ HANA_Global_TimeFrameReport_2.00.040+ – for historical workloads
More experienced experts can also skip this step and go directly to a thread analysis via:
◉ HANA_Threads_ThreadSamples_AggregationPerTimeSlice_2.00.040+
◉ HANA_Threads_ThreadSamples_FilterAndAggregation_2.00.040+
◉ HANA_Threads_ThreadSamples_StatisticalRecords_2.00.040+
But for now, we go ahead with the TFR.
Watch out for some special thread methods, which can be quickly analyzed via this extract of the TFR:
WITH
BASIS_INFO AS
( SELECT
GREATEST(ADD_DAYS(CURRENT_TIMESTAMP, -HISTORY_RETENTION_DAYS - 1), CASE TIMEZONE WHEN 'UTC' THEN ADD_SECONDS(BEGIN_TIME, SECONDS_BETWEEN(CURRENT_UTCTIMESTAMP, CURRENT_TIMESTAMP)) ELSE BEGIN_TIME END) BEGIN_TIME,
LEAST(CURRENT_TIMESTAMP, CASE TIMEZONE WHEN 'UTC' THEN ADD_SECONDS(END_TIME, SECONDS_BETWEEN(CURRENT_UTCTIMESTAMP, CURRENT_TIMESTAMP)) ELSE END_TIME END) END_TIME,
GREATEST(ADD_DAYS(CURRENT_TIMESTAMP, -HISTORY_RETENTION_DAYS - 1), BEGIN_TIME) BEGIN_TIME_ORIG,
LEAST(CURRENT_TIMESTAMP, END_TIME) END_TIME_ORIG,
SECONDS_BETWEEN(GREATEST(ADD_DAYS(CURRENT_TIMESTAMP, -HISTORY_RETENTION_DAYS - 1), BEGIN_TIME), LEAST(CURRENT_TIMESTAMP, END_TIME)) SECONDS,
HOST,
PORT,
THREAD_HISTORY_INTERVAL_S,
TOP_N_CS_SIZE,
TOP_N_RS_SIZE,
TOP_N_DISK,
TOP_N_BLOCKED_TRANSACTIONS,
TOP_N_MEMORY,
TOP_N_IDLE_CURSORS,
TOP_N_LONGRUNNERS,
TOP_N_SQL_TIME,
TOP_N_SQL_EXECUTIONS,
TOP_N_SQL_RECORDS,
TOP_N_EXPENSIVE_SQL_TIME,
TOP_N_EXECUTED_SQL_TIME,
TOP_N_THREAD_SQL,
TOP_N_THREAD_TYPES,
TOP_N_THREAD_STATES_AND_LOCKS,
TOP_N_THREAD_METHODS,
TOP_N_THREAD_DB_USERS,
TOP_N_THREAD_APP_USERS,
TOP_N_THREAD_APP_NAMES,
TOP_N_THREAD_APP_SOURCES,
TOP_N_THREAD_HOST_PORTS,
TOP_N_TABLE_OPTIMIZATIONS,
TOP_N_TRACE_ENTRIES,
LINE_LENGTH
FROM
( SELECT
CASE
WHEN BEGIN_TIME = 'C' THEN CURRENT_TIMESTAMP
WHEN BEGIN_TIME LIKE 'C-S%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(BEGIN_TIME, 'C-S'))
WHEN BEGIN_TIME LIKE 'C-M%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(BEGIN_TIME, 'C-M') * 60)
WHEN BEGIN_TIME LIKE 'C-H%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(BEGIN_TIME, 'C-H') * 3600)
WHEN BEGIN_TIME LIKE 'C-D%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(BEGIN_TIME, 'C-D') * 86400)
WHEN BEGIN_TIME LIKE 'C-W%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(BEGIN_TIME, 'C-W') * 86400 * 7)
WHEN BEGIN_TIME LIKE 'E-S%' THEN ADD_SECONDS(TO_TIMESTAMP(END_TIME, 'YYYY/MM/DD HH24:MI:SS'), -SUBSTR_AFTER(BEGIN_TIME, 'E-S'))
WHEN BEGIN_TIME LIKE 'E-M%' THEN ADD_SECONDS(TO_TIMESTAMP(END_TIME, 'YYYY/MM/DD HH24:MI:SS'), -SUBSTR_AFTER(BEGIN_TIME, 'E-M') * 60)
WHEN BEGIN_TIME LIKE 'E-H%' THEN ADD_SECONDS(TO_TIMESTAMP(END_TIME, 'YYYY/MM/DD HH24:MI:SS'), -SUBSTR_AFTER(BEGIN_TIME, 'E-H') * 3600)
WHEN BEGIN_TIME LIKE 'E-D%' THEN ADD_SECONDS(TO_TIMESTAMP(END_TIME, 'YYYY/MM/DD HH24:MI:SS'), -SUBSTR_AFTER(BEGIN_TIME, 'E-D') * 86400)
WHEN BEGIN_TIME LIKE 'E-W%' THEN ADD_SECONDS(TO_TIMESTAMP(END_TIME, 'YYYY/MM/DD HH24:MI:SS'), -SUBSTR_AFTER(BEGIN_TIME, 'E-W') * 86400 * 7)
WHEN BEGIN_TIME = 'MIN' THEN TO_TIMESTAMP('1000/01/01 00:00:00', 'YYYY/MM/DD HH24:MI:SS')
WHEN SUBSTR(BEGIN_TIME, 1, 1) NOT IN ('C', 'E', 'M') THEN TO_TIMESTAMP(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS')
END BEGIN_TIME,
CASE
WHEN END_TIME = 'C' THEN CURRENT_TIMESTAMP
WHEN END_TIME LIKE 'C-S%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(END_TIME, 'C-S'))
WHEN END_TIME LIKE 'C-M%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(END_TIME, 'C-M') * 60)
WHEN END_TIME LIKE 'C-H%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(END_TIME, 'C-H') * 3600)
WHEN END_TIME LIKE 'C-D%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(END_TIME, 'C-D') * 86400)
WHEN END_TIME LIKE 'C-W%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(END_TIME, 'C-W') * 86400 * 7)
WHEN END_TIME LIKE 'B+S%' THEN ADD_SECONDS(TO_TIMESTAMP(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS'), SUBSTR_AFTER(END_TIME, 'B+S'))
WHEN END_TIME LIKE 'B+M%' THEN ADD_SECONDS(TO_TIMESTAMP(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS'), SUBSTR_AFTER(END_TIME, 'B+M') * 60)
WHEN END_TIME LIKE 'B+H%' THEN ADD_SECONDS(TO_TIMESTAMP(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS'), SUBSTR_AFTER(END_TIME, 'B+H') * 3600)
WHEN END_TIME LIKE 'B+D%' THEN ADD_SECONDS(TO_TIMESTAMP(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS'), SUBSTR_AFTER(END_TIME, 'B+D') * 86400)
WHEN END_TIME LIKE 'B+W%' THEN ADD_SECONDS(TO_TIMESTAMP(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS'), SUBSTR_AFTER(END_TIME, 'B+W') * 86400 * 7)
WHEN END_TIME = 'MAX' THEN TO_TIMESTAMP('9999/12/31 00:00:00', 'YYYY/MM/DD HH24:MI:SS')
WHEN SUBSTR(END_TIME, 1, 1) NOT IN ('C', 'B', 'M') THEN TO_TIMESTAMP(END_TIME, 'YYYY/MM/DD HH24:MI:SS')
END END_TIME,
TIMEZONE,
HOST,
PORT,
THREAD_HISTORY_INTERVAL_S,
TOP_N_CS_SIZE,
TOP_N_RS_SIZE,
TOP_N_DISK,
TOP_N_BLOCKED_TRANSACTIONS,
TOP_N_MEMORY,
TOP_N_IDLE_CURSORS,
TOP_N_LONGRUNNERS,
TOP_N_SQL_TIME,
TOP_N_SQL_EXECUTIONS,
TOP_N_SQL_RECORDS,
TOP_N_EXPENSIVE_SQL_TIME,
TOP_N_EXECUTED_SQL_TIME,
TOP_N_THREAD_SQL,
TOP_N_THREAD_TYPES,
TOP_N_THREAD_STATES_AND_LOCKS,
TOP_N_THREAD_METHODS,
TOP_N_THREAD_DB_USERS,
TOP_N_THREAD_APP_USERS,
TOP_N_THREAD_APP_NAMES,
TOP_N_THREAD_APP_SOURCES,
TOP_N_THREAD_HOST_PORTS,
TOP_N_TABLE_OPTIMIZATIONS,
TOP_N_TRACE_ENTRIES,
LINE_LENGTH
FROM
( SELECT /* Modification section */
'C-H19' BEGIN_TIME, /* YYYY/MM/DD HH24:MI:SS timestamp, C, C-S<seconds>, C-M<minutes>, C-H<hours>, C-D<days>, C-W<weeks>, E-S<seconds>, E-M<minutes>, E-H<hours>, E-D<days>, E-W<weeks>, MIN */
'C-H17' END_TIME, /* YYYY/MM/DD HH24:MI:SS timestamp, C, C-S<seconds>, C-M<minutes>, C-H<hours>, C-D<days>, C-W<weeks>, B+S<seconds>, B+M<minutes>, B+H<hours>, B+D<days>, B+W<weeks>, MAX */
'SERVER' TIMEZONE, /* SERVER, UTC */
'%' HOST,
'%' PORT,
60 THREAD_HISTORY_INTERVAL_S,
10 TOP_N_CS_SIZE,
5 TOP_N_RS_SIZE,
10 TOP_N_DISK,
5 TOP_N_BLOCKED_TRANSACTIONS,
20 TOP_N_MEMORY,
5 TOP_N_IDLE_CURSORS,
5 TOP_N_LONGRUNNERS,
20 TOP_N_SQL_TIME,
20 TOP_N_SQL_EXECUTIONS,
10 TOP_N_SQL_RECORDS,
10 TOP_N_EXPENSIVE_SQL_TIME,
10 TOP_N_EXECUTED_SQL_TIME,
20 TOP_N_THREAD_SQL,
10 TOP_N_THREAD_TYPES,
10 TOP_N_THREAD_STATES_AND_LOCKS,
10 TOP_N_THREAD_METHODS,
3 TOP_N_THREAD_DB_USERS,
3 TOP_N_THREAD_APP_USERS,
5 TOP_N_THREAD_APP_NAMES,
5 TOP_N_THREAD_APP_SOURCES,
5 TOP_N_THREAD_HOST_PORTS,
10 TOP_N_TABLE_OPTIMIZATIONS,
30 TOP_N_TRACE_ENTRIES,
200 LINE_LENGTH
FROM
DUMMY
)
),
( SELECT MAX(IFNULL(RETENTION_DAYS_CURRENT, RETENTION_DAYS_DEFAULT)) HISTORY_RETENTION_DAYS FROM _SYS_STATISTICS.STATISTICS_SCHEDULE )
),
SQLHIST AS
( SELECT
STATEMENT_HASH,
SQL_TYPE,
DURATION_S ELAPSED_S,
ACCESSED_OBJECTS,
EXECUTIONS,
RECORDS,
CASE WHEN DR <= TOP_N_SQL_TIME THEN 'X' ELSE ' ' END TOP_TIME,
CASE WHEN ER <= TOP_N_SQL_EXECUTIONS THEN 'X' ELSE ' ' END TOP_EXECUTIONS,
CASE WHEN RR <= TOP_N_SQL_RECORDS THEN 'X' ELSE ' ' END TOP_RECORDS
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY DURATION_S DESC) DR,
ROW_NUMBER () OVER (ORDER BY EXECUTIONS DESC) ER,
ROW_NUMBER () OVER (ORDER BY RECORDS DESC) RR,
STATEMENT_HASH,
CASE
WHEN STATEMENT_STRING_CLEANED LIKE 'ALTER INDEX%' THEN 'AI'
WHEN STATEMENT_STRING_CLEANED LIKE 'ALTER SYSTEM%' THEN 'AS'
WHEN STATEMENT_STRING_CLEANED LIKE 'ALTER TABLE%' THEN 'AT'
WHEN STATEMENT_STRING_CLEANED LIKE 'ALTER%' THEN 'AL'
WHEN STATEMENT_STRING_CLEANED LIKE 'CALL%' THEN 'CA'
WHEN STATEMENT_STRING_CLEANED LIKE 'COMMIT%' THEN 'CO'
WHEN STATEMENT_STRING_CLEANED LIKE 'CREATE INDEX%' THEN 'CI'
WHEN STATEMENT_STRING_CLEANED LIKE 'CREATE TABLE%' THEN 'CT'
WHEN STATEMENT_STRING_CLEANED LIKE 'CREATE%' THEN 'CR'
WHEN STATEMENT_STRING_CLEANED LIKE 'DELETE%' THEN 'DE'
WHEN STATEMENT_STRING_CLEANED LIKE 'DROP INDEX%' THEN 'DI'
WHEN STATEMENT_STRING_CLEANED LIKE 'DROP TABLE%' THEN 'DT'
WHEN STATEMENT_STRING_CLEANED LIKE 'DROP%' THEN 'DR'
WHEN STATEMENT_STRING_CLEANED LIKE 'EXECUTE%' THEN 'EX'
WHEN STATEMENT_STRING_CLEANED LIKE 'INSERT%' THEN 'IN'
WHEN STATEMENT_STRING_CLEANED LIKE 'REPLACE%' THEN 'RE'
WHEN STATEMENT_STRING_CLEANED LIKE 'ROLLBACK%' THEN 'RO'
WHEN STATEMENT_STRING_CLEANED LIKE 'SELECT%FOR UPDATE%' THEN 'SU'
WHEN STATEMENT_STRING_CLEANED LIKE 'SELECT%' THEN 'SE'
WHEN STATEMENT_STRING_CLEANED LIKE 'TRUNCATE%' THEN 'TR'
WHEN STATEMENT_STRING_CLEANED LIKE 'UPDATE%' THEN 'UP'
WHEN STATEMENT_STRING_CLEANED LIKE 'UPSERT%' THEN 'US'
WHEN STATEMENT_STRING_CLEANED LIKE 'WITH%' THEN 'WI'
ELSE 'unknown'
END SQL_TYPE,
ACCESSED_OBJECTS,
DURATION_S,
EXECUTIONS,
RECORDS,
TOP_N_SQL_TIME,
TOP_N_SQL_EXECUTIONS,
TOP_N_SQL_RECORDS
FROM
( SELECT
S.STATEMENT_HASH,
REPLACE(UPPER(LTRIM(MAP(SUBSTR(TO_VARCHAR(STATEMENT_STRING), 1, 2), '/*', SUBSTR(TO_VARCHAR(STATEMENT_STRING), LOCATE(TO_VARCHAR(STATEMENT_STRING), '*/') + 2), TO_VARCHAR(STATEMENT_STRING)), ' ({')), CHAR(10), '') STATEMENT_STRING_CLEANED,
MAX(TO_VARCHAR(S.ACCESSED_OBJECT_NAMES)) ACCESSED_OBJECTS,
SUM(TO_DOUBLE(TO_BIGINT(S.TOTAL_EXECUTION_TIME + S.TOTAL_PREPARATION_TIME))) / 1000000 DURATION_S,
SUM(S.EXECUTION_COUNT) EXECUTIONS,
SUM(S.TOTAL_RESULT_RECORD_COUNT) RECORDS,
BI.TOP_N_SQL_TIME,
BI.TOP_N_SQL_EXECUTIONS,
BI.TOP_N_SQL_RECORDS
FROM
BASIS_INFO BI,
_SYS_STATISTICS.HOST_SQL_PLAN_CACHE S
WHERE
S.SERVER_TIMESTAMP BETWEEN BI.BEGIN_TIME AND BI.END_TIME AND
S.HOST LIKE BI.HOST AND
TO_VARCHAR(S.PORT) LIKE BI.PORT
GROUP BY
S.STATEMENT_HASH,
TO_VARCHAR(S.STATEMENT_STRING),
BI.TOP_N_SQL_TIME,
BI.TOP_N_SQL_EXECUTIONS,
BI.TOP_N_SQL_RECORDS
)
)
WHERE
DR <= TOP_N_SQL_TIME OR
ER <= TOP_N_SQL_EXECUTIONS OR
RR <= TOP_N_SQL_RECORDS
),
THREADS AS
( SELECT
T.HOST,
T.PORT,
CASE
WHEN T.STATEMENT_HASH = CHAR(63) THEN 'no SQL (' || MAP(T.THREAD_METHOD, CHAR(63), T.THREAD_TYPE, T.THREAD_METHOD) || ')'
ELSE T.STATEMENT_HASH
END STATEMENT_HASH,
CASE
WHEN T.THREAD_TYPE LIKE 'JobWrk%' THEN 'JobWorker'
ELSE T.THREAD_TYPE
END THREAD_TYPE,
T.THREAD_METHOD,
T.THREAD_STATE,
SUBSTR(T.LOCK_WAIT_NAME, MAP(INSTR(T.LOCK_WAIT_NAME, ':' || CHAR(32)), 0, 1, INSTR(T.LOCK_WAIT_NAME, ':' || CHAR(32)) + 2)) LOCK_NAME,
T.USER_NAME DB_USER,
T.APPLICATION_USER_NAME APP_USER,
T.APPLICATION_NAME APP_NAME,
T.APPLICATION_SOURCE APP_SOURCE,
COUNT(*) NUM_SAMPLES,
COUNT(*) / BI.SECONDS * BI.THREAD_HISTORY_INTERVAL_S ACT_THR,
COUNT(*) / (SUM(COUNT(*)) OVER () + 0.01) * 100 THR_PCT
FROM
BASIS_INFO BI,
_SYS_STATISTICS.HOST_SERVICE_THREAD_SAMPLES T
WHERE
T.SERVER_TIMESTAMP BETWEEN BI.BEGIN_TIME AND BI.END_TIME AND
T.HOST LIKE BI.HOST AND
TO_VARCHAR(T.PORT) LIKE BI.PORT
GROUP BY
T.HOST,
T.PORT,
CASE
WHEN T.STATEMENT_HASH = CHAR(63) THEN 'no SQL (' || MAP(T.THREAD_METHOD, CHAR(63), T.THREAD_TYPE, T.THREAD_METHOD) || ')'
ELSE T.STATEMENT_HASH
END,
T.THREAD_TYPE,
T.THREAD_STATE,
T.THREAD_METHOD,
T.LOCK_WAIT_NAME,
T.USER_NAME,
T.APPLICATION_USER_NAME,
T.APPLICATION_NAME,
T.APPLICATION_SOURCE,
BI.SECONDS,
BI.THREAD_HISTORY_INTERVAL_S
),
LINES AS
( SELECT TOP 500
ROW_NUMBER () OVER () LN
FROM
OBJECTS
)
SELECT MAP(BI.LINE_LENGTH, -1, LINE, SUBSTR(LINE, 1, LINE_LENGTH)) LINE FROM BASIS_INFO BI, (
SELECT 5 LINE_NO, '******************************' LINE FROM DUMMY
UNION ALL SELECT 10, '* SAP HANA TIME FRAME REPORT *' FROM DUMMY
UNION ALL SELECT 20, '******************************' FROM DUMMY
UNION ALL SELECT 30, '' FROM DUMMY
UNION ALL SELECT 90, RPAD('Generated with:', 20, CHAR(32)) || 'SQL: "HANA_Global_TimeFrameReport" (SAP Note 1969700)' FROM DUMMY
UNION ALL SELECT 100, RPAD('Start time:', 20, CHAR(32)) || TO_VARCHAR(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS') FROM BASIS_INFO
UNION ALL SELECT 110, RPAD('End time:', 20, CHAR(32)) || TO_VARCHAR(END_TIME, 'YYYY/MM/DD HH24:MI:SS') FROM BASIS_INFO
UNION ALL SELECT 120, RPAD('Duration:', 20, CHAR(32)) || SECONDS || CHAR(32) || 's' FROM BASIS_INFO
UNION ALL SELECT 125, RPAD('Database name:', 20, CHAR(32)) || DATABASE_NAME FROM M_DATABASE
UNION ALL SELECT 127, RPAD('Revision level:', 20, CHAR(32)) || VERSION FROM M_DATABASE
UNION ALL SELECT 130, RPAD('Host:', 20, CHAR(32)) || MAP(HOST, '%', 'all', HOST) FROM BASIS_INFO
UNION ALL SELECT 140, RPAD('Port:', 20, CHAR(32)) || MAP(PORT, '%', 'all', PORT) FROM BASIS_INFO
UNION ALL SELECT 1000, '' FROM DUMMY
UNION ALL SELECT 1010, '*********************' FROM DUMMY
UNION ALL SELECT 1030, '* WORKLOAD OVERVIEW *' FROM DUMMY
UNION ALL SELECT 1040, '*********************' FROM DUMMY
UNION ALL SELECT 1050, '' FROM DUMMY
UNION ALL SELECT 1060, RPAD('ACTIVITY', 20, CHAR(32)) || LPAD('TOTAL', 20) || LPAD('RATE_PER_SECOND', 20) FROM DUMMY
UNION ALL SELECT 1070, RPAD('=', 20, '=') || CHAR(32) || LPAD('=', 19, '=') || CHAR(32) || LPAD('=', 19, '=') FROM DUMMY
UNION ALL
SELECT
1100 + L.LN,
CASE L.LN
WHEN 1 THEN RPAD('Executions', 20, CHAR(32)) || LPAD(W.EXECUTIONS, 20) || LPAD(TO_DECIMAL(W.EXECUTIONS / BI.SECONDS, 10, 2), 20)
WHEN 2 THEN RPAD('Compilations', 20, CHAR(32)) || LPAD(W.COMPILATIONS, 20) || LPAD(TO_DECIMAL(W.COMPILATIONS / BI.SECONDS, 10, 2), 20)
WHEN 3 THEN RPAD('Update transactions', 20, CHAR(32)) || LPAD(W.UPDATE_TRANSACTIONS, 20) || LPAD(TO_DECIMAL(W.UPDATE_TRANSACTIONS / BI.SECONDS, 10, 2), 20)
WHEN 4 THEN RPAD('Commits', 20, CHAR(32)) || LPAD(W.COMMITS, 20) || LPAD(TO_DECIMAL(W.COMMITS / BI.SECONDS, 10, 2), 20)
WHEN 5 THEN RPAD('Rollbacks', 20, CHAR(32)) || LPAD(W.ROLLBACKS, 20) || LPAD(TO_DECIMAL(W.ROLLBACKS / BI.SECONDS, 10, 2), 20)
END
FROM
LINES L,
BASIS_INFO BI,
( SELECT
SUM(EXECUTION_COUNT_DELTA) EXECUTIONS,
SUM(COMPILATION_COUNT_DELTA) COMPILATIONS,
SUM(UPDATE_TRANSACTION_COUNT_DELTA) UPDATE_TRANSACTIONS,
SUM(COMMIT_COUNT_DELTA) COMMITS,
SUM(ROLLBACK_COUNT_DELTA) ROLLBACKS
FROM
BASIS_INFO BI,
_SYS_STATISTICS.HOST_WORKLOAD W
WHERE
W.SERVER_TIMESTAMP BETWEEN BI.BEGIN_TIME AND BI.END_TIME AND
W.HOST LIKE BI.HOST AND
TO_VARCHAR(W.PORT) LIKE BI.PORT AND
W.EXECUTION_COUNT_DELTA >= 0
) W
WHERE
L.LN <= 5
UNION ALL SELECT 80000, '' FROM DUMMY
UNION ALL SELECT 80010, '*********************' FROM DUMMY
UNION ALL SELECT 80030, '* THREAD ACTIVITIES *' FROM DUMMY
UNION ALL SELECT 80040, '*********************' FROM DUMMY
UNION ALL SELECT 80050, '' FROM DUMMY
UNION ALL SELECT 80060, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'STATEMENT_HASH' FROM DUMMY
UNION ALL SELECT 80070, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80100 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || STATEMENT_HASH
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.STATEMENT_HASH,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_SQL
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.STATEMENT_HASH,
BI.TOP_N_THREAD_SQL
)
WHERE
LN <= TOP_N_THREAD_SQL
UNION ALL SELECT 80150, '' FROM DUMMY
UNION ALL SELECT 80160, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'THREAD_TYPE' FROM DUMMY
UNION ALL SELECT 80170, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80200 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || THREAD_TYPE
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.THREAD_TYPE,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_TYPES
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.THREAD_TYPE,
BI.TOP_N_THREAD_TYPES
)
WHERE
LN <= TOP_N_THREAD_TYPES
UNION ALL SELECT 80210, '' FROM DUMMY
UNION ALL SELECT 80211, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'THREAD_METHOD' FROM DUMMY
UNION ALL SELECT 80212, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80220 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || THREAD_METHOD
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.THREAD_METHOD,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_METHODS
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.THREAD_METHOD,
BI.TOP_N_THREAD_METHODS
)
WHERE
LN <= TOP_N_THREAD_METHODS
UNION ALL SELECT 80250, '' FROM DUMMY
UNION ALL SELECT 80260, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'THREAD_STATE_AND_LOCK' FROM DUMMY
UNION ALL SELECT 80270, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 80, '=') FROM DUMMY
UNION ALL
SELECT
80300 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || THREAD_STATE
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.THREAD_STATE || CASE WHEN LOCK_NAME IS NOT NULL and LOCK_NAME != '' AND LOCK_NAME != CHAR(63) THEN CHAR(32) || '(' || LOCK_NAME || ')' ELSE '' END THREAD_STATE,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_STATES_AND_LOCKS
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.THREAD_STATE || CASE WHEN LOCK_NAME IS NOT NULL and LOCK_NAME != '' AND LOCK_NAME != CHAR(63) THEN CHAR(32) || '(' || LOCK_NAME || ')' ELSE '' END,
BI.TOP_N_THREAD_STATES_AND_LOCKS
)
WHERE
LN <= TOP_N_THREAD_STATES_AND_LOCKS
UNION ALL SELECT 80350, '' FROM DUMMY
UNION ALL SELECT 80360, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'DB_USER' FROM DUMMY
UNION ALL SELECT 80370, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80400 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || DB_USER
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.DB_USER,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_DB_USERS
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.DB_USER,
BI.TOP_N_THREAD_DB_USERS
)
WHERE
LN <= TOP_N_THREAD_DB_USERS
UNION ALL SELECT 80450, '' FROM DUMMY
UNION ALL SELECT 80460, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'APPLICATION_USER' FROM DUMMY
UNION ALL SELECT 80470, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80500 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || APP_USER
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.APP_USER,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_APP_USERS
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.APP_USER,
BI.TOP_N_THREAD_APP_USERS
)
WHERE
LN <= TOP_N_THREAD_APP_USERS
UNION ALL SELECT 80550, '' FROM DUMMY
UNION ALL SELECT 80560, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'APPLICATION_NAME' FROM DUMMY
UNION ALL SELECT 80570, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80600 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || APP_NAME
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.APP_NAME,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_APP_NAMES
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.APP_NAME,
BI.TOP_N_THREAD_APP_NAMES
)
WHERE
LN <= TOP_N_THREAD_APP_NAMES
UNION ALL SELECT 80650, '' FROM DUMMY
UNION ALL SELECT 80660, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'APPLICATION_SOURCE' FROM DUMMY
UNION ALL SELECT 80670, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 80, '=') FROM DUMMY
UNION ALL
SELECT
80700 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || APP_SOURCE
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.APP_SOURCE,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_APP_SOURCES
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.APP_SOURCE,
BI.TOP_N_THREAD_APP_SOURCES
)
WHERE
LN <= TOP_N_THREAD_APP_SOURCES
UNION ALL SELECT 80750, '' FROM DUMMY
UNION ALL SELECT 80760, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'HOST_AND_PORTS' FROM DUMMY
UNION ALL SELECT 80770, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80800 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || HOST_AND_PORT
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.HOST || ':' || T.PORT HOST_AND_PORT,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_HOST_PORTS
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.HOST || ':' || T.PORT,
BI.TOP_N_THREAD_HOST_PORTS
)
WHERE
LN <= TOP_N_THREAD_HOST_PORTS
)
WITH HINT (IGNORE_PLAN_CACHE)
*********************
* THREAD ACTIVITIES *
*********************
SAMPLES AVG_ACTIVE_THREADS PCT_OF_TOTAL_LOAD STATEMENT_HASH
=========== ================== ================= ==================================================
2562 21.34 21.43 c3d7d15bec5e66ec6ab15e86527bcca5
2302 19.18 19.26 649e56ca14054ff20c5f60383f2f2a58
1007 8.39 8.42 426bdc1658f5f21d5bebf85ec4ab9c2c
981 8.17 8.20 4d0b16d0bba85c88ba9a09fa0e114adf
671 5.59 5.61 96cf8b00b971bb539ffe871d72691906
490 4.08 4.10 3622648ff8db4fa25550af86a03537ef
241 2.00 2.01 72b41f8cf534d44d538ba38362d45e6f
166 1.38 1.38 0fe42d1615a0666aa17978dde2282c0e
146 1.21 1.22 no SQL (LOBGarbageCollectorJob)
118 0.98 0.98 no SQL (flushing)
116 0.96 0.97 no SQL (ExecQidItab)
114 0.94 0.95 no SQL (ExecutePrepared)
107 0.89 0.89 no SQL (NoAction)
101 0.84 0.84 no SQL (prepareDeltaMerge)
95 0.79 0.79 961e8ea25c70c3660bdcd998e9611f2c
92 0.76 0.76 1c5843065bedd70dc5cabba36f24cfe1
83 0.69 0.69 6656adc361a7df7ff1fec2ec7a88f9c6
83 0.69 0.69 fd2ed14393cfdce3bd5f20a23cfc9fd4
57 0.47 0.47 17b7b782cef1b6bac3786b122c0a0afb
52 0.43 0.43 4d0240f500d53f7da0223d2f415a5d5b
SAMPLES AVG_ACTIVE_THREADS PCT_OF_TOTAL_LOAD THREAD_TYPE
=========== ================== ================= ==================================================
6296 52.43 52.68 SqlExecutor
5296 44.12 44.31 JobWorker
118 0.98 0.98 ContinuousPageFlusher
47 0.38 0.39 WorkerThread (StatisticsServer)
40 0.33 0.33 MergedogMonitor
32 0.26 0.26 MergedogMerger
27 0.22 0.22 JobexMainDispatcher
19 0.15 0.15 LogBackupThread
16 0.13 0.13 Request
10 0.08 0.08 BackupProgressWatchdog
SAMPLES AVG_ACTIVE_THREADS PCT_OF_TOTAL_LOAD THREAD_METHOD
=========== ================== ================= ==================================================
4747 39.54 39.72 ExecutePrepared
3482 29.01 29.13 SearchPartJob
1241 10.33 10.38 ExecQidItab
814 6.78 6.81 RleScanVecOutJob<range>
168 1.39 1.40 ClusterIndexScanBvOutJob<ScanRangePredicate>
151 1.25 1.26 ?
146 1.21 1.22 LOBGarbageCollectorJob
118 0.98 0.98 flushing
115 0.95 0.96 BatchExecute
107 0.89 0.89 NoAction
◉ This example covers a long time frame (24h) and should just give indications
◉ Normally you should know the time frames with bottlenecks, so you can narrow the input of the modification section down to the needed time
Pretty flashy here are the thread methods RleScanVecOutJob<range> and ClusterIndexScanBvOutJob<ScanRangePredicate>. We continue the analysis in part II to keep this blog readable.
Thread methods that can be possible indications are:
◉ IndirectScanBvOutJob*
◉ JobParallelMgetSearch
◉ JobParallelPagedMgetSearch
◉ PrefixedScanVecOutJob
◉ PrefixedScanVecOutJob<range>
◉ RlePredScanJob<ScanVectorBinSearchPredicate>(out=vector)
◉ RlePredScanJob<ScanVectorPredicate>(out=vector)
◉ RleScanBvOutJob<BV>
◉ RleScanBvOutJob<range>
◉ RleScanVecOutJob<BV>
◉ RleScanVecOutJob<range>
◉ RleScanBvOutJob
◉ scanWithoutIndex
◉ ClusterIndexScanBvOutJob<ScanRangePredicate>
◉ ClusterScanBvOutJob<BV>
◉ ClusterScanBvOutJob<range>
◉ ClusterScanVecOutJob<range>
◉ SparseBvScanBvOutJob
◉ SparseBvScanVecOutJob
◉ SparsePredScanBvOutJob<ScanRangesPredicate>
◉ SparsePredScanVecOutJob<ScanRangesPredicate>
◉ SparsePredScanVecOutJob<ScanVectorBinSearchPredicate>
◉ SparsePredScanVecOutJob<ScanVectorPredicate>
◉ SparseRangeScanBvOutJob
◉ SparseRangeScanVecOutJob
◉ sparseSearch
◉ sse_icc_lib::mgetSearchi_AVX2impl
◉ sse_icc_lib::mgetSearchi_AVX
Another indicator can be checks 890 or 1125 of the mini checks (HANA_Configuration_MiniChecks*):
|M0890|Unusual frequent thread methods (last hour) | |IndirectScanBvOutJob<BV> (5.33 threads) |none |X| 2114710|
|M1125|Columns with many scanned records | |MSEG.BUDAT_MKPF (41067249/s) |none |X| 2000002|
|M1125|Columns with many scanned records | |MSEG.MANDT (16265794/s) |none |X| 2000002|
|M1125|Columns with many scanned records | |MSEG.MATNR (375450570/s) |none |X| 2000002|
|M1125|Columns with many scanned records | |MSEG.MJAHR (354290653/s) |none |X| 2000002|
|M1125|Columns with many scanned records | |MSEG.WERKS (28137626/s) |none |X| 2000002|
2. Limitations
2.1 High runtime of MIN and MAX searches
Indexes in SAP HANA can be used to support MIN / MAX searches if all of the following conditions are met:
◉ SAP HANA >= 2.0 SPS 04
◉ Column store table
◉ Number column or character column without mixture of upper / lower case letters
In all other scenarios an index can’t be used to identify the maximum or minimum value of a column directly. Instead, the whole column / table has to be scanned. Therefore, you should avoid frequent MIN or MAX searches on large data volumes. Possible alternatives are:
◉ Definition of an ABAP number range object
◉ Sequences (SAP Note 2600095)
◉ Identities (GENERATED BY DEFAULT AS IDENTITY, CURRENT_IDENTITY_VALUE())
◉ Additional selective conditions
◉ Maintaining the MIN and MAX values independently in a separate table
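As an illustration of the sequence alternative (SAP Note 2600095), here is a minimal sketch; the table, column and sequence names are hypothetical:

```sql
-- Hypothetical example: replace a frequent SELECT MAX(DOC_ID) + 1 pattern
CREATE COLUMN TABLE DOC_STORE (DOC_ID BIGINT PRIMARY KEY, PAYLOAD NVARCHAR(100));
CREATE SEQUENCE DOC_ID_SEQ START WITH 1 INCREMENT BY 1;

-- The next key is fetched from the sequence instead of scanning the column
INSERT INTO DOC_STORE VALUES (DOC_ID_SEQ.NEXTVAL, 'first document');

-- CURRVAL returns the value last fetched by this session
SELECT DOC_ID_SEQ.CURRVAL FROM DUMMY;
```

The identity variant moves the counter maintenance into the table definition instead of a separate sequence object.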
In most cases the column store provides better performance for MIN / MAX searches than the row store, so you can consider a move to the column store (SAP Note 2222277) when the problem is linked to a row store table.
2.2 ORDER BY
SAP HANA indexes don’t support sorting. Therefore, an ORDER BY requires explicit sorting, even if an index on the related column(s) exists. In cases where a large number of records has to be sorted before the first few records are returned, the runtime can be rather high.
Both limitations result from the insert-only-on-delta design of SAP HANA: a column store table consists of the two data areas main and delta
◉ The main store is read optimized, sorted and compressed
◉ The delta one is write optimized, not sorted and not compressed
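A quick way to see both areas is the monitoring view M_CS_TABLES; this is a sketch, the schema and table names are examples:

```sql
-- Compare the read-optimized main and the write-optimized delta of a CS table
SELECT TABLE_NAME,
       RAW_RECORD_COUNT_IN_MAIN,
       RAW_RECORD_COUNT_IN_DELTA,
       LAST_MERGE_TIME
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'SAPSR3' AND TABLE_NAME = 'MSEG';

-- A manual delta merge moves the delta content into the main store
MERGE DELTA OF "SAPSR3"."MSEG";
```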
2.3 SPARSE / PREFIXED compression
If you create an index on a column with SPARSE or PREFIXED compression, you should trigger an optimize compression run afterwards. Normally this happens with one of the next delta merges, but to avoid running with suboptimal compression in the meantime, analyze the column and trigger the run manually if necessary:
UPDATE "<table_name>" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE')
Whether this is necessary can be analyzed with the Data Collector (HANA_SQL_StatementHash_DataCollector – part II of this blog series) or the column store columns overview (HANA_Tables_ColumnStore_Columns).
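If the scripts are not at hand, the compression type can also be checked directly in the monitoring view M_CS_ALL_COLUMNS; a sketch, adjust the schema filter to your system:

```sql
-- Columns that still carry SPARSE or PREFIXED compression after index creation
SELECT SCHEMA_NAME, TABLE_NAME, COLUMN_NAME, COMPRESSION_TYPE, LOADED
FROM M_CS_ALL_COLUMNS
WHERE COMPRESSION_TYPE IN ('SPARSE', 'PREFIXED')
  AND SCHEMA_NAME = 'SAPSR3';
```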
3. SAP recommendations
The recommendations of SAP for the most common tables and SQL hashes can be found in SAP Note 2000002. Some recommendations can also be obtained from the tools section below.
4. Evaluate expensive statements
Check the system for statements which are executed frequently and have a high execution time per record.
For an overview you can start with the most scanned columns.
HANA_Tables_ColumnStore_Columns_Statistics_2.00.030+
( SELECT /* Modification section */
'%' HOST,
'%' PORT,
'%' SCHEMA_NAME,
'%' TABLE_NAME,
'%' COLUMN_NAME,
-1 MIN_MEM_SIZE_MB,
10000000 MIN_SCANNED_RECORDS_PER_S,
' ' ONLY_UNUSED_CONCAT_ATTRIBUTES,
'TABLE' OBJECT_LEVEL, /* TABLE, PARTITION */
'NONE' AGGREGATE_BY, /* HOST, PORT, SCHEMA, TABLE, COLUMN, or comma separated combinations, NONE for no aggregation */
'SCANNED_RECORDS' ORDER_BY /* NAME, SCANNED_RECORDS, INDEX_LOOKUPS, SIZE */
Depending on the size and lifetime (amount of statistics data) of your DB you have to adjust MIN_SCANNED_RECORDS_PER_S.
------------------------------------------------------------------------
|TABLE_NAME |COLUMN_NAME |SCANNED_RECORDS |SCR_PER_S |INDEX_LOOKUPS|IL_PER_S|MEM_SIZE_MB|INDEX_TYPE|
-------------------------------------------------------------------------
|VBAP |MATNR | 1885008657244212| 2089965282| 0| 0.00| 265.15|NONE |
|MSEG |MATNR | 396765902062072| 439906181| 42262934| 46.85| 6873.89|BLOCK |
|MKPF |BKTXT | 292087907375924| 323846568| 0| 0.00| 25.00|NONE |
|MSEG |MJAHR | 276763424959580| 306855858| 7436| 0.00| 662.68|BLOCK |
|VBEP |EDATU | 243911240701207| 270431662| 0| 0.00| 24.75|NONE |
|KONV |KNTYP | 237690523994145| 263534568| 0| 0.00| 2496.31|NONE |
|BSIS |XOPVW | 103255810457226| 114482794| 0| 0.00| 150.09|NONE |
|MSEG |ZEILE | 102177134639927| 113286834| 375568| 0.41| 2704.73|BLOCK |
|MSEG |BUDAT_MKPF | 37645623055548| 41738824| 0| 0.00| 2412.83|NONE |
|VBPA |KUNNR | 32958632854528| 36542218| 1869284| 2.07| 14.67|BLOCK |
-------------------------------------------------------------------------
Every column with index type “NONE” offers room for improvement. We now have the columns that are frequently accessed / scanned, with and without an index.
On this basis you can search the expensive statements for those tables:
SELECT statement_hash, statement_string, execution_count, total_execution_time
FROM m_sql_plan_cache WHERE statement_string LIKE '%VBAP%' ORDER BY total_execution_time DESC;
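Once a frequently scanned, selective column without an index is confirmed, the remedy is usually a single-column index; for column store tables this creates an inverted index, which needs far less memory than a concatenated multi-column index. The index name below is an assumption, following the usual customer naming convention:

```sql
-- Single-column index on the most scanned column from the list above
CREATE INDEX "VBAP~Z01" ON "VBAP" ("MATNR");
```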
4.1 CPU
If you observe frequently high CPU usage you should start an analysis with the TFR (TimeFrameReport):
◉ HANA_Global_CurrentStateReport_2.00.040+ – for just in time workloads
◉ HANA_Global_TimeFrameReport_2.00.040+ – for historical workloads
More experienced experts can also skip this step and go directly to a thread analysis via
◉ HANA_Threads_ThreadSamples_AggregationPerTimeSlice_2.00.040+
◉ HANA_Threads_ThreadSamples_FilterAndAggregation_2.00.040+
◉ HANA_Threads_ThreadSamples_StatisticalRecords_2.00.040+
But for now, we go ahead with the TFR.
Watch out for some special thread methods, which can be analyzed quickly via this extract of the TFR:
WITH
BASIS_INFO AS
( SELECT
GREATEST(ADD_DAYS(CURRENT_TIMESTAMP, -HISTORY_RETENTION_DAYS - 1), CASE TIMEZONE WHEN 'UTC' THEN ADD_SECONDS(BEGIN_TIME, SECONDS_BETWEEN(CURRENT_UTCTIMESTAMP, CURRENT_TIMESTAMP)) ELSE BEGIN_TIME END) BEGIN_TIME,
LEAST(CURRENT_TIMESTAMP, CASE TIMEZONE WHEN 'UTC' THEN ADD_SECONDS(END_TIME, SECONDS_BETWEEN(CURRENT_UTCTIMESTAMP, CURRENT_TIMESTAMP)) ELSE END_TIME END) END_TIME,
GREATEST(ADD_DAYS(CURRENT_TIMESTAMP, -HISTORY_RETENTION_DAYS - 1), BEGIN_TIME) BEGIN_TIME_ORIG,
LEAST(CURRENT_TIMESTAMP, END_TIME) END_TIME_ORIG,
SECONDS_BETWEEN(GREATEST(ADD_DAYS(CURRENT_TIMESTAMP, -HISTORY_RETENTION_DAYS - 1), BEGIN_TIME), LEAST(CURRENT_TIMESTAMP, END_TIME)) SECONDS,
HOST,
PORT,
THREAD_HISTORY_INTERVAL_S,
TOP_N_CS_SIZE,
TOP_N_RS_SIZE,
TOP_N_DISK,
TOP_N_BLOCKED_TRANSACTIONS,
TOP_N_MEMORY,
TOP_N_IDLE_CURSORS,
TOP_N_LONGRUNNERS,
TOP_N_SQL_TIME,
TOP_N_SQL_EXECUTIONS,
TOP_N_SQL_RECORDS,
TOP_N_EXPENSIVE_SQL_TIME,
TOP_N_EXECUTED_SQL_TIME,
TOP_N_THREAD_SQL,
TOP_N_THREAD_TYPES,
TOP_N_THREAD_STATES_AND_LOCKS,
TOP_N_THREAD_METHODS,
TOP_N_THREAD_DB_USERS,
TOP_N_THREAD_APP_USERS,
TOP_N_THREAD_APP_NAMES,
TOP_N_THREAD_APP_SOURCES,
TOP_N_THREAD_HOST_PORTS,
TOP_N_TABLE_OPTIMIZATIONS,
TOP_N_TRACE_ENTRIES,
LINE_LENGTH
FROM
( SELECT
CASE
WHEN BEGIN_TIME = 'C' THEN CURRENT_TIMESTAMP
WHEN BEGIN_TIME LIKE 'C-S%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(BEGIN_TIME, 'C-S'))
WHEN BEGIN_TIME LIKE 'C-M%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(BEGIN_TIME, 'C-M') * 60)
WHEN BEGIN_TIME LIKE 'C-H%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(BEGIN_TIME, 'C-H') * 3600)
WHEN BEGIN_TIME LIKE 'C-D%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(BEGIN_TIME, 'C-D') * 86400)
WHEN BEGIN_TIME LIKE 'C-W%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(BEGIN_TIME, 'C-W') * 86400 * 7)
WHEN BEGIN_TIME LIKE 'E-S%' THEN ADD_SECONDS(TO_TIMESTAMP(END_TIME, 'YYYY/MM/DD HH24:MI:SS'), -SUBSTR_AFTER(BEGIN_TIME, 'E-S'))
WHEN BEGIN_TIME LIKE 'E-M%' THEN ADD_SECONDS(TO_TIMESTAMP(END_TIME, 'YYYY/MM/DD HH24:MI:SS'), -SUBSTR_AFTER(BEGIN_TIME, 'E-M') * 60)
WHEN BEGIN_TIME LIKE 'E-H%' THEN ADD_SECONDS(TO_TIMESTAMP(END_TIME, 'YYYY/MM/DD HH24:MI:SS'), -SUBSTR_AFTER(BEGIN_TIME, 'E-H') * 3600)
WHEN BEGIN_TIME LIKE 'E-D%' THEN ADD_SECONDS(TO_TIMESTAMP(END_TIME, 'YYYY/MM/DD HH24:MI:SS'), -SUBSTR_AFTER(BEGIN_TIME, 'E-D') * 86400)
WHEN BEGIN_TIME LIKE 'E-W%' THEN ADD_SECONDS(TO_TIMESTAMP(END_TIME, 'YYYY/MM/DD HH24:MI:SS'), -SUBSTR_AFTER(BEGIN_TIME, 'E-W') * 86400 * 7)
WHEN BEGIN_TIME = 'MIN' THEN TO_TIMESTAMP('1000/01/01 00:00:00', 'YYYY/MM/DD HH24:MI:SS')
WHEN SUBSTR(BEGIN_TIME, 1, 1) NOT IN ('C', 'E', 'M') THEN TO_TIMESTAMP(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS')
END BEGIN_TIME,
CASE
WHEN END_TIME = 'C' THEN CURRENT_TIMESTAMP
WHEN END_TIME LIKE 'C-S%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(END_TIME, 'C-S'))
WHEN END_TIME LIKE 'C-M%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(END_TIME, 'C-M') * 60)
WHEN END_TIME LIKE 'C-H%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(END_TIME, 'C-H') * 3600)
WHEN END_TIME LIKE 'C-D%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(END_TIME, 'C-D') * 86400)
WHEN END_TIME LIKE 'C-W%' THEN ADD_SECONDS(CURRENT_TIMESTAMP, -SUBSTR_AFTER(END_TIME, 'C-W') * 86400 * 7)
WHEN END_TIME LIKE 'B+S%' THEN ADD_SECONDS(TO_TIMESTAMP(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS'), SUBSTR_AFTER(END_TIME, 'B+S'))
WHEN END_TIME LIKE 'B+M%' THEN ADD_SECONDS(TO_TIMESTAMP(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS'), SUBSTR_AFTER(END_TIME, 'B+M') * 60)
WHEN END_TIME LIKE 'B+H%' THEN ADD_SECONDS(TO_TIMESTAMP(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS'), SUBSTR_AFTER(END_TIME, 'B+H') * 3600)
WHEN END_TIME LIKE 'B+D%' THEN ADD_SECONDS(TO_TIMESTAMP(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS'), SUBSTR_AFTER(END_TIME, 'B+D') * 86400)
WHEN END_TIME LIKE 'B+W%' THEN ADD_SECONDS(TO_TIMESTAMP(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS'), SUBSTR_AFTER(END_TIME, 'B+W') * 86400 * 7)
WHEN END_TIME = 'MAX' THEN TO_TIMESTAMP('9999/12/31 00:00:00', 'YYYY/MM/DD HH24:MI:SS')
WHEN SUBSTR(END_TIME, 1, 1) NOT IN ('C', 'B', 'M') THEN TO_TIMESTAMP(END_TIME, 'YYYY/MM/DD HH24:MI:SS')
END END_TIME,
TIMEZONE,
HOST,
PORT,
THREAD_HISTORY_INTERVAL_S,
TOP_N_CS_SIZE,
TOP_N_RS_SIZE,
TOP_N_DISK,
TOP_N_BLOCKED_TRANSACTIONS,
TOP_N_MEMORY,
TOP_N_IDLE_CURSORS,
TOP_N_LONGRUNNERS,
TOP_N_SQL_TIME,
TOP_N_SQL_EXECUTIONS,
TOP_N_SQL_RECORDS,
TOP_N_EXPENSIVE_SQL_TIME,
TOP_N_EXECUTED_SQL_TIME,
TOP_N_THREAD_SQL,
TOP_N_THREAD_TYPES,
TOP_N_THREAD_STATES_AND_LOCKS,
TOP_N_THREAD_METHODS,
TOP_N_THREAD_DB_USERS,
TOP_N_THREAD_APP_USERS,
TOP_N_THREAD_APP_NAMES,
TOP_N_THREAD_APP_SOURCES,
TOP_N_THREAD_HOST_PORTS,
TOP_N_TABLE_OPTIMIZATIONS,
TOP_N_TRACE_ENTRIES,
LINE_LENGTH
FROM
( SELECT /* Modification section */
'C-H19' BEGIN_TIME, /* YYYY/MM/DD HH24:MI:SS timestamp, C, C-S<seconds>, C-M<minutes>, C-H<hours>, C-D<days>, C-W<weeks>, E-S<seconds>, E-M<minutes>, E-H<hours>, E-D<days>, E-W<weeks>, MIN */
'C-H17' END_TIME, /* YYYY/MM/DD HH24:MI:SS timestamp, C, C-S<seconds>, C-M<minutes>, C-H<hours>, C-D<days>, C-W<weeks>, B+S<seconds>, B+M<minutes>, B+H<hours>, B+D<days>, B+W<weeks>, MAX */
'SERVER' TIMEZONE, /* SERVER, UTC */
'%' HOST,
'%' PORT,
60 THREAD_HISTORY_INTERVAL_S,
10 TOP_N_CS_SIZE,
5 TOP_N_RS_SIZE,
10 TOP_N_DISK,
5 TOP_N_BLOCKED_TRANSACTIONS,
20 TOP_N_MEMORY,
5 TOP_N_IDLE_CURSORS,
5 TOP_N_LONGRUNNERS,
20 TOP_N_SQL_TIME,
20 TOP_N_SQL_EXECUTIONS,
10 TOP_N_SQL_RECORDS,
10 TOP_N_EXPENSIVE_SQL_TIME,
10 TOP_N_EXECUTED_SQL_TIME,
20 TOP_N_THREAD_SQL,
10 TOP_N_THREAD_TYPES,
10 TOP_N_THREAD_STATES_AND_LOCKS,
10 TOP_N_THREAD_METHODS,
3 TOP_N_THREAD_DB_USERS,
3 TOP_N_THREAD_APP_USERS,
5 TOP_N_THREAD_APP_NAMES,
5 TOP_N_THREAD_APP_SOURCES,
5 TOP_N_THREAD_HOST_PORTS,
10 TOP_N_TABLE_OPTIMIZATIONS,
30 TOP_N_TRACE_ENTRIES,
200 LINE_LENGTH
FROM
DUMMY
)
),
( SELECT MAX(IFNULL(RETENTION_DAYS_CURRENT, RETENTION_DAYS_DEFAULT)) HISTORY_RETENTION_DAYS FROM _SYS_STATISTICS.STATISTICS_SCHEDULE )
),
SQLHIST AS
( SELECT
STATEMENT_HASH,
SQL_TYPE,
DURATION_S ELAPSED_S,
ACCESSED_OBJECTS,
EXECUTIONS,
RECORDS,
CASE WHEN DR <= TOP_N_SQL_TIME THEN 'X' ELSE ' ' END TOP_TIME,
CASE WHEN ER <= TOP_N_SQL_EXECUTIONS THEN 'X' ELSE ' ' END TOP_EXECUTIONS,
CASE WHEN RR <= TOP_N_SQL_RECORDS THEN 'X' ELSE ' ' END TOP_RECORDS
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY DURATION_S DESC) DR,
ROW_NUMBER () OVER (ORDER BY EXECUTIONS DESC) ER,
ROW_NUMBER () OVER (ORDER BY RECORDS DESC) RR,
STATEMENT_HASH,
CASE
WHEN STATEMENT_STRING_CLEANED LIKE 'ALTER INDEX%' THEN 'AI'
WHEN STATEMENT_STRING_CLEANED LIKE 'ALTER SYSTEM%' THEN 'AS'
WHEN STATEMENT_STRING_CLEANED LIKE 'ALTER TABLE%' THEN 'AT'
WHEN STATEMENT_STRING_CLEANED LIKE 'ALTER%' THEN 'AL'
WHEN STATEMENT_STRING_CLEANED LIKE 'CALL%' THEN 'CA'
WHEN STATEMENT_STRING_CLEANED LIKE 'COMMIT%' THEN 'CO'
WHEN STATEMENT_STRING_CLEANED LIKE 'CREATE INDEX%' THEN 'CI'
WHEN STATEMENT_STRING_CLEANED LIKE 'CREATE TABLE%' THEN 'CT'
WHEN STATEMENT_STRING_CLEANED LIKE 'CREATE%' THEN 'CR'
WHEN STATEMENT_STRING_CLEANED LIKE 'DELETE%' THEN 'DE'
WHEN STATEMENT_STRING_CLEANED LIKE 'DROP INDEX%' THEN 'DI'
WHEN STATEMENT_STRING_CLEANED LIKE 'DROP TABLE%' THEN 'DT'
WHEN STATEMENT_STRING_CLEANED LIKE 'DROP%' THEN 'DR'
WHEN STATEMENT_STRING_CLEANED LIKE 'EXECUTE%' THEN 'EX'
WHEN STATEMENT_STRING_CLEANED LIKE 'INSERT%' THEN 'IN'
WHEN STATEMENT_STRING_CLEANED LIKE 'REPLACE%' THEN 'RE'
WHEN STATEMENT_STRING_CLEANED LIKE 'ROLLBACK%' THEN 'RO'
WHEN STATEMENT_STRING_CLEANED LIKE 'SELECT%FOR UPDATE%' THEN 'SU'
WHEN STATEMENT_STRING_CLEANED LIKE 'SELECT%' THEN 'SE'
WHEN STATEMENT_STRING_CLEANED LIKE 'TRUNCATE%' THEN 'TR'
WHEN STATEMENT_STRING_CLEANED LIKE 'UPDATE%' THEN 'UP'
WHEN STATEMENT_STRING_CLEANED LIKE 'UPSERT%' THEN 'US'
WHEN STATEMENT_STRING_CLEANED LIKE 'WITH%' THEN 'WI'
ELSE 'unknown'
END SQL_TYPE,
ACCESSED_OBJECTS,
DURATION_S,
EXECUTIONS,
RECORDS,
TOP_N_SQL_TIME,
TOP_N_SQL_EXECUTIONS,
TOP_N_SQL_RECORDS
FROM
( SELECT
S.STATEMENT_HASH,
REPLACE(UPPER(LTRIM(MAP(SUBSTR(TO_VARCHAR(STATEMENT_STRING), 1, 2), '/*', SUBSTR(TO_VARCHAR(STATEMENT_STRING), LOCATE(TO_VARCHAR(STATEMENT_STRING), '*/') + 2), TO_VARCHAR(STATEMENT_STRING)), ' ({')), CHAR(10), '') STATEMENT_STRING_CLEANED,
MAX(TO_VARCHAR(S.ACCESSED_OBJECT_NAMES)) ACCESSED_OBJECTS,
SUM(TO_DOUBLE(TO_BIGINT(S.TOTAL_EXECUTION_TIME + S.TOTAL_PREPARATION_TIME))) / 1000000 DURATION_S,
SUM(S.EXECUTION_COUNT) EXECUTIONS,
SUM(S.TOTAL_RESULT_RECORD_COUNT) RECORDS,
BI.TOP_N_SQL_TIME,
BI.TOP_N_SQL_EXECUTIONS,
BI.TOP_N_SQL_RECORDS
FROM
BASIS_INFO BI,
_SYS_STATISTICS.HOST_SQL_PLAN_CACHE S
WHERE
S.SERVER_TIMESTAMP BETWEEN BI.BEGIN_TIME AND BI.END_TIME AND
S.HOST LIKE BI.HOST AND
TO_VARCHAR(S.PORT) LIKE BI.PORT
GROUP BY
S.STATEMENT_HASH,
TO_VARCHAR(S.STATEMENT_STRING),
BI.TOP_N_SQL_TIME,
BI.TOP_N_SQL_EXECUTIONS,
BI.TOP_N_SQL_RECORDS
)
)
WHERE
DR <= TOP_N_SQL_TIME OR
ER <= TOP_N_SQL_EXECUTIONS OR
RR <= TOP_N_SQL_RECORDS
),
THREADS AS
( SELECT
T.HOST,
T.PORT,
CASE
WHEN T.STATEMENT_HASH = CHAR(63) THEN 'no SQL (' || MAP(T.THREAD_METHOD, CHAR(63), T.THREAD_TYPE, T.THREAD_METHOD) || ')'
ELSE T.STATEMENT_HASH
END STATEMENT_HASH,
CASE
WHEN T.THREAD_TYPE LIKE 'JobWrk%' THEN 'JobWorker'
ELSE T.THREAD_TYPE
END THREAD_TYPE,
T.THREAD_METHOD,
T.THREAD_STATE,
SUBSTR(T.LOCK_WAIT_NAME, MAP(INSTR(T.LOCK_WAIT_NAME, ':' || CHAR(32)), 0, 1, INSTR(T.LOCK_WAIT_NAME, ':' || CHAR(32)) + 2)) LOCK_NAME,
T.USER_NAME DB_USER,
T.APPLICATION_USER_NAME APP_USER,
T.APPLICATION_NAME APP_NAME,
T.APPLICATION_SOURCE APP_SOURCE,
COUNT(*) NUM_SAMPLES,
COUNT(*) / BI.SECONDS * BI.THREAD_HISTORY_INTERVAL_S ACT_THR,
COUNT(*) / (SUM(COUNT(*)) OVER () + 0.01) * 100 THR_PCT
FROM
BASIS_INFO BI,
_SYS_STATISTICS.HOST_SERVICE_THREAD_SAMPLES T
WHERE
T.SERVER_TIMESTAMP BETWEEN BI.BEGIN_TIME AND BI.END_TIME AND
T.HOST LIKE BI.HOST AND
TO_VARCHAR(T.PORT) LIKE BI.PORT
GROUP BY
T.HOST,
T.PORT,
CASE
WHEN T.STATEMENT_HASH = CHAR(63) THEN 'no SQL (' || MAP(T.THREAD_METHOD, CHAR(63), T.THREAD_TYPE, T.THREAD_METHOD) || ')'
ELSE T.STATEMENT_HASH
END,
T.THREAD_TYPE,
T.THREAD_STATE,
T.THREAD_METHOD,
T.LOCK_WAIT_NAME,
T.USER_NAME,
T.APPLICATION_USER_NAME,
T.APPLICATION_NAME,
T.APPLICATION_SOURCE,
BI.SECONDS,
BI.THREAD_HISTORY_INTERVAL_S
),
LINES AS
( SELECT TOP 500
ROW_NUMBER () OVER () LN
FROM
OBJECTS
)
SELECT MAP(BI.LINE_LENGTH, -1, LINE, SUBSTR(LINE, 1, LINE_LENGTH)) LINE FROM BASIS_INFO BI, (
SELECT 5 LINE_NO, '******************************' LINE FROM DUMMY
UNION ALL SELECT 10, '* SAP HANA TIME FRAME REPORT *' FROM DUMMY
UNION ALL SELECT 20, '******************************' FROM DUMMY
UNION ALL SELECT 30, '' FROM DUMMY
UNION ALL SELECT 90, RPAD('Generated with:', 20, CHAR(32)) || 'SQL: "HANA_Global_TimeFrameReport" (SAP Note 1969700)' FROM DUMMY
UNION ALL SELECT 100, RPAD('Start time:', 20, CHAR(32)) || TO_VARCHAR(BEGIN_TIME, 'YYYY/MM/DD HH24:MI:SS') FROM BASIS_INFO
UNION ALL SELECT 110, RPAD('End time:', 20, CHAR(32)) || TO_VARCHAR(END_TIME, 'YYYY/MM/DD HH24:MI:SS') FROM BASIS_INFO
UNION ALL SELECT 120, RPAD('Duration:', 20, CHAR(32)) || SECONDS || CHAR(32) || 's' FROM BASIS_INFO
UNION ALL SELECT 125, RPAD('Database name:', 20, CHAR(32)) || DATABASE_NAME FROM M_DATABASE
UNION ALL SELECT 127, RPAD('Revision level:', 20, CHAR(32)) || VERSION FROM M_DATABASE
UNION ALL SELECT 130, RPAD('Host:', 20, CHAR(32)) || MAP(HOST, '%', 'all', HOST) FROM BASIS_INFO
UNION ALL SELECT 140, RPAD('Port:', 20, CHAR(32)) || MAP(PORT, '%', 'all', PORT) FROM BASIS_INFO
UNION ALL SELECT 1000, '' FROM DUMMY
UNION ALL SELECT 1010, '*********************' FROM DUMMY
UNION ALL SELECT 1030, '* WORKLOAD OVERVIEW *' FROM DUMMY
UNION ALL SELECT 1040, '*********************' FROM DUMMY
UNION ALL SELECT 1050, '' FROM DUMMY
UNION ALL SELECT 1060, RPAD('ACTIVITY', 20, CHAR(32)) || LPAD('TOTAL', 20) || LPAD('RATE_PER_SECOND', 20) FROM DUMMY
UNION ALL SELECT 1070, RPAD('=', 20, '=') || CHAR(32) || LPAD('=', 19, '=') || CHAR(32) || LPAD('=', 19, '=') FROM DUMMY
UNION ALL
SELECT
1100 + L.LN,
CASE L.LN
WHEN 1 THEN RPAD('Executions', 20, CHAR(32)) || LPAD(W.EXECUTIONS, 20) || LPAD(TO_DECIMAL(W.EXECUTIONS / BI.SECONDS, 10, 2), 20)
WHEN 2 THEN RPAD('Compilations', 20, CHAR(32)) || LPAD(W.COMPILATIONS, 20) || LPAD(TO_DECIMAL(W.COMPILATIONS / BI.SECONDS, 10, 2), 20)
WHEN 3 THEN RPAD('Update transactions', 20, CHAR(32)) || LPAD(W.UPDATE_TRANSACTIONS, 20) || LPAD(TO_DECIMAL(W.UPDATE_TRANSACTIONS / BI.SECONDS, 10, 2), 20)
WHEN 4 THEN RPAD('Commits', 20, CHAR(32)) || LPAD(W.COMMITS, 20) || LPAD(TO_DECIMAL(W.COMMITS / BI.SECONDS, 10, 2), 20)
WHEN 5 THEN RPAD('Rollbacks', 20, CHAR(32)) || LPAD(W.ROLLBACKS, 20) || LPAD(TO_DECIMAL(W.ROLLBACKS / BI.SECONDS, 10, 2), 20)
END
FROM
LINES L,
BASIS_INFO BI,
( SELECT
SUM(EXECUTION_COUNT_DELTA) EXECUTIONS,
SUM(COMPILATION_COUNT_DELTA) COMPILATIONS,
SUM(UPDATE_TRANSACTION_COUNT_DELTA) UPDATE_TRANSACTIONS,
SUM(COMMIT_COUNT_DELTA) COMMITS,
SUM(ROLLBACK_COUNT_DELTA) ROLLBACKS
FROM
BASIS_INFO BI,
_SYS_STATISTICS.HOST_WORKLOAD W
WHERE
W.SERVER_TIMESTAMP BETWEEN BI.BEGIN_TIME AND BI.END_TIME AND
W.HOST LIKE BI.HOST AND
TO_VARCHAR(W.PORT) LIKE BI.PORT AND
W.EXECUTION_COUNT_DELTA >= 0
) W
WHERE
L.LN <= 5
UNION ALL SELECT 80000, '' FROM DUMMY
UNION ALL SELECT 80010, '*********************' FROM DUMMY
UNION ALL SELECT 80030, '* THREAD ACTIVITIES *' FROM DUMMY
UNION ALL SELECT 80040, '*********************' FROM DUMMY
UNION ALL SELECT 80050, '' FROM DUMMY
UNION ALL SELECT 80060, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'STATEMENT_HASH' FROM DUMMY
UNION ALL SELECT 80070, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80100 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || STATEMENT_HASH
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.STATEMENT_HASH,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_SQL
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.STATEMENT_HASH,
BI.TOP_N_THREAD_SQL
)
WHERE
LN <= TOP_N_THREAD_SQL
UNION ALL SELECT 80150, '' FROM DUMMY
UNION ALL SELECT 80160, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'THREAD_TYPE' FROM DUMMY
UNION ALL SELECT 80170, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80200 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || THREAD_TYPE
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.THREAD_TYPE,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_TYPES
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.THREAD_TYPE,
BI.TOP_N_THREAD_TYPES
)
WHERE
LN <= TOP_N_THREAD_TYPES
UNION ALL SELECT 80210, '' FROM DUMMY
UNION ALL SELECT 80211, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'THREAD_METHOD' FROM DUMMY
UNION ALL SELECT 80212, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80220 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || THREAD_METHOD
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.THREAD_METHOD,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_METHODS
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.THREAD_METHOD,
BI.TOP_N_THREAD_METHODS
)
WHERE
LN <= TOP_N_THREAD_METHODS
UNION ALL SELECT 80250, '' FROM DUMMY
UNION ALL SELECT 80260, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'THREAD_STATE_AND_LOCK' FROM DUMMY
UNION ALL SELECT 80270, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 80, '=') FROM DUMMY
UNION ALL
SELECT
80300 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || THREAD_STATE
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.THREAD_STATE || CASE WHEN LOCK_NAME IS NOT NULL and LOCK_NAME != '' AND LOCK_NAME != CHAR(63) THEN CHAR(32) || '(' || LOCK_NAME || ')' ELSE '' END THREAD_STATE,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_STATES_AND_LOCKS
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.THREAD_STATE || CASE WHEN LOCK_NAME IS NOT NULL and LOCK_NAME != '' AND LOCK_NAME != CHAR(63) THEN CHAR(32) || '(' || LOCK_NAME || ')' ELSE '' END,
BI.TOP_N_THREAD_STATES_AND_LOCKS
)
WHERE
LN <= TOP_N_THREAD_STATES_AND_LOCKS
UNION ALL SELECT 80350, '' FROM DUMMY
UNION ALL SELECT 80360, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'DB_USER' FROM DUMMY
UNION ALL SELECT 80370, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80400 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || DB_USER
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.DB_USER,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_DB_USERS
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.DB_USER,
BI.TOP_N_THREAD_DB_USERS
)
WHERE
LN <= TOP_N_THREAD_DB_USERS
UNION ALL SELECT 80450, '' FROM DUMMY
UNION ALL SELECT 80460, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'APPLICATION_USER' FROM DUMMY
UNION ALL SELECT 80470, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80500 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || APP_USER
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.APP_USER,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_APP_USERS
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.APP_USER,
BI.TOP_N_THREAD_APP_USERS
)
WHERE
LN <= TOP_N_THREAD_APP_USERS
UNION ALL SELECT 80550, '' FROM DUMMY
UNION ALL SELECT 80560, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'APPLICATION_NAME' FROM DUMMY
UNION ALL SELECT 80570, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80600 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || APP_NAME
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.APP_NAME,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_APP_NAMES
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.APP_NAME,
BI.TOP_N_THREAD_APP_NAMES
)
WHERE
LN <= TOP_N_THREAD_APP_NAMES
UNION ALL SELECT 80650, '' FROM DUMMY
UNION ALL SELECT 80660, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'APPLICATION_SOURCE' FROM DUMMY
UNION ALL SELECT 80670, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 80, '=') FROM DUMMY
UNION ALL
SELECT
80700 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || APP_SOURCE
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.APP_SOURCE,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_APP_SOURCES
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.APP_SOURCE,
BI.TOP_N_THREAD_APP_SOURCES
)
WHERE
LN <= TOP_N_THREAD_APP_SOURCES
UNION ALL SELECT 80750, '' FROM DUMMY
UNION ALL SELECT 80760, LPAD('SAMPLES', 11) || LPAD('AVG_ACTIVE_THREADS', 19)|| LPAD('PCT_OF_TOTAL_LOAD', 18) || CHAR(32) || 'HOST_AND_PORTS' FROM DUMMY
UNION ALL SELECT 80770, LPAD('=', 11, '=') || CHAR(32) || LPAD('=', 18, '=') || CHAR(32) || LPAD('=', 17, '=') || CHAR(32) || RPAD('=', 50, '=') FROM DUMMY
UNION ALL
SELECT
80800 + LN / 1000, LPAD(SAMPLES, 11) || LPAD(ACT_THR, 19) || LPAD(THR_PCT, 18) || CHAR(32) || HOST_AND_PORT
FROM
( SELECT
ROW_NUMBER () OVER (ORDER BY SUM(T.NUM_SAMPLES) DESC) LN,
T.HOST || ':' || T.PORT HOST_AND_PORT,
SUM(T.NUM_SAMPLES) SAMPLES,
TO_DECIMAL(SUM(T.ACT_THR), 10, 2) ACT_THR,
TO_DECIMAL(SUM(T.THR_PCT), 10, 2) THR_PCT,
BI.TOP_N_THREAD_HOST_PORTS
FROM
BASIS_INFO BI,
THREADS T
GROUP BY
T.HOST || ':' || T.PORT,
BI.TOP_N_THREAD_HOST_PORTS
)
WHERE
LN <= TOP_N_THREAD_HOST_PORTS
)
WITH HINT (IGNORE_PLAN_CACHE)
*********************
* THREAD ACTIVITIES *
*********************
SAMPLES AVG_ACTIVE_THREADS PCT_OF_TOTAL_LOAD STATEMENT_HASH
=========== ================== ================= ==================================================
2562 21.34 21.43 c3d7d15bec5e66ec6ab15e86527bcca5
2302 19.18 19.26 649e56ca14054ff20c5f60383f2f2a58
1007 8.39 8.42 426bdc1658f5f21d5bebf85ec4ab9c2c
981 8.17 8.20 4d0b16d0bba85c88ba9a09fa0e114adf
671 5.59 5.61 96cf8b00b971bb539ffe871d72691906
490 4.08 4.10 3622648ff8db4fa25550af86a03537ef
241 2.00 2.01 72b41f8cf534d44d538ba38362d45e6f
166 1.38 1.38 0fe42d1615a0666aa17978dde2282c0e
146 1.21 1.22 no SQL (LOBGarbageCollectorJob)
118 0.98 0.98 no SQL (flushing)
116 0.96 0.97 no SQL (ExecQidItab)
114 0.94 0.95 no SQL (ExecutePrepared)
107 0.89 0.89 no SQL (NoAction)
101 0.84 0.84 no SQL (prepareDeltaMerge)
95 0.79 0.79 961e8ea25c70c3660bdcd998e9611f2c
92 0.76 0.76 1c5843065bedd70dc5cabba36f24cfe1
83 0.69 0.69 6656adc361a7df7ff1fec2ec7a88f9c6
83 0.69 0.69 fd2ed14393cfdce3bd5f20a23cfc9fd4
57 0.47 0.47 17b7b782cef1b6bac3786b122c0a0afb
52 0.43 0.43 4d0240f500d53f7da0223d2f415a5d5b
SAMPLES AVG_ACTIVE_THREADS PCT_OF_TOTAL_LOAD THREAD_TYPE
=========== ================== ================= ==================================================
6296 52.43 52.68 SqlExecutor
5296 44.12 44.31 JobWorker
118 0.98 0.98 ContinuousPageFlusher
47 0.38 0.39 WorkerThread (StatisticsServer)
40 0.33 0.33 MergedogMonitor
32 0.26 0.26 MergedogMerger
27 0.22 0.22 JobexMainDispatcher
19 0.15 0.15 LogBackupThread
16 0.13 0.13 Request
10 0.08 0.08 BackupProgressWatchdog
SAMPLES AVG_ACTIVE_THREADS PCT_OF_TOTAL_LOAD THREAD_METHOD
=========== ================== ================= ==================================================
4747 39.54 39.72 ExecutePrepared
3482 29.01 29.13 SearchPartJob
1241 10.33 10.38 ExecQidItab
814 6.78 6.81 RleScanVecOutJob<range>
168 1.39 1.40 ClusterIndexScanBvOutJob<ScanRangePredicate>
151 1.25 1.26 ?
146 1.21 1.22 LOBGarbageCollectorJob
118 0.98 0.98 flushing
115 0.95 0.96 BatchExecute
107 0.89 0.89 NoAction
◉ This example covers a long time frame (24 h) and should only give indications
◉ Normally you know the time frames with bottlenecks, so you can narrow the interval down to the needed time in the modification section
Particularly striking here are the thread methods RleScanVecOutJob<range> and ClusterIndexScanBvOutJob<ScanRangePredicate>. We continue the analysis in part II to keep this blog readable.
4.2 Indication of necessity of an index
Thread methods that can indicate a missing index are:
◉ IndirectScanBvOutJob*
◉ JobParallelMgetSearch
◉ JobParallelPagedMgetSearch
◉ PrefixedScanVecOutJob
◉ PrefixedScanVecOutJob<range>
◉ RlePredScanJob<ScanVectorBinSearchPredicate>(out=vector)
◉ RlePredScanJob<ScanVectorPredicate>(out=vector)
◉ RleScanBvOutJob<BV>
◉ RleScanBvOutJob<range>
◉ RleScanVecOutJob<BV>
◉ RleScanVecOutJob<range>
◉ RleScanBvOutJob
◉ scanWithoutIndex
◉ ClusterIndexScanBvOutJob<ScanRangePredicate>
◉ ClusterScanBvOutJob<BV>
◉ ClusterScanBvOutJob<range>
◉ ClusterScanVecOutJob<range>
◉ SparseBvScanBvOutJob
◉ SparseBvScanVecOutJob
◉ SparsePredScanBvOutJob<ScanRangesPredicate>
◉ SparsePredScanVecOutJob<ScanRangesPredicate>
◉ SparsePredScanVecOutJob<ScanVectorBinSearchPredicate>
◉ SparsePredScanVecOutJob<ScanVectorPredicate>
◉ SparseRangeScanBvOutJob
◉ SparseRangeScanVecOutJob
◉ sparseSearch
◉ sse_icc_lib::mgetSearchi_AVX2impl
◉ sse_icc_lib::mgetSearchi_AVX
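To check whether these methods show up on your system right now, the current thread samples can be aggregated directly; a sketch against the monitoring view M_SERVICE_THREAD_SAMPLES, with the LIKE patterns chosen to match the method families listed above:

```sql
-- Frequency of scan-related thread methods in the recent thread samples
SELECT THREAD_METHOD, COUNT(*) SAMPLES
FROM M_SERVICE_THREAD_SAMPLES
WHERE THREAD_METHOD LIKE '%ScanBvOutJob%'
   OR THREAD_METHOD LIKE '%ScanVecOutJob%'
   OR THREAD_METHOD LIKE '%MgetSearch%'
GROUP BY THREAD_METHOD
ORDER BY SAMPLES DESC;
```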
Another indicator can be checks M0890 and M1125 of the mini checks (HANA_Configuration_MiniChecks*):
|M0890|Unusual frequent thread methods (last hour) | |IndirectScanBvOutJob<BV> (5.33 threads) |none |X| 2114710|
|M1125|Columns with many scanned records | |MSEG.BUDAT_MKPF (41067249/s) |none |X| 2000002|
|M1125|Columns with many scanned records | |MSEG.MANDT (16265794/s) |none |X| 2000002|
|M1125|Columns with many scanned records | |MSEG.MATNR (375450570/s) |none |X| 2000002|
|M1125|Columns with many scanned records | |MSEG.MJAHR (354290653/s) |none |X| 2000002|
|M1125|Columns with many scanned records | |MSEG.WERKS (28137626/s) |none |X| 2000002|