Tuesday, 30 June 2020

DR Drill for HANA 2.0 multi-tier replication with reverse replication

I have been trying to achieve reverse replication in a HANA three-tier replication environment while avoiding a full sync. I have read many HANA replication blogs and decided to write one that gives a clear picture of how it works.

Here we have three sites:

Site A to Site B (Replication mode: SyncMem, operation mode: logreplay)

Site B to Site C (Replication mode: Async, operation mode: logreplay)
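During the drill, each registration and takeover step can be verified from SQL on the current primary. A minimal sketch, assuming the standard HANA 2.0 monitoring view M_SERVICE_REPLICATION (the exact columns can vary by revision):

```sql
-- Show the replication tiers as seen from the current primary site.
-- Expected here: SYNCMEM/LOGREPLAY towards Site B, ASYNC/LOGREPLAY towards Site C.
SELECT SITE_NAME,
       SECONDARY_SITE_NAME,
       REPLICATION_MODE,
       OPERATION_MODE,
       REPLICATION_STATUS,
       REPLICATION_STATUS_DETAILS
  FROM M_SERVICE_REPLICATION;
```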

Monday, 29 June 2020

HANA Data Strategy: Data Ingestion including Real-Time Change Data Capture

HANA Data Ingestion


Business Value

Simply put, HANA data ingestion techniques give SAP customers immediate access to all of their enterprise and internet data. Using HANA's real-time virtual modeling techniques, they can quickly and easily build a harmonized, well-governed, enterprise-wide view of that data, perform advanced analytics on it, and deliver new and deeper insights to the business faster and more affordably, all from a single digital business platform.

Friday, 26 June 2020

Execute Stored Procedure from SAP Analytics Cloud with NodeJS App

In this post, we’ll learn how to create a custom widget in an SAP Analytics Cloud Analytic Application to execute a stored procedure directly from the UI by clicking a button. After the execution is complete, a message box pops up with the status.

Flow Diagram


There are two components that we’ll build: the custom widget and the NodeJS app.

We start by creating the SAC custom widget’s JavaScript Web Components for the main and styling panels, and we define the widget id, name, web components, properties, methods, and event objects in a JSON file.
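On the database side, the NodeJS app ultimately just sends a CALL statement to HANA when the button is clicked; the sketch below uses a hypothetical schema and procedure name.

```sql
-- Hypothetical stored procedure triggered from the Analytic Application button.
-- The NodeJS app executes this statement and returns the outcome, which the
-- custom widget then shows in the pop-up message box.
CALL "MY_SCHEMA"."UPDATE_SALES_SNAPSHOT"();
```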

Thursday, 25 June 2020

SAP Readiness Check for SAP S/4HANA

Overview


The purpose of this blog is to help plan a transition from a conventional SAP ERP system to SAP S/4HANA. SAP Readiness Check for SAP S/4HANA assists customers with the planning and preparation activities necessary to convert an SAP ECC system to SAP S/4HANA. SAP Readiness Check for SAP S/4HANA supports SAP ERP 6.0 (Enhancement Package 0 to 8) and SAP S/4HANA Finance 1504 and 1605.

Wednesday, 24 June 2020

Installation Of SAP HANA 2.0 SPS03 Single-Host On SUSE LINUX 12 SP3

This article explains the steps of a HANA 2.0 installation on SUSE Linux 12 SP3. The configuration and permission setup described in this article is not meant for a production environment but for practicing HANA administration tasks.


Monday, 22 June 2020

SAP S/4HANA Extensibility

In this blog, we are going to explore the SAP S/4HANA extensibility options available for both the on-premise and cloud editions. I have read multiple posts and visited numerous official SAP websites to fully understand this extensibility framework, but I couldn’t find all the required information in one place. That is what motivated me to write this, my very first post. I would like to thank all the SCN contributors who have helped me in writing it. You can find all the relevant information related to S/4HANA extensibility here.

Saturday, 20 June 2020

Combining Data Intelligence and HANA Predictive Analytics Library for a simple Covid19 risk calculator

In this scenario we are going to work on the creation of a simple (and simplistic) Covid19 risk calculator for Mexico, using the Mexican government's open data. With this data we will train a logistic classification model so we can predict death risk based on the behavior of the pandemic in Mexico.

We start by going to the Data Intelligence Launchpad to create a new connection in the Connection Management tile. Our data is in an S3 bucket, so we have to create an S3 connection. In addition, we create a connection to our HANA system.
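Once the data lands in HANA, the training itself happens through a PAL call on the HANA side. The sketch below is illustrative only: table and column names are hypothetical, and the exact PAL procedure signature depends on the installed PAL version, so check the PAL reference for your revision.

```sql
-- Hypothetical staging table filled from the S3 open-data files; quick check
-- of the label distribution before training the logistic classification model.
SELECT DEATH AS LABEL, COUNT(*) AS CASES
  FROM COVID_CASES_MX
 GROUP BY DEATH;

-- PAL control parameters follow the usual (NAME, INT, DOUBLE, STRING) layout.
CREATE LOCAL TEMPORARY COLUMN TABLE #PAL_PARAMS (
    PARAM_NAME   VARCHAR(100),
    INT_VALUE    INTEGER,
    DOUBLE_VALUE DOUBLE,
    STRING_VALUE VARCHAR(100)
);
INSERT INTO #PAL_PARAMS VALUES ('DEPENDENT_VARIABLE', NULL, NULL, 'DEATH');

-- The training call itself goes through the PAL logistic regression procedure,
-- e.g. (output tables omitted; the signature varies by PAL version):
-- CALL _SYS_AFL.PAL_LOGISTIC_REGRESSION(COVID_CASES_MX, #PAL_PARAMS, ...);
```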

Friday, 19 June 2020

Comprehensive metadata analysis of SAP S/4HANA

If you have ever wondered what it takes to power all the business processes in SAP S/4HANA, then this is the blog for you. In this blog, I present a comprehensive metadata analysis of the SAP S/4HANA system and a look at what is under the hood. In S/4HANA, the building blocks that power your intelligent enterprise are the ABAP CDS (Core Data Services) views. The logic embedded in these views enables hybrid transactional and analytical processing and is also the reason you no longer need aggregates. The figure below shows a representation of the SAP S/4HANA metadata in an SAP Analytics Cloud story.
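To get a first feel for the scale of this metadata from the HANA catalog itself, the generated SQL views in the ABAP schema (which include the views behind the ABAP CDS entities) can simply be counted. A minimal sketch; 'SAPABAP1' is a hypothetical schema name, so substitute your system's ABAP schema:

```sql
-- Count the SQL views generated in the ABAP schema of an S/4HANA system.
SELECT COUNT(*) AS VIEW_COUNT
  FROM "SYS"."VIEWS"
 WHERE SCHEMA_NAME = 'SAPABAP1';
```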

Wednesday, 17 June 2020

The Critical Need for SAP ERP High Availability Protection

Meeting Availability SLAs for SAP Can Be Challenging


Meeting high availability Service Level Agreements (SLAs) is essential to ensure that the key components of the IT infrastructure and applications are protected and that services to end users, customers, and vendors can be recovered as rapidly as possible in the event of an outage. Many companies set specific recovery time objectives (RTO) and recovery point objectives (RPO) as part of their formal SLA process.

Monday, 15 June 2020

A step by step guide for creating an OData Service on HANA Calculation Views – XSODATA | XSJS

This blog is a short tutorial in which you will learn how to publish a HANA calculation view as an OData service.

What we will be implementing

1. Create a Simple Calculation View (see the SQL preview sketched after this list)

2. Create an XS Project
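For reference, once the calculation view from step 1 is activated, it can be previewed with plain SQL against the generated column view before it is exposed through XSODATA; the package path and view name below are hypothetical.

```sql
-- XS classic exposes activated calculation views in the _SYS_BIC schema
-- as "<package path>/<view name>".
SELECT TOP 10 *
  FROM "_SYS_BIC"."demo.tutorial/CV_SALES";
```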

Saturday, 13 June 2020

Downsizing a HANA Instance on Azure VM

Description of the activity performed:

This document provides detailed information on how to downsize an existing HANA VM built on Azure to a VM with a lower hardware configuration. While reducing the VM size in Azure is relatively easy, HANA is an in-memory database, so the disks are usually sized in proportion to the VM memory. This document outlines how to right-size both the VM and the disks.
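Before shrinking the VM, it helps to check how much memory the instance actually uses so that the target VM size (and, if needed, global_allocation_limit) can be chosen sensibly. A minimal sketch against the standard M_SERVICE_MEMORY monitoring view (column names as in HANA 2.0):

```sql
-- Current memory footprint per HANA service, in GB.
SELECT HOST,
       SERVICE_NAME,
       ROUND(TOTAL_MEMORY_USED_SIZE     / 1024 / 1024 / 1024, 2) AS USED_GB,
       ROUND(EFFECTIVE_ALLOCATION_LIMIT / 1024 / 1024 / 1024, 2) AS LIMIT_GB
  FROM M_SERVICE_MEMORY
 ORDER BY USED_GB DESC;
```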

Thursday, 11 June 2020

Hands-On Tutorial: Becoming the Chief Data Cook with RStudio and SAP HANA

Data science is quite similar to cooking your favorite meal. While we can usually just go to the local supermarket and pick up our raw ingredients, it is often not that easy for a data scientist. Imagine that before cooking your favorite meal you didn’t know whether the supermarket was open or whether the food was even edible. So before the fun part, the cooking and eating, can start, we need to acquire, organize, and structure our data. In my experience this is one of the crucial parts of a machine learning use case and usually takes most of the time. Often the data does not reside locally in a CSV or Excel file on our laptop but originally lies in a database like SAP HANA.

To work with a database like SAP HANA you usually use the Structured Query Language (SQL), which is over 40 years old. But as a huge R fan I want to stay in my accustomed environment and not switch back and forth. For example, after the first modeling phase I may have to go back into the data preparation phase to engineer new features. I therefore want to be flexible but still use the power of SAP HANA. The R package dbplyr brings both worlds together: it is designed to work with database tables as if they were local data frames in R. The goal of dbplyr is to automatically generate SQL statements for you, focusing on SELECT statements, which means you can continue to use the dplyr functions you are already familiar with.
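To make the translation concrete: a dplyr pipeline that filters and aggregates a remote HANA table is turned by dbplyr into an ordinary SELECT statement, roughly like the one below (table and column names are hypothetical, and the SQL dbplyr actually emits may differ in quoting and structure).

```sql
-- Approximate shape of the SQL generated for a pipeline such as
--   tbl(con, "SALES") %>% filter(YEAR == 2019) %>%
--     group_by(PRODUCT) %>% summarise(REVENUE = sum(AMOUNT))
SELECT "PRODUCT",
       SUM("AMOUNT") AS "REVENUE"
  FROM "SALES"
 WHERE "YEAR" = 2019
 GROUP BY "PRODUCT";
```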

Wednesday, 10 June 2020

Removing outliers using standard deviation in SAP HANA

In this blog post we will learn how to remove outliers from a data set using the standard deviation. We will use a sample data set with product sales for all the years.

Before moving into the topic, we should know what an outlier is and why it matters. An outlier is a value that lies above or below a particular range of values. For example, consider the data set (20, 10, 15, 40, 200, 50): here 200 is the outlier. There are many techniques for removing outliers, but we are going to use the standard deviation technique.

We already know that the standard deviation is the square root of the variance. In HANA, we have the aggregation function STDDEV to calculate it.
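A minimal sketch of the technique in HANA SQL, assuming a hypothetical SALES table with an AMOUNT column: rows more than two standard deviations away from the mean are treated as outliers and filtered out (the cut-off factor can of course be adjusted).

```sql
-- Keep only rows whose AMOUNT lies within mean +/- 2 * standard deviation.
SELECT s.*
  FROM SALES s
 CROSS JOIN ( SELECT AVG(AMOUNT)    AS MEAN_AMT,
                     STDDEV(AMOUNT) AS SD_AMT
                FROM SALES ) t
 WHERE s.AMOUNT BETWEEN t.MEAN_AMT - 2 * t.SD_AMT
                    AND t.MEAN_AMT + 2 * t.SD_AMT;
```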

Monday, 8 June 2020

Iterative Feature of PaPM Allocation function

In this blog, I would like to show how to leverage and implement the iterative feature of the allocation function in SAP Profitability and Performance Management (PaPM) as a highly flexible and easily adaptable solution that fits specific business needs. Using only one PaPM allocation function run through multiple allocation iterations, it is possible to solve common allocation business problems.

How do we achieve this calculation model efficiency and simplicity?

1. Prepare the sender and receiver data for the allocation calculation.
2. Set up the PaPM allocation function.

Friday, 5 June 2020

The Micro Focus product portfolio – and how it can support the journey to SAP S/4HANA

A long lasting partnership


SAP and Micro Focus have combined their strengths for more than 12 years in a steady and solid partnership, each enriching the other's product portfolio.

Whereas SAP is a market leader in enterprise application software (ERP), Micro Focus leads the market in the field of software quality assurance.

Wednesday, 3 June 2020

Hands-On Tutorial: Leverage SAP HANA embedded Machine Learning through an R Shiny App

As a student I focused on the statistics and the modeling behind machine learning algorithms, but in practice we are not done with just our R or Python script. In reality, important aspects such as data quality and model deployment must be addressed. Therefore, I started setting up the R machine learning client for SAP HANA, which is not only an in-memory database but also comes with various machine learning capabilities. In this blog I will show you how you can stay in your accustomed environment and beloved RStudio and leverage the power of SAP HANA, especially through the combination of the RJDBC, hana.ml.r, and shiny packages. In addition, we will create a dynamic web app directly from R to let users interact with our data and analysis. Clearly, our goal in this hands-on tutorial is to give data purpose.

Monday, 1 June 2020

CAP: Consume External Service – Part 2

In my previous blog post, Consume External Service – Part 1, I tested the application using mock and real data. Just by configuring the NorthWind OData service URL in the credentials.url property, I was able to connect to the external service. This approach is only applicable to local development testing. If you try to deploy this to SCP Cloud Foundry, you will encounter an error stating that the approach is not valid for production use.

The reason is that external service consumption in the CDS framework is meant to use the Destination Service in Cloud Foundry. And that is the topic of this blog: deploying the Node.js project to SCP Cloud Foundry using the Destination Service.