Monday, 29 April 2019

LTMC for Master Data Step by Step Process

When implementing an SAP S/4HANA solution, we can migrate master data and business data from SAP systems or non-SAP systems to SAP S/4HANA by using the SAP S/4HANA migration cockpit.

Friday, 26 April 2019

S/4HANA 1709 FPS03 – back on the Mothership again …

Apply S/4HANA 1709 FPS03 …


Since I’m back at SAP SE as Platform Architect for Intelligent Data & Analytics, I also checked my “Innovation Landscape” for updates and new Software Stacks.

Thursday, 25 April 2019

Batch Insert and Update Processing with OData V2

In this blog, we’ll learn how to perform batch insert and update operations with OData version 2, applied to a contact persons list where the user can add, edit and delete a person’s first name and last name. Performing a single batch insert or update alone is straightforward, but for updating and inserting at the same time within one batch, I think this is one of the easiest approaches. Do let me know if you have a better solution.
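To make the batch mechanics concrete, here is a minimal sketch of what an OData V2 $batch payload with a single changeset looks like when it mixes inserts and updates. The `ContactPersons` entity set, the key and the field names are illustrative assumptions, not the blog’s actual service; in a SAPUI5 app you would normally let `sap.ui.model.odata.v2.ODataModel` build this for you via deferred groups and `submitChanges`.

```python
# Build an OData V2 $batch body with one changeset that both
# inserts (POST) and updates (MERGE) entries in a single request.
# "ContactPersons" and the payloads are illustrative assumptions.
import json

def build_batch_body(inserts, updates,
                     batch_boundary="batch_1", cs_boundary="changeset_1"):
    def http_part(method, url, payload):
        return (
            f"--{cs_boundary}\r\n"
            "Content-Type: application/http\r\n"
            "Content-Transfer-Encoding: binary\r\n\r\n"
            f"{method} {url} HTTP/1.1\r\n"
            "Content-Type: application/json\r\n\r\n"
            f"{json.dumps(payload)}\r\n"
        )

    parts = [f"--{batch_boundary}\r\n"
             f"Content-Type: multipart/mixed; boundary={cs_boundary}\r\n\r\n"]
    for payload in inserts:                      # new entries
        parts.append(http_part("POST", "ContactPersons", payload))
    for key, payload in updates:                 # changed entries
        parts.append(http_part("MERGE", f"ContactPersons('{key}')", payload))
    parts.append(f"--{cs_boundary}--\r\n")       # close the changeset
    parts.append(f"--{batch_boundary}--\r\n")    # close the batch
    return "".join(parts)

body = build_batch_body(
    inserts=[{"FirstName": "Ada", "LastName": "Lovelace"}],
    updates=[("42", {"LastName": "Hopper"})],
)
```

The resulting string would be POSTed to the service’s `$batch` endpoint with `Content-Type: multipart/mixed; boundary=batch_1`; because both operations sit in one changeset, the server treats them as a single atomic unit.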

Wednesday, 24 April 2019

Hands-on Tutorial: PAL in HANA for Customer Churn Analysis for Online Retail

Predictive analytics encompasses a variety of statistical techniques from data mining, predictive modelling and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events.

Predictive analytics is an area of statistics that deals with extracting information from data and using it to predict trends and behavior patterns.
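In HANA these techniques are delivered as PAL procedures called from SQLScript; as a language-neutral illustration of what a churn classifier actually does, here is a tiny logistic regression trained by gradient descent on made-up data (tenure in months and open support tickets — both features and all numbers are invented for the sketch, not taken from the tutorial).

```python
import math

# Toy churn data (invented): [tenure_months, support_tickets] -> churned?
X = [[2, 8], [3, 9], [1, 7], [24, 1], [30, 0], [36, 2]]
y = [1, 1, 1, 0, 0, 0]

# Scale features to roughly [0, 1] so gradient descent behaves.
scaled = [[t / 36.0, s / 10.0] for t, s in X]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Batch gradient descent on the logistic loss.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(5000):
    gw, gb = [0.0, 0.0], 0.0
    for xi, yi in zip(scaled, y):
        err = sigmoid(w[0] * xi[0] + w[1] * xi[1] + b) - yi
        gw[0] += err * xi[0]
        gw[1] += err * xi[1]
        gb += err
    n = len(y)
    w = [w[0] - lr * gw[0] / n, w[1] - lr * gw[1] / n]
    b -= lr * gb / n

def churn_probability(tenure, tickets):
    return sigmoid(w[0] * tenure / 36.0 + w[1] * tickets / 10.0 + b)
```

A short-tenure, high-ticket customer should score well above a long-tenure, low-ticket one; PAL does the equivalent fitting in-database, at scale, without the data leaving HANA.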

Monday, 22 April 2019

How to pivot/unpivot in SAP HANA

Introduction


Currently there is no built-in pivot/unpivot function in HANA. In this blog you will find a workaround how to implement this in SQLScript.
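The usual SQLScript workaround combines `CASE` expressions with aggregation (for pivot) and a `UNION ALL` of one select per measure column (for unpivot). The same idea in a minimal, illustrative Python form — column names and data are invented:

```python
from collections import defaultdict

# Long format: (year, quarter, amount) — the shape the table starts in.
long_rows = [
    (2018, "Q1", 10), (2018, "Q2", 20), (2018, "Q3", 30), (2018, "Q4", 40),
    (2019, "Q1", 15), (2019, "Q2", 25),
]

def pivot(rows):
    """Mimics SELECT year, SUM(CASE WHEN quarter='Q1' THEN amount END) ... GROUP BY year."""
    out = defaultdict(lambda: {"Q1": 0, "Q2": 0, "Q3": 0, "Q4": 0})
    for year, quarter, amount in rows:
        out[year][quarter] += amount      # the SUM(CASE WHEN ...) per target column
    return dict(out)

def unpivot(wide):
    """Mimics a UNION ALL of one SELECT per measure column."""
    return [(year, q, v) for year, cols in wide.items()
            for q, v in cols.items() if v]

wide = pivot(long_rows)
```

In SQLScript the `pivot` half becomes one grouped select with a `CASE` per target column, which is why the workaround requires the set of pivot values to be known in advance.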

Friday, 19 April 2019

TADIR object types and object descriptions via SQL

Introduction


In our organisation we were upgrading SAP and needed a quick way to associate various objects with their object type and description; however, these were not readily available via SQL, so a solution was required. The primary table for objects was TADIR, but this only contained a code for the object type, so the description had to be determined from elsewhere. Note: this was required for some quick analysis; for ABAP developers there are standard methods for obtaining descriptions.

Wednesday, 17 April 2019

Developing with HANA Deployment Infrastructure (HDI) without XSA/CF or Web IDE

While I no longer work within SAP HANA Product Management, in my new role in the SAP Cloud Business Group I am still developing on HANA and assisting teams that are doing so as well. Therefore I wanted to do some research on “skinny” local development options for SAP HANA. The goal was to use HANA Express with as small a resource footprint as possible. This meant starting with the server-only edition of HANA Express, which includes neither XSA nor Web IDE for SAP HANA. Yet I still wanted to be able to create database artifacts via the HANA Deployment Infrastructure (HDI) design-time approach. So that’s the challenge of this blog entry – “how low can I go”?

Monday, 15 April 2019

Handling Non-Cumulative Measures in HANA Calculation Views with Multiple Cartesian Transformation and Single Conversion Matrix

Introduction


The COPA, forecast and many other S/4HANA, ECC and legacy tables contain hundreds of measures in their record structures. This format is not suitable for efficient processing in BI front-end tools, e.g. WebI reports.
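The Cartesian-transformation idea can be sketched outside a calculation view: cross-join each wide record with a small conversion matrix (one row per measure to rotate out), then select the matching measure value per pair. All table, column and measure names below are invented for illustration:

```python
# Wide records: key columns plus many measure columns (only three here).
wide_rows = [
    {"company": "C1", "period": "2019-04", "revenue": 100, "cost": 60, "margin": 40},
    {"company": "C2", "period": "2019-04", "revenue": 200, "cost": 150, "margin": 50},
]

# Conversion matrix: one row per measure to "rotate out" into a long row.
conversion_matrix = ["revenue", "cost", "margin"]

def cartesian_unpivot(rows, matrix):
    # Cross join rows x matrix, then pick the matching measure per pair —
    # the same effect as a join node plus a calculated column in a calc view.
    return [
        {"company": r["company"], "period": r["period"],
         "measure": m, "value": r[m]}
        for r in rows for m in matrix
    ]

long_rows = cartesian_unpivot(wide_rows, conversion_matrix)
```

The output has rows × measures records in a uniform (keys, measure, value) shape, which is exactly the format BI front ends consume efficiently.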

Friday, 12 April 2019

What’s New in HANA 2.0 SPS04 – SQL

I would like to provide deeper insight, from a developer’s perspective, into what is newly available in the SQL and SQLScript language features. One of the strategic focuses is becoming more general-purpose – supporting various use cases and applications by extending coverage of SQL standard functionality and making development for SAP HANA easier. With SPS04, there is a long list of new features to support this.

Wednesday, 10 April 2019

SDI SDQ Geocoding & Address Cleansing

Have you tried using HANA Smart Data Quality (SDQ) and found that the transforms aren’t working? Well, this blog should help: there are country-specific files required for both geocoding (latitude, longitude) and address cleansing.

Monday, 8 April 2019

SAP HANA, SAP Analytics Cloud, and Brexit: The Automation

In the last article we discussed how we can easily get big data from the internet, convert it to the required format, massage it a bit, and report on it in SAP Analytics Cloud via SAP HANA. That worked pretty well but lacked any sort of proper automation.

In this article we will create an automated flow which can be used to acquire data from the same Petitions website, convert it to the required format and load it to our SAP HANA system which in turn is connected “Live” to SAP Analytics Cloud. Therefore, the reported data in SAP Analytics Cloud would be as recent as possible, depending on our data acquisition flow settings.
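As a sketch of the acquisition step, here is a minimal Python flow that fetches a petition’s JSON and flattens the per-country signature counts into rows ready for loading. The URL pattern, the placeholder petition id and the exact JSON structure are assumptions about the public Petitions API and should be verified against the live service:

```python
import json
from urllib.request import urlopen

PETITION_ID = "123456"  # placeholder — substitute the real petition id
PETITION_URL = f"https://petition.parliament.uk/petitions/{PETITION_ID}.json"

def fetch_petition(url=PETITION_URL):
    """Network step — in the automated flow this runs on a schedule (e.g. cron)."""
    with urlopen(url) as resp:
        return json.load(resp)

def flatten_signatures(payload):
    """Turn the assumed nested signatures_by_country list into flat rows."""
    attrs = payload["data"]["attributes"]
    return [(c["code"], c["name"], c["signature_count"])
            for c in attrs["signatures_by_country"]]
```

Each flat `(code, name, count)` row can then be inserted into a HANA staging table (for example via the hdbcli Python client), and scheduling the script supplies the “automation”, so the live-connected SAP Analytics Cloud story stays as fresh as the schedule allows.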

Friday, 5 April 2019

XSA Accessing Remote Sources & External Objects (Schemas, etc)

When developing with XSA and the WebIDE you will likely need to access existing database objects, schemas, tables, remote sources or other objects from an HDI Container. This configuration has been captured before by Christophe Gilde, but the process has evolved with the latest feature release of the WebIDE (4.3.63 for HANA 2 SPS3).

Thursday, 4 April 2019

SAP HANA Startup

Have you ever wondered what SAP HANA is doing from the moment you trigger the start or restart until it’s finally available to be used by your application layer?

Or perhaps you have experienced a startup that took way longer than you had planned for, and you are trying to understand what the system was doing all that time so you can avoid it, improve it or plan for it next time?

Monday, 1 April 2019

The tale of SAP HANA, SAP Analytics Cloud, and Brexit

In this blog I wanted to show you an end-to-end example of taking unstructured JSON data, loading it into SAP HANA, enriching it with geospatial attributes and exposing it to SAP Analytics Cloud for further analysis.

The problem with most tutorials is that they usually focus on randomly generated abstract data (like SFLIGHT or EPM Model data), which for some people doesn’t really mean much, so I thought a real-life example analysing real, up-to-date data would be very beneficial for everyone.
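To give a flavour of the geospatial-enrichment step, here is a sketch that turns country rows into WKT point strings, a format HANA can ingest as geometries (e.g. via ST_GeomFromText). The coordinates are rough illustrative centroids I made up for the sketch, not authoritative data:

```python
# Rough, illustrative country centroids (lon, lat) — not authoritative.
CENTROIDS = {
    "GB": (-2.0, 54.0),
    "FR": (2.2, 46.2),
}

def to_wkt_point(country_code):
    """Build a WKT POINT string (longitude first, as WKT expects)."""
    lon, lat = CENTROIDS[country_code]
    return f"POINT({lon} {lat})"

def enrich(rows):
    # rows of (code, name, count) -> same rows with a WKT geometry column added
    return [(code, name, count, to_wkt_point(code))
            for code, name, count in rows if code in CENTROIDS]
```

Once the WKT column is loaded, HANA’s spatial functions can convert it to a geometry, and SAP Analytics Cloud can then plot the counts on a map.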