Traditional databases are based on designs developed decades ago, when both the technical conditions and the usage requirements differed considerably from today’s expectations.
Traditional databases have been enhanced over the years, but for compatibility reasons, their ability to adapt to new challenges is limited. From the perspective of business application software, traditional databases impose critical limitations and can complicate or even prevent the simplification, acceleration, and integration of business processes. The following characteristics of traditional databases can be obstructive, for example, when you redesign your business processes:
Users of database solutions have to decide whether they want to analyze data (online analytical processing, or OLAP) or update data (online transaction processing, or OLTP). However, in many situations, a combination of both views makes sense, for example, for forecasts and simulations or for making decisions on the basis of real-time data (see the sketch following these examples).
Traditional business applications have to deal with various restrictions that complicate things for users. One example is locking: only one user can work on a data set at a time, which slows down processes. Another factor is the delay resulting from internal data processing: updates carried out by other users, or even by the same user, are sometimes written to the relevant system tables with considerable delay.
In traditional applications, raw data is usually first prepared internally and consolidated into aggregates. These aggregates follow each application’s individual logic. Any other application that wants to use the data therefore faces two problems: the time delay already mentioned, and the need for semantic knowledge about the providing application’s aggregates. Consequently, the data first needs to be translated into the data model of the consuming application, which requires interfaces that must be available or developed. Integration on the basis of such an architecture thus has two disadvantages: cost (development and maintenance of the interfaces) and a lack of real-time access.
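To make the combined OLTP/OLAP view from the first point concrete, consider the following Open SQL sketch. It is only an illustration under assumed names: the table zsales_orders and its fields are hypothetical, not actual SAP S/4HANA objects. The point is that an analytical aggregation runs directly on live transactional data, so the result reflects orders posted only moments ago.

```abap
* Illustrative sketch only: the table zsales_orders and its fields
* are hypothetical, not actual SAP S/4HANA objects.
DATA(lv_from_date) = sy-datum.          "consider today's postings

SELECT region,
       SUM( amount ) AS total_amount,   "OLAP-style aggregation...
       COUNT( * )    AS order_count
  FROM zsales_orders                    "...directly on OLTP data
  WHERE order_date >= @lv_from_date
  GROUP BY region
  INTO TABLE @DATA(lt_totals).
```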
Current database architectures use in-memory databases, which enable new business processes across all lines of business. The following sections describe the characteristics of SAP HANA and explain why no other in-memory database is currently compatible with SAP S/4HANA.
At the turn of the millennium, two basic changes arose in hardware development: (1) multicore processor architectures emerged and, with them, the option of substantial parallelization, and (2) main memory evolved from a relatively expensive and limited resource into one that is widely available.
Due to earlier memory restrictions regarding availability (i.e., price and addressability), software architectures mainly stored data on the hard disk and kept only some of it in memory. Data access in traditional databases was therefore limited by hard disk processing speeds. In in-memory databases, the hard disk is used only to store, archive, and restore data; the data itself is kept permanently in main memory.
In contrast to other in-memory databases, SAP HANA has further unique characteristics: it is not only an ideal generic database but has also been optimized for business applications, thanks to SAP’s holistic experience with this kind of application.
Another benefit of the SAP HANA database is that it has been optimized so that the main business data operations can be executed with high performance. For this purpose, SAP HANA uses multicore CPUs for parallelization. In addition, its algorithms are optimized using assumptions about which types of updates, inserts, and deletions are carried out frequently and should consequently be the focus for high performance.
SAP S/4HANA is designed to fully exploit the benefits of SAP HANA described in the previous section. With this focus on SAP HANA, the following consequences arise for the data models in SAP S/4HANA:

- Aggregates are omitted; applications access the original data directly.
- The storage of the original data is redesigned and optimized for SAP HANA.
- Data-intensive procedures are pushed down to the database (code pushdown).
The following sections discuss each of these consequences in greater detail.
To compensate for the slow speed of traditional databases, data was previously consolidated in aggregate tables, which the applications then accessed to read the data. However, these aggregates had the following disadvantages: due to the consolidation, entries in the aggregate tables always lagged behind entries in the original tables, and this delay increases as the volume of data to be processed grows.
Another disadvantage is that the aggregation relies on assumptions about the content as a prerequisite for consolidation. As a result, processing the data from a different perspective is usually not possible without reworking the aggregation, which can be a rather complicated task; instead, you have to fall back on the original data, which reduces processing speed.
The figure below shows an example of a target architecture with a simplified data model for sales documents after migrating from SAP ERP to SAP S/4HANA.
The usual aggregate tables were omitted in this case. All new SAP HANA-optimized applications directly access the original data.
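As a sketch of what such direct access can look like, the following Open SQL statement aggregates the original sales document items (table VBAP) on the fly instead of reading a precalculated totals table. Treat it as a simplified illustration, not actual SAP S/4HANA application code; among other things, it ignores currency handling.

```abap
* Simplified illustration: totals per material are computed on the
* fly from the original sales document items in VBAP instead of
* being read from an aggregate table. Currency handling is ignored.
SELECT matnr,                        "material
       SUM( kwmeng ) AS total_qty,   "ordered quantity
       SUM( netwr )  AS total_value  "net value
  FROM vbap
  GROUP BY matnr
  INTO TABLE @DATA(lt_material_totals).
```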
Note that the aggregates continue to exist in the new data model in the sense that the database can emulate the former aggregate tables in real time. For this purpose, SAP S/4HANA provides predefined database views that simulate the aggregates in real time, so existing applications that have not been optimized for SAP HANA can still be deployed smoothly.
As a result, existing custom developments, such as reports, retain read access to the data, and you can usually still use these reports without adapting them to the new data model.
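To illustrate the effect (using a hypothetical former aggregate table name, zsales_totals, not an actual SAP object): a legacy read like the following can remain unchanged, because the old table name now resolves to a compatibility view that derives the totals from the original documents at the moment of the query.

```abap
* Unchanged legacy read: zsales_totals (a hypothetical name) is now
* served by a compatibility view, so the totals are computed from
* the original documents in real time.
SELECT kunnr,               "customer
       total_value
  FROM zsales_totals
  WHERE gjahr = '2023'      "fiscal year
  INTO TABLE @DATA(lt_legacy_totals).
```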
In addition to omitting aggregates, the example shown above illustrates that the storage architecture for the original data has also been partly optimized. In this context, keep in mind that the data models in SAP ERP were developed over several decades. On the one hand, these data models had to be compatible with all supported databases; on the other hand, radical changes would have led to problems with SAP ERP enhancement package (EHP) upgrades, which were promised, and expected, to be easy to apply.
With the focus on the SAP HANA database and the clear differentiation from existing products, SAP S/4HANA now also allows the data architecture to be redesigned in general. In this process, data storage is further optimized for SAP HANA, for example, to improve the compression rate or the overall performance.
Another innovation in SAP S/4HANA is that procedures can be transferred directly to the database. In the traditional SAP Business Suite, the ABAP kernel decoupled the application from the database to ensure compatibility with any type of database. Consequently, the raw data first had to be loaded from the database and then processed in the application to carry out complex, data-intensive selections and calculations.
In SAP S/4HANA, some data-intensive operations are pushed down to the database itself, a technique known as code pushdown, which accelerates the entire process. Code pushdown can be implemented either in ABAP using Open SQL or via SAP HANA content created in SAP HANA Studio.
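The following sketch contrasts the two styles for a simple total, using a hypothetical table zpayments (not an actual SAP object): the traditional variant transfers every row to the application server and sums in ABAP, while the pushed-down variant lets the database compute the sum and returns a single value.

```abap
* Traditional style: every row travels to the application server,
* and the total is computed in an ABAP loop.
DATA lv_total_old TYPE p LENGTH 15 DECIMALS 2.

SELECT amount
  FROM zpayments              "hypothetical table
  INTO TABLE @DATA(lt_rows).

LOOP AT lt_rows INTO DATA(ls_row).
  lv_total_old = lv_total_old + ls_row-amount.
ENDLOOP.

* Code pushdown: the database computes the sum; only the single
* result value is transferred to the application server.
SELECT SUM( amount )
  FROM zpayments
  INTO @DATA(lv_total_new).
```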
How does this affect existing custom code enhancements? Because existing Open SQL data access still works, you can continue to use existing enhancements and only have to adapt them in exceptional cases. However, this code does not exploit the full potential of SAP S/4HANA. Thus, when you plan to migrate to SAP S/4HANA, you should check which custom code should be rewritten and optimized. Because you can address custom code at any time after migrating to SAP S/4HANA, you’ll enjoy greater flexibility in planning.
Note that existing sidecar scenarios will most likely need to be revised due to the changes in the data models. In these scenarios, data from one or more separate systems is replicated into one central system, and this replication usually writes directly to the database tables. Because these tables have changed in SAP S/4HANA, the replication rules must be adapted accordingly and new mappings must be created.
How do these changes to the data model affect planning a migration? The good news is that you’ll only have to take a small portion of these data model changes into account because SAP S/4HANA provides database views with all the necessary compatibility (see “Omission of Aggregates” above). However, when planning your migration project, you’ll have to bring your existing data into the new data models.
Depending on the technical migration scenario selected, different technical procedures are used to convert the data, usually execution of program after import (XPRA) or execution of class after import (XCLA). The XCLA procedure was introduced with SAP S/4HANA 1709 to optimize data conversions and minimize downtime.
Both procedures take place after the database schema has been adapted. In individual cases, certain conversions are carried out in dedicated phases of the Software Update Manager (SUM). Regardless of the scenario you choose, the technical data conversion will take some time.
How much time this conversion needs depends mainly on the volume of the data to be converted. For this reason, you should check which existing data can be archived before migrating to SAP S/4HANA; this reduces the volume of data to be converted and thus minimizes the conversion runtime. Thanks to SAP S/4HANA’s built-in compatibility mode, applications contain read modules that still allow you to read the archived data.
If you want to implement a new SAP S/4HANA system or convert an existing SAP ERP system to SAP S/4HANA, you’ll have to consider the following issues:
When planning the hardware requirements (sizing), different conditions and rules apply than for systems based on traditional databases. The main reason for this difference is that SAP HANA stores data in RAM, which changes the sizing calculation. SAP HANA’s data architecture and embedded compression algorithms routinely achieve compression factors of 3 to 5 on average.
As a rule of thumb, SAP recommends planning twice the compressed data volume as main memory. For example, if 1 TB of source data compresses by a factor of 4 to 250 GB, you would plan for roughly 500 GB of RAM. Because sizing strongly depends on your specific conditions (e.g., the compression rate you can actually achieve), SAP recommends running a sizing report in your existing SAP ERP systems. SAP provides detailed information on sizing at https://service.sap.com/sizing.
In summary, the SAP HANA database is much more closely linked to the implementation of application functions than databases were in previous years. Only this close link allows applications to benefit sufficiently from the advantages of the database. This close relationship is probably also why SAP S/4HANA is currently only available for SAP HANA: the in-memory databases of third-party providers sometimes follow different approaches and would require alternative implementations.
Editor’s note: This post has been adapted from a section of the book Migrating to SAP S/4HANA by Frank Densborn, Frank Finkbohner, Martina Höft, Boris Rubarth, Petra Klöß, and Kim Mathäß.