
SAP Solutions
CIO Guide
Securing High Performance and Scalability in Customer System Landscapes
© 2013 SAP AG or an SAP affiliate company. All rights reserved.
Table of Contents
Executive Summary
Introduction
Performance and Scalability Indicators
IT Landscape Planning and Implementation
  The Starting Point – A High-Availability System
  Network and WAN Performance
  Enterprise Mobility
  CPU Throughput Capacity and SCU Performance
  Sizing
  Virtualization
  In-Memory Database and Data Management Platform
  Load Tests
  Rollout and Support
System Monitoring and Analysis
  SAP Solution Manager
Development Projects
  Performance Guidelines
  Linear Resource Consumption
  Linearity Tests
  Performance Measurement and Analysis Tools
  Best Practices for Testing
Performance Optimization
Outlook
Find Out More
  Support Services from SAP
About the Authors
Dr. Ulrich Marquard is the senior vice president of Products & Innovation, SAP HANA Platform, Performance & Scalability at SAP AG.
Detlef Thoms is a product expert in Products & Innovation, SAP HANA Platform, Performance & Scalability at SAP AG.
LEGAL DISCLAIMER
This document provides guidance on performance and scalability characteristics for SAP® software landscapes. It contains
current and intended strategies, developments, and functionalities of SAP solutions, applications, and technologies, but it does not
bind SAP to any particular course of business, product strategy, or development. Its content is subject to change without notice.
This document is not subject to your license agreement or any other service or subscription agreement with SAP.
SAP assumes no responsibility for errors or omissions in this document. SAP does not warrant the accuracy or completeness of
the information, text, graphics, links, or other items contained within this material. This document is provided without a warranty
of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, or noninfringement.
SAP shall have no liability for damages of any kind, including without limitation direct, special, indirect, or consequential damages
that may result from the use of these materials. This limitation shall not apply in cases of intent or gross negligence. The statutory
liability for personal injury and defective products is not affected. SAP has no control over the information that you may access
through the use of links contained in these materials and does not endorse your use of third-party Web pages nor provide any
warranty whatsoever relating to third-party Web pages or their content.
All forward-looking statements are subject to various risks and uncertainties that could cause actual results to differ materially
from expectations. Readers are cautioned not to place undue reliance on these forward-looking statements, which reflect
knowledge and intention up to the date of publication. They should not be relied upon in making purchasing decisions.
© 2013 SAP AG or an SAP affiliate company. All rights reserved.
Executive Summary
Many IT organizations have evolved from an
internal “back-office” service provider into a
strategic partner to the business, providing
innovative solutions in customer and partner
interactions and product or service development. As such, managing IT has become
much more challenging. Especially important
for CIOs and enterprise architects is securing
high performance and scalability for such
complex IT landscapes. This paper provides
guidance on that topic.
Figure 1: Performance and Scalability Aspects Covered in This Paper
[Figure: high availability, network, mobile, SCU (single computing unit) performance, sizing, virtualization, in-memory database/platform, load tests, and rollout/support, mapped across Web servers, application servers, and the database, together with monitoring/analysis and development]
Instead of one single system running an SAP® ERP application,
there are now multiple systems integrated by the flow of business processes, spanning different business domains, and
involving different SAP Business Suite applications, third-party
products, and middleware. Data and process consistency is
not provided by one integrated database design but achieved
through messaging, data, and UI unification. It is much more
challenging to keep track of the entire integrated IT landscape,
particularly if an incident or problem occurs.
Issues with the performance of business-critical applications
can cause deterioration of an organization’s business performance. Key business processes supported by slow or not readily available applications can cause revenue loss, decline
in customer satisfaction, lowered employee productivity, and
jeopardized brand reputation.
In addition to this, organizations are increasingly looking for
new ways to transform their business offerings and improve
their performance by developing new and unique approaches
to conduct business.
Directed specifically to CIOs and enterprise architects, this
paper provides guidance on performance and scalability
characteristics fundamental to facing these challenges within
SAP software landscapes. It addresses four main areas:
•• IT landscape planning and implementation
Pitfalls for network and WAN performance as well as the
general trend for CPU throughput and SCU performance1
are discussed in this section. It introduces the SAP HANA®
platform, an innovative in-memory database and data management platform. It describes the sizing approach used
by SAP to enable customers to obtain reliable hardware
estimations in order to plan their budgets more accurately.
The impact of virtualization on sizing is mentioned as well.
•• System monitoring and analysis
This section discusses best practices for system monitoring
and performance analyses using the SAP Solution Manager
application management solution to administrate, monitor,
and analyze the software landscape and solutions.
•• Development projects
Support for best practices and recommendations are
discussed to help you develop performance-optimized
applications yourself with:
–– Optimized programming
–– Performance measurement and analysis tools
–– Testing applications
•• Performance optimization
This section focuses on the possible levels at which performance optimization can be implemented. It moves backwards through the software lifecycle, from the operation
and maintenance phases to the requirement specification.
With these four areas, the paper covers the major phases of a
software system lifecycle: plan, implement and build, run, and
optimize. This paper closes by highlighting some of the recent
technology trends.
FOOTNOTE
1. Single computing unit (SCU) performance: This refers to the processing power of a single computing unit within a system. SCUs could include a single thread within a multithreaded CPU, a core within a multicore CPU, or any other individual unit that makes up a CPU.
DEMANDS BUSINESSES MAKE ON SOFTWARE TODAY
Introduction
In today’s economy a wide variety of business scenarios make
many different demands on the performance of software.
Typical examples of common business scenarios include:
•• For a marketing campaign, a telecommunications company
sends out 3 million SMSs over a period of 10 hours. The
SMSs trigger 3 million activities in the company’s customer
relationship management (CRM) software.
•• Every month a company must process payroll for its
250,000 employees within 2.5 hours.
•• A service provider is responsible for the maintenance of
airplanes, which have bills of materials containing several
hundred thousand entries. These bills of materials are
accessed and changed as maintenance work is performed
on an airplane.
•• Pharmaceutical companies have to retain their data in their
online systems over several years. As a result their databases
can reach sizes of several terabytes and have tables with over
100 gigabytes of data.
•• In Brazil, in response to every delivery resulting from a sales
order, a delivery note and a copy of it for the tax authorities –
called a nota fiscal – must be produced within a very tight
time window.
•• Thousands of users of an Internet shop browse through the
items of the shop at any one time.
•• Users of search engines now expect search results to appear
as they type in the search term.
•• Users of news communities expect immediate distribution
of their messages.
Each scenario highlights a different aspect of what is perceived
as good performance. In addition, many companies today use
cloud and mobile applications, which extend the IT landscape
beyond their firewalls to enable collaboration with subsidiaries,
partners, suppliers, and customers. These trends increase the
complexity and heterogeneity of IT landscapes.
Not only does SAP support its customers in these areas by
offering a portfolio of products that includes cloud and mobile
solutions, but SAP also emphasizes the integration of these
new offerings into existing landscapes and the orchestration
of hybrid landscapes to support business processes end to
end. The challenge of orchestration is to fit applications and
technology components together so they behave as a single
solution. Performance and scalability are fundamental
characteristics of any solution that meets this challenge.
Performance can be considered both from a system point of
view and a user point of view. While system administrators are
interested in achieving required system throughput within a
given IT budget, end users demand a reasonable response time
when interacting with software systems. Acceptable response
times are related to the content of the business process. These
challenges are relevant for custom application development
projects as well.
This paper discusses best practices and recommendations
specific to different performance aspects that can aid you in
setting up, administrating, and monitoring your IT landscape.
They can also help you develop performance-optimized
applications.
FUNDAMENTAL CHARACTERISTICS
Performance and Scalability Indicators
The examples of common business scenarios from actual customers listed in the introduction show that the most important indicators for performance are response time and data throughput:
•• Good end-to-end response times are especially important
when users work interactively. Achieving subsecond average
end-to-end response times is a target primarily for interactive applications that process business data of normal
volumes. In the IT industry, subsecond response times are
standard and in line with studies on perceived performance
conducted by our usability team (see Figure 2).
•• A high throughput is generally more relevant for background
processes.
Companies expect a system to behave in a predictable
manner when throughput increases. This means that a system must always run optimally even in high-load situations
in which several million document line items have to be processed per hour.
•• The precondition for consistently high throughput is software
and hardware that are scalable.
A scalable software system can handle increasing amounts
of work predictably and should be easily extendible to process more tasks. For example, if the load increases by a
factor of 10, a well-scaling software system would require at
most 10 times the resources, preferably less. A poorly scaling
system, in contrast, would require three times the resources
to deal with twice the load.
•• Only when an IT system is scalable is it possible to perform
adequate hardware sizing. The purpose of hardware sizing is
to determine the hardware resources necessary to process
performance-critical business processes, thus preventing
hardware resources from becoming bottlenecks.
Performance refers to the total effectiveness of a computer
system, including throughput, individual response time, and
availability. Programming for good performance means making
reasonable use of critical resources, keeping response time
at a minimum, taking into consideration aspects of network
communication, and producing software that is scalable.
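To make the notion of near-linear scaling concrete, the following Python sketch compares measured resource consumption at two load levels against an ideal linear projection. It is a simplified illustration with invented sample figures, not an SAP measurement procedure.

```python
# Simplified linearity check: does resource consumption grow (at most)
# proportionally with the load? All figures below are invented examples.

def scaling_factor(load_ratio: float, resource_ratio: float) -> float:
    """Ratio of resource growth to load growth; values near 1.0 (or below)
    indicate the predictable, near-linear behavior described above."""
    return resource_ratio / load_ratio

# Hypothetical measurements: (document line items per hour, CPU seconds used)
baseline = (100_000, 400.0)
high_load = (1_000_000, 4_300.0)

load_ratio = high_load[0] / baseline[0]        # 10x more work
resource_ratio = high_load[1] / baseline[1]    # 10.75x more CPU

print(f"load x{load_ratio:.1f}, resources x{resource_ratio:.2f}, "
      f"scaling factor {scaling_factor(load_ratio, resource_ratio):.2f}")
# A factor of 1.5, for instance, would mean three times the resources
# for twice the load – the poorly scaling case mentioned above.
```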
Figure 2: Perceived Performance
[Figure: user reactions at end-to-end response times of 0.1, 1, 3, 10, and 15 seconds – from effects on input elements feeling smooth, to the user’s focus starting to wander, the user becoming annoyed, the user losing focus and turning to other tasks, and finally the engaged user-system dialog breaking down]
Scalability, in most general terms, means the degree to which
a business scenario, component, or system can be expanded
(or reduced) in size, volume, or number of users served and
still continue to function properly and predictably. In other
words, scalability refers to the predictable resource consumption of a software application under different system loads
(increasing multiuser or parallel load) while keeping response
time within a reasonable range.
•• The first aspect of scalability is linearity with the number of
business objects.
•• The second aspect of scalability is concurrency (processing
with parallel jobs or concurrent users).
Larger loads can be balanced with more hardware without the
response time getting worse. A distinction is commonly made
between:
•• Scaling up – This involves replacing an existing server with a more powerful one, for example doubling throughput by doubling processing power.
•• Scaling out – This adds servers to a set of identical servers
that process user requests in parallel, thus increasing
processing nodes to increase throughput.
Scalability is a mandatory prerequisite for sizing, and sizing
is an important parameter for IT landscape planning and
implementation. It will be discussed in the next section.
SYSTEM LANDSCAPE LAYOUT AS PART OF LANDSCAPE DESIGN AND ARCHITECTURE
IT Landscape Planning and Implementation
After determining the processes you require and the software packages (stand-alone engines or clients, for example)
required for those processes, you determine the layout of your
system landscape as part of the landscape design and architecture. That is, you decide how many servers you require and
how you want to use each of these servers.
For this step, you have to consider many aspects and landscape
rules to decide if you want to bundle functions inside a server
or distribute them to different servers. For example, dependencies between usage types, interoperability of hubs, maintenance
and operation of your IT landscape, and security play an important role in determining a landscape layout that fits your
individual requirements.
This section gives you valuable insight into important characteristics of performance and scalability as you plan and implement your IT landscape.
Figure 3: Topic Overview – A High-Availability System
[Figure: the performance and scalability aspects from Figure 1, with high availability highlighted]
THE STARTING POINT – A HIGH-AVAILABILITY SYSTEM
Business continuity is an essential element of any modern,
global enterprise running a complex, worldwide organization.
Business continuity is commonly defined as the act of maintaining continual operations while safeguarding vital organizational
data. Central concepts revolve around the high availability of
business and operational systems as well as disaster planning.
A high-availability concept must be discussed from a business
perspective first. It is impossible to design a system without a
definite understanding of the availability requirements. High-availability requirements are, in turn, specific to each individual
enterprise.
A service-level agreement specifies the level of service to be
provided to the user by the IT infrastructure. Accordingly, it defines the requirements of the system, such as the time frame
within which normal system functioning is to be resumed after
downtimes. A service-level agreement may be drawn up between
the domain users of a company and their internal IT team or
between a client company and a vendor providing hosted
services. The demands on the application availability must be
clearly described in order to come to an agreement on the level
of service that can be provided.
High availability is not the core subject of this paper, but it
is often mentioned when performance is discussed. From a
performance perspective, load tests should cover aspects of
high availability, such as tests for failover scenarios.
Another aspect that needs to be considered is “sizing.” Standard
sizing does not usually consider high-availability concepts, as
high availability is based on individual solutions. For that reason,
appropriate sizing must be taken into account when the IT
landscape is being planned.
Figure 4: Topic Overview – Network Performance
[Figure: the performance and scalability aspects from Figure 1, with network performance highlighted]
NETWORK AND WAN PERFORMANCE
Distributed landscapes today result in an increasing number of
users connected to a central system from dispersed sites with
varying degrees of available network bandwidth and latency.
User access takes place through a local area network (LAN)
and a wide area network (WAN). As a basis for the successful
planning and productive operation of a high-performing network
infrastructure, sizing of network bandwidth plays an important
role. Its objective is to avoid a performance bottleneck caused
by insufficient network bandwidth and ensure reasonable
network response times, but network bandwidth sizing alone
cannot guarantee specific network response times.
Network performance is defined by two network key performance
indicators (KPIs):
•• Bandwidth
•• Latency
Network bandwidth is a resource shared by many users. If the
available bandwidth is exhausted, the network becomes a
bottleneck for all network traffic. Interactive applications suffer
the most from overloaded networks. Due to users’ unpredictable
behavior, concurrent requests from multiple users can interfere with each other. This results in varying network response
times, especially if you have many users on a relatively slow
network connection. Other applications that compete for available bandwidth may also influence response time. Keep these
points in mind when you sum up all traffic as you plan your
network bandwidth.
Latency time depends on the number of network hops (nodes)
and the propagation time required for a signal to get from A to
B, between and on these nodes. Today, network latency is typically the most significant contributor to poor response times.
Even in a metropolitan area network (MAN),2 the database
and the application server should run in the same data center
(physical location).
The following table gives you an overview of typical bandwidths
and latencies in different network environments.
Typical Bandwidths and Latencies in Network Environments

Media                           Bandwidth (kilobits per second)   Round-trip latency (milliseconds)
100-megabit LAN                 100,000                           <10
56K modem                       56                                250
Terrestrial WAN (frame relay)   128–2,048                         250
Satellite                       64–2,048                          600
Considered just on the basis of bandwidth and latency, there
is a difference in network quality between an intranet and the
(public) Internet. The difference is less technical than organizational in nature. An intranet is wholly owned and controlled by
a company or organization, while your Internet service provider
(ISP) has complete control over just your access area (within
a radius of some miles) and limited influence over end-to-end
quality. On-premise applications typically run in an intranet,
while the Internet is more relevant for mobile and on-demand
applications (in the cloud).
FOOTNOTE
2. A metropolitan area network (MAN) is a computer network that connects computers, devices, or networks that are geographically separated but located within the same metropolitan area.
According to the two network KPIs considered here, latency
and bandwidth, an application should trigger a minimal number of sequential round trips and transfer only necessary data
to the front end. The conclusion is obvious: the more round
trips, the worse the application’s response time.
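As a rough illustration of why sequential round trips dominate WAN response times, the following Python sketch estimates the network share of a single user interaction step from the number of round trips, the transferred data volume, and assumed WAN figures. The latency and bandwidth values are illustrative assumptions, not measurements of any specific SAP software.

```python
# Rough front-end network estimate for one user interaction step over a WAN.
# Assumptions: 250 ms round-trip latency and 512 kbps effectively available
# to this user – both invented for illustration.

def network_time_ms(round_trips: int, payload_kb: float,
                    rtt_ms: float = 250.0, bandwidth_kbps: float = 512.0) -> float:
    """Latency cost of sequential round trips plus transfer time of the payload."""
    latency_ms = round_trips * rtt_ms
    transfer_ms = payload_kb * 8 / bandwidth_kbps * 1000  # kilobits / (kbit/s)
    return latency_ms + transfer_ms

print(network_time_ms(1, 15))   # ~484 ms: one round trip, 15 KB payload
print(network_time_ms(5, 15))   # ~1,484 ms: five sequential round trips dominate
```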
Well-known recommendations are, for example, one application round trip and 10 KB to 20 KB of data per user interaction
step. Major strategies to optimize network performance are
compression and front-end caching; both are implemented
in the SAP NetWeaver® technology platform and are part of
standard SAP software. For geographical regions where sufficient network quality is not available, you could consider using
a performance-optimized protocol like Citrix and Windows
Terminal Server (WTS) for end users, which all SAP software
supports. To further improve network performance, Accelerated
Application Delivery for SAP NetWeaver is available for SAP
customers. This implements sophisticated compression and
caching techniques that go beyond the standard features of HTTP. Among other advantages, accelerated application delivery:
•• Improves application performance for end users at remote
locations
•• Reduces IT management of remote sites
•• Enables enterprises to avoid expensive bandwidth upgrades
•• Provides monitoring of bandwidth usage and compression
ratio
•• Supports confidentiality of transported data
To find out more, see Front-End Network Requirements for SAP
Business Solutions.3
Figure 5: Topic Overview – Enterprise Mobility
[Figure: the performance and scalability aspects from Figure 1, with mobile highlighted]
ENTERPRISE MOBILITY
Enterprise mobility is changing the IT landscape in a fundamental way. In today’s flexible working environment, more and
more employees use a variety of devices, from smartphones
to tablets to equipment not even owned by the enterprise. With
employees relying more on consumer technology for work and
personal purposes, the line dividing employees’ personal and
professional lives is blurring fast.
As employees bring their devices to work and use them to
work productively while on the go, the bring-your-own-device
(BYOD) trend gains full force. Many corporations realize the
value of BYOD and are providing access to back-end software
to serve the enterprise, employees, and customers.
An important step for enterprises is developing a holistic
mobile strategy that lays the foundation and ground rules
for implementing enterprise mobility. Introducing change
management to foster a mobile mind-set is essential for
gaining acceptance and adoption of enterprise mobility.
Another aspect of a mobile strategy is testing mobile performance. This is especially important for understanding and
planning for the impact of mobile devices on your infrastructure.
FOOTNOTE
3.This paper contains a description of SAP software’s front-end characteristics and exemplary network traffic between the front end and the application
layer from both a LAN and a WAN perspective. The objective of this network bandwidth sizing approach is to ensure that end users are able to work
efficiently with the different user interfaces of the SAP software.
Certain network conditions are common to mobile applications, such as latency, packet loss, and dropped connections.
Mobile applications must be capable of handling demands that
vary and spike unexpectedly. Designed in and tested during
development, scalability must ensure that performance levels
are maintained under a variety of loads and conditions. Although not immediately apparent, scalability can vary widely,
potentially leaving enterprises and users vulnerable in mission-critical situations. Although emulators are a good work-around
to test functionality, they are not recommended for measuring
or estimating performance. Instead, native mobile tools should
be used to collect data specific to such factors as disk, memory,
energy, CPU usage, and response time. And not to be forgotten,
mobile devices use different networks (3G, 4G, and LTE, among
others) with different bandwidths. The mobile application is
expected to perform well on each of them.
The following table provides an overview of the theoretical
throughput and latency numbers of different connectivity
technologies.
Throughput and Latency Numbers of Connectivity Technologies

Technology      Downlink                       Uplink                   Latency
GPRS (GSM)      53.6 Kbps                      13.4 Kbps                >500 ms
EDGE (GSM)      236.8 Kbps                     118.4 Kbps               300–400 ms
UMTS            384 Kbps                       128 Kbps                 170–200 ms
HSDPA (UMTS)    1.8 / 3.6 / 7.2 / 14.4 Mbps    –                        60–70 ms
HSUPA (UMTS)    –                              1.8 / 3.6 / 5.8 Mbps     60–70 ms
HSPA+ (UMTS)    21.1 / 42.2 Mbps               5.8 Mbps (11.5 Mbps)     <50 ms
LTE             Up to 100 Mbps                 Up to 50 Mbps            <35 ms
LTE Advanced    Up to 1 Gbps                   Up to 500 Mbps           –

Legend:
Kbps = kilobits per second; Mbps = megabits per second; Gbps = gigabits per second; ms = milliseconds
GSM = Global System for Mobile Communications; GPRS = general packet radio service; UMTS = Universal Mobile Telecommunications System; HSDPA = High-Speed Downlink Packet Access; HSUPA = High-Speed Uplink Packet Access; LTE = Long-Term Evolution
The following table lists the network time and end-to-end time returned by a lightweight sample app.
Results Returned by a Lightweight Sample App

Network   Number of items   Size with overhead (bytes)   Network time (ms)   End-to-end time (ms)
2G        100               3843                         1510                1693
2G        200               6954                         1630                1852
2G        300               10129                        1718                1976
3G        100               3843                         794                 965
3G        200               6954                         827                 1042
3G        300               10129                        987                 1257
Wi-Fi     100               3843                         293                 510
Wi-Fi     200               6954                         236                 501
Wi-Fi     300               10129                        350                 662
The network time is calculated from the transmission time4 plus two times the latency5 (see the example below).
Example: Theoretical Download, GSM – EDGE (2G), with 100 Items

Theoretical download (Kbps)       236.8
Theoretical download (Bps)        29600
Size with overhead (bytes)        3843
Theoretical transmit time (ms)    130
Theoretical latency (ms)          400
Theoretical network time (ms)     930
Measured network time (ms)        1510

Legend:
Kbps = kilobits per second; Bps = bytes per second; ms = milliseconds
The example indicates a delta of 580 milliseconds between the
theoretical and the measured network times. Due to the shared
infrastructure (cellular network6) and dependency on the number of network hops, the theoretical network time can hardly
be reached.
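The arithmetic behind this example can be reproduced in a few lines of Python, following the formula above (network time = transmission time + 2 x latency). The values are taken from the table; only the calculation itself is shown.

```python
# Theoretical network time for the EDGE (2G) example with 100 items.

bandwidth_kbps = 236.8                                # theoretical EDGE downlink
bandwidth_bytes_per_s = bandwidth_kbps * 1000 / 8     # = 29,600 bytes per second

size_bytes = 3843                                     # payload including overhead
latency_ms = 400                                      # theoretical latency

transmit_ms = size_bytes / bandwidth_bytes_per_s * 1000   # ~130 ms
network_ms = transmit_ms + 2 * latency_ms                 # ~930 ms

measured_ms = 1510                                    # measured with the sample app
print(f"theoretical {network_ms:.0f} ms, measured {measured_ms} ms, "
      f"delta {measured_ms - network_ms:.0f} ms")      # delta ~580 ms
```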
Consumer-Friendly Experience
The way people work, consume content, and use technology
to get things done is changing at a rapid pace. Demand is also
growing in the workplace for consumer-friendly experiences
when using technology to accomplish tasks. Addressing the
needs of today’s business user, SAP has launched SAP Fiori™
apps. These apps are straightforward to use and provide an
intuitive user experience for frequently used SAP software
functions across a variety of devices – desktop computers,
tablets, smartphones – to help get the job done quickly.
Learn about today’s mobile computing trends and challenges.
Find out how mobile solutions and strategies from SAP can
help make your employees, customers, and partners more
efficient. See how you can quickly build and deploy mobile apps
with SAP Mobile Platform. By leveraging this mobile application
development platform, you can focus on your application’s
user interface, data model, and business logic. The cloud versions of the enterprise and consumer editions of SAP Mobile
Platform offer mobile as a service (MaaS). Provided in a secure
cloud environment, this unique service delivers the same core
mobile services as the on-premise versions but without your
having to host them.
FOOTNOTES
4. The transmission time is the duration from the beginning until the end of message transmission. In the case of a digital message, it is the time elapsed between when the first bit and the last bit of a message leave the transmitting node.
5. Latency time depends on the number of network hops (nodes) and the propagation time for the signal to get from A to B, between and on these nodes.
6.See http://en.wikipedia.org/wiki/Cellular_network.
Figure 6: Topic Overview – Performance of a Single Computing Unit
[Figure: the performance and scalability aspects from Figure 1, with SCU (single computing unit) performance highlighted]
CPU THROUGHPUT CAPACITY AND SCU PERFORMANCE
In the past, companies tended to look at throughput KPIs to
determine what hardware resources would be right to support
their software installations. At that time, true to Moore’s law,
CPU throughput capacity was doubling every 18 months, increasing the speed with which a CPU could process data in
parallel. To keep pace, the time required to access the physical
disk would have to decrease by 50% during each of those
18-month periods. But access time was decreasing by
only 15%.
To close this gap, solid-state drives (SSDs) promise to improve I/O throughput significantly. SAP has also targeted faster I/O
rates as an important objective to achieve with the SAP HANA
platform. However, the trend to use network-attached storage
(NAS) to ease administration will increase I/O latency, while
the impact of virtualization and cloud computing on I/O
performance cannot be foreseen.
Under these constraints, good performance can be achieved if applications do not require excessive physical I/O accesses. Increasing database caches does not eliminate physical I/O accesses completely.
The number of physical I/O accesses depends heavily on:
•• The number of successful accesses to the buffer (the
cache/hit ratio) – the higher the cache/hit ratio, the fewer
physical I/Os are performed
•• The absolute number of database accesses – a cache/hit ratio of 99% with 1,000 database accesses per second means 10 physical disk accesses per second; with 1 million database accesses per second, it would mean 10,000 physical disk accesses per second
In real-life systems, a high number of physical I/Os per second
causes problems. It is not easy to prevent hot spots on single
disks, as the number of physical I/O operations per second
is limited. With an average latency of 5 milliseconds per I/O
access, no more than 200 accesses per second per disk are
possible.
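The relationship between cache hit ratio, database access rate, and disk capability can be expressed directly. The back-of-the-envelope sketch below recomputes the figures given above and the number of disks a given load would keep busy; it is an illustration, not an SAP sizing formula.

```python
import math

def physical_io_per_s(db_accesses_per_s: float, cache_hit_ratio: float) -> float:
    """Database accesses that miss the buffer and hit the physical disk."""
    return db_accesses_per_s * (1.0 - cache_hit_ratio)

# At ~5 ms average latency per I/O access, one disk handles at most ~200 I/Os/s.
MAX_IO_PER_DISK = 1000 / 5

for accesses in (1_000, 1_000_000):
    io = physical_io_per_s(accesses, 0.99)             # 99% cache/hit ratio
    disks = math.ceil(io / MAX_IO_PER_DISK)
    print(f"{accesses:>9} DB accesses/s -> {io:>7.0f} physical I/Os/s "
          f"-> at least {disks} disk(s)")
# 1,000 accesses/s     ->     10 physical I/Os/s -> 1 disk
# 1,000,000 accesses/s -> 10,000 physical I/Os/s -> 50 disks
```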
However, there is now another KPI that companies must consider: single computing unit (SCU) performance. This refers
to the processing power of a single computing unit within a
system. SCUs could include a single thread within a multithreaded CPU, a core within a multicore CPU, or any other
individual unit that makes up a CPU.
As SAP applications grow more sophisticated, flexible, and
integrated, the hardware resources required to support them
have evolved as well. To keep their software operating optimally,
companies must ensure that the right hardware is in place
early on. That’s why determining the sizing requirements of
your software before you purchase new hardware is so critical.
Figure 7: Single Computing Unit Performance
[Figure: SAPS* per single computing unit over time, from ~200 SAPS in 1996 to ~2,000 SAPS in 2010, with the first multicore systems appearing along the way]
Figure 8: Performance Development of Complete Computer Systems
[Figure: total SAPS* of complete computer systems over time, from ~200 SAPS in 1996 to ~1,016,000 SAPS in 2011]
*SAP Application Performance Standard (source: www.sap.com/benchmark)

Nowadays, as Figure 7 shows, the fastest SCUs have a capacity of roughly 2,000 SAPS.7 However, for some years now, the performance of an individual SCU has remained stable or has increased only very slightly. It seems unlikely that individual SCU performance will become significantly faster in the foreseeable future.
In addition, clock rates have decreased since 2005. In the past,
it was possible to significantly increase SCU performance by
increasing the processor clock speed. Now, increasing clock speed has become so difficult that this approach is no longer viable. Instead, hardware vendors are
heading in other directions, such as the introduction of multicore architectures.
In addition, for several years a computer system’s ability to
handle more and more throughput has continued to grow
(mainly due to the introduction of parallelization and multicore
technologies). In Figure 8, we compare internal and published
benchmark results from 1996 to 2011. You can see that the
total number of overall SAPS is increasing.
The general trend is that hardware engineers will continue to
design computer systems with more and more cores and
threads per CPU. So while individual SCU performance is not
likely to increase, the overall number of SCUs will. However, it
is important to remember that adding these cores and threads
might not speed up the software running on them in a significant way, depending on the solution being used.
These factors mean that future improvements to system performance will likely depend on other factors than continued
performance improvement in individual SCUs. For instance,
software developers will be challenged to write software that
scales with the number of cores in a system and ensure that
their solutions are designed in a way that the speed of an SCU
will not become a limiting factor.
Read more from the following resources:
•• In the SAP Notes tool, SAP Note 1501701 provides you with
more information regarding performance of SCUs.
•• SAPinsider article “Do You Have the Right Hardware for Your
SAP Solution?” sheds additional light on the subject.
•• See the sizing guide List of factors that have an influence on
Disk I/O for more information.
FOOTNOTE
7.The SAP Application Performance Standard (SAPS) is a hardware-independent unit of measurement that describes the performance of a system
configuration in the SAP environment. More information is available at www.sap.com/benchmark.
Figure 9: Topic Overview – Sizing to Determine How Software Uses Hardware Resources
[Figure: the performance and scalability aspects from Figure 1, with sizing highlighted]
SIZING
In general, sizing means translating business requirements
into hardware requirements to run software solutions and
applications. A mathematical model and procedure must be
designed to predict resource consumption based on a reasonable number of input parameters and assumptions.
Sizing KPIs, listed below, help to fully understand how the
application is using the hardware resources – CPU, memory,
hard disk, and network (see Figure 10). Each hardware resource is associated with a lower or higher initial cost and
maintenance cost and is part of the customers’ total cost of
ownership (TCO).
Figure 10: Key Performance Indicators for Sizing
•• CPU – duration of time required for the CPU to perform business transactions or tasks
•• Disk size – volume of data that resides in the database
•• Disk I/O – file read and write activity to storage
•• Memory – memory allocated by user or background processes; overall memory for application initialization, buffers, and caches
•• Network load – volume of transferred data; number of round trips

Sizing KPIs
CPU time of business transactions or tasks is measured to determine the number of processors required. CPU time should be measured with appropriate granularity – for example, it is useful to know the CPU time for each dialog step of a business process, not just the CPU time for the entire business process.
Normally, hardware with greater processing capacity is more
expensive and consumes more electrical power, which means
that its day-to-day running cost is relatively high.
Disk size and disk I/O are measured to determine the rate of
database table growth and the file system footprint. The number of insert operations for the database, write operations for
the file system, and bytes that are written to the database or
file system are measured.
Memory is measured in different ways depending on the type
of application. For some applications, it is sufficient to measure
the user session context, while for others it is necessary to also
measure application-specific buffers and shared spaces and
temporary allocations by stateless requests, among other
factors.
Network load measurements refer to the number of round
trips for each dialog step and the number of bytes sent and
received for each round trip. The measurements are used to
determine the network bandwidth requirements.
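To show how these KPIs feed into a first, rough hardware estimate, the sketch below aggregates illustrative per-step measurements into CPU, memory, and bandwidth figures for a peak hour. All numbers and the simple peak-hour model are hypothetical; an actual sizing would use the Quick Sizer or an expert sizing as described below.

```python
# Back-of-the-envelope aggregation of sizing KPIs (all figures are invented).

users_at_peak = 2_000            # concurrent users in the peak hour
steps_per_user_per_h = 60        # dialog steps per user and hour
cpu_s_per_step = 0.3             # measured CPU time per dialog step
mem_mb_per_user = 10             # user session context
round_trips_per_step = 1
kb_per_round_trip = 15

steps_per_s = users_at_peak * steps_per_user_per_h / 3600
cpu_load = steps_per_s * cpu_s_per_step                  # CPU seconds per second
memory_gb = users_at_peak * mem_mb_per_user / 1024       # session memory
bandwidth_mbps = steps_per_s * round_trips_per_step * kb_per_round_trip * 8 / 1000

print(f"{cpu_load:.1f} CPU-seconds/second, {memory_gb:.1f} GB session memory, "
      f"{bandwidth_mbps:.1f} Mbps front-end bandwidth")
```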
Who Is Responsible for Sizing?
The SAP business units are responsible for providing standard
sizing guidelines. The hardware vendors are responsible for
providing hardware that will meet the customer’s throughput
and response time requirements. Employees working in different roles from SAP Sales, SAP Consulting, the implementation
partner, and the customer’s IT basis team may need to perform
at least an initial sizing.
Is Sizing Release Dependent?
It certainly is. Sometimes architectural changes between releases will lead to different resource consumption. Business
process designs may change or transactions may be removed,
to name just two changes that can affect resource consumption. New releases tend to require higher resource consumption due to greater functionality. In older releases, resource
consumption tends to remain the same.
Who Leads a Sizing Project at the Customer Site?
That depends. We recommend that the customer take the lead
in sizing projects since the customer must provide the application data. Usually, the customer’s IT basis team performs sizing
together with the hardware vendor representative, who will be
expected to provide service-level agreements. SAP Consulting
or the support team from SAP may be involved as well.
How Long Does It Take to Perform Sizing?
That depends on the type of sizing and the sizing method used.
Initial sizing requires little background information and thus
can be done quickly but has a higher probability of inaccuracy.
Expert sizing including measurements may take several
months but is more accurate.
Are Sizing Results Valid for a Particular Operating System?
The results stem from measurements made on various operating systems undertaken by SAP and its hardware partners. To
make sure that the results are valid for different platforms, the
results are not exact figures but rather categories. It is the task
of the respective hardware partner in partnership with the customer to determine the exact figures. The operating system is
not an input variable in the “Quick Sizer” (addressed below).
How Far Off May the Sizing Result Be from My Individual Configuration?
Unfortunately, sizing is not an exact science. The more accurate the input provided to the sizing tool, the more accurate the overall result and, consequently, your configuration will be. Sizing is not a one-time job but an iterative process. Invest
time in reevaluating your sizing. It’s time well spent and can result in a higher degree of accuracy for your productive system.
Where Can I Find the Appropriate Sizing Method for My
Sizing Project?
The sizing decision tree on SAP Service Marketplace may help
you.
What Is the Role of the Quick Sizer?
The “Quick Sizer” is an online sizing tool provided by SAP. It is available 24x7 and free of charge. Customers and prospects only
need an S-user ID8 to create sizing projects. The Quick Sizer
targets 80% of mainstream customers and contains a set of
different sizing guidelines for sizing new SAP solutions.
What Are SAP Standard Application Benchmarks?
SAP standard application benchmarks help customers and
partners find the appropriate hardware configuration for their
IT solutions. Working in concert, SAP and our hardware partners developed SAP standard application benchmarks to test
the hardware and database performance of SAP applications
and components.
The benchmarking procedure is standardized and well defined.
It is monitored by a benchmark council made up of representatives of SAP and technology partners involved in benchmarking. Originally introduced to strengthen quality assurance, SAP
standard application benchmarks can also be used to test and
verify scalability, concurrency, power efficiency, and multiuser
behavior of system software components, relational database
management systems (RDBMS), and business applications.
What’s the Customer Benefit of SAP Standard Application
Benchmarks?
Customers can benefit from benchmark results in various
ways. For example, benchmark results illuminate the scalability
and manageability of large installations. Benchmark results
provide basic information that help in configuring and sizing
SAP Business Suite software, and they allow users to compare
different platforms. They enable proof-of-concept scenarios
and provide an outlook for future performance levels (with
new platforms or new servers, for example).
FOOTNOTE
8. Please register at SAP Service Marketplace.
Figure 11: Topic Overview – Virtualization
[Figure: the performance and scalability aspects from Figure 1, with virtualization highlighted]
VIRTUALIZATION
More and more companies are turning to virtualization to
reduce IT costs, decrease complexity, and improve flexibility.
Virtualization can have a lot of benefits. For example, you can
run several operating systems and applications at the same
time on a single computer and flexibly balance applications.
This helps you reduce TCO through the economical use of
servers that are fully utilized. You can reduce power consumption as well as energy for cooling, thereby supporting a green IT policy.
Another big advantage is that you can react quickly to changing business requirements by adding additional computing
resources as needed.
However, to take full advantage of virtualization, there are some
important things to be considered.
Traditionally, SAP software is installed on dedicated physical
servers. For good reasons, they are calibrated to meet peak
demands (peak sizing) rather than average demands. In most
cases, peak periods do not last long and peaks usually occur
at predictable times. That means that for the rest of the time,
these dedicated servers are rarely fully utilized. This leads to
a high rate of IT spend for both hardware and maintenance –
although your IT infrastructure is underutilized. Virtualization
technology can address this problem. Typically, virtualized
environments consist of two different types of virtualization:
server virtualization and storage virtualization. Storage virtualization means the consolidation of the storage environment.
With many companies facing the fact that their data is growing
by 50% to 100% per year, storage virtualization might be an attractive option.
As mentioned above, with virtualization you may realize significant cost savings because of flexible load balancing (for example,
additional hardware can be allocated quickly and dynamically
when peak loads require it) and system/server consolidation.
This can lead to cost savings due to, for example, lower energy
consumption as well as power for cooling. But note that hardware sizing in a virtualized environment is still necessary. It can
also be more complex than sizing a single physical server. The
main challenge for sizing in a virtual environment is that you
can measure only the virtual resource consumption, not the
actual physical consumption. Sizing of a virtualized environment also includes many factors and parameters that might
have an influence on total performance. It also depends on external factors: for example, other virtual servers may be running on the same physical hardware, competing for the shared available resources. It might be that
several virtualized systems are communicating over the
same physical network. You have to make sure that this won’t
become a bottleneck. Another important issue is that in a
virtualized environment, storage is normally addressed over
a network, and since the network is a shared resource, it represents a potential bottleneck. As a consequence, the demand
for real computing power in virtualized systems often varies.
In summary, in a virtualized environment it is essential to not
only measure the system load within the single system but also
to link the measurements to the overall load situation of the
physical server or the entire storage system. Each system in a
virtualized environment might be influenced by its neighbor
systems, so that bottleneck investigations of a single system
always require “sanity checks” of the other systems. Because
resources are shared in virtual environments, virtual machines,
though isolated from each other in terms of memory and access,
can nonetheless affect each other in terms of performance
(end-to-end response time).
This kind of effect might be found in different aspects of system operations:
•• I/O throughput on storage
•• CPU and memory resources on the server side
•• Network bandwidth and latency in the shared network infrastructure
However, virtualization is not free. Virtualization imposes additional overhead in the form of more hardware for the virtualization software. First experiences and measurements have
shown that for server virtualization approximately 10% of
additional hardware resources are needed for the virtualization
software. Of course, this depends heavily on the workload and
the setup. The overhead might be much higher for individual
cases. Nevertheless, it’s possible to achieve great cost savings
through virtualization, for example, through CPU overcommitment (in cloud environments, it is called oversubscription).
CPU overcommitment means that CPUs can be utilized much
more efficiently. System administrators can allocate more CPU
power to virtual servers. Through this CPU overcommitment,
each virtual server can be granted more processor capacity, because you can assume that not all servers are running fully
utilized at the same time. Through virtualization, full CPU
power is provided where it is needed at the moment.
With virtualization, application instances can share available
server resources to maximize infrastructure utilization (see
Figure 12). You can add or increase servers dynamically as long
as there are computing resources available and as long as the
system is able to respond within the expected times.
Figure 12: Server Virtualization – Hardware Partitions
[Figure: several applications, each with its own operating system, running in partitions on a single physical server]
For CPU sizing in a consolidated environment, which can be
achieved by virtualization, you have to consider the peak
CPU consumption for the consolidated sum of all systems
(see Figure 13). Please note that SAP software systems are
typically sized for peak utilization.
Imagine you have SAP solution “A” for online transaction processing during the day and SAP solution “B” for batch processes
during the night. The peak CPU consumption occurs at different
times. Therefore, you have to take the consolidated sum of
both solutions. If you ran the solutions on separate servers,
the total amount of required CPU capacity would be equal
to the sum of the two peaks.
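The difference between the consolidated peak and the sum of the individual peaks can be illustrated with hourly CPU profiles, as in the following sketch. The two 24-hour profiles are invented; they simply mimic a daytime OLTP system and a nighttime batch system.

```python
# Consolidated peak vs. sum of peaks for two systems with complementary
# load profiles (hourly CPU consumption in arbitrary units; invented data).

solution_a = [30 if 8 <= h < 18 else 5 for h in range(24)]          # OLTP by day
solution_b = [25 if (h >= 22 or h < 6) else 5 for h in range(24)]   # batch at night

sum_of_peaks = max(solution_a) + max(solution_b)                        # 55
consolidated_peak = max(a + b for a, b in zip(solution_a, solution_b))  # 35

print(f"sum of peaks: {sum_of_peaks}, consolidated peak: {consolidated_peak}")
# Sizing the shared server for the consolidated peak (35) instead of the
# sum of peaks (55) is where the consolidation savings come from.
```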
For memory sizing in a virtualized environment, the same
amount of memory that was sized for a nonvirtualized landscape must be available in your virtual environment. Memory
overcommitment is not possible. There should be no swapping
in a virtualized environment, because swapping would significantly deteriorate performance. In a virtual environment
memory is more often the bottleneck than CPU and disk.
If you plan to move your IT system to a virtualized environment,
you should get in touch with your hardware vendor or your
infrastructure provider to find the right landscaping strategy.
Read More
More information about virtualization at SAP can be found on
SAP Community Network.
Figure 13: Consolidated CPU Consumption Versus Sum of Peaks
[Figure: hourly CPU consumption of SAP® Solution A and SAP Solution B over 24 hours; the peak of the consolidated CPU consumption is lower than the sum of the two individual peaks]
Figure 14: Topic Overview – In-Memory Databases
[Figure: the performance and scalability aspects from Figure 1, with the in-memory database/platform highlighted]
IN-MEMORY DATABASE AND DATA MANAGEMENT
PLATFORM
In the past, database management systems were designed and
optimized to perform well on hardware with limited RAM and
with slow disk I/O as the main bottleneck. As the demand to
perform real-time analytics on growing data volumes outpaced
the capacity of these databases, specialized database systems
(often referred to as data warehouses) for analytic processing
on preaggregated data were introduced at a cost of duplicate
data sets and an expensive extract-transform-load process
between them. In recent years, several related hardware trends
have led to a dramatic paradigm shift in database design,
resulting in the development of in-memory databases that
are setting new standards in data management performance.
Massively parallel processing technology and advancement
in predictive data science with more sophisticated algorithms
(for example, automated trading or real-time behavior) are
applying pressure on older data management technologies.
The ability to ask any question from any data without knowing
the data format is now the edge companies need from their
technology infrastructure.
In-memory computing is a technology that enables the analysis
of very large, nonaggregated data at unprecedented speed in
local memory (versus in disk-based databases). The following
are primary characteristics of this technology:
•• Stores massive amounts of compressed information in main
memory
•• Utilizes parallel processing on multiple cores
•• Moves data-intensive calculations from the applications layer
into the database layer for even faster processing
Since all the detailed data is available in main memory and processed on the fly, there is no need for aggregated information
and materialized views, which simplifies the architecture and
hence reduces latency, complexity, and cost. In business terms,
this means that complex analyses, plans, and simulations can
be performed based on real-time data and made available
immediately.
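A minimal sketch of the idea behind on-the-fly aggregation over columnar, in-memory data follows: instead of maintaining a materialized aggregate, the query scans the relevant columns and computes the result on request. This toy example only illustrates the principle and says nothing about how SAP HANA is actually implemented.

```python
# Toy column store: each attribute is kept as a separate in-memory array, so
# an aggregation touches only the columns it needs and is computed on demand
# rather than read from a pre-built, materialized aggregate.

from collections import defaultdict

# Columnar layout of a tiny sales table (values are invented)
region = ["EMEA", "APJ", "EMEA", "AMER", "APJ", "EMEA"]
revenue = [120.0, 80.0, 200.0, 150.0, 60.0, 90.0]

def revenue_by_region() -> dict:
    """Aggregate on the fly by scanning two columns; no materialized view."""
    totals = defaultdict(float)
    for r, v in zip(region, revenue):   # single scan over the relevant columns
        totals[r] += v
    return dict(totals)

print(revenue_by_region())   # {'EMEA': 410.0, 'APJ': 140.0, 'AMER': 150.0}
```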
This technology serves as the foundation for the next-generation
database, the SAP HANA database. SAP HANA can perform
real-time online analytical processing (OLAP) on an online
transaction processing (OLTP) data structure. As a result,
you can dramatically improve the performance of both your
IT and your business.
The SAP HANA platform is specifically developed to take full
advantage of the capacity provided by modern hardware to
increase application performance (see Figure 15). By keeping
all relevant data in main memory (RAM), data processing operations are significantly accelerated.
SAP HANA is deployable as an on-premise appliance or in the
cloud. It can be distributed across multiple servers to achieve
scalability in terms of both data volume and user concurrency.
An in-memory columnar database for an enterprise software system:
•• Combines workloads by handling online transaction processing (OLTP) and online analytical processing (OLAP) queries in parallel
•• Leverages in-memory functions to:
– Reduce the amount of data
– Aggregate data on the fly
– Run analytic-style queries (to replace materialized views)
– Execute stored procedures to reduce network traffic and for faster processing
– Improve scan performance (row store versus column store)9
– Improve select performance10
SAP Business Suite applications are now powered by the
SAP HANA platform. SAP HANA brings together transactions
and analytics onto a single in-memory computing platform.
To become a real-time business, you must manage the daily
business transactions of your core business processes in real
time for your finance, sales, and production operations. You
must also be able to capture data from sources like social
media, mobile apps, and machine sensors. You need to analyze
all this data in real time and leverage advanced models, such as
predictive modeling, to make more relevant business decisions.
And your employees must be able to access real-time information on any device for immediate action.
Figure 15: Overview of the SAP HANA Platform
[Figure: the SAP HANA® database with row and column stores; calculation, planning, and aggregation engines; metadata repository; persistent storage for data and logs; and a high-availability blade service. Administration through the SAP HANA studio (information modeler, administration console, system monitoring, backup and recovery, data load services, configuration). Data provisioning through SAP Data Services, SAP Landscape Transformation, and SAP Sybase® Replication Server®. Access via SQL, MDX, and BICS from SAP Business Suite, SAP ERP and other applications, SAP NetWeaver® Business Warehouse, SAP BusinessObjects™ business intelligence solutions, and third-party systems]
BICS = business intelligence consumer services; SQL = structured query language; MDX = multidimensional expression
FOOTNOTES
9. Lecture “Scan Performance” from the online lecture on “In-Memory Data Management” by Prof. Hasso Plattner, Hasso Plattner Institute.
10. Lecture “Indices” from the online lecture on “In-Memory Data Management” by Prof. Hasso Plattner, Hasso Plattner Institute.
The IT architecture linked to conventional disk-based databases
typifies the bottlenecks you must eliminate in order to become
a real-time business. Disk-based database limitations have
forced a divide in transactional and analytical processing,
making it necessary to transfer data from a transactional system
to an analytical system in order to prepare it for predefined reports. The information you collect in your current transactional
systems typically goes through many layers of processing. With
your existing disk-based database technology, you consistently
need to make trade-offs. As an example, optimizing your enterprise data system for reporting purposes with both breadth
and depth on the one hand and speed and simplicity on the
other is simply not possible. For a report that provides broad
and deep analysis, you must allow time for significant data
manipulation and thus can expect runtimes of hours or days.
For a report that is simple and fast, you must settle for less
information across fewer dimensions. In neither case can
you build in real-time updates; in a typical IT environment,
those must wait for overnight batch jobs. This can lead to a
complex IT landscape, slower processes, or even business-user restrictions on accessing the right information at the
right time. Drawing on long-term business experience and
visionary research, SAP has created a new paradigm for a
technology portfolio that takes advantage of the advances
represented by in-memory computing.
The SAP HANA platform realizes that paradigm. It provides
the basis on which you can dramatically increase the performance of your SAP Business Suite applications now and
continue to innovate without disruption on an open platform
as illustrated in Figure 16.
Figure 16: Open Platform for Innovating Without Disruption
[Figure: SAP HANA as the real-time platform beneath the SAP NetWeaver® technology platform and SAP Business Suite (SAP ERP, SAP CRM, SAP SCM, SAP PLM, SAP SRM*), with SAP HANA® Live for SAP Business Suite, SAP BusinessObjects™ business intelligence solutions, and SAP NetWeaver Business Warehouse enabling real-time reporting and analytics, real-time planning and execution, and new apps; a user-driven experience is delivered through SAP UI technologies, HTML5, and mobile, supported by design thinking]
*With SAP enhancement package 3 for the SAP Supplier Relationship Management (SAP SRM) application 7.0, SAP SRM will be powered by the SAP HANA platform.
See how the SAP HANA® platform churns through volumes of Big Data in real time, enabling new levels of business insight. Hear how customers are using SAP HANA to extract value from business data to support better reporting, analysis, and decision making.
Figure 17: Topic Overview – Load Tests
(The recurring topic map highlights load tests among the topics covered in this guide: high availability, network, mobile, single computing unit (SCU) performance, sizing, virtualization, in-memory database/platform, load tests, rollout/support, Web servers, application servers, monitoring/analysis, development, and database.)
With SAP Business Suite powered by SAP HANA, SAP combines
a proven suite of applications with the next-generation platform
to drive your entire business in real time. Your core applications,
such as the SAP ERP application, the SAP Customer Relationship
Management (SAP CRM) application, and the SAP Supply
Chain Management (SAP SCM) application, can now use
SAP HANA as their primary database.
Sizing for SAP HANA is continuously optimized based on input from customer implementations and is subject to change.
Sizing consists of individual calculations:
•• Memory sizing for static data and for data objects created during runtime (data load and query execution)
•• Disk sizing to allow for persistence of data, logs, and data models
•• CPU sizing (the number of users determines the size of the CPU needed, considering query complexity and user behavior such as click rates)
With SAP’s sizing tool Quick Sizer, you can conduct an SAP
HANA sizing.
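As a purely illustrative, hedged sketch of how such a memory estimate is assembled (the compression factor and work-space multiplier below are assumptions chosen for illustration only; the authoritative formulas are in the Quick Sizer and the SAP Notes listed below), a back-of-the-envelope calculation could look like this:

    public class HanaMemoryEstimate {
        public static void main(String[] args) {
            // Illustrative assumptions only - use Quick Sizer and the SAP Notes for real sizing.
            double sourceDataFootprintGB = 2_000;  // uncompressed source data footprint (assumption)
            double compressionFactor = 4.0;        // assumed columnar compression
            double workSpaceMultiplier = 2.0;      // assumed space for runtime objects (data load, queries)

            double staticDataGB = sourceDataFootprintGB / compressionFactor;
            double totalMemoryGB = staticDataGB * workSpaceMultiplier;

            System.out.printf("Static data in memory: ~%.0f GB%n", staticDataGB);
            System.out.printf("Estimated total memory: ~%.0f GB%n", totalMemoryGB);
        }
    }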
Read More
Follow these links to get details for the following topics:
•• In-memory computing (SAP HANA)
•• Sizing for SAP Business Suite powered by SAP HANA
•• SAP Note 1793345: Sizing for SAP Business Suite on
SAP HANA
•• SAP Note 1872170: SAP Business Suite on SAP HANA
memory sizing
•• SAP Note 1514966: Sizing Script for SAP HANA
•• SAP Note 1637145: The SAP NetWeaver Business Warehouse
(SAP NetWeaver BW) application on SAP HANA – Sizing the
SAP HANA Database
•• SAP Note 1736976: SAP NetWeaver BW on SAP HANA –
Sizing report for SAP NetWeaver BW on SAP HANA
LOAD TESTS
Load tests are an important and useful step during IT landscape planning and implementation. However, they can lead to high costs, considering the hardware and software systems, expert consulting, and other resources you might require. You should plan to perform load tests with the peak load expected during the product lifecycle, prior to going live.
Load tests come in various guises. Often-used synonyms
are stress test, volume test, and baseline performance test.
Depending on the target, a load-test project has to:
•• Validate a previous sizing
•• Perform integration checks
•• Validate feasibility projections
•• Understand system behavior under system load (proof
of volume)
Clear success criteria have to be defined as well.
Load testing is performed to determine a system's behavior under both normal and anticipated peak load conditions, which must be based on realistic business requirements. The targeted normal and peak loads have to be simulated correctly at a technical level in terms of the number of users, think time, data volume, system configuration, and the weight of the different business scenarios, among other factors.
Before starting with load tests, single-user tests must be conducted. Based on the single-user measurement results, a prediction of the load-test results should be made, assuming linear dependency of resource consumption (scalability). The extrapolated results are then verified in the multiuser load test; they thus represent the success criteria of the load-test project.
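As a minimal, hedged sketch of such an extrapolation (all figures below are illustrative assumptions, not SAP guidance), single-user CPU consumption can be scaled up to the targeted concurrency to derive the expected CPU demand that the multiuser load test must confirm:

    public class LoadTestExtrapolation {
        public static void main(String[] args) {
            // Single-user measurement (assumption): 150 ms CPU time per interaction step.
            double cpuMsPerStep = 150.0;
            // Target load (assumption): 2,000 concurrent users, 20 steps per user per hour.
            int users = 2_000;
            int stepsPerUserPerHour = 20;

            // Assuming linear resource consumption, total CPU seconds needed per hour:
            double cpuSecondsPerHour = users * stepsPerUserPerHour * cpuMsPerStep / 1000.0;
            // One CPU core provides 3,600 CPU seconds per hour; target a maximum 65% utilization.
            double coresNeeded = cpuSecondsPerHour / (3600.0 * 0.65);

            System.out.printf("Expected CPU demand: %.0f CPU seconds/hour (~%.1f cores at 65%% utilization)%n",
                    cpuSecondsPerHour, coresNeeded);
            // The multiuser load test verifies this extrapolation; meeting it is the success criterion.
        }
    }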
Load tests are often conducted in large, highly modified, mission-critical software implementation projects, shortly before going
live. A successful custom load test can ultimately ensure that
the performance requirements of a complex software application – including the hardware platform, the system landscape,
and the communication network infrastructure – have been met.
For best practices for running load tests, please see the section
“Run Load Tests.”
Figure 18: Topic Overview – Rollout/Support
(The recurring topic map highlights rollout/support.)
ROLLOUT AND SUPPORT
Rollout refers to the reuse of existing implementation and business processes at a new location. Different kinds of solutions –
business to business, business to consumer, cloud, and mobile –
have different requirements. In particular, the e-commerce
model creates new business requirements. To carry out electronic business successfully, Web sites must provide reliable
connectivity and 24x7 availability. Corporate Web sites must
address user scalability and performance in order to handle
thousands of simultaneous Internet connections to their data.
Solutions are needed as well to provide immediate Web browser
access to existing applications and services. This brings up
new challenges, especially for a global rollout of your solution
in the following areas:
•• End-to-end tests (to verify end-to-end response time before
the rollout)
•• Infrastructure (complexity and potential adjustments)
•• Network
•• End-to-end monitoring
With more complex networks and the ubiquity of mobile devices
(see Figure 19), network management strategies and technologies are changing. Organizations today have adopted many
technologies and services that make operations more efficient
and effective – but put tremendous pressure on network
performance, bandwidth, capacity, and security.
The use of cloud computing has also created new challenges.
For example, servers and storage used to sit side by side, but
today many organizations can have parts of applications sitting
in different physical locations. To add to the complexity, some
of the infrastructure isn’t even owned or managed by the organization that delivers the app to users. From a project management perspective, it is important to identify critical success
factors and define responsibilities clearly.
The growth of e-commerce puts a great burden on the network, as it requires always-on, low-latency connectivity. The crux of the issue is how to deal with new interdependencies between the network, the applications, and the servers. This has a significant impact on the support structure to be provided after the software goes into productive operation, which must be arranged to cover the process end to end instead of covering individual components.
To manage these issues, most organizations are using networking management and monitoring tools. Around-the-clock monitoring saves time, supports administrators in planning resources, and helps optimize the network. When network traffic increases, upgrades can be planned rather than implemented in a hurry while firefighting.
The next section describes the central system monitoring functionality in the SAP Solution Manager application management solution.
Figure 19: Infrastructure Challenges
(The figure shows clients connecting over Wi-Fi through an external firewall, a demilitarized zone (DMZ), and an internal firewall to middleware and back-end systems.)
A CENTRAL SOURCE OF UP-TO-DATE INFORMATION ON THE SOLUTION AND SYSTEM LANDSCAPE
System Monitoring and Analysis
SAP SOLUTION MANAGER
The challenges described in the beginning of this paper have
convinced SAP customers and the SAP support organization
that they must rely on a central source of up-to-date information within their solution and system landscape. The central
system monitoring functionality in SAP Solution Manager provides a reliable foundation for operating a complex, heterogeneous system landscape as well as its instances, databases,
and hosts. Central configuration features in conjunction with
predefined monitoring templates that are aware of the system
landscape significantly reduce TCO for configuring and operating the infrastructure.
SAP Solution Manager supports these tasks by covering the
entire lifecycle of a solution, including requirements, design,
build and test, deploy, operations, and optimizing phases. With SAP Solution Manager, systems can be monitored and
service-level reports created based on monitoring thresholds
and service-level agreements. Business-process monitoring
and interface monitoring can be set up and analyzed by system
operation. Workload analysis, root cause analysis, and transaction traces can be performed to solve incidents or problems.
Although SAP Solution Manager is an all-encompassing
end-to-end tool for managing your productive SAP software
environment, certain areas of its functionality focus on the
performance of the SAP software environment itself. For the
purpose of this paper, we focus on those areas, which are:
•• System monitoring
•• Technical analytics
•• Root cause analysis
•• End-to-end trace analysis
•• End-to-end workload analysis
Figure 20: Topic Overview – System Monitoring and Analysis
(The recurring topic map highlights monitoring/analysis.)
System Monitoring
The system monitoring functionality of SAP Solution Manager
gives you a status overview of all software systems, including
their instances, databases, and hosts. It knows which systems
belong to which databases and servers and provides direct access to additional landscape information. The status overview
displays the availability and performance of a chosen system
as well as information about existing alerts. The administrator
can check to see if any alerts were generated and, if so, determine what actions should be taken. You can drill down to view
information about individual metrics and events, including their
thresholds and current rating or value. Furthermore, you have a
feature to “jump” to metric reporting that includes the history
of values for a single metric.
Read More
Follow this link to read more about application operations.
Technical Analytics
Today’s business processes often operate globally, with participants all over the world. Guaranteeing the highest availability
and performance from almost every location is no longer just
a challenge for huge companies. The technical analytics functionality in SAP Solution Manager is an efficient way to evaluate
and report on the availability and performance of your productive systems (see Figure 21). The reports are structured to
provide information to help in performing trend analysis as
well as statistical evaluation of technical component usage.
In SAP Solution Manager, all administration, monitoring, alerting, and diagnostics tasks take place centrally. The technical
analytics functionality provides an aggregated overview of the
system, database, and host behavior over a period of time. It
provides dashboards to visualize trends in various categories,
such as availability, performance, exceptions, capacity, and
usage. It offers real-time management and reporting to
administrators, managers, and customers based on service-level agreements.
Technical analytics includes support for technical reporting
and management reporting to address the needs of technical,
business, and management teams. Proactive identification,
analysis, and resolution of issues have been sped up dramatically
with the integration of monitoring, alerting, and end-to-end
diagnostics within the administration infrastructure, leading
to lower TCO and increased ROI.
Figure 21: Technical Analytics Functionality in SAP® Solution Manager
(The figure lists the goals supported by technical analytics: monitor proactively in real time, enable reactive handling of critical events, lower mean time to problem resolution, optimize excellence of technical operations, and prove value to the business.)
Root Cause Analysis
End-to-end root cause analysis functionality in SAP Solution
Manager offers features for cross-system and technology analysis. Especially in heterogeneous landscapes, it is important
to isolate a problem component as fast as possible and to involve the right experts for problem resolution right away. With
the tool set provided by the root cause analysis functionality,
this can be performed with the same tool, regardless of the
technology an application is based on. It allows a first in-depth analysis by a generalist, which eliminates the potential "ping-pong" of getting bounced between different groups of experts.
The heterogeneous IT landscapes on which customers run
their mission-critical applications have become increasingly
complex during the last decade. Finding the root cause of an
incident in those environments can be challenging, requiring
a systematic top-down approach to isolate the component
causing the problem. The approach must be supported by
tools that help customers do this as efficiently as possible,
which the functionality in SAP Solution Manager provides. The
tools support customers as well as SAP experts in performing
a root cause analysis across different support levels and different technologies. The basic idea behind root cause analysis is
to determine where and why a problem occurred.
With respect to operations, the root cause analysis tools from
SAP are designed to reduce the number of resources in each
step of the resolution process (see Figure 22). Together, an
IT generalist with core competence in root cause analysis
and a component expert are usually enough to investigate
an issue and nail it down. For that reason, root cause analysis
offers tools for each task in cross-component (end-to-end)
analysis and component-specific analysis. By definition, a
cross-component analysis involves several systems or technologies, whereas component-specific analysis deals with
one system or technology. Overall, root cause analysis works
toward simplifying the problem resolution process within an
IT environment and reducing the total cost of ownership.
Figure 22: Root Cause Analysis Functionality in SAP® Solution Manager
End-to-end workload analysis
• General performance overview for heterogeneous landscape
• Review of most important key performance indicators (KPIs) across all technologies,
with functions to drill down to product-specific workload KPIs
End-to-end change analysis
• Statistical change data across all technologies based on daily configuration snapshots
• Functions to compare configurations between systems and to drill down to change
reporting for a detailed change history
End-to-end exception analysis
• Statistical exception data across all technologies to analyze exception trend or
review exception after changes
• Functions to jump to component-specific exception analysis (ST22, NWA, and so on)
End-to-end trace analysis
• Single user request tracing in a complex system landscape
• Ability to identify component causing problems (performance or functional) and
to jump to component-specific trace analysis (SQL, ABAP®, J2EE trace, and so on)
System, host, and database analysis
• Central, safe, remote access to file system, operating system, and database
• Links to read-only monitoring and administration tools like Wily Introscope for
performance analysis and monitoring
End-to-End Trace Analysis
End-to-end trace is a tool for isolating a single user interaction.
It follows the interaction through the entire landscape, providing
trace information on each of the components involved for that
interaction only. It starts with the user interaction in the browser
and ends with data being committed to the database. Identifying long-running user requests within a complex system landscape is the most common use case of end-to-end trace analysis,
but you can also identify functional errors that have occurred
during the execution of one request. The exceptions are
attached to the trace, similar to a dump. The trace can also
be used to perform functional testing and to make sure that
an activity executed in one system does not lead to functional
errors in connected systems during request execution.
End-to-End Workload Analysis
If a customer faces a performance problem, the end-to-end
workload analysis function might be the tool to start with. SAP
Solution Manager regularly collects the performance information of each system and makes this data centrally available in
its root cause analysis work center. The workload analysis tool
helps identify general performance bottlenecks, such as sizing
problems and problems that affect all users of a particular
system.
HANDLE YOUR DEVELOPMENT PROJECTS WITH REGARD TO PERFORMANCE AND SCALABILITY
Development Projects
Figure 23: Topic Overview – Development
(The recurring topic map highlights development.)
PERFORMANCE GUIDELINES
Not all unique business requirements can be covered by a standard software package. When they are not, you can extend your software so that it covers your specific needs. SAP's performance guidelines help you do so without compromising performance and scalability.
The guidelines are based on product standards established by SAP that define a set of nonfunctional characteristics and overall quality criteria commonly understood to be in demand by the market. Every SAP product must fulfill these standards.
They are designed to facilitate the smooth operation and ready
use of SAP software for all parties involved, from administrator
to end user.
Defining the Right KPIs
As described above, performance is typically measured by
three metrics:
•• Response time – speed of task completion
•• System throughput – amount of work done in a given
amount of time
•• Linearity and scalability – predictable resource consumption
of a software application under different system loads (linearity
with data volume/concurrency)
Several factors contribute to end-to-end response times:
•• Front-end time
•• Network time, number of round trips, data volume, latency,
and bandwidth
•• Server response time
The following sections provide guidance on how to handle
your development projects with regard to performance.
In particular, these sections will discuss:
•• The best practices for your development
•• The best practices for testing
•• How to measure performance
•• Which tools to use for performance analysis
Factors for Specifying Key Performance Indicators

Perspective: People
•• What is the expectation of the user (service-level agreement)? (Please consider perceived performance classifications.)
•• Are there steps in the process that have a service-level agreement to guarantee the business success (for example, a call center)?
•• Why is a user interface (UI) interaction required? Is batch processing offered? Is a progress indicator shown for the UI process? What happens with the transaction if the UI is closed?

Perspective: Front end
•• Are there steps that are executed on the front end without server interaction? What is the user expectation for those steps (see classifications)?
•• What is typically commercially available?
•• If it is a browser-based application, which OS-browser combination has to be supported?

Perspective: Network
•• What does the infrastructure look like – on premise, on demand, on device?
•• What is the network response time (LAN/WAN)?
•• What response times cannot be influenced?
•• What network connection is typically commercially available (latency/bandwidth)?
•• Are there specific reasons for large data volumes in the application (uploads, videos, pictures)?

Perspective: Server
•• What is the expected breakdown of the end-to-end response time into server time, network time, and front-end time?
Planning for Performance Targets and Activities in Your
Projects
KPIs should already be defined in the planning phase of your
development. First of all, good KPIs should reflect performance from both a user and a system point of view. Second,
good KPIs should be measured accurately. Accuracy supports
the optimization process: If a KPI can be measured with an
accuracy of 5%, then the KPI can only verify optimizations
providing more than 5% improvement, with optimizations
under 5% going completely undetected. Third, only reproducible
measurement results can be used for performance evaluation.
If the measurement results of a KPI cannot be reproduced, there must be unknown factors with a significant performance impact; identifying those factors is then the next step in the performance analysis and optimization process.
Last but not least, good KPIs should give indications for possible
optimizations.
The Importance of Defining the Right KPIs
In the KPI-focused approach, the question “Which performance
characteristics should be defined as KPIs?” is much more
important than meeting concrete KPI values. Let’s consider
network KPIs as examples to discuss the importance of good
KPIs. Network time behavior is highly nondeterministic because network resources are shared by many consumers, each behaving unpredictably, and because it depends heavily on the network topology.
This means that measurement results depend on when and
where you measure. In addition, different installations of a software application may use different network transport protocols
and could have different qualities of services. For this reason,
instead of network time, SAP uses the number of network
round trips and transferred data volume in the application layer
of a protocol stack as KPIs for network performance. These
two KPIs can be measured very accurately, and they give clear
guidelines to developers on how to improve network performance: one network round trip per user interaction step and
minimal transferred data volume. When testing in today's LAN, with a maximum latency of 10 milliseconds and a gigabit-per-second bandwidth (from the user or client perspective), network performance issues hardly ever occur. But in a WAN, an intercontinental network connection could have a latency
time higher than 300 milliseconds and a typical bandwidth of 100 kilobytes per second. Using the measured network
KPIs, the real network time can be reliably predicted for
different given latency times and bandwidths.
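As a minimal, hedged sketch of such a prediction (the round-trip count and data volume below are illustrative assumptions), the network time can be estimated from the two measured KPIs for any given latency and bandwidth:

    public class NetworkTimeEstimate {
        // network time ≈ round trips * latency + data volume / bandwidth
        static double networkTimeSeconds(int roundTrips, double dataVolumeKB,
                                         double latencySeconds, double bandwidthKBps) {
            return roundTrips * latencySeconds + dataVolumeKB / bandwidthKBps;
        }

        public static void main(String[] args) {
            int roundTrips = 2;        // measured KPI: round trips per user-interaction step (assumption)
            double dataVolumeKB = 50;  // measured KPI: transferred data volume (assumption)
            // LAN: 10 ms latency, ~1 Gbit/s (~125,000 KB/s)
            System.out.printf("LAN: %.3f s%n", networkTimeSeconds(roundTrips, dataVolumeKB, 0.010, 125_000));
            // Intercontinental WAN: 300 ms latency, 100 KB/s
            System.out.printf("WAN: %.3f s%n", networkTimeSeconds(roundTrips, dataVolumeKB, 0.300, 100));
        }
    }

The same two round trips and 50 KB that are invisible in the LAN amount to roughly 1.1 seconds of network time over the intercontinental WAN, which is why the round-trip and data-volume KPIs give such clear guidance to developers.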
Best Practices for Performance-Optimized Programming
Many performance issues cannot be solved by simply fixing an
error in the software. Often they are caused by inappropriate
design decisions made in the software architecture. Performance
design patterns specify target behaviors and design rules for
software components and interfaces while at the same time
allowing a high degree of freedom for the implementation.
An SAP business application generally consists of three layers:
the persistence layer, the application layer, and the front-end
layer. Relational database systems are typically used in the
persistence layer, and the data accesses are executed by structured query language (SQL) statements. While the persistence
layer has to ensure data consistency under highly concurrent
data accesses, the application layer carries out the CPU-intensive
processing of business logic. The application layer of a large
software system is typically built up by a load-balanced cluster
of application servers, which scale in CPU and memory and
provide high availability at the same time. The front-end layer
represents the user interface (UI), typically running on PCs,
connected by a LAN or WAN to the application servers and
providing high usability and productivity to end users.
Based on best practices, the documents listed below introduce the most common do's and don'ts when developing performance-optimized programs in the ABAP® programming language and in Java. Inspecting the areas of database, application server,
communication, and front end, they cover a wide range of
programming guidelines. The documents also describe which
tools you can use to measure the performance of your coding:
•• How to Verify Performance Requirements for Your
Development Projects Manually
•• Performance Dos and Don’ts ABAP
•• Performance Dos and Don’ts Java
•• Best Practices for Java Performance Analysis –
SAP NetWeaver 7.30
Derived from the performance design patterns described within
those documents, you could use the KPIs shown in Figure 24
in performance tests carried out by the developers themselves.
You could use them as well for quality checkpoints and regression
tests run by quality teams. The goal of testing is to find as many
functional and performance issues as possible by executing a
limited set of specified test cases.
For example, there is the KPI "Average end-to-end response
time." End-to-end response time includes the server response
time, the network time, and front-end rendering time. The traditional approach of basing performance testing, analysis, and
optimization on response time tries to break down
end-to-end response time into times spent in individual
software components, and to identify the root causes.
The optimization activities try to reduce or even eliminate any
identified issue.
Recommended design patterns to keep in mind include:
•• Network round trip per user interaction step
A user-interaction step has at least one synchronous communication cycle, which is a network round trip between the
front end and the application server. It is triggered by a user
action, for example, clicking a button in the UI screen, and
typically results in a new UI screen. Since network performance
is mainly defined by latency time and bandwidth, a minimal
number of network round trips and minimal transferred data
volume will optimize the network time.
Recommendation: Minimize the number of network round trips – try to keep it down to two per user-interaction step – and minimize the volume of data transferred.
•• Frequently executed database accesses
Indexes of a database table accelerate data access and
ensure a data-read time that is nearly independent of the
number of entries in the database table. Based on the data
access patterns identified, appropriate indexes are designed
and used to support frequently executed data accesses.
Recommendation: Keep the number of records searched
small.
Figure 24: Example for Key Performance Indicators in Performance Tests
(The figure assigns KPIs to the layers involved in an end-to-end dialog step: front-end layer – memory consumption, CPU time, and so on; network (LAN/WAN) – round trips, data volume, and so on; application layer – memory consumption, CPU time, and so on; persistence layer – number of accesses, data volume, access times independent of the amount of data persisted, complete WHERE clauses of SELECTs, no identical accesses, appropriate indexes, and so on; overall – end-to-end dialog response time, throughput, and sizing.)
•• Fully specified WHERE clauses of SELECT statements
The performance when reading data from a database is impacted by the size of the result set of a SELECT statement.
The intention of this design pattern is to restrict the size of
result sets to the minimum required for processing.
Recommendation: Keep the amount of transferred data small (see the sketch following this list). A fully specified WHERE clause can reduce the number of records to those that are actually required by:
–– Specifying all conditions in the WHERE clause of the
SELECT statement
–– Not using CHECK statements within SELECT …
ENDSELECT
•• Use of buffers and caches at the application server layer
In the application layer, data representing intermediate
results are accessed very frequently. Caching or buffering
this kind of data on application servers can speed up performance by a factor of 10 to 100.
Recommendation: Use system buffers and caches
efficiently.
•• No identical accesses to persistence layer
For data consistency reasons, the persistence layer represents
a central resource in SAP applications. This design pattern
represents a simple and efficient criterion in the data and
architecture design and can be easily verified in testing.
Recommendation: Avoid several accesses to the same
data within one transaction.
•• Number of accesses to the persistence layer
This number indicates the performance and scalability impact
of data accesses to the persistence layer, which typically represents a central system resource in an SAP solution. In the
case of relational databases, this is the number of executed
SQL statements.
Recommendation: Optimize access to the persistence layer.
•• Parallel processing
As long as data consistency is ensured, business objects
should be processed in parallel to utilize the parallel processing features of modern hardware.
Recommendation: Enable parallel processing through
efficient lock design and proper splitting and size of work
packages.
•• Linear dependency
The scalability of a large application can only be achieved
when the resource consumption depends at most linearly11
on the number and size of processed business objects.
Otherwise, the resource requirements will quickly exceed
any possible limits and result in unpredictable increases in
response times.
Recommendation: Resource consumption should depend
at most linearly (see Figure 25) on the number and size of
processed business objects.
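The following minimal sketch is not taken from the SAP guidelines referenced above; the table, column, and method names are hypothetical. It illustrates two of the patterns in Java with JDBC: a fully specified WHERE clause that pushes all conditions to the database instead of filtering the result set in application code, and a simple read cache that avoids identical accesses to the persistence layer within one transaction.

    import java.sql.*;
    import java.util.*;

    public class SelectPatterns {
        // Fully specified WHERE clause: all conditions are part of the SELECT,
        // so only the records actually required are transferred.
        static List<String> openItemsForCustomer(Connection con, String customerId) throws SQLException {
            String sql = "SELECT document_id FROM sales_orders "
                       + "WHERE customer_id = ? AND status = 'OPEN'";   // hypothetical table and columns
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, customerId);
                List<String> result = new ArrayList<>();
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        result.add(rs.getString(1));
                    }
                }
                return result;
            }
        }

        // No identical accesses: cache data already read within the same
        // transaction instead of repeating the same SELECT.
        private final Map<String, List<String>> readCache = new HashMap<>();

        List<String> openItemsCached(Connection con, String customerId) throws SQLException {
            List<String> cached = readCache.get(customerId);
            if (cached != null) {
                return cached;                          // second access is served from the cache
            }
            List<String> fresh = openItemsForCustomer(con, customerId);
            readCache.put(customerId, fresh);
            return fresh;
        }
    }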
LINEAR RESOURCE CONSUMPTION
Linear resource consumption can be considered from the
aspect of CPU consumption and peak memory consumption.
CPU Consumption (Milliseconds)
The CPU represents central system resources that are shared
by many concurrent users and computing processes, which
could easily become a bottleneck for performance and scalability. In general, CPU time is close to, but not necessarily
equal to, the response time. It depends on whether a request
is processed sequentially or in parallel and on the wait time
during request processing.
CPU time can be measured much more precisely than response time because CPU time depends largely on the test
case and CPU speed. Response time is influenced by many
additional wait times.
Peak Memory Consumption (MB)
Physical main memory is another central system resource.
When the CPU resource becomes a bottleneck, it results in
increased system response time. This means requests wait
longer in system queues and their processing is postponed
(CPU bound). However, when the system resource memory is
exhausted, applications cannot continue to run, resulting in a
system crash like the Java out-of-memory error (memory
bound). In programming languages like Java, with automatic
memory management based on garbage collection (GC),
memory consumption behavior cannot be described by
a single KPI. The topic of Java memory KPIs and GC tuning
will be discussed in the next section.
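As a minimal, hedged illustration of measuring heap consumption around a test case in Java (GC-based measurements are only approximate, which is one reason a single memory KPI is insufficient; the test case below is a placeholder), you could snapshot the used heap before and after the run:

    public class HeapSnapshot {
        // Approximate used-heap measurement around a single-user test case.
        // GC-based figures are only indicative; a profiler gives more reliable numbers.
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.gc();                                      // stabilize the baseline as far as possible
            long before = rt.totalMemory() - rt.freeMemory();

            java.util.List<int[]> result = runTestCase();     // the scenario under test (placeholder)

            long after = rt.totalMemory() - rt.freeMemory();
            System.out.printf("Approximate additional heap used: %.1f MB%n",
                    (after - before) / (1024.0 * 1024.0));
            result.clear();                                   // keep the result reachable until measured
        }

        private static java.util.List<int[]> runTestCase() {
            // Placeholder for the business transaction being measured (hypothetical).
            java.util.List<int[]> data = new java.util.ArrayList<>();
            for (int i = 0; i < 1_000; i++) {
                data.add(new int[1_000]);
            }
            return data;
        }
    }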
FOOTNOTES
11. The resource consumption should be linear or better (at most linearly). That is, the resource consumption should not increase faster than proportionally with the number of data objects processed.
LINEARITY TESTS
Linearity tests verify the at-most-linear dependency of resource consumption on the number and size of processed business objects. They can be executed based on a repeatable pattern, for example:
•• Linearity with an increasing number of objects:
–– 1 object, 5 line items
–– 10 objects, 5 line items
–– 100 objects, 5 line items
•• Linearity with an increasing number of line items:
–– 1 object, 10 line items
–– 1 object, 100 line items
–– 1 object, 1,000 line items
Make sure you define at least three measurement points per scenario; the more, the better. Please refer to the example in Figure 25, which shows linearity with the number of objects in three tests. Another set of tests could include, for example, 10 objects with 5, 50, and 500 line items, respectively.
Figure 25: Linearity with the Number of Objects
(The chart plots CPU time for the application server, the database, and the total against the number of processed objects (0 to 70); in the example, all three curves increase approximately linearly.)
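A minimal sketch of evaluating such a measurement series (the CPU times below are hypothetical) is to normalize CPU time per object: if consumption is at most linear, the per-object value should stay roughly constant, or decrease, as the object count grows.

    public class LinearitySeries {
        public static void main(String[] args) {
            // Measurement points: number of objects (5 line items each) and measured CPU time in ms (hypothetical).
            int[] objects = {1, 10, 100};
            double[] cpuMs = {12.0, 95.0, 910.0};

            double baseline = cpuMs[0] / objects[0];
            for (int i = 0; i < objects.length; i++) {
                double perObject = cpuMs[i] / objects[i];
                System.out.printf("%3d objects: %.1f ms CPU per object%n", objects[i], perObject);
                // Flag measurement points where per-object consumption grows noticeably (here: >20% above baseline).
                if (perObject > baseline * 1.2) {
                    System.out.println("  -> possible nonlinear resource consumption");
                }
            }
        }
    }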
PERFORMANCE MEASUREMENT AND ANALYSIS TOOLS
The following figures describe which tools you can use to measure the performance of your coding. These tools are used by
SAP Solution Manager but can also be used independently. As
measurements, particularly CPU utilization, are highly volatile,
you should check to see whether the system in which you are
running your performance measurements is in a state to return
meaningful and reproducible performance test results.
Several tools are available to help you obtain the KPI values
mandated by your requirements and to identify the causes
of any hot spots for where your code could best be optimized.
The performance tools for ABAP and for Java listed in Figures
26 and 27 can be categorized into the three following types:
•• Static-code analyzers
–– Analyze the source code and its underlying data structures and do not execute software.
–– Provide insight into the quality of static code and data structures. They identify potential errors that need to be considered by the developer.
–– The performance checks carried out by the ABAP test cockpit or code inspector detect issues in ABAP source code and DDIC table definitions in the early stages of development.
Figure 26: Analysis Tools for SAP NetWeaver® Application Server for ABAP
(The figure shows a request passing from the browser through the gateway, the Internet communication manager (ICM), and the ABAP dispatcher to the work processes, shared memory and buffers, central services (enqueue server and message server), and the database (ABAP schema), and lists the analysis tools per area:)
Hardware (application server): OS06 (ST06), OS07
Database – general/database accesses: DBACOCKPIT, ST10, ST05, STAD
Application – shared memory and buffers: ST02, SHMM
Application – ABAP® runtime: SAT (SE30)
Application – user memory analysis: SM04, SMI
Application – work process overview: SM50, SM66
Application – workload analysis: ST03N, STAD
Application – enqueue monitoring: SM12, ST05
Communication – HTTP traffic: SMICM, ST03G
Communication – remote function calls: ST03N, ST05
•• System monitors
–– Provide information on an integral level (for example,
per transaction step).
–– Create minimal overhead.
–– Observe resources all the time (no explicit start
necessary).
•• Traces
–– Trace in detail and with high granularity using SQL trace
and enqueue trace, among others.
–– Give specific information on individual resources.
–– Create considerable overhead.
–– Preferably run only on demand.
To analyze network communication, you could use various
browser plug-ins or network sniffer tools. This enables you
to retrieve the KPIs “number of round trips per dialog step”
and “amount of transferred data per dialog step” for HTTP-based UIs.
Read More
Use these links to access details on the following:
•• Best Practices for Java Performance Analysis
•• How to Verify Performance Requirements for Your
Development Projects Manually
•• Praxishandbuch SAP Code Inspector, Eilenberger, Randolf;
Ruggaber, Frank; Schilcher, Reinhard; SAP Press, First
edition, 2011 (in German).
Figure 27: Analysis Tools for SAP NetWeaver Application Server for Java
(The figure shows the corresponding Java stack – browser, gateway, Internet communication manager (ICM), cluster manager, server nodes and threads, global buffers and caches, central services with message server and enqueue server, and the database (Java schema) – and lists the analysis tools per area. The tools named include the SAP NetWeaver® Administrator tool (open SQL monitors, operations management), the management console (MC) for SAP® software, Wily Introscope, the Java Virtual Machine (JVM) profiler from SAP, and HttpWatch, covering the Java runtime, user memory analysis, server node/thread overview, enqueue monitoring, workload analysis, global buffers and caches, general/database accesses, HTTP traffic, and remote function calls.)
BEST PRACTICES FOR TESTING
The planning phase is probably the most time-consuming
phase. An experienced project team can set up and run the
tests quite efficiently.
Determine the Bread-and-Butter Processes
There are basically two sets of performance requirements:
user driven and throughput driven. In the first case, you analyze the system from an end-user perspective, considering
perceived performance requirements as well as system performance requirements. In the second case, response time is
negligible. Here, you test nightly background jobs where hundreds of thousands of data items need to be processed in a few
hours or where several processes are dependent on each other.
Determining the frequently used business processes – the
“bread-and-butter” processes – can sometimes pose a tradeoff. This is because 80% of the load is normally caused by up
to 20% of the functionality, those being the bread-and-butter
processes. To keep complexity to a minimum, you should try to
reduce the number of processes to be tested as much as you
can. Testing 5 to 10 processes should be sufficient, keeping in mind that it
is better to define a few scenarios well than have too many
scenarios. Remember too that the test cases must be scripted
and analyzed from different perspectives.
Examples of bread-and-butter processes include:
•• Client A planned to include a single-sign-on authorization
check in its corporate portal, and the end-user response time
for this process was very important.
•• Client B ran several different systems and applications within
one process chain. Its critical path analysis determined that
the full order processing had to be completed within 15 minutes
or else severe business problems would result.
In addition to asking the application owners, you can also make
use of the legacy system to understand the core business processes and critical applications.
In hybrid landscapes the determination of processes to be
tested can be more challenging. The processes run across the
system, and the defined KPIs need to be validated on different
systems (see Figure 28). A breakdown of performance targets
for end-to-end response time and throughput is also
recommended.
Figure 28: Key Performance Indicators in Hybrid Landscapes
(The figure shows an end-to-end process spanning on-premise, on-demand, and on-device deployments – each with its own front-end, application, and persistence layers – connected by networks and an orchestration layer; the KPIs must be validated across all of these systems.)
Running Tests
One basic principle for achieving meaningful performance results is to let the system "warm up" with dry runs of the processes to be tested. This fills buffers and caches and in general
minimizes the influence of artifacts. The rule applies to single-user tests and load tests alike. The advantage is that the results
will be reproducible, a very important factor for performance
analysis and regression testing.
These dry runs also have the advantage of letting you check
to see if there are any functional issues with the test cases.
However, you could also decide to measure the performance
of a “cold” system, because a cold start shows nonoptimal programming on first access of the applications (a good focus for
optimization potential). This could become critical in situations
where you cannot increase buffers and caches due to hardware
limitations and eviction of cached data is necessary or cached
content simply expires. By having the system restarted automatically by the load testing environment just before the load
generation is triggered, you can get a reproducible initial state
of all buffers and caches. On the other hand, these results are
not reproducible and should not be used as a basis for regression testing. It is important that you understand the implication
of each method for your test results (see Figure 29).
Run Single-User Tests or Unit Performance Tests
Single-user tests can check important business processes in
the early phases of development projects. They focus on the
end-to-end response time and system resource consumption
and are able to break these factors down into the system components. Single-user tests provide baseline performance
figures, which can serve as starting points for multiuser load
tests. An important aspect of single-user tests is that the test
system can be shared by many users for different implementation and testing activities when the test system is running at
moderate load. Another advantage of single-user tests is that
they can be easily repeated for accurate and reproducible
measurement results. To deliver relevant measurement results,
possible caching and just-in-time compilation effects should be
taken into consideration. Therefore, test cases should first be
executed to warm up the system before measurements start.
A single-user test refers to one user clicking through a series
of screens and entering data as required while these activities
are being monitored for performance. The test can also be automated. However, the important aspect here is a manageable,
well-defined test scenario.
Figure 29: Single-User and Volume Tests
(The figure contrasts a single-user test on a small test system (QA or development, one user) – used to analyze and measure scalable behavior such as the quality and implications of accesses to the persistence layer, linear resource consumption, parallel processing mechanisms and load balancing, memory usage and memory leaks, disk requirements, and front-end network load – with a volume test, which is equivalent to a multiuser test, stress test, load test, or benchmark. Performance predictions for the high-volume environment are derived from the single-user test, and the assumptions are verified in the volume test.)
Whichever system you use, to obtain optimal results, you
should make sure you have the system all to yourself so only
your activities are captured. If you cannot get exclusive use of
the system, then at least make sure there are no background
processes running and that CPU utilization is below 10%.
If there are serious time constraints for your development
project, you might capture just the performance KPIs first.
These might include number of database accesses, amount of
data retrieved, CPU time per step, peak memory consumption,
number of round trips to the front end and between two servers,
and transferred kilobytes to the front end. In a second step, if
you have more time, you can analyze the quality of the database accesses – for example, index design, identical selects,
memory allocation, quality and quantity of cached data – and
remove bugs. These findings can then be returned to the application team responsible for modifying the application software.
Let's look at a simple procedure to check for linear dependency in resource consumption using a single-user test. Suppose the linear dependency on the number of line items of a sales order should be checked. We assume that a performance
trace is available that can measure total response time and
CPU time as well as the response time and CPU time for the
methods and routines called. In a first test run, you create a
sales order with five line items; in a second test run, you create
a sales order with 20 line items. When comparing the total
response times and CPU times, the increasing factor should
be below four. If not, it means a nonlinear dependency exists.
To identify possible nonlinear methods and subroutines, you
can compare the sorted hot-spot lists of the methods and
subroutines in the performance trace; they should stay in the
same order. If the order is changed, the methods and subroutines that have moved toward the top of the hot-spot lists are
the candidates for a further detailed analysis.
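A minimal sketch of this check (hypothetical CPU times and method names; the trace data would come from whatever performance trace you use) could look as follows:

    import java.util.*;

    public class LinearityCheck {
        public static void main(String[] args) {
            // CPU times from two single-user test runs (hypothetical values, in ms):
            // run 1 creates a sales order with 5 line items, run 2 with 20 line items.
            double cpuRun1 = 120.0;
            double cpuRun2 = 410.0;
            double sizeFactor = 20.0 / 5.0;                    // four times more line items

            double increase = cpuRun2 / cpuRun1;
            System.out.printf("CPU time increased by factor %.2f (limit %.2f)%n", increase, sizeFactor);
            if (increase > sizeFactor) {
                System.out.println("Possible nonlinear dependency - inspect the hot-spot lists.");
            }

            // Compare the order of the hot-spot lists (methods sorted by CPU time) of both runs:
            // methods that move toward the top are candidates for detailed analysis.
            List<String> hotspotsRun1 = List.of("readPrices", "checkAvailability", "saveOrder");
            List<String> hotspotsRun2 = List.of("checkAvailability", "readPrices", "saveOrder");
            for (String method : hotspotsRun2) {
                if (hotspotsRun2.indexOf(method) < hotspotsRun1.indexOf(method)) {
                    System.out.println("Moved up in hot-spot list: " + method);
                }
            }
        }
    }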
Run Load Tests
After single-user tests have been performed, multiuser load
testing is used to generate a system load by simulating a specified number of concurrent users who execute defined critical
business processes. In addition to solving the functional issues
that occur under load and were not detected by single-user
tests, a much more important task of multiuser load testing is
to monitor and understand system behavior under load. While
the first part is to ensure the accurate simulation of a specified
system load, the second part is to analyze how the measured
KPIs react to increasing system load. A system load is not
only represented by the number of concurrent users and the
average think time (the time elapsed between two successive
user-interaction steps), but it is also impacted by the amount
of data used in the multiuser load tests.
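As a highly simplified, hedged sketch of what a load generator does (real projects would use a dedicated tool such as SAP LoadRunner; the URL, user count, and think time below are illustrative assumptions), concurrent users with think time can be simulated like this:

    import java.net.URI;
    import java.net.http.*;
    import java.time.Duration;
    import java.util.concurrent.*;

    public class SimpleLoadSimulation {
        public static void main(String[] args) throws Exception {
            int concurrentUsers = 50;                 // simulated users (assumption)
            int stepsPerUser = 20;                    // user-interaction steps per user
            long thinkTimeMs = 10_000;                // think time between steps (assumption)
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("http://testsystem.example.com/scenario"))
                    .timeout(Duration.ofSeconds(30)).build();   // hypothetical test URL

            ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
            for (int u = 0; u < concurrentUsers; u++) {
                pool.submit(() -> {
                    for (int step = 0; step < stepsPerUser; step++) {
                        long start = System.nanoTime();
                        try {
                            client.send(request, HttpResponse.BodyHandlers.discarding());
                        } catch (Exception e) {
                            System.err.println("Functional issue under load: " + e.getMessage());
                        }
                        long responseMs = (System.nanoTime() - start) / 1_000_000;
                        System.out.println("Response time: " + responseMs + " ms");
                        Thread.sleep(thinkTimeMs);    // think time between interaction steps
                    }
                    return null;
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }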
You can start the exercise by performing dry runs to fill buffers
and caches. If you want to include the warm-up period in the
tests, you can start testing immediately; but if you do, you need
to separate the “cold tests” (focus on system behavior) from
the “warm tests” (focus on application behavior). Proceed as
follows:
•• Start with a small load and increase the load step-by-step
until you reach the final throughput target (most customers
double the load in iterations)
•• Measure the throughput
•• Measure the resource consumption using appropriate monitoring tools (ideally integrated in the load generation tool)
•• Carefully monitor the expected performance bottlenecks
•• Optimize system parameter settings and repeat the test
executions, if necessary
The SAP LoadRunner application by HP, for example, validates
software effectiveness with thorough, efficient load testing
while keeping projects on time and within budget. With SAP
LoadRunner, you can test applications, make fixes quickly with
deep diagnostics tools, and retest and compare to previous
performance before going live with new functionality.
Read More
To find out more, follow this link:
Test Your Application Before Going Live.
POSSIBLE LEVELS FOR IMPLEMENTING PERFORMANCE OPTIMIZATIONS
Performance Optimization
Don’t start with any performance optimization before you can
measure the impact of the optimization. This recommendation
emphasizes that any performance optimization should always
be documented by measurements before and after the optimization in order to validate the improvement gained. Otherwise,
you won’t know the effect of your optimization effort. All performance optimization concepts should be derived by analyzing
the measured KPIs.
Good KPIs can give an indication of possible areas for optimization. Developing optimization concepts is a creative procedure, highly dependent on the expertise and creativity of the
software architects and developers involved. Below we discuss
possible levels at which to implement performance optimization by going backwards in the software lifecycle from operation and maintenance to the requirement specification.
You can optimize performance by:
•• Tuning the software configuration
The most effective and cost-efficient way of performance
optimization is to solve performance and scalability issues
identified during the productive usage of a software application by applying proper configurations. For this reason, SAP
provides its customers with a variety of configuration guidelines based on experience from our internal testing to cover
advanced landscaping, high-availability requirements, and
different usage types.
•• Utilizing additional hardware resources
If the software application is scalable with respect to hardware utilization, then any performance issues can be solved
by adding more hardware resources. This is a relatively cost-efficient way to solve the problem, since today’s hardware
purchase prices don’t play a dominant role in the total cost of
ownership. To help estimate hardware requirements reliably,
SAP provides its customers with sizing guidelines for all software applications it delivers.
•• Fixing errors in the code
When violations of performance design patterns are identified while analyzing measured KPIs, and those issues can be fixed by changing the code, then this is the most suitable method for optimizing performance. The disadvantage
of this kind of performance optimization is that additional
testing effort is necessary to avoid functional destabilization.
How high the necessary testing effort is will depend on how
critical the code modification is.
•• Improving the software architecture
When the identified performance and scalability issues cannot be fixed by simple code modifications, and a redesign of interfaces or fundamental changes in the underlying algorithms are necessary, an expensive effort involving design, implementation, and testing is needed for performance optimization.
•• Redesigning the business process and how it maps to the
software solution
If no optimization potential can be identified in the software
application, the next step is to go a level higher to consider
the business process. By redesigning the business process,
an optimization of the mapping of the business process to
software concepts can be achieved.
Outlook
Many companies today use cloud and mobile applications,
thereby extending their IT landscapes beyond their firewalls
to collaborate with subsidiaries, partners, suppliers, and customers. This allows large enterprises to refocus on innovation
and collaboration. Smaller enterprises can scale to match the
changing needs of their business using a pay-as-you-go model.
A comprehensive cloud-based integration infrastructure must
cover on-demand and on-premise integration as well as process and data integration.
At the same time, business users expect the same level of
IT service and support for the business software they use as
they get with consumer-based applications and services. Network quality, especially on the Internet, plays an important role today and will continue to do so, as network latency is often a significant contributor to poor response times.
Enterprise mobility will fundamentally change the IT landscape.
In today’s flexible working environment, more and more employees use a variety of different devices, from smartphones
to tablets to equipment not even owned by the enterprise. With
employees relying more on consumer technology for work and
personal purposes, the line dividing employees’ personal and
professional lives is blurring fast. An important step for enterprises is developing a holistic mobile strategy that lays the
foundation and ground rules for enterprise mobility.
Organizations will need to focus on nontraditional data types
and external data sources. Big Data12 is gaining momentum.
To succeed today, firms must manage the rapid growth in data
volume and complexity while exploiting that data to adjust
operational processes, strategies, and business models faster
than the competition.
SAP HANA is already helping businesses reach these levels of
productivity by addressing very important aspects of Big Data
– like fast access to and real-time analysis of very large data
sets – that allow managers and executives to understand their
business at “the speed of thought.”13 In recent years,
several related hardware trends have led to a dramatic paradigm shift in database design, resulting in the development of
new in-memory databases that are setting new standards in
data management performance.
SAP HANA enables true real-time OLAP on an OLTP data
structure. This helps to dramatically improve the performance
of both your IT and your business and to keep your total cost of
ownership to a minimum.
Cloud computing often goes along with virtualization. Virtualization is becoming the standard for data centers everywhere.
Companies are increasingly turning to virtualization to reduce
IT costs, decrease complexity, and improve flexibility. IT departments need to handle more challenges in coordinating IT-related
activities. In a virtual environment, with rapidly varying dynamic
workload and resource allocation, continuous monitoring and
measurement to establish performance requirements is especially vital but also challenging.
FOOTNOTES
12. "Big Data is a term that describes large volumes of high velocity, complex, and variable data that requires advanced techniques and technologies to capture, store, distribute, manage, and analyze it." Source: TechAmerica Foundation, Federal Big Data Commission: www.techamerica.org/Docs/fileManager.cfm?f=techamerica-bigdatareport-final.pdf.
13. CIO Guide – How to Use Hadoop with Your SAP® Software Landscape.
Find Out More
To learn more about SAP’s approach to performance and scalability, or to discuss these concepts with SAP experts, please
visit http://scn.sap.com/community/performance-scalability
or www.service.sap.com/performance or contact us at
[email protected].
SUPPORT SERVICES FROM SAP
The SAP Active Global Support organization provides several
services to optimize performance and keep your total cost
of ownership to a minimum.
SAP EarlyWatch® Check Service
The continuous quality check SAP EarlyWatch Check service
analyzes the components of your SAP solution, your operating
system, and your database to determine how to optimize performance and keep your total cost of ownership to a minimum.
SAP GoingLive Check Service
The going-live support service within the continuous quality checks is a standardized method to support companies during the critical steps of going live.
Technical Performance Optimization Service
The continuous quality check technical performance optimization service focuses on the optimization of the throughput on
the database.
These services are available as part of an SAP support offering.
They can also be ordered as single services. To order, please
contact your local support center (see SAP Note 560499) or
SAP Store.
www.sap.com/contactsap
CMP23738 (14/01) © 2014 SAP AG or an SAP affiliate company.
All rights reserved.
No part of this publication may be reproduced or transmitted in any form or
for any purpose without the express permission of SAP AG or an SAP affiliate
company.
SAP and other SAP products and services mentioned herein as well as their
respective logos are trademarks or registered trademarks of SAP AG (or an
SAP affiliate company) in Germany and other countries. Please see
http://www.sap.com/corporate-en/legal/copyright/index.epx#trademark for
additional trademark information and notices. Some software products
marketed by SAP AG and its distributors contain proprietary software
components of other software vendors.
National product specifications may vary.
These materials are provided by SAP AG or an SAP affiliate company for
informational purposes only, without representation or warranty of any kind,
and SAP AG or its affiliated companies shall not be liable for errors or
omissions with respect to the materials. The only warranties for SAP AG or
SAP affiliate company products and services are those that are set forth in
the express warranty statements accompanying such products and services,
if any. Nothing herein should be construed as constituting an additional
warranty.
In particular, SAP AG or its affiliated companies have no obligation to pursue
any course of business outlined in this document or any related presentation,
or to develop or release any functionality mentioned therein. This document,
or any related presentation, and SAP AG’s or its affiliated companies’ strategy
and possible future developments, products, and/or platform directions and
functionality are all subject to change and may be changed by SAP AG or its
affiliated companies at any time for any reason without notice. The
information in this document is not a commitment, promise, or legal
obligation to deliver any material, code, or functionality. All forward-looking
statements are subject to various risks and uncertainties that could cause
actual results to differ materially from expectations. Readers are cautioned
not to place undue reliance on these forward-looking statements, which
speak only as of their dates, and they should not be relied upon in making
purchasing decisions.