Performance Testing 101: Planning a Performance Test
Posted on Mar, 2013 by Admin
This is the third installment in a multi-part series covering the basics of performance testing applications. It is written for the beginner, but experienced engineers can also share it with project team members when educating them on the basics. Project Managers, Server Team members, Business Analysts, and others can use it as a guide to the performance testing process, the typical considerations, and how to get started when you have never embarked on such a journey.
Planning an effective performance test comprises three parts:
- Establish performance testing goals
- Gather system usage information
- Analyze the system under test
What are your performance testing goals?
Before starting performance testing, it is important to determine the goals and objectives that should be met.
Here is a list of common goals:
- Application response time – How long does it take to complete a task?
- Configuration sizing – Which configuration provides the best performance level?
- Acceptance – Is the system stable enough to go into production?
- Regression – Does a new version of the software adversely affect response time?
- Reliability – How stable is the system under a heavy workload?
- Capacity planning – At what point does performance degradation occur?
- Bottleneck identification – What is the cause of the performance degradation?
- Product evaluation – What is the best server for 100 users?
Once goals have been determined, a quantitative value should be assigned to each. These are often expressed as Key Performance Indicators (KPIs) or Service Level Agreements (SLAs).
For example, common quantitative goals are:
- All transactions should complete within 5 seconds. This is the end-user response time.
- Servers should be under 75% CPU and memory utilization.
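Goals like these are easiest to enforce when checked automatically against test results. Below is a minimal sketch of such a check, using the two example thresholds above; the transaction names and sample measurements are illustrative, not from a real test.

```python
RESPONSE_TIME_SLA = 5.0   # seconds, end-user response time goal
UTILIZATION_SLA = 75.0    # percent, CPU and memory goal

# Hypothetical measurements collected from a test run
measurements = {
    "create_po": {"response_time": 3.2, "cpu": 61.0, "memory": 70.5},
    "run_report": {"response_time": 6.8, "cpu": 72.0, "memory": 68.0},
}

def kpi_failures(results):
    """Return the names of transactions that miss either KPI."""
    failed = []
    for name, m in results.items():
        if (m["response_time"] > RESPONSE_TIME_SLA
                or m["cpu"] > UTILIZATION_SLA
                or m["memory"] > UTILIZATION_SLA):
            failed.append(name)
    return failed

print(kpi_failures(measurements))  # run_report misses the 5-second goal
```

A check like this can run after every test so regressions against the agreed SLAs surface immediately.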
Gathering System Usage Information
This step is by far the most important in the planning phase. The more time spent and the more detail uncovered here, the better the overall test; everything collected will shape the rest of the testing effort. Gathering system usage data helps determine which business processes need to be recorded, the expected response times, and the locations from which to monitor.
The first step in gathering system usage information is determining which business processes should be recorded. To decide which processes to script, use the following three criteria:
- Heavy Throughput – These business processes are used the most on a day-to-day basis by most users. They should have high volumes on either an hourly or daily basis. Processes that are used sporadically but in high volume spurts are also good candidates.
- Mission-Critical – These business processes are required for day-to-day operations.
- Process Intensive – These business processes are either data intensive or have long response times.
When it comes to performance testing, what matters most is what most users do on a day-to-day basis. Because of this, more weight is typically given to processes with the highest volumes. It is common to see the top 20% of business processes by volume make up approximately 80% of the total volume. Identifying that top 20% is crucial to an accurate plan.
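The 80/20 selection described above can be sketched as a simple calculation: sort business processes by hourly volume and keep the busiest ones until they cover roughly 80% of the total. The process names and volumes below are made up for illustration.

```python
# Hypothetical hourly transaction volumes per business process
hourly_volumes = {
    "create_po": 500,
    "search_item": 420,
    "approve_po": 300,
    "update_profile": 60,
    "run_report": 40,
    "admin_config": 5,
}

def top_processes(volumes, coverage=0.80):
    """Return the highest-volume processes covering `coverage` of total volume."""
    total = sum(volumes.values())
    selected, running = [], 0
    for name, vol in sorted(volumes.items(), key=lambda kv: kv[1], reverse=True):
        if running >= coverage * total:
            break
        selected.append(name)
        running += vol
    return selected

# The three busiest processes already cover over 80% of the volume
print(top_processes(hourly_volumes))  # ['create_po', 'search_item', 'approve_po']
```

In practice the same math can be done in a spreadsheet; the point is that a handful of processes usually dominates the workload.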
Since not all business processes can be scripted, selection criteria must be applied. For example, processes that run as scheduled batch jobs are not interactive and can't be scripted. There is also a return-on-investment consideration in creating the scripts: processes that run only a few times or are overly complicated may not be good fits for scripting.
Most of this information can come from current application usage data, if available. The key value is the total transaction volume over a given time frame, typically an hour, not necessarily the number of users. For example, knowing how many POs are created in an hour is more important than knowing how many users create POs: a given user might create one PO or 50 in that hour. From the total number of POs per hour it is possible to extrapolate the number of virtual users needed to simulate the load.
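The extrapolation from hourly volume to virtual users can be sketched as below. The 500 POs/hour figure and the 6-minute script iteration time (steps plus think time) are illustrative assumptions, not numbers from the article.

```python
import math

def virtual_users(transactions_per_hour, minutes_per_iteration):
    """Virtual users needed if each user repeats the script back to back."""
    iterations_per_user_per_hour = 60 / minutes_per_iteration
    return math.ceil(transactions_per_hour / iterations_per_user_per_hour)

# 500 POs/hour with a 6-minute iteration: one virtual user completes
# 10 POs/hour, so 50 virtual users reproduce the load.
print(virtual_users(500, 6))  # 50
```

Note that the answer depends on pacing: slower iterations (more think time) mean more virtual users for the same transaction volume.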
If current usage information isn't available, the best educated guess should be used. Daily, weekly, monthly, or yearly volumes may be known, and understanding how users work with the system can help extrapolate a peak hourly volume from them. It is important to make a considerable effort in determining volume.
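One way to make that educated guess concrete is to convert a known daily volume into a peak hour. The sketch below assumes a 2,000-transaction day, an 8-hour working window, and a busiest hour at twice the average hour; all three are assumptions to be confirmed with the business, not figures from the article.

```python
def peak_hourly_volume(daily_volume, working_hours=8, peak_factor=2.0):
    """Estimate the busiest hour as a multiple of the average hour."""
    average_hourly = daily_volume / working_hours
    return average_hourly * peak_factor

# 2,000/day over 8 hours averages 250/hour; doubling for the peak gives 500.
print(peak_hourly_volume(2000))  # 500.0
```

The peak factor is the weakest assumption here, so it deserves the most scrutiny when interviewing users about how the system is actually used.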
When identifying mission-critical business processes, it is important to be selective about which processes truly meet the requirement. This criterion is often subjective, and business users commonly list many processes as business critical; only those with the highest priority should be labeled mission-critical.
Process-intensive business processes can also be subjective and should be weighed against volume. Typically this is any business process that is a known performance concern, or a long-running process such as a report. Processes that run only a few times shouldn't be scripted unless there is a significant business need.
It is also important to consider how many scripts to create. As the number of scripts increases, so does the time spent maintaining them. A typical performance project has about 10 scripts; a larger project may have about 20. Beyond that, the scenario becomes difficult to maintain, so it is highly recommended to keep the total number of scripts per project under 30. Remember that the top 20% of business processes can easily make up 80% of the total volume.
Application workflow complexity should also factor into deciding which business processes to automate. If one business process creates data that a different automated process later consumes, the dependency must be planned out properly so that the test can run multiple times without any manual interaction. This type of complexity is common in packaged applications with multiple modules: Oracle E-Business, SAP, PeopleSoft, and other packaged applications may need additional planning time because of workflow dependencies across modules. Each module may be treated as a separate, "mini" project, combined with a larger, integrated test at the end that covers all the modules working together.
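The producer/consumer dependency described above can be sketched with a shared data pool: one scripted process deposits the data it creates, and a later process withdraws it, so the test can repeat without manual setup. All names here are illustrative, and real tools typically implement this with parameter files or data tables rather than in-memory queues.

```python
from collections import deque

created_pos = deque()  # data pool shared between the two scripts

def create_po_script(po_number):
    """Producer: simulates the PO-creation business process."""
    created_pos.append(po_number)

def approve_po_script():
    """Consumer: approves the oldest unapproved PO, if any exist."""
    if not created_pos:
        return None  # nothing to approve; the producer must run first
    return created_pos.popleft()

# The creation script must stay ahead of the approval script
for n in range(3):
    create_po_script(f"PO-{n}")
print([approve_po_script() for _ in range(3)])  # ['PO-0', 'PO-1', 'PO-2']
```

Sizing the pool so producers stay ahead of consumers is part of the planning work the paragraph above describes.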
In the next installment of this blog series we will look at a case study for determining peak load.