
PERFORMANCE TESTING GUIDANCE FOR WEB APPLICATIONS PDF



Performance Testing Guidance for Web Applications, by J.D. Meier and colleagues, provides an end-to-end approach to testing the performance of web applications. It is available online in HTML and as a PDF ebook. Feedback and comments: [email protected].




Performance Testing Guidance for Web Applications. By J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea. Welcome to the patterns & practices Performance Testing Guidance for Web Applications! This guide shows you an end-to-end approach for implementing performance testing. For generating load, a tool such as Apache JMeter, an Apache project, can be used for analyzing and measuring the performance of a variety of services, with a focus on Web applications.

With respect to Web applications, you can use a baseline to determine whether performance is improving or declining and to find deviations across different builds and versions. For example, you could measure load time, the number of transactions processed per unit of time, the number of Web pages served per unit of time, and resource utilization such as memory usage and processor usage.
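As a hedged illustration of what capturing such baseline measurements might look like, the Python sketch below times sequential page fetches and derives a few of the metrics mentioned above. The URL, sample count, and metric names are assumptions for the example, not prescriptions from the guide.

```python
import time
import urllib.request

def measure_page_load(url: str) -> float:
    """Return the wall-clock seconds taken to fetch one page."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()  # drain the body so transfer time is included
    return time.perf_counter() - start

def capture_baseline(url: str, samples: int = 50) -> dict:
    """Fetch a page repeatedly (sequentially) and summarize baseline metrics."""
    timings = [measure_page_load(url) for _ in range(samples)]
    total = sum(timings)
    return {
        "samples": samples,
        "avg_load_time_s": total / samples,
        "max_load_time_s": max(timings),
        "pages_per_second": samples / total,  # throughput of this serial loop
    }

if __name__ == "__main__":
    # Hypothetical endpoint; point this at your own test environment.
    print(capture_baseline("http://localhost:8080/"))
```

Resource metrics such as memory and processor usage would come from separate server-side monitoring, which this client-side sketch does not cover.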

A baseline can also be created for different layers of the application, including a database, Web services, and so on. It is important to validate that the baseline results are repeatable, because considerable fluctuations may occur across test results due to environment and workload characteristics.
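One way to validate repeatability, sketched below under the assumption that you have averaged results from several identical runs: compute the run-to-run coefficient of variation and flag results that fluctuate more than a chosen tolerance. The 10% tolerance is an arbitrary example, not a figure from the guide.

```python
import statistics

def is_repeatable(run_averages: list[float], max_cv: float = 0.10) -> bool:
    """True if the coefficient of variation across runs is within tolerance."""
    mean = statistics.mean(run_averages)
    cv = statistics.stdev(run_averages) / mean
    return cv <= max_cv

# Average response times (seconds) from three identical baseline runs.
runs = [1.92, 2.05, 1.98]
print(is_repeatable(runs))  # True -> fluctuation is within the 10% tolerance
```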

Baselines can help product teams identify changes in performance that reflect degradation or optimization over the course of the development life cycle.

Identifying these changes in comparison to a well-known state or configuration often makes resolving performance issues simpler. Baselines are most valuable if they are created by using a set of reusable test assets. It is important that such tests accurately simulate repeatable and actionable workload characteristics.

Baseline results can be articulated by using a broad set of key performance indicators, including response time, processor capacity, memory usage, disk capacity, and network bandwidth. Sharing baseline results allows your team to build a common store of acquired knowledge about the performance characteristics of an application or component.
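Sharing results presupposes somewhere to put them. A minimal sketch of a shared baseline store follows: persist each build's key performance indicators to JSON, then report percentage deltas for later runs. The file name and KPI names are invented for the example.

```python
import json
from pathlib import Path

BASELINE_FILE = Path("baseline.json")

def save_baseline(kpis: dict) -> None:
    BASELINE_FILE.write_text(json.dumps(kpis, indent=2))

def compare_to_baseline(current: dict) -> dict:
    """Percentage change per KPI relative to the stored baseline."""
    baseline = json.loads(BASELINE_FILE.read_text())
    return {
        name: (current[name] - baseline[name]) / baseline[name] * 100.0
        for name in baseline if name in current
    }

save_baseline({"avg_response_s": 2.0, "requests_per_s": 120.0})
print(compare_to_baseline({"avg_response_s": 2.3, "requests_per_s": 110.0}))
# Roughly {'avg_response_s': 15.0, 'requests_per_s': -8.3}: both KPIs regressed.
```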

Software performance testing

If your project entails a major reengineering of the application, you need to reestablish the baseline for testing that application. A baseline is application-specific and is most useful for comparing performance across different versions. Sometimes, subsequent versions of an application are so different that previous baselines are no longer valid for comparisons. It is a good idea to ensure that you completely understand the behavior of the application at the time a baseline is created.

Failure to do so before making changes to the system in pursuit of optimization objectives is frequently counterproductive. At times you will have to redefine your baseline because of changes that have been made to the system since the baseline was initially captured. Benchmarking, in contrast to baselining, compares your application against an industry-recognized standard: you run tests that comply with a published benchmark specification and calculate a score. You can then compare your application against other systems or applications that also calculated their score for the same benchmark.

You may choose to tune your application performance to achieve or surpass a certain benchmark score. A benchmark is achieved by working with industry specifications or by porting an existing implementation to meet such standards.

Benchmarking entails identifying all of the necessary components that will run together, the market where the product exists, and the specific metrics to be measured. Benchmarking results can be published to the outside world. Since comparisons may be produced by your competitors, you will want to employ a strict set of standard approaches for testing and data to ensure reliable results.


Performance metrics may involve load time, number of transactions processed per unit of time, Web pages accessed per unit of time, processor usage, memory usage, search times, and so on.

Terminology

The following definitions are used throughout this guide. Every effort has been made to ensure that these terms and definitions are consistent with formal use and industry standards; however, some of these terms are known to have certain valid alternate definitions and implications in specific industries and organizations. Keep in mind that these definitions are intended to aid communication and are not an attempt to create a universal standard.

Capacity: The capacity of a system is the total workload it can handle without violating predetermined key performance acceptance criteria.

You perform capacity testing in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or increased volume of data. For example, to accommodate future loads, you need to know how many additional resources such as processor capacity, memory usage, disk capacity, or network bandwidth are necessary to support future usage levels. Capacity testing helps you to identify a scaling strategy in order to determine whether you should scale up or scale out.
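The guide does not prescribe a formula for this, but a rough linear extrapolation can make the scaling question concrete. The sketch below assumes processor usage grows linearly with user count, which is a strong simplification; the function and parameter names are invented.

```python
import math

def servers_needed(current_users: int, current_cpu_pct: float,
                   target_users: int, per_server_budget_pct: float = 70.0,
                   current_servers: int = 1) -> int:
    """Estimate servers for a future load, assuming CPU scales linearly with users."""
    cpu_per_user = (current_cpu_pct * current_servers) / current_users
    total_cpu_needed = cpu_per_user * target_users
    return max(current_servers, math.ceil(total_cpu_needed / per_server_budget_pct))

# Example: 2 servers at 55% CPU serve 1,000 users; plan for 5,000 users
# while keeping each server under a 70% CPU budget.
print(servers_needed(1000, 55.0, 5000, current_servers=2))  # -> 8
```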

Component test: A component test is any performance test that targets an architectural component of the application. Commonly tested components include servers, databases, networks, firewalls, and storage devices.

Endurance test: An endurance test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time. Endurance testing is a subset of load testing.

Investigation: An investigation is frequently employed to prove or disprove hypotheses regarding the root cause of one or more observed performance issues.

Latency: Latency is a measure of responsiveness that represents the time it takes to complete the execution of a request. Latency may also represent the sum of several latencies or subtasks.
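To illustrate latency as a sum of subtask latencies, here is a small sketch; the stage names and sleep calls merely stand in for real work such as database queries and page rendering.

```python
import time

def timed(label: str, func, timings: dict):
    """Run func, recording its elapsed time under the given label."""
    start = time.perf_counter()
    result = func()
    timings[label] = time.perf_counter() - start
    return result

timings = {}
timed("db_query", lambda: time.sleep(0.12), timings)  # stand-in for a DB call
timed("render", lambda: time.sleep(0.05), timings)    # stand-in for rendering
print(timings, "total:", round(sum(timings.values()), 3), "s")
```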

Metrics: Metrics are measurements obtained by running performance tests, as expressed on a commonly understood scale. Some metrics commonly obtained through performance tests include processor utilization over time and memory usage by load.

Performance testing: Performance testing is the superset containing all other subcategories of performance testing described in this chapter.

Performance budgets or allocations: Performance budgets or allocations are constraints placed on developers regarding allowable resource consumption for their component.

Performance goals: Performance goals are the criteria that your team wants to meet before product release, although these criteria may be negotiable under certain circumstances. For example, if a response time goal of three seconds is set for a particular transaction but the actual response time is 3.3 seconds, the team may still decide that the goal has effectively been met and release the product, deferring further tuning of that transaction.


Performance objectives: Performance objectives are usually specified in terms of response times, throughput (transactions per second), and resource-utilization levels, and typically focus on metrics that can be directly related to user satisfaction.

Performance requirements: Performance requirements are those criteria that are absolutely non-negotiable due to contractual obligations, service level agreements (SLAs), or fixed business needs.

Performance targets: Performance targets are the desired values for the metrics identified for your project under a particular set of conditions, usually specified in terms of response time, throughput, and resource-utilization levels.

Performance targets typically equate to project goals.

Performance testing objectives: Performance testing objectives refer to data collected through the performance-testing process that is anticipated to have value in determining or improving product quality. However, these objectives are not necessarily quantitative or directly related to a performance requirement, goal, or stated quality-of-service (QoS) specification.

Performance thresholds: Performance thresholds are the maximum acceptable values for the metrics identified for your project, usually specified in terms of response time, throughput (transactions per second), and resource-utilization levels. Performance thresholds typically equate to requirements.

The guide also lays out a set of performance-testing steps; you can use the steps as a baseline or to help you evolve your own process. Life cycle approach. The guide provides end-to-end guidance on managing performance testing throughout your application life cycle, to reduce risk and lower total cost of ownership (TCO).
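Returning to the performance thresholds definition above, the following sketch shows one way such maximum acceptable values might be checked automatically at the end of a test run; the metric names and limits are invented examples.

```python
# Maximum acceptable values for each metric (example figures only).
THRESHOLDS = {
    "p95_response_s": 3.0,        # response time
    "cpu_utilization_pct": 75.0,  # resource utilization
}

def violations(measured: dict) -> list[str]:
    """Return the metrics whose measured values exceed their thresholds."""
    return [m for m, limit in THRESHOLDS.items() if measured.get(m, 0) > limit]

print(violations({"p95_response_s": 3.4, "cpu_utilization_pct": 62.0}))
# ['p95_response_s'] -> this run violates the response-time threshold
```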

Each chapter within the guide is designed to be read independently. You do not need to read the guide from beginning to end to benefit from it. Use the parts you need. The guide is designed with the end in mind. If you do read the guide from beginning to end, it is organized to fit together in a comprehensive way.

The guide, in its entirety, is better than the sum of its parts. Subject matter expertise. The guide exposes insight from various experts throughout Microsoft and from customers in the field.

How to Use This Guide

You can read this guide from beginning to end, or you can read only the relevant parts or chapters. You can adopt the guide in its entirety for your organization or you can use critical components to address your highest-priority needs.

Ways to Use the Guide

There are many ways to use this comprehensive guidance. The following are some suggestions: Use it as a mentor. Use the guide as your mentor for learning how to conduct performance testing.

The guide encapsulates the lessons learned and experiences gained by many subject matter experts. Use it as a reference. Use the guide as a reference for learning the dos and don'ts of performance testing. Incorporate performance testing into your application development life cycle. Adopt the approach and practices that work for you and incorporate them into your application life cycle.

Use it when you design your performance tests. Design applications using the principles and best practices presented in this guide. Benefit from lessons learned. Create training. Create training based on the concepts and techniques used throughout the guide.

Organization of This Guide

You can read this guide from end to end, or you can read only the chapters you need to do your job.

Performance testing additionally tends to focus on helping to identify bottlenecks in a system, tuning a system, establishing a baseline for future testing, and determining compliance with performance goals and requirements. In addition, the results from performance testing and analysis can help you to estimate the hardware configuration required to support the application(s) when you go live to production operation.

Activity 1. Identify the Test Environment.


Identify the physical test environment and the production environment as well as the tools and resources available to the test team.

The physical environment includes hardware, software, and network configurations. Having a thorough understanding of the entire test environment at the outset enables more efficient test design and planning and helps you identify testing challenges early in the project.

In some situations, this process must be revisited periodically throughout the project's life cycle. Activity 2. Identify Performance Acceptance Criteria.

Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate what combination of configuration settings will result in the most desirable performance characteristics.

Activity 3. Plan and Design Tests. Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed; a sketch of such a usage model follows.
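A minimal sketch of such a usage model, assuming an e-commerce-style scenario mix; the scenario names, weights, and think-time range are illustrative assumptions.

```python
import random

# Weighted scenario mix for virtual users (weights sum to 1.0).
WORKLOAD_MODEL = [
    ("browse_catalog", 0.60),
    ("search",         0.30),
    ("checkout",       0.10),
]

def next_scenario() -> str:
    """Pick a scenario according to the workload mix."""
    scenarios, weights = zip(*WORKLOAD_MODEL)
    return random.choices(scenarios, weights=weights, k=1)[0]

def think_time() -> float:
    """Simulate user pauses of 1-5 seconds between actions."""
    return random.uniform(1.0, 5.0)

print(next_scenario(), round(think_time(), 1), "s pause")
```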

Activity 4. Configure the Test Environment. Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.

Activity 5. Implement the Test Design. Develop the performance tests in accordance with the test design.

Activity 6. Execute the Test. Run and monitor your tests. Validate the tests, test data, and results collection.

Execute validated tests for analysis while monitoring the test and the test environment.

Activity 7. Analyze Results, Report, and Retest. Consolidate and share results data. Analyze the data both individually and as a cross-functional team. Reprioritize the remaining tests and re-execute them as needed. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.
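To make Activities 5 and 6 concrete, here is a minimal standard-library load-generation sketch. Real executions would normally use a dedicated tool (such as JMeter, mentioned earlier); the URL and user counts are placeholders.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def one_request(url: str) -> float:
    """Time a single request; return NaN on failure so errors are visible."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start
    except OSError:
        return float("nan")

def run_load(url: str, virtual_users: int = 10, requests_each: int = 5):
    """Issue requests from a pool of concurrent workers and collect timings."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [pool.submit(one_request, url)
                   for _ in range(virtual_users * requests_each)]
        return [f.result() for f in futures]

timings = run_load("http://localhost:8080/")  # hypothetical endpoint
print(f"collected {len(timings)} samples")
```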
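And for Activity 7, a sketch of consolidating collected response times into percentile metrics and checking them against an accepted limit; the sample data and the 3-second limit are invented.

```python
import statistics

def percentile(values: list[float], pct: float) -> float:
    """Nearest-rank style percentile over a small sample."""
    ordered = sorted(values)
    index = min(len(ordered) - 1, round(pct / 100.0 * (len(ordered) - 1)))
    return ordered[index]

response_times = [0.8, 1.1, 0.9, 2.4, 1.0, 1.7, 0.95, 1.3, 3.1, 1.2]
summary = {
    "mean_s": statistics.mean(response_times),
    "p90_s": percentile(response_times, 90),
    "p95_s": percentile(response_times, 95),
}
print(summary)
print("within limits:", summary["p95_s"] <= 3.0)  # False here, so retest
```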

Feedback on the Guide

We have made every effort to ensure the accuracy of this guide and its companion content. If you have comments on this guide, send them to [email protected]. We are particularly interested in feedback regarding technical issues specific to recommendations, as well as usefulness and usability issues.

The Team Who Brought You This Guide

This guide was created by the following team members: J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, and Dennis Rea. Has this guide been helpful to you? Please tell us by writing a short summary of the problems you faced and how this guide helped you out.

In this chapter, you will:

- Learn the core activities of performance testing.
- Learn why performance testing matters.
- Learn about the relevance of project context to performance testing.
- Learn how tuning fits into the performance-testing cycle.

Performance testing is commonly conducted to accomplish the following:

- Assess production readiness
- Evaluate against performance criteria
- Compare performance characteristics of multiple systems or system configurations
- Find the source of performance problems
- Support system tuning
- Find throughput levels

This chapter provides a set of foundational building blocks on which to base your understanding of performance-testing principles, ultimately leading to successful performance-testing projects.

Additionally, this chapter introduces various terms and concepts used throughout this guide.

How to Use This Chapter

Use this chapter to understand the purpose of performance testing and the core activities that it entails.

To get the most from this chapter:

- Use the Project Context section to understand how to focus on the relevant items during performance testing.
- Use the Relationship Between Performance Testing and Tuning section to understand the relationship between performance testing and performance tuning, and to understand the overall performance-tuning process.
- Use the Performance, Load, and Stress Testing section to understand various types of performance testing.

- Use the Baselines and Benchmarking sections to understand the various methods of performance comparison that you can use to evaluate your application.
- Use the Terminology section to understand the common terminology for performance testing that will facilitate articulating terms correctly in the context of your project.

(Figure 1 of the guide, not reproduced here, depicts the core performance-testing activities, Activities 1 through 7, described earlier.)

Why Do Performance Testing?

Some more specific reasons for conducting performance testing include:

Assessing release readiness by: o Enabling you to predict or estimate the performance characteristics of an application in production and evaluate whether or not to address performance concerns based on those predictions.

Assessing infrastructure adequacy by: o Evaluating the adequacy of current capacity.

Assessing adequacy of developed software performance by: o Determining the application's desired performance characteristics before and after changes to the software.

Improving the efficiency of performance tuning by: o Analyzing the behavior of the application at various load levels.

Project Context

For a performance testing project to be successful, both the approach to testing performance and the testing itself must be relevant to the context of the project. Without an understanding of the project context, performance testing is bound to focus on only those items that the performance tester or test team assumes to be important, as opposed to those that truly are important, frequently leading to wasted time, frustration, and conflicts.

The project context is nothing more than those things that are, or may become, relevant to achieving project success. This may include, but is not limited to:

- The overall vision or intent of the project
- Performance testing objectives
- Performance success criteria
- The development life cycle
- The project schedule
- The project budget
- Available tools and environments
- The skill set of the performance tester and the team
- The priority of detected performance concerns
- The business impact of deploying an application that performs poorly

Some examples of items that may be relevant to the performance-testing effort in your project context include the following. Project vision.

Before beginning performance testing, ensure that you understand the current project vision. The project vision is the foundation for determining what performance testing is necessary and valuable. Revisit the vision regularly, as it has the potential to change as well.

Purpose of the system. Understand the purpose of the application or system you are testing. This will help you identify the highest-priority performance characteristics on which you should focus your testing. You will need to know the system's intent, the actual hardware and software architecture deployed, and the characteristics of the typical end user. Customer or user expectations. Keep customer or user expectations in mind when planning performance testing.

Remember that customer or user satisfaction is based on expectations, not simply compliance with explicitly stated requirements. Business drivers. It is important to meet your business requirements on time and within the available budget.

Reasons for testing performance. Understand the reasons for conducting performance testing very early in the project.

Failing to do so might lead to ineffective performance testing. These reasons often go beyond a list of performance acceptance criteria and are bound to change or shift priority as the project progresses, so revisit them regularly as you and your team learn more about the application, its performance, and the customer or user.

Value that performance testing brings to the project. Understand the value that performance testing is expected to bring to the project by translating the project- and business-level objectives into specific, identifiable, and manageable performance testing activities. Coordinate and prioritize these activities to determine which performance testing activities are likely to add value. Project management and staffing.

Understand the team's organization, operation, and communication techniques in order to conduct performance testing effectively. Understand your team's process and interpret how that process applies to performance testing.

Compliance criteria. Understand the regulatory requirements related to your project. Obtain compliance documents to ensure that you have the specific language and context of any statement related to testing, as this information is critical to determining compliance tests and ensuring a compliant product.

Also understand that the nature of performance testing makes it virtually impossible to follow the same processes that have been developed for functional testing. Project schedule. Be aware of the project start and end dates, the hardware and environment availability dates, the flow of builds and releases, and any checkpoints and milestones in the project schedule.

Cooperative Effort

Although tuning is not the direct responsibility of most performance testers, the tuning process is most effective when it is a cooperative effort between all of those concerned with the application or system under test, including:

- Product vendors
- Architects
- Developers
- Testers
- Database administrators
- System administrators
- Network administrators

Without the cooperation of a cross-functional team, it is almost impossible to gain the system-wide perspective necessary to resolve performance issues effectively or efficiently.

The performance tester, or performance testing team, is a critical component of this cooperative team as tuning typically requires additional monitoring of components, resources, and response times under a variety of load conditions and configurations. Generally speaking, it is the performance tester who has the tools and expertise to provide this information in an efficient manner, making the performance tester the enabler for tuning.

Tuning Process Overview

Tuning follows an iterative process that is usually separate from, but not independent of, the performance testing approach a project is following. The following is a brief overview of a typical tuning process: Tests are conducted with the system or application deployed in a well-defined, controlled test environment in order to ensure that the configuration and test results at the start of the testing process are known and reproducible.

It is not uncommon to make temporary changes that are deliberately designed to magnify an issue for diagnostic purposes, or to change the test environment to see if such changes lead to better performance. The cooperative testing and tuning team is generally given full and exclusive control over the test environment in order to maximize the effectiveness of the tuning phase.

Performance tests are executed, or re-executed after each change to the test environment, in order to measure the impact of a remedial change.
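The measure-change-measure loop just described can be sketched as follows. The Change class and the arithmetic standing in for a test run are stand-ins for real configuration changes and real test executions in your environment.

```python
class Change:
    """A stand-in for a remedial change (index, cache size, pool size, ...)."""
    def __init__(self, name: str, effect_s: float):
        self.name, self.effect_s, self.applied = name, effect_s, False
    def apply(self):
        self.applied = True
    def revert(self):
        self.applied = False

def tune(baseline_s: float, changes: list[Change]) -> float:
    """Apply changes one at a time; keep only those that cut response time."""
    best = baseline_s
    for change in changes:
        change.apply()
        measured = best + change.effect_s  # stand-in for re-running the test
        if measured < best:
            best = measured   # successful remedial change: keep it
        else:
            change.revert()   # unsuccessful change: discard it
    return best

changes = [Change("add db index", -0.4), Change("bigger cache", 0.1)]
print("tuned response time:", tune(2.0, changes), "s")  # -> 1.6 s
```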

The tuning process typically involves a rapid sequence of changes and tests. This process can take exponentially more time if a cooperative testing and tuning team is not fully available and dedicated to this effort while in a tuning phase. When a tuning phase is complete, the test environment is generally reset to its initial state, the successful remedial changes are applied again, and any unsuccessful remedial changes together with temporary instrumentation and diagnostic changes are discarded.

The performance test should then be repeated to prove that the correct changes have been identified. It might also be the case that the test environment itself is changed to reflect new expectations as to the minimal required production environment.

This is unusual, but a potential outcome of the tuning effort.

Performance, Load, and Stress Testing

Performance tests are usually described as belonging to one of the following three categories:

Performance testing. Performance testing is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product. In this guide, performance testing represents the superset of all of the other subcategories of performance-related testing.

Load testing. This subcategory of performance testing is focused on determining or validating performance characteristics of the system or application under test when subjected to workloads and load volumes anticipated during production operations.

Stress testing. This subcategory of performance testing is focused on determining or validating performance characteristics of the system or application under test when subjected to conditions beyond those anticipated during production operations. Stress tests may also include tests focused on determining or validating performance characteristics of the system or application under test when subjected to other stressful conditions, such as limited memory, insufficient disk space, or server failure.
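One common way to explore such conditions is to step up the load until a failure budget is exceeded. The sketch below simulates this with an invented failure model in place of a real system, so treat it as an outline of the approach rather than a working harness.

```python
import random

def send_request(load: int) -> bool:
    """Invented stand-in: failures become likely past ~80 concurrent users."""
    return random.random() > max(0.0, (load - 80) / 50)

def find_breaking_point(max_load: int = 200, step: int = 20,
                        max_failure_rate: float = 0.05) -> int:
    """Raise concurrency in steps; return the first load that breaks the budget."""
    for load in range(step, max_load + 1, step):
        results = [send_request(load) for _ in range(load)]
        failure_rate = results.count(False) / len(results)
        if failure_rate > max_failure_rate:
            return load
    return max_load

print("breaking point near", find_breaking_point(), "concurrent users")
```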

These tests are designed to determine under what conditions an application will fail, how it will fail, and what indicators can be monitored to warn of an impending failure.

Baselines

Creating a baseline is the process of running a set of tests to capture performance metric data for the purpose of evaluating the effectiveness of subsequent performance-improving changes to the system or application. A critical aspect of a baseline is that all characteristics and configuration options except those specifically being varied for comparison must remain invariant.

Once a part of the system that is not intentionally being varied for comparison to the baseline is changed, the baseline measurement is no longer a valid basis for comparison. As discussed at the beginning of this article, a baseline for a Web application can then be used to determine whether performance is improving or declining and to find deviations across different builds and versions.

Some considerations about using baselines include:

- A baseline can be created for a system, component, or application, and also for different layers of the application (a database, Web services, and so on).
- A baseline can set the standard for comparison, to track future optimizations or regressions. It is therefore important to validate that the baseline results are repeatable, because considerable fluctuations may occur across test results due to environment and workload characteristics.

- Baselines can help identify changes in performance that reflect degradation or optimization over the course of the development life cycle; identifying these changes in comparison to a well-known state or configuration often makes resolving performance issues simpler.

- Baseline assets should be reusable. Baselines are most valuable if they are created by using a set of reusable test assets that accurately simulate repeatable and actionable workload characteristics.

With a plan, test design, and the necessary environments in place, test designs are implemented for major tests, or work items are identified for imminent performance builds.


When viewed from a linear perspective, the approach starts by examining the software-development project as a whole, the relevant processes and standards, and the performance acceptance criteria for the system.
