
If you’ve been monitoring our roadmap or other announcement channels, you may have heard we’re bringing performance testing to Test Studio. This is an exciting step for us, and it brings some great functionality to help you continue boosting the quality of the software you’re delivering.

I thought I’d take some time to lay down some of my opinions and experiences around performance testing in general, as a lead-in to later posts that will showcase our performance testing functionality. I’ll be writing a number of posts over the next few weeks covering some fundamentals of performance testing.

The phrase “performance testing” can mean a great many things to different people in different scenarios, so covering a few of the different types of tests may be helpful.

Performance Testing is generally an umbrella term covering a number of different, more specific types of tests. I’ve also used the term to describe a very simple set of scenarios meant to provide a baseline for catching performance regressions.

Load Testing generally runs a number of concurrent users against the system to see how it performs and to find bottlenecks (there’s a minimal sketch of this idea after these definitions)

Stress Testing throws a huge number of concurrent users against your system in order to find “tipping points” – the point where your system rolls over and crashes under the sheer volume of traffic

Endurance/Soak Testing checks your system’s behavior over long periods to look for things like degradation, memory leaks, etc.
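To make these distinctions concrete, here’s a minimal load-test sketch in Python. This isn’t how Test Studio does it – it’s just the bare concept: a pool of concurrent simulated users hitting an endpoint while we record response times. The URL, user count, and request count are placeholders you’d replace with your own.

```python
# Minimal load-test sketch: N concurrent "users" each issue a series
# of requests, and we record per-request response times.
# TARGET_URL and the counts below are hypothetical placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"  # hypothetical endpoint
CONCURRENT_USERS = 25
REQUESTS_PER_USER = 10

def one_user(user_id):
    """Simulate one user issuing sequential requests; return timings."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = pool.map(one_user, range(CONCURRENT_USERS))
    all_times = sorted(t for user in results for t in user)
    print(f"requests: {len(all_times)}")
    print(f"median:   {all_times[len(all_times) // 2]:.3f}s")
    print(f"95th pct: {all_times[int(len(all_times) * 0.95)]:.3f}s")
```

Roughly speaking, stress testing is this same loop with the user count cranked up until something breaks, and endurance/soak testing is this same loop left running for hours or days while you watch resource usage.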

Wikipedia’s Software Performance Testing page has some very readable information on the categories.

You can also look at performance testing as examining a slice of your system’s performance: you can use a specific scenario to dive down into specific areas of your system, environment, or hardware.

Load, stress, and endurance testing are all that, but turned up to 11. (A reference to Spinal Tap for those who’ve not seen the movie.)

With that in mind, I generally think of performance testing in two categories: testing to ensure the system meets specified performance requirements, and testing to ensure performance regressions haven’t crept into your system. Those two may sound the same, but they’re not.

Performance testing to meet requirements means you’ll need lots of detail around expected hardware configurations, baseline datasets, network configurations, and user load. You’ll also need to ensure you’re getting the hardware and environment to support those requirements. There’s absolutely no getting around the need for infrastructure if your customers/stakeholders are serious about specific performance metrics!
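As a rough illustration of what “specified performance requirements” can look like in practice, here’s a sketch that checks measured results against explicit, stakeholder-agreed targets. Every metric name and number below is an invented placeholder; the point is simply that the targets are written down and checked mechanically.

```python
# Sketch of requirements-driven validation: explicit targets agreed
# with stakeholders, checked mechanically against measured results.
# All names and numbers are invented placeholders for the example.

# Targets, all "lower is better", measured at an agreed load profile
# (say, 500 sustained concurrent users on production-like hardware).
REQUIREMENTS = {
    "p95_response_seconds": 2.0,
    "error_rate_percent": 0.1,
    "peak_memory_mb": 4096,
}

def validate(measured: dict) -> list:
    """Return a human-readable failure for each requirement not met."""
    failures = []
    for metric, target in REQUIREMENTS.items():
        value = measured.get(metric)
        if value is not None and value > target:
            failures.append(f"{metric}: measured {value}, target <= {target}")
    return failures

# Example (invented numbers):
# validate({"p95_response_seconds": 2.4, "error_rate_percent": 0.05})
# -> ["p95_response_seconds: measured 2.4, target <= 2.0"]
```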

Performance testing to guard against regressions can be a bit more relaxed. I’ve had great success running a set of baseline tests in a rather skimpy environment, then simply re-running those tests on a regular basis in the exact same environment. You’re not concerned with specific metric data points in this situation – you’re concerned with trends. If your test suite shows a sudden jump in memory usage or IO contention, you know something’s changed in your codebase. This works fine as long as you keep the environment exactly the same from run to run, which is a perfect segue into my next point.
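Here’s a sketch of that trend-watching idea: compare each run’s metrics to a saved baseline and flag anything that has drifted past a tolerance. The metric names, the JSON baseline format, and the 15% threshold are all assumptions for the example, not a prescription.

```python
# Sketch of trend-based regression detection: compare a run's metrics
# against a stored baseline and flag relative degradation.
# Metric names, file format, and the 15% tolerance are illustrative.
import json

TOLERANCE = 0.15  # flag anything more than 15% worse than baseline

def check_regressions(baseline_path: str, current: dict) -> list:
    """Compare current metrics to the baseline; return regressions."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    regressions = []
    for metric, base_value in baseline.items():
        value = current.get(metric)
        if value is None or base_value == 0:
            continue
        change = (value - base_value) / base_value
        if change > TOLERANCE:  # higher = worse for these metrics
            regressions.append(f"{metric}: {change:+.0%} vs baseline")
    return regressions

# Example run (invented numbers):
# check_regressions("baseline.json",
#                   {"avg_response_ms": 310, "peak_memory_mb": 512})
```

Run something like this on a schedule against an unchanging environment and the trend line, not any individual number, tells you when a regression has crept in.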

Regardless of whether you’re validating performance requirements, guarding against regressions, or flooding your system in a load test designed to make your database server weep, you absolutely must approach your testing with a logical, empirical mindset. You’ll need to spend some time considering your environment, hardware, baseline datasets, and how to configure your system itself. I’ll be spending some more time on this topic in my next post.

About the author

Jim Holmes

has around 25 years of IT experience. He is co-author of "Windows Developer Power Tools" and Chief Cat Herder of the CodeMash Conference. He's a blogger and evangelist for Telerik’s Test Studio, an awesome set of tools to help teams deliver better software. Find him as @aJimHolmes on Twitter.

