West Wind WebSurge
Recommended load testing strategy
Recommended load testing strategy
  Carl Chambers
  Jul 12, 2022 @ 11:00am

Hey Rick,

What I'm about to ask may seem elementary to you but it's new to me, so I hope you can bear with me for a moment.

So I now have Web Surge installed and have set up a session that contains the requests that somewhat typify the sequence of requests I expect to be called by the client.


  • My app is currently on a shared server and has had very little traffic so far but that is soon to jump significantly with a large customer deciding to use my API.
  • Currently, my app is configured for 2 instances.
  • My app is primarily an API serving either JSON or an HTML fragment (not a full page).
  • I need to determine how much of a server upgrade I will need for the number of instances I will need based on my best guess regarding anticipated traffic.
  • I don't know what strategy to use to figure out my server needs.


  1. I'm not clear on how I should set the "Threads" setting in Web Surge or if I should experiment with it at all. Should this be set to match the number of server instances or is it part of a "try different values until something is observed" strategy?
  2. I see a considerable difference between the response times in Web Surge and what I see in the wwRequest log. If the response times in Web Surge are impacted by the speed of my internet connection and my own machine (which are both rather slow), how much weight should I give to the wwRequest response times vs. the Web Surge response times?
  3. It seems to me that I read somewhere that you recommend 1 server instance per core.
    Given that I currently have 2 instances, what testing strategy can I use to determine how many more server instances I will need for an anticipated number of requests?

Any advice would be greatly appreciated.


re: Recommended load testing strategy
  Rick Strahl
  Carl Chambers
  Jul 12, 2022 @ 12:02pm

Hi Carl,

There are no easy answers for load testing ever 😃 But there are lots of options.

There are really two testing scenarios that people typically use:

  • Max-out testing
  • Typical load testing

Max out testing

The first scenario is meant to determine what kind of load your server can handle: essentially you push the load tester until the server overloads, then back off until you have an application that is still running within your expected performance range.

Max out testing involves using no timeouts for requests, so as soon as one finishes you fire the next. IOW, you're firing requests as quickly as you can. The only thing you should tweak in that scenario is the number of threads. If your app is a FoxPro app you likely can't use too many threads simultaneously, since VFP won't process more simultaneous requests than your instance count. When I do load testing I usually start with 10 threads, then bump up and down to see where I get the optimum performance from the test in terms of request throughput. If the app is warmed up you can run the tests for a short period of time - a couple of minutes is usually good enough to get an idea of the overall perf profile.
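That bump-up-and-down tuning loop can be sketched in Python (this is not part of WebSurge; `fake_request` is a stand-in for a real HTTP call and the latency figure is made up):

```python
import time
from concurrent.futures import ThreadPoolExecutor

SERVER_LATENCY = 0.01   # pretend each request takes ~10ms of server time
TEST_DURATION = 0.5     # seconds per thread-count step (keep it short for a sketch)

def fake_request():
    """Stand-in for an HTTP call; replace with a real request to your API."""
    time.sleep(SERVER_LATENCY)

def measure_throughput(threads: int) -> float:
    """Fire requests back to back on N threads, return requests/sec."""
    count = 0
    deadline = time.monotonic() + TEST_DURATION
    def worker():
        nonlocal count
        while time.monotonic() < deadline:
            fake_request()
            count += 1   # racy in theory, good enough for a sketch
    with ThreadPoolExecutor(max_workers=threads) as pool:
        for _ in range(threads):
            pool.submit(worker)
    return count / TEST_DURATION

for threads in (1, 2, 5, 10):
    print(f"{threads:>2} threads: {measure_throughput(threads):6.0f} req/sec")
```

You'd watch where throughput stops climbing (and server CPU starts pegging) as the thread count grows - that knee is the sweet spot Rick describes.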

For FoxPro apps this type of processing is likely limited to a few hundred requests a second max (local). For .NET Core apps (without data) I've been able to push this to well over 80k requests a second. With real world DB processing etc. that will drastically drop, so it obviously depends heavily on your application. That's why I say - it depends and you have to play with the tests to find the sweet spot.

User Load Testing

This type of testing sets up a more realistic scenario where you say I have 100 users that are accessing the app at the same time, but you set the timeout to reflect real-world delays between requests. So 100 threads, but they have a delay of maybe 2,000ms (or more) between requests as it takes people time between individual navigations or API accesses.

This often results in much, much lower actual request counts despite the higher thread counts. You don't want to run too many threads, though, so if you have thousands of users you'll want to compensate by using fewer threads and reducing the interval between requests.
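A back-of-the-envelope way to reason about this (a sketch of the arithmetic, not anything WebSurge computes for you): each simulated user fires a request, waits for the response, then pauses for the think time, so requests/sec is roughly users divided by that cycle.

```python
def expected_req_per_sec(users: int, think_time_ms: float, response_ms: float) -> float:
    """Each simulated user: send request, wait response_ms, pause think_time_ms."""
    cycle_ms = response_ms + think_time_ms
    return users * 1000.0 / cycle_ms

# 100 users with a 2,000ms delay and ~150ms responses:
print(expected_req_per_sec(100, 2000, 150))   # ~46.5 req/sec

# Simulating 1,000 users with only 100 threads: shrink the delay 10x
print(expected_req_per_sec(100, 200, 150))    # ~285.7 req/sec
```

The second call shows the compensation trick: 100 threads with a tenth of the think time approximate the request rate of 1,000 real users.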


As to the request log times and the times in WebSurge - yes, that's the difference between request processing on the server, which measures only the server-side work and doesn't count any wire transfer time, and the actual time it takes to send a request to the server and get the response back, which includes the server's request time on top of the transfer. So yeah, that will be longer. Depending on the connection speed and latency of connection the wire time may be slower than the processing time.
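As a rough model of that breakdown (the function name and the sample numbers are illustrative, not measured values):

```python
def client_time_ms(server_ms: float, latency_ms: float,
                   response_bytes: int, bandwidth_mbps: float) -> float:
    """What the load tester sees: server processing time plus round-trip
    latency plus the time to push the response body over the wire."""
    wire_ms = response_bytes * 8 / (bandwidth_mbps * 1000)  # bits over Mbps -> ms
    return server_ms + latency_ms + wire_ms

# e.g. 8ms of server work, 60ms round-trip latency, 1,000-byte
# response on a 10 Mbps connection:
print(client_time_ms(8, 60, 1000, 10))   # 68.8 ms
```

Note how for small responses the fixed round-trip latency, not bandwidth, dominates the gap between the two measurements.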

FoxPro server apps that are CPU heavy will work best if they are set up for 2 instances per physical or virtual CPU. The doubling up allows for time in between requests and for the potential wait time for things like SQL connections etc. If your app is very IO heavy (ie. long SQL Server queries for example, or making external HTTP requests that take time), then you can bump the server count higher. The goal is to ensure you don't overload the machine CPUs and keep it running (on average) at no more than 50% load preferably less.
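That rule of thumb could be captured as follows (the IO-heavy multiplier is my own guess at "bump the server count higher", not a number from the post):

```python
def suggested_instances(cores: int, io_heavy: bool = False) -> int:
    """Rule of thumb from the post: ~2 instances per physical/virtual core
    for CPU-bound FoxPro apps; more when requests mostly wait on IO
    (long SQL queries, external HTTP calls)."""
    per_core = 3 if io_heavy else 2   # IO multiplier is an assumption - tune it
    return cores * per_core

print(suggested_instances(2))                 # 4
print(suggested_instances(2, io_heavy=True))  # 6
```

Whatever the starting count, the real check is the one Rick states: under test load the machine should average no more than ~50% CPU.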

All that said - most applications won't incur that kind of load. I've run a heavy load test on this message board a while back and even though performance was starting to max out around ~150 req/sec, the CPU load was actually minimal (in the 10% range with peaks into the 50s) with only two instances (which is what the site runs here).

The key for FoxPro server applications is to minimize long running requests, as those will block instances from processing more requests. If you have requests that take more than a couple of seconds you'll want to think about offloading those into some sort of async or queueing process.
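A minimal sketch of that queueing idea (assumed names, in Python for brevity - a real Web Connection app would use its own mechanism, e.g. a DBF-backed queue polled by a separate process):

```python
import queue
import threading
import time

jobs = queue.Queue()

def worker():
    """Background worker that grinds through long-running jobs so the
    web request handlers never block an instance for seconds at a time."""
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut down
            break
        time.sleep(0.05)         # stand-in for the slow work
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_request(job_id: str) -> dict:
    """The request handler just enqueues and returns immediately;
    the client polls (or gets notified) for the result later."""
    jobs.put(job_id)
    return {"status": "queued", "job": job_id}

print(handle_request("report-42"))   # returns right away
jobs.join()                          # sketch only; a real app wouldn't wait here
```

The point is that the instance is tied up for microseconds instead of seconds, so it stays free to serve other requests.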

+++ Rick ---

re: Recommended load testing strategy
  Carl Chambers
  Carl Chambers
  Jul 12, 2022 @ 03:12pm

Thanks Rick,

I think I follow the concepts you have laid out OK but I'm a bit shaky on my next steps.
For example, you said (emphasis added)...

Depending on the connection speed and latency of connection the wire time may be slower than the processing time.

For a simple starting point, I set up a session with only 1 request and ran it with 10 threads for 20 seconds.
I copied/pasted the wwRequestLog page into an Excel sheet to calculate the average response time. This calculation would be a bit off because most of the response times were displayed as 0 (i.e., less than 10ms) and the rest as .02 (20ms). The average came out to about 3.8ms.
Edit: This is likely rather low - an average of around 6 to 8ms is probably more realistic.

The Web Surge summary report showed an average response time of 152.97ms. So the wire time is approximately 40x (Edit: 20-25x) longer than the processing time with an average of 761 bytes served per request.
So I don't think I'm getting a real-world picture.

I don't have direct access to the server (only FTP access) so I cannot run Web Surge on the server itself nor can I simulate a connection remotely close to the speed you intimated with your above statement.

This all makes me wonder if I need to look largely at the wwRequestLog response times along with the amount of data downloaded per request and try to calculate the wire time based on typical internet connection speeds.
Does this sound feasible or am I off in my logic?


re: Recommended load testing strategy
  Rick Strahl
  Carl Chambers
  Jul 12, 2022 @ 04:44pm

The WebRequest log data is useless for stress testing because it includes only the FoxPro processing and not even quite all of that (there's still logic that returns the data, writes the log entry into the DBF etc.).

To that you have to add the actual Web Server calling the FoxPro server (via COM or Filebased transfer), encoding and decoding the response and then the wire transfer time. The WebRequest log gives you an idea how long your internal processing takes, but not the entire request time. For that you'd have to look in the Web Server logs or you can turn on detail logging in the Web Connection module (although that'll mix in all sorts of other stuff).

If the wire transfer time is slow you can compensate by using additional threads and/or reducing the timeout.

This is why stress testing is difficult. It's a discovery process, not a paint-by-numbers gig.

If your requests are as fast as you say, then your main concern will be CPU load so your goal should be testing for max load and checking what the CPU load is at varying levels of processing.

Note that more instances don't necessarily mean better performance even with extra cores. There's some info in the docs that talk about how to maximize your instance count.

+++ Rick ---

re: Recommended load testing strategy
  Carl Chambers
  Rick Strahl
  Jul 13, 2022 @ 01:34pm

Hi Rick,

I'll fumble around with Web Surge for a bit and see what I can come up with.
Thanks for your advice.

