COM servers per CPU
COM servers per CPU
  Paul Borowicz
  All
  Feb 3, 2025 @ 03:13am

Hello

We have been using Web Connect for over 20 years, and the question has come up of how many COM servers per CPU should be configured for our application.

The application uses the WC .NET Module with an Oracle backend database on a remote server, plus a VFP database holding supporting metadata for business rules. During normal business hours we see as many as 30,000 to 40,000 requests per day; about 90% of the requests are very fast, between 0.10 and 0.99 seconds. The remaining 10% are between 1 and 10 seconds, with a handful over 10 seconds.

The application is hosted in the cloud on a server that has only 2 CPUs. I have read through all of the Web Connect documentation, and the conclusion is that 1-3 COM servers should be configured, depending on how much FoxPro data is involved and how much of the workload drives CPU and RAM usage.

Right now I'm configuring only 2 COM servers per CPU--which gives a total of 4 COM servers. At the end of normal business hours, when I open the admin page, I can see all 4 COM servers have done a lot of work. The last COM server (i.e. load based) shows 1,300+ requests. This means that requests are queueing in IIS waiting for one of the 4 COM servers to become available.
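
As a rough sanity check on those numbers (assuming a 10-hour business day and rough averages for the response-time buckets above), a quick back-of-envelope calculation suggests the average concurrency is actually quite low, so the queueing must be happening during short bursts and around the slow requests:

```python
# Rough back-of-envelope concurrency estimate using Little's law:
# average busy servers = arrival rate * average service time.
# The numbers below are assumptions based on the traffic described above.

requests_per_day = 40_000          # peak daily volume
business_hours = 10                # assumed business window in hours
avg_fast = 0.5                     # assumed average of the 0.10-0.99s bucket (90%)
avg_slow = 3.0                     # assumed average of the 1-10s bucket (10%)

arrival_rate = requests_per_day / (business_hours * 3600)   # requests/second
avg_service_time = 0.9 * avg_fast + 0.1 * avg_slow          # seconds

busy_servers = arrival_rate * avg_service_time
print(f"Average arrival rate : {arrival_rate:.2f} req/sec")
print(f"Average service time : {avg_service_time:.2f} sec")
print(f"Average busy servers : {busy_servers:.2f}")
# ~1.1 req/sec * ~0.75 sec ≈ 0.8 busy servers on average, which means a
# 4-server pool only backs up during traffic bursts or long-running requests.
```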

If I add 3 COM servers per CPU I begin to see COM errors in WCERRORS.TXT during peak usage. CPU and RAM show normal usage and aren't being stressed. I'm aware that FoxPro is a single-threaded application and that this is contributing to the COM errors.

So far I have requested that 2 additional CPUs be added to these cloud servers--so I can configure 8 COM servers--but is there anything else that can be done on the hardware end to increase performance?

Thanks!

--Paul

re: COM servers per CPU
  Rick Strahl
  Paul Borowicz
  Feb 3, 2025 @ 05:50pm

Not sure I understand what you mean by your 4 servers and the last one with 1,300+ requests. If the server pool is handling the load well, the number of requests handled should drop off a bit towards the last servers. If all servers are equally loaded, that likely means you're pushing those servers pretty hard.

Ideally what you want is to have the last server in your list be very lightly loaded compared to the others. 1,300+ requests doesn't mean much on its own - it's gotta be taken in relation to the total number of requests.

If you have two CPUs and you're hitting a SQL backend like Oracle, and you are fairly certain that most of the process time is spent on SQL requests waiting on Oracle, then you can bump the number of servers higher by a bit.

For a fairly busy site, 2 CPUs is not a lot to go with for a FoxPro app or any app really, mainly because 1 CPU typically needs to deal with OS housekeeping, especially if you have a desktop up and running (ie. not running logged off). Two CPUs is what I consider the minimum if your site is fairly busy. 4 CPUs (or threads if you're using physical CPUs) will give you significantly more headroom, especially if there's other stuff going on on that same machine.

It also depends on how well those CPUs actually perform. Not all 'virtual' CPUs perform the same on different platforms, and I've found that platforms like Azure and managed AWS in particular have a ton of overhead that diminishes those CPU resources compared to some lighter-weight providers like Vultr, Digital Ocean, etc. Hard to make that call though - the only way you really know is by testing side by side.

As to the errors - if you're getting timeout errors in the logs, then that's a clear sign that requests are not making it through the queue quickly enough to get a server assigned and processed. If that's the case, it's time to look through the logs, analyze your slow requests, and see if they're causing the pile-ups. A lot of times slow requests cause users to click multiple times, which can very easily overload servers - in the logs that shows up as multiples of the same slow request in close succession. Always start there...

10-second requests are very long for server processing, and if that's the case you should consider either making those requests async, or perhaps running them using single server instances to keep the main server pool from being deadlocked. You may still get slow performance if CPU is a problem, but at least requests start processing.
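
As an illustration of that kind of log analysis, a small script along these lines can flag slow requests that repeat for the same URL within a short window. The log format here (timestamp, URL, duration columns in a CSV export) is just an assumption, so adjust it to whatever your request log actually contains:

```python
# Sketch: flag slow requests that repeat for the same URL in close succession,
# which is the typical signature of users re-clicking on a slow page.
# The CSV format (timestamp,url,seconds) is assumed - adapt to your actual log.
import csv
from collections import defaultdict
from datetime import datetime, timedelta

SLOW = 5.0                      # seconds - what we consider a "slow" request
WINDOW = timedelta(seconds=30)  # repeats within this window look like re-clicks

def find_reclick_candidates(logfile):
    slow_by_url = defaultdict(list)
    with open(logfile, newline="") as f:
        for row in csv.DictReader(f):          # expects timestamp,url,seconds columns
            if float(row["seconds"]) >= SLOW:
                ts = datetime.fromisoformat(row["timestamp"])
                slow_by_url[row["url"]].append(ts)

    for url, times in slow_by_url.items():
        times.sort()
        repeats = sum(1 for a, b in zip(times, times[1:]) if b - a <= WINDOW)
        if repeats:
            print(f"{url}: {len(times)} slow hits, "
                  f"{repeats} within {WINDOW.seconds}s of another")

find_reclick_candidates("requestlog.csv")   # hypothetical export of the request log
```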

+++ Rick ---

re: COM servers per CPU
  Paul Borowicz
  Rick Strahl
  Feb 9, 2025 @ 11:07am

Rick

Thanks for the feedback! I did some additional homework and have these questions.

  • We're still on WC 5.5; since this version, have there been performance enhancements in the WC core libraries (i.e. wwServer, etc.) and the .NET Module webconnectionmodule.dll that would be beneficial?
  • Back in the day (circa 2000) when I was first introduced to Web Connect, it was always recommended to configure only 2 WC COM servers per CPU. Now with the newest CPUs in the cloud we're seeing multiple cores per CPU. For example, on our cloud server we have two [Intel Xeon Platinum 8175M 2.5GHz] CPUs, which I believe have 24 cores. Does this mean I can increase my COM server count from 2 per CPU to 4 per CPU--perhaps even more?
  • I'm still learning about STA and how many servers can be configured on a multi-core CPU. Are STA limitations based on the core count of the CPU?

Thanks!

--Paul Borowicz

re: COM servers per CPU
  Rick Strahl
  Paul Borowicz
  Feb 9, 2025 @ 12:27pm

CPU Counts

CPU count is different today - what matters is the active core count, as each of these cores can run a separate CPU 'thread' (not the same as a process thread in Windows). If you're running virtual machines, though, they go by cores rather than CPUs, so you have to be sure you know what's what.
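
If it helps, here's a quick way to see what a VM actually reports. This uses the third-party psutil package for the physical count; os.cpu_count() by itself only reports logical processors:

```python
# Sketch: report logical vs. physical core counts as seen by the OS/VM.
# psutil is a third-party package (pip install psutil); os.cpu_count()
# alone only reports logical processors.
import os
import psutil

logical = os.cpu_count()
physical = psutil.cpu_count(logical=False)

print(f"Logical processors : {logical}")
print(f"Physical cores     : {physical}")
```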

Minimum core count should be 2, as 1 thread is usually tied up with system-related processing, especially on machines that have an active desktop running. Busy sites should probably have 3 or 4 cores to help balance the load better. I run on a 4-core virtual machine, but I also run about 20 different applications (a few FoxPro and a lot of .NET apps) on that server, some of which are fairly busy. The extra CPUs go a long way towards spreading out the load more evenly.

Again, the general rule is 2 instances per core/CPU for mostly CPU-bound (ie. FoxPro) processing. If you're mostly hitting a backend for data access, that effectively offloads most of the processing overhead to the SQL engine, and you can easily bump that number up.
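
One way to put a rough number on "bump that number up" - and this is just a heuristic sketch, not an official Web Connection formula - is to scale the instance count by how much of a typical request is spent waiting on the backend rather than doing CPU work in FoxPro:

```python
# Heuristic sketch (not an official Web Connection formula): size the instance
# count by how much of a request is spent waiting on the SQL backend vs. doing
# CPU work in FoxPro. All numbers are illustrative assumptions.

cores = 2
base_per_core = 2        # the usual rule of thumb for CPU-bound FoxPro work
cpu_time = 0.15          # assumed seconds of FoxPro/CPU work per request
wait_time = 0.60         # assumed seconds spent waiting on Oracle per request

wait_ratio = wait_time / (cpu_time + wait_time)
# The more time spent waiting, the more instances each core can keep busy.
instances = round(cores * base_per_core * (1 + wait_ratio))

print(f"Wait ratio        : {wait_ratio:.0%}")
print(f"Suggested servers : {instances}")
# With ~80% of each request spent waiting, this suggests ~7 instances on
# 2 cores - still something to verify with load testing, not a hard rule.
```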

Testing for Load

If you're not sure, I recommend you do some load testing: load up the app with a stress tester like West Wind WebSurge or similar and see how the load settles when you push the app to run at ~70% CPU load. See what amount of traffic you are generating and how it spreads out over the app pool. What I do is run it to the point where the last server in the pool starts getting a steady flow of hits, but still mostly fewer than the other servers, which are likely fully loaded at that point - that means the pool is starting to saturate and you're getting near the limits. You'll want to test typical requests and avoid throwing the really long ones in there. At that point you can experiment, add or remove server instances, and see how that affects overall performance. More is not always better...
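
WebSurge is the right tool for this, but just to illustrate the idea, a throwaway script like the following (the URL, concurrency and duration are placeholders) can push a steady stream of typical requests at the app while you watch CPU usage and the per-server hit counts on the admin page:

```python
# Throwaway load-generation sketch - WebSurge or a similar tool is the better
# option; this just illustrates driving steady concurrent traffic at one URL.
# URL and settings are placeholders; point it at a typical (fast) request.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost/myapp/somepage.wwd"   # hypothetical test URL
CONCURRENCY = 8                               # simultaneous client threads
DURATION = 60                                 # seconds to run

def worker(stop_at):
    count = 0
    while time.time() < stop_at:
        try:
            with urllib.request.urlopen(URL, timeout=30) as resp:
                resp.read()
            count += 1
        except Exception:
            pass            # count only successful hits for this rough test
    return count

stop_at = time.time() + DURATION
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    totals = list(pool.map(worker, [stop_at] * CONCURRENCY))

print(f"Completed {sum(totals)} requests in {DURATION}s "
      f"({sum(totals) / DURATION:.1f} req/sec)")
```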

Newer Web Connection Versions

I don't think newer versions of Web Connection necessarily improve raw performance, but there are improvements in the latest versions that make server loading faster and more consistent. There are also big enhancements in server management, but most of those are tooling features that come from the new project setup and won't benefit old projects unless you do some work to retrofit them - basically recreate the project and backfill the application, or manually create or copy in the management files.

There are definite benefits, but you're not going to see anything ground shaking that changes the performance.

+++ Rick ---
