Ericom Blog
 
 

ET and CPUs – Load Balancing from Outer Space


Do you believe in space aliens? You too can join the search for extraterrestrial life by participating in the SETI@home project, which sifts through radio signals originating from outer space, attempting to detect evidence of extraterrestrial technology. The problem is that space is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space. As a result, analyzing the data collected from every direction in space over a wide range of frequencies requires a lot of computational power. In the past this power was provided by supercomputers, but supercomputers are very expensive, and even they may not be up to such a humongous task. And so, a couple of very smart guys from UC Berkeley came up with the cool idea of distributing this computational effort among millions of personal computers connected via the Internet, thus creating a virtual super-supercomputer. To participate in this effort and join this virtual network, all you need to do is download and install the SETI@home client onto your computer.

The SETI@home project is not so new; it has been running since 1999, and you may be wondering why I’m bringing it up in the context of load balancing. The connection is that the SETI@home client utilizes unused processor cycles to perform its computations. This means it can do its work without interfering with yours (I would still advise checking your corporate policy regarding such applications before installing the SETI@home client on your work PC). To see where these unused cycles come from, open Task Manager, select the “Processes” tab and sort by CPU usage (you will need to click the column header twice to sort the processes in descending order). At the top of the list you will see an item entitled “System Idle Process”. This is a special process that eats up all the processor cycles left unused by the other running processes. The SETI@home client simply takes cycles away from the System Idle Process, and from no other.

The funny thing is that in most cases, over time, the SETI@home client will get more CPU cycles than any other running process. You can easily prove this to yourself by keeping Task Manager open – you will see that the System Idle Process remains at the top of the list more often than not. Which brings us, at last, to load balancing: if the CPU is indeed such an underused resource, why load balance on CPU at all? There are two main reasons for using CPU load as a metric for Terminal Services load balancing. The first is that the CPU is generally more heavily used in a Terminal Server environment than on a PC. Indeed, if you were to install SETI@home onto a Terminal Server, your contribution to alien hunting would be rather small. This is because on a Terminal Server the same CPU (or CPUs) services multiple users at the same time, and as a result, a significantly larger number of processes run concurrently.

The second reason has to do with the CPU usage pattern. If you switch Task Manager over to the “Performance” tab you’ll see graphs displaying CPU and memory usage over time. What’s interesting here is that while memory usage is fairly constant, CPU usage continually oscillates between peaks of high usage and (hopefully) longer periods of low to medium use. During the peaks the system may respond sluggishly to user interaction. In a local computing environment, since there is only one user, these peaks generally occur in response to that user’s actions, and as a result are expected and anticipated. In an SBC environment, on the other hand, such peaks may occur in response to the actions of other users, and thus be totally unexpected, annoying and even disruptive (much as you don’t get car sick when you are driving the car, but can easily become car sick when you are a passenger and somebody else is driving). For these two reasons, we believe that CPU load must be one of the metrics used by the default load evaluator.

Actually, rather than use the raw CPU load, PowerTerm WebConnect derives from it the total available CPU cycles, in MHz, for each Terminal Server. This measurement is performed by a lightweight agent installed on each server. The agent reads the appropriate performance counters and applies the following formula to the data retrieved for each CPU core:

core speed (MHz) * (100 – usage (percent)) / 100

The agent then sums the results across all cores and transmits the total to a central load balancing server at a rate of up to once a second. The load balancing server averages these measurements over a period of 30 seconds (both the transmission rate and the averaging period are customizable). The measurements are averaged over time to avoid skewed results due to the above-mentioned oscillations in CPU usage.
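To make the arithmetic concrete, here is a minimal Python sketch of the per-core formula and the rolling average described above. The function names, the sample core readings, and the one-sample-per-second assumption are all mine for illustration; this is not the actual agent code.

```python
from collections import deque

def available_mhz(cores):
    """Sum the unused cycles (in MHz) across all CPU cores.

    `cores` is a list of (speed_mhz, usage_percent) pairs, as read
    from the per-core performance counters.
    """
    return sum(speed * (100 - usage) / 100 for speed, usage in cores)

# A 30-sample rolling window mirrors the 30-second averaging period,
# assuming one sample is transmitted per second.
window = deque(maxlen=30)

def record_sample(cores):
    """Record one measurement and return the smoothed available MHz."""
    window.append(available_mhz(cores))
    return sum(window) / len(window)

# Example: two 3000 MHz cores at 40% and 90% usage
# -> 3000 * 0.6 + 3000 * 0.1 = 2100 MHz available this second
print(record_sample([(3000, 40), (3000, 90)]))
```

The rolling window is what keeps a momentary CPU spike from making a busy server look idle (or vice versa) at the instant a placement decision is made.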

An obvious question is how to calculate the total load given three such disparate values as available memory (MB), available CPU (MHz) and session count. The PowerTerm WebConnect load balancer does this by normalizing each measurement to a value in the range of 0 to 1. For the memory and CPU data this is done by finding the maximum value across the servers and then dividing each measurement by it:

CPU score = available CPU (MHz) / maximum available CPU (MHz)

For the session count the same formula is used, but the result is then subtracted from 1, so that the server with the fewest sessions gets the top score for that metric.
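The two normalization rules can be sketched in a few lines of Python. The function names and the sample numbers are invented for the example:

```python
def normalize(values):
    """Scale measurements into [0, 1] by dividing by the maximum.

    Used for available memory (MB) and available CPU (MHz):
    the server with the most headroom scores 1.0.
    """
    top = max(values)
    return [v / top for v in values]

def session_scores(counts):
    """Invert the normalized session count (1 - v), so the server
    with the fewest sessions gets the top score."""
    top = max(counts)
    return [1 - c / top for c in counts]

# Three servers with 2100, 1400 and 700 MHz available:
cpu_scores = normalize([2100, 1400, 700])   # -> [1.0, 0.666..., 0.333...]

# The same servers hosting 10, 5 and 20 sessions:
sess = session_scores([10, 5, 20])          # -> [0.5, 0.75, 0.0]
```

Note that these scores are relative to the current farm, not absolute: a server's score changes whenever the best-off server changes.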

The value of each metric is multiplied by its relative weight to determine the load score for the Terminal Server. As I described in my previous post, the default load evaluator used by PowerTerm WebConnect is memory biased: 60% memory, 30% CPU and 10% session count. But, as I also explained, when the Terminal Servers have more than 350MB of available memory they all get the same score for the memory portion. In such cases the formula effectively becomes 75% CPU and 25% session count. Because of this, if a particular Terminal Server has significantly more available CPU cycles it will be selected; otherwise, if all the Terminal Servers have similar CPU loads, the Terminal Server with the fewest sessions will be selected. I swear we came up with this technique ourselves and did not get it from Area 51!
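Putting the weights together, a hedged sketch of the final scoring step (server names and scores are made up; the weights are the defaults from the post):

```python
# Default memory-biased weights: 60% memory, 30% CPU, 10% session count.
WEIGHTS = {"memory": 0.60, "cpu": 0.30, "sessions": 0.10}

def load_score(mem_score, cpu_score, session_score):
    """Weighted sum of the three normalized metrics (each in [0, 1])."""
    return (WEIGHTS["memory"] * mem_score
            + WEIGHTS["cpu"] * cpu_score
            + WEIGHTS["sessions"] * session_score)

# When every server has more than 350MB free, all memory scores are
# equal (here 1.0), so only CPU and session count differentiate them:
# a 0.30 : 0.10 split, i.e. effectively 75% CPU and 25% sessions.
servers = {
    "ts1": load_score(1.0, 1.0, 0.5),   # more free CPU, more sessions
    "ts2": load_score(1.0, 0.7, 1.0),   # less free CPU, fewest sessions
}
best = max(servers, key=servers.get)    # highest score wins: "ts1"
```

In this example ts1's CPU advantage (0.3 vs. 0.21) outweighs ts2's session advantage (0.1 vs. 0.05), which is exactly the behavior described above.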

This post has been a tribute to the 60th anniversary of the Roswell incident.

Profile:
Dan Shappir is responsible for all aspects of product design, software development and maintenance of Ericom’s product lines. Mr. Shappir joined Ericom in 2001 and brings over 15 years of experience in computer programming and in the design and architecture of software products. Mr. Shappir holds a B.Sc. in Computer Science (with honors) from the Hebrew University of Jerusalem, Israel, and an M.Sc. in Computer Science (with honors) from Tel Aviv University, Israel.