Apr. 1, 2014

Stanford Innovation Uses Netflix-Style Algorithm for Cloud Efficiency

New software developed at Stanford University promises to improve the efficiency of cloud systems by using an algorithm modeled on Netflix's recommendation engine. Experts caution, however, that the innovation needs significant work before it becomes a viable solution.

After finding that servers typically use only about 20 percent of their capacity on most workloads, associate professor Christos Kozyrakis and doctoral student Christina Delimitrou built a cluster management system designed to make better use of server capacity, according to a report from the Stanford News. In theory, this would accelerate processing cycles and reduce the time jobs spend on servers, saving both money and energy.

This is a potentially great innovation, because the way server capacity is provisioned can be wasteful. Developers reserve server space in advance by guessing how much they will need, often buying extra to avoid slowdowns. As a result, large portions of server capacity go unused. It’s like reserving a room for an open-invitation party: you plan for the highest possible attendance, not for the number of people who would fit the space perfectly.

The Quasar Method of Cloud Efficiency

Stanford’s Quasar program promises a better way. It identifies app types and assigns them to the minimum number of servers able to run them side by side, by working out which apps perform best together on the same machine. This matters because apps often interfere with one another when they compete for the same resources; data mining and web-search calculations, for example, likely stress different parts of a server and so can share one.
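Quasar’s actual placement logic isn’t detailed in the Stanford report, but the colocation idea can be sketched with a toy greedy scheduler: each app declares the server resource it stresses most, and the scheduler packs apps onto the fewest servers while keeping apps that hammer the same resource apart. The class names and resource labels below are illustrative assumptions, not Quasar’s real interfaces.

```python
# Toy sketch of interference-aware colocation (illustrative only; not Quasar's real algorithm).
# Each app declares the server resource it stresses most; the scheduler greedily packs apps
# onto the fewest servers while keeping two apps that stress the same resource apart.

from dataclasses import dataclass, field

@dataclass
class App:
    name: str
    hot_resource: str  # e.g. "cpu", "memory_bandwidth", "disk_io", "network"

@dataclass
class Server:
    apps: list = field(default_factory=list)

    def conflicts_with(self, app: App) -> bool:
        # Two apps that hammer the same resource interfere; keep them on separate servers.
        return any(existing.hot_resource == app.hot_resource for existing in self.apps)

def place(apps):
    servers = []
    for app in apps:
        target = next((s for s in servers if not s.conflicts_with(app)), None)
        if target is None:  # no compatible server yet: open a new one
            target = Server()
            servers.append(target)
        target.apps.append(app)
    return servers

if __name__ == "__main__":
    workload = [
        App("web-search", "cpu"),
        App("data-mining", "disk_io"),
        App("video-encode", "cpu"),
        App("memcached", "memory_bandwidth"),
    ]
    for i, server in enumerate(place(workload)):
        print(f"server {i}: {[a.name for a in server.apps]}")
```

Run as-is, the sketch packs the four sample apps onto two servers, co-locating the apps whose hot resources differ.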

Quasar also assigns servers differently. It performs “performance-based allocation” of data center resources, in which developers specify the performance level their apps require, according to the Stanford News story.

For instance, if an app involves queries from users, how quickly must the app respond, and to how many users? Under this approach, the cluster manager would have to make sure there was enough server capacity in the data center to meet these requirements.
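The story doesn’t show what such a requirement looks like in practice, but it could be as simple as a declared latency bound and expected query rate that the cluster manager checks against spare capacity before admitting the app. The field names and numbers in this sketch are hypothetical, not Quasar’s actual specification format.

```python
# Hypothetical sketch of a performance-based reservation: the developer states what the app
# must achieve (latency, queries per second), and the cluster manager decides whether the
# data center has enough spare capacity to honor it. Field names and numbers are assumptions.

from dataclasses import dataclass

@dataclass
class PerformanceTarget:
    app: str
    p99_latency_ms: float   # worst acceptable 99th-percentile response time
    queries_per_sec: int    # expected peak user load

@dataclass
class ClusterState:
    spare_qps_capacity: int  # aggregate headroom across servers, in queries per second

def admit(target: PerformanceTarget, cluster: ClusterState) -> bool:
    """Admit the app only if the cluster can absorb its peak load."""
    if target.queries_per_sec > cluster.spare_qps_capacity:
        return False
    cluster.spare_qps_capacity -= target.queries_per_sec
    return True

cluster = ClusterState(spare_qps_capacity=50_000)
print(admit(PerformanceTarget("search-frontend", p99_latency_ms=200, queries_per_sec=30_000), cluster))  # True
print(admit(PerformanceTarget("ad-server", p99_latency_ms=100, queries_per_sec=30_000), cluster))        # False: no headroom left
```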

Quasar uses collaborative filtering to predict how apps will perform on certain types of servers. The technique is similar to the one Netflix uses to recommend movies based on a member’s viewing history.
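Collaborative filtering of this kind can be illustrated with a small matrix-completion example: rows are apps, columns are server configurations, most performance measurements are missing, and a low-rank reconstruction fills in the gaps much as Netflix predicts unseen movie ratings. The sketch below uses a generic truncated SVD on made-up numbers; it is not Quasar’s actual filtering code.

```python
# Generic collaborative-filtering sketch (not Quasar's implementation): predict how an app
# would perform on server configurations it has never run on, from a sparse matrix of
# measured performance scores, via a rank-k SVD on a mean-filled matrix.

import numpy as np

# Rows: apps, columns: server configs. np.nan marks configurations never profiled.
measured = np.array([
    [0.9,    np.nan, 0.4,    np.nan],
    [np.nan, 0.8,    np.nan, 0.3   ],
    [0.85,   0.75,   np.nan, np.nan],
])

# Fill missing entries with the column mean as a starting guess.
col_means = np.nanmean(measured, axis=0)
filled = np.where(np.isnan(measured), col_means, measured)

# Low-rank reconstruction: keep the top-k singular values/vectors.
k = 2
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
predicted = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Measured scores stay as-is; the reconstruction supplies the missing ones.
print(np.round(np.where(np.isnan(measured), predicted, measured), 2))
```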

Independent analysis has already revealed that the process optimizes server performance, which advocates say could reduce the energy used to run servers.

Quasar’s Not Perfect

There are potential problems with the Quasar model. Wolf Halton, co-author of Computer and Internet Security, says that even if optimized servers reduce power consumption by two-thirds, the heavier utilization will “increase the heat produced by the servers and so will increase the cost of cooling the servers.”

Since most data centers cool server rooms inefficiently (usually by chilling large volumes of air around much smaller servers), the power needed to run and cool a server over its life span can cost more than the server itself. More testing of this cooling issue is clearly critical.
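A rough back-of-envelope calculation shows why the cooling overhead matters so much. Every figure below (server price, power draw, electricity rate, life span and PUE) is an illustrative assumption rather than data from the article.

```python
# Back-of-envelope lifetime cost sketch. Every number here is an illustrative assumption:
# a $3,000 server drawing 500 W, run continuously for 5 years at $0.12/kWh, in a facility
# with a PUE of 1.8 (cooling and overhead nearly doubling the IT power bill).

server_price_usd = 3_000
power_draw_kw = 0.5
hours = 5 * 365 * 24            # five-year life span, running around the clock
rate_usd_per_kwh = 0.12
pue = 1.8                       # power usage effectiveness: facility power / IT power

it_energy_cost = power_draw_kw * hours * rate_usd_per_kwh
total_energy_cost = it_energy_cost * pue

print(f"IT power over life span:        ${it_energy_cost:,.0f}")
print(f"Power + cooling over life span: ${total_energy_cost:,.0f}")
print(f"Server purchase price:          ${server_price_usd:,.0f}")
```

Under these assumptions the lifetime power-plus-cooling bill (about $4,700) exceeds the purchase price of the server, which is the point Halton is making.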

There are also security and privacy issues to think about. Halton warns that “data may not be effectively segregated and ownership and access issues exist” in this model, just like in other virtualization schemes. This is an important issue for both private and public hosts.

“If two or three times as many users' data and processes are on any given server in the farm, then [there is] two or three times the probability of malicious code appearing on any given server,” Halton says.

Developers should be thinking about security features that can keep pace with this potentially higher risk.

How Innovative Is Quasar, Really?

There’s also a potential problem with collaborative filtering’s use of average app performance as the only expected performance level.

“I do not see how [filtering] gives you all that much better a usage level. Test loads may be less-than-accurate representations of reality,” Halton says.

In addition, this new model may not be much of an improvement on tools already available. Burst capability features allow sites to maximize uptime and coolly handle traffic peaks. The Linux kernel task scheduler is one such system, as are OpenVZ for virtualization and Proxmox VE with KVM virtual machines.

Besides the Quasar model, other resource schemes could improve server efficiency. Monitoring server usage patterns more closely and automating resource release the moment a project ends are two such approaches.

“I see a lot of servers in data centers that are just running because there is no policy to release computing resources when they are no longer in use and servers that are running past their operating system end of life,” Halton says.  
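Automated reclamation of the kind Halton describes doesn’t require anything exotic: a periodic job that flags servers with no recent activity, or with operating systems past end of life, as candidates for release would catch many of these cases. The thresholds and record fields below are assumptions made for illustration.

```python
# Illustrative sketch of an idle-resource reclamation policy (assumed thresholds and fields):
# flag any server whose average CPU utilization has stayed under 5% over the last 30 days,
# or whose operating system is past end of life, as a candidate for release.

from dataclasses import dataclass
from datetime import date

@dataclass
class ServerRecord:
    name: str
    avg_cpu_util_30d: float   # fraction of capacity used over the last 30 days
    os_end_of_life: date

def reclamation_candidates(servers, today=None, idle_threshold=0.05):
    today = today or date.today()
    for s in servers:
        reasons = []
        if s.avg_cpu_util_30d < idle_threshold:
            reasons.append("idle")
        if s.os_end_of_life < today:
            reasons.append("OS past end of life")
        if reasons:
            yield s.name, ", ".join(reasons)

fleet = [
    ServerRecord("build-07", 0.02, date(2030, 1, 1)),
    ServerRecord("db-01", 0.61, date(2014, 4, 1)),
    ServerRecord("web-03", 0.35, date(2030, 6, 1)),
]
for name, reason in reclamation_candidates(fleet):
    print(name, "->", reason)
```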

Instead of waiting for a perfect new scheme that will solve the server efficiency problem, it seems developers have a lot of work to do — on themselves and on their efficiency guidelines.

[image: kjekol/iStock/ThinkStockPhotos]

ABOUT THIS CONTRIBUTOR
Jose is a technology expert with experience in covering cloud computing, startups and enterprise IT.