Making the Grade: Cost Savings Upgrades for Today's Data Centers

Feature Article: January 2014

About the Author:

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions built to store huge data sets cost-effectively. From 2004 to 2010, Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets with high availability and scalability requirements. Previously, Stefan worked on system and software architecture for several projects at Ericsson.

Cloud computing is paving the way for a whole new way of experiencing content. As companies increasingly take advantage of the benefits that cloud computing has to offer, the ability to access data has migrated from desktops alone to the tiny computers we all carry in our pockets. This great migration to the cloud has created new opportunities and challenges for the service providers' data centers that support it. Sustaining a competitive advantage in this growing market has become increasingly difficult, and service providers are taking a closer look at their infrastructures for ways to reduce costs while maintaining consistent performance.

If the current situation persists, offering low-cost cloud services will pose serious business challenges for service providers. Upgrading, maintaining and scaling cloud infrastructures is an expensive endeavor, and shifting the cost of those upgrades to consumers only burdens them with storage capacity limitations and higher prices. Such difficulties are motivating service providers to find new ways to improve efficiency in the data center while keeping costs as low as possible.

Considering the Costs and Benefits

As one response to the increase of online activity, many data centers are moving toward centralizing data and making it accessible over networks. In doing so, they are able to reduce operational expenses while increasing efficiency by allowing easier accessibility. Centralizing equipment allows service providers to deliver enhanced performance and reliability.

However, these added benefits also make scaling the infrastructure more costly and difficult to accomplish. Improving efficiency within a centralized data center requires the purchase of additional specialized, high-performance equipment, which increases costs and energy consumption. In an economy where cost-cutting is a fact of life, these added expenses are unwelcome.

Features of the Cloud

Resolving performance problems, like data bottlenecks, is a constant concern for cloud providers, who are responsible for managing far more users and greater performance demands than do enterprises. While end-users of enterprise systems also require high performance, these systems usually manage fewer users who are able to access their data directly through the network. Moreover, enterprise system users are accessing, sending and saving comparatively smaller files – such as documents or spreadsheets – that require less storage capacity and performance.

Outside an enterprise’s intranet, however, it’s a different story. Numerous users access cloud systems simultaneously via the Internet, which itself becomes a performance bottleneck. The average cloud consumer also stores larger files than does the typical enterprise user, which places greater pressures on data center resources. The cloud provider’s storage system not only has to be scalable, it must also sustain performance across all users.

Best Practices for Cloud Management

Service providers must be capable of scaling their systems quickly in order to meet the increasing demand for data storage. The following best practices can help optimize data center ROI in a period of significant IT cutbacks:

Seek out a distributed storage system: Distributed storage presents the best way to build at scale even though the data center trend has been moving toward centralization. Increased performance at the software level counterbalances the performance advantage of a centralized data storage approach.

Use commodity components when possible: Low-energy hardware makes good business sense. Commodity hardware is not only cost-effective, but also energy-efficient, which significantly decreases both setup and operating costs in one move.

Avoid a system that forces data through a single point of entry: A single point of entry can result in performance bottlenecks. Integrating caches to relieve the bottleneck, as most data centers currently do, quickly adds to the complexity and cost of a system. By contrast, a horizontally-scalable system that distributes data among all nodes avoids such bottlenecks while delivering a high level of redundancy.
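One common technique for distributing data among all nodes without funneling requests through a single entry point is a consistent-hash ring: any node can compute which peer owns a given object, so there is no central lookup service to bottleneck. The sketch below is purely illustrative (the article does not specify Compuverde's algorithm), and the node and key names are hypothetical:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: maps each object key to one of
    several storage nodes. Virtual nodes smooth out the distribution."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node is hashed vnodes times onto the ring.
        self._ring = []
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(key):
        # Stable hash; MD5 is fine here since this is placement, not security.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect.bisect(self._ring, (self._hash(key),)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("user42/photo.jpg")  # any node can compute this locally
```

Because placement is a pure function of the key, every node (or client) resolves the owner independently, and adding a node only remaps the fraction of keys that fall into its new ring segments rather than reshuffling everything.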

Beyond the Cloud

Currently, storing large amounts of data while enabling around-the-clock access for consumers requires data centers to operate high-performance equipment and implement vertically-scaled systems.

Since these current architectures lack the versatility and cost-effectiveness required to meet dramatic shifts in demand, cloud service providers must take a critical look at their storage infrastructures to identify ways to improve performance, address variances in demand and cut costs wherever possible. Shifting to a horizontally-scaled data storage model that distributes data evenly onto energy-efficient hardware can reduce costs and increase performance in the cloud. With this in mind, service providers of today can take the necessary steps to improve operations to meet the demand of tomorrow.

By Stefan Bernbo

Founder and CEO of Compuverde
