Scale-Up vs. Scale-Out Storage: What’s the Difference?

In the evolving landscape of enterprise storage, the distinction between scale-up and scale-out storage architectures remains a focal point. As organizations face exponential data growth, understanding the nuances of these architectures is crucial for efficient storage management and expansion.

Storage capacity is the primary benchmark for evaluating storage devices, closely followed by the ease of capacity expansion. How urgently capacity must scale is a critical concern for storage administrators, often requiring a choice between adding hardware to an existing system and architecting a more expansive solution, such as a new data center. The former approach is known as scale-up and the latter as scale-out, and they are differentiated by their underlying architectural designs.


The Traditional Scale-Up Model

Scale-up storage has been the traditional approach. It typically involves a central pair of controllers overseeing multiple shelves of drives. Expansion is linear and limited; when space runs out, additional shelves of drives are integrated. The limitation of this model lies in the finite scalability of the storage controllers themselves.

As storage demands increase, the scale-up model encounters bottlenecks. New systems must be introduced to manage additional data, leading to increased complexity and isolated storage silos. This architecture also struggles with resource allocation inefficiency, as determining the optimal location for workloads becomes increasingly challenging.

RAID technology underpins drive failure protection in scale-up systems. However, RAID does not extend across multiple storage controllers, anchoring the drives to a specific controller and consequently cementing the scalability challenge of this architecture.

Figure 1 – Modular/Scale-up Storage Architecture

Figure 2 shows the potential for storage system sprawl.

Figure 2 – Modular/Scale-up Storage Silos

The Modern Scale-Out Strategy

In contrast, scale-out storage architectures, particularly those utilizing object storage, offer a dynamic alternative. Built from industry-standard servers, these systems attach storage directly to each node, much like Direct Attached Storage (DAS). Object storage software on each node then unifies the nodes into a single cluster, creating a pooled storage resource with a unified namespace accessible to users and applications.

Protection against drive failure in a scale-out environment is not reliant on RAID but on RAIN (Redundant Array of Independent Nodes), which offers data resilience across nodes. RAIN supports several data protection methods, including replicas and erasure coding, which mirror RAID’s data safeguarding principles but are optimized for multi-node environments.
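To make the trade-off concrete, here is a rough sketch (in Python, with purely illustrative numbers) comparing the raw-capacity overhead of three-way replicas against a hypothetical 4+2 erasure-coding layout; neither configuration represents any particular product's defaults, only the general arithmetic behind RAIN-style protection.

```python
# Illustrative comparison of RAIN-style protection schemes: full replicas
# vs. erasure coding. The figures are generic examples, not the defaults
# of any specific product.

def replica_overhead(copies: int) -> float:
    """Raw bytes stored per byte of user data when keeping N full copies."""
    return float(copies)

def erasure_overhead(data_frags: int, parity_frags: int) -> float:
    """Raw bytes stored per byte of user data with k data + m parity fragments."""
    return (data_frags + parity_frags) / data_frags

user_data_tb = 100  # hypothetical dataset size

schemes = [
    ("3x replicas", replica_overhead(3), 2),            # survives loss of 2 copies
    ("4+2 erasure coding", erasure_overhead(4, 2), 2),  # survives loss of 2 fragments
]

for name, overhead, failures in schemes:
    raw_tb = user_data_tb * overhead
    print(f"{name}: {raw_tb:.0f} TB raw for {user_data_tb} TB of data "
          f"(overhead {overhead:.2f}x, tolerates {failures} failures)")
```

The point of the comparison is that erasure coding can deliver the same fault tolerance as replication at a fraction of the raw-capacity cost, which is why it is a common choice in multi-node environments.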

Figure 3 – Object/Scale-out Storage Architecture

Scale-Out with Cloudian HyperStore

Cloudian HyperStore exemplifies the scale-out storage solution. HyperStore utilizes object storage technology to enable seamless scalability, providing a storage platform that expands horizontally by adding nodes. Each node addition enhances storage capacity, as well as compute and networking capabilities, ensuring that performance scales with capacity.

HyperStore’s architecture allows for simple integration of new nodes, which the system then incorporates into the existing cluster. Data is intelligently distributed across the new configuration, maintaining performance and reliability without the limitations of traditional scale-up architectures.
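Scale-out systems commonly use techniques such as consistent hashing to decide where objects live and to limit how much data must move when a node joins; the sketch below is a generic illustration of that idea, not a description of HyperStore's internal placement logic.

```python
# Generic consistent-hashing sketch: when a node joins, only a fraction of
# objects map to the new node; the rest stay where they were. This illustrates
# scale-out data distribution in general, not any vendor's specific algorithm.
import hashlib
from bisect import bisect

def build_ring(nodes, vnodes=100):
    """Place each node at many pseudo-random points on a hash ring."""
    points = []
    for node in nodes:
        for i in range(vnodes):
            h = int(hashlib.md5(f"{node}:{i}".encode()).hexdigest(), 16)
            points.append((h, node))
    return sorted(points)

def locate(ring, key):
    """Return the node responsible for a key (first ring point clockwise)."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    idx = bisect(ring, (h,)) % len(ring)
    return ring[idx][1]

keys = [f"object-{i}" for i in range(10_000)]
before = build_ring(["node1", "node2", "node3"])
after = build_ring(["node1", "node2", "node3", "node4"])

moved = sum(locate(before, k) != locate(after, k) for k in keys)
print(f"{moved / len(keys):.0%} of objects relocated after adding a fourth node")
```

Because only the objects that hash to the new node move, capacity can grow incrementally without reshuffling the entire data set.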

In a multi-data center setup, Cloudian HyperStore’s geo-distributed capabilities shine. Nodes can be deployed across various geographical locations, and thanks to HyperStore’s geo-awareness, data can be strategically placed to optimize access speeds. Users access storage through a virtual address, with the system directing requests to the closest or most optimal node. This ensures fast response times and consistent data availability, irrespective of the user’s location.
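As a rough illustration of routing requests to the "closest or most optimal node," the sketch below simply picks the site with the lowest measured latency; the site names and latency figures are hypothetical, and real geo-aware routing would also weigh load, health, and data placement.

```python
# Hypothetical illustration of latency-based request routing across sites.
# Site names and latency figures are made up for the example.

def closest_site(latencies_ms: dict[str, float]) -> str:
    """Return the site with the lowest measured round-trip latency."""
    return min(latencies_ms, key=latencies_ms.get)

measured = {"us-west-dc": 12.0, "us-east-dc": 78.0, "eu-dc": 145.0}
print("Route request to:", closest_site(measured))  # -> us-west-dc
```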

HyperStore’s innovative approach not only addresses the immediate scalability challenges but also provides a future-proof solution that accommodates the ever-increasing volume and complexity of enterprise data. Its efficient use of resources, simplified management, and robust data protection mechanisms make it a compelling choice for enterprises looking to overcome the traditional hurdles of storage expansion.

In summary, the evolution from scale-up to scale-out storage, epitomized by solutions like Cloudian HyperStore, marks a significant transition in enterprise storage. Organizations can now address their data growth challenges more effectively, with architectures designed for the demands of modern data management.

For more information, watch our overview video on object storage or read the first part of our series on object storage vs. file storage.

You may also be interested in:
Using Storage Archives to Secure Data and Reduce Costs

 

Draw my Cloudian: What is Object Storage?

A few months ago, I was hesitant about applying to an internship at a technology company. Unlike many of my peers who view Silicon Valley as the perfect gateway for fueling their careers and interests, I was never quite drawn to the tech scene I had grown up with.

At the same time, a majority of my hesitation could be attributed to intimidation – I had neither a technical background nor a real understanding of the sorts of professions and companies that existed in Silicon Valley. But given the exciting opportunity to intern at Cloudian these past few months, I got the chance to not only explore the cloud computing industry, but also immerse myself in an environment I was once too scared to venture into.

Along with the support of my manager and peers at Cloudian, one of the major projects I worked on as a marketing intern was a “draw my life” style video about object storage. Although I had absolutely no clue what object storage was prior to my internship, my co-workers were always there for me to turn to for help and guidance. After all, being given the opportunity to work on a topic I was previously unacquainted with translated into an opportunity to learn everything I stumbled upon. From there, the video began to unfold – from hours upon hours of research to creating a script, incessant doodling, and many dry-erase marker stains, here’s the finished product!

Thanks to my team at Cloudian for supporting me the entire way. Having worked on this project has definitely instilled in me a new confidence to take a leap into the unknown. As I spend my last few days here, I am proud to have been able to spend some time familiarizing myself with the core of Cloudian’s product and leave a piece of something I created before I go!

– Lesley

Can Scale-Out Storage Also Scale-Down?

This is a guest post by Tim Wessels, the Principal Consultant at MonadCloud LLC.

Private cloud storage can scale out to meet demands for additional storage capacity, but can it scale down to meet the needs of small and medium-sized organizations that don’t have petabytes of data?

The answer is yes, it can, but you should put cloud storage vendors’ claims to the test before making your decision to build a private storage cloud.

Scale-out cloud storage

A private storage cloud that can cost-efficiently store and manage data on a smaller scale is important if you don’t need petabyte capacity to get started. A petabyte is a lot of data: the equivalent of 1,000 terabytes. If you have 10 or 100 terabytes of data to manage and protect, a scale-down private storage cloud is what you need. And in the future, when you need additional storage capacity, you must be able to add it without ripping and replacing the storage you started with.

The characteristics of scale-down, private cloud storage make it attractive for organizations with sub-petabyte data storage requirements.

It's important for storage to be both scale-out and scale-down

You can start with a few storage servers and grow your capacity using a mix of servers and capacities from different manufacturers. A private storage cloud is storage-server hardware agnostic, so you can buy what you need when you need it.

Scale-down, private cloud storage should employ a “peer-to-peer” architecture, which means the same software elements are running on each storage server.

A “peer-to-peer” storage architecture doesn’t use complex configurations that require specialized and/or redundant servers to protect against a single point of failure. Complexity is not a good thing in data storage. After all, why would you choose a private cloud storage solution that is too complex for your needs?

Scale-down, private cloud storage should also be easy-to-use and easy-to-manage.

Easy-to-use means simple procedures to add, remove, or replace storage servers. It also means storage software with built-in intelligence that can protect your data and keep it accessible without a lot of fine-tuning or tinkering.

Easy-to-manage means you don’t need a dedicated storage administrator to keep your private cloud storage cluster running. An in-house computer systems administrator can do it or you can hire out administration to a managed services provider who can do it remotely.

So just how small is small when it comes to building your own private cloud storage? Small is a relative term, but a practical minimum from a hardware perspective would be about 10 terabytes of usable storage. There is nothing hard and fast about that number, but once you start moving data into your private storage cloud, you should have enough usable capacity for the uses you have in mind.
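To put a number like 10 terabytes in context, here is a back-of-the-envelope sizing sketch; the node count, drive sizes, and three-way replication factor are assumptions chosen purely for illustration, not a recommended configuration.

```python
# Back-of-the-envelope sizing for a small private storage cloud.
# All figures (nodes, drives, replication factor) are illustrative assumptions.

nodes = 3
drives_per_node = 4
drive_tb = 4            # terabytes per drive
replication_factor = 3  # three full copies of each object

raw_tb = nodes * drives_per_node * drive_tb   # 48 TB of raw capacity
usable_tb = raw_tb / replication_factor       # ~16 TB usable

print(f"Raw capacity:    {raw_tb} TB")
print(f"Usable capacity: {usable_tb:.0f} TB")
```

Under those assumptions, three modest nodes already clear a 10-terabyte usable starting point, and lighter-weight protection schemes such as erasure coding would stretch the same hardware further.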

If you have never built your own private cloud storage, you will need to determine which vendor offers a simple, easy-to-use, and easy-to-manage private cloud storage solution that will work for you.

The best way to make that decision is to conduct a Proof-of-Concept (POC) to determine which vendor best meets your requirements for private cloud storage. Every vendor will tell you how easily their cloud storage scales out, but they may not mention whether it can just as easily scale down to meet the needs of organizations with sub-petabyte data storage requirements.

A Proof-of-Concept is not a whiteboard exercise or a slide presentation. A POC means having vendors show you how their storage software works, running on their hardware or on yours. A vendor who cannot commit to a small-scale POC may not be a good fit for your requirements.

The applications you plan to use with your private storage cloud should also be included in your POC. If you are not writing your own applications, then it is important to consider the size of the application “ecosystem” supported by the storage vendors participating in your POC.

After ten years in the public cloud storage business, Amazon Web Services (AWS) has the largest “ecosystem” of third-party applications written to use their Simple Storage Service (S3). The AWS S3 Application Programming Interface (API) constitutes a de facto standard that every private storage cloud vendor supports to a greater or lesser degree, but only Cloudian guarantees that applications that work with AWS S3 will work with Cloudian HyperStore. The degree of AWS S3 API compliance among storage vendors is something you can test during your POC.
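During a POC, basic S3 API behavior can be exercised with a short script like the one below, which uses the boto3 library pointed at a private S3-compatible endpoint; the endpoint URL, credentials, and bucket name are placeholders for whatever the vendor provides in the test environment.

```python
# Minimal S3-compatibility smoke test for a POC, using boto3 against a
# private S3-compatible endpoint. The endpoint, credentials, and bucket
# name below are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.internal",  # placeholder endpoint
    aws_access_key_id="POC_ACCESS_KEY",               # placeholder credentials
    aws_secret_access_key="POC_SECRET_KEY",
)

bucket = "poc-compat-test"
s3.create_bucket(Bucket=bucket)

# Write an object, read it back, and list the bucket through standard S3 calls.
s3.put_object(Bucket=bucket, Key="hello.txt", Body=b"hello, private cloud")
body = s3.get_object(Bucket=bucket, Key="hello.txt")["Body"].read()
assert body == b"hello, private cloud"

listing = s3.list_objects_v2(Bucket=bucket)
print([obj["Key"] for obj in listing.get("Contents", [])])

# Clean up so the test can be repeated.
s3.delete_object(Bucket=bucket, Key="hello.txt")
s3.delete_bucket(Bucket=bucket)
```

A real POC would go further, for example exercising multipart uploads, versioning, and access policies with the applications you actually plan to run.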

Running a POC will cost you some time and money, but it is a worthwhile exercise because storage system acquisitions have meaningful implications for your data. It is worth spending a small percentage of the acquisition cost on a POC in order to make a good decision.

The future of all data storage is being defined by software. Storage software running on off-the-shelf storage server hardware defines how a private storage cloud works. A software-defined private storage cloud gives you the features and benefits of large public cloud storage providers, but does it on your premises, under your control, and on a scale that meets your requirements. Scale-down private cloud storage is useful because it is where many small and medium-sized organizations need to start.

Tim Wessels is the Principal Consultant at MonadCloud LLC, which designs and builds private cloud storage for customers using Cloudian HyperStore. Tim is a Cloudian Certified Engineer and MonadCloud is a Preferred Cloudian Reseller Partner. You can call Tim at 978.413.0201, email twessels@monadcloud.net, tweet @monadcloudguy, or visit http://www.monadcloud.com

 

YOU MAY ALSO BE INTERESTED IN:

Object Storage vs File Storage: What’s the Difference?