HPE Scalable Object Storage with Scality RING on HPE Apollo 4500
Object-based, software-defined storage at petabyte scale
Contents
Executive summary
Overview
Business problem
Challenges of scale
Why choose Scality?
RING architecture
RING components
Multi-site geo-distribution
HPE value add for an object storage environment
HPE reference architecture for Scality RING
Server platforms used in the reference architecture
HPE ProLiant DL360 Gen9
Sample Bill of Materials (BOM) for HPE DL360 servers and Apollo 45xx servers
Summary
Resources
Executive summary
Traditional file and block storage architectures are being challenged by explosive data growth fueled by the expansion of Big Data and the
Internet of Things (IoT). Emerging storage architectures that focus on object storage are helping businesses deal with these trends. These
systems deliver cost-effective storage that keeps pace with the demand to store more data and expand capacity, while providing improved
data protection through erasure-coding technology at a lower cost per terabyte (TB).
Enterprise-class storage subsystems are designed to address storage requirements for business-critical transactional data throughput. However,
they aren't the most cost-effective solution for unstructured data or for backup and archival storage at petabyte and beyond scale. In these
cases, enterprise-class reliability is still required, but massive scale-out capacity and a lower solution investment per TB, while maintaining or
improving data protection, have become the most important customer requirements.
Object storage software solutions are designed to run on industry-standard server platforms, offering lower infrastructure costs and scalability
beyond the capacity points of typical file server storage subsystems. HPE Apollo 4500 series servers provide a comprehensive and cost-effective
set of storage building blocks for customers who wish to deploy an object storage software solution on industry-standard Linux®-based servers.
Target audience
This white paper is intended for CTOs and solution architects looking for a storage solution that can handle the rapid growth of unstructured,
cloud, and archival data while controlling licensing and infrastructure costs.
Overview
Business problem
Businesses are looking for more cost-effective ways to manage exploding data storage requirements. In recent years, the amount of storage
required by many businesses has increased dramatically, especially in the areas of media serving, IoT data collection, and records retention. The
cost per TB of storage and ease of data retrieval have become critical factors in choosing a solution that can expand quickly and economically.
For an increasing number of businesses, traditional file and block storage approaches can't deliver these attributes. Organizations
that have tried to keep up with data growth using traditional file and block storage solutions are finding both the cost and the complexity of
managing and operating those systems challenging. Meanwhile, many organizations that have moved their object storage to a hosted cloud
environment have encountered cost or data-control issues as well.
Challenges of scale
There are numerous difficulties associated with storing unstructured data at petabyte and beyond scale:
Cost
• Unstructured and archival data tends to be written only once and read very infrequently. This stale data takes up valuable space on expensive
block and file storage capacity.
• Tape is an excellent choice for achieving the lowest cost per TB but suffers from extremely high latencies.
Scalability
• Unstructured deployments can accumulate billions of objects. File systems limit the number and size of files, and block storage caps the size
of presented volumes; these limitations can become significant deployment challenges.
• Additionally, block and file storage methods suffer from metadata bloat at massive scale, resulting in a large system that cannot meet
service-level agreement requirements.
Availability and manageability
• Enterprise storage is growing from single site deployments to geographically distributed, scale-out configurations. With this growth, the
difficulty of keeping all the data safe and available is also growing.
• Management silos and user interface limitations have made it increasingly difficult for businesses to deploy the additional storage capacity
they need using their existing storage infrastructure.
Unstructured and archival data may sit dormant for a while but must be available in seconds rather than minutes when a read request is received
by the storage system.
Why choose Scality?
Today’s data centers have adopted a new software-defined storage (SDS) model as part of their overall strategy to provide efficient and scalable
infrastructure services. A software-defined data center (SDDC) architecture combines proven virtual machine solutions, which use the underlying
compute resources more efficiently, with software-defined networking (SDN) and SDS solutions.
We see these elements coming together in software to enable data center agility. The software shapes the underlying hardware to deliver
efficient services for applications to consume. By decoupling software from the underlying platform, we also provide platform flexibility, spanning
the entire portfolio of HPE ProLiant industry standard servers—including future hardware offerings. ProLiant and Scality provide a decisive step
forward in reducing the cost of ownership of the future data center.
Figure 1. SDS within the SDDC
The Scality RING running on HPE ProLiant servers provides an SDS solution for petabyte-scale data storage that is designed to interoperate in the
modern SDDC. The RING software is designed to create a scale-out storage system, which is deployed as a distributed system on a minimum cluster of
six storage servers. This system can be seamlessly expanded to thousands of physical storage servers as the need for storage capacity grows. To
match the performance of the deployed capacity, the RING can independently scale out the access nodes (connector servers) to meet a customer's
growing input/output (I/O) throughput requirements. The underlying physical storage servers can be of any density, ranging from a DL380 Gen9
Server with a small number of hard disk drives (HDDs) to the HPE Apollo 4510 containing a combination of up to 68 HDDs and SSDs.
The RING software requires no specific certification for a customer’s HPE ProLiant Server configuration of choice and supports new generations
of hardware as they are released. The RING requires no kernel modifications, eliminating the need to maintain hardware compatibility lists
beyond the constraints imposed by the specific Linux distributions running on the server.
Figure 2. Scality RING SDS high-level architecture
The software-defined architecture of RING addresses a number of key customer challenges:
• Massive capacity growth—provides virtually unlimited scaling of storage capacity and performance to meet today's and tomorrow's
requirements
• Legacy storage silos with high costs—provides broad support for a large mixture of customer storage workloads, to simplify storage
management with fewer silos
• Always-on requirements—is designed for 100 percent uptime, with self-healing and the highest levels of data durability
• Cloud-scale economics—is compatible across the HPE portfolio, enabling customers to leverage the low TCO of a proven and reliable
HPE server platform
• Multi-protocol data access—enables the widest variety of object-, file-, and host-based applications for reading and writing data to the RING
• Flexible data protection mechanisms—efficiently and durably protects a wide range of data types and sizes
• Self-healing—expects and tolerates failures and automatically resolves them
• Platform agnostic—provides optimal platform flexibility, allowing mixed server configurations, eliminating the need to migrate data when
refreshing the underlying hardware
RING architecture
To scale up both storage capacity and performance to massive levels, the Scality RING software is designed as a distributed, fully parallel,
scale-out system. It has a set of intelligent services for data access and presentation, data protection, and systems management. To implement
these capabilities, the RING provides a set of fully abstracted software services including a top layer of scalable access services (connector
servers) that provide storage protocols for applications.
The middle layers comprise a distributed virtual file system, a set of data protection mechanisms to ensure data durability and integrity,
self-healing processes, and a set of systems management as well as monitoring services. At the bottom of the stack, the system is built on a
distributed storage layer comprising virtual storage nodes and underlying I/O daemons that abstract the physical storage servers and disk
drive interfaces.
At the heart of the storage layer is a scalable, distributed key-value object store based on a second-generation peer-to-peer routing protocol.
This routing protocol ensures that store and lookup operations scale up efficiently to very high numbers of nodes. These comprehensive storage
software services are hosted on a number of servers with appropriate processing resources and disk storage. They are connected through
standard IP-based network fabrics such as 10GbE.
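As an illustration of this kind of distributed key-value lookup, the following is a generic consistent-hashing sketch in Python. It shows the concept of mapping object keys onto a ring of storage nodes only; it is not Scality's second-generation peer-to-peer routing protocol, and the node names are placeholders.

```python
# Generic consistent-hashing sketch: map object keys onto a ring of nodes.
# Illustrative only; not the actual Scality routing protocol.
import bisect
import hashlib

def ring_hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=64):
        # Place several virtual points per node around the keyspace
        self._points = sorted(
            (ring_hash(f"{node}#{i}"), node)
            for node in nodes for i in range(vnodes)
        )
        self._keys = [point for point, _ in self._points]

    def lookup(self, object_key: str) -> str:
        # Walk clockwise to the first node point at or after the key's hash
        idx = bisect.bisect(self._keys, ring_hash(object_key)) % len(self._points)
        return self._points[idx][1]

ring = HashRing([f"storage-node-{n:02d}" for n in range(1, 7)])
print(ring.lookup("videos/clip-42.mp4"))
```

Because keys hash uniformly around the ring, adding or removing a node only remaps the keys adjacent to that node's points, which is what allows store and lookup operations to scale to very high node counts.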
Figure 3. Scality RING architecture
RING components
The RING software comprises the following main components: the RING connector servers, a distributed internal database for metadata called
MESA, the RING storage servers and I/O daemons, and the supervisor Web-based management portal. The MESA metadata database is used to
provide object indexing and manage the metadata used by the Scality scale-out file system (SOFS) abstraction layer.
Connectors
The connector processes provide the access points and protocol services for applications that use the RING for data storage. Applications may
make use of multiple connector servers in parallel to scale out the number of operations per second or aggregate RING throughput for high
numbers of simultaneous user connections. The system may be configured to provide a mix of file and object access to support multiple
application use cases. Connector processes may run on the physical storage nodes, or on separate dedicated connector servers so that
load balancing and I/O scalability are handled separately from storage scalability. Figure 4 illustrates using dedicated servers for the connector processes.
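The sketch below shows how an application might spread writes across several connector endpoints to scale aggregate throughput. The endpoint list, port, and the "/proxy/chord/<key>" path layout are assumptions for illustration; consult the Sproxyd documentation for the exact REST path format of a given deployment.

```python
# Spread object writes across several connector servers (round-robin).
# Endpoints and URL layout are hypothetical placeholders.
import itertools
import requests

CONNECTORS = itertools.cycle([
    "http://connector-01:81",
    "http://connector-02:81",
    "http://connector-03:81",
])

def put_object(key: str, data: bytes) -> int:
    base = next(CONNECTORS)                         # simple round-robin
    resp = requests.put(f"{base}/proxy/chord/{key}", data=data)
    resp.raise_for_status()
    return resp.status_code

put_object("logs/2016-09-01.gz", b"compressed log payload")
```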
Figure 4. RING software processes: RING connectors, storage nodes, and I/O daemons
The application data I/O path flows from applications through the connectors. Connectors are also responsible for implementing the configured
data protection storage policy (replication or ARC), as described in the following section. For new object writes, the connectors may chunk
objects that are above a configurable size threshold before the object data is sent to the storage servers.
Table 1. External application interfaces supported by connector servers

Type   | Connector                                    | Strengths
Object | HTTP/REST                                    | Scality Sproxyd, a highly scalable, stateless, lightweight, native REST API; provides support for geo-distributed deployments
Object | Amazon S3-compatible                         | Supports buckets, authentication, and object indexing
Object | CDMI (SNIA Cloud Data management Interface)  | REST API namespace compatible with SOFS (NFS, SMB, FUSE) data
Object | OpenStack® Swift                             | Scalable storage for OpenStack Swift; supports containers, accounts, and Keystone
File   | NFS                                          | NFS v3 compatible server; supports Kerberos, advisory locking (NLM), and user/group quotas
File   | FUSE                                         | Scality Sfused local Linux file system driver, great for application servers; fast for big files; provides parallel I/O to multiple back-end storage nodes
File   | SMB                                          | SMB 2.x and a subset of SMB 3.x compliant server
Block  | Cinder                                       | OpenStack Cinder driver for attaching volumes to Nova instances
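The S3-compatible connector can be exercised with any standard S3 client. The following is a minimal sketch using boto3; the endpoint URL, bucket name, and credentials are placeholders for a deployed RING connector, not values defined in this paper.

```python
# Write and read an object through the RING's S3-compatible connector.
# Endpoint, credentials, and bucket name are illustrative placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://s3-connector.example.internal",  # RING S3 connector
    aws_access_key_id="RING_ACCESS_KEY",
    aws_secret_access_key="RING_SECRET_KEY",
)

s3.create_bucket(Bucket="media-archive")
s3.put_object(Bucket="media-archive", Key="clips/intro.mp4", Body=b"...")

obj = s3.get_object(Bucket="media-archive", Key="clips/intro.mp4")
print(obj["ContentLength"], "bytes retrieved")
```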
Storage nodes
Storage nodes are virtual processes that own and store the range of objects associated with their portion of the RING's keyspace. Each RING storage
system is typically configured with at least six storage nodes. Under each storage node is a set of storage daemons that are responsible for data
persistence across the underlying local disk file system. Each daemon is a low-level process that manages the I/O operations associated with a
particular physical disk drive, maintaining the mapping of object indexes to the actual object locations on the disk. The typical configuration is to
have one daemon per physical disk drive, with support for up to hundreds of daemons per server.¹
The recommended deployment for systems that have both HDD and SSD media on the storage nodes is to deploy a data RING on HDD, and the
associated metadata in a separate RING on SSD.
Figure 5. RING software deployment
Systems management
To manage and monitor the RING, Scality provides a comprehensive set of tools with a variety of interfaces. These include a Web-based GUI
(the supervisor), a command line interface (CLI) that can be scripted (RingSH), and SNMP-compliant MIB and traps for use with standard SNMP
monitoring consoles.
The supervisor is the RING’s Web-based management GUI. It provides visual, point-and-click style monitoring and management of the RING
software, as well as management of the underlying physical platform layer. The supervisor provides a main dashboard page that offers graphical
RING views including the servers, zones, and storage nodes comprising the RING, along with browsing capabilities to drill down to the details of
each component and pages for operations, management, and provisioning of RING services. The supervisor also provides performance statistics,
resource consumption information, and health metrics through a rich set of graphs.
The supervisor UI provides a simple volume UI for the scale-out file system that enables administrators to easily provision volumes and
connector servers. Once provisioned through the UI, the connector servers are configured and started, making them ready for use by the
customer’s applications.
The supervisor works in conjunction with the Scality management agent (sagentd), which is hosted on each Scality managed storage server and
connector server. The sagentd daemon provides a single point of communication for the supervisor with the given host for collecting statistics
and health metrics. This avoids the additional overhead of individual connections from the supervisor to each storage server and each disk drive
daemon running on a specific host.
¹ Up to 255 storage daemons per physical server in current releases
RingSH is a scriptable CLI for managing and monitoring the RING, which can be used on the supervisor host or on any storage server for
managing the RING components. RingSH provides a rich set of commands for managing the complete stack, as well as access to system statistics
and health metrics.
For monitoring the RING from popular data center tools such as Nagios, the RING provides an SNMP-compliant MIB. This enables these tools to
actively monitor the RING's status, as well as receive alerts via SNMP traps. System health metrics, resource consumption, and connector and
storage server performance statistics are available and may be displayed using an SNMP MIB browser.
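As a minimal sketch, a standard SNMP library such as pysnmp can poll a RING host directly. The host name and community string below are placeholders, and the object queried is the generic sysDescr; a monitoring console would normally load the Scality MIB shipped with the RING and poll its objects instead.

```python
# Poll a RING host over SNMPv2c with pysnmp (host and community are placeholders).
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),            # SNMPv2c community
        UdpTransportTarget(("ring-storage-01", 161)),  # hypothetical host
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```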
Scale-out file system
The RING supports native file system access to RING storage through the file connector servers and the integrated SOFS. SOFS is a
POSIX-compliant virtual file system that provides file storage services without the need for external file gateways as is commonly required
by other object storage solutions.
To provide file system semantics and views, the RING utilizes an internal distributed database (MESA) on top of the RING’s storage services.
MESA is a distributed, NewSQL database that is used to store file system directories and inode structures to provide a virtual file system
hierarchy with the guaranteed transactional consistency required in a highly available file system infrastructure. Through MESA, SOFS supports
sparse files, enabling space-efficient storage of very large files.
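The sparse-file behavior can be demonstrated with standard POSIX calls against a SOFS mount. This is a minimal sketch that assumes the Sfused (FUSE) connector has been provisioned and mounted at /mnt/sofs; the mount path and file name are placeholders.

```python
# Create a sparse file on an assumed SOFS FUSE mount and compare the
# logical size with the blocks actually allocated.
import os

path = "/mnt/sofs/archive/large-dataset.bin"   # hypothetical SOFS mount path
os.makedirs(os.path.dirname(path), exist_ok=True)

with open(path, "wb") as f:
    f.seek(100 * 1024**3 - 1)   # seek to ~100 GB without writing data
    f.write(b"\0")              # sparse file: logical size ~100 GB

st = os.stat(path)
print(f"logical size: {st.st_size} bytes, allocated: {st.st_blocks * 512} bytes")
```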
Intelligent data durability and self-healing
The RING is designed to manage a wide range of component failures involving disk drives, servers, and network connections within a single
data center or across multiple data centers. The RING provides data durability through a set of flexible data protection mechanisms optimized for
distributed systems including replication, erasure coding, and geo-replication capabilities that allow customers to select the best data protection
strategies for their data. The RING automatically manages storing objects with the optimal storage strategy. Replication and erasure coding may
be combined, even in a single connector, following user-defined policies. Small objects are stored more efficiently (at lower storage cost) using
replication. Large objects are stored more efficiently using erasure coding, avoiding the cost of replicating very large datasets.
Figure 6. Scality classes of service
Replication Class of Service
To optimally store smaller files, the RING employs local replication with multiple copies of each object. The RING spreads these replicas across multiple
storage servers and across multiple disk drives in order to isolate them from common failures. The RING supports six Class of Service (CoS)
levels (0–5) for replication, indicating that the system can maintain between 0 and 5 replicas (or 1–6 copies) of an object. This allows the system
to tolerate up to five simultaneous disk failures while still preserving access to the object.
Replication is typically used only for "small objects" as defined by a configurable value. By default, objects smaller than 60 kilobytes are replicated.
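The following sketch mirrors this size-based choice between replication and erasure coding. The 60 KB threshold is the documented default; the policy names and function are illustrative, not Scality configuration keys.

```python
# Choose a protection scheme by object size (illustrative policy only).
SMALL_OBJECT_THRESHOLD = 60 * 1024  # bytes; configurable in a real deployment

def protection_policy(size_bytes: int, cos_replicas: int = 3,
                      arc_data: int = 10, arc_parity: int = 4) -> str:
    """Return a human-readable protection choice for an object of this size."""
    if size_bytes < SMALL_OBJECT_THRESHOLD:
        # Small objects: replication (CoS 0-5 -> 1-6 copies)
        return f"replicate, {cos_replicas + 1} copies (CoS {cos_replicas})"
    # Large objects: erasure coding, e.g. ARC(10/4)
    return f"erasure-code, ARC({arc_data}/{arc_parity})"

print(protection_policy(8 * 1024))           # small JSON document
print(protection_policy(500 * 1024 * 1024))  # large media file
```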
Technical white paper
Page 8
Advanced resiliency configuration erasure coding
Scality’s advanced resiliency configuration (ARC) provides an alternative data protection mechanism to replication that is optimized for large
objects and files. ARC implements Reed-Solomon erasure coding² techniques to store large objects with an extended set of parity "chunks"
instead of multiple copies of the original object. The basic idea of erasure coding is to break an object into multiple chunks (m in number) and
apply a mathematical encoding to produce an additional set of parity chunks (k in number).
The resulting set of chunks (m + k in number) is then distributed across the RING nodes, providing the ability to access the original object as
long as at least m of the data or parity chunks are available.
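The trade-off between replication and ARC can be quantified directly from these parameters; the short sketch below compares storage overhead and failure tolerance for three-copy replication and the ARC(10/4) schema discussed in this paper.

```python
# Compare storage overhead and failure tolerance: replication vs. ARC.
def replication_overhead(copies: int) -> float:
    # Extra raw capacity consumed beyond one copy of the data
    return copies - 1

def arc_overhead(m: int, k: int) -> float:
    # k parity chunks on top of m data chunks
    return k / m

print(f"3 copies:  {replication_overhead(3):.0%} overhead, tolerates 2 losses")
print(f"ARC(10/4): {arc_overhead(10, 4):.0%} overhead, tolerates 4 lost chunks")
```

For large objects, ARC(10/4) therefore tolerates more simultaneous losses than triple replication while consuming a fraction of the extra raw capacity.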
Figure 7. Scality ARC: Example of ARC (10/4) schema
Self-healing and rebuild performance under load
The RING provides self-healing operations to resolve component failures automatically including the ability to rebuild missing data chunks due to
disk drive or server failures, and the ability to rebalance data when nodes leave or join the RING. In the event that a disk drive or even a full server
fails, background rebuild operations are spawned to restore the missing object data from its surviving replicas or ARC chunks. The rebuild
process is complete when it has restored the original CoS by restoring either the full number of replicas or the original number of ARC data and
parity chunks.
Self-healing provides the RING with the resiliency required to maintain data availability and durability in the face of the expected wide set of
failure conditions including multiple simultaneous component failures at the hardware and software process levels. For many customers,
self-healing has eliminated the requirement for maintaining external backups, which in turn reduces infrastructure and operational expenses.
Figure 8. Six-server RING example of performance during hardware failures and speed of rebuild
² Reed-Solomon erasure coding: en.wikipedia.org/wiki/Reed-Solomon_error_correction
Multi-site geo-distribution
To enable site-level disaster recovery solutions, the RING can be deployed across multiple sites with failure tolerance of one or more sites. Two
distinct geo-distributed deployment options are provided. The first makes use of a single logical RING deployed across multiple sites (“stretched
RING”), and the second deployment option is for independent RINGs, each within its own site, with asynchronous mirroring employed to maintain
data synchronization between the RINGs.
Figure 9. Mirrored RINGs with object
Mirrored RING
For object mirroring, the RING supports RING-to-RING mirroring via the Tier1Sync feature. As shown in figure 10, this maintains an
asynchronous mirror of the source RING (RING-1A) to a target RING (RING-1B), with a configurable time delay. This mode is enabled at the
level of the entire RING and should only be employed when the application can tolerate a small lag between the current state of RING-B and
the source RING-A. This implies a nonzero recovery point objective (RPO) for RING-B if RING-A fails and the application
needs to fail over to RING-B. This asynchronous mirroring mode is useful in higher-latency WAN environments where it is desirable not to impose
the latency of writing over the WAN to RING-B for every update to RING-A. Tier1Sync supports mixed-mode data protection schemes on RING-A
and RING-B, including mixed ARC schemas on the source and target RING.
Figure 10. RING Tier1Sync example
In the case of a data center failure, the data remains available on the peer RING in the second data center, without any manual intervention. Since
the Tier1Sync process is asynchronous, there may be a loss of the last few updates from the failed RING that have not yet been synchronized.
The administrator may use administrative utilities and log files to determine which objects have not been properly synchronized.
Stretched RING
The RING supports a stretched RING mode to provide multi-site deployments with site protection and complete data consistency between all
sites. In this mode, a single logical RING is deployed across multiple data centers with all nodes participating in the standard RING protocols as if
they were local to one site. When a stretched RING is deployed with ARC, it provides multiple benefits, including site-level failure protection,
Active/Active access from all data centers, and dramatically reduced storage overhead compared to a mirrored RING. An ARC schema for a
three-site stretched RING of ARC (7, 5) would provide protection against one complete site failure, or up to four disk or server failures per site,
plus one additional disk or server failure in another site, with approximately 70 percent space overhead.³
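The quoted overhead can be checked directly from the ARC parameters, assuming the usual erasure-coding overhead ratio of parity to data chunks:

```latex
\text{overhead} = \frac{k}{m} = \frac{5}{7} \approx 0.71 \quad (\approx 70\%),
\qquad \text{compared with } 200\% \text{ of extra raw capacity for three-copy replication.}
```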
HPE value add for an object storage environment
Software-defined storage running on Linux servers can be deployed on a variety of hardware platforms. Clusters built on a white-box
server infrastructure work for businesses at small scale, but as they grow, their complexity and cost make them less compelling than enterprise
hardware-based solutions. With a white-box server infrastructure, IT has to standardize and integrate the platforms and supported components
itself, and support escalation becomes more complicated. Without standardized toolsets to manage the hardware at scale, IT must chart
its own way with platform management and automation. Often the result is that IT staff work harder and the business spends more to
support a white-box hardware infrastructure than the one-time CAPEX savings realized in buying the white-box servers.
Using an HPE hardware and software solution provides OPEX-reducing advantages that are not available in an infrastructure built on
white-box servers. Key OPEX savings from using an integrated HPE solution are:
• Platform management tools that scale across data centers
• Server components and form factors that are optimized for enterprise use cases
• Hardware platforms where component parts have been qualified together
• A proven, worldwide hardware support infrastructure
Disk encryption
In addition to the platform benefits listed earlier, all Apollo 4000 configurations include an HPE Smart Array card capable of
Secure Encryption, providing enterprise-class encryption. Secure Encryption is FIPS 140-2 certified, has been verified to have a low impact on
IOPS for spinning media, and is transparent to the operating system. This means data on any drive attached to the Gen9 Secure Encryption
controller can be encrypted, giving much more flexibility than encryption-on-drive solutions while at the same time
reducing cost. Keys can be managed either locally on the server or via an enterprise key management system.
HPE reference architecture for Scality RING
The base reference architectures for the Apollo 4510 and Apollo 4530 are identical. Typically, Apollo 4510 servers are used whenever storage
capacity exceeds 2 petabytes while Apollo 4530 servers or Apollo 4510 servers with partially populated drive bays are used when the storage
requirements are under 2 petabytes. The base architectures can be customized using the HPE sizing tools for Scality to build RING
configurations with the ideal amount of bulk storage, metadata capacity, and memory performance. Work with your HPE account team to
customize a RING configuration.
This paper describes a base reference architecture with external connector nodes, storage nodes, and a supervisor node. Each layer can be sized
up or down independently. Figure 11 illustrates a typical usage scenario in which there is one external connector server per three storage
servers. When higher application operations per second are expected, the ratio can become one to one, with the number of connector servers
equaling the number of storage servers.
³ "ARC Architecture," April 29, 2015
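A rough feel for these ratios can be captured in a short sizing sketch. It assumes roughly one DL360 connector server per three Apollo 4510 storage servers, up to 68 x 8 TB drives per 4510 chassis, the six-server minimum cluster, and an ARC(10/4) protection scheme; real sizing should be done with the HPE sizing tools for Scality and your HPE account team.

```python
# Rough RING sizing sketch (illustrative; use the HPE sizing tools for real designs).
import math

def size_ring(usable_tb: float, drive_tb: int = 8, drives_per_chassis: int = 68,
              arc_data: int = 10, arc_parity: int = 4,
              storage_per_connector: int = 3) -> dict:
    raw_tb = usable_tb * (arc_data + arc_parity) / arc_data   # add parity overhead
    chassis = max(6, math.ceil(raw_tb / (drive_tb * drives_per_chassis)))
    connectors = math.ceil(chassis / storage_per_connector)
    return {"raw_tb": round(raw_tb), "storage_servers": chassis,
            "connector_servers": connectors}

print(size_ring(2000))   # ~2 PB usable
```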
Figure 11. Sample Scality configuration using Apollo 4510 and DL360 Gen9 servers
Server platforms used in the reference architecture
The following sections provide key attributes and benefits around the industry standard HPE servers chosen for the reference configuration.
HPE Apollo 4500 Systems
The Apollo 4510 and Apollo 4530 are third-generation density-optimized platforms used as Scality storage servers. Key attributes of the
HPE Apollo 4500 Server Family line include:
• Chassis
– The Apollo 4500 Gen9 chassis is 4U, with Gen9 power supplies and support for DC power
• Processor
– Intel® Xeon® E5-2600 v4 Series processors
• Memory
– HPE DDR4-2400 MHz Low Voltage Memory, 16 DIMM slots (8 per processor)
– Up to 512 GB (8 x 64 GB) per processor, for a maximum of 1024 GB (16 x 64 GB) per server tray
• OS drive controller/drives
– HPE Dynamic Smart Array B140i SATA RAID controller (for the two server node SFF drives and M.2 drives); an optional P244br Smart Array
or H244br Smart HBA controller for hardware RAID may be used instead
– Multiple 12G SAS Smart Array and HBA controller options for the internal bulk storage devices
• Storage
– Up to 68 data drives with 12G SAS support
– 8 TB drives supported
• PCIe slots
– Up to 4 x PCIe 3.0 x8 slots plus 1 x FlexibleLOM slot
• On System Management
– HPE iLO 4 Management Engine
– Optional HPE iLO Advanced
• Cluster Management (optional)
– HPE Insight Cluster Management Utility (CMU)
Both the Apollo 4510 and Apollo 4530 servers require a 1200 mm rack. The Apollo 4510 offers a maximum density-optimized storage solution
with up to 544 TB of storage (68 drives x 8 TB per drive) in a single 4U chassis. The Apollo 4530 provides additional computational resources,
offering 3 computational nodes with 120 TB per node (15 drives x 8 TB/drive). Each computational node may have two Intel® Socket R
processors and 16 DIMM slots (512 GB with 16 x 32 GB).
Figure 12. Front view of an Apollo 4510
HPE ProLiant DL360 Gen9
The DL360 Gen9 is a low-cost, 1RU server platform that is a perfect fit for the compute and memory requirements of the Scality manager and
connector servers.
Sample Bill of Materials (BOM) for HPE DL360 servers and Apollo 4500 servers
Sample DL360 Gen9 BOM

Quantity | Product     | Description
1        | 755259-B21  | HPE ProLiant DL360 Gen9 4LFF Configure-to-order Server
1        | 818174-L21  | HPE DL360 Gen9 Intel Xeon E5-2630v4 FIO Processor Kit
1        | 818174-B21  | HPE DL360 Gen9 Intel Xeon E5-2630v4 Processor Kit
2        | 805349-B21  | HP 16GB (1x16GB) Single Rank x4 DDR4-2400 CAS-17-17-17 Registered Memory Kit
1        | 665249-B21  | HPE Ethernet 10Gb 2-port 560SFP+ Adapter
1        | 749976-B21  | HP H240ar 12Gb 2-ports Int FIO Smart Host Bus Adapter
1        | 766211-B21  | HP DL360 Gen9 P440ar/H240ar SAS Cbl
2        | 657750-B21  | HPE 1TB 6G SATA 7.2K rpm LFF (3.5-inch) SC Midline 1yr Warranty Hard Drive
2        | 720478-B21  | HPE 500W Flex Slot Platinum Hot Plug Power Supply Kit
1        | 789388-B21  | HPE 1U Gen9 Easy Install Rail Kit
Sample Apollo 4510 BOM

Quantity | Product     | Description
1        | 799581-B21  | HPE Apollo 4510 Gen9 CTO Chassis
1        | 799377-B21  | HPE XL4510 8HDD Cage Kit
1        | 786593-B21  | HPE 1xApollo 450 Gen9 Node Svr
1        | 842973-L21  | HPE XL450 Gen9 E5-2630v4 FIO Kit
1        | 842973-B21  | HPE XL450 Gen9 E5-2630v4 Kit
6        | 805351-B21  | HPE 32GB 2Rx4 PC4-2400T-R Kit
1        | 665243-B21  | HPE Ethernet 10Gb 2P 560FLR-SFP+ Adptr
1        | 761878-B21  | HPE H244br FIO Smart HBA
2        | 726821-B21  | HPE Smart Array P440/4G Controller
1        | 727258-B21  | HP 96W Smart Storage Battery
1        | 808967-B21  | HPE A4500 Mini SAS Dual P440 Cable Kit
2        | 655710-B21  | HPE 1TB 6G SATA 7.2k 2.5in SC MDL HDD
4        | 832433-B21  | HPE 960GB 6G SATA Mixed Use-3 LFF 3.5-in LPC 3yr Wty Solid State Drives
4        | 846785-B21  | HPE 6TB 6G SATA 7.2K LFF 512e LP MDL HDD
10       | 867261-B21  | HPE 6TB 6G SATA 7.2K LFF LP 512e FIO HDD (Bundle)
3        | 720479-B21  | HPE 800W Common Slot Platinum Plus Hot Plug Power Supply Kit
1        | 681254-B21  | HPE 4.3U Rail Kit
Sample Apollo 4530 BOM

Quantity | Product     | Description
1        | 799581-B23  | HPE Apollo 4530 Gen9 CTO Chassis
3        | 786595-B23  | HPE ProLiant XL450 Gen9 Configure-to-order Server Node
3        | 842973-L21  | HPE XL450 Gen9 E5-2630v4 FIO Kit
3        | 842973-B21  | HPE XL450 Gen9 Intel Xeon E5-2630v4 Processor Kit
12       | 805349-B21  | HPE 16GB 1Rx4 PC4-2400T-R Kit
3        | 761878-B21  | HP H244br FIO Smart HBA
3        | 726821-B21  | HP Smart Array P440/4G Controller
3        | 727258-B21  | HP 96W Smart Storage Battery
6        | 808961-B21  | HPE A4500 Mini SAS H240 Cable Kit
3        | 665243-B21  | HP Ethernet 10Gb 2-port 560FLR-SFP+ Adapter
6        | 655710-B21  | HPE 1TB 6G SATA 7.2K rpm SFF (2.5-inch) SC Midline 1yr Warranty Hard Drive
3        | 832433-B21  | HPE 960GB 6G SATA Mixed Use-3 LFF 3.5-in LPC 3yr Wty Solid State Drives
6        | 867265-B21  | HPE 4TB 6G SATA 7.2K LFF LP 512e FIO HDD (Bundle)
6        | 846783-B21  | HPE 4TB 6G SATA 7.2K LFF 512e LP MDL HDD
3        | 720479-B21  | HPE 800W Common Slot Platinum Plus Hot Plug Power Supply Kit
1        | 681254-B21  | HPE 4.3U Server Rail Kit
Note
Apollo 4530 LFF drives are configured per node. There are 15 bays for each node, with a maximum of two 6-pack bundles and two
individual HDDs per node.
Summary
With rapid growth of unstructured data and backup/archival storage, traditional storage solutions lack the ability to scale or efficiently serve data
from a single unified storage platform. For unstructured data, the performance capability of traditional SAN and NAS vendors is often less
important than the cost per gigabyte of storage at scale.
Scality RING running on HPE ProLiant and HPE Apollo hardware combines object storage software and industry-standard servers to provide the
low-cost, reliable, flexible, centrally managed storage that businesses need for large-scale unstructured data. HPE Scalable Object Storage with
Scality RING creates a solution with a lower TCO than traditional SAN and NAS storage, while providing greater data protection for
current and future large-scale storage needs.
Resources
Email [email protected] with questions about HPE hardware for Scality object storage solutions. With increased density, efficiency,
serviceability, and flexibility, the HPE Apollo 4500 System is an ideal platform for scale-out storage needs. Together with the
HPE ProLiant DL360 Gen9 Server, it supports the management and access features of object storage, operates seamlessly as part of
HPE Converged Infrastructure, and delivers the power, density, and performance required.
Documents for HPE Scality object storage solutions on industry-standard servers are at hpe.com/storage/scalableobject
HPE Secure Encryption at hpe.com/servers/secureencryption
HPE Integrated Lights-Out (iLO) at hpe.com/info/ilo
Learn more at hpe.com/storage/scalableobject
© Copyright 2015–2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change
without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty
statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty.
Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
The OpenStack Word Mark is either registered trademark/service mark or trademark/service mark of the OpenStack Foundation, in the
United States and other countries and is used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or
sponsored by the OpenStack Foundation, or the OpenStack community. Pivotal and Cloud Foundry are trademarks and/or registered
trademarks of Pivotal Software, Inc. in the United States and/or other countries. Intel and Intel Xeon are trademarks of Intel Corporation in
the U.S. and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. All other third-party
trademark(s) is/are the property of their respective owner(s).
4AA5-9641ENW, September 2016, Rev. 3