Load balancing Scality RING
About Scality RING
Scality builds the most powerful storage tools to make data easy to protect, search and manage anytime, on any cloud. Scality gives you the autonomy and agility to be competitive in a data-driven economy, helping you prepare for the challenges of the fourth industrial revolution. Scality RING software deploys on industry-standard x86 servers to store objects and files whilst providing compatibility with the Amazon S3 API.
Key benefits of load balancing
Here are a few key benefits:
- Ensures data is highly available and accessible at all times
- Enables businesses to meet growing data demands through scalability
- Routes traffic correctly through continuous monitoring and health checks
Scality RING supports a variety of load balancing methods, depending on customer infrastructure, including layer 4, layer 7, and GSLB-based location affinity. The RING service that should be load balanced is the S3 component.
How to load balance Scality RING
The function of the load balancer is to distribute inbound connections across a cluster of Scality RING nodes, to provide a highly available and scalable service. One virtual service is used to load balance the S3 aspect of RING. Client persistence is not required and should not be enabled.
SSL termination on the load balancer is recommended when load balancing Scality RING. The S3 service uses the "Negotiate HTTP (GET)" health check. For multi-site RING deployments, the load balancer's GSLB functionality can be used to provide high availability and location affinity across sites. With this optional, DNS-based feature, if a site's RING service and/or load balancers go offline, local clients are automatically directed to a functioning RING cluster at another site.
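To illustrate the kind of layer 7 check described above, here is a minimal sketch in Python of an HTTP GET health check against an S3 endpoint. The function name, port, and "any well-formed response below 5xx counts as healthy" policy are assumptions for illustration, not the load balancer's actual implementation; note that an unauthenticated GET against an S3 endpoint commonly returns 403, which still proves the service is answering.

```python
import http.client


def s3_node_is_healthy(host: str, port: int = 80, timeout: float = 2.0) -> bool:
    """Sketch of an application-level health check: issue an HTTP GET
    against the node's S3 endpoint and treat any well-formed response
    below 5xx as healthy (an unauthenticated GET on S3 often returns
    403, which still shows the service is up)."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", "/")
        resp = conn.getresponse()
        resp.read()  # drain the body so the connection can close cleanly
        conn.close()
        # 5xx means the node answered but the service is failing: mark it down
        return resp.status < 500
    except (OSError, http.client.HTTPException):
        # Connection refused, timeout, or malformed response: mark it down
        return False
```

A check like this is what lets the load balancer take an unresponsive RING node out of rotation while TCP-level connectivity might still appear fine.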
For deployments that are read-intensive, it is possible to use an alternative load balancing method known as direct routing. This allows reply traffic to flow directly from the back end servers to the clients, thus removing the load balancer as a potential bottleneck for reply traffic. Direct routing can benefit read-intensive deployments with a large reply traffic to request traffic ratio.
- High Availability (HA): Traffic is directed only to healthy Scality RING nodes, based on application-level health checks.
- Resilience: Automated failover and location-based traffic steering for multi-site deployments.
- Scalability: Optimized for Scality RING's distributed architecture to facilitate easy scale-out of application workloads and concurrent users.
The load balancer can be deployed as a single unit, although we recommend a clustered pair for resilience and high availability. Details on configuring a clustered pair can be found on page 25 of our deployment guide, below.
Useful resources
- Deployment guide: Scality RING Deployment Guide
- Manual: Administration manual v8
- Case study: Loadbalancer.org, Scality, and HPE team up to take the object storage market by storm
- Blogs:
  - Things to keep in mind while choosing a load balancer for your object storage system
  - NAS vs Object Storage: what's best for unstructured data?
  - How load balancing helps to store and protect petabytes of data
  - Load balancing: The driving force behind successful object storage
- White paper: Load balancing: the lifeblood in resilient Object Storage
- Other:
  - Loadbalancer.org sizing for HPE Solutions with Scality
  - Loadbalancer.org optimizing S3 networks for HPE Solutions with Scality