Author: Vemula Srimithra /Aman Chaudhary
Reviewer: Raghav S N / Nagaraj Harikar
Priority Order Load Balancing
Priority load balancing enables the assignment of a priority value to each service or service group associated with a priority load balancing virtual server. The service or service group member with the lowest numerical value has the highest priority. Traffic is directed to this highest-priority service as long as it remains available. If it becomes unavailable, traffic is redirected to the service with the next highest priority.
Additionally, the order preference of services can be controlled through load balancing policies and actions. When a load balancing policy evaluates to true for a specific client request, the order specified in the corresponding load balancing action takes precedence over the default order preference.
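For instance, a minimal sketch of both mechanisms might look as follows (the entity names, IP addresses, and the header used in the policy rule are hypothetical; the same pattern appears in the scenarios later in this post):
add lb vserver orders_vs HTTP 192.0.2.1 80
add service svc_a 192.0.2.11 HTTP 80
add service svc_b 192.0.2.12 HTTP 80
# Default preference: svc_a (order 1) receives traffic as long as it is up
bind lb vserver orders_vs svc_a -order 1
bind lb vserver orders_vs svc_b -order 2
# Override the default order for requests that match the policy rule
add lb action prefer_b -type SELECTIONORDER -value 2 1
add lb policy prefer_b_pol -rule HTTP.REQ.HEADER("X-Beta").EXISTS -action prefer_b
bind lb vserver orders_vs -policyname prefer_b_pol -priority 10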
High Availability
High Availability (HA) with NetScaler's priority load balancing is a method of ensuring continuous application availability by distributing traffic based on the priority of services or service groups. High availability is needed in several scenarios, such as active/passive deployments, maintenance and upgrades, and disaster recovery. Let's see how priority order can be used to achieve high availability in each of these scenarios.
In the blog post below, you'll find an example scenario with detailed explanations of priority load balancing and high availability, making these concepts easier to understand and implement.
Click here to access the blog:
https://www.citrix.com/blogs/2022/03/23/priority-load-balancing-high-availability-simplified/
Maintenance and upgrades
To minimize downtime and guarantee uninterrupted service availability, NetScaler priority load balancing enables efficient management of maintenance and upgrades.
Priority load balancing lets you conduct maintenance or upgrades on particular services or service groups without compromising the overall availability of your application. This is accomplished by redirecting traffic to services with lower priority while servicing or upgrading the top-priority service.
Example Scenario
Consider a shopping cart application where two levels of redundancy ensure the desired service-level objective. When the primary application instance requires an upgrade, disable that entity (service or service group member) on the NetScaler. This triggers the NetScaler to automatically switch traffic to the secondary application instance, maintaining service continuity.
NetScaler CLI Configuration
add servicegroup shoppingCart SSL
bind servicegroup shoppingCart <Primary application instance IP> 443 -order 1
bind servicegroup shoppingCart <secondary application instance IP> 443 -order 2
bind servicegroup shoppingCart <tertiary application instance IP> 443 -order 3
# The primary application instance can be disabled during maintenance using the command below
disable servicegroup shoppingCart <Primary application instance IP> 443 -graceful YES
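For completeness, a minimal sketch of the virtual server side of this example and of re-enabling the member once the upgrade is finished (the virtual server name and VIP are hypothetical):
add lb vserver shoppingCartVS SSL <VIP> 443
bind lb vserver shoppingCartVS shoppingCart
# Re-enable the primary application instance once maintenance is complete
enable servicegroup shoppingCart <Primary application instance IP> 443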
Disaster Recovery
To maintain the availability and functionality of your applications during a disaster, Disaster Recovery (DR) can use the NetScaler priority order load balancing technique to manage failover and recovery processes efficiently.
Priority order load balancing enables the assignment of varying priorities to your services or service groups. In the event of a disaster, this mechanism guarantees that traffic is routed to the next highest-priority accessible service, ensuring uninterrupted application availability.
Example Scenario
Imagine your shopping cart application is running in two data centers, located in US east and US west. To achieve disaster recovery, the GSLB services need to be bound to the GSLB virtual server with different orders, where each order corresponds to one region.
Applications running in US east can be bound with order 1
Applications running in US west can be bound with order 2
Under normal circumstances, traffic is routed to US east. In the event of a failure in the US east region, traffic is automatically rerouted to US west. Once the applications in US east are up and running again, traffic is directed back to US east.
NetScaler CLI Configuration
add gslb vserver shoppingcart SSL
add gslb service shopping_cart_US_East <application instance IP running in US east> SSL 443 -sitename USeast
add gslb service shopping_cart_US_West <application instance IP running in US west> SSL 443 -sitename USwest
bind gslb vserver shoppingcart -servicename shopping_cart_US_East -order 1
bind gslb vserver shoppingcart -servicename shopping_cart_US_West -order 2
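The configuration above assumes that the two GSLB sites already exist and that a domain is bound to the GSLB virtual server. A minimal sketch of those supporting commands (the site IP addresses and the domain name are placeholders):
add gslb site USeast <US east site IP address>
add gslb site USwest <US west site IP address>
bind gslb vserver shoppingcart -domainName shop.example.com -TTL 5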
Geographical distribution
Geographical distribution in the context of Global Server Load Balancing (GSLB) can be achieved using priority order load balancing, where client traffic is distributed across multiple servers located in different geographical regions. This method enhances application performance by directing user requests to the most appropriate server.
Example scenario
Suppose you have a shopping cart application running across multiple data centers, namely DC1, DC2, and DC3. To optimize user experience, you aim to redirect users to the data center that is geographically closest to their location.
Based on the client IP address, traffic can be distributed to the nearest data center using LB policies.
NetScaler CLI Configuration
add gslb vserver shoppingcart SSL
add gslb service shopping_cart_NA <application instance ip running in US> SSL 443 -sitename NA
add gslb service shopping_cart_EU <application instance ip running in EU> SSL 443 -sitename EU
add gslb service shopping_cart_ASIA <application instance ip running in ASIA> SSL 443 -sitename ASIA
bind gslb vserver shoppingcart -servicename shopping_cart_NA -order 1
bind gslb vserver shoppingcart -servicename shopping_cart_EU -order 2
bind gslb vserver shoppingcart -servicename shopping_cart_ASIA -order 3
add lb action lbact1 -type SELECTIONORDER -value 1 2 3
add lb action lbact2 -type SELECTIONORDER -value 2 3 1
add lb action lbact3 -type SELECTIONORDER -value 3 2 1
add lb policy lbpol1 -rule client.ip.src.location.eq("NA.*") -action lbact1
add lb policy lbpol1 -rule client.ip.src.location.eq("EU.*") -action lbact2
add lb policy lbpol1 -rule client.ip.src.location.eq("ASIA.*") -action lbact3
bind gslb vserver shoppingcart -policyname lbpol1 -priority 10
bind gslb vserver shoppingcart -policyname lbpol2 -priority 20
bind gslb vserver shoppingcart -policyname lbpol3 -priority 30
If the client IP falls under any of the locations specified above, the client is directed to the corresponding data center. If none of the expressions evaluate to true for a client IP, the default order preference is followed during GSLB service selection.
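Note that location-based policy expressions such as the ones above rely on a location database being loaded on the appliance. A minimal sketch, assuming the built-in GeoIP database shipped with NetScaler is used (the file path can differ across releases):
add locationFile /var/netscaler/inbuilt_db/Citrix_Netscaler_InBuilt_GeoIP_DB_IPv4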
Service Level Agreements
Service Level Agreements (SLAs) are critical in ensuring that service providers meet the performance expectations of their clients. A common requirement is to serve premium clients with minimal latency. Using priority order load balancing, premium clients can be load balanced to low-latency services.
Example Scenario
Consider a streaming service running two servers: one with high throughput to serve premium clients, and the other with lower throughput to serve the remaining clients.
NetScaler CLI Configuration
add lb vserver streaming SSL <ip> 443
add service premium_service <ip> SSL 443
add service normal_service <ip> SSL 443
bind lb vserver streaming premium_service -order 1
bind lb vserver streaming normal_service -order 2
add lb action lbact1 -type SELECTIONORDER -value 1
add lb action lbact2 -type SELECTIONORDER -value 2
import patsetfile local:ip_list.txt client_iplist
add patsetfile client_iplist
add patset client_list -patsetfile client_iplist
add lb policy lbpol1 -rule client.ip.src.typecast_text_t.equals_any("client_list") -action lbact1
add lb policy lbpol2 -rule true -action lbact2
bind lb vserver streaming -policyname lbpol1 -priority 10
bind lb vserver streaming -policyname lbpol2 -priority 20
In the sample configuration above, the premium client source IP list is imported into NetScaler using the import patsetfile utility. If lbpol1 evaluates to true, that is, the incoming client IP is part of the premium client list, the request is load balanced to the low-latency (premium) service.
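As an illustration, the imported ip_list.txt could simply contain one premium client source IP per line, so that the typecast client IP text matches an entry in the pattern set (the addresses below are placeholders):
192.0.2.10
192.0.2.11
198.51.100.25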
Geo Fencing
Geographical distribution using priority order load balancing can significantly enhance compliance and security by ensuring that data is processed and stored in appropriate locations while maintaining high availability and performance.
Example Scenario
Consider an e-commerce application running its servers in two regions: one in the EU and the other in the US. The following NetScaler configuration ensures that clients from the EU region are directed to the EU server.
NetScaler CLI Configuration
add gslb vserver e_commerce SSL
add gslb service eu_server <ip> SSL 443 -sitename EU
add gslb service us_server <ip> SSL 443 -sitename NA
bind gslb vserver e_commerce -servicename eu_server -order 1
bind gslb vserver e_commerce -servicename us_server -order 2
add lb action lbact1 -type SELECTIONORDER -value 1
add lb action lbact2 -type SELECTIONORDER -value 2
add lb policy lbpol1 -rule client.ip.src.between(10.10.10.1, 20.20.20.1) -action lbact1
add lb policy lbpol2 -rule true -action lbact2
bind gslb vserver e_commerce -policyname lbpol1 -priority 10
bind gslb vserver e_commerce -policyname lbpol2 -priority 20
If the client comes from the specified IP range (assume that this range represents the EU region), the client is load balanced to eu_server only.
Blue-Green Deployment
To minimize downtime and risk, the blue-green deployment strategy employs two identical production environments, known as Blue and Green. Only one of these environments is active at any given time, handling all the production traffic. NetScaler's priority order load balancing is an effective tool for managing this deployment strategy.
For detailed information regarding the NetScaler configuration for managing a Blue-Green deployment, refer to this blog:
https://www.citrix.com/blogs/2022/03/02/simplify-continuous-deployments-with-citrix-adc/
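As a quick illustration, here is a minimal sketch of a blue-green cutover using priority order, following the same pattern as the maintenance example above (the service group name, virtual server name, and IP placeholders are hypothetical):
add servicegroup app_sg SSL
bind servicegroup app_sg <blue instance IP> 443 -order 1
bind servicegroup app_sg <green instance IP> 443 -order 2
add lb vserver bg_vs SSL <VIP> 443
bind lb vserver bg_vs app_sg
# Cut over to green by gracefully disabling the blue member; traffic shifts to order 2
disable servicegroup app_sg <blue instance IP> 443 -graceful YES
# Roll back by re-enabling the blue member
enable servicegroup app_sg <blue instance IP> 443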
Canary
Canary deployment is a strategy where a new version of an application is gradually rolled out to a small subset of users before being deployed to the entire user base. This allows for monitoring and testing in a real-world environment with minimal risk. NetScaler’s priority order load balancing can be effectively used to manage this deployment strategy.
For detailed information regarding the NetScaler configuration for managing a Canary deployment, refer to this blog:
https://www.citrix.com/blogs/2022/03/02/simplify-continuous-deployments-with-citrix-adc/
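Similarly, here is a minimal sketch of a canary rollout with priority order, where only requests carrying a hypothetical X-Canary header are steered to the new version (the entity names, IPs, and header are assumptions):
add service stable_svc <stable version IP> SSL 443
add service canary_svc <canary version IP> SSL 443
add lb vserver canary_vs SSL <VIP> 443
bind lb vserver canary_vs stable_svc -order 1
bind lb vserver canary_vs canary_svc -order 2
# Send matching (canary) requests to order 2; all other clients follow the default order
add lb action canary_act -type SELECTIONORDER -value 2
add lb policy canary_pol -rule HTTP.REQ.HEADER("X-Canary").EXISTS -action canary_act
bind lb vserver canary_vs -policyname canary_pol -priority 10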