
Avinash Voona

Legacy Group
  • Posts: 4
  • Joined
  • Last visited
  • Days Won: 1

Avinash Voona last won the day on June 18, 2018

Avinash Voona had the most liked content!

Avinash Voona's Achievements

Newbie (1/14)

Recent Badges
  • Conversation Starter (Rare)
  • First Post (Rare)
  • Week One Done
  • One Month Later
  • One Year In

Reputation: 0

  1. Submitted March 23, 2021

Author: Avinash Kumar

The agility and flexibility the cloud provides are critical to helping customers meet their business needs, and most adopt a hybrid-cloud or multi-cloud approach. That said, the cloud is not free, and the increase in cloud adoption has put more focus on the total cost of operations and how to lower it. After all, application infrastructure cost is one of the major components of IT expenses. A prime example of how NetScaler ADC helps many customers reduce back-end infrastructure cost is connection multiplexing.

This blog post will highlight three ways NetScaler ADC can help you save money on your cloud journey:

  • Use of NetScaler ADC instead of L4 load balancers that don't support multiplexing in the cloud
  • Consolidation of L4 and L7 load balancing and application security in the same NetScaler ADC for better management and visibility
  • Complementing cloud load balancers (for example, using NetScaler ADC behind L4 cloud load balancers)

In each case, NetScaler ADC can reduce back-end infrastructure costs via connection multiplexing, resulting in the use of fewer servers.

What is Connection Multiplexing?

Connection multiplexing is a method of reusing connections to avoid the overhead a server incurs when it has to establish a new connection for each request. Connection multiplexing support in NetScaler ADC ensures that server connections are efficiently reused, which dramatically reduces the SSL/TLS load on back-end servers.

Solution 1: Use NetScaler ADC instead of L4 load balancers that don't support multiplexing in the cloud

For any given load, NetScaler ADC requires 40 percent fewer back-end servers than L4 load balancers in the cloud. This is because NetScaler ADC supports connection multiplexing. L4 cloud provider load balancers without multiplexing support must establish a new connection for each request, which places more compute burden on the back-end servers, especially for TCP and SSL/TLS.

For example, an L4 load balancer without multiplexing support might require five back-end servers to cope with a load of 5,000 SSL connections per second. With NetScaler ADC, three servers would suffice. NetScaler ADC reuses connections, which reduces the required back-end compute by around 40 percent, as shown below.

Load Balancer                                 | Back-end Servers (AWS C5.large, $0.085/hr) | Monthly Server Cost ($)
L4 load balancer without multiplexing support | 5 x $0.085 = $0.425/hr                     | $306
NetScaler ADC                                 | 3 x $0.085 = $0.255/hr                     | $184 (~40% saving; $122/month saved for 5K SSL TPS)

Table 1: The difference in overall cost for compute resources to handle the given load on AWS Cloud

Load Balancer                                 | Back-end Servers (Azure A2v2, $0.076/hr)   | Monthly Server Cost ($)
L4 load balancer without multiplexing support | 5 x $0.076 = $0.38/hr                      | $273.60
NetScaler ADC                                 | 3 x $0.076 = $0.228/hr                     | $164.20 (~40% saving; $109.40/month saved for 5K SSL TPS)

Table 2: The difference in overall cost for compute resources to handle the given load on Azure Cloud

The resulting savings are $122 per month on AWS and $109.40 per month on Azure. In other scenarios, these savings can be much higher, because customers use more servers to cope with larger workloads. In all scenarios, you will see a cost saving of about 40 percent on server compute thanks to NetScaler ADC and its multiplexing support.
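To make the arithmetic behind Tables 1 and 2 explicit, here is a minimal Python sketch that reproduces the monthly cost comparison. The server counts and hourly rates come from the tables above; the 720 hours per month (a 30-day month) is an assumption that reproduces the table figures.

```python
# Reproduce the monthly back-end cost comparison from Tables 1 and 2.
HOURS_PER_MONTH = 720        # assumes a 30-day month, matching the table figures

pricing = {
    "AWS C5.large": 0.085,   # $/hr, from Table 1
    "Azure A2v2":   0.076,   # $/hr, from Table 2
}

SERVERS_WITHOUT_MUX = 5      # L4 load balancer without multiplexing support
SERVERS_WITH_ADC    = 3      # NetScaler ADC with connection multiplexing

for instance, rate in pricing.items():
    cost_without = SERVERS_WITHOUT_MUX * rate * HOURS_PER_MONTH
    cost_with    = SERVERS_WITH_ADC * rate * HOURS_PER_MONTH
    saving       = cost_without - cost_with
    print(f"{instance}: ${cost_without:.2f} vs ${cost_with:.2f} per month "
          f"-> ${saving:.2f} saved ({saving / cost_without:.0%})")
```

Run as-is, this prints $306.00 vs $183.60 on AWS and $273.60 vs $164.16 on Azure, a 40 percent saving in both cases, matching the rounded figures in the tables.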
Solution 2: Consolidate L4 and L7 load balancing and application security in the same NetScaler ADC for better management and visibility

In many cloud deployments, customers use two different layers of load balancers for Layer 4 and Layer 7 decision-making. Consolidating distinct L4 and L7 load balancers into a single NetScaler ADC provides three distinct advantages:

  • 40 percent savings on back-end infrastructure via multiplexing
  • Lower operational overhead (consolidation of multiple service types: HTTPS, TLS/SSL-TCP, HTTP, TCP, UDP)
  • Lower total cost of ownership, because NetScaler ADC supports both Layer 4 and Layer 7 service types

NetScaler ADC can consolidate multiple layers of load balancers to reduce costs. In addition, the enhanced feature set of NetScaler ADC reduces the need for additional services such as a separate web application firewall for application security, reducing costs further. (A minimal automation sketch of this consolidation appears at the end of this post.)

Solution 3: Complement cloud load balancers (for example, use NetScaler ADC behind L4 cloud load balancers)

Some customers want to retain their cloud load balancers and use NetScaler ADC behind them to save on server workloads. There are a variety of reasons for this, including:

  • Retaining the L4 cloud load balancer as the Tier-1 load balancer to maintain a single static IP address per availability zone
  • Using NetScaler ADC as a Tier-2 load balancer to reduce server cost by 40 percent
  • Using NetScaler ADC for its rich Layer 7 functionality, such as global server load balancing (GSLB), rewrite, responder, authentication, and application security (WAF) use cases

With NetScaler ADC, you can lower server TCO by 40 percent, lower your operational overhead by consolidating multiple service types (HTTPS, TLS/SSL-TCP, HTTP, TCP, UDP) in a single NetScaler ADC, and complement your Layer 4 cloud load balancer. Learn more about NetScaler ADC today.
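To illustrate the consolidation described in Solution 2, here is a minimal sketch of how an L7 (HTTP) and an L4 (TCP) virtual server could be configured on the same NetScaler ADC through its NITRO REST API. The IP addresses, names, and credentials are placeholders, and the resource and field names, while following the documented NITRO pattern, should be verified against the API reference for your ADC release; this is a sketch of the idea, not an authoritative configuration.

```python
# Sketch: consolidate L4 (TCP) and L7 (HTTP) load balancing on one NetScaler ADC
# via the NITRO REST API. Names, IPs, and credentials below are placeholders.
import requests

NSIP = "203.0.113.10"                      # ADC management IP (placeholder)
BASE = f"http://{NSIP}/nitro/v1/config"
HEADERS = {
    "X-NITRO-USER": "nsroot",              # placeholder credentials
    "X-NITRO-PASS": "password",
    "Content-Type": "application/json",
}

def post(resource: str, payload: dict) -> None:
    """Create one configuration object on the ADC."""
    r = requests.post(f"{BASE}/{resource}", json=payload, headers=HEADERS)
    r.raise_for_status()

# L7 virtual server for web traffic (this is where connection multiplexing pays off).
post("lbvserver", {"lbvserver": {
    "name": "vs_web_l7", "servicetype": "HTTP", "ipv46": "10.0.0.10", "port": 80}})

# L4 virtual server for a generic TCP workload, hosted on the same appliance.
post("lbvserver", {"lbvserver": {
    "name": "vs_tcp_l4", "servicetype": "TCP", "ipv46": "10.0.0.11", "port": 8443}})

# HTTP back-end services bound to the L7 virtual server.
for svc_name, ip in [("svc_web_1", "10.0.1.11"), ("svc_web_2", "10.0.1.12")]:
    post("service", {"service": {
        "name": svc_name, "ip": ip, "servicetype": "HTTP", "port": 80}})
    post("lbvserver_service_binding", {"lbvserver_service_binding": {
        "name": "vs_web_l7", "servicename": svc_name}})

# A TCP back-end service bound to the L4 virtual server.
post("service", {"service": {
    "name": "svc_tcp_1", "ip": "10.0.1.21", "servicetype": "TCP", "port": 8443}})
post("lbvserver_service_binding", {"lbvserver_service_binding": {
    "name": "vs_tcp_l4", "servicename": "svc_tcp_1"}})
```

Because both service types live on one appliance, there is a single place to manage, monitor, and secure the traffic, which is the operational-overhead point made in Solution 2.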
  2. Latency is the mother of interactivity. Low latency has become an indispensable part of communication; it is no longer optional. The less interactive a site becomes, the more likely users are to simply click away to a competitor's site. So, to enhance user experience and improve application responsiveness and availability, the majority of web applications are delivered through Application Delivery Controllers (ADCs) or load balancers.

Application deployment and delivery can be complex in the cloud era, and with so much at stake in terms of application experience, it needs to be made simpler and less error-prone. This new service is designed to be intent-based, with unmatched automation, so that businesses can deliver applications more quickly.

When you plan a vacation and use the services of a travel agent, you tell the agent where you want to go and when, and the agent plans the itinerary, books plane tickets, and reserves accommodations. Similarly, App Delivery and Security Service enables you to define your business intent, translates that intent into the right policies, and then automatically orchestrates and configures resources appropriately to achieve it.

App Delivery and Security Service removes the complexity from each step of the application delivery lifecycle. You simply define the intent and let the service take care of all the tedious, repetitive orchestration and configuration steps for you. The unmatched levels of automation increase operational efficiency by up to 60 percent for IT. After all, IT should be focused on business goals, not the syntax of the configuration.

Three major factors contribute to web page load time:

  • Client-to-ADC latency
  • ADC processing time
  • Server response time

Let's see how the service delivers the desired intent to ensure the best app performance and lower app latency throughout the year.

Use Case 1: Ensure the best app performance and lower app latency throughout the year

To achieve the best app performance and the least latency, we follow a three-phased approach: Design, Deploy, and Optimize.

Design: This is the phase where you are preparing for app deployment in the cloud. There is often a dilemma about which clouds and cloud regions to choose in order to provide the least latency to end users. This is where the service's cloud recommendation engine helps.

Cloud recommendation engine: It recommends the best locations to deploy new sites for a multi-site application. These recommendations help you boost the overall performance of your application. The recommendations are based on the user locations, the traffic (in percentage) expected from each user location, and the cloud service provider. You can use the recommended site location information while adding sites for a multi-site application.

The cloud region recommendation engine provides recommendations for single-site, dual-site, and triple-site scenarios. The recommendations contain location information, global latency, and a benefit percentage. Global latency is based on real-time measurements, and the site location recommendations are arranged according to the best-performing combination of cloud providers' sites. Benefit displays the advantage of choosing a dual-site or triple-site deployment in comparison with a single site.

Suppose there is a global app called demo app 1, with 30 percent of user traffic coming from each of the United States, the UK, and India, and the remaining 10 percent from other countries. Let's see how the service provides recommendations to lower end-user latency.
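Before looking at the recommendations themselves, here is a minimal sketch of what a traffic-weighted global latency figure means, using the demo app 1 traffic split above. The per-region latency values are hypothetical placeholders chosen only for illustration; the service's own numbers come from its real-time measurements.

```python
# Sketch: combine per-region latencies into one traffic-weighted "global latency".
# The traffic split matches the demo app 1 example (30% US, 30% UK, 30% India,
# 10% other); the latency values themselves are hypothetical placeholders.

traffic_share = {"US": 0.30, "UK": 0.30, "India": 0.30, "Other": 0.10}

# Hypothetical round-trip latency (ms) from each user region to a candidate layout.
latency_ms = {
    "single-site": {"US": 40, "UK": 110, "India": 220, "Other": 150},
    "triple-site": {"US": 40, "UK": 35,  "India": 45,  "Other": 120},
}

def global_latency(per_region: dict) -> float:
    """Traffic-weighted average latency across user regions."""
    return sum(traffic_share[region] * ms for region, ms in per_region.items())

for layout, per_region in latency_ms.items():
    print(f"{layout}: {global_latency(per_region):.2f} ms")

single = global_latency(latency_ms["single-site"])
triple = global_latency(latency_ms["triple-site"])
print(f"benefit of triple-site vs single-site: {(single - triple) / single:.0%}")
```

The benefit percentage reported by the service can be read the same way: how much the weighted latency drops when more sites are placed closer to where the traffic actually comes from.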
App Delivery and Security Service provides multiple recommendations with varying global latency, so you can compare the latency benefit percentage across recommendations. In this example, the global latency for a single site is 108.71 ms, for a dual site 65.09 ms, and for a triple site 39.76 ms. Based on the intent to keep end-user latency low, the customer or admin would pick the triple-site option and bring their app servers to the recommended cloud regions.

The service also provides new cloud region recommendations to lower end-user latency for already deployed apps by tracking user distribution per geography and the respective latency. For more information on how to configure this, see CADS Cloud region recommendation engine.

Deploy: In the Deploy phase, now that the optimal cloud regions are known, the ADC environment is deployed to seamlessly deliver and secure the applications, and the operational overhead of the ADC environment is offloaded to the service. The deployed ADC environment automatically scales up and down based on ADC resource utilization.

Optimize: Once you have deployed the application and actual traffic is running, your traffic fluctuates and the state of the internet changes, so it becomes all the more necessary to gauge and self-heal the root cause of issues such as a poorly performing server. This is where the service's self-healing capability helps.

Self-healing: The self-healing capability provides deep application analytics along with an improved application experience. With the self-healing capability:

  • You can automatically detect, remediate, and replace a defective server with a healthy server whenever the performance of an application server degrades or the server starts to malfunction.
  • If an anomaly is detected in one of the instances of an application, the service generates an alert and can automatically take remediation actions.
  • The detected anomalies and corrective steps are logged in the Action History.

The service considers the following conditions while replacing a faulty server:

  • All persistent connections are honored.
  • Existing connections are completed.
  • New connections aren't accepted for the faulty server.

Enable Auto replace slow server to self-heal from a poorly performing or slow server. For more information on how to configure this, see Create Services.

Automatically replace a slow server: In a stack of applications running as back-end servers in an autoscale group, if one of the instances goes faulty and starts responding slowly (based on response time), the self-healing feature detects the faulty server and automatically replaces it with a healthy server (see the sketch at the end of this post).

To learn more about App Delivery and Security Service, register here.
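To make the auto-replace behavior described above concrete, here is a minimal sketch of the kind of control loop it implies: detect a server whose response time degrades, stop sending it new connections, let existing connections complete, then replace it. This illustrates the logic only; the helper objects, method names, and thresholds are hypothetical, not the service's actual implementation.

```python
# Sketch of the self-healing loop described above: detect a slow back-end server,
# drain it (no new connections, existing ones complete), and replace it with a
# healthy instance. All helpers and thresholds here are hypothetical placeholders.
import time

RESPONSE_TIME_THRESHOLD_MS = 500   # hypothetical "slow server" threshold
DRAIN_POLL_SECONDS = 10

def self_heal(autoscale_group, monitor, provisioner) -> None:
    for server in autoscale_group.servers():
        if monitor.avg_response_time_ms(server) <= RESPONSE_TIME_THRESHOLD_MS:
            continue  # server is responding normally; nothing to do

        # Step 1: stop sending new connections to the faulty server
        # (persistent connections are honored, existing ones are allowed to finish).
        autoscale_group.disable_new_connections(server)

        # Step 2: wait for existing connections to complete (graceful drain).
        while monitor.active_connections(server) > 0:
            time.sleep(DRAIN_POLL_SECONDS)

        # Step 3: replace the drained server with a fresh, healthy instance and
        # record the action (cf. the Action History mentioned above).
        replacement = provisioner.launch_replacement(server)
        autoscale_group.replace(server, replacement)
        provisioner.log_action(f"replaced slow server {server} with {replacement}")
```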