
WEM Upgrade from 4.7 to version 2006 and SQL AlwaysOn w/Availability Group to SQL AlwaysOn Distributed Availability Group - Two DC's active/active

Brian Murphy


I apologize for the length. I've spent the last few weeks researching KBs, forums, and blogs, and I'm not able to find information for this specific scenario. In some cases I've found conflicting information, so if this succeeds it might have relevance to someone else down the road.


The goal: an inline upgrade of WEM 4.7 to WEM 2006, currently backed by a SQL 2017 AlwaysOn cluster with an existing AG in DC-1. We run two separate Citrix ecosystems (Sites) behind GSLB/LB (NetScaler), with two physically separate DCs connected by dark fiber and a SQL 2017 cluster in each (AG-1 and AG-2). Instead of using AG-1 alone, the proposal is to combine AG-1 and AG-2 in a Distributed Availability Group. First, though, I must address the inline upgrade issues I've read about thus far.


We are preparing to upgrade an existing WEM 4.7 environment to the latest 2006 release. In this design we have two data centers with two autonomous Citrix ecosystems (Sites). LB and GSLB are provided by Citrix NetScaler SDXs, and the DCs are connected by dark fiber. Each data center is autonomous, with its own SQL 2017 cluster and AGs for each of the databases (Broker: Monitoring, Logging, Site) plus PVS. WEM lives in one of the DCs, but all agents point to an HA pair of WEM infrastructure servers defined behind an internal LB VIP on the internal NetScalers. This was done to keep all agents from both DCs in one database, and I would prefer to keep this design. That said, I would like to have two WEM infrastructure servers in each DC: DC-A points to a WEM HA VIP in DC-A, N+1 infrastructure servers sit in DC-B, and the LB VIP splits the traffic, ideally with DC-B preferring its own servers and using DC-A only if both local nodes are down.


WEM supports Availability Groups, but as of now there is one AG and one DB for both DC-A and DC-B. I'm exploring the Distributed Availability Group option and wanted to get feedback. I've seen conflicting information about removing the WEM DB from the SQL AG first when the version being upgraded from is 1909 or prior, but I've also read that this is not a requirement — or perhaps that is just how I interpreted what was written.


Without having to read the proposed steps below, I really have just two questions.


My initial question: with WEM 4.7 in DC-A on a SQL 2017 AG, is it required to remove that database from the AG prior to upgrading, or does 2006 support an inline upgrade from 4.7 in combination with a SQL 2017 AG? I would rather not make assumptions, and perhaps someone has already done this or a similar upgrade from 4.7 to a higher version with a SQL AG. If I can upgrade 4.7 inline to 2006, then I can keep the existing AGs; both clusters are already set up and split between DCs. If removing the DB from the AG is required, has anyone experienced issues beyond the typical loss of configuration information on the infrastructure server and making sure to have a good SQL full backup? What about WEM's encryption of the DB? I've read a few articles regarding DB encryption, and those steps seemed extensive enough that I'd like to avoid them.
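If the database does have to come out of the AG for the upgrade, the SQL side of it is short. A minimal T-SQL sketch, with the AG name [AG_DC1], the WEM database name [CitrixWEM], and the backup path as placeholders for whatever the environment actually uses:

```sql
-- Run on the current primary replica. All names below are hypothetical.

-- Take a full backup first so the pre-upgrade state can be restored.
BACKUP DATABASE [CitrixWEM]
    TO DISK = N'\\backupshare\CitrixWEM_pre2006.bak'
    WITH COPY_ONLY, COMPRESSION, CHECKSUM;

-- Remove the WEM database from the Availability Group. The database stays
-- online on the primary; the secondary copies drop into RESTORING state.
ALTER AVAILABILITY GROUP [AG_DC1] REMOVE DATABASE [CitrixWEM];

-- ...run the WEM 2006 database upgrade against the primary here...

-- Re-add the upgraded database afterwards. This assumes the AG replicas use
-- SEEDING_MODE = AUTOMATIC; with manual seeding, restore a fresh backup WITH
-- NORECOVERY on each secondary before this step.
ALTER AVAILABILITY GROUP [AG_DC1] ADD DATABASE [CitrixWEM];
```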






If I can't do the inline upgrade with the existing AG, I'm assuming the DB must first be removed from the AG, and possibly this password changed. Thus far I've not found a discussion where this was resolved; there were suggestions but no actual resolution.


Reference: https://discussions.citrix.com/topic/399205-wem-and-alwayson-group

I need to retain the existing data and this is 4.7 which was implemented years prior to my arrival. 

No documentation and no information as to what the password might be and Citrix doesn't support changing the password.

WEM Infrastructure Service Config - Uses the Availability Group Listener Database Server

Advanced Settings: Enable Windows account impersonation (Unchecked) - all these are blank actually (see image)


Question: Should I set the vuemUser SQL user account password here?  Before the upgrade?

If the inline option works against the AG as-is, or works after removing the DB from the AG, then there is no issue.


#2. Next, I'm curious whether anyone has implemented the Distributed Availability Group option.


Per the MS documentation, this is where you have two separate SQL clusters, each running AlwaysOn. If I stick with the one AG for WEM, I would have two WEM infrastructure servers in DC-A and two in DC-B, with the existing AG in DC-A, so all four infrastructure servers point to the one SQL AG/cluster in DC-A. In theory, the WEM infrastructure servers in DC-A and DC-B would instead point to the Distributed AG, which combines the two AGs from both DCs. Agents will point to the LB VIP on NetScaler as usual. Based on my reading so far, there is still only one primary AG in the distributed setup, so one of the AGs (replicas) in the Distributed AG is read-only. The question is: can DC-B point to the DC-B infrastructure servers, with those using the DC-B read-only AG? Or will pointing to the Distributed AG always hit the global primary (which can be the DC-A AG or the DC-B AG; a manual failover is still required to switch the global primary)?


I've just started reading up on the distributed option, and it is my understanding that you have one Availability Group in DC-A and another Availability Group in DC-B. Instead of pointing to the AG listener, you would have WEM point to the Distributed AG "name". To create a distributed availability group, you must create two availability groups, each with its own listener, and then combine these availability groups into a distributed availability group. FYI, this is not a primary-and-DR scenario; these are two separate AlwaysOn clusters. Per my understanding, you have to explicitly point the WEM infrastructure servers to the instance names of the readable secondary replicas, and you only have one read-write copy of the database. DC-2 would now point to the DC-2-AG-WEM listener and DC-1 to the DC-1-AG listener.
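One way to sanity-check which side currently holds the read-write copy is to query the AlwaysOn DMVs from any participating replica. A sketch, assuming the distributed AG is already in place (in a distributed AG, the "replicas" the catalog reports are the two underlying AG names, not server names):

```sql
-- List the distributed AG and the role of each participating AG.
-- ROLE_DESC = PRIMARY marks the side holding the read-write (global primary) copy.
SELECT ag.name              AS dist_ag_name,
       ar.replica_server_name,   -- for a distributed AG this is the AG name
       rs.role_desc
FROM sys.availability_groups ag
JOIN sys.availability_replicas ar
     ON ag.group_id = ar.group_id
JOIN sys.dm_hadr_availability_replica_states rs
     ON ar.replica_id = rs.replica_id
WHERE ag.is_distributed = 1;
```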


This is fine by me; just make sure to make any write changes in DC-1-AG-WEM, i.e., use the infrastructure servers in DC-1 to make changes to WEM.


In theory, this seems a better way to perform a manual failover when necessary, such as for maintenance or DR testing: simply switch DC-2-AG-WEM to global primary, for example.
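For reference, a planned manual failover of a distributed AG follows a documented three-step sequence. A sketch with the hypothetical name [DistAG_DC1_DC2]; verify against current Microsoft documentation before relying on it, since the WEM DB is unavailable between steps 1 and 3:

```sql
-- 1. On the current global primary (DC-1 side): demote it.
--    User connections to the database stop here until step 3 completes.
ALTER AVAILABILITY GROUP [DistAG_DC1_DC2] SET (ROLE = SECONDARY);

-- 2. Confirm both sides are fully synchronized before failing over:
--    last_hardened_lsn must match across the replicas for the WEM database.
SELECT ag.name, drs.database_id, drs.last_hardened_lsn
FROM sys.dm_hadr_database_replica_states drs
JOIN sys.availability_groups ag ON drs.group_id = ag.group_id
WHERE ag.is_distributed = 1;

-- 3. On the primary replica of the DC-2 AG: complete the failover.
--    Despite the keyword, no data is lost if step 2 showed matching LSNs.
ALTER AVAILABILITY GROUP [DistAG_DC1_DC2] FORCE_FAILOVER_ALLOW_DATA_LOSS;
```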


Has anyone attempted a Distributed AG with the WEM DB specifically? And if all the DBs are configured with AlwaysOn, does this add any additional complexity to the other DBs? In other words, Site, Monitoring, PVS, and Logging must still remain separate and autonomous. This proposal is specific to the WEM DB, but the WEM DB does reside on the dedicated clusters in DC-1 and DC-2.


To me this seems a logical way to use both of my SQL AlwaysOn clusters, given that I have only one WEM DB on a single AG cluster in DC-1: add a distributed replica of the WEM DB to the other SQL AG in DC-2, allowing a manual failover to make DC-2 the global primary for the WEM DB. This is versus, say, creating two separate databases for WEM. I like having all the agents in a single console, but the other DBs for Sites 1 and 2 must remain autonomous and identical.


* Note: these listeners already exist on both clusters, so I would use the existing listeners, with the caveat that the WEM DB must first be removed from the AG and, I assume, added back.


However, it would seem that to do this in sequence you must remove the WEM DB from the AG regardless, if the first AG created becomes the primary. Or it might be as simple as using the existing AG, creating the other one in DC-B, and then creating the distributed AG on top.




High level - assumes removing the WEM DB from the existing 4.7 AG:

  •  Create the primary Availability Group (AG_DC1) with a corresponding listener name (AG_DC1_LISTENER) - Exists already
  •  Create the Availability Group endpoint on all the replicas in the secondary Availability Group (Current config - 2 servers in each cluster)
  •  Create login and grant the SQL Server service account CONNECT permissions to the endpoint
  •  Create the secondary Availability Group (AG_DC2) with a corresponding listener name (AG_DC2_LISTENER) - Exists already
  •  Join the secondary replicas to the secondary Availability Group (Current config - 2 servers in each cluster)
  •  Create Distributed Availability Group (DistAG_DC1_DC2) on the primary Availability Group (AG_DC1)
  •  Join the secondary Availability Group (AG_DC2) to the Distributed Availability Group
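The last two steps above can be sketched in T-SQL. All names (distributed AG, listeners, domain, endpoint port) are placeholders drawn from the step list, and since both underlying AGs and listeners already exist here, only these two statements would actually be new. Note the LISTENER_URL port is the database mirroring endpoint port (typically 5022), not the listener's SQL port:

```sql
-- On the primary replica of AG_DC1: create the distributed AG.
CREATE AVAILABILITY GROUP [DistAG_DC1_DC2]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AG_DC1' WITH (
         LISTENER_URL = 'tcp://AG_DC1_LISTENER.domain.local:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      ),
      'AG_DC2' WITH (
         LISTENER_URL = 'tcp://AG_DC2_LISTENER.domain.local:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      );

-- On the primary replica of AG_DC2: join it to the distributed AG,
-- repeating the same per-AG options.
ALTER AVAILABILITY GROUP [DistAG_DC1_DC2] JOIN
   AVAILABILITY GROUP ON
      'AG_DC1' WITH (
         LISTENER_URL = 'tcp://AG_DC1_LISTENER.domain.local:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      ),
      'AG_DC2' WITH (
         LISTENER_URL = 'tcp://AG_DC2_LISTENER.domain.local:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      );
```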


Couple of notes:

  1. Only manual failover is supported at this time. 
  2. Only the underlying Availability Groups require a listener name; the Distributed Availability Group does not
  3. You have to explicitly point the client applications to the instance names of the readable secondary replicas
  4. Unlike traditional Availability Groups, where you can afford to not have a listener name and just use instance names for client application connectivity, Distributed Availability Groups require a listener name for each of the underlying Availability Groups. The listener names are used as endpoints for the synchronization between the Availability Groups
  5. The primary replica of the secondary Availability Group functions similar to a distributor in a replication topology – it only receives transaction log records from the primary replica of the primary Availability Group and sends them to the other secondary replicas of the secondary Availability Group.




