
Replacing 2 Dell R715s (identical) with 2 R740s (identical)


Travis Rabe

Question

Hello All,

 

Here is my situation.  I have two shared storage devices (let's call them SS1 and SS2) and two XenServers (let's call them Xen1 and Xen2).  Xen1 is my pool master.  I'd like to replace Xen2 with NewXen2 and then Xen1 with NewXen1.  Each new and old server has two sockets, for a total of four socket licenses.

 

I was planning on the following:

 

1.  Power off all VMs on Xen2.

2.  Destroy Xen2 in XenCenter. (Hopefully by doing this I release the 2-socket licenses back to the pool.)

3.  Power Off Xen2.

4.  Disconnect the FC cables from the 2 MD3xxx devices from Xen2.

5.  Connect the FC cables from the 2 MD3XXX devices to NewXen2.

6.  Power on NewXen2.

7.  Add NewXen2 to the server pool.

8.  Put NewXen2 into Maintenance Mode.

9.  Turn on multipathing in XenCenter for NewXen2.

10.  Apply the only patch for 7.1 CU2.

11.  Take NewXen2 out of maintenance mode.

12.  Confirm the proper entries are in NewXen2's multipath.conf file for both MD3xxx storage arrays.

13.  Confirm there are no multipathing errors on the two MD3xxx devices.

14.  Start powering up VMs on NewXen2.

 

 

Is there anything I am forgetting?  Any gotchas?  I assume if NewXen2 has issues, I can just re-add Xen2 back again.
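One gotcha worth knowing for step 2: the supported way to remove a live pool member is an eject rather than a destroy, and (worth verifying for your version) `pool-eject` wipes the ejected host's local state, so re-adding Xen2 later means joining it as a fresh host. A minimal CLI sketch of steps 1-3, using the thread's example host name "Xen2":

```shell
#!/bin/sh
# Sketch of steps 1-3: disable, evacuate, then eject Xen2 from the pool.
# Ejecting should release its socket licenses back to the pool.
if command -v xe >/dev/null 2>&1; then
  host_uuid=$(xe host-list name-label=Xen2 --minimal)
  xe host-disable uuid="$host_uuid"     # stop new VMs from starting here
  xe host-evacuate uuid="$host_uuid"    # or power the VMs off first, as planned
  xe pool-eject host-uuid="$host_uuid"  # removes the host from the pool
fi
```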

 

 

 

 

 


Recommended Posts

Just use "xsconsole".   You can specify the target SR, as well as whether you want just that SR's VM metadata or all the SRs' VM metadata stored onto it.

I'd run this multiple times and, to be safe, create a copy on each SR. Make sure you check that it really wrote the backup successfully afterwards, as there's a bug in all XS 7.X versions where, if there's no space left, it will happily exit without reporting an error! You can test with a "dry run" that just goes through the motions but makes no actual changes.  All this is incorporated within the xsconsole utility (which you can run from the server's console or launch separately from the CLI).
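Given that the failure mode above is silent when an SR runs out of space, a quick pre-flight free-space report is cheap insurance. This is only a sketch; it uses the standard `xe` SR parameters `physical-size` and `physical-utilisation`:

```shell
#!/bin/sh
# Report free space per SR before running the xsconsole metadata backup.
free_mib() {              # free space in MiB, given size and used in bytes
  echo $(( ($1 - $2) / 1024 / 1024 ))
}
if command -v xe >/dev/null 2>&1; then
  for uuid in $(xe sr-list content-type=user --minimal | tr ',' ' '); do
    size=$(xe sr-param-get uuid="$uuid" param-name=physical-size)
    used=$(xe sr-param-get uuid="$uuid" param-name=physical-utilisation)
    echo "SR $uuid: $(free_mib "$size" "$used") MiB free"
  done
fi
```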

 

-=Tobias


All VMs exported as XVA files already contain each VM's metadata internally. As it is, each metadata backup area created by xsconsole is actually just a VDI, so as one option you can look at the SR containing the (currently 250 MB sized) backup and export it as a VDI using "xe vdi-export uuid=...". The VDI will be labeled "Pool Metadata Backup".
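That export option could look like the following sketch; the output filename is a placeholder, and the name-label is the one quoted above:

```shell
#!/bin/sh
# Export the pool metadata backup VDI created by xsconsole to a file.
if command -v xe >/dev/null 2>&1; then
  vdi=$(xe vdi-list name-label="Pool Metadata Backup" --minimal | cut -d, -f1)
  [ -n "$vdi" ] && \
    xe vdi-export uuid="$vdi" filename=/root/pool-metadata-backup.img
fi
```

The `cut -d, -f1` just takes the first VDI if more than one backup copy exists.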

 

-=Tobias


Thank you both a ton.

 

Next weekend my scheduled process will be as follows:

 

1. In my OLD_POOL, I will power off all VMs. 

2. From XSCONSOLE on the Pool Master, perform the Metadata Backups to all SRs.

3.  Detach all SRs.

3.  Poweroff all XenServers in the OLD_POOL.

4.  Physically disconnect the MD3xxx storage arrays from the old xen servers.

5.  Physically connect the new XenServer to the MD3xxxx storage arrays.

6.  Power On the new XenServer.

7.  Check the Storage Arrays for any multipathing issues.

8.  Run multipath -l on the new XenServer to make sure both MD3xxxx arrays are visible and functioning as expected.

9.  On NEWPOOL, choose "New SR", then choose Hardware HBA.

10.  When the process detects the various LUNs, pick one.

11.  When it identifies that the SR has previous data and asks whether to use it, say YES and attach the SR.

12.  Repeat steps 9-11 for all SRs.

13.  Power up a VM.

 

Sound solid?

 

My only unknowns, to me at least, are (A) Step 11 - I've never seen this UI before, so I'm not completely certain what it will look like, and (B) whether something could go awry during the attach process.
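If the GUI attach in steps 9-12 does go awry, there is a CLI fallback: reintroduce the existing SR by its old UUID and SCSI ID (both of which were recorded beforehand). This is a sketch under the assumption that the SRs are lvmohba; the UUID and SCSI ID values below are obvious placeholders:

```shell
#!/bin/sh
# CLI fallback for steps 9-12: reintroduce an existing SR on the new host.
# SR_UUID and SCSI_ID are placeholders for the values recorded beforehand.
SR_UUID="00000000-0000-0000-0000-000000000000"
SCSI_ID="REPLACE-WITH-RECORDED-SCSI-ID"
if command -v xe >/dev/null 2>&1; then
  xe sr-introduce uuid="$SR_UUID" type=lvmohba shared=true \
      name-label="SS1" content-type=user
  host=$(xe host-list --minimal)
  pbd=$(xe pbd-create sr-uuid="$SR_UUID" host-uuid="$host" \
      device-config:SCSIid="$SCSI_ID")
  xe pbd-plug uuid="$pbd"
fi
```

The `sr-introduce`/`pbd-create`/`pbd-plug` sequence does by hand what the "reattach existing SR" dialog does in XenCenter.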

 


Thanks Tobias,

 

Understood - I had the detach step in there, but misnumbered it (two number 3s).  I've added in steps 14 and 15.

 

1. In my OLD_POOL, I will power off all VMs. 

2. From XSCONSOLE on the Pool Master, perform the Metadata Backups to all SRs.

3.  Detach all SRs.

4.  Poweroff all XenServers in the OLD_POOL.

5.  Physically disconnect the MD3xxx storage arrays from the old xen servers.

6.  Physically connect the new XenServer to the MD3xxxx storage arrays.

7.  Power On the new XenServer.

8.  Check the Storage Arrays for any multipathing issues.

9.  Run multipath -l on the new XenServer to make sure both MD3xxxx arrays are visible and functioning as expected.

10.  On NEWPOOL, choose "New SR", then choose Hardware HBA.

11.  When the process detects the various LUNs, pick one.

12.  When it identifies that the SR has previous data and asks whether to use it, say YES and attach the SR.

13.  Repeat steps 10-12 for all SRs.

14.  In XSCONSOLE, select "Restore Metadata", select an SR, and complete the restore.

15.  Complete step 14 for all SRs.

16. Power Up.

 

Better?
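The multipath check in step 9 can be made a little more concrete. A sketch of what to look for, assuming 4 paths per LUN (2 HBA ports x 2 array controllers - adjust for your actual cabling):

```shell
#!/bin/sh
# Step 9 sanity check: count active paths reported by device-mapper multipath.
EXPECTED_PER_LUN=4   # assumption: 2 HBA ports x 2 controllers
if command -v multipath >/dev/null 2>&1; then
  multipath -ll > /tmp/mp.txt
  active=$(grep -c 'active ready' /tmp/mp.txt)
  echo "active paths: $active (expect a multiple of $EXPECTED_PER_LUN)"
fi
```

Every path line in `multipath -ll` output should read "active ready"; anything showing "failed" or "faulty" needs investigating before attaching SRs.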

 


Correct on the storage. If the storage is online with no VDIs in use, you will see Detach.

If the storage is offline, you will see Forget and Destroy.

 

And yes, all five will be done individually: you detach on this side, and on the new XenServers the way you attach is to go through the motions of creating a new SR. You will get a message near the end stating an SR containing data has been found on that LUN, asking if you want to reattach. Be sure to say yes!

 

Backups are very important; when working with storage, you never know when something unexpected will happen.
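For reference, my understanding (an assumption on my part, not something stated in this thread) of how those XenCenter buttons map onto the xe CLI:

```shell
#!/bin/sh
# Rough xe equivalents of the Detach / Forget / Destroy buttons.
# SR_UUID is a placeholder.
SR_UUID="00000000-0000-0000-0000-000000000000"
if command -v xe >/dev/null 2>&1; then
  # "Detach": unplug every PBD connecting hosts to the SR.
  for pbd in $(xe pbd-list sr-uuid="$SR_UUID" --minimal | tr ',' ' '); do
    xe pbd-unplug uuid="$pbd"
  done
  # "Forget": drop the SR record but leave the data on disk.
  xe sr-forget uuid="$SR_UUID"
  # "Destroy" (NOT wanted here) would wipe the data: xe sr-destroy uuid=...
fi
```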

 

--Alan--

 


When you go from the old servers to the new servers with Hardware HBA and the MD3000s, are there any security settings regarding IQNs? Your new servers will have new IQNs, different from your old ones. I didn't know if that's an issue or not; it is for me on a SAN.

 

--Alan--

 


Hmmmmmm

 

This I am completely unfamiliar with.  Right now in XenCenter, each XenServer appears to have a bogus iSCSI IQN (something example.com), and each SR has a SCSI ID.

 

OLD_XEN_SERVER1 = iqn.2018-07.com.example:bcf91acd.

 

This has me freaked out a little again now.  I don't mind recording SCSI IDs and UUIDs (I had actually already done that) and whatnot, but at what point in the process do I need to re-enter the odd-looking IQNs, or the SCSI IDs?  I would have thought that since the SRs aren't changing, these would in no way be impacted.

 

 


Please forgive my ignorance (I know that may be asking a lot), but with my HBAs, am I using iSCSI?  If so, I guess I've just never known.  When I've attached storage in the past, I've never configured anything with IQNs or iSCSI.

 

So, based on Alan's input, should I just match my old IQNs and use this article to do so?  When do I perform this step - after attaching the SR and before restoring the metadata?
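For what it's worth, as far as I understand it, with pure Fibre Channel HBAs the MD3xxx identifies hosts by WWPN, not IQN, so the IQN shown in XenCenter is likely cosmetic here; it matters for iSCSI SRs. If matching the old IQN were ever needed, it lives in the host's `other-config:iscsi_iqn` key (the IQN value below is the example one from earlier in the thread):

```shell
#!/bin/sh
# View, and optionally override, a host's iSCSI IQN via xe.
if command -v xe >/dev/null 2>&1; then
  host=$(xe host-list --minimal | cut -d, -f1)
  xe host-param-get uuid="$host" param-name=other-config param-key=iscsi_iqn
  # To match an old host's IQN (example value from this thread):
  # xe host-param-set uuid="$host" \
  #     other-config:iscsi_iqn=iqn.2018-07.com.example:bcf91acd
fi
```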


Your new pool will be a totally separate entry in XenCenter, so no worries!  Creating a new pool with the new R740 servers will not affect the old pool in any way.  However, before you do that, make sure you are running the latest version of XenCenter that comes with whichever version of XenServer you are using or plan to install. Which version are you on now?

 

Relax, we can help you through all this! It's not as drastic an issue as you think; the procedure just has to be changed around a bit.

 

-=Tobias


You should create a VM metadata backup onto the storage you plan to migrate over, or you'll lose the VM-specific settings. You should then be able to detach and re-attach the storage successfully and restore the VM metadata; this can all be done using xsconsole. I'd probably also do all the patching on your new servers, as well as turning on multipathing, before attaching the storage, so the hosts have matching hotfix levels and multipathing, since this is a pooled SR.
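Turning on multipathing can also be done from the CLI before the storage is attached. This sketch uses the `other-config` keys documented for XenServer 7.x-era hosts, and assumes the host has no running VMs (it should be freshly installed at this point):

```shell
#!/bin/sh
# Enable multipathing on a host from the CLI before attaching shared SRs
# (GUI equivalent: the Multipathing tab in the host's properties).
if command -v xe >/dev/null 2>&1; then
  host=$(xe host-list --minimal | cut -d, -f1)
  xe host-disable uuid="$host"   # equivalent of entering maintenance mode
  xe host-param-set uuid="$host" other-config:multipathing=true
  xe host-param-set uuid="$host" other-config:multipathhandle=dmp
  xe host-enable uuid="$host"
fi
```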

 

-=Tobias


Hi Tobias,

 

The storage is staying the same before and after; only the XenServers are changing.   I have 2 MD3XXXs and 2 Dell R715s.  I am replacing each Dell R715 with a Dell R740 (one at a time).

 

I have added the backup metadata step, but I am unsure about removing and reattaching storage, as I don't recall having had to do that in the past.  Am I way off here?

 


 

 

 


Archived

This topic is now archived and is closed to further replies.
