
Xen won't reattach storage


Rede IFI

Question

Hello All, how are you?

 

First of all, sorry for my poor English!

 

At my job we have a Xen pool with a Dell storage array attached. Since Monday something has happened and we can't reattach this storage to the pool. We tried to reconnect the storage via the CLI.

 

We tried:

 

# iscsiadm -m discovery -t st -p 172.16.1.9
172.16.1.9:3260,258 iqn.2005-11.org.freenas.ctl:lantarget

 

After a lot of studying and researching on this forum, we tried pvscan. This command returns a list of VG_XenStorage volume groups, but the one with the UUID of this storage doesn't appear. Looking in /etc/lvm/backup we found the VG_XenStorage file with the UUID of the storage. How can I restore this backup? Is it possible? And if it is, after we do the restore, will Xen see the storage again?
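For reference, this is the general form we are thinking of, based on the LVM documentation; the backup file name is a placeholder for the one we found under /etc/lvm/backup, and we have not run anything yet:

# list the metadata backups known for that volume group
vgcfgrestore --list VG_XenStorage-<SR-UUID>

# restore the VG metadata from the backup file
vgcfgrestore -f /etc/lvm/backup/VG_XenStorage-<SR-UUID> VG_XenStorage-<SR-UUID>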

 

Thanks for the help!

 

 

1 minute ago, Alan Lantz said:

Instead of command line what do you see when connecting to your iscsi storage via XenCenter ? 

Also, what are the errors in /var/log/SMlog when you try ?

 

--Alan--

 

Via XenCenter, when we try to repair it, the message is: "The request is missing the target parameter"

 

From /var/log/SMlog:

 

Jul 31 17:11:05 hunt SM: [28322] Setting LVM_DEVICE to /dev/disk/by-scsid/36589cfc000000dde94d1ed9d827ad228
Jul 31 17:11:05 hunt SM: [28322] Setting LVM_DEVICE to /dev/disk/by-scsid/36589cfc000000dde94d1ed9d827ad228
Jul 31 17:11:05 hunt SM: [28322] Raising exception [95, The request is missing the target parameter]
Jul 31 17:11:05 hunt SM: [28322] ***** LVHD over iSCSI: EXCEPTION SR.SROSError, The request is missing the target parameter
Jul 31 17:11:05 hunt SM: [28322]   File "/opt/xensource/sm/SRCommand.py", line 343, in run
Jul 31 17:11:05 hunt SM: [28322]     sr = driver(cmd, cmd.sr_uuid)
Jul 31 17:11:05 hunt SM: [28322]   File "/opt/xensource/sm/SR.py", line 142, in __init__
Jul 31 17:11:05 hunt SM: [28322]     self.load(sr_uuid)
Jul 31 17:11:05 hunt SM: [28322]   File "/opt/xensource/sm/LVMoISCSISR", line 83, in load
Jul 31 17:11:05 hunt SM: [28322]     iscsi = driver(self.original_srcmd, sr_uuid)

Jul 31 17:11:05 hunt SM: [28322]   File "/opt/xensource/sm/SR.py", line 142, in __init__
Jul 31 17:11:05 hunt SM: [28322]     self.load(sr_uuid)
Jul 31 17:11:05 hunt SM: [28322]   File "/opt/xensource/sm/ISCSISR.py", line 116, in load
Jul 31 17:11:05 hunt SM: [28322]     raise xs_errors.XenError('ConfigTargetMissing')
Jul 31 17:11:05 hunt SM: [28322]   File "/opt/xensource/sm/xs_errors.py", line 52, in __init__
Jul 31 17:11:05 hunt SM: [28322]     raise SR.SROSError(errorcode, errormessage)
 


I'm not sure, I don't remember ever having a 95. What I would do is forget the storage in XenServer and go through the discovery process again. If it finds storage you should get an option to reattach. If not then you have some serious underlying issues.
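In CLI terms that would be roughly the following; the SR UUID is a placeholder, and the target details are whatever your discovery returns:

# forgetting only removes the SR record from the pool; the data stays on the LUN
xe sr-forget uuid=<SR-UUID>

# probe the target again (the New SR wizard in XenCenter does the same and should offer to reattach)
xe sr-probe type=lvmoiscsi device-config:target=172.16.1.9 device-config:targetIQN=iqn.2005-11.org.freenas.ctl:lantarget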

 

--Alan--

 

16 hours ago, Tobias Kreidl said:

Did you do the initial iSCSI discovery via XenCenter using the wildcard parameter if you have multipathing?

You might also need additional iscsiadm commands from the CLI: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/online_storage_reconfiguration_guide/scanningnewdevs-iscsi

 

-=Tobias

 

Hi Tobias. Our storage is attached to 2 Xen pools, with 2 servers in each pool. The storage has 2 LUNs, one for the LAN (which we can't access) and another for the DMZ, which is fine!

 

We ran the commands shown in the link:

 

# iscsiadm -m discovery -t iqn.2005-11.org.freenas.ctl:lantarget -p 172.16.1.9:3260
# BEGIN RECORD 6.2.0.873-21
discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = 172.16.1.9
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.auth.username = <empty>
discovery.sendtargets.auth.password = <empty>
discovery.sendtargets.auth.username_in = <empty>
discovery.sendtargets.auth.password_in = <empty>
discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.use_discoveryd = No
discovery.sendtargets.discoveryd_poll_inval = 30
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
# END RECORD

 

# iscsiadm -m discovery -t sendtargets -p 172.16.1.9:3260
172.16.1.9:3260,258 iqn.2005-11.org.freenas.ctl:lantarget

 

# iscsiadm -m discovery -t sendtargets -p 172.16.1.9:3260 -P 1
Target: iqn.2005-11.org.freenas.ctl:lantarget
Portal: 172.16.1.9:3260,258
Iface Name: default


# iscsiadm -m session --rescan
Rescanning session [sid: 2, target: iqn.1986-03.com.ibm:sn.142263620, portal: 172.16.1.36,3260]
Rescanning session [sid: 49, target: iqn.2005-10.org.freenas.ctl:targetlan, portal: 172.16.1.40,3260]
Rescanning session [sid: 74, target: iqn.2005-11.org.freenas.ctl:lantarget, portal: 172.16.1.9,3260]

 

# iscsiadm --mode node --targetname iqn.2005-11.org.freenas.ctl:lantarget --portal 172.16.1.9:3260,258
# BEGIN RECORD 6.2.0.873-21
node.name = iqn.2005-11.org.freenas.ctl:lantarget
node.tpgt = 258
node.startup = automatic
node.leading_login = No
iface.hwaddress = <empty>
iface.ipaddress = <empty>
iface.iscsi_ifacename = default
iface.net_ifacename = <empty>
iface.transport_name = tcp
iface.initiatorname = <empty>
iface.state = <empty>
iface.vlan_id = 0
iface.vlan_priority = 0
iface.vlan_state = <empty>
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
iface.bootproto = <empty>
iface.subnet_mask = <empty>
iface.gateway = <empty>
iface.dhcp_alt_client_id_state = <empty>
iface.dhcp_alt_client_id = <empty>
iface.dhcp_dns = <empty>
iface.dhcp_learn_iqn = <empty>
iface.dhcp_req_vendor_id_state = <empty>
iface.dhcp_vendor_id_state = <empty>
iface.dhcp_vendor_id = <empty>
iface.dhcp_slp_da = <empty>
iface.fragmentation = <empty>
iface.gratuitous_arp = <empty>
iface.incoming_forwarding = <empty>
iface.tos_state = <empty>
iface.tos = 0
iface.ttl = 0
iface.delayed_ack = <empty>
iface.tcp_nagle = <empty>
iface.tcp_wsf_state = <empty>
iface.tcp_wsf = 0
iface.tcp_timer_scale = 0
iface.tcp_timestamp = <empty>
iface.redirect = <empty>
iface.def_task_mgmt_timeout = 0
iface.header_digest = <empty>
iface.data_digest = <empty>
iface.immediate_data = <empty>
iface.initial_r2t = <empty>
iface.data_seq_inorder = <empty>
iface.data_pdu_inorder = <empty>
iface.erl = 0
iface.max_receive_data_len = 0
iface.first_burst_len = 0
iface.max_outstanding_r2t = 0
iface.max_burst_len = 0
iface.chap_auth = <empty>
iface.bidi_chap = <empty>
iface.strict_login_compliance = <empty>
iface.discovery_auth = <empty>
iface.discovery_logout = <empty>
node.discovery_address = 172.16.1.9
node.discovery_port = 3260
node.discovery_type = send_targets
node.session.initial_cmdsn = 0
node.session.initial_login_retry_max = 8
node.session.xmit_thread_priority = -20
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.nr_sessions = 1
node.session.auth.authmethod = None
node.session.auth.username = <empty>
node.session.auth.password = <empty>
node.session.auth.username_in = <empty>
node.session.auth.password_in = <empty>
node.session.timeo.replacement_timeout = 120
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.err_timeo.host_reset_timeout = 60
node.session.iscsi.FastAbort = Yes
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.session.iscsi.DefaultTime2Retain = 0
node.session.iscsi.DefaultTime2Wait = 2
node.session.iscsi.MaxConnections = 1
node.session.iscsi.MaxOutstandingR2T = 1
node.session.iscsi.ERL = 0
node.conn[0].address = 172.16.1.9
node.conn[0].port = 3260
node.conn[0].startup = manual
node.conn[0].tcp.window_size = 524288
node.conn[0].tcp.type_of_service = 0
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.auth_timeout = 45
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD

 

After this we tried to repair the storage with XenCenter, but the message is still: "The request is missing the target parameter".


We ran:

 

# vgcfgrestore -f /root/lvm-backup/VG_XenStorage-86714c80-4049-89d9-8ca7-a6c66a4a8dba -l
   
  File:/root/lvm-backup/VG_XenStorage-86714c80-4049-89d9-8ca7-a6c66a4a8dba
  Couldn't find device with uuid j8nKbA-l2Ba-rd3y-MFV9-ipAj-kqJ0-FV2cf9.
  VG name:    VG_XenStorage-86714c80-4049-89d9-8ca7-a6c66a4a8dba
  Description:Created *after* executing '/usr/sbin/lvcreate -n VHD-25695b0a-4a06-4e2c-9441-0f0e7ed9f34a -L 61568 VG_XenStorage-86714c80-4049-89d9-8ca7-a6c66a4a8dba'
  Backup Time:Tue Jul 23 14:26:28 2019

 

86714c80-4049-89d9-8ca7-a6c66a4a8dba = SR-UUID of storage

 

Is j8nKbA-l2Ba-rd3y-MFV9-ipAj-kqJ0-FV2cf9 the UUID of the PV? How can I link them together again?
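From what I have read, the standard LVM recovery for a missing PV UUID looks roughly like this; we have not run it, and /dev/sdX is only a placeholder for the iSCSI device:

# recreate the PV label with the old UUID, using the metadata backup as a reference
pvcreate --uuid "j8nKbA-l2Ba-rd3y-MFV9-ipAj-kqJ0-FV2cf9" --restorefile /etc/lvm/backup/VG_XenStorage-86714c80-4049-89d9-8ca7-a6c66a4a8dba /dev/sdX

# then restore the VG metadata on top of it
vgcfgrestore VG_XenStorage-86714c80-4049-89d9-8ca7-a6c66a4a8dba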

1 hour ago, Tobias Kreidl said:

Have you looked for a PBD and tried to just plug it in again?

I tried this:

 

#xe pbd-list sr-uuid=86714c80-4049-89d9-8ca7-a6c66a4a8dba

uuid ( RO)                  : b80a8a1d-aac6-5e2e-67a1-805f9c0965b1
             host-uuid ( RO): af91c691-c30f-4ce9-8f95-c0755e4e5bcb
               sr-uuid ( RO): 86714c80-4049-89d9-8ca7-a6c66a4a8dba
         device-config (MRO): 172.16.1.9: ; targetIQN: iqn.2005-11.org.freenas.ctl:lantarget; SCSIid: 36589cfc000000dde94d1ed9d827ad228
    currently-attached ( RO): false

 

Where:

86714c80-4049-89d9-8ca7-a6c66a4a8dba = SR-UUID of storage

 

I didn't find anything with UUID j8nKbA-l2Ba-rd3y-MFV9-ipAj-kqJ0-FV2cf9, which was shown in the VG_XenStorage backup.

 

How do I plug the PBD again?

4 minutes ago, Tobias Kreidl said:

xe pbd-plug uuid=(UUID-of-PBD)

 

If it's already associated with an SR, there's no PBD left unplugged and one associated with that storage doesn't exist.

 

Just run "xe pbd-list" and see if one shows up that is not associated with any SR.

 

-=Tobias

 

 xe pbd-plug uuid=b80a8a1d-aac6-5e2e-67a1-805f9c0965b1
Error code: SR_BACKEND_FAILURE_95
Error parameters: , The request is missing the target parameter,

 

 xe pbd-list

uuid ( RO)                  : f54fe3cb-9a63-63df-3051-dc23b5cc7804
             host-uuid ( RO): 8a3f0236-fbe7-4e44-8a29-de104a12511d
               sr-uuid ( RO): 538fde3d-f0f7-fbba-418e-5534b23edc10
         device-config (MRO): location: /dev/xapi/cd
    currently-attached ( RO): true

 

uuid ( RO)                  : b80a8a1d-aac6-5e2e-67a1-805f9c0965b1
             host-uuid ( RO): af91c691-c30f-4ce9-8f95-c0755e4e5bcb
               sr-uuid ( RO): 86714c80-4049-89d9-8ca7-a6c66a4a8dba
         device-config (MRO): 172.16.1.9: ; targetIQN: iqn.2005-11.org.freenas.ctl:lantarget; SCSIid: 36589cfc000000dde94d1ed9d827ad228
    currently-attached ( RO): false

 

uuid ( RO)                  : e4b88646-2c29-14ea-8280-e6da77d62d8e
             host-uuid ( RO): 8a3f0236-fbe7-4e44-8a29-de104a12511d
               sr-uuid ( RO): 52d098e3-c1c5-2ae3-2b15-16cf8b34d3db
         device-config (MRO): target: 172.16.1.36; port: 3260; targetIQN: iqn.1986-03.com.ibm:sn.142263620; SCSIid: 360a980003237663547244453524d3845
    currently-attached ( RO): true

 

uuid ( RO)                  : 81e4ddaa-1454-ba7c-fa4b-336260ca2dbc
             host-uuid ( RO): 8a3f0236-fbe7-4e44-8a29-de104a12511d
               sr-uuid ( RO): 3e25853f-2f20-ebe6-a63c-324dc7d763a9
         device-config (MRO): device: /dev/disk/by-id/scsi-36782bcb066a85600167a1e5f4065c6c5-part4
    currently-attached ( RO): true

 

uuid ( RO)                  : bde4472d-8242-7300-5622-104eaee5c6c4
             host-uuid ( RO): af91c691-c30f-4ce9-8f95-c0755e4e5bcb
               sr-uuid ( RO): 1348b15c-b1ad-f6d6-a826-39c4f891245c
         device-config (MRO): device: /dev/disk/by-id/scsi-36782bcb066ab6500167a00823e292dc3-part4
    currently-attached ( RO): true

 

uuid ( RO)                  : 5715d151-ea24-cced-dfc2-cef64decdbcb
             host-uuid ( RO): 8a3f0236-fbe7-4e44-8a29-de104a12511d
               sr-uuid ( RO): b9a084c1-c863-a2dd-477c-1b1f45f65e30
         device-config (MRO): location: /dev/xapi/block
    currently-attached ( RO): true

 

Error again... :-(

14 hours ago, Tobias Kreidl said:

The SR is apparently not reachable using the IP address you have assigned.  Look at this thread for a possible solution:

https://ixsystems.com/community/threads/iscsi-mpio-without-a-switch-30-links.27579/#post-540605

 

-=Tobias

 

 

Great Tobias!!!

 

Now my storage is connected to Xen, but in the Storage tab in XenCenter I don't see any disks... I believe I need to run the pvs and vgs commands, correct?


xe sr-list name-label="iSCSI Dell" params=all


uuid ( RO)                    : 9174ef62-fb93-2ed9-ed3d-7dd305710b8f
              name-label ( RW): iSCSI Dell
        name-description ( RW):
                    host ( RO): <shared>
      allowed-operations (SRO): VDI.create; VDI.snapshot; PBD.create; PBD.destroy; plug; update; VDI.destroy; scan; VDI.clone; VDI.resize; unplug
      current-operations (SRO):
                    VDIs (SRO):
                    PBDs (SRO): 65fc54cc-646d-88a6-1ded-fe5349b196c5; be5ec8d5-4883-29a9-f972-f50c53593b0b
      virtual-allocation ( RO): 0
    physical-utilisation ( RO): 4194304
           physical-size ( RO): 4198389252096
                    type ( RO): lvmoiscsi
            content-type ( RO):
                  shared ( RW): true
           introduced-by ( RO): <not in database>
            other-config (MRW):
 sm-config (MRO): allocation: thick; use_vhd: true; multipathable: true; devserial: scsi-36589cfc000000dde94d1ed9d827ad228
                   blobs ( RO):
     local-cache-enabled ( RO): false
                    tags (SRW):

 

I don't see any VDIs... I think the next step is to find a command that shows the VDIs to Xen, correct?
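Maybe that command is an SR rescan, something like this, and then check the Storage tab again:

xe sr-scan uuid=9174ef62-fb93-2ed9-ed3d-7dd305710b8f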

On 02/08/2019 at 11:20 AM, Tobias Kreidl said:

Good, sounds like progress. Try "vgchange -ay" to make sure the storage is activated from the LVM side.  Until it's active you won't see any VDIs associated with the SR.

 

-=Tobias

 

Hello!

 

This weekend we had to power everything off because we had a long blackout. When the power came back I did this:

 

 xe sr-list name-label="iSCSI Dell" params=all
uuid ( RO)                    : 9174ef62-fb93-2ed9-ed3d-7dd305710b8f
              name-label ( RW): iSCSI Dell
        name-description ( RW):
                    host ( RO): <shared>
      allowed-operations (SRO): VDI.create; VDI.snapshot; PBD.create; PBD.destroy; plug; update; VDI.destroy; scan; VDI.clone; VDI.resize; unplug
      current-operations (SRO):
                    VDIs (SRO):
                    PBDs (SRO): be5ec8d5-4883-29a9-f972-f50c53593b0b; 65fc54cc-646d-88a6-1ded-fe5349b196c5
      virtual-allocation ( RO): 0
    physical-utilisation ( RO): 4194304
           physical-size ( RO): 4198389252096
                    type ( RO): lvmoiscsi
            content-type ( RO):
                  shared ( RW): true
           introduced-by ( RO): <not in database>
            other-config (MRW): dirty:
               sm-config (MRO): allocation: thick; use_vhd: true; multipathable: true; devserial: scsi-36589cfc000000dde94d1ed9d827ad228
                   blobs ( RO):
     local-cache-enabled ( RO): false
                    tags (SRW):

 

I noticed that the value of other-config (MRW) changed. Before it was empty and now it shows "dirty".

 

I ran the vgchange -ay command:

 

 vgchange -ay
  37 logical volume(s) in volume group "VG_XenStorage-52d098e3-c1c5-2ae3-2b15-16cf8b34d3db" now active
  14 logical volume(s) in volume group "VG_XenStorage-3f49c5a8-7604-539e-a81c-21da761a0203" now active
  1 logical volume(s) in volume group "VG_XenStorage-1348b15c-b1ad-f6d6-a826-39c4f891245c" now active

 

 pvscan
  PV /dev/sdc    VG VG_XenStorage-52d098e3-c1c5-2ae3-2b15-16cf8b34d3db   lvm2 [2.99 TB / 1.67 TB free]
  PV /dev/sdb    VG VG_XenStorage-3f49c5a8-7604-539e-a81c-21da761a0203   lvm2 [5.09 TB / 3.66 TB free]
  PV /dev/sda4   VG VG_XenStorage-1348b15c-b1ad-f6d6-a826-39c4f891245c   lvm2 [270.82 GB / 270.82 GB free]
  Total: 3 [8.35 TB] / in use: 3 [8.35 TB] / in no VG: 0 [0   ]

 

Last week I saw, with pvscan, something like this:

 

pvscan
  PV /dev/sdd    VG VG_XenStorage-9174ef62-fb93-2ed9-ed3d-7dd305710b8f   lvm2 [3.82 TB / 3.82 TB free]
  PV /dev/sdc    VG VG_XenStorage-3f49c5a8-7604-539e-a81c-21da761a0203   lvm2 [5.09 TB / 3.66 TB free]
  PV /dev/sdb    VG VG_XenStorage-52d098e3-c1c5-2ae3-2b15-16cf8b34d3db   lvm2 [2.99 TB / 1.67 TB free]
  PV /dev/sda4   VG VG_XenStorage-1348b15c-b1ad-f6d6-a826-39c4f891245c   lvm2 [270.82 GB / 270.82 GB free]
  Total: 4 [12.17 TB] / in use: 4 [12.17 TB] / in no VG: 0 [0   ]

 

But today /dev/sdd does not show up!
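One thing we can try is logging into the target again, using the portal and IQN from the discovery earlier in this thread, and then checking whether the device comes back:

iscsiadm -m node -T iqn.2005-11.org.freenas.ctl:lantarget -p 172.16.1.9:3260 --login
ls -l /dev/disk/by-scsid/36589cfc000000dde94d1ed9d827ad228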


I tried to repair via XenCenter, and now I have my storage plugged into my pool, but when I run vgchange -ay the result is:

 

 vgchange -ay
  37 logical volume(s) in volume group "VG_XenStorage-52d098e3-c1c5-2ae3-2b15-16cf8b34d3db" now active
  14 logical volume(s) in volume group "VG_XenStorage-3f49c5a8-7604-539e-a81c-21da761a0203" now active
  1 logical volume(s) in volume group "VG_XenStorage-1348b15c-b1ad-f6d6-a826-39c4f891245c" now active

 

The Storage tab doesn't show any information; it is still empty...

 

 

On 02/08/2019 at 11:20 AM, Tobias Kreidl said:

Good, sounds like progress. Try "vgchange -ay" to make sure the storage is activated from the LVM side.  Until it's active you won't see any VDIs associated with the SR.

 

-=Tobias

 

 

After some tests, I ran vgchange -ay again:

 

  1 logical volume(s) in volume group "VG_XenStorage-9174ef62-fb93-2ed9-ed3d-7dd305710b8f" now active
  37 logical volume(s) in volume group "VG_XenStorage-52d098e3-c1c5-2ae3-2b15-16cf8b34d3db" now active
  14 logical volume(s) in volume group "VG_XenStorage-3f49c5a8-7604-539e-a81c-21da761a0203" now active
  1 logical volume(s) in volume group "VG_XenStorage-1348b15c-b1ad-f6d6-a826-39c4f891245c" now active
