
CentOS MCS password issue


Jeff Riechers

Question

So I am playing around with MCS on the 1808 release with CentOS 7.5 Linux.  My maintenance image joins the domain just fine, and I can log in with no problem.  I modify the mcs.conf file and run the deploymcs script, shut the machine down, provision the machines, and they boot up and register.  However, when trying to log in to a provisioned machine, I get an Invalid Logon window in the X session.

 

Trying to log in to the console directly, I get "Sorry, that didn't work.  Please try again."

 

Tailing /var/log/secure, I see these messages:

 

Sep  2 14:59:55 centos-ws-mcs1 gdm-password]: pam_unix(gdm-password:auth): authentication failure; logname= uid=0 euid=0 tty=/dev/tty1 ruser= rhost=  user=CV\administrator
Sep  2 14:59:55 centos-ws-mcs1 gdm-password]: pam_winbind(gdm-password:auth): getting password (0x00004190)
Sep  2 14:59:55 centos-ws-mcs1 gdm-password]: pam_winbind(gdm-password:auth): pam_get_item returned a password
Sep  2 14:59:55 centos-ws-mcs1 gdm-password]: pam_winbind(gdm-password:auth): request wbcLogonUser failed: WBC_ERR_AUTH_ERROR, PAM error: PAM_AUTH_ERR (7), NTSTATUS: NT_STATUS_LOGON_FAILURE, Error message was: The attempted logon is invalid. This is either due to a bad username or authentication information.
Sep  2 14:59:55 centos-ws-mcs1 gdm-password]: pam_winbind(gdm-password:auth): user 'CV\administrator' denied access (incorrect password or invalid membership)
 

Running LDAP tests, I can query AD and get groups, users, etc.  It looks like some permission is getting lost between boot-up and the MCS scripts running.
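For reference, these are the sorts of winbind checks that come back clean for me (a rough sketch; substitute your own domain and user):

wbinfo -t                  # verify the machine account trust secret against AD
wbinfo -u | head           # list a few domain users through winbind
wbinfo -g | head           # list a few domain groups
id 'CV\administrator'      # resolve a domain account through NSS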

 

Anyone have any ideas?


20 answers to this question


Some notes on this.

 

To pin the system to CentOS 7.4 only, I did the following steps.

I had to edit /etc/yum.repos.d/CentOS-Base.repo, comment out each mirrorlist line, uncomment the baseurl line, and change it to vault.centos.org/$releasever/os/$basearch/ (the middle path component differs per section: os, updates, extras, centosplus).

You will have to do that four times in the file, once for each section.
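If you would rather script that edit than do it by hand, something like this should be equivalent (a sketch; back up the file first, and note the sed pattern assumes the stock CentOS-Base.repo layout, which preserves each section's own path component):

cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
# Comment out every mirrorlist line and re-point the commented baseurl lines at the vault
sed -i -e 's/^mirrorlist=/#mirrorlist=/' \
       -e 's|^#baseurl=http://mirror.centos.org/centos/$releasever|baseurl=http://vault.centos.org/$releasever|' \
       /etc/yum.repos.d/CentOS-Base.repo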

 

Then at a command prompt run the following command.

echo '7.4.1708' > /etc/yum/vars/releasever

 

Then I ran yum update to get all the patches to the base 7.4 environment.

 

To ensure the proper Samba version, I ran the following:

 

yum install yum-plugin-versionlock
yum install samba-winbind-clients-4.6.2-12.el7_4.x86_64
yum versionlock samba*
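To confirm the lock took, yum-plugin-versionlock can list what is pinned:

yum versionlock list       # should show the samba* packages held at 4.6.2-12.el7_4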

 

Then I installed the 1808 Linux VDA, modified mcs.conf for my environment, and ran deploymcs.sh.

 

I provisioned the machines and could log in to them from the console with my domain creds, but they were not registering; the ctx services were not running or configured.

 

I booted the base image, ran ctxsetup.sh to define my environment there, re-ran deploymcs.sh, shut down, and redeployed.  However, the machines would never boot to the new version of the disk, so I ended up removing the machines and the catalog and creating new ones from the updated image.

 

However these machines still would not register.

 

It looks like mcs.conf is missing some settings for defining your Delivery Controllers for registration.  I have gone through the MCS web doc in detail, and there are steps that are either missing or only implied and need to be spelled out.  I am going to continue working with this.
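For what it's worth, the only place I found to feed the controllers in was the setup script itself; if I am reading the easy-install docs right, the CTX_XDL_* environment variables let you pre-answer its prompts, roughly like this (a sketch; the variable name is from the Citrix Linux VDA documentation and the FQDNs are placeholders):

# Pre-seed the Delivery Controller list so ctxsetup.sh doesn't prompt for it
export CTX_XDL_DDC_LIST='ddc1.example.com ddc2.example.com'   # placeholder FQDNs
/opt/Citrix/VDA/sbin/ctxsetup.sh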

 

I removed the Linux VDA from the image and did a fresh install.  Running ctxinstall.sh, I got an SELinux error, which I fixed with the following.

 

To use this version of the Linux VDA in RHEL 7.4 or CentOS 7.4, update the selinux-policy package to selinux-policy-3.13.1-192.el7_5 or later versions.
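Since yum is pinned to the 7.4 vault, that el7_5 build won't come from the configured repos; installing the rpms directly worked for me (a sketch; the exact vault path and point release may differ, so browse vault.centos.org if the URLs 404):

# selinux-policy-targeted requires a matching selinux-policy, so install both together
yum install \
  http://vault.centos.org/7.5.1804/updates/x86_64/Packages/selinux-policy-3.13.1-192.el7_5.3.noarch.rpm \
  http://vault.centos.org/7.5.1804/updates/x86_64/Packages/selinux-policy-targeted-3.13.1-192.el7_5.3.noarch.rpm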

 

Then I was able to run ctxinstall and deploymcs, and the machines registered.

 

So it definitely looks like the samba-winbind-clients package is the issue.

 

As a test, I then updated the 7.4 image to 7.5 with --skip-broken so everything updated but Samba stayed at its locked version.

 

This broke everything horribly, so it was back to a fresh 7.4 build, which works fine now.  Let me know if deploymcs.sh is going to get options for defining the Delivery Controllers directly.  It should also have a check in place for 7.4 not having the correct SELinux policies.

 

 

 

 


I did follow this document.  If I didn't run ctxsetup.sh, the system did not know which Delivery Controllers to register with.  The document outlines installing the VDA, but not once does it mention configuring the VDA.

 

I ran through this process many times, and this is what worked on CentOS 7.4 (a consolidated command sketch follows the list):

 

Install 7.4

Install XenTools

Disable IPv6 in grub

Disable selinux

Install selinux-policy-3.13.1-192.el7_5.3.noarch.rpm

set releasever to 7.4.1708

Edit /etc/yum.repos.d/CentOS-Base.repo to point to vault.centos.org since binaries have moved from the mirror sites

Install version lock

Install samba-winbind-clients-4.6.2-12.el7_4.x86_64

lock that samba version

install linux vda

yum update

run easy install script

modify mcs.conf

Run deploymcs.sh

shutdown

 

Now when I roll out this image, it works perfectly.  I ran both the easy install and the regular install; the easy install put in all the prerequisites and verified they were working correctly before I ran the deploy.  That is how I found the SELinux policy issue even with the service disabled: deploymcs didn't report it, but the easy install did.

 

I tried running the whole scenario without the Linux VDA config script, and the machines never registered because they didn't have the DDCs in the system.
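Condensed into one rough command sequence (a sketch of the order above, not a turnkey script; host-specific steps like XenTools, IPv6, and SELinux are omitted, and the VDA package name is release-specific and shown as a placeholder):

# Pin yum to the 7.4 vault (repo edit described in my earlier post)
echo '7.4.1708' > /etc/yum/vars/releasever

# Hold Samba at the known-good 7.4 build
yum -y install yum-plugin-versionlock
yum -y install samba-winbind-clients-4.6.2-12.el7_4.x86_64
yum versionlock 'samba*'

# Patch everything else within 7.4
yum -y update

# Install the Linux VDA rpm (placeholder name), then easy install + MCS prep
yum -y localinstall XenDesktopVDA-*.el7_4.x86_64.rpm
/opt/Citrix/VDA/sbin/ctxinstall.sh     # easy install: checks prereqs, sets DDCs
vi /var/xdl/mcs/mcs.conf               # adjust for your environment
/opt/Citrix/VDA/sbin/deploymcs.sh
shutdown -h now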


My experience mirrors jriechers'.  I had to run the Easy Install script before the MCS script or the VDA would not register.

 

At this point, I've gotten apps, published desktops, and VDI working in my test environment: Virtual Apps/Desktops 1808 with CentOS 7.4, MCS, and VMware/vSphere 6.5.

 

Here's my cookbook based on jriechers' posts.  I'm not a Linux admin, so please forgive any errors.

 

Created a 7.4 CentOS VM template in vSphere (7.5 doesn't work)

Selected an installation profile that included the GUI and development tools

Went with included open-vm-tools instead of installing VMware Tools

Specified centos74.subdomain.domain.edu as hostname during setup

Edited /etc/yum.repos.d/CentOS-Base.repo, commented out each mirrorlist, uncommented the baseurl, and changed it to http://vault.centos.org/$releasever/XXXXX/$basearch/

Did this four times, in each of the four sections of the file

Ran: yum install yum-plugin-versionlock

Fixed the hostname everywhere needed (see the verification sketch after this list):

Added "centos74.subdomain.domain.edu centos74" to hosts file per https://docs.citrix.com/en-us/linux-virtual-delivery-agent/current-release/installation-overview/redhat.html

Edited /etc/hostname file to have short name only, "centos74"

Ran "hostname centos74" and checked results of hostname and hostname -f

Rebooted to make sure the hostname commands' output didn't change

Ran: yum install selinux-policy after seeing SELinux errors; did this before locking everything down with versionlock below

Ran: echo '7.4.1708' > /etc/yum/vars/releasever

Did a yum check-update to verify that I would get 4.6.2-12.el7_4 from the changed download location

Ran: yum update samba* (did not do specific "yum install samba-winbind-clients-4.6.2-12.el7_4.x86_64" first time, whether or not that makes a difference)

Ran: yum versionlock samba*

Searched for, then installed, the EPEL repository configuration (?) from the software utility in the UI, as I didn't know the full name

Ran: yum update

Shut down VM and converted to template

Created VM from this template and switched to it to keep template clean in case I had to fall back to it

Checked host name(s) - seems to have kept the right format

Did not join this master/parent to domain as that will be done for me as part of Easy Install

Installed VDA per Easy Install (Yum proxy configured and working; VM doesn't have Internet access)

Ran: yum update

Did Easy Install at this point, following advice from link above

Ran: In another terminal window, touch /var/log/ctxinstall.log

Ran: tail -f /var/log/ctxinstall.log

Ran: back in first window, /opt/Citrix/VDA/sbin/ctxinstall.sh

Updated /var/xdl/mcs/mcs.conf as needed (e.g. DNS servers; for VDI, update VDI_MODE entry if that even matters)

Ran: in another terminal window, touch /var/log/deploymcs.log

Ran: tail -f /var/log/deploymcs.log

Ran: back in first window, /opt/Citrix/VDA/sbin/deploymcs.sh

Rebooted just in case any of the above updates needed to be applied (like with Windows)

Shut back down and took snapshot for MCS

Created catalog

Booted the newly created VM(s) and gave them time to register

Created delivery group
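The hostname checks I mention above boil down to something like this (a sketch using my lab names; substitute your own, and note hostnamectl is just the equivalent of editing /etc/hostname by hand):

hostnamectl set-hostname centos74     # short name lands in /etc/hostname
grep centos74 /etc/hosts              # expect: <ip> centos74.subdomain.domain.edu centos74
hostname                              # should print centos74
hostname -f                           # should print centos74.subdomain.domain.edu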


Hi, first, we recommend using MCS with a clean template machine, and the template machine should not be joined to the domain.

Also, you need to check the Samba version, as there is a known Samba issue. We have already submitted a bug to Red Hat and will let you know once it's fixed. You can try CentOS 7.4 for now. You may find the details here: https://support.citrix.com/article/CTX235834

For more information, we would need to check deploymcs.log and ad_join.log under /var/log.
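For example:

rpm -qa 'samba*' | sort        # confirm exactly which Samba build is installed
tail -n 50 /var/log/deploymcs.log
tail -n 50 /var/log/ad_join.log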


Do not run the easy install script; it conflicts with the MCS script. The template machine does not need to join the domain.

 

And no, there is no delivery group configuration that should be set in the template machine when using MCS; after creating machines through the machine catalog, you create the delivery group manually.  Also, I think you missed some dependencies; please follow the guide step by step to use MCS: https://docs.citrix.com/en-us/linux-virtual-delivery-agent/current-release/installation-overview/use-mcs-to-create-linux-vms.html


I had a similar experience and opened a case. We ended up starting from scratch like you did with MCS, performing tasks in a specific order when creating the master image, and everything is working now from a machine-join perspective. However, users are still getting the same experience you describe in your original post when attempting to access the box through StoreFront.


Looks like I will keep CentOS 7.4 on MCS and will move to Ubuntu 16.04 on PVS.  This is what I got back on my case:

 

There is currently no support for LVDA 7.18 w/ PVS as v7.18 does not support RHEL\CentOS 7.2 or 7.3.

You will have to use v7.17 and use RHEL\CentOS 7.2 or 7.3 or abandon PVS and use MCS instead.

The docs for the LVDA and the PVS streaming of said LVDA seem to contradict each other…

 

These are the versions required for the LVDA in v7.18. The Linux VDA supports the following Linux distributions:

• SUSE Linux Enterprise:
  • Desktop 12 Service Pack 3
  • Server 12 Service Pack 3
• Red Hat Enterprise Linux:
  • Workstation 7.4
  • Workstation 6.9
  • Workstation 6.8
  • Server 7.4
  • Server 6.9
  • Server 6.8
• CentOS Linux:
  • CentOS 7.4
  • CentOS 6.9
  • CentOS 6.8
• Ubuntu Linux:
  • Ubuntu Desktop 16.04
  • Ubuntu Server 16.04

 

These are the versions required for Linux Streaming (PVS) in v7.18. The following Linux distributions are supported:

• Ubuntu 16.04: 16.04.01 and 16.04.02 with the 4.4.x kernel. When using these distributions for Linux streaming, note that the PVS installer requires the Linux kernel package version to be greater than or equal to 4.4.0.53; the PVS installer automatically provides the correct version during installation.
• Red Hat Enterprise Linux Server 7.2, 7.3
• CentOS 7.2, 7.3
• SUSE Linux Enterprise Server (SLES) 12.1, 12.2


I am seeing something similar on my end. I took a PCAP and am seeing some auth failures, and within the trace logs on the LVDA I can see the system failing to use smart card auth. I have opened a ticket on this issue and will report back with the findings. I also opened a ticket with Red Hat, and they are seeing that GSSPROXY is being called. I have a question in to Citrix regarding the use of GSSPROXY as well.


Hi all, 

 

We filed a bug with the RHEL team, and the crash issue has been fixed in the latest Samba packages in RHEL 7.5. However, there is a big change in this Samba version compared with previous ones, so we had to modify our MCS code to accommodate the change so that it can authenticate domain users.

 

FAS configuration is not in the MCS config file. If you need to configure it, there is a workaround: you can add it to the script /var/xdl/mcs/mcs_util.sh. There is a function called setup_vda() that configures all the VDA-related settings; you can add your configuration there, and it will run when the MCS-created VMs boot up.
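As an illustration only (your copy of the script will differ, and the ctxreg key shown is a made-up placeholder, not the real FAS setting), the edit amounts to appending your commands inside setup_vda():

# Fragment of /var/xdl/mcs/mcs_util.sh (illustrative sketch)
setup_vda()
{
    # ... existing VDA configuration performed by the script ...

    # Custom additions run on every MCS-provisioned VM at boot.
    # Placeholder ctxreg call; take the real FAS keys from the Linux VDA docs.
    /opt/Citrix/VDA/bin/ctxreg create -k 'HKLM\Software\Citrix\VirtualDesktopAgent' \
        -t 'REG_SZ' -v 'ExampleSetting' -d 'example-value' --force
}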

 

Also, to provision MCS VMs, you do not need to run the ctxinstall.sh script on the template machine; it might cause unexpected results.

 

Thanks,

Qian

On 22/10/2018 at 3:39 AM, Qian Wang said:

We filed a bug with the RHEL team, and the crash issue has been fixed in the latest Samba packages in RHEL 7.5. However, there is a big change in this Samba version compared with previous ones, so we had to modify our MCS code to accommodate the change so that it can authenticate domain users.

 

Do you have an update on when the Linux VDA will be updated to include this fix for CentOS / RHEL 7.5?

 

Do the Delivery Controllers need to be running 1808 for the following scenario to work? Our DDCs are currently 7.18.

 

 7.4 CentOS, Linux VDA 1808, MCS deployment, VDI Mode, ESXi Hypervisor.


Trying to interpret Qian's remark above, I think he is saying that the current 1808 VDA already has the updates for CentOS 7.5.  What has changed recently is that the crash behavior in 7.5 has been resolved.  I haven't tested this yet, but I will, along with editing the mcs_util.sh script instead of running Easy Install.

 

I asked Citrix support the same question about the DDCs; ours are at 7.15 LTSR.  The response was that we needed our DDCs to be running 1808.  I see that you are running VMware, and vSphere was first supported in 1808; only XenServer and Azure were supported in 7.18, iirc.  NB: Citrix hadn't updated their webpage to say that vSphere is supported in 1808, but it is.  You can always drop in one or two more DDCs running 1808 alongside your existing test (or production) 7.18 ones if you can't wait to upgrade your current DDCs.  That's what I did in test.


It worked really well.  And I was able to update the image and roll it out successfully.

 

However, with CentOS 7.5 you have to lock it so that it doesn't update to 7.6, and the steps they provide don't work for it.  Instead, here is what I had to do.

 

1. vi /etc/yum.repos.d/CentOS-Vault.repo and add the 7.5.1804 sections to the yum vault repositories (see below):

 

# C7.5.1804
[C7.5.1804-base]
name=CentOS-7.5.1804 - Base
baseurl=http://vault.centos.org/7.5.1804/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

[C7.5.1804-updates]
name=CentOS-7.5.1804 - Updates
baseurl=http://vault.centos.org/7.5.1804/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

[C7.5.1804-extras]
name=CentOS-7.5.1804 - Extras
baseurl=http://vault.centos.org/7.5.1804/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

[C7.5.1804-centosplus]
name=CentOS-7.5.1804 - CentOSPlus
baseurl=http://vault.centos.org/7.5.1804/centosplus/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

[C7.5.1804-fasttrack]
name=CentOS-7.5.1804 - FastTrack
baseurl=http://vault.centos.org/7.5.1804/fasttrack/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

 

2. yum-config-manager -v --disable CentOS\*
3. yum-config-manager --enable C7.5\*
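You can then confirm that only the vault 7.5 repos are active:

yum repolist enabled           # only the C7.5.1804-* repos should be listed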


Just wanted to follow up on this thread to let you know that PVS is now working really well. We still have the same Samba version issues, but I have used yum versionlock to help with that. Working with Citrix engineering, a bug was uncovered and fixed in 1909. If you want to revisit PVS, I can attest that it is stable now.

