Jeff Riechers

Posts posted by Jeff Riechers

  1. Freshly deployed 2019 image, patched with all updates, and the problem still exists.

     

    Event error 1 

     

    New Server Deployment
    An error occurred running the command: 'Set-DSNewClusterEnvironment' 
    An error occurred while adding the StoreFront configuration API. Exception of type 'Citrix.DeliveryServices.Framework.Web.Deployment.Exceptions.AspNetNotAvailableException' was thrown.
    At C:\Program Files\Citrix\Receiver StoreFront\Management\Cmdlets\ClusterConfigurationModule.psm1:1290 char:9
    +         Add-STFConfigurationApi
    +         ~~~~~~~~~~~~~~~~~~~~~~~
    An error occurred while adding the StoreFront configuration API. Exception of type 'Citrix.DeliveryServices.Framework.Web.Deployment.Exceptions.AspNetNotAvailableException' was thrown.

    Citrix.DeliveryServices.PowerShell.Command.RunnerInterfaces.Exceptions.PowerShellExecutionException: An error occurred running the command: 'Set-DSNewClusterEnvironment' 
    An error occurred while adding the StoreFront configuration API. Exception of type 'Citrix.DeliveryServices.Framework.Web.Deployment.Exceptions.AspNetNotAvailableException' was thrown.
    At C:\Program Files\Citrix\Receiver StoreFront\Management\Cmdlets\ClusterConfigurationModule.psm1:1290 char:9
    +         Add-STFConfigurationApi
    +         ~~~~~~~~~~~~~~~~~~~~~~~ ---> System.Management.Automation.ActionPreferenceStopException: The running command stopped because the preference variable "ErrorActionPreference" or common parameter is set to Stop: An error occurred while adding the StoreFront configuration API. Exception of type 'Citrix.DeliveryServices.Framework.Web.Deployment.Exceptions.AspNetNotAvailableException' was thrown.
       at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input)
       at Citrix.DeliveryServices.PowerShell.Command.Runner.PowerShellCommandRunner.InvokeCommand(IPowerShellCommand command, Command powerShellCommand)
       at Citrix.DeliveryServices.PowerShell.Command.Runner.PowerShellCommandRunner.RunCommand(IPowerShellCommand command)
       --- End of inner exception stack trace ---
       at Citrix.DeliveryServices.PowerShell.Command.Runner.PowerShellCommandRunner.RunCommand(IPowerShellCommand command)
       at Citrix.DeliveryServices.Admin.FirstTimeConfiguration.Common.DeploymentSteps.DeploymentBase.Deploy()
       at Citrix.DeliveryServices.Admin.FirstTimeConfiguration.ScopeNode.Wizard.Models.FTUWizardCreatingViewModel.CreateDeployment()
     

    Event error 2

     

     

    A PowerShell SDK execution error occurred.

    Cmdlet name: Add-STFConfigurationApi

    Parameters:

    Citrix.DeliveryServices.Framework.Web.Deployment.Exceptions.AspNetNotAvailableException: Exception of type 'Citrix.DeliveryServices.Framework.Web.Deployment.Exceptions.AspNetNotAvailableException' was thrown.
       at Citrix.DeliveryServices.Framework.Web.Feature.Web.WebApplicationInstance.Deploy(IFeatureInstance parent, InstanceProperties data, Boolean isUpgrade)
       at Citrix.DeliveryServices.Framework.Abstractions.FrameworkControllerBase.DeployInstance(IFeatureInstance parent, InstanceProperties data, IFeatureInstance instance, Boolean isUpgrade)
       at Citrix.DeliveryServices.Framework.Abstractions.FrameworkControllerBase.CreateInstance(FeatureClass featureClass, IFeatureInstance parent, InstanceProperties data, Guid id, Boolean isUpgrade)
       at Citrix.DeliveryServices.Framework.Abstractions.FrameworkControllerBase.CreateInstance(FeatureClass featureClass, IFeatureInstance parent, InstanceProperties data, Guid id)
       at Citrix.DeliveryServices.Framework.Feature.AddFeature.CreateInstance(IFrameworkController frameworkController, FeatureClass featureClass, IFeatureInstance parent, Dictionary`2 data, Dictionary`2 readOnly, ReadOnlyDictionary`2 paramList, Guid requiredInstanceId, Guid tenantId, String cmdlet, String snapIn)
       at Citrix.StoreFront.ConfigurationManager.Deployment.InstallBase.CreateInstance(FeatureClass featureClass, IFeatureInstance parent, Dictionary`2 data, Dictionary`2 readOnly, Dictionary`2 parameters, Guid requiredInstanceId)
       at Citrix.StoreFront.ConfigurationManager.Deployment.InstallWebApplicationBase.Add(String virtualPath, String defaultDocuments, String applicationPool, Int64 siteId, String eventSource)
       at Citrix.StoreFront.ConfigurationApi.ConfigurationApiInstaller.AddConfigurationApi()
       at Citrix.StoreFront.ConfigurationApi.Cmdlets.AddConfigurationApi.ProcessRecord()
     

    Event error 3

     

    An error occurred running the command: 'Set-DSNewClusterEnvironment' 
    An error occurred while adding the StoreFront configuration API. Exception of type 'Citrix.DeliveryServices.Framework.Web.Deployment.Exceptions.AspNetNotAvailableException' was thrown.
    At C:\Program Files\Citrix\Receiver StoreFront\Management\Cmdlets\ClusterConfigurationModule.psm1:1290 char:9
    +         Add-STFConfigurationApi
    +         ~~~~~~~~~~~~~~~~~~~~~~~
    An error occurred while adding the StoreFront configuration API. Exception of type 'Citrix.DeliveryServices.Framework.Web.Deployment.Exceptions.AspNetNotAvailableException' was thrown.

    Citrix.DeliveryServices.PowerShell.Command.RunnerInterfaces.Exceptions.PowerShellExecutionException, Citrix.DeliveryServices.PowerShell.Command.RunnerInterfaces, Version=3.20.0.0, Culture=neutral, PublicKeyToken=e8b77d454fa2a856
    An error occurred running the command: 'Set-DSNewClusterEnvironment' 
    An error occurred while adding the StoreFront configuration API. Exception of type 'Citrix.DeliveryServices.Framework.Web.Deployment.Exceptions.AspNetNotAvailableException' was thrown.
    At C:\Program Files\Citrix\Receiver StoreFront\Management\Cmdlets\ClusterConfigurationModule.psm1:1290 char:9
    +         Add-STFConfigurationApi
    +         ~~~~~~~~~~~~~~~~~~~~~~~


    System.Management.Automation.ActionPreferenceStopException, System.Management.Automation, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
    The running command stopped because the preference variable "ErrorActionPreference" or common parameter is set to Stop: An error occurred while adding the StoreFront configuration API. Exception of type 'Citrix.DeliveryServices.Framework.Web.Deployment.Exceptions.AspNetNotAvailableException' was thrown.
    System.Management.Automation.Interpreter.InterpretedFrameInfo: System.Management.Automation.Interpreter.InterpretedFrameInfo[]
       at System.Management.Automation.Runspaces.PipelineBase.Invoke(IEnumerable input)
       at Citrix.DeliveryServices.PowerShell.Command.Runner.PowerShellCommandRunner.InvokeCommand(IPowerShellCommand command, Command powerShellCommand)
       at Citrix.DeliveryServices.PowerShell.Command.Runner.PowerShellCommandRunner.RunCommand(IPowerShellCommand command)
     

  2. Same issue here.  I deployed from a 2019 template from a few months ago and hit the same problem.

     

    StoreFront wasn't showing up in Add/Remove Programs after the install either.  I couldn't create a new farm or join an existing one.

     

    I also wasn't able to upgrade an existing install.

     

    I had to roll out a 2016 image to install the 1906 version.  Then I was able to join an existing lower-level StoreFront.

  3. On 4/30/2019 at 1:33 PM, Kasper Johansen1709159522 said:

    I am running a VPX 1000 in Hyper-V; it's an older version, but it still supports both EDT and DTLS.

     

    That might be it.  This is my day-one lab where everything is on the latest and greatest.  12.1 is where I started seeing these disconnects.  You can see multiple EDT UDP sessions get created when users hit the pausing grey screen; I think for the longest sessions I had around 20 there.  I have yet to test with 1903.  It's on the to-do list.

  4. Is anyone else still seeing reconnect issues with Server 2019 published desktops?

     

    When a desktop gets disconnected due to a timeout or the machine sleeping, I can't reconnect until I use the "Restart" option via StoreFront.  All connections are via TCP (UDP is still broken) through a NetScaler Gateway VPX (sorry, ADC marketing team) on the latest versions.

     

    When attempting to log in, the following lines are shown on the VDA.

     

    The Citrix ICA Transport Driver has received a connect request from NetScalerSNIP:34495

     

    followed by

     

    The Citrix ICA Transport Driver connection from NetScalerSNIP:34495 has been closed.

     

    Resetting the session on the NetScaler doesn't fix it.

     

    Problem does not occur on Windows 10 1809.

  5. I upgraded to 1903 and the latest 12.1 firmware.  The inability to reconnect is gone.  I still get odd screen flashes, and checking the NetScaler I see it is still creating new UDP channels, but it is clearing out the old ones.

     

    So they are closer....but still not 100%  ;)

     

    Also, ADM is now gathering UDP connection info correctly.

     

    Update:

    Spoke too soon.  It gathered some UDP traffic, but once the session reconnected it stopped gathering data.

  6. It worked really well, and I was able to update the image and roll it out successfully.

     

    However, with CentOS 7.5 you have to lock it so that it doesn't update to 7.6, and the steps they provide don't work for that.  Here is what I had to do instead.

     

    1. vi /etc/yum.repos.d/CentOS-Vault.repo and add the 7.5.1804 entries to the yum vault repositories (see below).

     

    # C7.5.1804
    [C7.5.1804-base]
    name=CentOS-7.5.1804 - Base
    baseurl=http://vault.centos.org/7.5.1804/os/$basearch/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    enabled=1

    [C7.5.1804-updates]
    name=CentOS-7.5.1804 - Updates
    baseurl=http://vault.centos.org/7.5.1804/updates/$basearch/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    enabled=1

    [C7.5.1804-extras]
    name=CentOS-7.5.1804 - Extras
    baseurl=http://vault.centos.org/7.5.1804/extras/$basearch/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    enabled=1

    [C7.5.1804-centosplus]
    name=CentOS-7.5.1804 - CentOSPlus
    baseurl=http://vault.centos.org/7.5.1804/centosplus/$basearch/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    enabled=1

    [C7.5.1804-fasttrack]
    name=CentOS-7.5.1804 - FastTrack
    baseurl=http://vault.centos.org/7.5.1804/fasttrack/$basearch/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
    enabled=1

     

    2. yum-config-manager -v --disable CentOS\*
    3. yum-config-manager --enable C7.5\*
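
    A quick sanity check after those steps (standard yum commands; the repo ids come from the vault file above):

    yum repolist enabled     # should list only the C7.5.1804-* repos
    yum check-update         # should not offer anything from 7.6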

  7. Once I updated FAS to the latest version, I see that every time the Linux client logs in, even if it is not using SAML, it checks with the FAS server and the connection crashes out.

     

    If I have the image up and then run the FAS setup .sh script, the session will work, but the provisioned machines don't keep that configuration.

  8. Question on this.  I am seeing a HUGE number of retries with Ubuntu 16.04 on XenServer, with my PVS server on VMware.

     

    My Windows machines on the same host don't have any issue like that.

     

    Also, performance on the machine in private mode is a dog.  I didn't see this kind of performance when testing CentOS, but I ran into different compatibility issues there.

  9. Looks like I will keep CentOS 7.4 on MCS and move to Ubuntu 16.04 on PVS.  This is what I got back on my support case.

     

    There is currently no support for LVDA 7.18 w/ PVS as v7.18 does not support RHEL\CentOS 7.2 or 7.3.

    You will have to use v7.17 and use RHEL\CentOS 7.2 or 7.3 or abandon PVS and use MCS instead.

    The docs for the LVDA and the PVS streaming of said LVDA seem to contradict each other…

     

    These are the versions required for the LVDA in v7.18.  The Linux VDA supports the following Linux distributions:

    • SUSE Linux Enterprise:
      • Desktop 12 Service Pack 3
      • Server 12 Service Pack 3
    • Red Hat Enterprise Linux:
      • Workstation 7.4
      • Workstation 6.9
      • Workstation 6.8
      • Server 7.4
      • Server 6.9
      • Server 6.8
    • CentOS Linux:
      • CentOS 7.4
      • CentOS 6.9
      • CentOS 6.8
    • Ubuntu Linux:
      • Ubuntu Desktop 16.04
      • Ubuntu Server 16.04

     

    These are the versions required for Linux Streaming (PVS) in v7.18.  The following Linux distributions are supported:

    • Ubuntu 16.04
      • 16.04.01 and 16.04.02 with the 4.4.x kernel.

      When using these distributions for Linux streaming, note that the PVS installer requires the Linux kernel package version to be greater than or equal to 4.4.0.53.  The PVS installer automatically provides the correct version during the installation process.

    • Red Hat Enterprise Linux Server 7.2, 7.3
    • CentOS 7.2, 7.3
    • SUSE Linux Enterprise Server (SLES) 12.1, 12.2

  10. I did follow this document.  If I didn't run ctxsetup.sh, the system did not know which Delivery Controllers to register with.  The document outlines installing the VDA, but it never mentions configuring the VDA.

     

    I ran through this process many times, and this is what worked on CentOS 7.4:

     

    Install 7.4
    Install XenTools
    Disable IPv6 in grub (see the sketch after this list)
    Disable SELinux (also covered in the sketch below)
    Install selinux-policy 3.13.1-192.el7_5.3.noarch.rpm
    Set releasever to 7.4.1708
    Edit /etc/yum.repos.d/CentOS-Base.repo to point to vault.centos.org, since the binaries have moved off the mirror sites
    Install version lock
    Install samba-winbind-clients 4.6.2-12.el7_4.x86_64
    Lock that Samba version
    Install the Linux VDA
    yum update
    Run the easy install script
    Modify mcs.conf
    Run deploymcs.sh
    Shut down
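
    A rough sketch of the two "disable" steps above (standard CentOS 7 procedure, nothing Citrix-specific; the grub path assumes a BIOS install):

    # Append ipv6.disable=1 to the kernel command line, then rebuild the grub config
    sed -i 's/^GRUB_CMDLINE_LINUX="/&ipv6.disable=1 /' /etc/default/grub
    grub2-mkconfig -o /boot/grub2/grub.cfg

    # Disable SELinux permanently (takes effect after a reboot)
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config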

     

    Now when I roll out this image it works perfectly.  I ran both the easy install and the regular install; the easy install put in all the prerequisites and verified they were working correctly before I ran the deploy.  That is how I found the selinux-policy issue even with the service disabled.  deploymcs.sh didn't report it, but the easy install did.

     

    I tried running the whole scenario without the Linux VDA config script, and the machines never registered because they didn't have the DDCs in the system.
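
    For the registration piece specifically, my understanding is that the DDC list ctxsetup.sh prompts for can also be set by hand with the ctxreg tool; treat the exact key and value names below as an assumption and verify them against the Linux VDA docs for your version:

    # Assumed syntax - point the Linux VDA at the Delivery Controllers (space-separated FQDNs)
    sudo /opt/Citrix/VDA/bin/ctxreg create -k "HKLM\Software\Citrix\VirtualDesktopAgent" \
        -t "REG_SZ" -v "ListOfDDCs" -d "ddc1.example.com ddc2.example.com" --force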

  11. Some notes on this.

     

    To set the system to only use CentOS 7.4, I did the following steps.

    I had to edit /etc/yum.repos.d/CentOS-Base.repo, comment out each mirrorlist line, uncomment the baseurl line, and change it to vault.centos.org/$releasever/os/$basearch/ (adjusting the "os" part to match each section).

    You will have to do that four times in the file, once for each section.
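
    For reference, here is roughly what one of the four sections ends up looking like after that edit (a sketch based on the stock CentOS-Base.repo layout; your mirrorlist line may differ slightly):

    [base]
    name=CentOS-$releasever - Base
    #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
    baseurl=http://vault.centos.org/$releasever/os/$basearch/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7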

     

    Then at a command prompt run the following command.

    echo '7.4.1708' > /etc/yum/vars/releasever

     

    Then I ran yum update to get all the patches to the base 7.4 environment.

     

    To ensure the proper samba version I ran the following lines.

     

    yum install yum-plugin-versionlock
    yum install samba-winbind-clients-4.6.2-12.el7_4.x86_64
    yum versionlock samba*
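
    To confirm the lock is in place afterward (both standard commands):

    yum versionlock list            # should show the samba* packages pinned
    rpm -q samba-winbind-clients    # should report 4.6.2-12.el7_4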

     

    Then I installed the Linux VDA for 1808, modified the mcs.conf for my environment, and ran deploymcs.sh.

     

    I provisioned out the machines and could log in to them from the console with my domain creds, but they were not registering.  The CTX services were not running or configured.

     

    I booted up the base image, ran ctxsetup.sh and defined my environment there, then re-ran deploymcs.sh, shut down, and re-deployed.  However, the machines would never boot to the new version of the disk.  I ended up removing the machines and the catalog and creating new ones from the updated image.

     

    However these machines still would not register.

     

    It looks like mcs.conf is missing some settings for defining your Delivery Controllers for registration.  I have gone through the MCS web doc in detail, and there are some steps that are missing, or only implied, that need to be spelled out more.  I am going to continue working with this some more.

     

    I removed the Linux VDA from the image and then did a fresh install.  Running ctxinstall.sh, I got an error with SELinux.  I fixed this with the following.

     

    To use this version of the Linux VDA in RHEL 7.4 or CentOS 7.4, update the selinux-policy package to selinux-policy-3.13.1-192.el7_5 or later versions.
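
    Since the image is pinned to the 7.4 vault repos, the newer policy has to come in as standalone RPMs; something like the following should work (the filenames match the version mentioned above, and pulling selinux-policy-targeted alongside the base package is my assumption):

    yum localinstall \
        selinux-policy-3.13.1-192.el7_5.3.noarch.rpm \
        selinux-policy-targeted-3.13.1-192.el7_5.3.noarch.rpm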

     

    Then I was able to run ctxinstall.sh and deploymcs.sh, and things registered.

     

    So it definitely looks like samba-winbind-clients is the issue.

     

    As a test, I then updated the 7.4 image to 7.5 with --skip-broken to update everything but leave Samba at its version.

     

    This broke everything horribly.  So it was back to a fresh 7.4 build, and it works fine now.  Let me know if deploymcs.sh is going to have options for defining the Delivery Controllers directly.  It should also have a check in place for 7.4 not having the correct selinux policies.

     

     

     

     

  12. So I am playing around with MCS with 1808 on CentOS 7.5 Linux.  My maintenance image joins the domain just fine, and I can log in with no problem.  I modify the mcs.conf file and run the deploymcs.sh script, shut the machine down, provision the machines, and they boot up and register.  However, while trying to log in to a machine, I get an Invalid Logon window in the X session.

     

    Trying to log in to the console directly, I get "Sorry, that didn't work.  Please try again."

     

    Tailing the /var/log/secure file I get these messages.

     

    Sep  2 14:59:55 centos-ws-mcs1 gdm-password]: pam_unix(gdm-password:auth): authentication failure; logname= uid=0 euid=0 tty=/dev/tty1 ruser= rhost=  user=CV\administrator
    Sep  2 14:59:55 centos-ws-mcs1 gdm-password]: pam_winbind(gdm-password:auth): getting password (0x00004190)
    Sep  2 14:59:55 centos-ws-mcs1 gdm-password]: pam_winbind(gdm-password:auth): pam_get_item returned a password
    Sep  2 14:59:55 centos-ws-mcs1 gdm-password]: pam_winbind(gdm-password:auth): request wbcLogonUser failed: WBC_ERR_AUTH_ERROR, PAM error: PAM_AUTH_ERR (7), NTSTATUS: NT_STATUS_LOGON_FAILURE, Error message was: The attempted logon is invalid. This is either due to a bad username or authentication information.
    Sep  2 14:59:55 centos-ws-mcs1 gdm-password]: pam_winbind(gdm-password:auth): user 'CV\administrator' denied access (incorrect password or invalid membership)
     

    Running LDAP tests, I can query AD and get groups, users, etc.  It looks like some permission is getting lost during boot-up and the running of the MCS scripts.
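
    For reference, the standard winbind checks I would run on one of the provisioned machines to narrow this down (stock Samba tools, nothing Citrix-specific; the user is the one from the log above):

    wbinfo -t                       # verify the machine account trust secret with the DC
    wbinfo -u | head                # confirm winbind can still enumerate domain users
    wbinfo -a 'CV\administrator'    # prompts for a password and tests a real winbind logon
    net ads testjoin                # check whether the machine still considers itself joined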

     

    Anyone have any ideas?
