Fixed the security issues described in the advisory.
Jenkins-based containers (Operations Center, Managed Master, Castle) are now based on the Alpine Linux distribution.
[RHEL] Red Hat package installation now uses the appropriate proxy settings.
[RHEL][CentOS] Fixed an issue where NFS 4.1 mount verification could hang forever on RHEL/CentOS 7.4.
Fixed compatibility with Jenkins
Fixed an issue where extraneous files with a controller or worker prefix in the CJE project directory caused some operations to fail.
Removing certificates from the certificates/ folder now removes them from the cluster after applying.
Incompatible JVM flags may be left over from a previous installation of CJE. If Operations Center fails to come up during an upgrade, manually remove the following flags from the files. You can use any editor to remove each of these items from anywhere within the file.
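As a minimal cleanup sketch, sed can strip a flag in place; the flag -Dhypothetical.option=true and the file name jvm-options below are placeholders, so substitute the actual flags and files listed for your installation:

```bash
# Strip a leftover JVM flag in place. Both the flag and the file name are
# placeholders; use the flags and files listed above for your install.
sed -i 's/-Dhypothetical\.option=true//g' jvm-options
```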
We found a major compatibility issue with NFS v3.x. At this time, do not install or upgrade to CJE 1.11.0 if you are using an NFS v3.x server.
If you enter an invalid Managed Master image location on the Manage Jenkins page in Operations Center, the log window only shows that it is attempting to deploy the new instance and gives no further feedback. Correct the image location to resolve this.
CJE allows you to enable one-shot executors, which provide slightly faster executor provisioning. However, the current implementation of one-shot executors does not support Pipeline resumption.
CJE doesn't support installing the Palace Cloud Plugin into masters that are not managed by CloudBees Jenkins Enterprise.
Managed Masters may appear to be inaccessible while Operations Center is being upgraded. This is a temporary condition that occurs while the internal application router is being updated.
When deleting a Managed Master, the data associated with the master is retained in a backup snapshot used for recovery purposes. If you add a new master with the same name, it will recover the data from the snapshot and re-create it.
[AWS] A CJE cluster-recover operation fails if its subnet is created in another availability zone. When using the cluster-recover operation, it is simpler to keep the cluster in the same AWS availability zone (AZ).
CJE can fail to upgrade when a worker was incompletely set up.
Under some circumstances, a cje prepare worker-add operation can fail. The typical case (on Amazon) is when the user's MFA code is entered incorrectly when prompted after the "apply" step. This leaves a worker that is incompletely set up, with an instance that is never started. When this condition exists, an upgrade will fail. To resolve this, run cje prepare worker-remove on the partially created workers, and then restart the upgrade process, as sketched below.
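A hedged sketch of that recovery flow, assuming the usual cje prepare/apply pattern; the worker-remove.config file name is an assumption, so check the file your version generates:

```bash
# Stage a worker-remove operation for the partially created worker.
cje prepare worker-remove
# Edit the generated worker-remove.config so it targets the incomplete
# worker, then apply the staged operation.
cje apply
# Once the broken worker is gone, restart the upgrade process.
```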
[AWS] Under some circumstances, an unexpected file prevents the cluster-destroy operation from completing on AWS. When destroying a cluster, CJE also deletes the cluster's S3 buckets, but a docker.tar.gz file may be present in a bucket, which prevents CJE from finishing. To work around this issue, manually delete the file using the aws CLI and apply the cje operation again.
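For example, a minimal sketch using the aws CLI; the bucket name my-cje-cluster is a placeholder for the bucket your cluster actually uses:

```bash
# Delete the leftover archive that blocks cluster-destroy, then re-apply
# the cje operation. "my-cje-cluster" is a placeholder bucket name.
aws s3 rm s3://my-cje-cluster/docker.tar.gz
```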
If Operations Center is not running, you will be unable to use the CLI to remove workers. If a worker configuration is damaged, Operations Center won't start properly. In this situation, contact CloudBees customer support for assistance in resolving it.
[anywhere] The cluster-destroy operation does not remove /etc/.*_installed files on the target machine instances. These files will interfere with a subsequent installation if you reuse the same machine instances. To work around this, delete the files or reinstall the OS on the machine instances, as in the sketch below.
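As a sketch, the cleanup on each reused machine instance could look like the following; the exact marker file names vary by installation, so review the matches before deleting:

```bash
# List the leftover marker files, then remove them so a fresh
# installation can proceed on the same instance.
ls /etc/*_installed
sudo rm -f /etc/*_installed
```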