CloudBees, Inc.

CloudBees Jenkins Enterprise 1.11.0

New Features

Major TIGER-3453

Jenkins-based containers (Operations Center, Managed Master, Castle) are now based on the Alpine Linux distribution.

Resolved issues

Major TIGER-3887

[RHEL] Red Hat package installation now uses the appropriate proxy settings

Major TIGER-3813

[RHEL][CentOS] Fixed an issue where NFS 4.1 mount verification could hang indefinitely on RHEL/CentOS 7.4

Minor TIGER-3964

Fixed compatibility with Jenkins 2.89

Minor TIGER-3787

Fixed an issue where extraneous files in the CJE project directory with the prefix controller or worker caused some operations to fail

Minor TIGER-3809

Removing certificates from the certificates/ folder now removes them from the cluster after the certificates-update operation is applied.

Known issues

Major TIGER-3987

Incompatible JVM flags may be left over from a previous installation of CJE.

If you have upgraded and Operations Center fails to come up during the upgrade process, manually remove the following flags from the files <CJE-PROJECT-DIR>/.dna/project.config and <CJE-PROJECT-DIR>/.dna/servers/cjoc/dna.config:

  • -XX:+UnlockExperimentalVMOptions
  • -XX:+UseCGroupMemoryLimitForHeap
  • -XX:MaxRAMFraction=2

You can use any editor to remove each of these items from anywhere within the files.
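Alternatively, the flags can be stripped with a quick sed loop. The sketch below runs against a throwaway file with hypothetical contents; to use it for real, point config at each of the two files listed above.

```shell
# Sketch only: strip the leftover JVM flags from a config file with sed.
# The sample contents below are hypothetical; in practice, set "config" to
# <CJE-PROJECT-DIR>/.dna/project.config and then to .dna/servers/cjoc/dna.config.
config=$(mktemp)
cat > "$config" <<'EOF'
jvm_options = -Xmx2g -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2
EOF

# Remove each flag wherever it appears in the file (none of the flags
# contain characters that are special in a basic regular expression).
for flag in '-XX:+UnlockExperimentalVMOptions' \
            '-XX:+UseCGroupMemoryLimitForHeap' \
            '-XX:MaxRAMFraction=2'; do
  sed -i "s/ *$flag//g" "$config"
done

cat "$config"   # jvm_options = -Xmx2g
```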

Major TIGER-4051

We found a major compatibility issue with NFS v3.x. At this time, do not install or upgrade to CJE 1.11.0 if you are using an NFS v3.x server.

Minor TIGER-2355

If you enter an invalid Managed Master image location on the Manage Jenkins page in Operations Center, then when the new instance is deployed, the log window only shows that deployment is being attempted and gives no further feedback. Correct the image location to resolve this.

Minor TIGER-2371

CJE allows you to enable one-shot executors, which provision slightly faster. However, the current implementation of one-shot executors does not support Pipeline resumption.

Minor TIGER-2426

CJE does not support installing the Palace Cloud plugin on masters that are not managed by CloudBees Jenkins Enterprise.

Minor TIGER-2522

Managed Masters may appear inaccessible while Operations Center is being upgraded. This is a temporary condition that occurs while the internal application router is being updated.

Minor TIGER-2724

When you delete a Managed Master, the data associated with it is retained in a backup snapshot used for recovery purposes. If you add a new master with the same name, CJE recovers the data from the snapshot and re-creates the master.

Minor TIGER-2427

[AWS] A CJE cluster-recover fails if its subnet is created in another availability zone. When using the cluster-recover operation, it is simpler to keep the cluster in the same AWS availability zone (AZ).

Minor TIGER-3414

CJE can fail to upgrade when a worker was incompletely set up.

Under some circumstances, a cje prepare worker-add operation can fail. The typical case (on Amazon) is when the user's MFA code is entered incorrectly when prompted after the "apply" step. This results in a worker that is incompletely set up and an instance that is never started.

When this condition exists, an upgrade will fail.

To resolve this, run cje prepare worker-remove on the partially created workers, and then restart the upgrade process.

Minor TIGER-3539

[AWS] Under some circumstances, an unexpected file prevents the operation cluster-destroy from completing on AWS.

When destroying a cluster, CJE can also delete the S3 buckets, but a docker.tar.gz file may be present in the bucket, which prevents CJE from finishing. To work around this issue, manually delete the file using the AWS CLI and apply the cje operation again.
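A sketch of the workaround is below. The bucket name is a placeholder, and "cje apply" is an assumption based on the prepare/apply workflow mentioned above; the DRY_RUN guard (the default here) only prints the commands so you can review them before running anything against a real account.

```shell
# Hedged sketch of the workaround. BUCKET is a placeholder for your cluster's
# S3 bucket name; "cje apply" is assumed (from the prepare/apply workflow) to
# re-run the prepared cluster-destroy operation. DRY_RUN=1 only prints commands.
BUCKET="my-cje-cluster"          # placeholder bucket name
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run aws s3 rm "s3://$BUCKET/docker.tar.gz"   # delete the leftover file
run cje apply                                # re-run the destroy operation
```

Set DRY_RUN=0 only after confirming the bucket name and the printed commands.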

Minor TIGER-3739

If Operations Center is not running, you cannot use the CLI to remove workers. If a worker configuration is damaged, Operations Center will not start properly. If this situation occurs, contact CloudBees customer support for assistance in resolving it.

Minor TIGER-3957

[anywhere] The cluster-destroy operation does not remove /etc/.*_installed files on the target machine instances.

These files will interfere with a subsequent installation if you re-use the same machine instances.

To work around this, delete the files, or re-install the OS on the machine instances.
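As a sketch, the marker files can be cleared with find. This rehearsal version points at a scratch directory seeded with hypothetical file names (and assumes the markers are hidden dotfiles, per the /etc/.*_installed pattern above); on a real instance you would run the same find command against /etc as root.

```shell
# Rehearsal sketch: clear leftover marker files. TARGET_ROOT is a scratch
# directory seeded with hypothetical names here; on a real instance, run the
# same find command with TARGET_ROOT=/etc as root. The dotfile pattern is an
# assumption based on the /etc/.*_installed wording above.
TARGET_ROOT=$(mktemp -d)
touch "$TARGET_ROOT/.docker_installed" "$TARGET_ROOT/.kubelet_installed"

# Print, then delete, each hidden marker file directly under TARGET_ROOT.
find "$TARGET_ROOT" -maxdepth 1 -name '.*_installed' -type f -print -delete
```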

See also