Delivering RPM packages securely and continuously with Jenkins and Hashicorp Vault

When you publicly deliver more than eight releases a day—like we do for Tuleap—making sure that users can verify the authenticity of the deliverables becomes a challenge. Over the past few weeks we have modified our Jenkins build pipelines to GPG sign every RPM package we deliver, thus enhancing the level of security we bring to every user.

Why is GPG signing RPM packages important?


Signing RPM packages lets your users verify the authenticity of the data they download. It prevents unauthorized parties from altering packages on the repository server without anyone noticing, which is definitely something you want when you make software. Namely, you do not want hackers to be able to alter the software your users download, potentially creating backdoors or taking full control of users’ systems. Signing packages is also one of the necessary steps towards achieving secure code delivery.

When you distribute RPM packages, you can use GPG signatures to protect the following two elements:

  • First, the RPM package itself. The signature is embedded in each RPM file. It is the first item you should sign, as the RPM packages are your software’s “carriers.”
  • The Yum repository metadata. Yum uses the metadata to collect information about each package of the repository to determine dependencies. The metadata should also be signed, as it determines which dependencies will be installed with your software and if an update is available for your system, two things you do not want a hacker to control.
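On the user’s side, both layers can be verified with standard tooling. A quick sketch (the key path, package name, and repository options are illustrative, not Tuleap-specific):

```shell
# Import the publisher's public key into the RPM keyring
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-example

# Verify the embedded signature of a downloaded package
rpm --checksig example-1.0-1.noarch.rpm

# In the Yum repository definition, enable both checks:
#   gpgcheck=1      -> verify each RPM's embedded signature
#   repo_gpgcheck=1 -> verify the signed repository metadata
```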

The challenge of automatically signing RPM packages


Before we started automating the signing of RPM packages, we were only signing our software monthly milestone release packages using a tedious and error-prone manual process that also made signing our daily releases impossible. We decided it was time to change that!

To automate the signing process, however, we needed to find a way to securely store the GPG key used to sign the packages. If the key is stolen by a hacker, all your efforts become worthless: the hacker can simply sign their own malicious artifacts with your trusted key.

One industry standard for storing private keys is to use a hardware security module. Unfortunately, this was not compatible with our current infrastructure. So, we continued our quest for a solution.

We stumbled on Sigul, which is the automated GPG signing system used by the Fedora Release Engineering team. While Sigul did address our needs, the setup and maintenance were a bit intimidating. Also, Sigul does not appear to be widely used outside of the Fedora infrastructure, another factor that pushed us to keep searching.

At around the same time, we were starting to use Hashicorp Vault to manage the secrets of the SaaS service, so it made sense to use it in our package signing process, too. The ideal use case would be to send the data that needs to be signed to Vault and retrieve the signature, so that the GPG key stays inside Vault at all times. Unfortunately, this is not a supported use case at this time. Because it made sense for us to reuse an existing component of our infrastructure, we decided to store the GPG key in the generic secret backend of Vault and to retrieve it only when needed.
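As a rough sketch, storing the key in the generic secret backend and restricting who can read it could look like this (the path, field name, and policy name are illustrative, not our actual setup):

```shell
# Store the ASCII-armored private key at a dedicated path
vault write secret/gpg/rpm-signing private_key=@signing-key.asc

# A policy that only allows reading that single path, to be
# attached to the identity used by the build pipeline
vault policy write rpm-signing - <<'EOF'
path "secret/gpg/rpm-signing" {
  capabilities = ["read"]
}
EOF
```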

Automated GPG signing of RPM packages and repositories with Jenkins and Hashicorp Vault


We build our packages in a Jenkins pipeline. Therefore, adding steps at the end of the pipeline to sign the packages and the repository seemed pretty straightforward—we just had to extend the small library we use in our build processes to add the signature-related components. As with almost everything we do in Jenkins, we encapsulated all the heavy lifting in Docker images, as this helps us obtain reproducible behaviors between environments. All that was left to do was to figure out the whole process and limit the exposure of the GPG key as much as possible.

To get the GPG key into the signing container without exposing it too much, we rely on a Vault mechanism called response wrapping: Vault can wrap any response it issues inside a single-use token with a short lifetime. This lets you safely pass secrets to a Docker container while limiting exposure. If the token passed to the container is stolen, the fact that it is single-use means that any breach can be rapidly detected, and the token’s short lifetime makes it harder for an attacker to use.
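With the Vault CLI, response wrapping looks roughly like this (the secret path and image name are illustrative):

```shell
# Read the secret, but wrap the response in a single-use token
# that expires after 60 seconds
WRAP_TOKEN=$(vault read -wrap-ttl=60s -field=wrapping_token \
    secret/gpg/rpm-signing)

# Only the wrapping token reaches the container; the key itself
# never appears on the Docker command line
docker run --rm -e VAULT_WRAP_TOKEN="$WRAP_TOKEN" rpm-signer

# Inside the container, a single unwrap yields the secret; a second
# unwrap of the same token fails, which makes theft detectable
vault unwrap -field=private_key "$VAULT_WRAP_TOKEN" > key.asc
```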

We took a few additional steps to limit the exposure of the GPG key as much as possible:

  • We avoided manipulating the GPG key anywhere but in RAM. We wanted to avoid writing anything to disk, since that makes it easier to retrieve the key even after the container is deleted. To do this, we mounted a tmpfs directory to hold all of our sensitive data, and we instructed the OS to avoid using swap for this container.
  • We overwrote sensitive information with random data before exiting the container. This was to avoid leaving any traces that could make it possible to extract the GPG private key from the RAM.
  • We made sure that we did not leak the key in the Docker container logs.
  • We built and signed the packages and repository on dedicated Jenkins slaves. This ensures that we do not expose the key on slaves running unrelated activities, like executing our unit tests.
  • We used strict policies in Vault to restrict who can access the key and how the key can be accessed.
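The RAM-only and no-swap constraints above map onto plain Docker options. An illustrative invocation (the mount point, size, and image name are assumptions):

```shell
# /sensitive is a tmpfs: it lives in RAM and vanishes with the container.
# --memory-swappiness=0 tells the kernel to avoid swapping the
# container's anonymous pages to disk.
docker run --rm \
    --tmpfs /sensitive:rw,noexec,nosuid,size=64m \
    --memory-swappiness=0 \
    -e VAULT_WRAP_TOKEN="$WRAP_TOKEN" \
    rpm-signer
```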

So, with all that in mind, we ended up with the following process:

  1. Authenticate against Vault using the AppRole authentication backend. We store the RoleID and SecretID in the Jenkins Credentials plugin.
  2. Retrieve the response-wrapped GPG key from Vault to pass it to the container that will sign the packages.
  3. Start the Docker container that signs the packages. The response-wrapped GPG key is given as an environment variable to the container.
    1. The container retrieves the GPG key from Vault with the wrapped response.
    2. Load the GPG key in the keyring.
    3. Sign all the packages.
    4. Overwrite the sensitive data before exiting the container.
  4. Construct the repository metadata.
  5. Retrieve the response-wrapped GPG key from Vault to pass it to the container that will sign the repository metadata.
  6. Start the Docker container that signs the repository metadata. The response-wrapped GPG key is given as an environment variable to the container.
    1. The container retrieves the GPG key from Vault with the wrapped response.
    2. Load the GPG key in the keyring.
    3. Sign the repository metadata.
    4. Overwrite the sensitive data before exiting the container.
  7. Publish the newly built and signed repository.
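Inside the signing containers, steps 3.1–3.4 and 6.1–6.4 boil down to a short script. The following is only a sketch under assumed names (the key id, paths, and tmpfs mount point are placeholders, not our exact code):

```shell
#!/bin/sh -e
# 1. Unwrap the single-use token to obtain the private key,
#    keeping it on the tmpfs only
mkdir -p -m 700 /sensitive/gnupg
vault unwrap -field=private_key "$VAULT_WRAP_TOKEN" > /sensitive/key.asc

# 2. Load the key into a keyring that also lives on the tmpfs
gpg --homedir /sensitive/gnupg --import /sensitive/key.asc

# 3a. Package container: sign every RPM
rpmsign --define '_gpg_path /sensitive/gnupg' \
        --define '_gpg_name builds@example.com' \
        --addsign /packages/*.rpm

# 3b. Metadata container: detach-sign repomd.xml instead
# gpg --homedir /sensitive/gnupg --detach-sign --armor /repo/repodata/repomd.xml

# 4. Overwrite the sensitive files with random data before exiting
shred --force /sensitive/key.asc /sensitive/gnupg/* 2>/dev/null || true
```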

You will find all of the code we used to do this in our repositories.

While the process is still not perfect, it is vastly improved. We are now able to sign every RPM package and repository we release; we have also eliminated the risk of human error. Also, with Vault, we have an auditable journal of when, where, and by whom the GPG key is used. Being able to sign data with a GPG key directly in Vault would be a major improvement as it would greatly reduce the exposure of the key.
