One of the core axioms of security for modern web deployments is to not check your keys into source control. From our API keys to our database passwords, these secrets being in our Git repository is a recipe for multiple kinds of disaster.
From the firing of a new engineer who used production credentials left in onboarding documentation to the numerous reports of plaintext credentials being used to pivot further into a breach, we’re instructed over and over to encrypt credentials, and decrypt only once we’ve deployed.
Encryption is presented as the answer, as the secure approach, but encryption alone grants us nothing. We still need to be able to decrypt the secrets and inject them into our app during startup. In testing, we often do this with the equivalent of .env files, and the Twelve-Factor Application approach recommends we load our secrets into environment variables.
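That load step can be sketched in a few lines of Python. The parser below is a deliberately minimal stand-in (real loaders such as python-dotenv also handle quoting, `export` keywords, and multi-line values); the file path and variable names are illustrative.

```python
import os

def load_env(path):
    """Parse simple KEY=VALUE lines from a .env-style file into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Demo: write a throwaway .env and load it
with open("/tmp/demo.env", "w") as fh:
    fh.write("DB_PASSWORD=hunter2\n# a comment\nAPI_KEY=abc123\n")

load_env("/tmp/demo.env")
print(os.environ["DB_PASSWORD"])  # → hunter2
```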
What this process loses is visibility. Our secrets, theoretically, only get decrypted and used once, during application startup. But how do we know? In a development context this is largely irrelevant, because our .env files or startup scripts hold disposable credentials for disposable services. In production, though, do we know if credentials have been decrypted outside of our normal process?
Tools such as HashiCorp’s Vault are intended to solve this problem by providing an interface that logs every request and regularly rotates key material, which both makes rogue access visible and ensures that any breach can be quickly repaired.
However, integrating Vault requires changes to development and deployment workflows. Instead of loading secrets from a file, developers need to modify the application to request credentials from the Vault API. This is fine, but it involves both extra developer load and a different code path between development and production.
Not to mention the extra maintenance and operational requirements that adding another service to the deployment stack entails.
What if there was a simpler idea?
Ratchet is that idea: similar to Vault in its security semantics, but much simpler.
Ratchet works by providing a Docker container that mounts a directory and creates a .env in that directory. But instead of being a regular file, that .env is backed by a FIFO buffer (a named pipe).
When the application opens the FIFO to read its environment as normal, Ratchet reaches out to an API, fetches the standard credentials file, and writes it to the FIFO, then ratchets around to wait for any subsequent read.
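Ratchet’s own source isn’t shown in this post, so the following is a minimal Python sketch of one turn of that loop, with an in-process reader thread standing in for the application and a stub in place of the HTTPS call; `serve_once` and the paths are illustrative names, not Ratchet’s actual API.

```python
import os
import tempfile
import threading

def serve_once(fifo_path, fetch_credentials):
    """One turn of the ratchet: block until a reader opens the FIFO,
    then fetch fresh credentials and write them through the pipe.
    A real daemon would call this in a loop and log each request."""
    # Opening a FIFO for writing blocks until a reader appears,
    # which is what makes every read visible to the daemon.
    with open(fifo_path, "w") as fifo:
        fifo.write(fetch_credentials())

# Demo with a stand-in for the API call (a real deployment would
# fetch the credentials file from the Lambda endpoint over HTTPS).
fifo_path = os.path.join(tempfile.mkdtemp(), ".env")
os.mkfifo(fifo_path)

daemon = threading.Thread(
    target=serve_once,
    args=(fifo_path, lambda: "DB_PASSWORD=hunter2\n"),
)
daemon.start()

# The application side simply reads its ".env" as if it were a file.
with open(fifo_path) as env_file:
    contents = env_file.read()
daemon.join()
print(contents)  # → DB_PASSWORD=hunter2
```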
The API backing Ratchet is nothing more than a simple AWS Lambda (or equivalent Azure serverless function), with access control handled by standard IAM roles or API Gateway access keys. That function can transparently use AWS KMS or Azure Key Vault to decrypt credentials dynamically, sending them over HTTPS to the requesting Ratchet daemon.1
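A handler along those lines might look like the sketch below. The `ENC_` prefix convention and every name here are assumptions for illustration, not Ratchet’s actual protocol; the decrypt step is injected so the demo can run locally with a stub instead of a live KMS call.

```python
import base64
import os

def build_env_body(environ, decrypt):
    """Turn ENC_-prefixed, base64-encoded ciphertext variables into a
    plaintext KEY=VALUE credentials body. `decrypt` is injected so the
    handler can use KMS in production and a stub in local tests."""
    lines = []
    for name in sorted(environ):
        if name.startswith("ENC_"):
            ciphertext = base64.b64decode(environ[name])
            lines.append(f"{name[4:]}={decrypt(ciphertext)}")
    return "\n".join(lines) + "\n"

def handler(event, context):
    # In the deployed function, decrypt each blob with AWS KMS.
    import boto3  # available in the Lambda runtime
    kms = boto3.client("kms")
    decrypt = lambda blob: kms.decrypt(CiphertextBlob=blob)["Plaintext"].decode()
    # IAM / API Gateway has already authenticated the caller; return
    # the decrypted credentials file over HTTPS.
    return {"statusCode": 200, "body": build_env_body(os.environ, decrypt)}

# Local demo with a stub "decryption" (identity) instead of KMS:
fake_env = {"ENC_DB_PASSWORD": base64.b64encode(b"hunter2").decode()}
body = build_env_body(fake_env, lambda blob: blob.decode())
print(body)  # → DB_PASSWORD=hunter2
```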
By running in this way, Ratchet ensures that credentials are always encrypted at rest (by the Lambda function and cloud key management) and always encrypted in transit to the Ratchet daemon, offering visibility and security semantics similar to tools like Vault.
Crucially, this strategy also allows Ratchet to integrate seamlessly with existing development and deployment workflows, as the same code path is always used: reading a file from disk with the credentials we need. The credentials are never stored on disk on the deployed machines, and any attacker reading from our secrets file triggers an access notification.
This technique also makes rotating secrets easy: we merely update the Lambda function’s environment with new, KMS-secured environment variables and perform a rolling restart of our services.
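Under those assumptions, the rotation could be scripted with boto3 roughly as follows; the function, cluster, and service names are placeholders, and the clients are passed in so the flow can be exercised without an AWS account.

```python
def rotate_secret(aws_lambda, ecs, function_name, cluster, service, new_vars):
    """Rotation flow sketched from the text. `aws_lambda` and `ecs` are
    boto3 clients (boto3.client("lambda"), boto3.client("ecs")); all
    names here are illustrative, not part of Ratchet's actual API."""
    # 1. Replace the KMS-encrypted secrets in the Lambda's environment.
    aws_lambda.update_function_configuration(
        FunctionName=function_name,
        Environment={"Variables": new_vars},
    )
    # 2. Rolling restart, so each container re-opens its FIFO-backed
    #    .env and receives the freshly decrypted credentials.
    ecs.update_service(cluster=cluster, service=service,
                       forceNewDeployment=True)

# Stub demonstration (no AWS account needed): record the calls made.
class _Stub:
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        return lambda **kw: self.calls.append((name, kw))

lam, svc = _Stub(), _Stub()
rotate_secret(lam, svc, "ratchet-secrets", "prod", "web",
              {"ENC_DB_PASSWORD": "<kms-ciphertext>"})
print(lam.calls[0][0], svc.calls[0][0])
```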
Ratchet uses a Docker image because it specifically came out of the difficulty of securely distributing credentials to containers running on ECS hosts. While direct API calls to AWS KMS are always a possibility, that approach requires ongoing updates and developer attention to maintain.
By using the Ratchet Docker image, deployment and startup are handled seamlessly by ECS, ensuring that only a single interface point exists for managing key material.
Ratchet aims to be fully application-agnostic by providing a simple, cross-language interface that’s easy to integrate with. This simplicity and generic approach ensures that any web stack can benefit from secure, reliable secret storage with a minimum of developer time or maintenance.
While Ratchet is currently built for AWS services, Azure offers identical functionality, and Ratchet would run seamlessly on the Azure platform. ↩