Every deployment pipeline needs credentials. Database passwords, API tokens, SSH keys and cloud provider secrets flow through your CI/CD system dozens of times a day. The question is not whether your pipeline handles secrets — it is whether it handles them safely.
For many teams the answer is uncomfortable. A 2025 GitGuardian report found that over 10 million secrets were exposed in public GitHub repositories in a single year. But public repos are only the tip of the iceberg. Private repositories, build logs and environment variable dashboards are just as vulnerable when secrets management is an afterthought.
The most common pattern is also the most dangerous: credentials committed directly into source code or configuration files. Even if the repository is private, every developer with read access can see every secret. And once a credential enters git history, it persists forever unless you rewrite the entire history.
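Catching these before they reach git history is mostly pattern matching. The sketch below is a minimal, illustrative scanner: the regexes and file extensions are examples only, and purpose-built tools such as gitleaks or truffleHog ship far more complete rule sets:

```python
import re
from pathlib import Path

# Illustrative patterns only; real scanners ship hundreds of rules
SECRET_PATTERNS = [
    re.compile(r"""(?i)(password|passwd|secret|api_key|token)\s*[=:]\s*['"][^'"]{6,}['"]"""),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan_text(text: str) -> list[str]:
    """Return the lines of `text` that look like hardcoded secrets."""
    return [
        line for line in text.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]

def scan_repo(root: str = ".") -> dict[str, list[str]]:
    """Scan config-like files under `root` for secret-shaped lines."""
    findings = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".py", ".js", ".yml", ".yaml", ".env", ".cfg", ".ini"}:
            hits = scan_text(path.read_text(errors="ignore"))
            if hits:
                findings[str(path)] = hits
    return findings
```

Run it in CI as a pre-merge gate so a flagged line fails the build before the secret lands in history.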
Here are the most common anti-patterns:
- `DB_PASSWORD = "s3cret123"` hardcoded in a config file checked into git
- Secrets printed to build logs by a stray `echo` or `printenv`
- Secrets baked into image layers, recoverable later with `docker history`

Many teams graduate from hardcoded secrets to environment variables and consider the problem solved. It is not. Environment variables are an improvement, but they have significant limitations:

- They are inherited by every child process the job spawns
- They leak easily into build logs, crash dumps and error reports
- They leave no audit trail of who accessed which secret, or when
- They are typically long-lived, so a single leak stays exploitable until someone rotates the value
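One of those limitations is easy to demonstrate: every child process a job spawns inherits the full environment, secrets included, so any linter, plugin or npm script your build runs can read your deploy credentials. A minimal illustration:

```python
import os
import subprocess
import sys

# Simulate a CI job that exports a secret as an environment variable
env = {**os.environ, "DB_PASSWORD": "s3cret123"}

# Any tool the job shells out to can read it back out of its own environment
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DB_PASSWORD'])"],
    env=env,
    capture_output=True,
    text=True,
)
print(child.stdout.strip())  # the child process sees the secret
```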
A more robust approach is the Agent Gateway pattern: your CI/CD pipeline requests credentials from a vault at runtime, and a human (or an auto-approve policy) grants access. The credential is delivered just-in-time, used once and never stored on disk.
The flow works like this:

1. The pipeline authenticates to the vault with an agent key and requests a specific credential.
2. A human approver (or a pre-configured auto-approve policy) grants or denies the request.
3. Once approved, the pipeline fetches the credential, which the server marks as consumed.
4. The pipeline uses the credential in memory and discards it when the job ends.
This pattern provides a full audit trail, time-limited access and the ability to revoke an agent key instantly if a pipeline is compromised.
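The steps above can be sketched as a small client. The endpoint paths here are assumptions modelled on a generic request/poll/fetch API rather than any SDK's actual interface, and the HTTP calls are injected as plain callables so the control flow stays visible:

```python
import time

class ApprovalTimeout(Exception):
    """Raised when nobody approves the request in time."""

def fetch_credential(post, get, entry_id, timeout=120, poll_interval=5):
    """Request a credential, wait for approval, then consume it once.

    `post` is a callable (path, json_body) -> dict and `get` is (path) -> dict,
    e.g. thin wrappers around an HTTP library with the agent key attached.
    """
    # 1. Create the access request
    request_id = post("/api/agent/request", {"entry_id": entry_id})["id"]

    # 2. Poll until a human (or policy) approves, or we give up
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get(f"/api/agent/request/{request_id}")["status"]
        if status == "approved":
            # 3. Fetch the credential; the server invalidates it after this read
            return get(f"/api/agent/credential/{request_id}")
        if status == "denied":
            raise PermissionError(f"request {request_id} was denied")
        time.sleep(poll_interval)
    raise ApprovalTimeout(f"request {request_id} not approved within {timeout}s")
```

In a real pipeline, `post` and `get` would attach the agent key header and raise on non-2xx responses; failing loudly on timeout or denial keeps the job from proceeding with an empty credential.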
Here is how a deployment script requests a database credential before running migrations:
```python
import os
import subprocess

from unveilpass import UnveilPass

client = UnveilPass(
    server_url="https://unveilpass.com",
    agent_key=os.environ["UNVEILPASS_AGENT_KEY"]  # never hardcode the agent key
)

# Request the production database credential
cred = client.get_credential(
    entry_id="f47ac10b-58cc-7ce8-a183-0c7d220b4a12",
    timeout=120  # wait up to 2 minutes for approval
)

db_host = "prod-db.internal"
db_user = cred["username"]
db_pass = cred["password"]

# Run migrations. Passing the credential through the child's environment keeps
# it off disk and out of the process command line, where an os.system string
# would expose it to anyone running `ps`.
subprocess.run(
    ["alembic", "upgrade", "head"],
    env={
        **os.environ,
        "DATABASE_URL": f"postgres://{db_user}:{db_pass}@{db_host}/app",
    },
    check=True,
)
```
The same pattern in a Node.js deployment pipeline:
```javascript
const { UnveilPass } = require("@unveilpass/sdk");

const client = new UnveilPass({
  serverUrl: "https://unveilpass.com",
  agentKey: process.env.UNVEILPASS_AGENT_KEY
});

async function deploy() {
  const cred = await client.getCredential(
    "f47ac10b-58cc-7ce8-a183-0c7d220b4a12",
    { timeout: 120000 }
  );

  // Use credential for deployment
  await runMigrations({
    host: "prod-db.internal",
    user: cred.username,
    password: cred.password
  });
}

deploy().catch((err) => {
  console.error("Deployment failed:", err);
  process.exit(1);
});
```
How you store the agent key and invoke the SDK varies by platform:

| Platform | Agent Key Storage | Invocation |
|---|---|---|
| GitHub Actions | Repository secret (UNVEILPASS_AGENT_KEY) | Run SDK in a step before deployment |
| GitLab CI | CI/CD variable (masked, protected) | Script block with SDK call |
| Jenkins | Credentials store (secret text) | Pipeline stage with withCredentials |
| CircleCI | Context or project env var | Run command in job |
In GitHub Actions, for example, a step can fetch the credential and export it (masked) for subsequent steps:

```yaml
- name: Fetch deploy credentials
  env:
    UNVEILPASS_AGENT_KEY: ${{ secrets.UNVEILPASS_AGENT_KEY }}
    DB_ENTRY_ID: ${{ vars.DB_ENTRY_ID }}
  run: |
    pip install unveilpass
    python - <<'EOF'
    import os
    from unveilpass import UnveilPass

    c = UnveilPass('https://unveilpass.com', os.environ['UNVEILPASS_AGENT_KEY'])
    cred = c.get_credential(os.environ['DB_ENTRY_ID'])

    # Mask the password in the Actions log, then export for subsequent steps
    print(f'::add-mask::{cred["password"]}')
    with open(os.environ['GITHUB_ENV'], 'a') as env_file:
        env_file.write(f'DB_USER={cred["username"]}\n')
        env_file.write(f'DB_PASS={cred["password"]}\n')
    EOF
```

Passing the secret and entry ID through `env:` rather than interpolating `${{ }}` expressions directly into the script keeps them out of the shell command text.
Regardless of which tool you use, these principles should guide your approach:

- Fetch secrets at runtime rather than baking them into config files or images
- Prefer short-lived, single-use credentials over long-lived static ones
- Log every access so you can answer "who read what, and when?"
- Mask secrets in build output and never echo them
- Scope each agent key to the minimum set of credentials its pipeline needs, and revoke it immediately if the pipeline is compromised
If installing an SDK is not practical (minimal Docker images, shell scripts, legacy systems), the same flow works with plain HTTP requests:
```bash
# Request a credential
REQUEST_ID=$(curl -s -X POST https://unveilpass.com/api/agent/request \
  -H "X-Agent-Key: $AGENT_KEY" \
  -H "Content-Type: application/json" \
  -d '{"entry_id": "f47ac10b-58cc-7ce8-a183-0c7d220b4a12"}' \
  | jq -r '.id')

# Poll every 5 seconds until approved, for up to 2 minutes
STATUS="pending"
for i in $(seq 1 24); do
  STATUS=$(curl -s "https://unveilpass.com/api/agent/request/$REQUEST_ID" \
    -H "X-Agent-Key: $AGENT_KEY" | jq -r '.status')
  [ "$STATUS" = "approved" ] && break
  sleep 5
done

# Fail the job instead of proceeding with an empty credential
if [ "$STATUS" != "approved" ]; then
  echo "Credential request $REQUEST_ID was not approved in time" >&2
  exit 1
fi

# Fetch the credential (one-time use)
CRED=$(curl -s "https://unveilpass.com/api/agent/credential/$REQUEST_ID" \
  -H "X-Agent-Key: $AGENT_KEY")
DB_USER=$(echo "$CRED" | jq -r '.username')
DB_PASS=$(echo "$CRED" | jq -r '.password')
```
Start by auditing your current pipelines. Search your repositories for common secret patterns, for example `grep -r "password\|secret\|api_key\|token" .github/ Jenkinsfile Dockerfile`. You will likely find at least a few credentials that should be moved to a proper secrets manager.
Then adopt a vault-based workflow: store credentials in an encrypted vault, fetch them at runtime via an API and ensure every access is logged. UnveilPass provides this through its Agent Gateway with Python and Node.js SDKs, IP whitelisting and one-time credential consumption — but the principles apply regardless of the tool you choose.
Stop hardcoding credentials in your CI/CD configuration. Fetch secrets at runtime with a full audit trail and one-time consumption.
Get Started Free