Before we delve into the core issues, let’s briefly review the landscape of non-human identity (NHI) federation among the big three cloud providers. We’ve previously covered how external identity federation using ID tokens works in AWS, GCP, and Azure in an earlier blog post.
EC2 instances can use instance profiles to obtain temporary credentials with the permissions of an IAM role. These credentials are fetched via the AWS Instance Metadata Service (IMDS) and can be used by workloads running on the instance to request a signed token from AWS STS.
This token can then be exchanged for credentials in the other clouds, allowing workloads running on AWS EC2 instances to authenticate to GCP or Azure using the IAM role of the instance.
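As a concrete illustration, the AWS-to-GCP leg of this exchange goes through Google's STS token-exchange endpoint. The sketch below only builds the request body offline; the workload identity pool provider path is a hypothetical placeholder, and the subject token would be the serialized, SigV4-signed `GetCallerIdentity` request that Google verifies against AWS.

```python
from urllib.parse import urlencode

# Google's public token-exchange endpoint for workload identity federation.
GCP_STS_ENDPOINT = "https://sts.googleapis.com/v1/token"

def build_gcp_exchange_payload(subject_token: str, audience: str) -> dict:
    """Form body for exchanging an AWS-signed request for a GCP access token."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "audience": audience,
        "scope": "https://www.googleapis.com/auth/cloud-platform",
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # For AWS, the subject token is a serialized signed GetCallerIdentity request.
        "subject_token_type": "urn:ietf:params:aws:token-type:aws4_request",
        "subject_token": subject_token,
    }

# Hypothetical workload identity pool provider path.
audience = ("//iam.googleapis.com/projects/123456/locations/global/"
            "workloadIdentityPools/my-pool/providers/my-aws-provider")
payload = build_gcp_exchange_payload("<signed-GetCallerIdentity-request>", audience)
body = urlencode(payload)  # POSTed as application/x-www-form-urlencoded
```

In practice the response contains a GCP access token, which can then be used against GCP APIs, optionally after impersonating a service account.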
Azure VMs can be assigned managed identities. Workloads on such VMs can retrieve access tokens directly from the Azure Instance Metadata Service without requiring secrets. These tokens are backed by the VM’s managed identity and scoped to the target resource being requested.
This token can then be exchanged, allowing workloads on Azure VMs to authenticate to GCP or AWS using the managed identity of the VM.
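To make the retrieval step concrete, here is a minimal sketch of the Azure IMDS request a workload would issue to obtain such a token. Nothing is sent here; we only construct the URL and headers, and the resource value is just an example.

```python
from urllib.parse import urlencode

# Azure Instance Metadata Service endpoint for managed identity tokens.
AZURE_IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_imds_token_request(resource: str) -> tuple[str, dict]:
    """URL and headers for fetching a managed identity access token from IMDS."""
    params = {"api-version": "2018-02-01", "resource": resource}
    # The Metadata header is required; IMDS rejects requests without it,
    # which blocks forwarded (SSRF-style) requests.
    return f"{AZURE_IMDS_TOKEN_URL}?{urlencode(params)}", {"Metadata": "true"}

url, headers = build_imds_token_request("https://management.azure.com/")
```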
GCP VMs can obtain identity tokens from the GCP Instance Metadata Service (IMDS). These tokens represent the VM’s service account and can be retrieved by any workload running on the VM without requiring secrets.
This token can then be exchanged, allowing workloads running on GCP VM instances to authenticate to Azure or AWS using the VM’s identity.
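The GCP side can be sketched the same way: the metadata server issues an OIDC identity token for a caller-chosen audience. The snippet below only builds the request; the audience is a placeholder.

```python
from urllib.parse import urlencode

# GCP metadata server path for the default service account's identity token.
GCP_IDENTITY_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                    "instance/service-accounts/default/identity")

def build_identity_token_request(audience: str) -> tuple[str, dict]:
    """URL and headers for fetching an OIDC identity token for a given audience."""
    params = {"audience": audience, "format": "full"}
    # The Metadata-Flavor header is mandatory for all metadata server requests.
    return f"{GCP_IDENTITY_URL}?{urlencode(params)}", {"Metadata-Flavor": "Google"}

url, headers = build_identity_token_request("https://example.com/my-audience")
```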
While these integrations enable multi-cloud identity federation, they have important limitations. Even with cloud federation mechanisms in place, something still has to retrieve the ID token in the first place and present it to the target cloud’s token-exchange endpoint.
When workloads run within a cloud provider’s infrastructure, this burden is significantly reduced thanks to built-in identity mechanisms such as instance profiles, managed identities, and VM service accounts.
However, as discussed above, these cloud-native options come with important caveats, particularly regarding shared identity at the instance level. As a result, they may not be desirable in scenarios requiring strict workload-level isolation.
For hybrid environments, on-prem infrastructure, or more fine-grained identity boundaries, you’re still often left with the traditional complexity of securely provisioning and rotating credentials to the right process and keeping them out of reach from others.
Solutions like HashiCorp Vault or AWS Secrets Manager help, but they introduce overhead of their own: they require setup, access control, and encryption configuration, and they often come with latency and availability concerns.
In the end, secure credential distribution and isolation remain a hard problem, even when federation mechanisms are available.
Imagine a world where you don’t have to think about storing or rotating secrets, where every workload has its own cryptographic identity, and where credential issuance is automatic and secure.
This vision means per-workload cryptographic identities, automatic credential issuance, and no secrets to store or rotate.
This is exactly what we’re building at Riptides: a solution rooted in the Linux kernel and built on SPIFFE that works seamlessly across on-prem, hybrid, and cloud-native environments.
Riptides acts as an external identity provider (IDP) to cloud platforms like AWS, GCP, and Azure, solving the credential delivery challenge at the operating system level.
Here’s how it works:
sysfs delivery — Credentials are exposed just-in-time via sysfs, scoped so only the requesting workload can read them at the exact moment it calls a cloud API. Workloads can consume credentials from the sysfs path, or completely transparently when using on-the-wire injection.

The diagram below shows the high-level flow of the sysfs-based solution. In a follow-up post, we’ll cover the on-the-wire credential replacement approach.
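From the workload’s point of view, consuming the sysfs delivery is just a file read. The sketch below is illustrative only: we stand in for the kernel-provided /sys/kernel/riptides/credentials/ directory with a temp dir, and the file name mirrors the one used in the demo later in this post.

```python
import json
import tempfile
from pathlib import Path

def read_workload_credential(base: Path, workload_uuid: str) -> dict:
    """Read the JSON credential exposed for one workload's UUID-scoped directory."""
    cred_file = base / workload_uuid / "gcp_credentials.json"
    return json.loads(cred_file.read_text())

# Simulated stand-in for /sys/kernel/riptides/credentials/.
base = Path(tempfile.mkdtemp())
workload_uuid = "1b4e28ba-2fa1-11d2-883f-0016d3cca427"  # placeholder UUID
(base / workload_uuid).mkdir()
(base / workload_uuid / "gcp_credentials.json").write_text(
    json.dumps({"type": "external_account"})
)

cred = read_workload_credential(base, workload_uuid)
```

In the real system the kernel module, not the filesystem owner, decides whether the calling process may read the path at all.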

sysfs with the gcloud CLI

In the following recording, you’ll see a simple demo showing how credentials prepared by Riptides in sysfs can be used with the gcloud CLI.

Initial state: no credentials available
First, you can see that the gcp_credentials.json file is not accessible.
This is because the credential source configuration and policies, which Riptides uses to generate credentials for the gcloud CLI, have not yet been created.
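For context, a credential file like gcp_credentials.json used for workload identity federation is typically a GCP “external account” configuration. The sketch below shows one plausible shape of such a file; the audience reuses the oidcProviderId from the demo’s CredentialSource, while every other value is a placeholder.

```python
import json

# One plausible shape of a workload identity federation credential file.
# The audience matches the demo's oidcProviderId; other values are placeholders.
credential_config = {
    "type": "external_account",
    "audience": ("//iam.googleapis.com/projects/432279690143/locations/global/"
                 "workloadIdentityPools/demo/providers/demo2"),
    "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
    "token_url": "https://sts.googleapis.com/v1/token",
    # Where client libraries read the subject token from (hypothetical path).
    "credential_source": {"file": "/path/to/subject-token"},
}

doc = json.dumps(credential_config, indent=2)
```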
Grant workload access to GCP resources
Next, we run the step labeled # Allow workload access to GCP resources.
In this step, we create the required credential source configuration and policies. These are defined as Kubernetes custom resources, which are consumed by the Riptides Control Plane:
apiVersion: core.riptides.io/v1alpha1
kind: CredentialSource
metadata:
  name: gcp-wif
spec:
  gcp:
    serviceAccount: demo-56@deft-diode-457816-s2.iam.gserviceaccount.com
    oidcProviderId: //iam.googleapis.com/projects/432279690143/locations/global/workloadIdentityPools/demo/providers/demo2
---
apiVersion: core.riptides.io/v1alpha1
kind: WorkloadCredential
metadata:
  name: gcp-access-token
spec:
  workloadID: staging/demo/gcloud-cli
  credentialSource: gcp-wif
---
apiVersion: core.riptides.io/v1alpha1
kind: WorkloadIdentity
metadata:
  name: gcloud-cli
spec:
  scope:
    agentGroup:
      id: riptides/agentGroup/demo
  selectors:
    - process:binary:path: /snap/google-cloud-cli/364/usr/bin/python3.10
      process:gid: 1000
      process:uid: 501
  workloadID: staging/demo/gcloud-cli
We’ll cover the full configuration process in a separate post to avoid overloading this one with too much detail. Briefly:
Once configured, Riptides securely obtains the credentials from the cloud provider and delivers them to sysfs on the relevant node.
Access to credentials restricted to gcloud workload
In the step labeled # Access is restricted so that only the designated workload can read its assigned credentials, we verify that no other process can access these credentials. Only the process matching the configured WorkloadIdentity attributes can read them.
Successful authentication
After that, the gcloud auth login command succeeds because the credentials are now available in sysfs.
Note the UUID after /sys/kernel/riptides/credentials/ in the path; this is derived from the workload’s SPIFFE ID and scopes the credentials to that workload.
This ensures that no other workload can access this path. The Linux kernel module verifies workload attributes at runtime to determine the workload’s identity. Only if the UUID derived from this identity matches the UUID in the path is the process allowed to read its contents.
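The exact UUID derivation is internal to Riptides, but a name-based UUID (version 5) is one deterministic way to map a SPIFFE ID to such a path. The sketch below assumes that scheme and a made-up SPIFFE ID purely for illustration.

```python
import uuid

# Hypothetical SPIFFE ID for the demo workload; the real trust domain
# and path are internal to the Riptides deployment.
spiffe_id = "spiffe://example.org/staging/demo/gcloud-cli"

# A name-based UUIDv5 gives a stable, collision-resistant mapping
# from the SPIFFE ID to a per-workload directory name.
workload_uuid = uuid.uuid5(uuid.NAMESPACE_URL, spiffe_id)
cred_path = f"/sys/kernel/riptides/credentials/{workload_uuid}/gcp_credentials.json"
```

Because the mapping is deterministic, the kernel module can recompute the UUID from the verified workload identity at read time and compare it against the path being accessed.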
Listing VM instances
Once logged in, we successfully list VM instances using the gcloud CLI.
Revoking access
Finally, in the step labeled # Remove GCP access and try again, we delete the credential source configuration and policies for the gcloud workload.
As expected, listing VM instances now fails because the credential file has been removed from sysfs.
In short: you never have to handle GCP credentials manually; Riptides provisions, delivers, scopes, and refreshes them securely and automatically, exactly when needed.
We believe every workload deserves a verifiable, unique, and trusted identity, and SPIFFE provides exactly that.
In traditional models, credentials are distributed to a host, and any process on that host can use them, regardless of what it is. Riptides changes that: credentials are bound to an individual workload’s verified identity, not to the host.
This enforces strict credential isolation, reducing the risk of lateral movement, privilege escalation, and credential leakage.
Riptides brings zero-secret, per-workload identity to any Linux system, with no application code changes required. It eliminates manual credential distribution and ensures that each workload receives the right credentials, at the right time, in the most secure way possible.
Modern cloud-native federation is a step forward — but it still places a hidden burden on developers and operators. Cross-cloud federation helps, but it’s not workload-aware, not granular, and certainly not zero-trust.
Riptides provides a truly secure, zero-touch identity solution.
If you’re building secure cloud-native systems at scale, it’s time to rethink how you manage non-human identity. Let Riptides handle it for you securely, automatically, and correctly.
In our next post, we’ll dive deeper into the on-the-wire credential injection method and demonstrate how Riptides works in practice with real applications and cloud providers, all without secrets. Stay tuned.