
Build-Time Secret Serialization Case Study: When a Secure Vault Fed a Public React Bundle

Published April 2, 2026

This is a composite technical case study based on a real class of deployment failure common to frontend pipelines that mix secret management with browser-delivered assets.

Plain-English Version

The short version is simple.

A company head stored secrets locally, then pushed them into the wrong place.

Trying to do the work himself through AI, despite not being a developer and not having relevant engineering experience, he kept private values in local environment files and fed them directly into a React frontend build. The build process did not keep them private. It baked them into the JavaScript files shipped to the public website.

That turned private credentials into downloadable website content.

The situation was made worse by a build environment that was not properly isolated. Local .env values from a developer machine also made their way into the production artifact, leaking internal paths and dev-only credentials alongside production values.

When the issue was reported, the response focused on patching code and disguising the exposed value rather than removing the exposure path and rotating keys.

Situation

The head of the company wanted a straightforward deployment model for a React application and was using AI to fill the gap in technical execution.

The intended pattern looked responsible on the surface:

  1. store secrets in local environment files
  2. load them during deploy
  3. expose them as environment variables
  4. run the frontend build
  5. publish the static bundle behind CloudFront

The architectural mistake was hidden in step four.

He treated the frontend build as though it were a trusted runtime boundary. It was not.

The Technical Failure

The deployment script passed private values into the frontend build environment immediately before invoking npm run build.

In this specific case, the frontend was using Create React App. CRA treats REACT_APP_* values as build-time inputs, not protected runtime secrets.

Any REACT_APP_* value referenced in the code is inlined as a plain string literal in the generated bundle. Once that build completes, the secret is no longer secret material held by infrastructure. It is static content embedded in a public asset.

In practical terms, the pipeline did the equivalent of this:

# the secret becomes a build-time input that CRA inlines into the bundle
export REACT_APP_INTERNAL_API_KEY="$INTERNAL_API_KEY"
npm run build

That pattern is operationally convenient and architecturally wrong.
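
What that does to the shipped code is easier to see from the frontend side. A minimal sketch, assuming the app referenced the variable in a TypeScript module like this (file, endpoint, and header names are illustrative, not taken from the project):

// src/apiClient.ts (hypothetical module)
// CRA substitutes process.env.REACT_APP_INTERNAL_API_KEY at build time,
// so the compiled chunk ends up containing the key as a plain string literal.
const apiKey = process.env.REACT_APP_INTERNAL_API_KEY ?? "";

export async function callInternalApi(path: string): Promise<Response> {
  return fetch(`https://internal-api.example.com${path}`, {
    headers: { "x-api-key": apiKey },
  });
}

After npm run build, the process.env reference no longer exists in the output; only the literal key string remains in the minified chunk.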

Why This Became a Public Incident

Once the bundle was built, the exposure was no longer theoretical.

The secrets could appear in:

  • minified JavaScript chunks
  • string tables visible through devtools
  • source maps if enabled or accidentally deployed
  • CDN caches and downstream mirrors

From there, CloudFront multiplied the blast radius.

The moment those assets were published, anyone who downloaded the frontend had a copy of the exposed material. That is why frontend secret leaks must be treated as public disclosure, not as an internal configuration mistake.
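
That exposure is also trivial to confirm from the outside. A rough sketch of scanning a downloaded bundle for secret-shaped strings, assuming a standard CRA build/ directory and generic key patterns (neither is taken from the actual incident):

// scan-bundle.ts (hypothetical): walk the build output and flag secret-shaped strings.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Generic example patterns; a real scan would use the formats of the providers in use.
const patterns = [/AKIA[0-9A-Z]{16}/, /sk_(live|test)_[A-Za-z0-9]{16,}/, /REACT_APP_[A-Z_]*KEY/];

function scan(dir: string): void {
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const full = join(dir, entry.name);
    if (entry.isDirectory()) {
      scan(full);
    } else if (/\.(js|map|html)$/.test(entry.name)) {
      const text = readFileSync(full, "utf8");
      for (const pattern of patterns) {
        const hit = text.match(pattern);
        if (hit) console.log(`${full}: ${hit[0]}`);
      }
    }
  }
}

scan("build"); // CRA's default output directory

Anyone with the public URL can run the same kind of scan against the deployed assets.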

The Local Leak

The build process had a second failure mode.

It was not cleanly isolated from the developer environment, so local .env values were able to influence the production artifact.

That led to production bundles containing information such as:

  • internal filesystem paths
  • development endpoints
  • production credentials
  • environment names and topology clues

This matters because a production build should be reproducible from controlled inputs only. If a workstation's local state can change what ships to the public internet, the release process is not auditable.
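
One way to make that property enforceable is a guard that runs immediately before the build and refuses to proceed when uncontrolled inputs are present. A minimal sketch, assuming a Node-based CI step (the allow-listed names are placeholders, not the project's real configuration):

// guard-env.ts (hypothetical): run immediately before `npm run build`.
import { existsSync } from "node:fs";

// CRA also reads .env files from the working directory, so a clean build
// should not have any local ones lying around.
const localEnvFiles = [".env", ".env.local", ".env.production.local"].filter((f) => existsSync(f));
if (localEnvFiles.length > 0) {
  console.error(`Refusing to build with local env files present: ${localEnvFiles.join(", ")}`);
  process.exit(1);
}

// Only explicitly public REACT_APP_* variables may be set in the environment.
const allowedPublicVars = new Set(["REACT_APP_API_BASE_URL", "REACT_APP_RELEASE"]);
const unexpected = Object.keys(process.env).filter(
  (name) => name.startsWith("REACT_APP_") && !allowedPublicVars.has(name)
);
if (unexpected.length > 0) {
  console.error(`Refusing to build with non-public variables: ${unexpected.join(", ")}`);
  process.exit(1);
}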

Why the Architecture Failed

The problem was not where the values were stored first. The problem was that a non-developer treated a frontend build like a safe place to handle secrets.

Locally stored environment values were pushed straight into a public asset pipeline.

That is the architectural lesson:

  • local environment storage does not create a trust boundary
  • a browser bundle is not a secret-holding runtime
  • once a secret enters a public frontend build, it should be treated as disclosed

The browser is a public runtime. Anything compiled into a frontend bundle should be assumed public by definition.
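
In code terms, the only configuration module that belongs in a bundle is one where every value is already safe to publish. A small sketch of that split (the values are placeholders):

// config.ts (hypothetical): everything exported here ships to the browser,
// so everything here is public by definition.
export const publicConfig = {
  apiBaseUrl: "https://api.example.com", // routing information, not a credential
  release: process.env.REACT_APP_RELEASE ?? "dev", // non-secret build metadata
};
// Secrets never appear in this file; they stay on the server side.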

Automation Without Ground Truth

This incident is also a case study in what happens when automation outruns architectural understanding.

An autonomous agent can correctly:

  • load environment variables from local files
  • wire environment values into scripts and builds
  • export variables
  • build the app
  • deploy assets to a CDN

Every one of those steps can be mechanically correct while the overall system remains insecure.

That is what happened here. Procedure was automated successfully. Trust boundaries were not understood.

The core governance problem was not a weak engineering team. It was the absence of actual engineering judgment while a non-developer tried to orchestrate architecture through AI prompts and partial fixes.

The Remediation Failure

After the leak was reported, the response drifted into what can best be described as vibe-fixing.

This was not a case where no competent help was available. The subject matter expert who surfaced the issue also knew how to fix it and offered help, but that help was declined.

Instead of starting with containment, key rotation, and removal of build-time secret injection, the response created a new unauthenticated endpoint that accepted a company ID, returned an API key, and then stored that key in the browser. The project did not actually need that key path, so the "fix" amounted to an unnecessary credential exfiltration service.
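
In rough shape, the follow-on "fix" looked like the sketch below. The framework, route, and helper names are reconstructions for illustration, not the project's actual code; the point is the pattern.

// Reconstructed anti-pattern, shown only to illustrate the flaw — do not ship this.
import express from "express";

declare function lookupApiKeyForCompany(companyId: string): string; // hypothetical lookup

const app = express();

// Unauthenticated route: anyone who knows (or guesses) a company ID gets the key.
app.get("/api/key", (req, res) => {
  res.json({ apiKey: lookupApiKeyForCompany(String(req.query.companyId ?? "")) });
});

app.listen(3000);
// The client then stored the returned key in the browser, e.g. in localStorage.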

That failed for three reasons.

1. It created a second exposure path

The original leak was caused by sending a secret into a frontend build. The follow-on "fix" did not remove that design error. It added another one by creating a public-facing mechanism to hand credentials to the browser on request.

Once a secret can be requested from an unauthenticated endpoint and stored client-side, the system has moved from accidental disclosure to deliberate distribution.

2. It targeted the wrong path

The API key being returned was not even necessary for the project's real request flow, which was handled through a proxy. The new credential-delivery mechanism existed to serve behavior tied to effectively dead or non-primary code.

Response time was therefore spent building a key-returning endpoint for a path the application did not need, while the architectural cause of the incident remained in place.

3. It confused delivery with security

Returning a key from an endpoint based on a company identifier may have looked like a more dynamic or controlled approach, but it did not create security. It simply automated the delivery of a sensitive value to an untrusted client.

That is why the response became security theater. The mechanics of key delivery were polished while the endpoint remained unauthenticated and the credential remained unnecessary.

Deterministic Engineering vs. Vibe-Based Engineering

The subject matter expert (ReddingCTO) recommended the correct model: keep secrets behind a backend or proxy, and expose only public configuration to the browser. A sketch of that shape follows the list below.

That model is deterministic because it starts from explicit rules:

  • secrets do not belong in client bundles
  • production builds run in clean isolated environments
  • public artifacts are scanned before release
  • exposed credentials are rotated on report, not debated
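
A minimal sketch of that shape, assuming a small Node backend sits between the browser and the upstream service (route, host, and variable names are illustrative):

// proxy.ts (hypothetical): the secret lives only in the server process.
import express from "express";

const app = express();
const internalApiKey = process.env.INTERNAL_API_KEY ?? ""; // injected at runtime, server-side only

app.get("/api/reports/:id", async (req, res) => {
  // The backend attaches the credential on the way out; the browser only sees the result.
  const upstream = await fetch(`https://internal.example.com/reports/${req.params.id}`, {
    headers: { "x-api-key": internalApiKey },
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3001);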

The failed model worked backward from convenience:

  • load the local secret
  • inject the secret
  • if exposed, create another path to hand it to the browser
  • ship the bundle
  • patch whatever looks suspicious afterward

That is not reliable engineering. It is improvisation with better tooling.

The 24-Hour Exposure Window

One of the most serious parts of this kind of failure is not just the leak itself. It is the period after credible notification when the exposed secret remains active.

Leaving a publicly disclosed credential live for roughly 24 hours after notice materially increases risk because discovery is cheap:

  • download the bundle
  • grep for known key patterns
  • inspect strings in devtools
  • test access against APIs and third-party services

The right question is not whether abuse can be proven immediately.

The right question is whether the organization knowingly left a compromised key active for a full day after notice. If the answer is yes, the response failed.

In this instance, that risk was compounded by the fact that competent remediation help had been offered and declined.

What a Correct Response Looks Like

The minimum defensible sequence is:

  1. remove or invalidate the public artifact
  2. rotate every credential that may have been serialized
  3. audit downstream service logs for suspicious use
  4. remove any endpoint that returns credentials to the browser
  5. remove build-time secret injection from the frontend pipeline
  6. rebuild from a clean environment

Anything less treats a known public leak as a hypothetical problem.

Takeaway

This incident is a warning for founders, operators, technical leads, and CTOs.

Automation did not create the architectural flaw, but it accelerated it. Locally stored secrets fed an insecure public asset pipeline. A non-developer using AI as a substitute for engineering practice then optimized for visible activity instead of actual containment.

The result was high-speed liability generation.

The durable lesson is blunt: automation without architectural understanding is not leverage. It is exposure at scale.
