BYO-Login in Python: Building a Credential Proxy When the Third Party Has No OAuth

When the upstream system you depend on has no OAuth, no API keys, and no developer surface, your only path is to log in as the user. Here is how to do that without becoming a security hazard.

I was scoping a product around an Indian government portal a few weeks ago. The public pages were fine: scrapeable, well-formed JSON behind an Angular SPA. The page that mattered, the one that owned 80% of the search volume, sat behind a registered account with email-OTP. No OAuth. No API keys. No “developer portal.” Just a login form and a captive audience of users who already had accounts.

There were three roads. Register a pool of fake accounts and rotate (terms-of-service violating, fragile, ugly). Skip the feature and lose 80% of the addressable market. Or do the thing nobody writes about: ask the user for their portal credentials and log in on their behalf. Plaid did this for years before banks built open APIs. Mint did it. Yodlee, Tiller, every PFM tool in the pre-PSD2 era. The pattern has a name now: BYO-login.

This post is the implementation. Real Python, real FastAPI, real KMS, real WebSocket OTP exchange. The bits most tutorials skip are the ones that decide whether you ship a useful product or a security incident.

What BYO-Login Actually Is

The shorthand “BYO-login” gets used loosely. To be precise: BYO-login is a pattern where your application stores a user’s credentials for a third-party system and acts as that user against the third party on demand. It is not OAuth. OAuth is the opposite: the user authenticates with the third party, and the third party hands you a scoped token. BYO-login is what you reach for when the third party offers no such mechanism.

The defining property is that you, the application, end up holding the keys. Not a token, not a session ID, not a derived secret. The actual password. You need it because the only programmatic interface the third party exposes is “fill the login form.” Tomorrow’s session expires. Tomorrow’s OTP demands a fresh authentication. You have to be able to do it again without bothering the user every single time.

This is uncomfortable. It should be uncomfortable. There is no version of this pattern where you do not become a high-value target the moment your database leaks. The right question is not whether to be uncomfortable, but whether your discomfort produces good engineering or just hand-wringing.

When You Should Reach for It

Three preconditions. Miss any of them and the pattern is wrong.

The third party has no OAuth, no service account, no API key. You have looked, hard. You have read their API docs (they have none) and you have asked their support team (they referred you to the API docs). The “API gateway” they proudly announced last year is gated by a Power-of-Attorney + audited balance sheet + India-located static IP whitelist + annual VAPT. For a one-person product, that is “no API.”

The user already has an account on the third party. You are not registering accounts on their behalf. You are not impersonating them. You are doing what they would otherwise do manually: type their password, wait for an OTP, click a button, copy a result. The user understands this and consents to it explicitly.

The use case is concrete enough to justify the security overhead. “Save users 90 seconds on a query they do once a quarter” probably does not justify it. “Save a customs broker 4 hours per shipment, 30 shipments per week” does. The investment is not small, and the operational tax is not zero.

If those three are true, the pattern is on the table. Now the work begins.

The Threat Model Nobody Writes About

Three actors care about your design choices.

The user. Your customer. They handed you a password to a system that may also gate their bank, their tax filings, their import licence. Your job is to ensure that even if your own systems are compromised, the blast radius for them is bounded. Concretely: an attacker who walks off with your database should not walk off with their plaintext credentials.

The third party. Not an adversary, but not aligned. Their fraud detection has no idea your traffic is legitimate-on-behalf-of. They will see logins from your data centre IPs, with browser fingerprints that look nothing like the user’s normal sessions, and they will sometimes lock the account. Their legal team has not blessed this arrangement. Their terms of service may forbid it.

The attacker. Wants the credentials. Plural. Yours is a high-value target precisely because you have many users’ credentials in one place. You will be probed.

The interesting design pressure is that the user trusts you in two distinct ways and you have to honour both: they trust that you will not look at their credentials, and they trust that you will use their credentials to do work for them. Those two requirements pull in opposite directions, because to use the password you must, at some moment, hold it in plaintext.

Four Hard Problems

Strip away the framework choices and the database vendors and four problems remain. Storage. OTP exchange. Session reuse. Failure detection. The rest of the post is one section per problem, with code, plus a wiring section showing how they fit together in FastAPI.

Problem 1: Storage Without a Single Point of Compromise

The naive instinct is to encrypt the credentials with a key in your .env file. This is approximately as useful as not encrypting them. An attacker who reaches your database almost always reaches your application server, and your .env file goes with it. One key, all users compromised, no rotation story.

The right pattern is envelope encryption with a managed KMS. Two layers of keys. The outer key (the KEK, key-encryption-key) lives in a hardware-backed KMS like Google Cloud KMS or AWS KMS. You never see it; you can only ask the KMS to encrypt and decrypt small payloads on your behalf. The inner key (the DEK, data-encryption-key) is generated fresh per credential, used to encrypt the actual password, then itself wrapped with the KEK and stored alongside the ciphertext. The plaintext DEK never persists.

# app/auth/crypto.py
"""Envelope encryption for user credentials, backed by Google Cloud KMS."""

import os
from dataclasses import dataclass

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from google.cloud import kms

KMS_KEY_NAME = os.environ["KMS_KEY_NAME"]
# Format: projects/PROJ/locations/LOC/keyRings/RING/cryptoKeys/KEY


@dataclass(frozen=True)
class EncryptedBlob:
    wrapped_dek: bytes  # DEK encrypted by KMS KEK
    nonce: bytes  # 12 bytes, fresh per encryption
    ciphertext: bytes  # AES-GCM ciphertext + auth tag

    def serialize(self) -> dict:
        import base64

        return {
            "wrapped_dek": base64.b64encode(self.wrapped_dek).decode(),
            "nonce": base64.b64encode(self.nonce).decode(),
            "ciphertext": base64.b64encode(self.ciphertext).decode(),
        }

    @classmethod
    def deserialize(cls, data: dict) -> "EncryptedBlob":
        import base64

        return cls(
            wrapped_dek=base64.b64decode(data["wrapped_dek"]),
            nonce=base64.b64decode(data["nonce"]),
            ciphertext=base64.b64decode(data["ciphertext"]),
        )


def encrypt(plaintext: str, *, owner_id: str) -> EncryptedBlob:
    """Encrypt plaintext for a specific owner.

    The owner_id is bound into AES-GCM's additional authenticated data.
    Swapping a blob between two users will fail decryption.
    """
    dek = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    cipher = AESGCM(dek)
    ciphertext = cipher.encrypt(
        nonce, plaintext.encode("utf-8"), owner_id.encode("utf-8")
    )

    client = kms.KeyManagementServiceClient()
    wrap = client.encrypt(request={"name": KMS_KEY_NAME, "plaintext": dek})

    # Drop our reference to the DEK as soon as it is wrapped. Python cannot
    # securely zero memory; this only lets the bytes object be freed early.
    del dek

    return EncryptedBlob(
        wrapped_dek=wrap.ciphertext,
        nonce=nonce,
        ciphertext=ciphertext,
    )


def decrypt(blob: EncryptedBlob, *, owner_id: str) -> str:
    client = kms.KeyManagementServiceClient()
    unwrap = client.decrypt(
        request={"name": KMS_KEY_NAME, "ciphertext": blob.wrapped_dek}
    )
    cipher = AESGCM(unwrap.plaintext)
    plaintext = cipher.decrypt(blob.nonce, blob.ciphertext, owner_id.encode("utf-8"))
    return plaintext.decode("utf-8")

A few details that matter and are easy to skip.

The owner_id goes into the AES-GCM additional authenticated data (AAD). This means a stolen blob attached to user A cannot be silently re-attached to user B. If you forget this and your database lets an attacker swap rows, they bypass your authorisation logic by laundering blobs through user accounts they control.
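The AAD binding is easy to demonstrate locally, without KMS, using the same AESGCM primitive the module above relies on. A blob encrypted with user A's id as additional authenticated data fails the GCM tag check when decrypted under user B's id:

```python
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

dek = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
cipher = AESGCM(dek)
blob = cipher.encrypt(nonce, b"hunter2", b"user-a")  # AAD = owner id

# Correct owner: round-trips.
assert cipher.decrypt(nonce, blob, b"user-a") == b"hunter2"

# Wrong owner: the tag check fails before any plaintext is released.
swap_detected = False
try:
    cipher.decrypt(nonce, blob, b"user-b")
except InvalidTag:
    swap_detected = True
assert swap_detected
```

The failure mode is an exception, not garbled plaintext, so a laundered blob surfaces as a hard decryption error in your logs rather than as silent data leakage.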

The DEK is never persisted in plaintext. KMS unwraps it on every decrypt. This costs you a network round trip per credential use, around 20 milliseconds in the same region. That cost is the price of being able to revoke a single key version in KMS and lose access to every blob encrypted under it. It is also the cost of getting an audit trail in KMS of every decryption, with caller identity, that you cannot tamper with from your application.

The KMS IAM grant matters. Your application’s service account needs cloudkms.cryptoKeyEncrypterDecrypter on the specific key, scoped tighter than the rest of the service. If your admin tooling runs under the same identity as the request handlers, you have a problem. Split them. Admins can read encrypted blobs but cannot decrypt them.

Problem 2: OTP Exchange in Real Time

The third party sends an OTP to the user’s phone or email. Your worker is in the middle of a login flow, frozen at the OTP prompt, with maybe 60 to 180 seconds of validity. The user is staring at your app’s interface waiting for the OTP field to appear. You need a real-time bridge from the user’s browser into your worker process.

WebSockets are the right tool. The user opens a WebSocket as part of starting the connect flow. The worker, when it reaches the OTP step, publishes an “OTP_NEEDED” event on a per-user channel. The browser shows an input. The user types. The browser sends the OTP back over the WebSocket. The worker, which has been awaiting on a queue, receives it and continues the login.

For a single-process FastAPI app the channel can be an asyncio.Queue keyed by user_id. For multi-process you need Redis pub/sub or similar. The pattern is the same.

# app/auth/otp_channel.py
"""Per-user OTP exchange channel.

Single-process implementation. For multi-process replace the dict with
Redis pub/sub on a per-user-id channel.
"""

import asyncio
from dataclasses import dataclass


@dataclass
class OTPRequest:
    """A login flow is paused waiting for an OTP from the user."""

    user_id: str
    delivery_target: str  # e.g. "registered mobile ending 4127"
    expires_in_seconds: int


class OTPChannel:
    def __init__(self) -> None:
        self._requests: dict[str, asyncio.Queue[OTPRequest]] = {}
        self._submissions: dict[str, asyncio.Queue[str]] = {}

    def _ensure(self, user_id: str) -> None:
        self._requests.setdefault(user_id, asyncio.Queue(maxsize=1))
        self._submissions.setdefault(user_id, asyncio.Queue(maxsize=1))

    async def request_from_user(self, req: OTPRequest) -> str:
        """Called by the worker. Publishes a request, awaits the OTP."""
        self._ensure(req.user_id)
        await self._requests[req.user_id].put(req)
        try:
            otp = await asyncio.wait_for(
                self._submissions[req.user_id].get(),
                timeout=req.expires_in_seconds,
            )
        except asyncio.TimeoutError:
            raise OTPTimeout(req.user_id) from None
        return otp

    async def next_request(self, user_id: str) -> OTPRequest:
        """Called by the WebSocket handler. Awaits a pending OTP request."""
        self._ensure(user_id)
        return await self._requests[user_id].get()

    async def submit(self, user_id: str, otp: str) -> None:
        """Called by the WebSocket handler when the user types an OTP."""
        self._ensure(user_id)
        await self._submissions[user_id].put(otp)


class OTPTimeout(Exception):
    pass


# Module-level singleton. In a real app, inject via FastAPI's Depends.
OTP_CHANNEL = OTPChannel()

The WebSocket handler is straightforward.

# app/api/otp_socket.py
from fastapi import APIRouter, WebSocket, WebSocketDisconnect

from app.auth.otp_channel import OTP_CHANNEL
from app.auth.session import current_user_from_socket

router = APIRouter()


@router.websocket("/ws/otp")
async def otp_socket(ws: WebSocket) -> None:
    await ws.accept()
    user = await current_user_from_socket(ws)
    if user is None:
        await ws.close(code=4401, reason="unauthenticated")
        return

    try:
        # Send any pending OTP request immediately.
        request = await OTP_CHANNEL.next_request(user.id)
        await ws.send_json(
            {
                "type": "otp_required",
                "delivery_target": request.delivery_target,
                "expires_in_seconds": request.expires_in_seconds,
            }
        )
        message = await ws.receive_json()
        if message.get("type") == "otp_submit":
            await OTP_CHANNEL.submit(user.id, message["otp"])
            await ws.send_json({"type": "ack"})
    except WebSocketDisconnect:
        # User closed the tab. The worker's wait_for will time out
        # and surface a clean error to the next connect attempt.
        pass

A subtle one. If the user walks away and the OTP times out, you need the worker side to fail gracefully and the next attempt to start fresh, not get stuck behind a stale queue entry. Hence maxsize=1 on the queues: a duplicate publish blocks instead of silently stacking up behind the stale entry (swap put for put_nowait if you would rather surface that case as a hard error than block).

Do not log OTPs. They are short-lived single-use tokens, but they are also the only thing standing between an attacker who already has the password and a fully authenticated session. Treat them like passwords for the eight seconds you hold them.

Problem 3: Session Reuse

If you re-authenticate from scratch on every query, you will burn out your users on OTPs within a week. The third party already has a session model: cookies, bearer tokens, sometimes both. Persist them and reuse them.

The lifetime of a session cookie is rarely documented. Treat it as opaque: persist it, try to use it, and if the third party rejects with a 401 or a redirect to the login page, fall back to fresh login. The key is that session reuse is the happy path; re-login is the fallback.

# app/proxy/session.py
"""Session-cookie persistence per user, encrypted alongside credentials."""

import json
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

import httpx

from app.auth.crypto import EncryptedBlob, decrypt, encrypt


@dataclass
class PortalSession:
    cookies: dict[str, str]
    created_at: datetime

    def to_jar(self) -> httpx.Cookies:
        jar = httpx.Cookies()
        for name, value in self.cookies.items():
            jar.set(name, value)
        return jar

    @classmethod
    def from_response(cls, response: httpx.Response) -> "PortalSession":
        return cls(
            cookies={c.name: c.value for c in response.cookies.jar},
            created_at=datetime.now(timezone.utc),
        )


def encrypt_session(session: PortalSession, owner_id: str) -> dict:
    payload = json.dumps(
        {
            "cookies": session.cookies,
            "created_at": session.created_at.isoformat(),
        }
    )
    return encrypt(payload, owner_id=owner_id).serialize()


def decrypt_session(blob: dict, owner_id: str) -> PortalSession:
    raw = decrypt(EncryptedBlob.deserialize(blob), owner_id=owner_id)
    parsed = json.loads(raw)
    return PortalSession(
        cookies=parsed["cookies"],
        created_at=datetime.fromisoformat(parsed["created_at"]),
    )


async def is_session_alive(client: httpx.AsyncClient, probe_url: str) -> bool:
    """Cheap GET that returns 200 when authenticated, 401/redirect otherwise."""
    response = await client.get(probe_url, follow_redirects=False)
    if response.status_code in (200, 204):
        return True
    if response.status_code in (301, 302, 303, 307, 308):
        location = response.headers.get("location", "")
        if "login" in location.lower():
            return False  # redirected to the login page: session definitely dead
    return False  # 401 or anything unexpected: assume dead and re-login

The probe URL is portal-specific. For an SPA-backed portal it is usually a lightweight authenticated endpoint that returns a small JSON blob (“user profile,” “current quota,” whatever). Pick one that is cheap and reliably 401s when unauthenticated. Avoid long expensive endpoints; you will call this on every request.

Store sessions separately from credentials in your database. Same encryption pattern, different row. When a user disconnects (“revoke this account”), you delete both. When you detect “session is dead but credentials work,” you delete only the session and re-login. When you detect “credentials are dead,” you delete both and tell the user to reconnect.
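Those three deletion rules are worth pinning down as code. A sketch, with in-memory dicts standing in for the credential and session repositories (the function names here are mine, not the repos used elsewhere in the post):

```python
# Lifecycle rules for the two encrypted rows. The dict stores below stand in
# for the real database-backed repositories.

cred_store: dict[str, dict] = {}     # user_id -> encrypted credential row
session_store: dict[str, dict] = {}  # user_id -> encrypted session row


def disconnect(user_id: str) -> None:
    """User revoked the account: delete credentials AND session."""
    cred_store.pop(user_id, None)
    session_store.pop(user_id, None)


def on_stale_session(user_id: str) -> None:
    """Session dead, credentials still good: drop only the session, re-login."""
    session_store.pop(user_id, None)


def on_bad_credentials(user_id: str) -> None:
    """Credentials dead: drop both and tell the user to reconnect."""
    disconnect(user_id)
```

Keeping the rules in one place matters: scattering deletes across handlers is how you end up with an orphaned credential row for a user who thinks they disconnected.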

Problem 4: Failure Detection That Is Not “Try Again”

The third-party portal will fail in five distinct ways, and they require five distinct responses. Treating them all as transient errors is how you lock out your user’s account.

Stale session. Cookies expired. Re-login from stored credentials, retry the query. Invisible to the user.

Wrong password. The user changed their password on the portal directly without updating you. Stop trying. Mark the account as needs_reauth. Tell the user once via email or in-app banner; do not keep retrying. Three failed logins in a short window will lock out their account on the portal side, and they will blame you.

Account locked. The portal explicitly says “your account is locked.” Stop trying. Unlock typically requires a manual step on the user’s side (security questions, support call). Give them a clear path: “Your portal account is locked. Visit X to unlock, then reconnect here.”

Captcha introduced. The portal added a captcha to the login flow that was not there yesterday. Punt to the user: open a browser-side flow where they complete the captcha themselves. This is rare for SPA-backed portals but common for Struts2/JSP-backed ones.

Site is down or changed shape. The portal is returning 500s, or the JSON shape changed. This is an operational problem for you, not the user. Surface a generic “this service is temporarily unavailable” and page yourself. Users do not need a 500-page debug session.

The discriminator is response shape, not status code. Modern SPAs return 200 with {"error": "INVALID_CREDENTIALS"} for everything. You must read the body. Build a portal-specific classifier and unit test it against captured fixtures.

# app/proxy/icegate.py (sketched)
"""Driver for one specific portal. Replace per third party."""

from enum import Enum
import httpx


class LoginOutcome(Enum):
    OK = "ok"
    OTP_REQUIRED = "otp_required"
    BAD_CREDENTIALS = "bad_credentials"
    ACCOUNT_LOCKED = "account_locked"
    CAPTCHA_REQUIRED = "captcha_required"
    SITE_ERROR = "site_error"


def classify_login_response(response: httpx.Response) -> LoginOutcome:
    if response.status_code >= 500:
        return LoginOutcome.SITE_ERROR
    try:
        body = response.json()
    except ValueError:
        return LoginOutcome.SITE_ERROR

    code = body.get("status") or body.get("error_code") or ""
    if code == "OTP_SENT":
        return LoginOutcome.OTP_REQUIRED
    if code in ("INVALID_CREDENTIALS", "AUTH_FAILED"):
        return LoginOutcome.BAD_CREDENTIALS
    if code == "ACCOUNT_LOCKED":
        return LoginOutcome.ACCOUNT_LOCKED
    if code == "CAPTCHA_REQUIRED":
        return LoginOutcome.CAPTCHA_REQUIRED
    if response.status_code == 200 and body.get("isLoggedIn"):
        return LoginOutcome.OK
    return LoginOutcome.SITE_ERROR

The classifier is the single most fragile part of the system. It will break the day the portal team renames an error code. Pin it with fixture-based tests, alarm on the SITE_ERROR bucket growing, and accept that you will need to ship a small change every few months to keep it accurate.
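Pinning the classifier with fixtures can look like the sketch below. In the real suite the fixtures would be captured response bodies loaded from disk and wrapped in httpx.Response objects; here classify() is a dict-level stand-in for classify_login_response so the harness is self-contained:

```python
# Fixture-driven pinning tests for the login classifier (sketch).
# Each fixture: (status_code, captured JSON body, expected outcome).
FIXTURES = {
    "otp_sent": (200, {"status": "OTP_SENT"}, "otp_required"),
    "bad_password": (200, {"error_code": "INVALID_CREDENTIALS"}, "bad_credentials"),
    "locked": (200, {"status": "ACCOUNT_LOCKED"}, "account_locked"),
    "happy": (200, {"isLoggedIn": True}, "ok"),
    "outage": (503, {}, "site_error"),
}


def classify(status_code: int, body: dict) -> str:
    """Mirrors classify_login_response, minus the httpx.Response wrapper."""
    if status_code >= 500:
        return "site_error"
    code = body.get("status") or body.get("error_code") or ""
    if code == "OTP_SENT":
        return "otp_required"
    if code in ("INVALID_CREDENTIALS", "AUTH_FAILED"):
        return "bad_credentials"
    if code == "ACCOUNT_LOCKED":
        return "account_locked"
    if code == "CAPTCHA_REQUIRED":
        return "captcha_required"
    if status_code == 200 and body.get("isLoggedIn"):
        return "ok"
    return "site_error"


for name, (status, body, expected) in FIXTURES.items():
    assert classify(status, body) == expected, name
```

When the portal renames an error code, the fixture for the old code keeps passing while a freshly captured fixture fails, which tells you exactly which branch to update.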

Wiring It Together in FastAPI

The end-to-end shape, with each piece deliberately thin so you can read it.

# app/api/connect.py
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel

from app.auth.crypto import encrypt
from app.auth.otp_channel import OTP_CHANNEL, OTPRequest, OTPTimeout
from app.proxy import icegate
from app.proxy.session import encrypt_session
from app.repository import CredentialRepo, SessionRepo
from app.security import current_user

router = APIRouter()


class ConnectRequest(BaseModel):
    portal_username: str
    portal_password: str


@router.post("/connect")
async def connect(
    body: ConnectRequest,
    user=Depends(current_user),
    cred_repo: CredentialRepo = Depends(),
    session_repo: SessionRepo = Depends(),
):
    # Step 1: hand creds to the portal driver. It returns either OK,
    # OTP_REQUIRED with a continuation, or a hard error.
    outcome = await icegate.start_login(
        body.portal_username, body.portal_password
    )

    if outcome.kind is icegate.LoginOutcome.BAD_CREDENTIALS:
        # Do not store. Tell the user immediately.
        raise HTTPException(status_code=400, detail="Invalid portal credentials")
    if outcome.kind is icegate.LoginOutcome.ACCOUNT_LOCKED:
        raise HTTPException(status_code=400, detail="Portal account locked")
    if outcome.kind is icegate.LoginOutcome.SITE_ERROR:
        raise HTTPException(status_code=502, detail="Portal unavailable")

    # Step 2: OTP loop. The driver tells us the OTP destination.
    if outcome.kind is icegate.LoginOutcome.OTP_REQUIRED:
        try:
            otp = await OTP_CHANNEL.request_from_user(
                OTPRequest(
                    user_id=user.id,
                    delivery_target=outcome.delivery_target,
                    expires_in_seconds=outcome.expires_in_seconds,
                )
            )
        except OTPTimeout:
            raise HTTPException(status_code=408, detail="OTP timed out")
        outcome = await icegate.submit_otp(outcome.continuation, otp)

    if outcome.kind is not icegate.LoginOutcome.OK:
        raise HTTPException(status_code=400, detail="Login did not complete")

    # Step 3: persist credentials and the freshly minted session.
    cred_repo.put(
        user_id=user.id,
        portal_username=body.portal_username,
        encrypted_password=encrypt(body.portal_password, owner_id=user.id),
    )
    session_repo.put(
        user_id=user.id,
        encrypted_session=encrypt_session(outcome.session, user.id),
    )

    return {"status": "connected"}

The query path is the symmetric mirror.

# app/api/query.py
from fastapi import APIRouter, Depends, HTTPException

from app.auth.crypto import decrypt
from app.proxy import icegate
from app.proxy.session import decrypt_session, encrypt_session, is_session_alive
from app.repository import CredentialRepo, SessionRepo
from app.security import current_user

router = APIRouter()


@router.post("/query/boe-status")
async def boe_status(
    boe_number: str,
    user=Depends(current_user),
    cred_repo: CredentialRepo = Depends(),
    session_repo: SessionRepo = Depends(),
):
    cred = cred_repo.get(user.id)
    if cred is None:
        raise HTTPException(status_code=412, detail="Connect your portal account first")

    sess_blob = session_repo.get(user.id)
    session = decrypt_session(sess_blob, user.id) if sess_blob else None

    async with icegate.client(session) as client:
        if session is None or not await is_session_alive(client, icegate.PROBE_URL):
            password = decrypt(cred.encrypted_password, owner_id=user.id)
            outcome = await icegate.start_login(cred.portal_username, password)
            # password reference goes out of scope here; nothing else holds it
            if outcome.kind is icegate.LoginOutcome.BAD_CREDENTIALS:
                cred_repo.mark_needs_reauth(user.id)
                raise HTTPException(status_code=412, detail="Re-enter portal password")
            # OTP_REQUIRED can happen here too. In practice this is rare for a
            # session-restore path, but the same OTP_CHANNEL flow handles it.
            if outcome.kind is not icegate.LoginOutcome.OK:
                raise HTTPException(status_code=502, detail="Portal login failed")
            session_repo.put(
                user_id=user.id,
                encrypted_session=encrypt_session(outcome.session, user.id),
            )

        return await icegate.fetch_boe_status(client, boe_number)

A few things this code does not do that production code must.

It does not retry. Add a single retry at the per-request layer for transient network errors, with jitter. Do not retry login attempts; that is how you lock accounts.
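A minimal shape for that retry layer, with names of my choosing (the op callable would wrap a single portal request, never a login):

```python
import asyncio
import random


async def with_one_retry(op, *, retryable: type[Exception], base_delay: float = 0.5):
    """Run op once; on a transient error, back off with jitter and retry exactly once.

    For per-request network errors only. Never wrap login attempts in this.
    """
    try:
        return await op()
    except retryable:
        # Jittered delay avoids a thundering herd when the portal blips.
        await asyncio.sleep(base_delay + random.uniform(0, base_delay))
        return await op()
```

With httpx you would pass retryable=httpx.TransportError; a second failure propagates to the caller instead of retrying again.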

It does not rate-limit. Per user account, hard cap login attempts to something like three per hour. Without this, a misconfigured client will trigger account lockout on the portal side and your users will hate you.

It does not audit. Every credential decrypt and every portal request should produce an audit log row keyed to user_id, with timestamp, request type, and outcome. KMS gives you decrypt audit for free. Do the same for portal calls in your application logs.
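The audit row itself can be a one-liner emitter; a sketch, with field names of my choosing:

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")


def audit(user_id: str, action: str, outcome: str, **extra: str) -> dict:
    """Write one structured audit row per credential decrypt or portal call.

    Keep credential material out of extra; this log is retained long-term.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "action": action,    # e.g. "credential_decrypt", "portal_query"
        "outcome": outcome,  # e.g. "ok", "stale_session", "bad_credentials"
        **extra,
    }
    audit_log.info(json.dumps(record))
    return record
```

One JSON line per event is deliberately boring: it greps, it loads into anything, and it pairs cleanly with the KMS-side decrypt audit when you reconstruct an incident.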

Operational Concerns That Decide Whether You Survive

A handful of things will bite you if you skip them.

Scrubbing credentials from error reporting. Error trackers (Sentry, Rollbar, GCP Error Reporting) will happily capture request bodies and exception locals, and some configurations do so out of the box. The connect endpoint receives the password in the request body. If your error reporter snapshots that body on a 500, the password is now in your error reporter’s database, where you cannot envelope-encrypt it. Configure your reporter to redact the entire /connect endpoint, or do not send request bodies at all.

# app/observability.py
from fastapi import Request

SENSITIVE_ROUTES = {"/connect", "/ws/otp"}


def scrub_request(request: Request, body: bytes | None) -> bytes | None:
    if request.url.path in SENSITIVE_ROUTES:
        return b"[scrubbed]"
    return body

Logging discipline. Print statements during debugging are how passwords end up in Cloud Logging. Set up a structured logger with a sensitive-fields filter. Forbid the password variable name from ever being passed to logger.info without going through a redactor. A pre-commit hook that greps for logger.*\(.*password catches the obvious cases.
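One blunt but effective shape for that filter, as a sketch: if the formatted message mentions a sensitive word, replace the whole message rather than attempt a partial redaction.

```python
import logging


class RedactSecrets(logging.Filter):
    """Handler-level filter: redact any log line that mentions a sensitive field."""

    SENSITIVE = ("password", "passwd", "otp")

    def filter(self, record: logging.LogRecord) -> bool:
        if any(word in record.getMessage().lower() for word in self.SENSITIVE):
            record.msg = "[REDACTED: message referenced a sensitive field]"
            record.args = ()
        return True  # always emit; we mutate rather than drop
```

Attach it to every handler (handlers apply their own filters), not just the root logger, so a module-level logger with its own handler cannot bypass it.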

Memory residency. Python does not let you securely zero memory. The string immutability that makes the language pleasant means an attacker with a memory dump can fish out password strings until garbage collection runs (and probably after, since CPython interns short strings). You cannot fully solve this. You can minimise it: never store the password in a long-lived object. Pass it directly from the request handler to the encryption call, then to a single login attempt, then let it go out of scope. Do not stash it in self.password on a class instance that lives across requests.

Key rotation. KMS keys can be rotated, and your stored blobs reference a specific key version. A rotated KMS key keeps old versions available for decrypt by default, so existing blobs continue to work. New encryptions use the new version. Once a year, run a rewrap job: decrypt every blob with the old version, re-encrypt with the new, write back. This bounds your exposure if a key version is ever compromised.
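The rewrap loop is simple enough to abstract over the KMS calls, which also makes it testable without a cloud project. A sketch (decrypt_old and encrypt_new stand in for KMS decrypt against the old key version and encrypt against the new one):

```python
from typing import Callable, Iterable


def rewrap_all(
    blobs: Iterable[tuple[str, bytes]],
    decrypt_old: Callable[[bytes], bytes],
    encrypt_new: Callable[[bytes], bytes],
) -> dict[str, bytes]:
    """Unwrap each stored DEK with the old key version, rewrap with the new.

    Only the wrapped DEK changes; the AES-GCM ciphertext is untouched.
    """
    rewrapped: dict[str, bytes] = {}
    for blob_id, wrapped_dek in blobs:
        dek = decrypt_old(wrapped_dek)
        rewrapped[blob_id] = encrypt_new(dek)
        # In production: write the new wrapped DEK back to the credential row
        # here, inside the loop, so a crash mid-job leaves no blob stranded.
    return rewrapped
```

Note the job never touches passwords: it decrypts and re-encrypts only the 32-byte DEKs, so it runs fast and its blast radius on failure is one wrapped key, not one credential.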

Per-user blast radius. The owner_id in AAD prevents cross-user blob substitution at the crypto layer. Also enforce it at the database layer: the row is keyed by user_id, the application query filters by user_id, and row-level security restricts the serving role where the platform supports it. Defence in depth.

Incident posture. Decide ahead of time what you do if your database leaks. A tested runbook beats a heroic response. The minimum: rotate the KMS key (which makes every existing blob unreadable, even with the database in hand), force every user to reconnect, and email every affected user with the truth. The harder one: have a way to forcibly invalidate every active third-party session you minted. For most portals this is “do nothing, let the cookies expire,” which is acceptable but worth being honest about.

Consent Is Part of the Feature

The /connect screen is not a form. It is a consent moment. The user is about to hand you a password to a system that may also gate things you have nothing to do with. Tell them, in plain language, what you will do with it.

The minimum elements:

  • Which portal you will log in to and under what username (echoed back to them).
  • That you will store the password encrypted, decryptable only by the application service identity.
  • What queries you will make on their behalf, with examples.
  • That they can disconnect at any time, and what disconnect does (delete encrypted credentials and session).
  • A link to a security page that explains envelope encryption and KMS in three short paragraphs (a one-time investment that pays off forever).

If you are not comfortable saying these things out loud to the user, you should not be building the feature. The opposite is also true: if you can say them clearly and the user proceeds, you have informed consent, which is the only credible legal posture for this pattern.

When Not to Build This

This pattern is not free. The costs:

  • A real KMS bill (small but nonzero, around 3 USD per million decrypts).
  • A real engineering bill (the four problems above are not weekend work).
  • A real operational bill (broken logins, locked accounts, support tickets, captcha races).
  • A real reputational risk (one breach and you are the cautionary tale).

If a real OAuth integration is plausibly six months away on the third party’s roadmap, wait six months. If a partnership conversation can get you a service account, have the conversation. If you can build a useful product around the public, no-auth subset of the portal, ship that first and let usage prove the case for going deeper.

The pattern is correct when the third party will never offer OAuth (government portals, legacy enterprise systems, a small vendor with no developer programme), the user genuinely needs the integration, and you can absorb the operational cost. It is wrong as a shortcut around the friction of a real OAuth integration that exists but is annoying to set up.

In the customs-tracker scoping I mentioned at the top, we ended up parking the BYO-login feature for Phase 2. Not because the pattern is bad, but because the product had not yet earned the right to ask users for that level of trust. Phase 1 ships the public, no-auth subset and builds an audience. When that audience is asking for the authenticated feature directly, in their own words, the conversation about handing us their portal password is one we have earned the right to have.

The pattern is a hammer. Make sure you are looking at the right kind of nail.

About the Author

Ashish Anand

Founder & Lead Developer

Full-stack developer with 10+ years of experience in Python, JavaScript, and DevOps. Creator of DevGuide.dev. Previously worked at Microsoft. Specializes in developer tools and automation.