Periscope v1.0.0-rc9


RFC 0002 — OIDC Login + Per-User K8s Authorization

Status: Draft (PR-A landed, PR-A.1 landed, PR-B next)
Owner: @gnana997
Started: 2026-05-02
Targets: v1 (ship), v1.x (UX polish, CLI, cross-account), v2 (per-user AWS IDC)
Related: GROUND_RULES.md, RFC 0001 (pod exec)

1. Summary

Periscope ships with two orthogonal authentication layers wired into a single Provider abstraction:

  • Layer A — user identity: generic OIDC (tested with Auth0, Okta), Authorization Code + PKCE, BFF pattern. The Go backend is the OAuth client; the SPA never sees a token. Session is an httpOnly, Secure, SameSite=Lax cookie bound to a server-side session record. Shipped in PR-A + PR-A.1.

  • Layer B — application authentication and per-user K8s authorization: the Periscope pod uses EKS Pod Identity (preferred) or IRSA (documented fallback) to obtain its own AWS credentials. K8s API calls go through K8s impersonation so each user's IdP groups translate into per-cluster K8s RBAC. Three operator-selectable modes:

    | Mode | What it does | Operator burden |
    | --- | --- | --- |
    | shared (default) | No impersonation. All users share the pod role's K8s perms. | Lowest — one Access Entry per cluster. Matches the pre-PR-B status quo. |
    | tier | Impersonate one of five built-in tiers (read/triage/write/maintain/admin). Operator maps IdP groups → tier. | Medium — apply 7 shipped manifests per cluster; ~5-line config. |
    | raw | Impersonate the user's actual IdP groups (prefixed). Operator owns all per-cluster RBAC. | High — full RBAC YAML per cluster. CLI tool (periscope-rbac, PR-B.2) makes this manageable. |

Both layers populate the existing internal/credentials.Provider interface, which every operation already takes as an explicit argument. v1 audit records the OIDC subject; v1's K8s audit log records the same subject via the impersonation metadata. v2 swaps Layer B's Provider for UserSsoProvider so AWS calls run as the user's real AWS Identity Center session, with no API changes anywhere else in the codebase.

Cross-account sts:AssumeRole (multiple AWS accounts) is explicitly deferred to PR-C / v1.x. Same-account multi-cluster works with the v1 Layer B; the kubeconfig backend is the documented escape hatch for cross-account in the meantime.


2. Motivation

Pitch-defining. Periscope's headline differentiator is keyless — no long-lived AWS keys mounted in the pod, no kubeconfig with embedded creds on a laptop. Layer B is the technical substance of that pitch.

Per-user K8s perms are the differentiator from "prettier kubectl." A dashboard where everyone has identical K8s permissions is strictly worse than kubectl + AWS SSO + Access Entries for any team beyond ~5 people. Periscope without per-user authorization doesn't compete with Rancher; it competes with kubectl. We need, at minimum, the ability for an org with viewers + operators + admins to express that distinction.

Stops the dev-stub from leaking. sessionFromRequest() returned a hardcoded identity until PR-A. Every audit log entry read actor=dev@local. Shipping without fixing this means audit was theatre. PR-A fixed Layer A; PR-B brings the same attribution to the K8s side via impersonation.

Unblocks v2. The current Provider abstraction already separates "who the actor is" (Session) from "how AWS creds are obtained" (Provider). Layer A populates the actor; Layer B obtains the creds AND propagates the actor's groups into K8s. v2 swaps the Provider implementation for one that uses the user's actual AWS IDC session. The Layer A code is unchanged; the impersonation logic in Layer B can be retained or removed depending on whether each cluster trusts the same IdP directly.

Catches up on AWS auth. The original plan said "shared IRSA." That was correct in 2024. As of 2026: EKS Pod Identity (GA Nov 2023) is now AWS's recommended default; EKS Access Entries (GA 2024) replaced the aws-auth ConfigMap. The Periscope code is identical for IRSA vs Pod Identity — the AWS SDK v2 default credential chain handles both. The choice is purely a deployment-doc difference. We standardize on Pod Identity and document IRSA as a fallback for non-EKS hosting.


3. Goals and non-goals

Goals

Layer A (PR-A — done):

  • OIDC login via Authorization Code + PKCE, BFF pattern, httpOnly cookie session.
  • Generic OIDC (tested with Auth0 and Okta).
  • Optional audience field for IdPs that need it (e.g. Auth0).
  • Refresh-token rotation; absolute session lifetime (8 h default); idle (30 min default); forced re-login at absolute expiry.
  • Local logout (default) + RP-initiated logout (kills IdP session too).
  • Provider.Actor() returns the OIDC sub (or email if configured).
  • Secret resolver (internal/secrets) for oidc.clientSecret: env var, file, AWS Secrets Manager, AWS SSM Parameter Store. Shipped in PR-A.1.

Layer B (PR-B — next):

  • SharedIrsaProvider works under EKS Pod Identity and IRSA without code changes; documented deployment for both. Already shipped in scaffolding.
  • In-process EKS bearer-token generation (presigned sts:GetCallerIdentity). Already shipped in internal/k8s/token.go.
  • Three authorization modes: shared (default) / tier / raw.
  • Five built-in tiers (read / triage / write / maintain / admin) named after GitHub repo roles for familiarity.
  • Helm chart ships the per-cluster RBAC manifests for tier mode (3 built-in ClusterRoleBindings + 2 custom ClusterRoles + 2 custom ClusterRoleBindings = 7 manifests applied once per cluster).
  • Periscope's pod role only has impersonate on each cluster — no other K8s perms. Defense-in-depth.
  • All deploy guidance uses EKS Access Entries (no aws-auth ConfigMap).

Non-goals

  • Cross-account sts:AssumeRole. Deferred to PR-C / v1.x. Same-account multi-cluster works with v1 Layer B as-is. Multi-account orgs use the kubeconfig backend escape hatch.
  • Multi-tenant OIDC tenant support. Single IdP tenant per Periscope deployment.
  • HA session store (Redis, encrypted-cookie stateless). v1 = in-memory map. Single-replica deployment is fine for the audience size; HA is a v1.x concern.
  • AWS Identity Center user pass-through. That is v2.
  • Verifying additional IdPs. The flow is OIDC-standard and works against any compliant IdP; v1 ships tested against Auth0 and Okta. Azure AD/Entra, Google, and Keycloak should work but are not exercised in CI.
  • In-app RBAC management UI. v1 doesn't render or edit ClusterRoleBindings inside Periscope. The CLI (PR-B.2) is the authoring path.
  • Periscope-issued API tokens for non-browser clients. Not in v1.
  • mTLS / SPIFFE. Out of scope for the foreseeable future.
  • Auto-refresh of resolved secrets. Resolution is at startup; rotation = restart in v1. Auto-refresh is a v1.x concern.

4. User experience

Login

  1. User opens Periscope. No session cookie → SPA shows <LoginScreen>.
  2. Click "Sign in with Okta" (the label is generic; works for any IdP) → full-page redirect to /api/auth/login → backend generates PKCE verifier + state, stores them in a short-lived cookie, 302s to the IdP's authorization_endpoint.
  3. User authenticates with the IdP. The IdP 302s back to /api/auth/callback.
  4. Backend validates state, exchanges code for tokens, validates ID token (issuer, audience, expiry, signature against the IdP's JWKS), pulls the configured groups claim, evaluates allowedGroups + tier mapping, creates session record, sets session cookie, 302s to /.
  5. SPA loads. First request includes the cookie. <App> reads /api/auth/whoami which returns {subject, email, groups, mode, tier?, expiresAt}. Header shows the user's email + tier badge + dropdown for Sign out.

Authorization across the three modes

shared mode. All users have identical K8s permissions on each cluster — whatever the cluster's Access Entry binds Periscope's pod principal to. Periscope does NOT impersonate. Auth0/Okta groups affect the gate (allowedGroups) but not in-cluster authz. Best for small teams or POC deployments.

tier mode. Periscope impersonates one of five built-in groups based on the user's IdP claims:

```
IdP group                       Periscope tier        Impersonate-Group sent to K8s
────────────────────────────────────────────────────────────────────────────────────
SRE-Platform           ─────→   admin                 periscope-tier:admin
SRE-OnCall             ─────→   triage                periscope-tier:triage
Backend-TeamLeads      ─────→   maintain              periscope-tier:maintain
Engineering-All        ─────→   write                 periscope-tier:write
Contractors            ─────→   read                  periscope-tier:read
(no matching group)    ─────→   defaultTier           periscope-tier:read   (configurable)
```

Per-cluster RBAC bindings (shipped by the chart) bind these periscope-tier:* groups to standard ClusterRoles. Operators apply them once per cluster and don't write per-IdP-group YAML.

raw mode. Periscope impersonates with the user's actual IdP groups, prefixed (periscope: by default). Operators write all RBAC bindings against those prefixed group names. Maximum flexibility; maximum operator effort. The PR-B.2 CLI tool generates the bindings from a declarative intent file to make this manageable.

Tier definitions (GitHub-shaped)

| Tier | K8s mapping | Plain English |
| --- | --- | --- |
| read | view (built-in) | Read everything except secrets. |
| triage | shipped periscope-triage | Read + debug verbs (exec, logs, port-forward, restart pods, scale workloads). No spec edits. |
| write | edit (built-in) | Modify all namespaced resources except RBAC. |
| maintain | shipped periscope-maintain | admin (namespaced incl. RoleBindings) + cluster-scoped reads on nodes/namespaces/storageclasses. No cluster-level RBAC create. |
| admin | cluster-admin (built-in) | Everything. |

The two custom ClusterRoles ship with sensible default verb sets; operators can edit them per cluster with kubectl edit clusterrole. Verb sets evolve as we learn from real use; chart appVersion tracks shipped role contents.

Logout

  • Default — local logout. "Sign out" → GET /api/auth/logout → backend clears session, 302s to / → SPA back to login screen. IdP session stays alive (expected behavior for "log out of this app").
  • Optional — RP-initiated. "Sign out everywhere" menu item → backend clears local session, then 302s the browser through the IdP's end_session_endpoint with id_token_hint and post_logout_redirect_uri=…/api/auth/loggedout. Both sessions end.

Session lifecycle

  • Idle ≥ 30 min: next request returns 401; SPA redirects to login.
  • Absolute ≥ 8 h: same; user must re-auth from scratch.
  • Refresh on activity: silent. Backend refreshes the access token using the rotated refresh token whenever it's within 60s of expiry; user never notices.
  • Refresh failure (IdP revoked the RT): next request 401 → login screen.
  • Backend restart (in-memory store): all sessions invalidated. Login required.

Error states

  • IdP unreachable during login: 502 on /api/auth/callback. Retry button.
  • State/PKCE mismatch: 400, "Login attempt invalid." Defends against replayed callback URLs.
  • Forbidden (403) from K8s (tier/raw modes): UI shows a calm "your role doesn't allow this" toast for actions, and empty states with a "contact your cluster admin" hint for resources the user can't list. Detailed UX pass is PR-B.1.
  • Cluster role assumption fails (cross-account, future PR-C): cluster appears in the picker as red, hovering shows the AWS error.

5. Architecture

5.1 Layered model

```
Browser
  │  (httpOnly session cookie; no tokens)

Go backend
  ├─ Layer A: OIDC client (BFF)
  │   └─ Session store ⇒ populates credentials.Session{Subject, Email, Groups}

  ├─ Authorization mode resolver (shared|tier|raw)
  │   └─ shared:  no impersonation
  │      tier:    map IdP groups → periscope-tier:<tier>
  │      raw:     prefix-passthrough → periscope:<group>

  └─ Layer B: AWS credentials + K8s client
      ├─ AWS default chain → STS creds (Pod Identity / IRSA / local profile)
      ├─ EKS bearer token: presigned sts:GetCallerIdentity (in-process)
      └─ K8s client.Config.Impersonate populated per-request from mode resolver
```

5.2 Existing scaffolding

  • internal/credentials/provider.go — Provider, Factory, Session.
  • internal/credentials/middleware.go — Wrap(factory, handler).
  • internal/credentials/shared_irsa.go — SharedIrsaProvider. Exposes AWSConfig() for the secrets resolver.
  • internal/k8s/client.go — buildEKSRestConfig, buildKubeconfigRestConfig.
  • internal/k8s/token.go — MintEKSToken (in-process EKS bearer token).
  • internal/auth/* — Layer A + secret resolver wiring (PR-A, PR-A.1).
  • internal/secrets/resolver.go — secret-reference resolution (PR-A.1).

5.3 New components for PR-B

  • internal/authz/mode.go — mode resolver. Resolve(ctx, session) → []string returns the impersonated groups based on mode + identity.
  • internal/authz/tiers.go — built-in tier definitions (5 tiers, GitHub-named).
  • internal/k8s/client.go — extend buildRestConfig to set cfg.Impersonate from the resolver.
  • internal/k8s/exec.go — verify SPDY/WS exec carries impersonation.
  • cmd/periscope/main.go — wire the mode resolver into the request flow.
  • internal/auth/config.go — extend with authorization.mode, authorization.groupTiers, authorization.defaultTier, authorization.groupPrefix.
  • deploy/helm/periscope/templates/cluster-rbac.yaml (new) — gated render of the 7 tier-mode RBAC manifests.
  • docs/setup/cluster-rbac.md (new) — walk-through of all three modes with worked examples.

5.4 Deferred to later PRs

  • PR-B.1: Forbidden-aware UI — friendly empty states + toast for 403s, tier badge in <UserMenu>, role indicator on the cluster row.
  • PR-B.2: cmd/periscope-rbac CLI — declarative intent file → generated RBAC YAML. Separate binary, vendored alongside Periscope.
  • PR-C: Cross-account sts:AssumeRole per cluster. Adds Cluster.AssumeRoleArn, role-session-name = periscope/<oidc-sub>, per-cluster STS creds caching. ~80 LoC.

6. Layer A — OIDC

6.1 Library choice

  • github.com/coreos/go-oidc/v3 — discovery, ID-token verification, JWKS.
  • golang.org/x/oauth2 — code exchange, refresh, token source.

Skip ory/fosite (it's for building an OAuth provider, not consuming one). Skip lestrrat-go/jwx for v1 (only needed if we later do signed request objects or DPoP).

6.2 Configuration

config/auth.yaml:

```
oidc:
  # Any OIDC-compliant issuer; tested with Auth0 + Okta.
  issuer: https://your-tenant.us.auth0.com/
  clientID: your-application-client-id

  # Secret reference. Resolved through internal/secrets at startup.
  # Schemes: ${ENV}, file://path, aws-secretsmanager://name[#json-key],
  # aws-ssm:///path/to/parameter, or a literal (discouraged).
  clientSecret: ${OIDC_CLIENT_SECRET}

  redirectURL: https://periscope.corp.com/api/auth/callback
  scopes: [openid, profile, email, offline_access]

  # Auth0-only: API audience identifier so the IdP issues a JWT
  # access token. Empty for Okta and most other IdPs.
  audience: ""

  postLogoutRedirect: https://periscope.corp.com/api/auth/loggedout

session:
  cookieName: periscope_session
  idleTimeout: 30m
  absoluteTimeout: 8h
  cookieDomain: periscope.corp.com         # optional

authorization:
  # one of: shared | tier | raw
  mode: shared

  # tier mode:
  groupTiers:
    SRE-Platform:        admin
    SRE-OnCall:          triage
    Backend-TeamLeads:   maintain
    Engineering-All:     write
    Contractors:         read
  defaultTier: read    # users in no listed group; "" = deny

  # raw mode:
  groupPrefix: "periscope:"

  # gate (all modes): empty = any authenticated user
  allowedGroups: []

  # IdP token claim name. Auth0 needs a namespaced custom claim
  # (e.g. https://periscope/groups); Okta exposes "groups" natively.
  groupsClaim: groups
```

Secret resolution. clientSecret is the only field that flows through internal/secrets.Resolver today. Resolution happens once at startup; rotation = restart in v1. The AWS-backed schemes share the pod's default credential chain (Pod Identity / IRSA / local profile).

6.3 Endpoints

| Path | Method | Purpose |
| --- | --- | --- |
| /api/auth/login | GET | Generate PKCE+state, set short-lived cookie, 302 to the IdP. |
| /api/auth/callback | GET | Exchange code, verify ID token, create session, set session cookie, 302 to /. |
| /api/auth/logout | GET | Clear local session, 302 to /. (No IdP call.) |
| /api/auth/logout/everywhere | GET | Clear local session + 302 to IdP end_session_endpoint. |
| /api/auth/loggedout | GET | Static "you've been signed out" page after IdP logout. |
| /api/auth/whoami | GET | {subject, email, groups, mode, tier?, expiresAt} for the SPA. |

6.4 Session record

```
type Session struct {
    ID             string            // 32 random bytes, base64
    Subject        string            // OIDC sub
    Email          string
    Groups         []string
    AccessToken    string            // server-side only
    RefreshToken   string            // server-side only
    IDToken        string            // for RP-initiated logout id_token_hint
    AccessExpiry   time.Time
    AbsoluteExpiry time.Time
    LastActivity   time.Time
}
```

credentials.Session is the exposed slice — {Subject, Email, Groups}. Tokens never leave the auth package.

6.5 Session cookie

  • Name: periscope_session
  • Value: session ID (32 random bytes, base64)
  • HttpOnly: true
  • Secure: true on TLS / behind X-Forwarded-Proto=https
  • SameSite: Lax (see CHANGELOG v1.0.0 — Strict broke the post-callback redirect; #37)
  • Path: /
  • MaxAge: matches absolute timeout (8 h)
  • Domain: configurable; default unset (host-only cookie)

6.6 Refresh

Background-free design. On each authenticated request, middleware checks time.Until(session.AccessExpiry) < 60s. If so, refresh inline using the RT. The IdP is configured for rotation: each refresh issues a new RT; the old one is valid for a 30-second grace window. Refresh failure → invalidate session, 401, SPA redirects to login.


7. Layer B — AWS auth + K8s impersonation

7.1 AWS credentials (already in place)

The pod's base AWS credentials come from the SDK's default chain. Two supported deployment paths; both produce identical Periscope behavior:

Pod Identity (preferred for new EKS deployments):

```
aws eks create-pod-identity-association \
  --cluster-name <hosting-cluster> \
  --namespace periscope \
  --service-account periscope \
  --role-arn arn:aws:iam::111111111111:role/periscope-base
```

Trust policy on periscope-base:

```
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "pods.eks.amazonaws.com" },
    "Action": ["sts:AssumeRole", "sts:TagSession"]
  }]
}
```

IRSA (fallback for older EKS or non-EKS hosting): SA annotation eks.amazonaws.com/role-arn: arn:aws:iam::...:role/periscope-base. Standard pre-2024 pattern.

7.2 EKS API auth (already in place)

internal/k8s/token.go mints bearer tokens in-process: presigned sts:GetCallerIdentity URL with x-k8s-aws-id header → base64url → prefixed with k8s-aws-v1.. 15-min token TTL; refreshed before expiry. No aws eks get-token shell-out, no kubeconfig file.

7.3 Per-cluster Access Entry

Each managed EKS cluster gets a single Access Entry binding Periscope's pod principal to a K8s group (we call it periscope-bridge):

```
aws eks create-access-entry \
  --cluster-name prod-eu-west-1 \
  --principal-arn arn:aws:iam::222...:role/periscope-base \
  --kubernetes-groups periscope-bridge \
  --type STANDARD
```

The periscope-bridge group has only one ClusterRole bound to it: a custom periscope-impersonator granting impersonate on users and groups. Periscope's pod has no other K8s perms on the cluster. Impersonate is strictly the only thing the principal can do natively; everything else flows through the impersonated user's RBAC.
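A hedged sketch of what that periscope-impersonator ClusterRole could look like. Pinning groups by exact resourceNames (rather than allowing all groups) is an assumption consistent with §7.5's prefix discipline, not necessarily the shipped manifest:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: periscope-impersonator
rules:
  # Impersonating any user is required to set Impersonate-User to the OIDC sub.
  - apiGroups: [""]
    resources: ["users"]
    verbs: ["impersonate"]
  # Groups are pinned by exact name, so the pod can never impersonate
  # system:masters or any other un-prefixed group.
  - apiGroups: [""]
    resources: ["groups"]
    verbs: ["impersonate"]
    resourceNames:
      - "periscope-tier:read"
      - "periscope-tier:triage"
      - "periscope-tier:write"
      - "periscope-tier:maintain"
      - "periscope-tier:admin"
```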

7.4 Per-user K8s impersonation

buildRestConfig populates rest.Config.Impersonate based on the active mode:

  • shared: no Impersonate fields. Periscope acts as periscope-bridge.
  • tier: Impersonate.UserName = session.Subject, Impersonate.Groups = ["periscope-tier:<tier>"] where tier is resolved from groupTiers (default defaultTier).
  • raw: Impersonate.UserName = session.Subject, Impersonate.Groups = ["periscope:<g>" for g in session.Groups].

K8s client-go automatically sends Impersonate-User and Impersonate-Group headers on every request. The apiserver re-evaluates RBAC under the impersonated identity. Audit log shows:

```
user.username = auth0|alice
user.groups = ["periscope-tier:admin"]
impersonatedBy.username = system:node:periscope-bridge
```

7.5 Group prefix discipline

Critical. Both tier and raw modes prefix the impersonated groups (periscope-tier: and periscope: respectively). This prevents an attacker who compromises Periscope from impersonating into system:masters or other privileged groups — RBAC bindings on those groups won't match the prefixed form.

The chart enforces this by:

  1. Refusing to render a periscope-impersonator ClusterRole that allows impersonating un-prefixed groups.
  2. Rendering periscope-tier:* or periscope:* group RBAC only.
  3. Emitting a startup-time log line listing the exact impersonate verbs the pod role has, so operators can spot drift.

7.6 Tier mode RBAC (shipped manifests)

The Helm chart renders 7 manifests per cluster (when clusterRBAC.enabled and authorization.mode: tier):

| Manifest | Purpose |
| --- | --- |
| ClusterRole periscope-impersonator | The verb (impersonate users, groups) |
| ClusterRoleBinding periscope-impersonator | Binds periscope-bridge group → impersonator role |
| ClusterRoleBinding periscope-tier-read | view → read tier |
| ClusterRoleBinding periscope-tier-write | edit → write tier |
| ClusterRoleBinding periscope-tier-admin | cluster-admin → admin tier |
| ClusterRole periscope-triage + ClusterRoleBinding | triage tier (custom verb set) |
| ClusterRole periscope-maintain + ClusterRoleBinding | maintain tier (custom verb set) |

Operators apply with kubectl apply -f. Drift between shipped roles and what's on the cluster is the operator's problem; chart's appVersion tracks role contents so operators can pin and rerun on chart upgrade.

The two custom roles' exact verb sets are intentionally not locked in this RFC (decision 10). They ship with sensible defaults and evolve in v1.x based on real-world feedback.

7.7 Cross-account (deferred to PR-C)

Cross-account access requires the pod role in account A to sts:AssumeRole into a per-cluster role in account B. PR-C adds:

  • Cluster.AssumeRoleArn (and optional ExternalID, SessionDurationSeconds) in clusters.yaml.
  • SharedIrsaProvider extension that calls AssumeRole per cluster, caching STS creds keyed by (cluster, session.Subject).
  • RoleSessionName = periscope/<oidc-sub> so CloudTrail attributes AWS calls back to the human.
  • Same-account v1 deployments are unaffected when AssumeRoleArn is empty.

The escape hatch for v1 is the kubeconfig backend with an exec credential plugin that runs aws eks get-token --role-arn .... Workable for a few clusters; ugly at scale; sized appropriately as v1.x scope.


8. Wiring it together

cmd/periscope/main.go:

```
oidcClient, _   := auth.NewOIDCClient(ctx, cfg.OIDC, cfg.Authorization.GroupsClaim)
sessions        := auth.NewMemoryStore()
authMW          := auth.Middleware(oidcClient, sessions, cfg)
modeResolver    := authz.NewResolver(cfg.Authorization)
factory         := credentials.NewSharedIrsaFactoryWithModeResolver(awsCfg, modeResolver)

auth.RegisterRoutes(mux, oidcClient, sessions, cfg)

// Existing routes unchanged. credentials.Wrap pulls the session from
// context, the factory builds a Provider, and k8s.NewClientset pulls
// the impersonation strings from the Provider.
mux.Handle("/api/...", authMW(credentials.Wrap(factory, apiHandler)))
```

credentials.Wrap is unchanged externally; the Provider it builds carries the impersonation context internally.


9. Audit logging

Every audit line gains:

```
actor=alice@corp.com   actor_groups=periscope-tier:admin   session_id=…
```

Audit events:

  • auth.login (subject, email, groups, ip, user_agent, resolved_tier)
  • auth.login_failed (reason — state_mismatch, code_exchange_failed, id_token_invalid, not_in_allowed_groups)
  • auth.logout (subject, kind=local|everywhere)
  • auth.session_expired (subject, kind=idle|absolute|refresh_failed)
  • (PR-C) aws.assume_role_failed (cluster, role_arn, error_code)

Per-cluster K8s audit logs additionally show:

```
user.username = auth0|alice
user.groups = ["periscope-tier:admin"]
impersonatedBy.username = system:node:periscope-bridge
```

This is the full attribution chain — anyone can join app-level audit, K8s audit, and (in PR-C) CloudTrail back to the same OIDC sub.


10. Smoke-test plan

Layer A (already exercised)

  1. Login happy path; allowed-groups gate; idle/absolute expiry; refresh rotation; local logout; logout-everywhere.

Layer B — Mode resolution

  1. shared mode: install with default mode: shared. Two users in two different IdP groups both get full pod-role perms. Confirm K8s audit log shows the pod principal directly (no impersonatedBy field).
  2. tier mode: configure groupTiers mapping group Engineers to read and SREs to admin. Engineer can list pods but not delete. SRE can do anything. Confirm K8s audit log shows user.username = <oidc-sub>, user.groups = [periscope-tier:read] for the engineer.
  3. raw mode: configure mode: raw. Bind a ClusterRole to group periscope:Engineers. User in Engineers IdP group gets exactly that binding's perms.

Layer B — Tier semantics (custom roles)

  1. triage tier: confirm pods/exec, pods/log, pods/portforward work; confirm scaling deployments via /scale works; confirm pod delete works; confirm deployment spec patch is denied.
  2. maintain tier: confirm namespace-scoped admin works (RoleBindings ok); confirm cluster-level RBAC create is denied; confirm node read works.
  3. defaultTier: user in no listed group lands on the default tier (or is denied if defaultTier: "").

Security

  1. Group prefix discipline: try to bind a ClusterRole to plain system:masters (no prefix) and confirm Periscope's impersonation NEVER produces system:masters regardless of input groups.
  2. Pod role can't act directly: unbind everything except periscope-impersonator from the pod role. Confirm kubectl get pods (issued by Periscope without impersonation) returns 403.

Deferred — PR-C smoke tests

10–13: cross-account AssumeRole tests; trust-policy validation; CloudTrail attribution.


11. Decisions

| # | Decision | Rationale |
| --- | --- | --- |
| 1 | Authorization Code + PKCE + BFF + httpOnly cookie | 2025–2026 industry consensus; SPA token storage is a known XSS-exfil class. |
| 2 | coreos/go-oidc/v3 + golang.org/x/oauth2 | Boring, well-tested, ~150 LoC of glue. |
| 3 | In-memory session store, single-replica deploy | Fits v1 audience size; HA is v1.x. |
| 4 | Local logout default; "logout everywhere" optional | Matches user expectation for "sign out of this app." |
| 5 | Pod Identity preferred, IRSA documented as alternative | Code is identical; deploy doc differs. |
| 6 | EKS Access Entries only (no aws-auth ConfigMap) | aws-auth is deprecated by AWS as of 2024. |
| 7 | In-process EKS bearer token (presigned STS URL); no subprocess | No aws CLI dependency in the pod; fewer moving parts. |
| 8 | Three authorization modes: shared (default) / tier / raw | shared matches current state, tier for most teams, raw for power users. Progressive disclosure. |
| 9 | Five built-in tiers with GitHub names: read / triage / write / maintain / admin | GitHub roles are widely understood; triage and maintain fill gaps that 3-tier can't. |
| 10 | Tier custom roles ship with default verb sets, evolve in v1.x | Lock the interface (5 names, GitHub mapping), let implementation learn from real use. |
| 11 | Always prefix impersonated groups (periscope-tier: / periscope:) | Defense against the "impersonate into system:masters" attack class. |
| 12 | Periscope's pod role gets ONLY the impersonate verb | Principle of least privilege; matches Rancher's pattern. |
| 13 | Generic OIDC, tested against Auth0 + Okta | Same code path works for any compliant IdP; ship verified on two, document the rest. |
| 14 | audience config field for IdPs that need it (e.g. Auth0) | Optional; absent for Okta. Keeps the surface generic without breaking Auth0. |
| 15 | Secret references via URL scheme (env / file / aws-secretsmanager / aws-ssm) | One chokepoint, scheme-discoverable from auth.yaml; AWS schemes share the pod's existing credential chain. |
| 16 | Cross-account sts:AssumeRole deferred to PR-C / v1.x | Single-account multi-cluster covers the v1 audience; multi-account orgs use kubeconfig-backend escape hatch. |
| 17 | periscope-rbac CLI deferred to PR-B.2 | Doesn't change what's possible; only changes how nice raw mode is to adopt. |
| 18 | Single IdP tenant per Periscope deployment | Multi-tenant orgs are a hosted-product feature, not OSS v1. |

12. Out of scope (deferred)

  • Cross-account sts:AssumeRole — PR-C. Same-account is in scope for v1.
  • HA session store — v1.x.
  • Verifying additional IdPs (Azure AD/Entra, Google, Keycloak) — possible v1.x; v1 ships verified on Auth0 + Okta only.
  • AWS Identity Center user pass-through — v2 (the headline architectural shift; separate RFC).
  • In-app RBAC management UI — v2.x or later.
  • Auto-refresh of resolved secrets — v1.x.
  • Periscope-issued API tokens for non-browser clients — v2.x (needed once MCP exposure lands without a browser).

13. Phasing

PR-A (shipped):

  • OIDC login (PKCE + BFF), tested with Auth0 and Okta.
  • allowedGroups authorization gate.
  • LoginScreen + UserMenu in the SPA.
  • internal/auth/* package; replaces the dev@local stub.

PR-A.1 (shipped):

  • Generic OIDC rename (Okta → OIDC; package, types, env var).
  • audience config field for Auth0.
  • internal/secrets resolver; hooks oidc.clientSecret to URL-scheme resolution (env / file / aws-secretsmanager / aws-ssm).

Helm chart + setup docs (shipped):

  • deploy/helm/periscope/ with four secret modes (existing/plain/external/native).
  • docs/setup/{auth0,okta,deploy}.md.
  • examples/config/auth.yaml.{auth0,okta} and clusters.yaml.

PR-B (next):

  • Three modes (shared / tier / raw), default shared.
  • internal/authz mode resolver.
  • k8s.Config.Impersonate populated from the resolver.
  • 5 GitHub-named tiers with shipped Helm RBAC manifests.
  • docs/setup/cluster-rbac.md walking through all three modes.
  • Update auth.yaml schema with authorization.mode and friends.

PR-B.1 (UX polish):

  • Forbidden-aware UI (empty states for unlistable resources, calm 403 toasts).
  • Tier badge in <UserMenu>.
  • Role-aware action visibility (don't show "delete" if the user can't delete).

PR-B.2 (CLI):

  • cmd/periscope-rbac — declarative intent file → generated RBAC YAML.
  • --apply + --output modes.
  • ClusterRole discovery + validation.

PR-C / v1.x:

  • Cross-account sts:AssumeRole. Cluster.AssumeRoleArn, role-session-name = periscope/<oidc-sub>, per-cluster STS creds caching, CloudTrail attribution.

v2 (separate RFC):

  • Replace SharedIrsaProvider with UserSsoProvider. AWS calls run as the user's AWS Identity Center session. Layer A is unchanged. Tier/raw modes may fold away on clusters that trust the same IdP directly.

14. Critical files reference

| File | PR | Change |
| --- | --- | --- |
| internal/auth/{config,oidc,session,handlers,middleware,util}.go | PR-A | Layer A — OIDC + sessions + handlers + middleware. |
| internal/secrets/resolver.go | PR-A.1 | Secret-reference URL scheme resolution. |
| internal/credentials/{provider,middleware,shared_irsa}.go | PR-A | Session in context; AWSConfig() exposed; Email/Groups added. |
| cmd/periscope/main.go | PR-A, PR-B | Auth wiring; mode resolver wiring. |
| web/src/auth/{AuthContext,LoginScreen,types}.tsx | PR-A | SPA-side auth. |
| web/src/components/shell/UserMenu.tsx | PR-A | Avatar popover. |
| internal/authz/{mode,tiers}.go | PR-B | Mode resolver + 5 tiers. NEW. |
| internal/k8s/client.go | PR-B | cfg.Impersonate population. |
| internal/k8s/exec.go | PR-B | Verify SPDY/WS impersonation. |
| deploy/helm/periscope/templates/cluster-rbac.yaml | PR-B | The 7 tier-mode RBAC manifests. NEW. |
| docs/setup/cluster-rbac.md | PR-B | Mode-selection + tier walkthrough. NEW. |
| web/src/auth/RoleAwareUI.tsx | PR-B.1 | Forbidden-aware empty states + toasts. NEW. |
| cmd/periscope-rbac/*.go | PR-B.2 | Declarative RBAC CLI. NEW. |
| internal/credentials/shared_irsa.go | PR-C | Per-cluster AssumeRole + STS creds caching. |
| internal/clusters/cluster.go | PR-C | AssumeRoleArn, ExternalID, SessionDurationSeconds. |
| docs/setup/cross-account.md | PR-C | Multi-account deployment guide. NEW. |

PR-B sizing: ~180 LoC backend + 7 manifests in chart + ~300 lines of doc. PR-B.1: ~150 LoC mostly frontend. PR-B.2: ~400 LoC + tests. PR-C: ~80 LoC + ~200 lines of doc.

Total v1 surface delta from where we are now: ~810 LoC + chart RBAC + docs.
