# RFC 0002 — OIDC Login + Per-User K8s Authorization
| | |
|---|---|
| Status | Draft (PR-A landed, PR-A.1 landed, PR-B next) |
| Owner | @gnana997 |
| Started | 2026-05-02 |
| Targets | v1 (ship), v1.x (UX polish, CLI, cross-account), v2 (per-user AWS IDC) |
| Related | GROUND_RULES.md, RFC 0001 (pod exec) |
## 1. Summary
Periscope ships with two orthogonal authentication layers wired into a single
Provider abstraction:
- **Layer A — user identity:** generic OIDC (tested with Auth0, Okta), Authorization Code + PKCE, BFF pattern. The Go backend is the OAuth client; the SPA never sees a token. The session is an `httpOnly; Secure; SameSite=Lax` cookie bound to a server-side session record. Shipped in PR-A + PR-A.1.
- **Layer B — application authentication and per-user K8s authorization:** the Periscope pod uses EKS Pod Identity (preferred) or IRSA (documented fallback) to obtain its own AWS credentials. K8s API calls go through K8s impersonation so each user's IdP groups translate into per-cluster K8s RBAC. Three operator-selectable modes:
| Mode | What it does | Operator burden |
|---|---|---|
| `shared` (default) | No impersonation. All users share the pod role's K8s perms. | Lowest — one Access Entry per cluster. Matches the pre-PR-B status quo. |
| `tier` | Impersonate one of five built-in tiers (read/triage/write/maintain/admin). Operator maps IdP groups → tier. | Medium — apply 7 shipped manifests per cluster; ~5-line config. |
| `raw` | Impersonate the user's actual IdP groups (prefixed). Operator owns all per-cluster RBAC. | High — full RBAC YAML per cluster. The CLI tool (`periscope-rbac`, PR-B.2) makes this manageable. |
Both layers populate the existing internal/credentials.Provider interface, which
every operation already takes as an explicit argument. v1 audit records the OIDC
subject; v1's K8s audit log records the same subject via the impersonation
metadata. v2 swaps Layer B's Provider for UserSsoProvider so AWS calls run as
the user's real AWS Identity Center session, with no API changes anywhere else
in the codebase.
Cross-account sts:AssumeRole (multiple AWS accounts) is explicitly deferred
to PR-C / v1.x. Same-account multi-cluster works with the v1 Layer B; the
kubeconfig backend is the documented escape hatch for cross-account in the
meantime.
## 2. Motivation

**Pitch-defining.** Periscope's headline differentiator is keyless — no long-lived AWS keys mounted in the pod, no kubeconfig with embedded creds on a laptop. Layer B is the technical substance of that pitch.
**Per-user K8s perms are the differentiator from "prettier kubectl."** A dashboard
where everyone has identical K8s permissions is strictly worse than kubectl + AWS SSO + Access Entries for any team beyond ~5 people. Periscope without per-user
authorization doesn't compete with Rancher; it competes with kubectl. We need
at minimum the ability for an org with viewers + operators + admins to express
that distinction.
**Stops the dev-stub from leaking.** `sessionFromRequest()` returned a hardcoded
identity until PR-A. Every audit log entry read `actor=dev@local`. Shipping
without fixing this means audit was theatre. PR-A fixed Layer A; PR-B brings the
same attribution to the K8s side via impersonation.
**Unblocks v2.** The current Provider abstraction already separates "who the actor is" (Session) from "how AWS creds are obtained" (Provider). Layer A populates the actor; Layer B obtains the creds AND propagates the actor's groups into K8s. v2 swaps the Provider implementation for one that uses the user's actual AWS IDC session. The Layer A code is unchanged; the impersonation logic in Layer B can be retained or removed depending on whether each cluster trusts the same IdP directly.
**Catches up on AWS auth.** The original plan said "shared IRSA." That was correct
in 2024. As of 2026: EKS Pod Identity (GA Nov 2023) is now AWS's recommended
default; EKS Access Entries (GA 2024) replaced the aws-auth ConfigMap.
The Periscope code is identical for IRSA vs Pod Identity — the AWS SDK v2
default credential chain handles both. The choice is purely deployment doc. We
standardize on Pod Identity and document IRSA as a fallback for non-EKS hosting.
## 3. Goals and non-goals

### Goals

Layer A (PR-A — done):
- OIDC login via Authorization Code + PKCE, BFF pattern, httpOnly cookie session.
- Generic OIDC (tested with Auth0 and Okta).
- Optional `audience` field for IdPs that need it (e.g. Auth0).
- Refresh-token rotation; absolute session lifetime (8 h default); idle timeout (30 min default); forced re-login at absolute expiry.
- Local logout (default) + RP-initiated logout (kills IdP session too).
- `Provider.Actor()` returns the OIDC `sub` (or `email` if configured).
- Secret resolver (`internal/secrets`) for `oidc.clientSecret`: env var, file, AWS Secrets Manager, AWS SSM Parameter Store. Shipped in PR-A.1.
Layer B (PR-B — next):
- `SharedIrsaProvider` works under EKS Pod Identity and IRSA without code changes; documented deployment for both. Already shipped in scaffolding.
- In-process EKS bearer-token generation (presigned `sts:GetCallerIdentity`). Already shipped in `internal/k8s/token.go`.
- Three authorization modes: `shared` (default) / `tier` / `raw`.
- Five built-in tiers (`read`/`triage`/`write`/`maintain`/`admin`) named after GitHub repo roles for familiarity.
- Helm chart ships the per-cluster RBAC manifests for tier mode (3 built-in ClusterRoleBindings + 2 custom ClusterRoles + 2 custom ClusterRoleBindings = 7 manifests applied once per cluster).
- Periscope's pod role has only `impersonate` on each cluster — no other K8s perms. Defense in depth.
- All deploy guidance uses EKS Access Entries (no `aws-auth` ConfigMap).
### Non-goals

- Cross-account `sts:AssumeRole`. Deferred to PR-C / v1.x. Same-account multi-cluster works with v1 Layer B as-is. Multi-account orgs use the kubeconfig backend escape hatch.
- Multi-tenant OIDC support. Single IdP tenant per Periscope deployment.
- HA session store (Redis, encrypted-cookie stateless). v1 = in-memory map. Single-replica deployment is fine for the audience size; HA is a v1.x concern.
- AWS Identity Center user pass-through. That is v2.
- Verifying additional IdPs. The flow is OIDC-standard and works against any compliant IdP; v1 ships tested against Auth0 and Okta. Azure AD/Entra, Google, and Keycloak should work but are not exercised in CI.
- In-app RBAC management UI. v1 doesn't render or edit ClusterRoleBindings inside Periscope. The CLI (PR-B.2) is the authoring path.
- Periscope-issued API tokens for non-browser clients. Not in v1.
- mTLS / SPIFFE. Out of scope for the foreseeable future.
- Auto-refresh of resolved secrets. Resolution is at startup; rotation = restart in v1. Auto-refresh is a v1.x concern.
## 4. User experience

### Login

- User opens Periscope. No session cookie → SPA shows `<LoginScreen>`.
- Click "Sign in with Okta" (the label is generic; works for any IdP) → full-page redirect to `/api/auth/login` → backend generates PKCE verifier + state, stores them in a short-lived cookie, 302s to the IdP's `authorization_endpoint`.
- User authenticates with the IdP. The IdP 302s back to `/api/auth/callback`.
- Backend validates state, exchanges the code for tokens, validates the ID token (issuer, audience, expiry, signature against the IdP's JWKS), pulls the configured groups claim, evaluates `allowedGroups` + tier mapping, creates a session record, sets the session cookie, 302s to `/`.
- SPA loads. The first request includes the cookie. `<App>` reads `/api/auth/whoami`, which returns `{subject, email, groups, mode, tier?, expiresAt}`. The header shows the user's email + tier badge + a dropdown for Sign out.
### Authorization across the three modes
**shared mode.** All users have identical K8s permissions on each cluster —
whatever the cluster's Access Entry binds Periscope's pod principal to. Periscope
does NOT impersonate. Auth0/Okta groups affect the gate (allowedGroups) but
not in-cluster authz. Best for small teams or POC deployments.
**tier mode.** Periscope impersonates one of five built-in groups based on the
user's IdP claims:
| IdP group | Periscope tier | Impersonate-Group sent to K8s |
|---|---|---|
| SRE-Platform | admin | `periscope-tier:admin` |
| SRE-OnCall | triage | `periscope-tier:triage` |
| Backend-TeamLeads | maintain | `periscope-tier:maintain` |
| Engineering-All | write | `periscope-tier:write` |
| Contractors | read | `periscope-tier:read` |
| (no matching group) | defaultTier | `periscope-tier:read` (configurable) |

Per-cluster RBAC bindings (shipped by the chart) bind these `periscope-tier:*`
groups to standard ClusterRoles. Operators apply them once per cluster and don't
write per-IdP-group YAML.
**raw mode.** Periscope impersonates with the user's actual IdP groups,
prefixed (`periscope:` by default). Operators write all RBAC bindings against
those prefixed group names. Maximum flexibility; maximum operator effort. The
PR-B.2 CLI tool generates the bindings from a declarative intent file to make
this manageable.
### Tier definitions (GitHub-shaped)
| Tier | K8s mapping | Plain English |
|---|---|---|
| read | view (built-in) | Read everything except secrets. |
| triage | shipped `periscope-triage` | Read + debug verbs (exec, logs, port-forward, restart pods, scale workloads). No spec edits. |
| write | edit (built-in) | Modify all namespaced resources except RBAC. |
| maintain | shipped `periscope-maintain` | admin (namespaced, incl. RoleBindings) + cluster-scoped reads on nodes/namespaces/storageclasses. No cluster-level RBAC create. |
| admin | cluster-admin (built-in) | Everything. |
The two custom ClusterRoles ship with sensible default verb sets; operators can
edit them per cluster with `kubectl edit clusterrole`. Verb sets evolve as we
learn from real use; the chart's appVersion tracks shipped role contents.
### Logout

- Default — local logout. "Sign out" → GET `/api/auth/logout` → backend clears the session, 302s to `/` → SPA back to the login screen. The IdP session stays alive (expected behavior for "log out of this app").
- Optional — RP-initiated. A "Sign out everywhere" menu item → backend clears the local session, then 302s the browser through the IdP's `end_session_endpoint` with `id_token_hint` and `post_logout_redirect_uri=…/api/auth/loggedout`. Both sessions end.
### Session lifecycle
- Idle ≥ 30 min: next request returns 401; SPA redirects to login.
- Absolute ≥ 8 h: same; user must re-auth from scratch.
- Refresh on activity: silent. Backend refreshes the access token using the rotated refresh token whenever it's within 60s of expiry; user never notices.
- Refresh failure (IdP revoked the RT): next request 401 → login screen.
- Backend restart (in-memory store): all sessions invalidated. Login required.
### Error states

- IdP unreachable during login: 502 on `/api/auth/callback`. Retry button.
- State/PKCE mismatch: 400, "Login attempt invalid." Defends against replayed callback URLs.
- Forbidden (403) from K8s (tier/raw modes): UI shows a calm "your role doesn't allow this" toast for actions, and empty states with a "contact your cluster admin" hint for resources the user can't list. A detailed UX pass is PR-B.1.
- Cluster role assumption fails (cross-account, future PR-C): the cluster appears red in the picker; hovering shows the AWS error.
## 5. Architecture

### 5.1 Layered model

```
Browser
 │  (httpOnly session cookie; no tokens)
 ▼
Go backend
 ├─ Layer A: OIDC client (BFF)
 │   └─ Session store ⇒ populates credentials.Session{Subject, Email, Groups}
 │
 ├─ Authorization mode resolver (shared|tier|raw)
 │   └─ shared: no impersonation
 │      tier:   map IdP groups → periscope-tier:<tier>
 │      raw:    prefix-passthrough → periscope:<group>
 │
 └─ Layer B: AWS credentials + K8s client
     ├─ AWS default chain → STS creds (Pod Identity / IRSA / local profile)
     ├─ EKS bearer token: presigned sts:GetCallerIdentity (in-process)
     └─ K8s client.Config.Impersonate populated per-request from mode resolver
```

### 5.2 Existing scaffolding
- `internal/credentials/provider.go` — `Provider`, `Factory`, `Session`.
- `internal/credentials/middleware.go` — `Wrap(factory, handler)`.
- `internal/credentials/shared_irsa.go` — `SharedIrsaProvider`. Exposes `AWSConfig()` for the secrets resolver.
- `internal/k8s/client.go` — `buildEKSRestConfig`, `buildKubeconfigRestConfig`.
- `internal/k8s/token.go` — `MintEKSToken` (in-process EKS bearer token).
- `internal/auth/*` — Layer A + secret resolver wiring (PR-A, PR-A.1).
- `internal/secrets/resolver.go` — secret-reference resolution (PR-A.1).
### 5.3 New components for PR-B
- `internal/authz/mode.go` — mode resolver. `Resolve(ctx, session) → []string` returns the impersonated groups based on mode + identity.
- `internal/authz/tiers.go` — built-in tier definitions (5 tiers, GitHub-named).
- `internal/k8s/client.go` — extend `buildRestConfig` to set `cfg.Impersonate` from the resolver.
- `internal/k8s/exec.go` — verify SPDY/WS exec carries impersonation.
- `cmd/periscope/main.go` — wire the mode resolver into the request flow.
- `internal/auth/config.go` — extend with `authorization.mode`, `authorization.groupTiers`, `authorization.defaultTier`, `authorization.groupPrefix`.
- `deploy/helm/periscope/templates/cluster-rbac.yaml` (new) — gated render of the 7 tier-mode RBAC manifests.
- `docs/setup/cluster-rbac.md` (new) — walk-through of all three modes with worked examples.
### 5.4 Deferred to later PRs

- PR-B.1: Forbidden-aware UI — friendly empty states + toast for 403s, tier badge in `<UserMenu>`, role indicator on the cluster row.
- PR-B.2: `cmd/periscope-rbac` CLI — declarative intent file → generated RBAC YAML. Separate binary, vendored alongside Periscope.
- PR-C: Cross-account `sts:AssumeRole` per cluster. Adds `Cluster.AssumeRoleArn`, role-session-name = `periscope/<oidc-sub>`, per-cluster STS creds caching. ~80 LoC.
## 6. Layer A — OIDC

### 6.1 Library choice

- `github.com/coreos/go-oidc/v3` — discovery, ID-token verification, JWKS.
- `golang.org/x/oauth2` — code exchange, refresh, token source.
Skip ory/fosite (it's for building an OAuth provider, not consuming one).
Skip lestrrat-go/jwx for v1 (only needed if we later do signed request objects
or DPoP).
### 6.2 Configuration

`config/auth.yaml`:

```yaml
oidc:
  # Any OIDC-compliant issuer; tested with Auth0 + Okta.
  issuer: https://your-tenant.us.auth0.com/
  clientID: your-application-client-id
  # Secret reference. Resolved through internal/secrets at startup.
  # Schemes: ${ENV}, file://path, aws-secretsmanager://name[#json-key],
  # aws-ssm:///path/to/parameter, or a literal (discouraged).
  clientSecret: ${OIDC_CLIENT_SECRET}
  redirectURL: https://periscope.corp.com/api/auth/callback
  scopes: [openid, profile, email, offline_access]
  # Auth0-only: API audience identifier so the IdP issues a JWT
  # access token. Empty for Okta and most other IdPs.
  audience: ""
  postLogoutRedirect: https://periscope.corp.com/api/auth/loggedout

session:
  cookieName: periscope_session
  idleTimeout: 30m
  absoluteTimeout: 8h
  cookieDomain: periscope.corp.com   # optional

authorization:
  # one of: shared | tier | raw
  mode: shared
  # tier mode:
  groupTiers:
    SRE-Platform: admin
    SRE-OnCall: triage
    Backend-TeamLeads: maintain
    Engineering-All: write
    Contractors: read
  defaultTier: read   # users in no listed group; "" = deny
  # raw mode:
  groupPrefix: "periscope:"
  # gate (all modes): empty = any authenticated user
  allowedGroups: []
  # IdP token claim name. Auth0 needs a namespaced custom claim
  # (e.g. https://periscope/groups); Okta exposes "groups" natively.
  groupsClaim: groups
```

**Secret resolution.** `clientSecret` is the only field that flows through
`internal/secrets.Resolver` today. Resolution happens once at startup; rotation
= restart in v1. The AWS-backed schemes share the pod's default credential chain
(Pod Identity / IRSA / local profile).
### 6.3 Endpoints
| Path | Method | Purpose |
|---|---|---|
| `/api/auth/login` | GET | Generate PKCE+state, set short-lived cookie, 302 to the IdP. |
| `/api/auth/callback` | GET | Exchange code, verify ID token, create session, set session cookie, 302 to `/`. |
| `/api/auth/logout` | GET | Clear local session, 302 to `/`. (No IdP call.) |
| `/api/auth/logout/everywhere` | GET | Clear local session + 302 to IdP `end_session_endpoint`. |
| `/api/auth/loggedout` | GET | Static "you've been signed out" page after IdP logout. |
| `/api/auth/whoami` | GET | `{subject, email, groups, mode, tier?, expiresAt}` for the SPA. |
### 6.4 Session record

```go
type Session struct {
	ID             string // 32 random bytes, base64
	Subject        string // OIDC sub
	Email          string
	Groups         []string
	AccessToken    string // server-side only
	RefreshToken   string // server-side only
	IDToken        string // for RP-initiated logout id_token_hint
	AccessExpiry   time.Time
	AbsoluteExpiry time.Time
	LastActivity   time.Time
}
```

`credentials.Session` is the exposed slice — `{Subject, Email, Groups}`. Tokens
never leave the auth package.
### 6.5 Cookie semantics

- Name: `periscope_session`
- Value: session ID (32 random bytes, base64)
- `HttpOnly: true`
- `Secure: true` on TLS / behind `X-Forwarded-Proto=https`
- `SameSite: Lax` (see CHANGELOG v1.0.0 — Strict broke the post-callback redirect; #37)
- Path: `/`
- MaxAge: matches the absolute timeout (8 h)
- Domain: configurable; default unset (host-only cookie)
### 6.6 Refresh

**Background-free design.** On each authenticated request, middleware checks
time.Until(session.AccessExpiry) < 60s. If so, refresh inline using the RT.
The IdP is configured for rotation: each refresh issues a new RT; the old
one is valid for a 30-second grace window. Refresh failure → invalidate session,
401, SPA redirects to login.
## 7. Layer B — AWS auth + K8s impersonation

### 7.1 AWS credentials (already in place)
The pod's base AWS credentials come from the SDK's default chain. Two supported deployment paths; both produce identical Periscope behavior:
Pod Identity (preferred for new EKS deployments):

```shell
aws eks create-pod-identity-association \
  --cluster-name <hosting-cluster> \
  --namespace periscope \
  --service-account periscope \
  --role-arn arn:aws:iam::111111111111:role/periscope-base
```

Trust policy on `periscope-base`:

```json
{ "Effect": "Allow",
  "Principal": { "Service": "pods.eks.amazonaws.com" },
  "Action": ["sts:AssumeRole", "sts:TagSession"] }
```

IRSA (fallback for older EKS or non-EKS hosting): SA annotation
`eks.amazonaws.com/role-arn: arn:aws:iam::...:role/periscope-base`. Standard
pre-2024 pattern.
### 7.2 EKS API auth (already in place)

`internal/k8s/token.go` mints bearer tokens in-process: a presigned
`sts:GetCallerIdentity` URL with the `x-k8s-aws-id` header → base64url → prefixed
with `k8s-aws-v1.`. 15-minute token TTL; refreshed before expiry. No `aws eks get-token` shell-out, no kubeconfig file.
### 7.3 Per-cluster Access Entry

Each managed EKS cluster gets a single Access Entry binding Periscope's pod
principal to a K8s group (we call it `periscope-bridge`):

```shell
aws eks create-access-entry \
  --cluster-name prod-eu-west-1 \
  --principal-arn arn:aws:iam::222...:role/periscope-base \
  --kubernetes-groups periscope-bridge \
  --type STANDARD
```

The `periscope-bridge` group has only one ClusterRole bound to it: a custom
`periscope-impersonator` granting `impersonate` on users and groups.
Periscope's pod has no other K8s perms on the cluster. Impersonation is strictly
the only thing the principal can do natively; everything else flows through the
impersonated user's RBAC.
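For concreteness, that role/binding pair could look like the following — a sketch, not the shipped chart output. The `resourceNames` restriction is one way to honor the prefix discipline in section 7.5 (RBAC `resourceNames` can't express a prefix wildcard, so tier mode enumerates the five tier group names):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: periscope-impersonator
rules:
  # Impersonating users is unrestricted (the username is the OIDC sub).
  - apiGroups: [""]
    resources: ["users"]
    verbs: ["impersonate"]
  # Groups are restricted to the prefixed tier names, so the pod
  # principal can never impersonate e.g. system:masters.
  - apiGroups: [""]
    resources: ["groups"]
    verbs: ["impersonate"]
    resourceNames:
      - "periscope-tier:read"
      - "periscope-tier:triage"
      - "periscope-tier:write"
      - "periscope-tier:maintain"
      - "periscope-tier:admin"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: periscope-impersonator
subjects:
  - kind: Group
    name: periscope-bridge
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: periscope-impersonator
  apiGroup: rbac.authorization.k8s.io
```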
### 7.4 Per-user K8s impersonation

`buildRestConfig` populates `rest.Config.Impersonate` based on the active mode:

- shared: no Impersonate fields. Periscope acts as `periscope-bridge`.
- tier: `Impersonate.UserName = session.Subject`, `Impersonate.Groups = ["periscope-tier:<tier>"]`, where the tier is resolved from `groupTiers` (default `defaultTier`).
- raw: `Impersonate.UserName = session.Subject`, `Impersonate.Groups = ["periscope:<g>" for g in session.Groups]`.

K8s client-go automatically sends `Impersonate-User` and `Impersonate-Group`
headers on every request. The apiserver re-evaluates RBAC under the impersonated
identity. The audit log shows:

```
user.username           = auth0|alice
user.groups             = ["periscope-tier:admin"]
impersonatedBy.username = system:node:periscope-bridge
```

### 7.5 Group prefix discipline
**Critical.** Both tier and raw modes prefix the impersonated groups
(periscope-tier: and periscope: respectively). This prevents an attacker
who compromises Periscope from impersonating into system:masters or other
privileged groups — RBAC bindings on those groups won't match the prefixed
form.
The chart enforces this by:

- Refusing to render a `periscope-impersonator` ClusterRole that allows impersonating un-prefixed groups.
- Rendering `periscope-tier:*` or `periscope:*` group RBAC only.
- Emitting a startup-time log line listing the exact impersonate verbs the pod role has, so operators can spot drift.
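The per-mode rules from 7.4 plus the unconditional prefixing fit in one small pure function. A sketch with illustrative stand-in types — this is not the actual `internal/authz` API, and tie-breaking when a user matches several `groupTiers` entries isn't pinned by this RFC, so first match in the session's group order is assumed:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the RFC's types.
type Session struct {
	Subject string
	Groups  []string // IdP groups from the configured claim
}

type Config struct {
	Mode        string            // "shared" | "tier" | "raw"
	GroupTiers  map[string]string // IdP group -> tier (tier mode)
	DefaultTier string            // "" = deny users in no listed group
	GroupPrefix string            // raw mode, e.g. "periscope:"
}

// ResolveGroups returns the Impersonate-Group values for a session.
// Prefixing is unconditional in tier and raw modes, so a compromised
// Periscope can never emit an un-prefixed group like system:masters.
func ResolveGroups(cfg Config, s Session) []string {
	switch cfg.Mode {
	case "tier":
		tier := cfg.DefaultTier
		for _, g := range s.Groups {
			if t, ok := cfg.GroupTiers[g]; ok {
				tier = t // first match wins (assumed; see lead-in)
				break
			}
		}
		if tier == "" {
			return nil // defaultTier: "" means deny
		}
		return []string{"periscope-tier:" + tier}
	case "raw":
		out := make([]string, 0, len(s.Groups))
		for _, g := range s.Groups {
			out = append(out, cfg.GroupPrefix+g)
		}
		return out
	default: // shared: no impersonation at all
		return nil
	}
}

func main() {
	cfg := Config{Mode: "tier", GroupTiers: map[string]string{"SRE-Platform": "admin"}, DefaultTier: "read"}
	fmt.Println(ResolveGroups(cfg, Session{Subject: "auth0|alice", Groups: []string{"SRE-Platform"}}))
}
```

A `nil` result in shared mode means `rest.Config.Impersonate` is simply left unset, so Periscope acts as `periscope-bridge` directly.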
### 7.6 Tier mode RBAC (shipped manifests)

The Helm chart renders 7 manifests per cluster (when `clusterRBAC.enabled` and
`authorization.mode: tier`):
| Manifest | Purpose |
|---|---|
| ClusterRole `periscope-impersonator` | The verb (impersonate users, groups) |
| ClusterRoleBinding `periscope-impersonator` | Binds `periscope-bridge` group → impersonator role |
| ClusterRoleBinding `periscope-tier-read` → view | read tier |
| ClusterRoleBinding `periscope-tier-write` → edit | write tier |
| ClusterRoleBinding `periscope-tier-admin` → cluster-admin | admin tier |
| ClusterRole `periscope-triage` + ClusterRoleBinding | triage tier (custom verb set) |
| ClusterRole `periscope-maintain` + ClusterRoleBinding | maintain tier (custom verb set) |
Operators apply with kubectl apply -f. Drift between shipped roles and what's
on the cluster is the operator's problem; chart's appVersion tracks role
contents so operators can pin and rerun on chart upgrade.
The two custom roles' exact verb sets are intentionally not locked in this RFC (decision 10). They ship with sensible defaults and evolve in v1.x based on real-world feedback.
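One of the three built-in bindings, sketched for concreteness (the chart's actual labels and annotations may differ):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: periscope-tier-read
subjects:
  # The group Periscope sends via Impersonate-Group for read-tier users.
  - kind: Group
    name: "periscope-tier:read"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view   # the built-in aggregated read-only role
  apiGroup: rbac.authorization.k8s.io
```

The write and admin bindings are identical in shape, pointing at `edit` and `cluster-admin` respectively.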
### 7.7 Cross-account (deferred to PR-C)

Cross-account access requires the pod role in account A to `sts:AssumeRole`
into a per-cluster role in account B. PR-C adds:

- `Cluster.AssumeRoleArn` (and optional `ExternalID`, `SessionDurationSeconds`) in `clusters.yaml`.
- A `SharedIrsaProvider` extension that calls `AssumeRole` per cluster, caching STS creds keyed by `(cluster, session.Subject)`.
- `RoleSessionName = periscope/<oidc-sub>` so CloudTrail attributes AWS calls back to the human.
- Same-account v1 deployments are unaffected when `AssumeRoleArn` is empty.
The escape hatch for v1 is the kubeconfig backend with an exec credential
plugin that runs `aws eks get-token --role-arn ...`. Workable for a few
clusters; ugly at scale; appropriately sized as v1.x scope.
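That escape hatch is a kubeconfig user entry whose exec plugin assumes the per-cluster role — a sketch; the cluster name and role ARN are placeholders (`--role-arn` is a real `aws eks get-token` flag):

```yaml
users:
  - name: prod-eu-west-1
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args:
          - eks
          - get-token
          - --cluster-name
          - prod-eu-west-1
          - --role-arn
          - arn:aws:iam::222222222222:role/periscope-cluster-access
```

Every cluster in another account needs one of these stanzas, which is why this is sized as a stopgap rather than the product answer.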
## 8. Wiring it together

`cmd/periscope/main.go`:

```go
oidcClient, _ := auth.NewOIDCClient(ctx, cfg.OIDC, cfg.Authorization.GroupsClaim)
sessions := auth.NewMemoryStore()
authMW := auth.Middleware(oidcClient, sessions, cfg)
modeResolver := authz.NewResolver(cfg.Authorization)
factory := credentials.NewSharedIrsaFactoryWithModeResolver(awsCfg, modeResolver)

router.Use(authMW)
auth.RegisterRoutes(router, oidcClient, sessions, cfg)

// Existing routes unchanged. credentials.Wrap pulls the session from
// context, the factory builds a Provider, and k8s.NewClientset pulls
// the impersonation strings from the Provider.
mux.Handle("/api/...", authMW(credentials.Wrap(factory, apiHandler)))
```

`credentials.Wrap` is unchanged externally; the Provider it builds carries the
impersonation context internally.
## 9. Audit logging

Every audit line gains:

```
actor=alice@corp.com actor_groups=periscope-tier:admin session_id=…
```

Audit events:

- `auth.login` (subject, email, groups, ip, user_agent, resolved_tier)
- `auth.login_failed` (reason — `state_mismatch`, `code_exchange_failed`, `id_token_invalid`, `not_in_allowed_groups`)
- `auth.logout` (subject, kind=local|everywhere)
- `auth.session_expired` (subject, kind=idle|absolute|refresh_failed)
- (PR-C) `aws.assume_role_failed` (cluster, role_arn, error_code)

Per-cluster K8s audit logs additionally show:

```
user.username           = auth0|alice
user.groups             = ["periscope-tier:admin"]
impersonatedBy.username = system:node:periscope-bridge
```

This is the full attribution chain — anyone can join app-level audit, K8s audit, and (in PR-C) CloudTrail back to the same OIDC sub.
## 10. Smoke-test plan

### Layer A (already exercised)

- Login happy path; allowed-groups gate; idle/absolute expiry; refresh rotation; local logout; logout-everywhere.

### Layer B — Mode resolution

- shared mode: install with the default `mode: shared`. Two users in two different IdP groups both get full pod-role perms. Confirm the K8s audit log shows the pod principal directly (no `impersonatedBy` field).
- tier mode: configure `groupTiers` mapping group `Engineers` to `read` and `SREs` to `admin`. The engineer can list pods but not delete them. The SRE can do anything. Confirm the K8s audit log shows `user.username = <oidc-sub>`, `user.groups = [periscope-tier:read]` for the engineer.
- raw mode: configure `mode: raw`. Bind a ClusterRole to group `periscope:Engineers`. A user in the `Engineers` IdP group gets exactly that binding's perms.
### Layer B — Tier semantics (custom roles)

- triage tier: confirm `pods/exec`, `pods/log`, `pods/portforward` work; confirm scaling deployments via `/scale` works; confirm pod delete works; confirm deployment spec patch is denied.
- maintain tier: confirm namespace-scoped admin works (RoleBindings ok); confirm cluster-level RBAC create is denied; confirm node read works.
- defaultTier: a user in no listed group lands on the default tier (or is denied if `defaultTier: ""`).
### Security

- Group prefix discipline: try to bind a ClusterRole to plain `system:masters` (no prefix) and confirm Periscope's impersonation NEVER produces `system:masters` regardless of input groups.
- Pod role can't act directly: unbind everything except `periscope-impersonator` from the pod role. Confirm `kubectl get pods` (issued by Periscope without impersonation) returns 403.
### Deferred — PR-C smoke tests

- Cross-account AssumeRole tests; trust-policy validation; CloudTrail attribution.
## 11. Decisions
| # | Decision | Rationale |
|---|---|---|
| 1 | Authorization Code + PKCE + BFF + httpOnly cookie | 2025–2026 industry consensus; SPA token storage is a known XSS-exfil class. |
| 2 | coreos/go-oidc/v3 + golang.org/x/oauth2 | Boring, well-tested, ~150 LoC of glue. |
| 3 | In-memory session store, single-replica deploy | Fits v1 audience size; HA is v1.x. |
| 4 | Local logout default; "logout everywhere" optional | Matches user expectation for "sign out of this app." |
| 5 | Pod Identity preferred, IRSA documented as alternative | Code is identical; deploy doc differs. |
| 6 | EKS Access Entries only (no aws-auth ConfigMap) | aws-auth is deprecated by AWS as of 2024. |
| 7 | In-process EKS bearer token (presigned STS URL); no subprocess | No aws CLI dependency in the pod; fewer moving parts. |
| 8 | Three authorization modes: shared (default) / tier / raw | shared matches current state, tier for most teams, raw for power users. Progressive disclosure. |
| 9 | Five built-in tiers with GitHub names: read / triage / write / maintain / admin | GitHub roles are widely understood; triage and maintain fill gaps that 3-tier can't. |
| 10 | Tier custom roles ship with default verb sets, evolve in v1.x | Lock the interface (5 names, GitHub mapping), let implementation learn from real use. |
| 11 | Always prefix impersonated groups (periscope-tier: / periscope:) | Defense against the "impersonate into system:masters" attack class. |
| 12 | Periscope's pod role gets ONLY the impersonate verb | Principle of least privilege; matches Rancher's pattern. |
| 13 | Generic OIDC, tested against Auth0 + Okta | Same code path works for any compliant IdP; ship verified on two, document the rest. |
| 14 | audience config field for IdPs that need it (e.g. Auth0) | Optional; absent for Okta. Keeps the surface generic without breaking Auth0. |
| 15 | Secret references via URL scheme (env / file / aws-secretsmanager / aws-ssm) | One chokepoint, scheme-discoverable from auth.yaml; AWS schemes share the pod's existing credential chain. |
| 16 | Cross-account sts:AssumeRole deferred to PR-C / v1.x | Single-account multi-cluster covers the v1 audience; multi-account orgs use kubeconfig-backend escape hatch. |
| 17 | periscope-rbac CLI deferred to PR-B.2 | Doesn't change what's possible; only changes how nice raw mode is to adopt. |
| 18 | Single IdP tenant per Periscope deployment | Multi-tenant orgs are a hosted-product feature, not OSS v1. |
## 12. Out of scope (deferred)

- Cross-account `sts:AssumeRole` — PR-C. Same-account is in scope for v1.
- HA session store — v1.x.
- Verifying additional IdPs (Azure AD/Entra, Google, Keycloak) — possible v1.x; v1 ships verified on Auth0 + Okta only.
- AWS Identity Center user pass-through — v2 (the headline architectural shift; separate RFC).
- In-app RBAC management UI — v2.x or later.
- Auto-refresh of resolved secrets — v1.x.
- Periscope-issued API tokens for non-browser clients — v2.x (needed once MCP exposure lands without a browser).
## 13. Phasing

PR-A (shipped):

- OIDC login (PKCE + BFF), tested with Auth0 and Okta.
- `allowedGroups` authorization gate.
- LoginScreen + UserMenu in the SPA.
- `internal/auth/*` package; replaces the `dev@local` stub.
PR-A.1 (shipped):

- Generic OIDC rename (Okta → OIDC; package, types, env var).
- `audience` config field for Auth0.
- `internal/secrets` resolver; hooks `oidc.clientSecret` to URL-scheme resolution (env / file / aws-secretsmanager / aws-ssm).
Helm chart + setup docs (shipped):

- `deploy/helm/periscope/` with four secret modes (existing/plain/external/native).
- `docs/setup/{auth0,okta,deploy}.md`.
- `examples/config/auth.yaml.{auth0,okta}` and `clusters.yaml`.
PR-B (next):

- Three modes (`shared`/`tier`/`raw`), default `shared`.
- `internal/authz` mode resolver.
- `k8s.Config.Impersonate` populated from the resolver.
- 5 GitHub-named tiers with shipped Helm RBAC manifests.
- `docs/setup/cluster-rbac.md` walking through all three modes.
- Update the `auth.yaml` schema with `authorization.mode` and friends.
PR-B.1 (UX polish):

- Forbidden-aware UI (empty states for unlistable resources, calm 403 toasts).
- Tier badge in `<UserMenu>`.
- Role-aware action visibility (don't show "delete" if the user can't delete).
PR-B.2 (CLI):

- `cmd/periscope-rbac` — declarative intent file → generated RBAC YAML.
- `--apply` + `--output` modes.
- ClusterRole discovery + validation.
PR-C / v1.x:

- Cross-account `sts:AssumeRole`. `Cluster.AssumeRoleArn`, role-session-name = `periscope/<oidc-sub>`, per-cluster STS creds caching, CloudTrail attribution.
v2 (separate RFC):

- Replace `SharedIrsaProvider` with `UserSsoProvider`. AWS calls run as the user's AWS Identity Center session. Layer A is unchanged. Tier/raw modes may fold away on clusters that trust the same IdP directly.
## 14. Critical files reference
| File | PR | Change |
|---|---|---|
| `internal/auth/{config,oidc,session,handlers,middleware,util}.go` | PR-A | Layer A — OIDC + sessions + handlers + middleware. |
| `internal/secrets/resolver.go` | PR-A.1 | Secret-reference URL scheme resolution. |
| `internal/credentials/{provider,middleware,shared_irsa}.go` | PR-A | Session in context; `AWSConfig()` exposed; Email/Groups added. |
| `cmd/periscope/main.go` | PR-A, PR-B | Auth wiring; mode resolver wiring. |
| `web/src/auth/{AuthContext,LoginScreen,types}.tsx` | PR-A | SPA-side auth. |
| `web/src/components/shell/UserMenu.tsx` | PR-A | Avatar popover. |
| `internal/authz/{mode,tiers}.go` | PR-B | Mode resolver + 5 tiers. NEW. |
| `internal/k8s/client.go` | PR-B | `cfg.Impersonate` population. |
| `internal/k8s/exec.go` | PR-B | Verify SPDY/WS impersonation. |
| `deploy/helm/periscope/templates/cluster-rbac.yaml` | PR-B | The 7 tier-mode RBAC manifests. NEW. |
| `docs/setup/cluster-rbac.md` | PR-B | Mode-selection + tier walkthrough. NEW. |
| `web/src/auth/RoleAwareUI.tsx` | PR-B.1 | Forbidden-aware empty states + toasts. NEW. |
| `cmd/periscope-rbac/*.go` | PR-B.2 | Declarative RBAC CLI. NEW. |
| `internal/credentials/shared_irsa.go` | PR-C | Per-cluster AssumeRole + STS creds caching. |
| `internal/clusters/cluster.go` | PR-C | AssumeRoleArn, ExternalID, SessionDurationSeconds. |
| `docs/setup/cross-account.md` | PR-C | Multi-account deployment guide. NEW. |
PR-B sizing: ~180 LoC backend + 7 manifests in chart + ~300 lines of doc. PR-B.1: ~150 LoC mostly frontend. PR-B.2: ~400 LoC + tests. PR-C: ~80 LoC + ~200 lines of doc.
Total v1 surface delta from where we are now: ~810 LoC + chart RBAC + docs.