This page covers sandbox security posture and how to configure it via the API.

Non-root execution

  • pod_non_root (boolean, default false): Run the Pod as non-root (UID/GID/FSGroup 65532). Applies pod-wide filesystem ownership.
  • container_non_root (boolean, default false): Run the main container as non-root (UID 65532) and disallow privilege escalation.
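For intuition, these flags map onto Kubernetes security contexts roughly as follows (an illustrative sketch of the fields implied by the defaults above, not necessarily the exact objects K7 renders). pod_non_root corresponds to a Pod-level securityContext like:
{
  "runAsUser": 65532,
  "runAsGroup": 65532,
  "fsGroup": 65532
}
and container_non_root to a container-level securityContext like:
{
  "runAsUser": 65532,
  "allowPrivilegeEscalation": false
}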
Guidance:
  • Enable both flags for consistent non-root behavior and fewer permission surprises when writing to volumes.
  • Some package managers (e.g., Alpine's apk add) require root. If you need to install packages inside the container, you have options:
    • Use a base image that already includes the needed tools, so before_script has nothing to install, or
    • Temporarily run the main container as root by leaving container_non_root disabled during setup, or
    • Build a custom image with dependencies pre-installed (recommended for production reproducibility).
Example (non-root):
{
  "name": "nr-example",
  "image": "alpine:latest",
  "pod_non_root": true,
  "container_non_root": true
}
Example (install packages first as root, then lock down egress):
{
  "name": "setup-then-lock",
  "image": "alpine:latest",
  "before_script": "apk add --no-cache curl git",
  "egress_whitelist": ["203.0.113.0/24"]
}

Linux capabilities

Default policy: drop ALL capabilities and add back only what you need. Specifying cap_drop explicitly overrides this default; to keep the drop-ALL baseline while adding back minimal capabilities, leave cap_drop unset and use only cap_add.
  • cap_drop (string[]): Capabilities to drop. If omitted, ALL is dropped by default.
  • cap_add (string[]): Capabilities to add back.
  • allow_privilege_escalation: always set to false.
  • Seccomp profile: RuntimeDefault.
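Under these defaults, the rendered container securityContext looks roughly like the following (an illustrative sketch, assuming cap_add is ["CHOWN"]; K7 may emit additional fields):
{
  "capabilities": { "drop": ["ALL"], "add": ["CHOWN"] },
  "allowPrivilegeEscalation": false,
  "seccompProfile": { "type": "RuntimeDefault" }
}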
Examples: Minimal add-back while still dropping ALL by default (cap_drop left unset):
{
  "name": "caps-minimal",
  "image": "alpine:latest",
  "cap_add": ["CHOWN"]
}
Override the default drop policy (not recommended unless you have a specific reason):
{
  "name": "caps-custom",
  "image": "alpine:latest",
  "cap_drop": ["NET_RAW"],
  "cap_add": []
}

Network isolation and egress lockdown

Ingress isolation (Default: Enabled)

All inter-VM communication is blocked by default to prevent sandbox-to-sandbox access. This provides strong isolation between different sandboxes running in the same cluster. Key points:
  • Ingress blocking: VM sandboxes cannot communicate with each other by default
  • Administrative access preserved: kubectl exec and k7 shell still work normally (they use the Kubernetes API, not pod networking)
  • System services allowed: Traffic from kube-system namespace is permitted for cluster functionality
  • No configuration needed: This security feature is enabled by default for all sandboxes
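Conceptually, the default isolation behaves like a Kubernetes NetworkPolicy of roughly this shape (an illustrative sketch; the namespace and policy names are placeholders, and the exact object K7 installs may differ):
{
  "apiVersion": "networking.k8s.io/v1",
  "kind": "NetworkPolicy",
  "metadata": { "name": "sandbox-ingress-isolation", "namespace": "sandboxes" },
  "spec": {
    "podSelector": {},
    "policyTypes": ["Ingress"],
    "ingress": [
      { "from": [ { "namespaceSelector": { "matchLabels": { "kubernetes.io/metadata.name": "kube-system" } } } ] }
    ]
  }
}
The empty podSelector matches every sandbox Pod, and the single ingress rule admits only kube-system traffic; all other ingress, including from other sandboxes, is denied.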

Egress lockdown and whitelisting

Use egress_whitelist to control outbound traffic. The policy is applied after the container becomes Ready, so before_script runs with open egress. Behavior:
  • Omit egress_whitelist: egress open (external internet allowed).
  • []: full egress block (no DNS resolution; no outbound IPs).
  • ["CIDR", ...]: allow only listed CIDR blocks; DNS is blocked.
Examples: Full isolation (no inter-VM communication, no external access):
{ "name": "fully-isolated", "image": "alpine:latest", "egress_whitelist": [] }
Partial isolation (no inter-VM communication, but external internet allowed):
{ "name": "partial-isolation", "image": "alpine:latest" }
Whitelist specific external services (avoid public DNS resolvers):
{
  "name": "egress-restricted",
  "image": "alpine:latest",
  "egress_whitelist": ["10.0.0.5/32"]
}
Network Policy Details:
  • Ingress: Blocked by default (inter-VM isolation) - system services and kubectl exec still work
  • DNS: When egress is locked down, DNS resolution is blocked by default (no CoreDNS exception)
  • Administrative access: kubectl exec, k7 shell, and API operations bypass network policies
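For intuition, an egress_whitelist of ["10.0.0.5/32"] implies a policy of roughly this shape (an illustrative sketch; names and selectors are placeholders). Note the absence of any rule for kube-dns, which is why name resolution fails under lockdown:
{
  "apiVersion": "networking.k8s.io/v1",
  "kind": "NetworkPolicy",
  "metadata": { "name": "sandbox-egress-whitelist", "namespace": "sandboxes" },
  "spec": {
    "podSelector": { "matchLabels": { "sandbox": "egress-restricted" } },
    "policyTypes": ["Egress"],
    "egress": [
      { "to": [ { "ipBlock": { "cidr": "10.0.0.5/32" } } ] }
    ]
  }
}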
Do not whitelist public DNS resolver IPs (e.g., 1.1.1.1, 8.8.8.8). Because K7's egress_whitelist is CIDR-only (no L7/port rules), allowing those IPs enables outbound DNS (UDP/TCP 53) and DNS-over-HTTPS (443), which can be used for exfiltration. If you want deny-by-default egress with a whitelist, prefer whitelisting only your own egress proxy/gateway IP and enforce DNS/DoH policy at that proxy. Once Cilium integration lands (a roadmap feature), this will be simpler: you'll be able to whitelist domain names directly.
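For example, a proxy-only lockdown could look like the following, where 10.0.0.7/32 is a placeholder for your own egress proxy's address and the workload must be configured (e.g., via HTTP_PROXY/HTTPS_PROXY) to route through it:
{
  "name": "proxy-only-egress",
  "image": "alpine:latest",
  "egress_whitelist": ["10.0.0.7/32"]
}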

Mitigations when DNS is blocked

  • Use IP/CIDR whitelisting only (domain names cannot be resolved after lockdown)
  • Pre-resolve/fetch in before_script, which runs before lockdown with open egress (see the sketch after this list)
  • If you must allow DNS temporarily, consider an operational override at cluster level (not provided by K7 config)
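As a sketch of the pre-fetch mitigation (the URL and pinned IP below are placeholders): before_script downloads what the workload needs while egress is still open, and the whitelist then pins only the single IP required at runtime:
{
  "name": "prefetch-then-lock",
  "image": "alpine:latest",
  "before_script": "apk add --no-cache curl && curl -fsSL -o /tmp/artifact.tgz https://artifacts.example.com/artifact.tgz",
  "egress_whitelist": ["198.51.100.10/32"]
}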