- 64 Actual Exam Questions
- Compatible with all Devices
- Printable Format
- No Download Limits
- 90 Days Free Updates
Get All Certified Kubernetes Security Specialist Exam Questions with Validated Answers
| Vendor: | Linux Foundation |
|---|---|
| Exam Code: | CKS |
| Exam Name: | Certified Kubernetes Security Specialist |
| Exam Questions: | 64 |
| Last Updated: | February 24, 2026 |
| Related Certifications: | Kubernetes Security Specialist |
| Exam Tags: | Intermediate, Kubernetes Specialist, Kubernetes Administrator, Kubernetes Practitioner |
Looking for a hassle-free way to pass the Linux Foundation Certified Kubernetes Security Specialist exam? DumpsProvider provides the most reliable Dumps Questions and Answers, designed by Linux Foundation certified experts to help you succeed in record time. Available in both PDF and Online Practice Test formats, our study materials cover every major exam topic, so you could be ready to pass in as little as one day!
DumpsProvider is a leading provider of high-quality exam dumps, trusted by professionals worldwide. Our Linux Foundation CKS exam questions give you the knowledge and confidence needed to succeed on the first attempt.
Train with our Linux Foundation CKS exam practice tests, which simulate the actual exam environment. This real-test experience helps you get familiar with the format and timing of the exam, ensuring you're 100% prepared for exam day.
Your success is our commitment! That's why DumpsProvider offers a 100% money-back guarantee. If you don't pass the Linux Foundation CKS exam, we'll refund your payment within 24 hours, no questions asked.
Don’t waste time with unreliable exam prep resources. Get started with DumpsProvider’s Linux Foundation CKS exam dumps today and achieve your certification effortlessly!
SIMULATION
Documentation: Namespace, NetworkPolicy, Pod
You must connect to the correct host. Failure to do so may result in a zero score.
[candidate@base] $ ssh cks000031
Context
You must implement NetworkPolicies controlling the traffic flow of existing Deployments across namespaces.
Task
First, create a NetworkPolicy named deny-policy in the prod namespace to block all ingress traffic.
The prod namespace is labeled env:prod
Next, create a NetworkPolicy named allow-from-prod in the data namespace to allow ingress traffic only from Pods in the prod namespace.
Use the label of the prod namespace to allow the traffic.
The data namespace is labeled env:data
Do not modify or delete any namespaces or Pods. Only create the required NetworkPolicies.
1) Connect to the correct host
ssh cks000031
sudo -i
2) Use admin kubeconfig (safe default)
export KUBECONFIG=/etc/kubernetes/admin.conf
PART A --- Deny ALL ingress traffic in prod namespace
Requirement:
NetworkPolicy name: deny-policy
Namespace: prod (namespace is labeled env=prod)
Effect: block all ingress
3) Create deny-policy in prod
Create the policy directly with kubectl (fastest & safest):
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-policy
  namespace: prod
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
What this does:
podSelector: {} selects all Pods in prod
Because no ingress rules are defined, all ingress traffic is denied
4) Verify
kubectl -n prod get networkpolicy deny-policy
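If you want to see the policy in action, you can probe any Pod in prod from a short-lived client Pod in another namespace; `<prod-pod-ip>` below is a placeholder you would take from the first command's output:

```shell
# Find a target Pod IP in prod:
kubectl -n prod get pods -o wide

# From another namespace, the connection should now time out
# (busybox wget uses -T for the timeout in seconds):
kubectl -n default run np-test --rm -it --image=busybox:1.36 --restart=Never \
  -- wget -qO- -T 3 http://<prod-pod-ip>
```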
PART B --- Allow ingress to data ONLY from Pods in prod
Requirement:
NetworkPolicy name: allow-from-prod
Namespace: data (namespace is labeled env=data)
Allow ingress only from Pods in prod namespace
Use namespace label (env=prod)
5) Create allow-from-prod policy in data
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-prod
  namespace: data
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: prod
EOF
What this does:
Applies to all Pods in data
Allows ingress only from namespaces labeled env=prod
All other ingress traffic is denied by default
6) Verify
kubectl -n data get networkpolicy allow-from-prod
FINAL CHECK (What the examiner expects)
kubectl get networkpolicy -n prod
kubectl get networkpolicy -n data
You should see:
deny-policy in prod
allow-from-prod in data
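A quick end-to-end check of both policies (again, `<data-pod-ip>` is a placeholder taken from `kubectl -n data get pods -o wide`):

```shell
# Allowed: prod -> data should succeed
kubectl -n prod run allow-test --rm -it --image=busybox:1.36 --restart=Never \
  -- wget -qO- -T 3 http://<data-pod-ip>

# Denied: any other namespace -> data should time out
kubectl -n default run deny-test --rm -it --image=busybox:1.36 --restart=Never \
  -- wget -qO- -T 3 http://<data-pod-ip>
```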
SIMULATION
You can switch the cluster/configuration context using the following command:
[desk@cli] $kubectl config use-context dev
Context:
A CIS Benchmark tool was run against the kubeadm created cluster and found multiple issues that must be addressed.
Task:
Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the API server:
1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow (FAIL)
1.2.8 Ensure that the --authorization-mode argument includes Node (FAIL)
1.2.9 Ensure that the --authorization-mode argument includes RBAC (FAIL)
Fix all of the following violations that were found against the Kubelet:
4.2.1 Ensure that the --anonymous-auth argument is set to false (FAIL)
4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (FAIL) (use Webhook authn/authz where possible)
Fix all of the following violations that were found against etcd:
2.2 Ensure that the --client-cert-auth argument is set to true
worker1 $ vim /var/lib/kubelet/config.yaml
authentication:
  anonymous:
    enabled: true    # Delete this
    enabled: false   # Replace with this
authorization:
  mode: AlwaysAllow  # Delete this
  mode: Webhook      # Replace with this
worker1 $ systemctl restart kubelet   # To reload kubelet config
ssh to master1
master1 $ vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --authorization-mode=Node,RBAC
master1 $ vim /etc/kubernetes/manifests/etcd.yaml
- --client-cert-auth=true
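Since kube-apiserver and etcd run as static Pods, the kubelet recreates them automatically when the manifests change. One way to confirm the new flags are live (assuming a standard kubeadm layout):

```shell
# Wait for the API server Pod to come back:
kubectl -n kube-system get pods -l component=kube-apiserver

# Check the flags on the running processes:
ps -ef | grep kube-apiserver | grep -o 'authorization-mode=[^ ]*'   # expect: authorization-mode=Node,RBAC
ps -ef | grep etcd | grep -o 'client-cert-auth=[^ ]*'               # expect: client-cert-auth=true
```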
Explanation
ssh to worker1
worker1 $ vim /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: true    # Delete this
    enabled: false   # Replace with this
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: AlwaysAllow  # Delete this
  mode: Webhook      # Replace with this
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
worker1 $ systemctl restart kubelet   # To reload kubelet config
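To double-check the kubelet fixes on worker1, you can inspect the service and send an anonymous request to the kubelet API (10250 is the default secure port); with anonymous-auth disabled it should be rejected:

```shell
systemctl status kubelet --no-pager

# Anonymous requests should now be rejected with 401:
curl -sk https://localhost:10250/pods -o /dev/null -w '%{http_code}\n'
```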
ssh to master1
master1 $ vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --authorization-mode=Node,RBAC
master1 $ vim /etc/kubernetes/manifests/etcd.yaml
- --client-cert-auth=true
SIMULATION
You can switch the cluster/configuration context using the following command:
[desk@cli] $kubectl config use-context qa
Context:
A pod fails to run because of an incorrectly specified ServiceAccount
Task:
Create a new service account named backend-qa in an existing namespace qa, which must not have access to any secret.
Edit the frontend pod yaml to use backend-qa service account
Note:You can find the frontend pod yaml at /home/cert_masters/frontend-pod.yaml
[desk@cli] $k create sa backend-qa -n qa
serviceaccount/backend-qa created
[desk@cli] $k get role,rolebinding -n qa
No resources found in qa namespace.
[desk@cli] $k create role backend -n qa --resource pods,namespaces,configmaps --verb list
role.rbac.authorization.k8s.io/backend created
[desk@cli] $k create rolebinding backend -n qa --role backend --serviceaccount qa:backend-qa
rolebinding.rbac.authorization.k8s.io/backend created
[desk@cli] $vim /home/cert_masters/frontend-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  serviceAccountName: backend-qa   # Add this
  containers:
  - image: nginx
    name: frontend
[desk@cli] $k apply -f /home/cert_masters/frontend-pod.yaml
pod/frontend created
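Because the task explicitly requires that backend-qa has no access to Secrets, it is worth confirming with kubectl auth can-i before finishing:

```shell
# Both should print "no":
kubectl auth can-i get secrets -n qa --as system:serviceaccount:qa:backend-qa
kubectl auth can-i list secrets -n qa --as system:serviceaccount:qa:backend-qa

# The granted verbs should still work:
kubectl auth can-i list pods -n qa --as system:serviceaccount:qa:backend-qa   # expect: yes
```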
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
SIMULATION
Context:
Cluster: prod
Master node: master1
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $kubectl config use-context prod
Task:
Analyse and edit the given Dockerfile (based on the ubuntu:18.04 image)
/home/cert_masters/Dockerfile, fixing two instructions present in the file that are prominent security/best-practice issues.
Analyse and edit the given manifest file
/home/cert_masters/mydeployment.yaml, fixing two fields present in the file that are prominent security/best-practice issues.
Note: Don't add or remove configuration settings; only modify the existing configuration settings, so that two configuration settings each are no longer security/best-practice concerns.
Should you need an unprivileged user for any of the tasks, use user nobody with user id 65535.
1. For Dockerfile:Fix the image version & user name in Dockerfile
2. For mydeployment.yaml : Fix security contexts
Explanation
[desk@cli] $vim /home/cert_masters/Dockerfile
FROM ubuntu:latest   # Remove this
FROM ubuntu:18.04    # Add this
USER root            # Remove this
USER nobody          # Add this
RUN apt-get install -y lsof=4.72 wget=1.17.1 nginx=4.2
ENV ENVIRONMENT=testing
USER root            # Remove this
USER nobody          # Add this
CMD ['nginx -d']

[desk@cli] $vim /home/cert_masters/mydeployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: kafka
  name: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kafka
    spec:
      containers:
      - image: bitnami/kafka
        name: kafka
        volumeMounts:
        - name: kafka-vol
          mountPath: /var/lib/kafka
        securityContext: {capabilities: {add: ["NET_ADMIN"], drop: ["all"]}, privileged: true, readOnlyRootFilesystem: false, runAsUser: 65535}   # Delete this
        securityContext: {capabilities: {add: ["NET_ADMIN"], drop: ["all"]}, privileged: false, readOnlyRootFilesystem: true, runAsUser: 65535}   # Add this
        resources: {}
      volumes:
      - name: kafka-vol
        emptyDir: {}
status: {}
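After saving, apply the manifest and spot-check that the two corrected fields are what the cluster actually stores:

```shell
kubectl apply -f /home/cert_masters/mydeployment.yaml

kubectl get deploy kafka -o jsonpath='{.spec.template.spec.containers[0].securityContext.privileged}{"\n"}'              # expect: false
kubectl get deploy kafka -o jsonpath='{.spec.template.spec.containers[0].securityContext.readOnlyRootFilesystem}{"\n"}'  # expect: true
```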
SIMULATION
Documentation: Secrets, TLS Secrets, Volumes
You must connect to the correct host. Failure to do so may result in a zero score.
[candidate@base] $ ssh cks000m40
Context
You must complete securing access to a web server using SSL files stored in a TLS Secret.
Task
Create a TLS Secret named clever-cactus in the clever-cactus namespace for an existing Deployment named clever-cactus.
Use the following SSL files:

| File | Path |
|---|---|
| Certificate | /home/candidate/clever-cactus/web.k8s.local.crt |
| Key | /home/candidate/clever-cactus/web.k8s.local.key |
The Deployment is already configured to use the TLS Secret.
Do not modify the existing Deployment.
Failure to do so may result in a reduced score.
1) Connect to the correct host
ssh cks000m40
sudo -i
export KUBECONFIG=/etc/kubernetes/admin.conf
2) Verify namespace exists (quick check)
kubectl get ns clever-cactus
3) Verify certificate and key files exist
ls -l /home/candidate/clever-cactus/web.k8s.local.crt
ls -l /home/candidate/clever-cactus/web.k8s.local.key
Both files must exist.
4) Create the TLS Secret (THIS IS THE MAIN TASK)
Create a TLS Secret named clever-cactus in namespace clever-cactus:
kubectl -n clever-cactus create secret tls clever-cactus \
--cert=/home/candidate/clever-cactus/web.k8s.local.crt \
--key=/home/candidate/clever-cactus/web.k8s.local.key
Do NOT use apply
Do NOT edit the Deployment
5) Verify the Secret
kubectl -n clever-cactus get secret clever-cactus
Expected type:
kubernetes.io/tls
Optional detail check:
kubectl -n clever-cactus describe secret clever-cactus
You should see:
tls.crt
tls.key
6) (Optional) Confirm Pods are running
Since the Deployment is already configured to use the Secret, Pods should now work.
kubectl -n clever-cactus get pods
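For extra certainty that the Secret contains the right certificate, you can decode tls.crt and inspect it with openssl (assuming openssl is installed on the host):

```shell
kubectl -n clever-cactus get secret clever-cactus -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject -enddate
```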
Security & Privacy
Satisfied Customers
Committed Service
Money Back Guaranteed