- 48 Actual Exam Questions
- Compatible with all Devices
- Printable Format
- No Download Limits
- 90 Days Free Updates
Get All Certified Kubernetes Application Developer Exam Questions with Validated Answers
| Vendor: | Linux Foundation |
|---|---|
| Exam Code: | CKAD |
| Exam Name: | Certified Kubernetes Application Developer |
| Exam Questions: | 48 |
| Last Updated: | April 11, 2026 |
| Related Certifications: | Kubernetes Application Developer |
| Exam Tags: | Intermediate, Kubernetes Application Developer, Kubernetes Developers |
Looking for a hassle-free way to pass the Linux Foundation Certified Kubernetes Application Developer exam? DumpsProvider provides the most reliable Dumps Questions and Answers, designed by Linux Foundation certified experts to help you succeed in record time. Available in both PDF and Online Practice Test formats, our study materials cover every major exam topic, making it possible for you to pass potentially within just one day!
DumpsProvider is a leading provider of high-quality exam dumps, trusted by professionals worldwide. Our Linux Foundation CKAD exam questions give you the knowledge and confidence needed to succeed on the first attempt.
Train with our Linux Foundation CKAD exam practice tests, which simulate the actual exam environment. This real-test experience helps you get familiar with the format and timing of the exam, ensuring you're 100% prepared for exam day.
Your success is our commitment! That's why DumpsProvider offers a 100% money-back guarantee. If you don't pass the Linux Foundation CKAD exam, we'll refund your payment within 24 hours, no questions asked.
Don’t waste time with unreliable exam prep resources. Get started with DumpsProvider’s Linux Foundation CKAD exam dumps today and achieve your certification effortlessly!
SIMULATION
You are asked to prepare a canary deployment for testing a new application release.
You must connect to the correct host. Failure to do so may result in a zero score.
[candidate@base] $ ssh ckad00023
Modify the Deployments so that:
a maximum number of 10 Pods run in the moose namespace.
20% of the chipmunk-service's traffic goes to the canary-chipmunk-deployment Pod(s).

The Service is exposed on NodePort 30000. To test its load balancing, run:
[candidate@ckad00023] $ curl http://localhost:30000/
or open this URL in the remote desktop's browser.
ssh ckad00023
You need two outcomes in moose:
At most 10 Pods total (across both Deployments).
About 20% of chipmunk-service traffic goes to canary-chipmunk-deployment.
In Kubernetes Services, traffic distribution is (roughly) proportional to the number of ready endpoints behind the Service. So the standard canary trick is:
total endpoints = 10
canary endpoints = 2
current endpoints = 8
That gives ~20% to canary.
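The replica arithmetic above can be sketched as a small helper (`canary_split` is an illustrative name, not part of any kubectl workflow):

```python
def canary_split(max_pods: int, canary_fraction: float) -> tuple:
    """Split max_pods into (current, canary) replica counts so that
    canary pods make up roughly canary_fraction of all ready
    endpoints behind the Service."""
    canary = max(1, round(max_pods * canary_fraction))  # keep at least one canary pod
    current = max_pods - canary
    return current, canary

print(canary_split(10, 0.20))  # → (8, 2)
```

With a 10-pod cap and a 20% target, this yields the 8/2 split used in the steps below.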
1) Inspect what exists
kubectl -n moose get deploy
kubectl -n moose get svc chipmunk-service -o wide
kubectl -n moose describe svc chipmunk-service
Get the Service selector (important):
kubectl -n moose get svc chipmunk-service -o jsonpath='{.spec.selector}{"\n"}'
Check current replicas:
kubectl -n moose get deploy current-chipmunk-deployment -o jsonpath='{.spec.replicas}{"\n"}'
kubectl -n moose get deploy canary-chipmunk-deployment -o jsonpath='{.spec.replicas}{"\n"}'
List pods + labels (to confirm both Deployments' pods match the Service selector):
kubectl -n moose get pods --show-labels
2) Ensure both Deployments are behind the Service
This is the key: the pods from BOTH deployments must match the Service selector.
If the Service selector is something like app=chipmunk, then both Deployments' pod templates must include app: chipmunk.
If one Deployment doesn't match, patch its pod template labels to match the selector.
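To make the matching concrete, here is what the two fragments need to look like when the selector is `app=chipmunk` (the selector value is the example used below; check the real one with the jsonpath command above):

```yaml
# Service: the selector that picks endpoints
spec:
  selector:
    app: chipmunk
---
# Both Deployments: the pod template labels must include the selector
spec:
  template:
    metadata:
      labels:
        app: chipmunk
```

If either Deployment's pod template lacks the label, its pods never become endpoints and the traffic ratio will be wrong.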
2A) Example: selector is app=chipmunk
(Only do this if you see the Service selector contains app=chipmunk and one of the deployments is missing it.)
kubectl -n moose patch deploy current-chipmunk-deployment \
  -p '{"spec":{"template":{"metadata":{"labels":{"app":"chipmunk"}}}}}'
kubectl -n moose patch deploy canary-chipmunk-deployment \
  -p '{"spec":{"template":{"metadata":{"labels":{"app":"chipmunk"}}}}}'
Wait for rollouts if patches triggered new ReplicaSets:
kubectl -n moose rollout status deploy current-chipmunk-deployment
kubectl -n moose rollout status deploy canary-chipmunk-deployment
Verify endpoints now include pods from both deployments:
kubectl -n moose get endpoints chipmunk-service -o wide
3) Set replicas to enforce "max 10 pods" and "20% canary"
Set:
current = 8
canary = 2
Total = 10.
kubectl -n moose scale deploy current-chipmunk-deployment --replicas=8
kubectl -n moose scale deploy canary-chipmunk-deployment --replicas=2
Wait until ready:
kubectl -n moose rollout status deploy current-chipmunk-deployment
kubectl -n moose rollout status deploy canary-chipmunk-deployment
Confirm total pods is 10 (or less) and all are Running/Ready:
kubectl -n moose get pods
kubectl -n moose get pods | tail -n +2 | wc -l
Confirm endpoints count matches 10:
kubectl -n moose get endpoints chipmunk-service -o jsonpath='{.subsets[*].addresses[*].ip}' | wc -w
4) Test load balancing via NodePort 30000
Run several times:
for i in $(seq 1 30); do curl -s http://localhost:30000/; echo; done
You should see canary responses appear roughly ~20% of the time (not exact every run).
If you want a clearer signal, check which pods are endpoints and ensure 2 belong to canary and 8 to current:
kubectl -n moose get pods -l app=chipmunk -o wide
kubectl -n moose get endpoints chipmunk-service -o wide
SIMULATION

Set Configuration Context:
[student@node-1] $ kubectl config use-context k8s
Context
A web application requires a specific version of redis to be used as a cache.
Task
Create a pod with the following characteristics, and leave it running when complete:
* The pod must run in the web namespace.
The namespace has already been created
* The name of the pod should be cache
* Use the lfccncf/redis image with the 3.2 tag
* Expose port 6379
Solution:
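A minimal manifest satisfying the task could look like this (assuming the image name is lfccncf/redis, which the task text may render with a garbled first letter):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache
  namespace: web
spec:
  containers:
  - name: cache
    image: lfccncf/redis:3.2
    ports:
    - containerPort: 6379
```

Apply it with `kubectl apply -f pod.yaml`, or equivalently create the pod imperatively: `kubectl -n web run cache --image=lfccncf/redis:3.2 --port=6379`. Verify with `kubectl -n web get pod cache` and leave it running.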

SIMULATION
You must connect to the correct host. Failure to do so may result in a zero score.
[candidate@base] $ ssh ckad00029
Task
Modify the existing Deployment named store-deployment, running in namespace grubworm, so that its containers:
run with user ID 10000 and
have the NET_BIND_SERVICE capability added
The store-deployment's manifest file:
/home/candidate/daring-moccasin/store-deplovment.vaml
ssh ckad00029
You must modify the existing Deployment store-deployment in namespace grubworm so that its containers:
run as user ID 10000
have Linux capability NET_BIND_SERVICE added
And you're told to use the manifest file at:
/home/candidate/daring-moccasin/store-deplovment.vaml (note: the filename looks misspelled; follow it exactly on the host)
1) Inspect the current Deployment and locate the manifest file
kubectl -n grubworm get deploy store-deployment
ls -l /home/candidate/daring-moccasin/
Open the manifest:
sed -n '1,200p' '/home/candidate/daring-moccasin/store-deplovment.vaml'
2) Edit the manifest to add SecurityContext
Edit the file:
vi '/home/candidate/daring-moccasin/store-deplovment.vaml'
2.1 Set Pod-level runAsUser = 10000
Under:
spec.template.spec, add:
securityContext:
  runAsUser: 10000
2.2 Add NET_BIND_SERVICE capability at container-level
Under the container spec (for each container in containers:), add:
securityContext:
  capabilities:
    add: ['NET_BIND_SERVICE']
A complete example of what it should look like (mind indentation):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-deployment
  namespace: grubworm
spec:
  template:
    spec:
      securityContext:
        runAsUser: 10000
      containers:
      - name: store
        image: someimage
        securityContext:
          capabilities:
            add: ['NET_BIND_SERVICE']
Important notes:
runAsUser can be set at Pod level (applies to all containers) or per-container. Pod-level is cleanest if all containers should run as 10000.
Capabilities must be set per-container (that's where Kubernetes supports it).
Save and exit.
3) Apply the updated manifest
kubectl apply -f '/home/candidate/daring-moccasin/store-deplovment.vaml'
4) Ensure the Deployment rolls out
kubectl -n grubworm rollout status deploy store-deployment
5) Verify the settings are in effect
Check the rendered pod template:
kubectl -n grubworm get deploy store-deployment -o jsonpath='{.spec.template.spec.securityContext}{"\n"}'
kubectl -n grubworm get deploy store-deployment -o jsonpath='{.spec.template.spec.containers[0].securityContext}{"\n"}'
Verify on a running pod:
kubectl -n grubworm get pods
kubectl -n grubworm describe pod <pod-name>
If there are multiple containers
Repeat the container-level securityContext.capabilities.add block for each container under spec.template.spec.containers.
SIMULATION

Task:
The Pod for the Deployment named nosql in the craytisn namespace fails to start because its container runs out of resources.
Update the nosql Deployment so that the Pod:
1) requests 160M of memory for its container
2) limits the memory to half the maximum memory constraint set for the craytisn namespace

Solution:
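A sketch of the fix, following the same approach as the ckad00032 variant further below: find the namespace's maximum memory constraint (typically a LimitRange `max.memory`), then set the container's resources. Assuming, purely for illustration, that the LimitRange max were 512Mi, the container spec would end up with:

```yaml
resources:
  requests:
    memory: "160M"
  limits:
    memory: "256Mi"   # half of the namespace's max memory constraint (512Mi assumed here)
```

This can be applied in place without deleting the Deployment, e.g. `kubectl -n craytisn set resources deploy nosql --requests=memory=160M --limits=memory=256Mi`, substituting the actual half-of-max value found on the host.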




SIMULATION
You must connect to the correct host . Failure to do so may result in a zero score.
[candidate@base] $ ssh ckad00032
The Pod for the Deployment named nosql in the haddock namespace fails to start because its container runs out of resources.
Update the nosql Deployment so that the container:
requests 128Mi of memory
limits the memory to half the maximum memory constraint set for the haddock namespace
Goal: fix nosql Deployment in haddock so the container stops OOM'ing by setting:
memory request = 128Mi
memory limit = half of the namespace's maximum memory constraint
You must do this on the correct host.
0) Connect to the correct host
ssh ckad00032
1) Confirm the failing Deployment / Pods
kubectl -n haddock get deploy nosql
kubectl -n haddock get pods -l app=nosql 2>/dev/null || kubectl -n haddock get pods
If pods are crashing, check why (you'll likely see OOMKilled):
kubectl -n haddock describe pod <pod-name>
2) Find the maximum memory constraint set for the haddock namespace
In CKAD labs, this is commonly enforced by a LimitRange (max memory per container). Sometimes it can also be a ResourceQuota.
2A) Check LimitRange (most likely)
kubectl -n haddock get limitrange
kubectl -n haddock get limitrange -o yaml
Extract the max memory value quickly:
export MAX_MEM=$(kubectl -n haddock get limitrange -o jsonpath='{.items[0].spec.limits[0].max.memory}')
echo "Namespace max memory constraint: $MAX_MEM"
2B) If no LimitRange exists, check ResourceQuota
kubectl -n haddock get resourcequota
kubectl -n haddock describe resourcequota
If a quota is used, you're looking for something like limits.memory (but the question wording "maximum memory constraint" usually points to LimitRange max.memory).
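For reference, a LimitRange that enforces a per-container maximum memory looks like this (the name and the 512Mi value are illustrative; read the real values off the cluster with the commands above):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: haddock
spec:
  limits:
  - type: Container
    max:
      memory: 512Mi
```

The `.spec.limits[0].max.memory` path in this object is exactly what the jsonpath query in step 2A extracts.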
3) Compute "half of the max memory constraint"
Run this small snippet to compute HALF in Mi (handles Mi and Gi):
HALF_MEM=$(MAX_MEM="$MAX_MEM" python3 - <<'PY'
import os, re
q = os.environ.get('MAX_MEM', '').strip()
m = re.fullmatch(r'(\d+)(Mi|Gi)', q)
if not m:
    raise SystemExit(f"Cannot parse MAX_MEM={q!r}. Expected something like 512Mi or 1Gi.")
val = int(m.group(1))
unit = m.group(2)
# convert to Mi
mi = val if unit == 'Mi' else val * 1024
half_mi = mi // 2
print(f"{half_mi}Mi")
PY
)
echo "Half of max: $HALF_MEM"
Example: if MAX_MEM=512Mi, then HALF_MEM=256Mi
Example: if MAX_MEM=1Gi, then HALF_MEM=512Mi
4) Update the nosql Deployment (DO NOT delete it)
First, get the container name (Deployment may have a custom container name):
kubectl -n haddock get deploy nosql -o jsonpath='{.spec.template.spec.containers[*].name}{"\n"}'
Now set resources (this updates the Deployment in-place):
kubectl -n haddock set resources deploy nosql \
--requests=memory=128Mi \
--limits=memory=$HALF_MEM
5) Ensure the update rolls out successfully
kubectl -n haddock rollout status deploy nosql
6) Verify the pod has the right requests/limits
kubectl -n haddock get deploy nosql -o jsonpath='{.spec.template.spec.containers[0].resources}{"\n"}'
kubectl -n haddock get pods
Pick the new pod and confirm:
kubectl -n haddock describe pod <new-pod-name> | grep -A 2 -E 'Limits:|Requests:'
You should see:
Requests: memory 128Mi
Limits: memory <HALF_MEM>
If rollout fails (common cause)
If you accidentally set a limit above the namespace max, pods won't start. Check events:
kubectl -n haddock describe deploy nosql
kubectl -n haddock get events --sort-by=.lastTimestamp | tail -n 20