Linux Foundation CKAD Exam Dumps

Get All Certified Kubernetes Application Developer Exam Questions with Validated Answers

CKAD Pack
Vendor: Linux Foundation
Exam Code: CKAD
Exam Name: Certified Kubernetes Application Developer
Exam Questions: 48
Last Updated: April 11, 2026
Related Certifications: Kubernetes Application Developer
Exam Tags: Intermediate, Kubernetes Application Developer, Kubernetes Developers
Guarantee
  • 24/7 customer support
  • Unlimited Downloads
  • 90 Days Free Updates
  • 10,000+ Satisfied Customers
  • 100% Refund Policy
  • Instantly Available for Download after Purchase

Get Full Access to Linux Foundation CKAD questions & answers in the format that suits you best

PDF Version

$40.00
$24.00
  • 48 Actual Exam Questions
  • Compatible with all Devices
  • Printable Format
  • No Download Limits
  • 90 Days Free Updates

Discount Offer (Bundle pack)

$80.00
$48.00
  • Discount Offer
  • 48 Actual Exam Questions
  • Both PDF & Online Practice Test
  • Free 90 Days Updates
  • No Download Limits
  • No Practice Limits
  • 24/7 Customer Support

Online Practice Test

$30.00
$18.00
  • 48 Actual Exam Questions
  • Actual Exam Environment
  • 90 Days Free Updates
  • Browser Based Software
  • Compatibility: all supported browsers

Pass Your Linux Foundation CKAD Certification Exam Easily!

Looking for a hassle-free way to pass the Linux Foundation Certified Kubernetes Application Developer exam? DumpsProvider provides reliable questions and answers, designed by Linux Foundation certified experts to help you succeed in record time. Available in both PDF and Online Practice Test formats, our study materials cover every major exam topic, so you can be ready in as little as one day.

DumpsProvider is a leading provider of high-quality exam dumps, trusted by professionals worldwide. Our Linux Foundation CKAD exam questions give you the knowledge and confidence needed to succeed on the first attempt.

Train with our Linux Foundation CKAD exam practice tests, which simulate the actual exam environment. This real-test experience helps you get familiar with the format and timing of the exam, ensuring you're 100% prepared for exam day.

Your success is our commitment! That's why DumpsProvider offers a 100% money-back guarantee. If you don't pass the Linux Foundation CKAD exam, we'll refund your payment within 24 hours, no questions asked.
 

Why Choose DumpsProvider for Your Linux Foundation CKAD Exam Prep?

  • Verified & Up-to-Date Materials: Our Linux Foundation experts carefully craft every question to match the latest Linux Foundation exam topics.
  • Free 90-Day Updates: Stay ahead with free updates for 90 days to keep your questions & answers current.
  • 24/7 Customer Support: Get instant help via live chat or email whenever you have questions about our Linux Foundation CKAD exam dumps.

Don’t waste time with unreliable exam prep resources. Get started with DumpsProvider’s Linux Foundation CKAD exam dumps today and achieve your certification effortlessly!

Free Linux Foundation CKAD Exam Actual Questions

Question No. 1

SIMULATION

You are asked to prepare a canary deployment for testing a new application release.

You must connect to the correct host. Failure to do so may result in a zero score.

[candidate@base] $ ssh ckad00023

Modify the Deployments so that:

a maximum number of 10 Pods run in the moose namespace.

20% of the chipmunk-service's traffic goes to the canary-chipmunk-deployment Pod(s)

The Service is exposed on NodePort 30000. To test its load-balancing, run

[candidate@ckad00023] $ curl http://localhost:30000/

or open this URL in the remote desktop's browser.

Solution:

ssh ckad00023

You need two outcomes in moose:

At most 10 Pods total (across both Deployments).

About 20% of chipmunk-service traffic goes to canary-chipmunk-deployment.

In Kubernetes Services, traffic distribution is (roughly) proportional to the number of ready endpoints behind the Service. So the standard canary trick is:

total endpoints = 10

canary endpoints = 2

current endpoints = 8

That gives ~20% to canary.
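The arithmetic generalizes to any total and canary percentage; a quick shell sanity check (illustrative numbers, not exam-specific):

```shell
# Compute the current/canary replica split for an endpoint-proportional canary.
TOTAL=10        # maximum Pods allowed in the namespace
PCT=20          # desired canary traffic percentage
CANARY=$(( TOTAL * PCT / 100 ))
CURRENT=$(( TOTAL - CANARY ))
echo "current=$CURRENT canary=$CANARY"
```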

1) Inspect what exists

kubectl -n moose get deploy

kubectl -n moose get svc chipmunk-service -o wide

kubectl -n moose describe svc chipmunk-service

Get the Service selector (important):

kubectl -n moose get svc chipmunk-service -o jsonpath='{.spec.selector}{"\n"}'

Check current replicas:

kubectl -n moose get deploy current-chipmunk-deployment -o jsonpath='{.spec.replicas}{"\n"}'

kubectl -n moose get deploy canary-chipmunk-deployment -o jsonpath='{.spec.replicas}{"\n"}'

List pods + labels (to confirm both Deployments' pods match the Service selector):

kubectl -n moose get pods --show-labels

2) Ensure both Deployments are behind the Service

This is the key: the pods from BOTH deployments must match the Service selector.

If the Service selector is something like app=chipmunk, then both Deployments' pod templates must include app: chipmunk.

If one Deployment doesn't match, patch its pod template labels to match the selector.

2A) Example: selector is app=chipmunk

(Only do this if you see the Service selector contains app=chipmunk and one of the deployments is missing it.)

kubectl -n moose patch deploy current-chipmunk-deployment \
  -p '{"spec":{"template":{"metadata":{"labels":{"app":"chipmunk"}}}}}'

kubectl -n moose patch deploy canary-chipmunk-deployment \
  -p '{"spec":{"template":{"metadata":{"labels":{"app":"chipmunk"}}}}}'

Wait for rollouts if patches triggered new ReplicaSets:

kubectl -n moose rollout status deploy current-chipmunk-deployment

kubectl -n moose rollout status deploy canary-chipmunk-deployment

Verify endpoints now include pods from both deployments:

kubectl -n moose get endpoints chipmunk-service -o wide

3) Set replicas to enforce "max 10 pods" and "20% canary"

Set:

current = 8

canary = 2

Total = 10.

kubectl -n moose scale deploy current-chipmunk-deployment --replicas=8

kubectl -n moose scale deploy canary-chipmunk-deployment --replicas=2

Wait until ready:

kubectl -n moose rollout status deploy current-chipmunk-deployment

kubectl -n moose rollout status deploy canary-chipmunk-deployment

Confirm the total pod count is 10 (or fewer) and all are Running/Ready:

kubectl -n moose get pods

kubectl -n moose get pods --no-headers | wc -l

Confirm endpoints count matches 10:

kubectl -n moose get endpoints chipmunk-service -o jsonpath='{.subsets[*].addresses[*].ip}' | wc -w

4) Test load balancing via NodePort 30000

Run several times:

for i in $(seq 1 30); do curl -s http://localhost:30000/; echo; done

You should see canary responses appear roughly ~20% of the time (not exact every run).

If you want a clearer signal, check which pods are endpoints and ensure 2 belong to canary and 8 to current:

kubectl -n moose get pods -l app=chipmunk -o wide

kubectl -n moose get endpoints chipmunk-service -o wide


Question No. 2

SIMULATION

Set Configuration Context:

[student@node-1] $ kubectl config use-context k8s
Context

A web application requires a specific version of redis to be used as a cache.

Task

Create a pod with the following characteristics, and leave it running when complete:

* The pod must run in the web namespace.

The namespace has already been created

* The name of the pod should be cache

* Use the lfccncf/redis image with the 3.2 tag

* Expose port 6379


Solution:
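The original answer was published as a screenshot. A minimal sketch of a matching manifest, assuming the image name from the task text (verify it on the exam host):

```yaml
# Hedged sketch: Pod meeting the stated requirements.
apiVersion: v1
kind: Pod
metadata:
  name: cache
  namespace: web
spec:
  containers:
  - name: cache
    image: lfccncf/redis:3.2
    ports:
    - containerPort: 6379
```

Apply with kubectl apply -f cache.yaml and confirm with kubectl -n web get pod cache. An equivalent imperative form is kubectl -n web run cache --image=lfccncf/redis:3.2 --port=6379.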


Question No. 3

SIMULATION

You must connect to the correct host. Failure to do so may result in a zero score.

[candidate@base] $ ssh ckad00029

Task

Modify the existing Deployment named store-deployment, running in namespace

grubworm, so that its containers

run with user ID 10000 and

have the NET_BIND_SERVICE capability added

The store-deployment's manifest file:

/home/candidate/daring-moccasin/store-deplovment.vaml

Solution:

ssh ckad00029

You must modify the existing Deployment store-deployment in namespace grubworm so that its containers:

run as user ID 10000

have Linux capability NET_BIND_SERVICE added

And you're told to use the manifest file at:

/home/candidate/daring-moccasin/store-deplovment.vaml (note: the filename looks misspelled; follow it exactly on the host)

1) Inspect the current Deployment and locate the manifest file

kubectl -n grubworm get deploy store-deployment

ls -l /home/candidate/daring-moccasin/

Open the manifest:

sed -n '1,200p' '/home/candidate/daring-moccasin/store-deplovment.vaml'

2) Edit the manifest to add SecurityContext

Edit the file:

vi '/home/candidate/daring-moccasin/store-deplovment.vaml'

2.1 Set Pod-level runAsUser = 10000

Under spec.template.spec, add:

securityContext:
  runAsUser: 10000

2.2 Add the NET_BIND_SERVICE capability at container level

Under each container in containers:, add:

securityContext:
  capabilities:
    add: ["NET_BIND_SERVICE"]

A complete example of what it should look like (mind indentation):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-deployment
  namespace: grubworm
spec:
  template:
    spec:
      securityContext:
        runAsUser: 10000
      containers:
      - name: store
        image: someimage
        securityContext:
          capabilities:
            add: ["NET_BIND_SERVICE"]

Important notes:

runAsUser can be set at Pod level (applies to all containers) or per-container. Pod-level is cleanest if all containers should run as 10000.

Capabilities must be set per-container (that's where Kubernetes supports it).

Save and exit.

3) Apply the updated manifest

kubectl apply -f '/home/candidate/daring-moccasin/store-deplovment.vaml'

4) Ensure the Deployment rolls out

kubectl -n grubworm rollout status deploy store-deployment

5) Verify the settings are in effect

Check the rendered pod template:

kubectl -n grubworm get deploy store-deployment -o jsonpath='{.spec.template.spec.securityContext}{"\n"}'

kubectl -n grubworm get deploy store-deployment -o jsonpath='{.spec.template.spec.containers[0].securityContext}{"\n"}'

Verify on a running pod:

kubectl -n grubworm get pods

kubectl -n grubworm describe pod <pod-name> | sed -n '/Security Context:/,/Containers:/p'

kubectl -n grubworm describe pod <pod-name> | sed -n '/Containers:/,/Conditions:/p'

If there are multiple containers

Repeat the container-level securityContext.capabilities.add block for each container under spec.template.spec.containers.


Question No. 4

SIMULATION

Task:

The Pod for the Deployment named nosql in the craytisn namespace fails to start because its container runs out of resources.

Update the nosql Deployment so that the Pod:

1) requests 160M of memory for its container

2) limits the memory to half the maximum memory constraint set for the craytisn namespace.


Solution:
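The original answer was published as a screenshot. The procedure mirrors Question No. 5: read the namespace's LimitRange, halve its max memory value, then update the Deployment. A hedged sketch of the resulting resources block (the container name and the 256Mi limit are only illustrations; the limit must be half of whatever max.memory the LimitRange actually sets):

```yaml
# Hedged sketch: resources block for the nosql container.
spec:
  template:
    spec:
      containers:
      - name: nosql            # hypothetical container name; check the Deployment
        resources:
          requests:
            memory: "160M"
          limits:
            memory: "256Mi"   # illustration assuming a 512Mi max in the LimitRange
```

The same change can be applied in place with kubectl -n <namespace> set resources deploy nosql --requests=memory=160M --limits=memory=<half-of-max>.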


Question No. 5

SIMULATION

You must connect to the correct host. Failure to do so may result in a zero score.

[candidate@base] $ ssh ckad00032

The Pod for the Deployment named nosql in the haddock namespace fails to start because its Container runs out of resources.

Update the nosql Deployment so that the container:

requests 128Mi of memory

limits the memory to half the maximum memory constraint set for the haddock namespace

Solution:

Goal: fix nosql Deployment in haddock so the container stops OOM'ing by setting:

memory request = 128Mi

memory limit = half of the namespace's maximum memory constraint

You must do this on the correct host.

0) Connect to the correct host

ssh ckad00032

1) Confirm the failing Deployment / Pods

kubectl -n haddock get deploy nosql

kubectl -n haddock get pods -l app=nosql 2>/dev/null || kubectl -n haddock get pods

If pods are crashing, check why (you'll likely see OOMKilled):

kubectl -n haddock describe pod <pod-name>

2) Find the maximum memory constraint set for the haddock namespace

In CKAD labs, this is commonly enforced by a LimitRange (max memory per container). Sometimes it can also be a ResourceQuota.

2A) Check LimitRange (most likely)

kubectl -n haddock get limitrange

kubectl -n haddock get limitrange -o yaml

Extract the max memory value quickly:

MAX_MEM=$(kubectl -n haddock get limitrange -o jsonpath='{.items[0].spec.limits[0].max.memory}')

echo 'Namespace max memory constraint: $MAX_MEM'

2B) If no LimitRange exists, check ResourceQuota

kubectl -n haddock get resourcequota

kubectl -n haddock describe resourcequota

If a quota is used, you're looking for something like limits.memory (but the question wording "maximum memory constraint" usually points to LimitRange max.memory).

3) Compute ''half of the max memory constraint''

Run this small snippet to compute HALF_MEM in Mi (handles Mi and Gi; MAX_MEM must be exported so the Python subprocess can read it):

export MAX_MEM
HALF_MEM=$(python3 - <<'PY'
import os, re

q = os.environ.get("MAX_MEM", "").strip()
m = re.fullmatch(r"(\d+)(Mi|Gi)", q)
if not m:
    raise SystemExit(f"Cannot parse MAX_MEM={q!r}. Expected something like 512Mi or 1Gi.")
val = int(m.group(1))
unit = m.group(2)
# convert to Mi
mi = val if unit == "Mi" else val * 1024
print(f"{mi // 2}Mi")
PY
)

echo "Half of max: $HALF_MEM"

Example: if MAX_MEM=512Mi, then HALF_MEM=256Mi

Example: if MAX_MEM=1Gi, then HALF_MEM=512Mi
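If python3 is unavailable on the host, the same halving can be done in pure shell; a sketch assuming MAX_MEM carries an Mi or Gi suffix:

```shell
# Halve a memory quantity written as <n>Mi or <n>Gi, emitting Mi.
MAX_MEM=512Mi   # illustrative value; on the exam this comes from the LimitRange
case "$MAX_MEM" in
  *Gi) mi=$(( ${MAX_MEM%Gi} * 1024 )) ;;
  *Mi) mi=${MAX_MEM%Mi} ;;
  *)   echo "unexpected unit in: $MAX_MEM" >&2 ;;
esac
HALF_MEM="$(( mi / 2 ))Mi"
echo "$HALF_MEM"
```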

4) Update the nosql Deployment (DO NOT delete it)

First, get the container name (Deployment may have a custom container name):

kubectl -n haddock get deploy nosql -o jsonpath='{.spec.template.spec.containers[*].name}{"\n"}'

Now set resources (this updates the Deployment in-place):

kubectl -n haddock set resources deploy nosql \
  --requests=memory=128Mi \
  --limits=memory="$HALF_MEM"

5) Ensure the update rolls out successfully

kubectl -n haddock rollout status deploy nosql

6) Verify the pod has the right requests/limits

kubectl -n haddock get deploy nosql -o jsonpath='{.spec.template.spec.containers[0].resources}{"\n"}'

kubectl -n haddock get pods

Pick the new pod and confirm:

kubectl -n haddock describe pod <new-pod-name> | grep -A 2 -E 'Limits:|Requests:'

You should see:

Requests: memory 128Mi

Limits: memory <HALF_MEM>

If rollout fails (common cause)

If you accidentally set a limit above the namespace max, pods won't start. Check events:

kubectl -n haddock describe deploy nosql

kubectl -n haddock get events --sort-by=.lastTimestamp | tail -n 20


  • 100% Security & Privacy
  • 10,000+ Satisfied Customers
  • 24/7 Committed Service
  • 100% Money Back Guaranteed