LPIC-3: Virtualization and Containerization - Exam 305, version 3.0 Questions and Answers
Which of the following commands lists all differences between the disk images vm1-snap.img and vm1.img?
Options:
virt-delta -a vm1-snap.img -A vm1.img
virt-cp-in -a vm1-snap.img -A vm1.img
virt-cmp -a vm1-snap.img -A vm1.img
virt-history -a vm1-snap.img -A vm1.img
virt-diff -a vm1-snap.img -A vm1.img
Answer:
E
Explanation:
The virt-diff command-line tool lists the differences between files in two virtual machines or disk images. It is typically used to show how a guest's disk has changed after the guest has been running, or to show the difference between overlays. The first guest is specified with the -a (disk image) or -d (libvirt domain) option, the second with -A or -D, for example: virt-diff -a old.img -A new.img. Therefore, the correct command to list all differences between the disk images vm1-snap.img and vm1.img is: virt-diff -a vm1-snap.img -A vm1.img. The other options are not valid libguestfs commands: virt-delta, virt-cmp and virt-history do not exist, and the tool that copies files and directories into a disk image is called virt-copy-in, not virt-cp-in. References:
- 21.13. virt-diff: Listing the Differences between Virtual Machine Files …
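As a quick illustration of the typical workflow (using the file names from the question; the guest should be shut down while the images are copied and compared), a sketch might look like this:
cp vm1.img vm1-snap.img                  # take a point-in-time copy of the guest disk
# ... boot and use the guest for a while, then shut it down ...
virt-diff -a vm1-snap.img -A vm1.img     # list files added, deleted or changed since the copy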
Which of the following statements in a Dockerfile leads to a container which outputs hello world? (Choose two.)
Options:
ENTRYPOINT "echo Hello World"
ENTRYPOINT [ "echo hello world" ]
ENTRYPOINT [ "echo", "hello", "world" ]
ENTRYPOINT echo Hello World
ENTRYPOINT "echo", "Hello", "World"
Answer:
C, D
Explanation:
The ENTRYPOINT instruction in a Dockerfile specifies the default command to run when a container is started from the image. It can be written in two forms. The exec form uses a JSON array, such as ENTRYPOINT [ "executable", "param1", "param2" ], and runs the executable directly, without a shell. The shell form is a plain string, such as ENTRYPOINT executable param1 param2, and is run via /bin/sh -c.
Option C, ENTRYPOINT [ "echo", "hello", "world" ], is a valid exec form: the echo binary is invoked with the arguments hello and world and prints hello world. Option D, ENTRYPOINT echo Hello World, is a valid shell form: the shell runs the echo command and prints the message.
The other statements do not work. Option B, ENTRYPOINT [ "echo hello world" ], uses the exec form with a single array element, which Docker treats as the name of an executable; no executable called "echo hello world" exists, so the container fails to start. Options A and E quote the shell-form command, so the shell looks for a command literally named "echo Hello World" (option A) or fails on the comma-separated quoted words (option E) instead of running echo. A minimal Dockerfile showing the working exec form follows the references below. References:
- Dockerfile reference | Docker Docs
- Using the Dockerfile ENTRYPOINT and CMD Instructions - ATA Learning
- Difference Between run, cmd and entrypoint in a Dockerfile
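A minimal sketch exercising the exec form from option C (the base image and tag chosen here are arbitrary examples):
# write a throw-away Dockerfile using the exec form
cat > Dockerfile <<'EOF'
FROM alpine
ENTRYPOINT [ "echo", "hello", "world" ]
EOF
docker build -t entrypoint-demo .
docker run --rm entrypoint-demo          # prints: hello world
The shell form from option D would instead read ENTRYPOINT echo Hello World in the Dockerfile and print the message via /bin/sh -c.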
Which of the following kinds of data can cloud-init process directly from user-data? (Choose three.)
Options:
Shell scripts to execute
Lists of URLs to import
ISO images to boot from
cloud-config declarations in YAML
Base64-encoded binary files to execute
Answer:
A, B, D
Explanation:
Cloud-init is a tool that allows users to customize the configuration and behavior of cloud instances during the boot process. Cloud-init can process different kinds of data that are passed to the instance via user-data, which is a mechanism provided by various cloud providers to inject data into the instance. Among the kinds of data that cloud-init can process directly from user-data are:
- Shell scripts to execute: Cloud-init can execute user-data that is formatted as a shell script, starting with a shebang such as #!/bin/sh or #!/bin/bash. The script can contain any commands that are valid in the shell environment of the instance and is executed as the root user during the boot process (see the sketch after the references below).
- Lists of URLs to import: Cloud-init can import user-data formatted as an include file, i.e. a list of URLs, one per line, introduced by the #include header. Each URL can point to any data source that cloud-init supports, such as shell scripts, cloud-config files, or further include files, and the URLs are fetched and processed in the order in which they appear.
- cloud-config declarations in YAML: Cloud-init can process user-data that is formatted as a cloud-config file, a YAML document containing declarations for the various cloud-init modules. It can specify many aspects of the instance configuration, such as hostname, users, packages, commands, and services. The cloud-config document must start with the #cloud-config header.
The other kinds of data listed in the question are not directly processed by cloud-init from user-data. They are either not supported, not recommended, or require additional steps to be processed. These kinds of data are:
- ISO images to boot from: Cloud-init does not support booting from ISO images that are passed as user-data. ISO images are typically used to install an operating system on a physical or virtual machine, not to customize an existing cloud instance. To boot from an ISO image, the user would need to attach it as a secondary disk to the instance and configure the boot order accordingly.
- Base64-encoded binary files to execute: Cloud-init does not recommend passing binary files as user-data, as they may not be compatible with the instance's architecture or operating system. Base64-encoding does not change this fact, as it only converts the binary data into ASCII characters. To execute a binary file, the user would need to decode it and make it executable on the instance.
References:
- User-Data Formats — cloud-init 22.1 documentation
- User-Data Scripts
- Include File
- Cloud Config
- How to Boot From ISO Image File Directly in Windows
- How to run a binary file as a command in the terminal?
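A minimal sketch of a shell-script user-data file, packed into a NoCloud seed image for local testing (cloud-localds comes from the cloud-image-utils package; the file names used here are made up):
cat > user-data <<'EOF'
#!/bin/bash
# executed once, as root, on the instance's first boot
echo "provisioned by cloud-init" > /etc/motd
EOF
cloud-localds seed.img user-data         # build a seed image that can be attached to a test VM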
What kind of virtualization is implemented by LXC?
Options:
System containers
Application containers
Hardware containers
CPU emulation
Paravirtualization
Answer:
A
Explanation:
LXC implements system containers, which are a type of operating-system-level virtualization. System containers allow running multiple isolated Linux systems on a single Linux control host, using a single Linux kernel. System containers share the same kernel with the host and each other, but have their own file system, libraries, and processes. System containers are different from application containers, which are designed to run a single application or service in an isolated environment. Application containers are usually smaller and more portable than system containers, but also more dependent on the host kernel and libraries. Hardware containers, CPU emulation, and paravirtualization are not related to LXC, as they are different kinds of virtualization methods that involve hardware abstraction, instruction translation, or modification of the guest operating system. References:
- 1: LXC - Wikipedia
- 2: Linux Virtualization : Linux Containers (lxc) - GeeksforGeeks
- 3: Features - Proxmox Virtual Environment
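For illustration, a system container created with the generic download template boots a full userland with its own init system, not just a single application (the distribution, release and container name below are arbitrary examples):
lxc-create -n demo -t download -- -d alpine -r edge -a amd64
lxc-start -n demo
lxc-attach -n demo -- ps aux             # shows init and system services, not a single application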
Which of the following statements are true regarding VirtualBox?
Options:
It is a hypervisor designed as a special kernel that is booted before the first regular operating system starts.
It only supports Linux as a guest operating system and cannot run Windows inside a virtual machine.
It requires dedicated shared storage, as it cannot store virtual machine disk images locally on block devices of the virtualization host.
It provides both a graphical user interface and command line tools to administer virtual machines.
It is available for Linux only and requires the source code of the currently running Linux kernel to be available.
Answer:
D
Explanation:
VirtualBox is a hosted hypervisor, which means it runs as an application on top of an existing operating system, not as a special kernel that is booted before the first regular operating system starts. VirtualBox supports a large number of guest operating systems, including Windows, Linux, Solaris, OS/2, and OpenBSD. VirtualBox does not require dedicated shared storage, as it can store virtual machine disk images locally on block devices of the virtualization host, on network shares, or on iSCSI targets. VirtualBox provides both a graphical user interface (GUI) and command line tools (VBoxManage) to administer virtual machines. VirtualBox is available for Windows, Linux, macOS, and Solaris hosts, and does not require the source code of the currently running Linux kernel to be available. References:
- Oracle VM VirtualBox: Features Overview
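A few VBoxManage calls illustrating the command line interface (the VM name is a made-up example):
VBoxManage list vms                      # list registered virtual machines
VBoxManage startvm "demo-vm" --type headless
VBoxManage controlvm "demo-vm" acpipowerbutton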
Which of the following commands deletes all volumes which are not associated with a container?
Options:
docker volume cleanup
docker volume orphan -d
docker volume prune
docker volume vacuum
docker volume garbage-collect
Answer:
C
Explanation:
The command that deletes all volumes which are not associated with a container is docker volume prune. This command removes all unused local volumes, which are those that are not referenced by any containers. By default, it only removes anonymous volumes, which are those that are not given a specific name when they are created. To remove both unused anonymous and named volumes, the --all or -a flag can be added to the command. The command will prompt for confirmation before deleting the volumes, unless the --force or -f flag is used to bypass the prompt. The command will also show the total reclaimed space after deleting the volumes.
The other commands listed in the question are not valid or do not have the same functionality as docker volume prune. They are either made up, misspelled, or have a different purpose. These commands are:
- docker volume cleanup: This command does not exist in Docker. There is no cleanup subcommand for docker volume.
- docker volume orphan -d: This command does not exist in Docker. There is no orphan subcommand for docker volume, and the -d flag is not a valid option for any docker volume command.
- docker volume vacuum: This command does not exist in Docker. There is no vacuum subcommand for docker volume.
- docker volume garbage-collect: This command does not exist in Docker. There is no garbage-collect subcommand for docker volume.
References:
- docker volume prune | Docker Docs
- How to Remove all Docker Volumes - YallaLabs.
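For example (the --all flag is only available on newer Docker releases):
docker volume prune                      # remove unused anonymous volumes (asks for confirmation)
docker volume prune --all --force        # also remove unused named volumes, without prompting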
Which of the following types of guest systems does Xen support? (Choose two.)
Options:
Foreign architecture guests (FA)
Paravirtualized guests (PV)
Emulated guests
Container virtualized guests
Fully virtualized guests
Answer:
B, E
Explanation:
Xen supports two types of guest systems: paravirtualized guests (PV) and fully virtualized guests (HVM).
- Paravirtualized guests (PV) are guests that have been modified to run on the Xen hypervisor. They use a special kernel that communicates with the hypervisor through hypercalls, and use paravirtualized drivers for I/O devices. PV guests can run faster and more efficiently than HVM guests, but they require the guest operating system to be ported to Xen and to support the Xen ABI.
- Fully virtualized guests (HVM) are guests that run unmodified operating systems on the Xen hypervisor. They use hardware virtualization extensions, such as Intel VT-x or AMD-V, to create a virtual platform for the guest. HVM guests can run any operating system that supports the hardware architecture, but they incur more overhead and performance penalties than PV guests. HVM guests can also use paravirtualized drivers for I/O devices to improve their performance (see the configuration sketch after the references below).
The other options are not correct. Xen does not support foreign architecture guests (FA), emulated guests, or container virtualized guests.
- Foreign architecture guests (FA) are guests that run on a different hardware architecture than the host, for example an ARM guest on an x86 host. Xen does not support this type of virtualization, as it would require emulation or binary translation, which are complex and slow techniques.
- Emulated guests are guests that run on a software emulator that mimics the hardware of the host or another platform, for example a Windows guest on a QEMU emulator. Xen does not support this type of virtualization, as it relies on the emulator to provide the virtual platform, not the hypervisor. Xen can use QEMU to emulate some devices for HVM guests, but not the entire platform.
- Container virtualized guests are guests that run on a shared kernel with the host and other guests, using namespaces and cgroups to isolate them, for example a Linux guest in a Docker container. Xen does not support this type of virtualization, as it requires the guest operating system to be compatible with the host kernel, and it does not provide the same level of isolation and security as hypervisor-based virtualization.
References:
- Xen Project Software Overview - Xen
- Xen ARM with Virtualization Extensions - Xen
- Xen Project Beginners Guide - Xen
- QEMU - Xen
- Docker overview | Docker Documentation
- What is a Container? | App Containerization | VMware
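A sketch of how the two guest types are declared with the xl toolstack (paths, names and the abridged config lines are illustrative only):
# /etc/xen/demo-pv.cfg  would contain: type = "pv"   (Xen-aware kernel, no VT-x/AMD-V required)
# /etc/xen/demo-hvm.cfg would contain: type = "hvm"  (unmodified OS, requires Intel VT-x or AMD-V)
xl create /etc/xen/demo-hvm.cfg          # start the guest described by the config file
xl list                                  # show running domains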
Which of the following statements are true about sparse images in the context of virtual machine storage? (Choose two.)
Options:
Sparse images are automatically shrunk when files within the image are deleted.
Sparse images may consume an amount of space different from their nominal size.
Sparse images can only be used in conjunction with paravirtualization.
Sparse images allocate backend storage at the first usage of a block.
Sparse images are automatically resized when their maximum capacity is about to be exceeded.
Answer:
B, D
Explanation:
Sparse images are virtual disk images that grow in size as data is written to them, but do not shrink when data is deleted from them. Sparse images may consume an amount of space different from their nominal size, which is the maximum size that the image can grow to. For example, a sparse image with a nominal size of 100 GB may only take up 20 GB of physical storage if only 20 GB of data is written to it. Sparse images allocate backend storage at the first usage of a block, which means that physical storage is only used when the virtual machine actually writes data to a block. This can save storage space and improve performance, as the image does not need to be pre-allocated or zeroed out.
Sparse images are not automatically shrunk when files within the image are deleted, because the virtual machine does not inform the host system about the freed blocks. To reclaim the unused space, a special tool such as virt-sparsify1 or qemu-img2 must be used to compact the image. Sparse images can be used with both full virtualization and paravirtualization, as the type of virtualization does not affect the format of the disk image. Sparse images are not automatically resized when their maximum capacity is about to be exceeded, because this would require changing the partition table and the filesystem of the image, which is not a trivial task. To resize a sparse image, a tool such as virt-resize3 or qemu-img2 must be used to increase the nominal size and the filesystem size of the image. References: 1 (search for “virt-sparsify”), 2 (search for “qemu-img”), 3 (search for “virt-resize”).
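A short demonstration with qemu-img (the file name is arbitrary):
qemu-img create -f qcow2 disk.qcow2 100G # nominal size 100 GB, almost no space consumed yet
qemu-img info disk.qcow2                 # compare "virtual size" against "disk size"
du -h disk.qcow2                         # space actually consumed on the host
virt-sparsify --in-place disk.qcow2      # reclaim blocks freed inside the guest (guest must be shut down)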
What does IaaS stand for?
Options:
Information as a Service
Intelligence as a Service
Integration as a Service
Instances as a Service
Infrastructure as a Service
Answer:
E
Explanation:
IaaS is a type of cloud computing service that offers essential compute, storage, and networking resources on demand, on a pay-as-you-go basis. IaaS is one of the four types of cloud services, along with software as a service (SaaS), platform as a service (PaaS), and serverless. IaaS eliminates the need for enterprises to procure, configure, or manage infrastructure themselves, and they only pay for what they use. Some examples of IaaS providers are Microsoft Azure, Google Cloud, and Amazon Web Services.
Virtualization of which hardware component is facilitated by CPUs supporting nested page table extensions, such as Intel Extended Page Table (EPT) or AMD Rapid Virtualization Indexing (RVI)?
Options:
Memory
Network Interfaces
Host Bus Adapters
Hard Disks
IO Cache
Answer:
A
Explanation:
Nested page table extensions, such as Intel Extended Page Table (EPT) or AMD Rapid Virtualization Indexing (RVI), are hardware features that facilitate the virtualization of memory. They allow the CPU to perform the translation of guest virtual addresses to host physical addresses in a single step, without the need for software-managed shadow page tables. This reduces the overhead and complexity of memory management for virtual machines, and improves their performance and isolation. Nested page table extensions do not directly affect the virtualization of other hardware components, such as network interfaces, host bus adapters, hard disks, or IO cache.
References:
- Second Level Address Translation - Wikipedia
- c - What is use of extended page table? - Stack Overflow
- Hypervisor From Scratch – Part 4: Address Translation Using Extended …
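Whether the host CPU and the KVM module are actually using nested paging can be checked on a Linux host like this (the sysfs parameters appear once the respective KVM module is loaded):
grep -E -o -m1 'ept|npt' /proc/cpuinfo   # CPU flag: ept (Intel) or npt (AMD)
cat /sys/module/kvm_intel/parameters/ept # 'Y' when KVM uses EPT (Intel hosts)
cat /sys/module/kvm_amd/parameters/npt   # '1' when KVM uses RVI/NPT (AMD hosts)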
What is true about containerd?
Options:
It is a text file format defining the build process of containers.
It runs in each Docker container and provides DHCP client functionality
It uses runc to start containers on a container host.
It is the initial process run at the start of any Docker container.
It requires the Docker engine and Docker CLI to be installed.
Answer:
C
Explanation:
Containerd is an industry-standard container runtime that uses runc (a low-level container runtime) by default, but can be configured to use others as well. Containerd manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision. It supports the standards established by the Open Container Initiative (OCI). Containerd does not require the Docker engine and Docker CLI to be installed, as it can be used independently or with other container platforms. Containerd is not a text file format, nor does it run in each Docker container or provide DHCP client functionality. It is also not the initial process run at the start of any Docker container; that process is the container's own command, which the low-level runtime runc sets up and executes. References: 1 (search for "containerd"), 2 (search for "Containerd is an open source"), 3 (search for "It uses runc to start containers").
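Containerd ships with a small client, ctr, that talks to it directly without the Docker engine; a quick sketch (the image reference and container ID are examples):
ctr images pull docker.io/library/alpine:latest
ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"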
Which file format is used by libvirt to store configuration data?
Options:
INI-style text files
SQLite databases
XML files
Java-like properties files
Text files containing key/value pairs
Answer:
C
Explanation:
Libvirt uses XML files to store configuration data for objects in the libvirt API, such as domains, networks, storage, etc. This allows for ease of extension in future releases and validation of documents prior to usage. Libvirt does not use any of the other file formats listed in the question. References:
- libvirt: XML Format
- LPIC-3 Virtualization and Containerization: Topic 305.1: Virtualization Concepts and Theory
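The XML can be inspected and modified with virsh (the domain and network names are examples):
virsh dumpxml demo-vm                    # print a domain's XML definition
virsh edit demo-vm                       # edit it in $EDITOR; libvirt validates it on save
virsh net-dumpxml default                # XML of the 'default' virtual network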
After creating a new Docker network using the following command:
docker network create --driver bridge isolated_nw
which parameter must be added to docker create in order to attach a container to the network?
Options:
--eth0=isolated_nw
--alias=isolated_nw
--ethernet=isolated_nw
--network=isolated_nw
--attach=isolated_nw
Answer:
D
Explanation:
To attach a container to a network when creating it, the --network flag must be used with the name of the network as the argument. The --network flag specifies the network mode for the container. By default, the network mode is bridge, which means the container is connected to the default bridge network. However, if a custom network is created, such as isolated_nw in this case, the container must be explicitly attached to it using the --network flag. For example, to create a container named web1 and attach it to the isolated_nw network, the command would be:
docker create --name web1 --network isolated_nw nginx
The other options are not valid parameters for docker create. The --eth0, --ethernet, and --attach flags do not exist. The --alias flag is used to specify an additional network alias for the container on a user-defined network, but it does not attach the container to the network. References:
- docker network create | Docker Documentation
- docker create | Docker Documentation
- Networking overview | Docker Docs
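Putting it together (the container name and image are examples):
docker network create --driver bridge isolated_nw
docker create --name web1 --network isolated_nw nginx
docker start web1
docker network inspect isolated_nw       # web1 now appears under "Containers"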
FILL BLANK
Which subcommand of virsh opens the XML configuration of a virtual network in an editor in order to make changes to that configuration? (Specify ONLY the subcommand without any parameters.)
Options:
Answer:
net-edit
Explanation:
The subcommand of virsh that opens the XML configuration of a virtual network in an editor in order to make changes to that configuration is net-edit1. This subcommand takes the name or UUID of the network as a parameter and opens the network XML file in the default editor, which is specified by the $EDITOR shell variable1. The changes made to the network configuration are applied immediately after saving and exiting the editor1.
References:
- 1: net-edit - libvirt.
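For example, to adjust the definition of the virtual network named default:
virsh net-list --all                     # list defined virtual networks
virsh net-edit default                   # open the network's XML in $EDITOR; the new definition is stored when the editor exits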
Which of the following commands boots a QEMU virtual machine using hardware virtualization extensions?
Options:
qvirt -create -drive file=debian.img -cdrom debian.iso -m 1024 -boot d -driver hvm
vm -kvm -drive file=debian.img -cdrom debian.iso -m 1024 -boot d
qemu-hw -create -drive file=debian.img -cdrom debian.iso -m 1024 -boot d
qemu -accel kvm -drive file=debian.img -cdrom debian.iso -m 1024 -boot d
qvm start -vmx -drive file=debian.img -cdrom debian.iso -m 1024 -boot d
Answer:
D
Explanation:
The correct command to boot a QEMU virtual machine using hardware virtualization extensions is qemu -accel kvm -drive file=debian.img -cdrom debian.iso -m 1024 -boot d. This command uses the -accel option to specify the hardware accelerator to use, which in this case is KVM. KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). The -drive option specifies the disk image file to use, which in this case is debian.img. The -cdrom option specifies the ISO image file to attach as a CD-ROM, which in this case is debian.iso. The -m option specifies the amount of memory to allocate to the virtual machine, which in this case is 1024 MB. The -boot option specifies the boot order, which in this case is d, meaning boot from the CD-ROM first. The other commands use binaries (qvirt, vm, qemu-hw, qvm) and options that do not exist.
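The same invocation spelled out with the modern binary name (the disk format is assumed to be raw here):
qemu-system-x86_64 -accel kvm -m 1024 \
  -drive file=debian.img,format=raw \
  -cdrom debian.iso -boot d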
Which of the following statements are true regarding a Pod in Kubernetes? (Choose two.)
Options:
All containers of a Pod run on the same node.
Pods are always created automatically and cannot be explicitly configured.
A Pod is the smallest unit of workload Kubernetes can run.
When a Pod fails, Kubernetes restarts the Pod on another node by default.
systemd is used to manage individual Pods on the Kubernetes nodes.
Answer:
A, C
Explanation:
A Pod in Kubernetes is a collection of one or more containers that share the same network and storage resources, plus a specification for how to run those containers. A Pod is the smallest unit of workload Kubernetes can run, meaning that it cannot be divided into smaller units; therefore, option C is correct. All containers of a Pod run on the same node, the physical or virtual machine that hosts one or more Pods; therefore, option A is also correct. It is not true that Pods are always created automatically and cannot be explicitly configured: Pods can be created manually using YAML or JSON manifests, or with commands like kubectl run or kubectl create, and they can also be created automatically by higher-level controllers such as a Deployment, ReplicaSet, or StatefulSet; therefore, option B is incorrect. When a Pod fails, Kubernetes does not restart the Pod on another node by default. Pods are ephemeral by nature and can be terminated or deleted at any time; if a Pod is managed by a controller, the controller creates a new Pod to replace the failed one, but that replacement may not land on the same node; therefore, option D is incorrect. systemd is not used to manage individual Pods on the Kubernetes nodes. systemd is a system and service manager for Linux that can start and stop services such as docker or kubelet, but it does not interact with Pods directly; Pods are managed by the kubelet agent, which runs on each node and communicates with the Kubernetes control plane; therefore, option E is incorrect. References:
- Pods | Kubernetes
- What is a Kubernetes pod? - Red Hat
- What’s the difference between a pod, a cluster, and a container?
- What are Kubernetes Pods? | VMware Glossary
- Kubernetes Node Vs. Pod Vs.Cluster: Key Differences - CloudZero
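A quick way to see both correct points on a test cluster (names and image are arbitrary):
kubectl run demo --image=nginx --restart=Never   # creates a bare Pod (no controller)
kubectl get pod demo -o wide                     # the NODE column shows the single node all its containers run on
kubectl delete pod demo                          # a deleted bare Pod is not recreated anywhere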
Which of the following commands executes a command in a running LXC container?
Options:
lxc-attach
lxc-batch
lxc-run
lxc-enter
lxc-eval
Answer:
A
Explanation:
The command lxc-attach is used to execute a command in a running LXC container. It allows the user to start a process inside the container and attach to its standard input, output, and error streams. For example, the command lxc-attach -n mycontainer -- ls -lh /home will list all the files and directories in the /home directory of the container named mycontainer. The other options are not valid LXC commands: lxc-batch, lxc-run, lxc-enter, and lxc-eval do not exist. A container is started with lxc-start, and a command can be run in a fresh application container with lxc-execute, but only lxc-attach executes a command inside an already running container. References:
- 1: Executing a command inside a running LXC - Unix & Linux Stack Exchange.
- 2: lxc-start: start a container. - SysTutorials.
- 3: lxc-attach: start a process inside a running container. - SysTutorials.
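For example, against a running container named mycontainer:
lxc-attach -n mycontainer -- ls -lh /home        # run a single command inside the container
lxc-attach -n mycontainer -- /bin/sh             # or open an interactive shell in it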
Which of the following devices exist by default in an LXC container? (Choose three.)
Options:
/dev/log
/dev/console
/dev/urandom
/dev/kmem
/dev/root
Answer:
A, B, C
Explanation:
LXC (Linux Containers) is a lightweight virtualization technology that allows multiple isolated Linux systems (containers) to run on the same host. LXC uses Linux kernel features such as namespaces, cgroups, and AppArmor to create and manage containers. Each container has its own file system, network interfaces, process tree, and resource limits. However, containers share the same kernel and hardware with the host, which makes them more efficient and faster than full virtualization.
By default, an LXC container has a minimal set of devices that are needed for its operation. These devices are created by the LXC library when the container is started, and are removed when the container is stopped. The default devices are:
- /dev/log: This is a Unix domain socket that connects to the syslog daemon on the host. It allows the container to send log messages to the host's system log.
- /dev/console: This is a character device that provides access to the container's console. It is usually connected to the host's terminal or a file. It allows the container to interact with the user or the host's init system.
- /dev/urandom: This is a character device that provides an unlimited source of pseudo-random numbers. It is used by various applications and libraries that need randomness, such as cryptography, UUID generation, and hashing.
The other devices listed in the question do not exist by default in an LXC container. They are either not needed, not allowed, or not supported by the container’s namespace or cgroup configuration. These devices are:
- /dev/kmem: This is a character device that provides access to the kernel's virtual memory. It is not needed by the container, as it can access its own memory through the /proc filesystem. It is also not allowed in the container, as it would expose the host's kernel memory and compromise its security.
- /dev/root: This is a symbolic link that points to the root device of the system. It is not supported by the container, as it does not have a separate root device from the host. The container's root file system is mounted from a directory, an image file, or a loop device on the host.
References:
- Linux Containers - LXC - Manpages - lxc.container.conf.5
- Linux Containers - LXC - Getting started
- Random number generation - Wikipedia
- /dev/kmem - Wikipedia
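The default device nodes can be verified from inside a running container (the container name is an example):
lxc-attach -n mycontainer -- ls -l /dev/log /dev/console /dev/urandom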