LPIC-3: Virtualization and Containerization - Exam 305, version 3.0 Questions and Answers
Which of the following statements are true about container-based virtualization? (Choose two.)
Options:
Each container runs its own operating system kernel.
Different containers may use different distributions of the same operating system.
Container-based virtualization relies on hardware support from the host system's CPU.
All containers run within the operating system kernel of the host system.
Linux does not support container-based virtualization because of missing kernel APIs.
Answer:
B, D
Explanation:
Container-based virtualization is a method of operating system-level virtualization that allows multiple isolated user spaces (containers) to run on the same host system [1]. Each container shares the same operating system kernel as the host, but has its own file system, libraries, and processes [2]. Therefore, statements A and C are false, as containers do not run their own kernels or rely on hardware support from the CPU. Statement E is also false, as Linux does support container-based virtualization through various technologies, such as cgroups, namespaces, LXC, Docker, etc. [1][2]. Statement B is true, as different containers may use different distributions of the same operating system, such as Debian, Ubuntu, Fedora, etc., as long as they are compatible with the host kernel [3]. Statement D is also true, as all containers run within the operating system kernel of the host system, which provides isolation and resource management for them [1][2].
References:
1: Containerization (computing) - Wikipedia.
2: What are containers? | Google Cloud.
3: What is Container-Based Virtualization? - StackHowTo.
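As an informal illustration of the shared-kernel behaviour described above (the Docker Hub image names are only examples and not part of the exam item), the kernel version reported inside containers built from different distributions matches the host kernel:
# On the host
uname -r
# Inside containers based on different distributions: same kernel version as the host
docker run --rm debian:stable uname -r
docker run --rm fedora:latest uname -r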
FILL BLANK
What is the default path to the Docker daemon configuration file on Linux? (Specify the full name of the file, including the path.)
Options:
Answer:
/etc/docker/daemon.json
Explanation:
The default path to the Docker daemon configuration file on Linux is /etc/docker/daemon.json. This file is a JSON file that contains the settings and options for the Docker daemon, which is the service that runs on the host operating system and manages the containers, images, networks, and other Docker resources. The /etc/docker/daemon.json file does not exist by default, but it can be created by the user to customize the Docker daemon behavior. An alternative file can also be specified by using the --config-file flag when starting the Docker daemon. The file must be a valid JSON object and follow the syntax and structure of the dockerd reference docs [1][2].
References:
Docker daemon configuration file - Medium
Docker daemon configuration overview | Docker Docs
docker daemon | Docker Docs
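As a minimal, hedged sketch (the values are common examples, not required settings), the file could look like this:
{
  "data-root": "/var/lib/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
After changing the file, the daemon is typically restarted, for example with systemctl restart docker on systemd-based systems.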
Which file format is used by libvirt to store configuration data?
Options:
INI-style text files
SQLite databases
XML files
Java-like properties files
Text files containing key/value pairs
Answer:
C
Explanation:
Libvirt uses XML files to store configuration data for objects in the libvirt API, such as domains, networks, storage, etc. This allows for ease of extension in future releases and validation of documents prior to usage. Libvirt does not use any of the other file formats listed in the question. References:
libvirt: XML Format
LPIC-3 Virtualization and Containerization: Topic 305.1: Virtualization Concepts and Theory
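As an illustrative sketch (the network name, bridge name, and addresses are arbitrary examples), a libvirt virtual network is defined in exactly this kind of XML document and loaded with virsh:
<network>
  <name>example-net</name>
  <bridge name='virbr10'/>
  <forward mode='nat'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'/>
</network>
# Load and start the network definition
virsh net-define example-net.xml
virsh net-start example-net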
What is the default provider of Vagrant?
Options:
lxc
hyperv
virtualbox
vmware_workstation
docker
Answer:
C
Explanation:
Vagrant is a tool that allows users to create and configure lightweight, reproducible, and portable development environments. Vagrant supports multiple providers, which are the backends it uses to create and manage the virtual machines. VirtualBox is the default provider: it is free, cross-platform, has been supported by Vagrant for years, and therefore offers the lowest friction for new users to get started with Vagrant. However, users can also use other providers, such as VMware, Hyper-V, Docker, or LXC, depending on their preferences and needs. To use another provider, it must be installed (as a Vagrant plugin where necessary) and specified when running Vagrant commands. The default provider can also be changed by setting the VAGRANT_DEFAULT_PROVIDER environment variable.
References:
Default Provider - Providers | Vagrant | HashiCorp Developer
Providers | Vagrant | HashiCorp Developer
How To Set Default Vagrant Provider to Virtualbox
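A short, hedged sketch of the two ways to select a provider mentioned above (provider names are examples; non-default providers may require their own plugin):
# Select a provider for a single run
vagrant up --provider=virtualbox
# Make another provider the default for all runs in this shell
export VAGRANT_DEFAULT_PROVIDER=libvirt
vagrant up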
The command virsh vol-list vms returns the following error:
error: failed to get pool 'vms'
error: Storage pool not found: no storage pool with matching name 'vms'
Given that the directory /vms exists, which of the following commands resolves this issue?
Options:
dd if=/dev/zero of=/vms bs=1 count=0 flags=name:vms
libvirt-poolctl new --name=/vms --type=dir --path=/vms
qemu-img pool vms:/vms
virsh pool-create-as vms dir --target /vms
touch /vms/.libvirtpool
Answer:
D
Explanation:
The command virsh pool-create-as vms dir --target /vms creates and starts a transient storage pool named vms of type dir with the target directory /vms [1][2]. This command resolves the storage pool not found error, as it makes the existing directory /vms visible to libvirt as a storage pool. The other commands are invalid because:
dd if=/dev/zero of=/vms bs=1 count=0 flags=name:vms is not valid command syntax. The dd command does not take a flags argument, and /vms is a directory, which cannot be used as a dd output file [3].
libvirt-poolctl new --name=/vms --type=dir --path=/vms is not a valid command name. There is no such command as libvirt-poolctl in the libvirt package [4].
qemu-img pool vms:/vms is not valid command syntax. The qemu-img command does not have a pool subcommand, and vms:/vms is not a valid image specification [5].
touch /vms/.libvirtpool is not a valid command to create a storage pool. The touch command only creates an empty file, and the .libvirtpool file is not recognized by libvirt as a storage pool configuration file [6].
References:
1: virsh - difference between pool-define-as and pool-create-as - Stack Overflow
2: dd(1) - Linux manual page - man7.org
3: 12.3.3. Creating a Directory-based Storage Pool with virsh - Red Hat Customer Portal
4: libvirt - Linux Man Pages (3)
5: qemu-img(1) - Linux manual page - man7.org
6: touch(1) - Linux manual page - man7.org
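A hedged sketch of the commands involved; the persistent variant is included for contrast with the transient pool created by pool-create-as:
# Transient pool: exists until it is destroyed or libvirtd restarts
virsh pool-create-as vms dir --target /vms
virsh vol-list vms
# Persistent alternative: define, build, start, and autostart the pool
virsh pool-define-as vms dir --target /vms
virsh pool-build vms
virsh pool-start vms
virsh pool-autostart vms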
Which command within virsh lists the virtual machines that are running on the current host?
Options:
view
list-vm
list
show
list-all
Answer:
C
Explanation:
The command virsh list lists all running domains (VMs) on the current host. Adding the --all option (virsh list --all) lists both active and inactive domains, which is useful to see every VM configured on the target hypervisor that can be used in subsequent commands [1]. The other options are not valid virsh commands.
References:
1: 8 Linux virsh subcommands for managing VMs on the command line | Enable Sysadmin.
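For reference, a quick sketch of the two forms discussed above:
# Running domains only
virsh list
# All defined domains, including inactive ones
virsh list --all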
Which of the following resources can be limited by libvirt for a KVM domain? (Choose two.)
Options:
Amount of CPU time
Size of available memory
File systems allowed in the domain
Number of running processes
Number of available files
Answer:
A, B
Explanation:
Libvirt is a toolkit that provides a common API for managing different virtualization technologies, such as KVM, Xen, LXC, and others. Libvirt allows users to configure and control various aspects of a virtual machine (also called a domain), such as its CPU, memory, disk, network, and other resources. Among the resources that can be limited by libvirt for a KVM domain are:
Amount of CPU time: Libvirt allows users to specify the number of virtual CPUs (vCPUs) that a domain can use, as well as the CPU mode, model, topology, and tuning parameters. Users can also set the CPU shares, quota, and period to control the relative or absolute amount of CPU time that a domain can consume. Additionally, users can pin vCPUs to physical CPUs or NUMA nodes to improve performance and isolation. These settings can be configured in the domain XML file under the <vcpu> and <cputune> elements.
Size of available memory: Libvirt allows users to specify the amount of memory that a domain can use, as well as the memory backing, tuning, and NUMA node parameters. Users can also set the memory hard and soft limits, swap hard limit, and minimum guarantee to control the memory allocation and reclaim policies for a domain. These settings can be configured in the domain XML file under the <memory>, <memtune>, and related elements.
The other resources listed in the question are not directly limited by libvirt for a KVM domain. File systems allowed in the domain are determined by the disk and filesystem devices that are attached to the domain, which can be configured in the domain XML file under the <devices> element. The number of running processes and the number of available files are controlled inside the guest operating system (for example with ulimit settings), not by the libvirt domain configuration.
References:
libvirt: Domain XML format
CPU Allocation
Memory Allocation
Hard drives, floppy disks, CDROMs
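As a hedged illustration of the <vcpu>, <cputune> and <memtune> settings mentioned above, a fragment of a domain XML definition might look as follows (the name and all values are arbitrary examples; unrelated parts of the domain definition are omitted):
<domain type='kvm'>
  <name>example-guest</name>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <shares>1024</shares>
    <period>100000</period>
    <quota>50000</quota>
  </cputune>
  <memory unit='MiB'>2048</memory>
  <memtune>
    <hard_limit unit='MiB'>2560</hard_limit>
    <soft_limit unit='MiB'>2048</soft_limit>
  </memtune>
</domain>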
If a Dockerfile contains the following lines:
WORKDIR /
RUN cd /tmp
RUN echo test > test
where is the file test located?
Options:
/tmp/test within the container image.
/root/test within the container image.
/test within the container image.
/tmp/test on the system running docker build.
test in the directory holding the Dockerfile.
Answer:
C
Explanation:
The WORKDIR instruction sets the working directory for any subsequent RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile [1]. The RUN instruction executes commands in a new layer on top of the current image and commits the results [2]. The RUN cd command does not change the working directory for the next RUN instruction, because each RUN command runs in a new shell and a new environment [3]. Therefore, the file test is created in the root directory (/) of the container image, not in the /tmp directory.
References:
Dockerfile reference: WORKDIR
Dockerfile reference: RUN
difference between RUN cd and WORKDIR in Dockerfile
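A small Dockerfile sketch reproducing the behaviour described above (the base image is an arbitrary example):
FROM debian:stable
WORKDIR /
# The directory change below only affects this single RUN step
RUN cd /tmp
# The working directory is still /, so this creates /test
RUN echo test > test
# To actually create a file under /tmp, change the working directory with WORKDIR
WORKDIR /tmp
RUN echo test > test2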
What is the purpose of a .dockerignore file?
Options:
It lists files existing in a Docker image which should be excluded when building a derivative image.
It specifies files that Docker does not submit to the Docker daemon when building a Docker image
It exists in the root file system of containers that should ignore volumes and ports provided by Docker.
It must be placed in the top level directory of volumes that Docker should never attach automatically to a container
It specifies which parts of a Dockerfile should be ignored when building a Docker image.
Answer:
B
Explanation:
The purpose of a .dockerignore file is to specify files that Docker does not submit to the Docker daemon when building a Docker image. A .dockerignore file is a text file that contains a list of files or directories that should be excluded from the build context, which is the set of files and folders that are available for use in a Dockerfile. By using a .dockerignore file, you can avoid sending files or directories that are large, contain sensitive information, or are irrelevant to the Docker image to the daemon, which can improve the efficiency and security of the build process. The other options are incorrect because they do not describe the function of a .dockerignore file. Option A is wrong because a .dockerignore file does not affect the files existing in a Docker image, but only the files sent to the daemon during the build. Option C is wrong because a .dockerignore file does not exist in the root file system of containers, but in the same directory as the Dockerfile. Option D is wrong because a .dockerignore file does not affect the volumes that Docker attaches to a container, but only the files included in the build context. Option E is wrong because a .dockerignore file does not affect the parts of a Dockerfile that are executed, but only the files available for use in a Dockerfile. References:
What are .dockerignore files, and why you should use them?
Dockerfile reference | Docker Docs
How to use .dockerignore and its importance - Shisho Cloud
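A hedged example of a .dockerignore file placed next to the Dockerfile (the entries are arbitrary):
# Excluded from the build context sent to the Docker daemon
.git
node_modules
*.log
secrets.env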
If docker stack is to be used to run a Docker Compose file on a Docker Swarm, how are the images referenced in the Docker Compose configuration made available on the Swarm nodes?
Options:
docker stack builds the images locally and copies them to only those Swarm nodes which run the service.
docker stack passes the images to the Swarm master which distributes the images to all other Swarm nodes.
docker stack instructs the Swarm nodes to pull the images from a registry, although it does not upload the images to the registry.
docker stack transfers the image from its local Docker cache to each Swarm node.
docker stack triggers the build process for the images on all nodes of the Swarm.
Answer:
C
Explanation:
Docker stack is a command that allows users to deploy and manage a stack of services on a Docker Swarm cluster. A stack is a group of interrelated services that share dependencies and can be orchestrated and scaled together. A stack is typically defined by a Compose file, which is a YAML file that describes the services, networks, volumes, and other resources of the stack. To use docker stack to run a Compose file on a Swarm, the user must first create and initialize a Swarm cluster, which is a group of machines (nodes) that are running the Docker Engine and are joined into a single entity. The Swarm cluster has one or more managers, which are responsible for maintaining the cluster state and orchestrating the services, and one or more workers, which are the nodes that run the services.
When the user runs docker stack deploy with a Compose file, the command parses the file and creates the services as specified. However, docker stack does not build or upload the images referenced in the Compose file to any registry. Instead, it instructs the Swarm nodes to pull the images from a registry, which can be the public Docker Hub or a private registry. The user must ensure that the images are available in the registry before deploying the stack, otherwise the deployment will fail. The user can use docker build and docker push commands to create and upload the images to the registry, or use an automated build service such as Docker Hub or GitHub Actions. The user must also make sure that the image names and tags in the Compose file match the ones in the registry, and that the Swarm nodes have access to the registry if it is private. By pulling the images from a registry, docker stack ensures that the Swarm nodes have the same and latest version of the images, and that the images are distributed across the cluster in an efficient way.
The other options are not correct. Docker stack does not build the images locally or on the Swarm nodes, nor does it copy or transfer the images to the Swarm nodes. Docker stack also does not pass the images to the Swarm master, as this would create a bottleneck and a single point of failure. Docker stack relies on the registry as the source of truth for the images, and delegates the image pulling to the Swarm nodes.
References:
Deploy a stack to a swarm | Docker Docs
docker stack deploy | Docker Docs
docker build | Docker Docs
docker push | Docker Docs
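A hedged sketch of the workflow described above (registry, image and stack names are placeholders):
# Build and push the image to a registry reachable by all Swarm nodes
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
# Deploy the stack; every node pulls the image from the registry
docker stack deploy --with-registry-auth -c docker-compose.yml mystack
docker stack services mystack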
What is the purpose of capabilities in the context of container virtualization?
Options:
Map potentially dangerous system calls to an emulation layer provided by the container virtualization.
Restrict the disk space a container can consume.
Enable memory deduplication to cache files which exist in multiple containers.
Allow regular users to start containers with elevated permissions.
Prevent processes from performing actions which might infringe the container.
Answer:
E
Explanation:
Capabilities are a way of implementing fine-grained access control in Linux. They are a set of flags that define the privileges that a process can have. By default, a process inherits the capabilities of its parent, but some capabilities can be dropped or added by the process itself or by the kernel. In the context of container virtualization, capabilities are used to prevent processes from performing actions that might infringe the container, such as accessing the host's devices, mounting filesystems, changing the system time, or killing other processes. Capabilities allow containers to run with a reduced set of privileges, enhancing the security and isolation of the container environment. For example, Docker uses a default set of capabilities that are granted to the processes running inside a container, and allows users to add or drop capabilities as needed [1][2].
References:
Capabilities | Docker Documentation
Linux Capabilities: Making Them Work in Containers
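A hedged illustration using Docker's --cap-drop/--cap-add options (the image is an arbitrary example): the container keeps only the capability needed to bind privileged ports, and the resulting effective capability mask can be read from /proc:
docker run --rm --cap-drop ALL --cap-add NET_BIND_SERVICE debian:stable \
  sh -c 'grep CapEff /proc/self/status'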
Virtualization of which hardware component is facilitated by CPUs supporting nested page table extensions, such as Intel Extended Page Table (EPT) or AMD Rapid Virtualization Indexing (RVI)?
Options:
Memory
Network Interfaces
Host Bus Adapters
Hard Disks
IO Cache
Answer:
A
Explanation:
Nested page table extensions, such as Intel Extended Page Table (EPT) or AMD Rapid Virtualization Indexing (RVI), are hardware features that facilitate the virtualization of memory. They allow the CPU to perform the translation of guest virtual addresses to host physical addresses in a single step, without the need for software-managed shadow page tables. This reduces the overhead and complexity of memory management for virtual machines, and improves their performance and isolation. Nested page table extensions do not directly affect the virtualization of other hardware components, such as network interfaces, host bus adapters, hard disks, or IO cache.
References:
Second Level Address Translation - Wikipedia
c - What is use of extended page table? - Stack Overflow
Hypervisor From Scratch – Part 4: Address Translation Using Extended …
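As a hedged check on a given host (the flag names ept, npt, vmx and svm are the ones commonly exposed by the kernel in /proc/cpuinfo; verify against your CPU documentation):
# Look for hardware virtualization and nested page table support
grep -E -o 'vmx|svm|ept|npt' /proc/cpuinfo | sort -u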
FILL BLANK
What LXC command lists containers sorted by their CPU, block I/O or memory consumption? (Specify ONLY the command without any path or parameters.)
Options:
Answer:
lxc-top
Explanation:
The lxc-top command displays a continuously updated list of the containers running on the host together with their CPU, block I/O, and memory usage, and the listing can be sorted by any of these columns. It is therefore the LXC command that lists containers sorted by their CPU, block I/O, or memory consumption.
References:
Linux Containers - LXC - Manpages - lxc-top.1
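A hedged usage sketch (the option letters follow the commonly documented lxc-top(1) interface; check the local man page):
# Refresh every 5 seconds, sorted by memory usage
lxc-top -d 5 -s m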
FILL BLANK
What LXC command starts a new process within a running LXC container? (Specify ONLY the command without any path or parameters.)
Options:
Answer:
lxc-attach
Explanation:
The lxc-attach command allows the user to start a new process within a running LXC container [1][2]. It takes the name of the container as an argument and optionally a command to execute inside the container. If no command is specified, it creates a new shell inside the container [1]. For example, to list all the files in the home directory of a container named myContainer, one can use:
lxc-attach -n myContainer -- ls -lh /home
References:
1: Executing a command inside a running LXC - Unix & Linux Stack Exchange
FILL BLANK
Which subcommand of virsh opens the XML configuration of a virtual network in an editor in order to make changes to that configuration? (Specify ONLY the subcommand without any parameters.)
Options:
Answer:
net-edit
Explanation:
The subcommand of virsh that opens the XML configuration of a virtual network in an editor in order to make changes to that configuration is net-edit [1]. This subcommand takes the name or UUID of the network as a parameter and opens the network XML file in the default editor, which is specified by the $EDITOR shell variable [1]. The changes made to the network configuration are saved to the network definition after saving and exiting the editor [1].
References:
1: net-edit - libvirt.
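For example, to edit the network named default (analogous subcommands exist for other libvirt objects):
virsh net-edit default
# Editors for domains and storage pools
virsh edit <domain>
virsh pool-edit <pool>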
Which of the following types of guest systems does Xen support? (Choose two.)
Options:
Foreign architecture guests (FA)
Paravirtualized guests (PV)
Emulated guests
Container virtualized guests
Fully virtualized guests
Answer:
B, E
Explanation:
Xen supports two types of guest systems: paravirtualized guests (PV) and fully virtualized guests (HVM).
Paravirtualized guests (PV) are guests that have been modified to run on the Xen hypervisor. They use a special kernel that communicates with the hypervisor through hypercalls, and use paravirtualized drivers for I/O devices. PV guests can run faster and more efficiently than HVM guests, but they require the guest operating system to be ported to Xen and to support the Xen ABI [1][2].
Fully virtualized guests (HVM) are guests that run unmodified operating systems on the Xen hypervisor. They use hardware virtualization extensions, such as Intel VT-x or AMD-V, to create a virtual platform for the guest. HVM guests can run any operating system that supports the hardware architecture, but they incur more overhead and performance penalties than PV guests. HVM guests can also use paravirtualized drivers for I/O devices to improve their performance [1][2].
The other options are not correct. Xen does not support foreign architecture guests (FA), emulated guests, or container virtualized guests.
Foreign architecture guests (FA) are guests that run on a different hardware architecture than the host, for example an ARM guest on an x86 host. Xen does not support this type of virtualization, as it would require emulation or binary translation, which are very complex and slow techniques [3].
Emulated guests are guests that run on a software emulator that mimics the hardware of the host or another platform, for example a Windows guest on a QEMU emulator. Xen does not support this type of virtualization, as it relies on the emulator to provide the virtual platform, not the hypervisor. Xen can use QEMU to emulate some devices for HVM guests, but not the entire platform [1][4].
Container virtualized guests are guests that run on a shared kernel with the host and other guests, using namespaces and cgroups to isolate them, for example a Linux guest in a Docker container. Xen does not support this type of virtualization, as it requires the guest operating system to be compatible with the host kernel, and does not provide the same level of isolation and security as hypervisor-based virtualization [5][6].
References:
Xen Project Software Overview - Xen
Xen ARM with Virtualization Extensions - Xen
Xen Project Beginners Guide - Xen
QEMU - Xen
Docker overview | Docker Documentation
What is a Container? | App Containerization | VMware
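As a hedged sketch, the guest type is selected in the xl domain configuration file; the file name, disk path and sizes below are arbitrary examples:
# /etc/xen/example.cfg
type = "hvm"        # "pv" selects a paravirtualized guest instead
name = "example"
memory = 2048
vcpus = 2
disk = ['phy:/dev/vg0/example,xvda,w']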
Which of the following devices exist by default in an LXC container? (Choose three.)
Options:
/dev/log
/dev/console
/dev/urandom
/dev/kmem
/dev/root
Answer:
A, B, C
Explanation:
LXC (Linux Containers) is a lightweight virtualization technology that allows multiple isolated Linux systems (containers) to run on the same host. LXC uses Linux kernel features such as namespaces, cgroups, and AppArmor to create and manage containers. Each container has its own file system, network interfaces, process tree, and resource limits. However, containers share the same kernel and hardware with the host, which makes them more efficient and faster than full virtualization.
By default, an LXC container has a minimal set of devices that are needed for its operation. These devices are created by the LXC library when the container is started, and are removed when the container is stopped. The default devices are:
/dev/log: This is a Unix domain socket that connects to the syslog daemon on the host. It allows the container to send log messages to the host's system log [1].
/dev/console: This is a character device that provides access to the container's console. It is usually connected to the host's terminal or a file. It allows the container to interact with the user or the host's init system [1][2].
/dev/urandom: This is a character device that provides an unlimited source of pseudo-random numbers. It is used by various applications and libraries that need randomness, such as cryptography, UUID generation, and hashing [1][3].
The other devices listed in the question do not exist by default in an LXC container. They are either not needed, not allowed, or not supported by the container’s namespace or cgroup configuration. These devices are:
/dev/kmem: This is a character device that provides access to the kernel's virtual memory. It is not needed by the container, as it can access its own memory through the /proc filesystem. It is also not allowed by the container, as it would expose the host's kernel memory and compromise its security [4].
/dev/root: This is a symbolic link that points to the root device of the system. It is not supported by the container, as it does not have a separate root device from the host. The container's root file system is mounted from a directory, an image file, or a loop device on the host [5].
References:
Linux Containers - LXC - Manpages - lxc.container.conf.5
Linux Containers - LXC - Getting started
Random number generation - Wikipedia
/dev/kmem - Wikipedia
Linux Containers - LXC - Manpages - lxc.container.conf.5
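This can be checked from the host (the container name is a placeholder):
# List the device nodes that exist inside a running container
lxc-attach -n myContainer -- ls -l /dev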
What is the purpose of the packer inspect subcommand?
Options:
Retrieve files from an existing Packer image.
Execute commands within a running instance of a Packer image.
List the artifacts created during the build process of a Packer image.
Show usage statistics of a Packer image.
Display an overview of the configuration contained in a Packer template.
Answer:
E
Explanation:
The purpose of the packer inspect subcommand is to display an overview of the configuration contained in a Packer template [1]. A Packer template is a file that defines the various components a Packer build requires, such as variables, sources, provisioners, and post-processors [2]. The packer inspect subcommand can help you quickly learn about a template without having to dive into the HCL (HashiCorp Configuration Language) itself [1]. The subcommand will tell you things like what variables a template accepts, the sources it defines, the provisioners it defines and the order they'll run, and more [1].
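A short usage sketch (the template file names are placeholders):
# Summarize the variables, sources, builds and provisioners defined in a template
packer inspect example.pkr.hcl
# Legacy JSON templates are handled the same way
packer inspect example.json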
The other options are not correct because:
A) Retrieve files from an existing Packer image. This is not the purpose of the packer inspect subcommand, and Packer does not provide a subcommand for pulling files back out of an image; files end up inside an image during the build, for example through file or shell provisioners defined in the template.
B) Execute commands within a running instance of a Packer image. This is not the purpose of the packer inspect subcommand. Packer only runs commands while an image is being built, through the provisioners defined in the template; it does not manage or connect to instances created from a finished image.
C) List the artifacts created during the build process of a Packer image. This is not the purpose of the packer inspect subcommand. Artifact information is reported by the packer build subcommand itself (for example when run with the -machine-readable flag), not by packer inspect.
D) Show usage statistics of a Packer image. This is not the purpose of the packer inspect subcommand. To show usage statistics of a Packer image, you need to use the packer console subcommand with the -stat flag, which launches an interactive console that allows you to inspect and modify variables, sources, and functions, and displays the usage statistics of the current session2. References: 1: packer inspect - Commands | Packer | HashiCorp Developer 2: Commands | Packer | HashiCorp Developer