Disclaimer
Please be aware that the information and procedures described herein are provided "as is" and without any warranty, express or implied. I assume no liability for any potential damages or issues that may arise from applying these contents. Any action you take upon the information is strictly at your own risk.
All actions and outputs documented here were performed within a virtual machine running a Debian Linux server as the host system. The output and results you experience may differ depending on the specific Linux distribution and version you are using.
It is strongly recommended that you test all procedures and commands in a virtual machine or an isolated test environment before applying them to any production or critical systems.
- No warranty for damages.
- Application of content at own risk.
- The author used a virtual machine running a Debian Linux server as the host.
- Output may vary for the reader based on their Linux version.
- Strong recommendation to test in a virtual machine.
| Author | Nejat Hakan |
| --- | --- |
| License | CC BY-SA 4.0 |
| E-Mail | nejat.hakan@outlook.de |
| PayPal Me | https://paypal.me/nejathakan |
Simulating a network infrastructure with KVM/QEMU and virt-manager
Introduction to Virtualization and Network Simulation
Welcome to the fascinating world of network infrastructure simulation using KVM/QEMU and virt-manager! In today's complex IT landscape, the ability to design, build, test, and troubleshoot network environments in a controlled, cost-effective manner is an invaluable skill. This guide is designed to equip you, as university students and aspiring IT professionals, with the knowledge and practical experience to master these technologies. We will embark on a journey from the fundamental concepts of virtualization to building intricate network setups, all within your own virtual lab.
What is Virtualization
At its core, virtualization is the process of creating a virtual (rather than actual) version of something, including virtual computer hardware platforms, storage devices, and computer network resources. In the context of this guide, we are primarily concerned with hardware virtualization or platform virtualization. This involves creating Virtual Machines (VMs), which are emulations of physical computer systems. Each VM runs its own operating system (called a guest OS) and applications, just like a physical computer, but it shares the underlying physical hardware resources (CPU, memory, storage, network interfaces) of a single host machine.
Key Concepts in Virtualization:
- Host Machine:
  The physical computer that runs the virtualization software and hosts the VMs.
- Guest Machine (VM):
  The virtual computer environment running its own operating system and applications.
- Hypervisor (or Virtual Machine Monitor - VMM):
  The software, firmware, or hardware that creates and runs virtual machines. It is the intermediary layer between the physical hardware and the virtual machines and allocates physical resources to each VM.
Types of Hypervisors:
- Type 1 (Bare-metal):
  These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. Examples include VMware ESXi, Microsoft Hyper-V (when installed as a role on Windows Server), and KVM (Kernel-based Virtual Machine) when integrated into the Linux kernel. They generally offer better performance and scalability.
- Type 2 (Hosted):
  These hypervisors run on a conventional operating system just as other computer programs do. The guest operating system runs as a process on the host. Examples include VMware Workstation, Oracle VirtualBox, and QEMU (when not used with KVM). They are often easier to set up and manage for desktop use.
Our focus, KVM, functions as a Type 1 hypervisor once the Linux kernel is loaded, making it highly efficient.
KVM and QEMU Explained
KVM (Kernel-based Virtual Machine):
KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT-x or AMD-V). It consists of a loadable kernel module, `kvm.ko`, that provides the core virtualization infrastructure, and a processor-specific module, `kvm-intel.ko` or `kvm-amd.ko`.
KVM itself does not perform emulation. Instead, it exposes the `/dev/kvm` interface, which a userspace program can use to:
- Set up the guest VM's address space.
- Feed it simulated I/O.
- Map its video display back to the host's display.
- Emulate other hardware components.
KVM leverages the processor's virtualization extensions to run guest operating systems directly on the host CPU, leading to near-native performance for CPU-intensive tasks.
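As a quick illustration, the following minimal sketch checks whether your CPU exposes the required extensions and whether the `/dev/kvm` interface exists (output and module availability vary by distribution):

```bash
# Count CPU flags indicating hardware virtualization support
# (vmx = Intel VT-x, svm = AMD-V); a value greater than 0 means support is present.
egrep -c '(vmx|svm)' /proc/cpuinfo

# Check that the KVM interface exposed by the kernel module exists.
ls -l /dev/kvm
```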
QEMU (Quick EMUlator):
QEMU is a powerful open-source machine emulator and virtualizer. It can perform two main functions:
- Full System Emulation:
  QEMU can emulate a full computer system, including a processor and various peripherals. This allows it to run operating systems and programs made for one machine (e.g., an ARM board) on a different machine (e.g., an x86 PC). This emulation can be slower because it involves translating CPU instructions.
- Virtualization (with KVM):
  When used with KVM, QEMU leverages KVM to run guest code directly on the host CPU if the guest and host architectures match (e.g., x86 guest on an x86 host). QEMU is still responsible for emulating the I/O hardware (storage controllers, network cards, USB controllers, etc.) and setting up the virtual machine environment. This combination provides the best of both worlds: hardware-assisted speed for CPU and memory operations (via KVM) and comprehensive hardware emulation (via QEMU).
In our context, KVM provides the kernel-level virtualization capabilities, and QEMU provides the machine emulation and device model for the VMs. They work hand-in-hand.
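To make this division of labour concrete, here is a minimal, hypothetical QEMU invocation with KVM acceleration (the disk and ISO file names are placeholders; in this guide you will normally let virt-manager/libvirt generate such commands for you):

```bash
# QEMU provides the machine and device emulation; "-accel kvm" (equivalently
# "-enable-kvm") tells it to execute guest code directly on the CPU via KVM.
# disk.qcow2 and installer.iso are placeholder file names.
qemu-system-x86_64 \
  -accel kvm \
  -m 2048 -smp 2 \
  -drive file=disk.qcow2,format=qcow2 \
  -cdrom installer.iso
```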
Introducing virt-manager
While KVM and QEMU provide the powerful backend for virtualization, interacting with them directly via the command line (e.g., using `qemu-system-x86_64` commands) can be complex and cumbersome for managing multiple VMs. This is where virt-manager (Virtual Machine Manager) comes in.
virt-manager is a graphical desktop user interface for managing virtual machines through `libvirt`. `libvirt` is an open-source API, daemon, and management tool for managing virtualization platforms. It supports various hypervisors, including KVM/QEMU, Xen, LXC, VirtualBox, and others.
Key Features of virt-manager:
- Graphical VM Creation:
  A wizard-driven interface to create new VMs, specifying CPU, memory, disk, and network settings.
- VM Lifecycle Management:
  Start, stop, pause, resume, save, restore, and delete VMs.
- Resource Allocation:
  Dynamically adjust (to some extent) resources allocated to VMs.
- Hardware Configuration:
  Add, remove, and modify virtual hardware (disks, network interfaces, USB devices, etc.).
- Performance Monitoring:
  View real-time performance graphs for CPU, memory, disk, and network usage of VMs.
- Virtual Network Management:
  Create and manage virtual networks (NAT, bridged, isolated).
- Storage Pool Management:
  Manage storage for VM disk images.
- Built-in VNC/SPICE Client:
  Access the graphical console of VMs.
virt-manager simplifies the management of KVM/QEMU VMs, making it an excellent tool for both beginners and experienced users looking for a convenient GUI.
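Because virt-manager is simply a `libvirt` client, each of the lifecycle actions above has a command-line counterpart in `virsh`. A minimal sketch, using a hypothetical VM named `labvm`:

```bash
# Hypothetical VM name "labvm"; each line mirrors a virt-manager action.
virsh list --all            # show all defined VMs and their state
virsh start labvm           # power on
virsh suspend labvm         # pause
virsh resume labvm          # resume from pause
virsh shutdown labvm        # graceful ACPI shutdown
virsh destroy labvm         # hard power-off
virsh managedsave labvm     # save state to disk and stop (like "save")
```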
Why Simulate Network Infrastructures
Simulating network infrastructures offers a multitude of benefits, especially in a learning and experimental context:
- Cost-Effectiveness:
  Building physical network labs with routers, switches, firewalls, servers, and clients can be prohibitively expensive. Virtualization allows you to create complex topologies using only the resources of your host machine.
- Safe Learning Environment:
  You can experiment with configurations, security exploits (ethically, on your own systems), and potentially disruptive changes without impacting a live production network. Mistakes in a virtual lab are easily reversible (e.g., by reverting to a snapshot or deleting and recreating a VM).
- Scalability and Flexibility:
  Need to add another server, a new subnet, or a different type of firewall? In a virtual environment, this often involves just a few clicks or commands, rather than acquiring and physically installing new hardware.
- Reproducibility:
  You can save VM configurations, clone VMs, and take snapshots, allowing you to easily replicate specific scenarios or revert to a known good state. This is invaluable for systematic testing and troubleshooting.
- Exposure to Diverse Technologies:
  You can easily install and experiment with various operating systems (Linux distributions, Windows Server, BSD-based systems like pfSense), network services (DNS, DHCP, web servers, mail servers), and security tools.
- Understanding Complex Interactions:
  By building and observing a network from the ground up, you gain a deeper understanding of how different components (routers, firewalls, DNS, clients, servers) interact and depend on each other.
- Preparation for Certifications and Real-World Scenarios:
  Many IT certifications (like CompTIA Network+, CCNA, Linux certifications) require hands-on experience. A virtual lab is an excellent platform to practice for these and to model real-world network designs.
- Testing Configurations Before Deployment:
  For small and medium-sized businesses (SMBs) or even larger enterprises, simulating a planned network change or a new deployment in a virtual environment can help identify potential issues and refine configurations before investing in hardware or risking downtime.
Benefits for Learning and Experimentation
For university students, the benefits are particularly profound:
- Practical Application of Theory:
  Network theory, TCP/IP models, routing protocols, and security concepts can seem abstract. Simulation brings these concepts to life.
- Hands-on Skill Development:
  You'll develop practical skills in OS installation, server configuration, network troubleshooting, and security hardening.
- Encourages Curiosity and Exploration:
  The low-risk nature of a virtual lab encourages you to try things out, break them (virtually!), and learn how to fix them.
- Develops Problem-Solving Abilities:
  When something doesn't work as expected in your virtual network, you'll need to diagnose the issue, research solutions, and apply them – a critical skill for any IT professional.
- Foundation for Advanced Topics:
  Mastering virtualization and basic network simulation opens doors to more advanced topics like cloud computing, containerization (e.g., Docker, Kubernetes), and sophisticated cybersecurity practices.
This guide aims to provide you with a solid foundation, empowering you to explore the topics mentioned – Networking, TCP, Ethical Hacking, DNS, Firewalls, Web/Mail Servers, Security, Routing, and more – in a practical, engaging, and deeply educational way.
Workshop Introduction to virt-manager
This first workshop will familiarize you with the virt-manager interface, assuming KVM, QEMU, and virt-manager are already installed (installation will be covered in the next section). If they are not yet installed, you can read through this workshop to get an idea of what to expect.
Objective:
To launch virt-manager, explore its main components, and understand how to connect to the local KVM/QEMU hypervisor.
Prerequisites:
- A Linux desktop environment.
- KVM, QEMU, and virt-manager installed (we'll cover this in detail in the next section, but if you already have it, proceed).
- Your user account should be part of the `libvirt` or `kvm` group to manage VMs without `sudo` for every action (also covered in the next section).
Steps:
1.  Launch virt-manager:
    - Open your application launcher or terminal.
    - Search for "Virtual Machine Manager" or type `virt-manager` in the terminal and press Enter.
    - If prompted for a password, it might be because your user isn't in the correct group, or it's the first time libvirtd is being accessed with administrative privileges.
2.  The Main virt-manager Window:
    - You should see the main "Virtual Machine Manager" window.
    - By default, it usually tries to connect to the local KVM/QEMU hypervisor, often listed as `QEMU/KVM - Not Connected` or `QEMU/KVM - Active`.
    - If it shows "Not Connected", double-click on `QEMU/KVM` or select it and click "Open". It should then change to "Active".
3.  Exploring the Interface - Key Areas:
    - List of Virtual Machines:
      The central pane will display a list of your virtual machines. Initially, this will be empty. Each VM will have its name, state (Running, Shutoff, Paused), and CPU usage shown.
    - Menu Bar:
        - File:
            - `New Virtual Machine`: This is where you'll start creating VMs.
            - `Add Connection...`: Allows you to connect to remote hypervisors (not covered in this basic workshop but good to know).
            - `Preferences`: Global settings for virt-manager.
        - Edit:
            - `Connection Details`: Shows details about the connected hypervisor (QEMU/KVM in our case). This is very important and we'll explore it now.
            - `Virtual Networks`: Manage virtual networks.
            - `Storage`: Manage storage pools.
        - View:
          Controls what information is displayed (Toolbar, Graph, etc.).
        - Virtual Machine:
          Actions for selected VMs (Run, Pause, Shutdown, Migrate, Delete, etc.).
        - Help:
          Access to documentation.
    - Toolbar:
      Quick access buttons for common actions like creating a new VM, starting, pausing, and shutting down selected VMs.
4.  Exploring Connection Details:
    - Select the `QEMU/KVM` connection in the list (if you have multiple connections, ensure the local one is selected).
    - Go to `Edit` -> `Connection Details`. A new window will open with several tabs:
        - Overview:
          Shows basic hypervisor information, hostname, and libvirt version. You can also see CPU and Memory usage of the host dedicated to VMs.
        - Virtual Networks:
          Lists available virtual networks. You'll likely see a `default` network. We'll delve deep into this later. You can see its state, whether it's on Autostart, its IP address range, and DHCP information.
        - Storage:
          Lists storage pools. You'll likely see a `default` pool, usually located at `/var/lib/libvirt/images/`. This is where VM disk images are stored by default. You can see its type (usually `dir` for directory-based), status, and capacity. We will cover creating different types of storage pools.
        - Network Interfaces:
          Shows physical network interfaces on your host that `libvirt` can potentially use for bridged networking.
        - Secrets:
          For managing sensitive data (not typically used for basic setups).
5.  Exploring Preferences:
    - Go to `File` -> `Preferences`.
    - General:
        - `Enable XML editing`: Useful for advanced users to directly edit the libvirt XML configuration of VMs.
        - `Confirm KVM/QEMU acceleration is available`: Ensures KVM is active.
    - Console:
      Settings for the graphical console (e.g., keyboard shortcuts, VNC/SPICE options).
    - New VM:
      Default settings for creating new VMs (e.g., default OS type, enabling local browse).
    - Stats:
      Configuration for performance graphs.
6.  Understanding the "New Virtual Machine" Icon:
    - Locate the icon that looks like a computer screen with a plus sign on it, usually in the top-left corner of the toolbar, or go to `File` -> `New Virtual Machine`.
    - Clicking this icon opens the "Create a new virtual machine" wizard, which we will use extensively in later sections. You can click through the initial screens to get a feel for it, but don't create a VM yet. Click "Cancel" to exit the wizard for now.
7.  Closing virt-manager:
    - You can close the virt-manager window. The hypervisor (the `libvirtd` daemon) and any running VMs will continue to run in the background. virt-manager is just a client application.
Workshop Summary:
You have now launched virt-manager, connected to the local KVM/QEMU hypervisor, and explored its main interface components, including Connection Details (Virtual Networks, Storage) and Preferences. You also know where to start the VM creation process. This initial exploration is crucial for navigating the tool effectively as we proceed.
In the next section, we will ensure your host system is correctly set up with KVM, QEMU, and virt-manager, and that your user account has the necessary permissions.
1. Setting Up Your KVM/QEMU Virtualization Host
Before you can start creating and managing virtual machines, your physical computer (the host) needs to be properly prepared. This involves ensuring your hardware supports virtualization, installing the necessary software packages (KVM, QEMU, libvirt, and virt-manager), and verifying the installation.
Hardware Requirements and Recommendations
While KVM/QEMU can run on relatively modest hardware, a more capable host system will allow you to run more VMs simultaneously and provide better performance for each.
Minimum Requirements (for a few light VMs):
- CPU:
  A modern x86-64 processor (Intel or AMD) with virtualization extensions:
    - Intel: VT-x (Virtualization Technology)
    - AMD: AMD-V (AMD Virtualization)
  Most processors manufactured since 2006-2008 include these.
- RAM:
  At least 4GB. Your host OS will consume some, and each VM will need its own allocation. 2GB might technically work for one tiny VM, but it would be very slow.
- Disk Space:
  At least 20-30GB of free disk space for the host OS, virtualization software, and a couple of small VMs. VM disk images can grow quite large.
- Operating System:
  A Linux distribution. KVM is a Linux kernel feature.
Recommended Specifications (for a comfortable learning environment with multiple VMs):
- CPU:
  A multi-core processor (e.g., Intel Core i5/i7/i9, AMD Ryzen 5/7/9) with virtualization extensions. More cores allow you to assign dedicated cores to VMs or run more VMs smoothly.
- RAM:
  16GB or more. This is often the biggest limiting factor. For example:
    - Host OS: 2-4GB
    - pfSense Firewall VM: 1GB
    - Linux Server VM (DNS/Web): 1-2GB
    - Linux Client VM: 1-2GB
    - Another Server VM: 1-2GB
  This quickly adds up. More RAM means more (or more powerful) concurrent VMs.
- Disk Space:
  A fast SSD (Solid State Drive) with 256GB or more free space. SSDs dramatically improve VM boot times, application responsiveness, and overall performance compared to traditional HDDs. NVMe SSDs are even faster.
- Network:
  A stable network connection for the host, especially if you plan to use bridged networking or download OS images and updates for your VMs.
- Graphics:
  Basic integrated graphics are usually sufficient unless you plan to do GPU passthrough for demanding graphical applications within VMs (an advanced topic).
Checking for Virtualization Support (VT-x/AMD-V)
Before proceeding with the installation, you must verify that your CPU supports virtualization extensions and that they are enabled in your system's BIOS/UEFI.
1.  Check in BIOS/UEFI:
    - Reboot your computer and enter the BIOS/UEFI setup utility. This is usually done by pressing a key like `Del`, `F2`, `F10`, `F12`, or `Esc` during startup. The key varies by manufacturer.
    - Look for settings related to "Virtualization Technology," "Intel VT-x," "AMD-V," "SVM (Secure Virtual Machine)," or similar. These are often found under "CPU Configuration," "Advanced Chipset Features," or "Northbridge" sections.
    - Ensure these settings are Enabled.
    - Save changes and exit the BIOS/UEFI setup.
2.  Check from Linux:
    Once your Linux system is booted, you can verify virtualization support from the command line. Open a terminal and run `lscpu | grep -i virtualization` (a consolidated command sketch follows this list):
    - For Intel CPUs, you should see `VT-x`.
    - For AMD CPUs, you should see `AMD-V`.
    If you don't see any output, either your CPU doesn't support virtualization, or it's disabled in the BIOS/UEFI.
    You can also check whether the KVM kernel modules are loaded with `lsmod | grep kvm` (this might require `sudo` or root privileges if the modules are not yet loaded automatically). If KVM is properly set up and supported, you should see `kvm_intel` (for Intel) or `kvm_amd` (for AMD) and `kvm`.
    Another useful command is `kvm-ok` (you might need to install the `cpu-checker` package first: `sudo apt install cpu-checker` on Debian/Ubuntu, or `sudo dnf install cpu-checker` on Fedora). This command will explicitly tell you whether KVM acceleration can be used. If it says:
        INFO: /dev/kvm exists
        KVM acceleration can be used
    then you are good to go. If it indicates an issue, it will often provide a hint (e.g., "KVM is disabled in the BIOS").
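For convenience, here is a minimal sketch that bundles the checks above into one terminal session (the package install line is shown for Debian/Ubuntu; adjust it for your distribution):

```bash
# CPU virtualization extensions (VT-x / AMD-V)
lscpu | grep -i virtualization

# KVM kernel modules currently loaded
lsmod | grep kvm

# Explicit acceleration check (from the cpu-checker package)
sudo apt install cpu-checker   # Debian/Ubuntu; use dnf on Fedora
kvm-ok
```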
Installing KVM QEMU and virt-manager on Linux
The installation process varies slightly depending on your Linux distribution. Below are instructions for common distributions. Ensure your system is up-to-date before you begin:
# For Debian/Ubuntu
sudo apt update && sudo apt upgrade -y
# For Fedora
sudo dnf update -y
# For CentOS/RHEL (example for CentOS Stream/RHEL 8+)
sudo dnf update -y
# For Arch Linux
sudo pacman -Syu
Debian/Ubuntu
1.  Install necessary packages:
    sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager spice-vdagent -y
    - `qemu-kvm`: Provides QEMU user-space components and KVM kernel module integration.
    - `libvirt-daemon-system`: The libvirt daemon that manages the VMs. The `-system` variant ensures it runs as a system service.
    - `libvirt-clients`: Provides command-line tools for managing libvirt, such as `virsh`.
    - `bridge-utils`: Contains utilities for creating and managing bridge network devices (e.g., `brctl`), essential for bridged networking.
    - `virt-manager`: The graphical management tool.
    - `spice-vdagent`: (Optional, but highly recommended for guest VMs) This agent enhances the guest experience with features like copy-paste, automatic resolution resizing, and mouse sharing when using SPICE for console access. You install this inside your guest VMs later, but it's good to be aware of. For the host, the relevant SPICE client libraries are usually pulled in as dependencies of `virt-manager`.
2.  Add your user to the `libvirt` and `kvm` groups:
    This allows you to manage virtual machines as a non-root user; `$(whoami)` expands to your current username (see the sketch below).
    - You will need to log out and log back in or reboot your system for these group changes to take effect.
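A minimal sketch of the group change referenced above (the group names follow the Debian/Ubuntu packaging described in this section):

```bash
# Append (-a) the current user to the supplementary (-G) groups libvirt and kvm.
sudo usermod -aG libvirt,kvm $(whoami)

# Verify the membership (takes effect after logging out and back in).
groups $(whoami)
```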
Fedora/CentOS/RHEL
1.  Install necessary packages:
    Fedora typically has more up-to-date packages. For CentOS/RHEL, ensure your repositories (like EPEL for some tools if needed, though core KVM tools are in the base repos) are configured.
    For Fedora, install the `@virtualization` package group, which installs `qemu-kvm`, `libvirt-daemon`, `virt-manager`, and other common virtualization tools. For CentOS Stream/RHEL 8+ (and similar distributions like AlmaLinux, Rocky Linux), you can install the same group or, more specifically, the individual packages (see the sketch after this list).
2.  Add your user to the `libvirt` group:
    Log out and log back in or reboot for the change to take effect. (The `kvm` group might not be used or necessary in the same way as on Debian/Ubuntu; `libvirt` group membership is key.)
Arch Linux
1.  Install necessary packages (see the sketch after this list):
    - `qemu-full`: Installs all QEMU components including system emulators, tools, and UI components. You could use `qemu-desktop` or `qemu-base` for a more minimal install if you know exactly what you need.
    - `libvirt`: The libvirt daemon and client tools.
    - `virt-manager`: The graphical management tool.
    - `bridge-utils`: For network bridging.
    - `edk2-ovmf`: Provides UEFI firmware for VMs, which is often preferred over traditional SeaBIOS for modern operating systems.
    - `spice-vdagent`: Again, for guest VMs, but good to install its dependencies on the host.
2.  Add your user to the `libvirt` group:
    Log out and log back in or reboot for the change to take effect.
3.  Enable UEFI/OVMF (Optional but Recommended):
    If you installed `edk2-ovmf`, libvirt might not use it by default for QEMU sessions. You may need to uncomment/edit the `nvram` lines in `/etc/libvirt/qemu.conf`. This step ensures that VMs can be created with UEFI firmware. After editing, restart `libvirtd.service`.
Verifying the Installation
After installing the packages and re-logging in (or rebooting), verify that everything is working correctly.
1.  Check `libvirtd` service status:
    The `libvirtd` daemon (`libvirtd.service`) should be running and enabled to start on boot. You should see output indicating it is "active (running)". If not, start and enable it. On some systems, the service name might be `libvirt-daemon.service`.
2.  Check KVM module loading:
    As done previously in the support check, you should see `kvm_intel` or `kvm_amd` and `kvm`.
3.  Verify `virsh` connectivity:
    `virsh` is a command-line tool to interact with libvirt. Running a simple list command should work without `sudo` if your user is correctly added to the `libvirt` group. The command should execute without errors and show an empty list of VMs (Id, Name, State). If you get a permission denied error, it means your group membership change hasn't taken effect (try rebooting) or there's another permissions issue.
4.  Check for the default network:
    Libvirt usually creates a default NAT-based network. You should see a network named `default`, likely active and set to autostart. You can also dump its details, which will show its XML configuration, including the IP address range and DHCP settings.
    The command sketch after this list covers all four checks.
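A minimal sketch of the verification commands described above (all are standard systemd, kernel, and libvirt tools):

```bash
# 1. libvirtd service status; start/enable it if it is not running
systemctl status libvirtd
sudo systemctl enable --now libvirtd

# 2. KVM kernel modules
lsmod | grep kvm

# 3. virsh connectivity as an unprivileged user (should list no VMs yet)
virsh list --all

# 4. Default NAT network and its XML definition
virsh net-list --all
virsh net-dumpxml default
```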
Initial virt-manager Configuration
While virt-manager often works out-of-the-box, here are a few things to check or be aware of:
1.  Launch virt-manager:
    As described in the previous workshop, launch it from your application menu or by typing `virt-manager` in a terminal.
2.  Connect to QEMU/KVM:
    Ensure it connects to the `QEMU/KVM` session. If it's "Not Connected," double-click it.
3.  Check for KVM Acceleration:
    - Go to `File` -> `Preferences`.
    - Under `New VM`, ensure the checkbox `Enable KVM/QEMU hardware acceleration` (or similar wording) is checked if available.
    - In `virt-manager`'s main window, select the `QEMU/KVM` connection.
    - Go to `Edit` -> `Connection Details`.
    - On the `Overview` tab, it should indicate that KVM acceleration is available or in use.
4.  Default Storage Pool:
    - Go to `Edit` -> `Connection Details` -> `Storage` tab.
    - You should see a `default` storage pool, usually located at `/var/lib/libvirt/images/`.
    - Ensure it's active. This is where VM disk images will be stored unless you specify otherwise. We will explore creating custom storage pools later.
Workshop Setting Up Your Virtualization Host
Objective:
To install KVM, QEMU, libvirt, and virt-manager on your Linux host, verify the installation, and ensure your user account has the necessary permissions.
Prerequisites:
- A Linux host machine.
- Administrative (sudo) privileges.
- Internet connection (for downloading packages).
Instructions:
Part 1: System Check and Preparation
1.  Verify Hardware Virtualization Support in BIOS/UEFI:
    - Action: Reboot your computer and enter the BIOS/UEFI setup.
    - Guidance: Look for settings like "Intel Virtualization Technology (VT-x)," "AMD-V," or "SVM Mode."
    - Expected Outcome: These settings should be Enabled. If not, enable them, save, and exit.
2.  Verify Hardware Virtualization Support in Linux:
    - Action: Open a terminal and run `lscpu | grep -i virtualization`.
    - Expected Outcome: Output should show `VT-x` (for Intel) or `AMD-V` (for AMD).
    - Action (Optional): Install `cpu-checker` (e.g., `sudo apt install cpu-checker` on Debian/Ubuntu or `sudo dnf install cpu-checker` on Fedora) and run `kvm-ok`.
    - Expected Outcome: `kvm-ok` should report: `KVM acceleration can be used`.
Part 2: Software Installation (Choose the section for your distribution)
1.  For Debian/Ubuntu users:
    - Update package lists.
    - Install the virtualization packages.
    - Add your user to the `libvirt` and `kvm` groups.
2.  For Fedora users:
    - Update the system.
    - Install the virtualization package group.
    - Add your user to the `libvirt` group.
3.  For Arch Linux users:
    - Update the system.
    - Install the virtualization packages.
    - Add your user to the `libvirt` group.
    - (Arch Linux Specific - Optional but Recommended for UEFI VMs): Edit `/etc/libvirt/qemu.conf` as root (e.g., `sudo nano /etc/libvirt/qemu.conf`), find the `nvram` line (it might be commented out), set it or uncomment it, and save the file.
The corresponding commands for each distribution are consolidated in the sketch below.
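A minimal command sketch for Part 2, grouped by distribution (package selections mirror the earlier installation section; the Arch `nvram` paths in the comment are assumptions to verify on your system):

```bash
# --- Debian/Ubuntu ---
sudo apt update
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager -y
sudo usermod -aG libvirt,kvm $(whoami)

# --- Fedora ---
sudo dnf update -y
sudo dnf install @virtualization -y
sudo usermod -aG libvirt $(whoami)

# --- Arch Linux ---
sudo pacman -Syu
sudo pacman -S qemu-full libvirt virt-manager bridge-utils edk2-ovmf
sudo usermod -aG libvirt $(whoami)
# Optional UEFI support: uncomment/set the nvram line in /etc/libvirt/qemu.conf, e.g.
#   nvram = [ "/usr/share/edk2/x64/OVMF_CODE.4m.fd:/usr/share/edk2/x64/OVMF_VARS.4m.fd" ]
```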
Part 3: Post-Installation Steps (Common to all distributions)
1.  Apply Group Changes:
    - Action: Log out of your current session and log back in, or reboot your computer. This is crucial for group membership changes to take effect.
2.  Verify `libvirtd` Service:
    - Action: Open a terminal after logging back in and check the service status with `systemctl status libvirtd`.
    - Expected Outcome: The service should be `active (running)`.
    - Troubleshooting: If not active, try starting it: `sudo systemctl start libvirtd` and enabling it: `sudo systemctl enable libvirtd`.
3.  Verify KVM Kernel Modules:
    - Action: Run `lsmod | grep kvm`.
    - Expected Outcome: You should see `kvm_intel` or `kvm_amd`, and `kvm`.
4.  Test `virsh` Access (without sudo):
    - Action: Run `virsh list --all`.
    - Expected Outcome: The command should execute without permission errors and show an empty list of VMs, or any pre-existing VMs if this isn't a fresh setup. If you get an error, the group changes likely didn't apply (re-login/reboot again) or there was an issue with the group addition.
5.  Check Default Network:
    - Action: Run `virsh net-list --all`.
    - Expected Outcome: A network named `default` should be listed, active, and set to autostart.
6.  Launch virt-manager and Basic Checks:
    - Action: Launch "Virtual Machine Manager" from your applications menu or by typing `virt-manager` in the terminal.
    - Verify Connection: Ensure the `QEMU/KVM` connection is active.
    - Check Acceleration: Navigate to `Edit` -> `Connection Details`. The `Overview` tab should confirm KVM acceleration.
    - Check Default Storage: In `Connection Details`, go to the `Storage` tab. Verify the `default` pool exists and is active (usually at `/var/lib/libvirt/images/`).
Workshop Summary:
If you've completed all these steps successfully, your host system is now a fully functional KVM/QEMU virtualization platform, and virt-manager is ready to use. You've verified hardware support, installed all necessary software, configured user permissions, and confirmed that the core services are running. You are now well-prepared to move on to understanding disk architectures and creating your first virtual machines.
2. Understanding Virtual Machine Disk Architectures
A virtual machine, much like a physical one, requires storage for its operating system, applications, and data. In a KVM/QEMU environment, this storage is typically provided by disk images, which are files on the host filesystem that appear as physical hard drives to the guest VM. Understanding the different disk image formats, provisioning methods, and storage management features is crucial for efficient and effective virtualization.
Disk Image Formats
QEMU supports various disk image formats, each with its own characteristics, features, and trade-offs. `libvirt` and `virt-manager` allow you to choose the format when creating a virtual disk.
qcow2 (QEMU Copy-On-Write version 2)
This is the most commonly used and recommended format for KVM/QEMU VMs. It offers a rich feature set:
- Smaller Image Size:
  qcow2 images can be smaller than their raw counterparts because they only allocate space as data is written (thin provisioning by default). Empty sectors within the VM's filesystem do not consume space in the image file on the host until they are written to.
- Snapshots:
  qcow2 supports internal snapshots, allowing you to save the state of a VM's disk at a particular point in time and revert to it later. This is incredibly useful for testing and recovery.
- AES Encryption:
  qcow2 images can be encrypted for enhanced security (though managing keys can be complex).
- zlib-based Compression:
  Supports transparent compression of image data, which can save disk space but may incur a slight performance overhead.
- Backing Files (Differencing Images):
  A qcow2 image can use another image (usually a read-only base image) as its backing file. The new image only stores changes relative to the backing file. This is powerful for creating multiple VMs from a common template or for non-persistent VMs.
- Preallocation:
  While qcow2 supports thin provisioning, you can also preallocate metadata or full disk space to improve performance in some scenarios by reducing fragmentation and ensuring space availability.
When to use qcow2:
Generally, qcow2 is the default choice for most use cases due to its flexibility, snapshot support, and efficient space usage.
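As an illustration of the backing-file feature described above, here is a minimal sketch (file names are placeholders) that creates an overlay image on top of a read-only base template:

```bash
# Create an overlay that records only the differences from base.qcow2.
# -b sets the backing file, -F declares the backing file's format.
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2

# Inspect the result; the output lists the backing file chain.
qemu-img info overlay.qcow2
```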
raw
The raw format is a plain, bit-for-bit image of a disk.
- Performance:
  Raw images can offer slightly better performance than qcow2 in I/O intensive workloads because there's no metadata overhead or complex feature processing. The difference is often negligible for many common tasks, especially with modern KVM/QEMU versions and VirtIO drivers.
- Simplicity:
  It's a straightforward format.
- Space Usage:
  A raw image file consumes the full specified disk size on the host filesystem from the moment it's created, regardless of how much data is actually inside the VM's filesystem (thick provisioning). For example, a 100GB raw disk image will immediately occupy 100GB on your host's disk.
- No Native Snapshots:
  The raw format itself doesn't support internal snapshots. Snapshots with raw images usually involve external mechanisms, often managed by LVM (Logical Volume Manager) if the raw image is stored on an LVM logical volume, or by using QEMU's external snapshot capabilities with a qcow2 overlay.
- Interoperability:
  Raw images are easily converted to and from other formats and can be directly manipulated by tools like `dd` or mounted via loopback devices (with care).
When to use raw:
- When absolute maximum I/O performance is critical and snapshot features are not needed or are handled externally (e.g., via LVM snapshots).
- For exporting/importing images to/from other virtualization platforms that prefer raw format.
- If you need to directly access the disk image content from the host system frequently.
vmdk vdi and others
QEMU also has support for other disk image formats, primarily for compatibility with other virtualization solutions:
- VMDK (Virtual Machine Disk):
  Developed by VMware. QEMU can read and write many variants of VMDK, which is useful if you're migrating VMs from VMware products. Some advanced VMware features within VMDK files might not be fully supported.
- VDI (Virtual Disk Image):
  Used by Oracle VirtualBox. Similar to VMDK, QEMU's support for VDI facilitates migration from VirtualBox.
- VHD (Virtual Hard Disk) / VHDX (Virtual Hard Disk v2):
  Used by Microsoft Hyper-V. QEMU has support for these as well.
While QEMU can use these formats, it's often recommended to convert them to qcow2 or raw for use with KVM for optimal features and performance within the KVM/QEMU ecosystem, unless you have specific reasons for maintaining the original format (e.g., frequent back-and-forth migration). The `qemu-img` command-line tool is excellent for converting between formats.
Example conversion (file names are placeholders):
qemu-img convert -f vmdk -O qcow2 source-image.vmdk destination-image.qcow2
Here, `-f vmdk` specifies the input format, and `-O qcow2` specifies the output format.
Thin vs Thick Provisioning
This concept relates to how disk space is allocated for a virtual disk image on the host's storage.
-   Thin Provisioning (Sparse Allocation):
    - Disk space is allocated from the host's storage pool only when data is actually written to the virtual disk by the guest OS.
    - A newly created thin-provisioned 100GB virtual disk might initially consume only a few megabytes (for metadata) on the host. As the guest OS installs and writes data, the image file grows.
    - Advantages:
      Efficient use of storage space, especially when creating many VMs or large virtual disks that won't be filled immediately. Faster initial creation of the disk image file.
    - Disadvantages:
        - Risk of over-provisioning: You can define virtual disks whose total size exceeds the actual available space in the storage pool. If all VMs start consuming their allocated space, you can run out of physical storage, potentially leading to VM pauses or crashes.
        - Slight performance overhead: There might be a minor performance hit when new blocks need to be allocated and metadata updated. This is usually minimal with modern systems.
        - Fragmentation: The image file can become fragmented on the host filesystem over time, potentially impacting performance.
    - qcow2 images are thin-provisioned by default.
-   Thick Provisioning (Fully Allocated):
    - The full size of the virtual disk is allocated on the host's storage at the time of creation. A 100GB thick-provisioned virtual disk will immediately consume 100GB on the host.
    - Advantages:
        - Guaranteed space: The space is reserved, preventing out-of-space issues for that VM (assuming the host pool had enough space initially).
        - Potentially better performance: Less fragmentation and no overhead for on-demand allocation can lead to more consistent I/O performance.
        - Predictable space usage.
    - Disadvantages:
      Slower initial creation of the disk image file. Less efficient use of storage space if the virtual disks are not fully utilized.
    - Raw images are always thick-provisioned.
    - qcow2 images can be preallocated to behave like thick-provisioned disks. This can be done during creation using `qemu-img create` with options like `preallocation=metadata` (preallocates metadata, still grows for data) or `preallocation=full` (preallocates all space). `virt-manager` might also offer options to "Allocate entire disk now."
Consideration:
For a learning lab, thin provisioning (default qcow2) is usually perfectly fine and more space-efficient. For production systems with critical performance needs, thick provisioning or preallocated qcow2 might be considered.
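To make the difference concrete, here is a minimal sketch comparing a thin-provisioned and a fully preallocated qcow2 image (sizes and file names are placeholders):

```bash
# Thin-provisioned: the file starts out tiny and grows as the guest writes data.
qemu-img create -f qcow2 thin.qcow2 20G

# Fully preallocated: all 20G are reserved on the host up front.
qemu-img create -f qcow2 -o preallocation=full thick.qcow2 20G

# Compare "virtual size" vs. "disk size" in the output of both.
qemu-img info thin.qcow2
qemu-img info thick.qcow2
```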
Snapshots and Their Importance
A snapshot captures the state of a virtual machine's disk(s) and, optionally, its memory (RAM) and device state at a specific point in time. This is one of the most powerful features of virtualization.
- Disk Snapshots:
  Capture the contents of the virtual hard disk(s).
- Live Snapshots:
  If the VM is running, a live snapshot also attempts to capture the contents of its RAM and the state of its virtual devices. This allows you to restore the VM to the exact running state it was in when the snapshot was taken.
- Offline Snapshots:
  Taken when the VM is powered off. These only capture the disk state.
How qcow2 Snapshots Work (Internally):
When you take a snapshot of a qcow2 image, the current state of the image is frozen. Subsequent writes to the disk are stored in a new area within the qcow2 file or in a separate overlay file (depending on the snapshot mode and libvirt version). The snapshot itself contains pointers to the state of the disk blocks at the time of the snapshot. This copy-on-write mechanism is efficient.
Benefits of Snapshots:
- Testing Software/Updates:
  Before installing a new piece of software, a major update, or making a significant configuration change, take a snapshot. If something goes wrong, you can quickly revert the VM to its pre-change state.
- Experimentation:
  Try risky configurations or security tests. If you break the VM, revert.
- Backup Points:
  While not a replacement for a full backup strategy, snapshots provide quick restore points.
- Development/Debugging:
  Capture a specific state of a system for debugging purposes.
Managing Snapshots in virt-manager:
- Select a VM.
- Click the "Show virtual machine console" button (looks like a monitor).
- In the VM's console window, go to `Virtual Machine` -> `Snapshot Manager` (or there might be a "Manage Snapshots" icon).
- Here you can create new snapshots (give them descriptive names!), revert to existing snapshots, and delete old ones.
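The same operations are available from the command line; a minimal sketch using `virsh` with a hypothetical VM named `labvm`:

```bash
# Create a named snapshot with a description
virsh snapshot-create-as labvm pre-update "state before applying updates"

# List snapshots and revert to one
virsh snapshot-list labvm
virsh snapshot-revert labvm pre-update

# Remove a snapshot that is no longer needed
virsh snapshot-delete labvm pre-update
```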
Important Considerations for Snapshots:
- Performance:
  Having many snapshots, especially in a long chain, can slightly degrade disk I/O performance as QEMU may need to traverse the chain to find the correct data blocks.
- Disk Space:
  Snapshots consume disk space on the host. Each snapshot stores the differences from its parent state. If you make many changes after a snapshot, it can grow significantly.
- Do Not Treat Snapshots as Primary Backups:
  Snapshots are typically stored with the VM's disk image. If the host storage fails or the primary disk image gets corrupted, your snapshots might also be lost. Use a separate backup solution for critical data.
- Snapshot Chains:
  Deleting a snapshot in the middle of a chain requires merging its data into its child snapshot or its parent, which can be an I/O intensive operation. `libvirt` handles this.
- External vs. Internal Snapshots:
    - Internal:
      Snapshot data is stored within the qcow2 file itself. Simpler to manage as a single file.
    - External:
      Snapshot data is stored in a new qcow2 overlay file, and the original base image becomes read-only. This is more flexible for certain workflows, like creating templates. `libvirt` can manage both.
Storage Pools
In `libvirt` (and thus `virt-manager`), a storage pool is a source of storage managed by the hypervisor, from which virtual disk images (called storage volumes in libvirt terminology) can be allocated for VMs.
`virt-manager` provides an interface to manage these pools (`Edit` -> `Connection Details` -> `Storage` tab).
Default Pool (directory based)
- By default, `libvirt` usually sets up a storage pool named `default`.
- Type:
  `dir` (directory-based).
- Location:
  Typically `/var/lib/libvirt/images/` on the host filesystem.
- How it works:
  This pool is simply a directory on your host. When you create a virtual disk in this pool, `libvirt` creates a file (e.g., `myvm.qcow2`) inside this directory.
- Pros:
  Simple to set up and understand. Uses the existing host filesystem.
- Cons:
  Performance is tied to the underlying host filesystem and disk. Features like advanced snapshotting or cloning might rely on qcow2 features rather than underlying storage capabilities.
LVM-based Pools
- Type:
  `logical` (LVM Volume Group).
- How it works:
  You dedicate an LVM Volume Group (VG) on your host to `libvirt`. When you create a virtual disk, `libvirt` carves out a Logical Volume (LV) from this VG. This LV is then presented to the VM as a raw block device (though you can still format it with qcow2 on top if desired, but typically you'd use it raw for LVM benefits).
- Pros:
    - Performance:
      Can offer good performance, especially if the LVs are used as raw devices by VMs.
    - Advanced Snapshotting:
      LVM has its own robust and efficient snapshot mechanism. You can snapshot entire LVs, which can be faster and more space-efficient at the block level than qcow2 internal snapshots for some use cases.
    - Thin Provisioning at LVM Level:
      LVM VGs can have thin-provisioned LVs.
    - Scalability:
      Easier to manage and resize storage if your VG has free space or can be extended.
- Cons:
  Requires you to set up and manage LVM on your host, which adds a layer of complexity.
iSCSI Pools
- Type:
  `iscsi`.
- How it works:
  Connects to an iSCSI target (a SAN or a server providing block storage over the network). Each LUN (Logical Unit Number) on the iSCSI target can be used as a virtual disk.
- Pros:
  Centralized storage, suitable for larger deployments or when using a dedicated SAN. Features depend on the SAN (snapshots, replication, etc.).
- Cons:
  Requires an iSCSI target. Network performance and latency become critical factors. More complex to set up.
NFS Pools
- Type:
  `netfs` (Network File System).
- How it works:
  Uses an NFS share as a storage pool. Virtual disk image files (e.g., qcow2 files) are stored on the NFS server.
- Pros:
  Centralized file-based storage. Easy to share VM images between multiple KVM hosts (though live migration needs shared storage configured correctly).
- Cons:
  Performance is heavily dependent on network speed and NFS server performance. Can have higher latency than local storage.
For a learning lab on a single host, the default directory-based pool is often sufficient. If you are comfortable with LVM and have a separate partition or disk, an LVM-based pool can be a good option to explore for its snapshot capabilities.
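For reference, here is the `virsh` counterpart to what virt-manager does when you add a directory-based pool; the pool name and target path are placeholders:

```bash
# Define, build, start, and autostart a directory-backed storage pool.
virsh pool-define-as labpool dir --target /srv/labpool
virsh pool-build labpool        # creates the target directory if needed
virsh pool-start labpool
virsh pool-autostart labpool

# Confirm the pool and list its volumes.
virsh pool-list --all
virsh vol-list labpool
```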
Filesystem Considerations within VMs
Just like a physical machine, the operating system inside your VM needs a filesystem (e.g., ext4, XFS, NTFS) on its virtual disk. The choice of filesystem inside the VM is largely independent of the disk image format (qcow2, raw) on the host, though there can be some interactions.
ext4 XFS Btrfs
These are common Linux filesystems:
- ext4:
  Mature, reliable, and widely used default for many Linux distributions. Good all-around performance. Supports journaling for crash consistency.
- XFS:
  High-performance journaling filesystem, particularly good for large files and high concurrency. Often favored for server workloads.
- Btrfs (B-tree Filesystem):
  A more modern filesystem with advanced features like built-in snapshotting (at the filesystem level inside the VM, distinct from qcow2 or LVM snapshots), checksums, compression, and integrated volume management. It's powerful but can be more complex.
When you install an OS in a VM, the OS installer will typically partition the virtual disk and format it with a chosen filesystem. Your choice depends on the needs of the guest OS and its applications.
Swap Space and Hibernation
Swap Space:
- Swap space is used by the OS when the amount of physical RAM (for the VM, this is the allocated virtual RAM) is full. Inactive pages from RAM are moved to the swap space on the disk to free up RAM for active processes.
- Swap is much slower than RAM, so relying heavily on it will degrade performance. However, having some swap is generally recommended as a safety net to prevent out-of-memory errors from crashing applications or the OS.
- Swap Partition:
  A dedicated partition on the virtual disk for swap. This is the traditional method.
- Swap File:
  A file on an existing filesystem that is designated as swap space. More flexible for resizing. Many modern Linux distributions default to a swap file.
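As an aside, here is a minimal sketch of adding a swap file inside a Linux guest (size and path are placeholders; run this inside the guest, not on the host, and note that some filesystems such as Btrfs need extra care):

```bash
# Create a 2 GiB swap file, restrict permissions, format and enable it.
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it persistent across reboots.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```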
Hibernation (Suspending to Disk):
- Hibernation saves the entire state of the VM's RAM to disk (specifically, to the swap space) and then powers off the VM. When you resume, the state is read back from disk into RAM, restoring your session exactly as it was.
- Requirements for Hibernation:
- The swap partition or swap file must be at least as large as the VM's allocated RAM. Some recommendations suggest slightly larger (e.g., RAM size + 20% or RAM size + square root of RAM size) to accommodate kernel data structures.
- The OS inside the VM must support hibernation.
- The bootloader must be configured to resume from the correct swap location.
- Hibernation in a VM Context:
- This refers to hibernation within the guest OS.
- It's different from `libvirt`'s "save" feature (managed save), which saves the VM state (including RAM) to a file on the host and stops the VM instance, allowing it to be restored later. Guest-initiated hibernation writes to the guest's swap.
- Enabling hibernation inside a guest can be useful, but ensure your guest's swap is configured appropriately.
Planning Disk Layout in Guest VMs:
When installing an OS in a VM, you'll typically be presented with partitioning options.
- Automatic Partitioning:
  Most OS installers offer an automatic option that creates a root partition (`/`), and possibly a separate `/boot` and swap. This is often fine for general use.
- Manual Partitioning:
  Gives you more control. You might consider:
    - A `/boot` partition (e.g., 500MB - 1GB).
    - A root (`/`) partition for the OS and applications (size depends on needs).
    - A swap partition/file (e.g., equal to RAM or slightly more if hibernation is desired, or a smaller amount like 2-4GB if RAM is plentiful and hibernation isn't a primary goal).
    - Potentially separate partitions for `/home`, `/var` (if you expect lots of logs or variable data), or `/srv` (for server data), depending on the VM's role.
For your simulated network, standard guest OS partitioning is usually sufficient. If you plan to experiment with hibernation, pay close attention to swap size.
Workshop Creating and Managing Storage Pools and Disk Images
Objective:
To understand and practice creating different types of storage pools in `virt-manager`, create various disk image types, and explore their properties.
Prerequisites:
- KVM/QEMU and `virt-manager` installed and working.
- Sudo privileges.
- Some free disk space on your host.
Part 1: Exploring the Default Storage Pool
1.  Open virt-manager.
2.  Go to `Edit` -> `Connection Details`.
3.  Select the `Storage` tab. You should see the `default` pool.
    - Observe: Note its `Type` (likely `dir`), `Status` (should be `Active`), `Path` (likely `/var/lib/libvirt/images/`), `Capacity`, and `Allocation`.
4.  Browse the Pool:
    - Select the `default` pool. On the right side, you'll see a list of "Volumes" (disk images). Initially, it might be empty.
    - Click the "Browse Local" button if available, or navigate to the `Path` (e.g., `/var/lib/libvirt/images/`) using your file manager with `sudo` if needed (e.g., `sudo nautilus /var/lib/libvirt/images/`) to see the directory contents. Be careful not to modify files manually here unless you know what you're doing.
5.  Create a New Volume (Disk Image) in the Default Pool:
    - With the `default` pool selected, click the `+` button under the "Volumes" list.
    - Name: `test-vol1.qcow2`
    - Format: Select `qcow2`.
    - Max Capacity: Set to `5` GB (this is the virtual size).
    - Allocation: Do not select the checkbox "Allocate entire volume now". This makes it thin-provisioned; the actual file size will be small initially.
    - Click `Finish`.
    - You should now see `test-vol1.qcow2` in the list of volumes.
    - Verify Actual Size:
      Check the disk size and virtual size:
        - virtual size should be 5 GiB
        - disk size should be 968 KiB
      `sudo` is needed because the image was created as root via the virt-manager UI:

        $ sudo qemu-img info /var/lib/libvirt/images/test-vol1.qcow2
        image: /var/lib/libvirt/images/test-vol1.qcow2
        file format: qcow2
        virtual size: 5 GiB (5368709120 bytes)
        disk size: 968 KiB
        cluster_size: 65536
        Format specific information:
            compat: 1.1
            compression type: zlib
            lazy refcounts: true
            refcount bits: 16
            corrupt: false
            extended l2: false
        Child node '/file':
            filename: /var/lib/libvirt/images/test-vol1.qcow2
            protocol type: file
            file length: 5 GiB (5369757696 bytes)
            disk size: 968 KiB

6.  Create a Preallocated qcow2 Volume (Simulating Thick Provisioning):
    - Click `+` again in the `default` pool.
    - Name: `test-vol2-thick.qcow2`
    - Format: `qcow2`.
    - Max Capacity: `1` GB.
    - Allocation: `1` GB (set allocation equal to max capacity). `virt-manager` might have a checkbox like "Allocate entire volume now". If so, check it.
    - Click `Finish`.
    - You should now see `test-vol2-thick.qcow2` in the list of volumes.
    - Verify Actual Size:
      Check the disk size and virtual size:
        - virtual size should be 1 GiB
        - disk size should also be 1 GiB
      `sudo` is needed because the image was created as root via the virt-manager UI:

        $ sudo qemu-img info /var/lib/libvirt/images/test-vol2-thick.qcow2
        image: /var/lib/libvirt/images/test-vol2-thick.qcow2
        file format: qcow2
        virtual size: 1 GiB (1073741824 bytes)
        disk size: 1 GiB
        cluster_size: 65536
        Format specific information:
            compat: 1.1
            compression type: zlib
            lazy refcounts: true
            refcount bits: 16
            corrupt: false
            extended l2: false
        Child node '/file':
            filename: /var/lib/libvirt/images/test-vol2-thick.qcow2
            protocol type: file
            file length: 1 GiB (1074135040 bytes)
            disk size: 1 GiB

7.  Create a Raw Volume:
    - Click `+` again.
    - Name: `test-vol3.raw`
    - Format: Select `raw`.
    - Max Capacity: `1` GB. (Allocation for raw is always the full capacity.)
    - Click `Finish`.
    - You should now see `test-vol3.raw` in the list of volumes.
    - Verify Actual Size:
      Check the disk size and virtual size:
        - virtual size should be 1 GiB
        - disk size should also be 1 GiB
      `sudo` is needed because the image was created as root via the virt-manager UI:

        $ sudo qemu-img info /var/lib/libvirt/images/test-vol3.raw
        image: /var/lib/libvirt/images/test-vol3.raw
        file format: raw
        virtual size: 1 GiB (1073741824 bytes)
        disk size: 1 GiB
        Child node '/file':
            filename: /var/lib/libvirt/images/test-vol3.raw
            protocol type: file
            file length: 1 GiB (1073741824 bytes)
            disk size: 1 GiB
Part 2: Creating a New Directory-Based Storage Pool
Let's create a new storage pool in a different directory, perhaps in your home directory (for ease of access, though for "production" VMs, `/var/lib/libvirt/images` or dedicated LVM is better).
1.  Create a Directory on Your Host:
    - Open a terminal.
    - Create a directory, for example: `mkdir ~/kvm_storage_pool`
2.  Add the New Pool in virt-manager:
    - In `virt-manager` -> `Connection Details` -> `Storage` tab, click the `+` button at the bottom left (to add a pool, not a volume).
    - Name: `my_custom_pool`
    - Type: Select `dir: Filesystem Directory`.
    - Click `Forward`.
    - Target Path:
      Browse to or type the path you created: `~/kvm_storage_pool` (libvirt might resolve `~` to `/home/youruser/`).
      WARNING
      In my case, just entering the relative path `~/kvm_storage_pool` did not work. Step "4. Create a New Volume (Disk Image) in the Pool my_custom_pool:" below will then not really create a volume: although the volume appears in virt-manager, it will not exist in the filesystem. This seems to be a bug. If you know more about it, feel free to let me know --> nejat.hakan@outlook.de
      I had to type the full path `/home/nejat/kvm_storage_pool`. Please adjust my username `nejat` to your own username.
    - Click `Finish`.
3.  Activate and Use the New Pool:
    - The new pool `my_custom_pool` should appear in the list.
    - Select it. If it's not `Active`, click the "Play" button (Start pool).
    - It might ask if you want to build the pool. Click `Yes`.
    - Ensure "Autostart" is checked (`On boot: Yes`) if you want it available after reboots.
    - Now, you can create volumes within this pool just like you did with the `default` pool.
4.  Create a New Volume (Disk Image) in the Pool my_custom_pool:
    - With the `my_custom_pool` pool selected, click the `+` button under the "Volumes" list.
    - Name: `custom-test-vol1.qcow2`
    - Format: Select `qcow2`.
    - Max Capacity: Set to `5` GB (this is the virtual size).
    - Allocation: Do not select the checkbox "Allocate entire volume now". This should make it thin-provisioned; the actual file size should be small initially. But as you can see below, that is not the case for me.
    - Click `Finish`.
    - You should now see `custom-test-vol1.qcow2` in the list of volumes.
    - Verify Actual Size:
      Check the disk size and virtual size:
        - virtual size should be 5 GiB
        - disk size should be small (thin-provisioned), but it is 5 GiB in my case even though we did not select the checkbox "Allocate entire volume now". This seems to be a bug. Please let me (nejat.hakan@outlook.de) know if you have more information about it.
      `sudo` is needed because the image was created as root via the virt-manager UI:

        $ sudo qemu-img info ~/kvm_storage_pool/custom-test-vol1.qcow2
        image: /home/nejat/kvm_storage_pool/custom-test-vol1.qcow2
        file format: qcow2
        virtual size: 5 GiB (5368709120 bytes)
        disk size: 5 GiB
        cluster_size: 65536
        Format specific information:
            compat: 1.1
            compression type: zlib
            lazy refcounts: true
            refcount bits: 16
            corrupt: false
            extended l2: false
        Child node '/file':
            filename: /home/nejat/kvm_storage_pool/custom-test-vol1.qcow2
            protocol type: file
            file length: 5 GiB (5369757696 bytes)
            disk size: 5 GiB
Part 3: Using qemu-img (Command Line Tool)
`qemu-img` is a powerful command-line tool for creating, converting, and inspecting disk images.
- Open a terminal.
-
Create a qcow2 Image:
$ qemu-img create -f qcow2 ~/kvm_storage_pool/cli-vol1.qcow2 2G Formatting '/home/nejat/kvm_storage_pool/cli-vol1.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=2147483648 lazy_refcounts=off refcount_bits=16
-f qcow2
: specifies the format.~/kvm_storage_pool/cli-vol1.qcow2
: the path and name of the image.2G
: the virtual size (2 Gigabytes).
-
Get Information about an Image:
-
Verify Actual Size:
Check the disk size and virtual size:
- virtual size should be 2 GiB
- disk size should also be 204 KiB
sudo is not needed because the image has been created as user nejat because we used the users terminal
$ qemu-img info ~/kvm_storage_pool/cli-vol1.qcow2 image: /home/nejat/kvm_storage_pool/cli-vol1.qcow2 file format: qcow2 virtual size: 2 GiB (2147483648 bytes) disk size: 204 KiB cluster_size: 65536 Format specific information: compat: 1.1 compression type: zlib lazy refcounts: false refcount bits: 16 corrupt: false extended l2: false Child node '/file': filename: /home/nejat/kvm_storage_pool/cli-vol1.qcow2 protocol type: file file length: 192 KiB (197120 bytes) disk size: 204 KiB
-
-
Create a Raw Image:
-
Get Information about the Raw Image:
-
Verify Actual Size:
Check the disk size and virtual size:
- virtual size should be 1 GiB
- disk size should also be 1 GiB
$ qemu-img info ~/kvm_storage_pool/cli-vol2.raw image: /home/nejat/kvm_storage_pool/cli-vol2.raw file format: raw virtual size: 1 GiB (1073741824 bytes) disk size: 1 GiB Child node '/file': filename: /home/nejat/kvm_storage_pool/cli-vol2.raw protocol type: file file length: 1 GiB (1073741824 bytes) disk size: 1 GiB
-
-
Convert Raw to qcow2:
$ qemu-img convert -f raw -O qcow2 ~/kvm_storage_pool/cli-vol2.raw ~/kvm_storage_pool/cli-vol2-converted.qcow2
-f raw
: input format.-O qcow2
: output format.
-
Get Information about the Converted Image:
-
Verify Actual Size:
Check the disk size and virtual size:
- virtual size should be 1 GiB
- disk size should be 204 KiB because the original image "cli-vol2.raw" was mostly empty in our case
$ qemu-img info ~/kvm_storage_pool/cli-vol2-converted.qcow2
image: /home/nejat/kvm_storage_pool/cli-vol2-converted.qcow2
file format: qcow2
virtual size: 1 GiB (1073741824 bytes)
disk size: 204 KiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: /home/nejat/kvm_storage_pool/cli-vol2-converted.qcow2
    protocol type: file
    file length: 192 KiB (197120 bytes)
    disk size: 204 KiB
-
-
Refresh Storage Pool in virt-manager:
- Go back to
virt-manager
->Connection Details
->Storage
. - Select
my_custom_pool
. - The images created with
qemu-img
(cli-vol1.qcow2
,cli-vol2.raw
,cli-vol2-converted.qcow2
) might not appear immediately. You might need to refresh the pool view (sometimes closing and reopening Connection Details works, or deactivating and reactivating the pool if it's safe). Libvirt doesn't constantly scan directories unless explicitly told. - Note: It's generally better to create volumes through
libvirt
(viavirt-manager
orvirsh
) so it's aware of them. If you create them externally,libvirt
might not manage them as "volumes" within the pool until they are "defined" or used by a VM.
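If clicking around in the UI does not pick up the new files, the pool can also be refreshed from the command line; a sketch using virsh (pool name as created above):

$ sudo virsh pool-refresh my_custom_pool    # re-scan the pool's directory
$ sudo virsh vol-list my_custom_pool        # the externally created images should now be listed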
Part 4: Cleanup (Optional)
- Delete Volumes in virt-manager:
- In
virt-manager
->Connection Details
->Storage
. - Select a pool (e.g.,
default
ormy_custom_pool
). - Select a volume you created (e.g.,
test-vol1.qcow2
). - Click the
-
button under the "Volumes" list to delete it. Confirm the deletion.
- Deactivate and Delete a Storage Pool:
- Select
my_custom_pool
. - Click the "Stop" button (Stop pool) to deactivate it.
- With it selected and inactive, click the
-
button at the bottom left (to delete the pool definition from libvirt). This does not delete the directory (~/kvm_storage_pool
) or its contents, onlylibvirt
's knowledge of it as a pool.
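The same cleanup can be done with virsh; a sketch using the pool name from this workshop (pool-destroy only deactivates the pool, it does not touch the files on disk):

$ sudo virsh pool-destroy my_custom_pool     # stop (deactivate) the pool
$ sudo virsh pool-undefine my_custom_pool    # remove its definition from libvirt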
- Manually Delete Image Files and Directory:
- If you want to reclaim the disk space, manually delete the files and the directory:
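For example (a sketch assuming the file names and directory used in this workshop; double-check the paths before deleting, and note that the volume created via virt-manager may be owned by root):

$ sudo rm -v ~/kvm_storage_pool/custom-test-vol1.qcow2
$ rm -v ~/kvm_storage_pool/cli-vol1.qcow2 ~/kvm_storage_pool/cli-vol2.raw ~/kvm_storage_pool/cli-vol2-converted.qcow2
$ rmdir ~/kvm_storage_pool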
Workshop Summary:
You have now practiced:
- Inspecting the default storage pool.
- Creating qcow2 (thin and preallocated) and raw disk images using
virt-manager
. - Creating a new directory-based storage pool.
- Using
qemu-img
to create, inspect, and convert disk images. - Understanding the difference between virtual size and actual disk usage for thin-provisioned images.
This hands-on experience with storage pools and disk image formats will be invaluable as you start creating and configuring virtual machines. You now have a better understanding of where your VM disks will live and the characteristics of different storage options.
3. Creating Your First Virtual Machines
With your KVM/QEMU host set up and a basic understanding of virtual disk architectures, it's time for the exciting part: creating and installing your first virtual machines! This section will guide you through obtaining OS installation media, using the virt-manager
wizard, installing an OS, and performing essential post-installation steps.
Obtaining Operating System Installation Media (ISO Files)
To install an operating system (OS) in a virtual machine, you need its installation media, typically in the form of an ISO file. An ISO file is an archive file that contains an exact copy (or image) of the data on an optical disc (like a CD or DVD).
Linux Distributions
Many Linux distributions are excellent choices for server and client VMs in your lab due to their open-source nature, flexibility, and robust networking capabilities.
-
Ubuntu Server:
- Very popular, large community, extensive documentation. Good for general-purpose servers.
- Download: https://ubuntu.com/download/server
- Choose the LTS (Long Term Support) version for stability.
-
Debian:
- Known for its stability and adherence to free software principles. The foundation for Ubuntu and many other distributions.
- Download: https://www.debian.org/distrib/
- You'll likely want the "netinst" (network install) ISO for a minimal installation, or a full CD/DVD image.
-
CentOS Stream / AlmaLinux / Rocky Linux:
- These are RHEL (Red Hat Enterprise Linux) compatible distributions. Excellent for learning enterprise Linux environments. CentOS Stream is upstream of RHEL, while AlmaLinux and Rocky Linux are downstream, 1:1 binary compatible rebuilds.
- CentOS Stream: https://www.centos.org/centos-stream/
- AlmaLinux: https://almalinux.org/
- Rocky Linux: https://rockylinux.org/
-
Fedora Server:
- Cutting-edge, community-supported distribution sponsored by Red Hat. Good for experiencing the latest Linux technologies. Shorter release cycle than LTS distros.
- Download: https://getfedora.org/server/download/
Recommendation for this guide:
We will primarily use Ubuntu Server LTS for many examples due to its popularity and ease of use. However, feel free to use any distribution you prefer. The general VM creation process in virt-manager
is the same.
Where to save ISO files:
Create a dedicated directory on your host machine to store your ISO files, for example, ~/ISOs
.
Other OS (e.g., pfSense, OpenWrt)
For specialized network functions, you'll use different OS images:
-
pfSense:
- An open-source firewall/router distribution based on FreeBSD. Excellent for creating a powerful virtual firewall.
- Download: https://www.pfsense.org/download/
- You'll typically download the "amd64 (64-bit)" version and the "CD Image (ISO) Installer".
-
OpenWrt:
- An open-source Linux distribution primarily used for embedded devices, including routers. It can also be run as a VM.
- Download: https://openwrt.org/downloads
- Downloading OpenWrt for x86/64 VMs can be a bit more involved. You'll look for "x86/64" targets and often download a pre-built image file (e.g.,
.img.gz
which you'll need to extract to.img
) rather than a traditional ISO installer. We will cover this specifically when setting up an OpenWrt router. For now, focus on a standard Linux ISO.
The VM Creation Wizard in virt-manager
virt-manager
provides a user-friendly wizard to guide you through the process of creating a new virtual machine.
Step-by-step VM Creation
- Launch virt-manager.
-
Start the New VM Wizard:
- Click the "Create a new virtual machine" icon (often a computer screen with a
+
) in the toolbar. - Or, go to
File
->New Virtual Machine
.
The wizard typically has 4 or 5 steps.
- Click the "Create a new virtual machine" icon (often a computer screen with a
Step 1: Choose how you would like to install the operating system.
- Local install media (ISO image or CDROM):
This is the most common method. You'll select an ISO file you've downloaded. - Network Install (HTTP, FTP, or NFS):
Install from a URL or network share. Requires a network boot setup (PXE) or accessible installer tree. - Network Boot (PXE):
Boot from the network using PXE. -
Import existing disk image:
If you already have a pre-installed virtual disk image (e.g.,.qcow2
,.vmdk
), you can use this option.For now, select "Local install media (ISO image or CDROM)" and click
Forward
.
Step 2: Locate installation media.
- Select "Use ISO image".
- Click the "Browse..." button.
- In the "Locate ISO media volume" window, if you created a dedicated storage pool for ISOs, you can select it. Otherwise, click "Browse Local".
- Navigate to the directory where you saved your ISO file (e.g.,
~/ISOs
) and select the desired ISO (e.g.,ubuntu-22.04.3-live-server-amd64.iso
). - "Automatically detect OS based on install media":
virt-manager
will try to guess the OS type and version from the ISO name.- If it detects correctly (e.g., "Ubuntu 22.04 LTS"), great.
- If it doesn't detect it or detects it incorrectly (e.g., "Generic Linux 2020"), uncheck the box and manually type in the OS type (start typing "Ubuntu" or "Debian" etc., and select from the list) or choose a generic option. This helps
libvirt
apply some default optimal settings.
- Click
Forward
.
Step 3: Choose Memory and CPU settings.
- Memory (RAM):
- Allocate RAM for your VM. This RAM will be reserved from your host's RAM while the VM is running.
- For a basic Linux server (like a DNS or small web server),
1024
MB (1 GB) or2048
MB (2 GB) is often sufficient to start. - For a Linux desktop VM or more demanding servers, you'll need more.
- Consider your host's total RAM.
Don't over-allocate. Leave enough for your host OS and other applications/VMs. - Let's start with
2048
MB for our first Ubuntu Server.
- CPUs:
- Assign the number of virtual CPUs (vCPUs) to the VM. These vCPUs will be scheduled onto your host's physical CPU cores/threads.
- For a basic server,
1
or2
vCPUs are often enough. - You generally shouldn't assign more vCPUs to a single VM than the number of physical cores (or threads, if hyperthreading is enabled) your host CPU has, though you can assign more vCPUs across all running VMs than physical cores (this is over-subscription).
- Let's start with
2
vCPUs.
- Click
Forward
.
Step 4: Configure storage.
- "Enable storage for this virtual machine" should be checked.
- "Create a disk image for the virtual machine":
- Size: Specify the virtual size of the hard disk. For an Ubuntu Server,
20
GB or25
GB is a good starting point for basic services. - This will create a new disk image file (e.g.,
.qcow2
).
-
"Select or create custom storage":
- This option lets you choose an existing disk image file or create one with more detailed options (like format and location in a specific storage pool).
- If you choose this, click "Manage...".
- Select a storage pool (e.g.,
default
ormy_custom_pool
). - Click the
+
(Add Volume) button. - Name: e.g.,
ubuntu-server-vm1.qcow2
- Format:
qcow2
(recommended). - Max Capacity:
25
GB. - Allocation: Leave at
0
GB for thin provisioning (or set to25
GB if you want to preallocate). - Click
Finish
. - Select the newly created volume and click "Choose Volume".
For simplicity in this first run, you can just use "Create a disk image..." and set the size to
25
GB.virt-manager
will typically create it in thedefault
storage pool as a qcow2 file. -
Click
Forward
.
Step 5: Ready to begin installation.
- Name:
Give your VM a descriptive name (e.g.,Ubuntu-Server-01
). This name is used bylibvirt
andvirt-manager
to identify the VM. It's not the hostname inside the VM (you'll set that during OS installation). - "Customize configuration before install":
Check this box!
This is very important as it allows you to review and fine-tune settings, especially the network configuration, before the VM starts. - Network selection:
You'll see a dropdown for network selection. It usually defaults to "Virtual network 'default': NAT". We will explore networking in detail later. For now, the default NAT network is fine for getting an internet connection during OS installation. - Click
Finish
.
CPU and Memory Allocation (Revisited in Customization)
If you checked "Customize configuration before install," a new window will appear showing the VM's hardware details.
- CPUs:
- You can verify the number of vCPUs.
- Advanced options (CPU model, topology):
You can specify a CPU model (e.g.,host-passthrough
to expose most of your host CPU features, or a generic model likeqemu64
).host-passthrough
orhost-model
is often good for performance if you don't plan to live-migrate the VM to a host with a different CPU. You can also define sockets, cores, and threads for the vCPU topology. For now, defaults are usually fine.
- Memory:
- You can adjust the "Current allocation" and "Maximum allocation" (for memory ballooning, an advanced feature where the guest can dynamically adjust its RAM usage). For now, keep current and maximum the same.
Disk Configuration (Revisited in Customization)
- Select "VirtIO Disk 1" (or similar, like "SATA Disk 1") from the left-hand list.
- Advanced options:
- Disk bus:
VirtIO
is generally the best performing bus for disks when the guest OS has VirtIO drivers (most modern Linux distros do). Other options includeSATA
,SCSI
,IDE
.VirtIO
is highly recommended for Linux guests. - Storage format:
Should showqcow2
if you chose that. - Cache mode:
default
ornone
are common.writeback
can be faster but carries a risk of data loss on host crash if not handled carefully (e.g., with fsync). For lab VMs,default
is fine. - IO mode:
threads
ornative
(AIO).threads
is often the default and works well.
- Disk bus:
Network Configuration (Initial Setup - Revisited in Customization)
- Select "NIC" (Network Interface Controller) from the left-hand list.
- Network source: This is critical.
Virtual network 'default': NAT
:
Connects the VM tolibvirt
's default NATed network. The VM will get an IP address fromlibvirt
's DHCP server on this network and can access the internet through the host, but it's not directly accessible from your physical LAN by default.Bridge device
:
Connects the VM directly to your physical network (e.g.,br0
if you have a bridge set up). The VM will get an IP from your LAN's DHCP server (e.g., your home router) and act like another physical machine on your network. We'll cover setting up bridges later.Specify shared device name
:
For direct connection to a host interface (less common for typical VMs).- For the first install,
Virtual network 'default': NAT
is usually the easiest.
- Device model:
virtio
is highly recommended for network interfaces for Linux guests due to its superior performance. Other options likee1000e
(Intel) orrtl8139
(Realtek) emulate older physical cards and can be used ifvirtio
drivers are not available in the guest initially (rare for modern Linux). - MAC address:
A unique MAC address is automatically generated. You can change it if needed, but usually, the auto-generated one is fine.
Graphics and Display Options
- Display SPICE:
- SPICE is a remote display protocol that provides a good desktop experience with features like copy-paste, audio, and USB redirection (if
spice-vdagent
is installed in the guest). This is generally recommended. - Listen type:
None
orAddress
(if you need to connect from a remote SPICE client).None
is fine for localvirt-manager
console. - Keymap: Set to your host's keyboard layout.
- SPICE is a remote display protocol that provides a good desktop experience with features like copy-paste, audio, and USB redirection (if
- Video Model:
VirtIO
: Provides good performance for 2D graphics in guests with VirtIO GPU drivers.QXL
: Often used with SPICE for good 2D performance.VGA
,Cirrus
: Basic, more compatible video models.- For a server installation (mostly command-line), the video model is less critical, but
QXL
orVirtIO
are good choices.
Once you've reviewed and adjusted these settings in the customization screen, click "Apply" if you made changes, and then click the "Begin Installation" button at the top left of this hardware configuration window.
Installing an Operating System in a VM
After you click "Begin Installation," a new window will open, displaying the VM's console. The VM will boot from the ISO image you selected.
- Boot Process:
You'll see the typical boot messages from the installer (e.g., GRUB menu for Ubuntu, then kernel loading). - OS Installer Prompts:
Follow the on-screen instructions provided by the operating system installer. This process is very similar to installing an OS on a physical machine.- Language, Keyboard Layout, Timezone: Select your preferences.
- Network Configuration:
- If using the
default
NAT network, the VM should automatically get an IP address via DHCP fromlibvirt
. The installer might show this. - If you need a static IP, some installers allow you to configure it here, or you can do it after installation.
- If using the
- Partitioning:
- You can choose guided/automatic partitioning or manual partitioning.
- For a first server VM, "Use entire disk" (or similar) with the default layout is usually fine. This will typically create a root filesystem and swap.
- Remember our discussion on filesystems (ext4, XFS) and swap.
- User Account Creation:
Create a user account and password. This will be your login for the VM. - Software Selection:
- Server installers often ask if you want to install specific software sets (e.g., OpenSSH server, standard system utilities).
- For Ubuntu Server, it's highly recommended to select "Install OpenSSH server". This allows you to connect to your VM via SSH from your host terminal, which is much more convenient than using the graphical console for command-line work.
- Installation Progress:
The installer will copy files and configure the system. This can take some time.
- Reboot:
Once the installation is complete, the installer will prompt you to reboot the VM.- Important:
Before rebooting,virt-manager
should ideally automatically "eject" the ISO image so the VM boots from its virtual hard disk instead of the installer again. If it doesn't, or if the VM boots back into the installer:- Force off the VM from
virt-manager
. - Go back to the VM's hardware details (select VM in main list, click "Open," then the "Show virtual hardware details" icon (wrench/screwdriver)).
- Select "Boot Options." Ensure "Hard Disk" is at the top of the boot order, or CDROM is unchecked/removed.
- Alternatively, in the "SATA CDROM 1" (or similar) device, ensure the source path is empty or points to no media.
- Then start the VM again.
- Ubuntu's installer usually prompts you to "remove the installation medium and press Enter." At this point, it should be safe to proceed.
Post-Installation Best Practices
Once your new VM has rebooted and you can log in with the user account you created:
Updating the System
The first thing you should always do after installing an OS is to update it to get the latest security patches and software versions. Open a terminal in the VM (either via the graphical console or SSH if you installed OpenSSH server).
- For Debian/Ubuntu:
- For Fedora/CentOS Stream/RHEL-likes:
- For Arch Linux: You might need to reboot again if a new kernel was installed.
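Typical update commands for these families (a sketch; exact options and package manager defaults may differ on your system):

# Debian/Ubuntu
sudo apt update && sudo apt upgrade -y

# Fedora / CentOS Stream / RHEL-likes
sudo dnf upgrade --refresh -y

# Arch Linux
sudo pacman -Syu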
Installing Guest Utilities (spice-vdagent)
If you are using SPICE for the graphical console (which virt-manager
often defaults to), installing guest utilities can greatly enhance the experience. spice-vdagent
enables features like:
- Automatic resizing of the VM's display resolution to match the console window size.
- Copy and paste of text between the host and the VM.
- Smoother mouse integration.
- USB redirection (though this requires more setup).
Installation:
- For Debian/Ubuntu:
- For Fedora/CentOS Stream/RHEL-likes:
- For Arch Linux: After installation, you might need to reboot the VM or at least log out and log back into the graphical session (if any) for the agent to become fully active. For server installs without a GUI, its primary benefit is often improved copy/paste and mouse in the virt-manager console.
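Typical install commands (a sketch; the package is named spice-vdagent in the default repositories of these distributions):

# Debian/Ubuntu
sudo apt install -y spice-vdagent

# Fedora / CentOS Stream / RHEL-likes
sudo dnf install -y spice-vdagent

# Arch Linux
sudo pacman -S spice-vdagent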
Other Guest Additions/Drivers:
- VirtIO Drivers:
Modern Linux distributions almost always include VirtIO drivers by default for disk, network, memory ballooning, GPU, etc. If you chose VirtIO devices during VM creation, these should be active. You can verify by checking kernel modules (lsmod | grep virtio
) or network interface names (oftenenpXsY
orethX
if usingvirtio-net
). - For Windows VMs, you would need to download and install VirtIO drivers for Windows separately (often provided as an ISO file that you attach to the VM's virtual CDROM).
Workshop Creating and Configuring Your First Linux VMs
Objective:
To create two Linux virtual machines: one Ubuntu Server and one lightweight distribution like Alpine Linux (optional, for variety and to see a very minimal install).
Prerequisites:
- KVM/QEMU and
virt-manager
installed and working. - Downloaded ISO images:
- Ubuntu Server LTS (e.g., 22.04 or newer). Save to
~/ISOs/
. - (Optional) Alpine Linux "Virtual" ISO (e.g.,
alpine-virt-3.19.0-x86_64.iso
). Alpine is very small and boots quickly. Save to~/ISOs/
.- Alpine Download: https://alpinelinux.org/downloads/ (Look for "VIRTUAL" under x86_64)
Part 1: Creating the Ubuntu Server VM
- Launch virt-manager.
- Start the New VM Wizard: Click "Create a new virtual machine."
- Step 1: Installation Method:
- Select "Local install media (ISO image or CDROM)."
- Click
Forward
.
- Step 2: Locate Media:
- Select "Use ISO image." Click "Browse...".
- Click "Browse Local," navigate to
~/ISOs/
, and select your Ubuntu Server ISO file. - Ensure OS type is correctly detected (e.g., "Ubuntu 22.04") or uncheck auto-detect and choose it manually.
- Click
Forward
.
- Step 3: Memory and CPU:
- Memory (RAM):
2048
MB. - CPUs:
2
. - Click
Forward
.
- Step 4: Storage:
- Select "Create a disk image for the virtual machine."
- Size:
25
GB. - Click
Forward
.
- Step 5: Final Configuration:
- Name:
ubuntu-server-01
- Network selection: Ensure it's
Virtual network 'default': NAT
. - Check the box: "Customize configuration before install."
- Click
Finish
.
- Customize Configuration (Hardware Details Window):
- Overview: Briefly check the summary.
- CPUs: Defaults are likely fine.
- Memory: Verify
2048
MB. - Disks (VirtIO Disk 1 or SATA Disk 1):
- Expand "Advanced options."
- Disk bus: Change to
VirtIO
if it's not already selected. - Storage format: Should be
qcow2
.
- NICs (Network Interface):
- Device model: Change to
virtio
if it's not already.
- Boot Options: (Important for later)
- Ensure "Hard Disk" is listed and enabled.
- Ensure "CDROM" is listed and enabled for the initial boot. You can adjust boot order here if needed later.
- Video:
QXL
orVirtIO
is fine. - Display SPICE: Default settings are usually OK.
- Click "Begin Installation" in the top-left of this hardware configuration window.
- Install Ubuntu Server:
- A console window for
ubuntu-server-01
will open. - Follow the Ubuntu Server installer prompts:
- Language: English (or your preference).
- Keyboard Layout: Your preference.
- Network: It should detect an IP via DHCP (e.g.,
192.168.122.x
). Proceed. - Proxy: Leave blank unless you need one.
- Mirror: Default is fine.
- Storage Layout: Choose "Use an entire disk." Select the
25GB VirtIO Block Device
. Choose the default partitioning scheme. Confirm. - Profile setup:
- Your name: e.g.,
Student User
- Your server's name (hostname):
srv01
- Username: e.g.,
student
- Password: Choose a strong password (e.g.,
Password123!
) and confirm.
- SSH Setup: Select "Install OpenSSH server."
- Featured Server Snaps: Skip for now (don't select any).
- The installation will proceed. Wait for it to complete.
- When it says "Installation complete!", select "Reboot Now".
- The installer might say "Please remove the installation medium, then press ENTER." Just press Enter.
virt-manager
usually handles detaching the ISO.
- First Boot and Login:
- The VM will reboot. You should see login prompts.
- Login as
student
with the password you set.
- Post-Installation Tasks (inside
ubuntu-server-01
VM):- Update the system:
- Install
spice-vdagent
(for better console interaction): - Install
net-tools
(forifconfig
, optional but some find it useful): - Check IP address:
Note the IP address (likely in the
192.168.122.0/24
range). - Shutdown the VM:
- The VM console window will close, and
virt-manager
will showubuntu-server-01
as "Shutoff".
Part 2: (Optional) Creating an Alpine Linux VM (Quick and Minimal)
Alpine Linux is very lightweight and installs quickly, making it great for network appliances or simple test nodes.
- Start the New VM Wizard in virt-manager.
- Step 1: Installation Method: "Local install media (ISO image or CDROM)."
- Step 2: Locate Media:
- Browse to and select your Alpine Linux "Virtual" ISO (e.g.,
alpine-virt-x.y.z-x86_64.iso
). - OS Type: If not detected, type "Alpine" and select "Alpine Linux". If not listed, "Generic Linux 2020" or a recent generic Linux will work.
- Step 3: Memory and CPU:
- Memory (RAM):
512
MB (Alpine is very efficient). - CPUs:
1
.
- Step 4: Storage:
- "Create a disk image..."
- Size:
5
GB (Alpine needs very little space).
- Step 5: Final Configuration:
- Name:
alpine-01
- Network:
Virtual network 'default': NAT
. - Check "Customize configuration before install."
- Click
Finish
.
- Customize Configuration:
- Disk: Ensure bus is
VirtIO
. - NIC: Ensure model is
virtio
. - Click "Begin Installation."
- Install Alpine Linux:
- The console will show a boot prompt. Login as
root
(no password initially). - Run the installer script:
setup-alpine
- Follow the prompts:
- Keyboard layout: e.g.,
us
, thenus
. - Hostname:
alpine01
- Initialize network interface:
eth0
(default). - IP address for eth0:
dhcp
. - DHCP client:
udhcpc
(default). - Manual network config?
no
. - Password for root: Set a password (e.g.,
Password123!
). - Timezone: Your timezone (e.g.,
UTC
orEurope/London
). - Proxy:
none
. - NTP client:
chrony
(default). - Mirror: Choose a mirror (e.g.,
1
for the first one, orf
to find fastest). - Setup user: You can skip (
no
) or create one (e.g.,student
). - SSH Server:
openssh
(default). - Which disk(s) to use:
vda
(the VirtIO disk). - How would you like to use it:
sys
(install for "Traditional system"). - Erase disk and continue?
y
.
- Installation is very quick.
- Once done, reboot:
reboot
- Post-Installation (inside
alpine-01
VM):- Login as
root
or the user you created. - Alpine uses
apk
for package management. - Update:
apk update && apk upgrade
- Check IP:
ip addr show
- (Optional) Install
spice-vdagent
if needed (it's available in Alpine repos:apk add spice-vdagent
). - Shutdown:
poweroff
Workshop Summary:
You have successfully created one (or two) Linux virtual machines!
- You learned to use the
virt-manager
wizard. - You practiced customizing VM hardware settings like disk bus (VirtIO) and network adapter model (virtio).
- You installed Ubuntu Server and (optionally) Alpine Linux.
- You performed essential post-installation tasks like system updates and guest agent installation.
- You can now start and stop your VMs from
virt-manager
.
These VMs will serve as the building blocks for your simulated network. In the next sections, we'll dive into virtual networking to connect them in meaningful ways.
4. Core Networking Concepts in a Virtualized Environment
Understanding how networking operates within a KVM/QEMU environment is fundamental to building any simulated infrastructure. libvirt
, the management API used by virt-manager
, provides several ways to configure virtual networks and connect your VMs. This section explores these virtual network types, the underlying libvirt
networking components, and essential IP addressing concepts.
Virtual Network Types in KVM/QEMU
libvirt
defines several modes for connecting virtual machines to a network. The most common ones you'll encounter and use are:
Default NAT Network
- How it works:
This is often the default configuration when you installlibvirt
. A virtual network switch (bridge) is created on the host (commonly namedvirbr0
). VMs connected to this network are on a private IP subnet (e.g.,192.168.122.0/24
).libvirt
runs a DHCP server on this private network to assign IP addresses to VMs. The host machine performs Network Address Translation (NAT) for traffic from these VMs destined for external networks (like the internet). - Connectivity:
- VMs can connect to each other if they are on the same NAT network.
- VMs can connect to the host machine (using the host's IP on the
virbr0
interface, e.g.,192.168.122.1
). - VMs can access external networks (e.g., the internet) via NAT through the host.
- By default, external machines (on your physical LAN) cannot directly initiate connections to VMs on the NAT network. Port forwarding can be configured on the host to allow this.
- Pros:
- Simple to set up (usually works out-of-the-box).
- Isolates VMs from the physical LAN to some extent, which can be good for security or avoiding IP conflicts.
- Doesn't require any special configuration on your physical network infrastructure.
- Cons:
- VMs are not directly accessible from the external LAN without port forwarding, which can be cumbersome for server VMs.
- The "double NAT" (if your host is already behind a NAT router) can sometimes complicate certain protocols.
- Typical Use Cases:
Quick internet access for VMs, simple testing environments where direct external access to VMs isn't critical.
Bridged Networking (Sharing Host's Physical Interface)
- How it works:
A software bridge (e.g.,br0
) is created on the host machine. The host's physical network interface (e.g.,eth0
orenp3s0
) is added to this bridge, and the bridge itself gets the IP address previously assigned to the physical interface (or gets an IP via DHCP from the LAN). Virtual machines are then connected directly to this bridge. - Connectivity:
- VMs appear as independent devices on the physical LAN, just like any other physical computer.
- They obtain IP addresses from the DHCP server on your physical LAN (e.g., your home router) or can be assigned static IPs from your LAN's subnet.
- VMs can freely communicate with other devices on the physical LAN and vice-versa, subject to firewall rules.
- VMs can access the internet directly through your physical LAN's gateway.
- Pros:
- Full network visibility and accessibility for VMs from the physical LAN. Ideal for running server services in VMs that need to be accessible.
- Simpler network model from the VM's perspective (it's just another device on the LAN).
- Cons:
- Requires more setup on the host (creating the bridge, reconfiguring the physical interface). This can sometimes temporarily interrupt host network connectivity if not done correctly.
- Each VM consumes an IP address from your physical LAN's IP pool.
- Less isolation from the physical LAN, which might be a security concern in some scenarios.
- Bridging over Wi-Fi can be problematic or unsupported on some systems/drivers, as Wi-Fi interfaces often operate in a mode that doesn't allow transparent bridging of MAC addresses. Wired Ethernet is much more reliable for bridging.
- Typical Use Cases:
Running server VMs (web servers, file servers, etc.) that need to be accessible from other computers on your network, simulating physical network setups more closely.
Isolated Host-Only Network
- How it works:
Similar to the NAT network, a virtual switch (bridge) is created on the host, and VMs connect to it. However, there is no NAT or forwarding to external networks.libvirt
can still provide DHCP on this isolated network. - Connectivity:
- VMs can communicate with each other if they are on the same isolated network.
- VMs can communicate with the host machine (via its IP on the isolated network's bridge).
- VMs cannot access external networks (like the internet) or other devices on the physical LAN directly.
- Pros:
- Provides a completely isolated environment for VMs to communicate only among themselves and the host.
- Useful for security testing, malware analysis, or creating private lab networks that should not interact with the outside world.
- Cons:
No external connectivity unless you specifically configure a VM on this network to also connect to another network (e.g., NAT or bridged) and act as a router/gateway. - Typical Use Cases:
Creating secure, sandboxed environments; building private backend networks for multi-tiered applications where only specific frontend VMs have external access.
Routed Network (Advanced)
- How it works:
Instead of NAT or bridging,libvirt
can be configured to route traffic between a virtual network and the host's network (or another network). This requires configuring static routes on your host and potentially on your main physical router. The VMs are on a separate IP subnet, and the host acts as a router for that subnet. - Connectivity:
Similar to NAT in that VMs are on a separate subnet, but without the address translation aspect. Other devices on your LAN would need a route pointing to your KVM host for the VM subnet to be reachable. - Pros:
Avoids NAT, giving VMs "real" (though private) IPs that can be routed. More control over traffic flow. - Cons:
More complex to set up, requiring manual route configuration on the host and potentially other network devices. - Typical Use Cases:
More complex lab scenarios where NAT is undesirable, and you need to simulate routed environments more accurately.
For most of our learning and SMB simulation, we will extensively use the Default NAT Network for initial setup and outbound internet access, and Bridged Networking for making our servers accessible. We will also create custom isolated networks to build different network segments (like LANs and DMZs) interconnected by a virtual router/firewall (e.g., pfSense).
Understanding libvirt Networking
libvirt
manages the virtual networking components. Key elements include:
Virtual Network Switches (Bridges)
- When you create a virtual network (like the
default
NAT network or an isolated network),libvirt
often creates a Linux bridge device on the host (e.g.,virbr0
,virbr1
). - You can think of this as a virtual Layer 2 switch.
- VMs' virtual network interfaces (vNICs) are "plugged into" this virtual switch.
- For bridged networking, you create a bridge (e.g.,
br0
) and enslave your physical NIC (e.g.,eth0
) to it. VMs then connect their vNICs tobr0
.
You can inspect bridge devices on your host using the brctl
command (from the bridge-utils
package) or ip link show type bridge
.
Example:
brctl show
# Output might look like:
# bridge name bridge id STP enabled interfaces
# br0 8000.001122334455 no eth0
# vnet0
# vnet1
# virbr0 8000.525400aabbcc yes virbr0-nic # (This is for NAT, not a VM interface directly)
# vnet2 # A VM connected to default NAT
vnet0
, vnet1
, vnet2
are the host-side ends of the virtual network interfaces connected to the VMs.
Virtual Network Interfaces (vNICs)
- Each VM that needs network connectivity is given one or more virtual network interface controllers (vNICs).
- When a VM is running, a corresponding
tap
interface (e.g.,vnet0
,vnet1
) is created on the host. Thistap
interface is the host-side connection point for the VM's vNIC. - This
tap
interface is then "plugged into" one of the virtual switches (bridges) likevirbr0
(for NAT/isolated) orbr0
(for bridged). - The guest OS inside the VM sees its vNIC as a regular network card (e.g.,
eth0
,ens3
). If you chosevirtio
as the device model, it will be a VirtIO network device.
MAC Addresses in VMs
- Each vNIC assigned to a VM has its own unique MAC (Media Access Control) address.
libvirt
automatically generates a MAC address for new vNICs, usually starting with52:54:00:...
. This prefix is reserved for QEMU/KVM.- It's crucial that MAC addresses are unique within the same Layer 2 network segment to avoid conflicts.
libvirt
generally handles this well. - You can manually set a MAC address for a vNIC in
virt-manager
if needed, but usually, the auto-generated one is fine.
IP Addressing and Subnetting Primer
A solid understanding of IP addressing is vital for network configuration.
IPv4 Addressing
- An IPv4 address is a 32-bit number, typically written in dotted-decimal notation (e.g.,
192.168.1.100
). - Each 32-bit address uniquely identifies a device (host or network interface) on an IP network.
- It's divided into two parts:
- Network ID: Identifies the network to which the device belongs. All devices on the same logical network share the same network ID.
- Host ID: Identifies a specific device (host) within that network.
Subnet Masks
- A subnet mask is also a 32-bit number used to distinguish the Network ID from the Host ID in an IP address.
- It's written in dotted-decimal notation (e.g.,
255.255.255.0
). - In binary, the subnet mask has a series of '1's followed by a series of '0's. The '1's correspond to the Network ID portion of the IP address, and the '0's correspond to the Host ID portion.
- Example:
192.168.1.100
with subnet mask255.255.255.0
192.168.1.100
->11000000.10101000.00000001.01100100
255.255.255.0
->11111111.11111111.11111111.00000000
- Network ID:
192.168.1.0
(first 24 bits) - Host ID:
100
(last 8 bits)
- Example:
Private vs Public IP Ranges
- Public IP Addresses: Globally unique and routable on the internet. Assigned by Internet Assigned Numbers Authority (IANA) and Regional Internet Registries (RIRs).
- Private IP Addresses: Reserved for use within private networks (like your home LAN or internal corporate networks). These are not routable on the public internet. Routers performing NAT are needed to allow devices with private IPs to access the internet.
- Reserved Private Ranges (RFC 1918):
10.0.0.0
to10.255.255.255
(10.0.0.0/8)172.16.0.0
to172.31.255.255
(172.16.0.0/12)192.168.0.0
to192.168.255.255
(192.168.0.0/16)
- The
libvirt
default NAT network (192.168.122.0/24
) falls within these private ranges.
- Reserved Private Ranges (RFC 1918):
CIDR Notation
- Classless Inter-Domain Routing (CIDR) notation is a more compact way to represent an IP address and its associated subnet mask/network prefix.
- It's written as the IP address followed by a slash (
/
) and the number of leading '1' bits in the subnet mask (the prefix length).192.168.1.0/24
is equivalent to192.168.1.0
with subnet mask255.255.255.0
(24 leading '1's).10.0.0.0/8
is equivalent to10.0.0.0
with subnet mask255.0.0.0
(8 leading '1's).192.168.122.0/24
:- Network Address:
192.168.122.0
- First Usable Host IP:
192.168.122.1
(often the gateway) - Last Usable Host IP:
192.168.122.254
- Broadcast Address:
192.168.122.255
- Total addresses: 2^(32-24) = 2^8 = 256
- Usable host addresses: 256 - 2 = 254 (network and broadcast addresses are not assignable to hosts)
- Network Address:
You will use CIDR notation extensively when defining virtual networks and planning your IP address schemes.
Basic Network Troubleshooting Tools
When building your virtual network, you'll inevitably need to troubleshoot connectivity issues. These command-line tools are essential (run them inside your VMs or on the host as appropriate):
-
ping [destination_ip_or_hostname]
:- Sends ICMP Echo Request packets to a target host and waits for Echo Replies.
- Used to test basic Layer 3 reachability and measure round-trip time and packet loss.
- Example:
ping 192.168.122.1
(ping the gateway from a VM on the default NAT network) orping google.com
(test internet connectivity).
-
ip addr show
(orifconfig
on older systems, or if
is installed):- Displays information about all network interfaces on the system, including their IP addresses, MAC addresses, and state (UP/DOWN).
- Crucial for verifying IP configuration.
- Example:
ip addr show eth0
(show details for interfaceeth0
).
-
traceroute [destination_ip_or_hostname]
(Linux) ortracert
(Windows):- Shows the path (sequence of routers) that packets take to reach a destination.
- Useful for identifying where connectivity is breaking down in a routed network.
- Example:
traceroute google.com
.
-
netstat -tulnp
(Linux) orss -tulnp
(Linux, more modern):- Shows network connections, listening sockets, and the programs using them.
-t
: TCP sockets-u
: UDP sockets-l
: Listening sockets-n
: Show numerical addresses and port numbers (don't resolve names)-p
: Show the PID and name of the program to which each socket belongs (requires sudo/root).- Useful for checking if a server application (e.g., web server on port 80, SSH on port 22) is actually listening for connections.
- Example:
sudo ss -tulnp | grep :80
(check if anything is listening on port 80).
-
dig [hostname]
ornslookup [hostname]
:- Used to query DNS servers and troubleshoot DNS resolution issues.
- Example:
dig www.example.com
(shows the A record and other DNS info forwww.example.com
).
-
ip route show
(orroute -n
):- Displays the kernel IP routing table.
- Essential for understanding how the system decides where to send packets for different destinations.
- Example:
ip route show
. Look for thedefault
route (gateway).
Mastering these tools will make diagnosing problems in your virtual network much easier.
Workshop Configuring Different Virtual Network Types
Objective:
To explore libvirt
's default NAT network, create a new isolated network, and set up a bridged network to connect a VM directly to your physical LAN.
Prerequisites:
- KVM/QEMU,
libvirt
, andvirt-manager
installed and working. - At least one VM created (e.g.,
ubuntu-server-01
from the previous workshop). It should be shut down for some operations. - Sudo privileges on the host.
- For Bridged Networking: A wired Ethernet connection on your host is highly recommended. Bridging over Wi-Fi is often problematic and beyond the scope of this basic workshop. If you only have Wi-Fi, you can still do the NAT and isolated network parts.
Part 1: Exploring the Default NAT Network (default
)
- Inspect the
default
network in virt-manager:- Open
virt-manager
. - Go to
Edit
->Connection Details
. - Select the
Virtual Networks
tab. - Select the
default
network. - Observe:
Network
: e.g.,192.168.122.0/24
. This is the IP range.Gateway
: e.g.,192.168.122.1
. This is the host's IP on this virtual network.DHCP Range
: Start and end IPs for DHCP allocation.Forwarding
: Should showNAT
.Device
: e.g.,virbr0
. This is the bridge interface on the host.
- Open
- Inspect
virbr0
on the host:- Open a terminal on your KVM host.
- Run:
ip addr show virbr0
- Note the IP address matches the gateway seen in
virt-manager
.
- Connect a VM and Verify:
- Ensure your
ubuntu-server-01
VM's NIC is configured to use thedefault
NAT network.- Select
ubuntu-server-01
invirt-manager
. - Click "Open," then the "Show virtual hardware details" icon.
- Select the NIC. "Network source" should be
Virtual network 'default': NAT
.
- Select
- Start the
ubuntu-server-01
VM. - Login to the VM.
- Check its IP address:
ip addr show
(look for the interface likeeth0
orenp1s0
). It should have an IP from the192.168.122.x
range (e.g.,192.168.122.123/24
). - Test connectivity:
- Ping the gateway:
ping 192.168.122.1
- Ping an external site:
ping 8.8.8.8
(Google's DNS) orping google.com
- Ping the gateway:
- Shutdown the VM:
sudo shutdown now
.
Part 2: Creating a New Isolated Network
- In virt-manager:
- Go to
Edit
->Connection Details
->Virtual Networks
tab. - Click the
+
button at the bottom left to add a new network.
- Go to
- Name the Network:
isolated-net
- Click
Forward
.
- Configure IP Address Space:
- Check "Enable IPv4 network address space definition."
- Network:
10.10.10.0/24
(You can choose any private range not already in use). - Enable DHCPv4: Check this box.
- Start:
10.10.10.100
- End:
10.10.10.200
- Click
Forward
.
- Configure IPv6 (Optional):
- You can leave IPv6 disabled or configure it if you wish. For now, "Disable IPv6" is fine.
- Click
Forward
.
- Ready to Complete:
- Network type: Select
Isolated network
(this means no forwarding/NAT). - DNS Domain name (optional): You can leave this blank or e.g.,
isolated.lab
- Click
Finish
. - The new network
isolated-net
should appear. It will likely be associated with a new bridge, e.g.,virbr1
.
- Network type: Select
- Verify
isolated-net
on the host:- In the host terminal:
ip addr show
(look for a newvirbrX
interface with an IP like10.10.10.1/24
). - Check
libvirt
's XML definition:virsh net-dumpxml isolated-net
- In the host terminal:
- Connect a VM to
isolated-net
:- Select
ubuntu-server-01
(ensure it's off). - Open its hardware details.
- Select the NIC. Change "Network source" to
Virtual network 'isolated-net': Isolated
. - Click
Apply
. - Start the VM.
- Login and check its IP:
ip addr show
. It should have an IP from10.10.10.x
. - Test connectivity:
- Ping the gateway of this isolated network (e.g.,
10.10.10.1
if that's whatvirbr1
got):ping 10.10.10.1
- Try to ping an external site:
ping 8.8.8.8
. This should fail because it's an isolated network.
- Ping the gateway of this isolated network (e.g.,
- Shutdown the VM.
- Change the VM's network back to
default
NAT for now.
Part 3: Setting Up Bridged Networking (Wired Ethernet Host)
Caution:
This part involves modifying your host's network configuration. If done incorrectly, your host might lose network connectivity. Proceed carefully. Using nmcli
(NetworkManager command-line tool) or nmtui
(NetworkManager text UI) is often safer on systems managed by NetworkManager (most modern desktops). The example below uses nmcli
. If your system uses ifupdown
or systemd-networkd
, the commands will differ.
Assumptions:
- Your host uses NetworkManager.
- Your wired Ethernet interface is
eth0
(replace with your actual interface name, find withip addr
). -
Your LAN uses DHCP.
-
Identify your active wired connection and physical interface:
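Commands along these lines (a sketch; connection and device names will differ on your host) show what NetworkManager currently manages:

nmcli device status       # devices and the connection profile managing each one
nmcli connection show     # connection profiles (note the wired one, e.g. "Wired connection 1")
ip -br addr show          # interface names (e.g. eth0 / enp3s0) and current IPs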
-
Create a bridge interface using
nmcli
:
# Create the bridge interface
sudo nmcli connection add type bridge con-name br0 ifname br0

# Configure the bridge to get IP via DHCP (like your physical NIC did)
sudo nmcli connection modify br0 bridge.stp no       # Spanning Tree Protocol, usually no for simple bridges
sudo nmcli connection modify br0 ipv4.method auto    # For DHCP
# If you used a static IP on eth0, you'd configure it on br0 instead:
# sudo nmcli connection modify br0 ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns "8.8.8.8,1.1.1.1" ipv4.method manual

# Enslave your physical interface (e.g., eth0) to the bridge.
# First, delete or modify the existing connection for eth0 if it's managing eth0 directly.
# Let's assume your current active wired connection is named "Wired connection 1" and uses device eth0.
# It's safer to add a new slave connection for eth0 to the bridge.
sudo nmcli connection add type bridge-slave con-name br0-slave-eth0 ifname eth0 master br0

# Activate the bridge and the slave connection.
# It might be best to DEACTIVATE the old connection for eth0 first.
# Find the name of the old connection: nmcli con show
# Suppose it's "Wired connection 1"
# sudo nmcli connection down "Wired connection 1"   # This might disconnect you briefly if on SSH
sudo nmcli connection up br0
sudo nmcli connection up br0-slave-eth0

# If the old connection for "eth0" is still active and causing issues, you might need to delete it
# (e.g., sudo nmcli connection delete "Wired connection 1")
# or modify it to not auto-connect / set its device to something else temporarily.
# This is the trickiest part and depends on your current NetworkManager setup.
# A reboot after these nmcli commands often helps solidify the new configuration.
Alternative using
nmtui
(often easier):- Select "Edit a connection."
- Add a new connection: Type "Bridge", Create.
- Profile name:
br0
- Device:
br0
- Add a slave to this bridge: Click "Add" next to "Slaves", Type "Ethernet".
- Profile name:
br0-slave-eth0
- Device: Select your physical Ethernet interface (e.g.,
eth0
).
- Configure IPv4 settings for the
br0
profile (e.g., to "Automatic" for DHCP, or Manual with your static IP details). - Go back, select your old Ethernet connection profile (the one that was directly using
eth0
). Edit it.- Under "General", uncheck "Automatically connect".
- Optionally, under IPv4, set Method to "Disabled" or "Link-Local Only". This prevents it from conflicting with
br0
.
- Activate
br0
.nmtui
might do this automatically, or you may need to go to "Activate a connection" and activatebr0
and deactivate the old one. - Quit
nmtui
.
-
Verify the bridge on the host:
ip addr show br0
(should have your LAN IP).ip addr show eth0
(should NOT have an IP, but be UP and MASTERbr0
).brctl show
(should showeth0
as an interface ofbr0
).- Test host internet connectivity:
ping google.com
.
-
Connect a VM to the bridge:
- In
virt-manager
, selectubuntu-server-01
(ensure it's off). - Open its hardware details.
- Select the NIC.
- Network source: Choose
Bridge device
. - Device name: Type
br0
(or the name of your bridge).
- Network source: Choose
- Click
Apply
. - Start the
ubuntu-server-01
VM.
- In
- Verify VM connectivity on the bridge:
- Login to the VM.
- Check its IP address:
ip addr show
. It should now have an IP address from your physical LAN's DHCP server (e.g.,192.168.1.x
, different from the192.168.122.x
NAT range). - Test connectivity:
- Ping your physical LAN gateway (e.g.,
192.168.1.1
). - Ping another physical machine on your LAN.
- Ping an external site:
ping google.com
. - From another physical machine on your LAN, try to ping the VM's new LAN IP address. This should work.
- Ping your physical LAN gateway (e.g.,
- Shutdown the VM.
Cleanup (If you want to revert bridged networking):
Reverting bridge setup with NetworkManager:
sudo nmcli connection down br0
sudo nmcli connection down br0-slave-eth0
sudo nmcli connection delete br0
sudo nmcli connection delete br0-slave-eth0
# Then reactivate your original Ethernet connection, e.g.:
# sudo nmcli connection up "Wired connection 1"
# Or simply reboot, and NetworkManager should try to bring up the original configuration
# if "Automatically connect" was still checked for it.
# Using nmtui: delete the br0 and br0-slave-eth0 profiles, and ensure your original eth0 profile is set to "Automatic" for IPv4 and "Automatically connect".
Workshop Summary:
You have now:
- Explored the default NAT network and its properties.
- Created a new isolated virtual network and tested its isolation.
- (If you have wired Ethernet) Configured a bridged network on your host, allowing a VM to connect directly to your physical LAN.
- Practiced changing a VM's network attachment in
virt-manager
.
This hands-on experience is crucial for understanding how libvirt
handles different networking scenarios. You are now better equipped to design network topologies for your simulations. Remember to set your ubuntu-server-01
VM back to the default
NAT network if you are not keeping the bridged setup active, or if the next workshops assume NAT. For now, leaving it on bridged mode is fine if it works and you want it accessible from your LAN.
5. Building a Basic Network Topology
Now that you understand the different virtual network types and how to create VMs, let's start building a slightly more complex network. We'll design a simple topology with multiple network segments (simulating a LAN and perhaps a DMZ precursor) and connect VMs to these segments. This will involve creating custom virtual networks and configuring static IP addresses within your VMs.
Designing a Simple Network
A network topology defines how different network devices and segments are interconnected. For our basic setup, let's envision:
-
Management/NAT Network:
- This will be the
default
NAT network (192.168.122.0/24
) provided bylibvirt
. - It will provide internet access to VMs that need it during setup or for specific roles.
- We might place a "management" interface of some VMs on this network.
- This will be the
-
Internal LAN Segment (LAN1):
- A custom isolated virtual network.
- IP Subnet:
10.0.1.0/24
- Purpose: For internal servers and clients that should communicate with each other but not directly with the internet (unless routed through a firewall, which we'll add later).
-
DMZ-like Segment (DMZ1):
- Another custom isolated virtual network.
- IP Subnet:
10.0.2.0/24
- Purpose: For servers that might eventually be exposed to an "external" network (simulated) through a firewall.
VMs in this Topology (Initial Plan):
ubuntu-server-01
(from previous workshop):- We might give it two interfaces:
eth0
ondefault
NAT (for easy updates, SSH from host). IP: DHCP (192.168.122.x
).eth1
onLAN1
(10.0.1.0/24
). Static IP:10.0.1.10
.
- We might give it two interfaces:
client-vm-01
(New VM - e.g., a lightweight Linux desktop or another server):- One interface:
eth0
onLAN1
(10.0.1.0/24
). Static IP:10.0.1.50
.
- One interface:
dmz-server-01
(New VM - e.g., another Ubuntu Server):- One interface:
eth0
onDMZ1
(10.0.2.0/24
). Static IP:10.0.2.20
.
- One interface:
Later, we'll introduce a pfSense VM to act as a router/firewall between these segments and the internet. For now, LAN1
and DMZ1
will be isolated from each other and from the internet.
IP Addressing Plan
A clear IP addressing plan is crucial.
| Network Name | libvirt Network | Subnet | Gateway (on libvirt bridge) | DHCP Range (for libvirt) | VM Static IPs |
|---|---|---|---|---|---|
| Default NAT | default | 192.168.122.0/24 | 192.168.122.1 | 192.168.122.100-254 (example) | ubuntu-server-01 (eth0): DHCP |
| Internal LAN 1 | lan1-net | 10.0.1.0/24 | 10.0.1.1 (host's IP on virbrX) | 10.0.1.100-200 (optional) | ubuntu-server-01 (eth1): 10.0.1.10/24; client-vm-01 (eth0): 10.0.1.50/24 |
| DMZ 1 | dmz1-net | 10.0.2.0/24 | 10.0.2.1 (host's IP on virbrY) | 10.0.2.100-200 (optional) | dmz-server-01 (eth0): 10.0.2.20/24 |
Notes on Gateways for lan1-net
and dmz1-net
:
- When
libvirt
creates an isolated network with DHCP enabled, it typically assigns the.1
address of that subnet to its virtual bridge interface (e.g.,virbr1
gets10.0.1.1
). This.1
address acts as the DHCP server and DNS forwarder (if enabled) for that isolated segment. - When we configure static IPs on VMs within these isolated networks, they won't have a gateway to the internet through this interface directly. Their "gateway" for this segment would be the
.1
address if they needed to resolve DNS vialibvirt
's DNSmasq on that segment or reach the host on that segment. True inter-segment routing will come with the pfSense VM. - For static IP configuration within VMs on
lan1-net
anddmz1-net
, we will initially omit a default gateway setting for these interfaces, or set it to the segment's.1
address if we want them to uselibvirt
's DNS on that segment. The actual default route for internet access will be via the pfSense VM later.
Creating Custom Virtual Networks in virt-manager
We'll create lan1-net
and dmz1-net
as isolated networks. libvirt
can provide DHCP on these, but since we plan to use static IPs for our servers, we can either disable DHCP or just configure static IPs outside the DHCP range. For simplicity, we can leave DHCP enabled for now; it won't interfere with correctly configured static IPs.
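If you prefer the command line over the virt-manager wizard, an equivalent isolated network can be defined with virsh; a minimal sketch for lan1-net (same subnet, DHCP range, and domain as in the steps below; the bridge name is an assumption, libvirt can also choose one itself):

cat > /tmp/lan1-net.xml <<'EOF'
<network>
  <name>lan1-net</name>
  <bridge name="virbr10"/>
  <domain name="lan1.lab"/>
  <ip address="10.0.1.1" netmask="255.255.255.0">
    <dhcp>
      <range start="10.0.1.100" end="10.0.1.200"/>
    </dhcp>
  </ip>
</network>
EOF
sudo virsh net-define /tmp/lan1-net.xml    # no <forward> element means an isolated network
sudo virsh net-start lan1-net
sudo virsh net-autostart lan1-net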
Steps to Create lan1-net
:
- Open
virt-manager
. - Go to
Edit
->Connection Details
->Virtual Networks
tab. - Click the
+
button (Add Network). - Name:
lan1-net
- Click
Forward
. - IPv4 Network Address Space:
- Check "Enable IPv4 network address space definition."
- Network:
10.0.1.0/24
- Enable DHCPv4: Check or uncheck. If checked, set a range like:
- Start:
10.0.1.100
- End:
10.0.1.200
(This keeps.1
to.99
free for static assignments or the gateway).
- Click
Forward
. - IPv6: Disable or configure as needed.
Disable IPv6
is fine. - Click
Forward
. - Ready to Complete:
- Network type:
Isolated network
(ensure "Forwarding to physical network" is NOT selected, and if there's a NAT option, it's also not selected). - DNS Domain name (optional):
lan1.lab
- Network type:
- Click
Finish
.
Steps to Create dmz1-net
:
Repeat the process above with these details:
- Name:
dmz1-net
- IPv4 Network:
10.0.2.0/24
- DHCPv4 Range (if enabled): Start:
10.0.2.100
, End:10.0.2.200
- Network type:
Isolated network
- DNS Domain name (optional):
dmz1.lab
After creation, you should see lan1-net
and dmz1-net
in your list of virtual networks. They will be associated with new bridge interfaces on the host (e.g., virbr1
, virbr2
). You can verify their IP addresses (e.g., 10.0.1.1
, 10.0.2.1
) using ip addr show <bridge_name>
on the host.
Connecting VMs to Different Networks
Now we need to add/modify network interfaces on our VMs to connect them to these new networks.
Modifying ubuntu-server-01
:
This VM already exists and should currently have one NIC connected to the default
NAT network. We need to add a second NIC for lan1-net
.
- Ensure
ubuntu-server-01
is Shutoff. - Select
ubuntu-server-01
invirt-manager
, click "Open," then click the "Show virtual hardware details" icon. - Click "Add Hardware" (bottom left).
- Select "Network" from the left pane.
- Host device / Network source: Select
Virtual network 'lan1-net': Isolated network
. - Device model: Choose
virtio
. - Click
Finish
. - You should now see two NICs listed for
ubuntu-server-01
. The first one is likely ondefault
, the new one onlan1-net
.
Creating client-vm-01
:
This will be a new, simple VM (e.g., another Ubuntu Server, or a very minimal Linux like Alpine, or even a graphical Linux desktop if your host has resources). For this example, let's use another Ubuntu Server.
- Create a new VM named
client-vm-01
. - ISO: Use your Ubuntu Server ISO.
- Memory:
1024
MB (or more if you prefer). - CPUs:
1
(or2
). - Disk:
15
GB (qcow2). - Name:
client-vm-01
. - Network Selection (in the last step of wizard): Choose
Virtual network 'lan1-net': Isolated network
. - Check "Customize configuration before install." Click
Finish
. - In Customization:
- Verify NIC is on
lan1-net
and model isvirtio
. - Verify Disk bus is
VirtIO
.
- Click "Begin Installation."
- Install Ubuntu Server on
client-vm-01
.- Hostname:
client01
- User:
student
, Password:Password123!
- No need to install OpenSSH server for this client if you don't want to, but it can be useful.
- During network configuration, it might get an IP via DHCP from
lan1-net
(10.0.1.x
) or might show no internet. This is fine; we'll set a static IP later.
Creating dmz-server-01
:
Similar to client-vm-01
.
- Create a new VM named
dmz-server-01
. - ISO: Ubuntu Server ISO.
- Memory:
1024
MB. - CPUs:
1
. - Disk:
15
GB (qcow2). - Name:
dmz-server-01
. - Network Selection: Choose
Virtual network 'dmz1-net': Isolated network
. - Check "Customize configuration before install." Click
Finish
. - In Customization:
- Verify NIC is on
dmz1-net
and model isvirtio
. - Verify Disk bus is
VirtIO
.
- Click "Begin Installation."
- Install Ubuntu Server on
dmz-server-01
.- Hostname:
dmz01
- User:
student
, Password:Password123!
- Install OpenSSH server.
- Network config during install will use
dmz1-net
(10.0.2.x
).
After installations, update all new VMs: sudo apt update && sudo apt upgrade -y
.
Configuring Static IP Addresses in VMs
Now, let's configure the static IPs as per our plan. For modern Ubuntu Server versions (18.04+), network configuration is typically handled by Netplan. Netplan uses YAML files in /etc/netplan/
.
On ubuntu-server-01
:
- Start
ubuntu-server-01
. Login. - Identify network interface names:
ip -br addr show
- You should see two interfaces (e.g., eth0 and eth1, or enp1s0 and enp2s0).
- One will have an IP from 192.168.122.x (this is eth0 in our plan, connected to the default NAT network).
- The other will have an IP from 10.0.1.x (DHCP from lan1-net) or no IP if DHCP failed or was disabled (this is eth1 in our plan, connected to lan1-net). Let's assume the NAT interface is enp1s0 and the lan1-net interface is enp2s0; adjust these names based on your VM's actual interface names.
- Edit the Netplan configuration file. There's usually one file like
/etc/netplan/00-installer-config.yaml
or01-netcfg.yaml
. Your existing file might look something like this (for enp1s0 getting DHCP):

# This is the network config written by 'subiquity'
network:
  ethernets:
    enp1s0: # Interface connected to default NAT
      dhcp4: true
    enp2s0: # Interface connected to lan1-net (might be DHCP or unconfigured)
      dhcp4: true # Or might not exist yet if added post-install
  version: 2

Modify it to:

# This is the network config for ubuntu-server-01
network:
  ethernets:
    enp1s0: # Interface connected to default NAT network
      dhcp4: true
    enp2s0: # Interface connected to lan1-net
      dhcp4: no # Disable DHCP for this interface
      addresses:
        - 10.0.1.10/24
      # No gateway here, as it's an internal network.
      # If you wanted it to use libvirt's DNS on this segment:
      # nameservers:
      #   addresses: [10.0.1.1] # libvirt bridge IP for lan1-net
  version: 2
  renderer: networkd # or NetworkManager if that's what your server uses

Important: YAML is sensitive to indentation (use spaces, not tabs).
- Apply the Netplan configuration: sudo netplan apply
- Verify:
ip addr show enp2s0
. It should now have10.0.1.10/24
.
On client-vm-01
:
- Start
client-vm-01
. Login. - Identify its network interface (e.g.,
enp1s0
).ip -br addr show
- Edit Netplan:
sudo nano /etc/netplan/00-installer-config.yaml
Modify to:

# This is the network config for client-vm-01
network:
  ethernets:
    enp1s0: # Interface connected to lan1-net
      dhcp4: no
      addresses:
        - 10.0.1.50/24
      # No gateway specified for this internal interface yet.
      # For DNS, we might point to ubuntu-server-01 later if it runs DNS, or 10.0.1.1
      # nameservers:
      #   addresses: [10.0.1.1] # Or later, our own DNS server 10.0.1.10
  version: 2
  renderer: networkd
- Apply:
sudo netplan apply
- Verify:
ip addr show enp1s0
. It should have10.0.1.50/24
.
On dmz-server-01
:
- Start
dmz-server-01
. Login. - Identify its network interface (e.g.,
enp1s0
).ip -br addr show
- Edit Netplan:
sudo nano /etc/netplan/00-installer-config.yaml
Modify it to mirror the client-vm-01 example above, giving the dmz1-net interface dhcp4: no and addresses: [10.0.2.20/24]. - Apply:
sudo netplan apply
- Verify:
ip addr show enp1s0
. It should have10.0.2.20/24
.
Testing Connectivity Between VMs
Now, let's test if our VMs can communicate within their respective segments.
-
Test within
lan1-net
:- On
ubuntu-server-01
(IP10.0.1.10
), pingclient-vm-01
(IP10.0.1.50
): This should work. - On
client-vm-01
(IP10.0.1.50
), pingubuntu-server-01
(IP10.0.1.10
): This should also work.
-
Test connectivity from
lan1-net
todmz1-net
(or vice-versa):- On
ubuntu-server-01
(onlan1-net
), try to pingdmz-server-01
(IP10.0.2.20
): This should FAIL. These networks are isolated and have no router between them yet. - Similarly, pinging from
dmz-server-01
to any VM onlan1-net
should fail.
-
Test internet connectivity:
ubuntu-server-01 has an interface (enp1s0) on the default NAT network. It should be able to ping the internet:

ping 8.8.8.8 -I enp1s0  # Specify interface if default route isn't via enp1s0
# Or simply:
ping 8.8.8.8

Whether this works depends on its routing table (ip route show). If it has a default route via 192.168.122.1, it will work. client-vm-01 and dmz-server-01 only have interfaces on isolated networks. They should NOT be able to ping 8.8.8.8. This is expected.
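A quick way to see why the results differ is to compare routing tables. On ubuntu-server-01 the output of ip route show should look roughly like this (device names, metrics, and the NAT address will vary):

default via 192.168.122.1 dev enp1s0 proto dhcp metric 100
10.0.1.0/24 dev enp2s0 proto kernel scope link src 10.0.1.10
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.x

On client-vm-01 and dmz-server-01 there is no default route at all, which is exactly why their pings to 8.8.8.8 fail.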
Workshop Building a Multi-Segment Network
Objective:
To implement the designed network topology with lan1-net
and dmz1-net
, configure VMs with static IPs, and test intra-segment and inter-segment connectivity (or lack thereof).
Prerequisites:
virt-manager
and KVM host ready.- Ubuntu Server ISO available.
ubuntu-server-01
VM created from previous workshops (or recreate it if necessary, initially with one NIC ondefault
NAT).
Steps:
-
Create Custom Virtual Networks:
- Follow the detailed steps in "Creating Custom Virtual Networks in virt-manager" to create:
lan1-net
(Network:10.0.1.0/24
, Type:Isolated
)dmz1-net
(Network:10.0.2.0/24
, Type:Isolated
)
- Verify they are active and note the host bridge interfaces
libvirt
creates for them (e.g.,virbr1
,virbr2
). Check their IPs on the host (e.g.,10.0.1.1
,10.0.2.1
).
- Follow the detailed steps in "Creating Custom Virtual Networks in virt-manager" to create:
-
Prepare/Configure
ubuntu-server-01
:- Ensure
ubuntu-server-01
is shut down. - Add a second network interface to
ubuntu-server-01
and connect it tolan1-net
(Device model:virtio
). - Start
ubuntu-server-01
. - Log in and identify the interface names (e.g.,
enp1s0
for default NAT,enp2s0
forlan1-net
). - Configure static IPs using Netplan as described in "Configuring Static IP Addresses in VMs":
enp1s0
(or equivalent NAT interface):dhcp4: true
enp2s0
(or equivalentlan1-net
interface):addresses: [10.0.1.10/24]
,dhcp4: no
sudo netplan apply
. Verify IPs withip addr show
.
- Ensure
-
Create and Configure
client-vm-01
:- Create a new Ubuntu Server VM named
client-vm-01
. - During creation, connect its single NIC to
lan1-net
. Usevirtio
model. Customize before install. - Install Ubuntu Server (hostname:
client01
, user:student
). - After installation and updates, log in. Identify interface name (e.g.,
enp1s0
). - Configure static IP using Netplan:
enp1s0
:addresses: [10.0.1.50/24]
,dhcp4: no
sudo netplan apply
. Verify IP.
- Create a new Ubuntu Server VM named
-
Create and Configure
dmz-server-01
:- Create a new Ubuntu Server VM named
dmz-server-01
. - During creation, connect its single NIC to
dmz1-net
. Usevirtio
model. Customize before install. - Install Ubuntu Server (hostname:
dmz01
, user:student
, install OpenSSH). - After installation and updates, log in. Identify interface name (e.g.,
enp1s0
). - Configure static IP using Netplan:
enp1s0
:addresses: [10.0.2.20/24]
,dhcp4: no
sudo netplan apply
. Verify IP.
- Create a new Ubuntu Server VM named
-
Test Connectivity:
- Intra-LAN1 Test:
- From
ubuntu-server-01
(10.0.1.10),ping 10.0.1.50
(client-vm-01
). Expected: Success. - From
client-vm-01
(10.0.1.50),ping 10.0.1.10
(ubuntu-server-01
). Expected: Success.
- Inter-Segment Test (LAN1 to DMZ1):
- From
ubuntu-server-01
(10.0.1.10),ping 10.0.2.20
(dmz-server-01
). Expected: Fail (Destination Host Unreachable or timeout). - From
dmz-server-01
(10.0.2.20),ping 10.0.1.10
(ubuntu-server-01
). Expected: Fail.
- Internet Test:
- From
ubuntu-server-01
,ping 8.8.8.8
. Expected: Success (via its NAT interface). - From
client-vm-01
,ping 8.8.8.8
. Expected: Fail. - From
dmz-server-01
,ping 8.8.8.8
. Expected: Fail.
- Host-to-VM (on custom nets) Ping Test:
- From your KVM host terminal, ping
10.0.1.10
(ubuntu-server-01
's LAN1 IP). Expected: Success. (The host has an interfacevirbrX
on this network). - From KVM host, ping
10.0.2.20
(dmz-server-01
's DMZ1 IP). Expected: Success.
Workshop Summary:
You have successfully:
- Designed a basic multi-segment network.
- Created custom isolated virtual networks (
lan1-net
,dmz1-net
) invirt-manager
. - Configured multiple VMs (
ubuntu-server-01
,client-vm-01
,dmz-server-01
) and connected them to the appropriate networks. - Assigned static IP addresses to VMs using Netplan.
- Tested connectivity, confirming that VMs within the same isolated segment can communicate, but VMs in different isolated segments cannot (yet). You also confirmed which VMs have internet access.
This setup forms the foundation for the next major step: introducing a router/firewall (like pfSense) to manage traffic between these segments and control access to the internet. Your network is starting to take shape!
6. Implementing a Router and Firewall with pfSense or OpenWrt
So far, our lan1-net
and dmz1-net
segments are isolated. To enable communication between them and to provide controlled internet access, we need a router and a firewall. Software solutions like pfSense and OpenWrt can run as virtual machines and perform these roles effectively in our simulated environment. We will focus on pfSense for this section due to its powerful firewalling capabilities and user-friendly web interface.
Introduction to Software Routers/Firewalls
A software router/firewall is an operating system specifically designed to perform routing and network security functions. It runs on standard computer hardware (or, in our case, a VM) and uses its network interfaces to connect to different network segments.
pfSense
- Based on: FreeBSD.
- Features: A very comprehensive open-source firewall and router solution. Includes:
- Stateful packet inspection (SPI) firewall.
- Network Address Translation (NAT).
- DHCP server and relay.
- DNS forwarder/resolver.
- VPN capabilities (OpenVPN, IPsec, L2TP).
- Intrusion Detection/Prevention (via packages like Snort or Suricata).
- Traffic shaping, load balancing, captive portal, and much more.
- Managed primarily through a web-based GUI.
- Use Cases: Widely used in homes, small businesses, and even enterprise environments. Excellent for our simulation.
OpenWrt
- Based on: Linux.
- Features: A highly flexible and customizable distribution primarily for embedded devices and routers.
- Good routing capabilities.
- Firewalling (based on
iptables
/nftables
). - DHCP, DNS (dnsmasq or BIND).
- Extensive package repository for adding features.
- Managed via web GUI (LuCI) and command line (UCI).
- Use Cases: Reviving old routers, custom router builds, specialized network appliances. Can be run as a VM for x86. While powerful, its configuration can be more command-line intensive for advanced setups compared to pfSense's GUI focus for firewall rules.
For this guide, pfSense is generally a better choice for learning comprehensive firewall rule management and common SMB gateway features due to its rich GUI.
Installing pfSense in a VM
Network Plan with pfSense:
- pfSense VM (
fw01
): Will have three network interfaces:- WAN: Connects to the
default
NAT network (192.168.122.0/24
) to get "internet" access from our KVM host. IP: DHCP fromlibvirt
(e.g.,192.168.122.x
). - LAN: Connects to
lan1-net
(10.0.1.0/24
). Static IP:10.0.1.1/24
. This will be the gateway for VMs onlan1-net
. - OPT1 (DMZ): Connects to
dmz1-net
(10.0.2.0/24
). Static IP:10.0.2.1/24
. This will be the gateway for VMs ondmz1-net
.
- WAN: Connects to the
VM Resource Allocation for pfSense:
- RAM:
Minimum 512 MB, recommended 1 GB or more, especially if using packages like Suricata/Snort later. Let's use1024
MB. - CPUs:
1
vCPU is usually fine for basic routing/firewalling in a lab.2
if you plan heavier loads. - Disk:
8
GB to16
GB is plenty. pfSense itself is small.qcow2
format. - NICs:
Three, allVirtIO
model.
Steps to Install pfSense:
-
Download pfSense ISO:
- Go to https://www.pfsense.org/download/.
- Architecture:
AMD64 (64-bit)
. - Installer:
CD Image (ISO) Installer
. - Mirror: Select a suitable mirror.
- Download the
.iso.gz
file, then extract it to get the.iso
file (e.g.,pfSense-CE-version-amd64.iso
). Save it to your~/ISOs
directory.
-
Create the pfSense VM in
virt-manager
:- Click "Create a new virtual machine."
- Step 1 (Media): "Local install media," browse to your pfSense ISO.
virt-manager
might not auto-detect pfSense. Manually type "FreeBSD" and select "FreeBSD 12.x" or the closest version (pfSense is based on FreeBSD). If no specific FreeBSD option, a "Generic" OS might work, but FreeBSD specific is better if available.- Step 2 (CPU/RAM):
RAM:1024
MB, CPUs:1
. - Step 3 (Disk):
Create disk image,10
GB, qcow2. - Step 4 (Name/Network):
- Name:
fw01
- Check "Customize configuration before install."
- Network selection: Initially, it will only let you pick one network. Pick
Virtual network 'default': NAT
for now. We'll add the others in customization. - Click
Finish
.
- Name:
-
Customize Configuration (
fw01
):- NICs:
- You'll see one NIC (e.g.,
NIC: xx:yy:zz
) connected todefault
(this will be our WAN). Ensure its Device model isvirtio
. - Click "Add Hardware." Select "Network."
- Network source:
Virtual network 'lan1-net': Isolated network
. - Device model:
virtio
. - Click
Finish
. (This will be our LAN interface).
- Network source:
- Click "Add Hardware" again. Select "Network."
- Network source:
Virtual network 'dmz1-net': Isolated network
. - Device model:
virtio
. - Click
Finish
. (This will be our OPT1/DMZ interface).
- Network source:
- You should now have three VirtIO NICs. Note their MAC addresses if you want to be precise during pfSense interface assignment, but pfSense usually lists them by their VirtIO names (
vtnet0
,vtnet1
,vtnet2
).
- Boot Options:
Ensure CDROM is first in boot order, then Hard Disk. - Disk (VirtIO Disk 1):
Ensure "Disk bus" isVirtIO
. - OS Information:
If you selected "Generic OS", try changing it to a FreeBSD type if possible. This can sometimes help with default device drivers. - Click "Begin Installation."
-
pfSense Installation Process (in VM Console):
- The VM will boot from the ISO.
- Copyright/Distribution Notice: Accept.
- Install:
Choose "Install pfSense." Continue. - Keymap:
Select your preferred keymap (e.g., "US keyboard"). Continue. - Partitioning:
Choose "Auto (UFS) BIOS" (or "Auto (ZFS)" if you prefer ZFS and have allocated enough RAM/CPU, but UFS is simpler for now). Select this option for the guided installer. - The installer will proceed.
- Once installation is complete, it will ask if you want to open a shell for manual modifications. Choose "No."
- Reboot:
Select "Reboot" and press Enter. - IMPORTANT:
As pfSense reboots, you need to "eject" the ISO so it boots from the hard disk.- Quickly, in
virt-manager
, select thefw01
VM. Click "Open," then "Show virtual hardware details." - Select "SATA CDROM 1."
- On the right, for "Source path," click "Disconnect" or clear the path. Click
Apply
. - Alternatively, in Boot Options, uncheck CDROM or move Hard Disk to the top.
- If you miss this, pfSense might boot from the ISO again. If so, force off the VM, fix the CDROM/boot order, and start again.
- Quickly, in
Basic pfSense Configuration
After rebooting from the virtual hard disk, pfSense will start its initial setup.
Interface Assignment
- VLANs first?
pfSense will ask: "Should VLANs be set up now? [y/n]". Entern
(no) and press Enter. - Interface Mapping (WAN, LAN, OPTx):
- pfSense will list available network interfaces (e.g.,
vtnet0
,vtnet1
,vtnet2
). These correspond to the NICs you added invirt-manager
. The order (vtnet0
,vtnet1
,vtnet2
) usually matches the order you added them.vtnet0
(first NIC) should be connected todefault
NAT (our WAN).vtnet1
(second NIC) should be connected tolan1-net
(our LAN).vtnet2
(third NIC) should be connected todmz1-net
(our OPT1/DMZ).
- Enter the WAN interface name:
Typevtnet0
(or the one connected to yourdefault
NAT network) and press Enter. - Enter the LAN interface name:
Typevtnet1
(or the one forlan1-net
) and press Enter. - Enter the Optional 1 interface name (OPT1):
Typevtnet2
(or the one fordmz1-net
) and press Enter. - If you have more, it will ask for OPT2, etc. If not, just press Enter when prompted for further optional interfaces.
- Confirm:
It will show you the assignments. If correct, typey
and press Enter.
- pfSense will list available network interfaces (e.g.,
- pfSense will then configure these interfaces and start services. You'll see a console menu.
pfSense Console Menu:
The console menu provides options for basic configuration if you can't access the web GUI.
pfSense CE X.Y.Z-RELEASE (amd64)
...
WAN (vtnet0) -> dhcp -> 192.168.122.X (WAN IP from libvirt)
LAN (vtnet1) -> static -> 192.168.1.1/24 (Default pfSense LAN IP)
OPT1 (vtnet2) -> <unassigned>
0) Logout
1) Assign Interfaces
2) Set interface(s) IP address
3) Reset webConfigurator password
...
15) Reboot system
16) Halt system
Important Note:
By default, pfSense assigns 192.168.1.1/24
to its LAN interface. Our lan1-net
is 10.0.1.0/24
. We need to change pfSense's LAN IP.
WAN Configuration (DHCP or Static)
- Our WAN (
vtnet0
) is connected tolibvirt
'sdefault
NAT network, which has a DHCP server. - pfSense should have automatically picked up an IP for WAN via DHCP (e.g.,
192.168.122.123
). This is correct.
LAN Configuration (Static IP DHCP Server)
-
Change LAN IP Address:
- From the pfSense console menu, choose option
2
("Set interface(s) IP address"). - It will ask which interface: choose
2
(for LAN, as per its current menu numbering based on assigned interfaces). - Configure IPv4 address LAN interface via DHCP? [y/n] Enter
n
(no, we want static). - Enter the new LAN IPv4 address:
10.0.1.1
- Enter the new LAN IPv4 subnet bit count (1 to 31):
24
- For a WAN, enter the new LAN IPv4 upstream gateway address...: Press Enter (no upstream gateway for LAN interface itself).
- Configure IPv6 address LAN interface via DHCP6? [y/n] Enter
n
. - Enter the new LAN IPv6 address. Press ENTER
for none: Press Enter. - Do you want to enable the DHCP server on LAN? [y/n] Enter
y
. - Enter the start address of the IPv4 client address range:
10.0.1.100
- Enter the end address of the IPv4 client address range:
10.0.1.200
- Do you want to revert to HTTP as the webConfigurator protocol? [y/n] Enter
n
(keep HTTPS). - Press Enter to continue. pfSense will apply changes. The console should now show:
LAN (vtnet1) -> static -> 10.0.1.1/24
- From the pfSense console menu, choose option
-
Configure OPT1 (DMZ) Interface:
- From the console menu, choose option
2
again. - Select the OPT1 interface (likely option
3
now). - Configure IPv4 address OPT1 interface via DHCP? [y/n]
n
. - Enter the new OPT1 IPv4 address:
10.0.2.1
- Subnet bit count:
24
- Upstream gateway: Press Enter.
- IPv6:
n
, then Enter. - Enable DHCP server on OPT1? [y/n]
y
. - Start address:
10.0.2.100
- End address:
10.0.2.200
- Press Enter to continue. The console should show:
OPT1 (vtnet2) -> static -> 10.0.2.1/24
- From the console menu, choose option
Accessing the pfSense WebGUI:
The web GUI is accessible from a machine on the pfSense LAN network (10.0.1.0/24
).
- Our
client-vm-01
is onlan1-net
. It currently has a static IP (10.0.1.50
) and no gateway configured. - Our
ubuntu-server-01
also has an interface onlan1-net
(10.0.1.10
).
Modify Client VMs' Network Configuration to use pfSense as Gateway/DNS:
On client-vm-01
:
- Start
client-vm-01
. - Edit Netplan:
sudo nano /etc/netplan/00-installer-config.yaml
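The goal is to point client-vm-01's default route and DNS at pfSense. A minimal sketch of the file, assuming the interface is still named enp1s0:

network:
  ethernets:
    enp1s0: # Interface connected to lan1-net
      dhcp4: no
      addresses:
        - 10.0.1.50/24
      gateway4: 10.0.1.1          # pfSense LAN IP becomes the default gateway
      nameservers:
        addresses: [10.0.1.1]     # use pfSense for DNS for now
  version: 2
  renderer: networkd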
sudo netplan apply
.
On ubuntu-server-01
(for its lan1-net
interface):
-
Edit Netplan:
sudo nano /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0: # NAT interface
      dhcp4: true
    enp2s0: # lan1-net interface
      dhcp4: no
      addresses:
        - 10.0.1.10/24
      # This VM's default route will still be via enp1s0 (NAT)
      # If we want it to primarily use pfSense for this segment's specific routing or DNS on this segment:
      # routes:
      #   - to: 0.0.0.0/0 # Or specific routes
      #     via: 10.0.1.1
      #     metric: 200 # Higher metric than enp1s0's default route
      nameservers:
        # This sets DNS for the whole system, consider carefully if also using NAT's DNS.
        # Better to let enp1s0 handle DNS for general internet if that's primary.
        addresses: [10.0.1.1] # If you want to test pfSense DNS for this segment
        # or remove this line to use DNS from enp1s0 (NAT)
  version: 2
# You might need to manage default routes carefully if both interfaces could provide one.
# For now, let enp1s0 (NAT) provide the default route.
# We'll primarily PING through 10.0.1.1 to test.
For
ubuntu-server-01
, since it already has a default route via its NAT interface, we won't setgateway4: 10.0.1.1
on theenp2s0
interface in Netplan, as that would create a second default route. We just need the IP and the ability to reach10.0.1.1
. If we wantedubuntu-server-01
to route all its traffic via pfSense (including internet), we'd remove/disable the NAT NIC or heavily manipulate routing metrics. For now,ubuntu-server-01
will use pfSense for traffic destined to/from the 10.0.1.0/24 network.
On dmz-server-01
:
- Start
dmz-server-01
. - Edit Netplan:
sudo nano /etc/netplan/00-installer-config.yaml
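Same idea for dmz-server-01, but pointing at pfSense's DMZ address; again a sketch, assuming the interface is named enp1s0:

network:
  ethernets:
    enp1s0: # Interface connected to dmz1-net
      dhcp4: no
      addresses:
        - 10.0.2.20/24
      gateway4: 10.0.2.1          # pfSense DMZ (OPT1) IP
      nameservers:
        addresses: [10.0.2.1]
  version: 2
  renderer: networkd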
sudo netplan apply
.
Accessing pfSense WebGUI from client-vm-01
:
client-vm-01
needs a web browser. Since it's Ubuntu Server, it doesn't have one by default.- Option 1 (Quick Test): Use
curl
fromclient-vm-01
to see if you can get the page (e.g., curl -k https://10.0.1.1, where -k accepts the self-signed certificate): You should see HTML output. This confirms Layer 3/4 connectivity. - Option 2 (Better): Install a lightweight graphical desktop and browser on
client-vm-01
(if your host has resources) OR create a new VM,gui-client-01
, onlan1-net
with a desktop Linux (e.g., Lubuntu, Xubuntu) and use its browser.- To install a minimal GUI + Firefox on
client-vm-01
(Ubuntu Server):After reboot, log in graphically. Open Firefox and go tosudo apt update sudo apt install xubuntu-core firefox -y # xubuntu-core is lighter than full ubuntu-desktop # This will take time and disk space. sudo systemctl set-default graphical.target sudo reboot
https://10.0.1.1
.
- To install a minimal GUI + Firefox on
- Initial pfSense WebGUI Setup:
- Browser:
https://10.0.1.1
- Accept security warning (self-signed certificate).
- Login:
- Username:
admin
- Password:
pfsense
- Username:
- Setup Wizard:
- Click "Next."
- "pfSense Gold Subscription": Click "Next" (skip for CE).
- General Information:
- Hostname:
fw01
- Domain:
home.arpa
(default) oryourlab.lan
(e.g.,smb.lab
) - Primary DNS Server:
8.8.8.8
(Google) or1.1.1.1
(Cloudflare) - these are for pfSense itself to use. - Secondary DNS Server: (Optional)
- Uncheck "Override DNS."
- Click "Next."
- Hostname:
- Time Server Configuration:
- Time server hostname: Default is fine.
- Timezone: Select your timezone.
- Click "Next."
- WAN Interface Configuration:
- SelectedType:
DHCP
(this should be pre-filled from console setup). - Other settings (MAC, MTU, etc.): Defaults are fine.
- Scroll down to "Block RFC1918 Private Networks." Uncheck this for now. Our "WAN" (
192.168.122.x
) is an RFC1918 network. If checked, pfSense would block traffic fromlibvirt
's NAT network. - Also uncheck "Block bogon networks."
- Click "Next."
- SelectedType:
- LAN Interface Configuration:
- LAN IP Address:
10.0.1.1
(should be pre-filled). - Subnet Mask:
24
(should be pre-filled). - Click "Next."
- LAN IP Address:
- Admin Password:
- Change the
admin
password to something secure. - Click "Next."
- Change the
- Reload Configuration: Click "Reload."
- pfSense will apply settings and reload. Click "Finish." You'll be taken to the dashboard.
- Browser:
Enable OPT1 (DMZ) Interface in pfSense GUI:
By default, optional interfaces like OPT1 are created but not enabled (no firewall rules, no DHCP server active on them from GUI perspective initially, even if set via console).
- In pfSense WebGUI, go to
Interfaces
->OPT1
. - Check "Enable interface."
- Description:
DMZ
- IPv4 Configuration Type:
Static IPv4
. - IPv4 Address:
10.0.2.1
/24
. - Scroll down and click
Save
. - Click
Apply Changes
at the top. - Enable DHCP Server for DMZ (OPT1):
- Go to
Services
->DHCP Server
. - Select the
DMZ
tab. - Check "Enable DHCP server on DMZ interface."
- Range:
10.0.2.100
to10.0.2.200
(should match console setup). - DNS Servers: You can add
10.0.2.1
(pfSense itself) or public DNS like8.8.8.8
. - Click
Save
.
- Go to
Firewall Rules Fundamentals
pfSense uses a stateful firewall. Rules are processed in order, per interface, from top to bottom. The first matching rule wins. If no rule matches, traffic is blocked by default (implicit deny).
- Default LAN Rules:
pfSense usually creates an "allow LAN to any" rule by default. This means clients on the LAN (10.0.1.x
) can access anything on other interfaces (WAN, DMZ) and the internet.Firewall
->Rules
->LAN
tab. You should see a rule like:- Action: Pass, Interface: LAN, Protocol: Any, Source: LAN net, Destination: Any.
- Default WAN Rules:
By default, all inbound traffic from WAN is blocked unless specifically allowed (e.g., by port forwarding or reply traffic to an outbound connection). This is good. - OPT1 (DMZ) Rules:
By default, OPTx interfaces have NO rules, meaning all traffic is blocked. We need to add rules.
Basic Firewall Rule Scenarios:
-
Allow DMZ to access Internet (DNS and Web):
- Go to
Firewall
->Rules
->DMZ
tab. - Click
+ Add
(to add a rule at the bottom). - Action:
Pass
- Interface:
DMZ
- Address Family:
IPv4
- Protocol:
TCP/UDP
(for DNS + Web) - Source:
DMZ net
(this is an alias for10.0.2.0/24
) - Destination:
any
- Destination Port Range:
- For DNS: Type
DNS
(or port53
). ClickAdd another port
. - For HTTP: Type
HTTP
(or port80
). ClickAdd another port
. - For HTTPS: Type
HTTPS
(or port443
).
- For DNS: Type
- Description:
Allow DMZ outbound DNS, HTTP, HTTPS
- Click
Save
. - Click
Apply Changes
. - Now,
dmz-server-01
should be able to ping8.8.8.8
and useapt update
.- Test from
dmz-server-01
:ping 8.8.8.8
sudo apt update
- Test from
- Go to
-
Allow LAN to access a specific service on DMZ Server:
Let's saydmz-server-01
(10.0.2.20
) will run a web server on port 80. We want LAN clients to access it.- Go to
Firewall
->Rules
->LAN
tab. - Click
+ Add
. - Action:
Pass
- Interface:
LAN
- Protocol:
TCP
- Source:
LAN net
- Destination:
- Type:
Single host or alias
- Address:
10.0.2.20
(dmz-server-01
's IP)
- Type:
- Destination Port Range:
HTTP
(or80
) - Description:
Allow LAN to DMZ Web Server (10.0.2.20)
- Click
Save
. - Click
Apply Changes
. - (You'd also need a web server running on dmz-server-01 for this to fully work).
- Go to
-
Block LAN from accessing DMZ (except for specific rules):
If the default "Allow LAN to Any" rule exists, the rule above is sufficient. If you wanted to be more restrictive on LAN, you might delete the "Allow LAN to Any" and only add specific allow rules. The last rule on an interface is often a "block all" (though pfSense has an implicit deny).
NAT (Network Address Translation)
- By default, pfSense performs Outbound NAT on the WAN interface. This means traffic from LAN and DMZ (if allowed by firewall rules) going to the WAN will have its source IP address translated to pfSense's WAN IP (
192.168.122.x
). This is what allows internal clients to share one "public" IP (in our case, the NATed IP fromlibvirt
). Firewall
->NAT
->Outbound
. Mode is usually "Automatic outbound NAT rule generation." This is fine for now.- Port Forwarding (Inbound NAT):
If you wanted to expose a service from a VM (e.g., a web server ondmz-server-01
) to the "internet" (i.e., make it accessible viafw01
's WAN IP192.168.122.x
), you'd set up port forwarding.- Example: Forward WAN port 8080 to
dmz-server-01
(10.0.2.20) port 80.Firewall
->NAT
->Port Forward
tab.- Click
+ Add
. - Interface:
WAN
- Protocol:
TCP
- Destination:
WAN address
- Destination port range:
from port: 8080
,to port: 8080
- Redirect target IP:
10.0.2.20
- Redirect target port:
80
- Description:
Forward to DMZ Web Server
- Click
Save
, thenApply Changes
. - This also automatically creates a linked firewall rule on WAN to allow this traffic.
- Example: Forward WAN port 8080 to
Setting up Inter-VLAN Routing (Router on a Stick - Not Directly Applicable Here)
"Router on a Stick" is a term for using a single physical router interface with VLAN tagging (802.1Q) to route between multiple VLANs on a switch. In our KVM setup, we've achieved a similar logical separation by using:
- Separate
libvirt
virtual networks (lan1-net
,dmz1-net
). - Separate virtual NICs on the pfSense VM, each connected to one of these distinct Layer 2 segments (virtual bridges).
So, pfSense is already routing between these distinct networks (10.0.1.0/24
and 10.0.2.0/24
) because it has a directly connected interface in each. Firewall rules control what traffic is allowed. This is more akin to a router with multiple physical interfaces, each in a different subnet.
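You can watch this routing happen from a LAN client. Assuming client-vm-01 already uses 10.0.1.1 as its gateway and tracepath is available (it usually is on Ubuntu), the first hop should be pfSense:

tracepath -n 10.0.2.20
# Roughly expected:
#  1:  10.0.1.1     (pfSense LAN interface)
#  2:  10.0.2.20    (dmz-server-01)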
If you did want to simulate VLANs passing over a single pfSense interface, you would:
- Create a
libvirt
network that is a "trunk" (allowing tagged traffic). - Configure one pfSense vNIC to connect to this trunk.
- Inside pfSense, create VLAN sub-interfaces on that vNIC (e.g.,
vtnetX.10
,vtnetX.20
). - Assign these VLAN sub-interfaces to pfSense logical interfaces (like LAN_VLAN10, DMZ_VLAN20).
- Configure VMs to also connect to this trunk network and tag their traffic appropriately (or connect them to
libvirt
networks that are themselves configured as access ports for specific VLANs on that trunk). This is more advanced and not necessary for our current topology but good to know.
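For reference only, a trunked setup of this kind is usually built on an Open vSwitch bridge rather than a plain Linux bridge. A hedged sketch of what such a libvirt network definition could look like (ovsbr0 is an assumed, pre-existing OVS bridge; VLAN IDs 10 and 20 are placeholders):

<network>
  <name>trunk-net</name>
  <forward mode='bridge'/>
  <bridge name='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='trunk' default='yes'>
    <vlan trunk='yes'>
      <tag id='10'/>
      <tag id='20'/>
    </vlan>
  </portgroup>
</network>

You would load it with virsh net-define and virsh net-start, attach a pfSense vNIC to it, and then create the VLAN sub-interfaces inside pfSense as described above.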
Workshop Installing and Configuring a pfSense Firewall VM
Objective:
To install pfSense, configure its WAN, LAN, and DMZ interfaces, set up basic firewall rules, and enable internet access for the LAN and DMZ segments.
Prerequisites:
- All VMs and networks from Workshop 5 (
lan1-net
,dmz1-net
,ubuntu-server-01
,client-vm-01
,dmz-server-01
). - pfSense ISO downloaded and extracted.
- A VM (
client-vm-01
or a newgui-client-01
) onlan1-net
capable of running a web browser to access pfSense GUI. Ifclient-vm-01
is a server, you may need to install a minimal desktop and Firefox on it, or use a new GUI VM.
Steps:
-
Create and Install pfSense VM (
fw01
):- Follow "Installing pfSense in a VM" detailed steps:
- Name:
fw01
, OS Type: FreeBSD. - RAM:
1024
MB, CPU:1
, Disk:10
GB. - NIC 1 (WAN): Connect to
default
NAT,virtio
model. - NIC 2 (LAN): Connect to
lan1-net
,virtio
model. - NIC 3 (DMZ/OPT1): Connect to
dmz1-net
,virtio
model. - Install pfSense from ISO, choosing UFS, then reboot (ensure ISO is ejected/disconnected).
- Name:
- Follow "Installing pfSense in a VM" detailed steps:
-
Initial pfSense Console Configuration:
- When prompted, don't set up VLANs (
n
). - Assign interfaces:
- WAN:
vtnet0
(or your first NIC). - LAN:
vtnet1
(or your second NIC). - OPT1:
vtnet2
(or your third NIC). - Confirm (
y
).
- WAN:
- From the console menu:
- Choose option
2
to set LAN IP:10.0.1.1
, subnet24
, no gateway, enable DHCP server (10.0.1.100
-10.0.1.200
). Keep HTTPS. - Choose option
2
again to set OPT1 IP:10.0.2.1
, subnet24
, no gateway, enable DHCP server (10.0.2.100
-10.0.2.200
).
- Choose option
- When prompted, don't set up VLANs (
-
Update Client VM Network Settings:
client-vm-01
(onlan1-net
):- If using static IP, ensure Netplan config has
gateway4: 10.0.1.1
andnameservers: addresses: [10.0.1.1]
. Apply. - Alternatively, change Netplan to use DHCP:
dhcp4: true
(and remove static IP, gateway, DNS lines). pfSense's DHCP server on LAN should provide an IP in10.0.1.100-200
range, plus gateway and DNS.sudo netplan apply
.
- If using static IP, ensure Netplan config has
dmz-server-01
(ondmz1-net
):- If using static IP, ensure Netplan config has
gateway4: 10.0.2.1
andnameservers: addresses: [10.0.2.1]
. Apply. - Or change to
dhcp4: true
.sudo netplan apply
.
- If using static IP, ensure Netplan config has
ubuntu-server-01
(itslan1-net
interfaceenp2s0
):- If static IP
10.0.1.10
, addnameservers: addresses: [10.0.1.1]
to itsenp2s0
config in Netplan if you want it to use pfSense for DNS on that segment. (Its default route for internet is still via its NAT NICenp1s0
). Apply.
- If static IP
-
Access pfSense WebGUI and Complete Wizard:
- On
client-vm-01
(or your GUI client onlan1-net
), open a browser tohttps://10.0.1.1
. - Login:
admin
/pfsense
. - Complete the setup wizard:
- Hostname:
fw01
, Domain:yourlab.lan
. - Primary DNS:
8.8.8.8
(or your choice). - Timezone: Your timezone.
- WAN config: DHCP. Uncheck "Block RFC1918 Private Networks" and "Block bogon networks."
- LAN config:
10.0.1.1/24
(should be set). - Set a new admin password.
- Reload.
- Hostname:
- On
-
Enable and Configure OPT1 (DMZ) Interface in GUI:
Interfaces
->OPT1
.- Check "Enable." Description:
DMZ
. IPv4:Static IPv4
, Address:10.0.2.1 / 24
. - Save and Apply Changes.
Services
->DHCP Server
->DMZ
tab.- Enable DHCP server. Range:
10.0.2.100
-10.0.2.200
. DNS Servers:10.0.2.1
(or8.8.8.8
). - Save.
-
Configure Basic Firewall Rules:
- LAN to Any (Default):
Firewall
->Rules
->LAN
. Verify the default "Pass LAN to Any" rule exists. If not, add it (Action: Pass, Interface: LAN, Protocol: Any, Source: LAN net, Dest: Any).
- DMZ to Internet (DNS, HTTP, HTTPS):
Firewall
->Rules
->DMZ
. Click+ Add
.- Action:
Pass
, Interface:DMZ
, Protocol:TCP/UDP
. - Source:
DMZ net
. - Destination:
any
. - Destination Port Range: Create an alias for common ports first (optional but good practice):
Firewall
->Aliases
->Ports
. Click+ Add
.- Name:
WebAndDNS
. Description:TCP/UDP 53, TCP 80, TCP 443
. - Type:
Port(s)
. Add ports:53
,80
,443
. Save. Apply.
- Back in Firewall Rule (DMZ): Destination Port Range: (Other) -> Type
WebAndDNS
(the alias). - Description:
Allow DMZ outbound Web and DNS
. - Save. Apply Changes.
- LAN to Any (Default):
-
Test Connectivity:
- From
client-vm-01
(LAN):ping 10.0.1.1
(pfSense LAN IP). Expected: Success.ping 8.8.8.8
(Internet). Expected: Success.ping 10.0.2.20
(dmz-server-01
). Expected: Success (due to LAN's "allow any" rule).
- From
dmz-server-01
(DMZ):ping 10.0.2.1
(pfSense DMZ IP). Expected: Success.ping 8.8.8.8
(Internet). Expected: Success (due to our DMZ outbound rule).sudo apt update
(should work).ping 10.0.1.50
(client-vm-01
on LAN). Expected: Fail. (No rule on DMZ allows traffic to LAN net by default).
- From
ubuntu-server-01
(Multi-homed):ping 10.0.1.1
(pfSense LAN IP). Expected: Success.ping 10.0.2.1
(pfSense DMZ IP). Expected: Success.ping 10.0.2.20 -I enp2s0
(Pingdmz-server-01
vialan1-net
interface, routing through pfSense). Expected: Success.ping 8.8.8.8
(Internet via its NAT interface). Expected: Success.
- From
Workshop Summary:
You have now successfully installed and configured a pfSense VM to act as a router/firewall for your lan1-net
and dmz1-net
segments. VMs on these segments can now access the internet (as allowed by firewall rules) and communicate with each other (also controlled by rules). You've learned to:
- Install pfSense and assign its interfaces.
- Configure LAN and OPT1 (DMZ) static IPs and DHCP services.
- Navigate the pfSense WebGUI and complete the initial setup wizard.
- Create basic firewall rules to allow specific traffic (e.g., DMZ to internet).
- Understand how pfSense handles routing between connected segments.
Your simulated network is becoming much more functional and realistic! Next, we'll add essential network services like DNS.
7. Setting Up a DNS Server
Domain Name System (DNS) is a critical part of any network, translating human-readable domain names (like www.google.com
) into machine-usable IP addresses (like 172.217.160.142
). In our simulated environment, setting up our own DNS server will allow us to resolve names for our internal VMs (e.g., srv01.yourlab.lan
to 10.0.1.10
) and control how external names are resolved.
Understanding DNS
DNS Hierarchy
DNS is a hierarchical and distributed naming system.
- Root Servers:
At the top, 13 clusters of root servers know where to find TLD servers. - Top-Level Domain (TLD) Servers:
Manage top-level domains like.com
,.org
,.net
,.uk
,.de
. They know the authoritative servers for second-level domains. - Authoritative Name Servers:
Responsible for storing DNS records for a specific domain (e.g.,google.com
). When queried for a name within that domain, they provide the definitive answer. -
Recursive Resolvers (Caching DNS Servers):
These are the servers your client machines typically query. When a client asks forwww.example.com
, the resolver performs the iterative queries:- Asks a root server: "Where is
.com
?" - Asks a
.com
TLD server: "Where isexample.com
?" - Asks
example.com
's authoritative server: "What is the IP forwww.example.com
?"
Resolvers also cache responses to speed up future queries for the same names.
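If you want to watch this delegation chain yourself, dig can perform the iterative lookups for you (run it from any machine with internet access and the dnsutils package installed):

dig +trace www.example.com   # shows the root servers, then the TLD servers, then the authoritative answer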
Record Types (Common Ones)
DNS uses various record types to store different kinds of information:
- A (Address):
Maps a hostname to an IPv4 address. (e.g.,srv01.yourlab.lan IN A 10.0.1.10
) - AAAA (IPv6 Address):
Maps a hostname to an IPv6 address. - CNAME (Canonical Name):
Creates an alias for a hostname. Points one name to another "canonical" A or AAAA record. (e.g.,www.yourlab.lan IN CNAME srv01.yourlab.lan
) - MX (Mail Exchange):
Specifies mail servers responsible for accepting email for a domain. Includes a preference value. (e.g.,yourlab.lan IN MX 10 mail.yourlab.lan
) - NS (Name Server):
Delegates a DNS zone to be managed by specific authoritative name servers. - PTR (Pointer):
Used for reverse DNS lookups. Maps an IP address back to a hostname. Stored in specialin-addr.arpa
(IPv4) orip6.arpa
(IPv6) zones. (e.g.,10.1.0.10.in-addr.arpa IN PTR srv01.yourlab.lan
) - SRV (Service):
Defines the location (hostname and port) of servers for specific services (e.g., LDAP, XMPP). - TXT (Text):
Allows arbitrary text to be associated with a domain. Used for things like SPF records (email sender validation), DKIM, domain verification.
Recursive vs Authoritative DNS
- Authoritative DNS Server:
Holds the actual DNS records for a zone it's responsible for. It gives definitive answers for that zone. Our internal DNS server will be authoritative for our internal domain (e.g.,yourlab.lan
). - Recursive DNS Server (Resolver):
Performs queries on behalf of clients. It doesn't necessarily hold the records itself but knows how to find them by querying other servers. It also caches results.- Our internal DNS server can also act as a recursive resolver for our VMs, forwarding queries for external domains (like
google.com
) to public DNS servers (like8.8.8.8
).
- Our internal DNS server can also act as a recursive resolver for our VMs, forwarding queries for external domains (like
Choosing a DNS Server Software
Several options are available for running a DNS server on Linux:
-
BIND9 (Berkeley Internet Name Domain version 9):
- The most widely used DNS software. Very powerful, feature-rich, and robust.
- Can act as an authoritative server, recursive resolver, or both.
- Configuration can be complex due to its extensive capabilities and text-based zone files.
- Excellent for learning DNS in depth. This is what we will use.
-
Unbound:
- Primarily a validating, recursive, and caching DNS resolver.
- Known for its security focus and performance.
- Can also serve authoritative zones, but its primary strength is as a resolver.
- Often used in conjunction with an authoritative server like NSD or BIND, or for local resolving on routers like pfSense.
-
Dnsmasq:
- A lightweight, easy-to-configure DNS forwarder and DHCP server.
- Ideal for small networks (like home routers or
libvirt
's default NAT network). - Can serve DNS records from
/etc/hosts
or configuration files. - Not a full-featured authoritative DNS server in the same way as BIND or NSD, but very useful for simple internal name resolution.
libvirt
uses Dnsmasq for its virtual networks. pfSense also uses Dnsmasq as its default DNS Resolver/Forwarder.
-
Knot DNS / NSD (Name Server Daemon):
- High-performance authoritative-only name servers. Often used by TLD operators or for large zones. NSD is developed by NLnet Labs, same as Unbound.
For our educational purposes, BIND9 provides the most comprehensive experience for understanding authoritative DNS zone management.
Installing and Configuring a DNS Server (BIND9 on a Linux VM)
We will configure ubuntu-server-01
(IP 10.0.1.10
on lan1-net
) to be our primary internal DNS server for the domain yourlab.lan
.
Plan:
- Domain:
yourlab.lan
- Forward Zone (
yourlab.lan
): Will contain A, CNAME, MX records for our internal hosts. - Reverse Zone (
1.0.10.in-addr.arpa
): For reverse lookups in the10.0.1.0/24
subnet. - Reverse Zone (
2.0.10.in-addr.arpa
): For reverse lookups in the10.0.2.0/24
(DMZ) subnet. - Recursion: BIND9 will also act as a recursive resolver for our clients, forwarding queries for external domains.
Steps on ubuntu-server-01
:
-
Install BIND9:
bind9
: The BIND9 server.bind9utils
: Utilities likedig
,named-checkconf
,named-checkzone
.bind9-doc
: Documentation.
-
Configure BIND9 Options (
Modify or add the following within thenamed.conf.options
): Edit/etc/bind/named.conf.options
. This file controls global BIND options.options { ... };
block:options { directory "/var/cache/bind"; // Listen on the LAN interface and localhost listen-on port 53 { 127.0.0.1; 10.0.1.10; }; listen-on-v6 { none; }; // Disable IPv6 listening if not using IPv6 extensively // Define an ACL for our internal networks acl "internal-nets" { localhost; 10.0.1.0/24; // LAN1 10.0.2.0/24; // DMZ1 }; allow-query { internal-nets; }; // Who can query this server (for authoritative and cached data) allow-recursion { internal-nets; }; // Who can make recursive queries recursion yes; // Enable recursive queries // Forwarders: If this server can't resolve something, forward to these public DNS servers. // pfSense (10.0.1.1) itself can be a forwarder, which then uses its own configured DNS. // Or use public DNS servers directly. forwarders { 10.0.1.1; // Forward to pfSense (which then uses its configured public DNS) // 8.8.8.8; // Google Public DNS (alternative or additional) // 1.1.1.1; // Cloudflare DNS (alternative or additional) }; // forward only; // 'only' means always use forwarders. forward first; // 'first' tries forwarders, then root if forwarders fail. dnssec-validation auto; // Or 'yes' - recommended // Optional: Log queries for troubleshooting (can generate large logs) // log_queries yes; // This is a BIND 8 syntax. // For BIND 9 logging: // logging { // channel query_log { // file "/var/log/named/query.log" versions 3 size 5m; // severity info; // print-category yes; // print-severity yes; // print-time yes; // }; // category queries { query_log; }; // }; // // Ensure /var/log/named exists and is writable by bind: // // sudo mkdir -p /var/log/named // // sudo chown bind:bind /var/log/named // // sudo chmod 775 /var/log/named };
Explanation of key options:
listen-on
: Specifies which IP addresses and port BIND should listen on for queries.10.0.1.10
is the LAN IP ofubuntu-server-01
.acl "internal-nets"
: Defines a named Access Control List for our trusted networks.allow-query
: Uses the ACL to restrict who can query the server.allow-recursion
: Uses the ACL to restrict who can make recursive queries. This is crucial to prevent becoming an open resolver.forwarders
: Specifies upstream DNS servers. Using pfSense (10.0.1.1
) centralizes external DNS configuration.forward first;
: Tries forwarders first. If they fail, BIND will attempt to resolve from root servers itself.forward only;
can be simpler if you always want to rely on forwarders.
-
Define Zones (
Add the following:named.conf.local
):
Edit/etc/bind/named.conf.local
to define your authoritative zones.// Our internal forward zone for yourlab.lan zone "yourlab.lan" { type master; // This server is the primary master for this zone file "/etc/bind/zones/db.yourlab.lan"; // Path to the zone file allow-update { none; }; // Disable dynamic updates for now notify yes; // Notify secondary servers (if any) of changes }; // Our internal reverse zone for 10.0.1.0/24 subnet (LAN1) // The name is 1.0.10.in-addr.arpa. (reversed IP octets + .in-addr.arpa.) zone "1.0.10.in-addr.arpa" { type master; file "/etc/bind/zones/db.10.0.1"; // Path to the reverse zone file allow-update { none; }; notify yes; }; // Our internal reverse zone for 10.0.2.0/24 subnet (DMZ1) zone "2.0.10.in-addr.arpa" { type master; file "/etc/bind/zones/db.10.0.2"; // Path to the reverse zone file allow-update { none; }; notify yes; };
-
Create Zone Files Directory:
This directory will store our actual zone data files. -
Create Forward Zone File (
Add the following content. Pay close attention to syntax, especially trailing dots on fully qualified domain names (FQDNs) and the SOA record format.db.yourlab.lan
):
This file contains records for theyourlab.lan
domain.; BIND data file for yourlab.lan ; $TTL 3600 ; Default TTL for records in this zone (1 hour) @ IN SOA ns1.yourlab.lan. admin.yourlab.lan. ( 2024031001 ; Serial (YYYYMMDDSS format, SS=sequence for the day) 7200 ; Refresh (2 hours) 3600 ; Retry (1 hour) 1209600 ; Expire (2 weeks) 3600 ) ; Negative Cache TTL (1 hour) ; ; Name Servers @ IN NS ns1.yourlab.lan. ; A Records for hosts ; LAN1 Segment (10.0.1.0/24) ns1 IN A 10.0.1.10 ; This DNS server (ubuntu-server-01) srv01 IN A 10.0.1.10 ; ubuntu-server-01 (can be same as ns1) fw01 IN A 10.0.1.1 ; pfSense LAN IP client01 IN A 10.0.1.50 ; client-vm-01 ; DMZ1 Segment (10.0.2.0/24) dmz01 IN A 10.0.2.20 ; dmz-server-01 fw01-dmz IN A 10.0.2.1 ; pfSense DMZ IP (optional name for clarity) ; CNAME Records (Aliases) www IN CNAME srv01.yourlab.lan. firewall IN CNAME fw01.yourlab.lan. ; MX Records (Mail Exchange) - example placeholder ; @ IN MX 10 mail.yourlab.lan. ; mail IN A 10.0.1.XX ; IP of mail server
Explanation:
$TTL
: Default Time-To-Live for records.@
: Represents the origin of the zone (i.e.,yourlab.lan.
).SOA
(Start of Authority) Record: Defines authoritative information about the zone.ns1.yourlab.lan.
: Primary master name server for this zone (FQDN with trailing dot).admin.yourlab.lan.
: Email address of the admin (replace first.
with@
for actual email:admin@yourlab.lan
). FQDN with trailing dot.Serial
: Version number of the zone file. Must be incremented each time you change the zone file. Using a YYYYMMDDSS format is a common convention.- Refresh, Retry, Expire, Negative Cache TTL: Timers for secondary servers and caching.
NS
Record: Specifies the name server(s) for this domain.A
Records: Map hostnames (without the domain, as@
impliesyourlab.lan.
, unless FQDN is used) to IPv4s.- Trailing dots are important for FQDNs to prevent BIND from appending the zone origin.
-
Create Reverse Zone File for LAN1 (
Add:db.10.0.1
):
This file maps IPs in10.0.1.x
back to hostnames.Note: Only the last octet of the IP is used on the left side of PTR records here because the zone; BIND reverse data file for 10.0.1.0/24 ; $TTL 3600 @ IN SOA ns1.yourlab.lan. admin.yourlab.lan. ( 2024031001 ; Serial 7200 ; Refresh 3600 ; Retry 1209600 ; Expire 3600 ) ; Negative Cache TTL ; ; Name Servers @ IN NS ns1.yourlab.lan. ; PTR Records (last octet of IP -> FQDN) 10 IN PTR ns1.yourlab.lan. ; 10.0.1.10 10 IN PTR srv01.yourlab.lan. ; Also 10.0.1.10 1 IN PTR fw01.yourlab.lan. ; 10.0.1.1 50 IN PTR client01.yourlab.lan. ; 10.0.1.50
1.0.10.in-addr.arpa.
already defines the first three octets in reverse. -
Create Reverse Zone File for DMZ1 (
Add:db.10.0.2
):
This file maps IPs in10.0.2.x
back to hostnames.; BIND reverse data file for 10.0.2.0/24 ; $TTL 3600 @ IN SOA ns1.yourlab.lan. admin.yourlab.lan. ( 2024031001 ; Serial 7200 ; Refresh 3600 ; Retry 1209600 ; Expire 3600 ) ; Negative Cache TTL ; ; Name Servers @ IN NS ns1.yourlab.lan. ; PTR Records 20 IN PTR dmz01.yourlab.lan. ; 10.0.2.20 1 IN PTR fw01-dmz.yourlab.lan. ; 10.0.2.1
-
Check Configuration Files for Errors:
- Check main BIND configuration: If there's no output, the main configuration files are syntactically correct.
- Check forward zone file:
Expected output:
zone yourlab.lan/IN: loaded serial 2024031001. OK
- Check LAN1 reverse zone file:
Expected output:
zone 1.0.10.in-addr.arpa/IN: loaded serial 2024031001. OK
- Check DMZ1 reverse zone file:
Expected output:
zone 2.0.10.in-addr.arpa/IN: loaded serial 2024031001. OK
-
Set Permissions and Ownership:
BIND runs as thebind
user (ornamed
on some systems). It needs to be able to read its configuration and zone files. The/etc/bind/zones
directory and files within need appropriate permissions.If AppArmor or SELinux is active, they might impose further restrictions. Default Ubuntu AppArmor profiles for BIND usually allow reading fromsudo chown -R bind:bind /etc/bind/zones sudo chmod 644 /etc/bind/zones/* # Read for owner/group, read for others sudo chmod 755 /etc/bind/zones # Ensure directory is executable for bind user # The /etc/bind directory itself should already have appropriate permissions.
/etc/bind/
and/var/cache/bind/
. -
Restart BIND9 Service:
-
Check BIND9 Status and Logs:
Look for errors related to loading zones. Common errors include syntax mistakes in zone files, incorrect serial numbers (must always increase), or permission issues. -
Configure Firewall on
ubuntu-server-01
(if usingufw
):
Ifufw
(Uncomplicated Firewall) is active onubuntu-server-01
, allow DNS traffic on port 53 from internal networks.sudo ufw allow from 10.0.1.0/24 to any port 53 proto udp comment 'DNS from LAN1' sudo ufw allow from 10.0.1.0/24 to any port 53 proto tcp comment 'DNS from LAN1 (TCP)' sudo ufw allow from 10.0.2.0/24 to any port 53 proto udp comment 'DNS from DMZ1' sudo ufw allow from 10.0.2.0/24 to any port 53 proto tcp comment 'DNS from DMZ1 (TCP)' # If ufw was enabled: # sudo ufw reload # If ufw was not enabled yet: # sudo ufw enable
Configuring Client VMs to Use Your DNS Server
Now, VMs on lan1-net
and dmz1-net
need to be told to use 10.0.1.10
(our BIND server) as their DNS server.
Method 1: Configure pfSense DHCP Server to provide your BIND server's IP (Recommended for DHCP clients)
This is the most seamless way for clients obtaining IPs via DHCP.
-
On
fw01
(pfSense WebGUI):- Go to
Services
->DHCP Server
. - Select the
LAN
tab. - Scroll down to "Servers" section.
- In the DNS Servers fields:
- DNS Server 1:
10.0.1.10
(your BIND server onubuntu-server-01
) - DNS Server 2: (Optional)
10.0.1.1
(pfSense itself as a fallback, which will then use its configured public DNS forwarders) or leave blank for only your BIND server.
- DNS Server 1:
- Click
Save
at the bottom of the page. - Repeat for the
DMZ
interface tab if you want DMZ clients to use your BIND server:- DNS Server 1:
10.0.1.10
- DNS Server 2: (Optional)
10.0.2.1
(pfSense DMZ IP as fallback)
- DNS Server 1:
- Click
Save
.
- Go to
-
On client VMs (e.g.,
client-vm-01
,dmz-server-01
):- If they are already configured for DHCP (e.g., Netplan
dhcp4: true
), they need to renew their DHCP lease to get the new DNS server information. - Verify the DNS server settings on the client. The primary way is to check what
systemd-resolved
(if used by Netplan) is using, or look at/etc/resolv.conf
.
- If they are already configured for DHCP (e.g., Netplan
Method 2: Manually Configure Static DNS on Clients
If a client VM has a static IP configuration, you must update its Netplan (or equivalent network configuration) to point to 10.0.1.10
.
Example for client-vm-01
if it were using a static IP:
# /etc/netplan/00-installer-config.yaml on client-vm-01
network:
  ethernets:
    enp1s0: # Adjust interface name as needed
      dhcp4: no
      addresses: [10.0.1.50/24]
      gateway4: 10.0.1.1
      nameservers:
        addresses: [10.0.1.10] # Point directly to your BIND server
        # search: [yourlab.lan] # Optional: adds yourlab.lan to DNS search path
  version: 2
sudo netplan apply
.
Testing DNS Resolution
On a client VM (e.g., client-vm-01
on LAN1, or dmz-server-01
on DMZ1) that is now configured to use 10.0.1.10
for DNS:
-
Test Forward Resolution (Internal Records):
In the output, look for the# From client-vm-01 (LAN1) dig ns1.yourlab.lan dig srv01.yourlab.lan dig client01.yourlab.lan dig www.yourlab.lan dig fw01.yourlab.lan dig dmz01.yourlab.lan # Should resolve to 10.0.2.20
ANSWER SECTION
andSERVER:
line.SERVER:
should show10.0.1.10#53
. The answer section should provide the correct IP address. -
Test Reverse Resolution (Internal Records):
You should get PTR records resolving to the fully qualified domain names defined in your reverse zone files. -
Test External Resolution (Recursion through BIND, then pfSense, then public DNS):
These should also resolve correctly. TheSERVER:
field should still show10.0.1.10#53
, indicating your BIND server handled the recursive query. -
Troubleshooting with
nslookup
(alternative todig
):
Common Issues and Troubleshooting:
- Syntax errors in BIND config/zone files:
named-checkconf
andnamed-checkzone
are your best friends. Also, checkjournalctl -u bind9
. - Serial numbers not incremented:
BIND (especially secondary servers, not relevant here yet) relies on increasing serial numbers to detect zone updates. - Permissions:
Ensurebind
user can read all necessary files. - Firewall blocking:
Checkufw
on the BIND server and pfSense firewall rules. DNS uses UDP/53 primarily, but TCP/53 for larger responses or zone transfers. allow-query
/allow-recursion
ACLs:
Ensure the client's network is permitted.- Client DNS configuration:
Double-check/etc/resolv.conf
orresolvectl status
on clients. Ensure they are actually using your BIND server. - pfSense DNS Resolver/Forwarder:
If pfSense itself is having trouble resolving external names, your BIND server (if forwarding through pfSense) will also fail for external names. Check pfSense's DNS settings underSystem
->General Setup
and its resolver status underStatus
->DNS Resolver
.
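When in doubt, query the BIND server directly with dig's @server syntax; this separates "the server is broken" from "the client is pointed at the wrong resolver":

dig @10.0.1.10 srv01.yourlab.lan   # ask your BIND server explicitly
dig @10.0.1.10 -x 10.0.1.10        # reverse lookup against the same server
resolvectl status                  # confirm which DNS server the client actually uses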
Workshop Setting Up and Testing Your Internal DNS Server
Objective:
To install BIND9 on ubuntu-server-01
, configure it as an authoritative DNS server for an internal domain (yourlab.lan
) and as a recursive resolver for clients, and then test name resolution from other VMs.
Prerequisites:
- All VMs and network setup from the pfSense workshop (Chapter 6) should be functional.
ubuntu-server-01
(10.0.1.10),client-vm-01
(on LAN1, 10.0.1.50 or DHCP),dmz-server-01
(on DMZ1, 10.0.2.20 or DHCP).- pfSense (
fw01
) configured and routing traffic.
Steps:
-
Install BIND9 on
ubuntu-server-01
:- SSH into
ubuntu-server-01
. - Run
sudo apt update && sudo apt install bind9 bind9utils bind9-doc -y
.
- SSH into
-
Configure
named.conf.options
onubuntu-server-01
:- Edit
/etc/bind/named.conf.options
as per the detailed example in the main text.- Set
listen-on port 53 { 127.0.0.1; 10.0.1.10; };
. - Define
acl "internal-nets" { localhost; 10.0.1.0/24; 10.0.2.0/24; };
. - Set
allow-query { internal-nets; };
andallow-recursion { internal-nets; };
. - Set
recursion yes;
. - Set
forwarders { 10.0.1.1; };
(pfSense LAN IP) andforward first;
. - Save the file.
- Set
- Edit
-
Define Zones in
named.conf.local
onubuntu-server-01
:- Edit
/etc/bind/named.conf.local
. - Add zone definitions for
yourlab.lan
(forward),1.0.10.in-addr.arpa
(reverse LAN1), and2.0.10.in-addr.arpa
(reverse DMZ1) as shown in the main text. - Ensure file paths are
/etc/bind/zones/db.yourlab.lan
,/etc/bind/zones/db.10.0.1
, and/etc/bind/zones/db.10.0.2
. - Save the file.
- Edit
-
Create Zone Files Directory and Zone Files on
ubuntu-server-01
:sudo mkdir -p /etc/bind/zones
.- Create
/etc/bind/zones/db.yourlab.lan
with A records forns1
,srv01
,fw01
,client01
,dmz01
,fw01-dmz
, and CNAMEs as per the example. Use a current serial number. - Create
/etc/bind/zones/db.10.0.1
with PTR records for hosts in10.0.1.0/24
. Use a current serial. - Create
/etc/bind/zones/db.10.0.2
with PTR records for hosts in10.0.2.0/24
. Use a current serial.
-
Validate BIND Configuration and Zone Files on ubuntu-server-01:
  - sudo named-checkconf
  - sudo named-checkzone yourlab.lan /etc/bind/zones/db.yourlab.lan
  - sudo named-checkzone 1.0.10.in-addr.arpa /etc/bind/zones/db.10.0.1
  - sudo named-checkzone 2.0.10.in-addr.arpa /etc/bind/zones/db.10.0.2
  - Correct any errors reported.
-
Set Permissions and Restart BIND9 on ubuntu-server-01:
  - sudo chown -R bind:bind /etc/bind/zones
  - sudo chmod 644 /etc/bind/zones/*
  - sudo chmod 755 /etc/bind/zones
  - sudo systemctl restart bind9
  - sudo systemctl status bind9 (check for active/running and no errors).
  - journalctl -u bind9 -n 50 --no-pager (check recent logs).
-
Configure Host Firewall on ubuntu-server-01 (ufw):
  - Run the sudo ufw allow ... port 53 ... commands as shown in the main text to permit DNS queries from LAN1 and DMZ1 (a sketch follows this step).
  - If ufw is active, sudo ufw reload. If not, sudo ufw enable (this will enable the firewall; ensure SSH is allowed if you're connected via SSH: sudo ufw allow ssh).
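The exact rules are given in the main text; one plausible sketch, assuming you want to allow DNS over both UDP and TCP from LAN1 and DMZ1 only, would be:

# Allow DNS (UDP and TCP) from LAN1
sudo ufw allow from 10.0.1.0/24 to any port 53 proto udp
sudo ufw allow from 10.0.1.0/24 to any port 53 proto tcp

# Allow DNS (UDP and TCP) from DMZ1
sudo ufw allow from 10.0.2.0/24 to any port 53 proto udp
sudo ufw allow from 10.0.2.0/24 to any port 53 proto tcp

# Keep SSH reachable before enabling the firewall
sudo ufw allow ssh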
-
Configure pfSense DHCP Server (fw01):
  - Access pfSense WebGUI (https://10.0.1.1).
  - Go to Services -> DHCP Server.
  - On the LAN tab, set DNS Server 1 to 10.0.1.10. Save.
  - On the DMZ tab, set DNS Server 1 to 10.0.1.10. Save.
-
Update Client VM DNS Settings:
  - On client-vm-01 (LAN1):
    - Ensure Netplan is set to dhcp4: true (if not already).
    - Renew DHCP lease: sudo dhclient -r && sudo dhclient.
    - Verify DNS settings: cat /etc/resolv.conf and resolvectl status.
  - On dmz-server-01 (DMZ1):
    - Ensure Netplan is set to dhcp4: true.
    - Renew DHCP lease: sudo dhclient -r && sudo dhclient.
    - Verify DNS settings.
-
Test DNS Resolution:
  - From client-vm-01:
    - dig srv01.yourlab.lan (Expected: 10.0.1.10)
    - dig dmz01.yourlab.lan (Expected: 10.0.2.20)
    - dig -x 10.0.1.50 (Expected: client01.yourlab.lan.)
    - dig www.google.com (Expected: Google's IP, via 10.0.1.10 as resolver)
  - From dmz-server-01:
    - dig srv01.yourlab.lan
    - dig dmz01.yourlab.lan
    - dig -x 10.0.2.20
    - dig www.debian.org
  - From ubuntu-server-01 (the DNS server itself):
    - Edit /etc/netplan/00-installer-config.yaml. For the enp1s0 (NAT) interface, you can explicitly add nameservers: addresses: [127.0.0.1] to make the server use itself for DNS first. Or, for the enp2s0 (LAN) interface, if that's the primary path for its own lookups, configure its nameserver there. A simpler approach for system-wide DNS on the server itself is to edit /etc/systemd/resolved.conf (a sketch follows this list).
    - Then test: dig srv01.yourlab.lan (should query itself).
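The exact edit is not reproduced here; a minimal sketch of /etc/systemd/resolved.conf, assuming you simply want the server to query its own BIND instance first, might be:

# /etc/systemd/resolved.conf (sketch)
[Resolve]
DNS=127.0.0.1
# Assumption: fall back to pfSense if the local BIND service is down
FallbackDNS=10.0.1.1
Domains=yourlab.lan

After saving, apply the change with sudo systemctl restart systemd-resolved and re-check it with resolvectl status.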
Workshop Summary:
You have successfully set up a BIND9 DNS server on ubuntu-server-01
. It serves authoritative records for your internal yourlab.lan
domain and its corresponding reverse lookup zones for LAN1 and DMZ1. It also acts as a recursive resolver for your internal clients, forwarding external queries through pfSense. Your client VMs are now configured (via DHCP from pfSense) to use this internal DNS server. This is a significant step towards a fully functional internal network environment.
8. Deploying a Web Server (Apache or Nginx)
With DNS configured, the next logical step is to deploy services that benefit from name resolution, such as a web server. Web servers host websites and web applications, serving content over HTTP or HTTPS. We'll explore installing and configuring a popular web server like Nginx (or Apache) on one of our VMs.
Introduction to Web Servers
A web server is software that processes requests from clients (typically web browsers) using the Hypertext Transfer Protocol (HTTP) or its secure version, HTTPS. When a client requests a resource (like an HTML page, image, or file), the web server locates the resource and sends it back to the client.
Common Web Server Software:
-
Apache HTTP Server (httpd):
- One of the oldest and most widely used web servers.
- Known for its flexibility, rich feature set, and module-based architecture (
mod_rewrite
,mod_php
, etc.). - Uses a process-driven or hybrid (process/thread) model. Can be memory-intensive under high load with certain configurations (e.g., prefork MPM with
mod_php
). - Configuration is typically done via
.htaccess
files (for per-directory settings) and main configuration files (e.g.,httpd.conf
,apache2.conf
, site-specific virtual host files).
-
Nginx (pronounced "Engine-X"):
- A high-performance, event-driven web server, reverse proxy, load balancer, and HTTP cache.
- Known for its efficiency, scalability, and low memory footprint, especially under concurrent connections.
- Excellent for serving static content and as a reverse proxy for application servers.
- Configuration is done via a main
nginx.conf
file and included site-specific configuration files. Does not use.htaccess
files by default.
-
Microsoft IIS (Internet Information Services):
- Web server for Windows Server platforms. Tightly integrated with the Windows ecosystem.
-
LiteSpeed Web Server:
- A commercial high-performance web server, often a drop-in replacement for Apache.
For our Linux-based lab, Nginx is an excellent choice due to its modern architecture, performance, and increasingly popular usage. We will focus on Nginx, but the concepts are transferable to Apache.
Choosing a Web Server VM
We need to decide which VM will host our web server.
- srv01 (ubuntu-server-01 on 10.0.1.10 in LAN1): Could host internal websites or act as a backend for a reverse proxy.
- dmz01 (dmz-server-01 on 10.0.2.20 in DMZ1): A common location for public-facing web servers, as the DMZ is a more controlled, isolated network segment.
Let's plan to install our first web server on dmz01
(dmz-server-01
). We can later explore setting up another one on srv01
or using srv01
as an application server proxied by Nginx on dmz01
.
Installing Nginx
We'll install Nginx on dmz-server-01
(10.0.2.20
).
-
SSH into
dmz-server-01
: -
Update package lists and install Nginx:
-
Check Nginx Status:
You should see it's
Once installed, Nginx usually starts automatically.active (running)
. If not, start it:sudo systemctl start nginx
. Enable it to start on boot:sudo systemctl enable nginx
. -
Allow Nginx through Firewall (on
dmz-server-01
itself):
Ifufw
is active ondmz-server-01
:sudo ufw app list # See available application profiles, Nginx should be listed sudo ufw allow 'Nginx HTTP' # Allows traffic on port 80 # If you later configure HTTPS: sudo ufw allow 'Nginx HTTPS' (port 443) # Or more generically: sudo ufw allow 80/tcp && sudo ufw allow 443/tcp # sudo ufw reload # if already active # sudo ufw enable # if not active (ensure SSH is allowed if using it)
-
Allow Nginx through pfSense Firewall:
Clients from LAN1 (10.0.1.0/24) need to be able to reach port 80 on dmz01 (10.0.2.20).
https://10.0.1.1
). Firewall
->Rules
.- Select the
LAN
tab (since the traffic originates from LAN). - Click
+ Add
to create a new rule.- Action:
Pass
- Interface:
LAN
- Address Family:
IPv4
- Protocol:
TCP
- Source:
LAN net
(alias for10.0.1.0/24
) - Destination:
- Type:
Single host or alias
- Address:
10.0.2.20
(IP ofdmz-server-01
)
- Type:
- Destination Port Range:
- From:
HTTP
(or type80
) - To:
HTTP
(or type80
)
- From:
- Description:
Allow LAN to DMZ Web Server (dmz01)
- Click
Save
.
- Action:
- Click
Apply Changes
.
- Go to your pfSense WebGUI (
-
Test Default Nginx Page:
- From
client-vm-01
(on10.0.1.0/24
), open a web browser (if GUI) or usecurl
: - You should see the default "Welcome to nginx!" HTML page.
- From
Nginx Configuration Basics
Nginx configuration files are typically located in /etc/nginx/
.
/etc/nginx/nginx.conf
:
The main configuration file. It usually includes other configuration files./etc/nginx/sites-available/
:
Directory to store configuration files for individual websites (virtual hosts). Each site typically gets its own file./etc/nginx/sites-enabled/
:
Directory containing symbolic links to files insites-available/
for sites that you want to be active./etc/nginx/conf.d/
:
Another directory often included bynginx.conf
for additional configuration snippets. Debian/Ubuntu Nginx packages often primarily usesites-available
/sites-enabled
.
The default Nginx welcome page is often served by a configuration file like /etc/nginx/sites-available/default
.
Key Nginx Directives:
server { ... }
:
Defines a virtual server (virtual host).listen <port>;
:
Specifies the port Nginx should listen on for this server block (e.g.,listen 80;
).server_name <name1> <name2> ...;
:
Defines the domain names this server block should respond to (e.g.,server_name example.com www.example.com;
).root <path>;
:
Specifies the document root directory where website files are stored for this server block (e.g.,root /var/www/html;
).index <file1> <file2> ...;
:
Specifies default files to serve if a directory is requested (e.g.,index index.html index.htm;
).location <path_pattern> { ... }
:
Defines how Nginx should handle requests for specific URL paths.
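To see how these directives fit together, here is a minimal, hypothetical server block; the domain name and paths are placeholders for illustration, not part of the lab configuration:

server {
    listen 80;                           # accept HTTP connections on port 80
    server_name example.com www.example.com;

    root /var/www/example/html;          # document root for this site
    index index.html index.htm;          # default files for directory requests

    location / {
        # Serve the requested file or directory, otherwise return 404
        try_files $uri $uri/ =404;
    }
}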
Creating Virtual Hosts
A virtual host allows you to host multiple websites (with different domain names or ports) on a single Nginx server instance. Each website will have its own document root and configuration.
Let's plan two simple sites on dmz01
(dmz-server-01
):
dmz01.yourlab.lan
(already works with the default Nginx page). We'll customize its content.- A new site, say
app.yourlab.lan
, which will also be served bydmz01
.
Step 1: Create Document Root Directories (on dmz-server-01
)
- For
dmz01.yourlab.lan
(we can use the default or create a new one): Let's use/var/www/dmz01_yourlab_lan/html
. - For
app.yourlab.lan
:
Step 2: Create Simple Index Pages (on dmz-server-01
)
- For
dmz01.yourlab.lan
: Add content: - For
app.yourlab.lan
: Add content:
Step 3: Set Permissions (on dmz-server-01
)
Ensure Nginx (which runs as user www-data
on Debian/Ubuntu) can read these files.
sudo chown -R www-data:www-data /var/www/dmz01_yourlab_lan
sudo chown -R www-data:www-data /var/www/app_yourlab_lan
sudo chmod -R 755 /var/www/ # Ensure directories are traversable
Step 4: Create Nginx Server Block (Virtual Host) Configuration Files (on dmz-server-01
)
- Configuration for
dmz01.yourlab.lan
:
We can modify the existingdefault
server block or create a new one. Let's create a new one and disable the default later if necessary. Add: - Configuration for
app.yourlab.lan
: Add:
Step 5: Enable the New Server Blocks (on dmz-server-01
)
Create symbolic links from sites-available
to sites-enabled
.
sudo ln -s /etc/nginx/sites-available/dmz01.yourlab.lan /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/app.yourlab.lan /etc/nginx/sites-enabled/
If the default
site is still enabled (i.e., /etc/nginx/sites-enabled/default
exists) and also listens on port 80 without a specific server_name
or with a wildcard, it might conflict or act as the default handler. For clarity, you might want to disable it if you are explicitly defining all your sites:
# Optional: Disable default Nginx site if you only want your custom sites active on port 80
# sudo rm /etc/nginx/sites-enabled/default
If you remove the default, ensure one of your server_name
entries in your custom sites is designated as the default for IP-based requests or unmatched server names by adding default_server
to its listen
directive, e.g., listen 80 default_server;
. For dmz01.yourlab.lan
, this would be appropriate.
Let's update dmz01.yourlab.lan
config to be the default server for port 80:
sudo nano /etc/nginx/sites-available/dmz01.yourlab.lan
# Modify the listen line:
# listen 80 default_server;
# listen [::]:80 default_server;
Step 6: Add DNS Record for app.yourlab.lan
(on ubuntu-server-01
, our DNS server)
The name app.yourlab.lan
needs to resolve to 10.0.2.20
.
- SSH into
ubuntu-server-01
(ns1.yourlab.lan
). - Edit the forward zone file for
yourlab.lan
: - Increment the serial number! (e.g.,
2024031001
to2024031002
). - Add an A record for
app
: - Save the file.
- Check the zone file:
- Reload BIND9 to apply changes (or restart if reload fails, but reload is preferred):
Step 7: Test Nginx Configuration and Reload (on dmz-server-01
)
Run sudo nginx -t. If it reports syntax is ok and test is successful, reload Nginx so the new server blocks take effect.
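The two commands, as also used in the workshop steps later in this chapter:

sudo nginx -t                  # validate the configuration files
sudo systemctl reload nginx    # apply the changes without dropping existing connections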
Step 8: Test Access to New Sites
-
From
client-vm-01
(or any client on LAN1 that can reach DMZ1 and use your DNS):-
First, ensure the new DNS record for
app.yourlab.lan
propagates or clear client DNS cache if necessary. -
Access via browser or
curl
:
-
Serving Different Content on a Single VM
The virtual host setup above demonstrates serving different websites (identified by server_name
) from the same IP address and port. Nginx inspects the Host
header in the HTTP request to determine which server
block should handle the request.
This is fundamental for hosting multiple distinct websites on a single server instance, a very common practice.
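Tying the pieces together, a sketch of the two server blocks used in this chapter could look like the following; the exact files are described in the steps above, so treat this as a reference layout rather than the canonical configuration:

# /etc/nginx/sites-available/dmz01.yourlab.lan
server {
    listen 80 default_server;             # also answers requests with unmatched Host headers
    listen [::]:80 default_server;
    server_name dmz01.yourlab.lan;
    root /var/www/dmz01_yourlab_lan/html;
    index index.html index.htm;
    location / { try_files $uri $uri/ =404; }
}

# /etc/nginx/sites-available/app.yourlab.lan
server {
    listen 80;
    listen [::]:80;
    server_name app.yourlab.lan;
    root /var/www/app_yourlab_lan/html;
    index index.html index.htm;
    location / { try_files $uri $uri/ =404; }
}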
Workshop Deploying Nginx and Configuring Virtual Hosts
Objective:
To install Nginx on dmz-server-01
, configure two virtual hosts (dmz01.yourlab.lan
and app.yourlab.lan
), and make them accessible from client-vm-01
.
Prerequisites:
- All VMs and network/DNS setup from Chapter 7 fully functional.
dmz-server-01
running, accessible, and able to install packages.ubuntu-server-01
(DNS server) running and accessible.client-vm-01
running and able to make web requests and DNS queries.- pfSense firewall rules allowing HTTP from LAN to
dmz-server-01
(10.0.2.20
) on port 80.
Steps:
-
Install Nginx on
dmz-server-01
:- SSH into
dmz-server-01
. sudo apt update && sudo apt install nginx -y
.sudo systemctl status nginx
(ensure active).sudo systemctl enable nginx
.- If
ufw
is active ondmz-server-01
,sudo ufw allow 'Nginx HTTP'
.
- SSH into
-
Verify Default Nginx Access (from
client-vm-01
):- On
client-vm-01
, runcurl http://dmz01.yourlab.lan
. You should see the "Welcome to nginx!" page.
- On
-
Create Document Roots and Index Pages on
dmz-server-01
:sudo mkdir -p /var/www/dmz01_yourlab_lan/html
sudo mkdir -p /var/www/app_yourlab_lan/html
- Create
/var/www/dmz01_yourlab_lan/html/index.html
with content:<h1>Hello from dmz01.yourlab.lan</h1>
- Create
/var/www/app_yourlab_lan/html/index.html
with content:<h1>Hello from app.yourlab.lan</h1>
sudo chown -R www-data:www-data /var/www/dmz01_yourlab_lan /var/www/app_yourlab_lan
sudo chmod -R 755 /var/www/
-
Create Nginx Server Block Configs on
dmz-server-01
:- Create
/etc/nginx/sites-available/dmz01.yourlab.lan
with: - Create
/etc/nginx/sites-available/app.yourlab.lan
with:
- Create
-
Enable Server Blocks on
dmz-server-01
:sudo ln -s /etc/nginx/sites-available/dmz01.yourlab.lan /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/app.yourlab.lan /etc/nginx/sites-enabled/
- Optional:
sudo rm /etc/nginx/sites-enabled/default
(if it exists and you want to ensure your configs take precedence).
-
Add DNS Record for
app.yourlab.lan
onubuntu-server-01
:- SSH into
ubuntu-server-01
. - Edit
/etc/bind/zones/db.yourlab.lan
. - Increment serial number.
- Add:
app IN A 10.0.2.20
- Save.
sudo named-checkzone yourlab.lan /etc/bind/zones/db.yourlab.lan
. sudo systemctl reload bind9
.
- SSH into
-
Test Nginx Config and Reload on
dmz-server-01
:sudo nginx -t
.- If OK,
sudo systemctl reload nginx
.
-
Test Access from
client-vm-01
:- Clear DNS cache on
client-vm-01
or wait briefly.- You can try
sudo systemd-resolve --flush-caches
if your client uses systemd-resolved. - Test DNS directly:
dig app.yourlab.lan @10.0.1.10
(should show 10.0.2.20).
- You can try
curl http://dmz01.yourlab.lan
(Expected:<h1>Hello from dmz01.yourlab.lan</h1>
)curl http://app.yourlab.lan
(Expected:<h1>Hello from app.yourlab.lan</h1>
)
- Clear DNS cache on
Workshop Summary:
You have successfully installed Nginx on dmz-server-01
and configured it to serve two different websites, dmz01.yourlab.lan
and app.yourlab.lan
, using virtual hosts. You also updated your internal DNS server to resolve the new hostname. This demonstrates a core concept in web hosting and prepares your environment for more complex web applications or reverse proxy setups.
9. Understanding TCP/IP and Network Analysis with Wireshark
To truly understand what's happening in your simulated network, and to troubleshoot effectively, a grasp of the TCP/IP protocol suite and the ability to inspect network traffic are essential. This section provides an overview of TCP/IP and introduces Wireshark, a powerful network protocol analyzer.
The TCP/IP Model vs OSI Model (Brief Overview)
Network communication is often conceptualized using layered models.
-
OSI (Open Systems Interconnection) Model: A 7-layer conceptual model:
- Physical (e.g., cables, radio waves, voltages)
- Data Link (e.g., Ethernet, MAC addresses, frames, switches)
- Network (e.g., IP, routing, packets, routers)
- Transport (e.g., TCP, UDP, segments/datagrams, ports, reliability)
- Session (e.g., session management, synchronization)
- Presentation (e.g., data formatting, encryption, compression)
- Application (e.g., HTTP, FTP, DNS, user-facing protocols)
-
TCP/IP Model:
A more practical, 4-layer model that is widely implemented:
- Link Layer (or Network Interface / Network Access Layer): Combines OSI Physical and Data Link layers. Deals with physical hardware, MAC addresses, and transmitting data over a local physical network (e.g., Ethernet frames).
- Internet Layer (or Network Layer): Corresponds to the OSI Network Layer. Responsible for logical addressing (IP addresses), routing packets across networks, and path determination. Key protocol: IP (Internet Protocol).
- Transport Layer: Corresponds to the OSI Transport Layer. Provides end-to-end communication services for applications. Key protocols:
  - TCP (Transmission Control Protocol): Connection-oriented, reliable, ordered delivery of a stream of bytes.
  - UDP (User Datagram Protocol): Connectionless, unreliable, best-effort datagram delivery. Faster, less overhead than TCP.
- Application Layer: Combines OSI Session, Presentation, and Application layers. Contains protocols used by applications for specific tasks (e.g., HTTP for web, SMTP for email, DNS for name resolution).
Data is encapsulated as it moves down the layers on the sending host (headers are added at each layer) and de-encapsulated as it moves up the layers on the receiving host.
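As a concrete illustration using this lab's addresses, an HTTP request from client-vm-01 to dmz01 leaves the client roughly wrapped like this (the ephemeral source port is an arbitrary example):

Ethernet frame     [ src MAC = client-vm-01, dst MAC = pfSense LAN interface ]
  -> IPv4 packet   [ src 10.0.1.50 -> dst 10.0.2.20, Protocol = 6 (TCP) ]
    -> TCP segment [ src port 51514 (ephemeral, example) -> dst port 80 ]
      -> HTTP data [ GET / HTTP/1.1, Host: dmz01.yourlab.lan ]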
Deep Dive into IP (Internet Protocol - Layer 3)
IP is the core protocol of the Internet layer. Its primary job is to deliver packets from a source host to a destination host based on their IP addresses.
- IP Packet (Datagram):
  The unit of data at the IP layer. Contains a header and a payload.
  - Header Fields (IPv4):
    - Version (e.g., 4 for IPv4).
    - IHL (Internet Header Length).
    - DSCP/ECN (Differentiated Services Code Point / Explicit Congestion Notification).
    - Total Length (header + data).
    - Identification, Flags, Fragment Offset (used for packet fragmentation).
    - TTL (Time To Live): A counter decremented by each router; if it reaches 0, the packet is discarded (prevents infinite loops).
    - Protocol: Indicates the transport layer protocol carried in the payload (e.g., 6 for TCP, 17 for UDP, 1 for ICMP).
    - Header Checksum.
    - Source IP Address: 32-bit address of the sender.
    - Destination IP Address: 32-bit address of the intended recipient.
    - Options (rarely used).
  - Payload:
    The data passed down from the transport layer (e.g., a TCP segment or UDP datagram).
- IP Addressing:
Globally unique (public IPs) or locally unique within private networks. - Routing:
The process of forwarding IP packets from their source network to their destination network. Routers maintain routing tables to make forwarding decisions. Each router inspects the destination IP address of an incoming packet and forwards it to the next hop router or the final destination if directly connected. - Connectionless:
IP itself is connectionless. Each packet is treated independently. It doesn't establish a connection before sending data. - Unreliable:
IP provides a "best-effort" delivery service. It doesn't guarantee delivery, order, or error-free transmission. Reliability is the responsibility of higher-layer protocols like TCP. - Fragmentation:
If a packet is larger than the Maximum Transmission Unit (MTU) of an underlying network link, IP can fragment the packet into smaller pieces, which are reassembled at the destination.
Deep Dive into TCP (Transmission Control Protocol - Layer 4)
TCP provides a reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating over an IP network.
- Connection-Oriented:
TCP establishes a connection before data transfer begins and closes it afterward. This involves a three-way handshake:
  - SYN (Synchronize): Client sends a TCP segment with the SYN flag set to the server, indicating a request to establish a connection. It includes a random initial sequence number (ISN_c).
  - SYN-ACK (Synchronize-Acknowledge): Server responds with a TCP segment with both SYN and ACK flags set. It acknowledges the client's SYN (ACK number = ISN_c + 1) and sends its own random initial sequence number (ISN_s).
  - ACK (Acknowledge): Client responds with a TCP segment with the ACK flag set, acknowledging the server's SYN (ACK number = ISN_s + 1). At this point, the connection is established, and data transfer can begin.
- TCP Segment:
The unit of data at the TCP layer. Contains a header and payload.- Header Fields:
- Source Port (16 bits):
Identifies the sending application process. - Destination Port (16 bits):
Identifies the receiving application process. (e.g., port 80 for HTTP, 443 for HTTPS, 22 for SSH). - Sequence Number (32 bits):
Identifies the byte position in the stream of data sent by the source. Used for ordering and reassembly. - Acknowledgement Number (32 bits):
If ACK flag is set, this field contains the next sequence number the sender of the ACK is expecting. - Data Offset (Header Length).
- Reserved.
- Flags (Control Bits):
URG
: Urgent pointer field significant.ACK
: Acknowledgement field significant.PSH
: Push function (request to deliver data to application promptly).RST
: Reset the connection (abrupt termination).SYN
: Synchronize sequence numbers (used to initiate connections).FIN
: No more data from sender (used to terminate connections gracefully).
- Window Size (16 bits):
Used for flow control. Indicates the amount of data the sender of this segment is willing to receive. - Checksum (for error detection in header and data).
- Urgent Pointer.
- Options (e.g., MSS - Maximum Segment Size, SACK - Selective Acknowledgement, Window Scale).
- Source Port (16 bits):
- Payload:
Application data.
- Header Fields:
- Reliability:
- Sequence Numbers:
Ensure data is reassembled in the correct order. - Acknowledgements (ACKs):
The receiver sends ACKs for data received. If the sender doesn't receive an ACK within a certain time (Retransmission Timeout - RTO), it retransmits the unacknowledged segment. - Checksum:
Detects corrupted segments. Corrupted segments are discarded, and TCP relies on retransmissions.
- Sequence Numbers:
- Flow Control:
- Uses the sliding window protocol. The receiver advertises its receive window size in the TCP header. The sender ensures it doesn't send more data than the receiver's window can accommodate, preventing the receiver from being overwhelmed.
- Congestion Control:
- TCP has mechanisms (e.g., slow start, congestion avoidance, fast retransmit, fast recovery) to prevent overwhelming the network itself. It adjusts its sending rate based on perceived network congestion (e.g., packet loss, increased RTT).
- Ordered Delivery:
TCP ensures that data is delivered to the receiving application in the order it was sent. - Full-Duplex:
Data can flow in both directions simultaneously over a single TCP connection. - Connection Termination (Four-way handshake typically):
- FIN:
Host A sends a FIN segment to Host B, indicating it has no more data to send.
- ACK: Host B acknowledges Host A's FIN. Host B can still send data to A.
- FIN: When Host B is also done sending data, it sends a FIN segment to Host A.
- ACK: Host A acknowledges Host B's FIN. After a wait time (TIME_WAIT state), the connection is closed.
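Wireshark, introduced later in this chapter, is the tool this guide uses to observe these flags, but if you only have a terminal on the KVM host, tcpdump (not otherwise covered here) shows the same handshake and teardown. A rough sketch, assuming lan1-net is bridged on virbr1 as in the earlier chapters:

# Watch TCP flags between client-vm-01 and the DMZ web server from the KVM host
sudo tcpdump -i virbr1 -nn 'tcp and host 10.0.2.20 and port 80'
# In the output, SYN appears as Flags [S], SYN-ACK as [S.], ACK as [.], FIN as [F.]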
Deep Dive into UDP (User Datagram Protocol - Layer 4)
UDP is a simpler, connectionless transport protocol.
- Connectionless:
No handshake to establish a connection. Data is just sent. - Unreliable:
No acknowledgements, no retransmissions, no guarantee of delivery or order. "Best effort." - Datagrams:
The unit of data is a UDP datagram.- Header Fields (very simple):
- Source Port (16 bits).
- Destination Port (16 bits).
- Length (16 bits, header + data).
- Checksum (16 bits, optional for IPv4, mandatory for IPv6).
- Header Fields (very simple):
- Low Overhead:
Smaller header and no connection management make it faster than TCP. - Use Cases:
Suitable for applications where speed is more important than guaranteed delivery, or where the application layer handles reliability.- DNS queries (often UDP port 53):
Fast lookups, application can retry if needed. - DHCP (UDP ports 67, 68):
For IP address assignment. - Streaming media (some types):
Occasional lost packets might be acceptable. - Online gaming (some types):
Low latency is critical. - VoIP (some implementations like RTP over UDP):
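A quick way to connect the transport layer to the lab: the ss utility (not otherwise covered in this guide) lists which TCP and UDP ports a host is actually listening on. On ubuntu-server-01 you would expect BIND on port 53, and on dmz-server-01 Nginx on port 80:

# List listening (-l) TCP (-t) and UDP (-u) sockets numerically (-n) with the owning process (-p)
sudo ss -tulpn
# Narrow the output to the DNS and HTTP ports discussed above
sudo ss -tulpn | grep -E ':(53|80)\b'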
Introduction to Wireshark
Wireshark is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, and education.
- How it works:
Wireshark captures network traffic going to and from your computer (or specific network interfaces). It can also read packet captures from files. - Features:
- Deep inspection of hundreds of protocols.
- Live capture and offline analysis.
- Powerful display filters to narrow down traffic.
- Color coding of packets based on protocol or filters.
- Statistics generation.
- Ability to follow TCP streams, UDP streams, etc.
- Cross-platform (Windows, Linux, macOS).
Wireshark User Interface (Main Components):
- Menu Bar:
File, Edit, View, Go, Capture, Analyze, Statistics, Telephony, Wireless, Tools, Help. - Main Toolbar:
Quick access to common functions (start/stop capture, open/save files, zoom). - Filter Toolbar:
Where you type display filters (e.g.,ip.addr == 10.0.1.50
,tcp.port == 80
,dns
). - Packet List Pane:
Displays a summary of each captured packet (No., Time, Source IP, Destination IP, Protocol, Length, Info). - Packet Details Pane:
Shows a hierarchical view of the selected packet's protocols and header fields (e.g., Frame, Ethernet II, Internet Protocol Version 4, Transmission Control Protocol). You can expand each layer. - Packet Bytes Pane:
Displays the raw data of the selected packet in hexadecimal and ASCII. - Status Bar:
Shows capture file details, profile, packet counts.
Installing Wireshark
-
On Linux (e.g., your KVM host or a Linux GUI VM like client-vm-01 if you installed a desktop on it):
  During installation, it might ask: "Should non-superusers be able to capture packets?"
  - If you select "Yes", your user account will be added to the wireshark group. You will need to log out and log back in for this group membership to take effect. This allows you to run Wireshark as a regular user and still capture packets.
  - If you select "No", you'll generally need to run Wireshark with sudo wireshark to capture packets, which is generally discouraged for GUI applications for security reasons. It's recommended to choose "Yes" and then re-login. A sketch of the installation commands follows this item.
-
On Windows or macOS:
Download the installer from https://www.wireshark.org/download.html. Npcap (for Windows) or equivalent capture drivers will be installed.
Capturing and Analyzing Traffic
Basic Capture Process:
- Launch Wireshark.
- Select Capture Interface:
On the main screen, you'll see a list of available network interfaces.- If running on your KVM host and you want to capture traffic from a VM connected to a
libvirt
bridge (likevirbr0
,virbr1
,virbr2
, or your custombr0
), you can select that bridge interface on the host. This allows you to see traffic from/to VMs on that segment. - If running Wireshark inside a VM, select its primary network interface (e.g.,
eth0
,enp1s0
).
- If running on your KVM host and you want to capture traffic from a VM connected to a
- Start Capture:
Double-click the interface name or select it and click the "Start capturing packets" button (blue shark fin icon). - Generate Traffic:
Perform the network activity you want to analyze (e.g., ping, browse a website, make a DNS query). - Stop Capture:
Click the "Stop capturing packets" button (red square icon). - Apply Display Filters (Optional but very useful):
- In the filter bar, type a filter expression and press Enter.
- Examples:
ip.addr == 10.0.1.50
(Show packets to or from this IP)tcp.port == 80
(Show TCP traffic on port 80 - HTTP)udp.port == 53
(Show UDP traffic on port 53 - DNS)dns
(Show DNS protocol traffic)http
(Show HTTP protocol traffic)icmp
(Show ICMP traffic - e.g., ping)tcp.flags.syn == 1 && tcp.flags.ack == 0
(Show TCP SYN packets - start of connection)
- Examine Packets:
- Click on a packet in the Packet List Pane.
- Examine its details in the Packet Details Pane. Expand protocol layers to see header fields.
- Right-click on a packet -> "Follow" -> "TCP Stream" (or UDP/TLS Stream) to see the reassembled payload exchanged between the two endpoints for that specific conversation.
Workshop Analyzing Network Traffic with Wireshark
Objective:
To use Wireshark to capture and analyze DNS queries, HTTP requests, and TCP handshakes within your simulated network.
Prerequisites:
- Your simulated network with
fw01
(pfSense),ubuntu-server-01
(DNS server10.0.1.10
),dmz-server-01
(Nginx web server10.0.2.20
), andclient-vm-01
(on LAN1, using DNS server). -
Wireshark installed. You have two main options for where to run Wireshark:
- On your KVM Host:
This allows capturing on bridge interfaces likevirbr1
(forlan1-net
) orvirbr2
(fordmz1-net
). This is powerful as you can see all traffic on that segment. - Inside
client-vm-01
:
If you installed a desktop environment and Wireshark onclient-vm-01
, you can capture traffic from its perspective.
For this workshop, let's try capturing on the KVM host, on the
virbr1
interface (connected tolan1-net
). Iflan1-net
is associated withvirbr1
(checkvirsh net-dumpxml lan1-net
for bridge name orip addr
on host). - On your KVM Host:
Part 1: Capturing DNS Traffic
- Prepare for Capture (on KVM Host):
- Open Wireshark on your KVM host.
- Identify the bridge interface for
lan1-net
(e.g.,virbr1
). Select it.
- Start Capture: Click the "Start capturing packets" button.
- Generate DNS Traffic (on
client-vm-01
):- SSH into
client-vm-01
or use its console. - Clear any local DNS cache (optional, for a fresh query):
sudo systemd-resolve --flush-caches
(if applicable). - Perform some DNS lookups:
- SSH into
- Stop Capture (on KVM Host):
Click the "Stop" button in Wireshark. - Filter and Analyze DNS Packets:
- In the Wireshark filter bar, type
dns
and press Enter. - You should see packets related to your
dig
commands. - Find a query from
client-vm-01
(e.g.,10.0.1.50
) toubuntu-server-01
(DNS server10.0.1.10
).- Select a "Standard query" packet.
- In the Packet Details pane, expand "Domain Name System (query)".
- Look at "Queries": It will show the name being queried (e.g.,
srv01.yourlab.lan: type A, class IN
).
- Find the corresponding "Standard query response" packet from
10.0.1.10
to10.0.1.50
.- Expand "Domain Name System (response)".
- Look at "Answers": It will show the A record (e.g.,
srv01.yourlab.lan: type A, class IN, addr 10.0.1.10
).
- Observe a query for
www.google.com
.client-vm-01
sends query to10.0.1.10
.10.0.1.10
(your BIND server) then forwards this query to its forwarder (pfSense10.0.1.1
). You should see this traffic if capturing onvirbr1
.- pfSense then queries public DNS (this traffic won't be on
virbr1
, but on pfSense's WANvirbr0
). - The response comes back through the chain.
- Look at the source and destination UDP ports (typically 53).
- In the Wireshark filter bar, type
Part 2: Capturing HTTP Traffic and TCP Handshake
- Prepare for Capture (on KVM Host):
- If you stopped the previous capture, start a new one on
virbr1
(orvirbr2
ifdmz-server-01
is the target and you want to capture on the DMZ segment, butclient-vm-01
is onlan1-net
sovirbr1
is better for seeing client's view). - Let's capture on
virbr1
to seeclient-vm-01
's traffic todmz-server-01
.
- If you stopped the previous capture, start a new one on
- Start Capture: Click "Start."
- Generate HTTP Traffic (on
client-vm-01
):- Access the web server on
dmz-server-01
:
- Access the web server on
- Stop Capture (on KVM Host).
- Filter and Analyze HTTP/TCP Packets:
- In Wireshark filter bar, type
ip.addr == 10.0.1.50 && ip.addr == 10.0.2.20
(traffic between client and web server). - Or, more specific:
tcp.port == 80 && ip.addr == 10.0.2.20
. - Identify the TCP Three-Way Handshake:
- Packet from
10.0.1.50
(client) to10.0.2.20
(server): ProtocolTCP
, Info column will show[SYN]
. Expand TCP layer details: note flags, sequence number. - Packet from
10.0.2.20
to10.0.1.50
: ProtocolTCP
, Info[SYN, ACK]
. Expand TCP: note flags, sequence number, acknowledgement number. - Packet from
10.0.1.50
to10.0.2.20
: ProtocolTCP
, Info[ACK]
. Expand TCP: note flags, acknowledgement number.
- Packet from
- Identify the HTTP GET Request:
- Following the handshake, a packet from
10.0.1.50
to10.0.2.20
. Info might showHTTP GET /index.html ...
. - Expand "Hypertext Transfer Protocol". You'll see the request line (
GET /index.html HTTP/1.1
), Host header (Host: dmz01.yourlab.lan
), User-Agent, etc.
- Following the handshake, a packet from
- Identify the HTTP OK Response:
- A packet from
10.0.2.20
to10.0.1.50
. Info might showHTTP HTTP/1.1 200 OK ...
. - Expand "Hypertext Transfer Protocol". You'll see the status line (
HTTP/1.1 200 OK
), Server header, Content-Type, Content-Length, and the actual HTML data.
- A packet from
- Follow TCP Stream:
- Right-click on one of the HTTP packets (e.g., the GET request).
- Select "Follow" -> "TCP Stream."
- A new window will pop up showing the reassembled data exchanged: the client's HTTP request headers (in blue, for example) and the server's HTTP response headers and HTML content (in red, for example). This is very useful for seeing the complete application-level conversation. Close this window to return to the main packet list.
- Identify TCP Connection Termination (FIN packets):
- Look for packets with
[FIN, ACK]
flags being exchanged.
- Look for packets with
- In Wireshark filter bar, type
Part 3: Exploring Other Wireshark Features (Self-Study)
- Statistics:
ExploreStatistics
menu (Endpoints, Conversations, Protocol Hierarchy, etc.). - Coloring Rules:
View
->Coloring Rules
. See how different protocols are colored. - Capture Filters vs. Display Filters:
- Capture Filters
(set before starting capture, e.g., host 10.0.1.50, port 80) limit what packets Wireshark saves to disk. More efficient for long captures.
- Display Filters (applied after capture) hide packets from view but don't discard them. More flexible for analysis.
Workshop Summary:
You have installed Wireshark and used it to capture and analyze fundamental network traffic:
- DNS queries and responses, observing the protocol fields.
- The TCP three-way handshake for establishing an HTTP connection.
- HTTP GET requests and server responses, including headers and payload.
- How to use display filters to isolate specific traffic.
- How to follow a TCP stream.
This hands-on experience with Wireshark is invaluable for understanding how protocols work at a low level and for diagnosing network problems in any environment, real or simulated. You can now "see" the data flowing through your virtual network.