Author Nejat Hakan
eMail nejat.hakan@outlook.de
PayPal Me https://paypal.me/nejathakan


Networking Commands

Introduction Understanding the Network Command Line

Welcome to the world of Linux networking through the command line interface (CLI). In modern computing, networking is fundamental. Whether you're managing a massive server farm, configuring a small home network, or developing web applications, understanding how network components interact and how to diagnose issues is crucial. While graphical user interfaces (GUIs) offer convenience for some tasks, the CLI provides unparalleled power, flexibility, and scriptability for network management and troubleshooting on Linux systems.

Mastering the command line tools allows you to:

  • Inspect Network Configuration: Quickly view IP addresses, subnet masks, MAC addresses, and the status of network interfaces.
  • Test Connectivity: Verify if your system can reach other hosts on the local network or the wider internet.
  • Resolve Domain Names: Translate human-readable domain names (like www.google.com) into the IP addresses computers use.
  • Analyze Routing: Understand the paths network traffic takes to reach its destination.
  • Examine Network Services: See which network services are running on your system or remote systems and which ports they are listening on.
  • Capture and Analyze Traffic: Inspect the actual data packets flowing across your network interfaces for deep troubleshooting.
  • Securely Access Remote Systems: Log in to other computers and transfer files securely over the network.
  • Automate Tasks: Integrate networking commands into scripts to automate configuration, monitoring, and deployment.

This section is designed to be a deep dive. We will explore not just how to use these commands, but why they work the way they do, touching upon the underlying networking concepts like the TCP/IP model, IP addressing, DNS, routing, and common protocols (ICMP, TCP, UDP, SSH). We will start with fundamental tools for basic inspection and connectivity testing, gradually moving towards more advanced tools for analysis and remote access.

Each subchapter will introduce a category of networking tasks and the relevant commands. Crucially, each subchapter concludes with a "Workshop" section. These workshops are designed to be hands-on, practical exercises where you apply the commands you've just learned in realistic, step-by-step scenarios. Treat these workshops as mini-projects to solidify your understanding and build practical skills.

We will primarily focus on the modern suite of tools (like the ip command from iproute2), but we will also mention legacy commands (like ifconfig, netstat, nslookup) that you might still encounter on older systems or in existing documentation, explaining their limitations and why the newer tools are generally preferred.

Prepare to open your terminal and dive deep into the essential commands that empower Linux network administration.

1. Inspecting Network Interfaces and Basic Connectivity

Before troubleshooting complex network issues, you must first understand the status of your own system's network interfaces and its basic ability to communicate. This involves checking IP addresses, interface states, and sending simple test packets to other hosts. The primary tools for this are ip and ping. We will also briefly discuss the legacy ifconfig command.

The ip Command (iproute2 Suite)

The ip command is the modern, unified tool in Linux for displaying and manipulating routing, network devices, interfaces, and tunnels. It's part of the iproute2 package and replaces several older, disparate commands like ifconfig, route, and arp. Its syntax is generally ip [OPTIONS] OBJECT {COMMAND | help}, where OBJECT is the type of thing you want to manage (like link, addr, route) and COMMAND is the action you want to perform.

Viewing Network Interfaces and Addresses (ip addr)

The most common use case is viewing IP address information assigned to your network interfaces.

Syntax:

ip addr [show [dev IFACE]]
  • addr (or a, address): Specifies that we are working with network addresses.
  • show: The default action if only ip addr is given. Displays address information.
  • dev IFACE: (Optional) Specifies a particular network interface (e.g., eth0, ens33, wlan0). If omitted, information for all interfaces is shown.

Example Output and Explanation:

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:1c:42:a1:b2:c3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.105/24 brd 192.168.1.255 scope global dynamic noprefixroute eth0
       valid_lft 85870sec preferred_lft 85870sec
    inet6 fe80::21c:42ff:fea1:b2c3/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Let's break down the output for eth0:

  • 2: eth0:: The interface index (2) and name (eth0). Interface names can vary (ens3, enp0s3, wlp2s0, etc.) depending on the system's naming convention.
  • <BROADCAST,MULTICAST,UP,LOWER_UP>: Interface flags.
    • UP: The interface is administratively enabled.
    • LOWER_UP: The physical layer is connected (e.g., cable plugged in and link established).
    • BROADCAST: The interface supports broadcasting.
    • MULTICAST: The interface supports multicasting.
  • mtu 1500: Maximum Transmission Unit. The largest size, in bytes, of a single IP packet that can be transmitted over this interface without fragmentation. 1500 is standard for Ethernet.
  • qdisc fq_codel: Queuing discipline. The algorithm used to manage outgoing packets if there's congestion. fq_codel is a modern, fair queuing algorithm.
  • state UP: The operational state of the interface. UP means it's ready to pass traffic. Other states include DOWN and UNKNOWN.
  • group default: Interface group.
  • qlen 1000: Transmit queue length.
  • link/ether 00:1c:42:a1:b2:c3: The Layer 2 (Data Link) address, commonly known as the MAC (Media Access Control) address. This is a unique hardware identifier.
  • brd ff:ff:ff:ff:ff:ff: The broadcast MAC address. Packets sent to this address are received by all devices on the same Ethernet segment.
  • inet 192.168.1.105/24: The IPv4 address (192.168.1.105) and its subnet mask represented in CIDR (Classless Inter-Domain Routing) notation (/24). /24 means the first 24 bits are the network portion, leaving the remaining 8 bits for host addresses. This corresponds to a traditional subnet mask of 255.255.255.0.
  • brd 192.168.1.255: The broadcast IP address for this subnet. Packets sent here are received by all hosts within the 192.168.1.0/24 network.
  • scope global: The address is globally valid (i.e., routable beyond the local host). Other scopes include host (only valid on the host itself, like 127.0.0.1) and link (only valid on the local network segment, like IPv6 link-local addresses).
  • dynamic: Indicates the address was obtained dynamically, likely via DHCP (Dynamic Host Configuration Protocol). Static addresses won't show this.
  • noprefixroute: A flag indicating that a route for the directly connected network prefix should not be automatically added just based on this address assignment (often used with DHCP).
  • valid_lft 85870sec preferred_lft 85870sec: Lease times (in seconds) for dynamically assigned addresses. The address is valid for valid_lft, but the system should try to renew it before preferred_lft expires. forever is used for static addresses or loopback.
  • inet6 fe80::21c:42ff:fea1:b2c3/64: The IPv6 link-local address. All IPv6-enabled interfaces automatically configure a link-local address starting with fe80::. These are only used for communication on the same physical link.
  • scope link: Indicates this IPv6 address is only valid on the local link.
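The CIDR-to-netmask relationship described above (/24 equals 255.255.255.0) can be computed mechanically: each prefix bit set to 1 contributes to the mask from the left. A minimal bash sketch, where `prefix_to_netmask` is a hypothetical helper name, not a standard command:

```shell
# Convert a CIDR prefix length (0-32) to a dotted-decimal subnet mask.
prefix_to_netmask() {
  local prefix=$1 mask="" i
  for i in 1 2 3 4; do
    if [ "$prefix" -ge 8 ]; then
      # A full octet of network bits.
      mask+="255"; prefix=$((prefix - 8))
    else
      # A partial octet: the top $prefix bits are set.
      mask+=$(( 256 - 2 ** (8 - prefix) )); prefix=0
    fi
    [ "$i" -lt 4 ] && mask+="."
  done
  echo "$mask"
}

prefix_to_netmask 24   # 255.255.255.0
prefix_to_netmask 26   # 255.255.255.192
```

This is also a handy mental model for reading ip addr output: the /NN suffix tells you directly how many leading bits identify the network.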

If you only care about the Layer 2 details (MAC address, interface state, MTU), you can use ip link.

Syntax:

ip link [show [dev IFACE]]

Example:

$ ip link show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:1c:42:a1:b2:c3 brd ff:ff:ff:ff:ff:ff

This provides a subset of the ip addr output, focusing on the interface itself rather than the assigned IP addresses.
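For scripting, ip's -o (oneline) flag prints one record per line, which pairs well with awk. The sketch below parses a captured sample of `ip -o link` output (canned here so the example is self-contained; on a live system you would pipe the command directly):

```shell
# Sample of `ip -o link` output (one interface per line, fields joined with '\').
sample='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 \    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 \    link/ether 00:1c:42:a1:b2:c3 brd ff:ff:ff:ff:ff:ff'

# Print "name state" pairs: field 2 is the interface name (with a trailing
# colon), and the word after "state" is the operational state.
printf '%s\n' "$sample" | awk '{
  name = $2; sub(/:$/, "", name)
  for (i = 1; i <= NF; i++) if ($i == "state") print name, $(i + 1)
}'
```

On a live system, `ip -o link | awk '...'` with the same awk program would list each interface and whether it is UP.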

The ping Command

ping (often expanded as "Packet InterNet Groper") is the most fundamental command for testing network connectivity. It sends ICMP (Internet Control Message Protocol) "echo request" packets to a target host and waits for ICMP "echo reply" packets in return. It measures round-trip time (latency) and packet loss.

Underlying Concept: ICMP

ICMP is a companion protocol to IP. It's not used for regular data transfer (like TCP or UDP) but for sending error messages and operational information. ping utilizes ICMP Type 8 (Echo Request) and Type 0 (Echo Reply). Firewalls can sometimes block ICMP traffic, which can cause ping requests to fail even if the host is reachable via other protocols (like HTTP on TCP port 80).

Syntax:

ping [OPTIONS] DESTINATION
  • DESTINATION: The hostname (e.g., www.google.com) or IP address (e.g., 8.8.8.8) of the target host.

Common Options:

  • -c COUNT: Stop after sending COUNT echo request packets. Without this, ping often runs continuously until interrupted (Ctrl+C).
  • -i INTERVAL: Wait INTERVAL seconds between sending each packet (default is 1 second). Can be a decimal value (e.g., -i 0.2).
  • -s PACKETSIZE: Specify the number of data bytes to be sent. Default is 56, which translates to a 64-byte ICMP packet when combined with the 8-byte ICMP header.
  • -I INTERFACE: Specify the source interface or IP address to send packets from (useful on multi-homed systems).
  • -4: Force using IPv4.
  • -6: Force using IPv6.
  • -W TIMEOUT: Time (in seconds) to wait for a response for each packet (default varies).
  • -q: Quiet output. Only displays summary statistics at the end.

Example and Explanation:

$ ping -c 4 www.google.com
PING www.google.com (142.250.187.196) 56(84) bytes of data.
64 bytes from lhr48s10-in-f4.1e100.net (142.250.187.196): icmp_seq=1 ttl=116 time=15.2 ms
64 bytes from lhr48s10-in-f4.1e100.net (142.250.187.196): icmp_seq=2 ttl=116 time=15.5 ms
64 bytes from lhr48s10-in-f4.1e100.net (142.250.187.196): icmp_seq=3 ttl=116 time=14.9 ms
64 bytes from lhr48s10-in-f4.1e100.net (142.250.187.196): icmp_seq=4 ttl=116 time=15.8 ms

--- www.google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 14.902/15.353/15.801/0.338 ms

Breakdown:

  • PING www.google.com (142.250.187.196) 56(84) bytes of data.: Shows the target hostname, the resolved IP address, and the size of the ICMP payload (56 bytes) and total IP packet size (84 bytes = 56 data + 8 ICMP header + 20 IP header).
  • 64 bytes from ...: A successful reply was received.
    • lhr48s10-in-f4.1e100.net (142.250.187.196): The hostname and IP address of the system that replied.
    • icmp_seq=1: The sequence number of the packet. Should increment for each request/reply pair. Gaps indicate packet loss.
    • ttl=116: Time To Live. This is a value in the IP header that is decremented by each router along the path. It prevents packets from circulating indefinitely. The initial value depends on the sender's OS, and the received value gives a rough idea of the number of hops.
    • time=15.2 ms: Round-Trip Time (RTT) or latency for this specific packet.
  • --- www.google.com ping statistics ---: Summary after ping finishes or is interrupted.
    • 4 packets transmitted, 4 received, 0% packet loss: Shows how many requests were sent, how many replies were received, and the percentage of loss. Packet loss indicates network problems.
    • time 3005ms: Total time the ping command was running.
    • rtt min/avg/max/mdev: Statistics for the round-trip times of the received packets: minimum, average, maximum, and mean deviation. These values are key indicators of network performance and stability.
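These summary lines are what monitoring scripts typically parse. A sketch extracting the loss percentage and average RTT with awk, run here against a captured sample (in practice you would pipe `ping -c 4 HOST` into the same awk programs):

```shell
# Captured ping summary lines, canned so the sketch is self-contained.
summary='4 packets transmitted, 4 received, 0% packet loss, time 3005ms
rtt min/avg/max/mdev = 14.902/15.353/15.801/0.338 ms'

# The loss percentage is the field ending in '%' on the "packet loss" line.
loss=$(printf '%s\n' "$summary" | awk '/packet loss/ {
  for (i = 1; i <= NF; i++) if ($i ~ /%$/) { sub(/%/, "", $i); print $i }
}')

# The rtt line packs min/avg/max/mdev into one slash-separated field.
avg=$(printf '%s\n' "$summary" | awk '/^rtt/ { split($4, a, "/"); print a[2] }')

echo "loss=${loss}% avg=${avg}ms"
```

A script could then alert when `$loss` exceeds a threshold, turning ping from an interactive check into an automated health probe.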

Common ping Issues:

  • Destination Host Unreachable: Your system or an intermediate router doesn't have a route to the destination, or the destination host explicitly rejected the packet (e.g., firewall).
  • Request timed out: No reply was received within the timeout period. This could be due to network congestion, packet loss, a firewall blocking the request or reply, or the destination host being down or configured not to respond to pings.
  • Unknown host: Your system could not resolve the hostname to an IP address via DNS.

Legacy Command: ifconfig

For many years, ifconfig (interface configuration) was the standard tool for viewing and managing network interfaces. While still available on many systems (often via the net-tools package), it's considered deprecated in favor of ip.

Syntax (Viewing):

ifconfig [INTERFACE]

Example Output:

$ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.105  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::21c:42ff:fea1:b2c3  prefixlen 64  scopeid 0x20<link>
        ether 00:1c:42:a1:b2:c3  txqueuelen 1000  (Ethernet)
        RX packets 12345  bytes 6789012 (6.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9876  bytes 1234567 (1.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Comparison with ip addr:

  • ifconfig combines address and link information more tightly.
  • It displays network masks in traditional dotted-decimal notation (netmask 255.255.255.0) instead of CIDR (/24).
  • It often shows more detailed packet counters (RX/TX packets, errors, drops, etc.) directly in the default output. The ip command can show similar stats using ip -s link.
  • ifconfig generally cannot display all IP addresses assigned to an interface if there are multiple (secondary addresses). ip addr handles this correctly.
  • Configuration using ifconfig is less intuitive and powerful than using ip.

While you should learn and primarily use ip, being able to read ifconfig output is useful when working on older systems.

Workshop: Basic Network Checkup

Goal: Use ip and ping to verify your system's network configuration and basic connectivity.

Assumptions: You are running Linux in a typical network environment (like a university lab, home network, or cloud VM) with internet access. Replace interface names (like eth0) and target addresses if necessary for your specific setup.

Steps:

  1. Identify Your Network Interfaces:

    • Open your terminal.
    • Type ip link and press Enter.
    • Observe the output. Identify the primary network interface you use for external connectivity. It's often named eth0, ensX, enpXsY, or wlan0 (for wireless). Note its state (should be UP).
    • Also note the lo (loopback) interface.
    # Command:
    ip link
    
    # Example Observation:
    # Note the name and state of your main interface (e.g., eth0, state UP)
    
  2. Check Your IP Address Configuration:

    • Type ip addr show dev <YOUR_INTERFACE_NAME> (replace <YOUR_INTERFACE_NAME> with the name you identified in Step 1, e.g., eth0). Press Enter.
    • Find the inet line. Note your IPv4 address and its CIDR suffix (e.g., 192.168.1.105/24).
    • Find the inet6 line(s). Note your IPv6 addresses (if any), especially the link-local (fe80::..) one.
    • Observe the scope (e.g., global, link).
    # Command (replace eth0 if needed):
    ip addr show dev eth0
    
    # Example Observation:
    # IPv4: 192.168.1.105/24 (global)
    # IPv6: fe80::21c:42ff:fea1:b2c3/64 (link)
    
  3. Test Loopback Connectivity:

    • The loopback interface (lo) represents the machine itself. Pinging it tests the local TCP/IP stack.
    • Type ping -c 4 127.0.0.1 and press Enter.
    • Verify that you receive 4 replies with very low latency (usually less than 1ms) and 0% packet loss.
    # Command:
    ping -c 4 127.0.0.1
    
    # Expected Outcome:
    # 4 packets transmitted, 4 received, 0% packet loss
    # Very low time values (e.g., time=0.050 ms)
    
  4. Test Connectivity to Your Default Gateway:

    • The default gateway is the router on your local network that forwards traffic to other networks. You need its IP address first. Find it using the ip route command (we'll cover this in detail later, but for now, just use it to get the gateway IP).
    • Type ip route and press Enter. Look for the line starting with default via. The IP address immediately after via is your gateway. Let's assume it's 192.168.1.1.
    • Type ping -c 4 <GATEWAY_IP> (e.g., ping -c 4 192.168.1.1) and press Enter.
    • Verify that you receive replies, likely with low latency, and 0% packet loss. This confirms connectivity within your local network.
    # Command to find gateway:
    ip route
    
    # Example Output Snippet:
    # default via 192.168.1.1 dev eth0 proto dhcp metric 100
    # (Note the gateway IP: 192.168.1.1)
    
    # Command to ping gateway (replace IP):
    ping -c 4 192.168.1.1
    
    # Expected Outcome:
    # 4 packets transmitted, 4 received, 0% packet loss
    # Low time values (e.g., time=1.5 ms)
    
  5. Test Internet Connectivity (using IP Address):

    • Ping a reliable public IP address, like one of Google's public DNS servers (8.8.8.8).
    • Type ping -c 4 8.8.8.8 and press Enter.
    • Verify replies and check the latency. This confirms your ability to reach the internet beyond your local network. Latency will be higher than pinging the gateway.
    # Command:
    ping -c 4 8.8.8.8
    
    # Expected Outcome:
    # 4 packets transmitted, 4 received, 0% packet loss
    # Higher time values (e.g., time=15 ms, depending on your location)
    
  6. Test Internet Connectivity and DNS (using Hostname):

    • Ping a reliable public hostname, like www.google.com. This tests both internet connectivity and your system's ability to resolve domain names using DNS.
    • Type ping -c 4 www.google.com and press Enter.
    • Verify replies and note the IP address resolved for the hostname.
    # Command:
    ping -c 4 www.google.com
    
    # Expected Outcome:
    # PING www.google.com (resolved IP) ...
    # 4 packets transmitted, 4 received, 0% packet loss
    # Time values similar to pinging 8.8.8.8
    

Conclusion: By completing these steps, you've used ip to inspect your network configuration and ping to progressively test connectivity from your local machine, to your local gateway, and finally to the public internet using both IP addresses and hostnames. If any step failed, it would provide a starting point for further troubleshooting (e.g., if pinging the gateway fails, the issue is likely local; if pinging 8.8.8.8 fails but the gateway works, the issue might be with the gateway or ISP; if pinging 8.8.8.8 works but www.google.com fails, the issue is likely DNS).
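Step 4's gateway discovery can itself be scripted: the gateway is simply the word after "via" on the default route line. A sketch using a captured `ip route` sample (on a live system you would substitute `ip route` for the canned variable):

```shell
# Sample `ip route` output, canned so the sketch is self-contained.
routes='default via 192.168.1.1 dev eth0 proto dhcp metric 100
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.105 metric 100'

# Field 1 is "default" on the default route; field 3 is the gateway IP.
gw=$(printf '%s\n' "$routes" | awk '$1 == "default" { print $3 }')
echo "Gateway: $gw"

# In a live checkup you could then run: ping -c 4 "$gw"
```

This is the building block for automating the whole checkup: extract the gateway once, then ping loopback, gateway, and a public address in sequence.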

2. DNS Name Resolution Tools

Humans prefer using memorable names like www.example.com, while computers communicate using numerical IP addresses like 93.184.216.34. The Domain Name System (DNS) is the hierarchical and distributed naming system that translates hostnames into IP addresses (forward lookup) and, less commonly, IP addresses back into hostnames (reverse lookup). Linux provides several command-line tools to interact with DNS servers and perform these lookups. The main modern tool is dig, while host offers simpler output, and nslookup is a legacy tool still found on many systems.

Understanding DNS Basics

Before diving into the commands, let's briefly review key DNS concepts:

  • DNS Servers: Specialized servers that store DNS records and respond to DNS queries. They are often hierarchical:
    • Recursive Resolvers: Your computer usually points to a recursive resolver (provided by your ISP, institution, or a public service like Google's 8.8.8.8 or Cloudflare's 1.1.1.1). When you request a name, the resolver does the work of querying other DNS servers (root, TLD, authoritative) to find the answer and then caches it. Your system's resolvers are typically listed in /etc/resolv.conf.
    • Authoritative Servers: These servers hold the actual DNS records for a specific domain (zone). For example, example.com has designated authoritative servers that know the IP address for www.example.com.
  • DNS Records: Entries in a DNS database that map names to IPs or provide other information. Common types include:
    • A: Maps a hostname to an IPv4 address.
    • AAAA (Quad-A): Maps a hostname to an IPv6 address.
    • CNAME (Canonical Name): Creates an alias, pointing a hostname to another hostname.
    • MX (Mail Exchanger): Specifies the mail servers responsible for receiving email for a domain. Includes a preference number (lower is more preferred).
    • NS (Name Server): Delegates a domain or subdomain to a set of authoritative name servers.
    • PTR (Pointer): Maps an IP address back to a hostname (used for reverse DNS).
    • TXT: Holds arbitrary text information, often used for verification purposes (e.g., SPF records for email validation).
    • SOA (Start of Authority): Contains administrative information about the zone, including the primary name server, contact email, serial number, and refresh timers.
  • DNS Query: The process of asking a DNS server for information about a specific name or IP address.

The dig Command (Domain Information Groper)

dig is the most powerful and flexible command-line tool for querying DNS servers. It's part of the bind-utils or dnsutils package. It provides detailed output, making it ideal for troubleshooting and in-depth analysis.

Syntax:

dig [@SERVER] [NAME] [TYPE] [OPTIONS]
  • @SERVER: (Optional) Specify the DNS server to query directly. If omitted, dig uses the servers listed in /etc/resolv.conf.
  • NAME: The hostname or IP address you want to look up.
  • TYPE: (Optional) The type of DNS record to query (e.g., A, AAAA, MX, NS, TXT, ANY). Defaults to A if omitted. For reverse lookups, use the special -x option instead of specifying a type.
  • OPTIONS: Various options to control query behavior and output format (e.g., +short for brief output).

Example 1: Simple A Record Lookup

$ dig www.google.com

; <<>> DiG 9.16.1-Ubuntu <<>> www.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54321
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;www.google.com.            IN  A

;; ANSWER SECTION:
www.google.com.     211 IN  A   142.250.187.196

;; Query time: 12 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Tue May 21 10:30:00 UTC 2024
;; MSG SIZE  rcvd: 61

Explanation:

  • Header: Shows query details, status (NOERROR means success), and flags (qr=query response, rd=recursion desired, ra=recursion available).
  • Question Section: Shows what was asked: an A record (IN means Internet class) for www.google.com.. Note the trailing dot, signifying the root of the DNS hierarchy (often added automatically by dig).
  • Answer Section: The core result. www.google.com. has an A record pointing to 142.250.187.196. The 211 is the TTL (Time To Live) in seconds, indicating how long this record can be cached.
  • Statistics: Query time, the server that responded (here 127.0.0.53, a local caching resolver often used by systemd-resolved), timestamp, and message size.

Example 2: Querying for MX Records

$ dig google.com MX

;; ANSWER SECTION:
google.com.     600 IN  MX  10 smtp.google.com.
google.com.     600 IN  MX  20 alt1.smtp.google.com.
google.com.     600 IN  MX  30 alt2.smtp.google.com.
google.com.     600 IN  MX  40 alt3.smtp.google.com.
google.com.     600 IN  MX  50 alt4.smtp.google.com.

This shows the mail servers for google.com, ordered by preference (10 is the most preferred).

Example 3: Reverse DNS Lookup

$ dig -x 8.8.8.8

;; ANSWER SECTION:
8.8.8.8.in-addr.arpa. 86400 IN  PTR dns.google.

The -x option formats the IP address into the special .in-addr.arpa domain used for IPv4 reverse lookups and queries for a PTR record. It shows that 8.8.8.8 maps back to the hostname dns.google..
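To see what -x is doing, you can build the same .in-addr.arpa name yourself: the IPv4 octets are reversed and the suffix appended. A bash sketch (`reverse_name` is a hypothetical helper, not a standard command):

```shell
# Build the reverse-lookup name that `dig -x` constructs internally.
reverse_name() {
  local IFS=.
  set -- $1                        # split the IPv4 address into its octets
  echo "$4.$3.$2.$1.in-addr.arpa"  # octets reversed, arpa suffix appended
}

reverse_name 8.8.8.8        # 8.8.8.8.in-addr.arpa
reverse_name 192.168.1.105  # 105.1.168.192.in-addr.arpa
```

Querying `dig "$(reverse_name 8.8.8.8)" PTR` is therefore equivalent to `dig -x 8.8.8.8`; the option is purely a convenience.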

Example 4: Querying a Specific DNS Server

$ dig @1.1.1.1 www.cloudflare.com AAAA

;; ANSWER SECTION:
www.cloudflare.com. 300 IN  AAAA    2606:4700::6810:85e5
www.cloudflare.com. 300 IN  AAAA    2606:4700::6810:84e5

This queries Cloudflare's public DNS server (1.1.1.1) directly for the IPv6 (AAAA) addresses of www.cloudflare.com.

Example 5: Short Output

$ dig +short www.google.com A
142.250.187.196
$ dig +short google.com MX
10 smtp.google.com.
20 alt1.smtp.google.com.
30 alt2.smtp.google.com.
40 alt3.smtp.google.com.
50 alt4.smtp.google.com.

The +short option provides just the answer, useful for scripting.
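Note that +short does not guarantee any ordering, so a script that wants the most-preferred mail server should sort by the preference number itself. A sketch against a captured sample (in practice you would pipe `dig +short DOMAIN MX` into the same pipeline):

```shell
# Sample `dig +short example MX` output, deliberately out of order.
mx='10 smtp.google.com.
40 alt3.smtp.google.com.
20 alt1.smtp.google.com.'

# Sort numerically by preference (lower wins) and take the top server name.
best=$(printf '%s\n' "$mx" | sort -n | head -n 1 | awk '{ print $2 }')
echo "Most preferred MX: $best"
```

This kind of one-liner is exactly where +short shines over dig's full diagnostic output.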

The host Command

The host command is a simpler alternative to dig, providing less detailed but often easier-to-read output for common DNS lookups. It's also often included in the bind-utils or dnsutils package.

Syntax:

host [-t TYPE] NAME [SERVER]
  • -t TYPE: (Optional) Specify the record type (e.g., a, aaaa, mx, ns, txt, ptr).
  • NAME: The hostname or IP address to look up.
  • SERVER: (Optional) The specific DNS server to query.

Example 1: Simple A Record Lookup

$ host www.google.com
www.google.com has address 142.250.187.196
www.google.com has IPv6 address 2a00:1450:4009:821::2004

By default, host often looks up both A and AAAA records.

Example 2: Querying for MX Records

$ host -t mx google.com
google.com mail is handled by 10 smtp.google.com.
google.com mail is handled by 20 alt1.smtp.google.com.
google.com mail is handled by 30 alt2.smtp.google.com.
google.com mail is handled by 40 alt3.smtp.google.com.
google.com mail is handled by 50 alt4.smtp.google.com.

Example 3: Reverse DNS Lookup

$ host 8.8.8.8
8.8.8.8.in-addr.arpa domain name pointer dns.google.

host automatically performs a reverse lookup if given an IP address.

Example 4: Querying a Specific Server

$ host www.cloudflare.com 1.1.1.1
Using domain server:
Name: 1.1.1.1
Address: 1.1.1.1#53
Aliases:

www.cloudflare.com has address 104.16.132.229
www.cloudflare.com has address 104.16.133.229
www.cloudflare.com has IPv6 address 2606:4700::6810:85e5
www.cloudflare.com has IPv6 address 2606:4700::6810:84e5

Legacy Command: nslookup

nslookup (name server lookup) is the oldest of the three tools. While functional for basic lookups, its output format is less clear than that of host, and it lacks the detailed diagnostic capabilities of dig. It's generally recommended to use dig or host instead, but you may still encounter nslookup on older systems or in older scripts.

Syntax (Non-interactive mode):

nslookup [-type=TYPE] NAME [SERVER]

Example 1: Simple Lookup

$ nslookup www.google.com
Server:     127.0.0.53
Address:    127.0.0.53#53

Non-authoritative answer:
Name:   www.google.com
Address: 142.250.187.196
Name:   www.google.com
Address: 2a00:1450:4009:811::2004

Note the "Non-authoritative answer" indication, meaning the response came from a cache or recursive resolver, not directly from the authoritative server for google.com. dig provides much more detail about this.

Example 2: MX Record Lookup

$ nslookup -type=mx google.com
Server:     127.0.0.53
Address:    127.0.0.53#53

Non-authoritative answer:
google.com  mail exchanger = 10 smtp.google.com.
google.com  mail exchanger = 20 alt1.smtp.google.com.
google.com  mail exchanger = 30 alt2.smtp.google.com.
google.com  mail exchanger = 40 alt3.smtp.google.com.
google.com  mail exchanger = 50 alt4.smtp.google.com.

Authoritative answers can be found from:

nslookup also has an interactive mode (if run without arguments) which can be confusing. Due to its limitations and less informative output, dig is strongly preferred for any serious DNS work.

Workshop: DNS Exploration

Goal: Use dig and host to explore various DNS record types and query different servers.

Assumptions: You have internet access and the dig and host commands installed (usually part of bind-utils or dnsutils package).

Steps:

  1. Find Your Default DNS Resolver:

    • The file /etc/resolv.conf usually contains the IP addresses of the DNS servers your system is configured to use. Sometimes, it might point to a local resolver like 127.0.0.53 if using systemd-resolved or dnsmasq.
    • Type cat /etc/resolv.conf and examine the nameserver lines. Note the IP address(es).
    # Command:
    cat /etc/resolv.conf
    
    # Example Observation:
    # nameserver 127.0.0.53
    # or perhaps:
    # nameserver 192.168.1.1
    # nameserver 8.8.8.8
    
  2. Basic A and AAAA Record Lookup:

    • Use host to find the IPv4 and IPv6 addresses for a common website, like wikipedia.org.
    • Use dig to specifically query for the A record of the same website. Compare the output detail.
    • Use dig again, but this time query for the AAAA record.
    # Command 1 (host):
    host wikipedia.org
    
    # Command 2 (dig A):
    dig wikipedia.org A
    
    # Command 3 (dig AAAA):
    dig wikipedia.org AAAA
    
    # Observation: Note the addresses returned and the difference in output verbosity between host and dig.
    
  3. Find Mail Servers (MX Records):

    • Use dig to find the Mail Exchanger (MX) records for a domain that likely handles its own email, for example, your university's domain or a well-known domain like gmail.com.
    • Use dig +short to get just the MX records and their preferences.
    # Command (replace example.edu with a real domain):
    dig example.edu MX
    
    # Command (short version):
    dig +short example.edu MX
    
    # Observation: Note the server names and the preference numbers (lower is preferred).
    
  4. Find Authoritative Name Servers (NS Records):

    • Every domain has designated Name Servers (NS) that are authoritative for its records. Use dig to find the NS records for a domain, like github.com.
    # Command:
    dig github.com NS
    
    # Observation: Note the names of the authoritative name servers listed in the ANSWER section. These servers hold the master records for the github.com domain.
    
  5. Perform a Reverse DNS Lookup:

    • First, find an IP address for a known server (e.g., using ping or host). Let's use one of Cloudflare's IPs obtained earlier: 104.16.132.229.
    • Use dig -x to perform a reverse lookup on this IP address.
    • Use host with the IP address to do the same.
    # Command (dig reverse):
    dig -x 104.16.132.229
    
    # Command (host reverse):
    host 104.16.132.229
    
    # Observation: See if the IP address maps back to a meaningful hostname. Not all IPs have PTR records configured.
    
  6. Query a Specific Public DNS Server:

    • Compare the results from your default resolver with those from a public resolver like Google's (8.8.8.8) or Cloudflare's (1.1.1.1). Sometimes results differ due to caching or geographic routing (Anycast).
    • Query Google's DNS server for the A record of www.google.com.
    • Query Cloudflare's DNS server for the A record of www.google.com.
    # Command (query Google DNS):
    dig @8.8.8.8 www.google.com A
    
    # Command (query Cloudflare DNS):
    dig @1.1.1.1 www.google.com A
    
    # Observation: Did you get the same IP address? Did the query time or TTL differ significantly? This can reveal differences in resolver caches or Anycast routing.
    
  7. Explore TXT Records (Optional):

    • TXT records are often used for domain verification or email policies like SPF. Try looking up TXT records for a domain like google.com.
    # Command:
    dig google.com TXT
    
    # Observation: Look for records related to SPF (e.g., "v=spf1 ...") or domain verification keys.
    

Conclusion: Through this workshop, you've practiced using dig and host to query different types of DNS records (A, AAAA, MX, NS, PTR, TXT), understand their purpose, and interact directly with specific DNS servers. This is fundamental for diagnosing website access issues, email delivery problems, and understanding how the internet's naming system functions.
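Step 1's resolver discovery is easy to automate: only the `nameserver` lines in /etc/resolv.conf matter, and comments and other directives (search, options) must be skipped. A sketch against a canned sample (on a live system you would read the real file with `awk '$1 == "nameserver" { print $2 }' /etc/resolv.conf`):

```shell
# Sample /etc/resolv.conf content, canned so the sketch is self-contained.
resolv='# Generated by NetworkManager
search example.edu
nameserver 192.168.1.1
nameserver 8.8.8.8'

# Print only the resolver IPs, ignoring comments and other directives.
printf '%s\n' "$resolv" | awk '$1 == "nameserver" { print $2 }'
```

A troubleshooting script could feed each extracted IP to `dig @IP` to check whether every configured resolver answers consistently.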

3. Understanding Network Routing

When your computer sends a packet to a destination outside its local network, it doesn't inherently know the entire path. Instead, it relies on a routing table. The routing table contains rules that tell the operating system where to send packets based on their destination IP address. Usually, for unknown destinations, packets are sent to a default gateway (a router on the local network), which then takes responsibility for forwarding the packet further. Understanding how to view and interpret the routing table and trace the path packets take across the internet is crucial for diagnosing connectivity issues that go beyond the local machine. The primary tools for this are ip route (part of iproute2), traceroute, and tracepath. We will also mention the legacy netstat -r command.

The IP Routing Table

The routing table is a database maintained by the kernel that lists known network destinations and specifies the next "hop" (usually a gateway IP address or a local interface) to send packets destined for those networks.

Viewing the Routing Table (ip route)

The ip route command (part of the modern iproute2 suite) is used to display and manipulate the IP routing table.

Syntax (Viewing):

ip route [show]
# Or for IPv6 routes:
ip -6 route [show]

Example Output and Explanation:

$ ip route
default via 192.168.1.1 dev eth0 proto dhcp metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.105 metric 100

Breakdown:

  • default via 192.168.1.1 dev eth0 proto dhcp metric 100: This is the default route.
    • default: Matches any destination IP address not matched by a more specific route. This is where traffic to the internet goes.
    • via 192.168.1.1: Packets matching this route should be sent to the gateway router at IP address 192.168.1.1.
    • dev eth0: Send the packets out through the eth0 interface to reach the gateway.
    • proto dhcp: This route was learned via the DHCP protocol. Other possibilities include static, kernel, redirect.
    • metric 100: A preference value for this route. If multiple routes exist to the same destination, the one with the lower metric is preferred.
  • 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown: A route for the network used by Docker containers.
    • 172.17.0.0/16: Matches any destination IP address within the 172.17.0.0 to 172.17.255.255 range.
    • dev docker0: Send packets directly out through the docker0 virtual interface. There's no via because it's a directly connected network from the host's perspective.
    • proto kernel: This route was automatically created by the kernel when the docker0 interface was configured.
    • scope link: This network is directly connected (link-local scope).
    • src 172.17.0.1: When sending packets to this network, use 172.17.0.1 as the source IP address.
    • linkdown: Indicates the docker0 interface is currently operationally down.
  • 192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.105 metric 100: A route for the local network segment.
    • 192.168.1.0/24: Matches any destination IP within the local 192.168.1.x network.
    • dev eth0: Send packets directly out through eth0.
    • proto kernel: Automatically created by the kernel based on the IP address assigned to eth0.
    • scope link: Directly connected network.
    • src 192.168.1.105: Use the interface's main IP as the source address.
    • metric 100: Metric associated with this route (often related to the interface metric).

How the Kernel Uses the Routing Table:

When the kernel needs to send an IP packet, it looks for the most specific matching route in the table.

  1. It compares the destination IP address against each route's network prefix (like 192.168.1.0/24).
  2. The route with the longest matching prefix (most specific match) is chosen. For example, a route for 192.168.1.50/32 would be more specific than 192.168.1.0/24.
  3. If no specific match is found, the default route is used.
  4. If no routes match (including no default route), the destination is considered unreachable.
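The longest-prefix-match logic can be illustrated with plain shell arithmetic. This is a conceptual demo only (no live routing table is consulted); ip_to_int and the route list below are illustrative, not real commands or real routes:

```shell
# Conceptual demo of longest-prefix-match route selection.
# ip_to_int converts a dotted quad to a 32-bit integer (illustrative helper).
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1<<24) | ($2<<16) | ($3<<8) | $4 )); }

dest=$(ip_to_int 192.168.1.50)
best_len=-1 best="unreachable"
while read -r net len hop; do
  # Build the netmask from the prefix length; /0 matches everything.
  mask=$(( len == 0 ? 0 : (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  net_int=$(ip_to_int "$net")
  # Keep the matching route with the longest prefix seen so far.
  if (( (dest & mask) == (net_int & mask) && len > best_len )); then
    best_len=$len best=$hop
  fi
done <<'EOF'
0.0.0.0 0 default-via-192.168.1.1
192.168.1.0 24 direct-eth0
172.17.0.0 16 direct-docker0
EOF
echo "selected: $best (prefix /$best_len)"
# -> selected: direct-eth0 (prefix /24)
```

Both the default route (/0) and the local route (/24) match 192.168.1.50, but the /24 wins because it is more specific — exactly the decision the kernel makes.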

Getting the Route for a Specific Destination (ip route get)

You can ask the kernel exactly which route it would use for a specific destination IP address.

Syntax:

ip route get <DESTINATION_IP>

Example:

$ ip route get 8.8.8.8
8.8.8.8 via 192.168.1.1 dev eth0 src 192.168.1.105 uid 1000
    cache

This output confirms that to reach 8.8.8.8, the system will use the default route: send packets via gateway 192.168.1.1, out through interface eth0, using the source IP 192.168.1.105. The cache indicates this result might be cached.

$ ip route get 192.168.1.50
192.168.1.50 dev eth0 src 192.168.1.105 uid 1000
    cache

To reach another host 192.168.1.50 on the local network, it uses the direct route via eth0.
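Because the output of ip route is line-oriented, it is easy to pull fields out of it in scripts. The sketch below parses a captured sample so the logic is reproducible; on a live system you would replace the variable with routes=$(ip route):

```shell
# Extract the default gateway and egress interface from `ip route` output.
# A captured sample is used here so the parsing is reproducible.
routes='default via 192.168.1.1 dev eth0 proto dhcp metric 100
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.105 metric 100'
gateway=$(printf '%s\n' "$routes" | awk '/^default/ {print $3}')
iface=$(printf '%s\n' "$routes" | awk '/^default/ {print $5}')
echo "gateway=$gateway iface=$iface"
# -> gateway=192.168.1.1 iface=eth0
```

This pattern (awk field extraction keyed on the `default` line) is a common building block for monitoring and provisioning scripts.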

Tracing the Path (traceroute and tracepath)

While ip route shows the first hop, traceroute and tracepath attempt to discover the sequence of routers (hops) a packet follows to reach a destination across the internet. They are invaluable for diagnosing where connectivity breaks down or where high latency is introduced.

Underlying Concept: TTL Expiry

Both tools work by sending packets towards the destination with incrementally increasing Time To Live (TTL) values in the IP header.

  1. The first packet is sent with TTL=1. The first router it reaches decrements the TTL to 0.
  2. When TTL reaches 0, the router discards the packet and (usually) sends an ICMP "Time Exceeded" message back to the source. This message contains the router's IP address.
  3. The tool records the IP address of the router that sent the ICMP message.
  4. The tool then sends a packet with TTL=2. It passes the first router (TTL becomes 1), but the second router decrements TTL to 0 and sends back an ICMP "Time Exceeded" message. The tool records the second router's IP.
  5. This continues, incrementing the TTL for each probe, revealing the IP address of each successive router along the path.
  6. The process stops when the probes reach the final destination. The destination host, instead of sending "Time Exceeded", usually sends a reply indicating the port is unreachable (for traceroute's default UDP probes) or an echo reply (if using ICMP probes), signaling the end of the trace.
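The steps above can be sketched as a small simulation. No packets are sent here; "path" is a hypothetical chain of routers, and a probe with TTL=n simply expires at the n-th hop, which then identifies itself:

```shell
# Conceptual simulation of the TTL-expiry mechanism (no real probes sent).
path=(192.168.1.1 10.0.0.1 192.0.2.5 142.250.187.196)  # hypothetical route
dest=142.250.187.196
for ttl in 1 2 3 4; do
  hop=${path[ttl-1]}                 # router where this probe's TTL reaches 0
  if [ "$hop" = "$dest" ]; then
    echo " $ttl  $hop  (destination reached)"
  else
    echo " $ttl  $hop  (ICMP Time Exceeded)"
  fi
done
```

Each iteration mirrors one round of traceroute: the expiring probe reveals one more router, and the trace ends when the destination itself answers.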

The traceroute Command

traceroute is the classic tool. By default, it sends UDP packets to high-numbered, likely unused ports. Some firewalls might block these UDP packets or the resulting ICMP responses.

Syntax:

traceroute [OPTIONS] DESTINATION

Common Options:

  • -n: Do not resolve IP addresses to hostnames (faster).
  • -I: Use ICMP Echo Request packets instead of UDP (like ping). May bypass firewalls that block UDP but allow ICMP.
  • -T: Use TCP SYN packets instead of UDP. Useful for probing hosts behind firewalls that block UDP and ICMP but allow TCP connections (e.g., web servers on port 80). Often requires root privileges.
  • -p PORT: Specify the destination port for UDP or TCP probes (the UDP default starts at 33434 and increments with each probe; the TCP default is 80).
  • -q NUM: Set the number of probe packets per hop (default 3). Sending multiple probes helps identify packet loss or variability at a specific hop.
  • -m MAX_TTL: Set the maximum number of hops (max TTL value) to probe (default 30).
  • -4 / -6: Force IPv4 or IPv6.

Example:

$ traceroute www.google.com
traceroute to www.google.com (142.250.187.196), 30 hops max, 60 byte packets
 1  _gateway (192.168.1.1)  1.500 ms  1.450 ms  1.600 ms
 2  isp-gw-router.example.net (10.0.0.1)  8.200 ms  8.150 ms  8.300 ms
 3  core-router1.city.isp.net (192.0.2.5)  12.100 ms  12.050 ms  *
 4  ix.peer-router.google.com (203.0.113.10)  15.500 ms  15.450 ms  15.600 ms
 5  google-backbone1.net (10.1.2.3)  15.600 ms  15.550 ms  15.700 ms
 6  lhr48s10-in-f4.1e100.net (142.250.187.196)  15.800 ms  15.750 ms  15.900 ms

Explanation:

  • Each line represents one hop (router) along the path.
  • 1: Hop number.
  • _gateway (192.168.1.1): The hostname (if resolved) and IP address of the router at this hop. _gateway is often resolved from the local /etc/hosts or DNS for the default gateway.
  • 1.500 ms 1.450 ms 1.600 ms: Round-trip times for the three probe packets sent to this hop. Variability here can indicate instability.
  • *: An asterisk indicates that no response was received for that probe packet within the timeout period. Occasional asterisks might indicate minor congestion or rate limiting, but consistent asterisks (* * *) for a hop suggest that router is configured not to send ICMP "Time Exceeded" messages, or a firewall is blocking the replies. If asterisks continue for all subsequent hops, it indicates a break in the path or filtering further downstream.
  • The trace completes when it reaches the final destination IP address.

The tracepath Command

tracepath is a simpler tool, often installed by default as part of iputils. It performs a similar function to traceroute but doesn't require root privileges and uses UDP packets by default (like traceroute). It also attempts to discover the Path MTU (Maximum Transmission Unit) along the way.

Syntax:

tracepath [OPTIONS] DESTINATION[/PORT]

Common Options:

  • -n: Do not resolve IP addresses to hostnames.
  • -l PKTLEN: Set the initial packet length (for MTU discovery).
  • -p PORT: Specify the initial destination UDP port (by default tracepath picks a high, likely unused port).
  • -4 / -6: Force IPv4 or IPv6.

Example:

$ tracepath www.google.com
 1?: [LOCALHOST]                      pmtu 1500
 1:  _gateway (192.168.1.1)                            1.600ms
 1:  _gateway (192.168.1.1)                            1.550ms
 2:  isp-gw-router.example.net (10.0.0.1)             8.250ms
 3:  core-router1.city.isp.net (192.0.2.5)             12.150ms
 4:  ix.peer-router.google.com (203.0.113.10)          15.550ms reached
     Resume: pmtu 1500 hops 4 back 64

Explanation:

  • Output is generally simpler than traceroute.
  • pmtu 1500: Reports the Path MTU discovered so far. If a router required fragmentation or sent an ICMP "Fragmentation Needed" message, tracepath would typically report a smaller PMTU.
  • Each probe gets its own output line, so a hop often appears on consecutive lines (one per probe), which can look like duplication.
  • reached: Indicates the destination responded.

tracepath is often sufficient for a quick path check, while traceroute offers more options (like ICMP/TCP probes) for advanced troubleshooting.

Legacy Command: netstat -r

The netstat command is a multi-purpose tool for displaying network connections, interface statistics, and the routing table. The -r option specifically shows the routing table. Like ifconfig, it's part of the older net-tools package and is largely superseded by ip route.

Syntax:

netstat -r [-n]

  • -n: Show numerical addresses instead of trying to resolve hostnames (recommended).

Example Output:

$ netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0

Explanation:

  • Destination: The destination network address. 0.0.0.0 represents the default route.
  • Gateway: The gateway address to use. 0.0.0.0 means no gateway is needed (directly connected).
  • Genmask: The subnet mask for the destination network.
  • Flags: Indicate properties of the route:
    • U: Route is Up.
    • G: Route uses a Gateway.
    • H: Route is to a specific Host (not a network).
  • Iface: The outgoing network interface.
  • MSS, Window, irtt: TCP-related parameters, often 0 unless specific route properties are set.

Comparing this to ip route output, netstat -rn is less detailed (e.g., doesn't show protocol, scope, or source address easily) and uses older terminology (Genmask vs. CIDR). Use ip route for modern systems.
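If you need to translate between the two notations in a script, a dotted-quad Genmask can be converted to a CIDR prefix length by counting its set bits. This is a hedged sketch; mask_to_prefix is a hypothetical helper name, and the bit-count shortcut assumes a contiguous (standard) netmask:

```shell
# Convert a netstat-style Genmask (dotted quad) into the CIDR prefix length
# that `ip route` displays, by counting set bits. Assumes a contiguous mask.
mask_to_prefix() {
  local IFS=. octet n=0
  for octet in $1; do
    while (( octet > 0 )); do
      (( n += octet & 1, octet >>= 1 ))   # count set bits in this octet
    done
  done
  echo "$n"
}
mask_to_prefix 255.255.255.0   # -> 24
mask_to_prefix 255.255.0.0     # -> 16
```

So 255.255.255.0 in the netstat output corresponds to the /24 shown by ip route.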

Workshop: Tracing Network Paths

Goal: Use ip route and traceroute (or tracepath) to understand your system's routing table and trace the path to different destinations.

Assumptions: You have internet access. traceroute might need to be installed (sudo apt install traceroute or sudo yum install traceroute).

Steps:

  1. Examine Your Routing Table:

    • Display your IPv4 routing table using ip route.
    • Identify the default route (destination default or 0.0.0.0). Note the gateway IP and the outgoing interface.
    • Identify the route for your local network (e.g., 192.168.1.0/24). Note that it likely doesn't have a gateway (via) specified, meaning it's directly connected.
    # Command:
    ip route
    
    # Observation:
    # Default route: via <GATEWAY_IP> dev <INTERFACE>
    # Local route: <LOCAL_NETWORK>/<PREFIX> dev <INTERFACE> src <YOUR_IP>
    
  2. Verify Route Selection:

    • Use ip route get to confirm which route is used for a local IP address (use an IP on your network, but not your own, e.g., your gateway's IP).
    • Use ip route get to confirm which route is used for a public internet IP address (e.g., 8.8.8.8).
    # Command (local IP - replace with your gateway IP):
    ip route get 192.168.1.1
    
    # Command (public IP):
    ip route get 8.8.8.8
    
    # Observation: Confirm the local IP uses the direct route and the public IP uses the default route via your gateway.
    
  3. Trace Path to Your Gateway:

    • Use traceroute (or tracepath) to trace the path to your default gateway's IP address. Use the -n option to prevent hostname lookups for speed.
    # Command (replace with your gateway IP):
    traceroute -n <GATEWAY_IP>
    # or
    # tracepath -n <GATEWAY_IP>
    
    # Observation: This trace should complete in a single hop, showing only the gateway's IP address with low latency.
    
  4. Trace Path to a Public Website:

    • Use traceroute -n to trace the path to a well-known website like www.wikipedia.org.
    # Command:
    traceroute -n www.wikipedia.org
    
    # Observation:
    # - Note the first hop - it should be your gateway IP.
    # - Observe the subsequent hops - these are routers within your ISP and the broader internet backbone.
    # - Look at the latency (ms) for each hop. Does it increase significantly at certain points?
    # - Are there any hops that consistently show '*' (timeouts)? This might indicate a router not responding to probes or potential packet loss.
    # - Note the total number of hops to reach the destination.
    
  5. Trace Using ICMP Probes:

    • Some networks prioritize or filter ICMP differently than UDP. Try the same trace using ICMP Echo packets with traceroute -In. (Note: tracepath doesn't typically have an ICMP option).
    # Command:
    traceroute -n -I www.wikipedia.org
    
    # Observation: Does the path differ from the UDP trace? Do the response times change? Do previously unresponsive hops (`* * *`) now respond, or vice-versa? This can help diagnose firewall issues.
    
  6. Trace Path to a Different Continent (Optional):

    • If curious, trace the path to a server located far away geographically, for example, a university or news site in another country (e.g., www.bbc.co.uk if you're in the US, or www.stanford.edu if you're in Europe/Asia).
    # Example Command:
    traceroute -n www.bbc.co.uk
    
    # Observation: Compare the number of hops and the typical latency values to the trace performed in Step 4. You should see significantly higher latency due to the physical distance the signals must travel. Look for router names that might indicate undersea cable landing points or major international exchange points.
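The latency increase over long distances can be sanity-checked with a back-of-the-envelope calculation: light in optical fibre travels at roughly two-thirds the speed of light (about 200,000 km/s), so distance alone sets a hard floor on round-trip time. Both the speed figure and the 8000 km distance below are rough illustrative assumptions, not measured values:

```shell
# Physical lower bound on RTT for a given fibre distance.
# ~200,000 km/s in fibre is 200 km per millisecond.
distance_km=8000                              # illustrative transatlantic-scale distance
rtt_floor_ms=$(( 2 * distance_km / 200 ))     # out and back, in milliseconds
echo "theoretical minimum RTT for ${distance_km} km: ${rtt_floor_ms} ms"
# -> theoretical minimum RTT for 8000 km: 80 ms
```

If your measured RTT to a distant host is close to this floor, the path is about as direct as physics allows; a much larger value suggests indirect routing or queueing delays.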
    

Conclusion: This workshop demonstrated how to read your local routing table using ip route and how to use traceroute / tracepath to map the sequence of routers packets traverse across networks. You practiced interpreting the output, including latency and potential points of failure or filtering. This is a fundamental skill for diagnosing slow connections or reachability problems beyond your local network.

4. Examining Network Connections and Listening Ports

Understanding which network services are running on your machine (or a remote machine) and who is connected to them is essential for security auditing, troubleshooting service availability, and general network awareness. Linux provides powerful tools to inspect listening sockets (ports waiting for incoming connections) and established network connections. The modern standard is ss, which has largely replaced the older, more versatile netstat command for this purpose. We will also briefly introduce nmap, a powerful network scanner often used to probe ports on remote hosts.

Sockets, Ports, TCP, and UDP

Before using the tools, let's clarify some terms:

  • Socket: An endpoint for network communication. In the context of TCP/IP, a socket is typically defined by a combination of: Protocol (TCP or UDP), Local IP Address, Local Port Number, Remote IP Address, and Remote Port Number.
  • Port Number: A 16-bit number (0-65535) used to differentiate between multiple services running on the same host. Specific services are conventionally assigned "well-known" ports (0-1023, e.g., 80 for HTTP, 443 for HTTPS, 22 for SSH). Ports 1024-49151 are "registered" ports, and 49152-65535 are "dynamic" or "private" ports often used for outgoing connections.
  • TCP (Transmission Control Protocol): A connection-oriented protocol. It establishes a reliable, ordered, and error-checked stream of data between two applications. Requires a "three-way handshake" to establish a connection. Used for HTTP, HTTPS, FTP, SSH, SMTP, etc.
  • UDP (User Datagram Protocol): A connectionless protocol. It sends packets ("datagrams") without establishing a connection first. Faster and lower overhead than TCP, but offers no guarantee of delivery, order, or error correction. Used for DNS, DHCP, NTP, VoIP, online gaming, etc.
  • Listening Socket: A socket that is waiting for incoming connection requests (TCP) or incoming datagrams (UDP) on a specific local IP address and port.
  • Established Connection: A TCP connection that has successfully completed the three-way handshake and is actively exchanging data or ready to do so.
  • State: Sockets, particularly TCP sockets, go through various states (e.g., LISTEN, SYN-SENT, SYN-RECV, ESTABLISHED, FIN-WAIT1, FIN-WAIT2, CLOSE-WAIT, LAST-ACK, TIME-WAIT, CLOSED). Understanding these states can be helpful for advanced troubleshooting.
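A quick way to get a feel for these states is to tally them, similar to what ss -s summarizes. The sketch below parses sample ss -tan-style output so it is reproducible; the lines are illustrative, not from a live system:

```shell
# Tally TCP socket states from `ss -tan`-style output (sample data).
sample='State     Recv-Q Send-Q Local Address:Port   Peer Address:Port
LISTEN    0      128    0.0.0.0:22           0.0.0.0:*
ESTAB     0      0      192.168.1.105:22     192.168.1.50:54321
ESTAB     0      0      192.168.1.105:49876  93.184.216.34:443
TIME-WAIT 0      0      192.168.1.105:50012  93.184.216.34:443'
printf '%s\n' "$sample" \
  | awk 'NR > 1 {count[$1]++} END {for (s in count) print s, count[s]}' \
  | sort
# On a live system: ss -tan | awk 'NR>1 {c[$1]++} END {for (s in c) print s, c[s]}'
```

A sudden pile-up of CLOSE-WAIT or TIME-WAIT sockets in such a tally is often the first hint of an application that isn't closing connections properly.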

The ss Command (Socket Statistics)

ss is the modern tool for dumping socket statistics. It gets its information directly from kernel space (netlink subsystem), making it significantly faster than the traditional netstat command, especially on systems with many connections. It's part of the iproute2 package.

Syntax:

ss [OPTIONS] [FILTER]

Common Options:

  • -t: Display TCP sockets.
  • -u: Display UDP sockets.
  • -l: Display only listening sockets.
  • -a: Display all sockets (both listening and non-listening).
  • -n: Do not resolve service names (show port numbers instead). Faster.
  • -p: Show the process (PID/program name) using the socket. Often requires root privileges.
  • -e: Show extended socket information (like UID, inode, SELinux context).
  • -o: Show timer information (e.g., keepalive timers for TCP).
  • -i: Show internal TCP information (congestion control, RTT, etc.).
  • -4 / -6: Display only IPv4 or IPv6 sockets.
  • -s: Print summary statistics.

FILTER Syntax:

ss allows powerful filtering based on state, address, port, etc. The syntax is [ state STATE-FILTER ] [ EXPRESSION ].

  • STATE-FILTER: Keywords like listening, established, closed, synchronized, etc.
  • EXPRESSION: dport = :PORT, sport = :PORT, dst IP[/MASK], src IP[/MASK].

Example 1: List All Listening TCP Sockets (Numerical)

$ ss -ltn
State      Recv-Q Send-Q Local Address:Port     Peer Address:Port Process
LISTEN     0      4096     127.0.0.53%lo:53        0.0.0.0:*
LISTEN     0      128          0.0.0.0:22        0.0.0.0:*
LISTEN     0      128             [::]:22           [::]:*

Explanation:

  • State: Socket state (LISTEN).
  • Recv-Q / Send-Q: Receive and Send queue sizes (bytes waiting). For listening sockets, Send-Q often indicates the maximum backlog of allowed incoming connections waiting to be accepted.
  • Local Address:Port: The local IP address and port the socket is bound to.
    • 127.0.0.53%lo:53: Listening on the loopback interface (%lo), address 127.0.0.53 (common for local DNS resolvers), port 53 (DNS).
    • 0.0.0.0:22: Listening on all available IPv4 interfaces (0.0.0.0), port 22 (SSH).
    • [::]:22: Listening on all available IPv6 interfaces ([::]), port 22 (SSH).
  • Peer Address:Port: For listening sockets, this is usually 0.0.0.0:* or [::]:*, indicating it will accept connections from any remote address/port.
  • Process: Not shown here because we didn't use -p.

Example 2: List Listening TCP/UDP Sockets with Process Info (Requires Root)

$ sudo ss -ltupn
State    Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
LISTEN   0      4096   127.0.0.53%lo:53     0.0.0.0:*      users:(("systemd-resolve",pid=678,fd=13))
UNCONN   0      0        127.0.0.1:323    0.0.0.0:*      users:(("chronyd",pid=789,fd=5))
LISTEN   0      128      0.0.0.0:22     0.0.0.0:*      users:(("sshd",pid=910,fd=3))
UNCONN   0      0          0.0.0.0:68     0.0.0.0:*      users:(("dhclient",pid=850,fd=6))
LISTEN   0      128         [::]:22        [::]:*      users:(("sshd",pid=910,fd=4))
UNCONN   0      0            [::1]:323       [::]:*      users:(("chronyd",pid=789,fd=6))

Explanation:

  • We added -u to include UDP sockets and -p to show processes.
  • State UNCONN: UDP sockets are connectionless, so they are shown as UNCONN (unconnected) when listening.
  • users:(("systemd-resolve",pid=678,fd=13)): Shows the program name (systemd-resolve), Process ID (678), and File Descriptor (13) associated with the socket. This tells us systemd-resolved is listening on port 53, sshd on port 22, chronyd (NTP client) on port 323, and dhclient on port 68 (DHCP client).

Example 3: List Established TCP Connections

$ ss -tn
State      Recv-Q Send-Q  Local Address:Port     Peer Address:Port Process
ESTAB      0      0       192.168.1.105:22     192.168.1.50:54321
ESTAB      0      0       192.168.1.105:49876    93.184.216.34:443

Explanation:

  • Shows established TCP connections (State ESTAB).
  • The first line shows an incoming SSH connection from 192.168.1.50 (port 54321) to our machine's port 22.
  • The second line shows an outgoing HTTPS connection from our machine (using a dynamic source port 49876) to the remote server 93.184.216.34 (likely example.com) on port 443.

Example 4: Filter Established Connections to a Specific Port

$ ss -tn state established '( dport = :443 or sport = :443 )'
State      Recv-Q Send-Q  Local Address:Port     Peer Address:Port Process
ESTAB      0      0       192.168.1.105:49876    93.184.216.34:443

This filters the established TCP connections to show only those where either the destination port (dport) or source port (sport) is 443 (HTTPS).

Legacy Command: netstat

netstat (network statistics) was the traditional tool for these tasks. It's part of the net-tools package. While still widely available, ss is preferred due to its speed and more modern interface. netstat combines several functions invoked by different options.

Syntax for Connections/Listening Ports:

netstat [-t] [-u] [-l] [-n] [-p] [-a] [-e]

Options are very similar to ss:

  • -t: TCP
  • -u: UDP
  • -l: Listening
  • -n: Numerical addresses/ports
  • -p: Show PID/Program name (often requires root)
  • -a: All sockets
  • -e: Extended information

Example 1: List All Listening TCP Sockets (Numerical, with PID)

$ sudo netstat -ltnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      678/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      910/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      910/sshd

Output is similar to ss -ltnp, but the formatting differs. tcp6 indicates IPv6.

Example 2: List Established TCP Connections (Numerical)

$ netstat -tn
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 192.168.1.105:22        192.168.1.50:54321      ESTABLISHED
tcp        0      0 192.168.1.105:49876     93.184.216.34:443       ESTABLISHED

Again, similar to ss -tn, but netstat often separates listening sockets ("servers") from established connections ("w/o servers") in its output headings.

While functional, netstat parses /proc/net/* files, which can be slow on busy systems, whereas ss uses the more efficient netlink interface.
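To see what netstat has to parse, consider how /proc/net/tcp encodes addresses: each IPv4 address is 8 hex digits in host byte order (little-endian on x86), followed by the port in big-endian hex. The entry below is a constructed sample, assuming little-endian layout:

```shell
# Decode a sample /proc/net/tcp address field the way netstat must.
hexaddr='6901A8C0:0016'                  # sample entry for 192.168.1.105:22
ip_hex=${hexaddr%:*} port_hex=${hexaddr#*:}
# Reverse the byte order of the IP half, then convert everything from hex.
decoded=$(printf '%d.%d.%d.%d:%d' \
  "0x${ip_hex:6:2}" "0x${ip_hex:4:2}" "0x${ip_hex:2:2}" "0x${ip_hex:0:2}" \
  "0x$port_hex")
echo "$decoded"
# -> 192.168.1.105:22
```

Doing this text decoding for every socket on every invocation is part of why netstat is slower than ss's binary netlink queries.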

Introduction to nmap (Network Mapper)

While ss and netstat inspect sockets on the local machine, nmap is designed to scan remote hosts (or the local host) to discover open ports, identify services, and even determine the operating system. It's a powerful and complex tool used for network exploration and security auditing. Using nmap against networks or hosts you don't have permission to scan is unethical and likely illegal. We will only use it against localhost here.

Syntax (Basic Port Scan):

nmap [SCAN TYPE] [OPTIONS] TARGET

Common Scan Types:

  • -sT: TCP Connect Scan (completes the 3-way handshake, easily detectable).
  • -sS: TCP SYN Scan ("half-open" scan, sends SYN, looks for SYN-ACK, doesn't complete connection. Default as root, stealthier).
  • -sU: UDP Scan (sends UDP packets, slower and less reliable).
  • -sV: Version Detection (tries to determine service/version on open ports).
  • -O: OS Detection (tries to guess the remote OS).

Common Options:

  • -p PORT-SPEC: Specify ports to scan (e.g., -p 22, -p 1-1024, -p 22,80,443, -p- for all 65535 ports).
  • -n: No DNS resolution.
  • -v: Verbose output.
  • -T<0-5>: Timing template (-T3 "normal" is the default; -T4/-T5 are faster and more aggressive, -T0 to -T2 are slower and stealthier).

Example: Scan Common TCP Ports on Localhost

$ nmap localhost
Starting Nmap 7.80 ( https://nmap.org ) at 2024-05-21 11:00 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00010s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 998 closed ports
PORT   STATE SERVICE
22/tcp open  ssh
53/tcp open  domain

Nmap done: 1 IP address (1 host up) scanned in 0.05 seconds

Explanation:

  • nmap scanned the default list of common TCP ports on localhost (127.0.0.1).
  • It found port 22 (SSH) and port 53 (Domain/DNS) are open.
  • Other scanned ports were reported as closed. Scanned ports that give no response at all are reported as filtered (likely blocked by a firewall); ports outside the scanned set are simply not mentioned.
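In practice you often want just the list of open ports, for example to diff against ss -ltn. The sketch below parses a sample report mirroring the output above (not a live scan), since real scan results vary by host:

```shell
# Extract just the open TCP port numbers from nmap-style report text.
report='PORT   STATE  SERVICE
22/tcp open   ssh
53/tcp open   domain
80/tcp closed http'
printf '%s\n' "$report" \
  | awk '$2 == "open" {split($1, a, "/"); print a[1]}'
# On a live system: nmap -n localhost | awk '$2 == "open" {split($1,a,"/"); print a[1]}'
```

Comparing this list against the locally observed listeners is a simple self-audit: any port open to nmap but absent from sudo ss -ltnp deserves investigation.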

Example: Scan Specific UDP Ports on Localhost (Requires Root)

$ sudo nmap -sU -p 68,123,161 localhost
Starting Nmap 7.80 ( https://nmap.org ) at 2024-05-21 11:05 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00015s latency).
Other addresses for localhost (not scanned): ::1
PORT    STATE         SERVICE
68/udp  open|filtered dhcpc
123/udp open          ntp
161/udp closed        snmp

Nmap done: 1 IP address (1 host up) scanned in 2.08 seconds

Explanation:

  • -sU specifies UDP scan. UDP scanning is less reliable.
  • Port 68 (DHCP client) is open|filtered - nmap received no response, which could mean it's open or firewalled.
  • Port 123 (NTP) is open - it likely received a response appropriate for NTP.
  • Port 161 (SNMP) is closed - it likely received an ICMP "Port Unreachable" message.

nmap is a vast topic. This is just a tiny glimpse. Its primary use is network discovery and security auditing from an external perspective.

Workshop: Investigating Local Services and Connections

Goal: Use ss to identify listening services and examine active connections on your own machine. Optionally use nmap to scan localhost.

Assumptions: You are running a standard Linux desktop or server. You have ss (usually default) and potentially nmap installed. Some commands require sudo.

Steps:

  1. List Listening TCP Ports:

    • Use ss to list all listening TCP sockets. Use numerical output (-n) and also show the process (-p, requires sudo).
    • Identify common services: Do you see port 22 (SSH)? Port 80/443 (HTTP/HTTPS)? Port 631 (IPP/CUPS printing)? Port 53 (DNS)? Port 25 (SMTP)?
    • Note which IP address each service is listening on (0.0.0.0 or :: means all interfaces, 127.0.0.1 or ::1 means loopback only).
    # Command:
    sudo ss -ltnp
    
    # Observation: List the ports you see open and the corresponding service/PID. Are they expected?
    # Example: Port 22/sshd, Port 53/systemd-resolve, Port 631/cupsd
    
  2. List Listening UDP Ports:

    • Use ss similarly to list listening UDP sockets, including the process information (sudo ss -lunp).
    • Look for common UDP services: Port 53 (DNS), Port 68 (DHCP client), Port 123 (NTP client - chronyd/ntpd), Port 5353 (mDNS/Avahi).
    # Command:
    sudo ss -lunp
    
    # Observation: List the UDP ports and services. Are services like DHCP client or NTP client active?
    # Example: Port 68/dhclient, Port 123/chronyd, Port 5353/avahi-daemon
    
  3. Generate Network Traffic:

    • Open a web browser and navigate to a website (e.g., https://www.linux.org). This will create an established HTTPS connection on TCP port 443.
    • Alternatively, if you have another machine on your network, establish an SSH connection to your machine (ssh your_username@your_ip).
  4. Examine Established Connections:

    • While the browser or SSH connection is active, use ss to view established TCP connections (ss -tnp, use sudo for process info).
    • Look for the connection related to the activity in Step 3.
    • For the web browser: You should see a connection from your local IP (with a high random port) to the website's IP address on destination port 443. Note the process if visible.
    • For the SSH connection: You should see a connection from the remote machine's IP (random port) to your machine's IP on destination port 22. Note the sshd process.
    # Command:
    sudo ss -tnp
    
    # Observation: Find the connection corresponding to your browser traffic (dport 443) or incoming SSH (dport 22). Note the local/peer addresses and ports, and the associated process.
    
  5. Filter Connections:

    • Practice filtering the output. For example, show only established connections involving port 22 (SSH).
    # Command:
    sudo ss -tnp state established '( dport = :22 or sport = :22 )'
    
    # Observation: Verify that only SSH-related established connections are displayed.
    
  6. Check Summary Statistics (Optional):

    • Use ss -s to get a quick overview of the total number of sockets in various states.
    # Command:
    ss -s
    
    # Observation: See the totals for TCP (established, orphaned, syn-recv, time-wait, etc.) and UDP sockets.
    
  7. Scan Localhost with nmap (Optional):

    • Use nmap to perform a default TCP scan against your own machine (localhost).
    • Compare the list of open ports reported by nmap with the TCP listening ports identified by ss in Step 1. Do they match?
    # Command:
    nmap localhost
    
    # Observation: Does nmap's list of open TCP ports correspond to the output of `sudo ss -ltnp`?
    

Conclusion: In this workshop, you used ss extensively to inspect which services are listening for network connections (TCP and UDP) on your machine and which processes are responsible for them. You also examined active, established connections. This ability is critical for understanding what your system is doing on the network, troubleshooting services that aren't responding, and performing basic security checks. You also got a brief introduction to how nmap can be used to probe for open ports.

5. Capturing and Analyzing Network Traffic

Sometimes, understanding network behavior requires looking beyond connection states and routing tables to examine the actual data packets being sent and received. Packet capture and analysis involve intercepting network traffic on an interface and decoding the protocols within each packet. This is invaluable for deep troubleshooting of application behavior, network protocol issues, performance problems, and security investigations. The standard command-line tool for packet capture in Linux is tcpdump; the Wireshark project provides tshark, a command-line analyzer that shares Wireshark's protocol dissectors and offers much deeper decoding.

Network Tapping and Libpcap

To capture traffic, these tools typically need access to the raw network interface. On Linux, this is usually done via the AF_PACKET socket interface. The underlying library that most packet capture tools (including tcpdump, tshark, and Wireshark) use to interact with the operating system's capture mechanism is libpcap. Capturing traffic usually requires root privileges because it involves listening promiscuously (capturing all traffic on the network segment, not just traffic addressed to the host) or accessing privileged network interfaces.

Important Note on Promiscuous Mode: When a network interface is put into promiscuous mode, it accepts all packets it sees on the physical medium (within its collision domain or VLAN), regardless of the destination MAC address. This is essential for sniffing traffic between other hosts on the same network segment (e.g., on a traditional hub or a monitored switch port). However, on modern switched networks, you typically only see broadcast traffic and traffic specifically addressed to your host's MAC address unless techniques like ARP spoofing or switch port mirroring (SPAN) are used. Capturing in non-promiscuous mode only shows traffic sent to/from your host or broadcast/multicast traffic.
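Whether an interface is currently in promiscuous mode can be read from its flags. A small sketch (the helper function is illustrative and reads `ip link show` output on stdin so it is easy to test; eth0 is a placeholder interface name):

```shell
# is_promisc: report whether an interface's flags include PROMISC.
is_promisc() {
    grep -q 'PROMISC' && echo "promiscuous" || echo "normal"
}

# Typical usage:
#   ip link show eth0 | is_promisc
# Note: tcpdump enables promiscuous mode by default; pass -p
# (--no-promiscuous-mode) to capture without it.
```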

The tcpdump Command

tcpdump is the quintessential command-line packet sniffer. It's lightweight, powerful, and ubiquitous on Unix-like systems. Its core function is to capture packets matching certain criteria and display a summary of their headers or save the full packet data to a file for later analysis.

Syntax:

tcpdump [OPTIONS] [FILTER EXPRESSION]

Common Options:

  • -i INTERFACE: Specify the network interface to listen on (e.g., eth0, any to capture on all). If not specified, tcpdump picks the first suitable (up, non-loopback) interface.
  • -n: Don't convert addresses (IPs) to names.
  • -nn: Don't convert addresses or port numbers to names/services. (Often preferred for clarity and speed).
  • -X: Print packet data in both hex and ASCII.
  • -XX: Same as -X but includes the Ethernet header.
  • -v, -vv, -vvv: Increase verbosity, showing more protocol detail.
  • -c COUNT: Exit after capturing COUNT packets.
  • -s SNAPLEN: Set the snapshot length (bytes to capture per packet). -s 0 means capture the full packet. Modern versions default to 262144 bytes, which captures full packets in practice; older versions defaulted to a small snaplen (e.g., 68 bytes) that truncated payloads, so set -s 0 explicitly if full payloads matter.
  • -w FILENAME.pcap: Write the raw captured packets to a file in pcap format (standard format readable by Wireshark, tshark, etc.) instead of printing to the screen. Essential for capturing large amounts of data or for later analysis.
  • -r FILENAME.pcap: Read packets from a previously saved pcap file instead of a live interface.

Filter Expression (Berkeley Packet Filter - BPF):

tcpdump uses the powerful BPF syntax to select which packets to capture. This is crucial to avoid being overwhelmed by irrelevant traffic. Filters are processed efficiently in the kernel.

  • Type Qualifiers: host, net, port
    • host 192.168.1.1: Packets with source or destination IP 192.168.1.1.
    • net 192.168.1.0/24: Packets with source or destination network 192.168.1.0/24.
    • port 80: Packets with source or destination TCP/UDP port 80.
  • Direction Qualifiers: src, dst
    • src host 192.168.1.105: Packets originating from IP 192.168.1.105.
    • dst port 53: Packets destined for TCP/UDP port 53.
  • Protocol Qualifiers: tcp, udp, icmp, icmp6, arp, ip, ip6, ether
    • icmp: Capture only ICMP packets.
    • tcp port 22: Capture only TCP packets with source or destination port 22.
  • Logical Operators: and (or &&), or (or ||), not (or !). Parentheses () can be used for grouping (often need to be escaped or quoted in shells: \( ... \) or '...').
    • host 1.1.1.1 and tcp port 443: Capture TCP traffic to/from host 1.1.1.1 on port 443.
    • not host 192.168.1.1: Capture traffic not involving host 192.168.1.1 (e.g., your default gateway).
    • port 80 or port 443: Capture HTTP or HTTPS traffic.
    • src host 10.0.0.5 and \( dst port 80 or dst port 443 \): Capture packets from 10.0.0.5 going to either port 80 or 443.
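Because the shell interprets parentheses, it is safest to quote the whole expression (or keep it in a variable, which also helps in scripts). A quick sketch reusing the last filter above; the addresses and ports are illustrative:

```shell
# Single quotes keep the shell from touching the parentheses, so no
# backslash-escaping is needed.
filter='src host 10.0.0.5 and ( dst port 80 or dst port 443 )'
echo "$filter"

# Typical usage:
#   sudo tcpdump -i eth0 -nn "$filter"
```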

Example 1: Capture ICMP Traffic (like Ping)

# Run this command in one terminal
sudo tcpdump -i eth0 -nn icmp

# In another terminal, ping a host (e.g., 8.8.8.8)
ping -c 3 8.8.8.8

Example tcpdump Output:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:15:01.123456 IP 192.168.1.105 > 8.8.8.8: ICMP echo request, id 1234, seq 1, length 64
11:15:01.138789 IP 8.8.8.8 > 192.168.1.105: ICMP echo reply, id 1234, seq 1, length 64
11:15:02.124567 IP 192.168.1.105 > 8.8.8.8: ICMP echo request, id 1234, seq 2, length 64
11:15:02.140123 IP 8.8.8.8 > 192.168.1.105: ICMP echo reply, id 1234, seq 2, length 64
11:15:03.125678 IP 192.168.1.105 > 8.8.8.8: ICMP echo request, id 1234, seq 3, length 64
11:15:03.141456 IP 8.8.8.8 > 192.168.1.105: ICMP echo reply, id 1234, seq 3, length 64
6 packets captured
...

Explanation:

  • -i eth0: Listen on interface eth0.
  • -nn: Show numerical IPs and ports.
  • icmp: Filter expression to capture only ICMP packets.
  • The output shows the timestamp, protocol (IP), source IP (>), destination IP (:), protocol details (ICMP echo request/reply), ICMP identifiers (id, seq), and packet length.

Example 2: Capture DNS Traffic (Port 53)

sudo tcpdump -i eth0 -nn port 53
# In another terminal, use dig or host (e.g., dig www.example.com)

This will show the UDP (and sometimes TCP) packets exchanged with DNS servers on port 53.

Example 3: Capture Traffic to/from a Specific Host and Save to File

sudo tcpdump -i eth0 -nn -w web_traffic.pcap host www.example.com
# In another terminal, browse to http://www.example.com or https://www.example.com
# Press Ctrl+C in the tcpdump terminal when done.

This captures all IP traffic (TCP, UDP, ICMP, etc.) between your host and www.example.com and saves it to web_traffic.pcap. The file can then be analyzed with tcpdump -r or Wireshark/tshark.

Example 4: Display Packet Content (Hex/ASCII)

sudo tcpdump -i eth0 -nn -X port 80 and host example.com
# Browse to http://example.com (Note: Use HTTP, not HTTPS, to see clear text)

This will capture HTTP traffic and display the content of the packets, allowing you to see HTTP headers and potentially data (if not encrypted). -X is useful for debugging protocol interactions.

The tshark Command

tshark is the command-line sibling of the popular Wireshark graphical network analyzer. It uses the same dissection engines as Wireshark, meaning it understands a vast number of protocols and can provide much more detailed analysis than tcpdump's default output. It can read/write pcap files and use libpcap for live captures.

Syntax:

tshark [OPTIONS] [FILTER EXPRESSION]

Common Options (Many overlap with tcpdump and Wireshark):

  • -i INTERFACE: Specify interface.
  • -n: Disable network object name resolution (like tcpdump -n).
  • -V: Detailed view, showing full dissection of each packet.
  • -c COUNT: Stop after COUNT packets.
  • -s SNAPLEN: Snapshot length.
  • -w FILENAME.pcapng: Write raw packets to a file (default format is pcapng, which is more advanced than pcap).
  • -r FILENAME.pcap[ng]: Read packets from a file.
  • -f "BPF FILTER": Specify a capture filter (BPF syntax, same as tcpdump) to limit what packets are captured. Applied early.
  • -Y "DISPLAY FILTER": Specify a display filter (Wireshark syntax) to limit what packets are displayed after capture/reading. Applied later, more flexible protocol-level filtering.
  • -T fields -e FIELD1 -e FIELD2 ...: Output specific fields only, useful for scripting/data extraction.
  • -z STATS: Compute various statistics (e.g., conversations, endpoints, protocol hierarchies).

Wireshark Display Filters (-Y): These are much richer than BPF capture filters and allow filtering based on specific protocol fields dissected by Wireshark/tshark.

  • ip.addr == 192.168.1.1
  • tcp.port == 80
  • dns.qry.name == "www.example.com"
  • http.request.method == "GET"
  • tcp.flags.syn == 1 and tcp.flags.ack == 0 (TCP SYN packet)
  • !(arp or icmp or dns) (Exclude common background protocols)

Example 1: Live Capture with Detailed Output (like Wireshark summary)

sudo tshark -i eth0 -f "port 80 or port 443"
# Browse web pages

tshark will print a one-line summary for each captured HTTP/HTTPS packet, similar to the default Wireshark packet list pane.

Example 2: Reading a File and Showing Full Packet Details

# Assuming web_traffic.pcap was created earlier with tcpdump
tshark -r web_traffic.pcap -V -c 1

This reads the first packet (-c 1) from web_traffic.pcap and displays its full dissection (-V), showing details for Ethernet, IP, TCP/UDP, and any higher-level protocols it recognizes.

Example 3: Reading a File and Applying a Display Filter

tshark -r web_traffic.pcap -Y "http.request"

This reads the web_traffic.pcap file but only displays packets identified as HTTP requests.

Example 4: Extracting Specific Fields

# Capture DNS queries and extract time, source IP, destination IP, and query name
sudo tshark -i eth0 -f "udp port 53" -Y "dns.flags.response == 0" \
    -T fields -e frame.time -e ip.src -e ip.dst -e dns.qry.name

This captures live DNS queries (UDP port 53, not responses), and uses -T fields to output only the specified fields (-e ...), making it easy to parse or import into spreadsheets/databases.

tshark is incredibly versatile for deep analysis, statistical reporting (-z option), and extracting specific information from captures, bridging the gap between basic tcpdump output and the full Wireshark GUI.
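Field output like the DNS capture in Example 4 is easy to post-process with standard text tools. A sketch (the helper name is made up; the column order matches the -e flags shown above):

```shell
# top_queries: count occurrences of the 4th tab-separated field
# (the dns.qry.name column in Example 4) and sort by frequency.
top_queries() {
    awk -F'\t' '{count[$4]++} END {for (q in count) print count[q], q}' | sort -rn
}

# Typical usage, piping live tshark field output through it:
#   sudo tshark -i eth0 -f "udp port 53" -Y "dns.flags.response == 0" \
#       -T fields -e frame.time -e ip.src -e ip.dst -e dns.qry.name | top_queries
```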

Workshop: Capturing and Inspecting Traffic

Goal: Use tcpdump and tshark to capture live network traffic, save it to a file, and inspect specific protocol interactions.

Assumptions: You have tcpdump installed (usually default). tshark might need installation (sudo apt install tshark or sudo yum install wireshark-cli). Running captures requires sudo. Be mindful not to capture sensitive information if on a shared network.

Steps:

  1. Capture ping Traffic:

    • In one terminal, start tcpdump to capture ICMP traffic on your main network interface (eth0 or similar). Use -nn for numerical output.
    • sudo tcpdump -i <INTERFACE> -nn icmp
    • In a second terminal, ping a known host (e.g., ping -c 5 8.8.8.8).
    • Observe the tcpdump output. You should see pairs of ICMP echo requests from your IP to 8.8.8.8 and echo replies coming back. Press Ctrl+C to stop tcpdump.
  2. Capture DNS Lookup:

    • In terminal 1, start tcpdump to capture DNS traffic (port 53).
    • sudo tcpdump -i <INTERFACE> -nn port 53
    • In terminal 2, use dig or host to look up a domain name you haven't visited recently (to avoid hitting a local cache).
    • dig www.debian.org
    • Observe the tcpdump output. You should see UDP packets from your machine to your DNS resolver (check /etc/resolv.conf) and replies coming back. Stop tcpdump.
  3. Capture HTTP Traffic and Save to File:

    • We need a plain HTTP site for this (most sites redirect to HTTPS). http://info.cern.ch (the world's first website) sometimes works, or set up a simple local web server if possible. For demonstration, we'll capture traffic to example.com on port 80, even if it redirects.
    • In terminal 1, start tcpdump to capture traffic to/from example.com on port 80, saving it to a file named http_capture.pcap. Use -s 0 to capture full packets.
    • sudo tcpdump -i <INTERFACE> -nn -s 0 -w http_capture.pcap host example.com and port 80
    • In terminal 2, use curl or a browser to access http://example.com.
    • curl http://example.com
    • Stop tcpdump in terminal 1 (Ctrl+C).
  4. Inspect Captured File with tcpdump:

    • Use tcpdump with the -r option to read the file. Use -X to see packet content.
    • tcpdump -nn -r http_capture.pcap -X
    • Look through the output. Can you identify the TCP three-way handshake (SYN, SYN-ACK, ACK)? Can you see the HTTP GET request from your machine? Can you see the HTTP response from the server (a 200 OK with a small HTML page, or possibly a redirect to HTTPS, depending on the server)? The -X output will show the actual headers.
  5. Inspect Captured File with tshark (Basic):

    • Use tshark to read the same file with default summary output.
    • tshark -r http_capture.pcap
    • Notice how tshark automatically dissects and labels the protocols (TCP, HTTP).
  6. Inspect Captured File with tshark (Detailed):

    • Use tshark with the -V option to see the full dissection of the first few packets.
    • tshark -r http_capture.pcap -V -c 3
    • Examine the detailed breakdown for Ethernet, IP, TCP, and HTTP layers.
  7. Filter Displayed Packets with tshark:

    • Use tshark with a display filter (-Y) to show only the HTTP requests within the capture file.
    • tshark -r http_capture.pcap -Y "http.request"
    • Now try showing only the TCP SYN packets (start of connection).
    • tshark -r http_capture.pcap -Y "tcp.flags.syn == 1 and tcp.flags.ack == 0"
  8. Extract Fields with tshark (Example: TCP Conversations):

    • Use tshark's statistics (-z) feature to list TCP conversations in the file.
    • tshark -r http_capture.pcap -z conv,tcp
    • This shows a summary of data exchanged between IP address pairs and ports.

Conclusion: This workshop provided hands-on experience with capturing live network traffic using tcpdump, filtering based on protocols and hosts, and saving captures to files. You then used both tcpdump -r and the more powerful tshark to read these files, inspect packet contents, view detailed protocol dissections, apply display filters, and extract summary information. These skills are fundamental for advanced network troubleshooting and analysis, allowing you to see exactly what is happening "on the wire."

6. Secure Remote Access and File Transfer

A fundamental requirement in networking is the ability to securely access and manage remote Linux systems and transfer files between them. While older, insecure protocols like Telnet and FTP exist, they transmit data (including passwords) in clear text, making them unsuitable for use over untrusted networks. The industry standard for secure remote access and file transfer on Linux is the Secure Shell (SSH) protocol and its associated tools: ssh, scp, and rsync.

The Secure Shell (SSH) Protocol

SSH is a cryptographic network protocol that provides three main capabilities over an unsecured network:

  1. Secure Command-Shell: Allows a user to log in to a remote machine and execute commands interactively, as if they were sitting at the remote console.
  2. Secure File Transfer: Allows files to be copied securely between hosts.
  3. Port Forwarding (Tunneling): Allows arbitrary TCP ports to be forwarded securely over the SSH connection, enabling secure access to services that might not natively support encryption.

Key Security Features:

  • Encryption: All traffic (authentication, commands, file data, forwarded ports) is strongly encrypted, preventing eavesdropping.
  • Authentication: Verifies the identity of both the user connecting (client) and the server being connected to.
    • Server Authentication: The client verifies the server's identity using the server's public host key (usually found in /etc/ssh/ssh_host_*_key.pub on the server and cached in the client's ~/.ssh/known_hosts file). This prevents Man-in-the-Middle (MitM) attacks. You'll often see a prompt asking you to verify the host key fingerprint the first time you connect.
    • Client Authentication: The server verifies the user's identity, typically using either:
      • Password Authentication: User provides their password for the remote account (encrypted during transit by SSH). Simple but potentially vulnerable to brute-force attacks.
      • Public Key Authentication: More secure and convenient. The user generates a public/private key pair (e.g., using ssh-keygen). The public key is placed on the server (in ~/.ssh/authorized_keys), while the private key remains securely on the client (often passphrase-protected). During login, the client proves possession of the private key without actually sending it over the network.
  • Data Integrity: Ensures that the data transmitted has not been tampered with during transit using cryptographic checksums (MACs - Message Authentication Codes).

Most Linux distributions come with an SSH client (ssh, scp, sftp) installed by default. To accept incoming SSH connections, the SSH server daemon (sshd) must be installed and running (often package openssh-server). The main configuration file for the server is typically /etc/ssh/sshd_config, and for the client, /etc/ssh/ssh_config (system-wide) and ~/.ssh/config (user-specific).
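Setting up public key authentication is a one-time procedure. A sketch (the key is generated in a scratch directory here so the commands are self-contained, and the passphrase is empty for brevity; use a real passphrase and the default ~/.ssh path in practice):

```shell
# Generate an Ed25519 key pair (-q quiet, -N '' empty passphrase for the
# demo, -C adds a comment identifying the key).
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$keydir/id_ed25519" -C "demo key"
ls "$keydir"    # id_ed25519 (private key) and id_ed25519.pub (public key)

# Install the public key into ~/.ssh/authorized_keys on the server
# (one password login is needed for this step; the host is a placeholder):
#   ssh-copy-id -i "$keydir/id_ed25519.pub" user@remote.example.com
# After that, key-based logins work without the account password:
#   ssh -i "$keydir/id_ed25519" user@remote.example.com
```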

The ssh Command (Remote Login)

The ssh command is used to initiate a connection to a remote SSH server and obtain an interactive shell or execute a single command.

Syntax:

ssh [OPTIONS] [user@]hostname [COMMAND]
  • user@: (Optional) The username to log in as on the remote host. If omitted, defaults to the current local username.
  • hostname: The hostname or IP address of the remote SSH server.
  • COMMAND: (Optional) If provided, this command is executed on the remote host instead of starting an interactive shell. The connection closes after the command finishes.
  • OPTIONS:
    • -p PORT: Connect to a specific port on the remote host (default is 22).
    • -i IDENTITY_FILE: Specify the path to the private key file to use for public key authentication (default is often ~/.ssh/id_rsa, ~/.ssh/id_ed25519, etc.).
    • -l USER: Alternative way to specify the remote username.
    • -v, -vv, -vvv: Increase verbosity for debugging connection issues.
    • -X / -Y: Enable X11 forwarding (allows running graphical applications remotely; requires an X server on the client and X11 forwarding enabled on the server). -X requests untrusted forwarding, which applies X11 SECURITY extension restrictions; -Y requests trusted forwarding with no restrictions. -X is the safer choice, but some applications misbehave under its restrictions, which is why -Y is often used in practice.
    • -L [bind_address:]port:host:hostport: Local port forwarding. Listens on port on the client machine and forwards connections via the SSH tunnel to host:hostport relative to the server.
    • -R [bind_address:]port:host:hostport: Remote port forwarding. Listens on port on the server machine and forwards connections via the SSH tunnel back to host:hostport relative to the client.
    • -D [bind_address:]port: Dynamic port forwarding (SOCKS proxy). ssh acts as a SOCKS proxy server listening on port on the client, forwarding connections through the remote server.

Example 1: Interactive Login

ssh user@remote.example.com
# (First time connection: verify host key fingerprint)
# (Enter password or passphrase for private key)
user@remote:~$ # Now you have a shell on remote.example.com
user@remote:~$ ls -l
user@remote:~$ exit
Connection to remote.example.com closed.
$

Example 2: Executing a Single Command

ssh admin@server1.example.com 'uptime'
# (Authentication happens)
 11:30:01 up 10 days,  2:15,  1 user,  load average: 0.01, 0.03, 0.05
$ # Connection closed automatically

Example 3: Connecting with a Specific Private Key and Port

ssh -i ~/.ssh/deploy_key -p 2222 deploy@app.example.com
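Options like these can be stored per host in the SSH client configuration so that a short alias suffices. A sketch written to a temporary file for self-containment (in practice the entry goes in ~/.ssh/config; the alias and values mirror Example 3):

```shell
# Each Host block defines an alias plus the options applied when it is used.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host app
    HostName app.example.com
    Port 2222
    User deploy
    IdentityFile ~/.ssh/deploy_key
EOF
cat "$cfg"

# Usage: ssh -F "$cfg" app
# (or simply `ssh app` once the entry lives in ~/.ssh/config)
```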

Example 4: Local Port Forwarding

Imagine a database server (db.internal) running on port 5432 that is only accessible from remote.example.com, not directly from your local machine.

ssh -L 8000:db.internal:5432 user@remote.example.com
# Keep this SSH connection running.
# Now, on your local machine, connect to localhost:8000
# (e.g., psql -h localhost -p 8000 ...)
# The connection will be forwarded securely through remote.example.com to db.internal:5432

The scp Command (Secure Copy)

scp uses SSH to securely copy files and directories between hosts. Its syntax is modeled after the traditional cp command.

Syntax:

# Copy from Local to Remote
scp [OPTIONS] LOCAL_SOURCE [user@]hostname:REMOTE_DESTINATION

# Copy from Remote to Local
scp [OPTIONS] [user@]hostname:REMOTE_SOURCE LOCAL_DESTINATION
  • LOCAL_SOURCE/LOCAL_DESTINATION: Path to file/directory on the local machine.
  • [user@]hostname:REMOTE_SOURCE/[user@]hostname:REMOTE_DESTINATION: Specification of the remote host and the path to the file/directory on that remote host.
  • OPTIONS:
    • -P PORT: Specify the remote SSH port (Note: Uppercase -P for scp, lowercase -p for ssh).
    • -i IDENTITY_FILE: Specify the private key file.
    • -r: Recursively copy entire directories.
    • -p: Preserves modification times, access times, and modes from the original file.
    • -q: Quiet mode (no progress meter).
    • -C: Enable compression.

Example 1: Copy a Local File to a Remote Host

scp report.txt user@remote.example.com:/home/user/documents/
# (Authentication happens)
report.txt                                    100%   12KB  95.5KB/s   00:00
$

Example 2: Copy a File from a Remote Host to Local

scp admin@server1.example.com:/var/log/syslog ./server1_syslog.log
# (Authentication happens)
syslog                                        100%  512KB   1.2MB/s   00:00
$

Example 3: Copy a Directory Recursively to Remote (using non-default port)

scp -r -P 2222 ./project_files deploy@app.example.com:/opt/app/

scp is simple and effective for basic file transfers. However, for large files, synchronizing directories, or transfers over unreliable networks, rsync is often a better choice.

The rsync Command (Remote Sync)

rsync is a highly versatile tool for synchronizing files and directories between locations. It can operate locally or remotely (typically using SSH as its transport). Its key advantage is the "delta-transfer" algorithm: it intelligently figures out which parts of files have changed and only transfers the differences, making it very efficient for subsequent syncs after the initial copy.

Syntax (Common Remote Usage over SSH):

# Push from Local to Remote
rsync [OPTIONS] LOCAL_SOURCE/ [user@]hostname:REMOTE_DESTINATION

# Pull from Remote to Local
rsync [OPTIONS] [user@]hostname:REMOTE_SOURCE/ LOCAL_DESTINATION

Important Note on Trailing Slashes: The presence or absence of a trailing slash / on the source path is significant in rsync:

  • rsync -a source_dir remote:dest_dir: Copies the source_dir itself into dest_dir, resulting in dest_dir/source_dir.
  • rsync -a source_dir/ remote:dest_dir: Copies the contents of source_dir into dest_dir, resulting in dest_dir/file1, dest_dir/file2, etc.
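The difference is easy to verify locally with temporary directories (rsync handles local paths identically):

```shell
# Create a small source tree and two empty destinations.
base=$(mktemp -d)
mkdir -p "$base/source_dir"; touch "$base/source_dir/file1"
d1=$(mktemp -d); d2=$(mktemp -d)

rsync -a "$base/source_dir"  "$d1/"   # no trailing slash: copies the dir itself
rsync -a "$base/source_dir/" "$d2/"   # trailing slash: copies the contents
find "$d1" "$d2" -type f              # d1/source_dir/file1 vs d2/file1
```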

Common Options:

  • -a (archive): A shortcut for a common set of options (-rlptgoD) that preserves permissions, ownership (if possible), timestamps, symbolic links, groups, and devices. Essential for backups and directory syncs.
  • -v (verbose): Increase verbosity.
  • -z (compress): Compress file data during transfer. Useful on slow networks.
  • -P: Equivalent to --partial --progress. Keeps partially transferred files (for resuming) and shows a progress bar for each file.
  • --delete: Delete files in the destination directory that do not exist in the source directory. Use with caution! Essential for true mirroring but can cause data loss if misused.
  • --exclude=PATTERN: Exclude files matching the pattern (e.g., --exclude='*.log', --exclude='cache/'). Can be specified multiple times.
  • --include=PATTERN: Include files matching the pattern, often used with --exclude to make exceptions.
  • -n (dry-run): Perform a trial run without making any changes. Shows what would be transferred or deleted. Highly recommended before using --delete.
  • -e 'ssh -p PORT -i IDENTITY_FILE': Specify the remote shell command and its options. Used to connect via a non-standard port or use a specific key with rsync.

Example 1: Backup Local Home Directory to Remote Server

# Copy contents of local ~/my_important_data to /backup/home on remote_server
# Preserve attributes, compress, show progress, be verbose
rsync -avzP ~/my_important_data/ user@remote_server:/backup/home/

Example 2: Mirroring a Website Directory (with delete)

# Make the remote directory exactly match the local one
# Do a dry run first!
rsync -avzP --delete -n /var/www/my_site/ deploy@webserver:/var/www/live_site/

# If dry run looks okay, remove -n to perform the actual sync
rsync -avzP --delete /var/www/my_site/ deploy@webserver:/var/www/live_site/

Example 3: Syncing Using Non-standard Port and Excluding Logs

rsync -avzP --exclude='*.log' -e 'ssh -p 2222' \
    ~/projects/current_app/ \
    dev@staging_server:/srv/app/

rsync's efficiency, especially for repeated transfers, and its flexibility with options like --delete and --exclude make it the preferred tool for backups, deployments, and keeping directory structures synchronized over the network.
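These options combine naturally into a small backup routine. A sketch with a date-stamped destination (the paths are placeholders, and the destination is local so the sketch is self-contained; a user@host: prefix works the same way):

```shell
# Sync the source into a directory named after today's date, skipping logs.
backup_root=$(mktemp -d)                 # stand-in for a real backup location
src=$(mktemp -d)
echo "data" > "$src/notes.txt"
echo "x"    > "$src/old.log"
dest="$backup_root/backup-$(date +%F)"   # e.g. backup-2024-05-01

rsync -a --exclude='*.log' "$src/" "$dest/"
ls "$dest"                               # notes.txt only; old.log was excluded
```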

Workshop: Remote Access and File Synchronization

Goal: Practice connecting to a remote host using ssh, copying files using scp, and synchronizing directories using rsync.

Assumptions:

  • You need access to two Linux machines where you can log in. This could be:
    • Two virtual machines (e.g., using VirtualBox or VMware) on your computer, networked together.
    • Your local machine and a cloud VM (e.g., AWS EC2, Google Cloud, DigitalOcean).
    • Your local machine and a server provided by your university or another provider.
    • Even just your local machine connecting to itself (ssh localhost) can work for practicing the commands, though it's less realistic.
  • The "remote" machine must have the openssh-server package installed and running.
  • You know the IP address or hostname of the remote machine (REMOTE_IP_OR_HOSTNAME) and a username (REMOTE_USER) you can log in as.
  • You can authenticate (either via password or by having set up SSH keys beforehand). For simplicity, password authentication is assumed here if keys aren't set up, but key authentication is strongly recommended for real-world use.

Steps:

  1. Establish SSH Connection:

    • From your local machine (Machine A), connect to the remote machine (Machine B) using ssh.
    • ssh REMOTE_USER@REMOTE_IP_OR_HOSTNAME
    • If it's the first time, verify the host key fingerprint and type yes.
    • Enter the password for REMOTE_USER when prompted.
    • You should now have a command prompt on Machine B.
    • Run a simple command like hostname or pwd to confirm you are on the remote machine.
    • Type exit to close the connection and return to Machine A.
  2. Create Sample Files/Directories:

    • On Machine A (your local machine), create a sample file and a directory:
    • echo "This is a test file from Machine A." > test_file_A.txt
    • mkdir test_dir_A
    • echo "File 1 inside dir A" > test_dir_A/file1.txt
    • echo "File 2 inside dir A" > test_dir_A/file2.txt
    • ls -lR to verify.
  3. Copy a Single File using scp (Local to Remote):

    • Use scp to copy test_file_A.txt from Machine A to the home directory of REMOTE_USER on Machine B.
    • scp test_file_A.txt REMOTE_USER@REMOTE_IP_OR_HOSTNAME:~
      • (The ~ is a shortcut for the remote user's home directory).
    • Enter the password if prompted.
    • Verify: ssh REMOTE_USER@REMOTE_IP_OR_HOSTNAME 'ls -l ~/test_file_A.txt' (Execute remote command).
  4. Copy a Directory using scp (Local to Remote):

    • Use scp with the -r option to copy the entire test_dir_A directory to the remote home directory.
    • scp -r test_dir_A REMOTE_USER@REMOTE_IP_OR_HOSTNAME:~
    • Verify: ssh REMOTE_USER@REMOTE_IP_OR_HOSTNAME 'ls -lR ~/test_dir_A'
  5. Create a Sample File on Remote:

    • Log in to Machine B again using ssh.
    • ssh REMOTE_USER@REMOTE_IP_OR_HOSTNAME
    • Create a file there: echo "This file originated on Machine B." > test_file_B.txt
    • exit
  6. Copy a File using scp (Remote to Local):

    • Use scp to copy test_file_B.txt from Machine B back to your current directory on Machine A.
    • scp REMOTE_USER@REMOTE_IP_OR_HOSTNAME:~/test_file_B.txt .
      • (The . represents the current local directory).
    • Verify: ls -l test_file_B.txt and cat test_file_B.txt.
  7. Prepare for rsync:

    • On Machine A, create a directory for rsync testing and some files within it:
    • mkdir rsync_source
    • echo "Rsync file 1" > rsync_source/rsync1.txt
    • echo "Rsync file 2" > rsync_source/rsync2.txt
    • mkdir rsync_source/subdir
    • echo "Rsync sub file" > rsync_source/subdir/subfile.txt
  8. Synchronize Directory using rsync (Local to Remote):

    • Use rsync with the archive (-a), verbose (-v), and progress (-P) options to copy the contents of rsync_source to a new directory named rsync_dest on Machine B. Remember the trailing slash on the source!
    • rsync -avP rsync_source/ REMOTE_USER@REMOTE_IP_OR_HOSTNAME:~/rsync_dest
    • Verify: ssh REMOTE_USER@REMOTE_IP_OR_HOSTNAME 'ls -lR ~/rsync_dest' (You should see rsync1.txt, rsync2.txt, and subdir/subfile.txt directly inside rsync_dest).
  9. Modify Local Source and Resync:

    • On Machine A, modify one file and add another:
    • echo "Rsync file 1 (modified)" > rsync_source/rsync1.txt
    • echo "Rsync file 3" > rsync_source/rsync3.txt
    • Run the same rsync command as in Step 8 again.
    • rsync -avP rsync_source/ REMOTE_USER@REMOTE_IP_OR_HOSTNAME:~/rsync_dest
    • Observe the output. rsync should recognize that rsync2.txt and subdir/subfile.txt haven't changed and only transfer the changed rsync1.txt and the new rsync3.txt. This demonstrates its efficiency.
    • Verify the changes on Machine B: ssh REMOTE_USER@REMOTE_IP_OR_HOSTNAME 'cat ~/rsync_dest/rsync1.txt ~/rsync_dest/rsync3.txt'
  10. Test rsync --delete (Dry Run First!):

    • On Machine A, remove a file from the source:
    • rm rsync_source/rsync2.txt
    • Perform a dry run (-n) with --delete to see what rsync would do to make the destination match the source.
    • rsync -avPn --delete rsync_source/ REMOTE_USER@REMOTE_IP_OR_HOSTNAME:~/rsync_dest
    • Observe the output. It should indicate that it would delete rsync2.txt on the remote destination. No changes are actually made because of -n.
    • (Optional: Remove the -n and run again to perform the deletion if desired).

Conclusion: This workshop guided you through the fundamental secure remote operations: logging in interactively with ssh, copying individual files and directories with scp, and efficiently synchronizing directory contents using rsync, including leveraging its delta-transfer algorithm and testing the --delete option safely with a dry run. These tools are indispensable for managing remote Linux systems and deploying applications.

7. Managing Network Configuration with NetworkManager

While the ip command allows for immediate, temporary manipulation of network interfaces, addresses, and routes, these changes typically don't persist after a reboot. Managing persistent network configuration in modern Linux distributions is often handled by higher-level services. One of the most common is NetworkManager. NetworkManager provides detection and configuration for various network devices (Ethernet, Wi-Fi, mobile broadband, VPNs) and aims to keep network connections active when available. It offers GUI tools (like nm-applet or settings panels in desktop environments) but also provides a powerful command-line interface called nmcli. Learning nmcli is essential for managing network settings persistently from the command line or in scripts, especially on servers or systems without a GUI.

Other configuration methods exist (e.g., systemd-networkd with /etc/systemd/network files, traditional /etc/network/interfaces on older Debian/Ubuntu systems, Netplan on newer Ubuntu), but NetworkManager with nmcli is widely adopted across many distributions (Fedora, RHEL, CentOS, Ubuntu Desktop, etc.).

NetworkManager Concepts

  • Device: Represents a physical or virtual network interface recognized by NetworkManager (e.g., eth0, wlan0, virbr0).
  • Connection (Profile): A collection of settings that define how to connect to a network using a specific device. This includes things like IP addressing method (DHCP/static), IP addresses, gateways, DNS servers, Wi-Fi SSID and password, VPN settings, etc. A device can have multiple connection profiles associated with it, but only one can be active on the device at a time.
  • Active Connection: A connection profile that is currently in use by a device to establish network connectivity.

The nmcli Command

nmcli is the command-line client for interacting with the NetworkManager daemon. It allows you to view status, manage devices, and create, view, modify, activate, and deactivate connection profiles.

General Syntax:

nmcli [OPTIONS] OBJECT {COMMAND | help}
  • OBJECTS: general, networking, radio, connection (or con, c), device (or dev, d).
  • COMMANDS: Vary depending on the object (e.g., status, show, up, down, add, modify, delete).
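Beyond its human-readable tables, nmcli has output options designed for scripting: -t (terse, colon-separated, machine-parsable), -f (select specific fields), and -g (a shortcut for -t -f). As a small self-contained sketch, the snippet below parses a captured sample of `nmcli -t -f DEVICE,STATE device` output (the sample data is illustrative, not taken from a live system) to list only connected devices:

```shell
# Sample of what 'nmcli -t -f DEVICE,STATE device' emits: one line per device,
# fields separated by ':' (illustrative data, not a live query).
sample='eth0:connected
wlan0:disconnected
lo:unmanaged'

# Split on ':' and print the device name where the state is "connected":
printf '%s\n' "$sample" | awk -F: '$2 == "connected" {print $1}'
# Prints: eth0
```

On a real system you would pipe the live command straight in: `nmcli -t -f DEVICE,STATE device | awk -F: '$2 == "connected" {print $1}'`.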

Common Tasks and Examples:

Viewing General Status and Network State

# Overall NetworkManager status and connectivity state
nmcli general status

# Check if networking is enabled overall
nmcli networking

# Check status of Wi-Fi, WWAN, Bluetooth radios
nmcli radio

Example nmcli general status Output:

STATE      CONNECTIVITY  WIFI-HW  WIFI     WWAN-HW  WWAN
connected  full          enabled  enabled  enabled  disabled
  • STATE: connected, connecting, disconnected, etc.
  • CONNECTIVITY: full (internet reachable), limited (network but no internet), portal (captive portal detected), none, unknown.
  • WIFI-HW/WIFI: Hardware switch status / Software status.

Managing Devices

# List all network devices and their status
nmcli device status

# Show detailed information about a specific device
nmcli device show eth0

# Manually connect a device using a suitable connection profile
sudo nmcli device connect eth0

# Manually disconnect a device (deactivates its active connection)
sudo nmcli device disconnect eth0

# Re-apply device configuration (useful after some driver/hardware changes)
sudo nmcli device reapply eth0

Example nmcli device status Output:

DEVICE  TYPE      STATE      CONNECTION
eth0    ethernet  connected  Wired connection 1
wlan0   wifi      connected  MyHomeWiFi
docker0 bridge    connected  docker0
lo      loopback  unmanaged  --
  • DEVICE: Interface name.
  • TYPE: Type of device.
  • STATE: connected, disconnected, connecting, unavailable, unmanaged (NetworkManager ignores this device).
  • CONNECTION: Name of the active connection profile on this device.

Managing Connections (Profiles)

This is where most persistent configuration happens.

# List all saved connection profiles
nmcli connection show
# Alias: nmcli c s

# List only the active connection profiles
nmcli connection show --active
# Alias: nmcli c s -a

# Show detailed configuration of a specific connection profile
nmcli connection show "Wired connection 1"

# Bring up (activate) a connection profile (if suitable device available)
sudo nmcli connection up "MyOtherWiFi"
# Alias: nmcli c up "MyOtherWiFi"

# Bring down (deactivate) an active connection profile
sudo nmcli connection down "MyHomeWiFi"
# Alias: nmcli c down "MyHomeWiFi"

# Delete a connection profile
sudo nmcli connection delete "OldCafeWiFi"
# Alias: nmcli c d "OldCafeWiFi"

Example nmcli connection show Output:

NAME                UUID                                  TYPE      DEVICE
Wired connection 1  e8a4f5b6-ae6d-4a7a-8d8c-1f2e3d4b5a6c  ethernet  eth0
MyHomeWiFi          a1b2c3d4-e5f6-7890-1234-abcdef012345  wifi      wlan0
docker0             f0e9d8c7-b6a5-4321-fedc-ba9876543210  bridge    docker0
VPN Work            12345678-abcd-ef01-2345-67890abcdef0  vpn       --
OldCafeWiFi         98765432-fedc-ba98-7654-3210abcdef98  wifi      --
  • NAME: User-friendly name of the connection profile.
  • UUID: Unique identifier. Can often be used instead of the name.
  • TYPE: Type of connection (ethernet, wifi, vpn, bridge, bond, etc.).
  • DEVICE: Device currently using this connection (or -- if inactive).

Adding a New Connection Profile (Example: Static Ethernet)

# Add a new Ethernet connection profile named "Static_Office" for device eth0
# Configure static IPv4 address, gateway, and DNS servers
sudo nmcli connection add type ethernet con-name "Static_Office" ifname eth0 \
    ip4 192.168.10.50/24 gw4 192.168.10.1

# Add DNS servers to the newly created connection
sudo nmcli connection modify "Static_Office" ipv4.dns "8.8.8.8 1.1.1.1"

# Ensure manual (static) addressing is set (nmcli normally sets this
# automatically when 'ip4' is supplied at creation time, but being explicit is safe)
sudo nmcli connection modify "Static_Office" ipv4.method manual

# (Optional) Set connection to auto-connect when device is available
sudo nmcli connection modify "Static_Office" connection.autoconnect yes

# Show the configuration we just created
nmcli connection show "Static_Office"

# Activate the new connection (this will disconnect any current connection on eth0)
sudo nmcli connection up "Static_Office"
  • connection add: Creates a new profile.
  • type ethernet: Specifies the connection type.
  • con-name "Static_Office": Sets the profile name.
  • ifname eth0: Associates the profile with the eth0 device.
  • ip4 ... gw4 ...: Sets the static IPv4 address (with CIDR prefix) and default gateway.
  • connection modify: Changes settings on an existing profile.
  • ipv4.dns "...": Sets DNS servers (space-separated).
  • ipv4.method manual: Sets static IP configuration (use auto for DHCP).
  • connection.autoconnect yes: Makes NetworkManager try to activate this profile automatically.
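The step-by-step sequence above can also be collapsed into a single command: any long-form property (ipv4.*, connection.*) can be set directly at connection add time. A sketch using the same hypothetical profile name, device, and addresses as above:

```shell
# One-shot creation of the same hypothetical "Static_Office" profile,
# setting long-form properties directly at 'connection add' time:
sudo nmcli connection add type ethernet con-name "Static_Office" ifname eth0 \
    ipv4.method manual \
    ipv4.addresses 192.168.10.50/24 \
    ipv4.gateway 192.168.10.1 \
    ipv4.dns "8.8.8.8 1.1.1.1" \
    connection.autoconnect yes
```

Both styles produce the same saved profile; the one-shot form is handier in scripts, while the add-then-modify form is easier to read and adjust interactively.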

Adding a New Wi-Fi Connection

# Scan for available Wi-Fi networks
nmcli device wifi list

# Connect to a WPA2-protected network, supplying the password inline
sudo nmcli device wifi connect "MySecureSSID" password "MyWiFiPassword" name "MySecureWiFi_Profile"
# Note: 'name' assigns a profile name, otherwise one is generated.
# To be prompted for the password instead (keeping it out of your shell
# history), omit 'password ...' and add the global --ask option:
# sudo nmcli --ask device wifi connect "MySecureSSID" name "MySecureWiFi_Profile"

# Connect to an open (unsecured) network
# sudo nmcli device wifi connect "OpenCafeNetwork" name "Cafe"

nmcli automatically creates and saves the connection profile when using device wifi connect.

Modifying an Existing Connection

# Change DNS servers for an existing profile
sudo nmcli connection modify "Wired connection 1" ipv4.dns "1.1.1.1 1.0.0.1"

# Change profile back to DHCP (automatic)
sudo nmcli connection modify "Wired connection 1" ipv4.method auto
sudo nmcli connection modify "Wired connection 1" ipv4.dns "" # Clear static DNS

# Reactivate the connection for changes to take effect (might cause brief disconnect)
sudo nmcli connection up "Wired connection 1"
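One practical caution: if you are administering the machine over SSH, deactivating the connection drops your session before you can bring it back up. Chaining both commands into a single root invocation (the profile name below is the document's example; substitute your own) ensures the link comes back even if your session dies in between:

```shell
# Run down and up as one command under a single root shell, so the connection
# is restored even if the SSH session is dropped when the interface goes down:
sudo sh -c 'nmcli connection down "Wired connection 1"; nmcli connection up "Wired connection 1"'
```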

nmcli offers a vast array of options for configuring other connection types (VPNs, bonding, bridging, VLANs, etc.). Use nmcli connection add help, nmcli connection modify help, or refer to the nmcli(1) and nmcli-examples(7) man pages for more details.
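As a taste of those other connection types, here is a sketch of adding a VLAN interface on top of an Ethernet device; the interface names, VLAN id, and address are illustrative assumptions, not values from this document:

```shell
# Create a VLAN connection with tag 10 stacked on parent device eth0.
# 'dev' names the parent, 'id' the VLAN tag, 'ifname' the new virtual interface.
sudo nmcli connection add type vlan con-name "vlan10" \
    ifname eth0.10 dev eth0 id 10 \
    ipv4.method manual ipv4.addresses 192.168.20.5/24
```

Activating this profile creates the eth0.10 interface and tags its traffic with VLAN 10, which is useful on trunk ports carrying multiple VLANs.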

Workshop: Managing Network Profiles with nmcli

Goal: Use nmcli to inspect network status, view device and connection information, and create/modify a connection profile (e.g., setting static DNS servers).

Assumptions:

  • You are using a Linux distribution managed by NetworkManager (common on desktops and many server distros like Fedora, CentOS, RHEL, recent Ubuntu).
  • You have sudo privileges.
  • You have an active network connection (e.g., Ethernet or Wi-Fi).

Steps:

  1. Check Overall Status:

    • Use nmcli general status to see the overall state and connectivity.
    • Use nmcli networking to ensure networking is enabled.
    • Use nmcli radio if you have Wi-Fi/WWAN to check radio status.
    # Commands:
    nmcli general status
    nmcli networking
    nmcli radio
    
  2. Inspect Devices:

    • List all network devices known to NetworkManager using nmcli device status.
    • Identify your primary network interface (e.g., eth0, wlan0) and note its state and the name of the connection profile it's using.
    • Show detailed information for your primary device using nmcli device show <YOUR_DEVICE>. Examine the general properties, IP addresses, gateway, and DNS servers listed.
    # Commands:
    nmcli device status
    nmcli device show eth0  # Replace eth0 if needed
    
  3. Inspect Connections:

    • List all saved connection profiles using nmcli connection show.
    • Identify the profile currently active on your primary device (it should match the name from Step 2). Note its NAME and UUID.
    • Display the detailed configuration of your active connection profile using its name: nmcli connection show "<YOUR_ACTIVE_CONNECTION_NAME>".
    • Pay close attention to ipv4.method (likely auto if using DHCP) and ipv4.dns (likely provided by DHCP).
    # Commands:
    nmcli connection show
    nmcli connection show "Wired connection 1" # Replace with your active connection name
    
  4. Modify DNS Servers (Example):

    • Let's modify your active connection profile to use specific public DNS servers (e.g., Cloudflare's 1.1.1.1 and 1.0.0.1) instead of the ones provided by DHCP.
    • Use nmcli connection modify to set the ipv4.dns property. Remember to use sudo.
    • sudo nmcli connection modify "<YOUR_ACTIVE_CONNECTION_NAME>" ipv4.dns "1.1.1.1 1.0.0.1"
    • Important: If your connection uses DHCP (ipv4.method auto), NetworkManager will by default combine the DHCP-provided DNS servers with the ones you just set. To ensure only your specified DNS servers are used, also set:
    • sudo nmcli connection modify "<YOUR_ACTIVE_CONNECTION_NAME>" ipv4.ignore-auto-dns yes
  5. Apply Changes:

    • For the DNS changes to take effect, you typically need to reactivate the connection. This will cause a brief network interruption.
    • sudo nmcli connection down "<YOUR_ACTIVE_CONNECTION_NAME>"
    • sudo nmcli connection up "<YOUR_ACTIVE_CONNECTION_NAME>"
    • Alternatively, sometimes restarting NetworkManager (sudo systemctl restart NetworkManager) or simply reactivating the device (sudo nmcli device reapply <YOUR_DEVICE>) might work, but down/up on the connection is often the most reliable way.
  6. Verify Changes:

    • Check the device details again to see the updated DNS servers:
    • nmcli device show <YOUR_DEVICE> (Look for IP4.DNS[1] and IP4.DNS[2]).
    • You can also check the system's current DNS resolver configuration file, /etc/resolv.conf. It should now list 1.1.1.1 and 1.0.0.1. (On systems using systemd-resolved, /etc/resolv.conf may instead show only the local stub resolver 127.0.0.53; run resolvectl status there to see the real upstream servers.)
    • cat /etc/resolv.conf
    • Test DNS resolution: ping www.cloudflare.com.
  7. Revert Changes (Optional):

    • To go back to using DHCP-provided DNS servers:
    • sudo nmcli connection modify "<YOUR_ACTIVE_CONNECTION_NAME>" ipv4.ignore-auto-dns no
    • sudo nmcli connection modify "<YOUR_ACTIVE_CONNECTION_NAME>" ipv4.dns "" (Set DNS back to empty)
    • Reactivate the connection:
    • sudo nmcli connection down "<YOUR_ACTIVE_CONNECTION_NAME>"
    • sudo nmcli connection up "<YOUR_ACTIVE_CONNECTION_NAME>"
    • Verify again using nmcli device show <YOUR_DEVICE> and cat /etc/resolv.conf.
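The verification in Step 6 can also be scripted. The snippet below extracts nameserver entries the same way you would from /etc/resolv.conf, but parses an illustrative sample instead of the live file so it runs anywhere:

```shell
# Sample resolv.conf content (illustrative); on a real system, read
# /etc/resolv.conf instead of this variable.
resolv='# Generated by NetworkManager
nameserver 1.1.1.1
nameserver 1.0.0.1'

# Print the second field of every "nameserver" line:
printf '%s\n' "$resolv" | awk '$1 == "nameserver" {print $2}'
# Prints:
# 1.1.1.1
# 1.0.0.1
```

On a live system the equivalent one-liner is `awk '$1 == "nameserver" {print $2}' /etc/resolv.conf`.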

Conclusion: This workshop introduced you to nmcli, the command-line interface for NetworkManager. You practiced viewing the status of NetworkManager, devices, and connections. Most importantly, you learned how to modify an existing connection profile to change settings like DNS servers and how to apply and verify those changes. This ability to manage network profiles persistently from the command line is crucial for server administration and automating network configuration tasks.

Conclusion Mastering the Network Command Line

Throughout this exploration, we've journeyed from the fundamental checks of network interface status and basic connectivity to the intricate details of DNS resolution, routing paths, active connections, deep packet analysis, secure remote administration, and persistent configuration management. You've encountered a powerful suite of command-line tools – ip, ping, dig, host, traceroute, tracepath, ss, tcpdump, tshark, ssh, scp, rsync, and nmcli – each serving a critical role in understanding, managing, and troubleshooting Linux networking.

We emphasized not just how to use these commands but why they work, touching upon the underlying protocols and concepts like ICMP, DNS records, TCP/IP states, routing tables, BPF filters, SSH security mechanisms, and NetworkManager profiles. The workshops accompanying each section provided practical, hands-on experience, transforming theoretical knowledge into tangible skills.

Mastering these commands offers significant advantages over relying solely on graphical interfaces:

  • Power and Precision: CLI tools often provide finer control and more detailed output.
  • Efficiency: Experienced users can perform complex tasks much faster in the terminal.
  • Scriptability: Commands can be easily incorporated into scripts to automate repetitive tasks like configuration, monitoring, and deployment.
  • Remote Management: Essential for managing servers or embedded systems that may not have a graphical environment, often accessed via SSH.
  • Resourcefulness: CLI tools are generally lightweight and available even in minimal system installs or recovery environments.
  • Deeper Understanding: Working at the command line encourages a more profound understanding of the underlying network mechanics.

While the journey to mastery is ongoing, you now possess a solid foundation. You can:

  • Inspect your network interfaces and verify IP configurations (ip addr).
  • Test reachability and latency (ping).
  • Query the Domain Name System for various records (dig, host).
  • Analyze routing tables and trace network paths (ip route, traceroute).
  • Examine listening ports and active connections (ss).
  • Capture and inspect raw network traffic (tcpdump, tshark).
  • Securely access remote systems and transfer files (ssh, scp, rsync).
  • Manage persistent network configurations via NetworkManager (nmcli).

Continue practicing these commands in real-world scenarios. Set up virtual machines to create test networks. Use tcpdump or tshark to observe the traffic generated by different applications. Automate file transfers with rsync. Configure static IPs or manage Wi-Fi connections using nmcli. The more you use these tools, the more intuitive and powerful they will become. The command line is your window into the intricate and fascinating world of Linux networking – embrace its power to become a more effective administrator, developer, or power user.