Author: Nejat Hakan
Email: nejat.hakan@outlook.de
PayPal Me: https://paypal.me/nejathakan
Linux Security and Hardening
Introduction Why Secure Your Linux System?
Welcome to a truly essential part of working with any Linux system: security and hardening. Perhaps you've heard that "Linux is secure." It's true that Linux is built with a strong security foundation – concepts like separating user privileges and isolating processes are baked into its core design. However, a standard, out-of-the-box installation is usually optimized for general use or ease of setup, not necessarily for resisting determined attackers or specific threats you might face. This is where security and hardening come in.
Let's define our terms clearly:
- Security: This is the overall practice of protecting your computer systems and the data they hold. It's about preventing unauthorized people or programs from accessing, changing, disrupting, or destroying your valuable information and ensuring the system is available when you need it. Think of it as protecting your digital house and everything inside it.
- Hardening: This is the specific process of making your Linux system more difficult to attack. We do this by reducing its attack surface. Imagine the attack surface as all the doors, windows, and potential weak spots an intruder could try to exploit to get into your house. Hardening involves locking unnecessary doors, boarding up unused windows, reinforcing weak walls, and setting up alarms.
Specifically, hardening a Linux system involves actions like:
- Removing software you don't actually need (less software means fewer potential bugs to exploit).
- Turning off services that aren't being used (fewer open network "doors").
- Configuring system settings to be as strict as possible while still allowing the system to function as needed.
- Applying security updates (patches) promptly to fix known vulnerabilities.
- Setting up strong controls over who can access the system and what they can do (user accounts and permissions).
- Implementing monitoring and logging so you can see if something suspicious is happening.
In today's interconnected world, cyber threats are a constant reality. Whether you're running Linux on your personal laptop, managing a web server for a business, or deploying applications in the cloud, securing and hardening your system isn't just a good idea – it's a fundamental responsibility.
This guide will walk you through the core principles and practical steps needed to significantly boost the security of your Linux system. We'll cover managing users, securing files, protecting network connections, keeping an eye on logs, staying updated, and checking your work with auditing tools. Every concept will be explained thoroughly, assuming you're learning this for the first time. After each theoretical section, we'll dive into a hands-on Workshop where you can immediately practice what you've learned on a real (or virtual) Linux system.
Let's get started building stronger digital defenses!
1. User Account Security The First Line of Defense
User accounts are the main doorways into your Linux system. Anyone or anything interacting with the system does so through a user account. If an attacker manages to compromise a user account – especially a powerful one like the `root` account – they could potentially gain complete control. This makes secure user account management absolutely critical.
Strong Password Policies
Think of passwords as the keys to your user accounts. A weak, easily guessable password is like leaving a flimsy key under the doormat. Attackers use sophisticated tools that can try millions of password combinations per second (these are called brute-force attacks) or test lists of common words and previously leaked passwords (dictionary attacks). Enforcing strong password policies makes these attacks much, much harder.
What makes a password strong?
- Complexity: It should be a mix of different character types:
- Uppercase letters (A-Z)
- Lowercase letters (a-z)
- Numbers (0-9)
- Special characters (e.g., `!@#$%^&*()_+-=[]{};':",./<>?`)
- Length: Longer is always better. Each additional character makes a password exponentially harder to guess. Aim for a minimum of 12-15 characters, or even more if possible.
- Uniqueness: Don't reuse passwords across different websites or systems. If one site is breached, attackers won't get the keys to your other accounts.
- Avoid Obvious Information: Don't use personal information like your name, birthdate, pet's name, or common words.
Enforcing Policies in Linux:
Linux uses a flexible system called PAM (Pluggable Authentication Modules) to handle tasks related to proving a user's identity (authentication). We can configure PAM modules to enforce password rules.
- `pam_pwquality` / `pam_cracklib`: These modules check password strength when a user tries to set a new one. You configure rules like minimum length, required character types, and preventing passwords based on dictionary words.
  - The configuration file is often `/etc/security/pwquality.conf` (on newer systems like Fedora, CentOS 8+, recent Ubuntu), or settings might be placed directly within files in `/etc/pam.d/` (like `common-password` on Debian/Ubuntu or `system-auth` on older CentOS/RHEL).
  - Example `/etc/security/pwquality.conf` settings:

    ```
    # This line sets the minimum acceptable password length. 14 is a good start.
    minlen = 14
    # These lines require at least one character from different classes.
    # The negative number (-1) means "at least one".
    # dcredit = digit, ucredit = uppercase, lcredit = lowercase, ocredit = other/special
    dcredit = -1
    ucredit = -1
    lcredit = -1
    ocredit = -1
    # This checks how many characters must be different from the old password.
    difok = 5
    ```
- Password Aging (`chage` command): Even strong passwords can potentially be compromised over time (e.g., if captured unknowingly). Password aging forces users to change their passwords periodically, limiting the window of opportunity for an attacker who might have obtained an old password.
  - The `chage` command lets administrators set policies per user:
    - Minimum days between changes (prevents users from changing it immediately back to the old one).
    - Maximum days a password is valid (the core setting for forcing changes).
    - Warning period before expiration (gives users notice).
    - Account inactivity period (can lock accounts not used for a long time).
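The four policies above map directly onto `chage` flags. As a sketch (the username `alice` is a placeholder, and the commands need root privileges):

```shell
# Apply a password aging policy to one account:
#   -m 1  : minimum 1 day between password changes
#   -M 90 : password valid for at most 90 days
#   -W 7  : warn the user 7 days before expiry
#   -I 30 : lock the account 30 days after the password expires unused
sudo chage -m 1 -M 90 -W 7 -I 30 alice

# Review the resulting policy
sudo chage -l alice
```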
User and Group Management Best Practices
Simply having strong passwords isn't enough. We also need to manage who has accounts and what they are allowed to do.
Key Principles:
- Principle of Least Privilege: This is perhaps the most important concept in security. Give users and programs only the permissions they absolutely need to perform their intended function, and nothing more. For example, don't browse the web or read email while logged in as the all-powerful `root` user! Any vulnerability exploited in your browser would then have full system control.
- Use `sudo` for Administrative Tasks: The `root` user (also called the superuser, user ID 0) has unlimited power on the system. Logging in directly as `root` is extremely risky because any mistake you make (like accidentally typing `rm -rf /` instead of `rm -rf ./`) or any malicious program you accidentally run has system-wide, potentially devastating consequences. Instead:
  - Log in as a regular, non-privileged user for your daily work (like the `appadmin` user we'll create in the workshop).
  - When you need to perform an administrative task (like installing software, editing system files, managing users), use the `sudo` command (short for "superuser do"). `sudo` allows users who are listed in a special configuration file (`/etc/sudoers`) to run specific commands as root (or even as another user).
  - When you run `sudo <command>`, it prompts for your user password (not the root password). This is more convenient and secure because you don't need to know or share the root password with multiple people.
  - Critically, `sudo` usage is logged (usually in `/var/log/auth.log` or `/var/log/secure`), creating an audit trail. You can see which user ran which administrative command and when, unlike direct root logins, where actions are just attributed to "root".
  - Access to `sudo` is controlled by the `/etc/sudoers` file and files included from the `/etc/sudoers.d/` directory. Never edit `/etc/sudoers` directly with a normal text editor! Always use the special command `visudo`. `visudo` locks the `sudoers` file to prevent multiple people editing it at once and, most importantly, performs syntax checking before saving. This prevents you from saving a broken configuration file that could lock everyone (including yourself) out of using `sudo`.
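A sudoers policy granted via `visudo` typically looks like the following sketch; the group name `webadmins` and the exact command path are illustrative, not taken from the text above:

```
# Edit safely with: sudo visudo -f /etc/sudoers.d/webadmins
# Members of the webadmins group may restart the web server, and nothing else:
%webadmins ALL=(root) /usr/bin/systemctl restart nginx

# For comparison, full administrative access (what sudo/wheel membership grants):
# %sudo ALL=(ALL:ALL) ALL
```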
- Effective Group Management: Linux allows users to be members of one primary group and multiple supplementary groups. Permissions can be assigned to groups instead of just individual users. This simplifies administration enormously.
  - Create groups for specific roles or projects (e.g., `developers`, `webadmins`, `auditors`, `interns`).
  - Grant necessary file permissions or `sudo` privileges to the group.
  - Add users to the relevant groups. When a user's role changes or they leave, you just modify their group memberships instead of hunting down and changing individual permissions scattered across the filesystem or the `sudoers` file.
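As a sketch, that role-based workflow might look like this (the group, directory, and usernames are invented for illustration, and the commands need root privileges):

```shell
# Create a role group and a shared directory owned by it
sudo groupadd developers
sudo mkdir -p /srv/project
sudo chgrp developers /srv/project
sudo chmod 770 /srv/project          # owner and group: full access; others: none

# Grant and revoke access purely through membership
sudo usermod -aG developers alice    # alice gains access at her next login
sudo gpasswd -d alice developers     # role change? remove her from the group
```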
- Regular User Account Audits: Time passes, people change roles, projects end, employees leave. You need to periodically review all user accounts on the system to ensure they are still necessary and have appropriate privileges.
  - Examine the `/etc/passwd` file (it lists all local user accounts).
  - Disable or remove accounts for users who no longer need access (e.g., former employees, temporary accounts). Disabling (`usermod -L`) is often preferred initially over deleting (`userdel`), as it preserves user files in case they are needed later.
  - Check for any suspicious or unknown accounts – these could be a sign of a security breach.
  - Review the membership of administrative groups (like `sudo` or `wheel`) and other privileged groups. Ensure only authorized personnel are members.
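A few read-only commands cover most of such an audit; this sketch needs no special privileges:

```shell
# Accounts with UID 0 -- only "root" should ever appear here
awk -F: '$3 == 0 {print $1}' /etc/passwd

# Regular (human) accounts, conventionally UID 1000 and above
awk -F: '$3 >= 1000 && $1 != "nobody" {print $1}' /etc/passwd

# Membership of the administrative group (use "wheel" on RHEL-family systems)
getent group sudo
```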
Common User/Group Management Commands:
Remember, you typically need `sudo` to run these commands, as they modify system-wide user information.
- `useradd <username>`: Creates a new user account. Important options:
  - `-m`: Creates the user's home directory (e.g., `/home/<username>`). Highly recommended for regular users.
  - `-s <shell_path>`: Sets the user's login shell (e.g., `-s /bin/bash` for the Bash shell, or `-s /sbin/nologin` for service accounts that shouldn't log in interactively).
  - `-g <groupname>`: Sets the user's primary group.
  - `-G <groupname>,<groupname>`: Sets the user's supplementary groups.
- `passwd <username>`: Sets or changes the password for the specified user. If run without a username, it changes your own password. Only `root` can change other users' passwords without knowing the old one.
- `userdel <username>`: Deletes a user account. Use `-r` to also remove the user's home directory and mail spool (`userdel -r <username>`). Use with caution!
- `groupadd <groupname>`: Creates a new group.
- `groupdel <groupname>`: Deletes a group. Be careful: ensure no files rely solely on this group's ownership.
- `usermod`: Modifies existing user account properties. Very powerful and frequently used.
  - `usermod -aG <groupname> <username>`: Appends the user to the specified supplementary Group(s). This is the standard way to add a user to groups like `sudo` or `wheel` without removing their existing supplementary groups. The `-a` (append) is crucial!
  - `usermod -L <username>`: Locks the user's account password, preventing password-based logins. The account still exists.
  - `usermod -U <username>`: Unlocks a previously locked user account password.
  - `usermod -s /sbin/nologin <username>`: Changes the user's shell to prevent interactive logins.
  - `usermod -d /new/home <username>`: Changes the user's home directory path. (Use `-m` as well to move existing contents.)
- `chage -l <username>`: Lists the password aging information for a user (last change, expires, inactive, etc.).
- `visudo`: The only command you should use to safely edit the `/etc/sudoers` configuration. It locks the file and checks syntax on save.
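Tied together, a typical provisioning sequence looks like this sketch (the group and username are invented, and the commands need root privileges):

```shell
sudo groupadd auditors                 # create a role group
sudo useradd -m -s /bin/bash carol     # new user with home directory and bash
sudo passwd carol                      # set an initial password (interactive)
sudo usermod -aG auditors carol        # append carol to the auditors group
id carol                               # verify uid, primary group, and groups
sudo chage -l carol                    # verify the password aging defaults
```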
Disabling Unnecessary Accounts
Most Linux distributions come with several pre-defined user accounts created during installation. Some are essential system accounts used by background services (daemons) to run with limited privileges (this is good practice – avoids running services as root). However, others might be unnecessary remnants or potential security risks.
Key Actions:
- Disable Direct Root Login (Especially via SSH): As emphasized before, logging in directly as `root`, particularly over the network via SSH, is extremely risky. If an attacker can target the `root` username, they only need to guess one password to gain full control. We will explicitly configure the SSH service (in section Network Security Guarding the Gates and in Workshop Managing Users and Basic Policies) to forbid direct root login. Users must log in as a standard user and use `sudo` for elevation.
- Identify and Lock/Remove Unused User Accounts:
  - Carefully examine the list of users in the `/etc/passwd` file. Each line represents one account, with fields separated by colons (`:`). The first field is the username; the third is the User ID (UID). By convention:
    - UID 0 is always `root`.
    - UIDs 1-999 (or sometimes 1-499) are typically reserved for system accounts used by services.
    - UIDs 1000+ (or 500+) are usually assigned to regular human users.
  - For any regular user account (UID >= 1000) that is not actively needed (e.g., it belongs to someone who left, or it is a test account no longer used), take action:
    - Locking: `sudo passwd -l <username>` or `sudo usermod -L <username>`. This disables password login but keeps the account and files intact. It's easily reversible (`passwd -u` or `usermod -U`). This is often the safest first step.
    - Deleting: `sudo userdel -r <username>`. This removes the account entirely, including its home directory (`-r`). This is permanent, so be certain the account and its data are no longer needed.
- Review System Accounts (UID < 1000): Understand the purpose of these accounts. Many are necessary for core system functions or installed services (e.g., `bin`, `daemon`, `mail`, `lp`, `www-data`, `postgres`, `sshd`). Do not delete or lock these unless you are absolutely sure the corresponding service is unused and disabled. Disabling the service itself (using `systemctl`) is usually the correct approach, rather than disabling its dedicated user account.
- Look for obvious unused accounts: Some older installations might have accounts like `games`, `ftp`, or a default `guest` account with login privileges. If you don't use these services or features, lock or remove these accounts. A common security measure is to set the shell for system accounts that never need to log in interactively to `/sbin/nologin` or `/bin/false` using `usermod -s`.
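One quick check pays off here: listing every account that still has an interactive login shell. On a hardened system this list should be short (root plus real human users):

```shell
# Print account name, UID, and shell for accounts whose shell is interactive,
# i.e. not /usr/sbin/nologin, /sbin/nologin, or /bin/false
awk -F: '$7 !~ /(nologin|false)$/ {printf "%-16s UID=%-6s %s\n", $1, $3, $7}' /etc/passwd
```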
By diligently managing who can log in, enforcing strong authentication, applying the principle of least privilege, and removing unnecessary access points, you establish a robust first line of defense for your Linux system.
Workshop Managing Users and Basic Policies
Goal: In this workshop, we'll put the theory into practice. You will create a new user, enforce a basic password aging policy, grant administrative privileges using `sudo` correctly, and disable direct root login via SSH – a critical hardening step.
Prerequisites:
- Access to a Linux system (a Virtual Machine like Ubuntu Server or CentOS Stream is perfect for safe experimentation).
- You need to be logged in as a user who already has `sudo` privileges, or as the `root` user.
Let's Get Started!
- Open Your Terminal: Access the command line interface of your Linux system.
- Create a New Standard User:
  - We need administrative rights for creating users. If you're not logged in as root, prefix commands with `sudo`. We'll use `sudo` for each command, as it's best practice.
  - Let's create a user named `appadmin`, ensuring their home directory is created (`-m`) and their default login shell is set to bash (`-s /bin/bash`): `sudo useradd -m -s /bin/bash appadmin`
    - Explanation:
      - `sudo`: Execute the following command with superuser (root) privileges.
      - `useradd`: The command used to add a new user to the system.
      - `-m`: Tells `useradd` to make (create) the user's home directory, typically `/home/appadmin`. If you omit this, the user might not have a personal space to store files or configurations, which can cause issues.
      - `-s /bin/bash`: Sets the user's login shell. The shell is the command-line interpreter; `/bin/bash` is a very common and powerful shell. Setting this ensures the user gets a familiar interactive environment when they log in.
      - `appadmin`: This is the username we've chosen for our new account.
  - Now, we must set an initial password for this new user with the `passwd` command: `sudo passwd appadmin`. You'll be prompted to type the new password twice (to confirm you didn't make a typo). For this exercise, choose a reasonably complex password (a mix of letters, numbers, maybe a symbol).
- Enforce Password Aging:
  - Let's configure the system so that the `appadmin` user must change their password at least every 90 days, using the `chage` command (change age): `sudo chage -M 90 appadmin`
    - Explanation:
      - `chage`: The command dedicated to changing user password expiry information.
      - `-M 90`: Sets the Maximum number of days the password remains valid to 90. After 90 days from the last password change, the user will be forced to set a new one upon login.
  - Let's verify the change took effect. The `-l` option lists the current aging settings for the specified user: `sudo chage -l appadmin`
    - Carefully read the output. Find the line that says "Maximum number of days between password change". It should now display `90`.
- Grant `sudo` Privileges Correctly:
  - We want `appadmin` to be able to perform administrative tasks, but we won't give them the root password or let them log in as root. Instead, we'll add them to the special group whose members are allowed to use the `sudo` command. The name of this group varies by distribution family:
    - On Debian, Ubuntu, and derivatives: the group is usually named `sudo`.
    - On Fedora, CentOS, RHEL, and derivatives: the group is usually named `wheel`.
  - We use the `usermod` command to modify the `appadmin` user. We want to append (`-a`) the user to one or more supplementary Groups (`-G`).
  - Command for Debian/Ubuntu systems: `sudo usermod -aG sudo appadmin`
  - Command for CentOS/Fedora/RHEL systems: `sudo usermod -aG wheel appadmin`
    - Explanation:
      - `usermod`: The command to modify an existing user account's properties.
      - `-a`: This flag means append. It's crucial! It adds the user to the specified group(s) in addition to any other supplementary groups they might already be in. Without `-a`, the `-G` option would replace all existing supplementary groups with the one specified, which is usually not what you want.
      - `-G`: This flag indicates that you are specifying supplementary Group(s) to add the user to.
      - `sudo` or `wheel`: The actual name of the administrative group on your specific Linux distribution.
      - `appadmin`: The username of the account we are modifying.
  - Verification: Group membership changes often don't take effect in a user's current shell session. The user needs to log out and log back in, or you need to start a new login shell for them.
    - Switch to the `appadmin` user using `su` (substitute user) with the `-` flag, which simulates a full login (important for group memberships and environment): `su - appadmin` (enter the `appadmin` password you set earlier).
    - Now, as `appadmin`, try to run a command that requires root privileges, prefixing it with `sudo`. Updating the package list is a safe command for testing: `sudo apt update` (Debian/Ubuntu) or `sudo dnf check-update` (Fedora/RHEL family).
    - The system should prompt you with `[sudo] password for appadmin:`. Enter the `appadmin` user's password (not root's).
    - If the command runs successfully (e.g., starts checking repositories), then `sudo` access is configured correctly! If you get an error like "`appadmin` is not in the sudoers file. This incident will be reported.", double-check that you added the user to the correct group name (`sudo` or `wheel`) for your distribution.
    - Type `exit` to close the `appadmin` shell and return to your original user session.
- Disable Direct Root Login via SSH (Critical Step!):
  - SSH (Secure Shell) is the standard, encrypted way to get a command-line login on remote Linux systems. Allowing the `root` user to log in directly via SSH is a major security risk. We must disable it in the SSH server's configuration file.
  - The configuration file for the SSH daemon (`sshd`) is typically `/etc/ssh/sshd_config`. We need to edit it using `sudo` and a text editor like `nano` (beginner-friendly) or `vim`: `sudo nano /etc/ssh/sshd_config`
  - Scroll through the file (use the arrow keys or PageDown in `nano`). Look for a line starting with `PermitRootLogin`:
    - It might be commented out with a `#` (e.g., `#PermitRootLogin prohibit-password`). Comments are ignored by the server.
    - It might already exist and be set to `yes` (e.g., `PermitRootLogin yes`).
    - It might exist and be set to `prohibit-password` or `without-password` (these allow root login only with SSH keys, but we want to disable it completely).
  - Make sure there is an uncommented line (no `#` at the beginning) that reads exactly: `PermitRootLogin no`
    - Explanation: This configuration directive explicitly instructs the SSH server (`sshd`) to reject any login attempt using the `root` username, regardless of whether the password or key is correct. This forces all remote administrative access to happen by logging in as a non-root user (like `appadmin`) first, and then using `sudo` to elevate privileges when needed. This is much more secure and provides a better audit trail.
  - Save the changes and exit the editor. (In `nano`: press `Ctrl+O` to Write Out, press `Enter` to confirm the filename, then press `Ctrl+X` to Exit.)
  - This configuration change will not take effect until the SSH service is restarted. The command varies slightly depending on the system's service manager (most modern systems use `systemd`):
    - On systems using systemd (most common – Ubuntu 16.04+, CentOS 7+, Fedora, Debian 8+): `sudo systemctl restart sshd` (on Debian/Ubuntu the unit may be named `ssh`, in which case use `sudo systemctl restart ssh`).
    - On older systems using SysVinit (e.g., CentOS 6): `sudo service sshd restart`
  - Verification (Crucial!): Before you log out of your current session (especially if you are connected via SSH yourself!), open a new, separate terminal window or SSH connection. Do not close the terminal where you just restarted the service yet!
    - In the new terminal, try to SSH into the machine as root: `ssh root@localhost` (use `localhost` if testing on the machine itself, or its IP address if testing from another machine). This attempt must fail. You should get an error like "Permission denied, please try again.", or the connection might just close. Even the correct root password must not grant access.
    - Next, in the same new terminal, try to SSH as the user you created: `ssh appadmin@localhost`. This attempt must succeed. It should prompt for `appadmin`'s password, and after you enter it correctly, you should get a shell prompt as the `appadmin` user.
    - Only once you have confirmed that root login fails AND your regular user login succeeds can you be sure the change worked correctly. Now it's safe to close the new test terminal, and your original terminal if desired. If login failed for `appadmin`, or root login still worked, go back to your original terminal (which you wisely kept open!), carefully re-edit `/etc/ssh/sshd_config`, fix the mistake (check for typos!), save, restart `sshd` again, and re-verify using a new terminal.
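Two sanity checks can save you from locking yourself out when editing the SSH configuration; both assume a reasonably modern OpenSSH and root privileges:

```shell
# Syntax-check the edited configuration; prints nothing and exits 0 when valid
sudo sshd -t

# Print the effective value of the directive we changed
sudo sshd -T | grep -i permitrootlogin
```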
Workshop Summary: Fantastic! You've created a new user, applied a simple password expiration policy, correctly configured `sudo` access for that user, and significantly enhanced security by disabling direct root login over SSH. These are fundamental steps in securing any Linux system.
2. Filesystem Security Protecting Your Data
Just like managing who can enter the system, we need to control what they can do with the files and directories stored within it. Linux has a powerful and granular permission system to protect data confidentiality (preventing unauthorized reading) and integrity (preventing unauthorized changes or deletion). Understanding and managing these permissions is essential for security.
Understanding Standard Linux Permissions
Every file and directory in Linux has a set of permissions associated with it. These permissions define who can do what with the item. Access is controlled for three distinct categories of users:
- User (u): The owner of the file or directory. By default, this is the user who created the item. The owner typically has the most control.
- Group (g): Each file/directory also belongs to a group. Users who are members of this group may have different access rights than others who are not the owner and not in the group. This allows for controlled sharing among team members.
- Others (o): Represents everyone else on the system who is not the owner and not a member of the group. Permissions for "others" are usually the most restrictive.
For each of these three categories (User, Group, Others), there are three basic permission types, often remembered as Read, Write, Execute (rwx):
- Read (r):
  - Files: Allows opening and viewing the contents of the file (e.g., using `cat`, `less`, `more`, or opening it in a text editor).
  - Directories: Allows listing the names of the files and subdirectories inside that directory (e.g., using the `ls` command). You must also have execute permission on the directory to use `ls` effectively or access the files within it.
- Write (w):
  - Files: Allows modifying or changing the content of the file. This includes editing it, appending to it, or overwriting it entirely. It also usually grants permission to delete the file, although this interaction is more complex and depends on the permissions of the directory containing the file (see the Sticky Bit later).
  - Directories: This permission on a directory is quite powerful. It allows you to:
    - Create new files or subdirectories within this directory.
    - Delete files or subdirectories from this directory (regardless of who owns the file/subdirectory, unless the sticky bit is set).
    - Rename files or subdirectories within this directory.
    - Crucially, you also need execute permission (`x`) on the directory to perform any write operations within it.
- Execute (x):
  - Files: Allows the system to run the file as a program or script. The file must actually contain executable code (like a compiled program) or be a script starting with a "shebang" line (like `#!/bin/bash`) that tells the system which interpreter to use. This permission is not needed for reading or writing data files.
  - Directories: Allows you to enter (or "traverse") the directory, for example using the `cd` command. It also allows you to access files or subdirectories inside it, assuming you have the necessary permissions on those items as well. You cannot access anything inside a directory (not even metadata like file sizes with `ls -l`) without execute permission on the directory itself.
Viewing Permissions:
The `ls -l` command is your primary tool for viewing permissions (the `-l` stands for "long listing format"). Let's look at some example output:

```
ls -l /home/appadmin/config.yml
-rw-rw---- 1 appadmin developers 1024 Nov 11 09:15 /home/appadmin/config.yml

ls -ld /srv/webdata
drwxr-x--- 5 www-data www-data 4096 Oct 30 17:00 /srv/webdata
```

(`ls -ld` is used for directories to show the permissions of the directory itself, not of the contents inside it.)

Let's decode the beginning part (`-rw-rw----` or `drwxr-x---`):
- Character 1: File Type
  - `-`: A regular file.
  - `d`: A directory.
  - `l`: A symbolic link (a pointer to another file or directory).
  - Others exist (`c` = character device, `b` = block device, `p` = named pipe, `s` = socket) but are less common in daily user interaction.
- Characters 2-10: Permission Bits (rwx for User, Group, Others)
  - Set 1 (Chars 2-4): User/Owner Permissions. `rw-` means the owner (`appadmin` in the first example) can Read and Write, but not Execute. `rwx` means the owner (`www-data` in the second example) has Read, Write, and Execute permissions.
  - Set 2 (Chars 5-7): Group Permissions. `rw-` means members of the owning group (`developers`) can Read and Write. `r-x` means members of the group (`www-data`) can Read and Execute, but not Write.
  - Set 3 (Chars 8-10): Others Permissions. `---` means Others (neither the owner nor in the group) have no permissions at all.
The rest of the `ls -l` line typically shows:
- Number of hard links to the file/directory.
- Username of the owner.
- Name of the owning group.
- Size of the file in bytes (or space used by directory metadata).
- Date and time of the last modification.
- The name of the file or directory.
Changing Permissions (`chmod`):
The `chmod` (change mode) command is used to modify these `rwx` permissions. To use `chmod` on a file or directory, you must either be its owner or be the `root` user. `chmod` can operate in two useful modes:
- Symbolic Mode (Using Letters – often easier for targeted changes):
  - Specify Who: Use `u` (user/owner), `g` (group), `o` (others), or `a` (all three). If you don't specify "who", it defaults to "all" (`a`), modified by your `umask` (see later), which can be confusing, so it's best to be explicit.
  - Specify Action: Use `+` to add a permission, `-` to remove a permission, or `=` to set the permissions exactly (removing any others in that category).
  - Specify Permission: Use `r`, `w`, or `x`.
  - Examples:
    - `chmod u+x analyze_data.sh`: Adds execute permission for the user (owner) only.
    - `chmod g-w shared_document.txt`: Removes write permission for the group.
    - `chmod o=r README`: Sets permissions for others to exactly read-only (removing any `w` or `x` they might have had).
    - `chmod ug+rw project_files/`: Adds read and write for both user and group on the directory `project_files`.
    - `chmod a-x sensitive_data.db`: Removes execute permission for all (user, group, and others). Useful for ensuring data files aren't accidentally run.
- Numeric (Octal) Mode (Using Numbers – often faster for setting all permissions at once):
  - This mode uses a three-digit number, where each digit represents the permissions for User, Group, and Others, respectively.
  - Each permission has a numeric value: `r` (read) = 4, `w` (write) = 2, `x` (execute) = 1, no permission = 0.
  - Add the values for the desired permissions within each category:
    - `rwx` = 4 + 2 + 1 = 7
    - `rw-` = 4 + 2 + 0 = 6
    - `r-x` = 4 + 0 + 1 = 5
    - `r--` = 4 + 0 + 0 = 4
    - `-wx` = 0 + 2 + 1 = 3
    - `-w-` = 0 + 2 + 0 = 2
    - `--x` = 0 + 0 + 1 = 1
    - `---` = 0 + 0 + 0 = 0
  - Combine the three digits (User, Group, Others).
  - Examples:
    - `chmod 755 web_script.cgi`: Sets `rwxr-xr-x` (User=7, Group=5, Others=5). Common for scripts or programs that others need to execute, and for directories others need to access.
    - `chmod 644 default_config.conf`: Sets `rw-r--r--` (User=6, Group=4, Others=4). Common for configuration or data files that others should only read.
    - `chmod 600 ~/.ssh/id_rsa`: Sets `rw-------` (User=6, Group=0, Others=0). Essential for private keys and highly sensitive data; only the owner can read/write.
    - `chmod 700 ~/private_scripts`: Sets `rwx------` (User=7, Group=0, Others=0). Makes a directory completely private to the owner.
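You can confirm how an octal mode maps to the symbolic string with GNU `stat` on a throwaway file:

```shell
f=$(mktemp)                # temporary file
chmod 640 "$f"             # rw- for owner, r-- for group, --- for others
stat -c '%a %A' "$f"       # prints: 640 -rw-r-----
rm "$f"
```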
Changing Ownership

Sometimes you need to change which user owns a file or which group it belongs to, for instance, when transferring responsibility or setting up shared project areas.
- `chown` (Change Owner): This command changes the user and/or group ownership of a file or directory. To change the owner to someone else, you generally must be the `root` user (`sudo`).
- `chgrp` (Change Group): This command changes only the group ownership. To use this, you must typically be the owner of the file AND also be a member of the new group you are changing it to (or be `root`).
- Recursive Operation (`-R`): Both `chown` and `chgrp` support the `-R` (or `--recursive`) option. This applies the ownership change to the specified directory and all files and subdirectories contained within it, all the way down. Warning: Be extremely careful when using `-R`! Applying it to the wrong directory (like `/` or `/usr`) can severely break your system. Double-check your command before running it with `-R`.
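Here is a sketch of `chgrp -R` on a scratch tree. To keep it runnable without root, it changes the group to the caller's own primary group (remember: `chown` to another user, or `chgrp` to a group you are not in, would require `sudo`):

```shell
# Recursive group change on a throwaway directory tree
d=$(mktemp -d)
mkdir -p "$d/sub"
touch "$d/sub/file.txt"
chgrp -R "$(id -gn)" "$d"        # -R descends into sub/ and file.txt
stat -c '%G' "$d/sub/file.txt"   # prints the group name just applied
rm -rf "$d"
```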
Special Permissions SUID, SGID, Sticky Bit
Beyond the basic `rwx` permissions, Linux offers three "special" permission bits that grant additional capabilities or modify behavior in specific ways. These are displayed in the `ls -l` output by replacing the `x` in the user, group, or other permission slots, or by using an uppercase letter if the underlying execute bit is not set.
- SUID (Set User ID):
    - Applies to: Executable files only.
    - Functionality: When an executable file with the SUID bit set is run, the process executes with the privileges (the effective user ID) of the file's owner, not the user who actually launched it.
    - Symbol in `ls -l`: An `s` in the user's execute slot (`rwsr-xr-x`). If the owner doesn't actually have execute permission (usually an error), it shows as an uppercase `S` (`rwSr-xr-x`).
    - Purpose/Example: The `/usr/bin/passwd` command allows regular users to change their own passwords. This action requires modifying the `/etc/shadow` file, which is owned by `root` and highly protected. The `passwd` executable file itself is owned by `root` and has the SUID bit set (`-rwsr-xr-x`). When a normal user like `appadmin` runs `passwd`, the `passwd` process runs as if it were root, granting it the necessary permission to update the shadow file. The program is carefully written to only allow users to modify their own entry.
    - Security Consideration: SUID executables owned by `root` are powerful and potentially dangerous. If a vulnerability (like a buffer overflow or command injection flaw) exists in such a program, an attacker could potentially exploit it to gain root privileges on the system (this is called privilege escalation). Therefore, the number of SUID root programs should be minimized and they should be carefully vetted.
    - Setting with `chmod` (Numeric): Add `4` to the beginning of the three-digit octal code. E.g., `chmod 4755 my_suid_tool` sets `rwsr-xr-x` (4=SUID, 7=User rwx, 5=Group r-x, 5=Others r-x).
    - Setting with `chmod` (Symbolic): `chmod u+s my_suid_tool`.
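Setting the SUID bit only changes mode bits on a file you own, so you can observe the `s` appear without root (on a scratch file, the bit has no privilege effect, it is just visible):

```shell
# Set SUID on a throwaway file and inspect the result
f=$(mktemp)
chmod 4755 "$f"            # 4 = SUID, then rwxr-xr-x
stat -c '%a %A' "$f"       # -> 4755 -rwsr-xr-x
rm -f "$f"
```

Note the lowercase `s` in the user execute slot, exactly as described above.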
- SGID (Set Group ID):
    - Applies to: Executable files AND directories.
    - Functionality (Files): When an executable file with the SGID bit set is run, the process executes with the privileges (the effective group ID) of the file's group, rather than the primary group of the user running it. Less common than SUID for executables, but sometimes used for programs that need access to group-specific resources.
        - Symbol: `s` or `S` in the group's execute slot (`rwxr-sr-x` or `rwxr-Sr-x`).
        - Numeric: Add `2` to the beginning (`chmod 2755 my_sgid_tool`).
        - Symbolic: `chmod g+s my_sgid_tool`.
    - Functionality (Directories - Very Common & Useful): When the SGID bit is set on a directory:
        - Any new file or subdirectory created within that directory will automatically inherit the group ownership from the directory itself, instead of inheriting the primary group of the user creating the file.
        - Any new subdirectory created within it will also automatically inherit the SGID bit (`g+s`) itself.
        - Symbol: `s` or `S` in the group's execute slot (`rwxrwsr-x`).
        - Purpose/Example: Essential for shared project directories. If you have `/srv/projects/alpha` owned by group `alpha_team` and set `chmod g+s /srv/projects/alpha`, then when user `carol` (member of `alpha_team`) creates `new_report.txt` inside, the file will be owned by `carol:alpha_team` automatically, ensuring other team members have the intended group access based on the directory's permissions. Without SGID, it might have defaulted to `carol:carol`.
        - Numeric: Add `2` to the beginning (`chmod 2775 shared_dir`) (2=SGID, 7=User rwx, 7=Group rwx, 5=Others r-x).
        - Symbolic: `chmod g+s shared_dir`.
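The mode bits are easy to see on a scratch directory. (Because the scratch directory's group is your own primary group here, the group-inheritance effect is not dramatic; it becomes visible when the directory belongs to a shared project group, as in the workshop below.)

```shell
# Set SGID on a throwaway directory and inspect the result
d=$(mktemp -d)
chmod 2770 "$d"            # 2 = SGID, then rwxrwx---
stat -c '%a %A' "$d"       # -> 2770 drwxrws---
rm -rf "$d"
```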
- Sticky Bit:
    - Applies to: Directories only.
    - Functionality: Primarily affects file deletion. In a directory with the sticky bit set (and where users have write and execute permission), a user can only delete or rename files that they own. They cannot delete or rename files owned by other users, even if the directory permissions (`w` for group or other) would normally allow it.
    - Symbol: `t` in the others' execute slot (`rwxrwxrwt`). If others don't have execute permission, it shows as `T` (`rwxrwxrwT`).
    - Purpose/Example: The classic example is the `/tmp` directory, used for temporary files by many users and applications. It's typically world-writable (`rwxrwxrwt`, mode 1777). The sticky bit (`t`) is crucial here; it allows anyone to create files in `/tmp`, but prevents user `alice` from deleting files created by user `bob`, avoiding chaos and potential denial-of-service.
    - Numeric: Add `1` to the beginning (`chmod 1777 /shared/tmp`) (1=Sticky, 7=User rwx, 7=Group rwx, 7=Others rwx).
    - Symbolic: `chmod +t /shared/tmp`.
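You can reproduce the `/tmp`-style permission string on a scratch directory of your own (this only sets the bits; demonstrating the deletion restriction needs two different users, which the workshop below walks through):

```shell
# Give a throwaway directory the same mode as a typical /tmp
d=$(mktemp -d)
chmod 1777 "$d"            # 1 = sticky, then rwxrwxrwx
stat -c '%a %A' "$d"       # -> 1777 drwxrwxrwt
rm -rf "$d"
```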
Finding Files with Special Permissions:
Because SUID/SGID executables (especially if owned by root) and world-writable files/directories can pose security risks, it's good practice to periodically search for them to ensure they are necessary and configured correctly.
```
# Find all SUID files owned by root
sudo find / -type f -user root -perm /4000 -ls
# Find all SGID files
sudo find / -type f -perm /2000 -ls
# Find all world-writable files
sudo find / -type f -perm /0002 -ls
# Find all world-writable directories
sudo find / -type d -perm /0002 -ls
# Find all directories with the sticky bit set
sudo find / -type d -perm /1000 -ls
```
- Explanation of `find` options:
    - `find /`: Start searching from the root directory (`/`). You can specify other starting paths.
    - `-type f` or `-type d`: Search only for files (`f`) or directories (`d`).
    - `-user root`: Search only for files owned by the user `root`.
    - `-perm /<mode>`: Search for files where any of the specified permission bits in `<mode>` are set. The leading slash (`/`) signifies this "any match" logic. `4000` represents the SUID bit, `2000` the SGID bit, `1000` the Sticky bit, and `0002` the write bit for "others".
    - `-ls`: For each file found, print a detailed listing (similar to `ls -dils` output), including permissions and ownership.
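To convince yourself that `-perm /4000` really matches the SUID bit, you can plant one in a scratch tree (searching a throwaway directory instead of `/` keeps the run fast and root-free):

```shell
# Plant a SUID file and confirm find's "any match" permission test sees it
d=$(mktemp -d)
touch "$d/tool" "$d/ordinary"
chmod 4700 "$d/tool"              # SUID + rwx------
find "$d" -type f -perm /4000     # prints only the SUID file's path
rm -rf "$d"
```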
Sensible Defaults and umask

When you create a new file (e.g., `touch report.txt`) or a new directory (e.g., `mkdir project_beta`), what permissions do they get automatically? This is determined by the `umask` (user file-creation mode mask).

The `umask` is a value that specifies the permissions that should be removed or "masked out" from the system's default base permissions when a new file or directory is created. Think of it as the opposite of `chmod`.
- Default Base Permissions (Theoretical Maximums):
    - For new files: `666` (`rw-rw-rw-`). Execute permission (`x`) is generally not included by default for safety, as most new files are data, not programs.
    - For new directories: `777` (`rwxrwxrwx`). Execute permission (`x`) is included because it's necessary to enter or traverse a directory.
- How `umask` Works: The `umask` value (represented in octal, like permissions) is subtracted from the base permission value using bitwise logic (specifically, `base_permissions AND NOT umask`).
- Common `umask` Value: `0022` (often displayed as `022`)
    - This mask represents `--- -w- -w-` (no user mask, remove write for group, remove write for others).
    - New Files: `666` (`rw-rw-rw-`) `AND NOT` `022` (`--- -w- -w-`) results in `644` (`rw-r--r--`).
    - New Directories: `777` (`rwxrwxrwx`) `AND NOT` `022` (`--- -w- -w-`) results in `755` (`rwxr-xr-x`).
    - Result: Owner gets read/write (files) or rwx (dirs), while group and others get read-only (files) or read/execute (dirs). This is a common, reasonably permissive default.
- More Restrictive `umask`: `0027` (often displayed as `027`)
    - Mask: `--- -w- rwx` (no user mask, remove write for group, remove read/write/execute for others).
    - New Files: `666` (`rw-rw-rw-`) `AND NOT` `027` (`--- -w- rwx`) results in `640` (`rw-r-----`).
    - New Directories: `777` (`rwxrwxrwx`) `AND NOT` `027` (`--- -w- rwx`) results in `750` (`rwxr-x---`).
    - Result: Owner gets full rights, group gets read (files) or read/execute (dirs), but others get no permissions at all. This is a much better default for security on multi-user systems.
- Very Restrictive `umask`: `0077` (often displayed as `077`)
    - Mask: `--- rwx rwx` (no user mask, remove all for group, remove all for others).
    - New Files: `666` (`rw-rw-rw-`) `AND NOT` `077` (`--- rwx rwx`) results in `600` (`rw-------`).
    - New Directories: `777` (`rwxrwxrwx`) `AND NOT` `077` (`--- rwx rwx`) results in `700` (`rwx------`).
    - Result: New files and directories are completely private to the owner by default. This is often the default `umask` for the `root` user and is recommended for users handling sensitive data.
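You can verify these calculations empirically. Running the `umask` change inside a subshell (the parentheses) keeps your current shell's setting untouched:

```shell
# Observe the effect of umask 027 on freshly created files and directories
(
  umask 027
  d=$(mktemp -d)
  touch "$d/file"                    # base 666 AND NOT 027 -> 640
  mkdir "$d/dir"                     # base 777 AND NOT 027 -> 750
  stat -c '%a' "$d/file" "$d/dir"    # -> 640 then 750
  rm -rf "$d"
)
```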
Viewing and Setting `umask`:
- View Current `umask`: Simply type `umask` in your shell. It will usually output the four-digit octal value (e.g., `0022`). Use `umask -S` to see it in symbolic form (e.g., `u=rwx,g=rx,o=rx`).
- Set `umask` (Current Session): `umask 027`. This only affects the current shell session and processes started from it.
- Set `umask` (Persistently): To make the change permanent for a user, you typically need to edit their shell startup files (like `~/.bashrc` or `~/.profile`) and add the `umask 027` command there. For a system-wide default for new users, administrators might edit files like `/etc/profile`, `/etc/bashrc`, or `/etc/login.defs` (look for a `UMASK` setting in `/etc/login.defs`).
Setting a more restrictive default `umask` (like `027` or even `077`) is a simple but effective hardening measure.
Workshop Practicing Filesystem Permissions

Goal: Get hands-on experience using `chmod` and `chown`, and observe the practical effects of standard permissions, SGID on directories, and the Sticky Bit.

Prerequisites:
- A Linux system with `sudo` access.
- The `appadmin` user created in the previous workshop (or any standard, non-root user you can log in as).
- Ability to switch between your primary administrative user (the one with `sudo`) and `appadmin` (e.g., using `su - appadmin` or separate terminal logins).

Let's Get Started!
- Prepare a Shared Workspace (as your admin user with `sudo`):
    - First, let's create a directory intended for collaboration: `sudo mkdir /srv/shared_project`. We place it in `/srv` (often used for service data).
    - Check its initial ownership and permissions with `ls -ld /srv/shared_project`. It's likely owned by `root`.
    - Let's create a dedicated group for our project team: `sudo groupadd projectalpha`.
    - Add both your administrative user (replace `your_admin_user` with your actual username) and the `appadmin` user to this new group using `usermod -aG`: `sudo usermod -aG projectalpha your_admin_user` and `sudo usermod -aG projectalpha appadmin`.
    - Very Important: Group membership changes require a new login session. Log out of both your admin user and `appadmin` sessions completely, then log back in. Alternatively, open completely new terminal windows for each. Verify the new group membership for both users:

      ```
      # Run this as your admin user after logging back in
      groups
      # Switch to appadmin (this starts a new session) and check
      su - appadmin
      groups
      exit # Return to admin user shell
      ```

      You must see `projectalpha` listed in the output for both users before proceeding.
- Standard Permissions and Ownership:
    - Let's configure the directory for the team. We want the directory itself to be owned by `root` (so regular users can't delete the directory itself), but the group should be `projectalpha`. We'll give the owner (`root`) full control (`rwx`), group members read/write/execute (`rwx`), and deny all access to others (`---`). The octal mode for this is `770`.

      ```
      # Change group ownership
      sudo chgrp projectalpha /srv/shared_project
      # Set permissions: rwxrwx---
      sudo chmod 770 /srv/shared_project
      # Verify the changes
      ls -ld /srv/shared_project
      ```

    - The output should now show something like `drwxrwx--- 1 root projectalpha ... /srv/shared_project`.
    - Now, let's see what happens when `appadmin` creates a file inside. Switch to the `appadmin` user: `su - appadmin`
    - Navigate into the shared directory (this works because `appadmin` is in `projectalpha`, which has `rwx` permissions) and create a file, e.g. `cd /srv/shared_project && echo "First file" > appadmin_v1.txt && ls -l appadmin_v1.txt`.
    - Observe Carefully: The file is owned by `appadmin`. What is the group? It's most likely `appadmin`'s primary group (which might also be named `appadmin`), not `projectalpha`. The permissions will depend on `appadmin`'s `umask` (likely `rw-r--r--` or `644`). This means other members of `projectalpha` might only be able to read this file, not modify it, which hinders collaboration.
- Using SGID for Collaboration:
    - We can fix the group ownership problem using the Set Group ID (SGID) bit on the parent directory (`/srv/shared_project`).
    - Exit the `appadmin` shell to return to your administrative user: `exit`
    - As your admin user, add the SGID bit (`s`) to the group permissions of the shared directory. We can use symbolic mode (`g+s`). We also want to ensure group members get write permission by default if the `umask` was restrictive, so we can combine `chmod g+ws`. Or numerically, we add `2000` to the existing `770` permissions, resulting in `2770`.

      ```
      # Using symbolic mode (adds write and sgid for group)
      sudo chmod g+ws /srv/shared_project
      # Alternatively, using numeric mode (sets User=rwx, Group=rwx+s, Other=---)
      # sudo chmod 2770 /srv/shared_project
      # Verify the permissions - look for the 's' in the group execute slot
      ls -ld /srv/shared_project
      ```

    - The permissions should now look like `drwxrws---`. The `x` for the group permissions has been replaced by an `s`.
    - Switch back to `appadmin`: `su - appadmin`
    - Go into the directory and create a new file:

      ```
      cd /srv/shared_project
      echo "AppAdmin second file (post-SGID)." > appadmin_v2.txt
      ls -l appadmin_v1.txt appadmin_v2.txt
      ```

    - Compare the group ownership! `appadmin_v1.txt` (created before SGID) still belongs to `appadmin`'s primary group. But `appadmin_v2.txt` (created after SGID was set on the directory) should now automatically belong to the `projectalpha` group, inherited from the parent directory! This ensures consistent group ownership for collaboration.
- Understanding the Sticky Bit:
    - Let's repurpose our directory to simulate a temporary, world-writable space like `/tmp`, where anyone can create files but shouldn't delete others' files.
    - Exit the `appadmin` shell: `exit`
    - As your admin user, change the permissions to be world-writable (`777`) and add the sticky bit (`+t` or `1000`). The full numeric mode is `1777`.

      ```
      # Using numeric mode (Sticky=1, User=7, Group=7, Other=7)
      sudo chmod 1777 /srv/shared_project
      # Verify - look for the 't' at the end of the permissions string
      ls -ld /srv/shared_project
      ```

    - Permissions should now be `drwxrwxrwt`.
    - Switch to `appadmin`: `su - appadmin`
    - Go into the directory and create a file (e.g., `cd /srv/shared_project && echo "AppAdmin temp file" > appadmin_temp.txt`).
    - Now, in a separate terminal, log in as `your_admin_user` (or use the shell you were in before `su`). Navigate to the shared directory: `cd /srv/shared_project`
    - As `your_admin_user`, try to delete the file created by `appadmin` (without `sudo`, e.g. `rm appadmin_temp.txt`).
    - Observe: You should receive an "Operation not permitted" error! Even though `your_admin_user` has write permission on the directory (it's `rwxrwxrwt`), the sticky bit (`t`) prevents non-owners from deleting files within this directory.
    - Now, as `your_admin_user`, create and then delete your own file in the same directory:

      ```
      echo "Admin user temp file" > admin_temp.txt
      ls -l admin_temp.txt
      rm admin_temp.txt
      ls -l admin_temp.txt # Should show "No such file or directory"
      ```

    - This should work without error because you are the owner of `admin_temp.txt`.
- Clean Up (as your admin user with `sudo`):
    - Let's remove the test directory and the group we created. Use `rm -rf` carefully: `sudo rm -rf /srv/shared_project`, then `sudo groupdel projectalpha`.

Workshop Summary: In this workshop, you practiced changing ownership (`chown`, `chgrp`) and permissions (`chmod` using both symbolic and numeric modes). More importantly, you observed the practical application of the SGID bit for ensuring correct group ownership in collaborative directories and the function of the Sticky Bit in protecting files within publicly writable directories. These are essential tools for managing filesystem security effectively.
3. Network Security Guarding the Gates
Most Linux systems are connected to networks – the internet, a local office network, or both. This connectivity is essential for functionality but also exposes them to potential threats from remote attackers. Securing network services and controlling the traffic flowing in and out of your system are vital hardening tasks. Think of this as setting up the border control and security checkpoints for your system.
Firewall Configuration
A firewall is a fundamental network security tool, acting as a digital gatekeeper for your system's network interfaces. It inspects network traffic attempting to enter (ingress) or leave (egress) your system and decides whether to allow or block it based on a predefined set of rules (a policy). The most common and recommended strategy for servers is to implement a default deny policy for incoming traffic: block everything coming in unless a specific rule explicitly allows it.
Key Concepts and Tools:
- Netfilter: This is the actual packet filtering framework built deep inside the Linux kernel. It's the powerful engine that examines network packets, matches them against rules, and takes actions like accepting, dropping, or modifying them. You don't interact with Netfilter directly, but through user-space tools.
- `iptables`: The classic, veteran command-line utility for configuring IPv4 Netfilter rules (and `ip6tables` for IPv6). It's extremely powerful and flexible, allowing for very complex rule sets, but its syntax is notoriously difficult and managing large rule sets can be cumbersome. While still prevalent, it's often managed by higher-level tools now.
- `nftables`: The modern replacement for the `iptables` suite (`iptables`, `ip6tables`, `arptables`, `ebtables`). `nftables` provides a single, unified framework with a more consistent and less complex syntax. It offers better performance, atomic rule updates (making changes safer), and improved support for modern networking concepts like sets. It also interfaces directly with the Netfilter subsystem. Many modern distributions are migrating to `nftables` as the default backend.
- User-Friendly Front-Ends: Because `iptables` and `nftables` command lines can be intimidating, several front-end tools have been developed to simplify common firewall management tasks. These tools generate the necessary `iptables` or `nftables` rules behind the scenes.
    - UFW (Uncomplicated Firewall): Very popular on Ubuntu and other Debian-based distributions. Its goal is simplicity for common scenarios. It provides easy commands for allowing or denying incoming connections based on port numbers, service names (like `ssh`, `http`), or source IP addresses. UFW manages either `iptables` or `nftables` underneath, depending on the system configuration. It's an excellent choice for beginners and straightforward server/desktop setups.
    - `firewalld`: The default firewall management solution on Red Hat-based systems (Fedora, CentOS, RHEL). It introduces the concept of network zones (e.g., `public`, `internal`, `work`, `home`, `dmz`, `trusted`). Each zone represents a different level of trust and has its own set of firewall rules. Network interfaces (like `eth0`) are assigned to a zone. You then manage security by adding or removing allowed services (like `ssh`, `http`) or ports within specific zones. `firewalld` typically uses `nftables` as its backend on current systems.
Basic Firewall Strategy (Ingress - Incoming Traffic):
- Default Policy: Deny. This is the cornerstone. Configure the firewall's default behavior for incoming traffic to `DROP` or `REJECT` any packet that doesn't match an explicit `ALLOW` rule. `DROP` silently discards the packet (often preferred as it doesn't give feedback to scanners), while `REJECT` sends back an error message (can sometimes be useful for debugging but less stealthy).
- Allow Essential Internal Traffic: Always permit traffic on the loopback interface (`lo`, IP address `127.0.0.1` for IPv4, `::1` for IPv6). Many local applications rely on this internal communication path. Both UFW and `firewalld` usually have implicit rules to allow this automatically.
- Allow Specific Required Services: Create explicit rules to allow incoming connections only for the network services your system needs to offer to the outside world. Be specific:
    - Specify the port number (e.g., `22` for SSH, `443` for HTTPS).
    - Specify the protocol (usually `tcp` or `udp`). Most common services like SSH, HTTP, HTTPS use TCP. Services like DNS or some VPNs might use UDP.
    - Example Services: SSH (TCP/22), HTTP (TCP/80), HTTPS (TCP/443), Mail (TCP/25, 587, 465, 143, 993), Databases (e.g., TCP/3306 for MySQL, TCP/5432 for PostgreSQL).
- Restrict Source IPs (If Possible): For maximum security, if a service should only be accessible from certain known IP addresses or network ranges (e.g., SSH access only from your office network, database access only from your application servers), add the source IP/network restriction to the allow rule. Example: Allow TCP port 22 only if the connection comes from `192.168.1.0/24`.
Egress (Outgoing Traffic):
- Default Allow (Common): Many simpler firewall setups allow all outgoing connections by default. This is generally considered less risky than allowing incoming connections because the connection is initiated from your trusted system out to the network.
- Default Deny (More Secure): For servers handling sensitive data or in high-security environments, you might implement a default deny policy for outgoing traffic as well. Then, you explicitly allow only the necessary outbound connections (e.g., to specific update servers on port 80/443, to specific DNS servers on UDP/53, to specific monitoring servers). This helps prevent malware from "phoning home" or exfiltrating data if the system gets compromised. This requires more careful planning.
Example Commands (Conceptual - see Workshop for practice):

- Using UFW (Debian/Ubuntu):
    - `sudo ufw status`: Check current status and rules.
    - `sudo ufw enable`: Activates the firewall using defined rules & defaults.
    - `sudo ufw default deny incoming`: Sets the default policy for incoming traffic.
    - `sudo ufw default allow outgoing`: Sets the default policy for outgoing traffic.
    - `sudo ufw allow ssh`: Allows traffic based on the service name (looks up port 22/tcp).
    - `sudo ufw allow 8080/tcp`: Allows TCP traffic specifically on port 8080.
    - `sudo ufw allow from 192.168.1.100`: Allows all traffic from a specific IP.
    - `sudo ufw allow from 10.0.0.0/24 to any port 3306 proto tcp`: Allows MySQL access only from the 10.0.0.x network.
    - `sudo ufw deny 111`: Explicitly denies traffic on port 111 (often used by `rpcbind`).
    - `sudo ufw delete allow 8080/tcp`: Removes a previously added rule.
    - `sudo ufw status numbered`: Shows rules with numbers for easier deletion (`sudo ufw delete <number>`).
- Using `firewalld` (Fedora/CentOS/RHEL):
    - `sudo systemctl status firewalld`: Checks if the service is running.
    - `sudo firewall-cmd --state`: Simple running/not running status.
    - `sudo firewall-cmd --get-active-zones`: Shows which zones are active and which interfaces belong to them.
    - `sudo firewall-cmd --list-all`: Shows the complete configuration (services, ports, etc.) for the default zone. Use `--zone=<zonename>` for others.
    - `sudo firewall-cmd --permanent --add-service=https`: Permanently adds the HTTPS service to the default zone. Changes require `--reload`.
    - `sudo firewall-cmd --permanent --add-port=8443/tcp`: Permanently adds TCP port 8443. Requires `--reload`.
    - `sudo firewall-cmd --permanent --zone=internal --add-source=192.168.50.0/24`: Adds a source network to the 'internal' zone. Connections from this source will then be subject to the rules of the 'internal' zone. Requires `--reload`.
    - `sudo firewall-cmd --reload`: Makes permanent changes active in the running configuration.
    - `sudo firewall-cmd --remove-port=8443/tcp`: Removes a port from the running configuration (temporary unless `--permanent` was also used before reloading).
SSH Server Hardening (sshd)

SSH (Secure Shell) is the encrypted protocol used for virtually all remote command-line administration of Linux systems. Because it's the primary gateway for administrators (and potentially attackers), securing the SSH server daemon (`sshd`) is absolutely crucial. The main configuration file is typically `/etc/ssh/sshd_config`.

Essential `sshd` Hardening Steps:
- Disable Root Login: (We did this in Workshop 1.2, but it bears repeating!) Find the `PermitRootLogin` directive in `/etc/ssh/sshd_config` and ensure it is uncommented and set to `no`. This single change significantly reduces risk.
- Use Key-Based Authentication (Strongly Recommended over Passwords): Passwords can be weak, guessed via brute-force, phished, or stolen. SSH keys use public-key cryptography, which is far more secure.
    - Concept: You generate a pair of keys: a private key (e.g., `id_ed25519`) that you keep absolutely secret and protected on your local machine, and a corresponding public key (e.g., `id_ed25519.pub`) that you copy to the server. When you connect, the server uses your public key to issue a cryptographic challenge that only your private key can correctly answer. This proves your identity without ever sending your private key or a password over the network.
    - Setup Steps:
        - On your local computer (the one you connect from): Open a terminal and run `ssh-keygen`. It's recommended to use modern algorithms like Ed25519: `ssh-keygen -t ed25519`. Alternatively, use strong RSA: `ssh-keygen -t rsa -b 4096`.
        - Follow the prompts. It will ask where to save the key (usually `~/.ssh/id_ed25519` or `~/.ssh/id_rsa` is fine).
        - Crucially, enter a strong passphrase when prompted. This encrypts your private key file on your disk. If someone steals your private key file, they still need the passphrase to use it. Don't skip this!
        - Copy the public key to the server: The easiest way is often using the `ssh-copy-id` command from your local machine. This command automatically appends your public key (`~/.ssh/id_ed25519.pub` or similar) to the `~/.ssh/authorized_keys` file in the specified user's home directory on the server and sets the correct permissions.
        - Manual Copy (if `ssh-copy-id` isn't available): Copy the content of your public key file (e.g., `cat ~/.ssh/id_ed25519.pub`) and paste it as a new line into the `~/.ssh/authorized_keys` file on the server (you might need to create the `~/.ssh` directory first (`mkdir ~/.ssh`) and the `authorized_keys` file).
        - Set Strict Permissions on the Server: SSH requires strict permissions for security. Run these commands on the server as the user you're setting up keys for: `chmod 700 ~/.ssh` and `chmod 600 ~/.ssh/authorized_keys` (`700` means only the owner can read/write/execute the directory, `600` means only the owner can read/write the file).
        - Test: Try SSHing from your local machine to the server (`ssh user@server_ip`). It should now either connect directly (if your SSH agent has the key unlocked) or prompt you for the key's passphrase instead of the user's login password.
- Disable Password Authentication Entirely: Once you have confirmed that key-based login works reliably for all necessary administrative users, you should disable password-based logins completely in `/etc/ssh/sshd_config` for maximum security:

  ```
  # Set to no to disable conventional password authentication
  PasswordAuthentication no
  # Set ChallengeResponseAuthentication to no as well, as some PAM modules
  # might still allow password-like logins through this mechanism.
  ChallengeResponseAuthentication no
  # Depending on PAM setup, you might also need this if disabling all password auth
  # UsePAM no
  ```

  This makes brute-force password attacks completely ineffective against your SSH server.
- Change the Default SSH Port (Optional - Security through Obscurity): The default SSH port is 22/tcp. Automated scanners (bots) constantly hammer port 22 looking for vulnerable servers. Changing the port can significantly reduce the noise from these automated attacks in your logs. Find the `#Port 22` line in `sshd_config`, uncomment it, and change `22` to a high, unused port number (e.g., `2222`, `49123`).
    - If you change the port:
        - You must allow the new port in your system firewall (UFW/firewalld).
        - You must specify the new port when connecting using the `-p` option: `ssh -p <new_port> user@server_ip`.
    - Note: This does not stop a targeted attacker who can easily scan all ports to find the new SSH port. It's primarily effective against non-targeted, automated scans.
- Limit Access with `AllowUsers` or `AllowGroups`: Don't allow every user defined on the system to even attempt an SSH login. Explicitly define who is allowed using these directives in `sshd_config`. Add one of these lines (not usually both). Any user not matching these directives will be denied access immediately, even before authentication is attempted.
- Use Protocol 2 Only: SSH Protocol 1 is old, insecure, and should never be used. Ensure Protocol 2 is enforced (this is the default on virtually all modern systems).
- Limit Failed Authentication Attempts: Slow down any potential brute-force attacks (e.g., if password auth is somehow still enabled, or against key passphrases) by limiting the number of tries per connection. Set this to a low number like 3 or 4.
- Set Idle Timeout Interval: Automatically disconnect idle SSH sessions after a certain period. This prevents sessions from remaining open indefinitely if a user forgets to log out or their connection drops, reducing the window for session hijacking.
- Disable X11 Forwarding (If Not Needed): X11 forwarding allows you to run graphical applications on the server and display them on your local machine through the SSH tunnel. If you only need command-line access (most common for servers), disable this feature to slightly reduce the attack surface.
- Review Ciphers, MACs, and KexAlgorithms (Advanced): For very high security, you can restrict the allowed cryptographic algorithms to only the strongest modern ones, disabling older or weaker options. Check current security recommendations (e.g., from Mozilla's SSH guidelines). Example (syntax might vary slightly):

  ```
  # Example - consult current recommendations before applying!
  # KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
  # Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
  # MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
  ```
- Use
fail2ban
(Essential Complement): As discussed in the logging section (Logging and Monitoring Keeping an Eye on Things), using fail2ban
to automatically block IPs that fail SSH authentication repeatedly is a highly effective way to stop brute-force attacks, complementing the sshd_config
settings.
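Taken together, the directives above form a compact block in /etc/ssh/sshd_config. The usernames and exact values below are illustrative — adapt them to your environment (note that sshd_config comments must sit on their own lines):

```
# Only these accounts may even attempt SSH logins (illustrative usernames)
AllowUsers alice bob
# Protocol 2 is the default on all modern OpenSSH releases
Protocol 2
# Drop the connection after a few failed authentication attempts
MaxAuthTries 3
# Probe idle clients every 300s; disconnect after 2 unanswered probes (~10 min)
ClientAliveInterval 300
ClientAliveCountMax 2
# No graphical forwarding needed on a headless server
X11Forwarding no
```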
The Golden Rule of sshd_config
Changes:
- Edit the file (
sudo nano /etc/ssh/sshd_config
). - Save the file.
- Test Syntax:
sudo sshd -t
. Fix any reported errors! - Restart Service:
sudo systemctl restart sshd
. - Verify Login: Open a NEW terminal and try to log in. DO NOT close your original session until you confirm the new connection works as expected (and that disallowed methods, like root login or password login, fail).
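The test-then-restart part of this workflow can be chained so the restart only happens when the syntax check passes (on some Debian/Ubuntu releases the unit is named ssh.service rather than sshd.service):

```bash
# Restart only if the config parses cleanly; otherwise print a reminder.
sudo sshd -t && sudo systemctl restart sshd \
  || echo "sshd_config has errors or the restart failed - fix before logging out"
```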
Disabling Unnecessary Services
Every network service running on your system that listens for incoming connections (i.e., has an open network "port") potentially increases the attack surface. Even if a service is protected by the firewall, vulnerabilities might exist in the service itself that could be exploited if the firewall is ever misconfigured or bypassed. Furthermore, unnecessary services consume system resources (memory, CPU).
Applying the principle of least privilege here means: if the system doesn't need a service to perform its function, disable that service.
Steps to Identify and Disable Services:
-
Identify Listening Network Services: Use the
ss
command (socket statistics) to see which processes are listening for network connections. netstat
is an older alternative.
# Show TCP (t) and UDP (u) sockets, show Numeric ports/addresses (n),
# show the Process (p) using the socket, only show Listening (l) sockets.
sudo ss -tulnp
- Interpreting the Output: Look at the
Local Address:Port
column for lines where theState
isLISTEN
.0.0.0.0:<port>
: Listening on the specified<port>
on all available IPv4 interfaces. This service is reachable from the network. Pay close attention to these.127.0.0.1:<port>
orlocalhost:<port>
: Listening only on the IPv4 loopback interface. This service is generally only reachable from the system itself, which is much safer. Often used for local databases or inter-process communication.[::]:<port>
: Listening on the specified<port>
on all available IPv6 interfaces. Reachable from the network via IPv6. Pay close attention to these.[::1]:<port>
: Listening only on the IPv6 loopback interface (::1
is the IPv6 equivalent of127.0.0.1
). Only reachable locally via IPv6.- The
Users:(("<process_name>",pid=<PID>,...))
part shows the actual program name and its Process ID (PID) that has the port open.
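To separate network-reachable listeners from loopback-only ones at a glance, you can filter the output with awk. The sample output below is fabricated for illustration; on a real system you would pipe `sudo ss -tulnp` directly into the awk command:

```bash
#!/bin/sh
# Fabricated `ss -tulnp` output for illustration only.
sample='Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port Process
tcp   LISTEN 0      128        0.0.0.0:22        0.0.0.0:*     users:(("sshd",pid=800,fd=3))
tcp   LISTEN 0      128      127.0.0.1:631       0.0.0.0:*     users:(("cupsd",pid=900,fd=7))
tcp   LISTEN 0      511           [::]:80           [::]:*     users:(("nginx",pid=950,fd=6))'

# Keep LISTEN sockets bound to all interfaces; print address and owning process.
printf '%s\n' "$sample" | awk '
  $2 == "LISTEN" && ($5 ~ /^0\.0\.0\.0:/ || $5 ~ /^\[::\]:/) { print $5, $7 }'
```

Here sshd and nginx are flagged as reachable from the network, while cupsd (bound to 127.0.0.1) is filtered out as local-only.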
-
Analyze the Listening Services: For each service found listening on
0.0.0.0
or[::]
(i.e., accessible from the network), ask yourself:- What is this service? Use the process name. If unsure, search online for the process name (e.g., "what is cupsd linux").
- Is this service absolutely necessary for the primary function of this specific machine? (e.g., Does a database server really need a printing service (
cupsd
) running? Does a web server really need an email sending service (postfix
) listening unless it's supposed to send email?) - If it is needed, does it need to be accessible from the entire network? Could it be configured to listen only on
127.0.0.1
if only local processes need to connect to it? (This configuration depends on the specific service). Can access be restricted using firewall rules (as covered in Firewall Configuration)?
-
Stop and Disable Unneeded Services: If you determine a service is not required, you should stop it from running currently and prevent it from starting automatically the next time the system boots. On modern systems using
systemd
, use thesystemctl
command:- Stop the service immediately:
(Example:
sudo systemctl stop cups.service
) - Prevent the service from starting automatically on boot:
(Example:
sudo systemctl disable cups.service
) - Check for Related Socket Units: Some
systemd
services use "socket activation." The main service daemon doesn't run all the time. Instead, systemd
listens on the socket (e.g., cups.socket
). When a connection arrives on that socket, systemd
starts the actual service (cups.service
) to handle it. If you want to completely disable such a service, you need to stop and disable both the service unit and the socket unit: (Example: sudo systemctl stop cups.socket; sudo systemctl disable cups.socket
) - Check Status: You can verify a service is stopped and disabled:
(Look for
Active: inactive (dead)
andLoaded: ...; disabled; ...
) - Masking (A Stronger Disable): If disabling isn't enough (e.g., another service persistently tries to start it as a dependency), you can
mask
the service. This creates a symbolic link from the service file's location in/etc/systemd/system
to/dev/null
, making it impossible forsystemd
to start it. Use with caution, but it's effective. (To undo:sudo systemctl unmask <service_name>.service
)
-
Verify: After stopping/disabling, run
sudo ss -tulnp
again. The line corresponding to the service you disabled should no longer appear in the LISTEN state on external interfaces (0.0.0.0
or[::]
).
Common Candidates for Disabling (Depending on System Role):
cupsd
/cups.socket
: Printing service. Often unnecessary on servers.rpcbind
/rpcbind.socket
: Remote Procedure Call binder. Needed for NFS (Network File System) and NIS (Network Information Service). Disable if not using these.avahi-daemon
/avahi-daemon.socket
: Used for Zeroconf/Bonjour service discovery (finding printers, shares on local network automatically). Usually unnecessary on servers.apache2
/httpd
/nginx
: Web server services. Disable if this machine is not intended to be a web server.postfix
/sendmail
/exim4
: Mail Transfer Agents (MTAs). Disable unless this machine is specifically configured to send or receive email.mysqld
/mariadbd
/postgresql
: Database servers. Disable if no database is needed on this host.vsftpd
/proftpd
: FTP servers. FTP is generally insecure (transmits passwords in clear text); use SFTP (via SSH) instead if possible. Disable if not needed.smbd
/nmbd
: Samba services (Windows file/print sharing). Disable if not sharing files with Windows clients.chronyd
/ntpd
: Network Time Protocol daemons. You generally want one of these running to keep system time accurate, but ensure it's configured securely (e.g., using appropriate server pools, not allowing remote configuration).systemd-timesyncd
is a simpler client often used on desktops/clients.
Caution: Be careful! Do not disable services essential for the system's operation, such as:
sshd
(if you need remote SSH access)systemd-networkd
orNetworkManager
(handle network interface configuration and connectivity)systemd-resolved
ordnsmasq
(if handling local DNS resolution)systemd-logind
(manages user logins and sessions)dbus-daemon
(inter-process communication system)crond
orsystemd timers
(for scheduled tasks)
Always research a service if you're unsure of its purpose before disabling it. Reducing the number of listening network services is a highly effective hardening technique.
Workshop Basic Firewall and SSH Hardening
Goal: Put network security theory into practice by configuring a basic firewall (using UFW or firewalld
) and applying several key hardening settings to the SSH server.
Prerequisites:
- A Linux system with
sudo
access. - An installed and running SSH server (
openssh-server
package or similar). - UFW installed (common on Debian/Ubuntu, install with
sudo apt update && sudo apt install ufw
if needed) ORfirewalld
installed and running (common on Fedora/CentOS/RHEL, install withsudo dnf install firewalld
andsudo systemctl enable --now firewalld
if needed). - Ability to test SSH connections, ideally from another machine on the same network or at least from
localhost
.
Choose ONE of the Firewall sections (Part 1A or 1B) based on your distribution.
Part 1A: Configuring the Firewall with UFW (Debian/Ubuntu)
-
Check Initial Status: See if UFW is currently managing the firewall.
- Output will likely be
Status: inactive
. If it's active, examine the existing rules.
-
Set Default Policies: This is the most important step for a secure baseline. We will deny all incoming connections by default and allow all outgoing connections (a common starting point).
- These commands set the underlying policy. Connections not matching any specific rule will be handled according to these defaults.
-
Allow SSH (Absolutely Essential Before Enabling!): If you enable the firewall with the "deny incoming" policy without explicitly allowing SSH, you will immediately lock yourself out if connected remotely! UFW understands the service name
ssh
. - Explanation: This command tells UFW to add a rule allowing incoming connections on the standard SSH port (TCP port 22). You could also use
sudo ufw allow 22/tcp
.
-
(Optional) Allow Other Necessary Services: If this machine were, for example, a web server, you would also need to allow HTTP and HTTPS traffic:
-
Enable UFW: Now that the essential 'allow' rule for SSH is in place, activate the firewall.
- UFW will warn you that the command may disrupt existing connections. Type
y
and pressEnter
.
- UFW will warn you that the command may disrupt existing connections. Type
-
Verify Final Status: Check that the firewall is active and includes the rules you expect. The
verbose
option shows the default policies as well.- You should see
Status: active
. Under "Default:", you should seedeny (incoming), allow (outgoing), disabled (routed)
. Under "To Action From", you should see a rule allowing traffic to port 22 (ssh) from Anywhere.
- You should see
Part 1B: Configuring the Firewall with firewalld
(Fedora/CentOS/RHEL)
-
Check Initial Status: Ensure the
firewalld
service is installed, enabled, and running.- The status should be
active (running)
and the state should berunning
. If not, runsudo systemctl enable --now firewalld
.
- The status should be
-
Identify Default Zone and Check SSH:
firewalld
operates with zones. Find out which zone your network interface is in (usuallypublic
by default for external interfaces) and check if thessh
service is already allowed in that zone.
# See which zone(s) are active and which interfaces they cover
sudo firewall-cmd --get-active-zones
# Assuming 'public' is the relevant zone, list its allowed services
sudo firewall-cmd --list-services --zone=public
- Look for
ssh
in the output list. On many server installations, it's allowed by default in thepublic
zone.
- Look for
-
Allow SSH (If Necessary): If
ssh
was not listed as an allowed service in the previous step for your active zone, you need to add it. We use--permanent
to make the change persistent across reboots (it won't take effect until--reload
). -
(Optional) Allow Other Necessary Services: If this were a web server, you'd add http/https:
-
Reload
firewalld
: Apply any changes made with the--permanent
flag to the active firewall configuration.- You should see a
success
message. If you get an error, review the commands you entered.
- You should see a
-
Verify Final Status: List the complete configuration for the zone again to confirm your changes are active.
# Replace 'public' with your actual active zone if different
sudo firewall-cmd --list-all --zone=public
- Confirm that
ssh
(and any other services you intended to add) are now listed under theservices:
section for that zone.
- Confirm that
Part 2: Hardening the SSH Server (sshd
)
(These steps use /etc/ssh/sshd_config
and are the same regardless of which firewall (UFW or firewalld) you configured in Part 1)
-
Edit the SSH Configuration File: Open the SSH daemon's configuration file using
sudo
and a text editor. -
Confirm Root Login is Disabled: Scroll through the file and find the
PermitRootLogin
directive. Ensure it is uncommented (no#
at the start of the line) and its value is set tono
. -
(Highly Recommended) Enforce Key-Based Authentication Only:
- Prerequisite: You MUST have successfully set up SSH key-based login for your administrative user(s) (as described in section Network Security Guarding the Gates) and verified that it works before disabling passwords. If you haven't done this, skip this step for now, but plan to do it soon!
- Find the
PasswordAuthentication
directive. Uncomment it if necessary and set its value tono
. - Find the
ChallengeResponseAuthentication
directive. Uncomment it if necessary and set its value tono
.
-
(Optional Security-through-Obscurity) Change the Default Port:
- Find the line
#Port 22
. Remove the#
to uncomment it and change22
to an unused port number (e.g.,2222
). Pick a number above 1024. - CRITICAL: If you change the port, you must immediately:
- Allow the new port in your firewall:
- UFW:
sudo ufw allow 2222/tcp
- firewalld:
sudo firewall-cmd --permanent --add-port=2222/tcp --zone=public && sudo firewall-cmd --reload
- UFW:
- Remember to always use
ssh -p 2222 user@host
to connect from now on.
- Allow the new port in your firewall:
- Find the line
-
(Recommended) Limit Allowed Users: Add a line to explicitly specify which users are allowed to log in via SSH. Replace the usernames with your actual administrative users.
-
(Recommended) Set Idle Timeout: Add or uncomment and configure these lines to automatically disconnect idle sessions after about 10 minutes:
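After the edits above, the relevant portion of /etc/ssh/sshd_config should read roughly as follows. The appadmin username and port 2222 are the workshop's illustrative values; leave the Port line commented out if you kept the default (sshd_config comments must be on their own lines):

```
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
# Optional, only with a matching firewall rule:
# Port 2222
# Replace with your actual administrative user(s):
AllowUsers appadmin
# Probe idle clients every 5 minutes; disconnect after two unanswered probes
ClientAliveInterval 300
ClientAliveCountMax 2
```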
-
Save and Exit: (In
nano
:Ctrl+O
,Enter
,Ctrl+X
) -
Test Configuration Syntax: Before restarting the service, ask
sshd
to check the configuration file for any syntax errors.- This command should produce no output if the syntax is correct. If it reports errors, go back and fix the
sshd_config
file before proceeding.
- This command should produce no output if the syntax is correct. If it reports errors, go back and fix the
-
Restart the SSH Service: Apply the configuration changes by restarting the
sshd
service. -
Verification (Critical Safety Check!):
- Keep your current terminal session open! Do not log out or close it yet. This is your safety net if you made a mistake.
- Open a NEW terminal window or establish a new SSH connection from another machine.
- Try to log in as one of the users listed in your
AllowUsers
directive (e.g.,appadmin
).- Remember to use
ssh -p <new_port> ...
if you changed the port. - If you disabled password authentication, it must connect using your SSH key (it might ask for the key's passphrase).
- If password authentication is still enabled, it should prompt for the user's password.
- Remember to use
- Verify that you can successfully log in as the allowed user.
- Next, try to log in as
root
. This must fail (likely "Permission denied"). - If you disallowed password authentication, try connecting without specifying your key or if the key isn't available – it must fail without even prompting for a password.
- Only after successfully verifying that allowed users can log in (using the correct method) AND that disallowed users/methods fail, is it safe to close your original terminal session. If verification fails, use the original session to fix
/etc/ssh/sshd_config
, runsshd -t
, restartsshd
, and try verifying again.
Workshop Summary: Excellent! You've now configured a basic firewall blocking unwanted incoming connections while allowing SSH. You've also applied several critical hardening settings to the SSH server itself, significantly improving its resistance to unauthorized access and automated attacks by enforcing specific users, disabling root login, and ideally requiring SSH key authentication.
4. Logging and Monitoring Keeping an Eye on Things
Preventing intrusions is the primary goal of hardening, but robust security also requires detection. What if an attack succeeds, or someone makes an unauthorized change? How would you know? Effective logging and monitoring act as your system's security cameras and alarm system. They record significant events, errors, and activities, providing the visibility needed to troubleshoot problems, investigate security incidents after they occur (forensics), and potentially detect malicious activity in progress.
Understanding System Logs
Linux systems have a long tradition of logging events generated by the kernel, system services (daemons), user logins, security-related actions, and applications. Knowing where these logs are stored and how to interpret them is a fundamental system administration skill.
Two Main Logging Architectures:
-
syslog
Protocol (and thersyslog
Daemon):- Concept: This is the classic logging standard. Individual programs and system components send log messages using the
syslog
protocol (over a local socket or sometimes the network) to a central logging daemon. rsyslog
: On most modern Linux distributions,rsyslog
is the powerful and flexible daemon responsible for receiving thesesyslog
messages. It replaced the oldersyslogd
.- Configuration:
rsyslog
's behavior is controlled by configuration files:/etc/rsyslog.conf
: The main configuration file./etc/rsyslog.d/
: A directory where additional configuration snippets (often package-specific) are placed (e.g.,/etc/rsyslog.d/50-default.conf
).- These files contain rules defining what messages to log (based on their source, called facility, like
kern
,auth
,mail
,cron
) and their severity priority (fromdebug
up toemerg
)) and where to send them (usually to specific text files in/var/log
, butrsyslog
can also send logs over the network to a central log server, write to databases, etc.).
- Typical Log Files in
/var/log/
: Logs are traditionally stored as plain text files, making them easy to read with commands likecat
,less
,tail
, andgrep
. Some of the most important standard log files include:/var/log/syslog
(Common on Debian/Ubuntu): A general catch-all for many informational messages from various system services./var/log/messages
(Common on RHEL/CentOS/Fedora): Similar tosyslog
, a primary destination for general system messages./var/log/auth.log
(Debian/Ubuntu) or/var/log/secure
(RHEL/CentOS/Fedora): Extremely important for security monitoring. This file records authentication-related events: successful and failed user logins (console, SSH, etc.),sudo
command usage, user/group creation/modification events handled by PAM. Monitor this file closely!/var/log/kern.log
: Contains messages generated specifically by the Linux kernel itself (e.g., hardware detection, driver messages, kernel-level errors). Useful for diagnosing hardware or low-level system issues./var/log/dmesg
: Stores the kernel ring buffer messages, primarily capturing events during the boot process. Thedmesg
command displays the current contents of this buffer./var/log/boot.log
: Records messages specifically related to the startup sequence of system services during boot.- Application-Specific Logs: Many server applications maintain their own dedicated logs, often within subdirectories of
/var/log
. Examples: Apache web server (/var/log/apache2/access.log
,/var/log/apache2/error.log
), Nginx (/var/log/nginx/access.log
,/var/log/nginx/error.log
), MySQL/MariaDB (/var/log/mysql/error.log
). Check the documentation for your specific applications.
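As a concrete example of reading these logs, the snippet below pulls failed SSH password attempts out of auth.log-style lines and counts them per source IP. The sample lines are fabricated (using RFC 5737 documentation addresses); on a real Debian/Ubuntu system you would feed it `sudo cat /var/log/auth.log` instead of the printf:

```bash
#!/bin/sh
# Fabricated auth.log excerpt for illustration only.
sample='Apr  2 10:15:01 host sshd[1234]: Failed password for invalid user admin from 203.0.113.7 port 40022 ssh2
Apr  2 10:15:05 host sshd[1234]: Failed password for root from 203.0.113.7 port 40023 ssh2
Apr  2 10:16:12 host sshd[1300]: Accepted publickey for alice from 198.51.100.4 port 50100 ssh2
Apr  2 10:17:30 host sshd[1350]: Failed password for root from 192.0.2.9 port 41000 ssh2'

# Keep the failures, grab the word after "from" (the source IP), count per IP.
printf '%s\n' "$sample" \
  | grep 'Failed password' \
  | awk '{ for (i = 1; i < NF; i++) if ($i == "from") print $(i + 1) }' \
  | sort | uniq -c | sort -rn
```

This is exactly the kind of per-IP failure counting that fail2ban (covered below in this section) automates for you.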
-
journald
(The Systemd Journal):- Concept: Most modern Linux distributions now use
systemd
as their primary init system and service manager.systemd
includes its own integrated and centralized logging system called the Journal, managed by thesystemd-journald
daemon. - Centralized Collection:
journald
is designed to be the central hub for nearly all system logs. It captures:- Kernel messages directly from
/dev/kmsg
. - Messages from the early boot process (initramfs).
- Standard output (
stdout
) and standard error (stderr
) streams of all services managed bysystemd
. This is a major advantage, as applications don't necessarily need special logging code; anything they print can be captured. - Messages sent via the native Journal API.
- Messages submitted through the standard
/dev/log
socket, meaning it can also capture traditionalsyslog
messages (oftenrsyslog
is configured to forward its messages to the Journal, orjournald
handles/dev/log
directly).
- Kernel messages directly from
- Storage Format: Unlike traditional plain text logs,
journald
stores log data in a structured, indexed, binary format. Each log entry includes extensive metadata (like the service unit, PID, UID, GID, timestamp, hostname, etc.) automatically associated with it. This allows for much more powerful and efficient querying and filtering compared to grepping through text files. - Storage Location:
- Persistent: If the directory
/var/log/journal
exists,journald
will store the journal files there, preserving logs across system reboots. You might need to create this directory yourself (sudo mkdir -p /var/log/journal
) if you want persistent logging. - Volatile (Runtime): If
/var/log/journal
does not exist,journald
stores logs in/run/log/journal
. This directory resides in a temporary filesystem (tmpfs
) and its contents are lost when the system is shut down or rebooted.
- Persistent: If the directory
- The
journalctl
Command: This powerful command-line utility is the primary way to interact with the logs stored byjournald
.
Essential journalctl
Commands:
journalctl
: Shows all logs stored in the journal, starting with the oldest entries. It uses a pager likeless
by default, so you can navigate with arrow keys, PageUp/PageDown, search with/
, and quit withq
.journalctl -f
: Follows the journal in real-time, printing new log entries as they are generated. Extremely useful for watching what's happening right now (e.g., while testing a service). PressCtrl+C
to stop following.journalctl -n 20
: Shows the lastn
(e.g., 20) log entries. Add--no-pager
to print directly to the terminal without usingless
. (journalctl -n 20 --no-pager
).journalctl -b
: Shows all logs since the current system boot.journalctl -b -1
: Shows logs from the previous boot (-2
for the one before that, and so on).journalctl --since "YYYY-MM-DD HH:MM:SS"
: Filters logs to show entries recorded since the specified timestamp. You can use flexible formats:--since "2 hours ago"
,--since yesterday
,--since 09:00
.journalctl --until "YYYY-MM-DD HH:MM:SS"
: Filters logs to show entries recorded until the specified timestamp (e.g.,--until "1 hour ago"
). Combine with--since
for a specific time window.journalctl -u <unit_name>
: Extremely useful! Filters logs to show only entries generated by a specificsystemd
unit (e.g., a service). Example:journalctl -u sshd.service
,journalctl -u nginx.service
,journalctl -u cron.service
.journalctl /path/to/executable
: Filters logs to show entries generated by processes running a specific executable file. Example:journalctl /usr/sbin/sshd
.journalctl -p <priority>
: Filters messages by priority level. Priorities are numeric (0-7) or named:emerg
(0),alert
(1),crit
(2),err
(3),warning
(4),notice
(5),info
(6),debug
(7). Using-p err
(or-p 3
) shows all messages with priority "error" or higher (error, critical, alert, emergency). Very useful for finding problems.journalctl -p warning
(or-p 4
) shows warnings and above.journalctl -k
: Shows only kernel messages (equivalent to running thedmesg
command, but accesses the journal).journalctl _UID=<user_id>
: Filters by the User ID of the process that generated the log entry. Example: Find logs from user 1000 withjournalctl _UID=1000
. You can filter on any metadata field stored in the journal (prefix the field name with_
). Usejournalctl -o verbose
to see all metadata for entries.journalctl --disk-usage
: Reports the total disk space currently used by the archived journal files (if persistent storage is enabled).journalctl --vacuum-size=500M
: Reduces the size of archived journal files, deleting the oldest files until the total disk usage is below the specified size (e.g., 500 Megabytes).journalctl --vacuum-time=4weeks
: Reduces the size of archived journal files, deleting files older than the specified time duration (e.g., keep only the last 4 weeks).
Log Rotation (Managing Log File Size):
Log files cannot grow forever; they would eventually fill up the disk. Log rotation is the automated process of archiving old log files and starting new ones, keeping disk usage under control.
- Traditional Logs (
/var/log
text files): Thelogrotate
utility is the standard tool.- Configuration:
/etc/logrotate.conf
(global settings) and files within/etc/logrotate.d/
(rules specific to log files generated by different packages, e.g.,apache2
,rsyslog
,apt
). These rules define how often to rotate (daily, weekly, monthly), how many old logs to keep, whether to compress old logs (compress
), size thresholds (size
), and commands to run before/after rotation (e.g., telling a service to reopen its log file). - Execution:
logrotate
is typically run automatically once a day via a system cron job (often found in/etc/cron.daily/logrotate
).
- Configuration:
journald
Logs:journald
handles its own rotation internally, based on settings in/etc/systemd/journald.conf
. Key settings to control size and retention include:SystemMaxUse=
: Sets an absolute maximum disk space the persistent journal (in/var/log/journal
) is allowed to consume.SystemKeepFree=
: Tellsjournald
to try and leave at least this much disk space free when rotating.SystemMaxFileSize=
: Sets the maximum size a single journal file can reach before being rotated.MaxFileSec=
: Sets the maximum time span a single journal file covers before being rotated.RuntimeMaxUse=
,RuntimeKeepFree=
,RuntimeMaxFileSize=
are similar settings but apply only to the volatile journal storage in/run/log/journal
.- You can also manually trigger rotation using the
--rotate
flag withjournalctl
or by sending a signal tosystemd-journald
. Manual vacuuming using--vacuum-size
or--vacuum-time
is often more practical for immediate cleanup.
Regularly reviewing key logs (especially authentication logs and messages with error/warning priority) is a critical security habit. Using journalctl
with its filtering capabilities makes this much easier on modern systems.
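A few combined journalctl invocations cover most day-to-day triage; the unit names below are examples — substitute your own services:

```bash
journalctl -b -p err --no-pager                  # errors and worse since this boot
journalctl -u sshd.service --since "1 hour ago"  # recent SSH daemon activity
journalctl -f -u nginx.service                   # live-follow a single service
journalctl _UID=1000 --since today               # everything from one user today
```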
Introduction to Intrusion Detection Systems (IDS)
Manually sifting through potentially gigabytes of log data is challenging. While essential for deep investigation, we also need tools that can automatically monitor activity and alert us to suspicious events in near real-time. An Intrusion Detection System (IDS) is designed for this purpose. It monitors system or network activity, analyzes it against known attack patterns or baseline behavior, and flags potential security policy violations or malicious activities.
Main Categories of IDS:
- Network Intrusion Detection System (NIDS): These systems are like security guards watching the network roads leading to your system (or entire network). They capture and analyze network traffic packets passing through a specific point (e.g., connected to a mirrored port on a switch or sitting inline). They look for:
- Signature-based detection: Recognizable patterns (signatures) of known attacks or malware communication (e.g., specific exploit attempts, botnet command-and-control traffic).
- Anomaly-based detection: Deviations from a baseline of "normal" network traffic patterns (e.g., unusual protocols, unexpected traffic volumes, connections to suspicious countries).
- Examples: Snort (classic signature-based), Suricata (modern, multi-threaded, handles signatures and protocols), Zeek (formerly Bro - powerful network analysis framework focused on generating high-level logs about network activity). NIDS deployment is typically a more advanced network security topic.
- Host-based Intrusion Detection System (HIDS): These systems run as software agents directly on the individual computers (hosts) they are designed to protect. They focus on activities happening on that specific host. Common HIDS capabilities include:
- Log Analysis: Automatically collecting and analyzing system and application logs (from
/var/log
orjournald
) in real-time, using predefined rules to detect suspicious events (e.g., multiple authentication failures, use of dangerous commands, specific error patterns indicating compromise). - File Integrity Monitoring (FIM): This is a core HIDS function. The system first creates a secure baseline database containing cryptographic checksums (hashes like SHA-256) and other attributes (permissions, ownership, modification times) of critical system files, binaries, and configuration files. It then periodically rescans these files and compares their current state to the baseline. Any unexpected changes (modified content, changed permissions) trigger an alert, as this could indicate tampering by malware or an attacker.
- Policy Monitoring / Configuration Assessment: Checking if the system's configuration settings comply with predefined security policies or benchmarks (e.g., CIS Benchmarks). For example, verifying that SSH root login is disabled or password complexity rules are enforced.
- Rootkit Detection: Scanning the system (memory, kernel structures, file system) for signs of known rootkits. Rootkits are malware specifically designed to hide their presence and maintain persistent, privileged access.
- Active Response (Optional): Some HIDS can be configured to take automatic actions when a high-severity threat is detected. This could include blocking the source IP address in the firewall, terminating a suspicious process, disabling a user account, or running a predefined script. Active response must be configured very carefully to avoid unintended consequences.
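The file integrity monitoring idea can be sketched in a few lines of shell with sha256sum. A real HIDS also records permissions, owners, and timestamps, stores the baseline somewhere tamper-resistant, and re-scans on a schedule — this is only the core checksum-comparison step:

```bash
#!/bin/sh
# Minimal FIM sketch: baseline a file's SHA-256 hash, then detect changes.
dir=$(mktemp -d)
echo "PermitRootLogin no" > "$dir/sshd_config"

# 1. Record the baseline checksum.
sha256sum "$dir/sshd_config" > "$dir/baseline.sha256"

# 2. Simulate unauthorized tampering.
echo "PermitRootLogin yes" >> "$dir/sshd_config"

# 3. Re-check: -c prints FAILED for any file whose hash no longer matches.
sha256sum -c "$dir/baseline.sha256" 2>/dev/null | grep FAILED \
  && echo "ALERT: file modified since baseline"

rm -rf "$dir"
```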
Practical HIDS Example fail2ban
While fail2ban
doesn't typically perform file integrity monitoring or rootkit detection like full HIDS solutions, it excels as a log monitoring application with active response capabilities, specifically targeting brute-force login attacks. It's highly effective, relatively simple to configure, and provides significant security value, making it an excellent starting point.
- How
fail2ban
Works:- Monitoring:
fail2ban
continuously tails (reads the end of) specified log files (like/var/log/auth.log
,/var/log/secure
, web server error logs) or queries the systemd journal (journalctl
). - Filtering: It applies filters (defined using regular expressions, often found in
/etc/fail2ban/filter.d/
) to scan the log entries for patterns matching failed login attempts or other specific undesirable activities. These patterns typically extract the source IP address responsible for the failed attempt. - Counting: It maintains internal counters for each source IP address, tracking how many times a matching failure pattern occurs within a defined time window (the
findtime
). - Triggering: If the failure count from a single IP address reaches a predefined threshold (the
maxretry
) within thefindtime
,fail2ban
triggers a configured action.
- Monitoring:
- Action: The most common action is to ban the offending IP address. This usually involves executing a command to add a temporary rule to the system's firewall (it supports
iptables
,nftables
,firewalld
, UFW, and others) that blocks (DROP
orREJECT
) all further connections from that IP address. Other actions could include sending email notifications. - Ban Duration: The ban lasts for a configurable period specified by
bantime
. After thebantime
expires,fail2ban
automatically removes the firewall rule, unbanning the IP. - Benefit: Effectively stops automated brute-force password guessing attacks against services like SSH, FTP, web login forms, etc., before they have a chance to succeed. It dramatically reduces server load and log noise caused by these attacks.
- Configuration:
    - Default filters and actions are provided, often in `/etc/fail2ban/filter.d/` and `/etc/fail2ban/action.d/`.
    - The main configuration defining which services (jails) are active and their parameters (`findtime`, `maxretry`, `bantime`, `logpath`, `filter`, `action`) is traditionally in `/etc/fail2ban/jail.conf`.
    - Crucially, DO NOT EDIT `jail.conf` directly. Package updates might overwrite it. Instead, create local override files:
        - `/etc/fail2ban/jail.local`: Settings in this file override the same settings found in `jail.conf`. This is the primary place for your customizations.
        - `/etc/fail2ban/jail.d/*.conf` (or `*.local`): You can also place configuration snippets for individual jails in files within this directory for better organization. Files here are read after `jail.conf` and `jail.local`.
    - Configuration is structured into sections called jails, typically enclosed in square brackets (e.g., `[sshd]`, `[apache-auth]`). The `[DEFAULT]` section sets global defaults for all jails, which can be overridden within specific jail sections.
Other HIDS Examples Brief Mention
For more comprehensive host-based intrusion detection beyond what `fail2ban` offers, consider these powerful open-source options (note that they generally require more setup and configuration effort):
- Wazuh: A very popular and feature-rich open-source security platform evolved from the original OSSEC HIDS. It provides log analysis, advanced file integrity monitoring (FIM), vulnerability detection (by correlating installed software versions with CVE databases), configuration assessment against benchmarks (like CIS), cloud security monitoring, rootkit detection, and active response capabilities. It typically uses an agent-manager architecture (agents run on monitored hosts and report to a central Wazuh server/cluster) and often integrates with the Elastic Stack (Elasticsearch, Kibana) for visualization and analysis.
- OSSEC: The original open-source HIDS project on which Wazuh is based. It continues to be developed and offers similar core HIDS functionalities like log analysis, FIM, and rootkit detection. It's known for being relatively lightweight and stable.
- Tripwire (Open Source Version): While Tripwire also offers commercial products, its open-source version is a highly regarded tool primarily focused on File Integrity Monitoring (FIM). It excels at creating a secure baseline database of file states and meticulously reporting any subsequent changes (additions, deletions, modifications to content or metadata). It's less focused on real-time log analysis but is excellent for detecting unauthorized file system tampering.
Implementing tools like `fail2ban` or a full HIDS, combined with diligent manual log review when needed, provides crucial visibility into the security state and ongoing activities on your Linux system.
Workshop Exploring Logs and Setting Up fail2ban
Goal: Learn how to navigate and search system logs using `journalctl` (or traditional log files) and then install, configure, and test `fail2ban` to automatically protect your SSH server from brute-force attacks.
Prerequisites:
- A Linux system with `sudo` access.
- An installed and running SSH server.
- Access to another machine on the same network, or the ability to generate failed SSH logins from `localhost`.
- The `fail2ban` package available in your distribution's repositories.
Part 1: Exploring System Logs
- Using `journalctl` (Recommended for `systemd`-based systems):
    - Let's look at the last 25 log entries specifically from the SSH service daemon (`sshd` or sometimes `ssh`). We'll use `-u` to specify the unit and `--no-pager` to print directly. (If that shows nothing relevant, try `sudo journalctl -u ssh -n 25 --no-pager`)
    - Now, let's follow the journal in real-time to see new messages as they happen.
        - Keep this command running in your terminal.
    - Simulate Failed Logins: Open a NEW terminal window (or use another machine). In this new window, try to SSH into your server several times using an incorrect password or a non-existent username. Make at least 3-4 failed attempts.
    - Observe Real-time Logs: Look back at the terminal where `journalctl -f` is running. As you make the failed attempts in the other window, you should see new lines appearing almost instantly, documenting each authentication failure. Look for messages containing "Failed password", "Invalid user", or "authentication failure", often including the source IP address (`from 127.0.0.1` or `from ::1` if testing from localhost).
    - Press `Ctrl+C` in the `journalctl -f` terminal to stop following the logs.
    - Search for Past Failures: Now let's query the journal specifically for recent error or warning messages related to SSH.
        ```bash
        # Search for priority 'warning' (4) or higher (-p 4), show last 50 lines, filter for 'ssh' (case-insensitive)
        sudo journalctl -p 4 -n 50 --no-pager | grep -i 'ssh'
        ```
        - This should clearly show the log entries generated by your failed login attempts.
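The `journalctl` commands referenced in the steps above would look like this (a sketch assuming the unit is named `sshd`; the text notes some distributions name it `ssh` instead):

```shell
# Last 25 log entries from the SSH daemon unit, printed without a pager
sudo journalctl -u sshd -n 25 --no-pager

# Follow the journal in real time (stop with Ctrl+C)
sudo journalctl -u sshd -f
```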
- Using Traditional Log Files (Alternative):
    - If you prefer, or if `journald` isn't the primary logger:
    - Identify the authentication log file: `/var/log/auth.log` (Debian/Ubuntu) or `/var/log/secure` (RHEL/CentOS/Fedora).
    - Follow the log file in real-time:
    - Simulate failed logins as described in step 1c using a separate terminal.
    - Observe the new failure messages appearing in the `tail -f` output. Press `Ctrl+C` to stop following.
    - View the last ~50 lines of the file to see the history:
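The corresponding `tail` commands (shown for Debian/Ubuntu's `/var/log/auth.log`; substitute `/var/log/secure` on RHEL-family systems):

```shell
# Follow the authentication log in real time (stop with Ctrl+C)
sudo tail -f /var/log/auth.log

# View the last 50 lines of the file
sudo tail -n 50 /var/log/auth.log
```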
Part 2: Installing and Configuring fail2ban
- Install `fail2ban`: Use your distribution's package manager.
    - Debian/Ubuntu:
    - Fedora/CentOS/RHEL (EPEL repository might be needed for older versions):
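The install commands for each family:

```shell
# Debian/Ubuntu
sudo apt update
sudo apt install fail2ban

# Fedora/CentOS/RHEL (add the EPEL repository first on older releases if needed)
sudo dnf install fail2ban
```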
- Enable and Start the Service: Ensure `fail2ban` starts automatically on boot and is running now.
    ```bash
    sudo systemctl enable fail2ban
    sudo systemctl start fail2ban
    # Verify it's running
    sudo systemctl status fail2ban
    ```
    - Look for `Active: active (running)` in the status output.
- Create a Local Configuration File: We need to enable the SSH protection jail and potentially customize settings. We do this in `/etc/fail2ban/jail.local`, which overrides defaults in `/etc/fail2ban/jail.conf`. Create and open it with your editor, e.g. `sudo nano /etc/fail2ban/jail.local`.
- Configure `jail.local`: Paste the following configuration into the empty file. Read the comments carefully, especially `ignoreip`!
    ```ini
    # /etc/fail2ban/jail.local
    # This file overrides settings from /etc/fail2ban/jail.conf

    [DEFAULT]
    # Settings in [DEFAULT] apply to all jails unless overridden below.

    # Whitelist IP addresses that should NEVER be banned.
    # This MUST include 127.0.0.1 (localhost IPv4) and ::1 (localhost IPv6).
    # Add the public IP address of your management workstation(s) or home/office network.
    # Use space separation for multiple entries. CIDR notation (e.g., 192.168.1.0/24) is allowed.
    # FIND YOUR PUBLIC IP by searching "what is my IP" in a browser from that location.
    # FAILURE TO WHITELIST YOURSELF HERE WILL LIKELY RESULT IN BEING BANNED!
    ignoreip = 127.0.0.1/8 ::1 YOUR_STATIC_IP_ADDRESS_HERE ANOTHER_TRUSTED_IP_RANGE_HERE

    # Default ban time in seconds. 1h = 3600, 1d = 86400.
    bantime = 1h

    # Time window (seconds) during which failures are counted. 10m = 600.
    findtime = 10m

    # Number of failures within 'findtime' to trigger a ban.
    maxretry = 5

    # --- SSH Server Protection Jail ---
    [sshd]
    # This section configures protection specifically for the SSH daemon.

    # Enable this jail (MUST be true to activate SSH protection)
    enabled = true

    # Optional: If you changed the SSH port in /etc/ssh/sshd_config (Workshop Basic Firewall and SSH Hardening),
    # you MUST specify the same port here for fail2ban to monitor correctly.
    # port = 2222

    # Optional: Override default maxretry just for SSH if desired.
    # maxretry = 3

    # Optional: Override default bantime just for SSH if desired.
    # bantime = 12h

    # Optional: Specify firewall action if auto-detection is wrong.
    # Common actions:
    #   iptables-multiport
    #   nftables-multiport
    #   firewallcmd-rich-rules (for firewalld)
    #   ufw (for UFW)
    # Autodetection usually works well if the integration package (like fail2ban-firewalld) is installed.
    # action = firewallcmd-rich-rules
    ```
    - Action Required: Replace `YOUR_STATIC_IP_ADDRESS_HERE` and `ANOTHER_TRUSTED_IP_RANGE_HERE` with the actual IP addresses or network ranges you use to administer this server. If your home/office IP changes frequently (dynamic IP), whitelisting might be difficult; be extra careful not to ban yourself.
- Save and Exit: (In `nano`: `Ctrl+O`, `Enter`, `Ctrl+X`)
- Restart `fail2ban`: To make `fail2ban` read your new `jail.local` configuration.
- Verify `fail2ban` Status and Jails: Check that the service is running and that our `sshd` jail is active and configured.
    - Check overall status and list active jails:
        - Output should show `Number of jail: 1` (or more) and list `sshd`.
    - Get detailed status for the `sshd` jail specifically:
        - This shows:
            - Filter details (which patterns it's using)
            - Action details (which firewall commands it will run)
            - Log file being monitored (or journal backend)
            - Total failed attempts currently tracked
            - List of currently banned IPs (should be empty initially)
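In command form, the restart and verification steps above are:

```shell
# Restart fail2ban so it reads the new jail.local
sudo systemctl restart fail2ban

# Overall status: shows the number of jails and lists them (should include sshd)
sudo fail2ban-client status

# Detailed status for the sshd jail: filter, actions, failure counts, banned IPs
sudo fail2ban-client status sshd
```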
- Test the Ban!
    - Go back to your second terminal (or remote machine).
    - Crucially, ensure the IP address you are testing from is NOT listed in the `ignoreip` setting in `/etc/fail2ban/jail.local`. (If testing from `localhost` and `127.0.0.1/8 ::1` are whitelisted, this test won't work from localhost. You'd need to test from another machine whose IP is not whitelisted).
    - Deliberately fail SSH login attempts again from this non-whitelisted IP. Make sure you exceed the `maxretry` count (e.g., 6 failed attempts if `maxretry = 5`) within the `findtime` window (e.g., 10 minutes).
    - After crossing the `maxretry` threshold, try to connect via SSH one more time from that same IP.
    - Observe: Your SSH connection attempt should now fail differently! Instead of prompting for a password, it should likely hang, time out, or give an immediate "Connection refused" error. You've been banned by `fail2ban`!
    - Go back to your original terminal (where you have `sudo` access) and check the `sshd` jail status again:
        - You should now see the IP address you tested from listed under `Banned IP list`! Success!
- (Important) Unbanning an IP: If you need to manually remove an IP from the banned list (e.g., you banned yourself or a legitimate user):
    ```bash
    # Replace <BANNED_IP_ADDRESS> with the actual IP shown in the status
    sudo fail2ban-client set sshd unbanip <BANNED_IP_ADDRESS>
    ```
    - Run `sudo fail2ban-client status sshd` again to confirm the IP is gone from the banned list. You should now be able to connect normally from that IP again.
Workshop Summary: You've successfully inspected system logs to find evidence of failed logins and then installed, configured, and tested `fail2ban`. You saw how it automatically detects and blocks IP addresses performing brute-force attacks against your SSH server, adding a powerful layer of automated defense. Remember the critical importance of correctly configuring the `ignoreip` setting!
5. Updates and Patch Management Staying Current
Imagine building a strong fortress but neglecting to repair a known weak spot discovered in the wall. That's precisely what running outdated software is like in the digital world. Software developers and the security community constantly discover vulnerabilities – flaws or weaknesses in code – that could be exploited by attackers. When these are found, developers release patches (updates) to fix them. Attackers actively search the internet for systems running software with known, unpatched vulnerabilities. Therefore, keeping your Linux system's software up-to-date by applying patches promptly is arguably one of the most impactful and essential security practices you can adopt.
Importance of Regular Updates
Applying software updates provided by your Linux distribution offers several critical benefits:
- Security Patches: This is the absolute primary reason. Updates deliver fixes for security holes that could allow attackers to:
    - Gain unauthorized access (e.g., bypass login, escalate privileges).
    - Steal sensitive data.
    - Install malware (viruses, ransomware, cryptocurrency miners).
    - Disrupt your system's operation (denial of service).
    - Use your system to attack others.

  The longer you delay applying a security patch after its release, the larger the window of opportunity for attackers who have learned about the vulnerability and developed tools (exploits) to take advantage of it. Many successful cyberattacks exploit vulnerabilities for which patches have been available for weeks, months, or even years.
- Bug Fixes: Updates frequently fix non-security-related bugs ("functional bugs") that could cause software to crash, behave incorrectly, consume excessive memory or CPU, lead to data corruption, or simply be annoying. Sometimes, these functional bugs can also have unforeseen security implications.
- Stability and Reliability Improvements: By fixing bugs and sometimes optimizing code, updates can lead to a more stable and reliable system overall.
- New Features & Performance Enhancements: While less common in routine patches (major feature additions usually come with major version upgrades), updates can sometimes introduce minor functional improvements or performance optimizations.
Neglecting updates, especially security updates, is akin to ignoring a recall notice for a faulty lock on your front door. It's a significant, unnecessary risk.
Using Package Managers for Updates
Manually tracking, downloading, compiling, and installing updates for every piece of software on a Linux system would be an unmanageable nightmare. Thankfully, Linux distributions provide sophisticated package management systems to automate this process.
Key Concepts:
- Package: A software application or library bundled into a specific archive format (e.g., `.deb` for Debian/Ubuntu, `.rpm` for Fedora/CentOS/RHEL) containing the program files, metadata (version, description), dependencies, and installation/removal scripts.
- Package Manager: A tool (like `apt`, `dnf`, `yum`) that handles the entire lifecycle of software packages: searching, installing, upgrading, configuring, and removing software.
- Repositories: Centralized online servers (or local mirrors) maintained by the Linux distribution (or trusted third parties) that store a large collection of verified software packages specifically built and tested for that distribution version. The package manager connects to these repositories to find available software and updates.
- Dependencies: Software rarely exists in isolation. Package A might require Package B (e.g., a specific library) to function correctly. The package manager automatically identifies and installs these required dependencies when you install Package A, ensuring the software works. It also manages updates across dependent packages.
The Main Package Managers & Commands for Updating:
- APT (Advanced Package Tool):
    - Distributions: Debian, Ubuntu, Linux Mint, Pop!_OS, Kali Linux, etc.
    - Update Workflow:
        - `sudo apt update`: Run this first, always. This command doesn't install anything. It downloads the latest package lists (indexes) from all configured repositories, so APT knows which package versions are currently available.
        - `sudo apt list --upgradable` (Optional): Shows a detailed list of packages currently installed on your system for which newer versions were found in the repositories during the `apt update` step. Useful for seeing what will change.
        - `sudo apt upgrade`: This is the command that performs the updates. It downloads and installs the newest available versions of all packages currently installed on your system, based on the information gathered by `apt update`. It will try not to remove existing packages or install new ones (unless they are new dependencies required by an upgrading package). This is the standard command for routine system updates.
        - `sudo apt full-upgrade` (or the older `dist-upgrade`): This is a more powerful upgrade command. Like `apt upgrade`, it installs the newest versions. However, it may also remove currently installed packages if that's necessary to resolve complex dependency conflicts, typically encountered when upgrading between major distribution releases (e.g., Ubuntu 22.04 to 24.04). Use this with slightly more awareness than `apt upgrade` for routine updates, but it's often necessary for release upgrades.
    - Other useful APT commands: `apt install`, `apt remove`, `apt purge`, `apt search`, `apt show`, `apt autoremove` (removes orphaned dependencies), `apt clean` (clears downloaded package cache).
- DNF (Dandified YUM):
    - Distributions: Fedora, CentOS Stream 8+, RHEL 8+, Oracle Linux 8+, etc. (Modern successor to YUM).
    - Update Workflow:
        - `sudo dnf check-update`: Checks the configured repositories and lists all installed packages that have updates available. (Combines the check and list steps compared to APT).
        - `sudo dnf upgrade`: Performs the updates. Downloads and installs the latest versions of all upgradable packages, automatically handling dependencies. This is the primary command for updating the system.
        - `sudo dnf upgrade --security` (Optional): Attempts to apply only updates marked as security relevant.
    - Other useful DNF commands: `dnf install`, `dnf remove`, `dnf search`, `dnf info`, `dnf autoremove`, `dnf history` (shows past transactions, allows undo/rollback), `dnf clean all` (clears caches).
- YUM (Yellowdog Updater, Modified):
    - Distributions: CentOS 7, RHEL 7, older Fedora. Largely superseded by DNF, but syntax is very similar.
    - Update Workflow:
        - `sudo yum check-update`
        - `sudo yum update`
        - `sudo yum update --security` (Optional)
    - Other useful YUM commands: `yum install`, `yum remove`, `yum search`, `yum info`, `yum autoremove`, `yum history`, `yum clean all`.
Update Frequency and Best Practices:
- Check Frequently: Run the "update check" command (`apt update` or `dnf check-update`) often – daily is ideal, especially for servers, or at least several times a week for desktops.
- Apply Promptly: Install available updates regularly. For desktops, weekly might be sufficient. For internet-facing servers, applying security updates within a day or two of release is often recommended if possible. The urgency depends on the severity of the vulnerability being patched.
- Server Updates - Plan and Test: On critical production servers, applying updates haphazardly can be risky (though often less risky than not patching). Best practice involves:
- Testing: If possible, apply updates to a non-production "staging" or "test" server first that mirrors the production environment, and verify that critical applications still function correctly.
- Scheduling: Apply updates during planned maintenance windows when system downtime or potential issues will have the least impact on users or business operations.
- Backup: Ensure you have recent, working backups before applying major updates.
- Reboot When Necessary: This is crucial! Updates to certain core components, most notably the Linux kernel itself, but also essential libraries like `glibc` or the `systemd` init system, often require a system reboot for the changes to fully take effect. Until you reboot, you are still running the old, potentially vulnerable kernel or library in memory. Package managers often notify you if a reboot is required after an update (e.g., APT creates `/var/run/reboot-required`). Don't delay necessary reboots indefinitely, especially after applying security updates to the kernel.
Automated Updates
Manually running update commands regularly can be easily forgotten. Most Linux distributions provide tools to automate the process, particularly for security updates.
- Debian/Ubuntu: `unattended-upgrades`
    - Installation: `sudo apt install unattended-upgrades`
    - Configuration: Highly configurable via files in `/etc/apt/apt.conf.d/`. Key files:
        - `20auto-upgrades`: Simple enable/disable switches (`APT::Periodic::Update-Package-Lists "1";`, `APT::Periodic::Unattended-Upgrade "1";`). "1" means daily.
        - `50unattended-upgrades`: Detailed configuration. You can specify which "origins" or "suites" to automatically upgrade (e.g., enable the `"-security"` suite but disable regular `"-updates"`). You can blacklist specific packages, enable automatic removal of unused dependencies, configure email notifications, and control automatic reboots (`Unattended-Upgrade::Automatic-Reboot "true"`).
    - Recommendation: Configure it to automatically install security updates (`"${distro_id}:${distro_codename}-security";`) but perhaps disable automatic installation of non-security updates unless you accept the risk. Consider enabling email notifications. Be cautious with automatic reboots on servers.
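Putting the recommendations above together, an excerpt of `/etc/apt/apt.conf.d/50unattended-upgrades` might look like the following sketch (the mail address is a placeholder; exact origin strings vary by release):

```
Unattended-Upgrade::Allowed-Origins {
    // Automatically install security updates...
    "${distro_id}:${distro_codename}-security";
    // ...but leave non-security updates for manual maintenance windows.
    // "${distro_id}:${distro_codename}-updates";
};
// Notify this address after each run (placeholder - set your own).
// Unattended-Upgrade::Mail "admin@example.com";
// Be cautious with automatic reboots on servers.
Unattended-Upgrade::Automatic-Reboot "false";
```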
- Fedora/CentOS/RHEL: `dnf-automatic`
    - Installation: `sudo dnf install dnf-automatic`
    - Configuration: Edit `/etc/dnf/automatic.conf`. Key options:
        - `upgrade_type`: Set to `default` (all updates), `security` (only security-related updates - recommended), or `minimal`.
        - `download_updates = yes`: Download updates automatically.
        - `apply_updates = yes`: Crucial setting. Set to `yes` to actually install the downloaded updates. If `no`, it might only download or notify.
        - `emit_via`: How to send notifications (e.g., `stdio`, `email`). Configure email settings (`email_from`, `email_to`, `email_host`) if using email.
        - `random_sleep`: Adds a random delay before checking to avoid overwhelming mirrors (usually enabled).
    - Enable Timer: `dnf-automatic` runs via a `systemd` timer unit. Enable and start it: `sudo systemctl enable --now dnf-automatic.timer`.
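The options above live in `/etc/dnf/automatic.conf`; a minimal security-only configuration might look like this sketch (excerpt, default section names assumed):

```
[commands]
# Apply only security-related updates
upgrade_type = security
random_sleep = 300
download_updates = yes
# Actually install (not just download) the updates
apply_updates = yes

[emitters]
# Write results to stdout/journal; use 'email' (with an [email] section) for mail
emit_via = stdio
```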
Pros and Cons of Automated Updates:
- Pros:
- Timeliness: Security patches are applied very quickly, drastically reducing the window of vulnerability.
- Consistency: Ensures updates aren't forgotten or delayed due to human oversight.
- Convenience: Reduces manual administrative workload.
- Cons:
- Risk of Breakage: An update (even a security patch, though less likely) could potentially introduce a regression or conflict that breaks a critical application or service, and it might happen without you being immediately present to fix it.
- Uncontrolled Reboots: If automatic reboots are enabled, they might happen at peak usage times or during critical operations if not carefully scheduled (some tools allow scheduling constraints).
Recommendation Summary:
- Automate Security Updates: The security benefits of applying security patches immediately generally outweigh the risks for most systems. Configure `unattended-upgrades` or `dnf-automatic` (with `upgrade_type = security` and `apply_updates = yes`) for this.
- Manual or Scheduled Non-Security Updates: For non-security updates on critical servers, consider applying them manually during planned maintenance windows after appropriate testing, rather than fully automating them.
- Notifications: Always configure notifications so you are aware when automated updates have been applied.
Patch management is a fundamental pillar of system security. Establish a reliable process, whether manual, automated, or hybrid, and stick to it.
Workshop Checking for and Applying Updates
Goal: Practice using your system's package manager (APT or DNF/YUM) to check for available software updates, review them, and apply them safely.
Prerequisites:
- A Linux system (Ubuntu/Debian or Fedora/CentOS/RHEL) with `sudo` access.
- An active internet connection to reach the distribution's repositories.
- (Optional but likely) Some pending updates for demonstration purposes. If your system is perfectly up-to-date, the commands will still work but might not show any packages to upgrade, which is also a successful outcome!
Choose the set of steps corresponding to your distribution's package manager.
Part 1: Using APT (Debian/Ubuntu)
- Refresh Package Lists: This is the essential first step. It downloads the latest list of available packages and versions from the repositories defined in `/etc/apt/sources.list` and `/etc/apt/sources.list.d/`.
    - Observe: Watch the output. It connects to repository URLs and downloads package list information. It might end with a message like "X packages can be upgraded. Run 'apt list --upgradable' to see them."
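In command form, this step is:

```shell
# Refresh the package indexes from all configured repositories
sudo apt update
```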
- List Upgradable Packages (Optional but Recommended): Before applying updates, it's good practice to see exactly which packages are going to be changed.
    - Observe: This command outputs a list of installed packages that have newer versions available in the repositories. It shows the package name, the repository it's coming from, the new version number, the architecture, and sometimes the currently installed version. Reviewing this list can help you anticipate potential impacts (e.g., "Oh, the kernel is being updated, I'll probably need to reboot").
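The command for this step:

```shell
# Show installed packages with newer versions available
sudo apt list --upgradable
```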
- Apply the Upgrades: Now, execute the command to download and install the newer versions of all upgradable packages.
    - Observe: APT will first calculate dependencies and then present a summary:
        - The list of packages that will be upgraded.
        - Any new packages that need to be installed as dependencies.
        - The amount of data that needs to be downloaded.
        - The amount of disk space that will be used/freed after the operation.
    - Confirmation: It will almost always ask for confirmation: `Do you want to continue? [Y/n]`. Read the summary carefully. If it looks reasonable, type `Y` and press `Enter`. If you type `n` or just press Enter, the operation will be aborted.
    - Wait: APT will now download the package files (`.deb` archives) and then proceed to unpack and configure them, replacing the older versions. This may take some time depending on the number of updates and your internet/disk speed.
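The upgrade command itself:

```shell
# Download and install the newest versions of all installed packages
sudo apt upgrade
```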
- Handle Configuration File Prompts (If Any): During the configuration phase, if APT detects that a configuration file for a package being upgraded (e.g., in `/etc`) has been modified locally by you since it was last installed, it might pause and ask you what to do. Common options include:
    - Keep your currently installed local version.
- Install the package maintainer's new version (overwriting your changes).
- Show a diff (comparison) between the versions.
- Start a shell to investigate manually.
- Read the prompt carefully! Often, keeping your local version is the safest default unless you know the new maintainer's version contains essential security fixes or changes required for the package to work with the new version.
- (Recommended) Clean Up Unused Dependencies: After upgrades, some packages that were installed as dependencies might no longer be needed by any currently installed package (they become "orphaned"). `apt autoremove` can clean these up.
    - Observe: It will list the packages proposed for removal and ask for confirmation `[Y/n]`. Carefully review this list. Usually, it's safe to remove these packages, but if you see something you know you installed manually and still need, you might want to mark it as manually installed (`sudo apt-mark manual <package_name>`) first and then re-run `autoremove`, or just say `n` for now and investigate further.
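The cleanup command:

```shell
# Remove orphaned dependencies no longer needed by any installed package
sudo apt autoremove
```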
- (Optional) Clean Up Downloaded Package Cache: APT stores the downloaded `.deb` package files in a cache (usually `/var/cache/apt/archives/`). These can take up significant disk space over time. `apt clean` removes these cached files.
    - This command simply cleans the cache and doesn't affect installed software.
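The cache cleanup command:

```shell
# Delete cached .deb files from /var/cache/apt/archives/
sudo apt clean
```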
- Check if Reboot is Required: Updates to the kernel or core libraries often require a reboot to be fully activated. Debian/Ubuntu systems typically signal this by creating a file named `/var/run/reboot-required`. You can check for its existence.
    ```bash
    if [ -f /var/run/reboot-required ]; then
        echo "########## A system reboot IS required! ##########"
        # Optionally display the contents of the file which might list packages triggering the reboot
        # sudo cat /var/run/reboot-required.pkgs
    else
        echo "System reboot is likely not required by the package manager at this time."
    fi
    ```
    - Action: If the message indicates a reboot is required, you should plan to reboot the system at your earliest convenience to ensure all patches (especially kernel security fixes) are active.
Part 2: Using DNF/YUM (Fedora/CentOS/RHEL)
- Check for Available Updates: DNF/YUM typically perform the repository check and list available updates in a single step.
    ```bash
    # For DNF (Modern systems like Fedora, CentOS Stream 8+, RHEL 8+)
    sudo dnf check-update

    # For YUM (Older systems like CentOS 7, RHEL 7)
    # sudo yum check-update
    ```
    - Observe: The command will connect to the enabled repositories and then list all installed packages for which a newer version is available, showing the package name, architecture, new version number, and the repository it comes from. If no updates are found, it will state that.
- Apply the Updates: Execute the command to download and install all the available updates found in the previous step.
    - Observe: The package manager will:
        - Resolve dependencies (figure out exactly which versions of all packages are needed).
        - Present a summary of the transaction: packages to be installed, upgraded, or potentially removed (less common with standard `upgrade`), total download size, total installed size.
    - Confirmation: It will ask for confirmation, usually `Is this ok [y/N]:`. Review the transaction summary carefully. If it looks correct, type `y` and press `Enter`. Typing `N` or just Enter will cancel the operation.
    - Wait: DNF/YUM will download the required RPM package files and then install/upgrade them, running pre- and post-installation scripts as needed.
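The upgrade command itself:

```shell
# DNF (Fedora, CentOS Stream 8+, RHEL 8+)
sudo dnf upgrade

# YUM (CentOS 7 / RHEL 7)
# sudo yum update
```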
- Handle Configuration Files (.rpmnew / .rpmsave): If a package update includes a new version of a configuration file that you have modified locally, DNF/YUM often handle this differently than APT. Instead of prompting interactively, they might:
    - Save the package maintainer's new version with a `.rpmnew` suffix (e.g., `/etc/ssh/sshd_config.rpmnew`).
    - Save your existing modified version with an `.rpmsave` suffix (e.g., `/etc/ssh/sshd_config.rpmsave`) before overwriting it with the default new version.
    - It's good practice after updates to search for these files (e.g., `sudo find /etc -name "*.rpmnew" -o -name "*.rpmsave"`) and manually compare them (using tools like `diff`) to merge any necessary changes from the new version into your customized file, or vice-versa.
- (Recommended) Clean Up Unused Dependencies: Remove packages that were installed as dependencies but are no longer required by any currently installed package.
    - Observe: It will list the packages to be removed and ask for confirmation `[y/N]`. Review the list to ensure nothing critical is being removed unexpectedly.
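The cleanup command:

```shell
# Remove no-longer-needed dependency packages
sudo dnf autoremove
```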
- (Optional) Clean Up Cached Data: Remove cached package files (RPMs) and repository metadata to free up disk space.
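The cache cleanup command:

```shell
# Remove cached RPMs and repository metadata
sudo dnf clean all
```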
- Reboot (If Necessary): While DNF/YUM might not always explicitly create a "reboot-required" flag file like APT, a reboot is generally strongly recommended after applying updates that include a new kernel version. Check the list of updated packages (`dnf history info last` might help). If `kernel` or related packages were updated, plan to reboot to activate the new kernel.
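As an optional extra beyond the steps above: if the `dnf-utils` (or `yum-utils`) package is installed, its `needs-restarting` helper can advise whether a reboot is warranted.

```shell
# Prints a recommendation; exits non-zero when a full reboot is advisable
sudo needs-restarting -r
```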
Workshop Summary: You have now practiced the essential workflow for keeping your Linux system up-to-date using its native package manager. You learned how to check for updates, review them, apply them, perform optional cleanup, and understand when a reboot is necessary. Making this a regular habit is crucial for maintaining a secure and stable system.
6. Security Auditing Tools Checking Your Configuration
You've diligently worked through the previous sections: hardened user accounts, secured file permissions, configured the firewall, disabled unneeded services, examined logs, and applied the latest updates. That's excellent progress! But how do you know if your system is actually configured securely according to best practices? How can you find weaknesses you might have overlooked? This is where security auditing tools become invaluable. They automate the process of scanning your system and comparing its current state against predefined security benchmarks and known good configurations.
Introduction to Automated Auditing
Manually verifying every single security setting across a Linux system is incredibly time-consuming, complex, and easy to get wrong. Automated auditing tools streamline this by running a comprehensive set of checks covering diverse areas:
- User account settings (password policies, home directory permissions, sudo rules)
- File system permissions (especially on critical system files and directories)
- Boot process security (bootloader passwords, service configurations)
- Network configuration (firewall rules, listening ports, IPv6 settings)
- Software inventory and patch levels (checking for outdated packages)
- Logging and auditing settings (is `rsyslog`/`journald` configured correctly? Is `auditd` running?)
- Kernel parameters (checking `sysctl` settings related to security)
- Installed services (checking configuration of SSH, web servers, databases for known insecure settings)
- Presence of potentially dangerous tools or files.
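To see what one of these checks looks like under the hood, here is a hand-rolled version of a simple file-permission test. The `/tmp/audit-demo` directory and file names are invented for the demonstration; real audit tools scan paths like `/etc` and `/var`:

```shell
# Set up a demo directory with one safe and one risky file
mkdir -p /tmp/audit-demo
touch /tmp/audit-demo/ok.conf /tmp/audit-demo/bad.conf
chmod 644 /tmp/audit-demo/ok.conf    # owner rw, group/other read-only
chmod 666 /tmp/audit-demo/bad.conf   # world-writable: a typical audit finding

# List world-writable regular files (-perm -0002 matches the "other write" bit)
find /tmp/audit-demo -type f -perm -0002
# → /tmp/audit-demo/bad.conf
```

Auditing tools run hundreds of checks in this spirit, each mapped to a rule with a rationale and a remediation suggestion.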
These tools compare your system's state against a predefined set of rules, often based on industry standards like the CIS Benchmarks (Center for Internet Security) or general security best practices. They then generate a report highlighting areas of concern, potential vulnerabilities, misconfigurations, and often provide specific suggestions for remediation.
Using lynis for System Auditing
lynis is an excellent, widely respected, open-source security auditing tool for Linux, macOS, and other Unix-like systems. It's designed to perform an extensive "health scan" focusing on security aspects. It's great for system hardening assessment, vulnerability detection (within the scope of configuration checks), and can assist with compliance testing (like PCI-DSS, HIPAA, ISO27001, although it's not a compliance certification tool itself).
Why Use lynis?
- Comprehensive: Performs hundreds of individual tests across many system categories.
- Agentless (for basic scans): You typically run it directly on the host you want to audit; no need to install agents beforehand for a standard scan.
- Clear Output: Provides relatively easy-to-understand results, categorizing findings as Warnings or Suggestions.
- Actionable Suggestions: Often gives specific commands or configuration changes needed to address a finding.
- References: Links findings to online resources (like its own control descriptions at cisofy.com) for more details.
- Non-Intrusive: Primarily performs read-only checks. It doesn't automatically change your system configuration (you have to implement the suggestions yourself).
- Open Source & Extensible: Written primarily in shell script, making it transparent and allowing advanced users to potentially add custom tests.
Installing lynis:
- From Distribution Repositories (Easiest): Often available directly via your package manager. This might not always be the absolute latest version, but it's usually recent enough and easy to manage.
- Debian/Ubuntu:
sudo apt update && sudo apt install lynis
- Fedora/CentOS/RHEL (EPEL):
sudo dnf install epel-release && sudo dnf install lynis
- From Git Repository (Latest Version): Clone the official repository (`git clone https://github.com/CISOfy/lynis`) for the most up-to-date version.
- From Tarball: Download the latest tarball from the CISOfy website.
Running a System Audit:
Navigate to the lynis
directory (if installed from Git) or just use the command if installed from package manager. Run the system audit using sudo
because it needs to read system files:
# If installed from package manager:
sudo lynis audit system
# If installed from Git (adjust path if you cloned elsewhere):
# cd /opt/lynis
# sudo ./lynis audit system
- Execution:
lynis
will display its progress, running tests section by section. It usually pauses after each major section, prompting you to press Enter to continue. This lets you read the immediate results.
Interpreting the lynis
Report:
The most crucial part is the summary section at the very end, labeled -[ Lynis Results ]-
. Key items include:
- Hardening index: A score representing the assessed hardening level (higher is better, but focus on addressing findings).
- Warnings: Potential issues that need further investigation or context.
- Suggestions: The most important part. Concrete recommendations for hardening actions based on failed tests.
- Log file location: Usually
/var/log/lynis.log
for full details. - Report data location: Usually
/var/log/lynis-report.dat
for machine parsing.
Focus on the "Suggestions" section. Each suggestion typically includes:
- A Test ID (e.g.,
SSH-7408
). - A Description of the finding.
- A Recommendation (often specific).
- A URL for more details on the CISOfy website.
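The machine-readable report file uses a simple key=value layout, which makes ad-hoc filtering with standard text tools straightforward. A sketch against a fabricated sample (the pipe-separated `suggestion[]` layout reflects lynis's report format, but the detail fields here are invented; verify against your own `/var/log/lynis-report.dat`):

```shell
# Fabricated sample mimicking the key=value format of lynis-report.dat
cat > /tmp/sample-report.dat <<'EOF'
hardening_index=66
suggestion[]=SSH-7408|Consider hardening SSH configuration|AllowTcpForwarding|-|
suggestion[]=PKGS-7394|Install package apt-show-versions|-|-|
warning[]=FIRE-4512|iptables module(s) loaded, but no rules active|-|-|
EOF

# Print each suggestion's test ID and description
grep '^suggestion\[\]=' /tmp/sample-report.dat \
  | cut -d= -f2- \
  | awk -F'|' '{ printf "%-10s %s\n", $1, $2 }'
```

This kind of one-liner is handy for tracking open suggestions across many hosts without re-reading full-screen reports.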
Using the Results:
- Review: Read each Suggestion and Warning.
- Prioritize: Address critical areas first (e.g., authentication, remote access, firewall).
- Research: Use the Test ID and URL to understand why a change is recommended.
- Implement: Carefully apply the recommended changes.
- Verify: Re-run
sudo lynis audit system
after making changes. The suggestions you fixed should disappear, confirming your actions were effective.
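You can also confirm a fix programmatically by diffing the suggestion lines from the report data before and after hardening. A sketch using fabricated before/after excerpts (in practice you would `grep` these lines out of `/var/log/lynis-report.dat` saved from each run):

```shell
# Fabricated suggestion lists from two audit runs
printf 'suggestion[]=PKGS-7394\nsuggestion[]=SSH-7408\n' > /tmp/before.txt
printf 'suggestion[]=SSH-7408\n' > /tmp/after.txt
sort /tmp/before.txt > /tmp/before.sorted
sort /tmp/after.txt  > /tmp/after.sorted

# Lines only in "before" = suggestions that disappeared, i.e. fixes that took effect
comm -23 /tmp/before.sorted /tmp/after.sorted
# → suggestion[]=PKGS-7394
```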
Other Auditing Tools Brief Mention
- OpenSCAP: A standardized framework (SCAP) for vulnerability scanning and compliance checking. Uses the `oscap` command-line tool (from the `openscap-scanner` package) and requires specific SCAP content (e.g., from the `scap-security-guide` package). More complex but essential for formal compliance against benchmarks like CIS or DISA STIG.
- Chef InSpec: An open-source framework using a Ruby-based language to define security and compliance tests. Very flexible, good for "compliance as code," integrates with configuration management.
For general system hardening and assessment, lynis
provides an excellent starting point.
Workshop Performing a System Audit with lynis
Goal: Install and run the lynis
security auditing tool, review its report, focusing on actionable suggestions, implement one or two of those suggestions, and then verify the improvement by re-running the scan.
Prerequisites:
- A Linux system with
sudo
access. - An internet connection (for installation and potentially accessing report links).
Steps:
-
Install
lynis
: Use the package manager method if available (recommended for ease).
- Method A: Package Manager
- Debian/Ubuntu: `sudo apt update && sudo apt install lynis`
- Fedora/CentOS/RHEL (ensure EPEL repo is enabled if needed): `sudo dnf install epel-release && sudo dnf install lynis`
- Method B: Git (Gets the latest version): `sudo git clone https://github.com/CISOfy/lynis /opt/lynis`
-
Run the Initial Audit: Execute the
lynis
audit command with root privileges (sudo
# If installed via package manager:
sudo lynis audit system
# If installed via Git (use the correct path):
# sudo /opt/lynis/lynis audit system
- Press
Enter
when prompted after each major section to continue the scan.
-
Review the Scan Output and Summary: Observe the results as they scroll. Once finished, scroll up to find the
-[ Lynis Results ]-
section. Pay close attention to the `Warnings:` and especially the `Suggestions:` lists.
Analyze the Suggestions: Read the list carefully. Find one or two suggestions that relate to topics we've covered and seem straightforward to implement. Examples:
SSH-7408
: Suggesting specificsshd_config
changes.AUTH-9328
: Recommending installation or configuration of password quality modules.PKGS-7394
: Suggesting installation of helper packages for patch management.BOOT-5122
: Checking for Grub bootloader password protection.
-
(Example) Implement a Suggestion: Let's assume lynis suggested installing `apt-show-versions` on a Debian/Ubuntu system (Test ID `PKGS-7394`) to help track package versions and update status.
- Install the package: `sudo apt install apt-show-versions`
- (If the suggestion was different, like changing an SSH setting, you would edit the relevant config file, test syntax with
sudo sshd -t
, and restart the service withsudo systemctl restart sshd
as shown in previous workshops).
-
Re-run the Audit: Execute the
lynis
scan again exactly as in Step 2. -
Verify the Fix: Once the second scan completes, find the
-[ Lynis Results ]-
summary again.- Check Suggestions: The specific suggestion you addressed (e.g., related to
apt-show-versions
orPKGS-7394
) should no longer be present in the list. - Check Hardening Index: The score might have increased slightly.
- This comparison confirms your action was successful according to
lynis
.
- Check Suggestions: The specific suggestion you addressed (e.g., related to
-
(Optional) Explore More:
- View the full log:
sudo less /var/log/lynis.log
- Get details about a test ID:
sudo lynis show details <TEST-ID>
- View the full log:
Workshop Summary: You have successfully used lynis
to perform an automated security audit. You learned how to read its report, focusing on actionable suggestions, implemented a recommended fix, and verified its effectiveness by re-running the scan. This audit -> harden -> verify cycle is key to systematically improving system security. Feel free to tackle more suggestions from the report!
Conclusion
You've reached the end of this foundational guide to Linux Security and Hardening! We've covered a lot of ground, moving from the essential principles to practical, hands-on implementation through the workshops.
We began by understanding why hardening is necessary even for Linux systems and then established our first line of defense by securing user accounts, demanding strong passwords, mastering the use of sudo
over direct root access, and managing user privileges effectively. We dove into the filesystem, learning the language of permissions (rwx
for user, group, other), applying chmod
and chown
correctly, understanding the power and risks of special permissions (SUID, SGID, Sticky Bit), and setting secure defaults with umask
.
Our focus then shifted to the network perimeter, where we learned to build digital walls with firewalls (UFW, firewalld
), meticulously hardened the vital SSH service against common attacks, and practiced minimizing the system's attack surface by identifying and disabling unnecessary network services. Recognizing that prevention isn't always perfect, we investigated the critical role of logging and monitoring, learning how to navigate system logs (journalctl
, /var/log
) and implementing automated brute-force defense with fail2ban
.
We underscored the non-negotiable importance of staying current by diligently applying software updates via package managers. Finally, we closed the loop by learning how to objectively assess our hardening efforts and discover areas for improvement using automated auditing tools like lynis
, embracing the vital cycle of auditing, hardening, and verifying.
Key Takeaways to Remember:
- Security is Layered: Effective security isn't about a single magic bullet. It's the result of applying multiple layers of defense – strong authentication, correct permissions, network controls, vigilant monitoring, timely updates, and regular auditing – working together.
- Principle of Least Privilege: This is a recurring theme for a reason. Always grant only the minimum permissions required for a user, service, or process to perform its legitimate function. Avoid excessive privileges whenever possible.
- Automation is Your Ally: Leverage tools like
fail2ban
for automated defense,unattended-upgrades
ordnf-automatic
for timely security patching, andlynis
oroscap
for consistent auditing. Automation reduces human error and ensures regular execution of critical tasks. - Security is an Ongoing Process, Not a Destination: This guide provides a strong foundation, but the work isn't done. New threats emerge, new vulnerabilities are discovered in software, and system configurations can change. Maintaining security requires continuous effort: stay informed, apply updates regularly, perform periodic audits, and adapt your defenses as needed.
Where to Go From Here?
This material covered the essential foundations of Linux security and hardening. If you're eager to learn more and further strengthen your systems, consider exploring these more advanced topics:
- Mandatory Access Control (MAC): Systems like SELinux (Security-Enhanced Linux, common on RHEL/Fedora/CentOS) and AppArmor (Application Armor, common on Debian/Ubuntu/SUSE) provide a much finer-grained, policy-based control over what processes are allowed to do, even if running as root. They significantly limit the potential damage from a compromised application.
- Advanced Intrusion Detection/Prevention (IDS/IPS): Explore more comprehensive HIDS like Wazuh or network-based tools like Suricata (which can also act as an Intrusion Prevention System, actively blocking malicious traffic).
- Centralized Logging: For managing multiple servers, configuring
rsyslog
or using dedicated agents (like Filebeat) to forward logs to a central log management system (e.g., Graylog, Splunk, or an ELK/OpenSearch stack) allows for easier searching, correlation of events across systems, and long-term storage. - Vulnerability Scanning: Regularly scanning your systems with tools like OpenVAS (open-source) or commercial scanners (Nessus, Qualys) helps identify specific known software vulnerabilities (CVEs) that need patching.
- Kernel Parameter Tuning (
sysctl
): Optimizing kernel settings related to network stack security (e.g., SYN cookies, disabling IP forwarding if not a router, ignoring certain ICMP messages) and memory management (e.g., ASLR settings) can further harden the system's core. - Application-Specific Hardening: Dive into the security best practices specifically recommended for the applications you run, such as web servers (Apache, Nginx hardening guides), databases (MySQL/PostgreSQL security settings), containerization platforms (Docker security), etc.
- Cryptography and Public Key Infrastructure (PKI): Deeper understanding of encryption, digital signatures, certificate management.
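As a taste of the kernel parameter tuning topic above, here is an illustrative drop-in file. The values shown are common hardening recommendations, not universal defaults; review each one against your system's role before applying (place the file under `/etc/sysctl.d/` and load it with `sudo sysctl --system`):

```
# /etc/sysctl.d/99-hardening.conf -- illustrative values, review before use

# Enable TCP SYN cookies (mitigates SYN flood attacks)
net.ipv4.tcp_syncookies = 1

# Disable IP forwarding unless this machine routes traffic
net.ipv4.ip_forward = 0

# Ignore ICMP echo requests sent to broadcast addresses
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Full address space layout randomization (ASLR)
kernel.randomize_va_space = 2
```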
By consistently applying the principles and practices learned here, you have already taken significant strides towards operating a much more secure and resilient Linux environment compared to a default installation. Continue to build on this foundation, practice your skills, remain curious about security developments, and stay vigilant in the ever-evolving landscape of cybersecurity. Well done on completing this essential journey!