I prefer to use Debian for the vast majority of my servers, whether it's a baremetal install or a virtual machine. This guide will take you through the netinstall process. At the end, package installation recommendations will vary depending on whether it's being installed as a VM after finishing the libvirt guide or baremetal on a physical machine.
Installation
The majority of this will be very straightforward as the Debian developers have written a great installer. The first screen you come to will ask which installer you want to use. I always choose the text-based one, Install, rather than the GUI, Graphical install, as it's a better remote experience.
If installing Debian to a baremetal machine from Hetzner, refer to the Debian/Hetzner page for more info.
Timezone
For the second screen, select whatever it is you have sitting in front of you. For the next, I always choose United States, then later run sudo dpkg-reconfigure tzdata to set the timezone to UTC (this will be covered later).
Hostname
For the hostname of a physical machine, use its location. If it's in the US, name it us1; for Germany, de1. For a VM, name it something along the lines of what you used when creating it, whether that's nextcloud.us1 for a Nextcloud instance on a server in the US, loadbalancer.de2 for a load balancer on your second server in Germany, and so on. It needs to be instantly recognisable such that you can identify its purpose and what's installed at a glance.
For a domain name, it's best to use one you actually own, and I like to put all of my servers in a dedicated DNS zone to keep things clean. With this, you would have srv.example.com and use it for all of your servers distributed across the world. You could then use a subdomain to separate physical hosts, such as de1.srv.example.com for a server in Germany or us1.srv.example.com for one in the US. Under those, designate each service. Using the earlier example, nextcloud.us1.srv.example.com would be the fully qualified domain name for a Nextcloud VM running on a physical host in the US. This is all for efficiency; when you're monitoring servers and VMs in a unified dashboard like Grafana (covered elsewhere), you want to be able to tell ''exactly'' what you're looking at the moment you look at it, without having to check your records, curl wtfismyip.com/text to see your IP address, and so on.
On a side note, what you use for server domains has nothing to do with the URL the application is accessible under. Even though the Nextcloud host is at nextcloud.us1.srv.example.com, it might be publicly accessible through cloud.example.com or files.example.com or whatever. These long domains are only useful for you, the admin. As a bonus, they make SSHFP records easier to work with. See DNS for more information about domains and managing them.
Users
On to users and passwords. You can set different passwords for the root and nonroot user but I don't. As the installation screen says:
If you leave this empty, the root account will be disabled and the system's initial user will be given the power to become root using the "sudo" command
By running something like sudo su, the nonroot user can become root after entering their own password.
The more secure a password, the harder it is to remember. At the moment, I have four passwords memorized, all generated with Diceware: my PC's boot disk decryption key, the home disk decryption key, my user's password, and the one for my password manager. Everything else is stored in KeePassXC, my password manager of choice. With that in mind, generate a 12-character password with something like pwgen (available in all Linux repos). Running pwgen 12 1 will output a 12-character password that you'll manually type into the console. Later, it will be changed to something more secure.
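As a quick sketch, here's the pwgen invocation with a fallback in case pwgen isn't installed yet (the /dev/urandom pipeline is an assumption of mine, not from the original guide):

```shell
# Generate one 12-character password.
# pwgen is in Debian's repos; fall back to /dev/urandom if it's missing.
if command -v pwgen >/dev/null 2>&1; then
    pw=$(pwgen 12 1)
else
    pw=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12)
fi
echo "$pw"
```

Either way you get a single random 12-character string to type into the installer.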
Partitioning
Next up is partitioning. If this is a baremetal installation, I recommend choosing Guided then the option with encrypted LVM. This will set up full disk encryption. On a physical host, this is ''extremely'' important as all of your personal data will be unreadable should the machine ever be stolen or seized. If this is a VM, don't worry about it and just choose Use entire disk. Because the host has FDE, all of the VMs will be encrypted too. Go with the defaults for the next few screens then write the changes to your disk.
After that, disk setup will begin. Depending on the size of your disk and whether or not you're encrypting it, this could take anywhere from 10 minutes to 2 hours. If you're encrypting the disk, go do something else while you wait; it could be a while.
Mirrors
The next screen is about selecting which Debian mirror you'll be using. This decision is an important one as it determines the speed at which you'll get updates as well as how quickly packages will be downloaded. I'm not going to recommend a specific mirror; this is just something to keep in mind. Downloads will likely be faster from mirrors geographically closer to your physical server, while updates from Debian itself (deb.debian.org) will be more timely. The closest mirror to you might not sync its repo until two weeks after Debian has updated theirs, so you'd be waiting two weeks for a new version of whatever application. I don't actually know of any that are this slow; it's just an example. Regardless of which one you go with, you'll get the updates eventually.
Proxy
The next screen is about a proxy. If you need to fill something in there, you'll know; otherwise, leave it blank. The screen after asks about the package usage survey. I do participate, but it's a personal decision.
Software
For software selection, the only items you should have checked are SSH server and standard system utilities. The other options (including web server) install extra components we won't need. Installation shouldn't take too long. After it's finished, you'll need to reboot the VM or physical machine then continue to the next section.
Configuration
SSH
For a physical host, run ssh <user>@<ip-address>, enter your password, and make sure you can sign in. If you can, the rest of this guide can be followed from that session and the console session can be closed. For a VM, follow along from virt-manager's console and skip trying to connect for now.
{{Note|Warning: while you're configuring SSH from an SSH session, do not exit or disconnect. If you do, you risk losing access and having to open virt-manager again to fix it through the console. This likely means typing that incredibly long password by hand or using something like xdotool on X11 or ydotool on Wayland to do it for you. Whenever you're editing anything to do with the network, keep an existing session connected while you try to open another one. If you can't, you've messed something up and need to fix it.|warn}}
Remote
Ports & IP addresses
On physical hosts, you'll only want to change the port. SSH's well-known port is 22 (registered with IANA) but we can still change that by editing /etc/ssh/sshd_config. Find the line reading Port 22 (around line 13 in Debian's default file; uncomment it if necessary). Change that number to something above 5000, save, and run systemctl restart ssh. Open a new terminal and connect with SSH again, adding -p <port>. If you're dropped to a shell, you've succeeded. Otherwise, take a look at that configuration again.
On a VM, you'll want to change the port ''in addition'' to the address it's listening on. The ListenAddress directive should be on the very next line; set it to the VM's own internal address (the one shown in virt-manager, e.g. 192.168.122.135 on libvirt's default network). SSH will then only listen on the internal interface, leaving it reachable from the physical host but not from anywhere else. Again, save the file and restart sshd, then make sure you can still log in. Changing the port number will prevent most port scans from revealing that SSH is open because they generally just check well-known ports. If you're being specifically targeted, however, the port scan will be configured to check ''far'' more than just those. Thus, additional measures should be taken.
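Put together, the changed lines in a VM's sshd_config might look like this (the port and address are illustrative examples, not prescribed values; use the internal address virt-manager shows for your VM):

```
# /etc/ssh/sshd_config (on the VM) -- illustrative values
Port 8571
ListenAddress 192.168.122.135
```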
Logging in
Edit /etc/ssh/sshd_config again and find the PermitRootLogin and PasswordAuthentication lines (around lines 32 and 56 in Debian's default file). The first should say PermitRootLogin no and the second PasswordAuthentication no. As the names suggest, the first disallows remote root login entirely and the second disables password authentication. You always want to log in as an ''unprivileged'' user and escalate from there. Because we'll be using public and private key pairs to authenticate, brute-force attacks are ''much'' less likely to succeed and connecting actually becomes simpler.
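After editing, the two directives should read as below; running sshd -t before restarting checks the file for syntax errors so you don't lock yourself out with a typo:

```
# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
```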
{{Note|If you're on a physical host, remember to keep your SSH session open. If you disconnect now, you'll have to fix it from the machine's console.}}
Local
Use key-based authentication
On your local machine, we're going to generate an SSH keypair using the Ed25519 scheme. This is simply ssh-keygen -t ed25519. You'll be prompted for a passphrase and you ''can'' enter one but I personally don't. When prompted for a save location, put it in ~/.ssh/<hostname> as this will allow for easier identification later.
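As a concrete sketch (the name us1 is just an example hostname; -N "" gives the key an empty passphrase, matching the choice above, and you can omit it to be prompted instead):

```shell
# Generate an Ed25519 keypair named after the server it's for.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t ed25519 -N "" -f ~/.ssh/us1
# Two files result: ~/.ssh/us1 (private) and ~/.ssh/us1.pub (public).
ls ~/.ssh/us1 ~/.ssh/us1.pub
```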
For a physical host, paste the contents of ~/.ssh/<hostname><b>.pub</b> into ~/.ssh/authorized_keys on your server. That <b>.pub</b> extension is especially important: if you omit it, you'll be pasting your private key into your server, which is never OK. Your private keys should not leave your machine. Save the file and run ssh -i ~/.ssh/<hostname> -p <port> <user>@<ip-address> on your local machine. This time, you shouldn't be prompted for your password. If you are, make sure you've pasted the correct public key and that the filename on the server is correct too. If all goes well, move on to the next section.
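On the server side, the same step can be sketched as a few commands; the key string below is a placeholder for the real contents of your <b>.pub</b> file:

```shell
# Append a public key to authorized_keys with safe permissions.
# The key below is a placeholder; substitute your real public key.
pubkey="ssh-ed25519 AAAAexamplekeydata user@laptop"
mkdir -p ~/.ssh && chmod 700 ~/.ssh
printf '%s\n' "$pubkey" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

The permissions matter: with StrictModes enabled (the default), sshd will ignore an authorized_keys file that's writable by anyone other than the owner.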
For a VM, the process is a bit more complicated as you can't paste into the console. What you'll need to do is paste the contents of that <b>.pub</b> file into a pastebin that supports downloading the raw contents. As this is a public key, that's alright; public keys are designed to be widely distributed with no consequences. Again, ''do not give out your private key'' and especially don't paste it anywhere online. Using separate keys for each server minimises the effects of leaking a private key but, even if only one is leaked and only that VM is compromised, it's still compromised and you should delete the VM and the key then start over. With that said, termbin is probably the easiest to use for this purpose. Run cat ~/.ssh/<hostname><b>.pub</b> | nc termbin.com 9999. A URL will be output and you'll need to type it into your VM console like this: curl https://termbin.com/<string> > ~/.ssh/authorized_keys. Even now, you still won't be able to connect as your VM is inaccessible from outside the physical server. Think of the VM as being behind your router at home: devices on your local network can connect to each other but external devices have to first go through that router. In this situation, the router is the physical host. Continue to the next section for using the physical server as a jump point to your VM.
"Automating" SSH login
Creating a configuration file allows for a ''much'' better SSH experience as it not only defines all the parameters in the commands above but also specifies a friendly name to use when connecting. This file is stored in ~/.ssh/config
. Setup is a little bit complicated but you'll have two different ways of connecting defined. One is for the physical host and the other is for the VM. The physical host can be connected to directly using the following snippet. Make changes where necessary.
Host <hostname>
Hostname <ip address>
User <user>
Port <port>
IdentityFile ~/.ssh/<hostname>
ForwardAgent no
For the VM, go to View → Details → NIC in virt-manager and look at what it says in the IP address field. Using the snippet above, take the address from virt-manager and paste it in the Hostname field, then fill out the rest. You'll also need to add a line at the bottom for ProxyJump and put the friendly name (Host) of your physical machine there. Here's an example of both:
Host lu1
Hostname 203.0.113.42
User amolith
Port 9164
IdentityFile ~/.ssh/lu1
ForwardAgent no
Host ejabberd.lu1
Hostname 192.168.122.135
User amolith
Port 8571
IdentityFile ~/.ssh/ejabberd.lu1
ForwardAgent no
ProxyJump lu1
With ProxyJump, SSH will connect to the physical host first, lu1, then jump from there to the final destination, ejabberd.lu1. Add blocks like this for every VM and, complicated as it may seem, connecting becomes very quick and simple. With that config, all that's required to connect to the servers is ssh lu1 or ssh ejabberd.lu1.
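If you want to check what ssh will actually use for an alias without connecting, modern OpenSSH's -G flag prints the fully resolved configuration (the alias here matches the example above):

```shell
# Print the effective configuration for a host alias without connecting.
ssh -G ejabberd.lu1 | grep -Ei '^(hostname|port|user|proxyjump|identityfile) '
```

This is a handy way to confirm that a Host block matched and that ProxyJump and the key file are being picked up as intended.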