My usual practice is to create a couple of users on a host:
🐺 One user for myself.
This user is usually wolf or wnoble ¯\_(ツ)_/¯.
This user has no special privileges, and is usually only permitted to log in from a few addresses.
🤖 One user for automation.
This user usually has password authentication disabled.
The user(s) actually running the services.
I keep the same UID/GID across all my machines so that NFS-mounted home dirs, if I ever choose to use them, aren’t an utter mess.
Change the password of each user that is capable of logging in.
The password should be unique, long, and stored someplace safe, like in a password manager such as 1Password.
Users that should be able to perform sudo actions can.
Each host should be reachable from the others as the relevant users.
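Here’s a rough sketch of those steps in shell. The usernames, the UID/GID values, and the public key are made-up placeholders:
# Personal user with a fixed UID/GID (1137 is just an example value)
groupadd --gid 1137 wolf
useradd --uid 1137 --gid 1137 --create-home --shell /bin/bash wolf
passwd wolf                        # unique, long, and stored in a password manager
usermod --append --groups sudo wolf
# Automation user with its own fixed UID/GID; key-based auth only
groupadd --gid 1138 automation
useradd --uid 1138 --gid 1138 --create-home --shell /bin/bash automation
usermod --lock automation          # no usable password, so ssh keys are the only way in
# Drop in the public key(s) the other hosts will use to reach this one
install -d -m 0700 -o automation -g automation ~automation/.ssh
echo 'ssh-ed25519 AAAA... automation@somehost' > ~automation/.ssh/authorized_keys
chown automation:automation ~automation/.ssh/authorized_keys
chmod 0600 ~automation/.ssh/authorized_keys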
… Let’s continue, shall we?
Configure the network interface to be statically defined
As a general rule, your core infrastructure should be as self-reliant as possible.
This means removing as many functional dependencies as possible.
When viewed through that lens, it makes a lot of sense to statically configure your DNS server’s network stack.
That being said, it’s also worthwhile to make things as antifragile as possible.
On the off chance that the host reverts its networking config to DHCP,
it’d be nice if your DHCP server issued it the address everything expects it to be at, right?
……RIGHT?
Identify your host’s MAC address and current IP address.
Running the command ip a show eth0 should give you everything you need here. It should output something similar to:
root@coredns:~/# ip a show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether dc:a6:32:ff:aa:dd brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/24 metric 100 brd 192.168.1.255 scope global dynamic eth0
       valid_lft 24271sec preferred_lft 24271sec
    inet6 fe80::dea6:32ff:fe55:9063/64 scope link
       valid_lft forever preferred_lft forever
This is telling you that your eth0 interface has the MAC address dc:a6:32:ff:aa:dd. Use this to configure your DHCP server.
Tip
Configuring your DHCP server to assign a specific address to this host is optional, and outside the scope of this guide.
It’s worth doing, imho.
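For what it’s worth, if your DHCP server happens to be ISC dhcpd, a static reservation looks roughly like this (the host block name and fixed address are illustrative; dnsmasq and most router UIs have their own equivalent):
# /etc/dhcp/dhcpd.conf -- illustrative static reservation
host coredns {
  hardware ethernet dc:a6:32:ff:aa:dd;
  fixed-address 192.168.1.53;
}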
As you can see below, the original example config for netplan isn’t terribly informative.
Fortunately the netplan manpage has some good examples in it: man netplan
/etc/netplan/50-cloud-init.yaml (original)
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  ethernets:
    eth0:
      dhcp4: true
      optional: true
  version: 2
/etc/netplan/50-cloud-init.yaml (static config)
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  version: 2
  ethernets:
    eth0:
      match:
        macaddress: 'dc:a6:32:ff:aa:dd'
      wakeonlan: true
      set-name: 'eth0'
      dhcp4: false
      addresses:
        - '192.168.53.53/16'
      gateway4: '192.168.1.1'
      nameservers:
        search: [dmz.wolfspyre.io, wolfspyre.io]
        addresses: ['127.0.0.1']
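The step that actually applies this is presumably netplan try, which rolls the change back automatically unless you confirm within a timeout (that’s the ENTER prompt mentioned below); netplan apply makes it permanent once you’re satisfied:
sudo netplan try     # applies the config; reverts automatically if you don't confirm in time
sudo netplan apply   # makes it stick once you're happy with it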
If this worked as anticipated, you’ll be prompted to press ENTER before a timer reverts the change. If not, wait a minute or so, then ssh back in and try again.
Warning
This might be obvious, but if you’re changing the address your host will have, a successful test will kill your ssh connection.
In this case, getting a ping going from your workstation to the new target address will make it easy for you to assess when you can ssh into the host at its new address.
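Something along these lines from your workstation, using the new static address from the config above, does the trick:
ping 192.168.53.53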
The nice thing about the example netplan config is that it tells you exactly how to disable cloud-init’s network configuration:
To disable cloud-init’s network configuration capabilities, write a file /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following: network: {config: disabled}
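A one-liner sketch of doing exactly that, using the path and content from the comment above:
echo 'network: {config: disabled}' | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg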
A lot of this stuff could, and arguably should, be done with automation… AND I don’t want to get into a chicken/egg situation with my core infrastructure.
I will likely write some ansible playbooks to configure a lot of this in the future, which will retain steady-state config over time.
BUT I wanted this to focus on CoreDNS, not on the automation-tool-du-jour1
So for critical infra hosts, I populate /etc/hosts with the IP addresses of the hosts that particular system will need to talk to in order to function.
This helps reduce fragility in times of odd fluctuation.
Uncomment the entries for the relevant network segment for the host being built.
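A sketch of how that layout might look; the segment names, hostnames, and addresses are made up for illustration (only the dmz.wolfspyre.io domain comes from the netplan config above):
# --- dmz segment: uncomment on hosts living in the dmz ---
#192.168.1.1     gw.dmz.wolfspyre.io        gw
#192.168.53.53   coredns.dmz.wolfspyre.io   coredns
# --- lab segment: uncomment on hosts living in the lab ---
#10.10.10.53     coredns.lab.wolfspyre.io   coredns-lab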
The rc.local script runs after the system is up.
Specifically WHEN it runs depends on a few things, but it’s sufficient to say it runs late in the boot process.
We will use it later in this guide to optionally refresh things.
Info
It’s easy to forget that the script must be executable in order to be run. Ergo: chmod +x /etc/rc.local
/etc/rc.local
#!/usr/bin/env bash
##/etc/rc.local
/etc/init.d/procps restart
# This is only here to give you a way to validate the script is firing.
/usr/bin/date > /tmp/rclocal_script_has_run
exit 0
I try to populate /etc/services with relevant information for the ports services will listen on. This is of questionable value; IANA’s port assignment registry should be considered the canonical source.
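For example, an entry along these lines; the service name is made up, and 9153 is CoreDNS’s default Prometheus metrics port:
# local additions; IANA remains the source of truth
coredns-metrics    9153/tcp    # CoreDNS prometheus plugin metrics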