Michel Le Cocq 2022-02-24 18:54:43 +01:00
commit 6ec28b5284
4 changed files with 234 additions and 16 deletions

@ -0,0 +1,64 @@
With the firewall configured, it was time to set up Fail2ban. It can be installed from pkg, along with pyinotify for kqueue support.
~~~
pkg install py37-fail2ban-0.11.1_2
pkg install py37-pyinotify-0.9.6
~~~
The default configuration is in /usr/local/etc/fail2ban/jail.conf, and overrides should be put in jail.local. First I needed to tell Fail2ban to use PF.
~~~
[DEFAULT]
banaction = pf
~~~
This refers to the file /usr/local/etc/fail2ban/action.d/pf.conf, which adds banned IP addresses to a PF table called fail2ban. On its own this only registers the addresses with PF, so I needed to add a rule to pf.conf to actually block the traffic.
~~~
table <fail2ban> persist
block in quick from <fail2ban>
~~~
I added this rule directly below *block in all* so that it took precedence over my ICMP rules, as sketched below.
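For context, a minimal sketch of the relevant part of pf.conf; the pass rules here are placeholders standing in for my real ruleset, not taken from it:
~~~
# /etc/pf.conf (excerpt)
table <fail2ban> persist

block in all
# "quick" stops rule evaluation, so banned hosts never reach the pass rules
block in quick from <fail2ban>

pass in quick inet proto icmp all icmp-type echoreq
pass in quick proto tcp to port ssh
~~~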
Back to Fail2ban, I enabled the SSH jail, which watches for failed logins in /var/log/auth.log.
~~~
[sshd]
enabled = true
~~~
Then I reloaded the PF configuration and started Fail2ban.
~~~
service pf reload
echo 'fail2ban_enable="YES"' >> /etc/rc.conf
service fail2ban start
~~~
To see it in action, I can tail the Fail2ban log, list the addresses in the fail2ban table, and inspect the statistics for my PF rules.
~~~
tail /var/log/fail2ban.log
pfctl -t fail2ban -T show
pfctl -v -s rules
~~~
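If a legitimate address ever gets caught, it can be removed again; both of these work (203.0.113.5 is just an example IP):
~~~
fail2ban-client set sshd unbanip 203.0.113.5
pfctl -t fail2ban -T delete 203.0.113.5
~~~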
My final jail.local looks like this: three failures (maxretry) within one hour (findtime) ban an address for 24 hours (bantime).
~~~
[DEFAULT]
bantime = 86400
findtime = 3600
maxretry = 3
banaction = pf
[sshd]
enabled = true
~~~
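To confirm the jail picked these settings up, fail2ban-client can query it:
~~~
fail2ban-client status sshd
~~~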
sources:
* https://www.sqlpac.com/fr/documents/linux-ubuntu-fail2ban-installation-configuration-iptables.html
* https://cmcenroe.me/2016/06/04/freebsd-pf-fail2ban.html

@ -69,6 +69,35 @@ GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
root@laptop:/root# update-grub
~~~
## disable encrypted swap
~~~
root@laptop:/root# swapoff -a                  # stop using all swap
root@laptop:/root# cryptsetup close cryptswap  # tear down the dm-crypt swap mapping
root@laptop:/root# mkswap /dev/nvme0n1p2       # reformat the partition as plain swap
root@laptop:/root# printf "RESUME=/dev/nvme0n1p2" | tee /etc/initramfs-tools/conf.d/resume
root@laptop:/root# update-initramfs -u -k all
root@laptop:/root# update-grub
~~~
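Assuming the encrypted swap was declared in /etc/crypttab (the usual Ubuntu setup; the exact line below is hypothetical), its entry should also be commented out so the mapping is not re-created at boot:
~~~
# /etc/crypttab
#cryptswap /dev/nvme0n1p2 /dev/urandom swap,cipher=aes-xts-plain64
~~~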
* adjust /etc/fstab to
~~~
/dev/nvme0n1p2 none swap discard 0 0
#/dev/mapper/cryptswap none swap discard 0 0
~~~
* check:
~~~
root@laptop:/root# swapon -a
root@laptop:/root# swapon --summary
Filename Type Size Used Priority
/dev/nvme0n1p2 partition 32653308 0 -2
root@laptop:/root#
~~~
### To be solved

@ -0,0 +1,119 @@
## Fixing Debian/Ubuntu UEFI boot manager with Debian/Ubuntu Live
source: [Code Bites](https://emmanuel-galindo.github.io/en/2017/04/05/fixing-debian-boot-uefi-grub/)
Steps summary:
- Boot Debian Live
- Verify Debian Live was loaded with UEFI
- Review devices location and current configuration
- Mount broken system (via chroot)
- Reinstall grub-efi
- Verify configuration
- Logout from chroot and reboot
### Verify Debian Live was loaded with UEFI:
~~~
FromLive $ dmesg | grep -i efi
~~~
If the output mentions EFI (lines like *efi: EFI v2.x by ...*), the live session booted with UEFI. Then check that the EFI variables interface is available:
~~~
FromLive $ ls -l /sys/firmware/efi | grep vars
~~~
### Mount broken system (via chroot)
Mounting another system via chroot is the usual procedure to recover a broken system. Once the chroot command is issued, Debian Live treats the broken system's "/" (root) as its own. Commands run in the chroot environment affect the broken system's filesystems, not those of the Debian Live.
#### My system is full ZFS
You have to add *-f* to force the import, because ZFS thinks the pool still belongs to another system.
*-R* mounts the pool under an altroot path.
~~~
FromLive $ sudo su
FromLive # zpool import -f -R /mnt rpool
~~~
~~~
FromLive # zfs mount rpool/ROOT/ubuntu_myid
~~~
In my case the zpool is also encrypted!
see: [zfs trouble encrypt zpool](zfs-trouble-live-boot-solution)
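In short (taken from that note), the key has to be loaded before mounting, assuming rpool is the encryption root:
~~~
FromLive # zfs load-key rpool
~~~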
### Prepare chroot env
Mount the critical virtual filesystems with the following single command:
~~~
FromLive # for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done
~~~
Mount all the ZFS filesystems on rpool.
~~~
FromLive # zfs mount -a
~~~
Chroot into your normal (and broken) system:
~~~
FromLive # chroot /mnt
~~~
Also import bpool, but without mounting it (*-N*):
~~~
InsideChroot # zpool import -N bpool
~~~
Mount your EFI partition:
~~~
InsideChroot # mount -a
~~~
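If /boot itself does not show up (the bpool dataset is not in fstab on a root-on-ZFS install), mount it explicitly; *ubuntu_myid* is the placeholder dataset name used above:
~~~
InsideChroot # zfs mount bpool/BOOT/ubuntu_myid
~~~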
You should see:
* all your rpool ZFS volumes
* /boot from your bpool
* your EFI partition
~~~
InsideChroot # df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.8M 3.2G 1% /run
rpool/ROOT/ubuntu_19k4ww 272G 3.7G 269G 2% /
bpool/BOOT/ubuntu_19k4ww 1.2G 270M 929M 23% /boot
rpool/USERDATA/yourlogin_43xnpb 280G 12G 269G 5% /home/yourlogin
rpool/USERDATA/root_43xnpb 269G 640K 269G 1% /root
rpool/ROOT/ubuntu_19k4ww/srv 269G 128K 269G 1% /srv
rpool/ROOT/ubuntu_19k4ww/var/lib 269G 34M 269G 1% /var/lib
rpool/ROOT/ubuntu_19k4ww/var/log 269G 47M 269G 1% /var/log
rpool/ROOT/ubuntu_19k4ww/var/spool 269G 128K 269G 1% /var/spool
/dev/nvme0n1p1 511M 16M 496M 4% /boot/efi
rpool/ROOT/ubuntu_19k4ww/var/games 269G 128K 269G 1% /var/games
rpool/ROOT/ubuntu_19k4ww/var/snap 269G 128K 269G 1% /var/snap
rpool/ROOT/ubuntu_19k4ww/var/mail 269G 128K 269G 1% /var/mail
rpool/ROOT/ubuntu_19k4ww/usr/local 269G 256K 269G 1% /usr/local
rpool/ROOT/ubuntu_19k4ww/var/www 269G 128K 269G 1% /var/www
rpool/ROOT/ubuntu_19k4ww/var/lib/AccountsService 269G 128K 269G 1% /var/lib/AccountsService
rpool/ROOT/ubuntu_19k4ww/var/lib/NetworkManager 269G 256K 269G 1% /var/lib/NetworkManager
rpool/ROOT/ubuntu_19k4ww/var/lib/apt 269G 77M 269G 1% /var/lib/apt
rpool/ROOT/ubuntu_19k4ww/var/lib/dpkg 269G 41M 269G 1% /var/lib/dpkg
~~~
### Reinstall grub-efi
~~~
InsideChroot # apt-get install --reinstall grub-efi
~~~
~~~
InsideChroot # update-grub
~~~
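To finish, per the steps summary, leave the chroot and cleanly undo the mounts before rebooting; a minimal sketch, assuming the layout above:
~~~
InsideChroot # exit
FromLive # umount /mnt/boot/efi
FromLive # for i in /run /sys /proc /dev/pts /dev; do umount /mnt$i; done
FromLive # zpool export bpool
FromLive # zpool export rpool
FromLive # reboot
~~~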

@ -1,16 +1,22 @@
---
format: markdown
toc: no
title: zfs trouble encrypt zpool
...
# zfs trouble live boot solution
## zpool trouble: you can mount it from a live system
Boot on a USB drive that supports ZFS, then:
~~~
zpool import -R /mnt rpool
zfs load-key rpool
zfs mount rpool/USERDATA/nomad_e8bdbt
~~~
## in case you want to change the zpool passphrase
~~~{.shell}
zfs change-key rpool
~~~
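If the key is not already loaded, *-l* makes change-key load it first before asking for the new passphrase:
~~~{.shell}
zfs change-key -l rpool
~~~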