From 6352af3f1f907ce006af32acc1a356cdf89a32da Mon Sep 17 00:00:00 2001
From: Michel Le Cocq
Date: Wed, 23 Sep 2020 16:06:13 +0200
Subject: [PATCH 01/13] zfs uefi trouble

---
 full-zfs-ecrypt-uefi-boot-trouble.md | 122 +++++++++++++++++++++++++++
 grub-touble.md                       |   0
 zfs-trouble-live-boot-solution.md    |  38 +++++----
 3 files changed, 144 insertions(+), 16 deletions(-)
 create mode 100644 full-zfs-ecrypt-uefi-boot-trouble.md
 create mode 100644 grub-touble.md

diff --git a/full-zfs-ecrypt-uefi-boot-trouble.md b/full-zfs-ecrypt-uefi-boot-trouble.md
new file mode 100644
index 0000000..61d7cb3
--- /dev/null
+++ b/full-zfs-ecrypt-uefi-boot-trouble.md
@@ -0,0 +1,122 @@
+## Fixing the Debian/Ubuntu UEFI boot manager with a Debian/Ubuntu Live system
+
+source : [Code Bites](https://emmanuel-galindo.github.io/en/2017/04/05/fixing-debian-boot-uefi-grub/)
+
+Steps summary:
+
+- Boot Debian Live
+- Verify Debian Live was loaded with UEFI
+- Review device locations and the current configuration
+- Mount the broken system (via chroot)
+- Reinstall grub-efi
+- Verify the configuration
+- Log out of the chroot and reboot
+
+### Verify Debian Live was loaded with UEFI :
+
+~~~
+$ dmesg | grep -i efi
+~~~
+
+~~~
+$ ls -l /sys/firmware/efi | grep vars
+~~~
+
+### Mount broken system (via chroot)
+
+Mounting another system via chroot is the usual procedure to recover a broken system. Once the chroot command is issued, Debian Live will treat the broken system's "/" (root) as its own. Commands run in the chroot environment will affect the broken system's filesystems and not those of the Debian Live session.
+
+#### My system is full ZFS
+
+You have to add *-f* to force the import, because ZFS thinks the pool belongs to another system.
+*-R* is to use an altroot path.
+
+~~~
+zpool import -f -R /mnt rpool
+~~~
+
+~~~
+zpool import bpool
+~~~
+
+Here the system will tell you it can't mount /boot because it is in use. We don't want to mount it here; we will do it inside the chroot.
+
+~~~
+zfs mount rpool/ROOT/ubuntu_myid
+~~~
+
+I'm in the case where my zpool is encrypted!
+
+see : [zfs trouble encrypt zpool](zfs-trouble-live-boot-solution)
+
+#### My system is not ZFS
+
+Mount the root partition (for example an LVM logical volume)
+
+~~~
+# mount /dev/volumegroup/logicalvolume /mnt
+~~~
+
+Mount the boot partition (e.g. on drive sda)
+
+~~~
+# mount /dev/sda2 /mnt/boot
+~~~
+
+Mount the EFI System Partition (usually in /dev/sda1)
+
+~~~
+# mount /dev/sda1 /mnt/boot/efi
+~~~
+
+### Prepare chroot env
+
+Mount the critical virtual filesystems with the following single command:
+
+~~~
+# for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done
+~~~
+
+Mount all ZFS filesystems on rpool.
+
+~~~
+# zfs mount -a
+~~~
+
+Chroot to your normal (and broken) system
+
+~~~
+# chroot /mnt
+~~~
+
+Then mount your bpool ZFS volumes.
+
+~~~
+inside chroot # zfs mount -a
+~~~
+
+Mount your EFI partition:
+
+~~~
+inside chroot # mount -a
+~~~
+
+You should see :
+
+* all your rpool zfs vol.
+* /boot from your bpool.
+* Your efi partition.
+
+### Reinstall grub-efi
+
+~~~
+# apt-get install --reinstall grub-efi
+~~~
+
+~~~
+# grub-install /dev/sda
+~~~
+
+~~~
+# update-grub
+~~~
diff --git a/grub-touble.md b/grub-touble.md
new file mode 100644
index 0000000..e69de29
diff --git a/zfs-trouble-live-boot-solution.md b/zfs-trouble-live-boot-solution.md
index 5c8cfcc..fd2a096 100644
--- a/zfs-trouble-live-boot-solution.md
+++ b/zfs-trouble-live-boot-solution.md
@@ -1,16 +1,22 @@
-# zfs trouble live boot solution
-## zpool trouble you can mount it from live systeme
-
-boot on usb drive which permit zfs then :
-
-~~~
-zpool import -R /mnt rpool
-zfs load-key rpool
-zfs mount rpool/USERDATA/nomad_e8bdbt
-~~~
-
-## in case you wanted to change zpool passwd
-
-~~~{.shell}
-zfs change-key rpool
-~~~
+---
+format: markdown
+toc: no
+title: zfs trouble encrypt zpool
+...
+
+# zfs trouble live boot solution
+## zpool trouble: you can mount it from a live system
+
+boot on a USB drive that supports ZFS, then:
+
+~~~
+zpool import -R /mnt rpool
+zfs load-key rpool
+zfs mount rpool/USERDATA/nomad_e8bdbt
+~~~
+
+## in case you want to change the zpool passphrase
+
+~~~{.shell}
+zfs change-key rpool
+~~~

From b5484fde3708e010be27e831bf2cb89d6cfcd786 Mon Sep 17 00:00:00 2001
From: Michel Le Cocq
Date: Wed, 23 Sep 2020 16:06:54 +0200
Subject: [PATCH 02/13] Delete page 'grub touble'

---
 grub-touble.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 delete mode 100644 grub-touble.md

diff --git a/grub-touble.md b/grub-touble.md
deleted file mode 100644
index e69de29..0000000

From 1328652cffa4c792d27b1c2a63a035d188426646 Mon Sep 17 00:00:00 2001
From: Michel Le Cocq
Date: Wed, 23 Sep 2020 16:11:09 +0200
Subject: [PATCH 03/13] ajout lite partitions

---
 full-zfs-ecrypt-uefi-boot-trouble.md | 34 ++++++++++++++++++++++++----
 1 file changed, 30 insertions(+), 4 deletions(-)

diff --git a/full-zfs-ecrypt-uefi-boot-trouble.md b/full-zfs-ecrypt-uefi-boot-trouble.md
index 61d7cb3..23ad599 100644
--- a/full-zfs-ecrypt-uefi-boot-trouble.md
+++ b/full-zfs-ecrypt-uefi-boot-trouble.md
@@ -103,20 +103,46 @@ inside chroot # mount -a
 
 You should see :
 
-* all your rpool zfs vol.
+* all your rpool zfs vol
 * /boot from your bpool.
 * Your efi partition.
 
+~~~
+inside chroot # df -h
+Sys. 
de fichiers Taille Utilisé Dispo Uti% Monté sur +udev 16G 0 16G 0% /dev +tmpfs 3,2G 1,8M 3,2G 1% /run +rpool/ROOT/ubuntu_19k4ww 272G 3,7G 269G 2% / +bpool/BOOT/ubuntu_19k4ww 1,2G 270M 929M 23% /boot +rpool/USERDATA/nomad_43xnpb 280G 12G 269G 5% /home/nomad +rpool/USERDATA/root_43xnpb 269G 640K 269G 1% /root +rpool/ROOT/ubuntu_19k4ww/srv 269G 128K 269G 1% /srv +rpool/ROOT/ubuntu_19k4ww/var/lib 269G 34M 269G 1% /var/lib +rpool/ROOT/ubuntu_19k4ww/var/log 269G 47M 269G 1% /var/log +rpool/ROOT/ubuntu_19k4ww/var/spool 269G 128K 269G 1% /var/spool +/dev/nvme0n1p1 511M 16M 496M 4% /boot/efi +rpool/ROOT/ubuntu_19k4ww/var/games 269G 128K 269G 1% /var/games +rpool/ROOT/ubuntu_19k4ww/var/snap 269G 128K 269G 1% /var/snap +rpool/ROOT/ubuntu_19k4ww/var/mail 269G 128K 269G 1% /var/mail +rpool/ROOT/ubuntu_19k4ww/usr/local 269G 256K 269G 1% /usr/local +rpool/ROOT/ubuntu_19k4ww/var/www 269G 128K 269G 1% /var/www +rpool/ROOT/ubuntu_19k4ww/var/lib/AccountsService 269G 128K 269G 1% /var/lib/AccountsService +rpool/ROOT/ubuntu_19k4ww/var/lib/NetworkManager 269G 256K 269G 1% /var/lib/NetworkManager +rpool/ROOT/ubuntu_19k4ww/var/lib/apt 269G 77M 269G 1% /var/lib/apt +rpool/ROOT/ubuntu_19k4ww/var/lib/dpkg 269G 41M 269G 1% /var/lib/dpkg +/dev/nvme0n1p1 511M 16M 496M 4% /boot/efi +~~~ + ### Reinstall grub-efi ~~~ -# apt-get install --reinstall grub-efi +inside chroot # apt-get install --reinstall grub-efi ~~~ ~~~ -# grub-install /dev/sda +inside chroot # grub-install /dev/sda ~~~ ~~~ -# update-grub +inside chroot # update-grub ~~~ From b7ce356350000651a41e6051c89c5a8e29fc55ad Mon Sep 17 00:00:00 2001 From: Michel Le Cocq Date: Wed, 23 Sep 2020 16:16:07 +0200 Subject: [PATCH 04/13] fusion --- full-zfs-ecrypt-uefi-boot-trouble.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/full-zfs-ecrypt-uefi-boot-trouble.md b/full-zfs-ecrypt-uefi-boot-trouble.md index 23ad599..17bce5b 100644 --- a/full-zfs-ecrypt-uefi-boot-trouble.md +++ b/full-zfs-ecrypt-uefi-boot-trouble.md @@ -114,7 
+114,7 @@ udev 16G 0 16G 0% /dev tmpfs 3,2G 1,8M 3,2G 1% /run rpool/ROOT/ubuntu_19k4ww 272G 3,7G 269G 2% / bpool/BOOT/ubuntu_19k4ww 1,2G 270M 929M 23% /boot -rpool/USERDATA/nomad_43xnpb 280G 12G 269G 5% /home/nomad +rpool/USERDATA/yourlogin_43xnpb 280G 12G 269G 5% /home/yourlogin rpool/USERDATA/root_43xnpb 269G 640K 269G 1% /root rpool/ROOT/ubuntu_19k4ww/srv 269G 128K 269G 1% /srv rpool/ROOT/ubuntu_19k4ww/var/lib 269G 34M 269G 1% /var/lib From bf20d3857516eff738adeedae75f499e5b58817a Mon Sep 17 00:00:00 2001 From: Michel Le Cocq Date: Wed, 23 Sep 2020 16:19:42 +0200 Subject: [PATCH 05/13] indentation --- full-zfs-ecrypt-uefi-boot-trouble.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/full-zfs-ecrypt-uefi-boot-trouble.md b/full-zfs-ecrypt-uefi-boot-trouble.md index 17bce5b..2988998 100644 --- a/full-zfs-ecrypt-uefi-boot-trouble.md +++ b/full-zfs-ecrypt-uefi-boot-trouble.md @@ -114,7 +114,7 @@ udev 16G 0 16G 0% /dev tmpfs 3,2G 1,8M 3,2G 1% /run rpool/ROOT/ubuntu_19k4ww 272G 3,7G 269G 2% / bpool/BOOT/ubuntu_19k4ww 1,2G 270M 929M 23% /boot -rpool/USERDATA/yourlogin_43xnpb 280G 12G 269G 5% /home/yourlogin +rpool/USERDATA/yourlogin_43xnpb 280G 12G 269G 5% /home/yourlogin rpool/USERDATA/root_43xnpb 269G 640K 269G 1% /root rpool/ROOT/ubuntu_19k4ww/srv 269G 128K 269G 1% /srv rpool/ROOT/ubuntu_19k4ww/var/lib 269G 34M 269G 1% /var/lib From dac2cb310bd58813cff3ee03263f021ec289a37d Mon Sep 17 00:00:00 2001 From: Michel Le Cocq Date: Wed, 23 Sep 2020 16:21:37 +0200 Subject: [PATCH 06/13] add promt env --- full-zfs-ecrypt-uefi-boot-trouble.md | 25 +++++++++++++------------ 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/full-zfs-ecrypt-uefi-boot-trouble.md b/full-zfs-ecrypt-uefi-boot-trouble.md index 2988998..07f5d8b 100644 --- a/full-zfs-ecrypt-uefi-boot-trouble.md +++ b/full-zfs-ecrypt-uefi-boot-trouble.md @@ -15,11 +15,11 @@ Steps summary: ### Verify Debian Live was loaded with UEFI : ~~~ -$ dmesg | grep -i efi +from live $ dmesg | grep -i 
efi ~~~ ~~~ -$ ls -l /sys/firmware/efi | grep vars +from live $ ls -l /sys/firmware/efi | grep vars ~~~ ### Mount broken system (via chroot) @@ -32,17 +32,18 @@ You have to add *-f* to force import because zfs think he should be on an other *-R* is to use an altroot path. ~~~ -zpool import -f -R /mnt rpool +from live $ sudo su +from live # zpool import -f -R /mnt rpool ~~~ ~~~ -zpool import bpool +from live # zpool import bpool ~~~ Here system will tell you he can't mount it because /boot is in use. We don't wanted to mount it here, we will do it inside the chroot. ~~~ -zfs mount rpool/ROOT/ubuntu_myid +from live # zfs mount rpool/ROOT/ubuntu_myid ~~~ I'm in the case where ny zpool is encrypt ! @@ -53,20 +54,20 @@ see : [zfs trouble encrypt zpool](zfs-trouble-live-boot-solution) Mount root partition (for example an lvm) -~~~ -# mount /dev/volumegroup/logicalvolume /mnt +~~~ +from live # mount /dev/volumegroup/logicalvolume /mnt ~~~ Mount boot partition (F.e. in drive sda) ~~~ -# mount /dev/sda2 /mnt/boot +from live # mount /dev/sda2 /mnt/boot ~~~ Mount the EFI System Partition (usually in /dev/sda1) ~~~ -# mount /dev/sda1 /mnt/boot/efi +from live # mount /dev/sda1 /mnt/boot/efi ~~~ ### Prepare chroot env @@ -74,19 +75,19 @@ Mount the EFI System Partition (usually in /dev/sda1) Mount the critical virtual filesystems with the following single command: ~~~ -# for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done +from live # for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done ~~~ Mount all zfs file system on rpool. ~~~ -# zfs mount -a +from live # zfs mount -a ~~~ Chroot to your normal (and broken) system device ~~~ -# chroot /mnt +from live # chroot /mnt ~~~ Then here mon bpool zfs vol. 
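[Editor's note: the live-session sequence that the patches above converge on can be sketched as one small script. This is an editorial sketch, not part of the original patches: the pool names (rpool, bpool), the ubuntu_myid dataset suffix and the /mnt altroot are the article's examples, and the DRY_RUN switch with its run helper is an assumption added here so the plan can be printed and checked before running it for real as root on a live system.]

```shell
#!/bin/sh
# Recovery plan from a ZFS-aware live session, as described in the patches above.
# DRY_RUN=1 (the default here) prints each command instead of executing it;
# run with DRY_RUN=0 as root on a real live system.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

recover() {
  run zpool import -f -R /mnt rpool          # force import under the /mnt altroot
  run zfs mount rpool/ROOT/ubuntu_myid       # mount the root dataset
  for i in /dev /dev/pts /proc /sys /run; do # bind the virtual filesystems
    run mount -B "$i" "/mnt$i"
  done
  run zfs mount -a                           # mount the remaining rpool datasets
  run chroot /mnt                            # enter the broken system
}

recover
```

With DRY_RUN=1 this only echoes the nine commands in order, which makes the sequence easy to review before touching a real pool.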
From 145096fcda10f7f097cad8950365418fc95294e9 Mon Sep 17 00:00:00 2001 From: Michel Le Cocq Date: Wed, 23 Sep 2020 16:24:27 +0200 Subject: [PATCH 07/13] change promt --- full-zfs-ecrypt-uefi-boot-trouble.md | 36 ++++++++++++++-------------- 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/full-zfs-ecrypt-uefi-boot-trouble.md b/full-zfs-ecrypt-uefi-boot-trouble.md index 07f5d8b..33366c6 100644 --- a/full-zfs-ecrypt-uefi-boot-trouble.md +++ b/full-zfs-ecrypt-uefi-boot-trouble.md @@ -15,11 +15,11 @@ Steps summary: ### Verify Debian Live was loaded with UEFI : ~~~ -from live $ dmesg | grep -i efi +FromLive $ dmesg | grep -i efi ~~~ ~~~ -from live $ ls -l /sys/firmware/efi | grep vars +FromLive $ ls -l /sys/firmware/efi | grep vars ~~~ ### Mount broken system (via chroot) @@ -32,18 +32,18 @@ You have to add *-f* to force import because zfs think he should be on an other *-R* is to use an altroot path. ~~~ -from live $ sudo su -from live # zpool import -f -R /mnt rpool +FromLive $ sudo su +FromLive # zpool import -f -R /mnt rpool ~~~ ~~~ -from live # zpool import bpool +FromLive # zpool import bpool ~~~ Here system will tell you he can't mount it because /boot is in use. We don't wanted to mount it here, we will do it inside the chroot. ~~~ -from live # zfs mount rpool/ROOT/ubuntu_myid +FromLive # zfs mount rpool/ROOT/ubuntu_myid ~~~ I'm in the case where ny zpool is encrypt ! @@ -55,19 +55,19 @@ see : [zfs trouble encrypt zpool](zfs-trouble-live-boot-solution) Mount root partition (for example an lvm) ~~~ -from live # mount /dev/volumegroup/logicalvolume /mnt +FromLive # mount /dev/volumegroup/logicalvolume /mnt ~~~ Mount boot partition (F.e. 
in drive sda) ~~~ -from live # mount /dev/sda2 /mnt/boot +FromLive # mount /dev/sda2 /mnt/boot ~~~ Mount the EFI System Partition (usually in /dev/sda1) ~~~ -from live # mount /dev/sda1 /mnt/boot/efi +FromLive # mount /dev/sda1 /mnt/boot/efi ~~~ ### Prepare chroot env @@ -75,31 +75,31 @@ from live # mount /dev/sda1 /mnt/boot/efi Mount the critical virtual filesystems with the following single command: ~~~ -from live # for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done +FromLive # for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done ~~~ Mount all zfs file system on rpool. ~~~ -from live # zfs mount -a +FromLive # zfs mount -a ~~~ Chroot to your normal (and broken) system device ~~~ -from live # chroot /mnt +FromLive # chroot /mnt ~~~ Then here mon bpool zfs vol. ~~~ -inside chroot # zfs mount -a +InsideChroot # zfs mount -a ~~~ Mount your EFI partition: ~~~ -inside chroot # mount -a +InsideChroot # mount -a ~~~ You should see : @@ -109,7 +109,7 @@ You should see : * Your efi partition. ~~~ -inside chroot # df -h +InsideChroot # df -h Sys. 
de fichiers Taille Utilisé Dispo Uti% Monté sur udev 16G 0 16G 0% /dev tmpfs 3,2G 1,8M 3,2G 1% /run @@ -137,13 +137,13 @@ rpool/ROOT/ubuntu_19k4ww/var/lib/dpkg 269G 41M 269G 1% /var/ ### Reinstall grub-efi ~~~ -inside chroot # apt-get install --reinstall grub-efi +InsideChroot # apt-get install --reinstall grub-efi ~~~ ~~~ -inside chroot # grub-install /dev/sda +InsideChroot # grub-install /dev/sda ~~~ ~~~ -inside chroot # update-grub +InsideChroot # update-grub ~~~ From a0299fd357ca92f249071bc8f1b3e0303116f2db Mon Sep 17 00:00:00 2001 From: Michel Le Cocq Date: Thu, 24 Sep 2020 17:06:36 +0200 Subject: [PATCH 08/13] quelques ajustements --- full-zfs-ecrypt-uefi-boot-trouble.md | 10 ++-------- 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/full-zfs-ecrypt-uefi-boot-trouble.md b/full-zfs-ecrypt-uefi-boot-trouble.md index 33366c6..d4752c4 100644 --- a/full-zfs-ecrypt-uefi-boot-trouble.md +++ b/full-zfs-ecrypt-uefi-boot-trouble.md @@ -36,12 +36,6 @@ FromLive $ sudo su FromLive # zpool import -f -R /mnt rpool ~~~ -~~~ -FromLive # zpool import bpool -~~~ - -Here system will tell you he can't mount it because /boot is in use. We don't wanted to mount it here, we will do it inside the chroot. - ~~~ FromLive # zfs mount rpool/ROOT/ubuntu_myid ~~~ @@ -90,10 +84,10 @@ Chroot to your normal (and broken) system device FromLive # chroot /mnt ~~~ -Then here mon bpool zfs vol. 
+Also import bpool, but do not mount it (*-N*):
 
 ~~~
-InsideChroot # zfs mount -a
+InsideChroot # zpool import -N bpool
 ~~~
 
 Mount your EFI partition:

From f01df69db7e69befbac442f377718b25f08fc325 Mon Sep 17 00:00:00 2001
From: Michel Le Cocq
Date: Thu, 24 Sep 2020 17:07:40 +0200
Subject: [PATCH 09/13] on ne parle plus que de ZFS

---
 full-zfs-ecrypt-uefi-boot-trouble.md | 24 ------------------------
 1 file changed, 24 deletions(-)

diff --git a/full-zfs-ecrypt-uefi-boot-trouble.md b/full-zfs-ecrypt-uefi-boot-trouble.md
index d4752c4..12fcc4f 100644
--- a/full-zfs-ecrypt-uefi-boot-trouble.md
+++ b/full-zfs-ecrypt-uefi-boot-trouble.md
@@ -44,26 +44,6 @@ I'm in the case where ny zpool is encrypt !
 
 see : [zfs trouble encrypt zpool](zfs-trouble-live-boot-solution)
 
-#### My system is not ZFS
-
-Mount root partition (for example an lvm)
-
-~~~
-FromLive # mount /dev/volumegroup/logicalvolume /mnt
-~~~
-
-Mount boot partition (F.e. in drive sda)
-
-~~~
-FromLive # mount /dev/sda2 /mnt/boot
-~~~
-
-Mount the EFI System Partition (usually in /dev/sda1)
-
-~~~
-FromLive # mount /dev/sda1 /mnt/boot/efi
-~~~
-
 ### Prepare chroot env
 
 Mount the critical virtual filesystems with the following single command:
@@ -134,10 +114,6 @@ rpool/ROOT/ubuntu_19k4ww/var/lib/dpkg 269G 41M 269G 1% /var/
 InsideChroot # apt-get install --reinstall grub-efi
 ~~~
 
-~~~
-InsideChroot # grub-install /dev/sda
-~~~
-
 ~~~
 InsideChroot # update-grub
 ~~~

From 31af11aafe78a1025c18c3092c50bf9987978aea Mon Sep 17 00:00:00 2001
From: Michel Le Cocq
Date: Fri, 11 Dec 2020 17:36:12 +0100
Subject: [PATCH 10/13] PF-and-Fail2ban-on-FreeBSD

---
 PF-and-Fail2ban-on-FreeBSD.md | 64 +++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)
 create mode 100644 PF-and-Fail2ban-on-FreeBSD.md

diff --git a/PF-and-Fail2ban-on-FreeBSD.md b/PF-and-Fail2ban-on-FreeBSD.md
new file mode 100644
index 0000000..5479d18
--- /dev/null
+++ b/PF-and-Fail2ban-on-FreeBSD.md
@@ -0,0 +1,64 @@
+
+
+With the firewall configured, it was time
to set up Fail2ban. It can be installed from pkg, along with pyinotify for kqueue support.
+
+~~~
+pkg install py37-fail2ban-0.11.1_2
+pkg install py37-pyinotify-0.9.6
+~~~
+
+The default configuration is in /usr/local/etc/fail2ban/jail.conf, and overrides should be put in jail.local. First I needed to tell Fail2ban to use PF.
+
+~~~
+[DEFAULT]
+banaction = pf
+~~~
+
+This refers to the file /usr/local/etc/fail2ban/action.d/pf.conf, which adds banned IP addresses to a PF table called fail2ban. This on its own doesn't do anything but register the address with PF, so I needed to add a rule to pf.conf to block the traffic.
+
+~~~
+table <fail2ban> persist
+block in quick from <fail2ban>
+~~~
+
+I added this rule directly below block in all so that it took precedence over my ICMP rules.
+
+Back to Fail2ban, I enabled the SSH jail, which watches for failed logins in /var/log/auth.log.
+
+~~~
+[sshd]
+enabled = true
+~~~
+
+Then I reloaded the PF configuration and started Fail2ban.
+
+~~~
+service pf reload
+echo 'fail2ban_enable="YES"' >> /etc/rc.conf
+service fail2ban start
+~~~
+
+To see it in action, I can tail the Fail2ban log, list the addresses in the fail2ban table, and inspect the statistics for my PF rules.
+ +~~~ +tail /var/log/fail2ban.log +pfctl -t fail2ban -T show +pfctl -v -s rules +~~~ + +My final jail.local looks like this: + +~~~ +[DEFAULT] +bantime = 86400 +findtime = 3600 +maxretry = 3 +banaction = pf + +[sshd] +enabled = true +~~~ + + +https://www.sqlpac.com/fr/documents/linux-ubuntu-fail2ban-installation-configuration-iptables.html +https://cmcenroe.me/2016/06/04/freebsd-pf-fail2ban.html From 1fdc14b6f7931dccf9c82726508b4786c7df66d5 Mon Sep 17 00:00:00 2001 From: Michel Le Cocq Date: Tue, 11 May 2021 10:30:02 +0200 Subject: [PATCH 11/13] add part disable it --- encrypt-swap-Ubuntu-20.04.md | 189 ++++++++++++++++++++--------------- 1 file changed, 108 insertions(+), 81 deletions(-) diff --git a/encrypt-swap-Ubuntu-20.04.md b/encrypt-swap-Ubuntu-20.04.md index afed352..04e22cb 100644 --- a/encrypt-swap-Ubuntu-20.04.md +++ b/encrypt-swap-Ubuntu-20.04.md @@ -1,81 +1,108 @@ -# encrypt swap Ubuntu 20.04 with hibernation - -## prerequisite - -* all command bellow are run has root -* install ecryptfs - -~~~{bat} -root@laptop:/root# install apt-get install ecryptfs-utils -~~~ - -## encrypt swap - -* turn off current swap - -~~~ -root@laptop:/root# swapoff -a -~~~ - -* encrypt swap partition - -~~~ -root@laptop:/root# cryptsetup luksFormat --cipher aes-xts-plain64 --verify-passphrase --key-size 256 /dev/nvme0n1p2 -root@laptop:/root# cryptsetup open /dev/nvme0n1p2 cryptswap -~~~ - -* set up the crypt partition as swap. - -~~~ -root@laptop:/root# mkswap /dev/mapper/cryptswap -~~~ - -* ajust **/etc/fstab** to use your mapper, replace your encrypt swap device like bellow : - -~~~ -/dev/mapper/cryptswap none swap discard 0 0 -~~~ - -* add your encrypt swap device define in **/etc/crypttab** - -~~~ -cryptswap /dev/nvme0n1p2 none luks -~~~ - -* enable swap - -~~~ -root@laptop:/root# swapon -a -~~~ - -* edit **/etc/initramfs-tools/conf.d/resume**. Replace the existing **RESUME** line with the following line. 
-
-~~~
-root@laptop:/root# printf "RESUME=/dev/mapper/cryptswap" | tee /etc/initramfs-tools/conf.d/resume
-~~~
-
-* Register these changes.
-
-~~~
-root@laptop:/root# update-initramfs -u -k all
-~~~
-
-
-* Change your /etc/default/grub GRUB_CMDLINE_LINUX_DEFAULT to point to remove or be sure there is nothing in resume
-
-GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
-
-~~~
-root@laptop:/root# update-grub
-~~~
-
-### to be solve
-
-~~~
-cryptsetup: ERROR: Couln't resolve device rpool/ROOT/ubuntu_...
-cryptsetup: WARNING: Couln't determine root device
-~~~
-
-## sources
-* [wiki.archlinux.org - dm-crypt/Swap encryption](https://wiki.archlinux.org/index.php/Dm-crypt/Swap_encryption#LVM_on_LUKS)
-* [help.ubuntu.com - Enable Hibernate With Encrypted Swap](https://help.ubuntu.com/community/EnableHibernateWithEncryptedSwap)
+# encrypt swap on Ubuntu with hibernation
+
+## prerequisite
+
+* all commands below are run as root
+* install ecryptfs-utils
+
+~~~{.shell}
+root@laptop:/root# apt-get install ecryptfs-utils
+~~~
+
+## encrypt swap
+
+* turn off the current swap
+
+~~~
+root@laptop:/root# swapoff -a
+~~~
+
+* encrypt the swap partition
+
+~~~
+root@laptop:/root# cryptsetup luksFormat --cipher aes-xts-plain64 --verify-passphrase --key-size 256 /dev/nvme0n1p2
+root@laptop:/root# cryptsetup open /dev/nvme0n1p2 cryptswap
+~~~
+
+* set up the encrypted partition as swap
+
+~~~
+root@laptop:/root# mkswap /dev/mapper/cryptswap
+~~~
+
+* adjust **/etc/fstab** to use your mapper; replace your swap device entry as below:
+
+~~~
+/dev/mapper/cryptswap none swap discard 0 0
+~~~
+
+* define your encrypted swap device in **/etc/crypttab**
+
+~~~
+cryptswap /dev/nvme0n1p2 none luks
+~~~
+
+* enable swap
+
+~~~
+root@laptop:/root# swapon -a
+~~~
+
+* edit **/etc/initramfs-tools/conf.d/resume**. Replace the existing **RESUME** line with the following line.
+
+~~~
+root@laptop:/root# printf "RESUME=/dev/mapper/cryptswap" | tee /etc/initramfs-tools/conf.d/resume
+~~~
+
+* Register these changes.
+
+~~~
+root@laptop:/root# update-initramfs -u -k all
+~~~
+
+
+* Edit /etc/default/grub and make sure GRUB_CMDLINE_LINUX_DEFAULT contains no resume entry, e.g.:
+
+GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
+
+~~~
+root@laptop:/root# update-grub
+~~~
+
+### disable encrypted swap
+
+~~~
+root@laptop:/root# swapoff -a
+root@laptop:/root# cryptsetup close cryptswap
+root@laptop:/root# mkswap /dev/nvme0n1p2
+root@laptop:/root# printf "RESUME=/dev/nvme0n1p2" | tee /etc/initramfs-tools/conf.d/resume
+~~~
+
+* adjust /etc/fstab to
+
+~~~
+/dev/nvme0n1p2 none swap discard 0 0
+#/dev/mapper/cryptswap none swap discard 0 0
+~~~
+
+* check
+
+~~~
+root@laptop:/root# swapon -a
+root@laptop:/root# swapon --summary
+Nom de fichier Type      Taille   Utilisé Priorité
+/dev/nvme0n1p2 partition 32653308 0       -2
+root@laptop:/root#
+~~~
+
+### to be solved
+
+~~~
+cryptsetup: ERROR: Couln't resolve device rpool/ROOT/ubuntu_...
+cryptsetup: WARNING: Couln't determine root device
+~~~
+
+
+## sources
+* [wiki.archlinux.org - dm-crypt/Swap encryption](https://wiki.archlinux.org/index.php/Dm-crypt/Swap_encryption#LVM_on_LUKS)
+* [help.ubuntu.com - Enable Hibernate With Encrypted Swap](https://help.ubuntu.com/community/EnableHibernateWithEncryptedSwap)

From 0fd99fa979d2e76d8345c7af09c1544dad45a3c9 Mon Sep 17 00:00:00 2001
From: Michel Le Cocq
Date: Tue, 11 May 2021 10:30:56 +0200
Subject: [PATCH 12/13] souci de niveau de titre

---
 encrypt-swap-Ubuntu-20.04.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/encrypt-swap-Ubuntu-20.04.md b/encrypt-swap-Ubuntu-20.04.md
index 04e22cb..fb001fb 100644
--- a/encrypt-swap-Ubuntu-20.04.md
+++ b/encrypt-swap-Ubuntu-20.04.md
@@ -69,7 +69,7 @@ GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
 root@laptop:/root# update-grub
 ~~~
 
-### disable encrypted swap
+## disable encrypted swap
 
 ~~~
 root@laptop:/root# swapoff -a

From bfc1d62c78e9855efdea00725f7e7a241d253696 Mon Sep 17 00:00:00 2001
From: Michel Le Cocq
Date: Tue, 11 May 2021 10:38:05 +0200
Subject: [PATCH 13/13] missing update-grub and initramfs

---
 encrypt-swap-Ubuntu-20.04.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/encrypt-swap-Ubuntu-20.04.md b/encrypt-swap-Ubuntu-20.04.md
index fb001fb..d5d6dbe 100644
--- a/encrypt-swap-Ubuntu-20.04.md
+++ b/encrypt-swap-Ubuntu-20.04.md
@@ -76,6 +76,9 @@ root@laptop:/root# swapoff -a
 root@laptop:/root# cryptsetup close cryptswap
 root@laptop:/root# mkswap /dev/nvme0n1p2
 root@laptop:/root# printf "RESUME=/dev/nvme0n1p2" | tee /etc/initramfs-tools/conf.d/resume
+root@laptop:/root# update-initramfs -u -k all
+root@laptop:/root# update-grub
+
 ~~~
 
 * adjust /etc/fstab to
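[Editor's note: after enabling or disabling the encrypted swap as in the patches above, it is worth verifying which device actually backs the swap. The sketch below is an editorial addition, not part of the patches: the helper name swap_is_encrypted is made up here, and it assumes the /proc/swaps format shown in the article's `swapon --summary` output (a header line, then one device per line with the device path in the first column, `/dev/mapper/*` when dm-crypt is in use).]

```shell
#!/bin/sh
# Check whether every active swap device is a dm-crypt mapping, by reading
# /proc/swaps-formatted input on stdin; exits 0 if all swaps use /dev/mapper/*.
swap_is_encrypted() {
  awk 'NR > 1 && $1 !~ /^\/dev\/mapper\// { plain = 1 }
       END { exit plain ? 1 : 0 }'
}

# Example against the device names used above (on a real system: < /proc/swaps):
printf '%s\n' 'Filename Type Size Used Priority' \
  '/dev/mapper/cryptswap partition 32653308 0 -2' \
  | swap_is_encrypted && echo "swap is encrypted"
```

If the plain partition (/dev/nvme0n1p2) is listed instead of the mapper device, the function exits non-zero, which matches the state after the "disable encrypted swap" steps.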