docs: add all untracked content

This commit is contained in:
2022-04-30 23:14:42 +03:00
parent c32feec22b
commit c83b847568
24 changed files with 1314 additions and 0 deletions

Automatic_Git_Commits.md Normal file

@@ -0,0 +1,40 @@
---
title: Automatic Git Commits
description: A simple set of scripts to enable a Git commit & push whenever entering a tracked directory
published: true
date: 2022-04-30T20:10:22.327Z
tags: git, maintenance
editor: markdown
dateCreated: 2021-09-02T08:23:55.501Z
---
# Automatic Git Commits
> This technique is being phased out from Takahe - I didn't like it for home use. However, it is useful and may prove useful again - hence this page.
{.is-warning}
Created for the non-Git-savvy folk at work, Takahe runs a pair of elegant Git auto-commit scripts - the `GitDaemon` and the `GitUpstreamer`. After some modification, below is an outline of how these scripts work.
The first foothold is under the shell's profile - in our case, each user's `~/.zshrc` file (you *are* using [ohmyzsh](https://ohmyz.sh/), right?). If you look at the Environment Variables section at the bottom, you'll find the following:
```
### Environment Variables ###
export SCRIPTS="/home/shmick/Scripts"
source "$SCRIPTS/GitDaemon"
```
The [Git Daemon](https://git.pukeko.xyz/shmick/scripts/src/branch/master/GitDaemon) script is a wrapper that latches onto the `cd` command - it first executes the normal `cd` and then checks whether the directory is under Git.
The [Git Upstreamer](https://git.pukeko.xyz/shmick/scripts/src/branch/master/GitUpstreamer) is what actually does the auto-commit - simply by querying the `git` command's output and committing to the set remote. This, of course, relies on a properly set-up remote.
You need to be aware of the following:
- The scripts only run in directories containing a file named `.autocommit`. Git is common, and I want to be in control of what goes upstream, to avoid awkward situations.
- The scripts add all files in the folder, and thus rely on a proper `.gitignore` file. If you have not explicitly ignored a file - they will commit it!
- The scripts' only toggle is via the shell profile. No shell profile - no scripts.
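The mechanics above can be sketched roughly like so - a hypothetical sketch only; the function names, commit message and exact behavior are my own, and the real scripts live in the repositories linked above:

```bash
# Hypothetical sketch of the GitDaemon/GitUpstreamer idea - NOT the real scripts.
git_autocommit() {
    # Only act in directories that opted in with a .autocommit marker.
    [ -f .autocommit ] || return 0
    # ...and only if we're actually inside a Git worktree.
    git rev-parse --is-inside-work-tree >/dev/null 2>&1 || return 0
    # Stage everything (this is why a proper .gitignore matters!).
    git add -A
    # Commit and push only if something actually changed.
    if ! git diff --cached --quiet; then
        git commit -m "autocommit: $(date -u '+%Y-%m-%d %H:%M:%S')"
        git push 2>/dev/null || echo "autocommit: push failed - check your remote"
    fi
}

# Latch onto cd: run the normal cd, then check the new directory.
cd() {
    builtin cd "$@" && git_autocommit
}
```

Sourced from `~/.zshrc`, every `cd` into an opted-in directory then triggers a commit & push.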
# Currently Under Automatic Git Commits
- [Shells](https://git.pukeko.xyz/shmick/shells)
- under `/home/` and `/etc/zsh`
- [Docker compose files](https://git.pukeko.xyz/shmick/docker)
- Excluding Grocy, the Minecraft server, Pi-Hole and Portainer
- [Linux Kernel configs](https://git.pukeko.xyz/shmick/kernel-confs) (This doesn't really matter - I put them there, but whatever)

Backups.md Normal file

@@ -0,0 +1,53 @@
---
title: Backups
description: What is backed up? Where is it backed up? Look here!
published: true
date: 2022-04-30T20:10:24.250Z
tags: maintenance, backup
editor: markdown
dateCreated: 2021-09-02T08:29:45.542Z
---
I use several methods of backup to keep important data stowed away. Below is an outline of what is backed up and how.
> Local Backups are transitioning to Seafile - in progress!
{.is-info}
# Cloud Backups
The first and most reliable form of backup is a [Restic](https://restic.net/) repository stored on [Backblaze B2](https://secure.backblaze.com/b2_buckets.htm).
Three things are currently backed up with B2:
1. Photos (Path: `/Red-Vol/Media/Pictures`, Bucket: `Gerbil-TK`)
2. Containers (Path: `/Red-Vol/Media/Containers`, Bucket: `Gerbil-TK-Containers`)
3. Cloud Data gathered using Syncthing (Path: `/Red-Vol/Media/Cloud/Syncthing/`, Bucket: `Pukeko-XYZ-Cloud`)
Backups run on a cron job which executes the following commands:
- `restic -r b2:Gerbil-TK backup /Red-Vol/Media/Pictures/ --exclude='*סתם*' --exclude='2021' --exclude='Photos from *'` for photos.
- `restic -r b2:Gerbil-TK-Containers backup --exclude="photoprism" --exclude="transmission" --exclude="jekyll" --exclude="minecraftserver" /Red-Vol/Media/Containers` for Docker containers.
- `restic -r b2:Pukeko-XYZ-Cloud backup /Red-Vol/Media/Cloud/Syncthing/`
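In sketch form, the cron side might look like this - the schedule, log path and sourcing of the `zprofile` are my assumptions, not the actual Takahe crontab:

```conf
# Hypothetical root crontab sketch - one line per bucket
# m h dom mon dow  command
30 3 * * * . "$HOME/.zprofile" && restic -r b2:Pukeko-XYZ-Cloud backup /Red-Vol/Media/Cloud/Syncthing/ >> /var/log/restic-cloud.log 2>&1
```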
## Important Points
- Photoprism pictures are not backed up as their source directories are already backed up.
- `/home` is backed up under the Cloud backup.
- Anything crucial is expected to be backed up via Syncthing, and from there to B2.
- Containers back up their data, which by convention is always contained in their directory. These directories also serve as another copy of the [Docker Git repository](https://git.gerbil.tk/shmick/docker).
## B2 Cloud Points
- When first creating a new bucket, it needs to be initialized with Restic by running `restic init`.
> Example:
`restic -r b2:Pukeko-XYZ-Cloud init`
{.info}
> The password is read from the `RESTIC_PASSWORD` environment variable set in the `zprofile`. It is also stored in the [BitWarden password manager](https://vault.pukeko.xyz).
{.is-warning}
- Bucket names are globally unique and cannot be changed once created (as far as I'm aware).
- When creating a new bucket, set it to be as basic as possible - no object locks, no encryption, no snapshots - Restic handles all of these things.
# Local Backups
Local backups are done using [Syncthing](https://sync.pukeko.xyz), a decentralized sync engine. Syncthing provides encrypted, delta-aware backups and is very granular - what is uploaded, and where, is set by each client. Sync can also be configured as single-direction (send-only or receive-only).
By convention, every backup is placed under `/Red-Vol/Media/Cloud/Syncthing/[user]`.

Compiling_on_RAM.md Normal file

@@ -0,0 +1,26 @@
---
title: Compiling on RAM
description: Short guide on mounting a tmpfs cache and using it as a compile environment - a great speed gain at the expense of RAM.
published: true
date: 2022-04-30T20:10:26.153Z
tags: gentoo, performance
editor: markdown
dateCreated: 2021-09-02T07:39:15.029Z
---
# Compiling on RAM using `tmpfs`
On compilation-heavy systems (erhm, Gentoo), everyday compiling can be drastically sped up by building on `tmpfs`, which is basically a filesystem in RAM. Since RAM is dozens of times faster than even the fastest consumer SSDs, this greatly improves compile times, at the cost of memory.
See these pages on enabling compile on `tmpfs` for [Gentoo](https://wiki.gentoo.org/wiki/Portage_TMPDIR_on_tmpfs) and [Arch Linux](https://wiki.archlinux.org/title/Makepkg#Improving_compile_times). Arch Linux tweaks *probably* work for Gentoo as well, and perhaps FreeBSD - needs to be tested.
## Possible issue - not enough space
> Note: this shouldn't really happen anymore on Takahe as I've excluded core scoundrels as per the wiki page.
{.is-warning}
Since `tmpfs` sits in RAM, large packages may fail to compile (or refuse to even try). While the Gentoo wiki [describes a Gentoo-specific solution](https://wiki.gentoo.org/wiki/Portage_TMPDIR_on_tmpfs#Per-package_choices_at_compile_time) (which did not work for me), there are two easier, universal fixes:
- Temporarily increase the tmpfs size by running `mount -o remount,size=[new size-in-G] /path/to/tmpfs/mount`
- Temporarily unmount the tmpfs by running `umount /path/to/tmpfs/mount`, obviously losing the performance benefits.
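For reference, the mount itself is a one-line `/etc/fstab` entry - a sketch: the 16G size is my pick, and the path matches Portage's default `PORTAGE_TMPDIR`:

```conf
tmpfs  /var/tmp/portage  tmpfs  size=16G,uid=portage,gid=portage,mode=775,nosuid,noatime  0 0
```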
## ZRAM
Compiling on tmpfs (and in general) eats up large amounts of memory, and Linux (and maybe the BSDs - haven't gotten there) does not play nice when memory runs low. Setting up `ZRAM` ([Arch Linux and the rest of them](https://wiki.archlinux.org/title/Improving_performance#Zram_or_zswap), [Gentoo](https://wiki.gentoo.org/wiki/Zram)) is therefore a great match for compiling on `tmpfs`.
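As one possible sketch (an assumption on my part - this is the systemd `zram-generator` route; the wiki pages above cover other methods, including for OpenRC):

```conf
# Hypothetical /etc/systemd/zram-generator.conf
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
```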

FortiVPN.md Normal file

@@ -0,0 +1,31 @@
---
title: Forticlient VPN
description: Guide on setting up a headless FortiVPN client
published: true
date: 2022-04-30T20:10:28.072Z
tags: forti, vpn
editor: markdown
dateCreated: 2021-09-02T08:14:33.492Z
---
# Forticlient VPN
Connecting to Forti VPNs on Linux is apparently quite easy thanks to [openfortivpn](https://github.com/adrienverge/openfortivpn).
To install, run `emerge net-vpn/openfortivpn` on Gentoo, and follow the docs for everything else.
## Set up a config:
Create `/etc/openfortivpn/[config]` and fill it according to this pattern:
```
host = 185.163.148.98
port = 10443
username = matanh
```
Replace whatever you need, obviously - that one's out of date.
## Accept certificate
Now, try connecting with `openfortivpn -c /etc/openfortivpn/[config]`, which will complain about a missing trusted cert and spit out the line to add to the config, e.g.:
```
trusted-cert = 32ad6666ce6ee21f9353ca24392fa3a5db179b00540616965aa8767cd8a2583c
```
Add this to the config and log on again. Voilà!
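To keep the tunnel up headlessly, a systemd unit along these lines could work - a sketch only; openfortivpn may ship its own unit template, so check your install first, and the paths are my assumptions:

```conf
# Hypothetical /etc/systemd/system/fortivpn.service
[Unit]
Description=OpenFortiVPN tunnel
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/openfortivpn -c /etc/openfortivpn/config
Restart=on-failure

[Install]
WantedBy=multi-user.target
```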

FreeBSD_VM.md Normal file

@@ -0,0 +1,43 @@
---
title: FreeBSD Virtual Machine
description: Tips and tricks for running a FreeBSD guest under KVM
published: true
date: 2022-04-30T20:10:29.996Z
tags: kvm, virt-manager, freebsd, virtualization
editor: markdown
dateCreated: 2022-04-30T19:57:51.313Z
---
Linux's KVM (Kernel Virtual Machine) is great.
The FreeBSD operating system is also great.
You know what's great? Running FreeBSD as a KVM guest!
Here are some insights into the process.
# VM image or .ISO installation?
FreeBSD offers both premade virtual machine disks (in particular, we care about the `.qcow2` image) and installation ISOs - a netinstall, a minimal install and a full installation.
Both have their uses:
The VM image is:
- Far quicker to set up - it expands to a disk and is ready to go.
- Kind of bothersome to expand - it's about 5GB and `virt-manager` does not offer a nice GUI for expanding the disk. Not impossible, but not friendly. (see [resizing qcow2 article](https://linuxconfig.org/how-to-resize-a-qcow2-disk-image-on-linux) - maybe GParted can do this?)
- Uses UFS - great for simple stuff without the overhead of ZFS, not great if you want the bling and oomph of ZFS
The ISOs are:
- More similar to a physical install. If you need to do this for bare metal at some point, it's pretty much the same.
- Much more configurable - you can set the filesystem, swap, encryption, and hardening in the nice `bsdinstall` TUI instead of manually later
- Slower to set up, obviously
- If using network setup (you should), you need to be absolutely certain the VM has network access (see [Setting up a network bridge](/KVM_Network_Bridge)) or you're going to waste a lot of time.
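For the disk-expansion pain point above, the CLI route is short - a sketch where the image filename, device name and partition index are assumptions (check `gpart show` in the guest):

```bash
# On the host: grow the qcow2 image (with the VM powered off)
qemu-img resize FreeBSD-13.0-RELEASE-amd64.qcow2 +20G

# In the guest afterwards: grow the partition, then the UFS filesystem
gpart recover vtbd0       # relocate the GPT backup header after the disk grew
gpart resize -i 4 vtbd0   # index of the freebsd-ufs partition - an assumption
growfs /                  # UFS can be grown live
```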
# Post install - system hangs at boot time
So apparently, if you select the FreeBSD profile in virt-manager when installing the VM, KVM adds a serial device - which is both useless and throws FreeBSD into a loop. This won't hamper the installation - but you won't get to the login prompt afterwards.
If everything looks OK but you're not getting to the login screen, remove the serial device from the VM's hardware.
> Source: [FreeBSD forums thread](https://forums.freebsd.org/threads/freebsd13-wont-boot-in-qemu-kvm-host.80818/)
{.is-info}

Gentoo_Tips_Tricks.md Normal file

@@ -0,0 +1,52 @@
---
title: Gentoo Tips, Tricks & Shenanigans
description: These are Gentoo revelations which surprised me at some point, along with nice-to-haves.
published: true
date: 2022-04-30T20:10:31.949Z
tags: gentoo, performance
editor: markdown
dateCreated: 2021-09-02T08:10:07.601Z
---
Below are things I found out about Gentoo.
# Moving a Package from Testing to Stable
Gentoo's testing branch (`~arch`) is actually quite stable. However, in case you'd like to keep something on stable for whatever reason (I encountered this with the kernel, trying to Clang it), you can mask the `~amd64` keyword under `/etc/portage/package.accept_keywords/your_package`. Example:
```bash
/etc/portage/package.accept_keywords/gentoo-sources:
sys-kernel/gentoo-sources -~amd64
```
That is, if your `/etc/portage/make.conf` contains ~amd64, you can un-`~amd64`-it this way. Noice!
# Clang
Clang is an LLVM-based compiler and an alternative to GCC. Clang has recently [surpassed GCC](https://www.phoronix.com/scan.php?page=article&item=clang12-gcc11-icelake&num=6) performance-wise, as well as being easier to set up for goodies like LTO.
[Gentoo's wiki page](https://wiki.gentoo.org/wiki/Clang) lays out how to use Clang as a system compiler instead of GCC - but note the few caveats below:
> READ THE INSTRUCTIONS CAREFULLY! the order is crucial and is not chronological!
{.is-danger}
- All DEs tested (KDE5, GNOME3) require `samba`, and `samba` requires GCC's own `libunwind`. Hence, when following the instructions above, do not use `llvm-libunwind` (unfortunately). This is only relevant for GUI installations - servers are fine.
- Clone the [Portage Git repository](http://git.pukeko.xyz/shmick/portage.git) for necessary overrides back to GCC (such as Python3, GNU libc).
- Use `-flto=thin`. It's easy and it works great.
- > Use `LDFLAGS` per compiler - when adding Clang's `LDFLAGS` to the global `make.conf`, GCC also reads them (and fails!).
{.is-warning}
- Use ninja for faster compile times.
- If following [Compiling on the RAM](/Compiling_on_RAM): when building Clang itself, unmount `/var/tmp/portage` - it doesn't play nicely.
Happy compiling!
# Virtualizing with UEFI on QEMU
When trying to create a UEFI VM on `virt-manager`, some settings will lead to a blank screen. In order to use UEFI:
- emerge the UEFI firmware - `sys-firmware/edk2-ovmf`
- restart libvirtd if running
- when creating a VM, select the Q35 chipset (440FX does not work), and use the `secure_code` option - `UEFI x86_64: /usr/share/qemu/edk2-x86_64-secure-code.fd`.
> Reference: https://forums.gentoo.org/viewtopic-t-1066336-start-0.html
{.is-info}
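In domain-XML terms, the result looks roughly like this `<os>` fragment - a sketch; the machine type and loader path follow the notes above, but verify against your generated XML:

```xml
<os>
  <type arch="x86_64" machine="q35">hvm</type>
  <loader readonly="yes" type="pflash">/usr/share/qemu/edk2-x86_64-secure-code.fd</loader>
</os>
```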
# The `Makefile` is just a file!
You can read it! You can tweak it! Yay!
> Thanks, [Gentoo-folk](https://forums.gentoo.org/viewtopic-t-1140180.html)!

Gentooisms.md Normal file

@@ -0,0 +1,191 @@
---
title: Hall of Gentooisms
description: Anything related to wrangling Gentoo, solving errors in Gentoo, making Gentoo faster thus creating errors in Gentoo, goes here. Gentoo.
published: true
date: 2022-04-30T20:10:34.075Z
tags: gentoo
editor: markdown
dateCreated: 2021-08-21T18:00:45.725Z
---
# Basics
Everybody starts somewhere.
Here are the *absolute* essentials when it comes to Gentoo-wrangling.
## The `make.conf`
Gentoo's key settings live in the `/etc/portage/make.conf` file - likely the first file you'll modify. Some important sections are:
- `USE=` - Global USE flags for the system (see below).
- `CFLAGS` - these are the options to pass to the compiler. Unless you like suffering, these should be `-O2 -pipe -march=native`.
- `MAKEOPTS` - things to pass to the `make` command - pass `-j$(nproc)` to give compiling your entire system's **oomph**.
> You can also override to less oomph, or start with some oomph and give specific packages all the oomph via Portage - see the environment section below.
{.is-info}
- `ACCEPT_KEYWORDS` - Your branch (see below).
- `CMAKE_MAKEFILE_GENERATOR` - what program to use for make. Default is `emake`, but you should probably use `ninja` for faster builds.
- `CPU_FLAGS_X86` - just run `cpuid2cpuflags | sed -e 's/CPU_FLAGS_X86://g'` and put it there. Do not question it.
> You'll likely need to install that command using `emerge cpuid2cpuflags`.
{.is-info}
> See my (hopefully current) `make.conf` [here](https://git.pukeko.xyz/shmick/portage/src/branch/master/make.conf).
{.is-info}
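Tying the variables above together, a minimal `make.conf` might look like this - a sketch with example values only (my real one is linked above):

```conf
# Hypothetical make.conf sketch - values are examples, not Takahe's config
COMMON_FLAGS="-O2 -pipe -march=native"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
MAKEOPTS="-j16"                       # match your core count ($(nproc))
ACCEPT_KEYWORDS="~amd64"
CMAKE_MAKEFILE_GENERATOR="ninja"
USE="zstd -systemd"                   # example global USE flags
CPU_FLAGS_X86="aes avx avx2 sse4_2"   # paste your cpuid2cpuflags output here
```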
## The Portage
It is big, it is gnarly, and I'm going to pretend not to see it is also Python. What should you know about Portage, the legendary Gentoo package manager?
Since it's inspired by BSD's ```ports```, each package's full name is a category followed by the actual package name (ex: ```sys-devel/llvm```).
Under Portage's configuration directory ```/etc/portage```, there's a bunch of things you can mess with:
* Each package's keywords (like switching it around between branches)
* Each package's USE flags (things to add or remove from it, see [here](https://www.gentoo.org/support/use-flags/), and search specific package's USE flags [here](https://packages.gentoo.org/))
* Compiler environments and pointing packages to them
Generally, the structure is ```/etc/portage/package.something/```, under which you put a file named after your package; inside the file, you specify the full package name followed by whatever it is you're setting:
```bash
/etc/portage/package.accept_keywords/gentoo-sources:
sys-kernel/gentoo-sources -amd64 ~amd64
# ^ Package name            ^ Keywords
```
When it comes to more general things like environments, you create a file under ```/etc/portage/env``` and name it whatever, like the GCC override for my Clang system:
```bash
/etc/portage/env/compiler-gcc-lto:
CC="gcc"
CXX="g++"
CFLAGS="-flto=16 -march=native -O2 -pipe"
CXXFLAGS="${CFLAGS}"
LDFLAGS="-O2 -Wl,--as-needed"
AR="gcc-ar"
NM="gcc-nm"
RANLIB="gcc-ranlib"
```
and tell packages to use it by writing their full name followed by the config name in ```/etc/portage/package.env```:
```bash
/etc/portage/package.env:
sys-libs/glibc compiler-gcc-lto
# ^ Package name #^ Config under /etc/portage/env
```
> You can see my current ```/etc/portage``` in the [git repository](https://git.pukeko.xyz/shmick/portage), and a bunch of other useful things in the [Gentoo Wiki](https://wiki.gentoo.org/wiki/Portage) page about Portage.
{.is-info}
## The Branches
Gentoo has two branches of relevance to me - the ```arch``` and the ```~arch```, which are the stable and testing branches, respectively.
> Instead of ```arch```, you put in your processor architecture! ```amd64``` gang!
{.is-info}
> It is really, REALLY friggin' hard to change branches as I've smugly predicted I could. Choose your branch carefully!
{.is-warning}
For perspective, ```arch``` is roughly on par with what you'll see in Ubuntu LTS, while ```~arch``` is what you'd see on... Arch!
> ```~arch``` is really funny to me. Like,
Q: Hey! what processor are you?!
A: Um, AMD64, kinda?
Nice.
{.is-success}
### Setting the branches
Like many Gentoo things, you set your branches in ```/etc/portage/make.conf```. Setting the branch to Testing would be like so:
```ACCEPT_KEYWORDS="~amd64"```
Hilarious!
## The Kernel
> In Gentoo, something-sources is the kernel. We are too artsy-fartsy for just ```linux``` (Arch!)
{.is-info}
### Picking a kernel
There are three kernel types that may be of any interest to me (with my opinion, which is useless and irrelevant):
- The latest and greatest ```gentoo-sources``` (you have been bad and must be punished)
- The Long Term Support release of ```gentoo-sources``` (you are a reasonable person)
- The plain-ol' ```vanilla-sources``` (you are a purist, a snob and a scoundrel)
Under ```arch```, you get the LTS, since you're a reasonable person.
Under ```~arch```, you get the latest and greatest, because you hate it when things are working.
> This is not true; ```~arch``` is working great, but how can you flex otherwise?
{.is-warning}
```vanilla-sources``` acts the same, but with the upstream kernel instead. Gentoo does very little patching, making this a prime example of nitpicking.
To use an LTS kernel on a Testing system (smort!), you can change the keywords for the something-sources you're using like so:
```bash
sys-kernel/gentoo-sources -~amd64 amd64
# ^ NO testing
```
And to use the latest kernel on a stable system (why?) you'd do like so:
```bash
sys-kernel/gentoo-sources -amd64 ~amd64
# ^ NO stable
```
You get the gist.
### Setting the kernel
> While easy to forget, not setting the kernel often leads to boot and `initramfs` generation errors. Follow attentively!
{.is-warning}
After ```emerge```-ing whatever kernel you use, run ```eselect kernel list```, which will show something like this:
```zsh
takahe :: ~ takahe-root* # eselect kernel list
Available kernel symlink targets:
[1] linux-5.13.12
[2] linux-5.13.12-gentoo *
```
Select the kernel you desire by running `eselect kernel set` followed by the number of said kernel.
Behind the scenes, this symlinks the chosen kernel sources to `/usr/src/linux`.
### Configuring the kernel
Whatever it is you do, do it in `/usr/src/linux`.
Make sure it is symlinked by running `ls -l /usr/src/linux` - it should show where the real path is.
#### option 1: I have time and like wasting compile power
Run ```genkernel all``` and follow whatever it yells at you. See [here](https://wiki.gentoo.org/wiki/Handbook:AMD64/Installation/Kernel#Alternative:_Using_genkernel).
#### option 2: Use whatever's running already
Run `zcat /proc/config.gz > .config` - Gentoo (and every other distribution I've seen) exposes the running kernel's config this way, but it is not universal.
You can double check this method by running `make menuconfig` and checking your option (or just reading the `.config`, you chad).
#### option 3: Use my AWESOME kernel configs (if you have my system, lol)
You can also clone the kernel config from this [Git repository](https://git.pukeko.xyz/shmick/kernel-confs) - obviously adapted to *my* system.
> Note: Compiling with Clang is a bit messier - see Clang page.
{.is-info}
### Compiling and installing
To start compiling, run:
```zsh
make -j$(nproc) && make modules_install && make install
```
This outputs some very cool text flying on the terminal, which you could fail to impress people with.
Once the compilation is finished, you should see three files under `/boot`:
- a `vmlinuz` whatever - this is the actual kernel
- an `initramfs` whatever - this is the inital RAM filesystem image
- a `System.map` whatever - the kernel's symbol table (symbol names to addresses)
Example proper output:
```zsh
ls /boot:
initramfs-5.13.12-gentoo-x86_64.img
System.map-5.13.12-gentoo-x86_64
vmlinuz-5.13.12-gentoo-x86_64
```
Vanilla kernels will not have the `gentoo-x86_64` bit.
> Some systems *can* boot without an `initramfs` - I have yet to achieve this level of awesomeness
{.is-info}
For some reason, *my* system usually fails to copy these files over. After compiling, you can copy them manually to the boot directory:
```zsh
cp arch/x86_64/boot/bzImage /boot/vmlinuz-5.13.12-gentoo-x86_64
#^ This is the kernel
cp System.map /boot/System.map-5.13.12-gentoo-x86_64
```
If the `initramfs` is missing, you can generate it with `genkernel` or with `dracut` - see [here](https://wiki.gentoo.org/wiki/Initramfs/Guide#Creating_an_initramfs).
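For instance, a dracut invocation for the kernel built above might look like this - a sketch; dracut must be emerged, and the version string must match your actual build:

```bash
# Hypothetical - regenerate the initramfs for a specific kernel version
dracut --force --kver 5.13.12-gentoo-x86_64 /boot/initramfs-5.13.12-gentoo-x86_64.img
```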

KVM_Network_Bridge.md Normal file

@@ -0,0 +1,83 @@
---
title: Network Bridge
description: Guide to create a network bridge for KVM guests
published: true
date: 2022-04-30T20:10:36.036Z
tags: network, nmcli, networkmanager, kvm, virt-manager
editor: markdown
dateCreated: 2022-04-29T14:05:54.201Z
---
The best way to run virtual machines under Linux is the *virt-manager* UI for `qemu`.
Virt-manager unfortunately insists on creating a NAT network by default - which takes over DNS port 53 and throws the VM onto a different subnet.
To rectify this, the best solution is to create a network bridge.
There are several ways to go about it, chiefly with the `ip` command, the `bridge-utils` package, or NetworkManager with `nmcli`.
Since `bridge-utils` is a separate, older utility and the `ip` procedure tends to hamstring the internet connection (probably solvable, but still), I used `nmcli`.
> Arch wiki has a [great page](https://wiki.archlinux.org/title/Network_bridge) on bridge networking.
{.is-info}
# Create the bridge
In essence:
1. Create the bridge:
`nmcli connection add type bridge ifname [bridge name] stp no`
2. Add your main internet interface as a slave:
`nmcli connection add type bridge-slave ifname [internet interface name] master [bridge name]`
3. Bring your internet interface down:
`nmcli connection down [internet interface name]`
4. Bring the bridge up:
`nmcli connection up bridge-[bridge-name]`
(note how `nmcli` appends `bridge-` before the bridge name).
5. Assign an IP address to the bridge using your favourite tool (`ip`, `nmcli`, `nmtui` and the GNOME GUI all work well).
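Step 5 with `nmcli`, for example - a sketch assuming a bridge created as `br0` and example addresses on a home subnet:

```bash
# Hypothetical static-IP assignment; note the bridge- prefix nmcli added
nmcli connection modify bridge-br0 \
    ipv4.method manual \
    ipv4.addresses 192.168.1.50/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns 192.168.1.1
nmcli connection up bridge-br0
```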
# Register the bridge with Virt-Manager
For Virt-Manager to see your bridge, you must add it as an XML network definition:
1. From the main window, select `Edit` -> `Preferences` -> `Enable XML editing`.
2. Hover over a VM -> `Edit` -> `Connection Details` -> `+` -> `XML`
3. Paste the following:
```xml
<network>
<name>bridged-network</name>
<forward mode="bridge" />
<bridge name="[bridge-name]" />
</network>
```
Note that here you use the bare bridge name, without the `bridge-` prefix.
4. You can now use the bridge for virtual machines. Success!
# Enable internet access for bridged guests
If all you want is a VM on your subnet and guest-to-host access, you're all done.
> Source: [LinuxConfig.org](https://linuxconfig.org/how-to-use-bridged-networking-with-libvirt-and-kvm)
{.is-info}
However, if you want the guest to have internet access, you must enable it via `sysctl` by disabling the Bridge Netfilter:
1. Create `/etc/sysctl.d/99-netfilter-bridge.conf` with the following:
```conf
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
```
2. Load the `br_netfilter` module with `modprobe br_netfilter`
3. Enable the module at boot time by creating `/etc/modules-load.d/br_netfilter.conf` containing `br_netfilter`.
4. Load the new settings into `sysctl` with `sysctl -p /etc/sysctl.d/99-netfilter-bridge.conf`
Enjoy your networked KVM guests, yo.

Mail.md Normal file

@@ -0,0 +1,42 @@
---
title: Mail
description: Overview of mail settings, and sending system mail.
published: true
date: 2022-04-30T20:10:37.995Z
tags: config, mail
editor: markdown
dateCreated: 2021-08-27T14:00:37.062Z
---
# Mail
A bunch of stuff still uses mail, unfortunately.
Below are the deets on how to point containers to the SMTP mailbox I set up, and how to send system mail.
## Container mail via SMTP
Use the following credentials:
```yml
host: smtp.mail.yahoo.com
port: 587
username: "pukekoxyz"
password: "pvefngmuhcxunzqs"
sender: pukekoxyz@yahoo.com
```
`takahe@pukeko.xyz` may also work as I've set up a relay - not tested.
`matan@pukeko.xyz` points to my ProtonMail address - similarly untested.
## System mail via `sendmail`
Gentoo mailing is... weird. Postfix works as is (see [here](https://wiki.gentoo.org/wiki/Postfix)), but `mailx` appears not to, sticking us with `sendmail`.
To send an acceptable (not pretty - acceptable) mail, run:
`echo "Subject: [subject]" | sendmail -F "Takahe" -f "takahe@pukeko.xyz" -t matanhorovitz@protonmail.com <<< $([message-body])`
> Example: Tacocat!
`echo "Subject: tacocat" | sendmail -F "Takahe" -f "takahe@pukeko.xyz" -t matanhorovitz@protonmail.com <<< $(echo tacocat)`
{.is-info}
So long as it works, right?
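The one-liner above can be wrapped into a small function - a sketch; the `notify` name and argument layout are mine, not part of any existing script:

```bash
# Hypothetical wrapper around the sendmail invocation above.
notify() {
    subject="$1"
    body="$2"
    # Header block, blank line, then the body - the minimal acceptable mail.
    printf 'Subject: %s\n\n%s\n' "$subject" "$body" |
        sendmail -F "Takahe" -f "takahe@pukeko.xyz" -t matanhorovitz@protonmail.com
}
```

`notify "tacocat" "tacocat"` then reproduces the example above.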
## Nextcloud mail
Nextcloud requires STARTTLS on port 25 - see this example:
![nextcloud_mail.png](/nextcloud_mail.png)

Replacing_Failed_Disk.md Normal file

@@ -0,0 +1,196 @@
---
title: Replacing Failed Disk
description: Guide on removing an old yeller from a BtrFS RAID 1 array (for a new yeller)
published: true
date: 2022-04-30T20:10:40.002Z
tags: btrfs, storage, nas, filesystem
editor: markdown
dateCreated: 2022-04-04T16:25:48.663Z
---
One of the old 3TB yellers has started playing dirty.
We do not negotiate with terrorists - a pair of 8TBs were called in as reinforcements that very same day.
Below, I will write this page as I replace the failing disk, followed by the non-failing disk, in the BtrFS RAID1 array on Takahe.
If all goes well, this will be a nice, cozy page. If I cause catastrophic data loss (again), this shall stand as a monument to my failure.
> Do **NOT** use this method to replace a filesystem with errors! it ***will*** copy them over and they ***will*** be unrecoverable!
{.is-danger}
# Crossing Disk Serial with Device Name
Ever so pretentious, `smartd` names a disk by its serial - see the example below:
```zsh
➜ ~ systemctl status smartd
● smartd.service - Self Monitoring and Reporting Technology (SMART) Daemon
Loaded: loaded (/usr/lib/systemd/system/smartd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-04-04 08:01:55 IDT; 11h ago
Docs: man:smartd(8)
man:smartd.conf(5)
Main PID: 1014 (smartd)
Status: "Next check of 2 devices will start at 19:31:55"
Tasks: 1 (limit: 4915)
CPU: 85ms
CGroup: /system.slice/smartd.service
└─1014 /usr/sbin/smartd -n
Apr 04 17:01:55 Takahe smartd[1014]: Device: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7UEPSDY [SAT], 2 Currently unreadable (pending) sectors
Apr 04 17:01:55 Takahe smartd[1014]: Device: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7UEPSDY [SAT], 4 Offline uncorrectable sectors
Apr 04 17:31:55 Takahe smartd[1014]: Device: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7UEPSDY [SAT], 2 Currently unreadable (pending) sectors
Apr 04 17:31:55 Takahe smartd[1014]: Device: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7UEPSDY [SAT], 4 Offline uncorrectable sectors
Apr 04 18:01:56 Takahe smartd[1014]: Device: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7UEPSDY [SAT], 2 Currently unreadable (pending) sectors
Apr 04 18:01:56 Takahe smartd[1014]: Device: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7UEPSDY [SAT], 4 Offline uncorrectable sectors
Apr 04 18:31:55 Takahe smartd[1014]: Device: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7UEPSDY [SAT], 2 Currently unreadable (pending) sectors
Apr 04 18:31:55 Takahe smartd[1014]: Device: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7UEPSDY [SAT], 4 Offline uncorrectable sectors
Apr 04 19:01:55 Takahe smartd[1014]: Device: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7UEPSDY [SAT], 2 Currently unreadable (pending) sectors
Apr 04 19:01:55 Takahe smartd[1014]: Device: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7UEPSDY [SAT], 4 Offline uncorrectable sectors
```
That's wonderful, honey.
But who is `/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7UEPSDY`?
`btrfs` sure as hell doesn't know:
```zsh
➜ ~ btrfs filesystem show /Red-Vol
Label: none uuid: c2d98db0-b903-4cc2-947c-4c4c944da026
Total devices 2 FS bytes used 2.21TiB
devid 1 size 2.73TiB used 2.21TiB path /dev/sdc
devid 2 size 2.73TiB used 2.21TiB path /dev/sdb
```
`udevadm` to the rescue! I even looped it nicely for ya :)
```zsh
➜ ~ for disk in $(btrfs filesystem show /Red-Vol/ | awk '{print $NF}' | grep "/dev"); do echo $disk && udevadm info --query=all --name=$disk | grep ID_SERIAL; done
/dev/sdc
E: ID_SERIAL=WDC_WD30EFRX-68EUZN0_WD-WCC4N3YN0903
E: ID_SERIAL_SHORT=WD-WCC4N3YN0903
/dev/sdb
E: ID_SERIAL=WDC_WD30EFRX-68EUZN0_WD-WCC4N7UEPSDY
E: ID_SERIAL_SHORT=WD-WCC4N7UEPSDY
```
A-ha! `/dev/sdb`, you bastard!
# Crossing Device Name With devid (pointless)
But wait, there's more!
The `btrfs replace` command expects the `devid` (or the device name, which we already know, making this section utterly insignificant - but what the heck).
To find it, check `btrfs filesystem show [mountpoint]`:
```zsh
➜ ~ btrfs filesystem show /Red-Vol/
Label: none uuid: c2d98db0-b903-4cc2-947c-4c4c944da026
Total devices 3 FS bytes used 2.21TiB
devid 1 size 2.73TiB used 2.21TiB path /dev/sdc
devid 2 size 2.73TiB used 2.21TiB path /dev/sdb
```
A-ha! `devid 2`, you bastard!
# Replacing The Bastard
Now, run `btrfs replace`:
`➜ ~ btrfs replace start 2 /dev/sda /Red-Vol/ -f`
> The `-f` was thrown in because I have chosen to format the new disk with BtrFS beforehand. I have chosen to format the new disk with Btrfs beforehand because I am very stupid.
{.is-info}
Now, all that is left is watching in panic:
```zsh
➜ ~ btrfs replace status /Red-Vol
1.4% done, 0 write errs, 0 uncorr. read errs
```
Will it work? Will it destroy ALL my data?
We shall see.
# Resizing The Bastards
Success! Now, assuming you're replacing with larger disks (go big or go home, shmub), you will have to resize the filesystem onto them.
First, see your `devid`s with `btrfs filesystem show`:
```
➜ ~ btrfs filesystem show /Red-Vol/
Label: none uuid: c2d98db0-b903-4cc2-947c-4c4c944da026
Total devices 2 FS bytes used 2.21TiB
devid 1 size 7.28TiB used 2.21TiB path /dev/sdb
devid 2 size 2.73TiB used 2.21TiB path /dev/sda
```
Now, run `btrfs filesystem resize [devid]:max [mountpoint]`:
```
➜ ~ btrfs filesystem resize 1:max /Red-Vol
Resize device id 1 (/dev/sdb) from 7.28TiB to max
➜ ~ btrfs filesystem show /Red-Vol/
Label: none uuid: c2d98db0-b903-4cc2-947c-4c4c944da026
Total devices 2 FS bytes used 2.21TiB
devid 1 size 7.28TiB used 2.21TiB path /dev/sdb
devid 2 size 2.73TiB used 2.21TiB path /dev/sda
➜ ~ btrfs filesystem resize 2:max /Red-Vol
Resize device id 2 (/dev/sda) from 2.73TiB to max
➜ ~ btrfs filesystem show /Red-Vol/
Label: none uuid: c2d98db0-b903-4cc2-947c-4c4c944da026
Total devices 2 FS bytes used 2.21TiB
devid 1 size 7.28TiB used 2.21TiB path /dev/sdb
devid 2 size 7.28TiB used 2.21TiB path /dev/sda
```
Finally, to see your changes, remount the filesystem:
```
➜ ~ mount -o remount,rw /Red-Vol
➜ ~ btrfs filesystem show /Red-Vol/
Label: none uuid: c2d98db0-b903-4cc2-947c-4c4c944da026
Total devices 2 FS bytes used 2.21TiB
devid 1 size 7.28TiB used 2.21TiB path /dev/sdb
devid 2 size 7.28TiB used 2.21TiB path /dev/sda
```
Hurrah!
# Mounting The Bastards
> Do not go there. You know what you did.
{.is-warning}
The best method to mount your new pool is by the `UUID` of one of your disks - which is always unique.
Finding the `UUID` is easy with `blkid`:
```zsh
➜ ~ blkid | grep /dev/sda
/dev/sda: UUID="c2d98db0-b903-4cc2-947c-4c4c944da026" UUID_SUB="19f4df76-f50b-48c2-ad4b-1f71936440cd" BLOCK_SIZE="4096" TYPE="btrfs"
```
Now, go fish:
```
➜ ~ cat /etc/fstab
...
...
...
UUID=c2d98db0-b903-4cc2-947c-4c4c944da026 /Red-Vol/ btrfs defaults,compress=zstd:11 0 0
# ^ This friendo right here from blkid
...
...
...
```
Or you can go by just the `id`, which is how OpenSUSE did it. I do not know why but I know they know better, you know?
```
...
...
...
/dev/disk/by-id/ata-TOSHIBA_HDWG480_71Q0A0PDFR0H /Red-Vol/ btrfs defaults,compress=zstd:11 0 0
...
...
...
```
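Before that leap of faith, you can sanity-check the new entry without rebooting - `findmnt --verify` lints `/etc/fstab`, and `mount -a` attempts to mount anything listed that isn't mounted yet (a sketch of a sanity pass, not gospel):

```zsh
➜ ~ findmnt --verify
➜ ~ mount -a
```

If both come back quiet, the reboot is much less of a gamble.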
Now, reboot and hope for the best.
# Keep An Eye On The Bastards
Now, we add the disk(s) we replaced to `smartd`. Edit `/etc/smartd.conf` and add the disk:
```conf
#DEVICESCAN
/dev/disk/by-id/ata-TOSHIBA_HDWG480_71Q0A0PDFR0H -a
/dev/disk/by-id/ata-TOSHIBA_HDWG480_71Q0A0SHFR0H -a
```
Uncommenting `DEVICESCAN` also works, but we do not trust it.
# Balance The Bastards & Scrub The Bastards
You're not assuming nothing went wrong, are you?
Anyway, if you got this far, run `btrfs balance start [mountpoint]`. If that checks out, run `btrfs scrub start [mountpoint]`. Each of these will take many, many hours.
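Both sketched against the same `/Red-Vol` mountpoint (an assumption carried over from above) - each has a matching `status` subcommand for the anxiety:

```zsh
➜ ~ btrfs balance start /Red-Vol
➜ ~ btrfs balance status /Red-Vol
➜ ~ btrfs scrub start /Red-Vol
➜ ~ btrfs scrub status /Red-Vol
```

Note that `balance start` runs in the foreground by default, so consider a `tmux` session.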
Enjoy the rest of your day.
---
title: Solved Issues
description: A knowledgebase of all vicious errors I came across, and how I slew them.
published: true
date: 2022-04-30T20:10:42.177Z
tags: wireguard
editor: markdown
dateCreated: 2021-08-25T20:27:57.673Z
---
# General Linux Things
Just because you run into errors in a specific distribution (*Gentoo!*) does not mean it is at fault - some things just happen, and you'll just have to accept that.
## Docker doesn't use BtrFS
I'm not sure if this is an error bit or a configuration bit (I *think* it should do it by default), but Docker doesn't always (if at all?) use the BtrFS driver when actually sitting on BtrFS, which is a loss in efficiency and performance (I think).
[The Gentoo wiki's page on Docker](https://wiki.gentoo.org/wiki/Docker#OpenRC) also shows how to set it up to use BtrFS (coincidentally - lucky you!).
[Docker also has an extensive page on BtrFS](https://docs.docker.com/storage/storagedriver/btrfs-driver/), though it seems a bit systemd-centric.
This will also be added to the install checklist, once I bother writing one.
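For systemd-style setups, the Docker docs boil down to one key in `/etc/docker/daemon.json` (a sketch only - switching storage drivers hides your existing images, so back up `/var/lib/docker` first):

```json
{
  "storage-driver": "btrfs"
}
```

Restart the Docker service afterwards and check `docker info` for the active driver.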
## Gitea denies SSH access despite good key
As I've learned the hard way, Gitea has its SSH directory under `data/git` (and under `data/ssh`, which is surprisingly irrelevant here), and any change in permissions of this directory will bork all SSH access.
If you come across denied SSH access with keys that worked before, or find the following in the log - `Authentication refused: bad ownership or modes for file /data/git/.ssh` (really seems obvious now, doesn't it?) - `chmod` the directory to `700`, and the `authorized_keys` file inside it to `600`.
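A sketch of the repair - the relative paths here stand in for the container's `/data/git/.ssh`, so adjust accordingly:

```shell
# Recreate the ownership/mode layout sshd insists on:
# the .ssh directory itself must be 700, the key file 600
mkdir -p data/git/.ssh
touch data/git/.ssh/authorized_keys
chmod 700 data/git/.ssh
chmod 600 data/git/.ssh/authorized_keys
# Verify the modes took
stat -c '%a' data/git/.ssh data/git/.ssh/authorized_keys
```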
## Inappropriate ioctl for device when decrypting GnuPG File
Like all great things Linux, the answer came from a [random blog on the web](https://d.sb/2016/11/gpg-inappropriate-ioctl-for-device-errors).
Quote:
To solve the problem, you need to enable loopback pinentry mode.
- Add this to `~/.gnupg/gpg.conf`:
`use-agent pinentry-mode loopback`
- Add this to `~/.gnupg/gpg-agent.conf`, creating the file if it doesn't already exist:
`allow-loopback-pinentry`
- Restart the agent with `echo RELOADAGENT | gpg-connect-agent`
and you should be good to go!
Confirmed working, and just in case - it seems to be a GUI thing.
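For clarity, those are two separate lines in `gpg.conf` - the end state of both files looks like this:

```
# ~/.gnupg/gpg.conf
use-agent
pinentry-mode loopback

# ~/.gnupg/gpg-agent.conf
allow-loopback-pinentry
```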
## Snapper can't create root snapshots
This one is fully on you, partner. Two things are at fault:
- Snapper really doesn't like it when the entire `.snapshots` directory gets wiped from under its feet.
- Snapper cannot snapshot a volume which has an active swap file on it.
To avoid such errors, you must:
- Use Snapper's really vague `delete-config` command, or wipe the config file from `/etc/conf.d/snapper` and `/etc/snapper/configs/` before wiping snapshots completely.
- Use a separate subvolume for the swap file, as [mentioned in the Arch Wiki](https://wiki.archlinux.org/title/btrfs#Swap_file), which I skimmed.
> Do not skim read the Arch wiki, you waffle!
{.is-danger}
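The swap-on-its-own-subvolume fix, sketched out with the classic procedure (the `/swap` path and `8G` size are my assumptions - the Arch wiki I should have read properly has the full ritual):

```zsh
➜ ~ btrfs subvolume create /swap
➜ ~ truncate -s 0 /swap/swapfile
➜ ~ chattr +C /swap/swapfile   # disable copy-on-write - mandatory for swap files on BtrFS
➜ ~ fallocate -l 8G /swap/swapfile
➜ ~ chmod 600 /swap/swapfile
➜ ~ mkswap /swap/swapfile
➜ ~ swapon /swap/swapfile
```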
# Gentoo
Don't be fooled by the location and prominence of this list - Gentoo is fantastic, and it's always *my* fault.
## No GUI
Uh oh! Did you mess it up that bad already? Try the following:
- Make sure all prerequisites are set as outlined [here](https://wiki.gentoo.org/wiki/LightDM). This is relevant mostly for new installations.
- Read the display manager's log `/var/log/gdm/[log]`. Unless the error is really obvious, this won't help.
- Try running `gdm` manually from the root user.
- What does `/etc/conf.d/display-manager` say? It should be whatever display manager you're trying to use (ex: `DISPLAYMANAGER="gdm"`). If updating it, run `openrc` afterwards.
- Try a lighter, more portable display manager. [LightDM](https://wiki.gentoo.org/wiki/LightDM) is a good candidate.
- If all else fails, try recompiling (or reinstalling) `GDM`, and if that fails - `GNOME` in its entirety (and bid your afternoon goodbye...)
Step #5 is usually it - whatever it does, it solves for GDM as well.
## Pre-compilation check fails
Did you try to compile a massive package [Without looking here first?](/Compiling_on_RAM)
> To solve this permanently, you can exclude problematic packages [as described here](https://wiki.gentoo.org/wiki/Portage_TMPDIR_on_tmpfs#Per-package_choices_at_compile_time).
## Can't see/mount digital camera
After making sure all prerequisites are met for [USB](https://wiki.gentoo.org/wiki/USB/Guide) and [MTP](https://wiki.gentoo.org/wiki/MTP), and following [Arch's methods to troubleshoot](https://wiki.archlinux.org/title/Media_Transfer_Protocol) - all of which will be utterly useless - enable the `gphoto2` USE flag [as listed here](https://www.gentoo.org/support/use-flags/).
Run a quick `emerge --update --newuse --deep --with-bdeps=y --keep-going @world`, a swift `emerge @preserved-rebuild` and a lively `emerge --depclean` if you're feeling extra snazzy and voilà!
## Icons on GNOME look like they've escaped from hell
Switch to Clang. Why? Who knows? Not me. But it did it.
> It likely did not do it.
{.is-info}
## Steam does not start
Yes, I used to think this was a Wayland thing. It is not a Wayland thing. If updating does not solve the error, simply run `steam --reset`, which, um, `steam resets` and solves it.
> Tested on Native overlay - not on Flatpak. We like the native overlay now.
{.is-info}
## `libvirt` can't load AppArmor profile
We like AppArmor. We like `libvirt` for KVM virtualization. Unfortunately, they often don't like each other.
Fortunately, you can disable their interaction on both ends:
- AppArmor: `aa-complain /usr/sbin/libvirtd`
- libvirt: set `security_driver = "none"` in `/etc/libvirt/qemu.conf`
> Keep in mind this is likely a security risk, especially for Windows guests. Though AppArmor devs do refer to `libvirtd` as an "inherently trusted process", which is nice.
{.is-warning}
Overall, unpleasant.
## `make` doesn't copy vmlinuz to boot partition after compiling
You forgot to select the kernel with `eselect kernel`, you twit.
# Wireguard
## Wireguard does not work
I've tried to outsmart this, and I wish I could say better, but it's really simple: purge the container, reset to the defaults.
Swallow your pride. VPNs are rough.
The `docker-compose` on the Git server works great.
### Cannot access internal containers through Wireguard
Apparently, even though you're coming from inside the network via Wireguard, your queries *still* go through the firewall as though you're from outside (all of the above likely blatantly wrong).
To get a container to work, allow its internal port through the firewall:
`ufw allow #[/tcp|udp]`
`ufw reload`
> Example: I had to run `ufw allow 3333` to access this wiki!
{.is-info}
# Seafile
## New Seafile container can't sign SSL certificate
Unfortunately, Seafile demands its own certificate and can't get it via the reverse proxy. Simply turn off the main proxy container, switch Seafile to ports 80 and 443 (won't work otherwise!) and let it generate its initial certificate. You can revert to the reverse proxy just fine after that.
# Jekyll
## Mixed Content warning behind reverse proxy
There are two actions needed to prevent Jekyll serving assets over http, thus creating this warning and blocking assets on mobile:
1. In the configuration file `_config.yml`, comment out the `baseurl` variable, and put the site's exact address under `url`:
```yml
#baseurl: "" # the subpath of your site, e.g. /blog
url: "https://jekyll.pukeko.xyz" # the base hostname & protocol for your site, e.g. http://example.com
```
> Make sure there are no trailing slashes!
{.is-warning}
2. When loading assets in your page, avoid using the `{{ site_url }}` variable, and load using the full path instead:
```yml
#![Pukeko](https://{{ site_url }}/assets/pukeko.jpeg)
# ^ WRONG
![Pukeko](https://jekyll.pukeko.xyz/assets/pukeko.jpeg)
# Correct.
```
> Solved with assistance from this [Reference](https://github.com/jekyll/jekyll/issues/5709)
{.is-info}
# Nextcloud
## Nextcloud PostgreSQL complains about missing `/var/lib/postgresql/data/pg_stat_tmp/global.tmp`
Now this one is *weird*. If you search for it, the file *is* there, but isn't *quite* there:
```sh
ls: cannot access 'global.tmp': No such file or directory
-????????? ? ? ? ? ? global.tmp
```
Even stranger, online search yields these errors are either hardware failure or failed network shares, none of which are the case here. The file cannot be `chmod`-ed, deleted or otherwise manipulated - even with root privileges. Thankfully, the fix is simple:
1. Shut down the container.
2. Maybe take a snapshot.
3. Delete the `pg_stat_tmp` folder.
4. Watch as PostgreSQL complains, creates the file anyway and works.
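The whole dance as a sketch - the service name `db` and the relative data path are assumptions from my compose layout, not scripture:

```zsh
➜ ~ docker-compose stop db
➜ ~ rm -rf ./postgresql/pg_stat_tmp   # the parent folder CAN go, even though the file itself refuses
➜ ~ docker-compose start db
```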
# Filebrowser
## Internal filebrowser commands fail to execute
I tried disabling authentication (as this is behind Authelia), but running any `filebrowser` binary command returns `timeout`:
```
2021/11/10 17:55:02 timeout
```
This is because the database [Can't be modified](https://github.com/filebrowser/filebrowser/issues/627) while Filebrowser is running. As it turns out, you can execute offline using `docker-compose`:
`docker-compose run file-browser config set --auth.method=noauth`
> This will only work when the container is offline!
{.is-warning}
> Be sure to clean up the new `run` instances with `docker-compose down` before restarting.
{.is-info}
---
title: Upgrading Gitea
description:
published: true
date: 2022-04-30T20:10:44.240Z
tags: git, backup, docker, linux, restore, upgrade
editor: markdown
dateCreated: 2022-01-31T18:03:38.387Z
---
Upgrading Gitea can involve migrating the PostgreSQL settings, which must be done carefully. It consists of two phases - the export, which is done via the Gitea server, and the import, which is done partly through the server and partly through the database.
# Export
Gitea has a built-in `dump` utility - executing the following will dump a `.zip` file into your current directory:
`docker exec -u git -w /data $(docker ps -qf "name=gitea_server") sh -c '/app/gitea/gitea dump -c /data/gitea/conf/app.ini'`
This will generate a zipped dump file:
```bash
Takahe:/Red-Vol/Media/Containers/gitea # ll /Red-Vol/Media/Containers/gitea/*.zip
-rw------- 1 shmick 1000 355487720 Jan 31 20:01 /Red-Vol/Media/Containers/gitea/gitea-dump-1643652088.zip
```
Now, this zip will consist of the server directories, and a `.sql` dump to restore to the database.
# Import
## Server
The dump is simply the contents of the `/data` directory, wherever it is mapped.
Simply move it where it should be and `chown` it to your user:
`Takahe:/Red-Vol/Media/Containers/gitea # chown -R shmick data/`
## Database
Inside the dumped archive you will find an SQL file:
```bash
Takahe:/Red-Vol/Media/Containers/gitea/postgresql # ll *.sql
-rw------- 1 70 root 775806 Jan 31 20:01 gitea-db.sql
```
In order to restore it, you must first bring up the database container after updating the image. Then, move the `.sql` file to the database's mapped directory on the host.
Then, `exec` into the container:
`docker exec -ti gitea_db /bin/bash`
Finally, restore using the `psql` command:
`psql -U $USER -d $DATABASE < gitea-db.sql`
When done, exit the container, bring the stack down with `docker-compose down` and recreate it with `docker-compose up -d`.
Enjoy your new blazing fast Gitea!
---
title: Restoring Root Filesystem from BtrFS Snapshot
description: Undo your grave mistakes quickly and efficiently. Yay!
published: true
date: 2022-04-30T20:10:46.302Z
tags: btrfs, recovery, rootfs
editor: markdown
dateCreated: 2021-10-20T05:15:59.558Z
---
If you frigged up *really* bad, you likely cannot read this, which is ironic.
However, I'll write it down so it hopefully sinks in the ol' noggin.
# Booting into snapshots
I configured the system to use `snapper`, a tool from OpenSUSE, which takes and manages BtrFS snapshots. Thanks to the `grub-btrfs` plugin, these root filesystem snapshots are accessible from the GRUB2 boot menu.
First, select whichever snapshot (they are neatly dated) by following the `Gentoo snapshots` option in the boot menu.
# Making snapshots writable
By default, snapshots are read only. They should still boot neatly, but lots of things will not work (like NTP and most compiles).
Take a good look around - do you like this snapshot? is this a nice snapshot? would you like to settle down with this snapshot?
If so, run:
`btrfs property set -ts / ro false`
where `/` is the path the snapshot is currently mounted at (use `findmnt -nt btrfs` to find out).
> Once a snapshot is writeable, consider it compromised - booting from it again if you change your mind may not be as smooth!
{.is-warning}
# Setting snapshot as new root
First, find the snapshot ID by running `findmnt -nt btrfs` - which will give you something like this:
```
root@Takahe] ~ (takahe-root)
findmnt -nt btrfs
/ /dev/nvme0n1p2[/.snapshots/232/snapshot] btrfs rw,noatime,compress=zstd:11,ssd,d
```
Now, `grep` the path in the square brackets from the output of `btrfs subvolume list /`:
```
[root@Takahe] ~ (takahe-root)
btrfs subvolume list / | grep .snapshots/232/snapshot
ID 8624 gen 2199024063871 top level 2740 path .snapshots/232/snapshot
```
Notice how I removed the leading `/`.
Now, take the `ID` of the snapshot, and set it as the new default:
```
[root@Takahe] ~ (takahe-root)
btrfs subvolume set-default 8624 /
```
> The BtrFS default is built into the filesystem - it overrides the value in `/etc/fstab`.
{.is-info}
Finally, update the bootloader to enable the default entry to boot into your snapshot:
`grub-mkconfig -o /boot/grub/grub.cfg`
This should be it - but you can double check if you're really keen by reading that file and looking for something like this in the kernel parameters:
```
linux /vmlinuz-5.14.10-gentoo-x86_64 root=UUID=d9699f1e-6825-4b38-9e3a-c228763b07cc ro rootflags=subvol=.snapshots/232/snapshot
```
See the `rootflags`? that's us, baby.
Happy compiling!
---
title: Pukeko.xyz Wiki
description: For Whatever it is I screwed up so bad I had to write it down!
published: true
date: 2022-04-30T20:10:48.981Z
tags:
editor: markdown
dateCreated: 2021-08-21T17:56:21.296Z
---
# Pukeko.xyz wiki
Welcome to the Pukeko.xyz wiki, where I will document whatever I self-host, once I bother, alongside whatever misshapen pearls of wisdom I come across.
---
title: Adding a new Subdomain
description: I always mess something up in the process - so here's a list of a new subdomain procedure
published: true
date: 2022-04-30T20:10:51.032Z
tags: config, docker, container, traefik
editor: markdown
dateCreated: 2022-04-15T08:37:07.403Z
---
So it's not terribly difficult, but it can get surprisingly convoluted. To add a new subdomain:
# Add DNS Record with CloudFlare
1. Go to the [Cloudflare Dashboard](https://dash.cloudflare.com/) (linked from the [home page](pukeko.xyz) for your convenience)
2. Select the site
3. Click 'DNS'
4. Click 'Add Record'
5. Input the new subdomain thus:
- Type: CNAME
- Name: Subdomain name (`blah`.pukeko.xyz)
- Target: pukeko.xyz
- Use the Cloudflare Proxy if no other ports are needed and site is not performance sensitive (streaming, file transfer, etc). Otherwise, do not use the proxy.
This creates an alias - which means I only have to maintain one DNS record (`pukeko.xyz`).
# Configure Certificate with Traefik
## Container-side
Adding the certificate is done using the Traefik reverse proxy. This means it is done via the container's `docker-compose.yml` file - using the `labels` section.
Example:
```yml
labels:
- "traefik.enable=true"
- "traefik.docker.network=[container network]"
- "traefik.http.routers.[router name].entrypoints=websecure"
- "traefik.http.services.[router name].loadbalancer.server.port=[application port]"
- "traefik.http.routers.[router name].rule=Host(`[your subdomain]`)"
- "traefik.http.routers.[router name].tls.certresolver=pukekoresolver"
- "traefik.http.routers.[router name].middlewares=authelia@docker"
```
Some pointers:
1. The `[router name]` can be absolutely anything so long as it's consistent.
2. The `[container network]` must match whatever you defined at the foot of the compose file:
```yml
networks:
network:
driver: bridge
internal:
driver: bridge
```
> If you're running in container folder `container`, you will end up with network `container_network` and `container_internal`. *This is confusing - be wary!*
{.is-info}
3. `[application port]` is whatever the application uses *internally*. It does not matter how you expose it.
4. `[your subdomain]` is whatever you registered with CloudFlare at the previous section.
5. Finally, the `entrypoints`, `certresolver`, and `middlewares` must match whatever is defined in Traefik's `docker-compose` file.
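For the avoidance of doubt, here is the same label set filled in for a hypothetical `wiki` service - every concrete value below is an example, not gospel:

```yml
labels:
  - "traefik.enable=true"
  - "traefik.docker.network=wiki_network"
  - "traefik.http.routers.wiki.entrypoints=websecure"
  - "traefik.http.services.wiki.loadbalancer.server.port=3000"
  - "traefik.http.routers.wiki.rule=Host(`wiki.pukeko.xyz`)"
  - "traefik.http.routers.wiki.tls.certresolver=pukekoresolver"
  - "traefik.http.routers.wiki.middlewares=authelia@docker"
```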
## Traefik-side
Now, Traefik needs to talk to your new service. This means you need to add your external network to Traefik's compose file and tell it to use it:
1. Add network to Traefik container:
```yml
networks:
- container_network
```
2. Define `[container network]` as external at the foot of the file:
```yml
networks:
...
...
...
container_network:
external: true
```
> By convention, I use two networks for each multi-container stack - a `network` and an `internal` network. Ideally, Traefik will only see the `network` (which means it only 'sees' the application, and not whatever supports it). This isn't essential, but recommended.
{.is-warning}
Finally, rebuild Traefik with `docker-compose up -d`. Voilà!
---
title: Securing Nginx pages with a password
description: Short guide on securing pages behind a reverse proxy with a web-server based password. Useful for containers without authentication support.
published: true
date: 2022-04-30T20:10:53.225Z
tags: config, nginx
editor: markdown
dateCreated: 2021-08-26T20:29:51.514Z
---
# Securing Nginx pages with a password
Hiding every site behind a secure password is quite easy with nginx.
> As it turns out, Nginx (and web servers in general) only support rather old hashing protocols.
While probably sufficient with SSL, **use other solutions, like builtins (or better yet, Authelia) where possible**.
And if you do use this method, **use a strong password!**
{.is-danger}
## Create authentication file
First, from the web server's directory (in our case, the `swag` container's directory), create an `.htpasswd` file with the user you'd like to authenticate with, like so:
`sh -c "echo -n 'shmick:'" > /Red-Vol/Media/Containers/swag/config/nginx/.htpasswd`
> This command will overwrite any previous credentials in this file, if it exists.
{.is-warning}
## Generate and salt a password
You can use `openssl` to generate the password:
`sh -c "openssl passwd -apr1" >> /Red-Vol/Media/Containers/swag/config/nginx/.htpasswd`
This will prompt you to enter and confirm a password, and will output a hash into the `.htpasswd` file.
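Both steps together, as a reproducible sketch - the local `.htpasswd` path, the fixed salt and the password `hunter2` are illustration only; on the server, use the real path and let `openssl` pick a random salt:

```shell
# Write the username (note: this clobbers any existing file)
printf 'shmick:' > .htpasswd
# Append an apr1 (Apache MD5) hash; the fixed salt only makes this run reproducible
openssl passwd -apr1 -salt example1 'hunter2' >> .htpasswd
cat .htpasswd
```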
## Point webpage to authentication file
By default, sites won't reach for the file to authenticate. To point a site to authenticate using the file, you'll need to add the following to its configuration:
```nginx
location / {
    # enable HTTP basic auth for this location
    auth_basic "Restricted";
    # ^ Message displayed in the pop-up window
    auth_basic_user_file /config/nginx/.htpasswd;
}
```
> You may need to restart nginx for the configuration to take effect.
{.is-info}
## All done!
Finally, you should see a pop up when you browse to the site:
![](/nginx_auth.png)
Example pop-up window
---
title: Podman
description: Podman - the sleek, daemonless, rootless container engine. We'll get there!
published: true
date: 2022-04-30T20:10:55.242Z
tags: docker, container, podman
editor: markdown
dateCreated: 2022-02-14T21:19:25.267Z
---
# Podman Tips, Tricks, Shenanigans
I've had my eye on transitioning to Podman for a good while. It does not require a *big **phat** daemon* and can run containers rootless - perfect!
Unfortunately, Docker is so, so easy, and unlike what Red Hat claim, `alias docker=podman` is not nearly enough.
So what does it take to get Podman working?
## Installation
Quite obvious - install the `podman` package.
What is *not* obvious is that you *also* need the `podman-docker` package, which cannot be installed alongside Docker itself.
> Lesson 1: Docker and Podman do not live together. Sorry!
{.is-info}
## Rootfull or rootless?
Podman has a `rootfull` mode - which closely resembles Docker. However, it also has a `rootless` mode which is much less invasive - with caveats:
1. No ping
2. No ports below `1024` by default
3. Some configuration for DNS
4. Containers do not get their own IP and can only talk via the host
These limitations can be problematic - but the internet claims:
> Lesson 2: Podman can do rootfull *and* rootless at once!
{.is-info}
Still need to confirm this, but should be good.
## Run the Podman Socket
What? another thing that runs and we are dependent on?
Fear not - 'tis a mere schewpid Unix socket. You can run Podman without it, but then the `docker-compose` command does not work, which forces you to either:
1. Write Kubernetes YAML
or
2. Just run containers from the commandline
Both options are high treason, so the socket it is.
> Rootfull:
`systemctl enable --now podman.socket`
Rootless:
`systemctl --user enable --now podman.socket`
{.is-info}
To confirm the socket is working, run `curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping` **as root**.
You should receive a cute little `OK`:
```fish
# curl -H "Content-Type: application/json" --unix-socket /var/run/docker.sock http://localhost/_ping
OK
```
Cute arrow courtesy of the `fish` shell.
Speaking of `fish`:
> Podman rootless commands will not work on `fish` because it screws up the `$DOCKER_HOST` variable. This is likely because I am stupid.
{.is-warning}
Okay, we have a socket!
If running rootless, you should also run:
`export DOCKER_HOST="unix:///run/user/$UID/podman/podman.sock"`
else stuff will not work with cryptic, menacing Python errors.
(`fish` screws up the `$UID` bit there. Good times.)
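If you insist on `fish` anyway, an equivalent incantation sidesteps `$UID` by asking `id` directly - this is my assumed workaround, so test before trusting:

```fish
set -gx DOCKER_HOST "unix:///run/user/"(id -u)"/podman/podman.sock"
```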
## Going Rootless
If you're rootfull, life is pretty good right now. Us little guys need to work a little harder.
First, DNS will not work unless you install the `dnsname` plugin:
`sudo zypper install cni-plugin-dnsname`
Now, erase all networks and run a container (which will create a default bridge).
You can check that it stuck by looking at `/etc/cni/net.d/87-podman-bridge.conflist`:
```
{
"type": "dnsname",
"domainName": "pukeko.xyz"
}
```
If you run `podman network ls`, you should also see `dnsname` under `plugins`.
> DNS will not resolve unless the *first* entry in your `/etc/resolv.conf` is valid!
{.is-warning}
At this stage, containers may complain about `dnsmasq` permissions - if so, you need to run:
`sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.dnsmasq`
Still have not found a permanent fix but I think it goes away.
Anyways, that's it for now. I've yet to Podmanize everything - but it will all be here.