mirror of
https://github.com/Derisis13/derisis13.github.io.git
synced 2025-12-06 22:12:48 +01:00
post: home server storage
_posts/2024-05-26-storage.md
---
layout: post
title: "Storage in my home server"
tag: "home server"
---

This is part two of my server writeup.
I'll discuss how I organized my server's storage, starting from the hard drives, touching on file systems and redundancy, and going into the folder structure, permissions and shared folders.

# Changes to the host system

As I mentioned in my last post, my power solution is the weakest link in my system.
Switching to a USB-C PD charger and trigger board didn't help much either: the current spikes from hardware spin-up were too much even for that.
In this regard the salvaged PSU performed better, but I won't switch back to it, as it'd be a shock and fire hazard.
The unstable PSU caused corruption of my data, which is unacceptable.

The requirements have also changed since last time: I no longer intend to replace the Synology NAS; I only want to store my own data on this server.
This allowed me to drop two of the four redundant disks, which puts me inside the power budget; however, even with two disks I was still getting some errors.
Strangely, the errors affected only one disk.
I got a replacement with the same capacity, but the system became unusably glitchy, crashing and rebooting after about an hour of use, every single time, until none of the disks were detected.
It turned out that the six-port PCIe-SATA adapter had died on me.
I replaced it with the two-port one I wrote about in part zero (it's sufficient now), and also replaced the misbehaving disk with another one.

With these modifications, my server has been running stably for more than a month now (except when I tripped a breaker, but let's not count that).
No crashes, no errors in `dmesg`.
It appears I've fixed all the hardware issues, and I can move on to the configuration.

# Block storage and file system

The system boots from a 16 GB eMMC module I bought with the SBC.
It's fine for the most part, but container and VM images need to live elsewhere, as they wouldn't fit otherwise.

I also briefly used a 16 GB SD card for swap (to avoid the OOM killer), but I removed it while the server was crashing constantly.
The system doesn't seem to miss it at all.

There's a 2.5" 1 TB HDD attached via a USB3 SATA adapter that serves as non-redundant local backup storage.
I push (borg) backups from my laptop to it, and it also holds backups of the most important data on the server, as well as the system and docker configurations.
It's formatted as BTRFS to take advantage of its extra features (compared to ext4).

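For reference, a laptop-side borg workflow looks roughly like this (the repository path and source directories here are made-up examples, not my actual setup):

```shell
# One-time: initialize an encrypted borg repository on the server over SSH
borg init --encryption=repokey ssh://server/srv/backupdisk/borg-laptop

# Regularly: create a deduplicated archive of the directories worth keeping
borg create --stats ssh://server/srv/backupdisk/borg-laptop::'{hostname}-{now}' \
    ~/Documents ~/Pictures

# Thin out old archives so the disk doesn't fill up
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    ssh://server/srv/backupdisk/borg-laptop
```

Deduplication means repeated runs only store what changed, which is what makes pushing frequent backups to a single 1 TB disk practical.
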
The main storage is the two 1 TB HDDs in BTRFS RAID1.
I chose BTRFS for redundancy instead of MDRAID, because this way BTRFS can take full advantage of the redundancy and correct more errors.
I'm not sure whether it's a testament to this or to the quality of my "power supplies", but while I had 20-30 files rendered partially unreadable with my RAID-6 config, I had none with the BTRFS RAID1 one.
Do note that BTRFS on multiple devices is not the best idea; see [this Ars Technica article](https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/) for details.
The best solution would be ZFS, but as explained previously, that's not possible for now.

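Setting up such an array is a handful of commands; a rough sketch (the device names and mount point are illustrative, not my actual ones):

```shell
# Create a BTRFS array with both data and metadata mirrored across two disks
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# Mounting either member brings up the whole array
mount /dev/sdb /srv/raid

# Confirm data and metadata profiles are RAID1
btrfs filesystem df /srv/raid

# Run periodically: verify checksums and repair bad blocks from the good copy
btrfs scrub start /srv/raid
```

The scrub is what actually exercises the self-healing: BTRFS detects a checksum mismatch on one disk and rewrites the block from the intact mirror.
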
# Folder layout and permissions

The redundant array serves two purposes: it holds the docker configurations (to increase availability) and all the user data.
These are separated into `compose/` and `fileserver/`.
Compose holds docker volumes and compose files, but not images.
Fileserver is shared via SMB and houses one folder for each user, plus a `public/` directory.
They all have `Documents/`, `Downloads/`, `Music/`, `Pictures/`, `Templates/` and `Videos/`, but media usually gets uploaded to `public/`, while documents are kept in the user directories.

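The layout can be sketched with a few commands (the scratch directory and the user names "alice" and "bob" are made up for illustration; the real array is mounted elsewhere):

```shell
# Recreate the share layout under a scratch directory
ROOT=/tmp/array-demo
mkdir -p "$ROOT/compose"
# One folder per user, plus public/, each with the standard XDG-style subfolders
for user in alice bob public; do
  for dir in Documents Downloads Music Pictures Templates Videos; do
    mkdir -p "$ROOT/fileserver/$user/$dir"
  done
done
ls "$ROOT/fileserver"
```
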
Everything on this array is owned by `www-data:users`.
I would have liked to restrict (write) access to each user directory to the user it belongs to, but Nextcloud (which I use extensively) mandates that all directories are owned by the aforementioned user and group.
To enforce this, all docker containers are configured with a PUID of 33, a PGID of 100 and a UMASK of 002, and in Samba the `force user = www-data` and `force group = users` options are set for the share.
NFS is avoided since it doesn't have these options.

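A quick local demonstration of what the UMASK of 002 buys (this is just an illustration, not the actual container setup):

```shell
# With umask 002, new files come out group-writable (0666 & ~0002 = 0664),
# so anything created as www-data:users stays editable by the whole group.
demo_dir=$(mktemp -d)
cd "$demo_dir"
umask 002
touch demo.txt
stat -c '%a' demo.txt   # prints 664
```

With the default umask of 022 the file would be 644 instead, and other members of `users` couldn't modify what one service created.
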
At the root of the non-redundant disk, there's a directory exposed as an SMB share titled "Backup".
Its purpose is to allow backups to be made from computers on the local network.
An rsync task is set up to copy it to an offsite NAS in the family for a 3-2-1 backup scheme.
Outside of the backup directory there's a folder containing docker images and another one for Jellyfin to use as a (transcode) cache and metadata storage.
These aren't critical, so it'd be a waste to store them on RAID.
In the future, I'd like to set up an rsync target on this disk to receive remote backups from someone.
I also set up an iSCSI target on this disk, but I have yet to put it to use.

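The offsite step boils down to a single command run on a schedule (the local path and NAS hostname here are placeholders):

```shell
# Mirror the Backup share to the offsite NAS over SSH;
# --delete keeps the remote copy an exact mirror, -a preserves permissions and times
rsync -a --delete /srv/backupdisk/Backup/ family-nas:/volume1/offsite-backup/
```

With the data living on the laptops, on this disk, and on the offsite NAS, that covers the three copies, two media types and one offsite location of the 3-2-1 rule.
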
# Summary

I have 2 TB of usable space in my server.
1 TB is redundant and is used as a high-availability NAS, with only the most important files backed up elsewhere.
The other 1 TB is non-redundant and is used only for containers, caching and local backup storage, which has already saved me a lot of time.
The local backups are further reinforced by an offsite copy at a family member's.
Both volumes run BTRFS for its advanced features.
Various workarounds are in effect on the redundant array to ensure compatibility with Nextcloud, which requires all files to be owned by a specific user and group.