https://wiki.bretts.org/api.php?action=feedcontributions&user=Andrew&feedformat=atomBriki - User contributions [en]2024-03-28T20:49:24ZUser contributionsMediaWiki 1.28.0https://wiki.bretts.org/index.php?title=Docker&diff=8547Docker2024-03-07T16:46:12Z<p>Andrew: /* Containers */</p>
<hr />
<div>== Useful Commands ==<br />
<br />
; docker ps -a: List all containers<br />
; docker container inspect <container>: Show details of <container><br />
; docker logs <container>: Show logs for <container><br />
; docker exec -it <container> /bin/bash: Start an interactive shell in <container><br />
<br />
== Updating container ==<br />
<br />
=== Manually ===<br />
sudo docker pull <image><br />
sudo docker stop <container><br />
sudo docker rm <container><br />
<docker run command><br />
<br />
=== Automatically ===<br />
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock taisun/updater --oneshot <container><br />
<br />
== Containers ==<br />
<br />
=== Portainer ===<br />
sudo docker run -d --name portainer \<br />
-p 8000:8000 -p 9443:9443 \<br />
-v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/portainer:/data \<br />
-v /etc/ssl/bretts.org:/certs \<br />
--restart unless-stopped \<br />
portainer/portainer-ce \<br />
--sslcert /certs/fullchain.pem --sslkey /certs/key.pem<br />
<br />
=== Plex ===<br />
<br />
Get your claim token: https://www.plex.tv/claim/<br />
<br />
Create the container with the claim token substituted:<br />
sudo docker run -d --name plex --network=host -e PLEX_UID=111 -e PLEX_GID=127 -e TZ=Europe/London -e PLEX_CLAIM=<CLAIM_TOKEN> \<br />
-v /var/lib/plexmediaserver:/config -v /srv:/srv --device=/dev/dri:/dev/dri \<br />
--restart unless-stopped \<br />
plexinc/pms-docker:plexpass<br />
<br />
=== Tautulli (Plex Monitoring/Notifications) ===<br />
sudo docker run -d --name tautulli -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8181:8181 \<br />
-v /var/lib/torrent/tautulli/config:/config -v /var/lib/plexmediaserver/Library/Logs:/logs \<br />
--restart unless-stopped \<br />
linuxserver/tautulli<br />
<br />
=== Jackett (Torrent Gateway) ===<br />
sudo docker run -d --name=jackett -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 9117:9117 \<br />
-v /var/lib/torrent/jackett/config:/config -v /var/lib/torrent/jackett/downloads:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/jackett<br />
<br />
=== FlareSolverr (CloudFlare proxy bypass) ===<br />
sudo docker run -d --name=flaresolverr \<br />
-p 8191:8191 \<br />
-e LOG_LEVEL=info \<br />
--restart unless-stopped \<br />
ghcr.io/flaresolverr/flaresolverr:latest<br />
<br />
=== Deluge ===<br />
sudo docker run -d --name deluge -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/torrent/deluged/config:/config -v /srv/incoming/torrents/deluge:/srv/incoming/torrents/deluge \<br />
-v /etc/ssl/bretts.org:/etc/ssl/bretts.org \<br />
--restart unless-stopped \<br />
linuxserver/deluge<br />
<br />
Since user groups don't seem to apply across the docker boundary, "torrent" will need to be given explicit permission to the private key file via an ACL:<br />
setfacl -m "u:torrent:rw" /etc/ssl/bretts.org/key.pem<br />
<br />
=== Radarr (Movie Downloads) ===<br />
sudo docker run -d --name radarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7878:7878 \<br />
-v /var/lib/torrent/radarr/config:/config -v /srv/videos/programs/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Sonarr (TV Downloads) ===<br />
sudo docker run -d --name=sonarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8989:8989 \<br />
-v /var/lib/torrent/sonarr/config:/config -v /srv/videos/programs/tv:/tv -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/sonarr<br />
<br />
=== Unifi ===<br />
sudo docker run -d --name=unifi-controller -e PUID=140 -e PGID=150 \<br />
-p 3478:3478/udp -p 10001:10001/udp -p 18080:18080 -p 18081:18081 -p 18443:18443 -p 18880:18880 -p 6789:6789 \<br />
-v /var/lib/unifi:/config \<br />
--restart unless-stopped \<br />
linuxserver/unifi-controller<br />
<br />
=== Home-Assistant (as part of host network) ===<br />
sudo docker run -d --name=home-assistant -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/home-assistant/config:/config -v /srv:/media -v /etc/ssl/bretts.org:/etc/ssl/bretts.org -v /var/www/html/arlo-snapshots:/arlo-snapshots \<br />
--restart unless-stopped \<br />
homeassistant/home-assistant<br />
<br />
=== Atlassian ===<br />
<br />
==== JIRA ====<br />
Note: In this instance JIRA is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name jira<br />
sudo docker run -d --name=jira -e TZ=Europe/London \<br />
-e ATL_TOMCAT_SCHEME=https -e ATL_TOMCAT_SECURE=true -e ATL_PROXY_NAME=jira.bretts.org -e ATL_PROXY_PORT=443 \<br />
-p 7980:8080 \<br />
-v jira:/var/atlassian/application-data/jira \<br />
--restart unless-stopped \<br />
atlassian/jira-software<br />
<br />
Docker JIRA runs with a uid and gid of 2001. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2001 jira-docker<br />
sudo adduser --system --no-create-home --uid 2001 --gid 2001 jira-docker<br />
<br />
==== Bitbucket====<br />
Note: In this instance Bitbucket is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bitbucket<br />
sudo docker run -d --name=bitbucket -e TZ=Europe/London \<br />
-e SERVER_SCHEME=https -e SERVER_SECURE=true -e SERVER_PROXY_NAME=bitbucket.bretts.org -e SERVER_PROXY_PORT=443 \<br />
-p 7990:7990 -p 7999:7999 \<br />
-v bitbucket:/var/atlassian/application-data/bitbucket \<br />
--restart unless-stopped \<br />
atlassian/bitbucket-server<br />
<br />
Docker Bitbucket runs with a uid and gid of 2003. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2003 bitbucket-docker<br />
sudo adduser --system --no-create-home --uid 2003 --gid 2003 bitbucket-docker<br />
<br />
==== Bamboo ====<br />
Note: In this instance Bamboo is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bamboo<br />
sudo docker run -d --name=bamboo -e TZ=Europe/London \<br />
-p 54663:54663 -p 7970:8085 \<br />
-v bamboo:/var/atlassian/application-data/bamboo \<br />
--restart unless-stopped \<br />
atlassian/bamboo-server<br />
<br />
===== Limitations =====<br />
* Bamboo runs with a uid of 1000, which means it's likely to clash with a real user in the containing host<br />
* Bamboo container doesn't support any reverse proxy configuration, which means hiding it behind nginx is likely to result in broken Application Links. This can be worked around by manually editing /opt/atlassian/bamboo/conf/server.xml, but those changes will be overwritten on every container upgrade.<br />
<br />
== Tips / Fixes ==<br />
<br />
=== Tautulli slow to start ===<br />
This may be due to an attempt to chown a large number of files. <br />
Login to the container:<br />
sudo docker exec -it <container> /bin/bash<br />
Disable the chown step by editing <code>/etc/cont-init.d/30-config</code> and commenting out the chown command.<br />
<br />
=== Adding an SSL cert for Unifi ===<br />
sudo openssl pkcs12 -export -inkey /etc/ssl/bretts.org/key.pem -in /etc/ssl/bretts.org/fullchain.pem -out /tmp/cert.p12 -name unifi -password pass:temppass<br />
sudo keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/unifi/data/keystore -srckeystore /tmp/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias unifi -noprompt<br />
sudo docker restart unifi-controller<br />
sudo rm /tmp/cert.p12<br />
<br />
=== Local DNS resolution fails on docker 18.09 ===<br />
This may be the result of a bug: https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1820278. Normally the container's /etc/resolv.conf should mirror that of the host, but in this case it seems to just be a default version. As a workaround, create /etc/docker/daemon.json with the following contents:<br />
<br />
{<br />
"dns": ["192.168.1.1", "8.8.8.8"],<br />
"dns-search": ["bretts.org"]<br />
}</div>Andrewhttps://wiki.bretts.org/index.php?title=Fan_Control&diff=8546Fan Control2024-02-11T16:23:47Z<p>Andrew: </p>
<hr />
<div>== Stopping fans ==<br />
Run `pwmconfig` to create a basic /etc/fancontrol and set the min temps to something >40. For example:<br />
<br />
<pre><br />
INTERVAL=10<br />
DEVPATH=hwmon0=devices/platform/nct6775.672<br />
DEVNAME=hwmon0=nct6798<br />
FCTEMPS=hwmon0/pwm6=hwmon0/temp1_input hwmon0/pwm7=hwmon0/temp1_input<br />
FCFANS=hwmon0/pwm6=hwmon0/fan6_input hwmon0/pwm7=hwmon0/fan7_input<br />
MINTEMP=hwmon0/pwm6=40 hwmon0/pwm7=40<br />
MAXTEMP=hwmon0/pwm6=60 hwmon0/pwm7=60<br />
MINSTART=hwmon0/pwm6=140 hwmon0/pwm7=140<br />
MINSTOP=hwmon0/pwm6=90 hwmon0/pwm7=90<br />
</pre><br />
<br />
Then just run `fancontrol` to stop the relevant fans (in this case pwm6 and pwm7).<br />
<br />
== Resetting automatic BIOS/OS fan control ==<br />
<br />
Set the relevant pwm files to "2" (for automatic fan control):<br />
for file in /sys/class/hwmon/hwmon0/device/hwmon/hwmon0/pwm?_enable; do echo 2 > $file; done<br />
<br />
Note: On ASRock this may differ - it seems to be "5" when set to customized fan curves in BIOS. <br />
<br />
Notes for nct6775-compatible chipsets (including nct6798): https://www.kernel.org/doc/Documentation/hwmon/nct6775.rst.</div>Andrewhttps://wiki.bretts.org/index.php?title=Fan_Control&diff=8545Fan Control2024-02-11T16:23:34Z<p>Andrew: /* Resetting automatic BIOS/OS fan control */</p>
<hr />
<div>== Stopping fans ==<br />
Run `pwmconfig` to create a basic /etc/fancontrol and set the min temps to something >40. For example:<br />
<br />
<pre><br />
INTERVAL=10<br />
DEVPATH=hwmon0=devices/platform/nct6775.672<br />
DEVNAME=hwmon0=nct6798<br />
FCTEMPS=hwmon0/pwm6=hwmon0/temp1_input hwmon0/pwm7=hwmon0/temp1_input<br />
FCFANS=hwmon0/pwm6=hwmon0/fan6_input hwmon0/pwm7=hwmon0/fan7_input<br />
MINTEMP=hwmon0/pwm6=40 hwmon0/pwm7=40<br />
MAXTEMP=hwmon0/pwm6=60 hwmon0/pwm7=60<br />
MINSTART=hwmon0/pwm6=140 hwmon0/pwm7=140<br />
MINSTOP=hwmon0/pwm6=90 hwmon0/pwm7=90<br />
</pre><br />
<br />
Then just run `fancontrol` to stop the relevant fans (in this case pwm6 and pwm7).<br />
<br />
== Resetting automatic BIOS/OS fan control ==<br />
<br />
Set the relevant pwm files to "2" (for automatic fan control):<br />
for file in /sys/class/hwmon/hwmon0/device/hwmon/hwmon0/pwm?_enable; do echo 2 > $file; done<br />
<br />
Note: On ASRock this may differ - it seems to be "5" when set to customized fan curves in BIOS. Notes for nct6775-compatible chipsets (including nct6798): https://www.kernel.org/doc/Documentation/hwmon/nct6775.rst.</div>Andrewhttps://wiki.bretts.org/index.php?title=Fan_Control&diff=8544Fan Control2024-02-11T16:14:45Z<p>Andrew: /* Resetting automatic BIOS/OS fan control */</p>
<hr />
<div>== Stopping fans ==<br />
Run `pwmconfig` to create a basic /etc/fancontrol and set the min temps to something >40. For example:<br />
<br />
<pre><br />
INTERVAL=10<br />
DEVPATH=hwmon0=devices/platform/nct6775.672<br />
DEVNAME=hwmon0=nct6798<br />
FCTEMPS=hwmon0/pwm6=hwmon0/temp1_input hwmon0/pwm7=hwmon0/temp1_input<br />
FCFANS=hwmon0/pwm6=hwmon0/fan6_input hwmon0/pwm7=hwmon0/fan7_input<br />
MINTEMP=hwmon0/pwm6=40 hwmon0/pwm7=40<br />
MAXTEMP=hwmon0/pwm6=60 hwmon0/pwm7=60<br />
MINSTART=hwmon0/pwm6=140 hwmon0/pwm7=140<br />
MINSTOP=hwmon0/pwm6=90 hwmon0/pwm7=90<br />
</pre><br />
<br />
Then just run `fancontrol` to stop the relevant fans (in this case pwm6 and pwm7).<br />
<br />
== Resetting automatic BIOS/OS fan control ==<br />
<br />
Set the relevant pwm files to "2" (for automatic fan control):<br />
for file in /sys/class/hwmon/hwmon0/device/hwmon/hwmon0/pwm?_enable; do echo 2 > $file; done<br />
<br />
Note: On ASRock this may differ - it seems to be "5" when set to customized fan curves in BIOS</div>Andrewhttps://wiki.bretts.org/index.php?title=Fan_Control&diff=8543Fan Control2024-02-09T12:11:08Z<p>Andrew: Created page with "== Stopping fans == Run `pwmconfig` to create a basic /etc/fancontrol and set the min temps to something >40. For example: <pre> INTERVAL=10 DEVPATH=hwmon0=devices/platform/n..."</p>
<hr />
<div>== Stopping fans ==<br />
Run `pwmconfig` to create a basic /etc/fancontrol and set the min temps to something >40. For example:<br />
<br />
<pre><br />
INTERVAL=10<br />
DEVPATH=hwmon0=devices/platform/nct6775.672<br />
DEVNAME=hwmon0=nct6798<br />
FCTEMPS=hwmon0/pwm6=hwmon0/temp1_input hwmon0/pwm7=hwmon0/temp1_input<br />
FCFANS=hwmon0/pwm6=hwmon0/fan6_input hwmon0/pwm7=hwmon0/fan7_input<br />
MINTEMP=hwmon0/pwm6=40 hwmon0/pwm7=40<br />
MAXTEMP=hwmon0/pwm6=60 hwmon0/pwm7=60<br />
MINSTART=hwmon0/pwm6=140 hwmon0/pwm7=140<br />
MINSTOP=hwmon0/pwm6=90 hwmon0/pwm7=90<br />
</pre><br />
<br />
Then just run `fancontrol` to stop the relevant fans (in this case pwm6 and pwm7).<br />
<br />
== Resetting automatic BIOS/OS fan control ==<br />
<br />
Set the relevant pwm files to "2" (for automatic fan control):<br />
for file in /sys/class/hwmon/hwmon0/device/hwmon/hwmon0/pwm?_enable; do echo 2 > $file; done</div>Andrewhttps://wiki.bretts.org/index.php?title=Linux_Tips&diff=8542Linux Tips2024-02-09T12:07:51Z<p>Andrew: /* Administration */</p>
<hr />
<div>Note, these tips are mainly aimed at Ubuntu/Kubuntu distributions.<br />
<br />
== Administration ==<br />
* [[apt-get/dpkg]]<br />
* [[Apache2]]<br />
* [[Atlassian]]<br />
* [[Backups]] (restic & b2)<br />
* [[bash]]<br />
* [[AIGLX & Compiz]]<br />
* [[Deluge]]<br />
* [[Docker]]<br />
* [[DVDs]]<br />
* [[Fan Control]]<br />
* [[Firefox]]<br />
* [[Git & Atlassian]]<br />
* [[Grub]]<br />
* [[HomeAssistant]]<br />
* [[Homebridge]]<br />
* [[Java]]<br />
* [[kPlaylist]]<br />
* [[KDE]]<br />
* [[Kerberos & LDAP]]<br />
* [[LIRC]]<br />
* [[LVM]]<br />
* [[Mail Server]] (Postfix, Procmail, Spamassassin etc.)<br />
* [[mdadm]] (RAID)<br />
* [[MediaWiki]]<br />
* [[Miscellaneous]] ([[Miscellaneous Archive|Archive]])<br />
* [[Munin]]<br />
* [[Mutt]]<br />
* [[MySQL]]<br />
* [[MythTV]]<br />
* [[nagios]]<br />
* [[netdata]]<br />
* [[Pine]]<br />
* [[PostgreSQL]]<br />
* [[Samba]]<br />
* [[SnapRAID / MergerFS]]<br />
* [[Snapshot backups using rsync]]<br />
* [[SNMP/MRTG]]<br />
* [[Services]]<br />
* [[SSL]]<br />
* [[Subsonic]]<br />
* [[Subversion]]<br />
* [[Tomcat 5]]<br />
* [[Tripwire]]<br />
* [[Truecrypt]]<br />
* [[Unifi Controller]]<br />
* [[User Management]]<br />
* [[XDMCP & VNC]]<br />
* [[Webmin]]<br />
* [[WPA]]<br />
<br />
== Other ==<br />
* [[Sony TR2MP on Kubuntu Dapper Drake]]</div>Andrewhttps://wiki.bretts.org/index.php?title=Docker&diff=8541Docker2024-01-02T16:43:37Z<p>Andrew: /* Portainer */</p>
<hr />
<div>== Useful Commands ==<br />
<br />
; docker ps -a: List all containers<br />
; docker container inspect <container>: Show details of <container><br />
; docker logs <container>: Show logs for <container><br />
; docker exec -it <container> /bin/bash: Start an interactive shell in <container><br />
<br />
== Updating container ==<br />
<br />
=== Manually ===<br />
sudo docker pull <image><br />
sudo docker stop <container><br />
sudo docker rm <container><br />
<docker run command><br />
<br />
=== Automatically ===<br />
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock taisun/updater --oneshot <container><br />
<br />
== Containers ==<br />
<br />
=== Portainer ===<br />
sudo docker run -d --name portainer \<br />
-p 8000:8000 -p 9443:9443 \<br />
-v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/portainer:/data \<br />
-v /etc/ssl/bretts.org:/certs \<br />
--restart unless-stopped \<br />
portainer/portainer-ce \<br />
--sslcert /certs/fullchain.pem --sslkey /certs/key.pem<br />
<br />
=== Plex ===<br />
<br />
Get your claim token: https://www.plex.tv/claim/<br />
<br />
Create the container with the claim token substituted:<br />
sudo docker run -d --name plex --network=host -e PLEX_UID=111 -e PLEX_GID=127 -e TZ=Europe/London -e PLEX_CLAIM=<CLAIM_TOKEN> \<br />
-v /var/lib/plexmediaserver:/config -v /srv:/srv --device=/dev/dri:/dev/dri \<br />
--restart unless-stopped \<br />
plexinc/pms-docker:plexpass<br />
<br />
=== Tautulli (Plex Monitoring/Notifications) ===<br />
sudo docker run -d --name tautulli -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8181:8181 \<br />
-v /var/lib/torrent/tautulli/config:/config -v /var/lib/plexmediaserver/Library/Logs:/logs \<br />
--restart unless-stopped \<br />
linuxserver/tautulli<br />
<br />
=== Jackett (Torrent Gateway) ===<br />
sudo docker run -d --name=jackett -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 9117:9117 \<br />
-v /var/lib/torrent/jackett/config:/config -v /var/lib/torrent/jackett/downloads:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/jackett<br />
<br />
=== Deluge ===<br />
sudo docker run -d --name deluge -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/torrent/deluged/config:/config -v /srv/incoming/torrents/deluge:/srv/incoming/torrents/deluge \<br />
-v /etc/ssl/bretts.org:/etc/ssl/bretts.org \<br />
--restart unless-stopped \<br />
linuxserver/deluge<br />
<br />
Since user groups don't seem to apply across the docker boundary, "torrent" will need to be given explicit permission to the private key file via an ACL:<br />
setfacl -m "u:torrent:rw" /etc/ssl/bretts.org/key.pem<br />
<br />
=== Radarr (Movie Downloads) ===<br />
sudo docker run -d --name radarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7878:7878 \<br />
-v /var/lib/torrent/radarr/config:/config -v /srv/videos/programs/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Sonarr (TV Downloads) ===<br />
sudo docker run -d --name=sonarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8989:8989 \<br />
-v /var/lib/torrent/sonarr/config:/config -v /srv/videos/programs/tv:/tv -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/sonarr<br />
<br />
=== Unifi ===<br />
sudo docker run -d --name=unifi-controller -e PUID=140 -e PGID=150 \<br />
-p 3478:3478/udp -p 10001:10001/udp -p 18080:18080 -p 18081:18081 -p 18443:18443 -p 18880:18880 -p 6789:6789 \<br />
-v /var/lib/unifi:/config \<br />
--restart unless-stopped \<br />
linuxserver/unifi-controller<br />
<br />
=== Home-Assistant (as part of host network) ===<br />
sudo docker run -d --name=home-assistant -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/home-assistant/config:/config -v /srv:/media -v /etc/ssl/bretts.org:/etc/ssl/bretts.org -v /var/www/html/arlo-snapshots:/arlo-snapshots \<br />
--restart unless-stopped \<br />
homeassistant/home-assistant<br />
<br />
=== Atlassian ===<br />
<br />
==== JIRA ====<br />
Note: In this instance JIRA is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name jira<br />
sudo docker run -d --name=jira -e TZ=Europe/London \<br />
-e ATL_TOMCAT_SCHEME=https -e ATL_TOMCAT_SECURE=true -e ATL_PROXY_NAME=jira.bretts.org -e ATL_PROXY_PORT=443 \<br />
-p 7980:8080 \<br />
-v jira:/var/atlassian/application-data/jira \<br />
--restart unless-stopped \<br />
atlassian/jira-software<br />
<br />
Docker JIRA runs with a uid and gid of 2001. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2001 jira-docker<br />
sudo adduser --system --no-create-home --uid 2001 --gid 2001 jira-docker<br />
<br />
==== Bitbucket====<br />
Note: In this instance Bitbucket is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bitbucket<br />
sudo docker run -d --name=bitbucket -e TZ=Europe/London \<br />
-e SERVER_SCHEME=https -e SERVER_SECURE=true -e SERVER_PROXY_NAME=bitbucket.bretts.org -e SERVER_PROXY_PORT=443 \<br />
-p 7990:7990 -p 7999:7999 \<br />
-v bitbucket:/var/atlassian/application-data/bitbucket \<br />
--restart unless-stopped \<br />
atlassian/bitbucket-server<br />
<br />
Docker Bitbucket runs with a uid and gid of 2003. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2003 bitbucket-docker<br />
sudo adduser --system --no-create-home --uid 2003 --gid 2003 bitbucket-docker<br />
<br />
==== Bamboo ====<br />
Note: In this instance Bamboo is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bamboo<br />
sudo docker run -d --name=bamboo -e TZ=Europe/London \<br />
-p 54663:54663 -p 7970:8085 \<br />
-v bamboo:/var/atlassian/application-data/bamboo \<br />
--restart unless-stopped \<br />
atlassian/bamboo-server<br />
<br />
===== Limitations =====<br />
* Bamboo runs with a uid of 1000, which means it's likely to clash with a real user in the containing host<br />
* Bamboo container doesn't support any reverse proxy configuration, which means hiding it behind nginx is likely to result in broken Application Links. This can be worked around by manually editing /opt/atlassian/bamboo/conf/server.xml, but those changes will be overwritten on every container upgrade.<br />
<br />
== Tips / Fixes ==<br />
<br />
=== Tautulli slow to start ===<br />
This may be due to an attempt to chown a large number of files. <br />
Login to the container:<br />
sudo docker exec -it <container> /bin/bash<br />
Disable the chown step by editing <code>/etc/cont-init.d/30-config</code> and commenting out the chown command.<br />
<br />
=== Adding an SSL cert for Unifi ===<br />
sudo openssl pkcs12 -export -inkey /etc/ssl/bretts.org/key.pem -in /etc/ssl/bretts.org/fullchain.pem -out /tmp/cert.p12 -name unifi -password pass:temppass<br />
sudo keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/unifi/data/keystore -srckeystore /tmp/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias unifi -noprompt<br />
sudo docker restart unifi-controller<br />
sudo rm /tmp/cert.p12<br />
<br />
=== Local DNS resolution fails on docker 18.09 ===<br />
This may be the result of a bug: https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1820278. Normally the container's /etc/resolv.conf should mirror that of the host, but in this case it seems to just be a default version. As a workaround, create /etc/docker/daemon.json with the following contents:<br />
<br />
{<br />
"dns": ["192.168.1.1", "8.8.8.8"],<br />
"dns-search": ["bretts.org"]<br />
}</div>Andrewhttps://wiki.bretts.org/index.php?title=Main_Page&diff=8540Main Page2024-01-02T16:22:43Z<p>Andrew: /* Other Contents */</p>
<hr />
<div>== briki Contents ==<br />
=== Technology ===<br />
==== Software ====<br />
* [[Linux Tips]]<br />
* [[Windows 10 Tips]]<br />
* [[Windows XP Tips]]<br />
* [[MacOS X Tips]]<br />
* [[Networking Tips]]<br />
<br />
==== Hardware ====<br />
* [[Machine List]]<br />
* [[Sony TR Tips]]<br />
* [[SPV M5000 Tips]]<br />
* [[Speedtouch 780 Tips]]<br />
<br />
=== Other ===<br />
* [[Bikes]]<br />
* [[Simpsons Quotes]]<br />
* [[Temporary Pages]]<br />
* [[Alex]]<br />
<br />
== Other Contents ==<br />
=== Home ===<br />
----<br />
* [https://home-assistant.bretts.org Home Assistant] (protected)<br />
* [https://arlo.netgear.com Security Cameras] (protected)<br />
<br />
=== Media ===<br />
----<br />
* [https://plex.bretts.org Plex] (protected)<br />
* [https://radarr.bretts.org Radarr] ([https://radarr-lowres.bretts.org low-res]) (private)<br />
* [https://sonarr.bretts.org Sonarr] (private)<br />
* [https://tautulli.bretts.org Tautulli (Plex status)] (private)<br />
* [http://maine.bretts.org:33400 Plex WebTools] (private)<br />
* [http://maine.bretts.org:9117 Jackett] (private)<br />
<br />
=== Admin ===<br />
----<br />
==== Network ====<br />
* [https://router.bretts.org:1443/ Router] (protected)<br />
* [https://unifi.bretts.org Unifi Controller] (protected)<br />
<br />
==== Monitoring ====<br />
* [http://netdata.bretts.org Device Monitoring (netdata)]<br />
* [https://ntopng.bretts.org Bandwidth Monitoring (ntopng)] (protected)<br />
* [http://maine.bretts.org/mon/ Process Monitoring (nagios4)] [https://maine.bretts.org/cgi-bin/nagios4/status.cgi?host=all Services](protected)<br />
* [http://maine.bretts.org/graph/bretts.org/ Network Monitoring (munin)]<br />
* [https://maine.bretts.org:9443 Container Monitoring (portainer)]<br />
* [http://maine.bretts.org:3001/ Grafana] (protected)<br />
<br />
==== Torrents ====<br />
* [https://deluge.bretts.org Deluge] (protected)<br />
<br />
==== Apache (protected) ====<br />
* [http://maine.bretts.org/doc/ Manual]<br />
* [http://maine.bretts.org/server-info Server Info]<br />
* [http://maine.bretts.org/server-status Server Status]<br />
* [http://maine.bretts.org/php-info/ PHP Info]<br />
<br />
==== Tomcat (protected) ====<br />
* [http://maine.bretts.org/tomcat/manager/html Manager]<br />
<br />
=== Development ===<br />
* [http://bitbucket.bretts.org/ BitBucket]<br />
* [http://jira.bretts.org/ JIRA]<br />
* [http://bamboo.bretts.org/ Bamboo]<br />
<br />
== About ==<br />
'''briki''' (''bretts.org wiki'') administered by Andrew Brett. Feel free to contribute to pre-existing pages.</div>Andrewhttps://wiki.bretts.org/index.php?title=Docker&diff=8539Docker2024-01-02T16:18:23Z<p>Andrew: /* Portainer */</p>
<hr />
<div>== Useful Commands ==<br />
<br />
; docker ps -a: List all containers<br />
; docker container inspect <container>: Show details of <container><br />
; docker logs <container>: Show logs for <container><br />
; docker exec -it <container> /bin/bash: Start an interactive shell in <container><br />
<br />
== Updating container ==<br />
<br />
=== Manually ===<br />
sudo docker pull <image><br />
sudo docker stop <container><br />
sudo docker rm <container><br />
<docker run command><br />
<br />
=== Automatically ===<br />
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock taisun/updater --oneshot <container><br />
<br />
== Containers ==<br />
<br />
=== Portainer ===<br />
sudo docker run -d --name portainer \<br />
-p 8000:8000 -p 9443:9443 \<br />
-v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/portainer:/data \<br />
--restart unless-stopped \<br />
portainer/portainer-ce<br />
<br />
=== Plex ===<br />
<br />
Get your claim token: https://www.plex.tv/claim/<br />
<br />
Create the container with the claim token substituted:<br />
sudo docker run -d --name plex --network=host -e PLEX_UID=111 -e PLEX_GID=127 -e TZ=Europe/London -e PLEX_CLAIM=<CLAIM_TOKEN> \<br />
-v /var/lib/plexmediaserver:/config -v /srv:/srv --device=/dev/dri:/dev/dri \<br />
--restart unless-stopped \<br />
plexinc/pms-docker:plexpass<br />
<br />
=== Tautulli (Plex Monitoring/Notifications) ===<br />
sudo docker run -d --name tautulli -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8181:8181 \<br />
-v /var/lib/torrent/tautulli/config:/config -v /var/lib/plexmediaserver/Library/Logs:/logs \<br />
--restart unless-stopped \<br />
linuxserver/tautulli<br />
<br />
=== Jackett (Torrent Gateway) ===<br />
sudo docker run -d --name=jackett -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 9117:9117 \<br />
-v /var/lib/torrent/jackett/config:/config -v /var/lib/torrent/jackett/downloads:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/jackett<br />
<br />
=== Deluge ===<br />
sudo docker run -d --name deluge -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/torrent/deluged/config:/config -v /srv/incoming/torrents/deluge:/srv/incoming/torrents/deluge \<br />
-v /etc/ssl/bretts.org:/etc/ssl/bretts.org \<br />
--restart unless-stopped \<br />
linuxserver/deluge<br />
<br />
Since user groups don't seem to apply across the docker boundary, "torrent" will need to be given explicit permission to the private key file via an ACL:<br />
setfacl -m "u:torrent:rw" /etc/ssl/bretts.org/key.pem<br />
<br />
=== Radarr (Movie Downloads) ===<br />
sudo docker run -d --name radarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7878:7878 \<br />
-v /var/lib/torrent/radarr/config:/config -v /srv/videos/programs/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Sonarr (TV Downloads) ===<br />
sudo docker run -d --name=sonarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8989:8989 \<br />
-v /var/lib/torrent/sonarr/config:/config -v /srv/videos/programs/tv:/tv -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/sonarr<br />
<br />
=== Unifi ===<br />
sudo docker run -d --name=unifi-controller -e PUID=140 -e PGID=150 \<br />
-p 3478:3478/udp -p 10001:10001/udp -p 18080:18080 -p 18081:18081 -p 18443:18443 -p 18880:18880 -p 6789:6789 \<br />
-v /var/lib/unifi:/config \<br />
--restart unless-stopped \<br />
linuxserver/unifi-controller<br />
<br />
=== Home-Assistant (as part of host network) ===<br />
sudo docker run -d --name=home-assistant -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/home-assistant/config:/config -v /srv:/media -v /etc/ssl/bretts.org:/etc/ssl/bretts.org -v /var/www/html/arlo-snapshots:/arlo-snapshots \<br />
--restart unless-stopped \<br />
homeassistant/home-assistant<br />
<br />
=== Atlassian ===<br />
<br />
==== JIRA ====<br />
Note: In this instance JIRA is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name jira<br />
sudo docker run -d --name=jira -e TZ=Europe/London \<br />
-e ATL_TOMCAT_SCHEME=https -e ATL_TOMCAT_SECURE=true -e ATL_PROXY_NAME=jira.bretts.org -e ATL_PROXY_PORT=443 \<br />
-p 7980:8080 \<br />
-v jira:/var/atlassian/application-data/jira \<br />
--restart unless-stopped \<br />
atlassian/jira-software<br />
<br />
Docker JIRA runs with a uid and gid of 2001. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2001 jira-docker<br />
sudo adduser --system --no-create-home --uid 2001 --gid 2001 jira-docker<br />
<br />
==== Bitbucket====<br />
Note: In this instance Bitbucket is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bitbucket<br />
sudo docker run -d --name=bitbucket -e TZ=Europe/London \<br />
-e SERVER_SCHEME=https -e SERVER_SECURE=true -e SERVER_PROXY_NAME=bitbucket.bretts.org -e SERVER_PROXY_PORT=443 \<br />
-p 7990:7990 -p 7999:7999 \<br />
-v bitbucket:/var/atlassian/application-data/bitbucket \<br />
--restart unless-stopped \<br />
atlassian/bitbucket-server<br />
<br />
Docker Bitbucket runs with a uid and gid of 2003. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2003 bitbucket-docker<br />
sudo adduser --system --no-create-home --uid 2003 --gid 2003 bitbucket-docker<br />
<br />
==== Bamboo ====<br />
Note: In this instance Bamboo is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bamboo<br />
sudo docker run -d --name=bamboo -e TZ=Europe/London \<br />
-p 54663:54663 -p 7970:8085 \<br />
-v bamboo:/var/atlassian/application-data/bamboo \<br />
--restart unless-stopped \<br />
atlassian/bamboo-server<br />
<br />
===== Limitations =====<br />
* Bamboo runs with a uid of 1000, which means it's likely to clash with a real user in the containing host<br />
* Bamboo container doesn't support any reverse proxy configuration, which means hiding it behind nginx is likely to result in broken Application Links. This can be worked around by manually editing /opt/atlassian/bamboo/conf/server.xml, but those changes will be overwritten on every container upgrade.<br />
<br />
== Tips / Fixes ==<br />
<br />
=== Tautulli slow to start ===<br />
This may be due to an attempt to chown a large number of files. <br />
Login to the container:<br />
sudo docker exec -it <container> /bin/bash<br />
Disable the chown step by editing <code>/etc/cont-init.d/30-config</code> and commenting out the chown command.<br />
<br />
=== Adding an SSL cert for Unifi ===<br />
sudo openssl pkcs12 -export -inkey /etc/ssl/bretts.org/key.pem -in /etc/ssl/bretts.org/fullchain.pem -out /tmp/cert.p12 -name unifi -password pass:temppass<br />
sudo keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/unifi/data/keystore -srckeystore /tmp/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias unifi -noprompt<br />
sudo docker restart unifi-controller<br />
sudo rm /tmp/cert.p12<br />
<br />
=== Local DNS resolution fails on docker 18.09 ===<br />
This may be the result of a bug: https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1820278. Normally the container's /etc/resolv.conf should mirror that of the host, but in this case it seems to just be a default version. As a workaround, create /etc/docker/daemon.json with the following contents:<br />
<br />
{<br />
"dns": ["192.168.1.1", "8.8.8.8"],<br />
"dns-search": ["bretts.org"]<br />
}</div>Andrewhttps://wiki.bretts.org/index.php?title=Docker&diff=8538Docker2024-01-02T16:08:14Z<p>Andrew: /* Containers */</p>
<hr />
<div>== Useful Commands ==<br />
<br />
; docker ps -a: List all containers<br />
; docker container inspect <container>: Show details of <container><br />
; docker logs <container>: Show logs for <container><br />
; docker exec -it <container> /bin/bash: Start an interactive shell in <container><br />
<br />
== Updating container ==<br />
<br />
=== Manually ===<br />
sudo docker pull <image><br />
sudo docker stop <container><br />
sudo docker rm <container><br />
<docker run command><br />
<br />
=== Automatically ===<br />
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock taisun/updater --oneshot <container><br />
<br />
== Containers ==<br />
<br />
=== Portainer ===<br />
sudo docker run -d --name portainer \<br />
-p 9443:9443 \<br />
-v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/portainer:/data \<br />
--restart unless-stopped \<br />
portainer/portainer-ce<br />
<br />
=== Plex ===<br />
<br />
Get your claim token: https://www.plex.tv/claim/<br />
<br />
Create the container with the claim token substituted:<br />
sudo docker run -d --name plex --network=host -e PLEX_UID=111 -e PLEX_GID=127 -e TZ=Europe/London -e PLEX_CLAIM=<CLAIM_TOKEN> \<br />
-v /var/lib/plexmediaserver:/config -v /srv:/srv --device=/dev/dri:/dev/dri \<br />
--restart unless-stopped \<br />
plexinc/pms-docker:plexpass<br />
<br />
=== Tautulli (Plex Monitoring/Notifications) ===<br />
sudo docker run -d --name tautulli -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8181:8181 \<br />
-v /var/lib/torrent/tautulli/config:/config -v /var/lib/plexmediaserver/Library/Logs:/logs \<br />
--restart unless-stopped \<br />
linuxserver/tautulli<br />
<br />
=== Jackett (Torrent Gateway) ===<br />
sudo docker run -d --name=jackett -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 9117:9117 \<br />
-v /var/lib/torrent/jackett/config:/config -v /var/lib/torrent/jackett/downloads:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/jackett<br />
<br />
=== Deluge ===<br />
sudo docker run -d --name deluge -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/torrent/deluged/config:/config -v /srv/incoming/torrents/deluge:/srv/incoming/torrents/deluge \<br />
-v /etc/ssl/bretts.org:/etc/ssl/bretts.org \<br />
--restart unless-stopped \<br />
linuxserver/deluge<br />
<br />
Since user groups don't seem to apply across the docker boundary, "torrent" will need to be given explicit permission to the private key file via an ACL:<br />
setfacl -m "u:torrent:rw" /etc/ssl/bretts.org/key.pem<br />
<br />
=== Radarr (Movie Downloads) ===<br />
sudo docker run -d --name radarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7878:7878 \<br />
-v /var/lib/torrent/radarr/config:/config -v /srv/videos/programs/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Sonarr (TV Downloads) ===<br />
sudo docker run -d --name=sonarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8989:8989 \<br />
-v /var/lib/torrent/sonarr/config:/config -v /srv/videos/programs/tv:/tv -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/sonarr<br />
<br />
=== Unifi ===<br />
sudo docker run -d --name=unifi-controller -e PUID=140 -e PGID=150 \<br />
-p 3478:3478/udp -p 10001:10001/udp -p 18080:18080 -p 18081:18081 -p 18443:18443 -p 18880:18880 -p 6789:6789 \<br />
-v /var/lib/unifi:/config \<br />
--restart unless-stopped \<br />
linuxserver/unifi-controller<br />
<br />
=== Home-Assistant (as part of host network) ===<br />
sudo docker run -d --name=home-assistant -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/home-assistant/config:/config -v /srv:/media -v /etc/ssl/bretts.org:/etc/ssl/bretts.org -v /var/www/html/arlo-snapshots:/arlo-snapshots \<br />
--restart unless-stopped \<br />
homeassistant/home-assistant<br />
<br />
=== Atlassian ===<br />
<br />
==== JIRA ====<br />
Note: In this instance JIRA is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name jira<br />
sudo docker run -d --name=jira -e TZ=Europe/London \<br />
-e ATL_TOMCAT_SCHEME=https -e ATL_TOMCAT_SECURE=true -e ATL_PROXY_NAME=jira.bretts.org -e ATL_PROXY_PORT=443 \<br />
-p 7980:8080 \<br />
-v jira:/var/atlassian/application-data/jira \<br />
--restart unless-stopped \<br />
atlassian/jira-software<br />
<br />
Docker JIRA runs with a uid and gid of 2001. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2001 jira-docker<br />
sudo adduser --system --no-create-home --uid 2001 --gid 2001 jira-docker<br />
<br />
==== Bitbucket====<br />
Note: In this instance Bitbucket is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bitbucket<br />
sudo docker run -d --name=bitbucket -e TZ=Europe/London \<br />
-e SERVER_SCHEME=https -e SERVER_SECURE=true -e SERVER_PROXY_NAME=bitbucket.bretts.org -e SERVER_PROXY_PORT=443 \<br />
-p 7990:7990 -p 7999:7999 \<br />
-v bitbucket:/var/atlassian/application-data/bitbucket \<br />
--restart unless-stopped \<br />
atlassian/bitbucket-server<br />
<br />
Docker Bitbucket runs with a uid and gid of 2003. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2003 bitbucket-docker<br />
sudo adduser --system --no-create-home --uid 2003 --gid 2003 bitbucket-docker<br />
<br />
==== Bamboo ====<br />
Note: In this instance Bamboo is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bamboo<br />
sudo docker run -d --name=bamboo -e TZ=Europe/London \<br />
-p 54663:54663 -p 7970:8085 \<br />
-v bamboo:/var/atlassian/application-data/bamboo \<br />
--restart unless-stopped \<br />
atlassian/bamboo-server<br />
<br />
===== Limitations =====<br />
* Bamboo runs with a uid of 1000, which means it's likely to clash with a real user in the containing host<br />
* Bamboo container doesn't support any reverse proxy configuration, which means hiding it behind nginx is likely to result in broken Application Links. This can be worked around by manually editing /opt/atlassian/bamboo/conf/server.xml, but those changes will be overwritten on every container upgrade.<br />
<br />
== Tips / Fixes ==<br />
<br />
=== Tautulli slow to start ===<br />
This may be due to an attempt to chown a large number of files. <br />
Login to the container:<br />
sudo docker exec -it <container> /bin/bash<br />
Disable the chown step by editing <code>/etc/cont-init.d/30-config</code> and commenting out the chown command.<br />
<br />
=== Adding an SSL cert for Unifi ===<br />
sudo openssl pkcs12 -export -inkey /etc/ssl/bretts.org/key.pem -in /etc/ssl/bretts.org/fullchain.pem -out /tmp/cert.p12 -name unifi -password pass:temppass<br />
sudo keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/unifi/data/keystore -srckeystore /tmp/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias unifi -noprompt<br />
sudo docker restart unifi-controller<br />
sudo rm /tmp/cert.p12<br />
<br />
=== Local DNS resolution fails on docker 18.09 ===<br />
This may be the result of a bug: https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1820278. Normally the container's /etc/resolv.conf should mirror that of the host, but in this case it seems to just be a default version. As a workaround, create /etc/docker/daemon.json with the following contents:<br />
<br />
{<br />
"dns": ["192.168.1.1", "8.8.8.8"],<br />
"dns-search": ["bretts.org"]<br />
}</div>Andrewhttps://wiki.bretts.org/index.php?title=Docker&diff=8537Docker2024-01-02T15:56:39Z<p>Andrew: /* Containers */</p>
<hr />
<div>== Useful Commands ==<br />
<br />
; docker ps -a: List all containers<br />
; docker container inspect <container>: Show details of <container><br />
; docker logs <container>: Show logs for <container><br />
; docker exec -it <container> /bin/bash: Start an interactive shell in <container><br />
<br />
== Updating container ==<br />
<br />
=== Manually ===<br />
sudo docker pull <image><br />
sudo docker stop <container><br />
sudo docker rm <container><br />
<docker run command><br />
<br />
=== Automatically ===<br />
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock taisun/updater --oneshot <container><br />
<br />
== Containers ==<br />
=== Plex ===<br />
<br />
Get your claim token: https://www.plex.tv/claim/<br />
<br />
Create the container with the claim token substituted:<br />
sudo docker run -d --name plex --network=host -e PLEX_UID=111 -e PLEX_GID=127 -e TZ=Europe/London -e PLEX_CLAIM=<CLAIM_TOKEN> \<br />
-v /var/lib/plexmediaserver:/config -v /srv:/srv --device=/dev/dri:/dev/dri \<br />
--restart unless-stopped \<br />
plexinc/pms-docker:plexpass<br />
<br />
=== Tautulli (Plex Monitoring/Notifications) ===<br />
sudo docker run -d --name tautulli -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8181:8181 \<br />
-v /var/lib/torrent/tautulli/config:/config -v /var/lib/plexmediaserver/Library/Logs:/logs \<br />
--restart unless-stopped \<br />
linuxserver/tautulli<br />
<br />
=== Jackett (Torrent Gateway) ===<br />
sudo docker run -d --name=jackett -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 9117:9117 \<br />
-v /var/lib/torrent/jackett/config:/config -v /var/lib/torrent/jackett/downloads:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/jackett<br />
<br />
=== Deluge ===<br />
sudo docker run -d --name deluge -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/torrent/deluged/config:/config -v /srv/incoming/torrents/deluge:/srv/incoming/torrents/deluge \<br />
-v /etc/ssl/bretts.org:/etc/ssl/bretts.org \<br />
--restart unless-stopped \<br />
linuxserver/deluge<br />
<br />
Since user groups don't seem to apply across the docker boundary, "torrent" will need to be given explicit permission to the private key file via an ACL:<br />
setfacl -m "u:torrent:rw" /etc/ssl/bretts.org/key.pem<br />
<br />
=== Radarr (Movie Downloads) ===<br />
sudo docker run -d --name radarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7878:7878 \<br />
-v /var/lib/torrent/radarr/config:/config -v /srv/videos/programs/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Sonarr (TV Downloads) ===<br />
sudo docker run -d --name=sonarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8989:8989 \<br />
-v /var/lib/torrent/sonarr/config:/config -v /srv/videos/programs/tv:/tv -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/sonarr<br />
<br />
=== Unifi ===<br />
sudo docker run -d --name=unifi-controller -e PUID=140 -e PGID=150 \<br />
-p 3478:3478/udp -p 10001:10001/udp -p 18080:18080 -p 18081:18081 -p 18443:18443 -p 18880:18880 -p 6789:6789 \<br />
-v /var/lib/unifi:/config \<br />
--restart unless-stopped \<br />
linuxserver/unifi-controller<br />
<br />
=== Home-Assistant (as part of host network) ===<br />
sudo docker run -d --name=home-assistant -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/home-assistant/config:/config -v /srv:/media -v /etc/ssl/bretts.org:/etc/ssl/bretts.org -v /var/www/html/arlo-snapshots:/arlo-snapshots \<br />
--restart unless-stopped \<br />
homeassistant/home-assistant<br />
<br />
=== Atlassian ===<br />
<br />
==== JIRA ====<br />
Note: In this instance JIRA is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name jira<br />
sudo docker run -d --name=jira -e TZ=Europe/London \<br />
-e ATL_TOMCAT_SCHEME=https -e ATL_TOMCAT_SECURE=true -e ATL_PROXY_NAME=jira.bretts.org -e ATL_PROXY_PORT=443 \<br />
-p 7980:8080 \<br />
-v jira:/var/atlassian/application-data/jira \<br />
--restart unless-stopped \<br />
atlassian/jira-software<br />
<br />
Docker JIRA runs with a uid and gid of 2001. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2001 jira-docker<br />
sudo adduser --system --no-create-home --uid 2001 --gid 2001 jira-docker<br />
<br />
==== Bitbucket====<br />
Note: In this instance Bitbucket is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bitbucket<br />
sudo docker run -d --name=bitbucket -e TZ=Europe/London \<br />
-e SERVER_SCHEME=https -e SERVER_SECURE=true -e SERVER_PROXY_NAME=bitbucket.bretts.org -e SERVER_PROXY_PORT=443 \<br />
-p 7990:7990 -p 7999:7999 \<br />
-v bitbucket:/var/atlassian/application-data/bitbucket \<br />
--restart unless-stopped \<br />
atlassian/bitbucket-server<br />
<br />
Docker Bitbucket runs with a uid and gid of 2003. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2003 bitbucket-docker<br />
sudo adduser --system --no-create-home --uid 2003 --gid 2003 bitbucket-docker<br />
<br />
==== Bamboo ====<br />
Note: In this instance Bamboo is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bamboo<br />
sudo docker run -d --name=bamboo -e TZ=Europe/London \<br />
-p 54663:54663 -p 7970:8085 \<br />
-v bamboo:/var/atlassian/application-data/bamboo \<br />
--restart unless-stopped \<br />
atlassian/bamboo-server<br />
<br />
===== Limitations =====<br />
* Bamboo runs with a uid of 1000, which means it's likely to clash with a real user in the containing host<br />
* Bamboo container doesn't support any reverse proxy configuration, which means hiding it behind nginx is likely to result in broken Application Links. This can be worked around by manually editing /opt/atlassian/bamboo/conf/server.xml, but those changes will be overwritten on every container upgrade.<br />
<br />
== Tips / Fixes ==<br />
<br />
=== Tautulli slow to start ===<br />
This may be due to an attempt to chown a large number of files. <br />
Login to the container:<br />
sudo docker exec -it <container> /bin/bash<br />
Disable the chown step by editing <code>/etc/cont-init.d/30-config</code> and commenting out the chown command.<br />
<br />
=== Adding an SSL cert for Unifi ===<br />
sudo openssl pkcs12 -export -inkey /etc/ssl/bretts.org/key.pem -in /etc/ssl/bretts.org/fullchain.pem -out /tmp/cert.p12 -name unifi -password pass:temppass<br />
sudo keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/unifi/data/keystore -srckeystore /tmp/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias unifi -noprompt<br />
sudo docker restart unifi-controller<br />
sudo rm /tmp/cert.p12<br />
<br />
=== Local DNS resolution fails on docker 18.09 ===<br />
This may be the result of a bug: https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1820278. Normally the container's /etc/resolv.conf should mirror that of the host, but in this case it seems to just be a default version. As a workaround, create /etc/docker/daemon.json with the following contents:<br />
<br />
{<br />
"dns": ["192.168.1.1", "8.8.8.8"],<br />
"dns-search": ["bretts.org"]<br />
}</div>Andrewhttps://wiki.bretts.org/index.php?title=Services&diff=8536Services2023-12-28T17:12:47Z<p>Andrew: Created page with "https://www.howtogeek.com/687970/how-to-run-a-linux-program-at-startup-with-systemd/"</p>
<hr />
<div>https://www.howtogeek.com/687970/how-to-run-a-linux-program-at-startup-with-systemd/</div>Andrewhttps://wiki.bretts.org/index.php?title=Linux_Tips&diff=8535Linux Tips2023-12-28T17:11:45Z<p>Andrew: /* Administration */</p>
<hr />
<div>Note, these tips are mainly aimed at Ubuntu/Kubuntu distributions.<br />
<br />
== Administration ==<br />
* [[apt-get/dpkg]]<br />
* [[Apache2]]<br />
* [[Atlassian]]<br />
* [[Backups]] (restic & b2)<br />
* [[bash]]<br />
* [[AIGLX & Compiz]]<br />
* [[Deluge]]<br />
* [[Docker]]<br />
* [[DVDs]]<br />
* [[Firefox]]<br />
* [[Git & Atlassian]]<br />
* [[Grub]]<br />
* [[HomeAssistant]]<br />
* [[Homebridge]]<br />
* [[Java]]<br />
* [[kPlaylist]]<br />
* [[KDE]]<br />
* [[Kerberos & LDAP]]<br />
* [[LIRC]]<br />
* [[LVM]]<br />
* [[Mail Server]] (Postfix, Procmail, Spamassassin etc.)<br />
* [[mdadm]] (RAID)<br />
* [[MediaWiki]]<br />
* [[Miscellaneous]] ([[Miscellaneous Archive|Archive]])<br />
* [[Munin]]<br />
* [[Mutt]]<br />
* [[MySQL]]<br />
* [[MythTV]]<br />
* [[nagios]]<br />
* [[netdata]]<br />
* [[Pine]]<br />
* [[PostgreSQL]]<br />
* [[Samba]]<br />
* [[SnapRAID / MergerFS]]<br />
* [[Snapshot backups using rsync]]<br />
* [[SNMP/MRTG]]<br />
* [[Services]]<br />
* [[SSL]]<br />
* [[Subsonic]]<br />
* [[Subversion]]<br />
* [[Tomcat 5]]<br />
* [[Tripwire]]<br />
* [[Truecrypt]]<br />
* [[Unifi Controller]]<br />
* [[User Management]]<br />
* [[XDMCP & VNC]]<br />
* [[Webmin]]<br />
* [[WPA]]<br />
<br />
== Other ==<br />
* [[Sony TR2MP on Kubuntu Dapper Drake]]</div>Andrewhttps://wiki.bretts.org/index.php?title=Machine_List&diff=8534Machine List2023-12-27T23:11:52Z<p>Andrew: /* virginia */</p>
<hr />
<div>== History ==<br />
<br />
=== Pre-networking ===<br />
* (1988-1993) 8086 4.2MHz, MS-DOS 3.3<br />
* (1993-1995) 386SX 16MHz, Windows 3.1 + MS-DOS 5.0<br />
* (1995-1996) 486DX 50MHz, Windows 3.1 + MS-DOS 5.0/Windows 95<br />
<br />
=== indiana ===<br />
* (1996 - 1999) Fujitsu-ICL Pentium 90, Windows 95 + Windows NT 4.0<br />
** 1996?: +Orchid Righteous 3D<br />
<br />
=== colorado === <br />
* (1999 - 2001) Homebuilt Celeron 300, Windows 98/Me/XP<br />
** Matrox Millennium G200?<br />
* (2001 - 2002) Homebuilt Pentium 3 800, Windows XP<br />
* (2002 - 2005) Homebuilt Pentium 3 800, Mandriva Linux 8.0/9.0/10.0<br />
* (2005 - 2008) Dell Dimension 4300 (Pentium 4 1.8), Kubuntu 6.04<br />
** 2005: +128MB Sparkle GeForce MX4000 AGP <br />
** 2005: +Hauppauge WinTV-NOVA-T-MCE <br />
** 2006: +Seagate Barracuda 7200.10 320GB ST3320620A<br />
** 2006: +NEC-4570 16x DVD±RW/RAM Black <br />
* (2008 - 2010) Dell Dimension 4300 (Pentium 4 1.8), Ubuntu 8.04<br />
** 2008: +Seagate Barracuda 7200.10 750GB SATA2 3.5" <br />
** 2008: +SATA & IDE PCI Controller Card<br />
<br />
=== texas ===<br />
* (2002 - 2005) Dell Dimension 4300 (Pentium 4 1.8), Windows XP<br />
** GeForce 2 MX400?<br />
<br />
=== vermont ===<br />
* (2004 - 2006) Sony Vaio TR5MP (Pentium M 1.0), Windows XP<br />
* (2006 - 2008) Sony Vaio TR5MP (Pentium M 1.0), Ubuntu 6.10/7.04/7.10/8.04<br />
<br />
=== alaska ===<br />
* (2005 - 2007) Homebuilt Athlon64 3500+, Windows XP + Ubuntu 7.04 -> 8.04<br />
** Cooler Master Wave Master TAC-T01-E1C Silver All Aluminum Alloy ATX Mid Tower Computer Case<br />
** MSI K8N Diamond<br />
** AMD Athlon 64 3500+<br />
** 512MB Corsair Value Select 400MHz DDR Memory Stick <br />
** 128MB Sparkle GeForce 6600GT PCI-E <br />
** 300Gb Maxtor DiamondMax 10 ATA/133 6L300S0<br />
** NEC ND-3520 Silver <br />
** 460W Akasa PaxPower Ultra Quiet <br />
** 2006: +320GB Seagate Barracuda 7200.10 SATA2 ST3320620AS<br />
** 2007: +Sapphire X1950PRO 512MB GDDR3 PCI-Express<br />
* (2008 - 2014) Homebuilt Core 2 Duo 3.0, Windows XP/7 + Ubuntu 8.04 -> 9.10<br />
** 2008: +Gigabyte GA P35C-DS3R, iP35 Express, S775, PCI-E(x16), DDR2/3 1066/1333/800, SATA II, SATA RAID, ATX<br />
** 2008: +Intel Core 2 Duo E8400 2 x 3.00Ghz 6Mb Cache 1333 FSB Dual Core<br />
** 2008: +Corsair XMS6400 4GB DDR2 (2x2GB) 800Mhz Non-ECC<br />
** 2009: +GeForce GTX 260 Core 216<br />
** 2012: +Samsung 830 256GB SSD<br />
* (2014 - ) Homebuilt Core i7-4770, Windows 7/10<br />
** 2014: +Asus Z87-Plus Motherboard (Socket 1150, 4x DDR3, ATX, 2x PCI Express 3.0/2.0, 6x SATA 6.0 Gb/s, USB 3.0)<br />
** 2014: +Intel Core i7 4770 Quad Core Retail CPU (Socket 1150, 3.40GHz, 8MB, Haswell)<br />
** 2014: +Corsair CML16GX3M2A1600C10 Vengeance Low Profile 16GB (2x8GB) DDR3 1600 Mhz CL10 XMP<br />
** 2014: +Sapphire R9 270X 2GB Vapor-X 1050MHz GDDR 5 PCI Express Graphics Card<br />
** 2015: +ASUS Z87-A Motherboard<br />
** 2015: +Cooler Master Hyper 103 92mm Fan<br />
** 2016: +MSI GeForce GTX 970 GAMING Twin Frozr V 4GB Graphics Card (Maxwell)<br />
** 2016: +Samsung 850 EVO 500 GB 2.5 inch Solid State Drive<br />
** 2021: +Corsair RM650x PSU<br />
<br />
=== hawaii === <br />
* (2007 - 2009) Nintendo Wii<br />
<br />
=== montana ===<br />
* (2007 - ) Apple Mac Mini (Mid 2007), Mac OS X Tiger -> Lion<br />
** Core 2 Duo T7200 @ 2.0GHz<br />
** 4GB DDR2-667 RAM<br />
** 120GB HDD<br />
** Intel GMA 950<br />
<br />
=== pennsylvania ===<br />
* (2008 - 2012) Sony PS3<br />
<br />
=== nevada ===<br />
* (2009 - 2011) Samsung NC20 (VIA Nano 1.6), Windows XP + Ubuntu 9.04 -> 9.10<br />
<br />
=== maine ===<br />
* (2010 - 2023) Homebuilt Core i5 750, Ubuntu 9.10/12.04/14.04/16.04/18.04/20.04/22.04<br />
** Cooler Master ATCS 840 RC-840-KKN1-GP Black Aluminum ATX Full Tower Computer Case<br />
*** Front Case Fan failed<br />
** Gigabyte GA-P55-UD3R LGA 1156 Intel P55 ATX Intel Motherboard<br />
** Intel Core i5-750 Lynnfield 2.66GHz LGA 1156 95W Quad-Core Processor Model BX80605I5750<br />
** OCZ Gold 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory Model OCZ3G1333LV4GK<br />
** MSI N8400GS-D256H GeForce 8400 GS 256MB 64-bit GDDR2 PCI Express 2.0 x16 HDCP Ready Video Card<br />
** Seagate Barracuda LP ST31500541AS 1.5TB 5900 RPM SATA 3.0Gb/s 3.5"<br />
** Nexus NX-5000 R3 530W ATX12V v2.2 80 PLUS BRONZE Certified Modular Active PFC Power Supply<br />
** 2011 onwards: +Various SATA HDDs<br />
** 2013: +Crucial Ballistix 16GB (2x8GB) 240-pin DIMM, DDR3 PC3-12800<br />
** 2019: +Timetec Hynix IC 16GB (2x8GB) DDR3 PC3-12800 1600 MHz Non ECC Unbuffered 1.35V/1.5V Dual Rank 240 Pin UDIMM<br />
** 2021: +Corsair RM650 PSU<br />
** 2021: +Cooler Master Hyper 212 CPU Fan<br />
* (2023 - ) Homebuilt Core i5 13500, Ubuntu 22.04 <br />
** 2023: +ASRock Z790 PRO RS/D4<br />
** 2023: +Intel Core i5-13500 Desktop Processor 14 cores (6 P-cores + 8 E-cores) <br />
** 2023: +Corsair CMK64GX4M2E3200C16 Vengeance LPX 64GB (2 x 32GB) DDR4 3200<br />
<br />
=== arizona ===<br />
* (2010 - ) Apple Macbook Air (Late 2010 13-inch), Mac OS X Snow Leopard -> macOS Sierra<br />
** Core 2 Duo SL9400 @ 1.86 GHz<br />
** 2GB DDR3-1066 RAM<br />
** 128GB SSD<br />
** Nvidia GeForce 320M<br />
<br />
=== dakota ===<br />
* (2012 - ) Apple Mac Mini (Mid 2011), Mac OS X Lion -> macOS Sierra<br />
** Core i5-2520M @ 2.5 GHz<br />
** 4GB DDR3-1333 RAM<br />
** 500GB SATA HDD<br />
** AMD Radeon HD 6630M<br />
<br />
=== router ===<br />
* (2016 - ) Homebuilt Celeron G1840, pfSense<br />
** IN Win EM050 Matx Black Case<br />
** MSI H97M-G43 Socket 1150 VGA DVI HDMI DisplayPort mATX Motherboard<br />
** Intel Celeron G1840 2.80GHz Socket 1150 2MB L3 Cache<br />
** Corsair 4GB DDR3 1333MHz Memory Module CL9(9-9-9-24) 1.5V Unbuffered Non-ECC<br />
** Corsair Force Series LS 60GB SATA 2.5inch SSD<br />
** 2021: +Corsair RM650 PSU<br />
<br />
=== oregon ===<br />
* (2016 - ) Apple MacBook Pro (Late 2016 13-inch Touch Bar), macOS Sierra -> Mojave<br />
** Core i5-6287U @ 3.1GHz<br />
** 16GB DDR3-2133 RAM<br />
** 256GB PCIe SSD<br />
** Intel Iris Graphics 550<br />
<br />
=== virginia ===<br />
* (2021 - ) Homebuilt Ryzen 5 5600X, Windows 10<br />
** Phanteks Evolv X Antracite Grey Case<br />
** Gigabyte AMD Ryzen X570 AORUS PRO<br />
** Ryzen 5 5600X @ 3.7Ghz<br />
** Corsair Vengeance LPX Black 32GB 3600MHz 2x16GB CAS 18-22-22-42 DDR4<br />
** NVIDIA RTX 3080 Founders Edition<br />
** Corsair Force MP600 1TB M.2 PCIe Gen 4 NVMe SSD<br />
** Corsair RM850 PSU</div>Andrewhttps://wiki.bretts.org/index.php?title=Machine_List&diff=8533Machine List2023-12-27T23:03:08Z<p>Andrew: /* maine */</p>
<hr />
<div>== History ==<br />
<br />
=== Pre-networking ===<br />
* (1988-1993) 8086 4.2MHz, MS-DOS 3.3<br />
* (1993-1995) 386SX 16MHz, Windows 3.1 + MS-DOS 5.0<br />
* (1995-1996) 486DX 50MHz, Windows 3.1 + MS-DOS 5.0/Windows 95<br />
<br />
=== indiana ===<br />
* (1996 - 1999) Fujitsu-ICL Pentium 90, Windows 95 + Windows NT 4.0<br />
** 1996?: +Orchid Righteous 3D<br />
<br />
=== colorado === <br />
* (1999 - 2001) Homebuilt Celeron 300, Windows 98/Me/XP<br />
** Matrox Millennium G200?<br />
* (2001 - 2002) Homebuilt Pentium 3 800, Windows XP<br />
* (2002 - 2005) Homebuilt Pentium 3 800, Mandriva Linux 8.0/9.0/10.0<br />
* (2005 - 2008) Dell Dimension 4300 (Pentium 4 1.8), Kubuntu 6.04<br />
** 2005: +128MB Sparkle GeForce MX4000 AGP <br />
** 2005: +Hauppauge WinTV-NOVA-T-MCE <br />
** 2006: +Seagate Barracuda 7200.10 320GB ST3320620A<br />
** 2006: +NEC-4570 16x DVD±RW/RAM Black <br />
* (2008 - 2010) Dell Dimension 4300 (Pentium 4 1.8), Ubuntu 8.04<br />
** 2008: +Seagate Barracuda 7200.10 750GB SATA2 3.5" <br />
** 2008: +SATA & IDE PCI Controller Card<br />
<br />
=== texas ===<br />
* (2002 - 2005) Dell Dimension 4300 (Pentium 4 1.8), Windows XP<br />
** GeForce 2 MX400?<br />
<br />
=== vermont ===<br />
* (2004 - 2006) Sony Vaio TR5MP (Pentium M 1.0), Windows XP<br />
* (2006 - 2008) Sony Vaio TR5MP (Pentium M 1.0), Ubuntu 6.10/7.04/7.10/8.04<br />
<br />
=== alaska ===<br />
* (2005 - 2007) Homebuilt Athlon64 3500+, Windows XP + Ubuntu 7.04 -> 8.04<br />
** Cooler Master Wave Master TAC-T01-E1C Silver All Aluminum Alloy ATX Mid Tower Computer Case<br />
** MSI K8N Diamond<br />
** AMD Athlon 64 3500+<br />
** 512MB Corsair Value Select 400MHz DDR Memory Stick <br />
** 128MB Sparkle GeForce 6600GT PCI-E <br />
** 300Gb Maxtor DiamondMax 10 ATA/133 6L300S0<br />
** NEC ND-3520 Silver <br />
** 460W Akasa PaxPower Ultra Quiet <br />
** 2006: +320GB Seagate Barracuda 7200.10 SATA2 ST3320620AS<br />
** 2007: +Sapphire X1950PRO 512MB GDDR3 PCI-Express<br />
* (2008 - 2014) Homebuilt Core 2 Duo 3.0, Windows XP/7 + Ubuntu 8.04 -> 9.10<br />
** 2008: +Gigabyte GA P35C-DS3R, iP35 Express, S775, PCI-E(x16), DDR2/3 1066/1333/800, SATA II, SATA RAID, ATX<br />
** 2008: +Intel Core 2 Duo E8400 2 x 3.00Ghz 6Mb Cache 1333 FSB Dual Core<br />
** 2008: +Corsair XMS6400 4GB DDR2 (2x2GB) 800Mhz Non-ECC<br />
** 2009: +GeForce GTX 260 Core 216<br />
** 2012: +Samsung 830 256GB SSD<br />
* (2014 - ) Homebuilt Core i7-4770, Windows 7/10<br />
** 2014: +Asus Z87-Plus Motherboard (Socket 1150, 4x DDR3, ATX, 2x PCI Express 3.0/2.0, 6x SATA 6.0 Gb/s, USB 3.0)<br />
** 2014: +Intel Core i7 4770 Quad Core Retail CPU (Socket 1150, 3.40GHz, 8MB, Haswell)<br />
** 2014: +Corsair CML16GX3M2A1600C10 Vengeance Low Profile 16GB (2x8GB) DDR3 1600 Mhz CL10 XMP<br />
** 2014: +Sapphire R9 270X 2GB Vapor-X 1050MHz GDDR 5 PCI Express Graphics Card<br />
** 2015: +ASUS Z87-A Motherboard<br />
** 2015: +Cooler Master Hyper 103 92mm Fan<br />
** 2016: +MSI GeForce GTX 970 GAMING Twin Frozr V 4GB Graphics Card (Maxwell)<br />
** 2016: +Samsung 850 EVO 500 GB 2.5 inch Solid State Drive<br />
** 2021: +Corsair RM650x PSU<br />
<br />
=== hawaii === <br />
* (2007 - 2009) Nintendo Wii<br />
<br />
=== montana ===<br />
* (2007 - ) Apple Mac Mini (Mid 2007), Mac OS X Tiger -> Lion<br />
** Core 2 Duo T7200 @ 2.0GHz<br />
** 4GB DDR2-667 RAM<br />
** 120GB HDD<br />
** Intel GMA 950<br />
<br />
=== pennsylvania ===<br />
* (2008 - 2012) Sony PS3<br />
<br />
=== nevada ===<br />
* (2009 - 2011) Samsung NC20 (VIA Nano 1.6), Windows XP + Ubuntu 9.04 -> 9.10<br />
<br />
=== maine ===<br />
* (2010 - 2023) Homebuilt Core i5 750, Ubuntu 9.10/12.04/14.04/16.04/18.04/20.04/22.04<br />
** Cooler Master ATCS 840 RC-840-KKN1-GP Black Aluminum ATX Full Tower Computer Case<br />
*** Front Case Fan failed<br />
** Gigabyte GA-P55-UD3R LGA 1156 Intel P55 ATX Intel Motherboard<br />
** Intel Core i5-750 Lynnfield 2.66GHz LGA 1156 95W Quad-Core Processor Model BX80605I5750<br />
** OCZ Gold 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory Model OCZ3G1333LV4GK<br />
** MSI N8400GS-D256H GeForce 8400 GS 256MB 64-bit GDDR2 PCI Express 2.0 x16 HDCP Ready Video Card<br />
** Seagate Barracuda LP ST31500541AS 1.5TB 5900 RPM SATA 3.0Gb/s 3.5"<br />
** Nexus NX-5000 R3 530W ATX12V v2.2 80 PLUS BRONZE Certified Modular Active PFC Power Supply<br />
** 2011 onwards: +Various SATA HDDs<br />
** 2013: +Crucial Ballistix 16GB (2x8GB) 240-pin DIMM, DDR3 PC3-12800<br />
** 2019: +Timetec Hynix IC 16GB (2x8GB) DDR3 PC3-12800 1600 MHz Non ECC Unbuffered 1.35V/1.5V Dual Rank 240 Pin UDIMM<br />
** 2021: +Corsair RM650 PSU<br />
** 2021: +Cooler Master Hyper 212 CPU Fan<br />
* (2023 - ) Homebuilt Core i5 13500, Ubuntu 22.04 <br />
** 2023: +ASRock Z790 PRO RS/D4<br />
** 2023: +Intel Core i5-13500 Desktop Processor 14 cores (6 P-cores + 8 E-cores) <br />
** 2023: +Corsair CMK64GX4M2E3200C16 Vengeance LPX 64GB (2 x 32GB) DDR4 3200<br />
<br />
=== arizona ===<br />
* (2010 - ) Apple Macbook Air (Late 2010 13-inch), Mac OS X Snow Leopard -> macOS Sierra<br />
** Core 2 Duo SL9400 @ 1.86 GHz<br />
** 2GB DDR3-1066 RAM<br />
** 128GB SSD<br />
** Nvidia GeForce 320M<br />
<br />
=== dakota ===<br />
* (2012 - ) Apple Mac Mini (Mid 2011), Mac OS X Lion -> macOS Sierra<br />
** Core i5-2520M @ 2.5 GHz<br />
** 4GB DDR3-1333 RAM<br />
** 500GB SATA HDD<br />
** AMD Radeon HD 6630M<br />
<br />
=== router ===<br />
* (2016 - ) Homebuilt Celeron G1840, pfSense<br />
** IN Win EM050 Matx Black Case<br />
** MSI H97M-G43 Socket 1150 VGA DVI HDMI DisplayPort mATX Motherboard<br />
** Intel Celeron G1840 2.80GHz Socket 1150 2MB L3 Cache<br />
** Corsair 4GB DDR3 1333MHz Memory Module CL9(9-9-9-24) 1.5V Unbuffered Non-ECC<br />
** Corsair Force Series LS 60GB SATA 2.5inch SSD<br />
** 2021: +Corsair RM650 PSU<br />
<br />
=== oregon ===<br />
* (2016 - ) Apple MacBook Pro (Late 2016 13-inch Touch Bar), macOS Sierra -> Mojave<br />
** Core i5-6287U @ 3.1GHz<br />
** 16GB DDR3-2133 RAM<br />
** 256GB PCIe SSD<br />
** Intel Iris Graphics 550<br />
<br />
=== virginia ===<br />
* (2021 - ) Homebuilt Ryzen 5 5600X, Windows 10<br />
** Phanteks Evolv X Antracite Grey Case<br />
** Gigabyte AMD Ryzen X570 AORUS PRO<br />
** Ryzen 5 5600X @ 3.7Ghz<br />
** Corsair Vengeance LPX Black 32GB 3600MHz 2x16GB CAS 18-22-22-42 DDR4<br />
** NVIDIA RTX 3080 Founders Edit<br />
** Corsair Force MP600 1TB M.2 PCIe Gen 4 NVMe SSD<br />
** Corsair RM850 PSU</div>Andrewhttps://wiki.bretts.org/index.php?title=Machine_List&diff=8532Machine List2023-12-27T23:02:34Z<p>Andrew: /* maine */</p>
<hr />
<div>== History ==<br />
<br />
=== Pre-networking ===<br />
* (1988-1993) 8086 4.2MHz, MS-DOS 3.3<br />
* (1993-1995) 386SX 16MHz, Windows 3.1 + MS-DOS 5.0<br />
* (1995-1996) 486DX 50MHz, Windows 3.1 + MS-DOS 5.0/Windows 95<br />
<br />
=== indiana ===<br />
* (1996 - 1999) Fujitsu-ICL Pentium 90, Windows 95 + Windows NT 4.0<br />
** 1996?: +Orchid Righteous 3D<br />
<br />
=== colorado === <br />
* (1999 - 2001) Homebuilt Celeron 300, Windows 98/Me/XP<br />
** Matrox Millennium G200?<br />
* (2001 - 2002) Homebuilt Pentium 3 800, Windows XP<br />
* (2002 - 2005) Homebuilt Pentium 3 800, Mandriva Linux 8.0/9.0/10.0<br />
* (2005 - 2008) Dell Dimension 4300 (Pentium 4 1.8), Kubuntu 6.04<br />
** 2005: +128MB Sparkle GeForce MX4000 AGP <br />
** 2005: +Hauppauge WinTV-NOVA-T-MCE <br />
** 2006: +Seagate Barracuda 7200.10 320GB ST3320620A<br />
** 2006: +NEC-4570 16x DVD±RW/RAM Black <br />
* (2008 - 2010) Dell Dimension 4300 (Pentium 4 1.8), Ubuntu 8.04<br />
** 2008: +Seagate Barracuda 7200.10 750GB SATA2 3.5" <br />
** 2008: +SATA & IDE PCI Controller Card<br />
<br />
=== texas ===<br />
* (2002 - 2005) Dell Dimension 4300 (Pentium 4 1.8), Windows XP<br />
** GeForce 2 MX400?<br />
<br />
=== vermont ===<br />
* (2004 - 2006) Sony Vaio TR5MP (Pentium M 1.0), Windows XP<br />
* (2006 - 2008) Sony Vaio TR5MP (Pentium M 1.0), Ubuntu 6.10/7.04/7.10/8.04<br />
<br />
=== alaska ===<br />
* (2005 - 2007) Homebuilt Athlon64 3500+, Windows XP + Ubuntu 7.04 -> 8.04<br />
** Cooler Master Wave Master TAC-T01-E1C Silver All Aluminum Alloy ATX Mid Tower Computer Case<br />
** MSI K8N Diamond<br />
** AMD Athlon 64 3500+<br />
** 512MB Corsair Value Select 400MHz DDR Memory Stick <br />
** 128MB Sparkle GeForce 6600GT PCI-E <br />
** 300Gb Maxtor DiamondMax 10 ATA/133 6L300S0<br />
** NEC ND-3520 Silver <br />
** 460W Akasa PaxPower Ultra Quiet <br />
** 2006: +320GB Seagate Barracuda 7200.10 SATA2 ST3320620AS<br />
** 2007: +Sapphire X1950PRO 512MB GDDR3 PCI-Express<br />
* (2008 - 2014) Homebuilt Core 2 Duo 3.0, Windows XP/7 + Ubuntu 8.04 -> 9.10<br />
** 2008: +Gigabyte GA P35C-DS3R, iP35 Express, S775, PCI-E(x16), DDR2/3 1066/1333/800, SATA II, SATA RAID, ATX<br />
** 2008: +Intel Core 2 Duo E8400 2 x 3.00Ghz 6Mb Cache 1333 FSB Dual Core<br />
** 2008: +Corsair XMS6400 4GB DDR2 (2x2GB) 800Mhz Non-ECC<br />
** 2009: +GeForce GTX 260 Core 216<br />
** 2012: +Samsung 830 256GB SSD<br />
* (2014 - ) Homebuilt Core i7-4770, Windows 7/10<br />
** 2014: +Asus Z87-Plus Motherboard (Socket 1150, 4x DDR3, ATX, 2x PCI Express 3.0/2.0, 6x SATA 6.0 Gb/s, USB 3.0)<br />
** 2014: +Intel Core i7 4770 Quad Core Retail CPU (Socket 1150, 3.40GHz, 8MB, Haswell)<br />
** 2014: +Corsair CML16GX3M2A1600C10 Vengeance Low Profile 16GB (2x8GB) DDR3 1600 Mhz CL10 XMP<br />
** 2014: +Sapphire R9 270X 2GB Vapor-X 1050MHz GDDR 5 PCI Express Graphics Card<br />
** 2015: +ASUS Z87-A Motherboard<br />
** 2015: +Cooler Master Hyper 103 92mm Fan<br />
** 2016: +MSI GeForce GTX 970 GAMING Twin Frozr V 4GB Graphics Card (Maxwell)<br />
** 2016: +Samsung 850 EVO 500 GB 2.5 inch Solid State Drive<br />
** 2021: +Corsair RM650x PSU<br />
<br />
=== hawaii === <br />
* (2007 - 2009) Nintendo Wii<br />
<br />
=== montana ===<br />
* (2007 - ) Apple Mac Mini (Mid 2007), Mac OS X Tiger -> Lion<br />
** Core 2 Duo T7200 @ 2.0GHz<br />
** 4GB DDR2-667 RAM<br />
** 120GB HDD<br />
** Intel GMA 950<br />
<br />
=== pennsylvania ===<br />
* (2008 - 2012) Sony PS3<br />
<br />
=== nevada ===<br />
* (2009 - 2011) Samsung NC20 (VIA Nano 1.6), Windows XP + Ubuntu 9.04 -> 9.10<br />
<br />
=== maine ===<br />
* (2010 - 2023) Homebuilt Core i5 750, Ubuntu 9.10/12.04/14.04/16.04/18.04/20.04/22.04<br />
** Cooler Master ATCS 840 RC-840-KKN1-GP Black Aluminum ATX Full Tower Computer Case<br />
*** Front Case Fan failed<br />
** Gigabyte GA-P55-UD3R LGA 1156 Intel P55 ATX Intel Motherboard<br />
** Intel Core i5-750 Lynnfield 2.66GHz LGA 1156 95W Quad-Core Processor Model BX80605I5750<br />
** OCZ Gold 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory Model OCZ3G1333LV4GK<br />
** MSI N8400GS-D256H GeForce 8400 GS 256MB 64-bit GDDR2 PCI Express 2.0 x16 HDCP Ready Video Card<br />
** Seagate Barracuda LP ST31500541AS 1.5TB 5900 RPM SATA 3.0Gb/s 3.5"<br />
** Nexus NX-5000 R3 530W ATX12V v2.2 80 PLUS BRONZE Certified Modular Active PFC Power Supply<br />
** 2011 onwards: +Various SATA HDDs<br />
** 2013: +Crucial Ballistix 16GB (2x8GB) 240-pin DIMM, DDR3 PC3-12800<br />
** 2019: +Timetec Hynix IC 16GB (2x8GB) DDR3 PC3-12800 1600 MHz Non ECC Unbuffered 1.35V/1.5V Dual Rank 240 Pin UDIMM<br />
** 2021: +Corsair RM650 PSU<br />
** 2021: +Cooler Master Hyper 212 CPU Fan<br />
* (2023 - ) Homebuilt Core i5 13500, Ubuntu 22.04 <br />
** 2023: +ASRock Z790 PRO RS/D4<br />
** 2023: +Intel Core i5-13500 Desktop Processor 14 cores (6 P-cores + 8 E-cores) <br />
** 2023: +CORSAIR CMK64GX4M2E3200C16 VENGEANCE LPX 64GB (2 x 32GB) DDR4 3200<br />
<br />
=== arizona ===<br />
* (2010 - ) Apple Macbook Air (Late 2010 13-inch), Mac OS X Snow Leopard -> macOS Sierra<br />
** Core 2 Duo SL9400 @ 1.86 GHz<br />
** 2GB DDR3-1066 RAM<br />
** 128GB SSD<br />
** Nvidia GeForce 320M<br />
<br />
=== dakota ===<br />
* (2012 - ) Apple Mac Mini (Mid 2011), Mac OS X Lion -> macOS Sierra<br />
** Core i5-2520M @ 2.5 GHz<br />
** 4GB DDR3-1333 RAM<br />
** 500GB SATA HDD<br />
** AMD Radeon HD 6630M<br />
<br />
=== router ===<br />
* (2016 - ) Homebuilt Celeron G1840, pfSense<br />
** IN Win EM050 Matx Black Case<br />
** MSI H97M-G43 Socket 1150 VGA DVI HDMI DisplayPort mATX Motherboard<br />
** Intel Celeron G1840 2.80GHz Socket 1150 2MB L3 Cache<br />
** Corsair 4GB DDR3 1333MHz Memory Module CL9(9-9-9-24) 1.5V Unbuffered Non-ECC<br />
** Corsair Force Series LS 60GB SATA 2.5inch SSD<br />
** 2021: +Corsair RM650 PSU<br />
<br />
=== oregon ===<br />
* (2016 - ) Apple MacBook Pro (Late 2016 13-inch Touch Bar), macOS Sierra -> Mojave<br />
** Core i5-6287U @ 3.1GHz<br />
** 16GB DDR3-2133 RAM<br />
** 256GB PCIe SSD<br />
** Intel Iris Graphics 550<br />
<br />
=== virginia ===<br />
* (2021 - ) Homebuilt Ryzen 5 5600X, Windows 10<br />
** Phanteks Evolv X Antracite Grey Case<br />
** Gigabyte AMD Ryzen X570 AORUS PRO<br />
** Ryzen 5 5600X @ 3.7Ghz<br />
** Corsair Vengeance LPX Black 32GB 3600MHz 2x16GB CAS 18-22-22-42 DDR4<br />
** NVIDIA RTX 3080 Founders Edit<br />
** Corsair Force MP600 1TB M.2 PCIe Gen 4 NVMe SSD<br />
** Corsair RM850 PSU</div>Andrewhttps://wiki.bretts.org/index.php?title=Machine_List&diff=8531Machine List2023-12-27T23:00:14Z<p>Andrew: /* virginia */</p>
<hr />
<div>== History ==<br />
<br />
=== Pre-networking ===<br />
* (1988-1993) 8086 4.2MHz, MS-DOS 3.3<br />
* (1993-1995) 386SX 16MHz, Windows 3.1 + MS-DOS 5.0<br />
* (1995-1996) 486DX 50MHz, Windows 3.1 + MS-DOS 5.0/Windows 95<br />
<br />
=== indiana ===<br />
* (1996 - 1999) Fujitsu-ICL Pentium 90, Windows 95 + Windows NT 4.0<br />
** 1996?: +Orchid Righteous 3D<br />
<br />
=== colorado === <br />
* (1999 - 2001) Homebuilt Celeron 300, Windows 98/Me/XP<br />
** Matrox Millennium G200?<br />
* (2001 - 2002) Homebuilt Pentium 3 800, Windows XP<br />
* (2002 - 2005) Homebuilt Pentium 3 800, Mandriva Linux 8.0/9.0/10.0<br />
* (2005 - 2008) Dell Dimension 4300 (Pentium 4 1.8), Kubuntu 6.04<br />
** 2005: +128MB Sparkle GeForce MX4000 AGP <br />
** 2005: +Hauppauge WinTV-NOVA-T-MCE <br />
** 2006: +Seagate Barracuda 7200.10 320GB ST3320620A<br />
** 2006: +NEC-4570 16x DVD±RW/RAM Black <br />
* (2008 - 2010) Dell Dimension 4300 (Pentium 4 1.8), Ubuntu 8.04<br />
** 2008: +Seagate Barracuda 7200.10 750GB SATA2 3.5" <br />
** 2008: +SATA & IDE PCI Controller Card<br />
<br />
=== texas ===<br />
* (2002 - 2005) Dell Dimension 4300 (Pentium 4 1.8), Windows XP<br />
** GeForce 2 MX400?<br />
<br />
=== vermont ===<br />
* (2004 - 2006) Sony Vaio TR5MP (Pentium M 1.0), Windows XP<br />
* (2006 - 2008) Sony Vaio TR5MP (Pentium M 1.0), Ubuntu 6.10/7.04/7.10/8.04<br />
<br />
=== alaska ===<br />
* (2005 - 2007) Homebuilt Athlon64 3500+, Windows XP + Ubuntu 7.04 -> 8.04<br />
** Cooler Master Wave Master TAC-T01-E1C Silver All Aluminum Alloy ATX Mid Tower Computer Case<br />
** MSI K8N Diamond<br />
** AMD Athlon 64 3500+<br />
** 512MB Corsair Value Select 400MHz DDR Memory Stick <br />
** 128MB Sparkle GeForce 6600GT PCI-E <br />
** 300Gb Maxtor DiamondMax 10 ATA/133 6L300S0<br />
** NEC ND-3520 Silver <br />
** 460W Akasa PaxPower Ultra Quiet <br />
** 2006: +320GB Seagate Barracuda 7200.10 SATA2 ST3320620AS<br />
** 2007: +Sapphire X1950PRO 512MB GDDR3 PCI-Express<br />
* (2008 - 2014) Homebuilt Core 2 Duo 3.0, Windows XP/7 + Ubuntu 8.04 -> 9.10<br />
** 2008: +Gigabyte GA P35C-DS3R, iP35 Express, S775, PCI-E(x16), DDR2/3 1066/1333/800, SATA II, SATA RAID, ATX<br />
** 2008: +Intel Core 2 Duo E8400 2 x 3.00Ghz 6Mb Cache 1333 FSB Dual Core<br />
** 2008: +Corsair XMS6400 4GB DDR2 (2x2GB) 800Mhz Non-ECC<br />
** 2009: +GeForce GTX 260 Core 216<br />
** 2012: +Samsung 830 256GB SSD<br />
* (2014 - ) Homebuilt Core i7-4770, Windows 7/10<br />
** 2014: +Asus Z87-Plus Motherboard (Socket 1150, 4x DDR3, ATX, 2x PCI Express 3.0/2.0, 6x SATA 6.0 Gb/s, USB 3.0)<br />
** 2014: +Intel Core i7 4770 Quad Core Retail CPU (Socket 1150, 3.40GHz, 8MB, Haswell)<br />
** 2014: +Corsair CML16GX3M2A1600C10 Vengeance Low Profile 16GB (2x8GB) DDR3 1600 Mhz CL10 XMP<br />
** 2014: +Sapphire R9 270X 2GB Vapor-X 1050MHz GDDR 5 PCI Express Graphics Card<br />
** 2015: +ASUS Z87-A Motherboard<br />
** 2015: +Cooler Master Hyper 103 92mm Fan<br />
** 2016: +MSI GeForce GTX 970 GAMING Twin Frozr V 4GB Graphics Card (Maxwell)<br />
** 2016: +Samsung 850 EVO 500 GB 2.5 inch Solid State Drive<br />
** 2021: +Corsair RM650x PSU<br />
<br />
=== hawaii === <br />
* (2007 - 2009) Nintendo Wii<br />
<br />
=== montana ===<br />
* (2007 - ) Apple Mac Mini (Mid 2007), Mac OS X Tiger -> Lion<br />
** Core 2 Duo T7200 @ 2.0GHz<br />
** 4GB DDR2-667 RAM<br />
** 120GB HDD<br />
** Intel GMA 950<br />
<br />
=== pennsylvania ===<br />
* (2008 - 2012) Sony PS3<br />
<br />
=== nevada ===<br />
* (2009 - 2011) Samsung NC20 (VIA Nano 1.6), Windows XP + Ubuntu 9.04 -> 9.10<br />
<br />
=== maine ===<br />
* (2010 - ) Homebuilt Core i5 750, Ubuntu 9.10/12.04/14.04/16.04/18.04<br />
** Cooler Master ATCS 840 RC-840-KKN1-GP Black Aluminum ATX Full Tower Computer Case<br />
*** Front Case Fan failed<br />
** Gigabyte GA-P55-UD3R LGA 1156 Intel P55 ATX Intel Motherboard<br />
** Intel Core i5-750 Lynnfield 2.66GHz LGA 1156 95W Quad-Core Processor Model BX80605I5750<br />
** OCZ Gold 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory Model OCZ3G1333LV4GK<br />
** MSI N8400GS-D256H GeForce 8400 GS 256MB 64-bit GDDR2 PCI Express 2.0 x16 HDCP Ready Video Card<br />
** Seagate Barracuda LP ST31500541AS 1.5TB 5900 RPM SATA 3.0Gb/s 3.5"<br />
** Nexus NX-5000 R3 530W ATX12V v2.2 80 PLUS BRONZE Certified Modular Active PFC Power Supply<br />
** 2011 onwards: +Various SATA HDDs<br />
** 2013: +Crucial Ballistix 16GB (2x8GB) 240-pin DIMM, DDR3 PC3-12800<br />
** 2019: +Timetec Hynix IC 16GB (2x8GB) DDR3 PC3-12800 1600 MHz Non ECC Unbuffered 1.35V/1.5V Dual Rank 240 Pin UDIMM<br />
** 2021: +Corsair RM650 PSU<br />
** 2021: +Cooler Master Hyper 212 CPU Fan<br />
<br />
=== arizona ===<br />
* (2010 - ) Apple Macbook Air (Late 2010 13-inch), Mac OS X Snow Leopard -> macOS Sierra<br />
** Core 2 Duo SL9400 @ 1.86 GHz<br />
** 2GB DDR3-1066 RAM<br />
** 128GB SSD<br />
** Nvidia GeForce 320M<br />
<br />
=== dakota ===<br />
* (2012 - ) Apple Mac Mini (Mid 2011), Mac OS X Lion -> macOS Sierra<br />
** Core i5-2520M @ 2.5 GHz<br />
** 4GB DDR3-1333 RAM<br />
** 500GB SATA HDD<br />
** AMD Radeon HD 6630M<br />
<br />
=== router ===<br />
* (2016 - ) Homebuilt Celeron G1840, pfSense<br />
** IN Win EM050 Matx Black Case<br />
** MSI H97M-G43 Socket 1150 VGA DVI HDMI DisplayPort mATX Motherboard<br />
** Intel Celeron G1840 2.80GHz Socket 1150 2MB L3 Cache<br />
** Corsair 4GB DDR3 1333MHz Memory Module CL9(9-9-9-24) 1.5V Unbuffered Non-ECC<br />
** Corsair Force Series LS 60GB SATA 2.5inch SSD<br />
** 2021: +Corsair RM650 PSU<br />
<br />
=== oregon ===<br />
* (2016 - ) Apple MacBook Pro (Late 2016 13-inch Touch Bar), macOS Sierra -> Mojave<br />
** Core i5-6287U @ 3.1GHz<br />
** 16GB DDR3-2133 RAM<br />
** 256GB PCIe SSD<br />
** Intel Iris Graphics 550<br />
<br />
=== virginia ===<br />
* (2021 - ) Homebuilt Ryzen 5 5600X, Windows 10<br />
** Phanteks Evolv X Antracite Grey Case<br />
** Gigabyte AMD Ryzen X570 AORUS PRO<br />
** Ryzen 5 5600X @ 3.7Ghz<br />
** Corsair Vengeance LPX Black 32GB 3600MHz 2x16GB CAS 18-22-22-42 DDR4<br />
** NVIDIA RTX 3080 Founders Edit<br />
** Corsair Force MP600 1TB M.2 PCIe Gen 4 NVMe SSD<br />
** Corsair RM850 PSU</div>Andrewhttps://wiki.bretts.org/index.php?title=Docker&diff=8530Docker2023-12-27T20:44:40Z<p>Andrew: /* Plex */</p>
<hr />
<div>== Useful Commands ==<br />
<br />
; docker ps -a: List all containers<br />
; docker container inspect <container>: Show details of <container><br />
; docker logs <container>: Show logs for <container><br />
; docker exec -it <container> /bin/bash: Start an interactive shell in <container><br />
<br />
== Updating container ==<br />
<br />
=== Manually ===<br />
sudo docker pull <image><br />
sudo docker stop <container><br />
sudo docker rm <container><br />
<docker run command><br />
<br />
=== Automatically ===<br />
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock taisun/updater --oneshot <container><br />
<br />
== Containers ==<br />
=== Plex ===<br />
<br />
Get your claim token: https://www.plex.tv/claim/<br />
<br />
Create the container with the claim token substituted:<br />
sudo docker run -d --name plex --network=host -e PLEX_UID=111 -e PLEX_GID=127 -e TZ=Europe/London -e PLEX_CLAIM=<CLAIM_TOKEN> \<br />
-v /var/lib/plexmediaserver:/config -v /srv:/srv --device=/dev/dri:/dev/dri \<br />
--restart unless-stopped \<br />
plexinc/pms-docker:plexpass<br />
<br />
=== Tautulli (Plex Monitoring/Notifications) ===<br />
sudo docker run -d --name tautulli -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8181:8181 \<br />
-v /var/lib/torrent/tautulli/config:/config -v /var/lib/plexmediaserver/Library/Logs:/logs \<br />
--restart unless-stopped \<br />
linuxserver/tautulli<br />
<br />
=== Jackett (Torrent Gateway) ===<br />
sudo docker run -d --name=jackett -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 9117:9117 \<br />
-v /var/lib/torrent/jackett/config:/config -v /var/lib/torrent/jackett/downloads:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/jackett<br />
<br />
=== Deluge ===<br />
sudo docker run -d --name deluge -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/torrent/deluged/config:/config -v /srv/incoming/torrents/deluge:/srv/incoming/torrents/deluge \<br />
-v /etc/ssl/bretts.org:/etc/ssl/bretts.org \<br />
--restart unless-stopped \<br />
linuxserver/deluge<br />
<br />
Since user groups don't seem to apply across the docker boundary, "torrent" will need to be given explicit permission to the private key file via an ACL:<br />
setfacl -m "u:torrent:rw" /etc/ssl/bretts.org/key.pem<br />
<br />
=== Radarr (Movie Downloads) ===<br />
sudo docker run -d --name radarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7878:7878 \<br />
-v /var/lib/torrent/radarr/config:/config -v /srv/videos/programs/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Radarr Lowres (Low Resolution (<=1080p) Movie Downloads) ===<br />
sudo docker run -d --name radarr-lowres -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7879:7878 \<br />
-v /var/lib/torrent/radarr-lowres/config:/config -v /srv/videos/lowres/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Sonarr (TV Downloads) ===<br />
sudo docker run -d --name=sonarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8989:8989 \<br />
-v /var/lib/torrent/sonarr/config:/config -v /srv/videos/programs/tv:/tv -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/sonarr<br />
<br />
=== Unifi ===<br />
sudo docker run -d --name=unifi-controller -e PUID=140 -e PGID=150 \<br />
-p 3478:3478/udp -p 10001:10001/udp -p 18080:18080 -p 18081:18081 -p 18443:18443 -p 18880:18880 -p 6789:6789 \<br />
-v /var/lib/unifi:/config \<br />
--restart unless-stopped \<br />
linuxserver/unifi-controller<br />
<br />
=== Home-Assistant (as part of host network) ===<br />
sudo docker run -d --name=home-assistant -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/home-assistant/config:/config -v /srv:/media -v /etc/ssl/bretts.org:/etc/ssl/bretts.org -v /var/www/html/arlo-snapshots:/arlo-snapshots \<br />
--restart unless-stopped \<br />
homeassistant/home-assistant<br />
<br />
=== Atlassian ===<br />
<br />
==== JIRA ====<br />
Note: In this instance JIRA is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name jira<br />
sudo docker run -d --name=jira -e TZ=Europe/London \<br />
-e ATL_TOMCAT_SCHEME=https -e ATL_TOMCAT_SECURE=true -e ATL_PROXY_NAME=jira.bretts.org -e ATL_PROXY_PORT=443 \<br />
-p 7980:8080 \<br />
-v jira:/var/atlassian/application-data/jira \<br />
--restart unless-stopped \<br />
atlassian/jira-software<br />
<br />
Docker JIRA runs with a uid and gid of 2001. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2001 jira-docker<br />
sudo adduser --system --no-create-home --uid 2001 --gid 2001 jira-docker<br />
<br />
==== Bitbucket====<br />
Note: In this instance Bitbucket is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bitbucket<br />
sudo docker run -d --name=bitbucket -e TZ=Europe/London \<br />
-e SERVER_SCHEME=https -e SERVER_SECURE=true -e SERVER_PROXY_NAME=bitbucket.bretts.org -e SERVER_PROXY_PORT=443 \<br />
-p 7990:7990 -p 7999:7999 \<br />
-v bitbucket:/var/atlassian/application-data/bitbucket \<br />
--restart unless-stopped \<br />
atlassian/bitbucket-server<br />
<br />
Docker Bitbucket runs with a uid and gid of 2003. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2003 bitbucket-docker<br />
sudo adduser --system --no-create-home --uid 2003 --gid 2003 bitbucket-docker<br />
<br />
==== Bamboo ====<br />
Note: In this instance Bamboo is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bamboo<br />
sudo docker run -d --name=bamboo -e TZ=Europe/London \<br />
-p 54663:54663 -p 7970:8085 \<br />
-v bamboo:/var/atlassian/application-data/bamboo \<br />
--restart unless-stopped \<br />
atlassian/bamboo-server<br />
<br />
===== Limitations =====<br />
* Bamboo runs with a uid of 1000, which means it's likely to clash with a real user in the containing host<br />
* Bamboo container doesn't support any reverse proxy configuration, which means hiding it behind nginx is likely to result in broken Application Links. This can be worked around by manually editing /opt/atlassian/bamboo/conf/server.xml, but those changes will be overwritten on every container upgrade.<br />
<br />
== Tips / Fixes ==<br />
<br />
=== Tautulli slow to start ===<br />
This may be due to an attempt to chown a large number of files. <br />
Login to the container:<br />
sudo docker exec -it <container> /bin/bash<br />
Disable the chown step by editing <code>/etc/cont-init.d/30-config</code> and commenting out the chown command.<br />
<br />
=== Adding an SSL cert for Unifi ===<br />
sudo openssl pkcs12 -export -inkey /etc/ssl/bretts.org/key.pem -in /etc/ssl/bretts.org/fullchain.pem -out /tmp/cert.p12 -name unifi -password pass:temppass<br />
sudo keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/unifi/data/keystore -srckeystore /tmp/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias unifi -noprompt<br />
sudo docker restart unifi-controller<br />
sudo rm /tmp/cert.p12<br />
<br />
=== Local DNS resolution fails on docker 18.09 ===<br />
This may be the result of a bug: https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1820278. Normally the container's /etc/resolv.conf should mirror that of the host, but in this case it seems to just be a default version. As a workaround, create /etc/docker/daemon.json with the following contents:<br />
<br />
{<br />
"dns": ["192.168.1.1", "8.8.8.8"],<br />
"dns-search": ["bretts.org"]<br />
}</div>Andrewhttps://wiki.bretts.org/index.php?title=Truecrypt&diff=8529Truecrypt2023-04-03T13:43:47Z<p>Andrew: Created page with "== Mounting a TC volume == truecrypt volume.tc /mnt/tmp"</p>
<hr />
<div>== Mounting a TC volume ==<br />
truecrypt volume.tc /mnt/tmp</div>Andrewhttps://wiki.bretts.org/index.php?title=Linux_Tips&diff=8528Linux Tips2023-04-03T13:40:09Z<p>Andrew: /* Administration */</p>
<hr />
<div>Note, these tips are mainly aimed at Ubuntu/Kubuntu distributions.<br />
<br />
== Administration ==<br />
* [[apt-get/dpkg]]<br />
* [[Apache2]]<br />
* [[Atlassian]]<br />
* [[Backups]] (restic & b2)<br />
* [[bash]]<br />
* [[AIGLX & Compiz]]<br />
* [[Deluge]]<br />
* [[Docker]]<br />
* [[DVDs]]<br />
* [[Firefox]]<br />
* [[Git & Atlassian]]<br />
* [[Grub]]<br />
* [[HomeAssistant]]<br />
* [[Homebridge]]<br />
* [[Java]]<br />
* [[kPlaylist]]<br />
* [[KDE]]<br />
* [[Kerberos & LDAP]]<br />
* [[LIRC]]<br />
* [[LVM]]<br />
* [[Mail Server]] (Postfix, Procmail, Spamassassin etc.)<br />
* [[mdadm]] (RAID)<br />
* [[MediaWiki]]<br />
* [[Miscellaneous]] ([[Miscellaneous Archive|Archive]])<br />
* [[Munin]]<br />
* [[Mutt]]<br />
* [[MySQL]]<br />
* [[MythTV]]<br />
* [[nagios]]<br />
* [[netdata]]<br />
* [[Pine]]<br />
* [[PostgreSQL]]<br />
* [[Samba]]<br />
* [[SnapRAID / MergerFS]]<br />
* [[Snapshot backups using rsync]]<br />
* [[SNMP/MRTG]]<br />
* [[SSL]]<br />
* [[Subsonic]]<br />
* [[Subversion]]<br />
* [[Tomcat 5]]<br />
* [[Tripwire]]<br />
* [[Truecrypt]]<br />
* [[Unifi Controller]]<br />
* [[User Management]]<br />
* [[XDMCP & VNC]]<br />
* [[Webmin]]<br />
* [[WPA]]<br />
<br />
== Other ==<br />
* [[Sony TR2MP on Kubuntu Dapper Drake]]</div>Andrewhttps://wiki.bretts.org/index.php?title=MySQL&diff=8525MySQL2023-03-13T10:53:11Z<p>Andrew: </p>
<hr />
<div>== Logging in and switching to a DB ==<br />
<pre><br />
sudo mysql<br />
show databases;<br />
use <db>;<br />
show tables;<br />
</pre><br />
<br />
== Assigning passwords to users ==<br />
Login to mysql as the relevant user and run:<br />
<pre><br />
SET PASSWORD = PASSWORD('biscuit');<br />
</pre><br />
<br />
== Creating new users ==<br />
Login to mysql as root, and run:<br />
<pre><br />
GRANT ALL ON database.* TO myuser@localhost IDENTIFIED BY 'password';<br />
</pre><br />
Or, to create a user with no password:<br />
<pre><br />
GRANT ALL ON database.* TO myuser@localhost;<br />
</pre><br />
To allow login for a user from a remote host (2 lines are needed because, without the first, the user privileges default to those of the anonymous local user):<br />
<pre><br />
GRANT ALL ON database.* TO myuser@localhost IDENTIFIED BY 'password';<br />
GRANT ALL ON database.* TO myuser@'%' IDENTIFIED BY 'password';<br />
</pre><br />
Obviously, different privileges can be assigned to databases and tables. To revoke privileges, the syntax is:<br />
<pre><br />
REVOKE ALL ON database.* FROM myuser@localhost;<br />
</pre><br />
<br />
== Show privileges ==<br />
<pre><br />
SHOW GRANTS FOR 'user'@'host';<br />
</pre><br />
<br />
== Recover all corrupt tables ==<br />
<pre><br />
sudo find /var/lib/mysql -name *.MYI -exec myisamchk -r {} \;<br />
</pre><br />
<br />
== Copying a database between hosts ==<br />
* On the source:<br />
<pre><br />
mysqldump <db_name> -u root -p > file.sql<br />
</pre><br />
* On the target:<br />
<pre><br />
mysqladmin create <db_name> -u root -p<br />
cat file.sql | mysql <db_name> -u root -p<br />
</pre><br />
<br />
== Investigating problems ==<br />
* '''mytop --prompt''' will show long-running/large queries<br />
* Turn logging on in ''/etc/mysql/my.cnf'' to trace all queries (though this will slow the server down)<br />
<br />
== Getting rid of /var/log/mysql.* ==<br />
These files never get written to, but apparmor creates them anyway. Comment out the appropriate lines in ''/etc/apparmor.d/usr.sbin.mysqld''. ''Does this work, or can we simply delete them and they'll disappear forever?''</div>Andrewhttps://wiki.bretts.org/index.php?title=Mdadm&diff=8524Mdadm2022-12-16T13:17:45Z<p>Andrew: /* Cancel a hanging md check */</p>
<hr />
<div>== Overview ==<br />
Several physical disks (/dev/sdX) or partitions (/dev/sdX1) of equal size are joined into a single array.<br />
<br />
== Creating a RAID array ==<br />
<br />
* (Recommended) Create a partition on each disk. Note:<br />
** Use optimal alignment, with "-a optimal" (this doesn't appear to have any obvious effect on behaviour though!)<br />
** Use the "GPT" partition table format (to handle disks > 2TB)<br />
** Name the partition "primary" (note that this is free text)<br />
** Use 0% for partition start (this will normally mean that the partition start will be at the 1MB boundary, which gives optimal alignment)<br />
** End 100MB before the end of the disk (this is to allow for slight variances in exact size of similar disks)<br />
** Set partition type to raid (0xFD00); this is optional, but may encourage some tools to avoid writing directly to the disk (and avoid corrupting the array)<br />
<pre><br />
parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% -100MB set 1 raid on<br />
</pre><br />
<br />
<br />
* Create a RAID 5 array over 3 partitions:<br />
** Note, the default metadata version is now 1.2 for create commands<br />
<pre><br />
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdX1 /dev/sdY1 /dev/sdZ1<br />
</pre><br />
<br />
* Wait (potentially several days) for the array to be built<br />
<br />
* Once built, save the current raid setup to /etc, to allow for automounting on startup:<br />
<pre><br />
diff -u <(cat /etc/mdadm/mdadm.conf) <(/usr/share/mdadm/mkconf)<br />
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak<br />
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf<br />
</pre><br />
<br />
* Update the initial boot image for all current kernel versions to include the new mdadm.conf:<br />
<pre><br />
update-initramfs -u<br />
</pre><br />
<br />
* Start the array:<br />
<pre><br />
mdadm --assemble /dev/md0 /dev/sdX1 /dev/sdY1 /dev/sdZ1<br />
</pre><br />
<br />
* From this point, just treat the array (/dev/md0) as a normal physical disk.<br />
<br />
== Convert RAID 1 array to RAID 5 ==<br />
<br />
* Create partition on the new disk as for creating a new array<br />
<br />
* Add the new partition to the array:<br />
<pre><br />
mdadm --add /dev/md0 /dev/sdX1<br />
</pre><br />
<br />
* Convert the array to RAID 5, with the correct number of devices:<br />
<pre><br />
mdadm --grow --level=5 --raid-devices=3<br />
</pre><br />
<br />
* Wait (potentially several days) for the array to be reshaped<br />
<br />
* Grow the partition / volume on /dev/md0<br />
<br />
== Readding a disk marked faulty ==<br />
If a disk in the array has been marked faulty for a spurious reason, then to readd it and rebuild the array, you'll first need to remove it. Run:<br />
<pre><br />
mdadm /dev/md0 --remove /dev/sdX1<br />
mdadm /dev/md0 --add /dev/sdX1<br />
</pre><br />
<br />
== Fixing a disk with Current_Pending_Sector count > 0 ==<br />
If a disk in the array has a Current_Pending_Sector count > 0, this suggests one or more blocks on the disk couldn't be read. To force the disk to be recovered from the rest of the array, it needs to be rewritten which will force the pending sector to be reallocated. This entails removing the disk from the array, zeroing the superblock (to ensure it can't just be recovered from the bitmap) and then re-adding it.<br />
<pre><br />
mdadm /dev/md0 --fail /dev/sdX1<br />
mdadm /dev/md0 --remove /dev/sdX1<br />
mdadm --zero-superblock /dev/sdX1<br />
mdadm /dev/md0 --add /dev/sdX1<br />
</pre><br />
<br />
== Recovering from disk failure ==<br />
<br />
* Check the disk status in mdadm:<br />
<pre><br />
mdadm --detail /dev/md0<br />
</pre><br />
<br />
* If the disk is already marked as failed, then skip this step. Otherwise:<br />
<pre><br />
mdadm /dev/md0 --fail /dev/sdX1<br />
</pre><br />
<br />
* From this point, the array will continue to operate in "degraded" mode<br />
<br />
* Remove the failed disk:<br />
<pre><br />
mdadm /dev/md0 --remove /dev/sdX1<br />
</pre><br />
<br />
* To more easily determine the disk for physical removal from the machine (once powered off), note down the serial number as reported by:<br />
<pre><br />
hdparm -i /dev/sdX | grep SerialNo<br />
</pre><br />
<br />
* Add a replacement disk:<br />
<pre><br />
mdadm /dev/md0 --add /dev/sdY1<br />
</pre><br />
<br />
* Wait (potentially several days) for the array to be resynced<br />
<br />
== Recover from a dirty reboot of a degraded array ==<br />
If the server shuts down uncleanly (eg. due to a power cut) when the array is degraded, it will refuse to automatically assemble the array on startup (with a dmesg error of the form "cannot start dirty degraded array"). This is because the data may be in an inconsistent state. In this situation:<br />
<br />
* Check that the good disks have the same number of events. If the numbers differ slightly, that suggests some of the data being written when the server shutdown wasn't written fully, and is probably corrupt (hopefully this will just mean a logfile with some bad characters, or similar).<br />
<pre><br />
mdadm --examine /dev/sdX /dev/sdY /dev/sdZ | grep Events<br />
</pre><br />
<br />
* Assuming the number of events is the same (or very similar), forcibly assemble the array.<br />
<pre><br />
mdadm --assemble --force /dev/md0 /dev/sdX1 /dev/sdY1 /dev/sdZ1<br />
</pre><br />
<br />
== Repairing failing disk on degraded array ==<br />
<br />
If the raid5 array is in a good state, then simply removing and readding the faulty drive should be sufficient. However, if the array is already degraded (ie. there’s no redundancy), or the disk problems became apparent when rebuilding the array from a spare drive, any bad sectors on the failing drive will need to be overwritten with new data (probably just zeros) before the disk is good enough to be able to rebuild the array.<br />
<br />
<ol><br />
<br />
<li>Getting information on failing/failed sectors:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
smartctl –l xerror /dev/sdX<br />
</pre><br />
<br />
<li>Analyze/recover data from failing disk:<br />
<br />
<ol style="list-style-type: lower-alpha;"><br />
<br />
<li>Ideally, copy all good data to a recovery file sdX.bin, and record details of good/bad sectors in sdX.map (this needs sufficient free space for sdX.bin). This needs to be run when no partitions on the array are mounted:<br />
<pre><br />
ddrescue –ask –verbose –binary-prefixes –idirect /dev/sdX sdX.bin sdX.map<br />
</pre><br />
<br />
<li>If insufficient free space, merely analyze the disk to scan for all failing sectors (force is needed to allow writing to /dev/null). This can be run when partitions are mounted, since we don’t actually care about the data we’re reading, we just care about the bad sectors:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map<br />
</pre><br />
Note that the sdX.map file is human readable, and will generally be quite small. It keeps track of which sectors are good and bad, and can be reused for subsequent ddrescue runs to avoid re-reading good sectors.<br />
<br />
</ol><br />
<br />
<li>Recheck the number of failing sectors, since some may not have been read yet when smartctl was last run:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
<li> Forcibly re-assemble the array (after checking the number of events mismatching between array members):<br />
<br />
<ol style="list-style-type: lower-alpha;"><br />
<br />
<li>If the number of events is wildly different, then it’s possible there will be corrupted data on the array, but in general if the array was marked as failed then no file writes will have been successful, so event discrepancies might not be reflective of a real problem:<br />
<pre><br />
mdadm –examine /dev/sd[XYZ] | grep Events<br />
</pre><br />
<br />
<li>Reassemble the array, if it’s in a good state (note the disk order isn’t important – mdadm will work out the correct order):<br />
<pre><br />
mdadm –assemble –verbose –run /dev/md0 /dev/sd[XYZ]<br />
</pre><br />
<br />
<li>If reassembly was unsuccessful due to mismatched event numbers, then forcibly reassemble it (be very careful here, disk order doesn’t matter but do make sure the correct disk labels are used – check output of the previous assemble to make sure it looks reasonable):<br />
<pre><br />
mdadm –assemble –verbose –run –force /dev/md0 /dev/sd[XYZ]<br />
</pre><br />
<br />
<li>Remount any affected partitions, or restart the machine to remount all on startup<br />
<br />
</ol><br />
<br />
<li>For mdadm raid5 + lvm arrays, there’s no easy way to determine which files inhabit which bad sectors. Instead, we need to read all files by hand to determine which are unreadable. For each partition which includes space on the bad drive (xdev ensures no other mount points are included):<br />
<pre><br />
find /mountpoint –type f –xdev –exec echo {} \; -exec md5sum {} \; 2>&1 | tee mountpoint-files.log<br />
</pre><br />
Note: Ensure mountpoint-files.log is written somewhere outside of the array<br />
<br />
<br />
<li>It’s possible that reading a bad file with md5sum above will again mark the array as failed. If so, reassemble the array using the steps above. Then look in the mountpoint-files.log file for the first failed md5sum (probably logged with “Input/output error”).<br />
<br />
<ol style="list-style-type: lower-alpha;"><br />
<br />
<li>Write random data over the bad file, which should force the pending sector to be marked bad and reallocated from spare space on the disk:<br />
<pre><br />
shred –v /path/to/bad/file<br />
</pre><br />
<br />
<li>Check that the number of failing sectors has decreased:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
<li>Assuming the number of pending sectors has decreased, it’s then ok to delete the bad file:<br />
<pre><br />
rm /path/to/bad/file<br />
</pre><br />
<br />
</ol><br />
<br />
<li>Repeat md5sum scanning and file deletion until all mountpoints using the disk are free of bad files<br />
<br />
<li>Rescan the bad sectors to see which have been fixed by deleting files, reusing the previous known state of the drive. Note that we need “-r 1” otherwise the bad sectors will be treated as known bad from the previous state, and won’t be tried at all (after backing up the original map file):<br />
<pre><br />
cp sdX.map sdX.initial.map<br />
ddrescue –ask –verbose –binary-prefixes –idirect –force –r 1 /dev/sdX /dev/null sdX.map<br />
</pre><br />
<br />
<li>If any bad sectors remain, then they must be in free space on the drive.<br />
<br />
<ol style="list-style-type: lower-alpha;"><br />
<br />
<li>List out all the bad block addresses, based on the ddrescue state file (after backing up the map file):<br />
<pre><br />
ddrescuelog –list-blocks=- sdX.map<br />
</pre><br />
<br />
<li>For each of the bad blocks, check with dd that we’ve got the right block IDs. For each one of these reads we expect to see an error (and “0+0 records in”):<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/sdX of=/dev/null count=1 bs=512 skip=$block<br />
done<br />
</pre><br />
<br />
<li>For each of the bad blocks, write zeros over the block to force it to be reallocated from spare space on the drive. Be careful here – getting it wrong will destroy data! Also note that when reading, “skip” is used to position the input stream, but here “seek” is used to position the output stream:<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/zero of=/dev/sdX count=1 bs=512 seek=$block<br />
done<br />
</pre><br />
<br />
<li>It’s possible that dd will fail to write to the block, in which case try again with hdparm:<br />
<br />
<ol style="list-style-type: lower-roman;"><br />
<br />
<li>First check that we’ve got the right sectors (we expect to see “SG_IO: bad/missing sense data” for each sector on stderr, so we pipe stdout to /dev/null to avoid noise):<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
hdparm –read-sector $block /dev/sdX > /dev/null<br />
done<br />
</pre><br />
<br />
<li>Assuming we’ve seen the expected errors, write zeros over each of the bad sectors. Be careful here – getting it wrong will destroy data! You may be asked to add a “—yes-i-know-what-i-am-doing” flag.<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
hdparm –write-sector $block /dev/sdX<br />
done<br />
</pre><br />
<br />
</ol><br />
<br />
</ol><br />
<br />
<li>Check ddrescue is showing all data as readable (after backing up the map file again):<br />
<pre><br />
cp sdX.map sdX.postshred.map<br />
ddrescue –ask –verbose –binary-prefixes –idirect –force –r 1 /dev/sdX /dev/null sdX.map<br />
</pre><br />
<br />
<li>Check smartctl is showing no pending sectors:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
<li>Readd spare drive and start rebuilding array redundancy:<br />
<pre><br />
mdadm –add /dev/md0 /dev/sdW<br />
</pre><br />
<br />
</ol><br />
<br />
== Reducing the number of disks in a RAID 5 array (including LVM) ==<br />
To reduce the number of disks in an array (so that one can be removed safely):<br />
<br />
* Firstly, ensure we've got a saved copy of the current PV mappings and 'mdadm --detail' somewhere (not on the machine). This will be useful if we need to recover from something having gone wrong<br />
<br />
* If we want to choose which disk is going to be removed (rather than mdadm deciding for us), we need to remove that drive from the RAID 5 array before we start (which will put it into 'degraded' mode):<br />
mdadm /dev/md0 --fail /dev/sdX1<br />
mdadm /dev/md0 --remove /dev/sdX1<br />
<br />
* Unmount the LVM logical volume we're going to take the space from:<br />
umount /dev/VG/LV<br />
<br />
* Shrink the LVM logical volume (and the ext4 filesystem that's on it) from which we're going to be reclaiming the space. Make sure that the reduction in size is larger than the size of the disk we're going to remove (in this case, a 3TB drive). Units are 1024-based, so we know that 3T will be enough (since 3TiB > 3TB):<br />
lvresize --verbose --resizefs -L -3T /dev/VG/LV<br />
This step will take a LONG time (about 15 hours for me).<br />
<br />
* Check the PV mappings - it's likely that the free space we've created won't be at the end of the physical volume:<br />
pvdisplay -m /dev/md0<br />
<br />
* Assuming it's not at the end, we need to move the LV segments around to ensure all free space is at the end of the volume (you may need to do this more than once). Choose a segment that's after the free space, and move its extents into a similarly sized space at the beginning of the free area:<br />
pvmove --alloc anywhere /dev/md0:3538424-3576823 /dev/md0:2751992-2790391<br />
This step also takes several hours.<br />
<br />
* Check that we have some free PEs, and calculate the new size of the physical volume with `PE Size * (Alloc PE + 1)` from:<br />
vgdisplay VG<br />
I'm not sure why we need the `+ 1`, but without it the next step warns us that we shrinking by one too many extents (`cannot resize to 2790391 extents as 2790392 are allocated`).<br />
<br />
* Shrink the physical volume. Use KB here so that we can compare with the size of the mdadm array in the next step. pvresize will warn you that the requested size is less than the real size; check that the requested size matches the Alloc Size from vgdisplay.<br />
pvresize --setphysicalvolumesize 11429449728K /dev/md0<br />
<br />
* Check that we now have 0 Free PEs:<br />
vgdisplay VG<br />
<br />
* Check what the new size of the array will be, and ensure that it's larger than the size of the PV as set by `pvresize`. Here `5` is the new number of disks in the array. Also, ensure the backup file is not stored on the array itself:<br />
mdadm --grow /dev/md0 -n5 --backup-file /var/log/raid.backup<br />
This step will fail to resize, but will give us the new array size we need from the next step.<br />
<br />
* Transiently resize the array (which will change the size reported to the OS until the next reboot). Use the size reported by the previous step, but ensure it's larger than the physical volume size we set with `pvresize`:<br />
mdadm --grow /dev/md0 --array-size 11720540160<br />
<br />
* At this point, you may want to run e2fsck on all the volumes on in the group to make sure we haven't accidentally truncated any by shrinking the reported size of the array.<br />
<br />
* Assuming all is well, go ahead and rerun the command to reshape the array with new number of disks:<br />
mdadm --grow /dev/md0 -n5 --backup-file /var/log/raid.backup<br />
This will take a REALLY LONG time (several days for me). Whilst this is running though, we can run the next few steps (all except adding the spare drive back to the array).<br />
<br />
* First, we can grow the PV to get back the space we used as a buffer to make sure we shrank the PV more than we shrank the array:<br />
pvresize /dev/md0<br />
<br />
* Now grow one of the LVs to use the free space we got back from the step above (it doesn't need to be the same one we took space from at the beginning, and can be done while the LV is mounted). Note the free extents from vgdisplay and use that as the input for lvextend:<br />
vgdisplay VG<br />
lvextend -l +71067 /dev/VG/LV<br />
resize2fs /dev/VG/LV<br />
<br />
* Check that we've no more free extents:<br />
vgdisplay VG<br />
<br />
* Remount any unmounted partitions, and restart any services that were shutdown to allow for the unmounting<br />
<br />
* Once the reshape has finished, the array will probably still be in degraded more with one spare drive, so add that back into the array:<br />
mdadm /dev/md0 --add /dev/sdY1<br />
This step will probably also take a day or more.<br />
<br />
== Reducing recovery time after unclean shutdown ==<br />
The default (at least when I setup mdadm) consistency policy is `resync`, which means a full resync is needed if the machine shuts down uncleanly (eg. due to power loss). To see the current consistency policy:<br />
mdadm --detail /dev/md0 | grep Consistency<br />
<br />
If it's set to `resync` then recoveries will be slow; `bitmap` means recoveries will be fast. To set a different consistency policy (eg. an internal bitmap) with:<br />
mdadm --grow --bitmap=internal /dev/md0<br />
<br />
Changing the consistency policy only takes a few seconds.<br />
<br />
== Cancel a hanging md check ==<br />
<br />
Sometimes the monthly consistency check will hang. This can be seen with output like (note the finish and speed):<br />
<br />
md0 : active raid5 sdh1[5] sdi1[4] sdg1[1] sdd1[0]<br />
8790107136 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]<br />
[===================>.] check = 99.9% (2930035712/2930035712) finish=0.0min speed=0K/sec<br />
bitmap: 14/22 pages [56KB], 65536KB chunk<br />
<br />
This can also lead to very high load warnings (>20).<br />
<br />
Cancelling in the regular way will just hang too, so first we need to set the state to `active` rather than `write-pending` before cancelling:<br />
<br />
# cat /sys/block/md0/md/array_state<br />
write-pending<br />
# echo active > /sys/block/md0/md/array_state<br />
# cat /sys/devices/virtual/block/md0/md/sync_action<br />
check<br />
# echo idle > /sys/devices/virtual/block/md0/md/sync_action<br />
<br />
== Useful Commands ==<br />
<br />
; cat /proc/mdstat : Display a summary of current raid status<br />
; mdadm --detail /dev/md0 : Display raid information on array md0<br />
; mdadm --examine /dev/sdf : Display raid information on device/partition sdf</div>Andrewhttps://wiki.bretts.org/index.php?title=Mdadm&diff=8523Mdadm2022-12-16T13:17:14Z<p>Andrew: /* Cancel a hanging md check */</p>
<hr />
<div>== Overview ==<br />
Several physical disks (/dev/sdX) or partitions (/dev/sdX1) of equal size are joined into a single array.<br />
<br />
== Creating a RAID array ==<br />
<br />
* (Recommended) Create a partition on each disk. Note:<br />
** Use optimal alignment, with "-a optimal" (this doesn't appear to have any obvious effect on behaviour though!)<br />
** Use the "GPT" partition table format (to handle disks > 2TB)<br />
** Name the partition "primary" (note that this is free text)<br />
** Use 0% for partition start (this will normally mean that the partition start will be at the 1MB boundary, which gives optimal alignment)<br />
** End 100MB before the end of the disk (this is to allow for slight variances in exact size of similar disks)<br />
** Set partition type to raid (0xFD00); this is optional, but may encourage some tools to avoid writing directly to the disk (and avoid corrupting the array)<br />
<pre><br />
parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% -100MB set 1 raid on<br />
</pre><br />
<br />
<br />
* Create a RAID 5 array over 3 partitions:<br />
** Note, the default metadata version is now 1.2 for create commands<br />
<pre><br />
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdX1 /dev/sdY1 /dev/sdZ1<br />
</pre><br />
<br />
* Wait (potentially several days) for the array to be built<br />
<br />
* Once built, save the current raid setup to /etc, to allow for automounting on startup:<br />
<pre><br />
diff -u <(cat /etc/mdadm/mdadm.conf) <(/usr/share/mdadm/mkconf)<br />
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak<br />
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf<br />
</pre><br />
<br />
* Update the initial boot image for all current kernel versions to include the new mdadm.conf:<br />
<pre><br />
update-initramfs -u<br />
</pre><br />
<br />
* Start the array:<br />
<pre><br />
mdadm --assemble /dev/md0 /dev/sdX1 /dev/sdY1 /dev/sdZ1<br />
</pre><br />
<br />
* From this point, just treat the array (/dev/md0) as a normal physical disk.<br />
<br />
== Convert RAID 1 array to RAID 5 ==<br />
<br />
* Create partition on the new disk as for creating a new array<br />
<br />
* Add the new partition to the array:<br />
<pre><br />
mdadm --add /dev/md0 /dev/sdX1<br />
</pre><br />
<br />
* Convert the array to RAID 5, with the correct number of devices:<br />
<pre><br />
mdadm --grow --level=5 --raid-devices=3<br />
</pre><br />
<br />
* Wait (potentially several days) for the array to be reshaped<br />
<br />
* Grow the partition / volume on /dev/md0<br />
<br />
== Readding a disk marked faulty ==<br />
If a disk in the array has been marked faulty for a spurious reason, then to readd it and rebuild the array, you'll first need to remove it. Run:<br />
<pre><br />
mdadm /dev/md0 --remove /dev/sdX1<br />
mdadm /dev/md0 --add /dev/sdX1<br />
</pre><br />
<br />
== Fixing a disk with Current_Pending_Sector count > 0 ==<br />
If a disk in the array has a Current_Pending_Sector count > 0, this suggests one or more blocks on the disk couldn't be read. To force the disk to be recovered from the rest of the array, it needs to be rewritten which will force the pending sector to be reallocated. This entails removing the disk from the array, zeroing the superblock (to ensure it can't just be recovered from the bitmap) and then re-adding it.<br />
<pre><br />
mdadm /dev/md0 --fail /dev/sdX1<br />
mdadm /dev/md0 --remove /dev/sdX1<br />
mdadm --zero-superblock /dev/sdX1<br />
mdadm /dev/md0 --add /dev/sdX1<br />
</pre><br />
<br />
== Recovering from disk failure ==<br />
<br />
* Check the disk status in mdadm:<br />
<pre><br />
mdadm --detail /dev/md0<br />
</pre><br />
<br />
* If the disk is already marked as failed, then skip this step. Otherwise:<br />
<pre><br />
mdadm /dev/md0 --fail /dev/sdX1<br />
</pre><br />
<br />
* From this point, the array will continue to operate in "degraded" mode<br />
<br />
* Remove the failed disk:<br />
<pre><br />
mdadm /dev/md0 --remove /dev/sdX1<br />
</pre><br />
<br />
* To more easily determine the disk for physical removal from the machine (once powered off), note down the serial number as reported by:<br />
<pre><br />
hdparm -i /dev/sdX | grep SerialNo<br />
</pre><br />
<br />
* Add a replacement disk:<br />
<pre><br />
mdadm /dev/md0 --add /dev/sdY1<br />
</pre><br />
<br />
* Wait (potentially several days) for the array to be resynced<br />
<br />
== Recover from a dirty reboot of a degraded array ==<br />
If the server shuts down uncleanly (eg. due to a power cut) when the array is degraded, it will refuse to automatically assemble the array on startup (with a dmesg error of the form "cannot start dirty degraded array"). This is because the data may be in an inconsistent state. In this situation:<br />
<br />
* Check that the good disks have the same number of events. If the numbers differ slightly, that suggests some of the data being written when the server shutdown wasn't written fully, and is probably corrupt (hopefully this will just mean a logfile with some bad characters, or similar).<br />
<pre><br />
mdadm --examine /dev/sdX /dev/sdY /dev/sdZ | grep Events<br />
</pre><br />
<br />
* Assuming the number of events is the same (or very similar), forcibly assemble the array.<br />
<pre><br />
mdadm --assemble --force /dev/md0 /dev/sdX1 /dev/sdY1 /dev/sdZ1<br />
</pre><br />
<br />
== Repairing failing disk on degraded array ==<br />
<br />
If the raid5 array is in a good state, then simply removing and readding the faulty drive should be sufficient. However, if the array is already degraded (ie. there’s no redundancy), or the disk problems became apparent when rebuilding the array from a spare drive, any bad sectors on the failing drive will need to be overwritten with new data (probably just zeros) before the disk is good enough to be able to rebuild the array.<br />
<br />
<ol><br />
<br />
<li>Getting information on failing/failed sectors:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
smartctl –l xerror /dev/sdX<br />
</pre><br />
<br />
<li>Analyze/recover data from failing disk:<br />
<br />
<ol style="list-style-type: lower-alpha;"><br />
<br />
<li>Ideally, copy all good data to a recovery file sdX.bin, and record details of good/bad sectors in sdX.map (this needs sufficient free space for sdX.bin). This needs to be run when no partitions on the array are mounted:<br />
<pre><br />
ddrescue –ask –verbose –binary-prefixes –idirect /dev/sdX sdX.bin sdX.map<br />
</pre><br />
<br />
<li>If insufficient free space, merely analyze the disk to scan for all failing sectors (force is needed to allow writing to /dev/null). This can be run when partitions are mounted, since we don’t actually care about the data we’re reading, we just care about the bad sectors:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map<br />
</pre><br />
Note that the sdX.map file is human readable, and will generally be quite small. It keeps track of which sectors are good and bad, and can be reused for subsequent ddrescue runs to avoid re-reading good sectors.<br />
<br />
</ol><br />
<br />
<li>Recheck the number of failing sectors, since some may not have been read yet when smartctl was last run:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
<li> Forcibly re-assemble the array (after checking the number of events mismatching between array members):<br />
<br />
<ol style="list-style-type: lower-alpha;"><br />
<br />
<li>If the number of events is wildly different, then it’s possible there will be corrupted data on the array, but in general if the array was marked as failed then no file writes will have been successful, so event discrepancies might not be reflective of a real problem:<br />
<pre><br />
mdadm –examine /dev/sd[XYZ] | grep Events<br />
</pre><br />
<br />
<li>Reassemble the array, if it’s in a good state (note the disk order isn’t important – mdadm will work out the correct order):<br />
<pre><br />
mdadm –assemble –verbose –run /dev/md0 /dev/sd[XYZ]<br />
</pre><br />
<br />
<li>If reassembly was unsuccessful due to mismatched event numbers, then forcibly reassemble it (be very careful here, disk order doesn’t matter but do make sure the correct disk labels are used – check output of the previous assemble to make sure it looks reasonable):<br />
<pre><br />
mdadm –assemble –verbose –run –force /dev/md0 /dev/sd[XYZ]<br />
</pre><br />
<br />
<li>Remount any affected partitions, or restart the machine to remount all on startup<br />
<br />
</ol><br />
<br />
<li>For mdadm raid5 + lvm arrays, there’s no easy way to determine which files inhabit which bad sectors. Instead, we need to read all files by hand to determine which are unreadable. For each partition which includes space on the bad drive (xdev ensures no other mount points are included):<br />
<pre><br />
find /mountpoint –type f –xdev –exec echo {} \; -exec md5sum {} \; 2>&1 | tee mountpoint-files.log<br />
</pre><br />
Note: Ensure mountpoint-files.log is written somewhere outside of the array<br />
<br />
<br />
<li>It’s possible that reading a bad file with md5sum above will again mark the array as failed. If so, reassemble the array using the steps above. Then look in the mountpoint-files.log file for the first failed md5sum (probably logged with “Input/output error”).<br />
<br />
<ol style="list-style-type: lower-alpha;"><br />
<br />
<li>Write random data over the bad file, which should force the pending sector to be marked bad and reallocated from spare space on the disk:<br />
<pre><br />
shred –v /path/to/bad/file<br />
</pre><br />
<br />
<li>Check that the number of failing sectors has decreased:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
<li>Assuming the number of pending sectors has decreased, it’s then ok to delete the bad file:<br />
<pre><br />
rm /path/to/bad/file<br />
</pre><br />
<br />
</ol><br />
<br />
<li>Repeat md5sum scanning and file deletion until all mountpoints using the disk are free of bad files<br />
<br />
<li>Rescan the bad sectors to see which have been fixed by deleting files, reusing the previous known state of the drive. Note that we need “-r 1” otherwise the bad sectors will be treated as known bad from the previous state, and won’t be tried at all (after backing up the original map file):<br />
<pre><br />
cp sdX.map sdX.initial.map<br />
ddrescue –ask –verbose –binary-prefixes –idirect –force –r 1 /dev/sdX /dev/null sdX.map<br />
</pre><br />
<br />
<li>If any bad sectors remain, then they must be in free space on the drive.<br />
<br />
<ol style="list-style-type: lower-alpha;"><br />
<br />
<li>List out all the bad block addresses, based on the ddrescue state file (after backing up the map file):<br />
<pre><br />
ddrescuelog –list-blocks=- sdX.map<br />
</pre><br />
<br />
<li>For each of the bad blocks, check with dd that we’ve got the right block IDs. For each one of these reads we expect to see an error (and “0+0 records in”):<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/sdX of=/dev/null count=1 bs=512 skip=$block<br />
done<br />
</pre><br />
<br />
<li>For each of the bad blocks, write zeros over the block to force it to be reallocated from spare space on the drive. Be careful here – getting it wrong will destroy data! Also note that when reading, “skip” is used to position the input stream, but here “seek” is used to position the output stream:<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/zero of=/dev/sdX count=1 bs=512 seek=$block<br />
done<br />
</pre><br />
<br />
<li>It’s possible that dd will fail to write to the block, in which case try again with hdparm:<br />
<br />
<ol style="list-style-type: lower-roman;"><br />
<br />
<li>First check that we’ve got the right sectors (we expect to see “SG_IO: bad/missing sense data” for each sector on stderr, so we pipe stdout to /dev/null to avoid noise):<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
hdparm –read-sector $block /dev/sdX > /dev/null<br />
done<br />
</pre><br />
<br />
<li>Assuming we’ve seen the expected errors, write zeros over each of the bad sectors. Be careful here – getting it wrong will destroy data! You may be asked to add a “—yes-i-know-what-i-am-doing” flag.<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
hdparm –write-sector $block /dev/sdX<br />
done<br />
</pre><br />
<br />
</ol><br />
<br />
</ol><br />
<br />
<li>Check ddrescue is showing all data as readable (after backing up the map file again):<br />
<pre><br />
cp sdX.map sdX.postshred.map<br />
ddrescue –ask –verbose –binary-prefixes –idirect –force –r 1 /dev/sdX /dev/null sdX.map<br />
</pre><br />
<br />
<li>Check smartctl is showing no pending sectors:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
<li>Readd spare drive and start rebuilding array redundancy:<br />
<pre><br />
mdadm –add /dev/md0 /dev/sdW<br />
</pre><br />
<br />
</ol><br />
<br />
== Reducing the number of disks in a RAID 5 array (including LVM) ==<br />
To reduce the number of disks in an array (so that one can be removed safely):<br />
<br />
* Firstly, ensure we've got a saved copy of the current PV mappings and 'mdadm --detail' somewhere (not on the machine). This will be useful if we need to recover from something having gone wrong<br />
<br />
* If we want to choose which disk is going to be removed (rather than mdadm deciding for us), we need to remove that drive from the RAID 5 array before we start (which will put it into 'degraded' mode):<br />
mdadm /dev/md0 --fail /dev/sdX1<br />
mdadm /dev/md0 --remove /dev/sdX1<br />
<br />
* Unmount the LVM logical volume we're going to take the space from:<br />
umount /dev/VG/LV<br />
<br />
* Shrink the LVM logical volume (and the ext4 filesystem that's on it) from which we're going to be reclaiming the space. Make sure that the reduction in size is larger than the size of the disk we're going to remove (in this case, a 3TB drive). Units are 1024-based, so we know that 3T will be enough (since 3TiB > 3TB):<br />
lvresize --verbose --resizefs -L -3T /dev/VG/LV<br />
This step will take a LONG time (about 15 hours for me).<br />
<br />
* Check the PV mappings - it's likely that the free space we've created won't be at the end of the physical volume:<br />
pvdisplay -m /dev/md0<br />
<br />
* Assuming it's not at the end, we need to move the LV segments around to ensure all free space is at the end of the volume (you may need to do this more than once). Choose a segment that's after the free space, and move its extents into a similarly sized space at the beginning of the free area:<br />
pvmove --alloc anywhere /dev/md0:3538424-3576823 /dev/md0:2751992-2790391<br />
This step also takes several hours.<br />
<br />
* Check that we have some free PEs, and calculate the new size of the physical volume with `PE Size * (Alloc PE + 1)` from:<br />
vgdisplay VG<br />
I'm not sure why we need the `+ 1`, but without it the next step warns us that we shrinking by one too many extents (`cannot resize to 2790391 extents as 2790392 are allocated`).<br />
<br />
* Shrink the physical volume. Use KB here so that we can compare with the size of the mdadm array in the next step. pvresize will warn you that the requested size is less than the real size; check that the requested size matches the Alloc Size from vgdisplay.<br />
pvresize --setphysicalvolumesize 11429449728K /dev/md0<br />
<br />
* Check that we now have 0 Free PEs:<br />
vgdisplay VG<br />
<br />
* Check what the new size of the array will be, and ensure that it's larger than the size of the PV as set by `pvresize`. Here `5` is the new number of disks in the array. Also, ensure the backup file is not stored on the array itself:<br />
mdadm --grow /dev/md0 -n5 --backup-file /var/log/raid.backup<br />
This step will fail to resize, but will give us the new array size we need from the next step.<br />
<br />
* Transiently resize the array (which will change the size reported to the OS until the next reboot). Use the size reported by the previous step, but ensure it's larger than the physical volume size we set with `pvresize`:<br />
mdadm --grow /dev/md0 --array-size 11720540160<br />
<br />
* At this point, you may want to run e2fsck on all the volumes on in the group to make sure we haven't accidentally truncated any by shrinking the reported size of the array.<br />
<br />
* Assuming all is well, go ahead and rerun the command to reshape the array with new number of disks:<br />
mdadm --grow /dev/md0 -n5 --backup-file /var/log/raid.backup<br />
This will take a REALLY LONG time (several days for me). Whilst this is running though, we can run the next few steps (all except adding the spare drive back to the array).<br />
<br />
* First, we can grow the PV to get back the space we used as a buffer to make sure we shrank the PV more than we shrank the array:<br />
pvresize /dev/md0<br />
<br />
* Now grow one of the LVs to use the free space we got back from the step above (it doesn't need to be the same one we took space from at the beginning, and can be done while the LV is mounted). Note the free extents from vgdisplay and use that as the input for lvextend:<br />
vgdisplay VG<br />
lvextend -l +71067 /dev/VG/LV<br />
resize2fs /dev/VG/LV<br />
<br />
* Check that we've no more free extents:<br />
vgdisplay VG<br />
<br />
* Remount any unmounted partitions, and restart any services that were shutdown to allow for the unmounting<br />
<br />
* Once the reshape has finished, the array will probably still be in degraded more with one spare drive, so add that back into the array:<br />
mdadm /dev/md0 --add /dev/sdY1<br />
This step will probably also take a day or more.<br />
<br />
== Reducing recovery time after unclean shutdown ==<br />
The default (at least when I setup mdadm) consistency policy is `resync`, which means a full resync is needed if the machine shuts down uncleanly (eg. due to power loss). To see the current consistency policy:<br />
mdadm --detail /dev/md0 | grep Consistency<br />
<br />
If it's set to `resync` then recoveries will be slow; `bitmap` means recoveries will be fast. To set a different consistency policy (eg. an internal bitmap) with:<br />
mdadm --grow --bitmap=internal /dev/md0<br />
<br />
Changing the consistency policy only takes a few seconds.<br />
<br />
== Cancel a hanging md check ==<br />
<br />
Sometimes the monthly consistency check will hang. This can be seen with output like:<br />
<br />
md0 : active raid5 sdh1[5] sdi1[4] sdg1[1] sdd1[0]<br />
8790107136 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]<br />
[===================>.] check = 99.9% (2930035712/2930035712) finish=0.0min speed=0K/sec<br />
bitmap: 14/22 pages [56KB], 65536KB chunk<br />
<br />
This can also lead to very high load warnings (>20).<br />
<br />
Cancelling in the regular way will just hang too, so first we need to set the state to `active` rather than `write-pending` before cancelling:<br />
<br />
# cat /sys/block/md0/md/array_state<br />
write-pending<br />
# echo active > /sys/block/md0/md/array_state<br />
# cat /sys/devices/virtual/block/md0/md/sync_action<br />
check<br />
# echo idle > /sys/devices/virtual/block/md0/md/sync_action<br />
<br />
== Useful Commands ==<br />
<br />
; cat /proc/mdstat : Display a summary of current raid status<br />
; mdadm --detail /dev/md0 : Display raid information on array md0<br />
; mdadm --examine /dev/sdf : Display raid information on device/partition sdf</div>Andrewhttps://wiki.bretts.org/index.php?title=Mdadm&diff=8522Mdadm2022-12-16T13:16:37Z<p>Andrew: </p>
<hr />
<div>== Overview ==<br />
Several physical disks (/dev/sdX) or partitions (/dev/sdX1) of equal size are joined into a single array.<br />
<br />
== Creating a RAID array ==<br />
<br />
* (Recommended) Create a partition on each disk. Note:<br />
** Use optimal alignment, with "-a optimal" (this doesn't appear to have any obvious effect on behaviour though!)<br />
** Use the "GPT" partition table format (to handle disks > 2TB)<br />
** Name the partition "primary" (note that this is free text)<br />
** Use 0% for partition start (this will normally mean that the partition start will be at the 1MB boundary, which gives optimal alignment)<br />
** End 100MB before the end of the disk (this is to allow for slight variances in exact size of similar disks)<br />
** Set partition type to raid (0xFD00); this is optional, but may encourage some tools to avoid writing directly to the disk (and avoid corrupting the array)<br />
<pre><br />
parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% -100MB set 1 raid on<br />
</pre><br />
<br />
<br />
* Create a RAID 5 array over 3 partitions:<br />
** Note, the default metadata version is now 1.2 for create commands<br />
<pre><br />
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdX1 /dev/sdY1 /dev/sdZ1<br />
</pre><br />
<br />
* Wait (potentially several days) for the array to be built<br />
<br />
* Once built, save the current raid setup to /etc, to allow for automounting on startup:<br />
<pre><br />
diff -u <(cat /etc/mdadm/mdadm.conf) <(/usr/share/mdadm/mkconf)<br />
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak<br />
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf<br />
</pre><br />
<br />
* Update the initial boot image for all current kernel versions to include the new mdadm.conf:<br />
<pre><br />
update-initramfs -u<br />
</pre><br />
<br />
* Start the array:<br />
<pre><br />
mdadm --assemble /dev/md0 /dev/sdX1 /dev/sdY1 /dev/sdZ1<br />
</pre><br />
<br />
* From this point, just treat the array (/dev/md0) as a normal physical disk.<br />
<br />
== Convert RAID 1 array to RAID 5 ==<br />
<br />
* Create partition on the new disk as for creating a new array<br />
<br />
* Add the new partition to the array:<br />
<pre><br />
mdadm --add /dev/md0 /dev/sdX1<br />
</pre><br />
<br />
* Convert the array to RAID 5, with the correct number of devices:<br />
<pre><br />
mdadm --grow --level=5 --raid-devices=3<br />
</pre><br />
<br />
* Wait (potentially several days) for the array to be reshaped<br />
<br />
* Grow the partition / volume on /dev/md0<br />
<br />
== Readding a disk marked faulty ==<br />
If a disk in the array has been marked faulty for a spurious reason, then to readd it and rebuild the array, you'll first need to remove it. Run:<br />
<pre><br />
mdadm /dev/md0 --remove /dev/sdX1<br />
mdadm /dev/md0 --add /dev/sdX1<br />
</pre><br />
<br />
== Fixing a disk with Current_Pending_Sector count > 0 ==<br />
If a disk in the array has a Current_Pending_Sector count > 0, this suggests one or more blocks on the disk couldn't be read. To force the disk to be recovered from the rest of the array, it needs to be rewritten which will force the pending sector to be reallocated. This entails removing the disk from the array, zeroing the superblock (to ensure it can't just be recovered from the bitmap) and then re-adding it.<br />
<pre><br />
mdadm /dev/md0 --fail /dev/sdX1<br />
mdadm /dev/md0 --remove /dev/sdX1<br />
mdadm --zero-superblock /dev/sdX1<br />
mdadm /dev/md0 --add /dev/sdX1<br />
</pre><br />
<br />
== Recovering from disk failure ==<br />
<br />
* Check the disk status in mdadm:<br />
<pre><br />
mdadm --detail /dev/md0<br />
</pre><br />
<br />
* If the disk is already marked as failed, then skip this step. Otherwise:<br />
<pre><br />
mdadm /dev/md0 --fail /dev/sdX1<br />
</pre><br />
<br />
* From this point, the array will continue to operate in "degraded" mode<br />
<br />
* Remove the failed disk:<br />
<pre><br />
mdadm /dev/md0 --remove /dev/sdX1<br />
</pre><br />
<br />
* To more easily determine the disk for physical removal from the machine (once powered off), note down the serial number as reported by:<br />
<pre><br />
hdparm -i /dev/sdX | grep SerialNo<br />
</pre><br />
<br />
* Add a replacement disk:<br />
<pre><br />
mdadm /dev/md0 --add /dev/sdY1<br />
</pre><br />
<br />
* Wait (potentially several days) for the array to be resynced<br />
<br />
== Recover from a dirty reboot of a degraded array ==<br />
If the server shuts down uncleanly (eg. due to a power cut) when the array is degraded, it will refuse to automatically assemble the array on startup (with a dmesg error of the form "cannot start dirty degraded array"). This is because the data may be in an inconsistent state. In this situation:<br />
<br />
* Check that the good disks have the same number of events. If the numbers differ slightly, that suggests some of the data being written when the server shutdown wasn't written fully, and is probably corrupt (hopefully this will just mean a logfile with some bad characters, or similar).<br />
<pre><br />
mdadm --examine /dev/sdX /dev/sdY /dev/sdZ | grep Events<br />
</pre><br />
<br />
* Assuming the number of events is the same (or very similar), forcibly assemble the array.<br />
<pre><br />
mdadm --assemble --force /dev/md0 /dev/sdX1 /dev/sdY1 /dev/sdZ1<br />
</pre><br />
<br />
== Repairing failing disk on degraded array ==<br />
<br />
If the raid5 array is in a good state, then simply removing and readding the faulty drive should be sufficient. However, if the array is already degraded (ie. there’s no redundancy), or the disk problems became apparent when rebuilding the array from a spare drive, any bad sectors on the failing drive will need to be overwritten with new data (probably just zeros) before the disk is good enough to be able to rebuild the array.<br />
<br />
<ol><br />
<br />
<li>Getting information on failing/failed sectors:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
smartctl –l xerror /dev/sdX<br />
</pre><br />
<br />
<li>Analyze/recover data from failing disk:<br />
<br />
<ol style="list-style-type: lower-alpha;"><br />
<br />
<li>Ideally, copy all good data to a recovery file sdX.bin, and record details of good/bad sectors in sdX.map (this needs sufficient free space for sdX.bin). This needs to be run when no partitions on the array are mounted:<br />
<pre><br />
ddrescue –ask –verbose –binary-prefixes –idirect /dev/sdX sdX.bin sdX.map<br />
</pre><br />
<br />
<li>If insufficient free space, merely analyze the disk to scan for all failing sectors (force is needed to allow writing to /dev/null). This can be run when partitions are mounted, since we don’t actually care about the data we’re reading, we just care about the bad sectors:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map<br />
</pre><br />
Note that the sdX.map file is human readable, and will generally be quite small. It keeps track of which sectors are good and bad, and can be reused for subsequent ddrescue runs to avoid re-reading good sectors.<br />
<br />
</ol><br />
<br />
<li>Recheck the number of failing sectors, since some may not have been read yet when smartctl was last run:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
<li> Forcibly re-assemble the array (after checking the number of events mismatching between array members):<br />
<br />
<ol style="list-style-type: lower-alpha;"><br />
<br />
<li>If the number of events is wildly different, then it’s possible there will be corrupted data on the array, but in general if the array was marked as failed then no file writes will have been successful, so event discrepancies might not be reflective of a real problem:<br />
<pre><br />
mdadm –examine /dev/sd[XYZ] | grep Events<br />
</pre><br />
<br />
<li>Reassemble the array, if it’s in a good state (note the disk order isn’t important – mdadm will work out the correct order):<br />
<pre><br />
mdadm –assemble –verbose –run /dev/md0 /dev/sd[XYZ]<br />
</pre><br />
<br />
<li>If reassembly was unsuccessful due to mismatched event numbers, then forcibly reassemble it (be very careful here, disk order doesn’t matter but do make sure the correct disk labels are used – check output of the previous assemble to make sure it looks reasonable):<br />
<pre><br />
mdadm –assemble –verbose –run –force /dev/md0 /dev/sd[XYZ]<br />
</pre><br />
<br />
<li>Remount any affected partitions, or restart the machine to remount all on startup<br />
<br />
</ol><br />
<br />
<li>For mdadm raid5 + lvm arrays, there’s no easy way to determine which files inhabit which bad sectors. Instead, we need to read all files by hand to determine which are unreadable. For each partition which includes space on the bad drive (xdev ensures no other mount points are included):<br />
<pre><br />
find /mountpoint –type f –xdev –exec echo {} \; -exec md5sum {} \; 2>&1 | tee mountpoint-files.log<br />
</pre><br />
Note: Ensure mountpoint-files.log is written somewhere outside of the array<br />
<br />
<br />
<li>It’s possible that reading a bad file with md5sum above will again mark the array as failed. If so, reassemble the array using the steps above. Then look in the mountpoint-files.log file for the first failed md5sum (probably logged with “Input/output error”).<br />
<br />
<ol style="list-style-type: lower-alpha;"><br />
<br />
<li>Write random data over the bad file, which should force the pending sector to be marked bad and reallocated from spare space on the disk:<br />
<pre><br />
shred –v /path/to/bad/file<br />
</pre><br />
<br />
<li>Check that the number of failing sectors has decreased:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
<li>Assuming the number of pending sectors has decreased, it’s then ok to delete the bad file:<br />
<pre><br />
rm /path/to/bad/file<br />
</pre><br />
<br />
</ol><br />
<br />
<li>Repeat md5sum scanning and file deletion until all mountpoints using the disk are free of bad files<br />
<br />
<li>Rescan the bad sectors to see which have been fixed by deleting files, reusing the previous known state of the drive. Note that we need “-r 1” otherwise the bad sectors will be treated as known bad from the previous state, and won’t be tried at all (after backing up the original map file):<br />
<pre><br />
cp sdX.map sdX.initial.map<br />
ddrescue –ask –verbose –binary-prefixes –idirect –force –r 1 /dev/sdX /dev/null sdX.map<br />
</pre><br />
<br />
<li>If any bad sectors remain, then they must be in free space on the drive.<br />
<br />
<ol style="list-style-type: lower-alpha;"><br />
<br />
<li>List out all the bad block addresses, based on the ddrescue state file (after backing up the map file):<br />
<pre><br />
ddrescuelog –list-blocks=- sdX.map<br />
</pre><br />
<br />
<li>For each of the bad blocks, check with dd that we’ve got the right block IDs. For each one of these reads we expect to see an error (and “0+0 records in”):<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/sdX of=/dev/null count=1 bs=512 skip=$block<br />
done<br />
</pre><br />
<br />
<li>For each of the bad blocks, write zeros over the block to force it to be reallocated from spare space on the drive. Be careful here – getting it wrong will destroy data! Also note that when reading, “skip” is used to position the input stream, but here “seek” is used to position the output stream:<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/zero of=/dev/sdX count=1 bs=512 seek=$block<br />
done<br />
</pre><br />
<br />
<li>It’s possible that dd will fail to write to the block, in which case try again with hdparm:<br />
<br />
<ol style="list-style-type: lower-roman;"><br />
<br />
<li>First check that we’ve got the right sectors (we expect to see “SG_IO: bad/missing sense data” for each sector on stderr, so we pipe stdout to /dev/null to avoid noise):<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
hdparm –read-sector $block /dev/sdX > /dev/null<br />
done<br />
</pre><br />
<br />
<li>Assuming we’ve seen the expected errors, write zeros over each of the bad sectors. Be careful here – getting it wrong will destroy data! You may be asked to add a “—yes-i-know-what-i-am-doing” flag.<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
hdparm –write-sector $block /dev/sdX<br />
done<br />
</pre><br />
<br />
</ol><br />
<br />
</ol><br />
<br />
<li>Check ddrescue is showing all data as readable (after backing up the map file again):<br />
<pre><br />
cp sdX.map sdX.postshred.map<br />
ddrescue –ask –verbose –binary-prefixes –idirect –force –r 1 /dev/sdX /dev/null sdX.map<br />
</pre><br />
<br />
<li>Check smartctl is showing no pending sectors:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
<li>Readd spare drive and start rebuilding array redundancy:<br />
<pre><br />
mdadm –add /dev/md0 /dev/sdW<br />
</pre><br />
<br />
</ol><br />
<br />
== Reducing the number of disks in a RAID 5 array (including LVM) ==<br />
To reduce the number of disks in an array (so that one can be removed safely):<br />
<br />
* Firstly, ensure we've got a saved copy of the current PV mappings and 'mdadm --detail' somewhere (not on the machine). This will be useful if we need to recover from something having gone wrong<br />
<br />
* If we want to choose which disk is going to be removed (rather than mdadm deciding for us), we need to remove that drive from the RAID 5 array before we start (which will put it into 'degraded' mode):<br />
mdadm /dev/md0 --fail /dev/sdX1<br />
mdadm /dev/md0 --remove /dev/sdX1<br />
<br />
* Unmount the LVM logical volume we're going to take the space from:<br />
umount /dev/VG/LV<br />
<br />
* Shrink the LVM logical volume (and the ext4 filesystem that's on it) from which we're going to be reclaiming the space. Make sure that the reduction in size is larger than the size of the disk we're going to remove (in this case, a 3TB drive). Units are 1024-based, so we know that 3T will be enough (since 3TiB > 3TB):<br />
lvresize --verbose --resizefs -L -3T /dev/VG/LV<br />
This step will take a LONG time (about 15 hours for me).<br />
<br />
* Check the PV mappings - it's likely that the free space we've created won't be at the end of the physical volume:<br />
pvdisplay -m /dev/md0<br />
<br />
* Assuming it's not at the end, we need to move the LV segments around to ensure all free space is at the end of the volume (you may need to do this more than once). Choose a segment that's after the free space, and move its extents into a similarly sized space at the beginning of the free area:<br />
pvmove --alloc anywhere /dev/md0:3538424-3576823 /dev/md0:2751992-2790391<br />
This step also takes several hours.<br />
<br />
* Check that we have some free PEs, and calculate the new size of the physical volume with `PE Size * (Alloc PE + 1)` from:<br />
vgdisplay VG<br />
I'm not sure why we need the `+ 1`, but without it the next step warns us that we shrinking by one too many extents (`cannot resize to 2790391 extents as 2790392 are allocated`).<br />
<br />
* Shrink the physical volume. Use KB here so that we can compare with the size of the mdadm array in the next step. pvresize will warn you that the requested size is less than the real size; check that the requested size matches the Alloc Size from vgdisplay.<br />
pvresize --setphysicalvolumesize 11429449728K /dev/md0<br />
<br />
* Check that we now have 0 Free PEs:<br />
vgdisplay VG<br />
<br />
* Check what the new size of the array will be, and ensure that it's larger than the size of the PV as set by `pvresize`. Here `5` is the new number of disks in the array. Also, ensure the backup file is not stored on the array itself:<br />
mdadm --grow /dev/md0 -n5 --backup-file /var/log/raid.backup<br />
This step will fail to resize, but will give us the new array size we need from the next step.<br />
<br />
* Transiently resize the array (which will change the size reported to the OS until the next reboot). Use the size reported by the previous step, but ensure it's larger than the physical volume size we set with `pvresize`:<br />
mdadm --grow /dev/md0 --array-size 11720540160<br />
<br />
* At this point, you may want to run e2fsck on all the volumes on in the group to make sure we haven't accidentally truncated any by shrinking the reported size of the array.<br />
<br />
* Assuming all is well, go ahead and rerun the command to reshape the array with new number of disks:<br />
mdadm --grow /dev/md0 -n5 --backup-file /var/log/raid.backup<br />
This will take a REALLY LONG time (several days for me). Whilst this is running though, we can run the next few steps (all except adding the spare drive back to the array).<br />
<br />
* First, we can grow the PV to get back the space we used as a buffer to make sure we shrank the PV more than we shrank the array:<br />
pvresize /dev/md0<br />
<br />
* Now grow one of the LVs to use the free space we got back from the step above (it doesn't need to be the same one we took space from at the beginning, and can be done while the LV is mounted). Note the free extents from vgdisplay and use that as the input for lvextend:<br />
vgdisplay VG<br />
lvextend -l +71067 /dev/VG/LV<br />
resize2fs /dev/VG/LV<br />
<br />
* Check that we've no more free extents:<br />
vgdisplay VG<br />
<br />
* Remount any unmounted partitions, and restart any services that were shutdown to allow for the unmounting<br />
<br />
* Once the reshape has finished, the array will probably still be in degraded more with one spare drive, so add that back into the array:<br />
mdadm /dev/md0 --add /dev/sdY1<br />
This step will probably also take a day or more.<br />
<br />
== Reducing recovery time after unclean shutdown ==<br />
The default (at least when I setup mdadm) consistency policy is `resync`, which means a full resync is needed if the machine shuts down uncleanly (eg. due to power loss). To see the current consistency policy:<br />
mdadm --detail /dev/md0 | grep Consistency<br />
<br />
If it's set to `resync` then recoveries will be slow; `bitmap` means recoveries will be fast. To set a different consistency policy (eg. an internal bitmap) with:<br />
mdadm --grow --bitmap=internal /dev/md0<br />
<br />
Changing the consistency policy only takes a few seconds.<br />
<br />
== Cancel a hanging md check ==<br />
<br />
Sometimes the monthly consistency check will hang. This can be seen with output like:<br />
<br />
md0 : active raid5 sdh1[5] sdi1[4] sdg1[1] sdd1[0]<br />
8790107136 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]<br />
[===================>.] check = 99.9% (2930035712/2930035712) finish=0.0min speed=0K/sec<br />
bitmap: 14/22 pages [56KB], 65536KB chunk<br />
<br />
Cancelling in the regular way will just hang too, so first we need to set the state to `active` rather than `write-pending` before cancelling:<br />
<br />
# cat /sys/block/md0/md/array_state<br />
write-pending<br />
# echo active > /sys/block/md0/md/array_state<br />
# cat /sys/devices/virtual/block/md0/md/sync_action<br />
check<br />
# echo idle > /sys/devices/virtual/block/md0/md/sync_action<br />
<br />
== Useful Commands ==<br />
<br />
; cat /proc/mdstat : Display a summary of current raid status<br />
; mdadm --detail /dev/md0 : Display raid information on array md0<br />
; mdadm --examine /dev/sdf : Display raid information on device/partition sdf</div>Andrewhttps://wiki.bretts.org/index.php?title=Main_Page&diff=8521Main Page2022-10-01T23:28:04Z<p>Andrew: /* Monitoring */</p>
<hr />
<div>== briki Contents ==<br />
=== Technology ===<br />
==== Software ====<br />
* [[Linux Tips]]<br />
* [[Windows 10 Tips]]<br />
* [[Windows XP Tips]]<br />
* [[MacOS X Tips]]<br />
* [[Networking Tips]]<br />
<br />
==== Hardware ====<br />
* [[Machine List]]<br />
* [[Sony TR Tips]]<br />
* [[SPV M5000 Tips]]<br />
* [[Speedtouch 780 Tips]]<br />
<br />
=== Other ===<br />
* [[Bikes]]<br />
* [[Simpsons Quotes]]<br />
* [[Temporary Pages]]<br />
* [[Alex]]<br />
<br />
== Other Contents ==<br />
=== Home ===<br />
----<br />
* [https://home-assistant.bretts.org Home Assistant] (protected)<br />
* [https://arlo.netgear.com Security Cameras] (protected)<br />
<br />
=== Media ===<br />
----<br />
* [https://plex.bretts.org Plex] (protected)<br />
* [https://radarr.bretts.org Radarr] ([https://radarr-lowres.bretts.org low-res]) (private)<br />
* [https://sonarr.bretts.org Sonarr] (private)<br />
* [https://tautulli.bretts.org Tautulli (Plex status)] (private)<br />
* [http://maine.bretts.org:33400 Plex WebTools] (private)<br />
* [http://maine.bretts.org:9117 Jackett] (private)<br />
<br />
=== Admin ===<br />
----<br />
==== Network ====<br />
* [https://router.bretts.org:1443/ Router] (protected)<br />
* [https://unifi.bretts.org Unifi Controller] (protected)<br />
<br />
==== Monitoring ====<br />
* [http://netdata.bretts.org Device Monitoring (netdata)]<br />
* [https://ntopng.bretts.org Bandwidth Monitoring (ntopng)] (protected)<br />
* [http://maine.bretts.org/mon/ Process Monitoring (nagios4)] [https://maine.bretts.org/cgi-bin/nagios4/status.cgi?host=all Services](protected)<br />
* [http://maine.bretts.org/graph/bretts.org/ Network Monitoring (munin)]<br />
* [http://maine.bretts.org:3001/ Grafana] (protected)<br />
<br />
==== Torrents ====<br />
* [https://deluge.bretts.org Deluge] (protected)<br />
<br />
==== Apache (protected) ====<br />
* [http://maine.bretts.org/doc/ Manual]<br />
* [http://maine.bretts.org/server-info Server Info]<br />
* [http://maine.bretts.org/server-status Server Status]<br />
* [http://maine.bretts.org/php-info/ PHP Info]<br />
<br />
==== Tomcat (protected) ====<br />
* [http://maine.bretts.org/tomcat/manager/html Manager]<br />
<br />
=== Development ===<br />
* [http://bitbucket.bretts.org/ BitBucket]<br />
* [http://jira.bretts.org/ JIRA]<br />
* [http://bamboo.bretts.org/ Bamboo]<br />
<br />
== About ==<br />
'''briki''' (''bretts.org wiki'') administered by Andrew Brett. Feel free to contribute to pre-existing pages.</div>Andrewhttps://wiki.bretts.org/index.php?title=Main_Page&diff=8520Main Page2022-10-01T23:27:08Z<p>Andrew: /* Monitoring */</p>
<hr />
<div>== briki Contents ==<br />
=== Technology ===<br />
==== Software ====<br />
* [[Linux Tips]]<br />
* [[Windows 10 Tips]]<br />
* [[Windows XP Tips]]<br />
* [[MacOS X Tips]]<br />
* [[Networking Tips]]<br />
<br />
==== Hardware ====<br />
* [[Machine List]]<br />
* [[Sony TR Tips]]<br />
* [[SPV M5000 Tips]]<br />
* [[Speedtouch 780 Tips]]<br />
<br />
=== Other ===<br />
* [[Bikes]]<br />
* [[Simpsons Quotes]]<br />
* [[Temporary Pages]]<br />
* [[Alex]]<br />
<br />
== Other Contents ==<br />
=== Home ===<br />
----<br />
* [https://home-assistant.bretts.org Home Assistant] (protected)<br />
* [https://arlo.netgear.com Security Cameras] (protected)<br />
<br />
=== Media ===<br />
----<br />
* [https://plex.bretts.org Plex] (protected)<br />
* [https://radarr.bretts.org Radarr] ([https://radarr-lowres.bretts.org low-res]) (private)<br />
* [https://sonarr.bretts.org Sonarr] (private)<br />
* [https://tautulli.bretts.org Tautulli (Plex status)] (private)<br />
* [http://maine.bretts.org:33400 Plex WebTools] (private)<br />
* [http://maine.bretts.org:9117 Jackett] (private)<br />
<br />
=== Admin ===<br />
----<br />
==== Network ====<br />
* [https://router.bretts.org:1443/ Router] (protected)<br />
* [https://unifi.bretts.org Unifi Controller] (protected)<br />
<br />
==== Monitoring ====<br />
* [http://netdata.bretts.org Device Monitoring (netdata)]<br />
* [https://ntopng.bretts.org Bandwidth Monitoring (ntopng)] (protected)<br />
* [http://maine.bretts.org/mon/ Process Monitoring (nagios4)] (protected)<br />
* [http://maine.bretts.org/graph/bretts.org/ Network Monitoring (munin)]<br />
* [http://maine.bretts.org:3001/ Grafana] (protected)<br />
<br />
==== Torrents ====<br />
* [https://deluge.bretts.org Deluge] (protected)<br />
<br />
==== Apache (protected) ====<br />
* [http://maine.bretts.org/doc/ Manual]<br />
* [http://maine.bretts.org/server-info Server Info]<br />
* [http://maine.bretts.org/server-status Server Status]<br />
* [http://maine.bretts.org/php-info/ PHP Info]<br />
<br />
==== Tomcat (protected) ====<br />
* [http://maine.bretts.org/tomcat/manager/html Manager]<br />
<br />
=== Development ===<br />
* [http://bitbucket.bretts.org/ BitBucket]<br />
* [http://jira.bretts.org/ JIRA]<br />
* [http://bamboo.bretts.org/ Bamboo]<br />
<br />
== About ==<br />
'''briki''' (''bretts.org wiki'') administered by Andrew Brett. Feel free to contribute to pre-existing pages.</div>Andrewhttps://wiki.bretts.org/index.php?title=Docker&diff=8519Docker2022-09-29T07:36:24Z<p>Andrew: /* Home-Assistant (as part of host network) */</p>
<hr />
<div>== Useful Commands ==<br />
<br />
; docker ps -a: List all containers<br />
; docker container inspect <container>: Show details of <container><br />
; docker logs <container>: Show logs for <container><br />
; docker exec -it <container> /bin/bash: Start an interactive shell in <container><br />
<br />
== Updating container ==<br />
<br />
=== Manually ===<br />
sudo docker pull <image><br />
sudo docker stop <container><br />
sudo docker rm <container><br />
<docker run command><br />
<br />
=== Automatically ===<br />
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock taisun/updater --oneshot <container><br />
<br />
== Containers ==<br />
=== Plex ===<br />
First, setup NVIDIA:<br />
* Get latest drivers: https://linuxconfig.org/how-to-install-the-nvidia-drivers-on-ubuntu-18-04-bionic-beaver-linux<br />
* Enable NVIDIA extensions for docker: https://forums.plex.tv/t/how-to-setup-nvidia-hw-acceleration-in-ubuntu-docker/288625/7<br />
<br />
Next, get your claim token: https://www.plex.tv/claim/<br />
<br />
Finally, create the container with the claim token substituted:<br />
sudo docker run -d --name plex --network=host -e PLEX_UID=111 -e PLEX_GID=127 -e TZ=Europe/London -e PLEX_CLAIM=<CLAIM_TOKEN> \<br />
-v /var/lib/plexmediaserver:/config -v /srv:/srv \<br />
--restart unless-stopped \<br />
plexinc/pms-docker:plexpass<br />
<br />
=== Tautulli (Plex Monitoring/Notifications) ===<br />
sudo docker run -d --name tautulli -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8181:8181 \<br />
-v /var/lib/torrent/tautulli/config:/config -v /var/lib/plexmediaserver/Library/Logs:/logs \<br />
--restart unless-stopped \<br />
linuxserver/tautulli<br />
<br />
=== Jackett (Torrent Gateway) ===<br />
sudo docker run -d --name=jackett -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 9117:9117 \<br />
-v /var/lib/torrent/jackett/config:/config -v /var/lib/torrent/jackett/downloads:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/jackett<br />
<br />
=== Deluge ===<br />
sudo docker run -d --name deluge -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/torrent/deluged/config:/config -v /srv/incoming/torrents/deluge:/srv/incoming/torrents/deluge \<br />
-v /etc/ssl/bretts.org:/etc/ssl/bretts.org \<br />
--restart unless-stopped \<br />
linuxserver/deluge<br />
<br />
Since user groups don't seem to apply across the docker boundary, "torrent" will need to be given explicit permission to the private key file via an ACL:<br />
setfacl -m "u:torrent:rw" /etc/ssl/bretts.org/key.pem<br />
<br />
=== Radarr (Movie Downloads) ===<br />
sudo docker run -d --name radarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7878:7878 \<br />
-v /var/lib/torrent/radarr/config:/config -v /srv/videos/programs/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Radarr Lowres (Low Resolution (<=1080p) Movie Downloads) ===<br />
sudo docker run -d --name radarr-lowres -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7879:7878 \<br />
-v /var/lib/torrent/radarr-lowres/config:/config -v /srv/videos/lowres/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Sonarr (TV Downloads) ===<br />
sudo docker run -d --name=sonarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8989:8989 \<br />
-v /var/lib/torrent/sonarr/config:/config -v /srv/videos/programs/tv:/tv -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/sonarr<br />
<br />
=== Unifi ===<br />
sudo docker run -d --name=unifi-controller -e PUID=140 -e PGID=150 \<br />
-p 3478:3478/udp -p 10001:10001/udp -p 18080:18080 -p 18081:18081 -p 18443:18443 -p 18880:18880 -p 6789:6789 \<br />
-v /var/lib/unifi:/config \<br />
--restart unless-stopped \<br />
linuxserver/unifi-controller<br />
<br />
=== Home-Assistant (as part of host network) ===<br />
sudo docker run -d --name=home-assistant -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/home-assistant/config:/config -v /srv:/media -v /etc/ssl/bretts.org:/etc/ssl/bretts.org -v /var/www/html/arlo-snapshots:/arlo-snapshots \<br />
--restart unless-stopped \<br />
homeassistant/home-assistant<br />
<br />
=== Atlassian ===<br />
<br />
==== JIRA ====<br />
Note: In this instance JIRA is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name jira<br />
sudo docker run -d --name=jira -e TZ=Europe/London \<br />
-e ATL_TOMCAT_SCHEME=https -e ATL_TOMCAT_SECURE=true -e ATL_PROXY_NAME=jira.bretts.org -e ATL_PROXY_PORT=443 \<br />
-p 7980:8080 \<br />
-v jira:/var/atlassian/application-data/jira \<br />
--restart unless-stopped \<br />
atlassian/jira-software<br />
<br />
Docker JIRA runs with a uid and gid of 2001. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2001 jira-docker<br />
sudo adduser --system --no-create-home --uid 2001 --gid 2001 jira-docker<br />
<br />
==== Bitbucket====<br />
Note: In this instance Bitbucket is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bitbucket<br />
sudo docker run -d --name=bitbucket -e TZ=Europe/London \<br />
-e SERVER_SCHEME=https -e SERVER_SECURE=true -e SERVER_PROXY_NAME=bitbucket.bretts.org -e SERVER_PROXY_PORT=443 \<br />
-p 7990:7990 -p 7999:7999 \<br />
-v bitbucket:/var/atlassian/application-data/bitbucket \<br />
--restart unless-stopped \<br />
atlassian/bitbucket-server<br />
<br />
Docker Bitbucket runs with a uid and gid of 2003. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2003 bitbucket-docker<br />
sudo adduser --system --no-create-home --uid 2003 --gid 2003 bitbucket-docker<br />
<br />
==== Bamboo ====<br />
Note: In this instance Bamboo is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bamboo<br />
sudo docker run -d --name=bamboo -e TZ=Europe/London \<br />
-p 54663:54663 -p 7970:8085 \<br />
-v bamboo:/var/atlassian/application-data/bamboo \<br />
--restart unless-stopped \<br />
atlassian/bamboo-server<br />
<br />
===== Limitations =====<br />
* Bamboo runs with a uid of 1000, which means it's likely to clash with a real user in the containing host<br />
* Bamboo container doesn't support any reverse proxy configuration, which means hiding it behind nginx is likely to result in broken Application Links. This can be worked around by manually editing /opt/atlassian/bamboo/conf/server.xml, but those changes will be overwritten on every container upgrade.<br />
<br />
== Tips / Fixes ==<br />
<br />
=== Tautulli slow to start ===<br />
This may be due to an attempt to chown a large number of files. <br />
Login to the container:<br />
sudo docker exec -it <container> /bin/bash<br />
Disable the chown step by editing <code>/etc/cont-init.d/30-config</code> and commenting out the chown command.<br />
<br />
=== Adding an SSL cert for Unifi ===<br />
sudo openssl pkcs12 -export -inkey /etc/ssl/bretts.org/key.pem -in /etc/ssl/bretts.org/fullchain.pem -out /tmp/cert.p12 -name unifi -password pass:temppass<br />
sudo keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/unifi/data/keystore -srckeystore /tmp/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias unifi -noprompt<br />
sudo docker restart unifi-controller<br />
sudo rm /tmp/cert.p12<br />
<br />
=== Local DNS resolution fails on docker 18.09 ===<br />
This may be the result of a bug: https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1820278. Normally the container's /etc/resolv.conf should mirror that of the host, but in this case it seems to just be a default version. As a workaround, create /etc/docker/daemon.json with the following contents:<br />
<br />
{<br />
"dns": ["192.168.1.1", "8.8.8.8"],<br />
"dns-search": ["bretts.org"]<br />
}</div>Andrewhttps://wiki.bretts.org/index.php?title=Backups&diff=8518Backups2022-05-20T07:12:37Z<p>Andrew: </p>
<hr />
<div>== Update restic ==<br />
restic self-update<br />
<br />
== Listing previous snapshots ==<br />
sudo -i<br />
. /etc/restic-init<br />
restic snapshots<br />
<br />
== Listing contents of the latest snapshot ==<br />
sudo -i<br />
. /etc/restic-init<br />
restic ls -l latest<br />
<br />
== Deleting old snapshots ==<br />
Substitute `2021-12` with the month (or months) for which you want to keep a full history. Note: `2019-03-26` is the first-ever backup in this instance, so we want to keep that too.<br />
for snap in `restic snapshots -c | grep maine | sed -e 's!.* !!' | grep -v -- '-01$' | grep -v '2019-03-26' | grep -v '2021-12'`<br />
do<br />
restic forget --path /var/backup/snapshot/latest --tag $snap --keep-last=-1<br />
done<br />
<br />
== Pruning old history ==<br />
Note: This will also trim the size of the /root/.cache/restic directory<br />
sudo -i<br />
. /etc/restic-init<br />
restic prune</div>Andrewhttps://wiki.bretts.org/index.php?title=Docker&diff=8517Docker2022-05-05T07:51:57Z<p>Andrew: </p>
<hr />
<div>== Useful Commands ==<br />
<br />
; docker ps -a: List all containers<br />
; docker container inspect <container>: Show details of <container><br />
; docker logs <container>: Show logs for <container><br />
; docker exec -it <container> /bin/bash: Start an interactive shell in <container><br />
<br />
== Updating container ==<br />
<br />
=== Manually ===<br />
sudo docker pull <image><br />
sudo docker stop <container><br />
sudo docker rm <container><br />
<docker run command><br />
<br />
=== Automatically ===<br />
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock taisun/updater --oneshot <container><br />
<br />
== Containers ==<br />
=== Plex ===<br />
First, setup NVIDIA:<br />
* Get latest drivers: https://linuxconfig.org/how-to-install-the-nvidia-drivers-on-ubuntu-18-04-bionic-beaver-linux<br />
* Enable NVIDIA extensions for docker: https://forums.plex.tv/t/how-to-setup-nvidia-hw-acceleration-in-ubuntu-docker/288625/7<br />
<br />
Next, get your claim token: https://www.plex.tv/claim/<br />
<br />
Finally, create the container with the claim token substituted:<br />
sudo docker run -d --name plex --network=host -e PLEX_UID=111 -e PLEX_GID=127 -e TZ=Europe/London -e PLEX_CLAIM=<CLAIM_TOKEN> \<br />
-v /var/lib/plexmediaserver:/config -v /srv:/srv \<br />
--restart unless-stopped \<br />
plexinc/pms-docker:plexpass<br />
<br />
=== Tautulli (Plex Monitoring/Notifications) ===<br />
sudo docker run -d --name tautulli -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8181:8181 \<br />
-v /var/lib/torrent/tautulli/config:/config -v /var/lib/plexmediaserver/Library/Logs:/logs \<br />
--restart unless-stopped \<br />
linuxserver/tautulli<br />
<br />
=== Jackett (Torrent Gateway) ===<br />
sudo docker run -d --name=jackett -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 9117:9117 \<br />
-v /var/lib/torrent/jackett/config:/config -v /var/lib/torrent/jackett/downloads:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/jackett<br />
<br />
=== Deluge ===<br />
sudo docker run -d --name deluge -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/torrent/deluged/config:/config -v /srv/incoming/torrents/deluge:/srv/incoming/torrents/deluge \<br />
-v /etc/ssl/bretts.org:/etc/ssl/bretts.org \<br />
--restart unless-stopped \<br />
linuxserver/deluge<br />
<br />
Since user groups don't seem to apply across the docker boundary, "torrent" will need to be given explicit permission to the private key file via an ACL:<br />
setfacl -m "u:torrent:rw" /etc/ssl/bretts.org/key.pem<br />
<br />
=== Radarr (Movie Downloads) ===<br />
sudo docker run -d --name radarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7878:7878 \<br />
-v /var/lib/torrent/radarr/config:/config -v /srv/videos/programs/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Radarr Lowres (Low Resolution (<=1080p) Movie Downloads) ===<br />
sudo docker run -d --name radarr-lowres -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7879:7878 \<br />
-v /var/lib/torrent/radarr-lowres/config:/config -v /srv/videos/lowres/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Sonarr (TV Downloads) ===<br />
sudo docker run -d --name=sonarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8989:8989 \<br />
-v /var/lib/torrent/sonarr/config:/config -v /srv/videos/programs/tv:/tv -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/sonarr<br />
<br />
=== Unifi ===<br />
sudo docker run -d --name=unifi-controller -e PUID=140 -e PGID=150 \<br />
-p 3478:3478/udp -p 10001:10001/udp -p 18080:18080 -p 18081:18081 -p 18443:18443 -p 18880:18880 -p 6789:6789 \<br />
-v /var/lib/unifi:/config \<br />
--restart unless-stopped \<br />
linuxserver/unifi-controller<br />
<br />
=== Home-Assistant (as part of host network) ===<br />
sudo docker run --init -d --name=home-assistant -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/home-assistant/config:/config -v /srv:/media -v /etc/ssl/bretts.org:/etc/ssl/bretts.org -v /var/www/html/arlo-snapshots:/arlo-snapshots \<br />
--restart unless-stopped \<br />
homeassistant/home-assistant<br />
<br />
=== Atlassian ===<br />
<br />
==== JIRA ====<br />
Note: In this instance JIRA is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name jira<br />
sudo docker run -d --name=jira -e TZ=Europe/London \<br />
-e ATL_TOMCAT_SCHEME=https -e ATL_TOMCAT_SECURE=true -e ATL_PROXY_NAME=jira.bretts.org -e ATL_PROXY_PORT=443 \<br />
-p 7980:8080 \<br />
-v jira:/var/atlassian/application-data/jira \<br />
--restart unless-stopped \<br />
atlassian/jira-software<br />
<br />
Docker JIRA runs with a uid and gid of 2001. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2001 jira-docker<br />
sudo adduser --system --no-create-home --uid 2001 --gid 2001 jira-docker<br />
<br />
==== Bitbucket====<br />
Note: In this instance Bitbucket is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bitbucket<br />
sudo docker run -d --name=bitbucket -e TZ=Europe/London \<br />
-e SERVER_SCHEME=https -e SERVER_SECURE=true -e SERVER_PROXY_NAME=bitbucket.bretts.org -e SERVER_PROXY_PORT=443 \<br />
-p 7990:7990 -p 7999:7999 \<br />
-v bitbucket:/var/atlassian/application-data/bitbucket \<br />
--restart unless-stopped \<br />
atlassian/bitbucket-server<br />
<br />
Docker Bitbucket runs with a uid and gid of 2003. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2003 bitbucket-docker<br />
sudo adduser --system --no-create-home --uid 2003 --gid 2003 bitbucket-docker<br />
<br />
==== Bamboo ====<br />
Note: In this instance Bamboo is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bamboo<br />
sudo docker run -d --name=bamboo -e TZ=Europe/London \<br />
-p 54663:54663 -p 7970:8085 \<br />
-v bamboo:/var/atlassian/application-data/bamboo \<br />
--restart unless-stopped \<br />
atlassian/bamboo-server<br />
<br />
===== Limitations =====<br />
* Bamboo runs with a uid of 1000, which means it's likely to clash with a real user in the containing host<br />
* Bamboo container doesn't support any reverse proxy configuration, which means hiding it behind nginx is likely to result in broken Application Links. This can be worked around by manually editing /opt/atlassian/bamboo/conf/server.xml, but those changes will be overwritten on every container upgrade.<br />
<br />
== Tips / Fixes ==<br />
<br />
=== Tautulli slow to start ===<br />
This may be due to an attempt to chown a large number of files. <br />
Login to the container:<br />
sudo docker exec -it <container> /bin/bash<br />
Disable the chown step by editing <code>/etc/cont-init.d/30-config</code> and commenting out the chown command.<br />
<br />
=== Adding an SSL cert for Unifi ===<br />
sudo openssl pkcs12 -export -inkey /etc/ssl/bretts.org/key.pem -in /etc/ssl/bretts.org/fullchain.pem -out /tmp/cert.p12 -name unifi -password pass:temppass<br />
sudo keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/unifi/data/keystore -srckeystore /tmp/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias unifi -noprompt<br />
sudo docker restart unifi-controller<br />
sudo rm /tmp/cert.p12<br />
<br />
=== Local DNS resolution fails on docker 18.09 ===<br />
This may be the result of a bug: https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1820278. Normally the container's /etc/resolv.conf should mirror that of the host, but in this case it seems to just be a default version. As a workaround, create /etc/docker/daemon.json with the following contents:<br />
<br />
{<br />
"dns": ["192.168.1.1", "8.8.8.8"],<br />
"dns-search": ["bretts.org"]<br />
}</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8516SnapRAID / MergerFS2022-04-26T09:00:48Z<p>Andrew: /* Identifying/Fixing a bad block */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
* Run ddrescue to identify the failing bytes on the disk:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map <br />
</pre><br />
* ddrescue will write out a map file containing start positions and sizes (all in bytes) of good (+) and bad (-) byte ranges<br />
* Get the block size for the volume with <code>tune2fs -l /dev/sdXY | grep "Block size"</code> (we'll call this '''B'''). In my case this is 4096.<br />
* Get the sector size for the disk with <code>fdisk -l /dev/sdX | grep Units</code> (we'll call this '''S'''). In my case this is 512.<br />
* Identify the starting sector for the /dev/sdXY volume (eg. /dev/sda1) with <code>fdisk -l /dev/sdX</code> (we'll call this '''T'''). In my case this is 2048.<br />
* Run ddrescuelog to list out the bad block locations (using the block size B):<br />
<pre><br />
ddrescuelog -b B --list-blocks=- sdX.map<br />
</pre><br />
* For each of these, convert to a block location in the volume (rather than the disk) by subtracting <code>(T * S / B)</code>. In my case that's 256. Let's call each of these bad volume blocks '''BB'''<br />
* Start debugfs for the volume with <code>debugfs /dev/sdXY</code><br />
* For each '''BB''' run:<br />
<pre><br />
testb BB<br />
</pre><br />
* If debugfs returns "Block BB not in use", then that block isn't part of a file and can safely be overwritten. If it returns an inode number (we'll call it '''I''') then you can convert that inode number to a file path with:<br />
<pre><br />
ncheck I<br />
</pre><br />
<br />
For any unused blocks, we need to do the following:<br />
* Check with dd that we’ve got the right block IDs. For each one of these reads we expect to see an error (and “0+0 records in”):<br />
<pre><br />
for block in `ddrescuelog -b B --list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/sdX of=/dev/null count=1 bs=B skip=$block<br />
done<br />
</pre><br />
<br />
* For each of the bad blocks, write zeros over the block to force it to be reallocated from spare space on the drive. Be careful here – getting it wrong will destroy data! Also note that when reading, “skip” is used to position the input stream, but here “seek” is used to position the output stream:<br />
<pre><br />
for block in `ddrescuelog -b B --list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/zero of=/dev/sdX count=1 bs=B seek=$block<br />
done<br />
</pre><br />
<br />
* It’s possible that dd will fail to write to the block, in which case try again with hdparm:<br />
<br />
** First check that we’ve got the right sectors (we expect to see “SG_IO: bad/missing sense data” for each sector on stderr, so we pipe stdout to /dev/null to avoid noise). Note that we use '''S''' as the block size for ddrescuelog, since hdparm deals in sectors:<br />
<pre><br />
for block in `ddrescuelog -b S --list-blocks=- sdX.map`<br />
do<br />
hdparm --read-sector $block /dev/sdX > /dev/null<br />
done<br />
</pre><br />
<br />
** Assuming we’ve seen the expected errors, write zeros over each of the bad sectors. Note that we use '''S''' as the block size for ddrescuelog, since hdparm deals in sectors. Be careful here – getting it wrong will destroy data! You may be asked to add a “—yes-i-know-what-i-am-doing” flag.<br />
<pre><br />
for block in `ddrescuelog -b S --list-blocks=- sdX.map`<br />
do<br />
hdparm --write-sector $block /dev/sdX<br />
done<br />
</pre><br />
<br />
* Check that the number of failing sectors has decreased:<br />
<pre><br />
smartctl -a /dev/sdX | grep Pending<br />
</pre><br />
<br />
For any files reported by `debugfs`:<br />
* Write random data over the bad file, which should force the pending sector to be marked bad and reallocated from spare space on the disk:<br />
<pre><br />
shred –v /path/to/bad/file<br />
</pre><br />
<br />
* Check that the number of failing sectors has decreased:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
* Assuming the number of pending sectors has decreased, it’s then ok to delete the bad file:<br />
<pre><br />
rm /path/to/bad/file<br />
</pre><br />
<br />
More details:<br />
* https://www.smartmontools.org/wiki/BadBlockHowto<br />
* [[Mdadm#Repairing_failing_disk_on_degraded_array]]</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8515SnapRAID / MergerFS2022-04-26T09:00:14Z<p>Andrew: /* Identifying/Fixing a bad block */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
* Run ddrescue to identify the failing bytes on the disk:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map <br />
</pre><br />
* ddrescue will write out a map file containing start positions and sizes (all in bytes) of good (+) and bad (-) byte ranges<br />
* Get the block size for the volume with <code>tune2fs -l /dev/sdXY | grep "Block size"</code> (we'll call this '''B'''). In my case this is 4096.<br />
* Get the sector size for the disk with <code>fdisk -l /dev/sdX | grep Units</code> (we'll call this '''S'''). In my case this is 512.<br />
* Identify the starting sector for the /dev/sdXY volume (eg. /dev/sda1) with <code>fdisk -l /dev/sdX</code> (we'll call this '''T'''). In my case this is 2048.<br />
* Run ddrescuelog to list out the bad block locations (using the block size B):<br />
<pre><br />
ddrescuelog -b B --list-blocks=- sdX.map<br />
</pre><br />
* For each of these, convert to a block location in the volume (rather than the disk) by subtracting <code>(T * S / B)</code>. In my case that's 256. Let's call each of these bad volume blocks '''BB'''<br />
* Start debugfs for the volume with <code>debugfs /dev/sdXY</code><br />
* For each '''BB''' run:<br />
<pre><br />
testb BB<br />
</pre><br />
* If debugfs returns "Block BB not in use", then that block isn't part of a file and can safely be overwritten. If it returns an inode number (we'll call it '''I''') then you can convert that inode number to a file path with:<br />
<pre><br />
ncheck I<br />
</pre><br />
<br />
For any unused blocks, we need to do the following:<br />
* Check with dd that we’ve got the right block IDs. For each one of these reads we expect to see an error (and “0+0 records in”):<br />
<pre><br />
for block in `ddrescuelog -b B --list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/sdX of=/dev/null count=1 bs=B skip=$block<br />
done<br />
</pre><br />
<br />
* For each of the bad blocks, write zeros over the block to force it to be reallocated from spare space on the drive. Be careful here – getting it wrong will destroy data! Also note that when reading, “skip” is used to position the input stream, but here “seek” is used to position the output stream:<br />
<pre><br />
for block in `ddrescuelog -b B --list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/zero of=/dev/sdX count=1 bs=B seek=$block<br />
done<br />
</pre><br />
<br />
* It’s possible that dd will fail to write to the block, in which case try again with hdparm:<br />
<br />
** First check that we’ve got the right sectors (we expect to see “SG_IO: bad/missing sense data” for each sector on stderr, so we pipe stdout to /dev/null to avoid noise). Note that we use '''S''' as the block size for ddrescuelog, since hdparm deals in sectors:<br />
<pre><br />
for block in `ddrescuelog -b S --list-blocks=- sdX.map`<br />
do<br />
hdparm --read-sector $block /dev/sdX > /dev/null<br />
done<br />
</pre><br />
<br />
** Assuming we’ve seen the expected errors, write zeros over each of the bad sectors. Note that we use '''S''' as the block size for ddrescuelog, since hdparm deals in sectors. Be careful here – getting it wrong will destroy data! You may be asked to add a “—yes-i-know-what-i-am-doing” flag.<br />
<pre><br />
for block in `ddrescuelog -b S --list-blocks=- sdX.map`<br />
do<br />
hdparm --write-sector $block /dev/sdX<br />
done<br />
</pre><br />
<br />
* Check that the number of failing sectors has decreased:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
For any files reported by `debugfs`:<br />
* Write random data over the bad file, which should force the pending sector to be marked bad and reallocated from spare space on the disk:<br />
<pre><br />
shred –v /path/to/bad/file<br />
</pre><br />
<br />
* Check that the number of failing sectors has decreased:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
* Assuming the number of pending sectors has decreased, it’s then ok to delete the bad file:<br />
<pre><br />
rm /path/to/bad/file<br />
</pre><br />
<br />
More details:<br />
* https://www.smartmontools.org/wiki/BadBlockHowto<br />
* [[Mdadm#Repairing_failing_disk_on_degraded_array]]</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8514SnapRAID / MergerFS2022-04-26T08:56:49Z<p>Andrew: /* Identifying/Fixing a bad block */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
* Run ddrescue to identify the failing bytes on the disk:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map <br />
</pre><br />
* ddrescue will write out a map file containing start positions and sizes (all in bytes) of good (+) and bad (-) byte ranges<br />
* Get the block size for the volume with <code>tune2fs -l /dev/sdXY | grep "Block size"</code> (we'll call this '''B'''). In my case this is 4096.<br />
* Get the sector size for the disk with <code>fdisk -l /dev/sdX | grep Units</code> (we'll call this '''S'''). In my case this is 512.<br />
* Identify the starting sector for the /dev/sdXY volume (eg. /dev/sda1) with <code>fdisk -l /dev/sdX</code> (we'll call this '''T'''). In my case this is 2048.<br />
* Run ddrescuelog to list out the bad block locations (using the block size B):<br />
<pre><br />
ddrescuelog -b B --list-blocks=- sdX.map<br />
</pre><br />
* For each of these, convert to a block location in the volume (rather than the disk) by subtracting <code>(T * S / B)</code>. In my case that's 256. Let's call each of these bad volume blocks '''BB'''<br />
* Start debugfs for the volume with <code>debugfs /dev/sdXY</code><br />
* For each '''BB''' run:<br />
<pre><br />
testb BB<br />
</pre><br />
* If debugfs returns "Block BB not in use", then that block isn't part of a file and can safely be overwritten. If it returns an inode number (we'll call it '''I''') then you can convert that inode number to a file path with:<br />
<pre><br />
ncheck I<br />
</pre><br />
<br />
For any unused blocks, we need to do the following:<br />
* Check with dd that we’ve got the right block IDs. For each one of these reads we expect to see an error (and “0+0 records in”):<br />
<pre><br />
for block in `ddrescuelog -b B --list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/sdX of=/dev/null count=1 bs=B skip=$block<br />
done<br />
</pre><br />
<br />
* For each of the bad blocks, write zeros over the block to force it to be reallocated from spare space on the drive. Be careful here – getting it wrong will destroy data! Also note that when reading, “skip” is used to position the input stream, but here “seek” is used to position the output stream:<br />
<pre><br />
for block in `ddrescuelog -b B --list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/zero of=/dev/sdX count=1 bs=B seek=$block<br />
done<br />
</pre><br />
<br />
* It’s possible that dd will fail to write to the block, in which case try again with hdparm:<br />
<br />
** First check that we’ve got the right sectors (we expect to see “SG_IO: bad/missing sense data” for each sector on stderr, so we pipe stdout to /dev/null to avoid noise). Note that we use '''S''' as the block size for ddrescuelog, since hdparm deals in sectors:<br />
<pre><br />
for block in `ddrescuelog -b S --list-blocks=- sdX.map`<br />
do<br />
hdparm –read-sector $block /dev/sdX > /dev/null<br />
done<br />
</pre><br />
<br />
** Assuming we’ve seen the expected errors, write zeros over each of the bad sectors. Note that we use '''S''' as the block size for ddrescuelog, since hdparm deals in sectors. Be careful here – getting it wrong will destroy data! You may be asked to add a “—yes-i-know-what-i-am-doing” flag.<br />
<pre><br />
for block in `ddrescuelog -b S --list-blocks=- sdX.map`<br />
do<br />
hdparm –write-sector $block /dev/sdX<br />
done<br />
</pre><br />
<br />
* Check that the number of failing sectors has decreased:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
For any files reported by `debugfs`:<br />
* Write random data over the bad file, which should force the pending sector to be marked bad and reallocated from spare space on the disk:<br />
<pre><br />
shred –v /path/to/bad/file<br />
</pre><br />
<br />
* Check that the number of failing sectors has decreased:<br />
<pre><br />
smartctl –a /dev/sdX | grep Pending<br />
</pre><br />
<br />
* Assuming the number of pending sectors has decreased, it’s then ok to delete the bad file:<br />
<pre><br />
rm /path/to/bad/file<br />
</pre><br />
<br />
More details:<br />
* https://www.smartmontools.org/wiki/BadBlockHowto<br />
* [[Mdadm#Repairing_failing_disk_on_degraded_array]]</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8513SnapRAID / MergerFS2022-04-26T08:45:59Z<p>Andrew: /* Identifying/Fixing a bad block */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
* Run ddrescue to identify the failing bytes on the disk:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map <br />
</pre><br />
* ddrescue will write out a map file containing start positions and sizes (all in bytes) of good (+) and bad (-) byte ranges<br />
* Get the block size for the volume with <code>tune2fs -l /dev/sdXY | grep "Block size"</code> (we'll call this '''B'''). In my case this is 4096.<br />
* Get the sector size for the disk with <code>fdisk -l /dev/sdX | grep Units</code> (we'll call this '''S'''). In my case this is 512.<br />
* Identify the starting sector for the /dev/sdXY volume (eg. /dev/sda1) with <code>fdisk -l /dev/sdX</code> (we'll call this '''T'''). In my case this is 2048.<br />
* Run ddrescuelog to list out the bad block locations (using the block size B):<br />
<pre><br />
ddrescuelog -b B --list-blocks=- sdX.map<br />
</pre><br />
* For each of these, convert to a block location in the volume (rather than the disk) by subtracting <code>(T * S / B)</code>. In my case that's 256. Let's call each of these bad volume blocks '''BB'''<br />
* Start debugfs for the volume with <code>debugfs /dev/sdXY</code><br />
* For each '''BB''' run:<br />
<pre><br />
testb BB<br />
</pre><br />
* If debugfs returns "Block BB not in use", then that block isn't part of a file and can safely be overwritten. If it returns an inode number (we'll call it '''I''') then you can convert that inode number to a file path with:<br />
<pre><br />
ncheck I<br />
</pre><br />
<br />
For any unused blocks, we need to do the following:<br />
* Check with dd that we’ve got the right block IDs. For each one of these reads we expect to see an error (and “0+0 records in”):<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/sdX of=/dev/null count=1 bs=512 skip=$block<br />
done<br />
</pre><br />
<br />
* For each of the bad blocks, write zeros over the block to force it to be reallocated from spare space on the drive. Be careful here – getting it wrong will destroy data! Also note that when reading, “skip” is used to position the input stream, but here “seek” is used to position the output stream:<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
dd if=/dev/zero of=/dev/sdX count=1 bs=512 seek=$block<br />
done<br />
</pre><br />
<br />
* It’s possible that dd will fail to write to the block, in which case try again with hdparm:<br />
<br />
** First check that we’ve got the right sectors (we expect to see “SG_IO: bad/missing sense data” for each sector on stderr, so we pipe stdout to /dev/null to avoid noise):<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
hdparm –read-sector $block /dev/sdX > /dev/null<br />
done<br />
</pre><br />
<br />
** Assuming we’ve seen the expected errors, write zeros over each of the bad sectors. Be careful here – getting it wrong will destroy data! You may be asked to add a “—yes-i-know-what-i-am-doing” flag.<br />
<pre><br />
for block in `ddrescuelog –list-blocks=- sdX.map`<br />
do<br />
hdparm –write-sector $block /dev/sdX<br />
done<br />
</pre><br />
<br />
More details:<br />
* https://www.smartmontools.org/wiki/BadBlockHowto<br />
* [[Mdadm#Repairing_failing_disk_on_degraded_array]]</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8512SnapRAID / MergerFS2022-04-26T08:31:50Z<p>Andrew: /* Identifying/Fixing a bad block */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
* Run ddrescue to identify the failing bytes on the disk:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map <br />
</pre><br />
* ddrescue will write out a map file containing start positions and sizes (all in bytes) of good (+) and bad (-) byte ranges<br />
* Get the block size for the volume with <code>tune2fs -l /dev/sdXY | grep "Block size"</code> (we'll call this '''B'''). In my case this is 4096.<br />
* Get the sector size for the disk with <code>fdisk -l /dev/sdX | grep Units</code> (we'll call this '''S'''). In my case this is 512.<br />
* Identify the starting sector for the /dev/sdXY volume (eg. /dev/sda1) with <code>fdisk -l /dev/sdX</code> (we'll call this '''T'''). In my case this is 2048.<br />
* Run ddrescuelog to list out the bad block locations (using the block size B):<br />
<pre><br />
ddrescuelog -b B --list-blocks=- sdX.map<br />
</pre><br />
* For each of these, convert to a block location in the volume (rather than the disk) by subtracting <code>(T * S / B)</code>. In my case that's 256.<br />
<br />
<br />
More details:<br />
* https://www.smartmontools.org/wiki/BadBlockHowto<br />
* [[Mdadm#Repairing_failing_disk_on_degraded_array]]</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8511SnapRAID / MergerFS2022-04-26T07:36:05Z<p>Andrew: /* Identifying/Fixing a bad block */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
* Run ddrescue to identify the failing bytes on the disk:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map <br />
</pre><br />
* ddrescue will write out a map file containing start positions and sizes (all in bytes). Look for lines ending "-", and convert the hex positions to decimal (we'll call this '''P''')<br />
* Get the block size for the volume with <code>tune2fs -l /dev/sdXY | grep "Block size"</code> (we'll call this '''B'''). In my case this is 4096.<br />
* Get the sector size for the disk with <code>fdisk -l /dev/sdX | grep Units</code> (we'll call this '''S'''). In my case this is 512.<br />
* Identify the starting sector for the /dev/sdXY volume (eg. /dev/sda1) with <code>fdisk -l /dev/sdX</code> (we'll call this '''T'''). In my case this is 2048.<br />
* To work out the volume block from the ddrescue position we need to calculate <code>(P - (T * S)) / B</code><br />
<br />
More details:<br />
* https://www.smartmontools.org/wiki/BadBlockHowto<br />
* [[Mdadm#Repairing_failing_disk_on_degraded_array]]</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8510SnapRAID / MergerFS2022-04-26T07:35:42Z<p>Andrew: /* Identifying/Fixing a bad block */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
* Run ddrescue to identify the failing bytes on the disk:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map <br />
</pre><br />
* ddrescue will write out a map file containing start positions and sizes (all in bytes). Look for lines ending "-", and convert the hex positions to decimal (we'll call this '''P''')<br />
* Get the block size for the partition with <code>tune2fs -l /dev/sdXY | grep "Block size"</code> (we'll call this '''B'''). In my case this is 4096.<br />
* Get the sector size for the disk with <code>fdisk -l /dev/sdX | grep Units</code> (we'll call this '''S'''). In my case this is 512.<br />
* Identify the starting sector for the /dev/sdXY volume (eg. /dev/sda1) with <code>fdisk -l /dev/sdX</code> (we'll call this '''T'''). In my case this is 2048.<br />
* To work out the volume block from the ddrescue position we need to calculate <code>(P - (T * S)) / B</code><br />
<br />
More details:<br />
* https://www.smartmontools.org/wiki/BadBlockHowto<br />
* [[Mdadm#Repairing_failing_disk_on_degraded_array]]</div>Andrewhttps://wiki.bretts.org/index.php?title=Machine_List&diff=8509Machine List2022-03-27T23:05:53Z<p>Andrew: /* maine */</p>
<hr />
<div>== History ==<br />
<br />
=== Pre-networking ===<br />
* (1988-1993) 8086 4.2MHz, MS-DOS 3.3<br />
* (1993-1995) 386SX 16MHz, Windows 3.1 + MS-DOS 5.0<br />
* (1995-1996) 486DX 50MHz, Windows 3.1 + MS-DOS 5.0/Windows 95<br />
<br />
=== indiana ===<br />
* (1996 - 1999) Fujitsu-ICL Pentium 90, Windows 95 + Windows NT 4.0<br />
** 1996?: +Orchid Righteous 3D<br />
<br />
=== colorado === <br />
* (1999 - 2001) Homebuilt Celeron 300, Windows 98/Me/XP<br />
** Matrox Millennium G200?<br />
* (2001 - 2002) Homebuilt Pentium 3 800, Windows XP<br />
* (2002 - 2005) Homebuilt Pentium 3 800, Mandriva Linux 8.0/9.0/10.0<br />
* (2005 - 2008) Dell Dimension 4300 (Pentium 4 1.8), Kubuntu 6.04<br />
** 2005: +128MB Sparkle GeForce MX4000 AGP <br />
** 2005: +Hauppauge WinTV-NOVA-T-MCE <br />
** 2006: +Seagate Barracuda 7200.10 320GB ST3320620A<br />
** 2006: +NEC-4570 16x DVD±RW/RAM Black <br />
* (2008 - 2010) Dell Dimension 4300 (Pentium 4 1.8), Ubuntu 8.04<br />
** 2008: +Seagate Barracuda 7200.10 750GB SATA2 3.5" <br />
** 2008: +SATA & IDE PCI Controller Card<br />
<br />
=== texas ===<br />
* (2002 - 2005) Dell Dimension 4300 (Pentium 4 1.8), Windows XP<br />
** GeForce 2 MX400?<br />
<br />
=== vermont ===<br />
* (2004 - 2006) Sony Vaio TR5MP (Pentium M 1.0), Windows XP<br />
* (2006 - 2008) Sony Vaio TR5MP (Pentium M 1.0), Ubuntu 6.10/7.04/7.10/8.04<br />
<br />
=== alaska ===<br />
* (2005 - 2007) Homebuilt Athlon64 3500+, Windows XP + Ubuntu 7.04 -> 8.04<br />
** Cooler Master Wave Master TAC-T01-E1C Silver All Aluminum Alloy ATX Mid Tower Computer Case<br />
** MSI K8N Diamond<br />
** AMD Athlon 64 3500+<br />
** 512MB Corsair Value Select 400MHz DDR Memory Stick <br />
** 128MB Sparkle GeForce 6600GT PCI-E <br />
** 300Gb Maxtor DiamondMax 10 ATA/133 6L300S0<br />
** NEC ND-3520 Silver <br />
** 460W Akasa PaxPower Ultra Quiet <br />
** 2006: +320GB Seagate Barracuda 7200.10 SATA2 ST3320620AS<br />
** 2007: +Sapphire X1950PRO 512MB GDDR3 PCI-Express<br />
* (2008 - 2014) Homebuilt Core 2 Duo 3.0, Windows XP/7 + Ubuntu 8.04 -> 9.10<br />
** 2008: +Gigabyte GA P35C-DS3R, iP35 Express, S775, PCI-E(x16), DDR2/3 1066/1333/800, SATA II, SATA RAID, ATX<br />
** 2008: +Intel Core 2 Duo E8400 2 x 3.00Ghz 6Mb Cache 1333 FSB Dual Core<br />
** 2008: +Corsair XMS6400 4GB DDR2 (2x2GB) 800Mhz Non-ECC<br />
** 2009: +GeForce GTX 260 Core 216<br />
** 2012: +Samsung 830 256GB SSD<br />
* (2014 - ) Homebuilt Core i7-4770, Windows 7/10<br />
** 2014: +Asus Z87-Plus Motherboard (Socket 1150, 4x DDR3, ATX, 2x PCI Express 3.0/2.0, 6x SATA 6.0 Gb/s, USB 3.0)<br />
** 2014: +Intel Core i7 4770 Quad Core Retail CPU (Socket 1150, 3.40GHz, 8MB, Haswell)<br />
** 2014: +Corsair CML16GX3M2A1600C10 Vengeance Low Profile 16GB (2x8GB) DDR3 1600 Mhz CL10 XMP<br />
** 2014: +Sapphire R9 270X 2GB Vapor-X 1050MHz GDDR 5 PCI Express Graphics Card<br />
** 2015: +ASUS Z87-A Motherboard<br />
** 2015: +Cooler Master Hyper 103 92mm Fan<br />
** 2016: +MSI GeForce GTX 970 GAMING Twin Frozr V 4GB Graphics Card (Maxwell)<br />
** 2016: +Samsung 850 EVO 500 GB 2.5 inch Solid State Drive<br />
** 2021: +Corsair RM650x PSU<br />
<br />
=== hawaii === <br />
* (2007 - 2009) Nintendo Wii<br />
<br />
=== montana ===<br />
* (2007 - ) Apple Mac Mini (Mid 2007), Mac OS X Tiger -> Lion<br />
** Core 2 Duo T7200 @ 2.0GHz<br />
** 4GB DDR2-667 RAM<br />
** 120GB HDD<br />
** Intel GMA 950<br />
<br />
=== pennsylvania ===<br />
* (2008 - 2012) Sony PS3<br />
<br />
=== nevada ===<br />
* (2009 - 2011) Samsung NC20 (VIA Nano 1.6), Windows XP + Ubuntu 9.04 -> 9.10<br />
<br />
=== maine ===<br />
* (2010 - ) Homebuilt Core i5 750, Ubuntu 9.10/12.04/14.04/16.04/18.04<br />
** Cooler Master ATCS 840 RC-840-KKN1-GP Black Aluminum ATX Full Tower Computer Case<br />
*** Front Case Fan failed<br />
** Gigabyte GA-P55-UD3R LGA 1156 Intel P55 ATX Intel Motherboard<br />
** Intel Core i5-750 Lynnfield 2.66GHz LGA 1156 95W Quad-Core Processor Model BX80605I5750<br />
** OCZ Gold 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory Model OCZ3G1333LV4GK<br />
** MSI N8400GS-D256H GeForce 8400 GS 256MB 64-bit GDDR2 PCI Express 2.0 x16 HDCP Ready Video Card<br />
** Seagate Barracuda LP ST31500541AS 1.5TB 5900 RPM SATA 3.0Gb/s 3.5"<br />
** Nexus NX-5000 R3 530W ATX12V v2.2 80 PLUS BRONZE Certified Modular Active PFC Power Supply<br />
** 2011 onwards: +Various SATA HDDs<br />
** 2013: +Crucial Ballistix 16GB (2x8GB) 240-pin DIMM, DDR3 PC3-12800<br />
** 2019: +Timetec Hynix IC 16GB (2x8GB) DDR3 PC3-12800 1600 MHz Non ECC Unbuffered 1.35V/1.5V Dual Rank 240 Pin UDIMM<br />
** 2021: +Corsair RM650 PSU<br />
** 2021: +Cooler Master Hyper 212 CPU Fan<br />
<br />
=== arizona ===<br />
* (2010 - ) Apple Macbook Air (Late 2010 13-inch), Mac OS X Snow Leopard -> macOS Sierra<br />
** Core 2 Duo SL9400 @ 1.86 GHz<br />
** 2GB DDR3-1066 RAM<br />
** 128GB SSD<br />
** Nvidia GeForce 320M<br />
<br />
=== dakota ===<br />
* (2012 - ) Apple Mac Mini (Mid 2011), Mac OS X Lion -> macOS Sierra<br />
** Core i5-2520M @ 2.5 GHz<br />
** 4GB DDR3-1333 RAM<br />
** 500GB SATA HDD<br />
** AMD Radeon HD 6630M<br />
<br />
=== router ===<br />
* (2016 - ) Homebuilt Celeron G1840, pfSense<br />
** IN Win EM050 Matx Black Case<br />
** MSI H97M-G43 Socket 1150 VGA DVI HDMI DisplayPort mATX Motherboard<br />
** Intel Celeron G1840 2.80GHz Socket 1150 2MB L3 Cache<br />
** Corsair 4GB DDR3 1333MHz Memory Module CL9(9-9-9-24) 1.5V Unbuffered Non-ECC<br />
** Corsair Force Series LS 60GB SATA 2.5inch SSD<br />
** 2021: +Corsair RM650 PSU<br />
<br />
=== oregon ===<br />
* (2016 - ) Apple MacBook Pro (Late 2016 13-inch Touch Bar), macOS Sierra -> Mojave<br />
** Core i5-6287U @ 3.1GHz<br />
** 16GB DDR3-2133 RAM<br />
** 256GB PCIe SSD<br />
** Intel Iris Graphics 550<br />
<br />
=== virginia ===<br />
* (2021 - ) Homebuilt Ryzen 5 5600X, Windows 10<br />
** Phanteks Evolv X Antracite Grey Case<br />
** Gigabyte AMD Ryzen X570 AORUS PRO<br />
** Ryzen 5 5600X @ 3.7Ghz<br />
** Corsair Vengeance LPX Black 32GB 3600MHz 2x16GB CAS 18-22-22-42 DDR4<br />
** Corsair Force MP600 1TB M.2 PCIe Gen 4 NVMe SSD<br />
** Corsair RM850 PSU</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8508SnapRAID / MergerFS2022-03-07T14:39:30Z<p>Andrew: /* Identifying/Fixing a bad block */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
* Run ddrescue to identify the failing bytes on the disk:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map <br />
</pre><br />
* ddrescue will write out a map file containing start positions and sizes (all in bytes). Look for lines ending "-", and convert the hex positions to decimal (we'll call this '''P''')<br />
* Get the block size for the disk with <code>tune2fs -l /dev/sdX | grep "Block size"</code> (we'll call this '''B'''). In my case this is 4096.<br />
* Get the sector size for the disk with <code>fdisk -l /dev/sdX | grep Units</code> (we'll call this '''S'''). In my case this is 512.<br />
* Identify the starting sector for the /dev/sdXY volume (eg. /dev/sda1) with <code>fdisk -l /dev/sdX</code> (we'll call this '''T'''). In my case this is 2048.<br />
* To work out the volume block from the ddrescue position we need to calculate <code>(P - (T * S)) / B</code><br />
<br />
More details:<br />
* https://www.smartmontools.org/wiki/BadBlockHowto<br />
* [[Mdadm#Repairing_failing_disk_on_degraded_array]]</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8507SnapRAID / MergerFS2022-03-07T14:38:34Z<p>Andrew: /* Identifying/Fixing a bad block */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
* Run ddrescue to identify the failing bytes on the disk:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map <br />
</pre><br />
* ddrescue will write out a map file containing start positions and sizes (all in bytes). Look for lines ending "-", and convert the hex positions to decimal (we'll call this '''P''')<br />
* Get the block size for the disk with <code>tune2fs -l /dev/sdX | grep "Block size"</code> (we'll call this '''B''')<br />
* Get the sector size for the disk with <code>fdisk -l /dev/sdX | grep Units</code> (we'll call this '''S''')<br />
* Identify the starting sector for the /dev/sdXY volume (eg. /dev/sda1) with <code>fdisk -l /dev/sdX</code> (we'll call this '''T''')<br />
* To work out the volume block from the ddrescue position we need to calculate <code>(P - (T * S)) / B</code><br />
<br />
More details:<br />
* https://www.smartmontools.org/wiki/BadBlockHowto<br />
* [[Mdadm#Repairing_failing_disk_on_degraded_array]]</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8506SnapRAID / MergerFS2022-03-07T14:38:19Z<p>Andrew: /* Identifying/Fixing a bad block */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
* https://www.smartmontools.org/wiki/BadBlockHowto<br />
* [[Mdadm#Repairing_failing_disk_on_degraded_array]]<br />
<br />
* Run ddrescue to identify the failing bytes on the disk:<br />
<pre><br />
ddrescue --ask --verbose --binary-prefixes --idirect --force /dev/sdX /dev/null sdX.map <br />
</pre><br />
* ddrescue will write out a map file containing start positions and sizes (all in bytes). Look for lines ending "-", and convert the hex positions to decimal (we'll call this '''P''')<br />
* Get the block size for the disk with <code>tune2fs -l /dev/sdX | grep "Block size"</code> (we'll call this '''B''')<br />
* Get the sector size for the disk with <code>fdisk -l /dev/sdX | grep Units</code> (we'll call this '''S''')<br />
* Identify the starting sector for the /dev/sdXY volume (eg. /dev/sda1) with <code>fdisk -l /dev/sdX</code> (we'll call this '''T''')<br />
* To work out the volume block from the ddrescue position we need to calculate <code>(P - (T * S)) / B</code></div>Andrewhttps://wiki.bretts.org/index.php?title=Docker&diff=8505Docker2022-02-04T18:22:41Z<p>Andrew: /* Home-Assistant (as part of host network) */</p>
<hr />
<div>== Useful Commands ==<br />
<br />
; docker ps -a: List all containers<br />
; docker container inspect <container>: Show details of <container><br />
; docker logs <container>: Show logs for <container><br />
; docker exec -it <container> /bin/bash: Start an interactive shell in <container><br />
<br />
== Updating container ==<br />
<br />
=== Manually ===<br />
sudo docker pull <image><br />
sudo docker stop <container><br />
sudo docker rm <container><br />
<docker run command><br />
<br />
=== Automatically ===<br />
sudo docker run --rm -v /var/run/docker.sock:/var/run/docker.sock taisun/updater --oneshot <container><br />
<br />
== Containers ==<br />
=== Plex ===<br />
First, setup NVIDIA:<br />
* Get latest drivers: https://linuxconfig.org/how-to-install-the-nvidia-drivers-on-ubuntu-18-04-bionic-beaver-linux<br />
* Enable NVIDIA extensions for docker: https://forums.plex.tv/t/how-to-setup-nvidia-hw-acceleration-in-ubuntu-docker/288625/7<br />
<br />
Next, get your claim token: https://www.plex.tv/claim/<br />
<br />
Finally, create the container with the claim token substituted:<br />
sudo docker run -d --name plex --network=host -e PLEX_UID=111 -e PLEX_GID=127 -e TZ=Europe/London -e PLEX_CLAIM=<CLAIM_TOKEN> \<br />
-v /var/lib/plexmediaserver:/config -v /srv:/srv \<br />
--restart unless-stopped \<br />
plexinc/pms-docker:plexpass<br />
<br />
=== Tautulli (Plex Monitoring/Notifications) ===<br />
sudo docker run -d --name tautulli -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8181:8181 \<br />
-v /var/lib/torrent/tautulli/config:/config -v /var/lib/plexmediaserver/Library/Logs:/logs \<br />
--restart unless-stopped \<br />
linuxserver/tautulli<br />
<br />
=== Jackett (Torrent Gateway) ===<br />
sudo docker run -d --name=jackett -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 9117:9117 \<br />
-v /var/lib/torrent/jackett/config:/config -v /var/lib/torrent/jackett/downloads:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/jackett<br />
<br />
=== Deluge ===<br />
sudo docker run -d --name deluge -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/torrent/deluged/config:/config -v /srv/incoming/torrents/deluge:/srv/incoming/torrents/deluge \<br />
-v /etc/ssl/bretts.org:/etc/ssl/bretts.org \<br />
--restart unless-stopped \<br />
linuxserver/deluge<br />
<br />
Since user groups don't seem to apply across the docker boundary, "torrent" will need to be given explicit permission to the private key file via an ACL:<br />
setfacl -m "u:torrent:rw" /etc/ssl/bretts.org/key.pem<br />
<br />
=== Radarr (Movie Downloads) ===<br />
sudo docker run -d --name radarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7878:7878 \<br />
-v /var/lib/torrent/radarr/config:/config -v /srv/videos/programs/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Radarr Lowres (Low Resolution (<=1080p) Movie Downloads) ===<br />
sudo docker run -d --name radarr-lowres -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 7879:7878 \<br />
-v /var/lib/torrent/radarr-lowres/config:/config -v /srv/videos/lowres/movies:/movies -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/radarr<br />
<br />
=== Sonarr (TV Downloads) ===<br />
sudo docker run -d --name=sonarr -e PUID=127 -e PGID=138 -e TZ=Europe/London \<br />
-p 8989:8989 \<br />
-v /var/lib/torrent/sonarr/config:/config -v /srv/videos/programs/tv:/tv -v /srv/incoming/torrents/deluge:/downloads \<br />
--restart unless-stopped \<br />
linuxserver/sonarr<br />
<br />
=== Unifi ===<br />
sudo docker run -d --name=unifi-controller -e PUID=140 -e PGID=150 \<br />
-p 3478:3478/udp -p 10001:10001/udp -p 18080:18080 -p 18081:18081 -p 18443:18443 -p 18880:18880 -p 6789:6789 \<br />
-v /var/lib/unifi:/config \<br />
--restart unless-stopped \<br />
linuxserver/unifi-controller<br />
<br />
=== Home-Assistant (as part of host network) ===<br />
sudo docker run --init -d --name=home-assistant -e TZ=Europe/London \<br />
--net=host \<br />
-v /var/lib/home-assistant/config:/config -v /srv:/media -v /etc/ssl/bretts.org:/etc/ssl/bretts.org -v /var/www/html/arlo-snapshots:/arlo-snapshots \<br />
--restart unless-stopped \<br />
homeassistant/home-assistant<br />
<br />
=== Home-Assistant (with dedicated IP) - DEPRECATED ===<br />
sudo docker network create -d macvlan \<br />
--gateway 192.168.1.1 --subnet 192.168.1.0/24 --ip-range 192.168.1.231/29 -o parent=eth0 \<br />
docker-subnet<br />
sudo docker run --init -d --name=home-assistant -e TZ=Europe/London \<br />
--net docker-subnet --ip 192.168.1.231 \<br />
-v /var/lib/homeassistant/docker:/config -v /var/www/html/arlo-snapshots:/arlo-snapshots \<br />
--restart unless-stopped \<br />
homeassistant/home-assistant<br />
<br />
=== Atlassian ===<br />
<br />
==== JIRA ====<br />
Note: In this instance JIRA is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name jira<br />
sudo docker run -d --name=jira -e TZ=Europe/London \<br />
-e ATL_TOMCAT_SCHEME=https -e ATL_TOMCAT_SECURE=true -e ATL_PROXY_NAME=jira.bretts.org -e ATL_PROXY_PORT=443 \<br />
-p 7980:8080 \<br />
-v jira:/var/atlassian/application-data/jira \<br />
--restart unless-stopped \<br />
atlassian/jira-software<br />
<br />
Docker JIRA runs with a uid and gid of 2001. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2001 jira-docker<br />
sudo adduser --system --no-create-home --uid 2001 --gid 2001 jira-docker<br />
<br />
==== Bitbucket====<br />
Note: In this instance Bitbucket is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bitbucket<br />
sudo docker run -d --name=bitbucket -e TZ=Europe/London \<br />
-e SERVER_SCHEME=https -e SERVER_SECURE=true -e SERVER_PROXY_NAME=bitbucket.bretts.org -e SERVER_PROXY_PORT=443 \<br />
-p 7990:7990 -p 7999:7999 \<br />
-v bitbucket:/var/atlassian/application-data/bitbucket \<br />
--restart unless-stopped \<br />
atlassian/bitbucket-server<br />
<br />
Docker Bitbucket runs with a uid and gid of 2003. To ensure they show up as a named user in the hosting system you can run:<br />
sudo addgroup --gid 2003 bitbucket-docker<br />
sudo adduser --system --no-create-home --uid 2003 --gid 2003 bitbucket-docker<br />
<br />
==== Bamboo ====<br />
Note: In this instance Bamboo is configured (with `-v`) using a named volume, rather than a bind mount<br />
sudo docker volume create --name bamboo<br />
sudo docker run -d --name=bamboo -e TZ=Europe/London \<br />
-p 54663:54663 -p 7970:8085 \<br />
-v bamboo:/var/atlassian/application-data/bamboo \<br />
--restart unless-stopped \<br />
atlassian/bamboo-server<br />
<br />
===== Limitations =====<br />
* Bamboo runs with a uid of 1000, which means it's likely to clash with a real user in the containing host<br />
* Bamboo container doesn't support any reverse proxy configuration, which means hiding it behind nginx is likely to result in broken Application Links. This can be worked around by manually editing /opt/atlassian/bamboo/conf/server.xml, but those changes will be overwritten on every container upgrade.<br />
<br />
== Tips / Fixes ==<br />
<br />
=== Tautulli slow to start ===<br />
This may be due to an attempt to chown a large number of files. <br />
Login to the container:<br />
sudo docker exec -it <container> /bin/bash<br />
Disable the chown step by editing <code>/etc/cont-init.d/30-config</code> and commenting out the chown command.<br />
<br />
=== Adding an SSL cert for Unifi ===<br />
sudo openssl pkcs12 -export -inkey /etc/ssl/bretts.org/key.pem -in /etc/ssl/bretts.org/fullchain.pem -out /tmp/cert.p12 -name unifi -password pass:temppass<br />
sudo keytool -importkeystore -deststorepass aircontrolenterprise -destkeypass aircontrolenterprise -destkeystore /var/lib/unifi/data/keystore -srckeystore /tmp/cert.p12 -srcstoretype PKCS12 -srcstorepass temppass -alias unifi -noprompt<br />
sudo docker restart unifi-controller<br />
sudo rm /tmp/cert.p12<br />
<br />
=== Local DNS resolution fails on docker 18.09 ===<br />
This may be the result of a bug: https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1820278. Normally the container's /etc/resolv.conf should mirror that of the host, but in this case it seems to just be a default version. As a workaround, create /etc/docker/daemon.json with the following contents:<br />
<br />
{<br />
"dns": ["192.168.1.1", "8.8.8.8"],<br />
"dns-search": ["bretts.org"]<br />
}</div>Andrewhttps://wiki.bretts.org/index.php?title=HomeAssistant&diff=8504HomeAssistant2022-02-04T17:19:32Z<p>Andrew: /* Home Assistant */</p>
<hr />
<div>= Home Assistant =<br />
<br />
==Updating==<br />
See [[Docker#Home-Assistant_.28as_part_of_host_network.29|Docker]]<br />
<br />
==Alexa Media Player==<br />
Ensure external URL is set to https://maine.bretts.org:8123/ otherwise nginx will rewrite the request back to the Home Assistant home page.<br />
<br />
== Devices Missing ==<br />
If you've added new cameras you need to give the aarlo user permission to see them with Settings -> Grant Access in the Arlo app</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8503SnapRAID / MergerFS2022-01-20T09:05:03Z<p>Andrew: /* Identifying/Fixing a bad block */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
* https://www.smartmontools.org/wiki/BadBlockHowto<br />
* [[Mdadm#Repairing_failing_disk_on_degraded_array]]</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8502SnapRAID / MergerFS2022-01-20T09:03:51Z<p>Andrew: /* Identifying/Fixing a bad block */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
https://www.smartmontools.org/wiki/BadBlockHowto<br />
[Mdadm#Repairing_failing_disk_on_degraded_array]</div>Andrewhttps://wiki.bretts.org/index.php?title=SnapRAID_/_MergerFS&diff=8501SnapRAID / MergerFS2022-01-20T08:56:32Z<p>Andrew: /* SnapRAID / MergerFS */</p>
<hr />
<div>= SnapRAID / MergerFS =<br />
<br />
== Setup ==<br />
https://zackreed.me/setting-up-snapraid-on-ubuntu/<br />
<br />
Note that if (like me) you use a dedicated snapraid content directory then you'll need to create that by hand for each disk with:<br />
<br />
mkdir /mnt/data/disk1/.snapraid<br />
<br />
== Partitioning a new data disk ==<br />
Note: "-m 2" here reserves 2% of the filesystem for root-owned files (eg. .../.snapraid/content)<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 2 -T largefile4 /dev/sdX1<br />
<br />
== Partitioning a new parity disk ==<br />
Note: "-m 0" here reserves 0% of the filesystem, ensuring that the parity disks are slightly larger than the data disks<br />
sudo parted -a optimal -s /dev/sdX -- mklabel gpt mkpart primary 0% 100%<br />
sudo mkfs.ext4 -m 0 -T largefile4 /dev/sdX1<br />
<br />
== Adding a new data disk to mergerfs ==<br />
From: https://zackreed.me/mergerfs-neat-tricks/<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '+>/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Removing a data disk from mergerfs ==<br />
From within the root of the mergerfs filesystem (eg. /srv)<br />
xattr -w user.mergerfs.srcmounts '-/mnt/data/disk4/srv' .mergerfs<br />
<br />
== Forcing a resync ==<br />
<br />
sudo snapraid sync<br />
<br />
== Identifying/Fixing a bad block ==<br />
https://www.smartmontools.org/wiki/BadBlockHowto</div>Andrewhttps://wiki.bretts.org/index.php?title=Machine_List&diff=8500Machine List2022-01-16T09:47:15Z<p>Andrew: /* virginia */</p>
<hr />
<div>== History ==<br />
<br />
=== Pre-networking ===<br />
* (1988-1993) 8086 4.2MHz, MS-DOS 3.3<br />
* (1993-1995) 386SX 16MHz, Windows 3.1 + MS-DOS 5.0<br />
* (1995-1996) 486DX 50MHz, Windows 3.1 + MS-DOS 5.0/Windows 95<br />
<br />
=== indiana ===<br />
* (1996 - 1999) Fujitsu-ICL Pentium 90, Windows 95 + Windows NT 4.0<br />
** 1996?: +Orchid Righteous 3D<br />
<br />
=== colorado === <br />
* (1999 - 2001) Homebuilt Celeron 300, Windows 98/Me/XP<br />
** Matrox Millennium G200?<br />
* (2001 - 2002) Homebuilt Pentium 3 800, Windows XP<br />
* (2002 - 2005) Homebuilt Pentium 3 800, Mandriva Linux 8.0/9.0/10.0<br />
* (2005 - 2008) Dell Dimension 4300 (Pentium 4 1.8), Kubuntu 6.04<br />
** 2005: +128MB Sparkle GeForce MX4000 AGP <br />
** 2005: +Hauppauge WinTV-NOVA-T-MCE <br />
** 2006: +Seagate Barracuda 7200.10 320GB ST3320620A<br />
** 2006: +NEC-4570 16x DVD±RW/RAM Black <br />
* (2008 - 2010) Dell Dimension 4300 (Pentium 4 1.8), Ubuntu 8.04<br />
** 2008: +Seagate Barracuda 7200.10 750GB SATA2 3.5" <br />
** 2008: +SATA & IDE PCI Controller Card<br />
<br />
=== texas ===<br />
* (2002 - 2005) Dell Dimension 4300 (Pentium 4 1.8), Windows XP<br />
** GeForce 2 MX400?<br />
<br />
=== vermont ===<br />
* (2004 - 2006) Sony Vaio TR5MP (Pentium M 1.0), Windows XP<br />
* (2006 - 2008) Sony Vaio TR5MP (Pentium M 1.0), Ubuntu 6.10/7.04/7.10/8.04<br />
<br />
=== alaska ===<br />
* (2005 - 2007) Homebuilt Athlon64 3500+, Windows XP + Ubuntu 7.04 -> 8.04<br />
** Cooler Master Wave Master TAC-T01-E1C Silver All Aluminum Alloy ATX Mid Tower Computer Case<br />
** MSI K8N Diamond<br />
** AMD Athlon 64 3500+<br />
** 512MB Corsair Value Select 400MHz DDR Memory Stick <br />
** 128MB Sparkle GeForce 6600GT PCI-E <br />
** 300Gb Maxtor DiamondMax 10 ATA/133 6L300S0<br />
** NEC ND-3520 Silver <br />
** 460W Akasa PaxPower Ultra Quiet <br />
** 2006: +320GB Seagate Barracuda 7200.10 SATA2 ST3320620AS<br />
** 2007: +Sapphire X1950PRO 512MB GDDR3 PCI-Express<br />
* (2008 - 2014) Homebuilt Core 2 Duo 3.0, Windows XP/7 + Ubuntu 8.04 -> 9.10<br />
** 2008: +Gigabyte GA P35C-DS3R, iP35 Express, S775, PCI-E(x16), DDR2/3 1066/1333/800, SATA II, SATA RAID, ATX<br />
** 2008: +Intel Core 2 Duo E8400 2 x 3.00Ghz 6Mb Cache 1333 FSB Dual Core<br />
** 2008: +Corsair XMS6400 4GB DDR2 (2x2GB) 800Mhz Non-ECC<br />
** 2009: +GeForce GTX 260 Core 216<br />
** 2012: +Samsung 830 256GB SSD<br />
* (2014 - ) Homebuilt Core i7-4770, Windows 7/10<br />
** 2014: +Asus Z87-Plus Motherboard (Socket 1150, 4x DDR3, ATX, 2x PCI Express 3.0/2.0, 6x SATA 6.0 Gb/s, USB 3.0)<br />
** 2014: +Intel Core i7 4770 Quad Core Retail CPU (Socket 1150, 3.40GHz, 8MB, Haswell)<br />
** 2014: +Corsair CML16GX3M2A1600C10 Vengeance Low Profile 16GB (2x8GB) DDR3 1600 Mhz CL10 XMP<br />
** 2014: +Sapphire R9 270X 2GB Vapor-X 1050MHz GDDR 5 PCI Express Graphics Card<br />
** 2015: +ASUS Z87-A Motherboard<br />
** 2015: +Cooler Master Hyper 103 92mm Fan<br />
** 2016: +MSI GeForce GTX 970 GAMING Twin Frozr V 4GB Graphics Card (Maxwell)<br />
** 2016: +Samsung 850 EVO 500 GB 2.5 inch Solid State Drive<br />
** 2021: +Corsair RM650x PSU<br />
<br />
=== hawaii === <br />
* (2007 - 2009) Nintendo Wii<br />
<br />
=== montana ===<br />
* (2007 - ) Apple Mac Mini (Mid 2007), Mac OS X Tiger -> Lion<br />
** Core 2 Duo T7200 @ 2.0GHz<br />
** 4GB DDR2-667 RAM<br />
** 120GB HDD<br />
** Intel GMA 950<br />
<br />
=== pennsylvania ===<br />
* (2008 - 2012) Sony PS3<br />
<br />
=== nevada ===<br />
* (2009 - 2011) Samsung NC20 (VIA Nano 1.6), Windows XP + Ubuntu 9.04 -> 9.10<br />
<br />
=== maine ===<br />
* (2010 - ) Homebuilt Core i5 750, Ubuntu 9.10/12.04/14.04/16.04/18.04<br />
** Cooler Master ATCS 840 RC-840-KKN1-GP Black Aluminum ATX Full Tower Computer Case<br />
** Gigabyte GA-P55-UD3R LGA 1156 Intel P55 ATX Intel Motherboard<br />
** Intel Core i5-750 Lynnfield 2.66GHz LGA 1156 95W Quad-Core Processor Model BX80605I5750<br />
** OCZ Gold 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory Model OCZ3G1333LV4GK<br />
** MSI N8400GS-D256H GeForce 8400 GS 256MB 64-bit GDDR2 PCI Express 2.0 x16 HDCP Ready Video Card<br />
** Seagate Barracuda LP ST31500541AS 1.5TB 5900 RPM SATA 3.0Gb/s 3.5"<br />
** Nexus NX-5000 R3 530W ATX12V v2.2 80 PLUS BRONZE Certified Modular Active PFC Power Supply<br />
** 2011 onwards: +Various SATA HDDs<br />
** 2013: +Crucial Ballistix 16GB (2x8GB) 240-pin DIMM, DDR3 PC3-12800<br />
** 2019: +Timetec Hynix IC 16GB (2x8GB) DDR3 PC3-12800 1600 MHz Non ECC Unbuffered 1.35V/1.5V Dual Rank 240 Pin UDIMM<br />
** 2021: +Corsair RM650 PSU<br />
** 2021: +Cooler Master Hyper 212 CPU Fan<br />
<br />
=== arizona ===<br />
* (2010 - ) Apple Macbook Air (Late 2010 13-inch), Mac OS X Snow Leopard -> macOS Sierra<br />
** Core 2 Duo SL9400 @ 1.86 GHz<br />
** 2GB DDR3-1066 RAM<br />
** 128GB SSD<br />
** Nvidia GeForce 320M<br />
<br />
=== dakota ===<br />
* (2012 - ) Apple Mac Mini (Mid 2011), Mac OS X Lion -> macOS Sierra<br />
** Core i5-2520M @ 2.5 GHz<br />
** 4GB DDR3-1333 RAM<br />
** 500GB SATA HDD<br />
** AMD Radeon HD 6630M<br />
<br />
=== router ===<br />
* (2016 - ) Homebuilt Celeron G1840, pfSense<br />
** IN Win EM050 Matx Black Case<br />
** MSI H97M-G43 Socket 1150 VGA DVI HDMI DisplayPort mATX Motherboard<br />
** Intel Celeron G1840 2.80GHz Socket 1150 2MB L3 Cache<br />
** Corsair 4GB DDR3 1333MHz Memory Module CL9(9-9-9-24) 1.5V Unbuffered Non-ECC<br />
** Corsair Force Series LS 60GB SATA 2.5inch SSD<br />
** 2021: +Corsair RM650 PSU<br />
<br />
=== oregon ===<br />
* (2016 - ) Apple MacBook Pro (Late 2016 13-inch Touch Bar), macOS Sierra -> Mojave<br />
** Core i5-6287U @ 3.1GHz<br />
** 16GB DDR3-2133 RAM<br />
** 256GB PCIe SSD<br />
** Intel Iris Graphics 550<br />
<br />
=== virginia ===<br />
* (2021 - ) Homebuilt Ryzen 5 5600X, Windows 10<br />
** Phanteks Evolv X Antracite Grey Case<br />
** Gigabyte AMD Ryzen X570 AORUS PRO<br />
** Ryzen 5 5600X @ 3.7Ghz<br />
** Corsair Vengeance LPX Black 32GB 3600MHz 2x16GB CAS 18-22-22-42 DDR4<br />
** Corsair Force MP600 1TB M.2 PCIe Gen 4 NVMe SSD<br />
** Corsair RM850 PSU</div>Andrewhttps://wiki.bretts.org/index.php?title=Backups&diff=8499Backups2022-01-02T22:19:16Z<p>Andrew: </p>
<hr />
<div>== Update restic ==<br />
restic self-update<br />
<br />
== Listing previous snapshots ==<br />
sudo -i<br />
. /etc/restic-init<br />
restic snapshots<br />
<br />
== Deleting old snapshots ==<br />
Substitute `2021-12` with the month (or months) for which you want to keep a full history. Note: `2019-03-26` is the first-ever backup in this instance, so we want to keep that too.<br />
for snap in `restic snapshots -c | grep maine | sed -e 's!.* !!' | grep -v -- '-01$' | grep -v '2019-03-26' | grep -v '2021-12'`<br />
do<br />
restic forget --path /var/backup/snapshot/latest --tag $snap --keep-last=-1<br />
done<br />
<br />
== Pruning old history ==<br />
Note: This will also trim the size of the /root/.cache/restic directory<br />
sudo -i<br />
. /etc/restic-init<br />
restic prune</div>Andrewhttps://wiki.bretts.org/index.php?title=Backups&diff=8498Backups2021-12-31T16:05:00Z<p>Andrew: /* Deleting old snapshots */</p>
<hr />
<div>== Listing previous snapshots ==<br />
sudo -i<br />
. /etc/restic-init<br />
restic snapshots<br />
<br />
== Deleting old snapshots ==<br />
Substitute `2021-12` with the month (or months) for which you want to keep a full history. Note: `2019-03-26` is the first-ever backup in this instance, so we want to keep that too.<br />
for snap in `restic snapshots -c | grep maine | sed -e 's!.* !!' | grep -v -- '-01$' | grep -v '2019-03-26' | grep -v '2021-12'`<br />
do<br />
restic forget --path /var/backup/snapshot/latest --tag $snap --keep-last=-1<br />
done<br />
<br />
== Pruning old history ==<br />
Note: This will also trim the size of the /root/.cache/restic directory<br />
sudo -i<br />
. /etc/restic-init<br />
restic prune</div>Andrewhttps://wiki.bretts.org/index.php?title=Backups&diff=8497Backups2021-12-31T16:04:32Z<p>Andrew: /* Deleting old snapshots */</p>
<hr />
<div>== Listing previous snapshots ==<br />
sudo -i<br />
. /etc/restic-init<br />
restic snapshots<br />
<br />
== Deleting old snapshots ==<br />
Substitute `2021-12` with the month (or months) for which you want to keep a full history<br />
for snap in `restic snapshots -c | grep maine | sed -e 's!.* !!' | grep -v -- '-01$' | grep -v '2019-03-26' | grep -v '2021-12'`<br />
do<br />
restic forget --path /var/backup/snapshot/latest --tag $snap --keep-last=-1<br />
done<br />
<br />
== Pruning old history ==<br />
Note: This will also trim the size of the /root/.cache/restic directory<br />
sudo -i<br />
. /etc/restic-init<br />
restic prune</div>Andrewhttps://wiki.bretts.org/index.php?title=Backups&diff=8496Backups2021-12-31T16:03:16Z<p>Andrew: </p>
<hr />
<div>== Listing previous snapshots ==<br />
sudo -i<br />
. /etc/restic-init<br />
restic snapshots<br />
<br />
== Deleting old snapshots ==<br />
Substitute `2021-12` with the month (or months) for which you want to keep a full history<br />
for snap in `restic snapshots -c | grep maine | sed -e 's!.* !!' | grep -v -- '-01$' | grep -v '2021-12'`<br />
do<br />
restic forget --path /var/backup/snapshot/latest --tag $snap --keep-last=-1<br />
done<br />
<br />
== Pruning old history ==<br />
Note: This will also trim the size of the /root/.cache/restic directory<br />
sudo -i<br />
. /etc/restic-init<br />
restic prune</div>Andrew