commit 46da30106421b3c41b3ac8ffa7676304ced802f0
Author: lauralani
Date:   Fri Sep 1 08:20:19 2023 +0200

    ✨ initial commit

diff --git a/.woodpecker/upload.yml b/.woodpecker/upload.yml
new file mode 100644
index 0000000..88c9975
--- /dev/null
+++ b/.woodpecker/upload.yml
@@ -0,0 +1,20 @@
+when:
+  event: push
+  branch: main
+
+steps:
+  upload:
+    image: alpine:latest
+    secrets:
+      - SSH_KEY
+      - TARGET_SERVER
+      - TARGET_USER
+    environment:
+      - TARGET_PATH=/webroot/web-archive.lauka.net
+    commands:
+      - apk add --update --no-cache openssh rsync git
+      - mkdir -p $HOME/.ssh
+      - echo "$SSH_KEY" > $HOME/.ssh/id_ed25519
+      - chmod 0600 $HOME/.ssh/id_ed25519
+      - ssh-keyscan -t ed25519 $TARGET_SERVER >> $HOME/.ssh/known_hosts
+      - rsync -avh --delete ./ $TARGET_USER@$TARGET_SERVER:$TARGET_PATH --exclude 'readme.md' --exclude '.woodpecker' --exclude '.git'

diff --git a/802.1x und dynamische VLANs.html b/802.1x und dynamische VLANs.html
new file mode 100644
index 0000000..c3fb37c
--- /dev/null
+++ b/802.1x und dynamische VLANs.html
@@ -0,0 +1,926 @@

802.1X und dynamische VLANs im WLAN mit Ubiquiti Unifi - Jans Blog
802.1X and Dynamic VLANs in Wi-Fi with Ubiquiti Unifi
Here it is at last: my follow-up to the quite successful post about dynamic VLANs combined with 802.1X on Ubiquiti Unifi hardware, Einrichtung von 802.1X und Dynamic VLANs mit Ubiquiti USG Pro. This article describes how to set up and use Unifi access points with 802.1X and dynamic VLANs assigned based on the MAC address.

A short overview

As before, this article assumes that you already know what VLANs are and have a basic understanding of this great technology 🙂

The goal is to broadcast one Wi-Fi SSID that we can log into with all of our devices. Depending on the device or its address, the correct network for that device is then selected and assigned dynamically. My network contains several VLANs that serve different purposes:

  • Private devices
  • The office/company network
  • The "Internet of shitty things" network for all those highly secure devices such as TVs, Sonos, and so on…

The number of networks could be considerably higher; for this article, let's stick with these three.

Separation via VLANs

Each of these networks is separated and isolated from the others by means of VLANs. They all used to run on a Ubiquiti USG router; that device has since been replaced by a Fortigate 60F. The Fortigate routers can do almost anything you can imagine, and since I now also run these devices at several customers, I bought one for my own network as well. By default the networks are not connected to each other, but this could be allowed where needed (e.g. a notebook in the private network has to print on a printer in the company network, …).

Internet access or not?

I additionally filter outbound traffic, so that only the devices I explicitly allow may establish connections to the outside. For this purpose, there are groups in the firewall for each network/VLAN, which can be extended or reduced as needed. This makes it possible to control and monitor exactly who may communicate where and who may not.
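To sketch what this looks like (a rough illustration only; the group, device, and interface names below are invented and not my actual configuration), such a per-VLAN allow list on a Fortigate could look roughly like this in the CLI:

```
config firewall addrgrp
    edit "vlan10-internet-allowed"
        set member "thinkpad-jk" "phone-jan"
    next
end

config firewall policy
    edit 10
        set name "vlan10-to-internet"
        set srcintf "vlan10"
        set dstintf "wan1"
        set srcaddr "vlan10-internet-allowed"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
        set nat enable
    next
end
```

Devices that are not a member of the address group match no allow policy and are dropped by the implicit deny rule.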

The RADIUS server used

Since in my case I use a Fortigate as the primary router, I cannot use the USG's internal RADIUS server. If you would rather keep the USG, you can of course continue to enable and use the integrated service. Its configuration is identical to my previous post: https://www.zueschen.eu/einrichtung-von-802-1x-und-dynamic-vlans-mit-ubiquiti-usg-pro/ => "Der interne RADIUS-Server"

Installing FreeRADIUS

To authenticate my clients I use a virtual Debian system, version 10.6.0 at the time of writing. The installation uses the default settings; I don't install a desktop, but I do add an SSH server for network access. After the base installation, the FreeRADIUS server can be installed:

apt install freeradius -y

After the installation, the configuration files are located under

/etc/freeradius/3.0

The following two files are of interest to us:

clients.conf


This file lists the devices that are allowed to connect to our RADIUS server. You can enter individual IP addresses here or, alternatively, an IP range or subnet. Since I have a dedicated management network for my devices, I entered the whole subnet:
# Jan's network
client Unifi {
        ipaddr = 192.168.1.0
        netmask = 24
        secret = www.building-networks.de
}

After these changes, the file can be saved and closed.

users

This second file contains the MAC addresses of the devices we want to assign. To test the server in general, the following entry can be added at the very top of the file:
testing Cleartext-Password := "password"

Now we can save the file; then the RADIUS service has to be stopped once.

systemctl stop freeradius

Once the service is stopped, we can start it again interactively. The advantage is that we see immediately whether the service is working correctly or not. This is done by launching the application with the "-X" switch.

/usr/sbin/freeradius -X

Manual test

To test the server, the following command can be issued in a second SSH session:

radtest testing password 127.0.0.1 0 testing123

What it should not look like

If the entry is incorrect or was not set, the output looks like this:

[Screenshot: radtest output ending in an Access-Reject]

You can see very clearly that the password was not found and that access was denied with an "Access-Reject".

When everything works

If the entry was set successfully, the output must look like this:

[Screenshot: radtest output ending in an Access-Accept]

Registering the RADIUS server in the Unifi Controller

Once the server is operational, we can add it in the Unifi Controller settings under Profiles => RADIUS.

[Screenshot: RADIUS profile settings in the Unifi Controller]

We need a name, the IP address of our FreeRADIUS VM, and the secret we configured in clients.conf. Now we can save the settings; nothing more needs to be configured here.

If a USG is used as the RADIUS server

If a USG is used, the server must be enabled under Services => RADIUS => Server.

[Screenshot: RADIUS server settings on the USG]

The users (or rather MAC addresses) that we will configure on the FreeRADIUS server later must, in your case, be configured here under Users instead. What exactly has to be set is described in the first article 🙂

Adding and assigning devices

Which device lands in which network is now configured in the users file on the FreeRADIUS server. In my case the file looks like this:
# Thinkpad-JK WLAN
AABBCCDDEEFF Cleartext-Password := "AABBCCDDEEFF"
        Tunnel-Type = 13,
        Tunnel-Medium-Type = 6,
        Tunnel-Private-Group-ID = 10

# Thinkpad-JK LAN
AAABBBCCCDDD Cleartext-Password := "AAABBBCCCDDD"
        Tunnel-Type = 13,
        Tunnel-Medium-Type = 6,
        Tunnel-Private-Group-ID = 20

DEFAULT Auth-Type := Accept
        Tunnel-Type = 13,
        Tunnel-Medium-Type = 6,
        Tunnel-Private-Group-ID = 30

For testing I entered the MAC addresses of my ThinkPad: one for Wi-Fi, one for LAN.

  • Tunnel-Type must always be 13 (VLAN, as defined in RFC 2868); this is the same setting as on the USG.
  • Tunnel-Medium-Type must be 6 (IEEE-802); this is likewise the same setting as on the USG and in the Unifi Controller.
  • The VLAN that the client should be assigned to is configured as Tunnel-Private-Group-ID in the FreeRADIUS configuration.

The corresponding settings in the controller with a USG look like this:

[Screenshot: RADIUS user settings in the Unifi Controller]

The last block in the config file, starting with DEFAULT, applies a default configuration whenever none of the rules above it matches. In my case it assigns a fallback VLAN, without using Ubiquiti's own fallback feature. That feature has the drawback that it only kicks in after 60 seconds; by then, some clients have already switched to a 169.254.x.x APIPA address because they assume no DHCP server is available. That is not the case here: with my RADIUS-based fallback it happens much faster (a few seconds according to the FreeRADIUS logfile).

In theory, you could assign no VLAN here; the device would then end up in the default LAN, which has no VLAN assigned. But I don't want that: I want to lock the device into a defined fallback VLAN.

Careful with Windows 10, Wi-Fi, and random MAC addresses

Microsoft, just like Apple in its latest iOS update, has built a feature into Windows 10 that uses randomized MAC addresses for Wi-Fi connections. If this feature is active, the client generates a new MAC address for each Wi-Fi network and uses it for the connection. This is obviously not helpful when filtering by MAC address. The option can be found in the settings under Wi-Fi.

[Screenshot: random hardware addresses option in the Windows 10 Wi-Fi settings]

Configuring the Wi-Fi network

For our access point to hand out dynamic VLANs and the corresponding IP addresses, the settings can/must look as follows:

[Screenshot: Wi-Fi network settings in the Unifi Controller]

In my case I created a "normal" WPA2 Personal Wi-Fi network, meaning I log in and authenticate with a pre-shared key. This could in theory also be WPA Enterprise, depending on your needs and options. You could, for example, tie authentication to Active Directory so that every user can log in with their personal credentials.

In the Advanced Options you can see that under VLAN the option RADIUS assigned VLAN is fixed and grayed out. This is correct and cannot be changed. All other settings can be adjusted as needed, for example switching the Wi-Fi off on a schedule, and much more.

At the very bottom, under RADIUS MAC AUTHENTICATION, the option has to be enabled, and we have to select the FreeRADIUS profile we created earlier. As the MAC address format, I use ALL UPPERCASE without separators. This could be changed if needed, but it is the Ubiquiti default, so I keep it and enter all my MAC addresses in exactly this format.
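If your operating system shows the MAC address with colons (or dashes) and lowercase letters, a small POSIX shell one-liner converts it into this uppercase, separator-free format:

```shell
# Strip ':' and '-' separators, then uppercase the hex letters.
printf '%s\n' 'aa:bb:cc:dd:ee:ff' | tr -d ':-' | tr 'abcdef' 'ABCDEF'
# → AABBCCDDEEFF
```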

We can now save the Wi-Fi network like this and test it with our devices. If your FreeRADIUS service is still running with the -X parameter, you can see very nicely which VLAN / network gets assigned when a Wi-Fi client connects.

I connected my notebook; the log immediately shows that the address is known and gets assigned to the corresponding VLAN:

[Screenshot: FreeRADIUS debug log showing the notebook's VLAN assignment]

If I connect my phone, it is still unknown at this point and gets thrown into the fallback VLAN / network. Once I add the phone's MAC address to FreeRADIUS and restart the service, the device ends up in the desired network as well.
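Since this add-an-entry-and-restart cycle repeats for every new device, a tiny helper can take care of the formatting. This is my own sketch, not part of FreeRADIUS; it only prints the entry, because FreeRADIUS evaluates the users file top-down, so new entries must be pasted above the DEFAULT block, which has to stay last:

```shell
# mac_entry MAC VLAN — print a users-file entry in the format used above.
mac_entry() {
    # Normalize to uppercase without separators, matching the controller setting.
    mac=$(printf '%s' "$1" | tr -d ':-' | tr 'abcdef' 'ABCDEF')
    cat <<EOF
$mac Cleartext-Password := "$mac"
        Tunnel-Type = 13,
        Tunnel-Medium-Type = 6,
        Tunnel-Private-Group-ID = $2
EOF
}

# Example: generate the entry for a new phone in VLAN 30 (hypothetical MAC),
# then paste it ABOVE the DEFAULT block and restart freeradius.
mac_entry 'aa:bb:cc:11:22:33' 30
```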

Conclusion

The article turned out a bit longer than planned, but I hope I have covered all the relevant options and settings. I initially started with a Windows Server RADIUS, but switched to FreeRADIUS fairly quickly. There are several reasons and advantages:

  • FreeRADIUS is included in many Linux distributions and is thus freely available
  • The service also runs quite well on NAS devices or Raspberry Pis
  • FreeRADIUS worked right away and offers good logging, which was not immediately the case with the Windows Server.

The configuration shown here is the basis of my network. I only need to broadcast a single SSID and can still split up all devices in the background, isolate them from each other, and keep them off the internet if desired. With a growing number of devices, this really pays off 🙂

If you liked the article or have any questions, leave me a short comment.

Do you need personal assistance, or didn't you find the right solution for your problem?

This blog is run by me, Jan Kappen, in my free time; here I describe solutions for problems of all kinds as well as technical guides with suggested approaches.

Professional independence

I have been fully self-employed since January 2020 and founded my own company, Building Networks, based in Winterberg in the beautiful Sauerland. As a service provider, I am happy to help with inquiries, support, or projects.

The company Building Networks offers you:
  • Help and support by phone, remote maintenance, or in person on site
  • Project support
  • Excellent expertise on the following topics:
    • Microsoft Hyper-V
    • Microsoft Failover Clustering & HA
    • Storage Spaces Direct (S2D) & Azure Stack HCI
    • Veeam Backup & Recovery
    • Microsoft Exchange
    • Microsoft Exchange hybrid infrastructure
    • Microsoft Active Directory
    • Microsoft Office 365
    • Ubiquiti
    • 3CX VoIP PBX
    • Fortinet Network Security
    • Baramundi Software
    • ...

I look forward to hearing from you; you can find more information on my company's website at Building-Networks.de

Jan

Jan Kappen has been working in IT since 2005. He completed his apprenticeship in 2008 and worked as an IT consultant in the areas of Hyper-V, failover clustering, and software-defined storage until 2018. Since 2015 he has been named a Microsoft Most Valuable Professional (MVP) in "Cloud & Datacenter Management" every year for his expertise and for sharing his knowledge. Jan frequently speaks at conferences, and he blogs a lot. From September 2018 to December 2019, Jan was employed as a senior network and system administrator at a large mid-sized company in the beautiful Sauerland. In January 2020 he took the leap into self-employment and has been managing director of Building Networks in Winterberg ever since. In his spare time he looks after the Freifunk network in and around Winterberg.

23 comments:
Pingback: Einrichtung von 802.1X und Dynamic VLANs mit Ubiquiti USG Pro - Jans Blog

Hello,

in the current controller version there is no longer a RADIUS assigned VLAN option for the Wi-Fi network?
And now what?

    True, I just checked: the option is gone. However, a RADIUS server can still be enabled and configured further down in the advanced settings. Have you tried whether the option at the top is enabled "automatically" once you enable the RADIUS option below?
    Regards, Jan

        Hi, the setting at the bottom only concerns MAC authentication.

        I would like to use dynamic VLANs based on RADIUS users with AD.

        Regards

            The VLAN is specified as part of the user entry. Hence my thought: perhaps that entry alone is enough to transmit and assign the correct VLAN.
            Regards, Jan

                I am stuck at the same point: unfortunately, the VLAN from the RADIUS user does not override the VLAN of the network assigned to the Wi-Fi. Also, in the current controller (6.2.26), when using the integrated RADIUS, access is only possible with an existing RADIUS user? My goal would be to steer only selected clients into a special VLAN, while the VLAN configured on the network applies by default.

                Have you been able to solve this?

Matthias Ney:

Hi, I currently have the problem that as soon as I enable the RADIUS profile, the APs no longer hand out the configured network to the clients.

Config outline:
Dream Machine Pro with an IPsec tunnel to the RADIUS server
Network for APs and switches in VLAN 10
Client network is supposed to become VLAN 68
As soon as I enable RADIUS authentication, I get an IP from VLAN 10. If I leave it off, the AP hands out an IP from VLAN 68.

Any idea what I'm doing wrong?

Thanks and regards,
Matthias from the beautiful Saarland

    Hello Matthias,
    are you also using FreeRADIUS, or the integrated RADIUS server of the UDM Pro? If FreeRADIUS, take a look at the logfile to see whether you can tell why the "wrong" network is being assigned.
    Regards, Jan

Matthias Ney:
hi, I'm using a ClearPass server^^ (Windows RADIUS)

Pingback: Freeradius mit MySQL, daloRADIUS und dynamische VLANs mit Ubiquiti Unifi - Jans Blog

    +
  10. +
  11. +
    +
    +
    + Alois Eimannsberger +
    +
    +
    + +

    Hallo,

    +

    ich verwende de RADIUS Server des USG’s für die Trennung der VLAN’s. Das funktioniert soweit. Nur wenn sich ein Gerät am WLAN anmeldet, dass anhand der MAC Adresse noch keine Zuordnung zu einer VLAN ID hat, kann sich dieses nicht zum WLAN verbinden. Mir fehlt eine Art Fallback VLAN ID beim USG. Gibt es da etwas?

Two things I noticed:
Why can I only enter one device or one network in clients.conf?
When I try to add a second device in another network, FreeRADIUS crashes.

And: does the USG RADIUS have to be off when I define a new RADIUS server in the USG?
Or can they run in parallel, given that both use port 1812?

    Hello Dirk,
    for more than one range you need a completely new entry in clients.conf. The template file on GitHub shows how the ranges are entered: https://github.com/redBorder/freeradius/blob/master/raddb/clients.conf
    The RADIUS on the USG can stay enabled; it simply won't be queried anymore, the external FreeRADIUS will. Since FreeRADIUS has its own IP, there are no complications regarding the port. That would only be an issue if the RADIUS service ran twice on the same device. As the USG acts as a RADIUS client in this case, it works out fine.
    Best regards, Jan

        Everything you described above works for me. All my devices end up in the correct VLAN via the USG RADIUS.
        But for my final exam project this has to work with FreeRADIUS, and it doesn't.
        On my Synology I virtualized a CentOS, created a dedicated subnet, a dedicated switch, a dedicated access point, everything in a separate network.
        But debug mode says:
        Failed binding to auth address * port 1812 bound to server default: Address already in use
        /etc/raddb/sites-enabled/default(59): Error binding to port for 0.0.0.0 port 1812

        Thank you very much for your help.
        Regards, Dirk

            Hello,
            that sounds like the service is already running. When you use debug mode, the automatically started service always has to be stopped manually first. I don't know the command for CentOS; on Debian it would be "systemctl stop freeradius", and with "systemctl status freeradius" you can check whether the service is already running.
            Regards,
            Jan

                Yay, I did it... Thank you very much.
                Bring on the oral exam. Everything works. Great site, by the way.
                I just have to make sure it doesn't turn into plagiarism.
                But you know what: with WPA Enterprise authentication you enter the MAC as username and password.
                Isn't that a bit too simple, too insecure? Or am I thinking about this the wrong way?
                If people know they only need their MAC to get online, then anyone can get into my default network, right??
                And one more thing: using the MAC for VLAN assignment is fine, but for authentication I find it insecure,
                especially because MAC addresses can be generated and spoofed.
                What do you say, Jan...

                Hi,
                authentication by MAC address is only one of several options you have with WPA Enterprise authentication. If security is your concern, you won't get around certificate-based authentication. Then it is not only the MAC address that counts, but also a certificate: no certificate, no access.
                Your thinking is entirely correct; a MAC address is not sufficient protection. It always comes down to what you want to protect, what your goal is, and how sensitive the infrastructure is. I wrote this guide so that my home network broadcasts only one SSID while I can still separate various devices from each other. In particular, I wanted devices like an Amazon Fire TV or a TV set to romp around in their own network rather than in my private one. Since those devices in turn can't do certificates, I went with the MAC approach.
                Best regards,
                Jan

Hello Jan,

is there anything that speaks against running FreeRADIUS on the same Pi that is already running the controller?

Kind regards,
Sascha

Hi Jan, I have a UDR and use the "onboard" RADIUS server. Can you give me a tip where to find the equivalent of FreeRADIUS's clients.conf on the UDR? I would like to add a whole bunch of IoT devices in one go instead of clicking them all in one by one through the UI.

    Hello Roman,
    I don't know; I don't use the internal one because it is too limited for me. You also have to keep in mind that local files may be overwritten again by the controller at the next provisioning.
    Best regards,
    Jan

Hello Jan,

I have been running the same setup as you describe for quite a while. Since a Unifi update (unfortunately I can no longer reproduce which one), all clients are only placed into the default management VLAN. Does your setup still work as originally configured?

While troubleshooting I checked FreeRADIUS; it returns the correct data.
The access point does not seem to transmit the VLAN correctly. I have already tried firmware downgrades to every conceivable version, without success. Do you have a tip / workaround?

Regards,
Holger

    Hello Holger,
    just last week I updated the Unifi Controller to the current version, and I don't have that problem. My setup still runs as described.
    Best regards,
    Jan


diff --git a/AlpineLinux/Alpine Linux on Raspberry Pi Diskless Mode with persistent storage.html b/AlpineLinux/Alpine Linux on Raspberry Pi Diskless Mode with persistent storage.html
new file mode 100644
index 0000000..e0054da
--- /dev/null
+++ b/AlpineLinux/Alpine Linux on Raspberry Pi Diskless Mode with persistent storage.html
@@ -0,0 +1,73 @@

★ Alpine Linux on Raspberry Pi: Diskless Mode with persistent storage | Not Just Serendipity

★ Alpine Linux on Raspberry Pi: Diskless Mode with persistent storage

Use case: Given an Alpine Linux diskless1 installation meant for a Raspberry Pi setup, we would like to add a persistent storage component to it to make it survive across reboots.

Goal

The Alpine Linux Wiki covers most of the installation process, hence I will only document the bits that were lacking and/or confusing therein.

My use case is the following:

Given a Raspberry Pi 3B with an old 4GiB SD Card as CF storage2, install Alpine Linux in diskless mode. Find a way to preserve modifications in /etc and /var, as well as any installed packages through its apk package manager.

Let’s follow the steps outlined in the wiki.

Copy Alpine to the SD Card

Grab the SD card and install Alpine Linux in it.

Alpine provides officially supported images designed for the Raspberry Pi.

Most Linux distributions provide an .iso or .img file to be installed with a tool like Balena Etcher, Rufus, Raspberry Pi Imager or plain dd3.

Alpine is not like most Linux distributions: Instead, it provides a .tar.gz archive with files that should be copied directly to the SD card. Grab the latest version (3.15 at the time of this post) from https://alpinelinux.org/downloads/. There are 3 options:

  • armhf: Works with all Pis, but may perform less optimally on recent versions.

  • armv7: Works with the Pi 3B, 32-bit.

  • aarch64: Works with the Pi 3B, 64-bit.

I opted for aarch64 to make it 64-bit, but armv7 would also have worked well for my setup. In fact, Raspberry Pi OS (Debian) uses armv7 (32-bit) at the time of this writing.

Before copying files over, format the SD Card. As I was doing this from a Windows machine, because it was the only one I had readily available with an SD card slot, I just used the native Windows Disk Management tool to do so. I decided to allocate a 100MB4 FAT32 partition. The rest of the SD card would be blank for now. Alpine is surprisingly small; 100MB was more than enough for the kernel and other needed files.

Once the SD card is formatted, copy the files over to it. It turns out Windows cannot extract tarballs (.tar.gz); a tool like 7-zip should do the job. Copy the files over to the root of the newly allocated FAT32 partition, and then safely eject the SD card.

Boot Alpine from the SD Card

The next step is to insert the SD Card into the Pi and then boot. I had some trouble in this step and eventually figured out I didn’t mark the primary FAT32 partition as bootable. Unfortunately it’s not straightforward to mark the partition as bootable from Windows. On a Linux machine there’s a wide array of tools to do so: fdisk, cfdisk (TUI), sfdisk (scriptable fdisk), parted, gparted (GUI) are some of them. I worked around that by installing Raspberry Pi OS on the SD card with the Raspberry Pi imager, and then overwriting it with the Alpine files. This works because the Raspberry PI OS installation marks the FAT32 partition as bootable.

Install Alpine

Installing Alpine is well documented in the wiki thus it won’t be covered here. It basically comes down to invoking setup-alpine, which then invokes other setup-* scripts.

Keep in mind we’re not really “installing” Alpine as this is a diskless installation. A more accurate term here would be “configuring”.

Before invoking the installation script, I created a second primary partition in the SD card, set to ext4:

# Configure networking to get working internet access.
% setup-interfaces

# Install some partitioning tools.
% apk add cfdisk e2fsprogs

# Create a second partition (mmcblk0p2) and write it.
% cfdisk /dev/mmcblk0

# Format the partition as ext4.
% mkfs.ext4 /dev/mmcblk0p2

# Mount the partition under /media.
% mount /dev/mmcblk0p2 /media/mmcblk0p2

The installation is straightforward, we just need to pay attention to a few select steps:

  • setup-disk: Select none to ensure a diskless installation5.
  • setup-apkcache: Select /media/mmcblk0p2/cache to persist downloaded apk packages.
  • setup-lbu: Edit /etc/lbu/lbu.conf and set LBU_MEDIA="mmcblk0p2". Note: Do not add /media as it is implicit.

Once the installation is complete, run lbu commit to persist the changes in the second partition. Once you do so, a <hostname>.apkovl.tar.gz6 file should materialize on /media/mmcblk0p2/.

This is a good moment to reboot. Before we do so, let’s cache the packages we had previously downloaded.

# Cache packages.
% apk cache download

% reboot

After the first reboot

If everything worked as expected, once you reboot all your previously installed packages should have been preserved and automatically restored / reinstalled, as well as your modifications done to /etc.

From this point on, whenever you install a new package that you want to be preserved for subsequent reboots, run lbu commit afterwards. For example:

% apk add vim
% lbu commit

If you would like to see what is going to be committed, run lbu status or lbu diff before doing the actual commit. Whenever you commit, /media/mmcblk0p2/<hostname>.apkovl.tar.gz gets overwritten with your most recent modifications.

It’s possible to keep more than one backup file by changing BACKUP_LIMIT= in /etc/lbu/lbu.conf. This is especially handy if you decide to revert to an earlier system snapshot / state later on. The stock config looks like this:

% cat /etc/lbu/lbu.conf
# what cipher to use with -e option
DEFAULT_CIPHER=aes-256-cbc

# Uncomment the row below to encrypt config by default
# ENCRYPTION=$DEFAULT_CIPHER

# Uncomment below to avoid <media> option to 'lbu commit'
# Can also be set to 'floppy'
# LBU_MEDIA=usb

# Set the LBU_BACKUPDIR variable in case you prefer to save the apkovls
# in a normal directory instead of mounting an external media.
# LBU_BACKUPDIR=/root/config-backups

# Uncomment below to let lbu make up to 3 backups
# BACKUP_LIMIT=3

Tip: You can find the list of all explicitly installed packages in /etc/apk/world.

The last piece: make /var persistent

There are three natural ways that come to mind to make /var persistent:

A) Separate partition (or file)

Instead of two partitions (FAT32 and ext4), create 3 partitions: FAT32, ext4 and ext4. Use the latter one to mount /var on, saving this information in /etc/fstab. The main disadvantage of this setup is that you’ll need to allocate a fixed amount of space for each of the ext4 partitions, and it may be difficult to figure out how to split the space between them.
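With a third partition, the mount would be persisted with an /etc/fstab line along these lines (the device name is an assumption; remember that on a diskless setup /etc/fstab itself only survives after an lbu commit):

```
/dev/mmcblk0p3  /var  ext4  defaults  0  2
```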

A variant of this approach is to just create the third partition as a file:

# 500MB file
% dd if=/dev/zero of=/media/mmcblk0p2/var.img bs=1M count=500 status=progress
% mkfs.ext4 /media/mmcblk0p2/var.img
% mount /media/mmcblk0p2/var.img /var

This works because the Linux kernel supports mounting files as if they were block devices, by attaching them to loop devices (pseudo-devices).
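The same attachment can be done explicitly with losetup; a sketch using util-linux syntax (BusyBox's losetup applet differs slightly, and plain mount usually sets up the loop device for you):

```shell
# Attach the image to the first free loop device and print its name, e.g. /dev/loop0
losetup -f --show /media/mmcblk0p2/var.img

# Mount the loop device like any other block device
mount /dev/loop0 /var

# Later: unmount and detach again
umount /var
losetup -d /dev/loop0
```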

I don’t like these approaches because they shadow the preexisting /var from the boot media, which in turn interferes with existing services that use it, such as cron: % crontab -l would fail. One workaround is to mount a /var subdirectory instead: for example, /var/lib/docker for Docker.

B) Bind mount

This one is straightforward:

% mount --bind /media/mmcblk0p2/var/lib/docker /var/lib/docker

The actual partition lives on the SD card; however, we make a bind mount under /var, which is like an alias. From Stack Exchange:

A bind mount is an alternate view of a directory tree. Classically, mounting creates a view of a storage device as a directory tree. A bind mount instead takes an existing directory tree and replicates it under a different point. The directories and files in the bind mount are the same as the original. Any modification on one side is immediately reflected on the other side, since the two views show the same data.
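A bind mount made on the command line does not survive a reboot; to persist it, an equivalent fstab entry (matching the paths above) would be:

```
# /etc/fstab
/media/mmcblk0p2/var/lib/docker  /var/lib/docker  none  bind  0 0
```

Remember to run lbu commit afterwards, since /etc/fstab itself lives in RAM.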

C) Overlay mount

From ArchWiki:

Overlayfs allows one, usually read-write, directory tree to be overlaid onto another, read-only directory tree. All modifications go to the upper, writable layer. This type of mechanism is most often used for live CDs but there is a wide variety of other uses.

It’s perfect for our use case, which uses a live bootable SD card for Alpine. It blends the preexisting, ephemeral, in-memory /var with the persistent in-disk /var.

I wanted to mount /var directly but found it to be problematic for the same reasons mentioned earlier, therefore I just went with /var/lib/docker instead:

# Create overlay upper and work directories.
% mkdir -p /media/mmcblk0p2/var/lib/docker /media/mmcblk0p2/var/lib/docker-work

# Add mountpoint entry to fstab. Note: the work dir must be an empty directory
# in the same filesystem mount as the upper directory.
% echo "overlay /var/lib/docker overlay lowerdir=/var/lib/docker,upperdir=/media/mmcblk0p2/var/lib/docker,workdir=/media/mmcblk0p2/var/lib/docker-work 0 0" >> /etc/fstab

# Mount all fstab entries, including our newly added one.
% mount -a

Conclusion

I opted for the third approach, the overlay mount, as it was the most seamless one. A bind mount would have been fine as well.

The final setup works surprisingly well:

  • Alpine Linux is very lightweight and runs mostly from RAM
  • apk cache is persistent to the ext4 partition
  • /var/ is persistent to the ext4 partition
  • lbu commit persists changes in /etc/ and /home/ in the ext4 partition
  • Every reboot fully resets the system sans persistent components above

References


  1. Running (almost) fully from RAM. ↩︎

  2. CF = CompactFlash. ↩︎

  3. On Linux I’d usually opt for dd, on Windows the Raspberry Pi Imager is a sensible choice. ↩︎

  4. 100MB is overly conservative, but keep in mind I had a very small SD Card, with only 4GiB storage. 250MB or even 500MB should be a more sensible default if you have a bigger SD Card (e.g. 32GiB). ↩︎

  5. An alternative is to select data disk mode, but it didn’t work for me. ↩︎

  6. ovl is short for overlay. Not to be confused with vol for volume↩︎

Raspberry Pi

Warning (11 Feb 2021): there is currently a known bug upstream: the kernel/initramfs cannot be loaded from a subdirectory with the same name as the volume label.
This tutorial explains how to install Alpine Linux on a Raspberry Pi. Alpine Linux will be installed in diskless mode, hence, Alpine Local Backup (lbu) is required to save modifications between reboots. +

For scenarios where there is not expected to be significant changes to disk after setup (like running a static HTTP server), this is likely preferable, as running the entire system from memory will improve performance (by avoiding the slow SD card) and improve the SD card life (by reducing the writes to the card, as all logging will happen in RAM). Diskless installations still allow you to install packages, save local files, and tune the system to your needs. +

If any of the following apply, then a sys-mode installation is likely more appropriate. +

  • There will be constant changes to the disk after initial setup (for example, if you expect people to log in and save files to their home directories)
  • Logs should persist across reboots
  • You plan to install packages which consume more space than can be loaded into RAM
  • You plan to install kernel modules (such as ZFS or WireGuard)

Preparation

  1. Download the Alpine for Raspberry Pi tarball. You should be safe using the armhf build on all versions of Raspberry Pi (including Pi Zero and Compute Modules); but it may perform less optimally on recent versions of Raspberry Pi. The armv7 build is compatible with Raspberry Pi 2 Model B. The aarch64 build should be compatible with Raspberry Pi 2 Model v1.2, Raspberry Pi 3 and Compute Module 3, and Raspberry Pi 4 model B.
  2. Create a bootable FAT32 partition on your SD card. The partitioning and formatting part of the instructions on the linked page could be done using a graphical partitioning tool such as gnome-disks, just make sure the partition type is W95 FAT32 (LBA). (The current type can be found in the "Type" column in the output of fdisk -l.)
  3. Extract the tarball to the root of the bootable FAT32 partition.

To setup a headless system, a bootstrapping configuration overlay file headless.apkovl.tar.gz may be added to enable basic networking, so that following configuration steps can be performed under ssh. Pi Zero may be configured with simple USB ethernet-gadget networking with another computer sharing its internet connection. +

Optionally create a usercfg.txt file on the partition to configure low-level system settings. Specifications can be found here. However, note some settings can only be set directly in config.txt, which may be overwritten after updates. In particular, gpu_mem will have no effect when specified in usercfg.txt (source). Some interesting values include: +

  • To enable the UART console: enable_uart=1
  • To enable audio: dtparam=audio=on
  • If you see black edges around your screen after booting the Pi, you can add disable_overscan=1
  • If you plan to install on a Pi Compute Module 4 with I/O board, you may need to add: otg_mode=1

Recent versions include Broadcom firmware files. If you're using an older Alpine version, see section below. +

+

Installation

+

Follow these steps to install Alpine Linux in Diskless Mode: +

+
  1. Insert the SD card into the Raspberry Pi and power it on
  2. +
  3. Login into the Alpine system as root. Leave the password empty.
  4. +
  5. Type setup-alpine
  6. +
  7. Once the installation is complete, commit the changes by typing lbu commit -d
+

Type reboot to verify that the installation was indeed successful. +

+

Post Installation

+

Update the System

+

After installation, make sure your system is up-to-date: +

+

apk update
apk upgrade

+

Don't forget to save the changes: +

+

lbu commit -d

+

Note: this does not upgrade the kernel. In order to upgrade the kernel, a full upgrade of the Alpine Linux version must be performed as described in upgrading Alpine Linux for removable media. +

+

Clock-related error messages

+

During boot, you might notice errors related to the hardware clock. The Raspberry Pi does not have a hardware clock, so you need to disable the hwclock daemon and enable swclock: +

+

rc-update add swclock boot  # enable the software clock
rc-update del hwclock boot  # disable the hardware clock

+

Since the Raspberry Pi does not have a clock, Alpine Linux needs to obtain the current time via the Network Time Protocol (NTP). Make sure you have an NTP daemon installed and running. If you are not sure, you can install an NTP client by running the following command: +

+

setup-ntp

+

The Busybox NTP client might be the most lightweight solution. Once the NTP software is installed and running, save the changes and reboot: +

+

lbu commit -d
reboot

+

After reboot, make sure the date command outputs the correct date and time. +

+

WiFi on boot

+

If you have already configured WiFi during setup, the connection will not come back up on reboot. You will need to enable a service that automatically reconnects to the wireless access point. +

+
  1. Run rc-update add wpa_supplicant boot to connect to the wireless access point during bootup.
  2. Start it now with /etc/init.d/wpa_supplicant start.
+
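If you need to (re)create the wpa_supplicant configuration, wpa_passphrase can generate a network block for you (the SSID and passphrase below are placeholders):

```shell
# Append a network block for the access point; quotes protect special characters
wpa_passphrase 'MySSID' 'MyPassphrase' >> /etc/wpa_supplicant/wpa_supplicant.conf

# Persist the change, since /etc lives in RAM on a diskless install
lbu commit -d
```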

Enable Graphics

+

Remount the boot partition writeable (i.e. /media/mmcblk0p1): +

+

mount /media/mmcblk0p1 -o rw,remount

+

Add the following lines to /media/mmcblk0p1/config.txt: +

+
dtoverlay=vc4-kms-v3d
+
+

If you are experiencing graphical issues, you can also try: +

+
dtoverlay=vc4-fkms-v3d
+
+

And perhaps also raising the default gpu_mem: +

+
gpu_mem=128
+
+

Note that raising the gpu memory is not likely to actually improve performance on the Pi4 +

Install the Mesa drivers: +

+

apk add mesa-dri-gallium

+

Then reboot: +

+

lbu commit -d; reboot

+

WiFi drivers

+

As of Alpine 3.14, the WiFi drivers for the Raspberry Pi were moved from the linux-firmware-brcm package to linux-firmware-cypress (source?). Since the images seem to ship an outdated version of the former, Wi-Fi will work during installation but break after the first update. Use the ethernet interface to download the required package: +

+

apk add linux-firmware-cypress

+

And reboot. +

+

Persistent storage

+

Loopback image with overlayfs

+

When you install Alpine in diskless mode, the entire system is loaded into memory at boot. If you want additional storage (for example, if you need more space than your RAM offers), you can create loop-back storage on the SD card and mount it with overlayfs. +

First, make the SD card writable again and change fstab to always do so: +

+

mount /media/mmcblk0p1 -o rw,remount
sed -i 's/vfat\ ro,/vfat\ rw,/' /etc/fstab

+

Create the loop-back file, this example is 1 GB: +

+

dd if=/dev/zero of=/media/mmcblk0p1/persist.img bs=1024 count=0 seek=1048576

+

Install the ext utilities: +

+

apk add e2fsprogs

+

Format the loop-back file: +

+

mkfs.ext4 /media/mmcblk0p1/persist.img

+

Mount the storage: +

+

echo "/media/mmcblk0p1/persist.img /media/persist ext4 rw,relatime,errors=remount-ro 0 0" >> /etc/fstab
mkdir /media/persist
mount -a

+

Create the overlay folders. We are using the /usr directory here, but you can use /home or anything else. +

+
Warning: The overlay workdir needs to be an empty directory on the same filesystem mount as the upper directory, so each overlay must use its own workdir.


+


+

+

mkdir /media/persist/usr
mkdir /media/persist/.work_usr
echo "overlay /usr overlay lowerdir=/usr,upperdir=/media/persist/usr,workdir=/media/persist/.work_usr 0 0" >> /etc/fstab
mount -a

+

Your /etc/fstab should look something like this: +

+

/dev/cdrom /media/cdrom iso9660 noauto,ro 0 0
/dev/usbdisk /media/usb vfat noauto,ro 0 0
/dev/mmcblk0p1 /media/mmcblk0p1 vfat rw,relatime,fmask=0022,dmask=0022,errors=remount-ro 0 0
/media/mmcblk0p1/persist.img /media/persist ext4 rw,relatime,errors=remount-ro 0 0
overlay /usr overlay lowerdir=/usr,upperdir=/media/persist/usr,workdir=/media/persist/.work_usr 0 0

+

Now commit the changes (you can optionally remove e2fsprogs, though it does contain useful repair tools): +

+

lbu commit -d

+

Remember: with this overlay in place for /usr, you must not commit the apk add; otherwise, at boot the system will try to install the package into memory rather than use the persistent storage. +

If you do want to install something small at boot, you can use `apk add` followed by `lbu commit -d`. +

If it is something a bit bigger, then use `apk add` but do not commit it. It will be persistent (in /usr), but be sure to check that everything you need is in that directory and not in folders you have not made persistent. +

+

Traditional disk-based (sys) installation

+

It is also possible to switch to a fully disk-based installation. This is not yet formally supported, but can be done somewhat manually. This frees all the memory otherwise needed for the root filesystem, allowing more installed packages. +

Split your SD card into two partitions: the FAT32 boot partition described above (in this example it'll be mmcblk0p1) , and a second partition to hold the root filesystem (here it'll be mmcblk0p2). Boot and configure your diskless system as above, then create a root filesystem: +

+

apk add e2fsprogs
mkfs.ext4 /dev/mmcblk0p2

+

Now do a disk install via a mountpoint. The setup-disk script will give some errors about syslinux/extlinux, but you can ignore them. +The Raspberry Pi doesn't need them to boot. +

+

mkdir /stage
mount /dev/mmcblk0p2 /stage
setup-disk -o /media/mmcblk0p1/MYHOSTNAME.apkovl.tar.gz /stage
# (ignore errors about syslinux/extlinux)

+

Add a line to /stage/etc/fstab to mount the Pi's boot partition again: +

+

/dev/mmcblk0p1 /media/mmcblk0p1 vfat defaults 0 0

+

Now add a root=/dev/mmcblk0p2 parameter to the Pi's boot command line, either cmdline-rpi2.txt or cmdline-rpi.txt depending on model: +

+

mount -o remount,rw /media/mmcblk0p1
sed -i '$ s/$/ root=\/dev\/mmcblk0p2/' /media/mmcblk0p1/cmdline-rpi2.txt

+

You might also consider overlaytmpfs=yes here, which will cause the underlying SD card root filesystem to be mounted read-only, with an overlayed tmpfs for modifications which will be discarded at shutdown. +

N.B. the contents of /boot will be ignored when the Pi boots; it uses the kernel, initramfs, and modloop images from the FAT32 boot partition. To update the kernel, initfs or modules, you will need to manually (generate and) copy these to the boot partition, or you can use a bind mount, in which case manually copying the files to the boot partition is not needed: +

+

echo /media/mmcblk0p1/boot /boot none defaults,bind 0 0 >> /etc/fstab

+

Persistent Installation on Raspberry Pi 3

+

See Classic install or sys mode on Raspberry Pi and https://web.archive.org/web/20171125115835/https://forum.alpinelinux.org/comment/1084#comment-1084 +

+

Persistent Installation on Raspberry Pi 4

+

As of 3.14, setup-alpine should ask you if you want to create a sys mode partition on your Raspberry Pi 4. +

+

Troubleshooting

+

Long boot time when running headless

+

If no peripherals are connected, the system might hang for an exceptionally long period of time while it attempts to accumulate entropy. +

If this is the case, simply plugging in any USB device should work around this issue. +

Alternatively, installing haveged, a random number generator daemon, speeds up the process: +

+
apk update
apk add haveged
rc-update add haveged boot
lbu commit -d
service haveged start
+

(Tested on a raspberry pi zero W in headless mode, no USB connected, Alpine 3.10.3) +

+

apk indicating 'No space left on device'

+

Note that some models of the Raspberry Pi, such as the 3A+, only have 512M of RAM, which on a fresh Alpine deployment leaves only around 200M for the tmpfs root. Keep this limitation in mind when using these boards. +

+

Wireless support with older Alpine images

+

If you need Wi-Fi, you'll need to download the latest Broadcom drivers to your SD card. +(Replace /mnt/sdcard with the correct mount point.) +

+
git clone --depth 1 https://github.com/RPi-Distro/firmware-nonfree.git
cp firmware-nonfree/brcm/* /mnt/sdcard/firmware/brcm
+

Alpine setup scripts


Feature descriptions for available Alpine Linux setup scripts (/sbin/setup-*). +

These scripts can be installed by using apk to install the alpine-conf package. +

If you don't have an Alpine Linux install, you can find and examine the scripts in their git repository. +


setup-alpine

+

This is the main Alpine configuration and installation script. +

The script interactively walks the user through executing several auxiliary setup-* scripts, in the order shown below. +

The bracketed options represent example configuration choices, formatted as they may be supplied when manually calling the auxiliary setup scripts, or using a setup-alpine "answerfile" (see below). +


+

+
  1. setup-keymap [us us]
  2. setup-hostname [-n alpine-test]
  3. setup-interfaces [-i < interfaces-file]
  4. /etc/init.d/networking --quiet start &
  5. if none of the networking interfaces were configured using dhcp, then: setup-dns [-d example.com -n "192.168.0.1 [...]"]
  6. set the root password
  7. if not in quick mode, then: setup-timezone [-z UTC | -z America/New_York | -p EST+5]
  8. enable the new hostname (/etc/init.d/hostname --quiet restart)
  9. add networking and seedrng (also referred to as urandom in versions prior to OpenRC 0.45) to the boot rc level, and acpid and crond to the default rc level, and start the boot and default rc services
  10. extract the fully-qualified domain name and hostname from /etc/resolv.conf and hostname, and update /etc/hosts
  11. setup-proxy [-q "http://webproxy:8080"], and activate the proxy if it was configured
  12. setup-apkrepos [-r (to select a mirror randomly)]
  13. if not in quick mode, then: setup-sshd [-c openssh | dropbear | none]
  14. if not in quick mode, then: setup-ntp [-c chrony | openntpd | busybox | none]
  15. if not in quick mode, then: DEFAULT_DISK=none setup-disk -q [-m data /dev/sda] (see Installation#Installation_Overview about the disk modes)
  16. if the installation mode selected during setup-disk was "data" instead of "sys", then: setup-lbu [/media/sdb1]
  17. if the installation mode selected during setup-disk was "data" instead of "sys", then: setup-apkcache [/media/sdb1/cache | none]


setup-alpine itself accepts the following command-line switches:

+

-h
Shows the up-to-date usage help message.

-a
Create an overlay file: this creates a temporary directory and saves its location in ROOT; however, the script doesn't export this variable, so this feature doesn't appear to be currently functional.

-c answerfile
Create a new answerfile with default choices. You can edit the file and then invoke setup-alpine -f answerfile.

-f answerfile
Use an existing answerfile, which may override some or all of the interactive prompts. You can also specify an HTTP(S) or FTP URL for setup-alpine to download an answerfile from. Doing so will spin up a temporary networking config if one is not already active.

-q
Run in "quick mode".
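An answerfile is a plain file of shell variables, one per auxiliary script; a trimmed sketch (variable names as generated by setup-alpine -c; the values here are hypothetical) might look like:

```
KEYMAPOPTS="us us"
HOSTNAMEOPTS="-n alpine-test"
TIMEZONEOPTS="-z UTC"
APKREPOSOPTS="-r"
SSHDOPTS="-c openssh"
NTPOPTS="-c chrony"
DISKOPTS="none"
```

Feed it back with setup-alpine -f answerfile.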


+

+

setup-hostname

+
setup-hostname [-h] [-n hostname]
+

Options: +

-h Show help +

-n Specify hostname +

This script allows quick and easy setup of the system hostname by writing it to /etc/hostname. The script prevents you from writing an invalid hostname (such as one that uses invalid characters, starts with a '-', or is too long). It can be invoked manually, and is also called as part of the setup-alpine script. +


+

+

setup-interfaces

+

setup-interfaces [-i < interfaces-file]

+

Note that the contents of interfaces-file has to be supplied as stdin, rather than naming the file as an additional argument. The contents should have the format of /etc/network/interfaces, such as: +

+
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
    hostname alpine-test
+


+

+

setup-dns

+
setup-dns [-h] [-d domain name] [-n name server]
+

Options: +

-h Show help +

-d specify search domain name +

-n name server IP +

The setup-dns script is stored in /sbin/setup-dns and allows quick and simple setup of DNS servers (and a DNS search domain if required). Simply running setup-dns will allow interactive use of the script, or the options can be specified. +

The information fed to this script is written to /etc/resolv.conf +

Example usage (with 192.168.0.1 being the local router/dns-forwarder):

setup-dns -d example.org -n 192.168.0.1

+

Example /etc/resolv.conf: +

+
search example.org
nameserver 192.168.0.1
+
+

It can be run manually but is also invoked in the setup-alpine script unless interfaces are configured for DHCP. +


+

+

setup-timezone

+
setup-timezone [-z UTC | -z America/New_York | -p EST+5]
+

Can pre-select the timezone using either of these switches: +

-z subfolder of /usr/share/zoneinfo +

-p POSIX TZ format +


+

+

setup-proxy

+
setup-proxy [-hq] [PROXYURL]
+

Options: +

-h Show help +

-q Quiet mode prevents changes from taking effect until after reboot +

This script requests the system proxy to use in the form http://<proxyurl>:<port> for example: +http://10.0.0.1:8080 +

To set no system proxy use none. +This script exports the following environmental variables: +

http_proxy=$proxyurl +

https_proxy=$proxyurl +

ftp_proxy=$proxyurl +

where $proxyurl is the value supplied. If none was chosen, the variables are set to a blank value (and so no proxy is used). +


+

+

setup-apkrepos

+
setup-apkrepos [-fhr] [REPO...]
+

Setup apk repositories. +

options: +

-f Detect and add fastest mirror +

-r Add a random mirror and do not prompt +

-1 Add first mirror on the list (normally a CDN) +

This is run as part of the setup-alpine script. +


+

+

setup-sshd

+
setup-sshd [-h] [-c choice of SSH daemon]
+

Options: +

-h Show help +

-c SSH daemon where SSH daemon can be one of the following: +

openssh install the openSSH daemon +

dropbear install the dropbear daemon +

none Do not install an SSH daemon +

Example usage:

setup-sshd -c dropbear

+

The setup-sshd script is stored in /sbin/setup-sshd and allows quick and simple setup of either the OpenSSH or Dropbear SSH daemon & client. +It can be run manually but is also invoked in the setup-alpine script. +


+

+

setup-ntp

+

From Wikipedia: +

The Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. +


+

+

usage: setup-ntp [-h] [busybox|openntpd|chrony|none]

Setup NTP time synchronization

options:
 -h  Show this help

User is prompted if no NTP daemon is specified

+

The setup-ntp script is stored in /sbin/setup-ntp and allows quick and simple setup of the NTP client. It can be run manually but is also invoked by the setup-alpine script. +


+

+

setup-disk

+
DEFAULT_DISK=none setup-disk -q [-m data | sys] [mountpoint directory | /dev/sda ...]
+

In "sys" mode it acts as an installer, permanently installing Alpine on the disk; in "data" mode, it provides a larger and persistent /var volume. +

This script accepts the following command-line switches: +

+
-k kernel flavor
+
-o apkovl file
+
Restore system from apkovl file
+
-m data | sys
+
Don't prompt for installation mode. With -m data, the supplied devices are formatted to use as a /var volume.
+

+

+
-r
+
Use RAID1 with a single disk (degraded mode)
+

+ +

+
-L
+
Create and use volumes in a LVM group
+

+

+
-s swap size in MB
+
Use 0 to disable swap
+

+

+
-q
+
Exit quietly if no disks are found
+

+ +

+
-v
+
Verbose mode
+

+

The script also honors the following environment variables: +

BOOT_SIZE +

+
Size of the boot partition in MB; defaults to 100. Only used if -m sys is specified or interactively selected.
+

SWAP_SIZE +

+
Size of the swap volume in MB; set to 0 to disable swap. If not specified, will default to twice RAM, up to 4096, but won't be more than 1/3 the size of the smallest disk, and if less than 64 will just be 0. Only used if -m sys is specified or interactively selected.
+

ROOTFS +

+
Filesystem to use for the / volume; defaults to ext4. Only used if -m sys is specified or interactively selected. Supported filesystems are: ext2 ext3 ext4 btrfs xfs.
+

BOOTFS +

+
Filesystem to use for the /boot volume; defaults to ext4. Only used if -m sys is specified or interactively selected. Supported filesystems are: ext2 ext3 ext4 btrfs xfs.
+

VARFS +

+
Filesystem to use for the /var volume; defaults to ext4. Only used if -m data is specified or interactively selected. Supported filesystems are: ext2 ext3 ext4 btrfs xfs.
+

SYSROOT +

+
Mountpoint to use when creating volumes and doing traditional disk install (-m sys). Defaults to /mnt.
+

MBR +

+
Path of MBR binary code, defaults to /usr/share/syslinux/mbr.bin.
+

BOOTLOADER +

+
Bootloader to use, defaults to syslinux. Supported bootloaders are: grub syslinux zipl.
+

DISKLABEL +

+
Disklabel to use, defaults to dos. Supported disklabels are: dos gpt eckd.
+
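Combining the switches and environment variables above, a hypothetical unattended sys-mode install to /dev/sda with GRUB on a GPT disk and no swap could look like:

```shell
# /dev/sda is a placeholder device; all variables are documented above
BOOTLOADER=grub DISKLABEL=gpt SWAP_SIZE=0 ROOTFS=ext4 setup-disk -m sys /dev/sda
```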


+

+

Partitioning

+

If you have complex partitioning needs, that go beyond above alpine-disk options, you can partition, format, and mount your volumes manually, and then just supply the root mountpoint to setup-disk. Doing so implicitly behaves as though -m sys had also been specified. +

See Setting up disks manually for more information. +


+

+

RAID

+

setup-disk will automatically build a RAID array if you supply the -r switch, or if you specify more than one device. The array will always be RAID1 (with --metadata=0.90) for the /boot volumes, but will be RAID5 (with --metadata=1.2) for non-boot volumes when 3 or more devices are supplied. +

If you instead want to build your RAID array manually, see Setting up a software RAID array. Then format and mount the disks, and supply the root mountpoint to setup-disk. +

+

LVM

+

setup-disk will automatically build and use volumes in a LVM group if you supply the -L switch. The group and volumes created by the script will have the following names: +

+
  • volume group: vg0
  • +
  • swap volume: lv_swap (only created when swap size > 0)
  • +
  • root volume: lv_root (only created when -m sys is specified or interactively selected)
  • +
  • var volume: lv_var (only created when -m data is specified or interactively selected)
+

The lv_var or lv_root volumes are created to occupy all remaining space in the volume group. +

If you need to change any of these settings, you can use vgrename, lvrename, lvreduce or lvresize. +

If you instead want to build your LVM system manually, see Setting up Logical Volumes with LVM. Then format and mount the disks, and supply the root mountpoint to setup-disk. +


+
+

+

setup-lbu

+

This script will only be invoked by setup-alpine when installing the "data" installation type (ramdisk). +

It configures where lbu commit will store the .apkovl backup. See Alpine local backup for more information. +

When started, setup-lbu will prompt where to store your data. The options it will prompt for will be taken from the directories found in /media (except for cdrom). [not sure how these are mounted: are they automatically mounted by setup-lbu? Does the user have to manually mount using another tty?] +


+

+

setup-apkcache

+

This script will only be invoked by setup-alpine when installing the "data" installation type (ramdisk). +

It configures where to save the apk package files. The apkcache is where apk stores downloaded packages, such that the system does not need to download them again on each reboot, and doesn't have to depend on the network. See Local APK cache for a detailed explanation. +

You should be able to use a partition that you set up in the previous steps. +


+

+

setup-bootable

+

This is a standalone script; it's not invoked by setup-alpine but must be run manually. +

It allows creating boot media that, like the installation images, boots a system running from RAM (diskless), but uses a writable (i.e. not iso9660) filesystem, so that it can also serve to store local customizations (e.g. apkovl files and cached packages). +

First, the script copies files from an ISO image (as a file, or on a CD/DVD/USB, etc.) onto a USB stick/CompactFlash/SD card or hard-disk partition. Then it installs the syslinux bootloader to make the device bootable. +

However, the current syslinux installation seems to fail on non-FAT32 partitions. In these cases, you may start over with a FAT32 filesystem; or create the desired filesystem, run setup-bootable with the -u option to skip the syslinux install, and then either refer to the manual method to fix the problem or use one of the other bootloader options instead. +

+
Tip: The Bootloaders page shows different ways to setup booting, and multi-boot menus!
+


+

The setup-bootable script accepts the following arguments and command-line switches (you can run setup-bootable -h to see a usage message). +

+

setup-bootable source [dest]

+

The argument source can be a directory or an ISO (will be mounted to MNT or /mnt) or a URL (will be downloaded with WGET or wget). The argument dest can be a directory mountpoint, or will default to /media/usb if not supplied. +

+

+
-k
+
Keep alpine_dev in syslinux.cfg; otherwise, replace with UUID.
+

+ +

+
-u
+
Upgrade mode: keep existing syslinux.cfg and don't run syslinux
+

+ +

+
-f
+
Overwrite syslinux.cfg even if -u was specified.
+

+ +

+
-s
+
Force the running of syslinux even if -u was specified.
+

+ +

+
-v
+
Verbose mode
+

+

The script will ensure that source and dest are available; will copy the contents of source to dest, ensuring first that there's enough space; and unless -u was specified, will make dest bootable. +

Suppose the target device is /dev/sdXY, then this partition can be prepared for booting with +

+

# setup-bootable -v /media/<installation-media-device> /dev/sdXY +

+

For the manual way to set up boot media see Manually_copying_Alpine_files. +


+

+

setup-xorg-base

+

This is a standalone script; it's not invoked by setup-alpine but must be run manually. +

It configures a graphical environment, installing basic Xorg packages and udev (replacing mdev), and is also required for Wayland sessions. +

The script installs, among other packages, e.g.: xorg-server xf86-input-libinput xinit udev. +

Additional packages to install may be supplied as arguments. +

+

setup-xorg-base [additional package(s) to install]

+


+

+

Video packages (optional)

+

You may install specific xf86 xorg driver packages for your video card's chipset, as they may support specific features, effects and acceleration modes, and avoid error messages during X initialization. +

However, the most basic X features should work fine with just using the default kernel video-modesetting drivers. +

Info about the particular video cards that are installed in the computer may be found in the list of PCI devices: +

+

# apk add pciutils
$ lspci

+

To see available video driver packages run: +

+

$ apk search xf86-video

+

For example, +

+
  • For a SiS video chipset install 'xf86-video-sis'.
+

# apk add xf86-video-sis

+

Others: +

+
  • For Intel video chipsets install 'xf86-video-intel' and see Intel Video.
+
Tip: In some cases, freezes on suspend/resume stop happening after changing which video port the monitor is connected to.
+ +

Input packages

+

If the Num Lock settings are not working, or you are getting 'setleds not found' errors, install kbd: +

+

# apk add kbd

+

If some input device is not working at all, the available xf86-input drivers can be listed with: +

+

$ apk search xf86-input

You probably at least want

xf86-input-libinput

or

xf86-input-evdev

libinput is the input library used by Wayland, and is used by Xorg through a wrapper; evdev is Xorg-only.

+

Typical legacy drivers (not packaged, at least as of 2/2022): +

+

# apk add xf86-input-mouse xf86-input-keyboard

+

And for touchpad tapping support on many laptops, also: +

+

# apk add xf86-input-synaptics

+

Configure xorg-server (optional)

+

On most systems, xorg should be able to autodetect all devices. However, you can still configure xorg-server by hand by launching: +

+

# Xorg -configure

+

This will create a `/root/xorg.conf.new` file. You can modify this file to fit your needs.
(When finished modifying and testing the above configuration file, move it to `/etc/X11/xorg.conf` for normal usage.) +

+

Keyboard Layout (optional)

+

If you use a keyboard layout other than "us", and you are using a window manager or desktop environment that does not support configuring the keyboard layout itself, then you need to +

+ +

and install setxkbmap: +

+

# apk add setxkbmap

+

Then try +

+
# setxkbmap <%a language layout from /usr/share/X11/xkb/rules/xorg.lst%>
+
+


To make it persistent, add this section to /etc/X11/xorg.conf: +

+

Section "InputClass"
        Identifier "Keyboard Default"
        MatchIsKeyboard "yes"
        Option "XkbLayout" "<%a language layout from /usr/share/X11/xkb/rules/xorg.lst%>"
EndSection

+


Another way to change the keymap when logging into X is to use ~/.xinitrc. The following example loads a British keymap; simply add this line to the beginning of the file:

setxkbmap gb &


If you need to create the ~/.xinitrc file, you may also want to add a second line like exec openbox-session to still start the window manager with startx or xinit. +
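Putting the two lines together, a minimal ~/.xinitrc based on the examples in this section would look like the sketch below (assumes openbox is the installed window manager; substitute your own layout and session):

```shell
# ~/.xinitrc - executed by startx/xinit
setxkbmap gb &        # load the British keymap (substitute your own layout)
exec openbox-session  # hand control to the window manager
```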


+

+

Documentation needed

+

setup-xen-dom0

+


+

+

setup-mta

+

Uses ssmtp. +

This is a standalone script; it's not invoked by setup-alpine but must be run manually. +


+

+

setup-acf

+

This is a standalone script; it's not invoked by setup-alpine but must be run manually. +

This script was named setup-webconf before Alpine 1.9 beta 4. +

See ACF pages for more information. +

+

+ \ No newline at end of file diff --git a/AlpineLinux/Wireguard on Alpine.html b/AlpineLinux/Wireguard on Alpine.html new file mode 100644 index 0000000..91c2abf --- /dev/null +++ b/AlpineLinux/Wireguard on Alpine.html @@ -0,0 +1,336 @@ + +Configure a Wireguard interface (wg) - Alpine Linux + + + + + + + + + +

Configure a Wireguard interface (wg)

+
+ +
+
+
+
+
From Alpine Linux
+
+ + + +
+ +
+
+

WireGuard is a very promising VPN technology available in the community repository since Alpine 3.10. +

There are several ways to install and configure an interface. +

In order to load the WireGuard kernel module, you need a compatible kernel: +

+
  • linux-lts
  • linux-virt
+

Bringing up an interface using wg-quick

+

The most straightforward method, and the one recommended in WireGuard documentation, is to use wg-quick. +

Install wireguard-tools +

+
apk add wireguard-tools
+
+

Reboot and then load the module +

+
modprobe wireguard
+
+

Add it to /etc/modules to automatically load it on boot. +
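One way to do that idempotently is sketched below; MODULES_FILE defaults to a scratch file here so the snippet can be tried without root, but on a real system you would point it at /etc/modules:

```shell
# Append "wireguard" to the modules file unless it is already listed.
MODULES_FILE="${MODULES_FILE:-./modules.demo}"  # use /etc/modules on a real system
grep -qx wireguard "$MODULES_FILE" 2>/dev/null || echo wireguard >> "$MODULES_FILE"
```

Because of the grep guard, running the snippet repeatedly never adds duplicate entries.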

Then, we need to create a private and a public key: +

+
wg genkey | tee privatekey | wg pubkey > publickey
+
+

Then, we create a new config file /etc/wireguard/wg0.conf using those keys: +

+
[Interface]
+Address = 10.123.0.1/24
+ListenPort = 45340
+PrivateKey = SG1nXk2+kAAKnMkL5aX3NSFPaGjf9SQI/wWwFj9l9U4= # the key from the previously generated privatekey file
+PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE;iptables -A FORWARD -o %i -j ACCEPT
+PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE;iptables -D FORWARD -o %i -j ACCEPT
+
+

The PostUp and PostDown steps ensure that the interface wg0 will accept traffic and forward it to eth0. The POSTROUTING and FORWARD rules are not required, but they enable a "VPN mode" where peers can access the internet via this server if desired. Note that this requires iptables installed and enabled: apk add iptables && rc-update add iptables. See the WireGuard documentation for information on adding peers to the config file. +
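For orientation, a peer entry appended to wg0.conf generally follows the shape below (the values are placeholders, not working ones; see the WireGuard documentation for the full set of peer options):

```ini
[Peer]
# The peer's public key (placeholder), generated on the peer with:
#   wg genkey | tee privatekey | wg pubkey
PublicKey = <peer-public-key>
# Tunnel-internal addresses that are routed to this peer
AllowedIPs = 10.123.0.2/32
```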

To bring up the new interface we use: +

+
wg-quick up wg0
+
+

To take it down, we can use wg-quick down wg0 which will clean up the interface and remove the iptables rules. +Note: If running in a Docker container, you will need to run with --cap-add=NET_ADMIN to modify your interfaces. +

+

Bringing up an interface using ifupdown-ng

+

The official documents from WireGuard show examples of how to set up an interface with the use of wg-quick. +In this how-to, we are not going to use that utility. We'll use the plain wg command and ifupdown-ng. +

+
apk add wireguard-tools-wg
+
+

Now that all the tools are installed, you can set up the interface. The setup of your interface config is out of scope for this document; you should consult the manual page of wg. +

After you have finished setting up your wgX interface config, you can add it to your /etc/network/interfaces: +

+
auto wg0
+iface wg0 inet static
+       requires eth0
+       use wireguard
+       address 192.168.42.1
+       post-up iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE;iptables -A FORWARD -o wg0 -j ACCEPT
+       post-down iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE;iptables -D FORWARD -o wg0 -j ACCEPT
+
+

This config will automatically: +

+
  • bring the WireGuard interface up after the eth0 interface
  • assign a config to this interface (which you have previously created)
  • set up the interface address and netmask
  • add the route once the interface is up
  • remove the interface when it goes down
  • enable traffic forwarding (the post-up and post-down lines; requires iptables; not required unless you want peers to be able to access external resources like the internet)
+
Note: If you are using the same config (/etc/wireguard/wg0.conf) from a wg-quick setup, you must comment out the Address line in the [Interface] section. Otherwise, the interface will not come up.
+

To start and stop the interface, you execute: +

+
ifup wg0
+ifdown wg0
+
+

If your interface config is not stored under /etc/wireguard/ you need to specify a wireguard-config-path as well. +
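In that case the stanza might look like the sketch below (the /etc/wg-configs/ path is a made-up example of a non-default location):

```text
auto wg0
iface wg0 inet static
       use wireguard
       wireguard-config-path /etc/wg-configs/wg0.conf
       address 192.168.42.1
```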

+

Enable IP Forwarding

+

If you intend for peers to be able to access external resources (including the internet), you will need to enable forwarding. +Edit the file /etc/sysctl.conf (or a .conf file under /etc/sysctl.d/) and add the following line. +

+
net.ipv4.ip_forward = 1
+
+

Then either reboot or run sysctl -p /etc/sysctl.conf to reload the settings. +To ensure forwarding is turned on, run sysctl -a | grep ip_forward and ensure net.ipv4.ip_forward is set to 1. +To make the change permanent across reboots, you may need to enable the sysctl service: rc-update add sysctl. +


+

+

Running with modloop

+

If you are running from a RAM disk, you can't modify the modloop. +

You can get around it by unpacking the modloop, mounting the unpacked modules folder, then installing WireGuard. +

+
#!/bin/sh
apk add squashfs-tools                       # install squashfs tools to unpack modloop
unsquashfs -d /root/squash /lib/modloop-lts  # unpack modloop to root dir
umount /.modloop                             # unmount existing modloop
mount -o bind /root/squash /.modloop         # bind-mount the unpacked modules in its place
apk del wireguard-lts                        # uninstall previous WireGuard install
apk add wireguard-lts
apk add wireguard-tools
+
+

You can repack the squash filesystem, or put this script (with a .start suffix) in the /etc/local.d/ directory so it runs at boot-up. +

+
+ \ No newline at end of file diff --git a/Change MAC Addresses on Linux.html b/Change MAC Addresses on Linux.html new file mode 100644 index 0000000..4cce7ed --- /dev/null +++ b/Change MAC Addresses on Linux.html @@ -0,0 +1,337 @@ + + + + +How to Permanently Change Your MAC Address on Linux + + + + + + + + + + + + + + + + + + + + + + + + + +

How to Permanently Change Your MAC Address on Linux

You can set a permanent new MAC address in the terminal using the macchanger utility and a systemctl unit file. Or in GNOME, go to Settings > Wi-Fi [or Network] > Identity, and enter a custom MAC address.

Every network interface has a unique MAC address, set by the manufacturer. It’s how network connections identify connection endpoints. On Linux, you can permanently change a MAC address if you want.

+ +

What Is a MAC Address?

+

A MAC address is a unique code used by networks to identify devices as connection endpoints. It answers the critical question of “who’s who” among network interfaces.

+

Every piece of network equipment has at least one network interface built into it. A desktop computer or a server may have multiple network cards installed. Many laptops are supplied with a CAT5 network socket and a Wi-Fi card, giving you two network interfaces straight out of the box.

+

Every network interface has a unique, baked-in identifier. Regardless of the network protocol that is used to communicate with that device, at the lowest level, the connection is identified by its media access control, or MAC, address. That’s why they have to be unique. Making your network interface use a different MAC address is called spoofing.

+

A MAC address is made up of six two-digit hexadecimal numbers. They’re written with a colon “:” or a hyphen “-” between each of the six numbers. Here’s a MAC address from one of our test computers.

+
b0:c0:90:58:b0:72
+

Most often, the first three numbers are an organizationally unique identifier, representing the hardware manufacturer. You can decode the OUI using the Wireshark Manufacturer Lookup page. Note that this may be the manufacturer of your computer’s motherboard, network card, or Wi-Fi card. Manufacturers buy in many of the components of their computers and assemble them into the finished item, so don’t be surprised if it is different than the manufacturer of your computer.
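The split into OUI and device-specific halves is easy to see in the shell, using the example address above (`cut` simply slices the colon-separated fields):

```shell
mac="b0:c0:90:58:b0:72"
oui=$(echo "$mac" | cut -d: -f1-3)   # organizationally unique identifier
nic=$(echo "$mac" | cut -d: -f4-6)   # device-specific part
echo "OUI: $oui  device: $nic"
```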

+

Because MAC addresses are built-in, you can’t really change them. What you can do is configure your Linux system so that it pretends to have a different MAC address. As far as any other device on the network is concerned, the MAC address of your computer is the one it broadcasts, so the end result is the same.

+

Finding Your MAC Address

+

To find out your current MAC address, you can use the ip command with the link object. This will list your network interfaces, whether they are in use or disconnected from the network.

+
ip link
+

Using the ip link command to discover the MAC addresses of a computer

+

This computer is a laptop with an active Wi-Fi connection, wlan0, and a wired Ethernet connection, enp3s0. The wired connection isn’t plugged in, so it is inactive. The laptop also has the default loopback connection, lo, configured.

+

RELATED: How to Use the ip Command on Linux

+

Use macchanger to Change Your Linux MAC Address

+

The macchanger utility allows you to change the MAC address of a network interface with flexible options. You can set a specific MAC address or use a random MAC address. You can also get a new MAC address that uses the same three OUI bytes as your hardware, so that the network interface manufacturer stays the same.

+
+
+

Installing macchanger

+

To install macchanger on Ubuntu, use this command:

+
sudo apt install macchanger
+

Installing macchanger on Ubuntu

+

To install macchanger on Fedora, you need to type:

+
sudo dnf install macchanger
+

Installing macchanger on Fedora

+

On Manjaro, the command is:

+
sudo pacman -S macchanger
+

Installing macchanger on Manjaro

+

Depending on the version of macchanger that is in your distribution’s repositories, you may see a screen asking you whether you want to have a new MAC address created every time a network connection is brought online. That is, when you connect an Ethernet cable or enable Wi-Fi.

+

The macchanger installation options screen

+

Use the arrow keys to move the highlight to the option you wish to use, and press “Enter.”

+

There is some convenience to this method, but we’re going to select “No”. We want to have some control over the MAC addresses we’re using. Also, you may not want to change the MAC address on every network interface that your computer has. Perhaps you only want to change it on your Wi-Fi card, for example.

+

Using macchanger to Temporarily Change a MAC Address

+

You can’t reset the MAC address on a network interface that is in use. We can change the MAC address of our Ethernet connection because it isn’t connected, so it is inactive.

+

The -r (random) option generates a completely random MAC address. We need to pass the name of the network interface we want to set the MAC address on.

+
sudo macchanger -r enp3s0
+

Setting a random MAC address with macchanger

+

The MAC address that was in use was the same as the underlying hardware MAC address, or permanent MAC address. The new MAC address is shown at the bottom.

+

We can change the Wi-Fi card’s MAC address too, if we bring down the Wi-Fi adapter, change the MAC address, then enable the Wi-Fi adapter.

+
sudo networkctl down wlan0
+
sudo macchanger -r wlan0
+
sudo networkctl up wlan0
+

Disabling and enabling a Wi-Fi connection to allow its MAC address to be changed using machanger

+

If you don’t want a random MAC address, you can use the -m (MAC address) option and specify a MAC address in colon “:” format, in lowercase hexadecimal.

+
sudo macchanger -m ae:f9:9b:31:40:c0 enp3s0
+

Setting a specific MAC address with macchanger

+

RELATED: How to Set a Static IP Address in Ubuntu

+

How to Permanently Change a MAC Address

+

That’s all nice and simple, but it doesn’t survive a reboot.

+

We can achieve persistence, however, by using a systemd unit file. We’ll get macchanger to give our laptop new MAC addresses for its Ethernet and Wi-Fi interfaces each time it boots.

+

We’re going to use the -e (ending) option so that the MAC address is changed but the three OUI bytes remain the same.

+

That means our spoofed MAC address will appear to belong to hardware manufactured by the same companies that made our actual Ethernet and Wi-Fi hardware. This will avoid problems with any routers, firewalls, or switches that reject packets that don’t come from hardware with a recognized manufacturer.

+

We’re going to create two services. There’ll be one for the Ethernet connection, and one for the Wi-Fi connection. A single unit file will act as a template for each service.

+ +

To create our unit file, we need to use sudo and edit a file with the base name we want our services to have. The at sign “@” in the file name is replaced by the name of the network connection when the service is launched, as we’ll see.

+

We’re calling our unit file “macspoof@.service” because it spoofs MAC addresses.

+
sudo gedit /etc/systemd/system/macspoof@.service
+

Launching an editor to create a systemd unit file

+

Copy this text into your unit file, save your file, and close your editor.

+
[Unit]
+Description=Spoofing MAC address on %I
+Wants=network-pre.target
+Before=network-pre.target
+BindsTo=sys-subsystem-net-devices-%i.device
+After=sys-subsystem-net-devices-%i.device
+
+[Service]
+ExecStart=/usr/bin/macchanger -e %I
+Type=oneshot
+
+[Install]
+WantedBy=multi-user.target
+

We need to create a service for each of our connections. We do this by adding the name of the network interface behind the at sign “@” in the unit name. We’ll do our Ethernet connection first:

+
sudo systemctl enable macspoof@enp3s0.service
+

And we’ll do the same thing for our Wi-Fi connection.

+
sudo systemctl enable macspoof@wlan0.service
+

Enabling the two services to change MAC addresses at boot time

+

After rebooting our laptop, we can use macchanger to see what our current MAC addresses are. Note we don’t need to use sudo because we’re only using macchanger to report on the MAC address settings, and not to change them.

+
macchanger enp3s0
+
macchanger wlan0
+

Using macchanger to show the current MAC addresses for the Ethernet and Wi-Fi connections

+

This shows us the currently active, spoofed, MAC addresses on our two network interfaces, and their original MAC addresses.

+

Because we used the -e (ending) option in our unit file, the first three bytes of the spoofed addresses are the same as the first three bytes of the original MAC addresses.

+

Permanently Changing a MAC Address with GNOME

+

Most desktop environments allow you to set a new MAC address. In GNOME you can do this by opening “Settings” and selecting either “Wi-Fi” or “Network” from the sidebar.

+

Click the cogged wheel icon next to the connection you wish to set a MAC address for, and select the “Identity” tab.

+

You can enter a new MAC address in the “MAC Address” field, or select the genuine MAC address from the drop-down menu.

+

The GNOME network connection Identity tab in the Settings application

+

The “Cloned Address” drop-down menu lets you select from:

+
    +
  • Preserve: Keep the MAC address at boot-time. Don’t change from the set MAC address.
  • +
  • Permanent: Use the genuine hardware MAC address.
  • +
  • Random: Generate a random MAC address.
  • +
  • Stable: Generate a stable, hashed MAC address. Every time the connection activates, the same fake MAC address is used. This can be useful in cases where you want to hide your hardware MAC address, but you need to get the same IP address from a DHCP router.
  • +
+

Your changes will take place when you reboot, or turn the connection off and on again.

+

Be Careful!

+

Changing your MAC address isn’t illegal, so long as you don’t do it to impersonate someone else’s network device. Your jurisdiction will probably have laws in place to deal with unlawfully receiving network traffic. For example, the UK has the Computer Misuse Act and the U.S. has the Computer Fraud and Abuse Act.

+

Become anonymous by all means, but don’t pretend to be someone else.

+

RELATED: How to Use bmon to Monitor Network Bandwidth on Linux

+ +
+
+
+Profile Photo for Dave McKay + +Dave McKay +
Dave McKay first used computers when punched paper tape was in vogue, and he has been programming ever since. After over 30 years in the IT industry, he is now a full-time technology journalist. During his career, he has worked as a freelance programmer, manager of an international software development team, an IT services project manager, and, most recently, as a Data Protection Officer. His writing has been published by  howtogeek.com, cloudsavvyit.com, itenterpriser.com, and opensource.com. Dave is a Linux evangelist and open source advocate.
+ + + + \ No newline at end of file diff --git a/Games/Backup Ark - Nitrado.html b/Games/Backup Ark - Nitrado.html new file mode 100644 index 0000000..df0ace9 --- /dev/null +++ b/Games/Backup Ark - Nitrado.html @@ -0,0 +1,358 @@ + +How to Backup Your ARK Server | Nitrado | NITRADO + + +

    How to Backup Your ARK Server

    How to Backup Your ARK Server – Introduction

    +

    During ARK server adventures, players experience many memorable moments while playing, from adventuring through the narrow molten element caves of the ARK Aberration DLC to taming a mighty T-Rex and heading into a PvP battle against your best friend. With such amazing moments, saving those memories is just as important as the adventure. Backing up an ARK server preserves those files, storing that part of your journey on your server and/or PC.

    +
    +
    +how to backup your ARK server using Nitrado +
    +

    +

    When you backup your ARK server, this not only provides a way to go back to that world later but also makes a restore point in case something breaks and you need to restore your server. An ARK server with Nitrado includes free automatic backups, but you can also take your own backups directly to your computer. In this guide, we will show you how to create a backup of your ARK server and perform restores if needed so that you can continue your adventure!

    +

    How to Make Backups

    +

    Automatic Server Backups and Restorations

    +

    With Nitrado, your ARK server will come equipped with automatic server backups. Two types of backups occur with an ARK server at Nitrado: map backups and server backups. Map backups are taken every three hours, while server backups are taken once a day. This means that if any issues ever occur, such as map corruption or even your favorite tamed dinosaur dying, you can quickly restore your server to a point before that problem began. To find and restore ARK backups on your server, follow these steps:

    +
      +
    1. Head to your ARK server panel and, to the left under “Tools”, select the “Restore Backup” option. +
      +
      +restore backup option on your ARK server +
      +

      +
    2. +
    3. From there, you’ll be able to see the “Backup Management” section that will include “Map Save Backups (Internal Save Game Backups)” specifically for ARK server maps and “Server Backups”. Under the Server Backups section, you’ll see ARK: SE backups along with the backups of any other game that was installed and had taken automatic backups.
    4. +
    5. Afterward, if you’d like to restore a backup you can press the red “Restore” option next to that backup. Your server will ask for confirmation. Press “Restore” once more to confirm the backup restoration. +
      +
      +ARK server backups and internal game saves +
      +

      +
    6. +
    7. Finally, once the restoration finishes, you can start your server. You’ll be ready to start your ARK exploration from that backup restore point.
    8. +
    +

    Backup Your ARK Server Locally

    +

    Local ARK backups are another great method of backing up your server. A local backup of your ARK server is one stored on your computer, ensuring that you have a copy with you at all times. To create a local backup of your ARK server, follow these steps:

    +
      +
    1. To start, log in to your ARK server’s FTP by following this guide.
    2. +
    3. Next, in your server’s FTP go to the location /arkse/ShooterGame/Saved/.
    4. +
    5. In the “Saved” folder, you will see three important folders named “Config”, “Logs”, and “SavedArks”. +
      +
      +SavedARKs folder - where to find ARK server files +
      +

      +
    6. +
        +
      1. Config – The config folder holds your important server configuration files including the GameUserSettings.ini, Game.ini, and Engine.ini.
      2. +
      3. Logs – In the logs folder, you’ll find detailed server logs for when your server is in use with everything from server startup information to player connect/disconnect messages.
      4. +
      5. SavedArks – The most important folder of the three, the SavedArks folder contains your map and player save files.
      6. +
      +
    7. Once you’re at this point, on the left side of your FTP client, use the navigation to go to a safe location on your computer and create a new folder named “ARK server backup – calendar date”. This will provide you with a place to save your files and be able to access them again. After creating the folder, click and enter that folder. +
      +
      +creating an ARK server backup folder +
      +

      +
    8. +
    9. Now, drag and drop the ARK folders you’d like to backup to that newly created folder on your computer. This will start the backup process. +
      +
      +how to backup your ARK server to your local files +
      +

      +
    10. +
    11. Lastly, allow the files to finish backing up to your computer. You can now exit your FTP program and you’ll have a local backup of your ARK server.
    12. +
    +

    Need to restore a local ARK server backup? Look below to learn how!

    +

    Restoring a Manual Server Backup

    +
      +
    1. To restore a local backup of your ARK server, begin by stopping your server.
    2. +
    3. Next, log in to your ARK server’s FTP by following this guide.
    4. +
    5. After logging in, in your FTP go to the location /arkse/ShooterGame/Saved/.
    6. +
    7. In this location, if there are any files, we recommend removing them. Alternatively, you can create a folder named “old” and move the folders inside of it to disable them. This will prepare your server for the backup. +
      +
      +restoring old ARK server files to your server +
      +

      +
    8. +
    9. On the left side of your external FTP program, find the local ARK server backup.
    10. +
    11. Once you have found those files, select them and move them over to your server to upload them. Allow those files to completely upload to your server. Depending on the file sizes and your internet upload speed, this can take some time. +
      +
      +how to restore your ARK files +
      +

      +
    12. +
    13. After the files finish uploading, go back to your ARK server. You’ll need to ensure your ARK server is set to the correct map to load your ARK save files.
    14. +
    15. While on your ARK server panel, on the left side, press “General” to edit your server settings. +
      +
      +editing your ARK server settings +
      +

      +
    16. +
    17. In your ARK server settings find the “Mapname” option. Use this option’s dropdown to select the correct map. +
      +
      +restoring the correct ARK map to your server +
      +

      +
    18. +
    19. Finally, with the correct map selected, press “Save changes” and restart your ARK server. Your ARK game server will now load with that backup, ready for you and your fellow ARK players to continue the ARK adventure!
    20. +
    +

    Backup and Protect Your ARK Server

    +

    Are you protected with an ARK server backup? If you don’t have a backup ready, now is a great time to get started. Not only will it take just a short time, but it’ll also protect your server for as long as you keep that backup. Now that you know how to create a backup of your ARK server, back up your ARK game server today and have peace of mind knowing you and your players are safe.

    +


    07/12/2022

    \ No newline at end of file diff --git a/Games/Backup Ark.html b/Games/Backup Ark.html new file mode 100644 index 0000000..31eb5de --- /dev/null +++ b/Games/Backup Ark.html @@ -0,0 +1,144 @@ + +Backup Game Saves Locally in ARK: Survival Evolved – Ark + + + + + + + + + + + +
    + + +
    +
    + +
    +
    + +
    + +
    + +
    +
    +

    Backup Game Saves Locally in ARK: Survival Evolved

    +

    GM Argos

    +
    +
    +
    +

    If you are here you probably already have some knowledge on how to use and connect to your server via FTP. This article is for you to learn how to create local Backups of your game saves.
    If you do not know how to use FTP or do not have a client installed, please refer to our Wiki article: "FTP - FileZilla"

    +

    Making a Local Backup for ARK:SE Game Saves

    +

    1. Stop the server and wait approximately 5 minutes.
    2. Connect to your server via FTP using the credentials found on your Nitrado web interface.
    3. Create a new Directory on your "Local site." (This will be for your saves)
    4. Navigate your game save file path on the "Remote Site".

    +
      +
    • Ark:SE > ShooterGame > Saved > SavedArks
    • +
    +

    5. Click and drag the Game save folder (SavedARKs) to the directory you made on the "Local Site"

    +

    How Do I Restore My Game Save?

    +

    1. Stop the server and wait approximately 5 minutes.
    2. Connect to your server via FTP using the credentials found on your Nitrado web interface.
    3. Navigate your file path on the "Remote Site" to find your game save folder.

    +
      +
    • Ark:SE > ShooterGame > Saved
    • +
    +

    4. Rename your current "SavedArks" folder to "SavedArks_Old"
    5. Drag The desired ARK game save folder (SavedArks) from your "local site" to the "Remote Site"
    6. Start server

    +

     

    +

    NOTE: This does not apply to console servers; we cannot provide any access to the FTP system or save games of any Xbox One or PS4 unofficial hosted services.

    +
    + + + + + + + + + + diff --git a/Hugo/Sections.html b/Hugo/Sections.html new file mode 100644 index 0000000..ca13045 --- /dev/null +++ b/Hugo/Sections.html @@ -0,0 +1,64 @@ + Hugo sections tutorial: How to customise section pages

    How to customise Hugo sections

    Do you want to supercharge your Hugo sections? There are several great tips here for you.

    Ron Erdos
Updated March 9, 2020
    Tested with Hugo version 0.74.3

    Hugo sections overview

    Let’s go over how Hugo sections work before customising them.

    Hugo automatically generates sections if you have subfolders in your content folder

    If you use subfolders to organise your posts, Hugo will automatically create sections for this content for you.

    For example, let’s say you have content on Mars, as well as content on Venus.

    And let’s say you organise your Hugo content like this:

    📁 content
      📁 mars
     is-there-life-on-mars.md
     red-planet.md
  📁 venus
     second-brightest.md

    Given the above, you’d end up with the following URIs:

    /mars/
    /mars/is-there-life-on-mars/
    /mars/red-planet/

    /venus/
    /venus/second-brightest/

    The first URI in each group above (/mars/ and /venus/) is a section index; the rest of the URIs are blog posts.

    Sections appear in Hugo blog post urls by default

    As you can see above, whatever you name your section folder will appear in blog post URLs from that section.

    For example, the URL of the blog post /venus/second-brightest/ includes the /venus/ subfolder.

    You can eliminate the section subfolder from Hugo blog post urls

    In Hugo, you can override the URL to eliminate the subfolder.

    However, in this tutorial we’ll be using the default URL setting we’ve been exploring, i.e. where section subfolder names appear in blog post URLs.
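As a quick aside, the override works by setting an explicit url in a post's front matter; the title and slug below are purely illustrative:

```yaml
---
title: "Second Brightest"
# Publishes at /second-brightest/ instead of the default /venus/second-brightest/
url: /second-brightest/
---
```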

    Hugo also generates section index pages by default

    Hugo also automatically generates index pages for each section. These are the /mars/ and /venus/ pages we discussed above.

    However, if your theme doesn’t have a customised layout file for sections, then it will likely look quite boring.

    I’ll now show you how to make these section index pages more interesting.

    How to customise section indexes to make them more interesting

    What if we wanted a custom h1 heading of “Destination Mars”? And what if we wanted some custom intro text before the list of Mars blog posts?

    The way to do that is below. There are two parts.

    Part 1: Store custom headings, descriptions, and images in an _index.md file

    We’ll need to keep our custom h1 heading (“Destination Mars”) and our custom intro text in a special file named _index.md, which we’ll need to store in the root (top level) of our Mars section.

    In other words, the file will need to live at /content/mars/_index.md.

    Code for playing along

    First, create a (blank) new file under /content/mars/ and name it _index.md. Don’t forget the leading underscore in the filename.

    Secondly, copy and paste in the following text into your _index.md file and save it:

    ---
    title: "Destination Mars"
    summary: "When will we land on the Red Planet?"
    ---

    Part 2: Create a custom layout named section.html

    OK, so we’ve stored our custom text in the right place, now we need to create a layout file to call that custom text.

    Code for playing along

    Create a blank new file named section.html in the /layouts/_default/ subfolder that’s included automatically in each Hugo site.

    Inside this /layouts/_default/section.html file, copy and paste the below:

    <h1>{{ .Title }}</h1>
    <p>{{ .Summary }}</p>
    <div>
    	{{ range .Pages }}
    		{{ .Render "li" }}
    	{{ end }}
    </div>

    Note that you’ll also need to include any header or footer partials that you use elsewhere on your site. You can learn more about partials by scrolling down about a screen’s worth.
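Folding those partials in, a fuller section.html might look like the sketch below. It assumes your theme's header and footer partials are named header.html and footer.html, as they are in the single-template example later in this tutorial:

```
{{ partial "header.html" . }}

<h1>{{ .Title }}</h1>
<p>{{ .Summary }}</p>
<div>
	{{ range .Pages }}
		{{ .Render "li" }}
	{{ end }}
</div>

{{ partial "footer.html" . }}
```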

    HTML output

    So now our rendered /mars/ section homepage will contain this HTML:

    <h1>Destination Mars</h1>
    <p>When will we land on the Red Planet?</p>
    <!-- Blog posts below -->

    Pretty cool right?

    Example of a customised Hugo section homepage

    You can see a live example of a customised section homepage right here on this site.

    How to get different email signup forms on posts from different sections

    What if you want to allow people to sign up for email updates just for Mars posts, or just for Venus posts?

    You’d need a custom email signup form for each section. I do that on this site, by the way.

    I’ll show you how to do that, but first, you need to know about a Hugo feature called “partials”.

    OK, so let’s say we have a subscribe partial in our single blog post template. In this subscribe partial we’ll put our email signup forms.

    Code for playing along

    Our single blog post template, /layouts/_default/single.html, might look like this:

    {{ partial "header.html" . }}

    <h1>{{ .Title }}</h1>
    {{ .Content }}

    {{ partial "subscribe.html" . }}
    {{ partial "footer.html" . }}

    So there are three partials in there: the header, the footer, and the email signup forms.

    All pretty straightforward.

    However, what if you want a different signup form for each section? One for Venus, one for Mars?

    You do it in the subscribe partial itself, using Hugo’s if and else if statements:

    Code for playing along

    In our subscribe partial, /layouts/partials/subscribe.html, we see:

    {{ if eq .Section "mars" }}
    	<!-- Mars signup form goes here -->
    {{ else if eq .Section "venus" }}
    	<!-- Venus signup form goes here -->
    {{ end }}

    Nested partials (Hugo partials inside partials)

    You can also use nested partials (partials inside other partials) in Hugo.

    So rather than putting the two sets of email signup forms (one for Mars, one for Venus) directly in the subscribe partial like we did above … (draws breath) … you could first put each form in its own partial, then use them in the conditional logic:

    {{ if eq .Section "mars" }}
    	{{ partial "subscribe-mars.html" . }}
    {{ else if eq .Section "venus" }}
    	{{ partial "subscribe-venus.html" . }}
    {{ end }}

    That way, you’d be free to use just the Mars email signup form elsewhere on the site, by calling {{ partial "subscribe-mars.html" . }}

    How to get different RSS feeds in different Hugo sections

    In a similar way to the section-specific email signup form work we did above, what if you wanted a different RSS feed for each section?

    Hugo does most of the work for us, but not all. Read on to find out how to do this.

    How does RSS work in Hugo by default?

    Even if you don’t alter any RSS-related code, Hugo will create a bunch of different RSS feeds for you, right out of the box.

    To start, there’ll be one for each section; that is, one for each folder in the root of your Hugo /content folder.

    So let’s say we had the same file structure as we saw earlier in this tutorial:

    📁 content
      📁 mars
     is-there-life-on-mars.md
     red-planet.md
  📁 venus
     second-brightest.md

    You can see that we have two sections, one for Mars, the other for Venus.

    Hugo will automatically generate RSS feeds at:

    /mars/index.xml

    /venus/index.xml

    It will also generate a top-level RSS feed at:

    /index.xml

    which will contain content on both Mars and Venus.

    How to eliminate the top-level RSS feed and just use the section-specific RSS feeds

    I’m not sure if there’s an easy way to suppress publication of the top-level RSS feed (i.e. /index.xml), but I can show you how to encourage feed readers to only present your section-specific RSS feeds to readers.

    OK, here’s how to do that:

    Code for playing along

    In the <head> section of your homepage, include this code:

    {{ range .Site.Sections }}
    	<link rel="alternate" type="application/rss+xml"
    	title="{{ .Site.Title }} {{ title .Section }} Articles"
    	href="{{ .Site.BaseURL }}{{ .Section }}/index.xml">
    {{ end }}

    As usual, I’ve broken up the code over a few lines so you can read this tutorial more easily on your mobile device. However, your production code should keep the link element all on one line.

    Now let’s go through the code, line by line.

    {{ range .Site.Sections }} starts a loop over each site section. In our case, that’s mars and venus.

    <link rel="alternate" type="application/rss+xml" starts our RSS link. This is all standard, nothing to customise here.

    title="{{ .Site.Title }} {{ title .Section }} Articles" generates the title of the RSS link. We’re using two variables here.

    The first variable is {{ .Site.Title }}. This is the name of your site, and it’s pulled from your config.toml file, specifically the title field.

    The second variable is {{ title .Section }}. The .Section part pulls the name of section subfolders (e.g. mars or venus), while title turns them into Title Case (Mars and Venus).

    Note that I’ve also hardcoded Articles to the end of the link title, so it will read, for example, “MoonBooth Mars Articles”.

    href="{{ .Site.BaseURL }}{{ .Section }}/index.xml"> generates the actual link itself. There are two variables in here, one of which we saw a moment ago.

    The first variable is {{ .Site.BaseURL }}, and this also looks in your config.toml file, but this time, it’s looking for the baseURL field. For example: baseURL = "https://example.com/".

    The second variable is {{ .Section }}, which we saw a moment ago; it fetches mars and venus, the names of the subfolders under your Hugo /content/ folder.

    I’ve hardcoded /index.xml to the end of the link, so we’ll end up with, for example, https://example.com/mars/index.xml as the value of href.

    {{ end }} closes the loop we opened a few lines above.

    OK, let’s see how it looks!

    HTML output

    Putting that together, you end up with this output in your rendered homepage’s HTML:

    <link rel="alternate"
    type="application/rss+xml"
    title="MoonBooth Mars Articles"
    href="https://example.com/mars/index.xml">

    <link rel="alternate"
    type="application/rss+xml"
    title="MoonBooth Venus Articles"
    href="https://example.com/venus/index.xml">

    Note that Hugo pulls the sections in alphabetical order, which is why we see Mars before Venus.


    The planets in our solar system
    \ No newline at end of file diff --git a/Install Unifi Controller Linux.html b/Install Unifi Controller Linux.html new file mode 100644 index 0000000..d4bea0a --- /dev/null +++ b/Install Unifi Controller Linux.html @@ -0,0 +1,392 @@ + +Updating Self-Hosted UniFi Network Servers (Linux) – Ubiquiti Support and Help Center + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Updating Self-Hosted UniFi Network Servers (Linux)

    Updated on 9 May 2023

    This article provides the steps to update the UniFi Network application to the current stable release on a Debian or Ubuntu system via APT (Advanced Package Tool). If you run into issues following the process described in this article, please take a look at the scripts provided in this Community post that includes UniFi Network software installation on Ubuntu 18.04 and 16.04 and Debian 8/9.


    Requirements


    In order to update the UniFi Network application via APT, it is necessary to create source files or edit lines in an existing sources.list file with Linux text editors: vi or nano. The repo structure should be permanent, but if there are any changes they will be pointed out in the UniFi Network software version release posts, found in the Release section of the Community.


    Before upgrading the UniFi Network application, make sure that you have backed up the UniFi Network Database. You will need to make sure that the user has sudo permissions. For more information about adding a user to sudo list, see this Debian article.


    UniFi Network APT Steps

1. Install the required packages before you begin with the following command:

   sudo apt-get update && sudo apt-get install ca-certificates apt-transport-https

2. Use the following command to add a new source list:

   echo 'deb https://www.ui.com/downloads/unifi/debian stable ubiquiti' | sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.list

3. Add the GPG keys.

   Method A (recommended) - Install the following trusted key into /etc/apt/trusted.gpg.d:

   sudo wget -O /etc/apt/trusted.gpg.d/unifi-repo.gpg https://dl.ui.com/unifi/unifi-repo.gpg

   Method B - Using apt-key:

   sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 06E85760C0A52C50

   When using the commands above, it is assumed you have sudo and wget installed. More information about sudo can be found here, and wget here.

   For Ubuntu 18.04, run the following commands before installing UniFi in step 4:

   wget -qO - https://www.mongodb.org/static/pgp/server-3.4.asc | sudo apt-key add -
   echo "deb https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
   sudo apt-get update

   See an example of what scripts the Community is using to install the UniFi Network application on Ubuntu 16.04 and 18.04 in this Community post.

4. Install and upgrade the UniFi Network application.

   On some distributions, an incompatible Java release can be installed during this step. We recommend running the following command before proceeding, to restrict Ubuntu from automatically installing Java 11. If you wish to undo this later, replace "hold" with "unhold":

   sudo apt-mark hold openjdk-11-*

   Then install and upgrade the UniFi Network application with the following command:

   sudo apt-get update && sudo apt-get install unifi -y

5. This step may not be required, depending on the Linux distro you have. If your distro does not come with MongoDB, and it's not available in its repo, then please see the MongoDB installation guide. You can find the latest installation guide for Ubuntu here, and Debian here. We recommend at least MongoDB 2.6.10. Some users have changed the backend to use MongoDB 3 successfully too.

6. The UniFi Network application should now be accessible at the computer's configured local or public IP address, by typing that IP address in a browser's navigation bar (Chrome is recommended). If it is not launching, use the following command:

   sudo service unifi start

    Other Helpful Commands

• To stop the UniFi service: sudo service unifi stop
• To restart the UniFi service: sudo service unifi restart
• To see the status of the UniFi service: sudo service unifi status
Click here for possible suite names and code names.

"Testing" refers to the next-generation release, which is not yet released to the general public. "Stable" refers to the current stable release, which is supported by Ubiquiti and described in this article. "Old Stable" is the previous stable release, which has been replaced by the new stable release.

    Log Files Location


    Log files are essential for any troubleshooting. Find them here:

• /usr/lib/unifi/logs/server.log
• /usr/lib/unifi/logs/mongod.log

    If your application is running on a Unix/Linux-based system, you will need superuser (sudo) privileges to access these log files.


    Notes and Tips

These notes have been added thanks to user collaboration.
    + + + + + + + + + + \ No newline at end of file diff --git a/Polyamorie/The Triforce of Communication.html b/Polyamorie/The Triforce of Communication.html new file mode 100644 index 0000000..4ab9298 --- /dev/null +++ b/Polyamorie/The Triforce of Communication.html @@ -0,0 +1,323 @@ + + + +The Triforce of Communication – blog.cone + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    The Triforce of Communication


    That moment when someone tells you about a problem, and you start offering solutions, but they shrug you off in annoyance because they're already fixing it…
    Or that moment when you confide in a friend because you need emotional support, and they just start lecturing you on what to do…


    "So on the triforce of communication you are sharing to be heard 🙂"

    Me, responding to a colleague on a random update

    The Triforce of Communication is a term that comes from the non-monogamous communities, but it's extremely useful in general life, coaching, or management situations!


    The triforce was coined by the Multiamory podcast in 2016 and nicely summarizes some basic modes of communication in any kind of professional or private exchange. When communicating with others, it helps to establish where on the triforce you or they are coming from.


    To quote the summary in their DLC edition:

    • Triforce number one, which is building intimacy or sharing.
    • Triforce number two, which is seeking support or acknowledgement.
    • Triforce number three, which is seeking advice or problem solving.

    In the opening examples, the first speaker was communicating on triforce one, sharing their plight. And so was the colleague from the quote. In the second example, I was communicating on the second level, looking for support. Yet both got a response in triforce number three, an attempt to solve the situation presented.
    Realizing what the goal is and matching it makes for better conversations all around. And sometimes it starts with just observing for yourself what you're looking for.


    An invaluable communication tool.


    Check them out!


    Published by Gert


    Person-at-large.

    + + + \ No newline at end of file diff --git a/Remove Systemd Networking.html b/Remove Systemd Networking.html new file mode 100644 index 0000000..ced51b3 --- /dev/null +++ b/Remove Systemd Networking.html @@ -0,0 +1,113 @@ + +Taking Back Control from systemd Networking + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    December 12, 2018 · linux networking

    Taking Back Control from systemd Networking


    systemd is a software suite that is common to many Linux distributions. Although useful, systemd is hard to configure and is too bloated. With systemd, the current networking configuration of the computer becomes much less transparent and manageable, and this is not ideal when managing a networked server. This guide therefore describes how to disable some systemd services, specifically for Ubuntu Server 18.04.


    Disabling networkd


    First, we can revert the networking service to the original Debian /etc/network/interfaces style of configuring the network:

    sudo apt-get update
    sudo apt-get install ifupdown

    Configure your /etc/network/interfaces using the handy Debian guide. Next, we disable the networkd services.
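For reference, a minimal static configuration in /etc/network/interfaces might look like the sketch below; the interface name and addresses are placeholders, so adapt them to your network (the Debian guide covers DHCP and more complex setups):

```
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
```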

    systemctl stop systemd-networkd.socket systemd-networkd \
    networkd-dispatcher systemd-networkd-wait-online
    systemctl disable systemd-networkd.socket systemd-networkd \
    networkd-dispatcher systemd-networkd-wait-online

    We can also remove netplan, as it is no longer used.

    sudo apt -y purge netplan.io

    Disabling resolved


    systemd also has a DNS resolver, but we can disable that:

    sudo systemctl disable systemd-resolved.service
    sudo systemctl stop systemd-resolved

    Delete the symlink /etc/resolv.conf, so that we can edit it.

    sudo rm /etc/resolv.conf

    Create a new resolv.conf file, and enter the DNS server that you would like to use.

    sudo nano /etc/resolv.conf

    For example, my file contains:

    nameserver 172.16.0.100

    Note that the /etc/resolv.conf file will not be automatically updated by your DHCP client unless you delete the following file:

    sudo rm /etc/dhcp/dhclient-enter-hooks.d/resolved

    References:


    https://askubuntu.com/questions/1031709/ubuntu-18-04-switch-back-to-etc-network-interfaces
    https://askubuntu.com/questions/907246/how-to-disable-systemd-resolved-in-ubuntu


    Update November 2019


    Here are two more tips for making a server more manageable. First, remove openresolv, with sudo apt remove openresolv. Finally, remove the DHCP client altogether, with sudo apt purge dhcpcd5 isc-dhcp-client isc-dhcp-common.

    Anonymous · 1 point · 3 years ago

    Much appreciated. And to think there was a time where a standard minimal installation was actually respectful...

    Anonymous · 1 point · 3 years ago

    This is great, thanks! I want control of my network, thank you very much, systemDenigrater.

    Anonymous · 1 point · 4 years ago

    Thank you for this, saved me massive headache with proxmox and bridges. You are the man!

    stefanhart · 1 point · 4 years ago

    Thank you very much! My search for "get rid of systemd-networkd" quickly pointed me to your site. Unfortunately systemdr*ck settles in everywhere, and it is not becoming easier to get rid of it and substitute it with something simpler, if a little slower.

    Anonymous · 0 points · 19 months ago

    Thank you very much, I still can't understand how they can leave such a broken setup which absolutely does not work. At least there's some hope with your solution to get out of the 4th dimension!

    Anonymous · 0 points · 3 years ago

    This is awesome!!! I even expanded on your blog over here.

    https://unix.stackexchange.com/questions/591414/how-do-you-block-network-acceess-to-systemd

    Anonymous · 0 points · 14 days ago

    Thanks for the article, ubuntu becomes acceptable after the removal of complex crap that goes against tradition and ease of use.

    Anonymous · 0 points · 13 months ago

    Love you

    + + + + + + + diff --git a/SystemD Service Files.html b/SystemD Service Files.html new file mode 100644 index 0000000..99b7ac2 --- /dev/null +++ b/SystemD Service Files.html @@ -0,0 +1,924 @@ +systemd.serviceIndex · + Directives systemd 253

    Name

    systemd.service — Service unit configuration

    Synopsis

    service.service

    Description

    A unit configuration file whose name ends in ".service" encodes information about a process controlled and supervised by systemd.

    This man page lists the configuration options specific to this unit type. See systemd.unit(5) for the common options of all unit configuration files. The common configuration items are configured in the generic [Unit] and [Install] sections. The service specific configuration options are configured in the [Service] section.

    Additional options are listed in systemd.exec(5), which define the execution environment the commands are executed in, and in systemd.kill(5), which define the way the processes of the service are terminated, and in systemd.resource-control(5), which configure resource control settings for the processes of the service.

    If SysV init compat is enabled, systemd automatically creates service units that wrap SysV init scripts (the service name is the same as the name of the script, with a ".service" suffix added); see systemd-sysv-generator(8).

    The systemd-run(1) command allows creating .service and .scope units dynamically and transiently from the command line.

    Service Templates

    It is possible for systemd services to take a single argument via the "service@argument.service" syntax. Such services are called "instantiated" services, while the unit definition without the argument parameter is called a "template". An example could be a dhcpcd@.service service template which takes a network interface as a parameter to form an instantiated service. Within the service file, this parameter or "instance name" can be accessed with %-specifiers. See systemd.unit(5) for details.
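As an illustrative sketch of such a template (this unit is hypothetical, not one shipped by any package; the daemon path is a placeholder):

```ini
# /etc/systemd/system/dhcpcd@.service (hypothetical example)
[Unit]
Description=DHCP client on %i

[Service]
# %i expands to the instance name, e.g. "eth0" for dhcpcd@eth0.service
ExecStart=/usr/sbin/dhcpcd %i
```

Starting dhcpcd@eth0.service would then instantiate the template with %i replaced by eth0.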

    Automatic Dependencies

    Implicit Dependencies

    The following dependencies are implicitly added:

    • Services with Type=dbus set automatically acquire dependencies of type Requires= and After= on dbus.socket.

    • Socket activated services are automatically ordered after their activating .socket units via an automatic After= dependency. Services also pull in all .socket units listed in Sockets= via automatic Wants= and After= dependencies.

    Additional implicit dependencies may be added as a result of execution and resource control parameters as documented in systemd.exec(5) and systemd.resource-control(5).

    Default Dependencies

    The following dependencies are added unless DefaultDependencies=no is set:

    • Service units will have dependencies of type Requires= and After= on sysinit.target, a dependency of type After= on basic.target as well as dependencies of type Conflicts= and Before= on shutdown.target. These ensure that normal service units pull in basic system initialization, and are terminated cleanly prior to system shutdown. Only services involved with early boot or late system shutdown should disable this option.

    • Instanced service units (i.e. service units with an "@" in their name) are assigned by default a per-template slice unit (see systemd.slice(5)), named after the template unit, containing all instances of the specific template. This slice is normally stopped at shutdown, together with all template instances. If that is not desired, set DefaultDependencies=no in the template unit, and either define your own per-template slice unit file that also sets DefaultDependencies=no, or set Slice=system.slice (or another suitable slice) in the template unit. Also see systemd.resource-control(5).

    Options

    Service unit files may include [Unit] and [Install] sections, which are described in systemd.unit(5).

    Service unit files must include a [Service] section, which carries information about the service and the process it supervises. A number of options that may be used in this section are shared with other unit types. These options are documented in systemd.exec(5), systemd.kill(5) and systemd.resource-control(5). The options specific to the [Service] section of service units are the following:

    Type=

    Configures the process start-up type for this service unit. One of simple, exec, forking, oneshot, dbus, notify, notify-reload or idle:

    • If set to simple (the default if ExecStart= is specified but neither Type= nor BusName= are), the service manager will consider the unit started immediately after the main service process has been forked off. It is expected that the process configured with ExecStart= is the main process of the service. In this mode, if the process offers functionality to other processes on the system, its communication channels should be installed before the service is started up (e.g. sockets set up by systemd, via socket activation), as the service manager will immediately proceed starting follow-up units, right after creating the main service process, and before executing the service's binary. Note that this means systemctl start command lines for simple services will report success even if the service's binary cannot be invoked successfully (for example because the selected User= doesn't exist, or the service binary is missing).

    • The exec type is similar to simple, but the service manager will consider the unit started immediately after the main service binary has been executed. The service manager will delay starting of follow-up units until that point. (Or in other words: simple proceeds with further jobs right after fork() returns, while exec will not proceed before both fork() and execve() in the service process succeeded.) Note that this means systemctl start command lines for exec services will report failure when the service's binary cannot be invoked successfully (for example because the selected User= doesn't exist, or the service binary is missing).

    • If set to forking, it is expected that the process configured with ExecStart= will call fork() as part of its start-up. The parent process is expected to exit when start-up is complete and all communication channels are set up. The child continues to run as the main service process, and the service manager will consider the unit started when the parent process exits. This is the behavior of traditional UNIX services. If this setting is used, it is recommended to also use the PIDFile= option, so that systemd can reliably identify the main process of the service. systemd will proceed with starting follow-up units as soon as the parent process exits.

    • Behavior of oneshot is similar to simple; however, the service manager will consider the unit up after the main process exits. It will then start follow-up units. RemainAfterExit= is particularly useful for this type of service. Type=oneshot is the implied default if neither Type= nor ExecStart= are specified. Note that if this option is used without RemainAfterExit= the service will never enter "active" unit state, but directly transition from "activating" to "deactivating" or "dead" since no process is configured that shall run continuously. In particular this means that after a service of this type ran (and which has RemainAfterExit= not set) it will not show up as started afterwards, but as dead.

    • Behavior of dbus is similar to simple; however, it is expected that the service acquires a name on the D-Bus bus, as configured by BusName=. systemd will proceed with starting follow-up units after the D-Bus bus name has been acquired. Service units with this option configured implicitly gain dependencies on the dbus.socket unit. This type is the default if BusName= is specified. A service unit of this type is considered to be in the activating state until the specified bus name is acquired. It is considered activated while the bus name is taken. Once the bus name is released the service is considered being no longer functional which has the effect that the service manager attempts to terminate any remaining processes belonging to the service. Services that drop their bus name as part of their shutdown logic thus should be prepared to receive a SIGTERM (or whichever signal is configured in KillSignal=) as result.

    • Behavior of notify is similar to exec; however, + it is expected that the service sends a "READY=1" notification message via + sd_notify(3) or + an equivalent call when it has finished starting up. systemd will proceed with starting follow-up + units after this notification message has been sent. If this option is used, + NotifyAccess= (see below) should be set to open access to the notification + socket provided by systemd. If NotifyAccess= is missing or set to + none, it will be forcibly set to main.

    • Behavior of notify-reload is identical to + notify. However, it extends the logic in one way: the + SIGHUP UNIX process signal is sent to the service's main process when the + service is asked to reload. (The signal to send can be tweaked via + ReloadSignal=, see below.). When + initiating the reload process the service is then expected to reply with a notification message + via sd_notify(3) + that contains the "RELOADING=1" field in combination with + "MONOTONIC_USEC=" set to the current monotonic time + (i.e. CLOCK_MONOTONIC in + clock_gettime(2)) + in µs, formatted as decimal string. Once reloading is complete another notification message must + be sent, containing "READY=1". Using this service type and implementing this + reload protocol is an efficient alternative to providing an ExecReload= + command for reloading of the service's configuration.

    • Behavior of idle is very similar to simple; however, + actual execution of the service program is delayed until all active jobs are dispatched. This may be used + to avoid interleaving of output of shell services with the status output on the console. Note that this + type is useful only to improve console output, it is not useful as a general unit ordering tool, and the + effect of this service type is subject to a 5s timeout, after which the service program is invoked + anyway.

    It is generally recommended to use Type=simple for + long-running services whenever possible, as it is the simplest and fastest option. However, as this + service type won't propagate service start-up failures and doesn't allow ordering of other units + against completion of initialization of the service (which for example is useful if clients need to + connect to the service through some form of IPC, and the IPC channel is only established by the + service itself — in contrast to doing this ahead of time through socket or bus activation or + similar), it might not be sufficient for many cases. If so, notify, + notify-reload or dbus (the latter only in case the service + provides a D-Bus interface) are the preferred options as they allow service program code to + precisely schedule when to consider the service started up successfully and when to proceed with + follow-up units. The notify/notify-reload service types require + explicit support in the service codebase (as sd_notify() or an equivalent API + needs to be invoked by the service at the appropriate time) — if it's not supported, then + forking is an alternative: it supports the traditional UNIX service start-up + protocol. Finally, exec might be an option for cases where it is enough to ensure + the service binary is invoked, and where the service binary itself executes no or little + initialization on its own (and its initialization is unlikely to fail). Note that using any type + other than simple possibly delays the boot process, as the service manager needs + to wait for service initialization to complete. It is hence recommended not to needlessly use any + types other than simple. (Also note it is generally not recommended to use + idle or oneshot for long-running services.)
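    As a sketch of how the recommended types look in practice, a hypothetical long-running daemon (unit name and binary path are illustrative, not taken from this page) might be declared as:

```ini
# /etc/systemd/system/example-daemon.service — illustrative only
[Unit]
Description=Example long-running daemon

[Service]
# notify: the daemon calls sd_notify("READY=1") once initialized, so
# follow-up units are started only when the service is actually usable.
Type=notify
ExecStart=/usr/local/bin/example-daemon

[Install]
WantedBy=multi-user.target
```

    With Type=simple instead, the unit would be considered started the moment the process is forked off, regardless of whether initialization later fails.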

    ExitType=

    Specifies when the manager should consider the service to be finished. One of main or cgroup:

    • If set to main (the default), the service manager will consider the unit stopped when the main process, which is determined according to the Type=, exits. Consequently, it cannot be used with Type=oneshot.

    • If set to cgroup, the service will be considered running as long as at least one process in the cgroup has not exited.

    It is generally recommended to use ExitType=main when a service has a known forking model and a main process can reliably be determined. ExitType=cgroup is meant for applications whose forking model is not known ahead of time and which might not have a specific main process. It is well suited for transient or automatically generated services, such as graphical applications inside of a desktop environment.
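    For instance, a unit wrapping a graphical application with an unknown forking model (the application name here is purely illustrative) could opt into the cgroup exit semantics:

```ini
[Service]
Type=simple
# Consider the service running while *any* process in its cgroup is
# still alive, not just the initially forked main process.
ExitType=cgroup
ExecStart=/usr/bin/some-gui-app
```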

    RemainAfterExit=

    Takes a boolean value that specifies whether the service shall be considered active even when all its processes exited. Defaults to no.

    GuessMainPID=

    Takes a boolean value that specifies whether systemd should try to guess the main PID of a service if it cannot be determined reliably. This option is ignored unless Type=forking is set and PIDFile= is unset, because for the other types or with an explicitly configured PID file, the main PID is always known. The guessing algorithm might come to incorrect conclusions if a daemon consists of more than one process. If the main PID cannot be determined, failure detection and automatic restarting of a service will not work reliably. Defaults to yes.

    PIDFile=

    Takes a path referring to the PID file of the service. Usage of this option is recommended for services where Type= is set to forking. The path specified typically points to a file below /run/. If a relative path is specified it is hence prefixed with /run/. The service manager will read the PID of the main process of the service from this file after start-up of the service. The service manager will not write to the file configured here, although it will remove the file after the service has shut down if it still exists. The PID file does not need to be owned by a privileged user, but if it is owned by an unprivileged user additional safety restrictions are enforced: the file may not be a symlink to a file owned by a different user (neither directly nor indirectly), and the PID file must refer to a process already belonging to the service.

    Note that PID files should be avoided in modern projects. Use Type=notify, Type=notify-reload or Type=simple where possible, which does not require use of PID files to determine the main process of a service and avoids needless forking.
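    A minimal fragment for a traditional forking daemon (daemon name and PID file path are illustrative):

```ini
[Service]
Type=forking
# The daemon is assumed to write its main PID to this file after forking.
PIDFile=/run/legacy-daemon.pid
ExecStart=/usr/sbin/legacy-daemon --daemonize
```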

    BusName=

    Takes a D-Bus destination name that this service shall use. This option is mandatory for services where Type= is set to dbus. It is recommended to always set this property if known to make it easy to map the service name to the D-Bus destination. In particular, the systemctl service-log-level/service-log-target verbs make use of this.

    ExecStart=

    Commands with their arguments that are executed when this service is started. The value is split into zero or more command lines according to the rules described in the "Command Lines" section below.

    Unless Type= is oneshot, exactly one command must be given. When Type=oneshot is used, zero or more commands may be specified. Commands may be specified by providing multiple command lines in the same directive, or alternatively, this directive may be specified more than once with the same effect. If the empty string is assigned to this option, the list of commands to start is reset; prior assignments of this option will have no effect. If no ExecStart= is specified, then the service must have RemainAfterExit=yes and at least one ExecStop= line set. (Services lacking both ExecStart= and ExecStop= are not valid.)

    For each of the specified commands, the first argument must be either an absolute path to an executable or a simple file name without any slashes. Optionally, this filename may be prefixed with a number of special characters:

    Table 1. Special executable prefixes

    Prefix  Effect

    "@" — If the executable path is prefixed with "@", the second specified token will be passed as "argv[0]" to the executed process (instead of the actual filename), followed by the further arguments specified.

    "-" — If the executable path is prefixed with "-", an exit code of the command normally considered a failure (i.e. non-zero exit status or abnormal exit due to signal) is recorded, but has no further effect and is considered equivalent to success.

    ":" — If the executable path is prefixed with ":", environment variable substitution (as described by the "Command Lines" section below) is not applied.

    "+" — If the executable path is prefixed with "+" then the process is executed with full privileges. In this mode privilege restrictions configured with User=, Group=, CapabilityBoundingSet= or the various file system namespacing options (such as PrivateDevices=, PrivateTmp=) are not applied to the invoked command line (but still affect any other ExecStart=, ExecStop=, … lines). However, note that this will not bypass options that apply to the whole control group, such as DevicePolicy=, see systemd.resource-control(5) for the full list.

    "!" — Similar to the "+" character discussed above, this permits invoking command lines with elevated privileges. However, unlike "+" the "!" character exclusively alters the effect of User=, Group= and SupplementaryGroups=, i.e. only the stanzas that affect user and group credentials. Note that this setting may be combined with DynamicUser=, in which case a dynamic user/group pair is allocated before the command is invoked, but credential changing is left to the executed process itself.

    "!!" — This prefix is very similar to "!"; however, it only has an effect on systems lacking support for ambient process capabilities, i.e. without support for AmbientCapabilities=. It's intended to be used for unit files that take benefit of ambient capabilities to run processes with minimal privileges wherever possible while remaining compatible with systems that lack ambient capabilities support. Note that when "!!" is used, and a system lacking ambient capability support is detected, any configured SystemCallFilter= and CapabilityBoundingSet= stanzas are implicitly modified in order to permit spawned processes to drop credentials and capabilities themselves, even if this is configured to not be allowed. Moreover, if this prefix is used and a system lacking ambient capability support is detected, AmbientCapabilities= will be skipped and not be applied. On systems supporting ambient capabilities, "!!" has no effect and is redundant.

    "@", "-", ":", and one of "+"/"!"/"!!" may be used together and they can appear in any order. However, only one of "+", "!", "!!" may be used at a time. Note that these prefixes are also supported for the other command line settings, i.e. ExecStartPre=, ExecStartPost=, ExecReload=, ExecStop= and ExecStopPost=.

    If more than one command is specified, the commands are invoked sequentially in the order they appear in the unit file. If one of the commands fails (and is not prefixed with "-"), other lines are not executed, and the unit is considered failed.

    Unless Type=forking is set, the process started via this command line will be considered the main process of the daemon.
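    To illustrate two of the prefixes above in combination (the commands and paths are hypothetical):

```ini
[Service]
User=nobody
# "-": failure of this clean-up command is treated as success.
ExecStartPre=-/usr/bin/rm -f /tmp/example.lock
# "+": this one line runs with full privileges, ignoring User= above.
ExecStartPre=+/usr/bin/install -d -o nobody /run/example
ExecStart=/usr/bin/example-daemon
```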

    ExecStartPre=, ExecStartPost=

    Additional commands that are executed before or after the command in ExecStart=, respectively. Syntax is the same as for ExecStart=, except that multiple command lines are allowed and the commands are executed one after the other, serially.

    If any of those commands (not prefixed with "-") fail, the rest are not executed and the unit is considered failed.

    ExecStart= commands are only run after all ExecStartPre= commands that were not prefixed with a "-" exit successfully.

    ExecStartPost= commands are only run after the commands specified in ExecStart= have been invoked successfully, as determined by Type= (i.e. the process has been started for Type=simple or Type=idle, the last ExecStart= process exited successfully for Type=oneshot, the initial process exited successfully for Type=forking, "READY=1" is sent for Type=notify/Type=notify-reload, or the BusName= has been taken for Type=dbus).

    Note that ExecStartPre= may not be used to start long-running processes. All processes forked off by processes invoked via ExecStartPre= will be killed before the next service process is run.

    Note that if any of the commands specified in ExecStartPre=, ExecStart=, or ExecStartPost= fail (and are not prefixed with "-", see above) or time out before the service is fully up, execution continues with the commands specified in ExecStopPost=; the commands in ExecStop= are skipped.

    Note that the execution of ExecStartPost= is taken into account for the purpose of Before=/After= ordering constraints.
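    The ordering described above can be sketched with a hypothetical one-shot import job (all paths and commands are illustrative):

```ini
[Service]
Type=oneshot
# Runs first; a failure here aborts the start-up.
ExecStartPre=/usr/bin/test -f /etc/example/config
ExecStart=/usr/bin/example-import /etc/example/config
# Runs only after ExecStart= succeeded; units ordered After= this
# service wait for it as well.
ExecStartPost=/usr/bin/logger "example import finished"
RemainAfterExit=yes
```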

    ExecCondition=

    Optional commands that are executed before the commands in ExecStartPre=. Syntax is the same as for ExecStart=, except that multiple command lines are allowed and the commands are executed one after the other, serially.

    The behavior is like an ExecStartPre= and condition check hybrid: when an ExecCondition= command exits with exit code 1 through 254 (inclusive), the remaining commands are skipped and the unit is not marked as failed. However, if an ExecCondition= command exits with 255 or abnormally (e.g. timeout, killed by a signal, etc.), the unit will be considered failed (and remaining commands will be skipped). An exit code of 0, or one matching SuccessExitStatus=, will continue execution to the next commands.

    The same recommendations about not running long-running processes in ExecStartPre= also apply to ExecCondition=. ExecCondition= will also run the commands in ExecStopPost=, as part of stopping the service, in the case of any non-zero or abnormal exits, like the ones described above.
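    A minimal sketch of the exit-code semantics, using a hypothetical network daemon that should silently not start when a given interface is absent:

```ini
[Service]
# Exit 0: continue start-up. Exit 1–254: skip start-up, unit not failed.
# Exit 255 or abnormal termination: unit is marked failed.
ExecCondition=/usr/bin/test -e /sys/class/net/eth0
ExecStart=/usr/bin/example-net-daemon
```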

    ExecReload=

    Commands to execute to trigger a configuration reload in the service. This argument takes multiple command lines, following the same scheme as described for ExecStart= above. Use of this setting is optional. Specifier and environment variable substitution is supported here following the same scheme as for ExecStart=.

    One additional, special environment variable is set: if known, $MAINPID is set to the main process of the daemon, and may be used for command lines like the following:

    ExecReload=kill -HUP $MAINPID

    Note however that reloading a daemon by enqueuing a signal (as with the example line above) is usually not a good choice, because this is an asynchronous operation and hence not suitable when ordering reloads of multiple services against each other. It is thus strongly recommended to either use Type=notify-reload in place of ExecReload=, or to set ExecReload= to a command that not only triggers a configuration reload of the daemon, but also synchronously waits for it to complete. For example, dbus-broker(1) uses the following:

    ExecReload=busctl call org.freedesktop.DBus \
            /org/freedesktop/DBus org.freedesktop.DBus \
            ReloadConfig
    ExecStop=

    Commands to execute to stop the service started via ExecStart=. This argument takes multiple command lines, following the same scheme as described for ExecStart= above. Use of this setting is optional. After the commands configured in this option are run, it is implied that the service is stopped, and any processes remaining for it are terminated according to the KillMode= setting (see systemd.kill(5)). If this option is not specified, the process is terminated by sending the signal specified in KillSignal= or RestartKillSignal= when service stop is requested. Specifier and environment variable substitution is supported (including $MAINPID, see above).

    Note that it is usually not sufficient to specify a command for this setting that only asks the service to terminate (for example, by sending some form of termination signal to it), but does not wait for it to do so. Since the remaining processes of the services are killed according to KillMode= and KillSignal= or RestartKillSignal= as described above immediately after the command exited, this may not result in a clean stop. The specified command should hence be a synchronous operation, not an asynchronous one.

    Note that the commands specified in ExecStop= are only executed when the service started successfully first. They are not invoked if the service was never started at all, or in case its start-up failed, for example because any of the commands specified in ExecStart=, ExecStartPre= or ExecStartPost= failed (and weren't prefixed with "-", see above) or timed out. Use ExecStopPost= to invoke commands when a service failed to start up correctly and is shut down again. Also note that the stop operation is always performed if the service started successfully, even if the processes in the service terminated on their own or were killed. The stop commands must be prepared to deal with that case. $MAINPID will be unset if systemd knows that the main process exited by the time the stop commands are called.

    Service restart requests are implemented as stop operations followed by start operations. This means that ExecStop= and ExecStopPost= are executed during a service restart operation.

    It is recommended to use this setting for commands that communicate with the service requesting clean termination. For post-mortem clean-up steps use ExecStopPost= instead.
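    The synchronous-stop recommendation above might look like this in a unit fragment (the control utility and its --wait flag are hypothetical, standing in for whatever shutdown command the daemon ships):

```ini
[Service]
ExecStart=/usr/bin/example-daemon
# The control tool is assumed to return only after the daemon has
# actually exited, so remaining processes are not killed mid-shutdown.
ExecStop=/usr/bin/example-ctl shutdown --wait
```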

    ExecStopPost=

    Additional commands that are executed after the service is stopped. This includes cases where the commands configured in ExecStop= were used, where the service does not have any ExecStop= defined, or where the service exited unexpectedly. This argument takes multiple command lines, following the same scheme as described for ExecStart=. Use of this setting is optional. Specifier and environment variable substitution is supported. Note that – unlike ExecStop= – commands specified with this setting are invoked when a service failed to start up correctly and is shut down again.

    It is recommended to use this setting for clean-up operations that shall be executed even when the service failed to start up correctly. Commands configured with this setting need to be able to operate even if the service failed starting up half-way and left incompletely initialized data around. As the service's processes have been terminated already when the commands specified with this setting are executed, they should not attempt to communicate with them.

    Note that all commands that are configured with this setting are invoked with the result code of the service, as well as the main process' exit code and status, set in the $SERVICE_RESULT, $EXIT_CODE and $EXIT_STATUS environment variables, see systemd.exec(5) for details.

    Note that the execution of ExecStopPost= is taken into account for the purpose of Before=/After= ordering constraints.

    RestartSec=

    Configures the time to sleep before restarting a service (as configured with Restart=). Takes a unit-less value in seconds, or a time span value such as "5min 20s". Defaults to 100ms.

    TimeoutStartSec=

    Configures the time to wait for start-up. If a daemon service does not signal start-up completion within the configured time, the service will be considered failed and will be shut down again. The precise action depends on the TimeoutStartFailureMode= option. Takes a unit-less value in seconds, or a time span value such as "5min 20s". Pass "infinity" to disable the timeout logic. Defaults to DefaultTimeoutStartSec= set in the manager, except when Type=oneshot is used, in which case the timeout is disabled by default (see systemd-system.conf(5)).

    If a service of Type=notify/Type=notify-reload sends "EXTEND_TIMEOUT_USEC=…", this may cause the start time to be extended beyond TimeoutStartSec=. The first receipt of this message must occur before TimeoutStartSec= is exceeded, and once the start time has extended beyond TimeoutStartSec=, the service manager will allow the service to continue to start, provided the service repeats "EXTEND_TIMEOUT_USEC=…" within the interval specified until the service startup status is finished by "READY=1" (see sd_notify(3)).
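    The "EXTEND_TIMEOUT_USEC=…" and "READY=1" messages use the plain sd_notify(3) datagram protocol: a short text payload sent to the AF_UNIX socket named by the NOTIFY_SOCKET environment variable. A hand-rolled sketch of that protocol in Python (this is not the libsystemd call itself; the function name and the 30-second extension value are illustrative):

```python
import os
import socket

def sd_notify(message: str) -> bool:
    """Send one sd_notify(3)-style datagram to the service manager.

    Returns False when NOTIFY_SOCKET is not set (i.e. not running under
    systemd), mirroring the no-op behavior of the real sd_notify().
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    # A leading '@' denotes an abstract-namespace socket on Linux.
    if addr.startswith("@"):
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode(), addr)
    return True

if os.environ.get("NOTIFY_SOCKET"):
    # During a slow start-up: ask for 30 more seconds, then signal readiness.
    sd_notify("EXTEND_TIMEOUT_USEC=30000000")
    sd_notify("READY=1")
```

    The same helper can carry the other fields mentioned on this page ("RELOADING=1", "STOPPING=1", "WATCHDOG=1"), since they all travel over the identical datagram channel.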

    TimeoutStopSec=

    This option serves two purposes. First, it configures the time to wait for each ExecStop= command. If any of them times out, subsequent ExecStop= commands are skipped and the service will be terminated by SIGTERM. If no ExecStop= commands are specified, the service gets the SIGTERM immediately. This default behavior can be changed by the TimeoutStopFailureMode= option. Second, it configures the time to wait for the service itself to stop. If it doesn't terminate in the specified time, it will be forcibly terminated by SIGKILL (see KillMode= in systemd.kill(5)). Takes a unit-less value in seconds, or a time span value such as "5min 20s". Pass "infinity" to disable the timeout logic. Defaults to DefaultTimeoutStopSec= from the manager configuration file (see systemd-system.conf(5)).

    If a service of Type=notify/Type=notify-reload sends "EXTEND_TIMEOUT_USEC=…", this may cause the stop time to be extended beyond TimeoutStopSec=. The first receipt of this message must occur before TimeoutStopSec= is exceeded, and once the stop time has extended beyond TimeoutStopSec=, the service manager will allow the service to continue to stop, provided the service repeats "EXTEND_TIMEOUT_USEC=…" within the interval specified, or terminates itself (see sd_notify(3)).

    TimeoutAbortSec=

    This option configures the time to wait for the service to terminate when it was aborted due to a watchdog timeout (see WatchdogSec=). If the service has a short TimeoutStopSec= this option can be used to give the system more time to write a core dump of the service. Upon expiration the service will be forcibly terminated by SIGKILL (see KillMode= in systemd.kill(5)). The core file will be truncated in this case. Use TimeoutAbortSec= to set a sensible timeout for the core dumping per service that is large enough to write all expected data while also being short enough to handle the service failure in due time.

    Takes a unit-less value in seconds, or a time span value such as "5min 20s". Pass an empty value to skip the dedicated watchdog abort timeout handling and fall back to TimeoutStopSec=. Pass "infinity" to disable the timeout logic. Defaults to DefaultTimeoutAbortSec= from the manager configuration file (see systemd-system.conf(5)).

    If a service of Type=notify/Type=notify-reload handles SIGABRT itself (instead of relying on the kernel to write a core dump) it can send "EXTEND_TIMEOUT_USEC=…" to extend the abort time beyond TimeoutAbortSec=. The first receipt of this message must occur before TimeoutAbortSec= is exceeded, and once the abort time has extended beyond TimeoutAbortSec=, the service manager will allow the service to continue to abort, provided the service repeats "EXTEND_TIMEOUT_USEC=…" within the interval specified, or terminates itself (see sd_notify(3)).

    TimeoutSec=

    A shorthand for configuring both TimeoutStartSec= and TimeoutStopSec= to the specified value.

    TimeoutStartFailureMode=, TimeoutStopFailureMode=

    These options configure the action that is taken in case a daemon service does not signal start-up within its configured TimeoutStartSec=, respectively if it does not stop within TimeoutStopSec=. Takes one of terminate, abort and kill. Both options default to terminate.

    If terminate is set the service will be gracefully terminated by sending the signal specified in KillSignal= (defaults to SIGTERM, see systemd.kill(5)). If the service does not terminate, the FinalKillSignal= is sent after TimeoutStopSec=. If abort is set, WatchdogSignal= is sent instead and TimeoutAbortSec= applies before sending FinalKillSignal=. This setting may be used to analyze services that fail to start up or shut down intermittently. By using kill the service is immediately terminated by sending FinalKillSignal= without any further timeout. This setting can be used to expedite the shutdown of failing services.

    RuntimeMaxSec=

    Configures a maximum time for the service to run. If this is used and the service has been active for longer than the specified time it is terminated and put into a failure state. Note that this setting does not have any effect on Type=oneshot services, as they terminate immediately after activation completed. Pass "infinity" (the default) to configure no runtime limit.

    If a service of Type=notify/Type=notify-reload sends "EXTEND_TIMEOUT_USEC=…", this may cause the runtime to be extended beyond RuntimeMaxSec=. The first receipt of this message must occur before RuntimeMaxSec= is exceeded, and once the runtime has extended beyond RuntimeMaxSec=, the service manager will allow the service to continue to run, provided the service repeats "EXTEND_TIMEOUT_USEC=…" within the interval specified until the service shutdown is achieved by "STOPPING=1" (or termination) (see sd_notify(3)).

    RuntimeRandomizedExtraSec=

    This option modifies RuntimeMaxSec= by increasing the maximum runtime by an evenly distributed duration between 0 and the specified value (in seconds). If RuntimeMaxSec= is unspecified, then this feature will be disabled.

    WatchdogSec=

    Configures the watchdog timeout for a service. The watchdog is activated when the start-up is completed. The service must call sd_notify(3) regularly with "WATCHDOG=1" (i.e. the "keep-alive ping"). If the time between two such calls is larger than the configured time, then the service is placed in a failed state and it will be terminated with SIGABRT (or the signal specified by WatchdogSignal=). By setting Restart= to on-failure, on-watchdog, on-abnormal or always, the service will be automatically restarted. The time configured here will be passed to the executed service process in the WATCHDOG_USEC= environment variable. This allows daemons to automatically enable the keep-alive pinging logic if watchdog support is enabled for the service. If this option is used, NotifyAccess= (see below) should be set to open access to the notification socket provided by systemd. If NotifyAccess= is not set, it will be implicitly set to main. Defaults to 0, which disables this feature. The service can check whether the service manager expects watchdog keep-alive notifications. See sd_watchdog_enabled(3) for details. sd_event_set_watchdog(3) may be used to enable automatic watchdog notification support.
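    Pulling the pieces above together, a watchdog-supervised unit could look like this (the daemon binary is illustrative; it is assumed to read WATCHDOG_USEC and ping accordingly):

```ini
[Service]
Type=notify
ExecStart=/usr/bin/example-daemon
# The daemon receives WATCHDOG_USEC=30000000 and must send "WATCHDOG=1"
# more often than every 30 s (commonly at half the interval).
WatchdogSec=30
Restart=on-watchdog
```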

    Restart=

    Configures whether the service shall be restarted when the service process exits, is killed, or a timeout is reached. The service process may be the main service process, but it may also be one of the processes specified with ExecStartPre=, ExecStartPost=, ExecStop=, ExecStopPost=, or ExecReload=. When the death of the process is a result of systemd operation (e.g. service stop or restart), the service will not be restarted. Timeouts include missing the watchdog "keep-alive ping" deadline and service start, reload, and stop operation timeouts.

    Takes one of no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always. If set to no (the default), the service will not be restarted. If set to on-success, it will be restarted only when the service process exits cleanly. In this context, a clean exit means any of the following:

    • exit code of 0;
    • for types other than Type=oneshot, one of the signals SIGHUP, SIGINT, SIGTERM, or SIGPIPE;
    • exit statuses and signals specified in SuccessExitStatus=.

    If set to on-failure, the service will be restarted when the process exits with a non-zero exit code, is terminated by a signal (including on core dump, but excluding the aforementioned four signals), when an operation (such as service reload) times out, and when the configured watchdog timeout is triggered. If set to on-abnormal, the service will be restarted when the process is terminated by a signal (including on core dump, excluding the aforementioned four signals), when an operation times out, or when the watchdog timeout is triggered. If set to on-abort, the service will be restarted only if the service process exits due to an uncaught signal not specified as a clean exit status. If set to on-watchdog, the service will be restarted only if the watchdog timeout for the service expires. If set to always, the service will be restarted regardless of whether it exited cleanly or not, got terminated abnormally by a signal, or hit a timeout.

    Table 2. Exit causes and the effect of the Restart= settings

    Exit cause \ Restart setting   no  always  on-success  on-failure  on-abnormal  on-abort  on-watchdog
    Clean exit code or signal          X       X
    Unclean exit code                  X                   X
    Unclean signal                     X                   X           X            X
    Timeout                            X                   X           X
    Watchdog                           X                   X           X                      X

    As exceptions to the setting above, the service will not be restarted if the exit code or signal is specified in RestartPreventExitStatus= (see below) or the service is stopped with systemctl stop or an equivalent operation. Also, the service will always be restarted if the exit code or signal is specified in RestartForceExitStatus= (see below).

    Note that service restart is subject to unit start rate limiting configured with StartLimitIntervalSec= and StartLimitBurst=, see systemd.unit(5) for details. A restarted service enters the failed state only after the start limits are reached.

    Setting this to on-failure is the recommended choice for long-running services, in order to increase reliability by attempting automatic recovery from errors. For services that shall be able to terminate on their own choice (and avoid immediate restarting), on-abnormal is an alternative choice.
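    Combining the recommendation with the rate-limiting note above, a typical fragment might read (values and binary path are illustrative):

```ini
[Unit]
# Give up if the service fails to start 5 times within 30 s.
StartLimitIntervalSec=30
StartLimitBurst=5

[Service]
ExecStart=/usr/bin/example-daemon
Restart=on-failure
RestartSec=2
```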

    SuccessExitStatus=

    Takes a list of exit status definitions that, when returned by the main service process, will be considered successful termination, in addition to the normal successful exit status 0 and, except for Type=oneshot, the signals SIGHUP, SIGINT, SIGTERM, and SIGPIPE. Exit status definitions can be numeric termination statuses, termination status names, or termination signal names, separated by spaces. See the Process Exit Codes section in systemd.exec(5) for a list of termination status names (for this setting only the part without the "EXIT_" or "EX_" prefix should be used). See signal(7) for a list of signal names.

    Note that this setting does not change the mapping between numeric exit statuses and their names, i.e. regardless how this setting is used 0 will still be mapped to "SUCCESS" (and thus typically shown as "0/SUCCESS" in tool outputs) and 1 to "FAILURE" (and thus typically shown as "1/FAILURE"), and so on. It only controls what happens as effect of these exit statuses, and how it propagates to the state of the service as a whole.

    This option may appear more than once, in which case the list of successful exit statuses is merged. If the empty string is assigned to this option, the list is reset; all prior assignments of this option will have no effect.

    Example 1. A service with the SuccessExitStatus= setting

    SuccessExitStatus=TEMPFAIL 250 SIGKILL

    Exit status 75 (TEMPFAIL), 250, and the termination signal SIGKILL are considered clean service terminations.


    Note: systemd-analyze exit-status may be used to list exit statuses and translate between numerical status values and names.

    RestartPreventExitStatus=

    Takes a list of exit status definitions that, when returned by the main service process, will prevent automatic service restarts, regardless of the restart setting configured with Restart=. Exit status definitions can either be numeric exit codes or termination signal names, and are separated by spaces. Defaults to the empty list, so that, by default, no exit status is excluded from the configured restart logic. For example:

    RestartPreventExitStatus=1 6 SIGABRT

    ensures that exit codes 1 and 6 and the termination signal SIGABRT will not result in automatic service restarting. This option may appear more than once, in which case the list of restart-preventing statuses is merged. If the empty string is assigned to this option, the list is reset and all prior assignments of this option will have no effect.

    Note that this setting has no effect on processes configured via ExecStartPre=, ExecStartPost=, ExecStop=, ExecStopPost= or ExecReload=, but only on the main service process, i.e. either the one invoked by ExecStart= or (depending on Type=, PIDFile=, …) the otherwise configured main process.

    RestartForceExitStatus=

    Takes a list of exit status definitions that, when returned by the main service process, will force automatic service restarts, regardless of the restart setting configured with Restart=. The argument format is similar to RestartPreventExitStatus=.
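    As an illustration (the daemon path and the exit code are hypothetical), a unit that is normally not restarted but whose daemon signals "please respawn me" via a specific exit code could be written as:

```ini
[Service]
ExecStart=/usr/sbin/my-daemon
# Do not restart in general...
Restart=no
# ...but always restart when the daemon exits with the (hypothetical)
# respawn-request code 42 or is killed by SIGUSR1.
RestartForceExitStatus=42 SIGUSR1
```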

    RootDirectoryStartOnly=

    Takes a boolean argument. If true, the root directory, as configured with the RootDirectory= option (see systemd.exec(5) for more information), is only applied to the process started with ExecStart=, and not to the various other ExecStartPre=, ExecStartPost=, ExecReload=, ExecStop=, and ExecStopPost= commands. If false, the setting is applied to all configured commands the same way. Defaults to false.

    NonBlocking=

    Set the O_NONBLOCK flag for all file descriptors passed via socket-based activation. If true, all file descriptors >= 3 (i.e. all except stdin, stdout, stderr), excluding those passed in via the file descriptor storage logic (see FileDescriptorStoreMax= for details), will have the O_NONBLOCK flag set and hence are in non-blocking mode. This option is only useful in conjunction with a socket unit, as described in systemd.socket(5), and has no effect on file descriptors which were previously saved in the file-descriptor store, for example. Defaults to false.

    NotifyAccess=

    Controls access to the service status notification socket, as accessible via the sd_notify(3) call. Takes one of none (the default), main, exec or all. If none, no daemon status updates are accepted from the service processes, all status update messages are ignored. If main, only service updates sent from the main process of the service are accepted. If exec, only service updates sent from any of the main or control processes originating from one of the Exec*= commands are accepted. If all, all service updates from all members of the service's control group are accepted. This option should be set to open access to the notification socket when using Type=notify/Type=notify-reload or WatchdogSec= (see above). If those options are used but NotifyAccess= is not configured, it will be implicitly set to main.

    Note that sd_notify() notifications may be attributed to units correctly only if either the sending process is still around at the time PID 1 processes the message, or if the sending process is explicitly runtime-tracked by the service manager. The latter is the case if the service manager originally forked off the process, i.e. on all processes that match main or exec. Conversely, if an auxiliary process of the unit sends an sd_notify() message and immediately exits, the service manager might not be able to properly attribute the message to the unit, and thus will ignore it, even if NotifyAccess=all is set for it.

    Hence, to eliminate all race conditions involving lookup of the client's unit and attribution of notifications to units correctly, sd_notify_barrier() may be used. This call acts as a synchronization point and ensures all notifications sent before this call have been picked up by the service manager when it returns successfully. Use of sd_notify_barrier() is needed for clients which are not invoked by the service manager, otherwise this synchronization mechanism is unnecessary for attribution of notifications to the unit.
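    For instance, if readiness is signalled not by the main process but by a helper spawned from an Exec*= command, access needs to be widened (the service binary is hypothetical):

```ini
[Service]
Type=notify
ExecStart=/usr/sbin/notifying-daemon
# Type=notify alone would imply NotifyAccess=main; "exec" additionally
# accepts messages from processes started via the Exec*= commands.
NotifyAccess=exec
```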

    Sockets=

    Specifies the name of the socket units this service shall inherit socket file descriptors from when the service is started. Normally, it should not be necessary to use this setting, as all socket file descriptors whose unit shares the same name as the service (subject to the different unit name suffix of course) are passed to the spawned process.

    Note that the same socket file descriptors may be passed to multiple processes simultaneously. Also note that a different service may be activated on incoming socket traffic than the one which is ultimately configured to inherit the socket file descriptors. Or, in other words: the Service= setting of .socket units does not have to match the inverse of the Sockets= setting of the .service it refers to.

    This option may appear more than once, in which case the list of socket units is merged. Note that once set, clearing the list of sockets again (for example, by assigning the empty string to this option) is not supported.

    FileDescriptorStoreMax=

    Configure how many file descriptors may be stored in the service manager for the service using sd_pid_notify_with_fds(3)'s "FDSTORE=1" messages. This is useful for implementing services that can restart after an explicit request or a crash without losing state. Any open sockets and other file descriptors which should not be closed during the restart may be stored this way. Application state can either be serialized to a file in /run/, or better, stored in a memfd_create(2) memory file descriptor. Defaults to 0, i.e. no file descriptors may be stored in the service manager. All file descriptors passed to the service manager from a specific service are passed back to the service's main process on the next service restart (see sd_listen_fds(3) for details about the precise protocol used and the order in which the file descriptors are passed). Any file descriptors passed to the service manager are automatically closed when POLLHUP or POLLERR is seen on them, or when the service is fully stopped and no job is queued or being executed for it. If this option is used, NotifyAccess= (see above) should be set to open access to the notification socket provided by systemd. If NotifyAccess= is not set, it will be implicitly set to main.
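    As a sketch (the daemon name is hypothetical), a service that keeps its listening sockets alive across restarts could be configured as:

```ini
[Service]
Type=notify
ExecStart=/usr/sbin/stateful-daemon
# Allow up to 16 descriptors to be parked in the manager via FDSTORE=1;
# they are handed back through the sd_listen_fds() protocol on restart.
FileDescriptorStoreMax=16
```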

    USBFunctionDescriptors=

    Configure the location of a file containing USB FunctionFS descriptors, for implementation of USB gadget functions. This is used only in conjunction with a socket unit with ListenUSBFunction= configured. The contents of this file are written to the ep0 file after it is opened.

    USBFunctionStrings=

    Configure the location of a file containing USB FunctionFS strings. Behavior is similar to USBFunctionDescriptors= above.

    OOMPolicy=

    Configure the out-of-memory (OOM) killing policy for the kernel and the userspace OOM killer systemd-oomd.service(8). On Linux, when memory becomes scarce to the point that the kernel has trouble allocating memory for itself, it might decide to kill a running process in order to free up memory and reduce memory pressure. Note that systemd-oomd.service is a more flexible solution that aims to prevent out-of-memory situations for the userspace too, not just the kernel, by attempting to terminate services earlier, before the kernel would have to act.

    This setting takes one of continue, stop or kill. If set to continue and a process in the unit is killed by the OOM killer, this is logged but the unit continues running. If set to stop, the event is logged but the unit is terminated cleanly by the service manager. If set to kill and one of the unit's processes is killed by the OOM killer, the kernel is instructed to kill all remaining processes of the unit too, by setting the memory.oom.group attribute to 1; also see kernel documentation.

    Defaults to the value that DefaultOOMPolicy= in systemd-system.conf(5) is set to, except for units where Delegate= is turned on, where it defaults to continue.

    Use the OOMScoreAdjust= setting to configure whether processes of the unit shall be considered preferred or less preferred candidates for process termination by the Linux OOM killer logic. See systemd.exec(5) for details.

    This setting also applies to systemd-oomd. Similarly to the kernel OOM kills, this setting determines the state of the unit after systemd-oomd kills a cgroup associated with it.
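    For example, a unit whose worker processes are useless in isolation might opt for the all-or-nothing behavior (the binary path is illustrative):

```ini
[Service]
ExecStart=/usr/sbin/worker-pool
# If the OOM killer takes out one process, kill the rest of the unit
# too, rather than keep running in a degraded state.
OOMPolicy=kill
```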

    OpenFile=

    Takes an argument of the form "path[:fd-name:options]", where:

    • "path" is a path to a file or an AF_UNIX socket in the file system;
    • "fd-name" is a name that will be associated with the file descriptor; the name may contain any ASCII character, but must exclude control characters and ":", and must be at most 255 characters in length; it is optional and, if not provided, defaults to the file name;
    • "options" is a comma-separated list of access options; possible values are "read-only", "append", "truncate", "graceful"; if not specified, files will be opened in rw mode; if "graceful" is specified, errors during file/socket opening are ignored. Specifying the same option several times is treated as an error.

    The file or socket is opened by the service manager and the file descriptor is passed to the service. If the path is a socket, the service manager calls connect() on it. See sd_listen_fds(3) for more details on how to retrieve these file descriptors.

    This setting is useful to allow services to access files/sockets that they can't access themselves (due to running in a separate mount namespace, not having privileges, ...).

    This setting can be specified multiple times, in which case all the specified paths are opened and the file descriptors passed to the service. If the empty string is assigned, the entire list of open files defined prior to this is reset.
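    A sketch under assumed names: a sandboxed service that cannot open its log file itself could have the manager pass it an append-only descriptor, tolerating a missing file:

```ini
[Service]
ExecStart=/usr/sbin/sandboxed-daemon
# The descriptor is delivered via the sd_listen_fds() protocol under the
# name "log"; "graceful" ignores open errors instead of failing the start.
OpenFile=/var/log/sandboxed-daemon.log:log:append,graceful
```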

    ReloadSignal=

    Configures the UNIX process signal to send to the service's main process when asked to reload the service's configuration. Defaults to SIGHUP. This option has no effect unless Type=notify-reload is used, see above.
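    A minimal sketch, assuming a daemon that reloads its configuration on SIGUSR1 rather than the default SIGHUP:

```ini
[Service]
Type=notify-reload
ExecStart=/usr/sbin/reloadable-daemon
# Sent instead of SIGHUP when a reload of the service is requested.
ReloadSignal=SIGUSR1
```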

    Check systemd.unit(5), systemd.exec(5), and systemd.kill(5) for more settings.

    Command lines

    This section describes command line parsing and variable and specifier substitutions for ExecStart=, ExecStartPre=, ExecStartPost=, ExecReload=, ExecStop=, and ExecStopPost= options.

    Multiple command lines may be concatenated in a single directive by separating them with semicolons (these semicolons must be passed as separate words). Lone semicolons may be escaped as "\;".

    Each command line is unquoted using the rules described in the "Quoting" section in systemd.syntax(7). The first item becomes the command to execute, and the subsequent items the arguments.

    This syntax is inspired by shell syntax, but only the meta-characters and expansions described in the following paragraphs are understood, and the expansion of variables is different. Specifically, redirection using "<", "<<", ">", and ">>", pipes using "|", running programs in the background using "&", and other elements of shell syntax are not supported.

    The command to execute may contain spaces, but control characters are not allowed.

    The command line accepts "%" specifiers as described in systemd.unit(5).

    Basic environment variable substitution is supported. Use "${FOO}" as part of a word, or as a word of its own, on the command line, in which case it will be erased and replaced by the exact value of the environment variable (if any) including all whitespace it contains, always resulting in exactly a single argument. Use "$FOO" as a separate word on the command line, in which case it will be replaced by the value of the environment variable split at whitespace, resulting in zero or more arguments. For this type of expansion, quotes are respected when splitting into words, and afterwards removed.

    If the command is not a full (absolute) path, it will be resolved to a full path using a fixed search path determined at compilation time. Searched directories include /usr/local/bin/, /usr/bin/, /bin/ on systems using split /usr/bin/ and /bin/ directories, and their sbin/ counterparts on systems using split bin/ and sbin/. It is thus safe to use just the executable name in case of executables located in any of the "standard" directories, and an absolute path must be used in other cases. Using an absolute path is recommended to avoid ambiguity. Hint: this search path may be queried using systemd-path search-binaries-default.

    Example:

    Environment="ONE=one" 'TWO=two two'
    ExecStart=echo $ONE $TWO ${TWO}

    This will execute /bin/echo with four arguments: "one", "two", "two", and "two two".

    Example:

    Environment=ONE='one' "TWO='two two' too" THREE=
    ExecStart=/bin/echo ${ONE} ${TWO} ${THREE}
    ExecStart=/bin/echo $ONE $TWO $THREE

    This results in /bin/echo being called twice, the first time with arguments "'one'", "'two two' too", "", and the second time with arguments "one", "two two", "too".

    To pass a literal dollar sign, use "$$". Variables whose value is not known at expansion time are treated as empty strings. Note that the first argument (i.e. the program to execute) may not be a variable.

    Variables to be used in this fashion may be defined through Environment= and EnvironmentFile=. In addition, variables listed in the section "Environment variables in spawned processes" in systemd.exec(5), which are considered "static configuration", may be used (this includes e.g. $USER, but not $TERM).

    Note that shell command lines are not directly supported. If shell command lines are to be used, they need to be passed explicitly to a shell implementation of some kind. Example:

    ExecStart=sh -c 'dmesg | tac'

    Example:

    ExecStart=echo one ; echo "two two"

    This will execute echo two times, each time with one argument: "one" and "two two", respectively. Because two commands are specified, Type=oneshot must be used.

    Example:

    ExecStart=echo / >/dev/null & \; \
    ls

    This will execute echo with five arguments: "/", ">/dev/null", "&", ";", and "ls".

    Examples

    Example 2. Simple service

    The following unit file creates a service that will execute /usr/sbin/foo-daemon. Since no Type= is specified, the default Type=simple will be assumed. systemd will assume the unit to be started immediately after the program has begun executing.

    [Unit]
    Description=Foo

    [Service]
    ExecStart=/usr/sbin/foo-daemon

    [Install]
    WantedBy=multi-user.target

    Note that systemd assumes here that the process started by systemd will continue running until the service terminates. If the program daemonizes itself (i.e. forks), please use Type=forking instead.

    Since no ExecStop= was specified, systemd will send SIGTERM to all processes started from this service, and after a timeout also SIGKILL. This behavior can be modified, see systemd.kill(5) for details.

    Note that this unit type does not include any type of notification when a service has completed initialization. For this, you should use other unit types, such as Type=notify/Type=notify-reload if the service understands systemd's notification protocol, Type=forking if the service can background itself or Type=dbus if the unit acquires a DBus name once initialization is complete. See below.


    Example 3. Oneshot service

    Sometimes, units should just execute an action without keeping active processes, such as a filesystem check or a cleanup action on boot. For this, Type=oneshot exists. Units of this type will wait until the process specified terminates and then fall back to being inactive. The following unit will perform a cleanup action:

    [Unit]
    Description=Cleanup old Foo data

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/foo-cleanup

    [Install]
    WantedBy=multi-user.target

    Note that systemd will consider the unit to be in the state "starting" until the program has terminated, so ordered dependencies will wait for the program to finish before starting themselves. The unit will revert to the "inactive" state after the execution is done, never reaching the "active" state. That means another request to start the unit will perform the action again.

    Type=oneshot services are the only service units that may have more than one ExecStart= specified. For units with multiple commands (Type=oneshot), all commands will be run again.

    For Type=oneshot, Restart=always and Restart=on-success are not allowed.


    Example 4. Stoppable oneshot service

    Similarly to the oneshot services, there are sometimes units that need to execute a program to set up something and then execute another to shut it down, but no process remains active while they are considered "started". Network configuration can sometimes fall into this category. Another use case is if a oneshot service shall not be executed each time it is pulled in as a dependency, but only the first time.

    For this, systemd knows the setting RemainAfterExit=yes, which causes systemd to consider the unit to be active if the start action exited successfully. This directive can be used with all types, but is most useful with Type=oneshot and Type=simple. With Type=oneshot, systemd waits until the start action has completed before it considers the unit to be active, so dependencies start only after the start action has succeeded. With Type=simple, dependencies will start immediately after the start action has been dispatched. The following unit provides an example for a simple static firewall.

    [Unit]
    Description=Simple firewall

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/local/sbin/simple-firewall-start
    ExecStop=/usr/local/sbin/simple-firewall-stop

    [Install]
    WantedBy=multi-user.target

    Since the unit is considered to be running after the start action has exited, invoking systemctl start on that unit again will cause no action to be taken.


    Example 5. Traditional forking services

    Many traditional daemons/services background (i.e. fork, daemonize) themselves when starting. Set Type=forking in the service's unit file to support this mode of operation. systemd will consider the service to be in the process of initialization while the original program is still running. Once it exits successfully and at least one process remains (and RemainAfterExit=no), the service is considered started.

    Often, a traditional daemon only consists of one process. Therefore, if only one process is left after the original process terminates, systemd will consider that process the main process of the service. In that case, the $MAINPID variable will be available in ExecReload=, ExecStop=, etc.

    In case more than one process remains, systemd will be unable to determine the main process, so it will not assume there is one. In that case, $MAINPID will not expand to anything. However, if the process decides to write a traditional PID file, systemd will be able to read the main PID from there. Please set PIDFile= accordingly. Note that the daemon should write that file before finishing with its initialization. Otherwise, systemd might try to read the file before it exists.

    The following example shows a simple daemon that forks and just starts one process in the background:

    [Unit]
    Description=Some simple daemon

    [Service]
    Type=forking
    ExecStart=/usr/sbin/my-simple-daemon -d

    [Install]
    WantedBy=multi-user.target

    Please see systemd.kill(5) for details on how you can influence the way systemd terminates the service.


    Example 6. DBus services

    For services that acquire a name on the DBus system bus, use Type=dbus and set BusName= accordingly. The service should not fork (daemonize). systemd will consider the service to be initialized once the name has been acquired on the system bus. The following example shows a typical DBus service:

    [Unit]
    Description=Simple DBus service

    [Service]
    Type=dbus
    BusName=org.example.simple-dbus-service
    ExecStart=/usr/sbin/simple-dbus-service

    [Install]
    WantedBy=multi-user.target

    For bus-activatable services, do not include an [Install] section in the systemd service file, but use the SystemdService= option in the corresponding DBus service file, for example (/usr/share/dbus-1/system-services/org.example.simple-dbus-service.service):

    [D-BUS Service]
    Name=org.example.simple-dbus-service
    Exec=/usr/sbin/simple-dbus-service
    User=root
    SystemdService=simple-dbus-service.service

    Please see systemd.kill(5) for details on how you can influence the way systemd terminates the service.


    Example 7. Services that notify systemd about their initialization

    Type=simple services are really easy to write, but have the major disadvantage of systemd not being able to tell when initialization of the given service is complete. For this reason, systemd supports a simple notification protocol that allows daemons to make systemd aware that they are done initializing. Use Type=notify or Type=notify-reload for this. A typical service file for such a daemon would look like this:

    [Unit]
    Description=Simple notifying service

    [Service]
    Type=notify
    ExecStart=/usr/sbin/simple-notifying-service

    [Install]
    WantedBy=multi-user.target

    Note that the daemon has to support systemd's notification protocol, else systemd will think the service has not started yet and kill it after a timeout. For an example of how to update daemons to support this protocol transparently, take a look at sd_notify(3). systemd will consider the unit to be in the 'starting' state until a readiness notification has arrived.
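    The notification protocol itself is just a datagram sent to the AF_UNIX socket named in the $NOTIFY_SOCKET environment variable. A minimal sketch of the sending side in Python, without linking against libsystemd (the message fields follow sd_notify(3); the helper name is our own):

```python
import os
import socket

def notify_ready(status="Initialization complete"):
    """Send READY=1 (plus a STATUS= line) to the service manager,
    mimicking what sd_notify(3) does for Type=notify services."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not running under a notify-aware service manager
    if addr.startswith("@"):
        # Abstract-namespace sockets are encoded with a leading NUL byte.
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(f"READY=1\nSTATUS={status}".encode(), addr)
    return True
```

    When run outside a service (no $NOTIFY_SOCKET in the environment), the helper is a no-op, so it is safe to call unconditionally during startup.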

    Please see systemd.kill(5) for details on how you can influence the way systemd terminates the service.

