This page has information about systemd.
My professional career started with a job administering a number of SCO Unix systems. For this reason I am familiar with how things work under System V; for instance, I am used to restarting a system by issuing the command
init 6. Of course I knew about the controversy among Debian developers when Debian decided to adopt systemd, but I didn't bother to learn more details about systemd than were necessary to understand the debate. Today (in May 2016) I wanted to perform some maintenance tasks which required me to reboot the system, and I was rather shocked to learn that
init 6 no longer works. So this page is my poor attempt at hastily collecting all the information that is necessary to allow me to continue administering my system on a basic level.
My gut reaction after reading the FAQ: Yuck! How is anyone supposed to remember a command like
systemctl isolate reboot.target to reboot the system?
Update July 2018: Not sure if this already existed 2 years ago when I wrote the paragraphs above, but these days it's possible to reboot with the fairly simple, easy-to-remember command
systemctl reboot.
- 1 References
- 2 Concepts
- 3 Basics
- 4 Enable a service to start when the system boots
- 5 Manipulate a service's configuration
- 6 Configuring dependencies
- 7 Assorted information bits and pieces
- 8 pelargir configuration
- Systemd website
- Wikipedia article
- SysVinit to Systemd Cheatsheet
- man systemd.unit - Unit configuration
systemd provides a dependency system between various entities called "units". There are 12 types of units, here are the most important ones:
- Service units (to start and stop daemons)
- Device units
- Mount units (filesystem mount points)
- Target units (used to group other units). Targets are similar in concept to SysV runlevels
- Timer units (job scheduling, i.e. the stuff that traditionally is the domain of the cron service)
Units can have one of several states. The actual meaning of a state for a specific unit depends on the nature of that unit.
- Active: The unit is started, bound, plugged in, etc.
- Inactive: The unit is stopped, unbound, unplugged, etc.
- Activating: Going from inactive to active
- Deactivating: Going from active to inactive
- Failed: If a service failed for any reason, e.g. a process crashed or returned an error code on exit
There are two types of dependencies between units:
- Requirement dependencies. These are used to express that in order to function, unit B requires that unit A is also started. systemd calls this a positive requirement. systemd also allows stating negative requirements, which are used to say that units conflict with each other.
- Ordering dependencies. These are used to express that unit B needs to be started before or after unit A.
I have not yet understood why systemd distinguishes between these two dependency types. For instance, the man page explains that if a requirement dependency is stated without an ordering dependency, then the two units are started in parallel - what the heck is the use of that?!? It is therefore common, the man page explains, to place both a requirement and an ordering dependency between two units. So what is the point of all this? Why distinguish between two kinds of dependencies when in the end you will always use both of them? TODO: Requires further research.
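For reference, the combined pattern that the man page describes might look like this in a unit file (A.service and B.service are hypothetical names):

```
# Sketch of B.service: B both requires A and waits for A to be started first.
[Unit]
Requires=A.service   # requirement dependency: starting B also starts A
After=A.service      # ordering dependency: B starts only after A has started
```

Without the After= line, systemd would start A and B in parallel.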
The basic command to work with systemd is
systemctl. Just issuing this command without any parameters lists all active units:
root@pelargir:~# systemctl
UNIT            LOAD   ACTIVE SUB     DESCRIPTION
[...]
apache2.service loaded active running LSB: Apache2 web server
[...]
cron.service    loaded active running Regular background program processing daemon
[...]
The following command is an approximation of what the SysV
runlevel command does:
systemctl list-units --type=target
Switch the runlevel:
systemctl isolate runlevel5.target
Use modern names to switch the runlevel. Note that you can omit the ".target" suffix; when it is missing, it is assumed by default.
systemctl isolate poweroff.target   # runlevel 0 (halt system)
systemctl isolate rescue.target     # runlevel 1 (single user mode)
systemctl isolate multi-user.target # runlevel 3 (multi-user without GUI)
systemctl isolate graphical.target  # runlevel 5 (multi-user with GUI)
systemctl isolate reboot.target     # runlevel 6 (restart system)
Some of these are also available as so-called "system commands" (cf. man systemctl):
systemctl poweroff # runlevel 0 (halt system)
systemctl rescue   # runlevel 1 (single user mode)
systemctl reboot   # runlevel 6 (restart system)
The status of a service can be checked with this:
systemctl status cron
This provides quite comprehensive information, including an excerpt of the last log messages pertaining to the service. For instance, the status of the "cron" service looks like this:
root@pelargir:~# systemctl status cron
● cron.service - Regular background program processing daemon
   Loaded: loaded (/lib/systemd/system/cron.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/cron.service.d
           └─pelargir.conf
   Active: active (running) since Wed 2018-07-18 19:18:56 CEST; 1 day 23h ago
     Docs: man:cron(8)
 Main PID: 28428 (cron)
   CGroup: /system.slice/cron.service
           └─28428 /usr/sbin/cron -f

Jul 20 19:00:07 pelargir CRON: pam_unix(cron:session): session closed for user francesca
Jul 20 19:02:01 pelargir CRON: pam_unix(cron:session): session opened for user logcheck by (uid=0)
Jul 20 19:02:01 pelargir CRON: (logcheck) CMD ( if [ -x /usr/sbin/logcheck ]; then nice -n10 /usr/sbin/logcheck; fi)
Jul 20 19:02:03 pelargir CRON: pam_unix(cron:session): session closed for user logcheck
Jul 20 19:09:01 pelargir CRON: pam_unix(cron:session): session opened for user root by (uid=0)
Jul 20 19:09:01 pelargir CRON: (root) CMD ( [ -x /usr/lib/php5/sessionclean ] && /usr/lib/php5/sessionclean)
Jul 20 19:09:01 pelargir CRON: pam_unix(cron:session): session closed for user root
Jul 20 19:10:01 pelargir CRON: pam_unix(cron:session): session opened for user www-data by (uid=0)
Jul 20 19:10:01 pelargir CRON: (www-data) CMD ([ -x /usr/share/awstats/tools/update.sh ] && /usr/share/awstats/tools/update.sh)
Jul 20 19:10:08 pelargir CRON: pam_unix(cron:session): session closed for user www-data
Starting / stopping a service
The following command stops a service. If the service is already stopped, this is not an error.
systemctl stop foo
The following command starts a service. If the service is already started this is not an error.
systemctl start foo
The following command restarts a service. If the service is already stopped this is not an error and the service is simply started.
systemctl restart foo
The following command restarts a service. If the service is already stopped this is not an error, but the service is not started.
systemctl try-restart foo
If the service supports it, the following command causes it to "reload", i.e. this typically reloads the configuration.
systemctl reload foo
Enable a service to start when the system boots
The following example command enables the SpamAssassin daemon so that it is started automatically when the system boots. Note that the daemon is not started right now - for that you would have to issue the "start" command.
systemctl enable spamassassin.service
Enabling a service creates all sorts of symlinks; apparently, what exactly gets created is driven by the information located in the service's "unit file". This is the symlink that was created by the above SpamAssassin enabling command:
root@pelargir:~# find /etc -name spamassassin.service | xargs ls -l
lrwxrwxrwx 1 root root 40 Oct  6 23:06 /etc/systemd/system/multi-user.target.wants/spamassassin.service -> /lib/systemd/system/spamassassin.service
And this is some of the content of SpamAssassin's unit file:
root@pelargir:~# cat /lib/systemd/system/spamassassin.service
[Unit]
Description=Perl-based spam filter using text analysis
[...]
[Install]
WantedBy=multi-user.target
Enabling a service also modifies
/etc/init.d/.depend.start and causes the running systemd process to reload its configuration.
The following command disables a service:
systemctl disable spamassassin.service
Manipulate a service's configuration
To completely replace a service's configuration, place your replacement file here:
/etc/systemd/system/foo.service
This is the so-called "unit file". But usually it's better to create a "drop-in file" to override some settings from the unit file's main configuration. In this case, place the drop-in file here:
/etc/systemd/system/foo.service.d/
- The foo.service.d folder may contain several drop-in files
- The drop-in files must all have the .conf suffix
- Running the command systemctl edit foo.service automatically creates the folder and adds a drop-in file named override.conf
- See man systemd.unit for details about the configuration file format. "Example 2. Overriding vendor settings" in the EXAMPLES section has details about the overriding mechanics
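To make the mechanics concrete, here is a sketch of what such a drop-in file might contain (foo.service and the ExecStart path are made-up examples). Note a quirk covered in that man page example: list-type settings such as ExecStart= must first be cleared with an empty assignment, otherwise the new value is appended to the value from the unit file:

```
# /etc/systemd/system/foo.service.d/override.conf
[Service]
# Clear the ExecStart inherited from the unit file, then set a replacement.
ExecStart=
ExecStart=/usr/local/bin/foo --verbose
```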
The various requirement types are documented in
man systemd.unit.
Requirements can be specified either in the service's main configuration file - the "unit file", or in a "drop-in file", the contents of which override the service's main configuration from the unit file. For details see section Manipulate a service's configuration on this page.
Requires is a hard dependency:
- If the dependent unit is started, so is the dependency unit. If the dependency unit fails to start, the dependent unit is also not started.
- If the dependency unit is stopped, so is the dependent unit.
- If the dependency unit is restarted, so is the dependent unit.
- If the dependency unit is first stopped, then started again, in two distinct steps, the dependent unit is stopped but NOT started again. This could happen, for instance, if a service is temporarily stopped during package updates, then started again after the package update has finished.
In general, the
man systemd.unit man page has the following to say:
Often, it is a better choice to use
Wants= instead of
Requires= in order to achieve a system that is more robust when dealing with failing services.
[Unit]
Requires=foo.service bar.service
Wants is a soft dependency:
- If the dependent unit is started, so is the dependency unit. If the dependency unit fails to start, the dependent unit is still started.
- If the dependency unit is stopped, the dependent unit is not affected.
- If the dependency unit is restarted, the dependent unit is not affected.
- If the dependency unit is first stopped, then started again, in two distinct steps, the dependent unit is not affected.
[Unit]
Wants=foo.service bar.service
Before and After are ordering dependencies, i.e. they affect the order in which units are started and stopped. Ordering dependencies must be specified in addition to requirement dependencies such as Requires or Wants.
[Unit]
Before=foo.service
After=bar.service
Assorted information bits and pieces
If you have the Debian package systemd-sysv installed,
/sbin/init should be a symlink to
/lib/systemd/systemd. That is indeed the case on my machine:
root@pelargir:~# ls -l /sbin/init
lrwxrwxrwx 1 root root 20 Feb  4 13:06 /sbin/init -> /lib/systemd/systemd
Timers vs. cron
systemd is capable of periodically running jobs, just like the
cron service, by way of a special type of unit: Timer units. The scope of
systemd is truly amazing - and I am not at all sure whether I like it! Here are some links that cover the topic:
- Replacing Cron Jobs With systemd Timers
- systemd as a cron replacement (Arch Linux wiki page)
- systemd/Timers (Arch Linux wiki page)
A systemd timer launches a
systemd service at the specified time(s). The service name is specified in the
.timer file by the
Unit= option. If the option is omitted, a service with the same name as the timer unit must exist. For instance, the "apt-daily" timer launches the "apt-daily" service:
root@pelargir:~# l /lib/systemd/system/apt-daily.*
-rw-r--r-- 1 root root 225 Sep 13  2017 /lib/systemd/system/apt-daily.service
-rw-r--r-- 1 root root 156 Sep 13  2017 /lib/systemd/system/apt-daily.timer
See
man systemd.time, specifically the section "Calendar events", to understand the notation used to specify the times when a timer unit fires. Here's a short overview:
weekdays years-months-days hours:minutes:seconds
- The weekdays part can be omitted, in which case every week day will match
- The date part can be omitted, in which case every day will match
- The time part can be omitted, in which case 00:00:00 is assumed
- The seconds component can be omitted in the time part, in which case "00" is assumed
- The seconds component can contain fractions up to 6 decimal places
- In the date and time parts, any component may be specified as "*" in which case any value will match
- Every component can be specified as a list of values separated by commas
- A value can be suffixed with "/" and a repetition value. This matches the value itself and the value plus all multiples of the repetition value.
- Every component can contain a range of values separated by ".."
- Some special expressions that can be used
- minutely = *-*-* *:*:00
- hourly = *-*-* *:00:00
- daily = *-*-* 00:00:00
- monthly = *-*-01 00:00:00
- weekly = Mon *-*-* 00:00:00
- yearly = *-01-01 00:00:00
- quarterly = *-01,04,07,10-01 00:00:00
- semiannually = *-01,07-01 00:00:00
- Run every Thursday and Sunday at 5 in the morning:
Thu,Sun *-*-* 05:00:00
- Run every week between Thursday and Sunday at 5 in the morning:
Thu..Sun *-*-* 05:00:00
- Run every day at midnight and every three hours thereafter:
*-*-* 00/3:00:00
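Putting the pieces together, a cron-style job might be sketched as a timer/service pair like this (the unit names and the script path are made up for illustration). Because both units share the name "mybackup", the Unit= option can be omitted from the timer:

```
# /etc/systemd/system/mybackup.timer
[Unit]
Description=Daily backup timer

[Timer]
OnCalendar=*-*-* 03:00:00
# Catch up on runs that were missed while the machine was off
Persistent=true

[Install]
WantedBy=timers.target
```

```
# /etc/systemd/system/mybackup.service
[Unit]
Description=Daily backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mybackup.sh
```

The timer is then activated with systemctl enable --now mybackup.timer; systemctl list-timers shows when it will fire next.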
Automatically restarting service on failure
One of systemd's many capabilities, and one that is actually in scope for an init system, is to automatically restart a service, or daemon process, in case it fails.
The service definition goes into
/etc/systemd/system.
The key property in the service definition file is the "Restart" property. Here's an example using the common "on-failure" condition:
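A minimal sketch of the relevant part of such a service definition (the unit name and the ExecStart path are placeholders):

```
[Service]
ExecStart=/usr/local/bin/foo-daemon
Restart=on-failure
# Optional: how long to wait before the restart; the default is 100ms.
RestartSec=5
```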
The "on-failure" condition restarts the service when the process exits with a non-zero exit code, is terminated by a signal (including on core dump, but excluding the signals SIGHUP, SIGINT, SIGTERM or SIGPIPE), when an operation (such as service reload) times out, and when the configured watchdog timeout is triggered. There are more conditions than just "on-failure"; a thorough description is available from the man page
man systemd.service.
Finally, here's a complete example of a service definition that restarts the rather fragile Web Socket service from my project Little Go for the Web. I created the definition by adapting a copy of the MySQL service definition.
ubuntu@ip-172-31-39-57:~$ cat /etc/systemd/system/littlego-web-ws.service
[Unit]
Description=Little Go for the Web web socket server
After=network.target

[Service]
User=root
Group=root
ExecStart=/usr/local/share/littlego-web/script/startWebSocketServer.sh
TimeoutSec=10
Restart=on-failure
Services without a unit file
The
slapd service does not have a unit file in
/lib/systemd, yet it is possible to start/stop the service using
systemctl. I don't know (yet) how this is possible; I have filed it under "yet another WTF-moment in the happy life of a sysadmin". (The Docs: line in the status output below hints at the answer: man:systemd-sysv-generator(8).)
Querying the service's status prints this:
root@pelargir:~# systemctl status slapd
● slapd.service - LSB: OpenLDAP standalone server (Lightweight Directory Access Protocol)
   Loaded: loaded (/etc/init.d/slapd; generated)
   Active: active (running) since Mon 2019-12-30 15:52:55 CET; 6min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 54340 ExecStart=/etc/init.d/slapd start (code=exited, status=0/SUCCESS)
    Tasks: 4 (limit: 4643)
   Memory: 13.9M
   CGroup: /system.slice/slapd.service
           └─54346 /usr/sbin/slapd -h ldapi:/// ldap://127.0.0.1:389 -g openldap -u openldap -F /etc/ldap/slapd.d
The following override for
cron makes sure that
slapd and
nscd are running before
cron is started. Reason: there are
cron jobs for users that exist only in LDAP. If
cron cannot find those users when it starts up (because the LDAP service is not yet running), it will ignore their jobs.
root@pelargir:~# cat /etc/systemd/system/cron.service.d/pelargir.conf
# The following dependencies are added to dependencies that already exist.
#
# IMPORTANT: Use "Wants" not "Requires" so that when the dependency units
# are stopped during a package update the dependent unit remains running.
# During system startup, "Wants" has the same effect as "Requires"
# (except that the dependent unit will start even if the dependency units
# fail to start).
[Unit]
Wants=slapd.service nscd.service
After=slapd.service nscd.service